deliverator

@deliverator@infosec.exchange
33 Followers
117 Following
1.6K Posts

(Riffing on @hazelweakly's post)

The two hardest problems in Computer Science are

1. Ethics
2. Getting people in tech to believe that ethics are important

See also: folks reading sci-fi for cool-tech inspiration without learning any lessons from the stories it comes from.

For those wondering why I'm making such a fuss about #Solstice, here's an excerpt from Terry Pratchett's #Hogfather. Susan, a mortal who is Death's granddaughter, has just saved the Hogfather (a Santa/Father Christmas type figure) from a messy death at the hands of those who would prefer a more clockwork universe.

I WILL GIVE YOU A LIFT BACK, said Death, after a while.

"Thank you. Now... tell me..."

WHAT WOULD HAVE HAPPENED IF YOU HADN'T SAVED HIM?

"Yes! The sun would have risen just the same, yes?"

NO.

"Oh, come on. You can't expect me to believe that. It's an astronomical fact."

THE SUN WOULD NOT HAVE RISEN.

...

"Really? Then what would have happened, pray?"

A MERE BALL OF FLAMING GAS WOULD HAVE ILLUMINATED THE WORLD.

They walked in silence for a moment.

"Ah," said Susan dully. "Trickery with words. I would have thought you'd have been more literal-minded than that."

I AM NOTHING IF NOT LITERAL-MINDED. TRICKERY WITH WORDS IS WHERE *HUMANS* LIVE.

"All right," said Susan. "I'm not stupid. You're saying humans need... *fantasies* to make life bearable."

REALLY? AS IF IT WAS SOME KIND OF PINK PILL? NO. HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE.

"Tooth fairies? Hogfathers? Little --"

YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES.

"So we can believe the big ones?"

YES. JUSTICE. MERCY. DUTY. THAT SORT OF THING.

-----

Happy #Hogswatch to all who celebrate.

RE: https://infosec.exchange/@geeknik/115776304008335747

Reporting to a corporation still does not work. Report the vulns to @gayint and they'll make sure someone does something about it.

The two hardest problems in Computer Science are

1. Human communication
2. Getting people in tech to believe that human communication is important

I wish I had the words to describe just how valuable @kissane’s latest essay feels to me: https://www.wrecka.ge/landslide-a-ghost-story/
Landslide; a ghost story

On March 27, 1964, a converted liberty ship named the SS Chena brought a shipment of supplies to the port of Valdez, Alaska. Valdez, which I need you to know is pronounced “valDEEZ,” sits at the end of a fjord—a narrow inlet carved by a glacier.

wreckage/salvage

It's December 23rd!

Have a Merry Christmas Adam everybody!

(Always comes before Christmas Eve and is generally unsatisfying.)

New mathematical framework reshapes debate over simulation hypothesis | Santa Fe Institute https://www.santafe.edu/news-center/news/new-mathematical-framework-reshapes-debate-over-simulation-hypothesis

The simulation hypothesis — the idea that our universe might be an artificial construct running on some advanced alien computer — has long captured the public imagination. Yet most arguments about it rest on intuition rather than clear definitions, and few attempts have been made to formally spell out what “simulation” even means. In a new paper, SFI Professor David Wolpert introduces a mathematically precise framework for what it would mean for one universe to simulate another — and shows that several longstanding claims about simulations break down once the concept is defined rigorously.

I wish more people would admit this, but no ...

fixing a project after the code has been thoroughly fucked by ai is called sloppy seconds send toot

“to even claim LLMs ‘hallucinate’ fictional publications misunderstands the threat they pose to our comprehension of the world, because the term ‘implies that it’s different from the normal, correct perception of reality.’ But the chatbots are ‘always hallucinating,’ he says. ‘It’s not a malfunction. A predictive model predicts some text, and maybe it’s accurate, maybe it isn’t, but the process is the same either way. To put it another way: LLMs are structurally indifferent to truth.’”