The difference between thinking about how any given tech impacts you individually—harm versus utility—and thinking about how it impacts society if adopted at scale is effectively the difference between concluding that:
“Cars are fine, actually, the bigger the better. So helpful.”
Versus…
“We’re destroying ourselves and our planet because public transport is less profitable than the car industry.”
I suspect there is an unspoken motive for sticking LLMs into everything.
I think they're trying to get us to pay for another tool of oppression. LLMs are effective at manipulating people and automating surveillance, so they need fewer secret police to prevent us from overthrowing them as the climate crisis makes things unlivable.
@fedops @ParadeGrotesque @baldur
I was reading ASCII text as it came across at 110 baud and was printed on a teletype. 300 baud is faster than you can read text.
I did a fair amount of programming work at 1200 baud. Like all of my college work. Mostly remote.
9600 baud is annoyingly slow for images. But I eventually paid for higher bandwidth in order to download large software packages.
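The arithmetic behind those impressions is simple. A rough sketch, assuming classic asynchronous 8N1 framing (10 bits on the wire per character — 110-baud teletypes actually used two stop bits, so the real figure there was slightly lower):

```python
# Rough transfer-time arithmetic for classic modem speeds.
# Assumes 8N1 framing: 10 bits on the wire per character.
BITS_PER_CHAR = 10

def chars_per_second(baud):
    """Characters delivered per second at a given baud rate (8N1)."""
    return baud / BITS_PER_CHAR

def seconds_to_send(num_chars, baud):
    """Time to push num_chars characters through the line."""
    return num_chars / chars_per_second(baud)

for baud in (110, 300, 1200, 9600):
    screen = seconds_to_send(80 * 24, baud)  # one 80x24 terminal screen
    print(f"{baud:>5} baud: {chars_per_second(baud):>5.0f} chars/s, "
          f"full 80x24 screen in {screen:.1f} s")
```

At 300 baud that works out to about 30 characters per second, roughly the pace of reading aloud, which is why it felt "faster than you can read."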
@fedops @ParadeGrotesque @baldur
I curse at our local water company for subjecting me (and everyone else) to full screen full motion video when I just want to login and pay my water bill. Even their own public relations people criticize it for not being very accessible, and for being irrelevant and misleading.
There's a lot of waste — bandwidth, memory, CPU time, disk space.
…
But high resolution streaming video is really nice to have!
@fedops @baldur they were emphatically not. I remember trying to use the web in those days and the experience was very much one of clicking something and then coming back in a few minutes to read it.
That doesn't mean what is happening now is OK, though. In most cases it's not transfer time but the time spent parsing and executing wild amounts of JavaScript that makes it slow.
Did you just return home or something? ;)
#Dayton #Hamvention 2025 #ハムベンション #アマチュア無線 https://youtube.com/watch?v=0KVvuK6lqvc&si=_gM1GjV5huVz52NA
And it's not as if high speed internet did not exist at the time: The backbone and well connected sites had high bandwidth. It's just that most homes did not.
And it's not as if no homes had high bandwidth feeds: Many people did have "cable TV," which is high speed distribution to homes. And some had satellite (with high latency issues).
It was practically always an issue of cost and building out infrastructure.
@baldur or "hey copper wires are fine let's just keep throwing smaller and smaller square waves down them with more and more pre-distortion to overcome the lossiness and it'll be grand"
But in the end you still get a snarled-up result at the other end, where the 0s and 1s are blurred into each other and into the noise floor, with ever more broken packets and errors.
'Connecting the Clouds - The Internet in New Zealand' is a history of the people, activities and events that contributed to the creation, then growth, of the Internet in New Zealand. Written by author Keith Newman, the book was commissioned by InternetNZ (the Internet Society of New Zealand Inc) and published by Activity Press.
@sarajw It still is, if you're using a satellite in geosynchronous orbit. Had a flashback to about 9 years ago, using an Inmarsat modem for remote equipment monitoring. SSH was horrible to use, but streaming speed was "good" at 400k.
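The physics explains why SSH suffers while streaming doesn't. A minimal sketch of the best-case latency to a geostationary satellite (idealised: satellite directly overhead, signal at light speed, zero processing delay):

```python
# Minimum physics-bound latency via a geostationary satellite.
# Idealised: satellite directly overhead, no processing delay.
C = 299_792.458          # speed of light in vacuum, km/s
GEO_ALTITUDE = 35_786    # geostationary orbit altitude, km

one_hop = GEO_ALTITUDE / C   # ground -> satellite, seconds
rtt = 4 * one_hop            # up + down, there and back again

print(f"one hop: {one_hop * 1000:.0f} ms, minimum RTT: {rtt * 1000:.0f} ms")
```

That's roughly half a second before each keystroke's echo can possibly come back, which wrecks interactive sessions; bulk streaming, being latency-tolerant, only cares about the 400k of bandwidth.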
It still amazes me that my home internet connection is now 5x faster than the LAN when I was at uni.
@ingram lucky you!
Parts of Germany are in the dark ages re consumer bandwidth. We can't get more than 100Mbps at home.
@ingram amazing.
Yeah I remember going to Uni and being like wowwwww broadband is amazing!
Then home in the holidays to 56K 💀
I don't really think that home (and work) internet speeds were the main cause of "the dot com bubble bursting."
But it does look to me that many of the *other* widely accepted causes of the ".com bust" really do apply to (Generative) AI companies and products, and that parallel is very likely predictive.
@dneary If we look past the fact that prompts and chatbots are a fundamentally unsuitable paradigm for serious productivity work (much like touch screens are an unsuitable paradigm for touch typing), you'd still need to figure out parameterisation (structured input that's actually parsed by the model) for security. It's the only way of truly preventing prompt injection.
They also need to be made more deterministic and reliably factual.
That's just for starters. All effectively impossible AFAICT
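The parameterisation point mirrors how SQL injection was solved: user data travels through a separate channel and is never parsed as instructions. A minimal sqlite3 sketch of that analogy (the malicious input string is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "alice' OR '1'='1"  # hypothetical injection attempt

# Unsafe: input is spliced into the query string, so the attacker's
# quote characters are parsed as SQL and the OR clause matches all rows.
unsafe = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Parameterised: input travels as data through a separate channel and
# is never parsed as SQL, so the injection fails.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded, matched everything
print(safe)    # [] -- no user is literally named "alice' OR '1'='1"
```

LLMs have no equivalent data channel today: system prompt, instructions, and untrusted user text are all concatenated into one token stream, which is why prompt injection remains unsolved.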

@baldur Yesterday I proved once again that "AI" isn't even an attempt at actual intelligence, by asking one in Swedish whether it spoke Swedish. It didn't even recognize I was using a different language.
But the real proof is that human intelligence is obviously not language-based. Otherwise other highly intelligent mammals—especially great apes—would all have recursively structured language!
LLMs will NEVER be able to do actual mathematics. They can pass high school "math" tests. Big deal.
@baldur Oh, as to what LLMs ACTUALLY are, that's EASY. They are pattern recognition-generation software.
Not artificial intelligence.
They took grammar research, which led to the ability to process language well, and which was indeed an important contribution to the study of artificial intelligence.
They also took neural networks, which are based on the fallacy that mimicking neurons would produce intelligence.
They put these things together with fast database searches.
@joe The intelligence of a chimpanzee and the intelligence of a human must be almost the same thing, IN TERMS OF MECHANISM. And the brains are structured almost the same, as well.
If the requirement of similarity is not obvious to you then you might be either a Creationist or an LLM advocate. :)
Here is a problem that cannot be solved by juggling language like an LLM, and which even most mathematically educated humans solve incorrectly: With what probability is six the last digit of pi?