@joshuagrochow and the lack of moral compass or publicly stated ethical standards that would allow university employees to steal large enough sets. small sets of text are read and understood by humans who can, far more efficiently, apply appropriate prior written and other formats of source material to a specific use case.
programming a calculator only makes reasonable sense if the computation requires enough repetition to warrant the resources used in building it, or it's a closed set without novelty... like, for example, a numerical calculator. ; )
edited for typos and clarity: it was killing me, apologies for the notification disruption.
The code for the LLM interpreter is relatively simple, and bears the same relationship to the actual LLM as the C compiler does to an operating system. The models are the real software and the ones big and complex enough to be useful are the product of large corporations and mass copyright violation.
@Gargron @df
Yes. The flowchart has three boxes:
1. Create LLM
2. Then a miracle occurs
3. Profit from AGI !!!
The companies pushing so-called "AI" have completed step 1. Some of them try to tell us that they've nearly got a handle on step 2, but that's just an attempt to swindle more investors. There is literally NOTHING that fits in the hole of step 2.
Transformers are neural networks.
LLMs are transformers wrapped in some Python scripting.
Every neural network can be accurately represented as an Excel sheet, even if it ends up having billions of cells.
Since it's just addition and multiplication, the model is fully deterministic. Same input, same output. Not intelligent.
It's Python code that does probabilistic sampling of the output. It's just a few lines of well-understood math plus a dice roll. Again, not intelligent.
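That split between the deterministic model and the randomized wrapper can be sketched in a few lines. This is an illustrative toy, not any real model's code: the three-token logits are made up, and the names (`softmax`, `sample`) are just descriptive. The forward pass always yields the same probabilities; only the final "dice roll" varies.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution.
    Temperature > 1 flattens it; < 1 sharpens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample(probs, rng):
    """The 'dice roll': pick an index in proportion to probs."""
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# The model's part is pure arithmetic: same logits in, same probs out.
logits = [2.0, 1.0, 0.1]  # made-up scores for a 3-token vocabulary
probs = softmax(logits)

# Randomness enters only here, in the wrapper. A fixed seed even
# makes the "dice roll" reproducible.
token = sample(probs, random.Random(42))
```

With a fixed seed the whole pipeline, dice roll included, becomes deterministic end to end, which is exactly the point being made above.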
@patrys @df @Gargron does determinism imply non-intelligence?
If you hooked up the computer to a Geiger counter for true random noise and used that to modulate the output, would that have any bearing on its intelligence?
Or from the other side, what makes you think our brains are non-deterministic, and why does that make us more intelligent than if the exact same history and sense-data always produced the same response?
@FishFace @df @Gargron If it’s deterministic, it can be unrolled into a giant lookup table. Did we kill phone books because they were on the verge of achieving AGI?
To me, intelligence implies a lot of things, like being able to form higher-order abstractions, learn, and thus remember things (no, being passed your “memories” as part of every prompt does not count). It also implies being curious.
@patrys @df @Gargron given that the lookup table would generally be infinite, I don't even see what that would have to do with anything. What about the Geiger counter?
I don't think those things are really needed for human-like intelligence, and something like curiosity can easily be simulated by a rules-based system.
@patrys LLMs are intelligent only in the sense of pattern recognition; that is, they possess logical intelligence. However, some psychologists argue that there are multiple intelligences that cannot be reduced to logic, nor are LLMs capable of possessing them. See psychologist Howard Gardner.
@FishFace @patrys @df @Gargron
"Or from the other side, what makes you think our brains are non deterministic"
Us having free will/being non-deterministic is pretty much the base assumption we all operate on to even be able to function as humans. That of course doesn't mean that it's automatically true, but it makes the question of why you think your brain is non-deterministic a no-brainer to answer: because we can't help but perceive ourselves as such.
LLMs are Shannon 1948 as far as the theory goes (building on Markov, but adding computer technology). With some compression techniques.
But I think you're talking about something else entirely, not purely syntactical.
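The Shannon-1948 lineage is easy to demonstrate with a toy: a character-bigram model, trained by counting which character follows which, generates text the same way an LLM's sampler does, just with one character of context instead of a long window. The corpus string and function names here are purely illustrative.

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count character-bigram frequencies, Shannon (1948) style."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(counts, start, length, rng):
    """Sample each next character in proportion to observed frequency."""
    out = [start]
    for _ in range(length - 1):
        followers = counts.get(out[-1])
        if not followers:
            break  # dead end: this character was never followed by anything
        chars = list(followers)
        weights = [followers[c] for c in chars]
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

corpus = "the theory of the thing is the theory"  # toy training data
model = train_bigram(corpus)
sample_text = generate(model, "t", 10, random.Random(0))
```

Scaling the context from one character to thousands of tokens, and the counting to learned weights, gets you from Markov/Shannon to a transformer; the sampling step at the end is conceptually unchanged.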
imagine for a moment, the billionaires have been beheaded and the yachts sunk into the sea. the value in the output of workers 100% reinvested into local communities. all of it. none for colonial masters far away. the 20 hour work weeks and all human workers hands full of the satisfaction their efforts are meaningful... no more busy work for shareholders to skim value out of. only meaningful work. custom artisanal everything. housewares repaired by local handicrafters. clothes sewn and tailored to each body. homes and townhomes and communal living spaces built and maintained by cooperative owners. neighboring towns and regions and nations translating with loving care between the communities of meaning... interconnected with care. 💜
And that lasts 1-2 generations before new people who don't understand the problems that led their parents to create the paradise chafe under their constraints and begin changing the system to something its originators wouldn't like, thus creating conflict, diversity of thought, and continuing the cycle of history.
See: reality.
@TheServitor hm. so you do not believe in evolution then?
you ignore the myriad plants and beasties with behavior change documented in the geological record? or the changes in human written language documented over millennia? or dr. martin luther king's statement on the arc of the moral universe? or even the historical shifts in graphic novels and comic book tales over 80 or so years?
weird.
nihilism has never had much sway with me. i accept humans believe such a framework applies beyond the self and assert omniscient determinism universally. however, i do not abide such a concept as the universal human. we are varied beasties and i have faith in evolution of arrangements of living matter as well as of patterned practices.
see: the data.
@Gargron would you know if you've seen a good outcome of an LLM? You'd somehow be able to identify when the LLM got it right?
I assure you you've experienced good LLM output and don't even know it. Because that's what good LLM output looks like. Indistinguishable from human output.
Your examples are perhaps false equivalencies. Take asbestos. We didn't abolish insulation. We developed better, safer insulation. We didn't stop dyeing food, we just developed safer dyes, etc.
@Gargron ultimately LLMs, like any other software, are tools. It's all about how humans use them.
Let's take Photoshop as an example. Humans generate vast amounts of garbage photoshopped images. Ever been to DeviantArt?
And yet the same tool is used by professionals all day every day to create stuff we like and enjoy.
The same applies to LLM use, and back to my first reply. What you lament is low-quality output a human shared. Meanwhile the tool gets used masterfully to great effect elsewhere.
@cygnathreadbare @Gargron yeah, that's a garbage way this technology has been developed. Unfortunately, if we threw away every technology built on the back of people doing bad things, we wouldn't have much technology left.
I don't fault lamenting how it's come to be and even how it's used broadly. But claiming it's useless because some folks use it poorly isn't really an accurate indicator of the technology's usefulness.
Let me ask you this: It's your birthday.
5 of your friends met some days before and wrote a song for you. It's not really good, the text doesn't even rhyme... but they did this for you and they had fun.
They enjoyed the act of creating.
5 other friends wrote a prompt and pressed a button to generate a song.
Which song will you remember?
@Tekchip @Gargron the tiny potential for very rare good outcomes are not worth the constant poisoning of humanity's collective information corpus.
For every "good" generated content there are dozens of thousands of terrible slop that are difficult to separate from genuine useful information or material when doing research or code reviews, etc.
Not to mention that these "good" outcomes are much costlier to humanity than creating by hand, with no benefit.
@Kiloku @Gargron the problem is you want to assume they are rare outcomes. I don't believe they are. Unfortunately that's where we're at an impasse. It's literally impossible to measure the good outcomes.
I agree the environmental outcome is terrible. I don't like that part. What we can look forward to is the technology improving. General computers used to use WAY more power than they do now. The same is going to happen with LLM technology. Hopefully sooner than later. Folks are working on it.
Machine vs. Human translation of fiction is an excellent analogy. Good translation involves an understanding of complicated material in an intuitive and nuanced way, and conveying those subtleties cleverly using equally complex forms in the target language while retaining the beauty of the writing. It involves much higher level thought than what LLMs do.
Likewise software engineering is much more complex and involves higher level thinking than prompted LLM code generation.
@Gargron You sound like me arguing against the inevitability of mass use of the cell phone.
I never understood why we gave up crystal clear audio, a two way simultaneous connection (yes, both parties could talk at the same time and hear what the other had to say), and phone books for unintelligible garbled speak, dropped calls, delays, and no way to look up the damn phone number.