@joshuagrochow and the lack of a moral compass or publicly stated ethical standards that would allow university employees to steal large enough sets. small sets of text are read and understood by humans, who can, far more efficiently, apply appropriate prior written and other formats of source material to a specific use case.
programming a calculator only makes reasonable sense if the computation requires enough repetition to warrant the resources used in building it, or it's a closed set without novelty... like, for example, a numerical calculator. ; )
edited for typos and clarity: it was killing me, apologies for the notification disruption.
The code for the LLM interpreter is relatively simple, and bears the same relationship to the actual LLM as the C compiler does to an operating system. The models are the real software and the ones big and complex enough to be useful are the product of large corporations and mass copyright violation.
@Gargron @df
Yes. The flowchart has three boxes:
1. Create LLM
2. Then a miracle occurs
3. Profit from AGI !!!
The companies pushing so-called "AI" have completed step 1. Some of them try to tell us that they've nearly got a handle on step 2, but that's just an attempt to swindle more investors. There is literally NOTHING that fits in the hole of step 2.
Transformers are neural networks.
LLMs are transformers wrapped in some Python scripting.
Every neural network can be accurately represented as an Excel sheet, even if it ends up having billions of cells.
Since it's just addition and multiplication, the model is fully deterministic. Same input, same output. Not intelligent.
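The "just addition and multiplication" claim can be illustrated with a toy sketch (this is a single hypothetical neuron, nothing like a real model, but the determinism argument is the same):

```python
# Toy illustration: a single "neuron" is just multiplications and
# additions, so the same input always produces the same output.
def forward(weights, bias, x):
    # weighted sum: w1*x1 + w2*x2 + ... + bias
    return sum(w * xi for w, xi in zip(weights, x)) + bias

w = [0.5, -1.2, 2.0]
b = 0.1
x = [1.0, 2.0, 3.0]

# Deterministic: calling it twice with the same input gives the same result.
assert forward(w, b, x) == forward(w, b, x)
```

A billion-parameter model is this same arithmetic repeated at scale, which is why it could in principle live in a (very large) spreadsheet.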
It's Python code that does probabilistic sampling of the output. It's just a few lines of well-understood math plus a dice roll. Again, not intelligent.
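The "few lines of math plus a dice roll" can be sketched roughly like this (a toy stand-in, not any real inference code: softmax turns the model's raw scores into probabilities, then one token is drawn at random):

```python
import math
import random

def softmax(logits):
    # subtract the max for numerical stability
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, rng):
    probs = softmax(logits)
    r = rng.random()          # the dice roll
    cumulative = 0.0
    # walk the cumulative distribution until the roll falls inside a bucket
    for token, p in zip(tokens, probs):
        cumulative += p
        if r < cumulative:
            return token
    return tokens[-1]

rng = random.Random(42)  # a fixed seed makes even the dice roll repeatable
print(sample_token(["cat", "dog", "fish"], [2.0, 1.0, 0.1], rng))  # → cat
```

Note that with the seed fixed, the whole pipeline is deterministic again; the apparent "creativity" is just this sampling step.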
@patrys @df @Gargron does determinism imply non-intelligence?
If you hooked up the computer to a Geiger counter for true random noise and used that to modulate the output, would that have any bearing on its intelligence?
Or from the other side, what makes you think our brains are non deterministic, and why does that make us more intelligent than if the exact same history and sense-data always produced the same response?
@FishFace @df @Gargron If itās deterministic, it can be unrolled into a giant lookup table. Did we kill phone books because they were on the verge of achieving AGI?
To me, intelligence implies a lot of things, like being able to form higher-order abstractions, learn, and thus remember things (no, being passed your āmemoriesā as part of every prompt does not count). It also implies being curious.
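The "unrolling" argument is easy to sketch: any deterministic function over a finite input space is interchangeable with a precomputed table (a toy example with a made-up function, not an actual LLM, whose input space is astronomically larger):

```python
# Stand-in for any deterministic computation over a finite input space.
def toy_model(x):
    return (3 * x + 1) % 7

inputs = range(7)                          # the finite input space
table = {x: toy_model(x) for x in inputs}  # the "phone book"

# On those inputs, the table and the function are indistinguishable.
assert all(table[x] == toy_model(x) for x in inputs)
```

Whether that interchangeability says anything about intelligence is exactly what the thread is arguing about.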
@patrys @df @Gargron given that the lookup table would generally be infinite, I don't even see what that would have to do with anything. What about the Geiger counter?
I don't think those things are really needed for human-like intelligence, and something like curiosity can easily be simulated by a rules-based system.
@patrys LLMs are intelligent only in the sense of pattern recognition; that is, they possess logical intelligence. However, some psychologists argue that there are multiple intelligences that cannot be reduced to logic, nor are LLMs capable of possessing them. See psychologist Howard Gardner.
@FishFace @patrys @df @Gargron
"Or from the other side, what makes you think our brains are non deterministic"
Us having free will/being non-deterministic is pretty much the base assumption we all operate on to even be able to function as humans. That of course doesn't mean it's automatically true, but it makes the question of why we think our brains are non-deterministic a no-brainer to answer: because we can't help but perceive ourselves as such.
LLMs are Shannon 1948 as far as the theory goes (building on Markov, but adding computer technology). With some compression techniques.
But I think you're talking about something else entirely, not purely syntactical.
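Shannon's 1948 paper generated text by sampling words according to frequencies observed in a corpus, building on Markov's chain idea. A minimal sketch of that scheme (with a made-up toy corpus; real systems use vastly more data and context):

```python
import random
from collections import defaultdict

# Toy corpus; Shannon used frequency tables gathered from real English text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram transitions: word -> list of observed next words.
transitions = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current].append(nxt)

def generate(start, length, rng):
    words = [start]
    for _ in range(length - 1):
        options = transitions.get(words[-1])
        if not options:      # dead end: no observed successor
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(generate("the", 8, random.Random(0)))
```

An LLM replaces the frequency table with a learned neural approximation conditioned on a long context, but the generate-by-sampling loop is recognizably the same.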
imagine for a moment, the billionaires have been beheaded and the yachts sunk into the sea. the value in the output of workers 100% reinvested into local communities. all of it. none for colonial masters far away. the 20 hour work weeks and all human workers' hands full of the satisfaction that their efforts are meaningful... no more busy work for shareholders to skim value out of. only meaningful work. custom artisanal everything. housewares repaired by local handicrafters. clothes sewn and tailored to each body. homes and townhomes and communal living spaces built and maintained by cooperative owners. neighboring towns and regions and nations translating with loving care between the communities of meaning... interconnected with care.
And that lasts 1-2 generations before new people who don't understand the problems that led their parents to create the paradise chafe under their constraints and begin changing the system to something its originators wouldn't like, thus creating conflict, diversity of thought, and continuing the cycle of history.
See: reality.
@TheServitor hm. so you do not believe in evolution then?
you ignore the myriad plants and beasties with behavior change documented in the geological record? or the changes in human written language documented over millennia? or dr. martin luther king's statement on the arc of the moral universe? or even the historical shifts in graphic novels and comic book tales over 80 or so years?
weird.
nihilism has never had much sway with me. i accept humans believe such a framework applies beyond the self and assert omniscient determinism universally. however, i do not abide such a concept as the universal human. we are varied beasties and i have faith in evolution of arrangements of living matter as well as of patterned practices.
see: the data.
Just because a bunch of drug addicts dump all their money (and that from others) into drugs doesn't make them inevitable/good/useful... 🤷‍♀️
Latest example: NFTs
@trisweb The asbestos analogy doesn't really hold. Asbestos is an inherently harmful material whose risk inevitably increases with use. #LLMs are computational tools: their impact depends on how they are developed, applied, and regulated. That's why the key issue isn't the existence of the technology, but the #ethics around its development and use, namely transparency, accountability, risk mitigation, and public oversight.
@df @Gargron alright, well, let's review:
* literally no one likes it, not even the normies who do not care about any of the myriad ethical issues surrounding it
* a bunch of very rich people dropped an unprecedented amount of cash to make it happen and now, in their desperation for that investment to pay off, are trying VERY hard to gaslight people into thinking they like it
sounds inevitable to me
@kevin @df @Gargron small models are well and good and hopefully will be focused on actually useful things, as I'm personally still not convinced that LLMs are really that useful at all, and they're taking the wind out of the sails of other AI avenues that have been very useful, things that we would classify as machine learning.
But if we want general models... those might just take too many resources to build and I honestly think society will be better off with no new ones of those anyway, while letting stuff like ollama collect enough bitrot that it loses most of its damaging potential.
@abucci I'm not making any demands. What I'm pushing back against is the tone of this recent wave of Mastodon posts, which seem to jump very quickly to blanket prohibition as the supposed "solution" to AI.
Of course AI has risks and real problems. No serious person denies that. But proposing to simply ban AI is neither realistic nor particularly helpful. AI isn't a single product or substance that can be neatly removed from society; it's a broad set of techniques already embedded across science, medicine, infrastructure, and everyday software. Calling to ban it is like trying to stop the wind with your hands.
The asbestos analogy doesn't hold either. Asbestos is intrinsically harmful: its normal use causes serious health damage. AI is not that kind of thing. Treating them as equivalent is a false analogy. AI is much closer to a tool. Like a knife, it can be used harmfully or constructively. The ethical question is not whether the tool exists, but how it is built, governed, and used.
You're also presenting claims about "AI" being built on stolen material and exploited labor as if they applied to the entire field. Some of those criticisms are valid in specific cases, especially regarding certain corporate practices. But generalizing them to all AI development ignores the existence of university research, open-source projects, and systems trained on licensed or public datasets.
What's striking is that your argument completely ignores the positive applications that already exist: AI assisting medical diagnosis, enabling accessibility tools for disabled users, improving translation between languages, accelerating scientific research, helping analyze complex datasets, or supporting education. You may think some of those benefits are overstated, but pretending they don't exist at all weakens your argument rather than strengthening it.
If we want a serious ethical discussion, the relevant question isn't "should we ban AI?" but which uses should be restricted or prohibited, and under what rules the rest should operate. That's precisely the direction policymakers are taking, for example with the European Union AI Act @EUCommission which regulates AI according to risk levels and bans specific harmful uses rather than treating the entire technology as inherently illegitimate.
@abucci Banning something like asbestos is not comparable to banning AI. Asbestos is a specific substance with inherently harmful properties. Generative AI and LLMs are techniques used across many different systems and applications, from medical imaging analysis to language technology and scientific data processing. That makes blanket prohibition a fundamentally different, and far more impractical, regulatory problem.
I also don't think describing AI as a tool is a "cover story." Technologies can absolutely be embedded in political and economic projects, but that does not negate the fact that they are still general-purpose tools with multiple possible uses. Both things can be true at the same time.
On the benefits: pointing to accessibility, translation, or scientific data analysis is not "using disabled people as pawns." These are documented applications that already exist and are used in practice. For example, AI-based assistive technologies are widely used for speech-to-text, captioning, and screen-reader improvements; machine translation systems are used daily by millions of people; and machine learning methods are routinely used in fields like protein structure prediction, medical imaging analysis, and climate modeling.
None of this denies the real problems you raise: copyright disputes, labor conditions in data labeling, environmental costs, or misuse. Those are legitimate policy questions. But acknowledging harms does not logically lead to the conclusion that the entire technological field is inherently unethical or should be banned outright. That's precisely why current policy discussions, such as the European Union AI Act, focus on risk-based regulation and banning specific harmful uses rather than prohibiting the technology as a whole.