@abucci I’m not making any demands. What I’m pushing back against is the tone of this recent wave of Mastodon posts, which jumps very quickly to blanket prohibition as the supposed “solution” to AI.
Of course AI has risks and real problems. No serious person denies that. But proposing to simply ban AI is neither realistic nor particularly helpful. AI isn’t a single product or substance that can be neatly removed from society; it’s a broad set of techniques already embedded across science, medicine, infrastructure, and everyday software. Calling for a ban is like trying to stop the wind with your hands.
The asbestos analogy doesn’t hold either. Asbestos is intrinsically harmful: its normal use causes serious health damage. AI is not that kind of thing. Treating them as equivalent is a false analogy. AI is much closer to a tool. Like a knife, it can be used harmfully or constructively. The ethical question is not whether the tool exists, but how it is built, governed, and used.
You’re also presenting claims about “AI” being built on stolen material and exploited labor as if they applied to the entire field. Some of those criticisms are valid in specific cases, especially regarding certain corporate practices. But generalizing them to all AI development ignores the existence of university research, open-source projects, and systems trained on licensed or public datasets.
What’s striking is that your argument completely ignores the positive applications that already exist: AI assisting medical diagnosis, enabling accessibility tools for disabled users, improving translation between languages, accelerating scientific research, helping analyze complex datasets, or supporting education. You may think some of those benefits are overstated, but pretending they don’t exist at all weakens your argument rather than strengthening it.
If we want a serious ethical discussion, the relevant question isn’t “should we ban AI?” but which uses should be restricted or prohibited, and under what rules the rest should operate. That’s precisely the direction policymakers are taking, for example with the European Union AI Act (@EUCommission), which regulates AI according to risk levels and bans specific harmful uses rather than treating the entire technology as inherently illegitimate.
@abucci Banning something like asbestos is not comparable to banning AI. Asbestos is a specific substance with inherently harmful properties. Generative AI and LLMs are techniques embedded in many different systems and applications, from medical imaging analysis to language technology and scientific data processing. That makes blanket prohibition a fundamentally different, and far more impractical, regulatory problem.
I also don’t think describing AI as a tool is a “cover story.” Technologies can absolutely be embedded in political and economic projects, but that does not negate the fact that they are still general-purpose tools with multiple possible uses. Both things can be true at the same time.
On the benefits: pointing to accessibility, translation, or scientific data analysis is not “using disabled people as pawns.” These are documented applications that already exist and are used in practice. For example, AI-based assistive technologies are widely used for speech-to-text, captioning, and screen-reader improvements; machine translation systems are used daily by millions of people; and machine learning methods are routinely used in fields like protein structure prediction, medical imaging analysis, and climate modeling.
None of this denies the real problems you raise: copyright disputes, labor conditions in data labeling, environmental costs, or misuse. Those are legitimate policy questions. But acknowledging harms does not logically lead to the conclusion that the entire technological field is inherently unethical or should be banned outright. That’s precisely why current policy discussions, such as the European Union AI Act, focus on risk-based regulation and banning specific harmful uses rather than prohibiting the technology as a whole.