@anildash
@dweb #DWeb, but that's not specific enough.
I'd look at folks around http://mariafarrell.com/ and https://abebabirhane.com/
@anildash are people ever "genuinely objective"? You probably mean some form of "lack of strong emotions towards AI" but I'd argue that that - given the real impacts and flaws of AI - creates a bit of a false middle ground.
Even though you might not see them that way, I would call a lot of them the (neo-)Luddites. But I feel like you are looking for something different that is a lot harder to pin down, because they can't be structurally critical, which kinda limits the target a lot.
I wonder how useful defining that group is though. Is there a neutral position when looking at something actively harming many of the structures defining our world?
@jplebreton @tante @anildash It doesn't really have to be about Grok or whatever. I think the cohort Anil is talking about (evidenced by his mention of Simon) is one that I have a foot in myself -- for example, in my hobby time I'm working with fully local voice assistant tech that uses small LLMs at different levels of the stack, all in service of not sending recordings of my house to Amazon or Google.
"Objective" is probably the wrong word though.
@glyph @anildash @darius @tante
I think, sometimes, that being genuinely thoughtful about the hype actually pushes the boundaries of my emotional regulation
Hundreds of billions of dollars of investment in a technology that doesn't work for most of its hyped use cases: that sounds pretty dysregulated to me
@anildash @simon
Frankly, there is no such thing as a pure "objective" viewpoint without narrowing the question. "Is marmite good?" is not a question that can be answered objectively.
You CAN answer a question like "do current AI coding methods create products of equivalent quality to humans with less effort?", if you define all your terms usefully and use only well-conducted studies instead of expert opinion. But even then the answer is unlikely to be purely objective.
@anildash I don't think such people would be in a named club. They're not in a "fan club," they're not "activists against," they're not "gold prospectors," their livelihood is not dependent on a particular level of popularity of the tech.
IOW: Technologists weighing the value of a new tool against the costs of the tool.
We are in an era of loud street teams, influencers, and "true believers" (in an Eric Hoffer sense). Discourse on "is this hammer better than that hammer" has suffered for it.
I mean, if you want to say "AI critics who are not Ed Zitron" you could just say that.
When you require active users of LLMs, that's not "objectivity".
More broadly, you're attempting to reverse the burden of proof.
I commend this thread: https://wandering.shop/@xgranade/115274833212185074
Generally, either "con artists" or "marks." https://me.dm/@anildash/115265668682836682
@anildash I wonder if technological capability is the most interesting category.
A lot of the questions around AI that warrant objective analysis aren't about the tech, they are philosophical (including questions about what intelligence, knowledge, beauty, truth and ethics are)