I spent a long time experimenting with AI before finally writing about it in depth. It can be pretty useful — but is it worth it?

https://www.citationneeded.news/ai-isnt-useless/

#ArtificialIntelligence #AI #newsletter #CitationNeeded

AI isn't useless. But is it worth it?

AI can be kind of useful, but I'm not sure that a "kind of useful" tool justifies the harm.

Citation Needed

When I boil it down, I find my feelings about AI are actually pretty similar to my feelings about blockchains.

#ArtificialIntelligence #AI #newsletter #CitationNeeded

If you liked this essay, consider signing up for a pay-what-you-want subscription.

My writing is never paywalled, but your support is what helps me keep doing this.

https://www.citationneeded.news/signup/

#newsletter #CitationNeeded

@molly0xfff Nice marketing. Seriously, that’s good work. Curious to learn how well it works.
@molly0xfff It also doesn't help when "normals" see the grift and hate the products. For example, this rant by IronMouse, who has nothing to do with tech. https://mstdn.social/@mattwilcox/112277570122352261
Matt Wilcox (@[email protected])

The AI stuff isn’t convincing to “normals” either. IronMouse absolutely rinsing Twitter’s “AI” features here, nail on the head. “Reads like when I was at school padding out a bullshit essay with loads of words and not saying anything”. “AI artists?” *laughing* “get that shit out of my art tags. I don’t want to see it.” AI is going to implode so hard in the next 12 months, when the over-valuations of speculative futures adjust to a reality where no one is liking it. https://www.twitch.tv/videos/2120071913?t=2242s

@molly0xfff Hi,
If I use my GBP Credit Card, I'll get a $2 currency charge?
Do you know if "LINK" allows billing in non USD currencies? I've never heard of or used "LINK".
@Dr_Von2 hi! shoot me an email with the tier you're hoping for and i can give you a GBP payment link. [email protected]

@molly0xfff If only people in Silicon Valley weren’t primarily motivated by becoming billionaires.

This isn’t even an indictment of capitalist success, just…lop off a few zeros there.

You can still get rich quietly making moderately useful things.

@molly0xfff It sure uses a lot of power and cooling water to do a meh job. Maybe using an intern (or Fiverr) would be a better choice, because it also helps a person.
@obviousdwest @molly0xfff not to mention that many of these AI applications have in the end depended on low-paid labor somewhere on the other side of the world
@molly0xfff I just pictured Refreshmentbot from Futurama as AI replacement for a sloppy intern

@molly0xfff AI is following the same hype train I have seen my entire life in tech.

And like every other hype in tech, it's about 80% greed trying to make money by selling a 'perfect future', and 20% true believers who should really, really REALLY know better and yet have somehow become converts to a technical concept or tool whose flaws they are professionally trained and experienced to spot, but are blind to.

My MS is in machine learning. I know how chatGPT works under the hood (just as I knew how blockchain worked on a technical basis), and I cannot understand how some of the people who ALSO know how it works can treat LLMs like they think or understand.

I can see how it can fool people who don't have a glimpse under the hood, but even they should start to notice they're talking to a literal Chinese Room example.

Words in, most statistically likely words out in the most likely pattern, zero thought or comprehension.

@EmilyGB2023 Glad to hear that my intuition wasn't too far off! Still, thanks for the insight.
@molly0xfff oohh.. c'mon now.. i haven't met a blockchain that can create me endless pictures of fairies riding corgis. #AI has its uses.
@molly0xfff I suppose the difference would be that people can lose a lot of money on / with the blockchain.. oh wait…

@molly0xfff I appreciate the nuanced view, and think you nailed it.

And the costs are indeed staggeringly high!

@molly0xfff Interesting read! It aligns really well with my own opinion about AI.

Sadly, the companies are rushing to automate the kinds of creative work that people enjoy doing, like writing, coding and drawing, instead of automating the dangerous and boring work that no one is excited about.

@molly0xfff
Interesting.

Incidentally, I find that care in terminology is a good way to take apart overhyped tech. I'd avoid conflating "blockchain" and "consensus through proof-of-work". The issues of speed and cost stem from the latter.

@pluralistic

@ideal_CH speed and cost issues are pretty universal to blockchains. POW is worse, sure
@molly0xfff I mean, git or ssb use "blockchains". It does not help understanding to include consensus into the definition of "blockchain".
@ideal_CH git and SSB do not use blockchains

@molly0xfff At least blockchain doesn't first require the theft of tons of people's works to be functional  

Both of them still suck massively though.

@rgbunny @molly0xfff [NFTs based on stolen artwork have entered the chat]
@molly0xfff it is going to be 5 years until real open-source AI hits the SMB sector, so they have time to both get worse and better at the same time - my opinion
@molly0xfff I find the whole AI thing a bit of a smoke-and-mirrors trick. The image generators are clever, but asking it to generate code or anything complex turns out to be rubbish and error-prone. It's not much cop for coding; most of what it generates is trash. Also, ask it to write an article and it clearly makes up stuff which is blatantly wrong. I guess people think it's amazing because it 'looks' amazing, but dig deeper and it's a bit rubbish really.
@molly0xfff Not surprising considering it's the exact same scammers behind both, and even the same business model (selling GPUs/compute time to do utterly pointless shit to fools who think they'll make money on it).
@molly0xfff You use AI every time you make or receive a payment. Without it there would be a lot more fraud undetected and a lot more false positives (legitimate transactions wrongly declined).

@TimWardCam @molly0xfff

The misuse of the term destroys discourse about it. MML about patterns of transactional behaviour hasn't any real overlap with MML using Large Language Models.

There's no intelligence involved in processing the models. There's considerably more precision in financial transactions than in loose English language models. That doesn't necessarily mean they're more accurate, but the provenance of the data and its mapping is much simpler.

@simon_lucy @molly0xfff Yeah, that was my point really. "AI" as a marketing term covers a wide range of things, some of which are more useful than others.

@TimWardCam @molly0xfff

And I should have used ML, Machine Learning, rather than MML which has all sorts of meanings.

Being able to show the working out is a necessary requirement: where did the data come from, and how were the results produced?

Provenance, 'reasoning', and training sets aren't available for LLMs. Making them available wouldn't help users directly, but it would enable third-party verification.

@simon_lucy @molly0xfff Getting systems to explain *individual decisions* is still, I'm told, a research topic.

@TimWardCam @molly0xfff

It doesn't look as if it's moved on very far in the past 7 years (which is the last time I thought seriously about it) but it is tractable.

It lacks proper incentive and motivation. If using the results required generating sufficient markers to enable explanation before the product was released, then real work would be done.

Right now it's written off as too expensive, something to do later. So it will never be done.

@simon_lucy @molly0xfff I understand it's an active research field because companies in the business expect (or should that be "fear") that financial service regulators will eventually require explainability.

@TimWardCam @molly0xfff

I think it's a lot simpler for financial transactions, but they probably consider their risk valuation and the heuristics they apply for fraud to be secret sauce.

But the regular kind of disclosure would be audit, so controlled rather than public.

If they were treated as disclosable by someone bringing a well-formed suit, then that's still manageable, and not disclosing would have far worse consequences.

It would help to be in a multi-state regulatory authority.

@TimWardCam indeed, and i have a footnote in the first sentence clarifying the terminology i use throughout the piece

@molly0xfff Your entire essay is excellent but especially salient point: "do we even want to be doing these things? If all you want out of a meeting is the AI-generated summary, maybe that meeting could've been an email. If you're using AI to write your emails, and your recipient is using AI to read them, could you maybe cut out the whole thing entirely?"

A thousand times yes!

@molly0xfff add illegal in there and 💋
@molly0xfff surely blockchain costs were much worse in terms of energy usage though right?
@molly0xfff
both of them are tools, but the truth is there's usually a better tool for the job, and the few things they are uniquely good at aren't necessarily something that is beneficial to society.

@molly0xfff i think the way AI has been most "useful" is for churning out low-quality garbage (slop) that you can fool people into clicking on to generate ad revenue. it's not a good thing, but it's something that people have managed to monetize. not sure how long it will stay profitable though.

it has also been fairly successful at deepfakes for scams as well. so i guess being most useful for scammers is another thing that it has in common with blockchain

@molly0xfff is there a delay before the pod episode is published on https://www.spreaker.com/show/6019906/episodes/feed ?
@tinyrabbit only when i forget to hit the publish button 😅 up now.
@molly0xfff This is pretty much how I feel about it. Unlike crypto, I think there actually is an underlying tech here that has some use value and is neat. But its use is very limited, and it seems like we're in a bubble where companies overstate the use value of LLMs and will ultimately create a bunch of crappy products and lay off people they'll inevitably have to hire back. It's a bummer
@molly0xfff excellent writeup, thank you.
@molly0xfff This is the core message to me... and it's a self-defeating proposal. I see a lot of second-language English speakers using chatGPT to sound more "professional", but it just ends up looking obviously like chatGPT. There was a nice post here a while ago framing this as a facet of racism in LLMs: https://kolektiva.social/@FractalEcho/111949020988947948
Rua M. Williams (@[email protected])

The racism behind chatGPT we are not talking about.... This year, I learned that students use chatGPT because they believe it helps them sound more respectable. And I learned that it absolutely does not work. A thread.

A few weeks ago, I was working on a paper with one of my RAs. I have permission from them to share this story. They had done the research and the draft. I was to come in and make minor edits, clarify the method, add some background literature, and we were to refine the discussion together.

The draft was incomprehensible. Whole paragraphs were vague, repetitive, and bewildering. It was like listening to a politician. I could not edit it. I had to rewrite nearly every section. We were on a tight deadline, and I was struggling to articulate what was wrong and how the student could fix it, so I sent them on to further sections while I cleaned up ... this.

As I edited, I had to keep my mind from wandering. I had written with this student before, and this was not normal. I usually did some light edits for phrasing, though sometimes with major restructuring. I was worried about my student. They had been going through some complicated domestic issues. They were disabled. They'd had a prior head injury. They had done excellently on their prelims, which of course I couldn't edit for them. What was going on!?

We were co-writing the day before the deadline. I could tell they were struggling with how much I had to rewrite. I tried to be encouraging and remind them that this was their research project and they had done all of the interviews and analysis. And they were doing great. In fact, the qualitative write-up they had done the night before was better, and I was back to just adjusting minor grammar and structure. I complimented their new work and noted it was different from the other parts of the draft that I had struggled to edit. Quietly, they asked, "is it okay to use chatGPT to fix sentences to make you sound more white?" "... is... is that what you did with the earlier draft?"

They had, a few sentences at a time, completely ruined their own work, and they couldn't tell, because they believed that the chatGPT output had to be better writing. Because it sounded smarter. It sounded fluent. It seemed fluent. But it was nonsense! I nearly cried with relief. I told them I had been so worried. I was going to check in with them when we were done, because I could not figure out what was wrong. I showed them the clear differences between their raw drafting and their "corrected" draft. I told them that I believed in them. They do great work.

When I asked them why they felt they had to do that, they told me that another faculty member had told the class that they should use it to make their papers better, and that he and his RAs were doing it. The student also told me that in therapy, their therapist had been misunderstanding them, blaming them, and denying that these misunderstandings were because of a language barrier. They felt that they were so bad at communicating, because of their language, and their culture, and their head injury, that they would never be a good scholar. They thought they had to use chatGPT to make them sound like an American, or they would never get a job.

They also told me that when they used chatGPT to help them write emails, they got more responses, which helped them with research recruitment. I've heard this from other students too. That faculty only respond to their emails when they use chatGPT. The great irony of my viral autistic email thread was always that had I actually used AI to write it, I would have sounded decidedly less robotic. ChatGPT is probably pretty good at spitting out the meaningless pleasantries that people associate with respectability. But it's terrible at making coherent, complex, academic arguments!

Last semester, I gave my graduate students an assignment. They were to read some reports on the labor exploitation and environmental impact of chatGPT and other language models. Then they were to write a reflection on why they have used chatGPT in the past, and how they might choose to use it in the future. I told them I would not be policing their LLM use. But I wanted them to know things about it they were unlikely to know, and I warned them about the ways that using an LLM could cause them to submit inadequate work (incoherent methods and fake references, for example).

In their reflections, many international students reported that they used chatGPT to help them correct grammar, and to make their writing "more polished". I was sad that so many students seemed to be relying on chatGPT to make them feel more confident in their writing, because I felt that the real problem was faculty attitudes toward multilingual scholars. I have worked with a number of graduate international students who are told by other faculty that their writing is "bad", or are given bad grades for writing that is reflective of English as a second language, but still clearly demonstrates comprehension of the subject matter.

I believe that written communication is important. However, I also believe in focused feedback. As a professor of design, I am grading people's ability to demonstrate that they understand concepts and can apply them in design research and then communicate that process to me. I do not require that communication to read like a first-language student, when I am perfectly capable of understanding the intent. When I am confused about meaning, I suggest clarifying edits. I can speak and write in one language with competence. How dare I punish international students for their bravery? Fixation on normative communication chronically suppresses their grades and their confidence. And, most importantly, it doesn't improve their language skills! If I were teaching rhetoric and comp it might be different. But not THAT different.

I'm a scholar of neurodivergent and Mad rhetorics. I can't in good conscience support Divergent rhetorics while suppressing transnational rhetoric! Anyway, if you want your students to stop using chatGPT then stop being racist and ableist when you grade. #chatGPT #LLM #academic #graduateStudents #internationalStudents #ESL

@LeoRJorge yeah, you may have noticed the instructions I gave around proofreading had to include very explicit instructions to *just* proofread, *not* rewrite it in that godawful GPT "voice". i imagine something similar is necessary when trying to get it to help with English
@molly0xfff It's very sad. I'm in academia, and I frequently get these extremely weird emails from graduate students, even in informal settings where there would be no need for anything like that... they end up as a weird mix of legalese and marketing language in a simple exchange about manuscript reviews
@molly0xfff the tendency of people to trust AI tools more than they should is easily my biggest fear regarding AI. I'm curious if there is any good research out there into just how bad it is
@molly0xfff I 100% agree with both your views on positives, shortcomings, risks - and obviously, the grifter hype and overpromise.
However, for me, it's still a huge net win. For example, when writing code at work without Copilot I feel less productive or even frustrated (I use it at home for personal projects too). Even if it's "just autocomplete" and misses the mark a lot, it is still an amazing autocomplete.
Btw, here is my recent post on my uses - a lot of overlap with yours https://bartwronski.com/2024/01/22/how-i-use-chatgpt-daily-scientist-coder-perspective/ :)
How I use ChatGPT daily (scientist/coder perspective)

We all know how the internet works—lots of “hot takes,” polarizing opinions, trolling, and ignorance.  Recently, everyone has opinions on AI and LLMs/GenAI in particular. I won’t focus here on…

Bart Wronski
@molly0xfff and I think *exactly* like with web3 and crypto stuff - a) time will filter out the bad actors and grifters and this part will fall apart suddenly and completely. b) people will understand those tools better and their limitations/overpromise. c) there is more and more push for regulations to make it more ethical - hopefully, those will succeed.
@BartWronski thanks for the link, looking forward to reading it