I’m deciding against using AI to generate anything. No, I won’t be able to tell if something is generated by AI, but if you tell me it is, I won’t be interested. I haven’t been so far.
AI was a nuclear bomb that we shouldn't have invented, much less given to everybody.
My eyes glaze over when I see a ChatGPT screenshot. Sorry, but I’m not willing to make my mind worse than it already is.
I’ve added AI chatbots to the list of things I, as a weird nerd, still missed out on, even though that was largely a conscious decision.
https://notes.justagwailo.com/missed-out
Just a Gwai Lo: Notes - Missed Out
This is a list of the things in nerd culture that passed me by even though I identify as a weird nerd:
Dungeons & Dragons
Cthulhu and all of H.P. Lovecraft
Reddit
Slashdot
Video games (until 2014 when I bought a PS4™)
Mastodon
FFFFOUND and MLKSHK
Comic books
Bitcoin and other cryptocurrencies
The only good thing about AI is that the ebooks bots, which mimicked people’s postings based on their old work, are back. Mine was like looking in a glitchy mirror. I’ll have you know that I couldn’t get enough of that.
“It’s too early to put the brakes on giving a nuclear bomb to everybody.”
https://www.axios.com/2023/05/05/runway-generative-ai-chatgpt-video
Runway brings AI movie-making to the masses
A fledgling service lets us create video clips from photos, videos and text.
Axios
Work rolled out an AI thing internally, so the temptation to partake increases. I did use AI once, and I didn’t understand the result, so it’s in the same category as cigarettes (the nicotine variety) for me: I smoked exactly one, wondered what the big deal was, and never did it again.
Can I call myself a conscientious objector if work requires me to use their internal AI tool? Don’t worry, they haven’t yet. I’m not asking if I can thoughtfully decline to do something that’s mandatory; I’ll accept the consequences if they’re just. What I mean is, if I do decline, can I call myself that, knowing what the term means in the military draft sense?
It’s always a surprise to me when someone I know uses AI tools regularly. And it’s the same feeling as when I find out that somebody I’m in love with smokes cigarettes.
It must be weird to the people looking at my laptop screen over my shoulder that I don’t use any AI tools.
It may be unfair of me to stop reading an article that’s adorned with AI-generated imagery, but I will point it out in a reply to the official social media posting when it’s not clearly labelled as such.
For example, this article credits the header image as “cascoly2 / Adobe Stock” when, more precisely, cascoly2, a user on Adobe Stock, used AI to generate the image.
https://bigthink.com/business/how-to-spot-pluralistic-ignorance-before-it-derails-your-team/
How to spot "pluralistic ignorance" before it derails your team
It's 5.37 p.m., and the meeting is dragging on. But no one seems to care. So, you say nothing, but everyone feels the same.
Big Think
Interesting that the image that adorns this article is no longer the one that (as far as I can tell) was generated by AI. It’s not important that I can prove it (OK, yes it is); it’s more important that I can read the article in good conscience now.
“There’s no need to indulge in sci-fi fantasies of killer computers. We’re doing fine at accelerating extinction without AI, and AI’s mundane, actual harms are real and present dangers.”
https://finiteeyes.net/technology/there-is-no-ethical-use-of-ai/
There Is No Ethical Use of AI – Finite Eyes
What percentage of the people named in this article who currently work at Axios will be laid off in six months and replaced by AI?
https://www.nytimes.com/2024/04/11/business/media/axios-ai-strategy.html
Axios Sees A.I. Coming, and Shifts Its Strategy
“The premium for people who can tell you things you do not know will only grow in importance, and no machine will do that,” says Jim VandeHei, C.E.O. of Axios.
The New York Times