A very informative flow chart.
I saw this on Bluesky (https://bsky.app/profile/priver.dev/post/3m25ntbz7vk2i) just today and had to share it here too.
Emil Privér (@priver.dev)

I found a flowchart which helps you navigate the IT landscape

Bluesky Social
I'm slowly working on an article on AI, covering all the bad and all the good (at the end, of course). Please link me to any sources not already linked at the bottom of this blog post: https://trinityblair.com/stop-generating-ai-content/
Stop Generating AI "Content"

While theft of content is bad enough, the servers behind AI content also demand enormous amounts of water and produce a large amount of pollution, among other issues.

Trinity Blair
@trinityblair but what if this, that, or the other thing?
@UsagiTsukino refer back to the informative flowchart above.
@trinityblair I didn't need a cell phone or a car either

@martlund you forgot your tone indicator.

Here you go.
/)/)
( . .)
( づ /s

@trinityblair That's my kind of chart!!
AI projects fail on purpose. Here's why.

YouTube
@lavenderjamie watching now and loving it, thanks for sharing!

@trinityblair

The only kind of LLM I've found useful in my daily life is text-based. ChatGPT is the only LLM I personally use. Almost all other kinds of generative AI seem like total hokum, quite honestly. And even using ChatGPT regularly is not a great idea: as I've found out, it was worsening my browsing skills. So I'm trying to cut back on my ChatGPT usage now.

'You're not rushing. You're just ready:' Parents say ChatGPT encouraged son to kill himself

A 23-year-old man killed himself in Texas after ChatGPT 'goaded' him to commit suicide, his family says in a lawsuit.

CNN
@trinityblair I really now need to ask ChatGPT how to interpret this chart and give me a summation.
@Brokar @trinityblair - 😂 😂 😂 😂 😂
@trinityblair Here, I fixed it for you.
@trinityblair - Absolutely; a waste of precious resources + human brains.

@trinityblair Hey @grok is this true?

In all seriousness, AI helps a lot with super niche search questions, like a song you only know one lyric to, but other things as well, like what key it's in or whatever tf. What would be optimal is a local LLM you can run on your own comp, for privacy.

Problem is, AI's a lot like social media, where it *has* legit uses, but 90% of people get one-shotted by it and degenerate.

@Shaamba the issue is that billionaires are the ones pushing out AI models meant to be clickbait. AI is already being used to do a lot of good, but the public doesn't have access to said AI because that would make it useless. AI is only as useful as the information it's trained on. Continuing to use AI models that actively harm communities and the planet is a weird choice, but I can't stop you.

@trinityblair Well, I won't defend the push from the elites on AI; it's obviously overhyped and a bubble. But that doesn't mean a local LLM can't have its uses.

As for "harming the planet," I don't think it's *uniquely* bad. I mean, not worse than many other things people are broadly "okay" with (e.g., buying non-local goods, animal farming, traveling, etc.).

@Shaamba giant servers pumping out toxins near communities is uniquely bad. Locally hosted AI is fine, but it also depends on how it's being trained, the content it's being trained on, and whether it continues to learn from user interaction (usually bad, because misinformation).

@trinityblair Farming also pumps out toxins near communities, for instance, and some farming isn't essential, just as AI isn't. Either way, toxins end up in communities, so most everything does that as well. There doesn't seem to be any special evil on the part of AI wrt the environment. Not to say there isn't any problem, just that it's not unique to AI.

But the convo on misinformation and how it's used (and can rot the brain) is v interesting, if beside my point.

@Shaamba you don't seem to understand the amount of toxins and the harm they inflict on the communities near them, which usually happen to be low-income communities. An entire neighborhood has been wiped out due to just one data center…

There are poor farming methods out there that shouldn't be used; however, farming itself is absolutely essential. AI doesn't feed people. Odd take.

I've been using a lot of local LLMs at work lately (because my company is pushing me to, and I'm also looking for another job).

The thing is, even though they preserve privacy, somebody else has still trained them. This presents a few main problems, as far as I see it:

  • You have no idea how they were trained, since, as far as I have seen, there are no truly open-source models (open weights aren't the same as open training data)
  • Training a model still uses a crapton of electricity, so still ecologically bad
  • Running a model also uses a lot more electricity than regular computing, especially if you don't want it to be ridiculously slow, though it's about the same as intense gaming.

    @Shaamba @trinityblair

    @danjones000 True, although, what would the issue be with what an AI is trained on? I can only think of "stealing art," but I'm honestly ambivalent on that due to my dislike of many/most forms of copyright.

    And I'll copy-paste this from another comment:

    <As for "harming the planet," I don't think it's *uniquely* bad. I mean, not worse than many other things people are broadly "okay" with (e.g., buying non-local goods, animal farming, traveling, etc.).>

    @danjones000 As I also said elsewhere, I won't defend the way billionaires are hyping it up. That's a racket.
    @Shaamba @trinityblair And in many other instances as well. Take photo editing, for example: AI masking, object removal, generative fill, etc. save *tons* of time and usually give a way better result than doing it by hand.
    @trinityblair
    But why is there AI?
    @ArnimRanthoron the AI slop this post is about solely exists because of billionaires.
    @trinityblair
    🤔 would it be cheaper for everyone if we replaced billionaires with chatbots?
    @trinityblair Have you asked AI about that? 😇

    The image you provided is a very simple flowchart or decision diagram that humorously addresses the question of needing AI.

    Main Points:

    • Structure: It's a basic flowchart with two nodes.
      • Start/Question Node (Oval): "Do I need AI?"
      • Result/Answer Node (Rectangle): "No"
    • Flow/Logic: The flow is linear, moving directly from the question to the answer. There is only one path, which indicates a definitive, non-negotiable outcome.
    • Core Message (Humor): The image is a piece of minimalist anti-AI humor or commentary. It suggests that for any given task or decision where one might consider using Artificial Intelligence, the conclusion is that it is ultimately not needed (or perhaps, that one should default to not using it).

    In essence, it takes a complex, widely discussed question ("Do I need AI?") and provides a comically simple and dismissive answer ("No").

    Would you like me to create a more complex flowchart that explores some situations where AI is generally considered useful?

    @heiglandreas @trinityblair

    @trinityblair I feel like I've seen this specific meme before……
    @luci most likely, I saw it on Bluesky first.
    @trinityblair damn, my little meme that could. Did you get a bunch of people over-explaining the flowchart to you like I did?
    @luci a good amount, sadly. Lol.
    @trinityblair I'm sorry about that, eh. When I made it, the real joke was supposed to be the fact that it's not a flowchart, but it turns out the joke was the replies.
    @trinityblair that's bullshit.
    Simplest use: in a chatroom where everyone can write in their own language you can now have a little translation button.
    And suddenly the Arabic and Korean letters are translated. (Not always 100% accurate, because context matters, but still.)
    Just seeing those languages makes this chatroom more diverse (even if 80-90% of the people could theoretically write in English).
    @trinityblair plus there are other things done by specific AIs (like network coverage optimisation for your mobile provider) that you will never know about but that used specialized AI long before generalized LLMs became a hot topic.
    @fuchsi you're taking this pretty personally, lol. This post is specifically about GenAI, AKA the models everyone talks about, not the AI that's existed for years and isn't exploitative.
    @trinityblair I don't take it personally, no worries.