Stop using AI, people. I don't care if it makes things easier for you. You're sucking up water, burning the planet, models were trained on stolen data, and you're making the rich even richer. And I didn't even list all of the issues with AI.
@cmccullough alternatively, if everybody on earth with an internet connection maxed out their free tier every day, I wonder how long openai and anthropic could last?
@cmccullough
Can you say more about how we draw the line?
@davidarnell Honestly, I'm not sure we can. All we can do is make sure everyone knows the issues with the use of AI and hope people make the right choice. Corporations surely aren't going to listen.
@cmccullough how will me hosting my own models and running them on my computer make alibaba rich?
@Ntropic Are your self-hosted models going to be trained on external data or are you going to train them yourself? It's not just about making the rich richer. There are lots of ethical issues.
@cmccullough I for one would love to develop my own model based on my own data... But ya, the corps are gonna kill the planet first.
@cmccullough they aren't going to be trained on anything; they're trained already. And I don't care whether they've been trained on stolen data, since nobody is making money off of them - my creativity was trained on stolen data too. I care about power concentration and exploitation, but those arguments don't apply to open models.
@cmccullough I don't think people hate AI - it's just an easier target. They hate techno-fascism, they hate capitalist power concentration. They hate that they are made dependent on income from a craft, and that the powerful then monopolize that craft - while using its output without compensation. The problem was never machine learning; it was our economic and social system.
@Ntropic @cmccullough no, everyone I know hates AI
@clint @cmccullough I am sure they think so. But maybe you could address the point about that being a misattribution error.
@Ntropic I don't like AI. I don't like the ethical issues, and there are many. Rebuild AI in a way that doesn't use stolen data to train the models, doesn't deplete our water sources, doesn't help scorch the planet, and isn't allowed to be used by the rich and powerful to bring down workers, then I will be okay with it. Maybe.

@clint
@cmccullough yes yes yes there is this one universal website called "ai.com" where you ONLY use GENERATIVE AI to generate pictures of cats
@cmccullough did you mean "Stop MISUSING Generative AI"?
@jarinks No, I mean, don't use AI. Everyone seems to look past all of the ethical issues with AI and LLMs.

@cmccullough Predictive AI helps detect cancer and negate human error if trained properly btw

They are being trained on cancer samples from patients and other conditions

LLMs that don't do image gen, or that are ethical like Claude, are fine

Saying "stop using ai" is like saying "stop using fire because its the root cause of forest fires"

@jarinks @cmccullough I'm not sold on Claude being a golden boy here. I've gotta do more research before I'd say so. I think the resource strain and entanglement with fascist governments is probably as bad as Google's or OpenAI's, but again, I don't know.

All I can say is that the most ethical LLMs are the least useful to me. By all means let them cure cancer if they can

@jarinks @cmccullough if I could be as confidently wrong, I’d be a billionaire.

Specialized models are older than the AI bubble and don't consume enormous amounts of resources in training and operation.

Ethical oracle models, however, don't exist. By their nature they require climate-destroying amounts of resources for training and operation alike. Claude is no exception here.

Quantum ⊂ AI (@[email protected])

If you're telling me we should leave aside the ethical concerns for the moment, then you *know* that what you're advocating is terrible

@cmccullough it does not make things "easier" for me. It makes things possible for me - like, for the first time in 26 years, being able to independently access and take photos. Should I lose that?

@freya @cmccullough IMO as an extremely anti-genAI advocate, accessibility uses are the one exception I make. If you are disabled and using it for access, please do be aware of its many dangerous pitfalls, but by all means go ahead and use it; that's the rare case where the benefits outweigh the harms.

OTOH, if you're using it to "provide accessibility" and you've got other options, usually one of them is better, and using genAI is a cost/quality tradeoff that is screwing over the disabled people you claim to be helping, *especially* if you're deploying it in an interactive context. (Relevant scary example: the Slack bot that, when asked about the fire alarm in their building, said it was just a test - i.e., generated the most likely answer, as is its function - even though there was a real fire. Thankfully nobody was harmed.)

@freya @cmccullough
Tl;dr: the answer to your question is a matter of balancing benefits and damages generated by ai technology.

Long version:

Good question. The thing is, for almost every technology there is someone benefitting from it, and taking that technology away would hurt those beneficiaries.

So the true question is: what is the balance? What is the damage done by keeping the technology, and what is the damage done by abolishing it? And, for both options, what could be done to reduce that damage?

I am not that worried about electric power consumption. That is an immediate issue, yes, but considering the improvements in renewable power generation, I believe this could be solved in a rather short period (depending on political will).

More critical is water consumption. Clean water is already a highly valuable resource, and once consumption by any technology passes a certain point, it has an immediate impact on plants and animals as well as humans in the area. This can be somewhat mitigated by setting up data centers in water-rich locations, but it remains an issue.

Next is the issue of making ruthless corporations and their billionaire owners even richer and more influential. Not too long ago I would have said I don't care, as their money doesn't hurt me, and with a few tax law adaptations we could even benefit somewhat. However, given their actual political influence in recent years, combined with their push for policies that hurt people all over the globe, I am wary of any cent going into their pockets. This can be somewhat mitigated by using FOSS AI models; however, those are mostly derived from big tech models, so without a certain number of people feeding big tech, those will disappear as well.

The last issue is copyright. This is mostly an issue for generative AI, and there it is basically unresolvable, as AI companies have repeatedly argued that they cannot train their models without ignoring copyright.
For other types of AI (e.g. image analysis), this is less of an issue, so depending on what kind of AI you're using, it might not be relevant for you.

All these points need to be considered by each individual user, as well as by politicians setting up meaningful regulation. For individual users it is a question of conscience; for politicians, a question of weighing overall disadvantages against benefits.

@cmccullough can't put the genie back in the bottle, the only way is to find a good way forward.
@cmccullough I'm not saying that using AI is good, but local AI run on my own computer is at least better.
Sure, it's trained on stolen data, but no one is making money off of free open models.
What we need is a free open model trained only on ethical data. Until then we have to pick the best of what we have: no AI, or local AI. No AI is always best.
The day when we can train our own models isn't here yet. But when we can, it will be awesome. Then I can build one with my own data.
@cmccullough just because we can does not mean we should
@cmccullough FACTS and TRUTH .. even if we are told LIES BY AI or BOTS to fill false narratives...
@cmccullough
Run models on your local computer.
You already are, in a sense, if you use a smartphone.