I'm tired of hearing about AI, to be honest. I never cared for it. I don't respect people who use generative AI, and I despise companies that sell out people's data to train it. Yes, people will lose jobs to it, but the world will not be better for it. It's just that consequences are rarely immediately apparent in such complex systems.
@Gargron What worries me more is how much people think it can do, and how they believe the results are somehow going to do X. Lots of overreach, and this smells very much of the dot-com boom.
@Pontificator_OMF @Gargron We live in a world run by marketing majors with MBAs who aren't as interested in selling reality as they are in selling. There are so many decision makers out there who have no idea what an "LLM" is who are signing off on "AI."
@Gargron 💯💯💯 yup totally agree. It's like the Bitcoin and blockchain era. Companies are looking for a problem that Gen AI will solve. Machine learning on its own has better applications.
@Gargron AI is a buzzword. In reality we’re talking about high-level applied statistics, only now achievable with the computational power we have, at our own expense. Corporations riding this wave, blinded by an immediate need for profit, will only cause destruction.
@Gargron Sad to see you have this take.
@urlyman Because I consider it a reactionary response. It is a take that I find philosophically and morally unsound (re: copyright and stealing). I dislike the idea that the economy and jobs are something we should sacrifice progress for; the entire capitalist system is broken and should not be defended. And if we are to make a list of things to sacrifice in defense of the environment, AI would be far down that list. It isn't like we don't already have the technology to make clean energy.
@wagesj45 AI is *directed* by the worst energy-blind strands of capitalism doubling down. Sam Altman is a cheerleader for that. He is far from alone

@urlyman You can make that argument for literally everything. Pharmaceuticals, agriculture, media and news; all owned by the worst people and organizations on the planet. That doesn't mean we make medicine, food, and storytelling illegal.

And you're not even taking open source and academic research in AI into account.

@wagesj45 You’re right. We can. Stepping down the energy consumption ladder is fraught with complexity and danger. But step down we must.

I haven’t mentioned making anything illegal.

As to your final sentence, given this is our first exchange, how exactly do you know what I am and am not doing?

@urlyman >how exactly do you know what I am and am not doing?

>AI is *directed* by the worst energy-blind strands of capitalism doubling down.

You said AI is directed by capitalism. You didn't mention open/academic endeavors. I don't know what you think privately. I can only address the issue you raised.

@wagesj45 I don’t think it’s unreasonable to conclude that the most powerful and prominent voices in the space are the ones who most influence its direction and, crucially, its raison d’être. That does tend to be how power works.

I accept that it’s *possible* that scales will fall from the eyes of the powerful when some far more benign but currently far less visible mode of deploying the tech sweeps through society. That tends not to be how power works.

Fwiw I’m ~here https://mastodon.social/@urlyman/111776354830038771

@Gargron I partly agree with you, although I would not be as negative, and I certainly see many areas where AI is insanely magical and of great value to society.

When it comes to GPT-style AI, however, I think we have to ask ourselves why we should want to delegate creative and joyful work to computers. Expressing our thoughts and feelings through text, still and moving imagery ultimately defines who we are, where we come from and where we will go. This is hard work for a reason.

@gnaegi @Gargron Giving away creativity and trading it for productivity. I'm struggling to see generative AI as more than a productivity tool, but happy to be challenged there.
@Gargron I've lost count of how many people I've blocked, including people I used to follow, who don't understand just how bad everything to do with generative AI, and AI in general, actually is for the environment, artists' rights, and data privacy. It's alarming how many just go "Haha look, funny picture" and then dismiss all the problems out of hand.

@Gargron @TacticalGrace_ the environment, artists' rights (and copyright in general), data privacy, ethics as relating to creatives, ethics as relating to the people, mostly in the global south, who are exploited prefiltering "training data", the reliability of knowledge (including internet pollution), and probably more.

fuck "AI"

I’ll except local applications where you trained your model from scratch using only your own, legally obtained data and probably don’t distribute it: nōn-AGI special-case applications like medical prediction supplements (never replacements), modulo the environmental factor (though I’ve been told that’s much more manageable when building small specialised models without using the generic ones).

@Gargron @TacticalGrace_ (and the pictures often aren’t even funny in their too-glossy-to-be-real hidden-horrors-included appearance)

(and the Fæ might wish to have a word, too… given that the above both are their realm)

@Gargron It’s bad search without the references needed to see how bad it is

@Gargron Agreed about despising companies that sell people's data to train AIs.

How do you feel about companies that just _give_ the data to other companies that use it to train AIs?

@thenexusofprivacy @Gargron Did they explicitly ask the people if their data could be given to someone else? No? Then still bad.
@ariaflame I agree, I'm curious if @Gargron also sees it that way.

@Gargron

I'm all for using it for pattern recognition, and as a tool of discovery with solid scientific process, with ethical data sourcing and management.

But the stuff that's getting hyped now? If you can't sustain your business model without theft, then your business shouldn't exist.

There is actually amazing important work that uses ML/AI, and it's getting tarnished with the same brush. Which is also tragic.

On my instance, we require a CW for any discussion of AI on our Local Timeline

@Gargron For me, it's about the ethics. Where did the data come from? Who is being hurt or helped with that data? Where is the consent? Where is the documentation? Why is it opt-out instead of opt-in?

And so much more.

@Gargron We're nowhere near HAL9000 and we all know how THAT went in "2001".
@Gargron AI is just another tool for capitalists to accumulate wealth and economic power faster and without loss, since you no longer need workers.

@Gargron

Can we block all AI from our content on our WordPress sites, and if so, how?
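
One common approach is a robots.txt file asking AI-training crawlers to stay away. A minimal sketch, assuming you can edit the file at the site root; note the user-agent names below (GPTBot, Google-Extended, CCBot) are current examples that change over time, and compliance is entirely voluntary on the crawler's part:

```
# Ask known AI-training crawlers not to fetch any pages.
# This is a polite request, not an enforcement mechanism.
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

For actual enforcement you'd need to block those user agents at the server level, and various WordPress plugins can manage either approach for you.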

@Gargron There is increasingly room for “the small guy” to tune existing open source models (not using other people’s work) to solve their problems without burning down a rainforest. (See fast.ai and Jeremy Howard.) Bubble aside, there is some genuine good in it and hopefully it will come to predominate over the bad kind. Unlike crypto, where it’s all bad.
@mark @Gargron Even if you tune the small open source models with your own data, they still have initially been created with stolen data...
@sipuliina @Gargron Nothing says it has to be stolen data. You can train language models with Wikipedia or GitHub, for example. More needs to be done on regulating datasets for sure, though.
@Gargron I suggest you browse “21 Lessons for the 21st Century” for well-researched analysis of how AI will be beneficial to ordinary consumers, in medical services, food, entertainment, government. The book came out six years ago so it’s not influenced by the current AI craze. The current AIs are crude toys, like the first of any human discovery. AIs could generate movies for your preferred level of violence, romance and plot. I will pay for an AI to generate good Star Wars movies for me.
@Robodad That sounds horrible. I don't want the art I consume to be tailor made for me. In fact I want the opposite: I want it to challenge me.
@sky @Robodad what if it’s tailor made to challenge you, with a deep knowledge of all art that you’ve found challenging before? Think of an AI music selector that will consider how you reacted to each selection … but now can create the art on the fly too. Or in creating your food.
@Robodad No it can't create new ways in which to challenge me, because fundamentally to challenge the human experience you need to experience it yourself. How can someone who is not a human understand what it's like to be human to the point where it can articulate commentary on it? It makes no sense.

@Gargron So far, it’s been more or less a hobby of mine. The programmer and computer scientist in me is more fascinated by how it works internally and what I can make it do than in the output it generates; meanwhile, the creative spirit in me is more interested in what it synthesises out of the large amounts of digital art I’ve created, edited, composited, et cetera, and fed into it.

Point is, these two things don’t have to exist in opposition.

@Gargron I hate it when I realise that I was chatting with a support bot. Companies try to be more efficient this way, but they should just offer decent phone support!

@Gargron
> To be honest, I'm tired of hearing about AI.
> It never interested me.
> I don't respect people who use generative AI,
> and I despise companies that sell people's data to train it.
> Yes, people will lose jobs because of it,
> but the world won't be better for it.
> It's just that in such complex systems the consequences are rarely immediately apparent.

Good day!
But there are positive aspects to the development of large language models (AI) too. For example, translation quality has noticeably improved ;)

@Gargron I dislike that people use the term "AI" for LLMs. It's not at all general artificial intelligence, it's just a smarter box.

@crackhappy @Gargron Me, too! Someone referred to LLMs as “data-center-scale autocomplete” to which I would add “and summarization”. But that’s really what they are!

Doing autocomplete, summarization, remixing, etc, at a huge scale.

They *can* be very useful and productive, but they aren’t really “AI”.

But… we’ve lost that naming battle to the marketing / hype folks. 🙁

@Gargron Interesting phrasing: »I don't respect people who use generative AI.« That will be my favourite argument against AI, together with the political one that it's meant for, and already used for, surveillance and oppression.

And we talk about AI at the expense of all the other problems that need solving (or solutions that need promoting) in tech.

@Gargron @joncounts

@Gargron Oppression breeds rebellion

@Gargron The Current AI reminds me of FTTC (Fibre to the cabinet) broadband being sold as "Fibre" which is now causing confusion as they're rolling out actual Fibre to the premises.

So when we get the next generation of AI that's actually a bit more intelligent, are we going to get the whole "that AI thing we sold you last week, yeah, that's not really AI, but our new shiny thing really is AI this time"?

@Gargron "I feel the same way about the Apple Vision Pro. I'm going to struggle to keep a straight face if I see people using the Apple Vision Pro out in public."😂
@Gargron chatgpt: It's understandable that you have reservations about AI. It's a complex topic with various ethical concerns, including data privacy and job displacement. Balancing the benefits and risks of AI development is crucial for responsible innovation.
@Gargron I feel there are good use-cases, but it seems to be applied as if it’s a universal hammer and everything is a nail. At some point there will be a realisation, I bet, and disillusionment. Then the ones who applied it efficiently will continue raking in value while the ones with a firehose will quickly fall. Like dot-com.
@Gargron It's not even AI, that's just a marketing misnomer. Interesting algorithms, but no intelligence.
@Gargron I don’t respect people who use wrenches. They should turn the nuts with their hands.
@Gargron AI is just complicated computer programs. No matter what claims are made, these programs cannot become complex, creative or intelligent; they are purely derivative. They can analyze, predict(?) and regurgitate quickly, and will replace the work of many people, but the real horror will be when people in power give these AI programs the power to make autonomous decisions based on half-baked algorithms. And in the real world all algorithms are only ever half baked and facile.
@Gargron On a personal level I really enjoy generative AI. On a societal level I think it needs to be handled carefully and regulated so people aren't put out of jobs, or so that there's a safety net if your job is affected.
I've been trying out chatgpt and it's hilariously bad at certain tasks and gets a lot of details wrong in the domains that I know a lot about.
It's nice to have black-and-white hot takes, but reality is gray.

@Gargron I fully agree with one exception:

I’ve found the generative fill in Photoshop quite useful when I’ve needed to expand a photo (for example), such as turning a landscape into a portrait image or adding a bit of extra room where I’d want to crop something - things like that. That’s the only “AI” stuff I’d actually miss.

All of the rest of it is hell and can burn in a fire though. 🤣

@Gargron The root cause is #Capitalism!

Abolish it and you'll abolish its problems!!!

https://www.youtube.com/watch?v=7Pq-S557XQU

(YouTube: “Humans Are Becoming Horses” by CGP Grey)

@Gargron One of the most egregious consequences that AI brings is the unfathomable amount of energy it consumes and thus its contribution to climate change.

When we are trying to save our home, we're throwing more fuel into the fire...

@Gargron I agree with all of this
@Gargron The bit I can't figure out, by the time the companies are peddling AI doing everything... What are we even going to be getting out of bed, or staying alive for?