My background is in #AI and ML, even though I’m not working on it directly now.

One of the reasons is that I’m utterly disgusted by the kind of people who have taken over this field.

These are the same kind of sociopathic, cynical anarcho-capitalist cryptobros who screwed up the few good ideas behind blockchain and turned it into just another speculative financial instrument.

And you know why I hate them from the bottom of my heart, why I believe that they have negative added value for society?

Because the previous generations of computer engineers, those who gave us the digital computer, modern operating systems and the Internet, would NEVER get a boner thinking about how many jobs they would replace. They didn't keep repeating "this will make all typists and office clerks redundant so I can keep all the money for myself, it's so beautiful!"

No, their focus was just on building things that improved society, with humility.

Not on creating chatbots that consume as much energy as a country, and dreaming about how long it'd take them to kick people out of their jobs and screw up their lives.

And they don't even bother to think about how to prepare anyone for the transition, or how to build a sustainable post-employment world.

No, the only mission of these filthy motherfuckers is to make themselves and their shareholders disgustingly rich. Millions losing their jobs in the process is just a bit of collateral damage on the way to their wealth.

These are parasites screwing up one great idea after another just because they need a next-big-thing wagon to jump on and get rich. They are literally a reverse Midas, turning whatever they touch into shit.

People like this would belong in a psychiatric facility in a functioning society, but unfortunately in today's ultracapitalist world they are seen as technological gods - and they even inspire new generations of tech enthusiasts to be jerks.

https://gizmodo.com/ai-will-replace-recruiters-and-assistants-in-six-months-says-ceo-behind-chatgpt-rival-2000631871

AI Will Replace Recruiters and Assistants in Six Months, Says CEO Behind ChatGPT Rival

Perplexity's CEO lays out a near-future where entire white-collar roles are automated by a new generation of AI browsers, transforming a week's worth of work into a single prompt.

Gizmodo

@fabio agreed, but I think this super extra large language model bubble will burst sooner rather than later, exactly because it's so unsustainable and expensive.

@sergedroz the problem is that they don't even care, just like they didn't care when the crypto fever wore off. Their goal is not to get rich on something that changes the world. It's just to drive the value of their investment sky-high, then dump everything and run away to their early retirement when they feel that things have peaked.

@fabio I agree with that. Most of the big Tech orgs have lost their purpose.

@sergedroz @fabio Stealing everybody's data all the time and everywhere is part of that.
They lied about AGI and the emergent abilities of "foundation models", and they know it.
But they also know how they can keep faking it by constantly ingesting and retraining on text that references contemporary topics.

@gimulnautti @sergedroz @fabio "Quantum Computing" is going to be the next hustle, I can feel it.
They'll probably sell it as "harnessing the power of other universes" or some sci-fi shit, but it won't be anything like that.

@Lazarou @sergedroz @fabio For sure. Even if multiverse theories have been hammered quite hard on the theoretical front recently. There simply doesn't seem to be any need for them.

But in the reactionary "spiritual" sphere, quantum mysticism has already become the de facto "religion" in spaces that can't be reached with Christian nationalism.

@Lazarou @gimulnautti @sergedroz @fabio yep, I've seen "quantum AI" pitches really ramping up for several months now as LLM projects underdeliver and the grifters rock up with the "fix". Quantum computing is so specialised, and the reviewers so clueless, that they award the money out of even more FOMO than they had for "AI". I fear quantum will descend into some kind of Warhammer 40K-style tech mysticism.

@Lazarou @gimulnautti @sergedroz @fabio I saw the story some time ago about how they don't 100% know how the quantum computing stuff works and someone had a theory that it was stealing energy/compute/information from elsewhere in the multiverse.

Rather than "wow!" I went "oh, so we might be clear-cutting the multiverse too?"

@gooba42 Quantum computing, to be honest, makes even less sense as a bubble than LLMs.

I've gotten my hands dirty with it a couple of times and I've never managed to understand the hype around it.

The main problem I see with it, besides the objective difficulty of keeping a quantum system in a useful state without noise taking over (and most of the algorithms I've seen are indeed about noise reduction and error correction), is that there's no straightforward way of getting useful solutions out of it. As soon as you observe it, a quantum system collapses into a single state, so naively measuring the result is pretty much pointless.

There are a few algorithms that are genuinely useful (like Shor's algorithm for integer factorization), but they get useful results out of quantum computers only because of clever tricks: the final state is engineered (via the quantum Fourier transform) so that a measurement reveals the structure you actually care about (in Shor's case, the period of a modular exponentiation) with high probability. And the speedup is narrower than many think: factoring drops from sub-exponential classically to roughly polynomial in the number of digits, but that advantage doesn't carry over to arbitrary problems.
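To make that concrete, here is a minimal sketch (in Python, purely illustrative, not anyone's code from this thread) of the classical skeleton of Shor's algorithm. Everything below runs on an ordinary computer; the one step a quantum machine actually accelerates, order finding, is brute-forced here, so this shows the structure of the reduction, not the speedup. The function names and the toy moduli are made up for the example.

```python
# Minimal sketch of the classical skeleton of Shor's algorithm.
# The ONLY step a quantum computer accelerates is order finding
# (done there with QFT-based period estimation); here it is
# brute-forced, so this illustrates the reduction, not the speedup.
import math
import random

def order(a, n):
    """Smallest r > 0 with a^r = 1 (mod n); assumes gcd(a, n) == 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n):
    """Find a non-trivial factor of an odd composite n via order finding."""
    if n % 2 == 0:
        return 2
    while True:
        a = random.randrange(2, n)
        g = math.gcd(a, n)
        if g > 1:
            return g                 # lucky guess: a already shares a factor with n
        r = order(a, n)              # <-- the quantum subroutine in the real algorithm
        if r % 2:
            continue                 # need an even period
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue                 # trivial square root of 1, retry
        return math.gcd(y - 1, n)    # non-trivial factor (y^2 = 1, y != +-1 mod n)

print(shor_factor(15))      # 3 or 5
print(shor_factor(3233))    # 53 or 61 (toy RSA-style modulus)
```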

I mean, I see how someone could build some temporary hype on it with catchphrases like "my thing can now break RSA encryption within days instead of years!", but as the whole world is in the process of moving to post-quantum algorithms (lattice-based schemes and the like, precisely because the only thing we've figured out so far that quantum computers can reliably do better than classical ones is finding prime factors, not solving arbitrary problems), I hardly find that a compelling pitch.

But I'm pretty sure that these charlatans will find a way to sell the hype, even if it's barely there, once their LLM bubble runs out of steam and they have to find another wagon to jump on.

@Lazarou @gimulnautti @sergedroz

@sergedroz @fabio Given how much money they invest to keep the hype train going I have my doubts...

Pair that with the still-widespread tech illiteracy among politicians, and you get a bubble that isn't only inflated by investors (who gamble on being able to recoup their investments before the bubble bursts), but also by policymakers, who personally won't have much interaction with LLM tools beyond fancy product demos...

So, while I hope it bursts soon, politicians might keep it going...

@soulsource @sergedroz @fabio and that, to me, is the really depressing part. An app, for example, is going to fix the NHS (UK public health service)...

@soulsource @fabio Today I am what people would call a policy maker. But I come from the tech world. And I feel the policy world is much more eager to learn from the tech world than the other way round.

Just reiterating how dumb policymakers are is not actually going to help. What is going to help are statements that actually create context and understanding, and accepting that tech maybe also needs to understand a bit of policy and governance.

I keep hearing that tech is neutral, that these CEOs are bad, and that the stupid politicians don't understand a thing. I have hardly ever heard people working in tech ask what implications our work has.

Yes, it's big money that decides, but every engineer working on that stuff plays their part, and could maybe ask: what are the effects of this? Did we think of the wider implications? There are as many bad policy makers as there are bad engineers, and as many good policy makers as good engineers.

@sergedroz I can take my personal example.

My M.Sc. thesis was on AI applied to intrusion detection, and I wrote several papers on the topic too. Back in the day (we're talking 2009-2010) AI was still trying to re-emerge from its "expert systems" winter through academia. There was a ferment of ideas and a lot of genuinely good intentions to build models that helped folks solve real problems. Nobody who, like me, worked on AI applied to computer security (or speech recognition, mood analysis, climate forecast models etc.) ever remotely thought about the negative ethical implications our work might have in a distant future.

Fast-forward 7-8 years, and I started working on building the first large-scale models to detect things like partner fraud or data anomalies. I managed to deploy those models in products used by our agents, after sitting next to them for many days in their day-to-day work to understand their pain points, and making it very clear that those tools were supposed to augment their work rather than replace it: they were expected to worry more about visiting partners and talking to them than about checking duplicate pictures on search engines or the correctness of addresses. It was a success in terms of productivity, agents loved it, and it didn't result in any layoffs. To me, the ethical problem of AI felt at that point like a mostly solved one. I believed that, if AI was built in good faith, in order to augment rather than replace human skills, and by listening closely to the needs of all of its users instead of imposing solutionism from above, then it was possible to build fair models.

Fast-forward another couple of years, and I wrote a book about computer vision models and how to train them on cheap devices, including Raspberry Pis, using off-the-shelf cameras. I showed how to train models for motion detection, tracking and face recognition even on a low budget. The book sold quite well, and a few weeks later I got an interview request from a researcher in the field of ethical AI who was interviewing several technologists to understand their awareness of the ethical impact of their work. I talked excitedly to him about the use-cases of my AI platform, how it could run even on an RPi, how I had trimmed every CPU cycle out of the convolutional layers, and how I had built some general-purpose APIs around Tensorflow, but he wasn't much interested in that. Instead, he asked me how I would react if my software were used for racial profiling, mass surveillance or the processing of unauthorized police footage, and how I would prevent that from happening. To me those questions came literally out of the blue at the time (it was shortly before Timnit Gebru was fired from Google, and before the whole topic of conflicts between product and ethics teams in AI surfaced). I felt like a manufacturer of calculators being told out of the blue that his devices could also be used to calculate the trajectories of ballistic missiles. I was like "but I only worked on this as a hobby project, it runs on my RPis to turn appliances on and off depending on who walks into the room and give customized greetings… how could it ever hurt anyone?"

Heck, were we naive.

Of course I would give that interviewer very different answers if he were to interview me now.

All of this to make a simple point: I consider myself quite politically and socially active for the average engineer, I could figure out a lot of ways that AI could go wrong and deliberately tried to avoid those pitfalls when training and deploying models, and still, when working on AI, there were so many things that could go wrong which completely slipped my mind. And by the way, I didn't even contribute that much to the field - sure, I did some cool projects and deployed them in the real world, but it's not like I had a crucial impact on transformer architectures or convolutional neural networks.

Now imagine engineers who are less politically/socially active than me, probably smarter than me, and who made big contributions to the models deployed by the likes of OpenAI and Google. Many of them are still in the same state of naivety I was in a decade or so ago. Many are still laser-focused on the exciting geek side of their job, on building things at the edge of human capability, and fail to even see how the things they build can be misused - or maybe they see it, but feel those are acceptable prices to pay for the progress of humanity, or maybe they're more cynical and think they can just make enough money to jump ship and retire when AI comes after their own jobs. And I can tell you that there are also many of us who feel genuinely betrayed and cheated, some of us who really wanted to build robots that helped humanity and instead ended up with chatbots seized by MBAs who just want to get rid of all white-collar jobs and package AI into their existing streams of recurring revenue.

But bridges have to be built if you want to have impact. At the very least, any company working on AI should be compelled to have an AI ethics department. It should be their job to create those bridges between technologists and specialists in the sectors where AI is going to be deployed, in order to see all the possible pitfalls of those models. It should be their job to ensure that any technologist who works on AI is given regular training on ethics, just like anyone committing code to production is given regular training on security. And of course regulation must exist to enforce that with big powers come big responsibilities, and that businesses developing large-scale models used in anything that has an impact on large groups of people must be open to external scrutiny, and open up everything (model weights, training data, training code and unlimited API access) to external specialists from various fields to ensure that what those models return is fair and accurate.

@soulsource

@fabio @soulsource I think there are different things we have to separate:

1) You can't predict the future. Ask Einstein, who was very unhappy about the nuclear bomb. But if someone decides to take your invention and use it for bad things on purpose, that's one thing. There is a great Swiss movie about this: Der Erfinder (https://www.playsuisse.ch/de/show/999400 - use a VPN with an exit in CH).

2) Then there's negligence, or even ignorance. Take the algorithms that recommend things on social media. It's been shown over and over again that they tend to produce negative effects. My guess would be that no one ever asked themselves: what are the effects of this algorithm? Or: can we create an algorithm that actually promotes good things? I know this is all vague, but that's something that you, as an engineer, could think about, and maybe flag to your manager if you feel there are negative risks. I don't think engineers need to be able to make ethical decisions, but I do expect them to spot ethical dilemmas. I fully agree that engineers need to learn this at university. Finally, at least ETH Zurich has started doing so.

In my first reply I argued that big Tech companies have lost their purpose: I guess what I mean is that they only have one goal, shareholder value, which the neoliberal crowd reinforces. I disagree with this. There is no law that says companies must maximize shareholder value at any cost. Or maybe the question should be: what is maximum shareholder value? The maximum amount of money at the end of the fiscal year, or the maximum amount of money over a lifetime on a habitable planet where said shareholders and their kids live?

I think our society has forgotten what real misery looks like. That's why they vote in idiots who permit this nonsense, maybe in the hope of getting there themselves at some stage.

But back to the original issue: what you did with AI is exactly the kind of stuff that I think will survive. I don't think LLMs will. I don't think OpenAI has a winning strategy.

I know people who work on the kind of stuff you worked on, at universities or small companies, and they produce value.

And yes, there needs to be some sort of regulation. But I think regulating technical details is futile: what there really needs to be is values. You want some tech that solves a problem better than a human? Great. But if that tech kills the environment and has a lot of toxic side-effects (or externalised costs, in economics speak), then no. But why are the companies that produce this stuff so successful? Exactly because they are cheapest in the short term; the real costs are paid by someone else. So we, as a society, have to say no to this. The people who come up with this regulation are policy makers, so let's talk to them.

Pff, I feel this is a somewhat challenging topic for social media.

Der Erfinder - Film | Play Suisse

In 1916, the eccentric factory worker and tinkerer Jakob Nüssli builds a "vehicle with an artificial road". The invention is meant to help the farmers, but the pacifist Nüssli is overtaken by the reality of war. The story of the right idea at the wrong time.

Play Suisse
@sergedroz
You'd enjoy _Careless People_. The ignorance is clearly willful. And poor outcomes for users are irrelevant to the bottom line for a monopolistic sociopath.
@fabio @soulsource

@fabio
I like your comment a lot, Fabio.

There are so many extremely smart yet stupid people: people way smarter than me when it comes to solving engineering problems, and yet somehow they're either

- incredibly naive and/or willfully ignorant
- greedy and indifferent
- indoctrinated by some right-wing ideology

I personally know people, smart people, nice people, who genuinely can't see—in July 2025—how Elon Musk is a Nazi or how working for Facebook/Zuckerberg is not something to be proud of.

They only want to work on Computer Vision and would happily accept a job offer at Meta if they got it. I could literally gift them the book "Careless People" and they still wouldn't have any issues with working for Facebook, or cheering on the latest Tesla robot reveal, or watching Musk on Joe Rogan's podcast.

These people would build the biggest surveillance machine for Zuckerberg, or train some AI that would ultimately replace them and cost them their own job (see the current layoffs at the maker of the Candy Crush game), and yet they're still so naive as to not be able to see how AI, the AI they helped to build, will hurt lots of people if not the entire planet, by way of greenhouse gas emissions and water scarcity.

@sergedroz @fabio I did not mean to imply stupidity or willful ignorance. Also, sorry for generalizing.

What I wanted to voice (in an inappropriate tone - sorry again) were concerns that people who have little to no first-hand experience are dependent on the information presented to them by third parties. Experts, but also lobbyists...

I also think a distinction needs to be made between politicians actually working on legislation (-> careful preparation), and politicians holding speeches...

@sergedroz @fabio I am eager to see when we will get a prediction of how long it will take #AI to replace vapid #CEOs - they never do the actual work, and even a language model could generate a more sound business model, speech or interview than they do.

@fabio
No offense, but isn't the whole point of AI to replace humans with a computer everywhere? Which by definition gives every manager/CEO a boner?

The military loves it: let's get rid of those annoying humans who (at least some of them, I hope) have moral objections to killing an entire village/city/etc. AI doesn't have that 'problem'.

Or Copilot, whose goal is to replace 'expensive' software engineers with an M$ subscription. Software engineers aren't just typists, but a manager doesn't know that.

@FreePietje I've been in or around AI for nearly two decades now, and I can assure you that this shift in mindset is actually something quite new.

Until a couple of years ago AI was seen as a tool that could empower and augment human workers, not replace them. Just like a calculator is supposed to make multiplications faster for accountants, not replace them.

It was seen as a tool that could help doctors find tumors in X-ray scans, help customer service operators get a glimpse of the most common complaints in a huge pool of tickets, help scientists find correlations and patterns in huge datasets, help people write emails without grammar mistakes, help police officers sift through massive volumes of camera data to find footage of crimes being committed, and help programmers find security bugs in their code.

But until 2020 there wasn't a lot of talk about replacing instead of augmenting. And that's because the field was still mostly in the hands of academia (or of corporate employees with tight ties to academia).

“Let machines take care of repetitive error-prone tasks so humans can focus on what they’re good at” has been the mantra for a long time.

Then the MBA parasites sensed that the bubble they had invested in (Web 3.0 and crypto) was deflating, and they needed a new wagon to jump on, and they picked AI.

And suddenly people like @timnitGebru started to get fired, big companies started getting rid of their AI ethics teams, and OpenAI ditched all of its pretences of being a non-profit working on openness and fairness just to get to market before others did. R&D investments started being seen more as a burden than an asset, since priorities shifted from building the best, safest, fairest and most accurate models to just winning on a couple of benchmarks and eating competitors' market share before they ate yours. It's been a race to the bottom ever since.

But, as with all technological innovations, things don't have to be this way - just like some of the promises of blockchain didn't deserve to be wasted on the speculation games of a bunch of day-traders, and to basically become just another instrument in the toolbox of the very people it was supposed to get rid of after the 2009 financial crisis.

I wouldn't blame the technologies; I would blame the people, and the lack of self-preservation mechanisms to prevent whole technological fields from falling prey to this kind of abuser.

Things can work when you leave them up to scientists and engineers (of course they won't be perfect, but at least most of us are genuinely excited by building new things, and we have no real interest in screwing others over for our own sake). We just need to find a way to kick MBA graduates out of our industry for good, because those are nothing but tolerated sociopaths and criminals whose education amounted to how to evade taxes, cut jobs and get rich. And they couldn't care less whether the best way of achieving those goals is manufacturing AI models or cookies made of stone - building things is just a means for them to get rich.

They broke the toys for everyone and they are sucking all the lifeblood out of my industry. Engineers ought to start a proletarian revolt and seize the means of production (namely, the code that they write, the data that they collect and the models that they train) back out of their hands.

@fabio
I don't doubt your intentions for one second.
I do think many (?) were (at best) naive.

Having a separate "AI Ethics" department? WTAF? How can it not be integral to every engineer in the field? How can you not anticipate that department getting axed at some point?

Or having AI award ceremonies where there was/is a separate category: "AI for good".
Why doesn't that raise red flags?

Or people who, after having earned $millions over the years, all of a sudden realize it's an existential threat?

@FreePietje if engineers and scientists were in charge, then you probably wouldn't need AI ethics teams. Because most of us really couldn't care less about going to market fast with a product that just allows some jerk to have virtual sex with a bot, or allows some lazy students to skip their homework.

Unfortunately, AI is often built by product teams, which are usually run by business owners (Scrum and agile methodologies as a way of giving control back to engineers were just a huge illusion), whose only goal is to deliver profits to their managers, whose job is to deliver profits to their managers, all the way up the chain, until you get to people whose job is just to deliver value to their shareholders.

From their perspective, anything that slows down the development of features that give their product an edge over competitors is a liability. In other words, incentives are not aligned. So you need to create a team or a department that, by definition, has its incentives aligned in that direction, because the alternative is not feasible: engineers working on product are effectively hostages of people who don't share those incentives. Of course it's not a perfect solution, but it was a better one than the current state (no ethical safeguards at all).

@FreePietje I can bring a personal tale of how I (and many others) envisioned the development of AI products until a few years ago (enough time has passed since then so it’s probably not a trade secret anymore).

A few years ago I did some pioneering work for my employer to build models that supported agents in spotting partner fraud and flagging potential fake properties or anomalies in the data provided.

I did this job by regularly biking to the office where our partner services agents sat. I would talk to them a lot, spend a lot of time sitting next to them while they were doing their work, and try to understand which of their pain points could be easily automated (checking duplicate photos? checking the correctness of the pin on the map? cross-checking the financial data they provided against lists of known criminals or sanctioned individuals?).

Many were initially scared that we were there to build robots to take over their jobs. I worked to earn their trust, I tried to understand their actual pain points, and I gave them tools that could automate those so they could focus on what they were best at (picking up the phone or driving out to partners to talk to them).

As a result, productivity skyrocketed, not a single job was lost (at least until the pandemic), and those agents were enthusiastic about the AI that we had built for them.

If I were tasked with a similar job now, I would probably never get the same amount of freedom.

Engineers (and even engineering leaders) are no longer allowed to drive the vision and strategy of a product as much as they were until a few years ago. Nobody would allow me to spend time that I could be using to build new features for the business on going to a local office and trying to understand what people actually need. Most of us have just become numbers on a dashboard too (number of PRs, number of commits etc.).

If all of this happened, if people like me who have actual context about our industry can no longer control the direction of what we build, then the blame is not on us. It sits squarely with the business owners.

If I could build ethical AI from the ground up (and I’m just an engineering and data science dude), but someone who is paid millions of $ to drive the strategy of an industry leader can’t, then it’s those heads that I want to see roll.

@fabio
On first glance over your reply, I read Scum instead of Scrum 😂

Some kind of Freudian slip?

@fabio
What I don't get is that *outsourcing* ethics (to a separate department) didn't raise HUGE red flags. And was considered a worthwhile 'compromise'.
To me, that sounds like you're aware of the problem, but for some reason (that I don't understand) are *willing* to turn a blind eye to it.

> the current state (no ethical safeguards at all)

It has been known for DECADES that the optimal way for an AI to achieve a goal could result in wiping out all humans.
How could ethics not be integral?

@FreePietje it’s simply because of the paradigm that underlies the whole economic system. Something isn’t supposed to matter to you, as a business owner, until it affects your bottom line. Everything else is a cost.

Let me take the example of another field I’m quite familiar with (computer security).

For years we tried to convince business owners that writing secure code was something they had to train their engineers on. That it was something everyone who wrote or tested code was supposed to own, not just a separate IT security department.

Not much moved for years. For years I kept seeing static buffers of 1000 characters and unchecked strcpy and strcat everywhere, even in critical code that ran financial systems.

Once I talked to a CTO about the need to give employees more security training because they kept shipping vulnerable code, and got an answer along the lines of "oh, so what's the impact? Another of those funny exploits that just opens a calculator on my computer?"

For years they were more than happy to keep a barebones, overloaded IT security team checking all the code that was produced, and those people were often external contractors hired on fixed terms.

They weren't worried about these issues until they were. Until it affected their bottom line, until malicious actors started wiping their production databases, running ransomware campaigns, exposing the private data of customers and partners, and posing threats to national security.

Then suddenly, practices like mandatory CodeWarrior training, mandatory code reviews and automated security scans in all CI pipelines became widespread.

I see something similar happening with AI. Nobody bothers about ethics, just like nobody bothered about security, until shit hits the fan (and then everybody goes into sudden panic mode). Until then, there will always be people who keep seeing these things as liabilities rather than assets.

The best solution is, of course, to kick these folks out of our industry and seize our digital means of production back.

But that’s unlikely to happen any time soon.

So we’re left with a compromise that is awful, but it’s better than nothing.

Just like 15 years ago I would have accepted someone hiring external security contractors to occasionally review their codebase rather than having no security audits at all.

@fabio
> Because most of us really couldn’t care less ... allows some lazy students to skip their homework

How could you not care that young people are thus no longer taught to learn and (learn to) think for themselves?

Quite recently there was a post about an amazing teacher who gave their students the assignment to describe where and why ChatGPT was wrong.
Not only was the result amazing, they did so by doing the EXACT thing we want students to learn.
And think for themselves.

@fabio
So in order to make it sustainable, a way needs to be found to make money off of it.

The big problem with FLOSS is and always has been: how to make money with it.

So this is not a new problem.

What I see you mostly railing against (and I do agree with that) is that capitalism, for the last 4+ DECADES, has been only about shareholders.

Socialize the losses and privatize the profits.
Fuck other stakeholders. Fuck the planet. Fuck poor people.
Fuck everyone, except shareholders.

@fabio @timnitGebru @FreePietje We need to find a way to kick the MBAs out of society.

@jmax @fabio @timnitGebru
Maximizing profits, isn't that the essential part of capitalism?

The standard way to increase profits is to cut costs. What is a large part of those costs? People. That's why every 'restructuring' involves (massive) layoffs.

@FreePietje

Maximizing profits, isn’t that the essential part of capitalism?

Yes, but it all depends on who you frame as the beneficiary of those profits.

Even liberal fathers like Adam Smith and John Stuart Mill talked in clear terms of benefits to shareholders, stakeholders, workers and society in general.

The problem is with whoever whittled that list down to just the first group (shareholders), just because it's the easiest one to measure and the easiest to attach incentives to.

And those people have known names and surnames - Milton Friedman, Ronald Reagan and Margaret Thatcher.

The world we’re in now is just the natural outcome of that awful over-simplification done in the second half of the 1970s.

@jmax @timnitGebru

@fabio @timnitGebru @jmax
You do realize the names you gave were from around 1980, right?
So this has been the state of affairs for 4.5 DECADES.

I've been saying for at least 2 decades:
"There is NOTHING more important than next quarter's results"

as my (cynical) way to describe 'modern' capitalism.

John Oliver has a nice video about (the merger between McDonnell Douglas and) Boeing, which among other things made every employee focus on today's stock price.

@FreePietje @fabio @timnitGebru Only if you're very stupid. Unfortunately, these people are.

What you're describing is maximizing short term profits, as if there is no future beyond this quarter.

I can work with greedy assholes. Don't much like them, but I don't have to like everybody.

The bigger problem is STUPID greedy assholes.

@FreePietje @fabio @timnitGebru - To amplify and clarify:

If your primary goal in life is to accumulate money (not if your goal is to accumulate enough money for some other end - but just to somehow get a high score), I think you're foolish, or have some other problem that causes you to act foolishly.

But lots of people are foolish sometimes, me included, so.

But if that's your goal, then please do it with a planning horizon of your lifetime, and that of any other people you care about.

@fabio

The rot can be traced easily. Just look for the motto "move fast and break things".

That's not an engineer motto. That's a capitalist exploiter motto.

@timnitGebru @FreePietje

"Move fast and break things" is a
"bull in a china shop" motto.

@androcat @fabio @timnitGebru @FreePietje

@fabio @timnitGebru @FreePietje
It's not just jobs that are going. Small businesses are either being bought up if they are original in any way, or left to dry up from no longer being able to compete.

Like a water cycle with a few ever-growing oceans where evaporation no longer occurs, rain no longer falls on the mountains, streams & rivers dry up & crops & life die off from the thirst.

But this is inevitable in an economic system with no caps, no fiscal ceiling, on wealth & income.

@fabio @timnitGebru @FreePietje

This isn't a shift in mindset, it's the second step in a business plan. They do look very similar (see "enshittification").

Sometimes the second step is simply "?????" but the last step is profit.
Where profit may consist simply of cashing out before the crash.

Not to denigrate the field as such, or the people in it. But the leeches will attach themselves. And the grifters will invite them in.

@FreePietje @fabio I'm looking forward to when AI replaces software engineers and they all have to get a job at McDonald's making minimum wage. At least something good will have come from it.

@fabio @nixCraft

I completely agree with you.

My son just finished his degree in AI and CS. I've asked him to keep a lot of this nonsense in mind when he finds his first job. I hope he's been brought up well enough not to be happy making the world a worse place. And I hope this AI-bro bubble bursts soon so we can move on to proper AI research, not the nonsense currently being spouted by idiots who know nothing.

@fabio Personal usage: if I mean one or another of the various technologies, I use the appropriate term; LLM, expert system, etc.

I reserve the term AI for the ongoing financial market scam involving grossly over hyped LLMs.

@fabio - This is, of course, disrespectful to the 70 years of hard work by many people, but I can't change the fact that the term "AI" has been appropriated by the scumbags.

@jmax ok I didn’t want to open that can of worms, but now that you’re inviting me I can’t pull back 😆

I’ve been in this field long enough to see how it evolved over the past two decades.

My first job out of my Bachelor's was for a company that had "expert system" in its name. Its AI actually involved a lot of human language experts manually annotating terms and building semantic graphs.

My college textbook at the time contained only a small chapter on neural networks; everything else was about A*, decision trees, dynamic programming, symbolic models for first-order logic, minmax, Bayesian networks etc.

When tasked by our professor to build an AI that could play tic-tac-toe, I used a neural network trained on a few hundred games - and many were puzzled because I hadn't used the more established graph-exploration methods. Neural networks were seen as this fuzzy weird thing that only made sense for things like computer vision.
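(For anyone who hasn't met those "more established methods": below is a minimal sketch of minimax with alpha/beta pruning for tic-tac-toe, in Python. It's purely illustrative - the names and structure are mine, not anything from that coursework.)

```python
# Minimal sketch of the "established" approach: minimax with alpha/beta
# pruning for tic-tac-toe. Board is a tuple of 9 cells: 'X', 'O' or ' '.
# All names here (winner, minimax, ...) are illustrative only.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-2, beta=2):
    """Score the position from X's point of view (+1 win, -1 loss, 0 draw)
    and return (score, best_move). 'player' is the side to move."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1), None
    if ' ' not in board:
        return 0, None                      # board full: draw
    best_move = None
    for i, cell in enumerate(board):
        if cell != ' ':
            continue
        child = board[:i] + (player,) + board[i + 1:]
        score, _ = minimax(child, 'O' if player == 'X' else 'X', alpha, beta)
        if player == 'X' and score > alpha:
            alpha, best_move = score, i     # maximizer found a better line
        if player == 'O' and score < beta:
            beta, best_move = score, i      # minimizer found a better line
        if alpha >= beta:
            break                           # prune: opponent will avoid this branch
    return (alpha if player == 'X' else beta), best_move

# X to move on an empty board: perfect play from both sides is a draw (score 0).
print(minimax((' ',) * 9, 'X'))
```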

Fast-forward 10 years, and neural networks were everywhere. All the problems they had (biases, the need for careful dataset design, difficulties in inspecting the reasoning behind their predictions…) weren't solved by leveraging the other instruments in my textbook (or by building new ones). They were solved by brute force: by just building larger networks, adding more layers, and throwing more data at them. That's because our industry started rewarding those who find something that works and squeeze every single drop of juice out of it, rather than those who keep investing in other things too.

Fast-forward another 10 years, and a graduate in AI would probably be puzzled if I used minmax and alpha/beta pruning to solve a game instead of just throwing everything at a neural network - exactly the opposite of the situation in 2010.

And now the horizon has become even narrower.

Now we've apparently found out that it's not even worth investing in all the other neural network architectures, because we've already got attention-based LLMs that can solve a lot of things. So it's not a matter of investing in new instruments anymore, it's just a matter of translating every problem into something a text-based chatbot can solve. It's literally the story of the guy who has learned how to use a hammer and expects the whole world to be made of nails.

So I’m really sorry if sometimes I treat the terms AI/ML/LLMs as synonyms, but that’s unfortunately the current state of things. It’s not like other algorithms and solutions are getting much funding or attention…

@fabio Um, this is my third AI bubble (at least we got some cool machines out of the first one). I've worked in computer vision, text analysis, and nonlinear modeling.

And I agree with most of what you have to say.

But one of the tactics used by the scam artists is blurring the distinction between LLMs and everything else under the general heading "AI". I can't change that, but I can refuse to assist in blurring the distinction.

@fabio

And since they insist that their snake oil is "General AI", I roll with it and use "AI" when I'm referring to the scam, and only when I'm referring to the scam.

@fabio @jmax I have 1.5 Master's degrees in AI, and I am in this picture and I don't like it. Everything went to shit when throwing money at NNs started making economic sense.
Knowledge representation? Inference? Optimising anything? Nah. Let's just throw more HW and stolen data at the dumbest, most inefficient NN possible and make money instead of overthinking things.

The incredibly wasteful inefficiency of this solution is mind-boggling, but still the world thinks it's the best thing evar. 🤦

@fabio @jmax when all you have is a hammer, every problem looks like a thumb.
@fabio what could possibly go wrong
@fabio The best computer engineers should focus on destroying the AI he describes. Maybe they already are.

@fabio If only he could have had physical contact with another human being instead.

To hell with all these angry nerds out to avenge themselves.

@fabio cue Elon Musk, the greatest Nazi bastard who ever lived, 21st century.
How's Grok btw?
..Oh.. ..oh.

@fabio People's lives and experience are so much more than any AI can summarize adequately.

Landing a job shouldn't be a contest over who is most proficient at gaming the system, and plugging the trendiest keywords into their CVs.

When recruiting is reduced to algorithmic manipulation, employers are most likely not hiring the best candidate for the work. Putting decisions into the hands of AI is dehumanizing.

@fabio If we listened to this guy, we would notice that he is always wrong. He keeps predicting that more and more jobs will be taken over by GenAI within very short timeframes, and none of it is happening yet.

@mms it's not so much about whether their predictions are plausible, it's about the strong push behind them.

Every single day you hear executives repeating “AI will replace workers in fields X within Y months”.

It's not because it's going to happen (these models still have plenty of issues), but because enough people want it to happen.

It's those folks who have poured a lot of money into this hype and now want to see their returns, so they can retire early like all of their colleagues.

So the cycle can be summarized as:

  • Employers lay off people because they keep hearing that those jobs are going to be redundant soon, and since no employer wants to spend more money than their competitor it’s just a domino effect - as soon as one starts laying off everyone else follows suit. There’s absolutely nothing rational in this, it’s just primitive herd behaviour.

  • Those businesses then buy expensive licenses from AI companies with the promise that they will do the job of the workforce that they just let go.

  • They realize that AI really can't do much by itself without an experienced person giving context to it, asking the right questions and monitoring errors, so some of them go back to hiring after a while (from a market saturated by wave after wave of layoffs, so even at a higher price).

  • AI companies come back after a while with “hey, now we’ve got a better version of our model that can actually do their job!” and upsell a more expensive product to their customers.

  • Go back to point 1.

@fabio Personally I see it as a circle now. Some asshole makes bold statements -> Wall Street likes it -> Wall Street throws money at said asshole -> this never comes to fruition, but no one checks -> asshole makes another claim -> REPEAT

The problem is not that they build a bad/evil product. The problem is that they are not building a product; they are selling a vision that other MBAs (or whoever is on the Street now) will buy.

We agree that the worst people took over, but it seems we disagree on what they are actually doing between creating a company and "increasing shareholder value". They are employed by those shareholders; their only job is to make money for shareholders. Note that the bubble is self-propelling: the same people who are investing are the ones who get all the benefits. There will be a point where new investors won't get their "value increase", as with any Ponzi scheme. Right now they just need another round of investment. Do they need to believe in any of it?

I don't even think there has been a single true layoff because of GenAI - it was either a publicity stunt or some poor moron being fooled by such a stunt.

@fabio: I feel like the good ideas behind blockchain technology were already pursued by Git, and that what makes blockchains distinct was, from the very start, ideologically driven by Austrian School cultists.

That they were taken at all seriously sowed the seeds for this cynical LLM push.

@fabio the usual arrogance you find in kids who graduate from elite schools; not surprised.