VCs are starting to partner with private equity to buy up call centers, accounting firms and other "mature companies" to replace their operations with AI

https://lemmy.world/post/30229802

LOL. If you have to buy your customers to get them to use your product, maybe you aren’t offering a good product to begin with.
Plenty of good, non-AI technologies out there that businesses are just slow or just don’t have the budget to adopt.
That stood out to me too. This is effectively the investor class coercing use of AI, rather than how tech has worked in the past, driven by ground-up adoption.
That’s not what this is. They find profitable businesses, replace employees with AI, and pocket the spread.
They’re rent-seeking douchebags who don’t add value to shit. If there was ever an advertisement for full-on vodka-and-cigarettes-for-breakfast bolshevism, it’s these assholes.

It only works until the inevitable costs of the problems accumulated through AI use exceed the savings from cutting manpower. AI error rates are excessively high and roughly uniformly distributed: the most damaging errors are no less likely than little mistakes, unlike with humans, who can learn to pay attention and avoid mistakes in critical things. The result is customer losses and rising costs of correcting the errors.

(Just imagine customers doing things that severely damage their equipment because they followed the AI customer support line’s advice, and the costs accumulating as said customers take the company whose support line gave that advice to court for damages and win those rulings, and in turn the companies that outsourced customer support to that “call center supplier” take it to court. It gets even worse for accounting: the fines for submitting incorrect documentation to the IRS, for example, can get pretty nasty.)

I expect we’ll see something similar to what happened to many long-established store chains: at one point they got managers who started cutting costs by getting rid of long-time store employees and replacing them with an ever-rotating revolving door of short-term, cheap-as-possible sellers, making the store experience inferior to just buying from the Internet, and a few years later those chains were going bankrupt.

These venture capitalists’ grift only works as long as they sell the businesses before the side effects of replacing people with language generators have fully filtered through into revenue falls, court judgements for damages and tax authority fines. It’s those buying such businesses (I bet the venture capitalists are going to try and sell them to institutional investors) that will end up with something that’s leaking customers, having to pay massive compensations and having to hire back people to fix the consequences of AI errors, essentially reverting what the venture capitalists did and spending even more money to clean up the trail of problems caused by the excessive AI use.

They’re VCs, they’re not here for the long run: they’ll replace the employees with AI, make record profits for a quarter, and sell their shares and leave before problems make themselves too noticeable to ignore. They don’t care about these companies, and especially not about the people working there
And when the economy goes boom, they will ask their friends in the White House for a bailout
Better yet, they buy a company, take a loan out against the company, pocket the cash and then leave the struggling company with the extra debt. When it dies they leave the scraps to be sold and employees and others owed money are left out to dry.

There is another major reason to do it. Businesses are often in multi year contracts with call center solutions, and a lot of call center solutions have technical integrations with a business’ internal tooling.

Swapping out a solution requires time and effort for a lot of businesses. If you’re selling a business on an entirely new vendor, you have to have a sales team hunting for businesses that are at a contract renewal period, you have to lure them with professional services to help with implementation, etc.

lol accounting….

How easy will it be to fool the AI into getting the company in legal trouble? Oh well.
Some would call it effortless, even.
NYC’s AI chatbot was caught telling businesses to break the law. The city isn’t taking it down | AP News - apnews.com/…/new-york-city-chatbot-misinformation…

An artificial intelligence-powered chatbot meant to help small business owners in New York City has come under fire for dispensing bizarre advice that misstates local policies and advises companies to violate the law. Mayor Eric Adams acknowledged Tuesday that its answers were “wrong in some areas,” but the chatbot powered by Microsoft remains online. The company says it is working with city employees to improve the service. The chatbot has made false suggestions such as it being OK for restaurants to serve cheese nibbled on by rodents. Experts say the buggy bot shows the dangers of embracing new AI technology without proper guardrails.

The idea of AI accounting is so fucking funny to me. The problem is right in the name. They account for stuff. Accountants account for where stuff came from and where stuff went.

Machine learning algorithms are black boxes that can’t show their work. They can absolutely do things like detect fraud and waste by detecting abnormalities in the data, but they absolutely can’t do things like prove an absence of fraud and waste.
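As a toy sketch of what “detecting abnormalities” means here (all data below is made up), a simple outlier check can surface suspicious transactions, but a clean result says nothing about fraud that stays inside the normal range:

```python
from statistics import mean, stdev

# Made-up transaction amounts; one is wildly out of line.
transactions = [120.0, 95.0, 110.0, 130.0, 105.0, 9_800.0]

mu = mean(transactions)
sigma = stdev(transactions)

# Flag anything more than 2 sample standard deviations from the mean.
flagged = [t for t in transactions if abs(t - mu) > 2 * sigma]
print(flagged)  # only the 9800.0 outlier gets flagged
```

A detector like this only shows that the flagged entries look odd; it can never prove the unflagged ones are honest, which is exactly the part an auditor has to sign off on.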

For usage like that you’d wire an LLM into a tool use workflow with whatever accounting software you have. The LLM would make queries to the rigid, non-hallucinating accounting system.

I still don’t think it would be anywhere close to a good idea, because you’d need a lot of safeguards, and if anything slips through, your accounting is fucked and you’ll have some unpleasant meetings with the local equivalent of the IRS.
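As a minimal sketch of that split (the `get_balance` tool, the ledger contents and every name here are made up, not any real accounting API): the LLM’s only job is to emit a structured tool call, and every number comes from the deterministic backend, never from the model.

```python
import json

# Toy in-memory stand-in for the rigid, non-hallucinating accounting system.
LEDGER = {
    "accounts_receivable": 12_500.00,
    "accounts_payable": 4_200.00,
}

def run_tool_call(call_json: str) -> float:
    """Execute a validated LLM tool call against the ledger, or fail loudly."""
    call = json.loads(call_json)
    if call.get("tool") != "get_balance":
        raise ValueError(f"unknown tool: {call.get('tool')!r}")
    account = call["account"]
    if account not in LEDGER:
        # Reject accounts the LLM invented instead of guessing a value.
        raise KeyError(f"no such account: {account!r}")
    return LEDGER[account]

# Pretend this string came back from the LLM's tool-use turn.
llm_output = '{"tool": "get_balance", "account": "accounts_receivable"}'
print(run_tool_call(llm_output))  # 12500.0, straight from the ledger
```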

The LLM would make queries to the rigid, non-hallucinating accounting system.

ERP systems already do that, just not using AI.

But ERP is not a cool buzzword, hence it can fuck off; we’re in 2025.

The LLM would make queries to the rigid, non-hallucinating accounting system.

And then sometimes adds a hallucination before returning an answer, particularly when it encounters anything it wasn’t trained on, like important moments when business leaders should be taking a closer look.

There’s not enough popcorn in the world for the shitshow that is coming.

You’re misunderstanding tool use: the LLM only requests that something be done, and the actual system returns the result. You can also have it summarize the result, but hallucinations in that workload are remarkably low (though without tuning it can drop important information from the response).

The place where it can hallucinate is generating steps for your natural language query, or the entry stage. That’s why you need to safeguard like your ass depends on it. (Which it does, if your boss is stupid enough)
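One shape such a safeguard could take (the allow-list and operation names are hypothetical, not a real API): anything the LLM generates outside a fixed set of read-only operations gets refused before it ever touches the books.

```python
# Hypothetical allow-list of read-only operations the LLM may request.
ALLOWED_OPS = {"get_balance", "list_transactions", "sum_by_category"}

def validate_plan(plan: list[dict]) -> list[dict]:
    """Reject any LLM-generated step that is not on the allow-list."""
    for step in plan:
        op = step.get("op")
        if op not in ALLOWED_OPS:
            raise PermissionError(f"blocked LLM-generated step: {op!r}")
    return plan

# A plan the LLM might emit: the write operation is refused, not executed.
plan = [{"op": "get_balance", "account": "cash"},
        {"op": "delete_entry", "id": 42}]
try:
    validate_plan(plan)
except PermissionError as err:
    print(err)  # blocked LLM-generated step: 'delete_entry'
```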

I’m quite aware that it’s less likely to, yessir, technically hallucinate in these cases.

But that doesn’t address the core issue: the query was written by the LLM, without expert oversight, which still leads to situations that are effectively hallucinations.

Technically, it is returning a “correct” direct answer to a question that no rational actor would ever have asked.

The meaningless, correct-looking and wrong result is still just going to be called a hallucination by common folks.

For common usage, it’s important not to promise end users that these scenarios are free of hallucination.

You and I understand that technically they’re not getting back a hallucination, just an answer to a bad question.

But for the end user to understand how to use the tool safely, they still need to know that a meaningless, correct-looking and wrong answer is still possible (and today, still also likely).

LLMs often use bizarre “reasoning” to come up with their responses. And if asked to explain those responses, they then use equally bizarre “reasoning.” That’s because the explanation is just another post-hoc response.

Unless explainability is built in, it is impossible to validate an LLM.

This is because autoregressive LLMs work on high-level “tokens”. There are LLM experiments that can access byte-level information, which can correctly answer such questions.

Also, they don’t want to support you, omegalul. Do you really think call centers are hired to give a fuck about you? This is intentional.

I don’t think that’s the full explanation though, because there are examples of models that will correctly spell out the word first (ie, it knows the component letter tokens) and still miscount the letters after doing so.

No, this literally is the explanation. The model understands the concept of “Strawberry”: it can output it (and that itself is very complicated) in English as Strawberry, in Persian as توت فرنگی, and so on.

But the model does not understand how many Rs exist in Strawberry or how many ت exist in توت فرنگی

I’m talking about models printing out the component letters first not just printing out the full word. As in “S - T - R - A - W - B - E - R - R - Y” then getting the answer wrong. You’re absolutely right that it reads in words at a time encoded to vectors, but if it’s holding a relationship from that coding to the component spelling, which it seems it must be given it is outputting the letters individually, then something else is wrong. I’m not saying all models fail this way, and I’m sure many fail in exactly the way you describe, but I have seen this failure mode and in that case an alternate explanation would be necessary.

The model ISN’T outputting the letters individually; byte-level models (as I mentioned) can do that, token-based transformers do not.

The model output for “Strawberry” is more like one of these tokenizations:

<S-T-R><A-W-B><E-R-R-Y>

<S-T-R-A-W-B><E-R-R-Y>

<S-T-R-A-W-B-E-R-R-Y>

Tokens can be a letter, part of a word, any single lexeme, any word, or even multiple words (“let be”)
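A toy illustration of why that breaks letter counting (not a real tokenizer, just a made-up two-token vocabulary): the model only ever sees token IDs, while counting the r’s requires the raw character string the IDs hide.

```python
# Made-up subword vocabulary: "strawberry" splits into two tokens.
VOCAB = {"straw": 101, "berry": 102}

def tokenize(word: str) -> list[int]:
    """Greedy left-to-right tokenization into IDs from the toy vocabulary."""
    ids, rest = [], word
    while rest:
        for piece, token_id in VOCAB.items():
            if rest.startswith(piece):
                ids.append(token_id)
                rest = rest[len(piece):]
                break
        else:
            raise ValueError(f"no token covers {rest!r}")
    return ids

print(tokenize("strawberry"))    # [101, 102] - all the model "sees"
print("strawberry".count("r"))   # 3 - needs the raw string, not the IDs
```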

Hey boss. Think they’re using chatgpt for that?

I am so glad I got out of IT before AI hit. I don’t know how I would have handled customer calls asking why our chat is telling them their shit works when it doesn’t or to cover their computer in cooking oils or whatever.

And only after they banged their head against the AI for two hours and are already pissed will they reach someone. No thanks.

Thank god I can troubleshoot on my own.

When VC and PE call a company or industry “mature” it means they don’t see increasing revenue, only something to be sucked dry and sold for parts. To them, consistent revenue is worthless; it must be skyrocketing or nothing. If you want to see this in action right now, look at what Broadcom is doing to VMware. They also saw VMware as a “mature company”.
Fuck Broadcom. We’re still dealing with that bullshit, as there aren’t a lot of viable alternatives at the enterprise scale.
Broadcom management deserve gulag

When VC and PE call a company or industry “mature” it means

It means they see a hog ready to be slaughtered.

“What if we threw a ton of money after the absolute shit ton of money we threw away?”
Could the Big Four be in danger?

They have been for a while. Early adopter communities like the fediverse used to argue about the good and harm done by the big four.

For about the last five years, I haven’t heard an early adopter defend the big four.

I saw/heard the same things said about, for example, Sears, back when it was well known that Sears was too big and successful to fail.

Doesn’t this seem a little “forced”? This just seems like implementing AI wherever possible, regardless of demand.
Yes, that’s what everyone has been doing since it became a thing.
So like >99% of other AI implementations?
Enshittification intensifies
The future is bright! /s
So bright we had to remove the lampshade!
Looks like the Oligarchs are serious about crashing the economy.
Seems like they may be hurting themselves in the long run, I hope it fails miserably
Sure. But in the meantime, calls will get worse.
True innovation in the area of making existence even more miserable, as if using phones for support wasn’t bad enough on its own already.
Just tried calling an appliance service; it fucking told me that customer service was now all AI, no human. I fucking hung up.
No no. Don’t just hang up. Tell us who it was so we can ALL avoid buying their products.
Sears appliance repair.
…that only raises MORE questions!!! Where the hell did you even FIND a Sears in 2025??? I thought they went out of business around the same time Toys R Us did. Like, 10 years ago.
Me too, but I called for appliance repair and it redirected me to the local Sears repair. searshomeservices.com/…/refrigerator-repair-servi…
Expert Refrigerator Repair Services | Sears Home Services

That deters people from using call centers, which saves the firm money.
They don’t care about the long run.
Yep, just gut one business after another for the quarterly returns. Same logic as the thieves stripping copper from street lights, just at a bigger scale
When they bought a firm I worked at, their goal was to asset-strip the pension fund. Luckily they lost a big court case over that and were forced to repay their ill-gotten gains, though we were still worse off than we would have been because of the legal fees.
call centers got worse after outsourcing them overseas and we still have them.
“I experienced imperfect health, had herpes. So don’t complain about your cancer.”
uh, you completely missed the point. the point is we could very well be stuck with this shit because it’ll save businesses money. they don’t care if it’s worse.
People with money will always find a way to run away from consequences.
Odds are there will be no other options left for us and we’ll have to use them whether we like it or not.