The AI boom is screwing over Gen Z | ChatGPT is commandeering the mundane tasks that young employees have relied on to advance their careers.

https://lemmy.world/post/1632395


This is not going to turn out well for a lot of people. Soon human beings will be obsolete in the name of AI
could you please elaborate? do you mean in the workplace?
Humans Are Becoming Horses (YouTube)

So what would that mean for the company itself long-term? If they’re not training up their employees, and most of the entry level is replaced by text generator work, then it seems like there would be a hole as executives and managers move out of the company.

It seems like it would be a recipe for the company to implode after a few years/decades, assuming that the managerial/executive positions aren’t replaced also.

What are these decades? Is that something longer than next quarter?
we are going to hold the month open a few more days

there would be a hole as executives and managers move out of the company.

And why would those executives and managers care about that? They just need to time their departures to be early enough that those holes don’t impact the share prices. Welcome to modern capitalism, where the C-suite’s only goal is to deploy their golden parachute while the company still has enough cash left over to pay them.

Yeah, it should be obvious by now, after three decades of 1980s-MBA-style corporate management (and a Financial Crash that happened exactly along those lines), that “the bonus comes now, the problems come after I’ve moved on” measures will always get a go-ahead.
That sounds like someone else’s problem.

It’s a Tragedy Of The Commons situation: each market actor expects to get the benefits of automating away entry level jobs and expects it’s going to be somebody else who keeps on training people through their junior career years.

Since all market actors have those expectations, and even those who don’t are pushed by market pressure to do the same (as paying for junior positions makes them less competitive than those who automate that work), the tragedy part will eventually ensue.

Why would you want to train people to do it wrong? If you had to train someone tomorrow would you show them the email client or give them a cart and have them deliver memos for a week?

Right now we have handed over some more basic tasks to machines. Train the next generation to take those tasks being automated as a given.

It’s not the tasks that matter, it’s the understanding of the basics, the implications of certain choices and the real life experience in things like “how long I thought it would take vs how long it actually took” that comes with doing certain things from start to end.

Some stuff can’t be learned theoretically, it has to be learnt as painful real life lessons.

So far the boundary between what AI can successfully do and what it can’t seems ill-defined, and sadly you can’t really teach people starting past that point: it’s not even a single point, and the understanding at and beyond it is built on knowing how to use simpler building blocks and what their implications are.

We have this thing called school

You clearly never worked in an expert knowledge area.

In any sufficiently complex knowledge domain there are elements you can only ever learn from doing it for real, with real requirements and real users.

With my career spanning 4 countries, I have yet to see somebody straight out of uni who could just drop in and start working at mid-level, and that includes the truly gifted types who did that stuff at home for fun.

Engineer for 15 years but go ahead and try patronizing me again or you can read what I wrote and respond to it, not what you wish I wrote. Guess you didn’t learn what a strawman was. Maybe should have worked in 5 countries.

Amazing.

How many junior professionals have you hired (or at least interviewed as domain expert) and how many have you led in your career?!

I’ll refrain from pulling rank here (I could, but having lots of experience and professional seniority doesn’t mean I know everything and besides, let’s keep it serious). So I’m just wondering what kind of engineering area you work in (if it’s not too much to ask) and what in your career has led you to believe that formal education is capable of bridging any training gap that might form if the junior-professional stage disappears?

In my professional area, software development, all I’ve seen so far is that there are elements of experience which formal education won’t teach and my own experience with professional education (training courses) is that they provide you with knowledge, maybe a few techniques, but not professional insight on things like choosing which elements are best for which situation.

This is not to say that education has no value (in fact, I believe it’s the opposite: even the seemingly “too theoretical to be useful” can very much turn out to be essential in solving something highly practical: for example, I’ve used immensely obscure knowledge of microprocessor architectures in the design of high-performance distributed software systems for investment banks, which was pretty unexpected when I learned that stuff in an EE degree). My point is that things such as “scoping a job”, “selecting the better tool for the job” and even estimating the risk and acceptability of using certain practices for certain parts of a job aren’t at all taught in formal education, and I can’t really see the pathway in the Business Process (the expression in a Requirements Analysis sense, rather than saying it’s all a business) of Education which will result in both formalizing the teaching of such things and in attracting those who can teach it with knowledge.

Maybe the Education System can find a way of doing it, but we can hardly bet that it will, or that it will do so before any problems from an AI-induced junior-level training gap materialise (i.e. there won’t be any pressure for it before things are blowing up because of a lack of mid-level and above professionals, by which time there will be at least a decade of problems already in the pipeline).

I’ve actually mentored several junior and mid-level developers. Mainly I made them aware of potential pitfalls they couldn’t see (often considerations outside the nitty-gritty details of programming that nonetheless had massive impact on what needed to be programmed), of additional implications of certain choices which they weren’t at all aware of, and pointed out the judgment flaws that led them to dead-ends. But they still need real situations with real consequences to internalise, at an emotional level, the value of certain practices that at first sight seem counterproductive; otherwise they either don’t follow them unless forced to (and we need programmers, not code monkeys who need constant surveillance) or follow them as a mindless habit, hence also when not appropriate.

Maybe what you think of as “junior” is a code monkey, which is what I think of as “people who shouldn’t even be in the profession”, so you’re picturing the kind of teaching that’s the transmission of “do it like this” recipes that a typical code monkey nowadays finds via Google, whilst I’m picturing developers to whom you can say “here’s a small problem that’s part of a big thing, come up with a way to solve it”. That set of practices is much harder to teach even in practical classes in an educational environment, because it’s a synthetic environment where projects have simulated needs and the consequences of one’s mistakes are much lower.

PS: Mind you, you did get me thinking about how we could teach this stuff in a formal educational context, but I really don’t have an answer for that, as even one-to-one mentoring is limited if you’re not dealing with real projects, with real-world users (and their real-world needs and demands) and implications, and real lifecycles (which are measured in years, not “one semester”). I mean, you can have learning placements in real companies, but that’s just working at a junior level with a different job title and without paying people a salary.

Maybe 6 countries and I would be impressed.

And?

If it makes you feel better, the alternative can be much worse:

People are promoted to their level of incompetence. Sure she is a terrible manager but she was the best at sales and is most senior. Let’s have her check to make sure everyone filled out expense reports instead of selling.

You don’t get the knowledge sharing that comes from people moving around. The rival spent a decade of painful trial and error to settle on a new approach, but you have no idea so you are going to reinvent this wheel.

People who would do well on open-ended creative tasks never get the chance, because they failed to rise above repetitive procedural tasks. Getting started in the mailroom sounds romantic, but maybe it’s not the best place to learn tax law.

The tech and corporate and general operational knowledge drifts further and further away from the rest of the industry. Eventually everyone is on ancient IT systems that sap (yes pun intended) efficiency. Parts and software break that it is hard to replace. And eventually the very systems that were meant to make things easier become burdens.

For us humans there really is no alternative to work and thinking is the hardest work of all. You need to consistently reevaluate what the situation calls for and any kinda rigid rule system of promotion and internal training won’t perform as well.

I think those in charge often don’t care. A lot of them don’t actually have any incentive for long term performance. They just need a short/medium term stock performance and later they can sell. Heck, they’ll even get cash bonuses based solely on short term performance.

Even the owners are often hoping to just survive until some bigger company buys their business.

And when the company does explode… They’ll just declare bankruptcy and later make a new company. The kinds of people who created companies rarely do it just once. They do it over and over, somehow managing to convince investors every time.

Bullshit. Learn how to train new hires to do useful work instead of mundane bloat.
100% if an AI can do the job just as well (or better) then there’s no reason we should be making a person do it.

Part of the problem with AI is that it requires significant skill to understand where AI goes wrong.

As a basic example, get a language model like ChatGPT to edit writing. It can go very wrong, removing the wrong words, changing the tone, and making mistakes that an unlearned person does not understand.

This sets up a dangerous scenario where, to diagnose the results, you need to already have a deep understanding. This is in contrast to non-AI language checkers that are simpler to understand.

Moreover as you can imagine the danger is that the people who are making decisions about hiring and restructuring may not understand this issue.

The good news is this means many of the jobs AI is “taking” will probably come back when people realize it isn’t actually as good as the hype implied.

It’s just that I fear that realisation may not filter down.

You honestly see it a lot in industry. Companies pay $$$ for things that don’t really produce results. Or what they consider to be “results” changes.

There are plenty of examples of lowering standards and lowering quality in virtually every industry. The idea that people will realise the trap of AI and reverse is not something I’m enthusiastic about.

In many ways AI is like pseudoscience. It’s a black box: machine learning doesn’t tell you “why” it works. ChatGPT is just statistical next-token prediction over a language model.

So the claim that “good science” prevails is patently false. We live in the era of progressive scientific education and yet everywhere we go there is distrust in science, scientific method, critical thinking, etc.

Do people really think that the average Joe is going to “wake up” to the limitations of AI? I fear not.

Not quite. It’s more that a job that once had 5-10 people and perhaps an “expert” supervisor will just be whittled down to the expert. Similarly, factories used to employ hundreds and a handful of supervisors to produce a widget. Now, they can employ a couple of supervisors and a handful of robot technicians to produce more widgets.
The problem is, where do those experts come from? Expertise is earned through experience, and if all the entry-level jobs go away then eventually you’ll run out of experts.

Education. If education was free this wouldn’t be a problem, you could take a few more years at university to gain that experience instead of working in a junior role.

This is the problem with capitalism, if you take too much without giving back, eventually there’s nothing left to take.

You don’t get experts from education. You get experts from job experience (after education).

You definitely don’t get experts from unemployed people, or from people working to the bone doing menial labor for minimum wage.

Education is a broad term; it could include apprenticeships where you do get real work experience. And education would have to change a lot in all areas. The point is, the government can support people to gain that experience; the problem is that right now it isn’t. It’s common to exit even just a bachelor’s degree with crippling amounts of debt.

And it’s viewed more positively in society to have a bullshit BS or MS than a (useful) trade degree
I wasn’t commenting on what type of education is better or worse than another. The point is that we need to support people through education.
No, companies will accept, and already are accepting, reduced-quality outputs in exchange for a 90+% reduction in costs to get that task done.

And AI is not always the best solution. One of my tasks at my job is to respond to website reviews. There is a button I can push that will generate an AI response. I’ve tested it. It works… but it’s not personal. My responses directly address things reviewers say, especially if they have issues. The AI’s responses are things like, “thanks for your five-star review! We really appreciate it, blah blah blah.” Like a full paragraph of boilerplate bullshit that never feels like the review is addressed.

You would think responding to reviews properly would be a very basic function an AI could do as well as a human, but at present, no way.

This assumes that your company doesn’t decide the AI responses are good enough in exchange for the cost savings of removing a person from the role, and that they don’t improve in a subsequent update.
True, although my company emphasizes human contact with customers. We really go out of our way with tech support and such. That said, I hate responding to reviews. I kind of wish it was good enough to just press the ‘respond to review with AI’ button.

This.

In accounting, 10 years ago, a huge part of the job was categorising bank transactions according to their description.

Now AI can kinda do it, but even providers that would have many billions of transactions to use as training data have a very high error rate.

It’s very difficult for a junior to look at the output and identify which ones are likely to be incorrect.
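The categorisation task described above can be sketched as a toy rule-based system, the kind of deterministic approach that predates the AI tools being discussed (this is purely illustrative; the keywords and category names are made up, not any real provider’s rules):

```python
# Toy rule-based transaction categoriser (illustrative only).
# Keyword -> category mappings are hypothetical examples.
RULES = {
    "uber": "Travel",
    "aws": "Software & Hosting",
    "woolworths": "Groceries",
}

def categorise(description: str) -> str:
    """Return the first matching category, or flag for human review."""
    desc = description.lower()
    for keyword, category in RULES.items():
        if keyword in desc:
            return category
    return "Uncategorised"  # explicit flag, so errors are visible

print(categorise("UBER *TRIP 1234"))       # Travel
print(categorise("Transfer to J. Smith"))  # Uncategorised
```

The point of contrast: a rule system like this fails loudly (everything unknown lands in “Uncategorised”), whereas an AI classifier confidently assigns a plausible-looking category, which is exactly what makes its errors hard for a junior to spot.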

The problem is really going to be in the number of jobs that are left with 40hrs of work to do.

They don’t want to train new hires to begin with. A lot of work that new hires relied on to get a foothold on a job is bloat and chores that nobody wants to do. Because they aren’t trusted to take on more responsibility than that yet.

Arguably whole industries exist around work that isn’t strictly necessary. Does anyone feel like telemarketing is work that is truly necessary for society? But it provides employment to a lot of people. There’s much that will need to change for us to dismiss these roles entirely, but people need to eat every day.

Indeed: at least in knowledge-based industries, everybody starts by working with a level of responsibility where the natural mistakes a learning person makes have little impact.

One of my interns read the wrong voltage and it took me ten minutes to find his mistake. Ten minutes with me and multiple other senior engineers standing around.

I congratulated him, and damn it, I meant it. This was the best possible mistake for him to make. Everyone saw him do it, he gets to know he held everything up, and he has to just own it and move on.

The “not willing to train” thing is one of the biggest problems IMO. But also not a new one. It’s rampant in my field of software dev.

Most people coming out of university aren’t very qualified. Most have no understanding of how to actually program real world software, because they’ve only ever done university classes where their environments are usually nice and easy (possibly already setup), projects are super tiny, they can actually read all the code in the project (you cannot do that in real projects – there’s far too much code), and usually problems are kept minimal with no red herrings, unclear legacy code, etc.

Needless to say, most new grads just aren’t that good at programming in a real project. Everyone in the field knows this. As a result, many companies don’t hire new grads. Their advertised “entry level” position is actually more of a mid level position because they don’t want to deal with this painful training period (which takes a lot of their senior devs time!). But it ends up making the field painful to enter. Reddit would constantly have threads from people lamenting that the field must be dying and every time it’s some new grad or junior. IMO it’s because they face this extra barrier. By comparison, senior devs will get daily emails from recruiters asking if they want a job.

It’s very unsustainable.

The fucked up part isn’t that AI work is replacing human work, it’s that we’re at a place as a society where this is a problem.

More automation and less humans working should be a good thing, not something to fear.

But that would require some mechanism for redistributing wealth and taking care if those who choose not to work, and everyone knows that’s communism.

So much this. The way headlines like this frame the situation is so ass-backwards it makes my brain hurt. In any sane world, we’d be celebrating the automation of mundane tasks as freeing up time and resources to improve our health, happiness, and quality of life instead of wringing our hands about lost livelihoods.

The correct framing is that the money and profits generated by those mundane tasks are still realized; it’s just that they are no longer going to workers, but funneled straight to the top. People need to get mad as hell not at the tech, but at those who are leveraging that tech specifically to deny them opportunity rather than improve their lives.

I need a beer. 😐

money and profits generated by those mundane tasks are still realized, it’s just that they are no longer going to workers, but funneled straight to the top

Workers should be paid royalties for their contributions. If “the top” is able to reap the rewards indefinitely, so should the folks who built the systems.

Guillotine - Wikipedia

I think you misspelled “taxes,” but it’s possible your spelling will turn out to be more accurate.
Well… the difference is the former has a history of actually working.

This is where I was kind of on board with Andrew Yang. He was looking to set up UBI based on a tax on automation that displaced jobs. I think at first this would be very small, but having the systems in place would be important to allow it to scale up when it’s needed, rather than trying to just start the conversation once it’s a big problem.

That being said, I’m not a fan of your phrasing around “those who choose not to work.” Being displaced by automation and not having the capability for some of the in-demand jobs is one thing. Being mentally and physically able to work, and just deciding you’ll let others do that while you bring nothing to the table… that’s a different issue. UBI allows us to not have to worry about that distinction, and the resulting payout will be lower because of it, meaning people wouldn’t be living well on it, especially at first. It wouldn’t be a living wage, just a nice little bonus. To give everyone 14+ in the US $20/month would cost about $5.5B a month… so we’re talking really small to start.
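The arithmetic behind that $5.5B figure checks out if we assume roughly 275 million US residents aged 14+ (an approximate population figure, assumed here for illustration):

```python
# Back-of-envelope UBI cost, assuming ~275 million US residents aged 14+
# (the population figure is an assumption, not an official count).
population_14_plus = 275_000_000
monthly_payment = 20  # dollars per person per month

monthly_cost = population_14_plus * monthly_payment
annual_cost = monthly_cost * 12

print(f"${monthly_cost / 1e9:.1f}B per month")  # $5.5B per month
print(f"${annual_cost / 1e9:.0f}B per year")    # $66B per year
```

So the quoted $5.5B is a monthly cost; the annual bill at that rate would be about $66B.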

some sort of A Better World?
Wait you expect a wealthy mammal to share?

Exactly. This has nothing to do with AI and everything to do with UBI.

But, the rich and plebes alike will push AI as the Boogeyman as a distraction from the real enemy.

There’s this bizarre right-wing idea that if everyone can afford basic necessities, they won’t do anything. To which I say, so what? If you want to live in shitty government housing and survive off of food assistance but not do anything all day, fine. Who cares? Plenty of other people want a higher standard of living than that and will have a job to do so. We just won’t have people starving in the street and dying of easily fixable health problems.

We also have to be careful of how people define this sort of thing, and how the wide range of our current wealth inequality affects how something like UBI would be implemented.

In the rich’s eyes, UBI is already a thing and it’s called “welfare”. It’s not enough that people on welfare can barely survive on the poverty-level pittance that the government provides, but both the rich and slightly-more well-off have to put down these people as “mooching off the system” and “stealing from the government”, pushing for even more Draconian laws that punish their situation even further. It is a caste of people who are portrayed as even lower scum than “the poors”, right down to segregating where they live to “Section 8” housing as a form of control.

UBI is not about re-creating welfare. It’s about providing a comfortable safety net while reducing the obscene wealth gap, as technology drives unemployment even higher. Without careful vigilance, the rich and powerful will use this as another wedge issue to create another class of people to hate (their favorite pastime), and push for driving the program down just as hard as they do for welfare.

Yeah, modern welfare isn’t remotely enough to match the spirit of UBI. It’s structured so that you have to have a job. It’s not enough to live by at all. And bizarrely, there’s some jobs where they’d actually be worse than welfare because min wage is so crazy low in many parts of the US.

And even if you’re on disability, you’re gonna have a hard time. It pays barely enough to maybe scrape by if you cut every possible corner.

No form of welfare is close to being livable for the typical recipient. At best, they usually give you some spending cash while you live with friends or family. Maybe if you’re really lucky you can find that rare, rare subsidized housing and manage to just barely make ends meet.

The differences between UBI and “welfare” are perhaps subtle but very important IMO.

In Australia there’s an entire industry around punishing and humiliating people that need welfare. It’s just absurd and unnecessary. UBI avoids any of that by just making the entitlement universal.

We have “job network providers” which IMO do not provide any value to anyone. Suppose in a particular region there are 4,000 unemployed people and this particular week there are 400 new jobs. To receive welfare you need to be working with a job network provider to find a job. However, those job network providers aren’t creating any jobs. One way or another 400 people will probably get a new job this week. They might help a particular person tidy up their resume or whatever but they’re not actually finding jobs for people. Their only purpose is to make receiving welfare a chore, it’s absurd.

There’s also people stuck in the welfare trap. As in, if I don’t work at all I get $w welfare, but for every $1 I earn I lose $0.50 from $w, so why would I work a shitkicker job flipping burgers for effectively half the pay.
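The welfare-trap incentive above is easy to put in numbers. A minimal sketch, assuming a hypothetical weekly benefit of $500 and the 50-cents-per-dollar taper the comment describes:

```python
def take_home(earned, benefit=500.0, taper=0.50):
    """Total weekly income under a means-tested benefit:
    every $1 earned reduces the benefit by `taper` dollars,
    until the benefit is tapered away entirely.
    The $500 benefit level is a hypothetical example."""
    remaining_benefit = max(0.0, benefit - taper * earned)
    return earned + remaining_benefit

print(take_home(0))     # 500.0 -> no work at all
print(take_home(400))   # 700.0 -> $400 of work adds only $200
print(take_home(2000))  # 2000.0 -> benefit fully tapered away
```

In other words, each dollar of wages is worth only fifty cents until the benefit runs out, which is exactly why the burger-flipping job looks like half pay.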

Slightly different systems, but in the US, welfare is a lot like that as well, especially punishing people by removing welfare or food stamps when they make X dollars.
The welfare trap is a feature of all means-tested social security systems.
Welfare trap - Wikipedia