I Went All-In on AI. The MIT Study Is Right.

https://lemmy.world/post/39852349


Just want to clarify, this is not my Substack; I’m just sharing this because I found it insightful. The author describes himself as a “fractional CTO” (no clue what that means, don’t ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

> I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

> I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

> Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

No shit

What’s interesting is what he found out. From the article:

> I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

> I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Typical C-suite. It takes them three months to come to the same conclusion that would be blindingly obvious to anyone with half a brain: if you build something that no one understands, you’ll end up with something impossible to maintain.

@AutistoMephisto @just_another_person

> "Then three months later, you realize nobody actually understands what you’ve built."

gratz, gang, you turned everything into Perl.

well

*(dusts off `perldoc`)*

I'll be ready

@randomgeek @AutistoMephisto @just_another_person

to be fair, Perl and PHP both suffered from the fact that it was WAY too easy to write TERRIBLE code.

Both languages required a high level of personal discipline to write good code, but it was actually very doable.

The problem wasn't the languages. It was the humans using them.

@masukomi @AutistoMephisto @just_another_person and so many of the folks who wrote terrible Perl / PHP code then went on to copy and paste Rails and Node solutions from StackOverflow and now they—or their kids maybe? It's been a while hasn't it—are all in on vibe coding.

@randomgeek @masukomi @AutistoMephisto @just_another_person

Hey now, some of us typed out the code from StackOverflow. It was the only way to get good formatting before we had PerlTidy.

@AutistoMephisto @just_another_person This is kind of the obvious conclusion. I didn't need to use AI to know this would be the outcome. This is why I only use it for small code snippets if at all. This is why I've taught my kids not to rely on AI to do their homework.

It may seem like the easy way but it will absolutely come back to haunt you later. If you don't do the work you don't learn anything or develop any skills.

Something any (real, trained, educated) developer who has even touched AI in their career could have told you.
What’s funny is this guy has 25 years of experience as a software developer. But three months was all it took to make it worthless.
As someone who has been shoved in the direction of using AI for coding by my superiors, that’s been my experience as well. It’s fine at cranking out stackoverflow-level code regurgitation and mostly connecting things in a sane way if the concept is simple enough. The real breakthrough would be if the corrections you make would persist longer than a turn or two. As soon as your “fix-it prompt” is out of the context window, you’re effectively back to square one. If you’re expecting it to “learn” you’re gonna have a bad time. If you’re not constantly double checking its output, you’re gonna have a bad time.
@felbane @AutistoMephisto I don't have a CS degree (and am more than willing to accept the conclusions of this piece), but how is it not viable to audit code as it's produced, so that it's both vetted and understood in sequence?

Auditing the code it produces is basically the only effective way to use coding LLMs at this point.

You’re basically playing the role of senior dev code reviewing and editing a junior dev’s code, except in this case the junior dev randomly writes an amalgamation of mostly valid, extremely wonky, and/or complete bullshit code. It has no concept of best practices, or fitness for purpose, or anything you’d expect a junior dev to learn as they gain experience.

Now given the above, you might ask yourself: “Self, what if I myself don’t have the skills or experience of a senior dev?” This is where vibe coding gets sketchy or downright dangerous: if you don’t notice the problems in generated code, you’re doomed to fail sooner or later. If you’re lucky, you end up having to do a big refactoring when you realize the code is brittle. If you’re unlucky, your backend is compromised and your CTO is having to decide whether to pay off the ransomware demands or just take a chance on restoring the latest backup.

If you’re just trying to slap together a quick and dirty proof of concept or bang out a one-shot script to accomplish a task, it’s fairly useful. If you’re trying to implement anything moderately complex or that you intend to support for months/years, you’re better off just writing it yourself as you’ll end up with something stylistically cohesive and more easily maintainable.

@felbane thanks, such a thorough response really appreciate your time.
It’s still useful to have an actual “study” (I’d rather call it a POC) with hard data you can point to, rather than just “trust me bro”.
Like the MIT study that the author refers to? The one that already existed before they decided they needed to do it themselves?

Untrained dev here, but the trend I’m seeing is spec-driven development where AI generates the specs with a human, then implements the specs. Humans can modify the specs, and AI can modify the implementation.

This approach seems like it can get us to 99%, maybe.
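For concreteness, one way to make a spec checkable rather than just readable is to express part of it as executable tests: a human owns the spec file, and a regenerated implementation only counts as done when the tests pass. A minimal sketch of that idea (the `slugify` function and its rules are invented for illustration, not taken from anyone's actual workflow):

```python
# spec_slugify.py -- a fragment of a "spec" expressed as executable tests.
# A human maintains this file; the implementation below is the part an
# AI would generate and regenerate until the tests pass.
import re

def slugify(title: str) -> str:
    # Candidate implementation (the regenerable part).
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation_and_edges():
    assert slugify("  AI: Hype & Reality!  ") == "ai-hype-reality"

if __name__ == "__main__":
    test_lowercases_and_hyphenates()
    test_strips_punctuation_and_edges()
    print("spec satisfied")
```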

How is what you’re describing different from what the author is talking about? Isn’t it essentially the same as “AI do this thing for me”, “no not like that”, “ok that’s better”? The trouble the author describes, i.e. the solution being difficult to change, or having no confidence that it can be safely changed, is still the same.

This poster calckey.world/notes/afzolhb0xk is more articulate than my post.

The difference with this “spec-driven” approach is that the entire process is repeatable by AI once you’ve gotten the spec sorted. So you no longer work on the code, you just work on the spec, which can be a collection of files, files in folders, whatever — but the goal is some kind of determinism, I think.

I use it on a much smaller scale and haven’t really cared much for the “spec as truth” approach myself, at this level. I also work almost exclusively on NextJS apps with the usual Tailwind + etc stack. I would certainly not trust a developer without experience with that stack to generate “correct” code from an AI, but it’s sort of remarkable how I can slowly document the patterns of my own codebase and just auto-include it as context on every prompt (or however Cursor does it) so that everything the LLMs suggest gets LLM-reviewed against my human-written “specs”. And doubly neat is that the resulting documentation of patterns turns out to be really helpful to developers who join or inherit the codebase.
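That "auto-include the patterns doc as context" workflow can be approximated in a few lines even without Cursor. This is only a sketch of the idea under stated assumptions: `docs/patterns.md` is a hypothetical human-maintained conventions file, and `complete()` is a hypothetical stand-in for whatever model client you actually use, not a real API:

```python
# Sketch: prepend a human-written conventions file to every request so
# corrections persist across turns instead of falling out of context.
from pathlib import Path

PATTERNS = Path("docs/patterns.md").read_text()  # human-maintained rules

def ask(task: str) -> str:
    prompt = (
        "Follow these project conventions exactly:\n"
        f"{PATTERNS}\n\n"
        f"Task:\n{task}"
    )
    return complete(prompt)

def complete(prompt: str) -> str:
    # Hypothetical stand-in; wire up your actual LLM client here.
    raise NotImplementedError
```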

I think the author / developer in the article might not have been experienced enough to direct the LLMs to build good stuff, but these tools like React, NextJS, Tailwind, and so on are all about patterns that make us all build better stuff. The LLMs are like “8 year olds” (someone else in this thread) except now they’re more like somewhat insightful 14 year olds, and where they’ll be in another 5 years… Who knows.

Anyway, just saying. They’re here to stay, and they’re going to get much better.

Dethronatus Sapiens sp. (@dsilverz)

@[email protected] @[email protected] I’ve dealt with programming since I was 9 y.o., with my professional career in DevOps starting several years later, in 2013. I’ve dealt with lots of other people’s code, legacy code, very shitty code (especially code by my “managers” who cosplayed as programmers), and tons of technical debt.

Even though I’m quite the LLM power-user (because I’m a person devoid of other humans in my daily existence), I never relied on LLMs to “create” my code. Rather, what I did a lot was tinker with different LLMs to “analyze” _my own code that I wrote myself_, both to experiment with their limits (e.g. I wrote a lot of cryptic, code-golf one-liners and fed them to the LLMs to test their ability to “connect the dots” on whatever was happening behind the cryptic syntax) and to try to use them as a pair of external eyes beyond mine (and by “connect the dots” I mean their ability, as fancy Markov chains, to relate tokens to other tokens with similar semantic proximity).

I did test them (especially Claude/Sonnet) for their “ability” to output code, not intending to use the code because I’m better off writing my own thing, but you likely know the maxim: one can’t criticize what they don’t know. And I tried to know them so I could criticize them. To me, the code is… pretty readable. Definitely awful code, but readable nonetheless. So, when the person says…

> The developers can’t debug code they didn’t write.

…even though they claim more than 25 years of experience, it feels to me like they don’t have them. One thing is saying “developers find it pretty annoying to debug code they didn’t write”, a statement I’d totally agree with! It’s awful to try to debug someone else’s (human or otherwise) code, because you need to put yourself in their shoes without knowing what their shoes are like… But it’s _doable_, especially for people who have dealt with programming logic since childhood. Saying “developers can’t debug code they didn’t write”, to me, sounds like a layperson who doesn’t belong to the field of Computer Science, doesn’t like programming, and/or only pursued a “software engineer” career purely out of a money-driven, capitalistic mindset. Either way, if a developer can’t debug others’ code, sorry to say, but they’re not a developer!

Don’t take me wrong: I’m not trying to be prideful or pretending to be awesome; this is beyond my person, I’m nothing, I’m no one. I abandoned my career because I hate the way technology is growing more and more enshittified. Working as a programmer for capitalistic purposes ended up depleting the joy I used to have back when I coded on a daily basis. I’m not on the “job market” anymore, so what I’m saying is based on more than 10 years of former professional experience. And my experience says: a developer who won’t at least try to understand the worst code out there can’t call themselves a developer, full stop.


> They’re here to stay

Eh, probably. At least for as long as there is corporate will to shove them down the rest of our throats. But right now, in terms of sheer numbers, humans still rule, and LLMs are pissing off more and more of us every day while their makers are finding it increasingly harder to forge ahead in spite of us, which they are having to do ever more frequently.

> and they’re going to get much better.

They’re already getting so much worse, with what is essentially the digital equivalent of kuru, that I’d be willing to bet they’ve already jumped the shark.

If their makers and funders had been patient, and worked the present nightmares out privately, they’d have a far better chance than they do right now, IMO.

Simply put, LLMs/“AI” were released far too soon, and with far too much “I Have a Dream!” fairy-tale promotion that the reality never came close to living up to, and then shoved with brute corporate force down too many throats.

As a result, now you have more and more people across every walk of society pushed into cleaning up the excesses of a product they never wanted in the first place, being forced to share their communities AND energy bills with datacenters, depleted water reserves, privacy violations, EXCESSIVE copyright violations and theft of creative property, having to seek non-AI operating systems just to avoid it . . . right down to the subject of this thread, the corruption of even the most basic video search.

Can LLMs figure out how to override an angry mob, or resolve a situation wherein the vast majority of the masses are against the current iteration of AI even though the makers of it need us all to be avid, ignorant consumers of AI for it to succeed? Because that’s where we’re going, and we’re already farther down that road than the makers ever foresaw, apparently having no idea just how thin the appeal is getting on the ground for the rest of us.

So yeah, I could be wrong, and you might be right. But at this point, unless something very significant changes, I’d put money on you being mostly wrong.


@some_designer_dude @Piatro
So you're saying the goal is a set of files that can be run through a deterministic process to generate code which can be run by a computer? Revolutionary! You should call this invention a comp-AI-ler.

Trained dev with a decade of professional experience here: humans routinely fail to get me workable specs without hours of back-and-forth meetings. I’d say a solid 25% of my work day is spent understanding what the stakeholders are asking for and how to contort the requirements to fit into the system.

If these humans can’t be explicit enough with me, a living, thinking human who understands my architecture better than any LLM, what chance does an LLM have?

Thus you get a piece of software that no one really knows shit about the inner workings of. Sure you have a bunch of spec sheets but no one was there doing the grunt work so when something inevitably breaks during production there’s no one on the team saying “oh, that might be related to this system I set up over here.”
Have you used any AI to try and get it to do something? It learns generally, not specifically. So you give it instructions and then it goes, “How about this?” You tell it that it’s not quite right and to fix these things, and it goes off on a completely different tangent in other areas. It’s like working with an 8-year-old who has access to the greatest stuff around.
It doesn’t even actually learn, though.

Even more efficient: humans do the specs and the implementation. AI has nothing to contribute to specs, and is worse at implementation than an experienced human. The process you describe, with current AIs, offers no advantages.

AI can write boilerplate code and implement simple small-scale features when given very clear and specific requests, sometimes. It’s basically an assistant to type out stuff you know exactly how to do and review. It can also make suggestions, which are sometimes informative and often wrong.

If the AI were a member of my team it would be that dodgy developer whose work you never trust without everyone else spending a lot of time holding their hand, to the point where you wish you had just done it yourself.
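For scale, the kind of "stuff you know exactly how to do and review" meant here is roughly argparse-skeleton territory: tedious to type, trivial to verify at a glance. A generic illustration, not taken from the thread:

```python
# Typical boilerplate an assistant can type out and a human can review
# at a glance: a minimal CLI skeleton.
import argparse

def main() -> None:
    parser = argparse.ArgumentParser(description="Example one-off tool")
    parser.add_argument("path", help="input file to process")
    parser.add_argument("--verbose", action="store_true", help="chatty output")
    args = parser.parse_args()
    if args.verbose:
        print(f"processing {args.path}")

if __name__ == "__main__":
    main()
```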

I was in charge of an AI pilot project two years back at my company. That was my conclusion, among others.
Also, it’s what MIT told them. Literally MIT lol

> Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive.

And all they’ll hear is “not failure, metrics great, ship faster, productive” and go against your advice because who cares about three months later, that’s next quarter, line must go up now. I also found this bit funny:

> I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me… I was proud of what I’d created.

Well you didn’t create it, you said so yourself, not sure why you’d be proud, it’s almost like the conclusion should’ve been blindingly obvious right there.

The top comment on the article points that out.

It’s an example of a far older phenomenon: Once you automate something, the corresponding skill set and experience atrophy. It’s a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired. I’ll have to find it, but there’s a story about a modern fighter-jet pilot not being able to handle a WWII-era Lancaster bomber. They don’t know how to do the stuff that modern warplanes do automatically.

I agree with you, though proponents will tell you that’s by design. Supposedly, it’s like with high-level languages. You don’t need to know the actual instructions in assembly anymore to write a program with them. I think the difference is that high-level language instructions are still (mostly) deterministic, while an LLM prompt certainly isn’t.
Yep, that’s the key issue that so many people fail to understand. They want AI to be deterministic, but it simply isn’t. It’s like expecting a human to get the right answer to any possible question; it’s just not going to happen. The only thing we can do is bring error rates with AI lower than a human doing the same task, and it’s at that point that the AI becomes useful. But even then there will always be the alignment issue and nondeterminism, meaning AI will never behave exactly the way we want or expect it to.
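The determinism gap is easy to demonstrate: run the same high-level source through the toolchain twice and you get byte-identical output, which is precisely the guarantee a prompt does not give. A tiny illustration:

```python
# A compiler is a pure function of its input: same source in, same
# bytecode out, every single run. A prompt offers no such guarantee.
import types

src = "def add(a, b):\n    return a + b\n"

def fn_bytecode(source: str) -> bytes:
    module = compile(source, "<spec>", "exec")
    # Pull out the nested code object for the function definition.
    fn = next(c for c in module.co_consts if isinstance(c, types.CodeType))
    return fn.co_code

assert fn_bytecode(src) == fn_bytecode(src)
print("deterministic: identical bytecode on every run")
```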

It’s more like the ancient phenomenon of spaghetti code. You can throw enough code at something until it works, but the moment you need to make a non-trivial change, you’re doomed. You might as well throw away the entire code base and start over.

And if you want an exact parallel, I’ve said this from the beginning, but LLM coding at this point is the same as offshore coding was 20 years ago. You make a request, get a product that seems to work, but maintaining it, even by the same people who created it in the first place, is almost impossible.

Indeed… Throw-away code is currently where AI coding excels. And that is cool and useful - creating one-off scripts, self-contained modules, automating boilerplate, etc.

You can’t quite use it the same way for complex existing code bases though… Not yet, at least…

Yes, that’s exactly how I use Cursor and local LLMs. There are a ton of cases where you need a one-time script to prepare data, sort through data, fetch data via an API, etc. Even something simple like adding a role on a Discord channel (God save you if your company uses that piece of crap for communication) can be done with a script too, especially if you need to add a role to thousands of users, for example. Of course, it can be done properly through a normal development cycle, but that’s expensive, while shitcoding through Cursor can be done by anyone.
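For reference, the bulk role-assignment one-off described above is about this much code. The endpoint is Discord's documented `PUT /guilds/{guild}/members/{user}/roles/{role}`; the token and IDs below are placeholders, and a bot with the Manage Roles permission is assumed:

```python
# Throwaway script: add one role to many users via Discord's REST API.
# Requires a bot token with the Manage Roles permission; all IDs are
# placeholders.
import time
import requests

TOKEN = "BOT_TOKEN_HERE"
GUILD_ID = "123456789012345678"
ROLE_ID = "234567890123456789"
USER_IDS = ["345678901234567890"]  # could be thousands of entries

HEADERS = {"Authorization": f"Bot {TOKEN}"}

for user_id in USER_IDS:
    url = (f"https://discord.com/api/v10/guilds/{GUILD_ID}"
           f"/members/{user_id}/roles/{ROLE_ID}")
    resp = requests.put(url, headers=HEADERS)
    if resp.status_code == 429:  # rate limited: back off and retry once
        time.sleep(resp.json().get("retry_after", 1.0))
        resp = requests.put(url, headers=HEADERS)
    resp.raise_for_status()
```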

The thing about this perspective is that I think it’s actually overly positive about LLMs, as it frames them as just the latest in a long line of automations.

Not all automations are created equal. For example, compare using a typewriter to using a text editor. Besides a few details about the ink ribbon and movement mechanisms you really haven’t lost much in the transition. This is despite the fact that the text editor can be highly automated with scripts and hot keys, allowing you to manipulate even thousands of pages of text at once in certain ways. Using a text editor certainly won’t make you forget how to write like using ChatGPT will.

I think the difference lies in the relationship between the person and the machine. To paraphrase Cathode Ray Dude, people who are good at using computers deduce the internal state of the machine, mirror (a subset of) that state as a mental model, and use that to plan out their actions to get the desired result. People that aren’t good at using computers generally don’t do this, and might not even know how you would start trying to.

For years ‘user friendly’ software design has catered to that second group, as they are both the largest contingent of users and the ones that need the most help. To do this, software vendors have generally done two things: try to move the necessary mental processes from the user’s brain into the computer, and hide the computer’s internal state (so that it’s not implied that the user has to understand it, so that a user who doesn’t know what they’re doing won’t do something they’ll regret, etc.). Unfortunately this drives that first group of people up the wall. Not only does hiding the internal state of the computer make it harder to deduce, every “smart” feature they add to try to move this mental process into the computer itself only makes the internal state more complex and harder to model.

Many people assume that if this is the way you think about software you are just an elitist gatekeeper, and you only want your group to be able to use computers. Or you might even be accused of ableism. But the real reason is what I described above, even if it’s not usually articulated in that way.

Now, I am of the opinion that the ‘mirroring the internal state’ method of thinking is the superior way to interact with machines, and the approach to user friendliness I described has actually done a lot of harm to our relationship with computers at a societal level. (This is an opinion I suspect many people here would agree with.) And yet that does not mean that I think computers should be difficult to use. Quite the opposite, I think that modern computers are too complicated, and that in an ideal world their internal states and abstractions would be much simpler and more elegant, but no less powerful. (Elaborating on that would make this comment even longer though.) Nor do I think that computers shouldn’t be accessible to people with different levels of ability. But just as a random person in a store shouldn’t grab a wheelchair user’s chair handles and start pushing them around, neither should Windows (for example) start changing your settings on updates without asking.

Anyway, all of this is to say that I think LLMs are basically the ultimate in that approach to ‘user friendliness’. They try to move more of your thought process into the machine than ever before, their internal state is more complex than ever before, and it is also more opaque than ever before. They also reflect certain values endemic to the corporate system that produced them: that the appearance of activity is more important than the correctness or efficacy of that activity. (That is, again, a whole other comment though.) The result is that they are extremely mind numbing, in the literal sense of the phrase.

> Once you automate something, the corresponding skill set and experience atrophy. It’s a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired.

Well, to be fair, different skills are acquired. You’ve learned how to create automated systems, that’s definitely a skill. In one of my IT jobs there were a lot of people who did things manually, updated computers, installed software one machine at a time. But when someone figures out how to automate that, push the update to all machines in the room simultaneously, that’s valuable and not everyone in that department knew how to do it.

So yeah, I guess my point is, you can forget how to do things the old way, but that’s not always bad. Like, so you don’t really know how to use a scythe, that’s fine if you have a tractor, and trust me, you aren’t missing much.

> I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me… I was proud of what I’d created.

> Well you didn’t create it, you said so yourself, not sure why you’d be proud, it’s almost like the conclusion should’ve been blindingly obvious right there.

Does a director create the movie? They don’t usually edit it, they don’t have to act in it, nor do all directors write movies. Yet the person giving directions is seen as the author.

The idea is that vibe coding is like being a director or architect. I mean that’s the idea. In reality it seems it doesn’t really pan out.

You can vibe write and vibe edit a movie now too. They also turn out shit.

The issue is that an LLM isn’t a person with skills and knowledge. It’s a complex guessing box that gets things kinda right, but not actually right, and it absolutely can’t tell what’s right or not. It has no actual skills or experience or humanity that a director can expect a writer or editor to have.

What’s impressive about LLMs is how good they are at sounding right.

Just makes me think of this character from Adventure Time

What season are they from? I thought I’d seen most of it but don’t recall them.
This is from season 1 episode 18, titled “Dungeon”

Wrong, it’s just outsourcing.

You’re making a false equivalence. A director is actively doing their job; they’re a puppeteer and the rest is their puppet. The puppeteer is not outsourcing his job to a puppet.

And I’m pretty sure you have no idea what architects actually do.

If I hire a coder to write an app for me, whether it’s a clanker or a living being, I’m outsourcing the work; I’m a manager.

It’s like tasking an artist to write a poem for you about love and flowers, and being proud about that poem.

yeah, I don’t get why the AI can’t do the changes

don’t you just feed it all the code and tell it? I thought that was the point of 100% AI

My big fear with this stuff is security. It just seems so “easy”, without knowledgeable people, for AI to write a product that functions from a user perspective but is wide open to attack.
AI might be good for simulating attacks, because they can do lots of attempts and iteration. IMO, AI and (competent) people would make for a good pairing for trying out ideas before deploying a project into the real world.
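To make the "functions from a user perspective but wide open to attack" worry concrete, the classic failure mode in generated CRUD code is building queries by string interpolation. A minimal before/after, as a generic illustration rather than anything from the article:

```python
# String-built SQL passes every demo and is trivially injectable;
# parameterized queries close the hole.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name: str, pw: str) -> bool:
    query = f"SELECT 1 FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return conn.execute(query).fetchone() is not None

def login_safe(name: str, pw: str) -> bool:
    query = "SELECT 1 FROM users WHERE name = ? AND pw = ?"
    return conn.execute(query, (name, pw)).fetchone() is not None

assert login_vulnerable("alice", "' OR '1'='1")   # password check bypassed
assert not login_safe("alice", "' OR '1'='1")     # attack rejected
```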

> We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.

Except we are talking about that, and the tech bro response is “in 10 years we’ll have AGI and it will do all these things all the time permanently.” In their roadmap, there won’t be a next generation of software developers, product managers, or mid-level leaders, because AGI will do all those things faster and better than humans. There will just be CEOs, the capital they control, and AI.

What’s most absurd is that, if that were all true, that would lead to a crisis much larger than just a generational knowledge problem in a specific industry. It would cut regular workers entirely out of the economy, and regular workers form the foundation of the economy, so the entire economy would collapse.

“Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.”

That’s why they’re all-in on authoritarianism.
Also, even if we make it through a wave of bullshit and all these companies fail in 10 years, the next wave will be ready and waiting, spouting the same crap - until it’s actually true (or close enough to be bearable financially). We can’t wait any longer to get this shit under control.

According to a study, the lower top 10% accounts for something like 68% of cash flow in the economy. Us plebs are being cut out altogether.

That being said, I think if people can’t afford to eat, things might get bad. We will probably end up a kept population in these ghouls’ fever dreams.

What does lower top 10% mean?

Once Boston Dynamics-style dogs and androids can operate independently over a number of days, I’d say all bets are off that we would be kept around as pets.

I’m fairly certain your Musks and Altmans would be content with a much smaller human population existing to only maintain their little bubble and damn everything else.

We’re all idiots. Even the titans of industry--sit down with them--they are idiots at best, narcissists and criminals at worst. At least you and I lack the power to be that awful with our idiocy.
Yep, and now you know why all the tech companies suddenly became VERY politically active. This future isn’t compatible with democracy. Once these companies no longer provide employment their benefit to society becomes a big fat question mark.

I did see someone write a post about Chat-Oriented Programming; to me it appeared successful, but not without cost and extra care. Original Link, Discussion Thread

Successful in that it wrote code faster and its output stuck to conventions better than the author would have. But they had to watch it like a hawk, with the discipline of a senior developer giving full attention to a junior: stop and swear at it every time it ignored the rules given at the beginning of each session, and terminate the session when it starts an autocompactification routine that wastes your money and makes Claude forget everything. And you try to dump what it has completed each time. One of the costs seems to be the sanity of the developer, so I really question whether it’s a sustainable way of doing things, from both the model side and the developer side. To be actually successful you need to know what you’re doing; otherwise it’s easy to fall into a trap like the fractional CTO did, trusting the AI’s assertions that everything is hunky-dory.

A Month of Chat-Oriented Programming - CheckEagle

That perfectly describes what my day-to-day has become at work (not by choice).

The only way to get anywhere close to production-ready code is to do like you just described, and the process is incredibly tedious and frustrating. It also isn’t really any faster than just writing the code myself (unless I’m satisfied with committing slop) and in the end, I still don’t understand the code I’ve ‘written’ as well as if I’d done it without AI. When you write code yourself there’s a natural self-reinforcement mechanism, the same way that taking notes in class improves your understanding/retention of the information better than when just passively listening. You don’t get that when vibe coding (no matter how knowledgeable you are and how diligent you are about babysitting it), and the overall health of the app suffers a lot.

The AI tools are also worse than useless when it comes to debugging, so good fucking luck getting it to fix the bugs it inevitably introduces…

For debugging there is the Google Antigravity method: there can’t be bugs if it wipes the whole drive containing your project (taps head)
Google Antigravity vibe-codes user's entire drive out of existence: Caveat coder (The Register)

> “fractional CTO” (no clue what that means, don’t ask me)

For those who were also interested to find out what this means: a consultant and advisor in a part-time role, paid to make decisions that would usually fall under the scope of a CTO, but for smaller companies who can’t afford a full-time experienced CTO.

That sounds awful. You get someone who doesn’t really know the company or product, they take a bunch of decisions that fundamentally affect how you work, and then they’re gone.

… actually, that sounds exactly like any other company.