
While that’s true for some of those, you never know when there will be a paradigm shift, and neither do they. Also, off the top of my head, I know that Yahoo! and IBM caused their own undoing through long periods of mismanagement. The world was in their hands and they couldn’t stay out of their own way. Standard Oil was broken up in direct response to the establishment and enforcement of federal anti-monopoly regulation.

So, again, don’t give up hope! If the pendulum does not swing back the other way, it will defy the sum of all human history. If you think about it, believing otherwise doesn’t even make sense.

Try not to give up hope! People said similar things about IBM, Yahoo!, AltaVista, AOL, Blockbuster Video, Standard Oil, The Dutch East India Company, and more! All of those are either in the dustbin of history or ghosts of their former selves.

The reckoning will come to these companies that continue to seem successful in spite of providing objectively bad and worsening products; nothing has ever stopped the pendulum from swinging. When you see your chance to help, give it a push.

Unlike Pandora’s box, though, a lot of the dumber applications of this stuff will go back in when the VC money dries up.

When I refer to improvements, I mean fundamental improvements to the underlying technology, which appear to be at a stubborn plateau.

I believe the improvements you’re referring to are better guardrails. They are still improving the interface with regard to context and scope, as those functionalities are separate from the underlying technology, bolted on top of it to keep it on task and more continually aware of, and operating within, the defined context.

Underneath, though, each new model appears to be a refactoring of the previous one to get different, and sometimes better, results, but the methodology is the same, and its strengths and weaknesses remain largely unchanged.

So, essentially, my objection to this practice is this:

This technology has led companies to lean harder on their current people to get more done in the same amount of time with AI tools. That doesn’t seem to be succeeding at any sort of scale so far, but that’s the plan nonetheless. As a result, new talent is coming into the industry at a much slower rate than before: hiring is on hold while everyone waits to see whether these tools really can replace bodies in the workforce in a serious way (again, super inconclusive at this point).

So, looking forward even one single generation, we will have dramatically fewer experts in the field than before, because so many fewer people were able to start in that field last generation. Since the need for programmers is greater every year, either these tools will be a wild success and meet all these business demands, or there will be a crisis of demand with no easy ways out.

Since both of the foreseeable outcomes are detrimental to the workers themselves, what and who exactly are we rooting for? I think that most people, given the choice, would choose the existing cycle with a proven track record, rather than gamble on something so uncertain with no clear economic benefit to the workers themselves.

Right, but aren’t the interns in training specifically to get better at that than they are today, and eventually surpass the abilities of the AI?

These LLMs are at best OK at this stuff, and are not improving at any sort of convincing rate. If you don’t train anyone to be better than the LLM, the retirement of your generation will make the whole industry you’re in at best, OK at its job.

, or hateful, or murderous, or famously accepting of others.
You are making that “first reaction is the wrong one” assertion like it is some sort of law of physics. Many people have read all the published materials, are knowledgeable in the field, and have come to the sober, measured conclusion that this technology is mostly a turd. To make matters worse, it’s a $2500 turd that makes the room hot and the electric meter spin real fast.
I haven’t heard the term before, but the notion you’re describing is definitely something I’ve thought and read about before. Thanks for teaching me something new I can go learn more about!

I am one of those people, but I’m still annoyed when my tools don’t work right. I hate having to fix something, only to find out that the tool I need for the job also needs repairs. I use my computers primarily as tools, so I’m almost always at least a little annoyed when my computer suddenly demands attention.

Maybe there are others who are hobbyists. I guess if you’re primarily a computer tinkerer, troubleshooting that crap can be like cultivating a zen garden, but for me it is the opposite.

Sorry, I misunderstood your comment and thought you were suggesting that Huang thought arguments against his point were the straw man argument, which doesn’t seem to be the case to me.

Huang seems to be suggesting that because the AI work happens further down in the rendering pipeline, rather than being a reskin applied after rendering, it doesn’t count as AI slop, which is ridiculous.

If I am cooking something from a recipe, it doesn’t matter whether the recipe calls for adding shit on top or I just pour some shit on after it is cooked. It still has shit all over it.