Your point was “some people don’t think it’s a no-brainer,” which I addressed, and then you whipped out that line. I’ve been around long enough to know what that means: your replies would be inflammatory garbage from then on. Learn how to interact with people online in a civil way, and maybe you’ll actually be able to keep a conversation going long enough for it to be constructive.

Congratulations, you read the headline.

Learn how to have a conversation

That’s only article-worthy because it is a rare occurrence and an increasingly controversial opinion. And even that maintainer didn’t abandon TS completely (he said that would be “daft”); he just moved to types written as JSDoc comments, which are still checked by the TS compiler, along with .d.ts files.
Lordy, I did not expect an internal refactoring PR to end up #1 on Hacker News. ... | Hacker News
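For anyone unfamiliar, the JSDoc approach looks roughly like this. A minimal sketch, not the actual code from that PR: plain .js files carry type annotations in comments, and tsc type-checks them (via `// @ts-check` or the `checkJs` compiler option) without any build step.

```javascript
// @ts-check
// Types live in JSDoc comments; the file is plain, runnable JavaScript.
// Running `tsc --checkJs --noEmit` over it catches type errors just like .ts.

/**
 * @param {number} a
 * @param {number} b
 * @returns {number}
 */
function add(a, b) {
  return a + b;
}

console.log(add(2, 3)); // 5
// add("2", 3) would be flagged by the type checker, but still runs as JS.
```

The appeal is getting most of TS’s checking with zero compile step; the trade-off is noisier annotations.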

Well, yes. TypeScript mitigates one big problem with JavaScript (type safety). That’s why it exists. It’s a dumb idea to choose vanilla JS over TS if you’re starting a new project today, IMO.

violates licenses

Not a problem if you believe all code should be free. I’m being cheeky, but licensing has nothing to do with code quality, even if the claim is true.

do the thinking

This argument can be used equally well in favor of AI assistance, and it’s already covered by my previous reply

non-deterministic

It’s deterministic

brainstorming

This is not what a “good developer” uses it for

We have substantially similar opinions, actually. I agree with your points about good developers having a clear grasp of all of their code, the ethical issues around AI (not least of which are licensing issues), skill loss, hardware prices, etc.

However, what I have observed in practice is different from the way you describe LLM use. I have seen irresponsible use, and I have seen what I personally consider to be responsible use. Responsible use involves taking a measured and intentional approach to incorporating LLMs into your workflow. It’s a complex topic with a lot of nuance, like all engineering, but I would be happy to share some details.

Critical review is the key sticking point. Junior developers also write crappy code that requires intense scrutiny. It’s not impossible (or irresponsible) to use code written by a junior in production, for the same reason. For a “good developer,” many of the quality problems are mitigated by putting roadblocks in place to…

  • force close attention to edits as they are being written,
  • facilitate handholding and constant instruction while the model is making decisions, and
  • ensure thorough review at the time of design/writing/conclusion of the change.

When it comes to making safe and correct changes via LLM, specifically, I have seen plenty of “good developers” in real life, now, who have engineered their workflows to use AI cautiously like this.

Again, though, I share many of your concerns. I just think there’s nuance here and it’s not black and white/all or nothing.

You’re wrong, whether you figure that out now or later. Using an LLM where you gatekeep every write is something that good developers have started doing. The most senior engineers I work with are the ones who have adopted the most AI into their workflow, and with the most care. There’s a difference between vibe coding and responsible use.

What world are you living in? That’s not even remotely true

Ah, okay, I understand now. Rocks are nutritious—and whisker pants.

Out of curiosity, would you explain your reply and your immediate parent’s comment for me? “Sez” - a bit old but didn’t seem too weird, but then: “date of poisoning” - are you implying an LLM wrote that, and “sez” has something to do with pinpointing some poisoning of the model?