RE: https://wetdry.world/@16af93/115961732893013803

Because not using AI tools for what they excel at will produce less secure code.

For example, they are great at debugging (https://words.filippo.io/claude-debugging/), they can find real issues in code review, they know more math than me or most of my colleagues, and they can write static analyzers I would have never had the time to write myself.

@filippo @16af93 It’s almost as if they’re a tool like any other: applying that tool correctly creates better outcomes than ignoring it, just as applying it incorrectly creates worse outcomes.

@djspiewak @filippo a tool of, at best, ethically dubious nature, but then again, the torment nexus is just a tool too

@16af93 @djspiewak ^ a social media post sent over the internet using a device made in China, each a tool of ethically dubious nature

@filippo @16af93 @djspiewak This toot is so disappointing. There are many reasonable things one could say about LLM ethics, but "your ethics argument is invalid because you are also using unethical things!!!" doesn't feel like it's in good faith.

You're right that there's no ethical consumption under capitalism. That doesn't mean we shouldn't care, or try to do better.

@danvolchek @16af93 @djspiewak in context, my three-week-old post is reinforcing that LLMs are indeed tools like any other (including being subject to valid ethical debates, although that was not the topic of the thread). Why are we playing delayed-action short-form context collapse?