RE: https://wetdry.world/@16af93/115961732893013803

Because not using AI tools for what they excel at will produce less secure code.

For example, they are great at debugging (https://words.filippo.io/claude-debugging/), they can find real issues in code review, they know more math than me or most of my colleagues, and they can write static analyzers I would have never had the time to write myself.

@filippo I am sorry, but a cryptographer saying something like "they know more math than me" only tells me that the cryptographer in question does not know how those things work. Please do not underestimate yourself or overestimate the capabilities of a text generator that happens to have ingested tons of stolen human-generated mathematical text that it stitches together (or quotes verbatim without attribution) to look like an answer.

@canacar I know my capabilities (and their limits!) thank you very much, and your description suggests you have not seriously tried a state-of-the-art model for more than five minutes.

Load up Claude with Opus 4.5, ask it to reason about stuff you know the right answer for, and get back to me.

I am good at combinatorics/probabilities (IMO Bronze medal), and it still helped me do the analysis for the recent bruteforce of test vectors I did.
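For flavor, a hypothetical sketch (not the actual analysis from that work) of the kind of probability estimate such a bruteforce involves, assuming uniform random candidates drawn from a fixed-size space:

```python
import math

def hit_probability(space_bits: int, attempts: int) -> float:
    """Probability that at least one of `attempts` uniform random
    draws from a space of size 2**space_bits hits a fixed target.
    (Hypothetical illustration, not the analysis from the thread.)"""
    space = 2 ** space_bits
    # P(miss on every attempt) = (1 - 1/space) ** attempts
    return 1.0 - (1.0 - 1.0 / space) ** attempts

# With a 20-bit space, about 2**20 random attempts succeed with
# probability roughly 1 - 1/e, by the standard (1 - 1/n)**n limit.
p = hit_probability(20, 2 ** 20)
```

This is exactly the sort of sanity check where having a model reason alongside you is useful: you know the closed form, so a wrong step is easy to catch.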

@filippo the "reasoning" is a series of RAG queries, which in turn are web searches or agent outputs that then get added to the context, with no additional component of "understanding" or "knowing" or "reasoning": just text generation with more context, which may or may not be correct. Yes, they are helpful if you can verify the output, and they speed things up if you can easily identify and discard incorrect outputs.

I am not a developer. I am on the other side, dealing with summaries devoid of content or originality and an increased workload, because people think that these things are like a fellow developer that "knows" or "learned" something just because they did it correctly once.

In that spirit, I support your effort to point these tools at better patterns, but I refuse to anthropomorphize them.

@canacar "reasoning" is about using longer outputs to produce better final results; it has nothing to do with RAG and little to do with extra context.

You don't have to anthropomorphize them, but you are doing yourself a disservice by thinking about them in excessively simplified terms, which seem to describe Markov chains more than LLMs.

The Anthropic blog has a lot of great research if you want a more realistic mental model, or again you can try them.