Okay, okay. I need to devote some time to catching up on genAI capabilities in a professional sense.

Security Researchers & SecOps - what's your favorite use case so far?

Also, what's a lesson you learned the hard way?

***Also - please save the snark. I'm tired, and this is a genuine, if hesitant, ask.

#infosec

@neurovagrant I have been putting Claude Code to the test in earnest on a well-paid, high-stakes project that started just two weeks ago. For context, I am a highly skilled software engineer with decades of professional experience. The project is a basic web app with a whole lot of features: TypeScript, a React Router “serverless” architecture deployed on Vercel, and some Python in the back end running LangChain for the app’s AI features. The project is far too ambitious for the size of the team currently working on it. Management (perhaps foolishly) thought AI would let us deliver on time and on budget despite the sheer volume of work that needs to be done, and I was brought onto the team when it became clear there weren’t enough engineering resources devoted to the project. My AI setup is Claude Code running in Emacs.

So the good news is that AI is genuinely making me work a lot faster, but I made a few mistakes before I got there, and now I follow some pretty strict rules that I set for myself. I learned early on that if I don’t write most of the code myself, I don’t learn anything. If I don’t take notes and write comments, I don’t learn anything. I had worked on the project for a few days before I realized I hadn’t learned a single thing about the software and had already become very dependent on AI to make important decisions for me. It was hard to solve bugs because I didn’t know what was going on.

So the key take-away is that you absolutely will become dependent on AI as a crutch for what you don’t understand, and it will happen without you realizing it. You have to work very, very hard not to delegate to the AI your responsibility as an engineer to understand the code. If you do, you won’t be able to solve problems or explain your work to other people, because you don’t really know how the system works: you didn’t really write it. You won’t be able to explain the challenges you encountered or the engineering trade-offs you made, because you never actually made those choices.

You have to slow down to the speed at which you can understand what code is being written. You have to push back on people who are pushing you to deliver features faster and faster, or you will end up becoming dependent on the AI.

One big problem with AI: it tends to copy-paste its own code across your code base. So if you let it make a bad decision (perhaps because you didn’t realize it was a bad decision), pretty soon that same bad decision is being used everywhere. As an example, the AI was using a lazy little one-line hack to reuse a database connection pool. Great for a prototype, not so much for an industrial product. The AI had also written some nice, reusable code: a wrapper around the PostgreSQL client library that obtains the DB connection properly and in a type-safe way. But when I grepped for examples of how that wrapper was used, I discovered it wasn’t being used anywhere. Instead, the one-line hack appeared everywhere, in something like 30 different places throughout the code base.
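To make that concrete, here’s a rough sketch of what such a wrapper can look like (names and details are mine, not the actual project code, and the pool interface is a simplified stand-in for something like pg’s `Pool`): the wrapper acquires a pooled client, runs your work, and always releases the client, with the result type tracked by a generic parameter.

```typescript
// Hypothetical sketch, not the real project code.
// Minimal shape of a connection pool, compatible with e.g. pg's Pool.
interface PoolClient {
  query(sql: string): Promise<unknown>;
  release(): void;
}
interface PoolLike {
  connect(): Promise<PoolClient>;
}

// The reusable, type-safe wrapper: acquire a client, run the callback,
// and always release the client, even if the callback throws.
export async function withDbClient<T>(
  pool: PoolLike,
  fn: (client: PoolClient) => Promise<T>
): Promise<T> {
  const client = await pool.connect();
  try {
    return await fn(client);
  } finally {
    client.release();
  }
}

// Usage (illustrative):
//   const rows = await withDbClient(pool, (c) => c.query("SELECT * FROM users"));
```

The point of a wrapper like this is exactly what gets lost when the one-line hack is copy-pasted instead: connection release is guaranteed in one place, rather than hoped for in 30.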

So in order to make sure that I can understand code for which I am responsible, I decided I would not use the AI to write code for me. I would ask it how to write code, and actually physically type it all out myself. This forces me to slow down and think about what I am doing, and it helps me remember how to write code to solve problems.

Claude Code is hands-down the best linter I have ever used. After I write code, I always ask Claude Code to do a review. I tell it not to fix the mistakes for me, but to tell me what mistakes I have to fix. The process of fixing my own mistakes teaches me how to do things the correct way. At first I would just write pseudo-code, because there was so much about TypeScript and React Router I didn’t know. But after a few days of using AI as a linter, I can write code on my own most of the time, and the mistakes the AI catches are fewer and farther between. I have never learned so much about a programming language and framework in such a short amount of time; AI is truly very useful for this.

Also, very occasionally, the recommendations Claude Code makes are wrong. But if you are making the changes yourself, not letting the AI do it for you, and actually thinking about what you are doing, you can catch problems before they get buried in lots of other logic.

Occasionally I do let the AI write code for me. For example, I asked it to create a GUI for a testing and debugging tool that is not going to be shipped in the final product. Only I am going to use this tool, so I let the AI write it. And it worked extremely well! Hundreds of lines of throw-away code written in just a minute, something that would have taken me hours to do, and now that developer tool makes me more productive.

@ramin_hal9001 @neurovagrant really appreciate this perspective. I like that you mostly ask the AI for advice vs. having it actually do the thing, so you can understand what's happening, think critically, and continue learning. I haven't read any anecdotes about folks using it this way before. Thanks for sharing.