To mirror what others have already said, LLMs are actually really fucking bad at this.
For starters, most of the common optimization problems are already silently fixed for you by the compiler. Modern compilers are crazy smart: they fold constants, inline functions, hoist loop invariants, and eliminate dead code without the developer even having to think about it. That means the optimizations left over are precisely the ones machines are bad at handling, and LLMs haven't closed that gap.
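As a toy illustration (my example, not anything specific): write the naive loop below and GCC or Clang at -O2 will typically recognize it and emit the closed-form n*(n+1)/2 with no loop at all, so hand-"optimizing" it yourself buys you nothing.

```c
// Toy example: modern compilers at -O2 typically replace this whole loop
// with the closed-form n*(n+1)/2 (final-value replacement), so the
// "slow" loop never actually runs in the optimized binary.
unsigned long sum_to(unsigned long n) {
    unsigned long total = 0;
    for (unsigned long i = 1; i <= n; i++)
        total += i;
    return total;
}
```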
I confess, I’m the kind of dev who lives for a good optimization challenge, and since I’m required to use an LLM at work, I’ve set one loose on a few of these problems myself. The results are not good. More often than not, optimization is a big-picture endeavor: you’re thinking about how the pieces fit together and whether the project can be restructured to avoid bottlenecks entirely. AI seems weakest at big-picture work, so it tends to home in on small details that won’t buy you much improvement. What it will do, however, is add lots of verbose code chasing a few extra milliseconds here and there, which is sub-optimal in its own way.
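To make the big-picture point concrete, here's a classic toy case (hypothetical function, mine not the post's): the win comes from restructuring the loop so `strlen` runs once instead of on every iteration, turning O(n²) into O(n). Micro-tweaks inside the loop body can't compete with that.

```c
#include <string.h>

// The structural fix: a naive version calls strlen(s) in the loop
// condition, re-scanning the whole string on every iteration (O(n^2)).
// Hoisting it out is a one-line big-picture change worth more than any
// amount of cycle-shaving inside the loop body.
size_t count_spaces(const char *s) {
    size_t n = strlen(s);  // computed once, not once per iteration
    size_t count = 0;
    for (size_t i = 0; i < n; i++)
        if (s[i] == ' ')
            count++;
    return count;
}
```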
Readability and maintainability are dimensions of optimization in their own right, because they speed up debugging and future development. AI does not optimize for these AT ALL. It often tries to compensate by filling the code with comments, but that actually makes the problem worse once your codebase is cluttered with hundreds of useless annotations like this:
//Initialize the app
InitializeApp();