Reports say Amazon is mandating human code reviews by senior engineers on AI-generated code to reduce failures in production.
Several questions:
Aren’t you just offsetting the time spent on an activity senior engineers find engaging (writing good code) with one they generally detest (reading bad code)?
Who actually learns and gets better in this cycle?
Don’t PR reviews mainly catch maintainability issues, not bugs?
People are working on disciplined practices to improve the code produced in collaboration with AI agents. People send me links to these projects all the time, and I’m thankful for that.
I’d also appreciate links to any research quantifying the impact of these practices on delivery speed and code quality.
Claude Code, GitHub, and the Obsidian editor are my go-to document management solution. I’m having great success producing deep and broad analysis quickly with these tools in situations where we have to work from outlines, capture notes, and understand and process a lot of written context.
Success requires deliberate practice and a personal commitment: every word that appears in a final draft is either verifiably true or qualified with attribution, and is written, or reviewed and copy-edited to the word, by a human.
As my mom grew older, she dreamed of her mother. She saw it as a visitation and a comfort.
Now I grow older. I dream of my mother. I’d rather see it as a gift of the mind, letting me sit beside the aging in me at an arm’s remove. To contemplate it with awe, fear, and sadness, but with a bit of grace.