OpenAI released a plugin that allows you to use Codex to review code generated by Claude Code.

I assume this is some snarky way of implying you need OpenAI’s coding agent to review the bad code generated by Anthropic’s.

Pettiness abounds.
https://github.com/openai/codex-plugin-cc

GitHub - openai/codex-plugin-cc: Use Codex from Claude Code to review code or delegate tasks.

@carnage4life some folks keep saying they like to have one review the other's code.
@carnage4life So if they can beat their sycophancy bias, they'll finally have found a viable way to put ads (for themselves) in their product? "This code sucks, I would have done a better job"
@carnage4life If I heard my colleague correctly, this is good news. In his experience, Codex currently performs well at analytical tasks. I think the ability to hook different models into the Claude UX is a bonus.
@carnage4life Using alternative models to review code changes from your current model is a great use case, so this makes obvious sense.
@carnage4life It’s not pettiness. For the same reason developers should not be the ones testing their own code, as their tests would embed the same possibly incorrect assumptions that exist in their code, having one LLM check another LLM’s code is helpful, even if in the end they were all trained on the same corpus of the Internet and Anna’s Archive. I’ve found Copilot’s reviews of Claude-generated code excellent, despite the bad reputation Copilot enjoys in these parts.
@carnage4life They are not wrong. Unfortunately.