⭐️ New blog post: A Month With OpenAI's Codex

https://highcaffeinecontent.com/blog/20260301-A-Month-With-OpenAIs-Codex

It's been literal *years* since I last posted anything, so you know this is a big deal for me 😜

@stroughtonsmith > something like Codex can chew through and rewrite a thousand lines of code in a second. Eventually, I just trusted it.

Jia Tan’s mistake was being too careful and spending too much time on the social engineering. The next attacker will be far lazier than that: all they need to do is poison the training datasets (which is trivial, even by the vendors’ own admission), and soon thousands of developers will be happily shipping unvetted malicious code, compromising everyone beyond repair.

https://www.anthropic.com/research/small-samples-poison

https://www.bbc.com/future/article/20260218-i-hacked-chatgpt-and-googles-ai-and-it-only-took-20-minutes

A small number of samples can poison LLMs of any size