Language models have become privately controlled research infrastructure. This week, OpenAI deprecated the Codex model that ~100 papers have used—with 3 days’ notice. The company has said that newer models will only be stable for 3 months. Goodbye reproducibility!

It'll be interesting to see how developers will use these models in production if things break every couple of months.

By @sayashk and me on the AI Snake Oil book blog: https://aisnakeoil.substack.com/p/openais-policies-hinder-reproducible

@randomwalker @sayashk I’m quite unhappy with these changes myself. One of the biggest use cases I was working on in February involved Codex. I look away for 2–3 weeks, and they EOL the model itself. Incredible.