⭐️ Had a lot of questions online & offline about how I start a new project using Codex, so here is what I do every time:

• I create a new iOS -> Universal App (Basic) using this Xcode template: https://github.com/steventroughtonsmith/appleuniversal-xctemplates
• I add my AppleUniversalCore package: https://github.com/steventroughtonsmith/appleuniversalcore
• I drop in the latest version of my CodingStyle.md: https://gist.github.com/steventroughtonsmith/ee58b8c7fe6557a073ac792bcb891267

And then the starter prompt, some variation of:

"Please read the CodingStyle document here, and familiarize yourself with the project structure"

Also, as of the time of writing, I'm using the GPT-5.3-Codex model at Medium for everything, with my model tone set to Pragmatic (instead of Friendly).

I did not have a great experience with GPT-5.4 and I don't recommend it (it's not a coding-specific model anyway); it would rather discuss things with you, like ChatGPT, than write code, which really frustrated me. It might work for you.

@stroughtonsmith It's curious that you've had bad luck with 5.4. While not code-specific like Codex models, 5.4 incorporates all of the coding capabilities of 5.3-codex. That said, the non-specific nature of it could be messing up results in certain scenarios.

So far I haven't had any worse luck with it than 5.3-codex. The fast mode is a notably welcome change in 5.4. I've also been really impressed with the results when using it within the Codex app.

@sclyde yeah, it really likes to talk, and it will argue back more. I didn't find it produced any better code than before; I wasted a lot more time trying to get it to write what I wanted, and it still had to automatically compact context just as often as before, even with a 1M token window, so I found no benefit to using it. If a GPT-5.4-Codex model comes out, I'll try that

@stroughtonsmith Interesting... are you using it from the ChatGPT app, or the Codex app, or Copilot or something else? And for sure in Agent mode, not Ask or Plan?

Admittedly, 5.2 had the same issue (for me) until they released 5.2 Codex.

@sclyde Codex app; no slash commands
@stroughtonsmith @sclyde Did you actually opt into the 1M token context? You’d have to enable that with a setting in your config and it raises the token cost by 2x. I haven’t tried it myself, so no idea if it’s worth it.
@hendrik_kueck @sclyde wasn't aware of that, but not gonna try again with that model
@stroughtonsmith @sclyde “GPT‑5.4 in Codex includes experimental support for the 1M context window. Developers can try this by configuring model_context_window and model_auto_compact_token_limit. Requests that exceed the standard 272K context window count against usage limits at 2x the normal rate.”
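For anyone wanting to try it, the two settings from that quote would go in the Codex CLI config file. A minimal sketch, assuming the usual `~/.codex/config.toml` location; the exact values are illustrative, not documented defaults:

```toml
# ~/.codex/config.toml — illustrative values, not defaults.
# Opts into the experimental 1M context window for GPT-5.4.
# Note: requests beyond the standard 272K window bill at 2x.
model = "gpt-5.4"
model_context_window = 1000000
model_auto_compact_token_limit = 900000
```

The auto-compact limit is set a bit below the window so compaction kicks in before the context overflows.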