Damn, the #ChatGPT 4 usage cap also applies to testing and implementing #CustomGPTs. And I swear there's some usage multiplier on top, like there are internal requests (RAG lookups or Actions) happening that count against the quota.

This does not feel like a very serious development environment!

The gameplay experience is also going to be like correspondence chess... you get halfway through character creation and then hit "You've reached the current usage cap for GPT-4, please try again in 2 hours."

This Custom GPT Tester is _not_ running fresh sessions when you change the prompt!

I actually would like to be able to continue an old transcript with an edited prompt. But the current behavior is neither one thing nor the other... every time I edit anything I seem to get a new session, yet GPT also seems to know something about the old prompt.

(This might be superstition... but very specific formatting instructions seem to be preserved)

The document retrieval is entirely opaque, including to the person developing the Custom GPT. This is absurd! I have no way to know why the GPT is responding the way it does: is it reading my documents but ignoring them? Is it reading them but interpreting them differently than I think? Is it even reading the right documents? If I change the prompt, will it read them differently? I'd hardly be able to tell if it did!

This is not a system for serious prompt developers.

@ianbicking "We don't know why GPT responded this way" is the current state of the art though. That's totally expected.
@stilescrisis It's not always easy to see why GPT responded in a particular way to a particular prompt, but most of the time it's fairly clear, especially once you get used to GPT ("prompt engineering"). But if you are using the API, the input to GPT is always _extremely_ clear, no hidden parts at all! And the output, though produced in an opaque way, is still fully complete and inspectable. All of that is missing in Custom GPTs.
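
To illustrate what I mean by "no hidden parts": here's a minimal sketch of an API call with the openai Python client. The model name, system prompt, and document snippet are just illustrative, but the point stands: every piece of text the model sees is right there in the request, including any retrieved material, because you paste it in yourself.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# With the API, the *entire* input is whatever you put in `messages`.
# If you want retrieval, you do it yourself and paste the text in,
# so you know exactly what the model was given.
retrieved = "Character creation uses 4d6, drop the lowest, for each stat."  # hypothetical doc snippet

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are the game master for a tabletop RPG."},
        {"role": "system", "content": f"Reference material:\n{retrieved}"},
        {"role": "user", "content": "Walk me through character creation."},
    ],
)

# The output is likewise fully inspectable: the complete message text,
# the finish reason, and the token usage all come back in the response.
print(response.choices[0].message.content)
print(response.usage)
```

Nothing gets injected behind your back, and you can diff two runs because you control both inputs. That's exactly the visibility the Custom GPT builder doesn't give you.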