i just keep pushing with Codex to see if it will ever come up with a solution.
i've turned up its reasoning to "Extra High" and now it says it's going back to first principles and reading the docs.
interesting. i didn't expect this.
since it can't actually test a drag/drop via a CLI, it created a one-off build with a fuck-ton of NSLogs in all the drag callbacks and asked me to run a test, then copy/paste the logged output.
it's what i do too when the AppKit documentation is especially unhelpful.
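for context, the instrumentation described above might look something like this: NSLog calls in each NSOutlineViewDataSource drag callback, so one test run produces a trace you can copy back out. this is a minimal sketch, not the actual build Codex produced; the class name and log format are made up.

```swift
import AppKit

// Hypothetical data source that logs every drag/drop callback,
// so a single manual test produces a trace of AppKit's behavior.
final class DragDebugDataSource: NSObject, NSOutlineViewDataSource {

    // Called when a drag begins for a given item.
    func outlineView(_ outlineView: NSOutlineView,
                     pasteboardWriterForItem item: Any) -> NSPasteboardWriting? {
        NSLog("drag start: item=%@", String(describing: item))
        return String(describing: item) as NSString
    }

    // Called repeatedly as the drag moves; logging here reveals
    // which items and child indexes AppKit proposes as drop targets.
    func outlineView(_ outlineView: NSOutlineView,
                     validateDrop info: NSDraggingInfo,
                     proposedItem item: Any?,
                     proposedChildIndex index: Int) -> NSDragOperation {
        NSLog("validateDrop: proposedItem=%@ childIndex=%d",
              String(describing: item), index)
        return .move
    }

    // Called when the user releases the drag.
    func outlineView(_ outlineView: NSOutlineView,
                     acceptDrop info: NSDraggingInfo,
                     item: Any?,
                     childIndex index: Int) -> Bool {
        NSLog("acceptDrop: item=%@ childIndex=%d",
              String(describing: item), index)
        return true
    }
}
```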
it's surprising that it's trying to deduce the NSOutlineView behavior since it doesn't intrinsically understand it.
AI is wild.
@cdf1982 i guess i figured the docs — at least the ones that have been around a few years — are a solid part of the model already.
but i suspect when the feature is not often used, the API is crappy, and there’s a much simpler deprecated API, then there just isn’t a lot of example code to train the model on.
but that was also what made it a good test case.
@isaiah I’m not sure there’s a direct connection between the amount of training data available and the quality of the output: for instance, it produces decent 4D Database code despite little of it being available on GitHub.
I think a combo of large context, docs, web search and logging can produce interesting results, but my trial by fire will soon be the ONVIF protocol and its XML envelopes.