It's like measuring car performance by gallons per mile instead of miles per gallon.
@romeu I asked LLM AI to tell me what productivity metric we should use and it told me LLM AI tokens used.
I also asked my coke dealer what productivity metric we should use and he told me grams of coke used.
I don't trust my coke dealer but I sure do absolutely trust my LLM AI.
Our whole society is rife with this: mistaking consumption for productivity.
Yuuuuuup.

@romeu
I think the best way I've seen to handle token usage minimums was someone who had the LLM spit out Alexander the Great fanfic that they just stuck in a document to never read.
Which also goes to show how idiotic minimums are.
I have a good solution:
- please generate code
- please add unit tests 100% coverage
- please add e2e tests 100% coverage
- please ingest the logs of the tests to see if all the errors are useful
