I am painfully aware that I learned to write software over four decades ago, when it made sense to try to fit everything into 16K, but I'm really struggling with the attitude that it's a great idea to just throw more hardware at a problem.
LLMs are being used, for the most part, as a sophisticated form of "more hardware", but that's far from the only place we do this.
And as a result, we've managed, as a society, to volunteer ourselves for a different kind of "Vimes' boots" scenario, where most of our effort and expense goes into maintaining and patching, rather than into clever solutions that make for lasting improvement.
Problems that could have been thought through and addressed at dev time, or even at compile time, are now addressed (and re-addressed, over and over) at run time.
It appears cheaper at launch, and it impresses the bosses, but over time it simply funnels money into the AI service providers' pockets, in exchange for a possibly small but persistent loss in performance.