@samth @wilbowma I'm a little confused by this one. This appears to be about the energy cost of inference, but I wasn't aware anyone was actually concerned about that? I thought the main concern was the energy cost of training (and then throwing away the trained model and training a new one, forever).
It seems patently obvious that inference is not so bad, and can in the future be made orders of magnitude more efficient.
Am I missing something here? Admittedly I did not read the article super carefully.
@samth @wilbowma I wonder if the problem might get solved automatically when the money runs out, if we start using models for longer without replacing them.
In the past it seemed like this would not be possible because the models need to be trained on up-to-date information. That may be true for the *chatbot* models, but my experience seems to indicate that the agents don't need to be trained on specific languages, libraries, etc. Point them at a repository and they learn it fresh, even if it's in a weird language.
@jonmsterling @samth @wilbowma The training cannot be separated from the inferring, because they will never stop retraining with every new batch of fresh stolen material to train upon.
This tech is not necessary, not revolutionary, and not useful enough to justify the real-world costs (that other people are paying). The whole cost is the whole cost. People are kidding themselves if they only look at the cost of inference: like a car driver saying "well *my car* only emits.."