@Techaltar you said you don’t think there will be a single moment where one model suddenly becomes much better than the others. I’d like to challenge that: whoever first manages to build an LLM with an effectively unlimited context size, because it can re-train itself during inference, will suddenly be leaps ahead of the competition. Thoughts? (Maybe for next week’s Q&A?)
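(For readers curious what "re-training during inference" could even mean: here is a deliberately toy sketch of the idea. Instead of keeping past tokens in a fixed attention window, the model folds each incoming token into its parameters as it goes, so its memory of the stream is unbounded. The `AdaptiveBigram` class is entirely hypothetical and nothing like a real LLM; it just makes the mechanism concrete.)

```python
from collections import defaultdict

class AdaptiveBigram:
    """Toy stand-in for 'training at inference time': memory of the past
    is stored in the weights (here, bigram counts) rather than in a
    finite context window, so there is no hard context limit."""
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, tokens):
        # "Inference-time re-training": update parameters from the stream
        # instead of buffering the whole history.
        for prev, nxt in zip(tokens, tokens[1:]):
            self.counts[prev][nxt] += 1

    def predict(self, prev):
        # Predict the most frequent successor seen so far, if any.
        nxt = self.counts.get(prev)
        if not nxt:
            return None
        return max(nxt, key=nxt.get)

model = AdaptiveBigram()
model.observe("the cat sat on the mat".split())
model.observe("the cat purred".split())
print(model.predict("the"))  # prints "cat": learned from the stream, no window kept
```

Real proposals in this direction (sometimes called test-time training or continual learning) would update neural network weights with gradient steps, which is far harder to do stably; this sketch only shows why weight updates remove the context-length ceiling.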
@can I don't know enough about the tech to answer definitively 🤔
@Techaltar that's fair, I respect that