I don't know much about large language models, but this new Chinese one sounds interesting:
"DeepSeek-R1 scored as high as or higher than OpenAI’s o1 on a variety of third-party benchmarks (tests that measure AI performance at answering questions on various subjects), and was reportedly trained at a fraction of the cost (around $5 million), with far fewer graphics processing units (GPUs), which are under a strict export embargo imposed by the U.S., OpenAI’s home turf.
But unlike o1, which is available only to paying ChatGPT subscribers on the Plus tier ($20 per month) and more expensive tiers (such as Pro at $200 per month), DeepSeek-R1 was released as a fully open-source model, which also explains why it has quickly rocketed up the charts of the AI code-sharing community Hugging Face’s most downloaded and active models.
And because it is fully open-source, people have already fine-tuned and trained many task-specific variations of the model, such as versions small enough to run on a mobile device, or combinations with other open-source models. Even for development purposes, DeepSeek’s API costs more than 90% less than OpenAI’s equivalent o1 model.
Most impressively of all, you don’t even need to be a software engineer to use it: DeepSeek has a free website and mobile app even for U.S. users."
https://venturebeat.com/ai/why-everyone-in-ai-is-freaking-out-about-deepseek/
(1/n)