* jina-reranker-v3
* jina-code-embeddings
* jina-embeddings-v4
* ReaderLM-v2
* jina-clip-v2
Jina-VLM-2.4B reaches SOTA for multilingual visual question answering, pairing a SigLIP2 encoder with a Qwen3 decoder. It averages 72.3 across 8 VQA benchmarks, with 78.8 on MMMB and 74.3 on Multilingual MMBench. Trained on 5M multimodal samples and 12B text tokens spanning 29 languages. #AI #MachineLearning #JinaAI #ArtificialIntelligence
https://www.reddit.com/r/LocalLLaMA/comments/1ph9pg9/new_jinavlm24b_reaches_sota_for_multilingual/
Accuracy-improvement techniques for DeepSearch: URL Ranking
https://qiita.com/xxyc/items/094f889879ce8e9ec7e0?utm_campaign=popular_items&utm_medium=feed&utm_source=popular_items
When compared directly with OpenAI's 8K-context model text-embedding-ada-002, jina-embeddings-v2 stands out in quality, and its long context length is a game changer. Don't let a missing model implementation stop you from realizing your AI project in Elixir: three steps are enough to convert a Python model to Elixir.
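Whichever model produces the vectors (jina-embeddings-v2, text-embedding-ada-002, or a port of either), embedding quality is typically judged by cosine similarity between a query vector and document vectors. A minimal sketch of that comparison step, with toy vectors standing in for real model output (the vectors and their dimensionality here are illustrative, not actual embeddings):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity: dot product of the vectors divided by their norms."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 4-dimensional "embeddings"; a real model returns hundreds of dimensions.
query = np.array([0.1, 0.3, 0.5, 0.7])
doc_close = np.array([0.1, 0.28, 0.52, 0.69])  # nearly parallel -> score near 1
doc_far = np.array([0.9, -0.2, 0.1, -0.5])     # different direction -> low score

print(cosine_similarity(query, doc_close))
print(cosine_similarity(query, doc_far))
```

Ranking documents by this score is the retrieval step most embedding benchmarks measure, so it is the same comparison regardless of which language the model runs in.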