RT @gabriel_ilharco
Today we are releasing a CLIP ViT-L/14 model with 79.2% zero-shot accuracy on ImageNet.

Our model outperforms OpenAI's CLIP by a large margin, and even outperforms larger models (ViT-g/14) trained on LAION-2B.

Check it out at https://huggingface.co/laion/CLIP-ViT-L-14-DataComp.XL-s13B-b90K!