merve (@mervenoyann)
NVIDIA recently released C-RADIOv4, a SOTA image encoder. It comes in two sizes (shape-optimized, 431M; huge, 653M) and was built by distilling from SigLIP2, DINOv3, and SAM3, with transfer for segmentation. It is reported to be on par with or better than DINOv3, a much larger model.
https://x.com/mervenoyann/status/2018301356663079384
#nvidia #cradio #imageencoder #modeldistillation #computervision

merve (@mervenoyann) on X
NVIDIA released C-RADIOv4 sota image encoders past week 🙌🏻
> they come in shape-optimized (431M) and huge (653M)
> distilled from SigLIP2, DINOv3 and SAM3 (transferred for segmentation)
outperforms/on par with DINOv3 (10x larger than this model) 🔥
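
For context, RADIO-style training is multi-teacher feature distillation: a student backbone learns to match the patch features of several frozen teachers (here SigLIP2, DINOv3, and SAM3) through per-teacher projection heads. Below is a minimal PyTorch sketch of that idea; the teacher feature dimensions, cosine loss, and equal weighting are illustrative assumptions, not NVIDIA's actual recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTeacherDistiller(nn.Module):
    """Student backbone plus one projection head per frozen teacher (RADIO-style sketch)."""
    def __init__(self, student: nn.Module, student_dim: int, teacher_dims: dict):
        super().__init__()
        self.student = student
        # One linear head maps student features into each teacher's feature space.
        self.heads = nn.ModuleDict(
            {name: nn.Linear(student_dim, dim) for name, dim in teacher_dims.items()}
        )

    def forward(self, tokens, teacher_feats):
        feats = self.student(tokens)  # (batch, patches, student_dim)
        loss = 0.0
        for name, target in teacher_feats.items():
            pred = self.heads[name](feats)  # project into this teacher's space
            # Per-token cosine distance; teachers are frozen, so no grad reaches them.
            loss = loss + (1 - F.cosine_similarity(pred, target, dim=-1)).mean()
        return loss / len(teacher_feats)

# Toy usage: random tensors stand in for images and teacher outputs, and the
# feature dims below are guesses, not the real teachers' sizes.
teacher_dims = {"siglip2": 1152, "dinov3": 1024, "sam3": 256}
student = nn.Linear(384, 768)  # stand-in for a ViT emitting patch tokens
model = MultiTeacherDistiller(student, 768, teacher_dims)
tokens = torch.randn(2, 196, 384)
targets = {k: torch.randn(2, 196, d) for k, d in teacher_dims.items()}
model(tokens, targets).backward()
```
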
Alpár Kertész (@Criticality47)
The author shares hitting the limits of LTX-2 when generating longer music videos. The constraints showed up particularly in the distilled GGUF Q4_K_M build, and they mention possibly trying the simpler distilled version instead.
https://x.com/Criticality47/status/2015928720964333758
#ltx2 #gguf #musicgeneration #modeldistillation

Alpár Kertész (@Criticality47) on X
Welp, for me this is the limit of LTX-2’s capabilities when it comes to generating longer music videos @cocktailpeanut . I’m not giving up tho! I just need to accept that the distilled GGUF Q4_K_M version has its limits. The simple distilled version might work, but I need time to
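
For reference, GGUF quant tags like Q4_K_M mostly encode how many bits each weight gets, which sets the memory/quality ceiling the author is hitting. A back-of-the-envelope sketch; the bits-per-weight values are rough averages for llama.cpp's formats, and the 13B parameter count is purely illustrative, not LTX-2's actual size.

```python
# Rough checkpoint size: bytes ≈ params × bits_per_weight / 8.
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    return n_params * bits_per_weight / 8 / 1e9

# Approximate average bits/weight (Q4_K_M mixes 4- and 6-bit blocks, so ~4.8 overall).
for name, bpw in [("FP16", 16.0), ("Q8_0", 8.5), ("Q4_K_M", 4.8)]:
    print(f"{name:>7}: {quantized_size_gb(13e9, bpw):5.1f} GB for a 13B-parameter model")
```
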
Dan Goldwasser (@dgoldwas)
A short report that 10 seconds of 720p video rendered in 3 minutes with the 'distilled' model. The result was quite decent, and the author wants to try the non-distilled model for comparison. The tweet shows interest in the trade-off between speed and quality (versus the non-distilled model).
https://x.com/dgoldwas/status/2009332187263578417
#videogeneration #modeldistillation #rendering #ai

Dan Goldwasser (@dgoldwas) on X
@cocktailpeanut 10-seconds at 720p rendered in 3 minutes using distilled.... not too bad. Curious to try the non-distilled to see how that compares in the result.
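
The speedup from "distilled" checkpoints comes largely from step distillation: the student is trained to match the teacher's output in far fewer denoising steps, so wall-clock time scales roughly with step count. A self-contained toy timing sketch; the dummy denoiser and the 40-vs-8 step counts are made up for illustration, not the real sampler.

```python
import time
import torch

def sample(denoiser, steps, latent_shape=(1, 16, 32, 32)):
    """Toy iterative sampler: cost is dominated by one denoiser call per step."""
    x = torch.randn(latent_shape)
    for _ in range(steps):
        x = x - 0.1 * denoiser(x)  # stand-in for a real scheduler update
    return x

denoiser = torch.nn.Conv2d(16, 16, 3, padding=1)  # stand-in for a video diffusion net

with torch.no_grad():
    for name, steps in [("non-distilled", 40), ("distilled", 8)]:
        t0 = time.perf_counter()
        sample(denoiser, steps)
        print(f"{name:>13}: {steps:2d} steps, {time.perf_counter() - t0:.3f}s")
```
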
Distilling billion‑parameter models into lean student nets can slash latency by 2‑3× while cutting costs by double digits. From chatbots to recommendation engines, the gains are real. Dive into the benchmarks and see how open‑source pipelines are reshaping AI efficiency. #ModelDistillation #Latency #StudentModel #Chatbots
🔗 https://aidailypost.com/news/model-distillation-cuts-latency-23-lowers-costs-by-doubledigit
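
The classic recipe behind such student models is logit distillation: the student minimizes a temperature-softened KL divergence against the teacher's outputs, blended with the usual cross-entropy on hard labels. A minimal PyTorch sketch; the temperature and mixing weight are conventional defaults, not numbers from the linked article.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Hinton-style KD: soft-target KL (scaled by T^2) mixed with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy check with random logits standing in for real teacher/student outputs.
s = torch.randn(4, 10, requires_grad=True)
t = torch.randn(4, 10)
distillation_loss(s, t, torch.randint(0, 10, (4,))).backward()
```
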
- Nova Premier (Q1 2025):
• Ultimate #MachineLearning capabilities
• Advanced #ModelDistillation
• Complex #ReasoningAI tasks
• #TeacherModel functionality
🎨 Creative Suite #AIGC #GenerativeAI:
- #NovaCanvas:
• #Text2Image generation