Ultralytics (@ultralytics)

Ultralytics stresses the importance of validating vision AI models against real-world performance, explaining that a model that trains well can still fail in production. Its validation features let you compare versions, track key metrics, and analyze errors before deployment to confirm the model is ready.

https://x.com/ultralytics/status/2036467607189020934

#ultralytics #visionai #validation #yolo #modeldeployment

Ultralytics (@ultralytics) on X

Your vision AI model can look great in training, and still fail in the real world. Validation catches that. Compare versions, track key metrics, and see where predictions go wrong before deployment. Know your model is ready before the world does. 👉Get started on Ultralytics

X (formerly Twitter)

Perplexity (@perplexity_ai)

An announcement that NVIDIA's Nemotron 3 Super model is now available on the Perplexity platform, in the Agent API, and in Perplexity's Computer product: a deployment update that broadens the model's accessibility and distribution channels.

https://x.com/perplexity_ai/status/2032521063918420286

#nvidia #nemotron3 #perplexity #agentapi #modeldeployment

Perplexity (@perplexity_ai) on X

NVIDIA’s Nemotron 3 Super is now available in Perplexity, Agent API, and Computer.

X (formerly Twitter)

Sudo su (@sudoingX)

A short comment: the whole product family is shipping, "from pocket to 3090", welcoming the chance for small (lightweight) models to run on pocket devices such as phones. It hints at broader mobile and edge deployment.

https://x.com/sudoingX/status/2029252426310795652

#edgeai #mobile #llm #gpu #modeldeployment

Sudo su (@sudoingX) on X

from pocket to 3090, whole family ships. love seeing the small models get phone time

X (formerly Twitter)

Building a machine learning model is only half the journey; deploying it brings your work to life.
This roadmap covers everything from dataset selection and model training to deployment with Streamlit, Gradio, or cloud platforms such as AWS and GCP, helping you go from idea to interactive app fast.

Don’t just train models. Deploy them.

📕 https://ebokify.com/machine-learning

#MachineLearning #DataScience #MLOps #AI #ModelDeployment #Python #DeepLearning #Streamlit #Gradio #AWS #GCP

🤖 MLOps: The Missing Link in Your Machine Learning Strategy 🔗

MLOps bridges the gap between data science and engineering, creating sustainable ML systems that actually work in the real world.

A proper MLOps workflow includes:
🔄 Automated data ingestion
🧪 Continuous model training
📊 Performance monitoring
🚨 Drift detection
🚀 Seamless redeployment
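The drift-detection step in the list above can be sketched with a population stability index (PSI) over binned feature values. This is a minimal illustration in plain Python; the 0.2 threshold is a common rule of thumb, and all names here are hypothetical, not part of any particular MLOps toolkit:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample and a live sample."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def bin_fracs(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]
    e, a = bin_fracs(expected), bin_fracs(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
baseline = [random.gauss(0.0, 1.0) for _ in range(5000)]  # training-time data
shifted  = [random.gauss(0.8, 1.0) for _ in range(5000)]  # live data, mean shift

assert psi(baseline, baseline) < 0.1  # same distribution: no drift
assert psi(baseline, shifted) > 0.2   # shifted mean: flag for retraining/redeployment
```

In a real pipeline this check would run on a schedule against the serving logs, with the baseline frozen at training time.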

👀 https://link.illustris.org/mlopscode2prod

#MachineLearning #MLOps #DataScience #AIEngineering #ModelDeployment #DataDrift #AIPipelines

MLOps Demystified: Deploying Your Machine Learning Models to Production – Seamlessly

📊 What is MLOps? The Complete Guide to Machine Learning Operations 📊 Master the complexities of MLOps with our comprehensive guide; we break down how MLOps b...

YouTube
Building and Deploying a Hugging Face Model with Docker

Discover how to build and deploy a Hugging Face AI model for NLP tasks using Docker. Step-by-step tutorial using Python and the Hugging Face Transformers library.

LINUXexpert

Question about R, mlflow and models...

I am trying to register a R model using the crate flavor in mlflow, and I have some doubts.

I have been able to log and register the model. I have also tested that I can load the model again and use it for prediction (inputs/outputs are data.frames).

I was thinking... that would mean I should write the inference part in R, wouldn't it?

How could I deploy the model so it can be served as a general web service (REST API), not actually relying on final users to use R?

I'm now quite tired, but the only solution I have found so far is to use plumber to expose an API that receives a JSON with all the inputs as simple types and builds the data.frame inside, as I have always done.

Do you think this can be done directly using a crated function? Has anybody done something similar?

Thanks in advance. I think this is a discussion worth having, as there is a lack of documentation on this topic for us R users. :(
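For what it's worth, the "receive JSON with simple types, build the frame inside, return predictions" pattern described above is language-agnostic. Here is a minimal sketch of its shape using only the Python standard library; a plumber endpoint in R would follow the same structure, and the model, field names, and port are all hypothetical:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(rows):
    """Stand-in for the real model; here, a trivial linear score per record."""
    return [2.0 * r["x1"] + r["x2"] for r in rows]

def handle_payload(body: bytes) -> bytes:
    """Parse a JSON list of records, run inference, return JSON predictions."""
    rows = json.loads(body)  # e.g. [{"x1": 1.0, "x2": 3.0}, ...]
    return json.dumps({"predictions": predict(rows)}).encode()

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        out = handle_payload(body)
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

# To serve: HTTPServer(("0.0.0.0", 8080), PredictHandler).serve_forever()
```

The key point is that the HTTP layer only ever sees simple JSON types; the frame (or data.frame) is an internal detail of the handler, so callers never need R installed.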

#rstats #ml #machinelearning #models #mlflow #ai #datascience #data #prediction #mlops #modeldeployment