Decoupled DiLoCo: Resilient, Distributed AI Training at Scale

Google’s new distributed architecture keeps AI training runs on track across distant data centers, with exceptional efficiency – even when hardware fails.

Google DeepMind

🚧 Why 88% of Industrial AI pilots fail

You’ve seen it before — promising pilots that stall.
29 of 33 never reach production.

The culprit? A project mindset in a systems world.

To scale, AI must become part of the enterprise’s nervous system — not a bolt-on chasing KPIs.

Enter the Cybernetic Enterprise: sense, decide & adapt in real time.

🔗https://www.zuehlke.com/en/insights/industrial-ai-at-scale-the-cybernetic-imperative-for-leaders

#AI #IndustrialAI #CyberneticEnterprise #AIAtScale #DigitalTransformation

📢 Your data isn’t just being analyzed—it’s being understood.

Discover how Netflix, Spotify & Amazon use 574M+ daily interactions to craft experiences just for you. GenAI is no longer predicting trends—it’s predicting you.

🧠 This isn’t marketing. It’s memory, mood, and meaning—personalized at scale.
👉 Read the full piece here:
https://medium.com/@rogt.x1997/genais-magic-mirror-how-hyper-personalization-is-rewriting-the-customer-experience-404b9c23a3e0

#GenAI #CustomerExperience #AIatScale #HyperPersonalization

GenAI’s Magic Mirror: How Hyper-Personalization is Rewriting the Customer Experience

Imagine opening Netflix and instantly seeing “Cozy British Mysteries for Rainy Evenings” curated because it’s drizzling in London, and you’ve just binged two detective shows. This isn’t luck — it’s…


[2310.16764] ConvNets Match Vision Transformers at Scale https://arxiv.org/abs/2310.16764

I'm just posting this paper here as a test of a new way for me to track papers I want to read. Does anyone have a best practice for that? How do you keep a list of papers to read that you come across while on your phone or desktop?

#toread #llm #cnn #aiatscale

ConvNets Match Vision Transformers at Scale

Many researchers believe that ConvNets perform well on small or moderately sized datasets, but are not competitive with Vision Transformers when given access to datasets on the web-scale. We challenge this belief by evaluating a performant ConvNet architecture pre-trained on JFT-4B, a large labelled dataset of images often used for training foundation models. We consider pre-training compute budgets between 0.4k and 110k TPU-v4 core compute hours, and train a series of networks of increasing depth and width from the NFNet model family. We observe a log-log scaling law between held out loss and compute budget. After fine-tuning on ImageNet, NFNets match the reported performance of Vision Transformers with comparable compute budgets. Our strongest fine-tuned model achieves a Top-1 accuracy of 90.4%.
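The log-log scaling law the abstract mentions can be illustrated with a small curve-fitting sketch. The data below is entirely synthetic (the exponent, prefactor, and compute range are made up for illustration; the paper reports its own measured fits), but it shows why a power law appears as a straight line in log-log space:

```python
import numpy as np

# Hypothetical power-law scaling: held-out loss L = a * C**(-b),
# where C is pre-training compute. All numbers here are invented
# for illustration; only the compute range mirrors the abstract.
rng = np.random.default_rng(0)
compute = np.logspace(np.log10(0.4e3), np.log10(110e3), 12)  # TPU-v4 core-hours
true_a, true_b = 5.0, 0.15
loss = true_a * compute**(-true_b) * np.exp(rng.normal(0, 0.01, compute.size))

# In log-log space the power law is linear: log L = log a - b * log C,
# so a straight-line fit recovers the exponent and prefactor.
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
fit_b, fit_a = -slope, np.exp(intercept)
print(f"fitted exponent b ~ {fit_b:.3f}, prefactor a ~ {fit_a:.2f}")
```

With low noise the fitted exponent lands close to the true value, which is the same diagnostic the paper uses to argue NFNets follow a clean compute-loss scaling law.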
