What's the AI strategy for companies that don't have the capital to train their own foundation models?

The world is full of companies that know their customers, their domain and their social value well, but lack the capital to turn this position into foundation models of their own.

How are they to survive and even prosper in the AI revolution?

Well, they need to play the cards they've got, not the cards they want. They need to position themselves as gardens of knowledge creation. They can use frontier models through APIs, or open-weights models internally, but they will need to tend their data assets so they grow into knowledge and skill assets for AIs.

There are a few principles to follow here to succeed. First of all, approach LLM/VLM-based automation the typical way: automate and scale up the work. This goes without saying, really; everyone does this.

But while doing it, aggregate your data asset:
- Store every inference call you make durably and for the long term, with ample metadata so you can later understand what the call was about.
- Build and refine knowledge-bases and RAG assets automatically.
- Systematically document tacit knowledge, and make your company-internal discussions and other processes recorded and accessible to AIs: Slack conversations, emails, Confluence, things like that.
- Ingest external data which relates to your domain in a proper #DataHoarder style.
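As a minimal sketch of the first bullet, a durable inference-call log can be as simple as an append-only SQLite table. The schema and field names here (model, task, metadata) are illustrative, not a standard:

```python
# Minimal sketch of a durable inference-call log, assuming a local SQLite
# store; the schema and the example field values are illustrative.
import json
import sqlite3
import time

def open_log(path=":memory:"):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS inference_calls (
        ts REAL, model TEXT, task TEXT,
        prompt TEXT, response TEXT, metadata TEXT)""")
    return db

def log_call(db, model, task, prompt, response, **metadata):
    # Metadata goes in as JSON so later refinement passes can
    # reconstruct what the call was about.
    db.execute("INSERT INTO inference_calls VALUES (?, ?, ?, ?, ?, ?)",
               (time.time(), model, task, prompt, response,
                json.dumps(metadata)))
    db.commit()

db = open_log()
log_call(db, "frontier-model-v1", "invoice-triage",
         "Classify: ...", "category: utilities", source="erp", lang="en")
rows = db.execute("SELECT task, metadata FROM inference_calls").fetchall()
print(rows[0][0])  # invoice-triage
```

In practice this would be a managed warehouse table rather than SQLite, but the point is the same: log everything, with metadata, from day one.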

Build processes which refine all this data further into knowledge, for example by storing it in a RAG-enabled graph database.
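At its simplest, the graph side of such a database is a triple store. This toy in-memory version (entity and relation names invented for illustration) stands in for a real graph database with retrieval attached:

```python
# Toy in-memory triple store standing in for a RAG-enabled graph
# database; entities and relations are illustrative.
from collections import defaultdict

class TripleStore:
    def __init__(self):
        self.triples = set()
        self.by_subject = defaultdict(set)

    def add(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))
        self.by_subject[subj].add((rel, obj))

    def neighbours(self, subj):
        # Everything the knowledge base asserts about one entity,
        # ready to be dropped into a RAG context window.
        return sorted(self.by_subject[subj])

kb = TripleStore()
kb.add("AcmeCorp", "supplies", "widgets")
kb.add("AcmeCorp", "located_in", "Rotterdam")
print(kb.neighbours("AcmeCorp"))
```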

Then you will need to build some level of refinement process, at the very least doing rejection sampling on your collected inference data. There are many techniques to utilize here in a synergistic, mutually supporting way; enough to write many books about.
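A rejection-sampling pass over logged data can be sketched like this; the scoring function is a stand-in heuristic, where in practice a judge model, automatic verifiers or human spot-checks would produce the score:

```python
# Minimal rejection-sampling pass over collected inference data.
# judge() is a stand-in heuristic for a judge LLM or verifier.
def judge(sample):
    # Keep only non-degenerate responses; a real judge would score
    # correctness, style, safety, etc.
    return 1.0 if sample["response"].strip() else 0.0

def rejection_sample(samples, threshold=0.5):
    return [s for s in samples if judge(s) >= threshold]

samples = [
    {"prompt": "Summarise Q3 report", "response": "Revenue grew 4%..."},
    {"prompt": "Summarise Q4 report", "response": ""},  # degenerate output
]
kept = rejection_sample(samples)
print(len(kept))  # 1
```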

You'll get datasets good for fine-tuning and, separately, for benchmarking. Benchmarking datasets you can already use to select the best available foundation models for your use cases. But you should also measure and prove the value of the training-data exports you produce.

You do this by fine-tuning smaller models with the data and noting how much better they become at your use case. You don't have to train the best foundation models here; you just want to prove that your data asset is valuable and builds knowledge and skills in existing foundation models.
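The proof itself is a simple before/after comparison on a held-out benchmark. In this sketch the two predictors are hypothetical stand-ins for a base model and the same model fine-tuned on your export:

```python
# Sketch of proving a data asset's value: compare a base model and a
# fine-tuned model on the same held-out benchmark. predict_base and
# predict_tuned are hypothetical stand-ins for real model calls.
def accuracy(predict, benchmark):
    correct = sum(predict(q) == a for q, a in benchmark)
    return correct / len(benchmark)

benchmark = [("2+2", "4"), ("capital of France", "Paris"), ("3*3", "9")]

predict_base  = {"2+2": "4", "capital of France": "Paris", "3*3": "6"}.get
predict_tuned = {"2+2": "4", "capital of France": "Paris", "3*3": "9"}.get

base_acc  = accuracy(predict_base, benchmark)
tuned_acc = accuracy(predict_tuned, benchmark)
uplift = tuned_acc - base_acc
print(round(uplift, 2))  # 0.33
```

The uplift number is what you publish alongside the data export: it is the measured evidence that the asset builds skills in existing models.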

This data asset will be valuable in a future where generalist AIs try to serve your social purpose. Leverage it.

If the data has constraints, such as personally identifiable information or other limitations, even better. Then you take a position as a synthetic data generator in this domain: generate synthetic data which doesn't contain the restricted aspects, and produce valuable training and fine-tuning data through that indirection layer.
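The indirection layer can be sketched as follows: real records never leave, only schema-preserving synthetic records do. The regex and field names are illustrative, not a complete PII scrubber:

```python
# Sketch of a synthetic-data indirection layer: sensitive fields are
# regenerated from a synthetic pool while the task-relevant structure
# is kept. The regex and fields are illustrative, not a full scrubber.
import random
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def contains_pii(text):
    return bool(EMAIL.search(text))

def synthesize(record, rng):
    # Replace the sensitive field, keep the learnable content.
    fake_names = ["A. Example", "B. Sample"]
    return {"customer": rng.choice(fake_names),
            "complaint": record["complaint"]}

rng = random.Random(0)
real = {"customer": "Jane Doe <jane@corp.example>",
        "complaint": "Invoice arrived twice."}
synthetic = synthesize(real, rng)
print(contains_pii(str(synthetic)))  # False
```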

You will need to reimagine and direct your company to become a garden of knowledge creation in your domain, to carry your purpose.

What if your valuable data asset is copied and stolen? Don't worry about it too much. You're not building a static asset but a living process.

You are the closed feedback loop for AIs to improve in servicing your purpose. You can only be displaced from this position if someone else fulfills your purpose better, by building a better garden for knowledge and skills around which intelligent entities orbit and gather.

#AIStrategy #AI #AGI #FoundationModels

One thing to understand about physical foundation models or robotic foundation models is in-context learning.

You should aim to frame the problem and the data in a fashion where the model can learn to control the embodiment in-context, rather than training it without any possibility to calibrate and discover, at the start of the session, what it inhabits.

Otherwise you won't get truly universal models, but models which constantly hedge their bets and are forced to make their control signal not only generalist, but generalist across all training worlds and embodiments *at the same time*.

This means you'll be stuck in a frame where you need a control adapter layer separately trained per embodiment, because the foundation model is incapable of discovering in-context what it inhabits, so its outputs are by necessity the kind that should work somewhat OK across all possible worlds.

The model also becomes unable to learn embodiment-specific control policies without hacks.
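To make the hedging argument concrete, here is a toy sketch, assuming a one-dimensional actuator whose unknown gain varies per embodiment: a policy given a calibration prefix (action, observation pairs from the session start) identifies the gain in-context and hits the target exactly, while a prefix-free policy can only apply the average training gain and misses on any non-average embodiment.

```python
# Toy illustration: an embodiment with unknown actuator gain g.
def rollout_prefix(g):
    # One probe action at session start reveals the embodiment's gain.
    probe_action = 1.0
    observed = g * probe_action          # calibration observation
    g_hat = observed / probe_action      # in-context identification
    target = 5.0
    return g * (target / g_hat)          # reaches the target exactly

def rollout_hedged(g, training_gains=(0.5, 1.0, 2.0)):
    # No calibration prefix: the policy must hedge with the average
    # gain over all training embodiments.
    g_avg = sum(training_gains) / len(training_gains)
    target = 5.0
    return g * (target / g_avg)          # misses whenever g != g_avg

print(rollout_prefix(2.0))   # 5.0
print(rollout_hedged(2.0))   # overshoots
```

The calibrated policy is exact on every embodiment; the hedged one is only correct on the average embodiment, which is exactly the "generalist across all training worlds at the same time" failure mode described above.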

I believe that many practical problems down the line stem from people not realizing that these foundation models need in-context learning for embodiment calibration.

#PhysicalFoundationModels #UniversalEmbodiment #robots #FoundationModels

Foundation models: big potential, real risks 🤖
Foundation models bring big opportunities (faster tools, smart assistants, innovation in education and healthcare) but also come with worrying risks. They are easy to misuse, biased, or prone to slipping out of control if not managed carefully.
#FoundationModels #AI #ArtificialIntelligence #MachineLearning #MôHìnhNềnTảng #TríTuệNhânTạo #AIrủiRo #AIpotentials

https://dev.to/paperium/on-the-opportunities-and-risks-of-foundation-models-eb5

On the Opportunities and Risks of Foundation Models


I am looking for an ambitious team which needs a very experienced AI generalist engineer.

It would be my preference to work on any of the following topics:
- AI medicine
- Artificial General Intelligence
- Robotics foundation models
- Making military drones autonomous and devious to defend liberal democracy
- Anything ambitious which improves the world

I would also prefer working remotely from Spain for example through an employer-of-record service such as Parakar. It might be that I will have to compromise on these preferences a bit.

I bring long experience building machine intelligence across a very wide range of domains. I have worked with many kinds of robots, oncology medtech, automated mapping, military, government, heavy industry, supply chains, global enterprise systems, radio networks, tiny embedded systems, and cutting-edge, novel AI training at massive scale for all kinds of generalist purposes.

I have a long list of publications, even though I haven't worked in publish-or-perish academia. I am very experienced in remote teamwork, leadership, start-ups and big corporate environments.

I am now available for new, ambitious and meaningful challenges, ready to once more change the world for the better!

I would love to get boosts, introductions, referrals, or information about growing teams with ambitious goals.

https://www.linkedin.com/in/terokeskivalkama

#AI #AGI #OpenToWork #drones #robotics #FoundationModels #FediHire

The @Cyberagentur is launching HEGEMON, a research competition unique in Europe for evaluating and adapting foundation models for security-critical applications. Four teams will develop benchmarks and AI models for complex tasks in geoinformation.
More: https://t1p.de/7ct97
#Cyberagentur #HEGEMON #KI #FoundationModels #Cybersicherheit #Benchmarking
https://nachrichten.idw-online.de/2025/12/10/europas-weg-zu-einer-transparenteren-anwendung-von-foundation-models
Europe's path to a more transparent application of foundation models - Cyberagentur

Four teams develop AI benchmarks and models for security-critical applications. With HEGEMON, a research competition unique in Europe begins: four teams compete against each other to adapt generative foundation models for security-critical contexts systematically, neutrally and traceably for the first time. At the centre are demanding tasks from geoinformation, and the question of how, and which, internationally pre-trained models […]

Cyberagentur
Not many people are talking about Apple Foundation Models. They’re free and unlimited on device. Yes you need the newest hardware, but this is only v1 and the potential is huge. Anyone trying them?
#iOSDev #AI #FoundationModels
Today I tested #copilot for #xcode and with some handholding I managed to produce an app that uses Apple #foundationmodels to produce haikus. First it tried to cheat me by statically generating text without using the model, but in the end I got it to work. The UI needed the most manual tweaking, as expected. So far it feels like having a motivated junior developer next to you. Now if only I can convince the model that it really always needs to be in 5-7-5 syllable form 😬
Perspective Intelligence 1.2 is now out! There are many accessibility improvements, including accessibility settings. All Access users can now access Maps points of interest, contact lookup, and reminders search. Memories have been updated as well. You can download the app at https://apps.apple.com/us/app/perspective-intelligence/id6448894750 #AI #Apple #FoundationModels #IndieDev

Discover RosettaCommons/foundry, a game-changer for biomolecular research with shared ML trainers #BiomolecularResearch #AIforScience #RosettaCommons

The RosettaCommons/foundry repository serves as a centralized hub for biomolecular foundation models, facilitating collaboration and innovation in the field of structural biology. By providing shared trainers and pipeline components, researchers can leverage...

#RosettaCommons #BiomolecularResearch #FoundationModels #MachineLearning