Building the Second Floor First: Why the U.S. and EU Are Struggling to Fix Digital Systems

By Cliff Potts, CSO and Editor-in-Chief of WPS News

Baybay City, Leyte, Philippines — April 8, 2026

The Problem Nobody Wants to Admit

There is a quiet contradiction happening in global technology policy right now.

Both the United States and the European Union are investing in advanced systems—artificial intelligence, blockchain, and next-generation data frameworks—while still struggling with the basics. The foundation is not finished, but construction has already moved to the next level.

A simple way to understand it:

Both systems are trying to build the second floor of a house while the first floor is still under construction.

That is not just inefficient. It is risky.

What’s Happening in the United States

In the United States, the system is driven by the private market.

This creates speed, but it also creates fragmentation.

Different companies build different systems, often without coordination. In healthcare, finance, and even search technology, data is stored in incompatible formats. Systems do not “talk” to each other easily.
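
To make that fragmentation concrete, here is a minimal sketch in Python. It assumes two hypothetical providers that store the same patient visit in different shapes (different field names, date formats, and coding choices), so a mapping step is needed before the records can even be compared. All field names and values are invented for illustration.

    from datetime import datetime

    # Hypothetical record from provider A (invented field names and formats)
    record_a = {"patient_name": "Jane Doe", "visit_date": "03/15/2025", "dx": "I10"}  # coded diagnosis
    # The same visit as stored by provider B, in a different shape
    record_b = {"name": {"given": "Jane", "family": "Doe"},
                "encounter": {"date": "2025-03-15"},
                "diagnosis": "hypertension"}  # free text instead of a code

    def normalize_a(rec):
        """Map provider A's format into a shared shape."""
        return {
            "name": rec["patient_name"],
            "date": datetime.strptime(rec["visit_date"], "%m/%d/%Y").date().isoformat(),
            "diagnosis_code": rec["dx"],
        }

    def normalize_b(rec):
        """Map provider B's format into the same shared shape (code lookup is a stub)."""
        text_to_code = {"hypertension": "I10"}  # toy lookup table, not a real terminology service
        return {
            "name": f'{rec["name"]["given"]} {rec["name"]["family"]}',
            "date": rec["encounter"]["date"],
            "diagnosis_code": text_to_code.get(rec["diagnosis"], "unknown"),
        }

    print(normalize_a(record_a) == normalize_b(record_b))  # True only after explicit mapping

Until that mapping work is done, nothing downstream, including any AI layer, can treat the two records as the same fact.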

At the same time, major changes are happening:

  • AI systems are changing how data is used
  • Search engines are shifting toward AI-generated answers, such as Google’s Search Generative Experience (SGE)
  • Companies are being forced to rethink how information is structured

The problem is that many organizations are not ready for these changes.

They are still dealing with:

  • Poor data organization
  • Outdated infrastructure
  • Systems built for humans, not machines

This leads to a situation where companies are trying to adapt to advanced AI tools without having clean, structured data to feed those systems (Google, 2023).

What’s Happening in the European Union

The European Union is taking a different approach.

Instead of relying on the market, it is building centralized frameworks and regulations. One example is the European Health Data Space, which aims to standardize how health data is shared across countries.

The EU is focusing on:

  • Standardized data formats, such as HL7 FHIR (see the sketch after this list)
  • Cross-border interoperability
  • Strong regulatory oversight
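
To give a sense of what a standardized format looks like in practice, the sketch below builds a deliberately minimal FHIR-style Patient resource as plain JSON in Python. The fields shown (resourceType, identifier, name, birthDate) do exist in the FHIR Patient resource, but real resources carry many more fields and must be validated against the specification, so treat this as an illustration of the idea rather than a conformant example.

    import json

    # A deliberately minimal, illustrative FHIR-style Patient resource.
    # Real FHIR Patient resources support many more fields and require
    # validation against the specification; this only shows the shared shape.
    patient = {
        "resourceType": "Patient",
        "identifier": [{"system": "https://example.org/national-id", "value": "1234567890"}],
        "name": [{"family": "Doe", "given": ["Jane"]}],
        "birthDate": "1980-04-12",
    }

    # Because every participating system agrees on this shape, any of them can
    # serialize, exchange, and parse the record without bespoke mapping code.
    print(json.dumps(patient, indent=2))

The value is not in the JSON itself but in the agreement: once every system expects the same shape, exchange becomes a parsing problem rather than a negotiation.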

In some cases, blockchain is being explored as a way to:

  • Track data access
  • Verify records
  • Manage consent

However, blockchain is not the foundation. It is an added layer of trust.
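
One way to picture an "added layer of trust" is a hash-chained audit log: every access event records the hash of the previous event, so tampering with history becomes detectable. The Python sketch below is a toy illustration of that idea under my own simplifying assumptions; it does not describe any specific EU system, and a real deployment would add signatures and distribution across parties.

    import hashlib
    import json
    from datetime import datetime, timezone

    def add_entry(log, actor, action, record_id):
        """Append an access event whose hash covers the previous entry's hash."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
            "record_id": record_id,
            "prev_hash": prev_hash,
        }
        entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        log.append(entry)
        return log

    def verify(log):
        """Recompute every hash; any edit to an earlier entry breaks the chain."""
        prev_hash = "0" * 64
        for entry in log:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev_hash:
                return False
            if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
                return False
            prev_hash = entry["hash"]
        return True

    log = []
    add_entry(log, actor="dr.smith", action="read", record_id="patient-123")
    add_entry(log, actor="lab-42", action="update", record_id="patient-123")
    print(verify(log))  # True; altering any field in an earlier entry makes this False

Note what the sketch does not do: it does not clean, structure, or integrate the underlying records. That remains foundation work.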

The EU’s challenge differs from that of the U.S.:

  • Systems are more coordinated
  • But implementation is slower
  • And real-world integration remains uneven

Even with strong standards, the system is still being built while new technologies are layered on top (European Commission, 2022).

The Core Issue: Foundation vs. Innovation

Both regions are facing the same underlying problem.

They are trying to solve advanced problems before solving basic ones.

Those basics include:

  • Clean, structured data
  • Reliable system interoperability
  • Consistent identity management
  • Real-time data exchange

Without these, everything else becomes unstable.

Adding AI or blockchain to a weak system does not fix it.

It exposes the weaknesses faster.

Why This Matters Now

This issue is no longer theoretical.

Artificial intelligence is forcing a shift in how systems operate.

AI requires:

  • Structured data
  • Standardized formats
  • Machine-readable systems

If the data is messy, the output will be unreliable.
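
A small, hypothetical before-and-after illustrates why. The same facts written as free text are hard for a machine to consume reliably, while the structured version can at least be checked before it reaches any model. The field names and values below are invented for illustration.

    # Free-text note: a human can read it, but a machine has to guess.
    messy = "Pt Jane Doe seen 15 Mar '25, BP one-forty over ninety, started on lisinopril??"

    # Structured, machine-readable version of the same facts.
    structured = {
        "patient": "Jane Doe",
        "date": "2025-03-15",
        "blood_pressure": {"systolic": 140, "diastolic": 90, "unit": "mmHg"},
        "medication": "lisinopril",
    }

    def validate(record):
        """Cheap sanity checks that only become possible once the data is structured."""
        required = {"patient", "date", "blood_pressure", "medication"}
        missing = required - record.keys()
        if missing:
            return f"missing fields: {sorted(missing)}"
        bp = record["blood_pressure"]
        if not (bp["systolic"] > bp["diastolic"] > 0):
            return "implausible blood pressure reading"
        return "ok"

    print(validate(structured))  # "ok"; the free-text note offers no such guarantee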

This creates pressure on both systems:

  • In the U.S., companies will be forced to clean up data to stay competitive
  • In the EU, regulatory frameworks will be tested by real-world use

In both cases, the second floor cannot stand without a finished first floor.

What Happens Next

The likely outcome is not a clean solution, but a correction.

Systems will not be rebuilt from scratch. Instead:

  • Weak infrastructure will fail under pressure
  • Stronger standards will gradually emerge
  • AI will act as a forcing function for improvement

Blockchain will likely remain in a limited role, mainly for:

  • Audit trails
  • Verification
  • Consent tracking

But it will not become the backbone of these systems.

The real work is still at the foundation level.

The Bottom Line

The United States and the European Union are approaching the same problem from different directions.

  • The U.S. moves fast but lacks coordination
  • The EU coordinates well but moves slowly

Both are attempting to build advanced systems on incomplete foundations.

That is not sustainable.

At some point, the construction has to stop long enough to finish the first floor.

Until then, the second floor will remain unstable.

If you read this and it matters, help me keep it going: https://www.patreon.com/cw/WPSNews

References

European Commission. (2022). European Health Data Space. https://health.ec.europa.eu/ehealth-digital-health-and-care/european-health-data-space_en

Google. (2023). Search Generative Experience (SGE) overview. https://blog.google/products/search/generative-ai-search

Mandel, J. C., Kreda, D. A., Mandl, K. D., Kohane, I. S., & Ramoni, R. B. (2016). SMART on FHIR: A standards-based, interoperable apps platform for electronic health records. Journal of the American Medical Informatics Association, 23(5), 899–908.

Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system.

#AISystems #DataInteroperability #DigitalInfrastructure #EuropeanUnionPolicy #Technology #UnitedStatesTechnology #WPSNews

