https://www.theguardian.com/world/2026/feb/20/india-delhi-summit-ai-technology-us-economic-growth?CMP=Share_iOSApp_Other
It’s been fascinating to watch the G20 summit proceed, without US participation, under South Africa’s chairmanship. Pretty much every speech touched on fairness, equity, climate resilience, and how power is shifting eastward and southward.
India Joins the Global AI Leadership Stage!
Now, India has become the fourth country to host the AI Action Summit, which is anticipated to be held between November 2025 and January 2026.
#AIActionSummit #IndiaAI #TechLeadership #DigitalIndia #AIForAll #GlobalInnovation #AI2025
"Backed by ten governments – Finland, France, Germany, Chile, India, Kenya, Morocco, Nigeria, Slovenia and Switzerland – as well as an assortment of philanthropic bodies and private companies (including Google and Salesforce, which are listed as “core partners”), Current AI aims to “reshape” the AI landscape by expanding access to high-quality datasets; investing in open source tooling and infrastructure to improve transparency around AI; and measuring its social and environmental impact.
European governments and private companies also partnered to commit around €200bn to AI-related investments – currently the largest public-private AI investment commitment in the world. In the run-up to the summit, Macron announced France would attract €109bn worth of private investment in datacentres and AI projects “in the coming years”.
The summit ended with 61 countries – including France, China, India, Japan, Australia and Canada – signing a Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet at the AI Action Summit in Paris, which affirmed a number of shared priorities.
This includes promoting AI accessibility to reduce digital divides between rich and developing countries; “ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all”; avoiding market concentrations around the technology; reinforcing international cooperation; making AI sustainable; and encouraging deployments that “positively” shape labour markets.
However, the UK and US governments refused to sign the joint declaration."
#AI #AIActionSummit #AISafety #AIEthics #ResponsibleAI #AIGovernance
Governments, companies and civil society groups gathered at the third global AI summit to discuss how the technology can work for the benefit of everyone in society, but experts say competing imperatives mean there is no guarantee these visions will win out.
🚨 New #Publication just OUT! 🚨
#TrustworthyAI for Whom? #GenAI Detection Techniques of Trust Through Decentralized #Web3 Ecosystems
#Q1 @BDCC_MDPI #IF 3.7 #CiteScore 7.1
🔗https://www.mdpi.com/2504-2289/9/3/62
#AIAct #DraghiReport #AIActionSummit #HorizonEurope @HorizonEU @HorizonEnfield #OpenScience #OpenAccess
As generative AI (GenAI) technologies proliferate, ensuring trust and transparency in digital ecosystems becomes increasingly critical, particularly within democratic frameworks. This article examines decentralized Web3 mechanisms—blockchain, decentralized autonomous organizations (DAOs), and data cooperatives—as foundational tools for enhancing trust in GenAI. These mechanisms are analyzed within the framework of the EU’s AI Act and the Draghi Report, focusing on their potential to support content authenticity, community-driven verification, and data sovereignty. Based on a systematic policy analysis, this article proposes a multi-layered framework to mitigate the risks of AI-generated misinformation. Specifically, as a result of this analysis, it identifies and evaluates seven detection techniques of trust stemming from the action research conducted in the Horizon Europe Lighthouse project called ENFIELD: (i) federated learning for decentralized AI detection, (ii) blockchain-based provenance tracking, (iii) zero-knowledge proofs for content authentication, (iv) DAOs for crowdsourced verification, (v) AI-powered digital watermarking, (vi) explainable AI (XAI) for content detection, and (vii) privacy-preserving machine learning (PPML). By leveraging these approaches, the framework strengthens AI governance through peer-to-peer (P2P) structures while addressing the socio-political challenges of AI-driven misinformation. Ultimately, this research contributes to the development of resilient democratic systems in an era of increasing technopolitical polarization.
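To make technique (ii), blockchain-based provenance tracking, concrete: at its core it chains content records together so that any later tampering breaks a hash link. Below is a minimal Python sketch of that idea using only a hash chain (no actual blockchain network); all function and field names here are illustrative, not drawn from the ENFIELD project or the paper itself.

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder link for the first record in a chain


def record_provenance(chain, content, metadata):
    """Append a provenance record that commits to the content and to the previous record."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS_HASH
    record = {
        "content_hash": hashlib.sha256(content.encode()).hexdigest(),
        "metadata": metadata,
        "prev_hash": prev_hash,
    }
    # Canonical serialization (sorted keys) so the record hash is reproducible.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record


def verify_chain(chain):
    """Recompute every record's hash and check each back-link to its predecessor."""
    prev_hash = GENESIS_HASH
    for record in chain:
        if record["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in record.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if recomputed != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True
```

Usage: appending two records and then editing any field of an earlier record (say, its metadata) causes `verify_chain` to return `False`, which is the basic tamper-evidence property that provenance-tracking schemes build on; real deployments add signatures and distributed consensus on top.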