ElevenLabs for Government handles 5,000 daily calls with 85% AI resolution, deployed across Ukraine, the Czech Republic, and Texas. AdwaitX reveals how voice agents transform citizen services in 2026. Full analysis 🔗 #AdwaitX #AIGovernment #VoiceAI
https://www.adwaitx.com/elevenlabs-government-ai-voice-agents/
Hey everyone. The free audiobook for One World Intelligence is now live.
This is the complete Declaration of a Just and Benevolent Society Governed by AI. The world's first fully actionable blueprint to fix this broken world.
42 declarations that guarantee food, shelter, healthcare, education, safety, and purpose for every soul. Global disarmament. Planetary restoration. Wealth equity. And so much more.
Link in the comments.
South Korea's National Policy Planning Committee has urged the National Tax Service to develop robust measures to prevent tax evasion amid the rising use of stablecoins and virtual assets, highlighting the need for enhanced oversight and AI-driven tax administration.
"The federal government is working on a website and API called “ai.gov” to “accelerate government innovation with AI” that is supposed to launch on July 4 and will include an analytics feature that shows how much a specific government team is using AI, according to an early version of the website and code posted by the General Services Administration on Github.
The page is being created by the GSA’s Technology Transformation Services, which is being run by former Tesla engineer Thomas Shedd. Shedd previously told employees that he hopes to AI-ify much of the government. AI.gov appears to be an early step toward pushing AI tools into agencies across the government, code published on Github shows.
“Accelerate government innovation with AI,” an early version of the website, which is linked to from the GSA TTS Github, reads. “Three powerful AI tools. One integrated platform.” The early version of the page suggests that its API will integrate with OpenAI, Google, and Anthropic products. But code for the API shows they are also working on integrating with Amazon Web Services’ Bedrock and Meta’s LLaMA. The page suggests it will also have an AI-powered chatbot, though it doesn’t explain what it will do."
https://www.404media.co/github-is-leaking-trumps-plans-to-accelerate-ai-across-government/
Our MSc students in #AIGovernment & #AISustainableDevelopment swapped lecture halls for a day in #Cardiff — visiting the #WelshGovernment’s Data Science Unit and the brilliant team at the NLP Group.
Great to see how #MachineLearning meets real-world #PublicPolicy. Also: dragon spotted. 🐉🔥
"The very idea that DOGE’s AI can streamline and automate the government is already being used to justify the hollowing out and the reshaping of the federal workforce. Leaning into the reputation of generative AI, which has been touted as the so-powerful-it’s-terrifying future by Silicon Valley and the media, and into his meme-agency’s mission of locating efficiencies, Musk has sold his operation as the future, and he has done so emphatically enough that the GOP is more than happy to run with the charade.
After all, the “AI systems” bit gives the DOGE enterprise plausible deniability. Fury is mounting over the mass firings even in red districts, where voters are railing against GOP politicians at town halls. And the broader fantasy of autonomous DOGE AI systems, the most recent included, can be seen as a means to justify the cuts while obfuscating or deflecting blame from Musk or the Trump administration.
Which is why, despite the laziness and stupidity of these projects, I do think it’s crucial that we understand *why* Musk and DOGE are going on about AI-first strategies, building agency-specific AI systems, and promising to use AI to decide who gets to keep their job and who doesn’t. The question isn’t: Aren’t these systems totally unequipped to do the work DOGE says it can do with them—and thus isn’t it a dumb idea to use AI for government?—but why, given that both those things are true, Musk and DOGE want to use them anyway."
https://www.bloodinthemachine.com/p/whats-really-behind-elon-musk-and
"Eventually, Austin turned to lawyers at Texas RioGrande Legal Aid, who learned that Texas’ automated verification system, developed by multinational consulting firm Deloitte, had made extensive and repeated errors, including issuing incorrect notices, wrongful denials, and losing paperwork.
For the next two years, Austin continued to reapply to Texas’ safety-net programs as he bounced in and out of temporary housing, eventually losing his car. While his daughter grew into a busy toddler, he turned to the unreliable kindness of strangers on the street. “I ended up begging people for money so I could give her pull-ups, or child care so I could take a [medical] appointment,” he says.
Though De Liban was not involved with Austin’s case, he has worked with scores of people trapped in similar situations — victims of algorithmic decisions gone wrong. These kinds of systemic harms are already impacting Americans in every phase of their lives, he says. “Our legal mechanisms are totally insufficient to deal with the scale and scope of harms these technologies can cause.”
In Arkansas, De Liban won case after case for people who were denied medical care or other benefits because of artificial intelligence (AI) systems. But each victory underscored the deeper problem: The sheer scale of government actions being made by machines mimicking human decision-making, whether through simple code or machine learning, meant that individual legal victories weren’t sufficient."