Using drop boxes to file taxes? The CRA is getting rid of them soon
The agency says there are 45 of these boxes still operating across the country where Canadians can drop off tax returns, payments and other documents.
#Canada #CanadaRevenueAgency #taxfiling
https://globalnews.ca/news/11737766/drop-boxes-file-taxes-cra/

Associated Press: Callers to Washington state hotline press 2 for Spanish and get accented AI English instead. “For months, callers to the Washington state Department of Licensing who have requested automated service in Spanish have instead heard an AI voice speaking English in a strong Spanish accent. The agency has since apologized and says it fixed the problem.”

https://rbfirehose.com/2026/03/01/associated-press-callers-to-washington-state-hotline-press-2-for-spanish-and-get-accented-ai-english-instead/

State of Massachusetts: Governor Healey Announces Massachusetts to Become First State to Deploy ChatGPT Across Executive Branch. Uh. “Today, Governor Maura Healey announced the launch of the ChatGPT-powered Artificial Intelligence (AI) Assistant for the state’s workforce, with the goal of making government work better and faster for people…. Massachusetts will be the first state to adopt […]

https://rbfirehose.com/2026/02/16/state-of-massachusetts-governor-healey-announces-massachusetts-to-become-first-state-to-deploy-chatgpt-across-executive-branch/

ResearchBuzz: Firehose

ComputerWeekly: Large language models provide unreliable answers about public services, Open Data Institute finds. “Popular large language models (LLMs) are unable to provide reliable information about key public services such as health, taxes and benefits, the Open Data Institute (ODI) has found.”

https://rbfirehose.com/2026/02/15/computerweekly-large-language-models-provide-unreliable-answers-about-public-services-open-data-institute-finds/

ResearchBuzz: Firehose

Now to see what he can do as mayor: “In treating constituent problems as urgent and solvable, [Assemblyman Mamdani’s staff] actually provided an answer to a strangely radical hypothetical question: What if every day government services actually worked?”

#Politics #USPolitics #MamdaniPlans #ProgressivePolitics #GovernmentServices #MakeGovernmentDoItsJob #PublicServicesMatter #PublicServicesReform #GovernmentForThePeople

https://www.motherjones.com/politics/2025/12/nycs-new-socialist-mayor-has-a-radical-proposal-have-government-do-its-job/

NYC's socialist mayor has a radical proposal: making government do its job

Rent freezes and free buses can come later. But what if landlords obeyed the law and transit ran on time?

Mother Jones

Shifting from "citizens chasing procedures" to "data running on behalf of citizens" – Senior Lieutenant General Nguyễn Văn Long emphasized that this is a standout achievement of Đề án 06 (Project 06), accelerating digital transformation, shortening processing times, reducing administrative procedures, and serving citizens more effectively.
#DigitalTransformation #ChuyenDoiSo #DeAn06 #GovernmentServices #HànhChínhCông #DữLiệuSố #SmartNation

https://vtcnews.vn/chuyen-tu-nguoi-dan-chay-theo-thu-tuc-sang-du-lieu-chay-thay-cho-nguoi-dan-ar993750.html

Shifting from 'citizens chasing procedures' to 'data running on behalf of citizens'

According to Senior Lieutenant General Nguyễn Văn Long, one of the standout results of Đề án 06 (Project 06) is the shift from "citizens chasing procedures" to "data running on behalf of citizens".

Báo điện tử VTC News

What Does a Good Spec File Look Like?

Most legacy government systems exist in a state of profound documentation poverty. The knowledge lives in the heads of retiring employees, in COBOL comments from 1987, in binders that may or may not reflect current behavior. Against this baseline, the question of what makes a “good” spec file takes on different dimensions than it might in greenfield development.

Common Elements

Any spec worth writing answers the fundamental question: what are we building and why? Beyond that, good specs share a few specific characteristics:

Clear success criteria. Not just features, but how you’ll know the thing works. This matters especially when AI agents are generating implementations—they need something concrete to validate against.

Constraints and boundaries. What’s out of scope. What technologies or patterns to use or avoid. Performance requirements. AI tools are prone to scope creep and assumption-making without explicit boundaries.

Examples of expected behavior. Concrete inputs and outputs, edge cases, error states. These serve as both specification and implicit test cases.

Context about the broader system. How this piece fits into what exists. AI assistants lack awareness of surrounding code and architectural decisions unless you tell them.
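To make these elements concrete, here is one way a spec fragment might be captured as structured data, so that its behavior examples double as checkable test cases. This is a sketch, not part of any established methodology; every name and value below is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SpecExample:
    """A concrete input/output pair; doubles as an implicit test case."""
    description: str
    given: dict
    expect: dict

@dataclass
class SpecFragment:
    purpose: str                 # what we are building and why
    success_criteria: list[str]  # how we'll know the thing works
    out_of_scope: list[str]      # explicit boundaries for humans and AI agents
    examples: list[SpecExample] = field(default_factory=list)

# Hypothetical fragment for an eligibility check
spec = SpecFragment(
    purpose="Determine benefit eligibility from reported household income",
    success_criteria=["Every example below passes against the implementation"],
    out_of_scope=["Income verification against external agencies"],
    examples=[
        SpecExample("at the threshold is eligible",
                    given={"monthly_income": 2000, "threshold": 2000},
                    expect={"eligible": True}),
        SpecExample("above the threshold is not",
                    given={"monthly_income": 2001, "threshold": 2000},
                    expect={"eligible": False}),
    ],
)

def check(impl, fragment: SpecFragment) -> list[str]:
    """Run every spec example against an implementation; return failures."""
    failures = []
    for ex in fragment.examples:
        if impl(**ex.given) != ex.expect:
            failures.append(ex.description)
    return failures

# A trivial implementation that satisfies the examples above
def eligibility(monthly_income, threshold):
    return {"eligible": monthly_income <= threshold}
```

The payoff of the examples list is that `check` can be pointed at any candidate implementation, human-written or AI-generated, and report which spec statements it violates.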

The SpecOps Context

When modernizing legacy government systems, specs serve a different purpose than typical development documentation. They’re not just implementation guides—they are artifacts that preserve institutional knowledge. This changes what “good” looks like.

A SpecOps specification document must work for multiple audiences simultaneously: domain experts who verify that the spec captures policy intent, software developers and AI coding agents who need precision to generate correct implementations, and future humans who need to understand why the system behaves a certain way years from now—possibly after everyone currently involved has moved on.

That last audience is the one most spec formats neglect entirely.

Three States, Not One

Legacy system specs can’t just describe “what the system does.” They need to distinguish between:

  • Current system behavior—what the legacy code actually does today, bugs and all
  • Current policy requirements—what the system should do according to governing statutes and regulations
  • Technical constraints—what the system cannot do regardless of policy, due to missing integrations or platform limitations
These three things can be in alignment or tension at any moment. And that alignment can shift over time without the code changing—a policy update tomorrow can transform compliant behavior into a violation.

    Known Deviation Patterns

Consider the example of a benefits system that should verify income against state tax agency records, but the legacy system only captures self-reported income because the integration with the tax agency was never built. A good spec would make this explicit:

    Policy requirement: Per [directive], applicant income must be verified against tax agency records prior to benefit approval.

    Current implementation: Self-reported income only. Applicant provides income information on Form X.

    Deviation reason: No interface to tax agency income verification service exists. Integration requested in 2019, not funded.

    Modernization note: Modern implementation should include tax agency income verification integration.

    This surfaces the gap, documents why it exists, and gives the modernization effort clear direction—without pretending the legacy system does something it doesn’t.
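If specs live alongside code, the four fields above can also be captured as data, so tooling can list every place the system knowingly diverges from policy. A sketch with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class KnownDeviation:
    """One documented gap between policy and the legacy implementation."""
    policy_requirement: str      # what governing policy says must happen
    current_implementation: str  # what the legacy system actually does
    deviation_reason: str        # why the gap exists
    modernization_note: str      # direction for the replacement system

income_verification = KnownDeviation(
    policy_requirement=("Applicant income must be verified against "
                        "tax agency records prior to benefit approval."),
    current_implementation="Self-reported income only, captured on Form X.",
    deviation_reason=("No interface to the tax agency verification service; "
                      "integration requested in 2019, not funded."),
    modernization_note="Include tax agency income verification integration.",
)

def deviation_report(deviations):
    """Render known deviations as a plain-text report for reviewers."""
    lines = []
    for d in deviations:
        lines.append(f"POLICY:  {d.policy_requirement}")
        lines.append(f"ACTUAL:  {d.current_implementation}")
        lines.append(f"WHY:     {d.deviation_reason}")
        lines.append(f"NEXT:    {d.modernization_note}")
    return "\n".join(lines)
```

A report like this gives reviewers, auditors, and modernization teams one place to see every acknowledged policy gap.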

    Explicit Ambiguity as a Feature

There’s something almost radical about a methodology that says: write down what you don’t know. Traditional documentation can project false confidence. It often describes how things should work and quietly omits the messy parts.

    A spec that explicitly marks areas of tension or uncertainty is more honest, more useful for risk assessment, and a better starting point for modernization. It’s an invitation for future clarification rather than a false endpoint.

    A spec with unresolved tension is better than no reviewable documentation at all. 

    Policy Grounding

Government system specs need explicit links to authorizing statutes, regulations, or directives. Not just “these items are excluded from income calculations” but “per 42 USC § 1382a, the following items are excluded from income calculations.”

    This is the why that survives personnel turnover. It’s what allows future teams to evaluate whether behavior that was correct five years ago still aligns with current policy.
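One way to keep that “why” checkable is to attach the authority to each rule rather than citing it once at the top of a document. A minimal sketch; the rule name and statement are hypothetical, and the citation is the one quoted above:

```python
# Each spec rule carries the authority it derives from, so future teams
# can re-check the affected rules when a cited provision changes.
RULES = [
    {
        "rule": "income_exclusions",
        "statement": "Certain items are excluded from countable income "
                     "(see the cited provision for the list).",
        "authority": "42 USC § 1382a",
    },
    # ... additional rules, each with its own citation
]

def rules_citing(authority: str) -> list[str]:
    """Find every rule grounded in a given statute, for impact analysis."""
    return [r["rule"] for r in RULES if r["authority"] == authority]
```

When a provision is amended, `rules_citing` gives a future team the exact set of spec rules to re-verify.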

    Decision Records

    When domain experts verify a spec, they make judgment calls—especially where legacy behavior diverges from current policy understanding. Those decisions need to be captured in the spec, not in a separate document that gets lost.

    The spec becomes the repository of institutional reasoning, not just institutional behavior.

    Accessible or Precise?

    The SpecOps approach says that specs should be “readable by domain experts while detailed enough to guide implementation.” This is genuinely hard.

    Options include stratified specs (plain-language summaries with expandable technical detail), executable specs (written as tests that are simultaneously human-readable and machine-verifiable), or annotated specs (a single verbose document where technical precision is explained inline).

    Given that the spec is meant to be the source of truth that outlasts implementations, keeping everything in one artifact—even at the cost of verbosity—reduces the risk of layers drifting apart over time.
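Of these options, the executable spec is the easiest to sketch: tests whose names read as plain-language requirements, so the same artifact is reviewable by domain experts and verifiable by machines. A minimal illustration, with a hypothetical rule and function name:

```python
import unittest

def countable_income(gross_income: int, exclusions: list[int]) -> int:
    """Hypothetical rule: countable income is gross income minus
    documented exclusions, and never falls below zero."""
    return max(gross_income - sum(exclusions), 0)

class CountableIncomeSpec(unittest.TestCase):
    """Each test name doubles as a plain-language spec statement."""

    def test_exclusions_are_subtracted_from_gross_income(self):
        self.assertEqual(countable_income(1000, [200, 100]), 700)

    def test_countable_income_never_falls_below_zero(self):
        self.assertEqual(countable_income(100, [500]), 0)

# Run the spec with: python -m unittest <this_file>
```

A domain expert can confirm the test names match policy intent without reading the assertions; a CI pipeline can run the same file against any candidate implementation.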

    The Road Ahead

We’re still in the early days. Questions remain open:

    • How granular should policy references be?
    • What’s the right way to represent known deviations?
    • How should specs age—versioning, or is git history enough?
    • What level of detail helps AI agents versus adding noise?

    These will get answered empirically as more agencies adopt the approach. The methodology will evolve. The important thing is to start—to surface questions that were previously invisible, to give future teams something to interrogate rather than nothing at all.

    Because the knowledge is what matters. Everything else is implementation details.

    #ai #artificialIntelligence #chatgpt #governmentServices #legacySystems #systemModernization

    The Future is Ahead of Schedule

    MCP Apps and the Acceleration of Just-in-Time Interfaces

In August, Dan Munz and I wrote about the end of civic tech’s interface era, arguing that the rise of AI-generated, just-in-time interfaces would fundamentally change how civic technologists think about designing government services. We acknowledged that these ideas were still mostly theoretical—“this is still an idea that lies in the future,” we wrote. “But the future is getting here very quickly.”

    That’s starting to look like a pretty significant understatement.

Just three months later, the organizations behind the Model Context Protocol (MCP) — the open standard that connects AI assistants to data sources and tools — have announced MCP Apps, a formal extension for delivering interactive user interfaces through the MCP protocol. What we described in our earlier post as an emerging concept is now being standardized by Anthropic, OpenAI, and the MCP community. The timeline from theoretical possibility to a formal specification guiding production implementations wasn’t years or even months. It was weeks.

And we’d better get used to it – this is what change looks like in the AI era.

    From Concept to Standard in Record Time

When we initially wrote about just-in-time interfaces, we pointed to early experiments and proofs of concept: Shopify’s internal prototyping with generative AI, Google’s Stitch and Opal projects, AWS’s explorations with PartyRock. These seemed like interesting signals, but they were scattered efforts using different approaches and solving similar problems in ways that were not obviously compatible.

MCP Apps seems poised to change that. It provides a standardized way for AI tools to deliver interactive interfaces — not as a speculative idea, but as a specification that developers can start implementing today. The extension enables AI-powered tools that can present rich, interactive interfaces while maintaining the security, auditability, and consistency that production systems will require.

The design is deliberately lean, starting with HTML-based interfaces delivered through sandboxed iframes. But the implications reach further. As the team behind this effort notes, this is starting to look like “an agentic app runtime: a foundation for novel interactions between AI models, users, and applications.”

This matters for government digital services because it validates the core thesis of our earlier post: the constraints that forced civic designers to build one interface for everyone are eroding faster than most people anticipated. Certainly faster than we did.

    The Infrastructure and the Ingredients

    MCP Apps provides the delivery mechanism — a standardized way to serve interactive interfaces through AI systems. The specification itself is deliberately lean, focusing on core infrastructure: HTML templates delivered through sandboxed iframes, JSON-RPC protocols for communication, and multiple layers of security (iframe sandboxing, predeclared templates, auditable messaging, and user consent requirements).
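For readers unfamiliar with it, JSON-RPC is a lightweight envelope: a JSON object carrying a version marker, an id, a method name, and params. The sketch below builds and structurally checks such an envelope; the `ui/render` method and its params are illustrative placeholders, not the actual MCP Apps API:

```python
import itertools
import json

_ids = itertools.count(1)  # monotonically increasing request ids

def jsonrpc_request(method: str, params: dict) -> str:
    """Build a JSON-RPC 2.0 request envelope (the framing MCP builds on)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    })

def is_valid_request(raw: str) -> bool:
    """Minimal structural check for a JSON-RPC 2.0 request object."""
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return (
        isinstance(msg, dict)
        and msg.get("jsonrpc") == "2.0"
        and "id" in msg
        and isinstance(msg.get("method"), str)
    )

# Hypothetical call asking a server to render a predeclared HTML template.
req = jsonrpc_request("ui/render", {"template": "appointment-picker"})
```

Because the envelope is this simple and this well established, developers can debug and audit MCP traffic with completely ordinary tooling.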

    What MCP Apps doesn’t specify is what makes those interfaces good or appropriate for government use. That’s where the foundational work civic technologists have already done becomes critical.

    When we wrote about just-in-time interfaces in August, we noted that Shopify’s generative UI prototyping works in part because their design system is built on tokens—named variables that store key aspects of design systems like colors, spacing, and typography. We noted that “tokens aren’t sufficient to make just-in-time UIs a reality, but they probably are foundational.”

MCP Apps now provides the plumbing. But the quality of AI-generated government interfaces will still depend on having the right ingredients: well-structured design systems, clear interaction principles, and encoded policy logic. The U.S. Web Design System, the VA.gov Design System, the National Cancer Institute Design System, and other design systems used in government use tokens. That existing infrastructure positions government agencies to potentially benefit from MCP Apps when the time comes to experiment with dynamic interfaces — not because MCP Apps requires tokens, but because tokenized design systems can give AI something coherent to work with when generating interfaces.
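To make the token idea concrete, here is a sketch that flattens a small token set into CSS custom properties — the kind of coherent, named vocabulary an interface generator can be constrained to use. The token names and values are invented, loosely modeled on USWDS-style naming:

```python
# Hypothetical design tokens, loosely modeled on USWDS-style naming.
TOKENS = {
    "color": {"primary": "#005ea2", "base-darkest": "#1b1b1b"},
    "spacing": {"1": "8px", "2": "16px"},
    "font": {"sans": "Public Sans, sans-serif"},
}

def tokens_to_css(tokens: dict) -> str:
    """Flatten nested design tokens into CSS custom properties on :root."""
    props = [
        f"  --{group}-{name}: {value};"
        for group, values in tokens.items()
        for name, value in values.items()
    ]
    return ":root {\n" + "\n".join(props) + "\n}"
```

An AI generating an interface can then be instructed to reference only `var(--color-primary)`-style names from this vocabulary, rather than inventing colors and spacing ad hoc.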

    The architectural decisions in MCP Apps demonstrate another principle that veteran civic technologists will recognize: building on proven patterns rather than inventing everything from scratch. Using MCP’s existing JSON-RPC protocol means developers can use familiar tools. Prioritizing security from the start means it won’t need to be retrofitted later. These are the kinds of decisions that distinguish serious infrastructure from interesting experiments—and exactly the kinds of decisions that government technology teams need to see before they’ll trust a new approach for delivering citizen-facing services.

    What This Means for Civic Designers

    The rapid standardization of interactive interfaces in AI systems has immediate implications for how civic designers should think about their work.

First, it underscores that the shift from fixed, multitenant interfaces to adaptive, context-specific experiences isn’t just theoretically possible — it’s actively being built. The expertise that civic designers have developed around creating design systems, documenting interaction patterns, and encoding policy logic won’t become obsolete. It will become more valuable, because it provides the necessary ingredients that AI systems will use to generate appropriate interfaces.

Second, it underscores the importance of getting the upstream architecture right. As we wrote in August, expertise in civic tech will move upstream — from implementation to architecture, from specific solutions to systemic standards. MCP Apps makes this more concrete. The work of defining interaction principles, building component libraries, and establishing visual identity standards becomes foundational to building great experiences, not a nice-to-have.

    Third, it highlights the compressed timeline that government agencies are now facing. In previous waves of technological change, governments had years to observe how the private sector adopted new approaches before deciding whether (and how) to follow suit. The telephone era unfolded over decades. The Internet era compressed change to years. The AI era is compressing change to months. MCP Apps emerged from theoretical concept to production standard in less time than it typically takes a government agency to complete a procurement cycle for new software.

    This mismatch between the pace of technological change and the pace of government adoption isn’t new – but the gap is widening at an accelerating rate.

    The Infrastructure We Need Now

    If just-in-time interfaces are moving from concept to production this quickly, what should government digital services teams be doing now to prepare?

    The answer isn’t to rush into production deployments of AI-generated interfaces. The better approach is to strengthen the foundations that make such deployments viable when the time is right.

    That means investing in design systems that use tokens and are built with the assumption that they’ll need to support dynamic interface generation. It means continuing the hard work of encoding policy logic in formats that AI systems can understand—efforts like the Digital Benefits Network’s Rules as Code community of practice aren’t just preparing for a possible future, they’re building essential infrastructure for a future that’s arriving ahead of schedule.

    It also means rethinking how government agencies approach risk and experimentation. The traditional model of waiting until a technology is fully mature before considering adoption doesn’t work when the maturity cycle has compressed from years to months. Agencies need to develop the capacity to experiment safely and learn quickly—running controlled pilots, establishing clear evaluation criteria, and building the organizational muscle to rapidly deploy what works while quickly abandoning what doesn’t.

    Acceleration Requires New Muscles

    Perhaps the most important takeaway from the rapid emergence of MCP Apps isn’t about the technology itself. It’s about the pace of change in the AI era and what that means for how government organizations operate.

    Three months ago, we described just-in-time interfaces as lying in the future. Today, there’s a formal specification proposal for delivering them. The team behind the MCP protocol has built an early access SDK to demonstrate the patterns, and projects like MCP-UI are already implementing support. The cycle of innovation, standardization, and adoption that once took years now happens in weeks and months — even if we’re still in the early stages of this particular evolution.

    This creates genuine challenges for government organizations whose processes and decision-making structures were designed for a different era. But it also creates opportunities. Agencies that have invested in the right foundations — strong design systems, encoded policy logic, clear interaction principles — are positioned to benefit from these rapid advances. Those that haven’t will find themselves further behind with each passing month.

    The future we wrote about in August isn’t coming. It’s here, and it arrived faster than even we expected. Government digital services will need to adapt to just-in-time interfaces.

    The challenge for those of us working in and with governments is whether these organizations can develop the capacity to adapt at the speed that technological change now demands. Because if three months taught us anything, it’s that the next three months will bring changes we haven’t yet imagined.

    #artificialIntelligence #governmentServices #justInTimeInterfaces #modelContextProtocol #userExperience

Telangana Meeseva: In Telangana, Meeseva government services are now even easier on WhatsApp... - Hindi Vaartha

Telangana Meeseva: The Telangana government has launched Meeseva services on WhatsApp. Government services from 40 departments are now available through an AI-based chat interface..

    Hindi Vaartha

A proposal to move branches of the Land Registration Office (Văn phòng đăng ký đất đai) down to the commune level would reduce administrative procedures and make land paperwork more convenient for citizens, while also enabling commune-level governments to proactively manage local land data. 💼🏛️

    #dịchvụcông #đấtdai #thủtụchànhchính #cảitiến #chínhquyềnđịaphương #VietnamNews #LandRegistration #GovernmentServices #AdministrativeReform #LocalGovernment

    https://vietnamnet.vn/de-xuat-chuyen-chi-nhanh-van-phong-dang-ky-dat-dai-ve-xa-giam-chi-phi-cho-dan-2457998.ht