🔗 https://www.sfpublicpress.org/looming-medicaid-cuts-threaten-san-franciscos-safety-net/
Looming Medicaid Cuts Threaten San Francisco’s Safety Net

Experts warn that federal funding reductions would jeopardize in-home support, block access to care and drive more patients to ERs.

San Francisco Public Press
More than 100,000 San Franciscans rely on Social Security for rent, food & medicine.
Now, cracks in the system are making it harder to access benefits.
Listen to the report by @mildobserver on Civic by @SFPublicPress
🔗 https://www.sfpublicpress.org/fear-and-anxiety-mount-amid-social-security-administration-upheaval/
#SocialSecurity #SanFrancisco #PublicBenefits

1/2 US Protest Law Tracker - Updates to #Federal #Protest Laws introduced in 2025.

Latest updates: Jun. 10, 2025 (US Federal)

Providing for deportation of non-citizens who commit protest-related offenses

Would cancel the visa of any individual convicted of protest-related crimes and provide for the individual’s deportation within 60 days. Under the bill, individuals convicted of any “crime (i) related to [their] conduct at and during the course of a protest; (ii) involving the defacement, vandalism, or destruction of Federal property; or (iii) involving the intentional obstruction of any highway, road, bridge, or tunnel” would be deportable. The bill requires that such individuals’ visas be “immediately” cancelled and the individuals removed from the US within 60 days. If enacted, a non-citizen convicted of even a nonviolent misdemeanor “related to” a protest, such as trespass or disorderly conduct, could face deportation. The bill’s sponsor cited protests around immigration raids in #LosAngeles as the impetus for his bill.
(Full text of Bill: https://www.cotton.senate.gov/imo/media/doc/61025novisasforviolentcriminalsactreintro.pdf)
Status: pending
Introduced 10 Jun 2025.
Issue(s): Traffic Interference

Heightened penalties for "#riot" offenses

Would amend the federal #AntiRioting law to raise the maximum penalty to ten years in prison, instead of five, for participating in or inciting a “riot,” or aiding or abetting someone to do so. The federal definition of “riot” is broad, requiring only a “public disturbance” where one individual in a group commits violence. Under the bill, someone who committed or abetted an “act of violence” during the commission of a “riot” offense would face a minimum one-year sentence, while an individual who assaulted a law enforcement officer would face a sentence of at least one year and up to life in prison. Federal law defines “act of violence” broadly to include using force against #property—or just attempting or threatening to use such force. As such, if enacted, the bill could result in steep criminal penalties for protesters who do not actually engage in violence or destructive conduct. The bill’s sponsor cited protests around immigration raids in Los Angeles as the impetus for his bill.
Status: pending
Introduced 10 Jun 2025.
Issue(s): Riot

HR 2272: Blocking #FinancialAid to students who commit a "riot"-related offense

Would bar federal financial assistance and loan forgiveness for any student convicted of a crime in connection with a “riot.” The bar would apply to students convicted of “rioting” or “a) inciting a riot; b) organizing, promoting, encouraging, participating in, or carrying on a riot; c) committing any act of violence in furtherance of a riot; or d) aiding or abetting any person in inciting or participating in or carrying on a riot or committing any act of violence in furtherance of a riot.” Many states define “riot” broadly enough to cover peaceful protest activity; many also have broad laws criminalizing “incitement to riot” that cover protected expression. The bill would bar financial aid and #LoanForgiveness for students convicted under such provisions. As written, the bill would also bar financial aid and loan forgiveness to students convicted of any offense related to “#organizing, #promoting, encouraging” a riot, or “aiding and abetting” incitement or participation in a riot, which could cover an even wider range of expressive conduct, from sharing a social media post to cheering on demonstrators in a protest that was deemed a “riot.”
(Full text of bill: https://www.congress.gov/bill/119th-congress/house-bill/2272)
Status: pending
Introduced 21 Mar 2025.
Issue(s): #CampusProtests, Riot, Limit on #PublicBenefits

#HR2273: Providing for visa revocation and deportation of #noncitizens who commit a "riot"-related offense

Would require the Secretary of State to revoke the visa of and make deportable a noncitizen #student, #scholar, #teacher, or #specialist convicted of a crime in connection with a “riot.” Under the bill, individuals in the US on an F-1, J-1, or M-1 visa would have their visas revoked and would be deportable if they were convicted of “rioting” or “a) inciting a riot; b) organizing, promoting, encouraging, participating in, or carrying on a riot; c) committing any act of violence in furtherance of a riot; or d) aiding or abetting any person in inciting or participating in or carrying on a riot or committing any act of violence in furtherance of a riot.” Many states define “riot” broadly enough to cover peaceful protest activity; many also have broad laws criminalizing “incitement to riot” that cover protected expression. The bill would provide for the deportation of foreign students, scholars, and others convicted under such provisions. As written, the bill would also provide for their deportation if convicted of any offense related to “organizing, promoting, encouraging” a riot, or “aiding and abetting” incitement or participation in a riot, which could cover an even wider range of expressive conduct, from sharing a #SocialMediaPost to cheering on #demonstrators in a protest that was deemed a “riot.”
(Full text of bill: https://www.congress.gov/bill/119th-congress/house-bill/2273)
Status: pending
Introduced 21 Mar 2025.
Issue(s): Campus Protests, Riot

#S1017: New federal criminal penalties for protests near #pipelines

Would create a new federal #felony offense that could apply to protests of planned or operational pipelines. The bill would broadly criminalize under federal law “knowingly and willfully” “#vandalizing, tampering with, disrupting the operation or construction of, or preventing the operation or construction of” a gas pipeline. A range of peaceful activities could be deemed “disrupting… the construction of” a pipeline, from a rally that obstructs a road used by construction equipment, to a #lawsuit challenging a pipeline’s #permit or #zoning approval. The bill does not define “disrupt,” such that even a brief delay would seemingly be covered. Further, the underlying law provides that any "attempt" or "conspiracy" to commit the offense would be punished the same as actual commission. As such, individuals as well as organizations that engage in the planning or facilitation of a protest that is deemed to “disrupt” pipeline construction could be covered. The offense would be punishable by up to 20 years in prison and a fine of up to $250,000 for an individual, or $500,000 for an organization.
(Full text of bill: https://www.congress.gov/bill/119th-congress/senate-bill/1017)
Status: pending
Introduced 13 Mar 2025.
Issue(s): Protest Supporters or Funders, #Infrastructure

#ProtestLaws #protestors #protestors_in_prison #CivilLiberties #Fascism #USA #USPol #NoKings #Project2025 #TrumpIsAFascist

G.O.P. Targets a Medicaid Loophole Used by 49 States to Grab Federal Money

States have long used taxes on hospitals and nursing homes to increase federal matching funds. If Republicans end the tactic, red states could feel the most pain.

The New York Times

In the states, #FGA has become known as a #conservative "thought leader," said Brian Colby, VP of #PublicPolicy for Missouri Budget Project, a #progressive nonprofit that provides analysis of #state #policy issues.

"Conservatives used to try to chop away at the #federal #budget," Colby said. "These guys are doing it at the state level."

In its 14 yrs, FGA has created a playbook to shape state policy discussions about #PublicBenefits behind the scenes.

#law #PublicAssistance #healthcare #Trump

AI Can’t Save Us By Itself

This horrific story about algorithmically assisted domestic violence interventions in Spain should be required reading for anyone pushing for the adoption of AI into critical government services, specifically those impacting or supporting decision making or the allocation of scarce, vital resources.

In a nutshell, Spain uses an algorithm to provide a risk assessment for those potentially at risk of gender violence. Depending on this assessment, police may take steps to intervene and provide protection for those the algorithm identifies as being at elevated risk. But it turns out that people whose assessments identify them as low risk are sometimes the victims of further violence. This is horrible and tragic, but of even more concern is the degree to which this approach has been embedded into police processes.

Spain has become dependent on an algorithm to combat gender violence, with the software so woven into law enforcement that it is hard to know where its recommendations end and human decision-making begins.
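To make the dynamic concrete, here is a minimal, entirely hypothetical sketch of a threshold-based risk triage of the general kind the article describes. The tier names, score scale, and cutoffs are all invented for illustration; nothing here reflects the actual Spanish system's internals.

```python
# Hypothetical sketch: a single numeric risk score gates protective
# resources. Scores, tiers, and cutoffs are invented for illustration.

def triage(risk_score: float) -> str:
    """Map a model's risk score (0.0-1.0) to a protection tier."""
    if risk_score >= 0.8:
        return "extreme: immediate protective measures"
    if risk_score >= 0.5:
        return "high: regular police check-ins"
    if risk_score >= 0.2:
        return "medium: periodic follow-up"
    return "low: no active protection"

# The failure mode described in the article: a victim scored just
# under a cutoff receives no protection, even though the score is
# only an estimate with its own error rate.
print(triage(0.19))  # lands in the "low" tier despite being near the line
```

The point of the sketch is how hard the cutoffs are: a score of 0.19 and a score of 0.21 are statistically indistinguishable, yet they can route a person to entirely different levels of protection, which is why human review at the margins matters.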

At least three important things to take away from this very sad story that are at the top of my mind.

First, government services like domestic violence intervention (or intervention to protect at-risk children, or to approve or deny applications for public benefits) can involve life-or-death decisions. When mistakes happen, people can lose their lives. Arguing that criticisms of algorithms or AI in these scenarios are misplaced because “humans sometimes make mistakes too” misses a critical part of the responsibility for managing these important services – accountability. Who do we hold accountable for decisions that get made or actions that are taken by government when some part of the decision making has been ceded to an algorithm or an AI model?

Second, prior academic research on algorithmic and AI tools meant to assist or support decisions on government interventions shows a troubling dynamic at work. In her groundbreaking book “Automating Inequality,” author Virginia Eubanks observed this phenomenon when evaluating the Allegheny Family Screening Tool (AFST) – an algorithmic tool used in Allegheny County, Pennsylvania, to identify potentially at-risk children. While this tool was meant to assist screeners in deciding whether to intervene and remove a child from a potentially dangerous situation, over time the behavior of screeners using the tool seemed to change.

“[t]he AFST is supposed to support, not supplant, human decision-making in the call center. And yet, in practice, the algorithm seems to be training the intake workers.”

Intake screeners have asked for the ability to go back and change their risk assessments after they see the AFST score, suggesting that they believe the model is less fallible than human screeners.

We see echoes of this in the recent story from Spain where police simply “accepted the software’s judgment” on whether to allocate resources to protect someone from violence.

Most importantly, this all suggests that the “human-in-the-loop” approach to implementing new AI solutions in government may not be enough of a check on the tendency for humans to allow algorithms and AI models to supplant their judgment. New AI tools are being adopted by government at an accelerated pace, often in places with resource constraints – which also happen to be the places where governments decide whether to protect a domestic violence victim, remove an at-risk child, or send a police car to an emergency.

The adoption of new algorithms and AI models into government service delivery must be informed by what we know about the very human tendency to cede authority for critical judgments to software. It must also be accompanied by proper safeguards, training for staff, and rigorous reviews to ensure that the ultimate authority and accountability for making life-or-death decisions remains with humans.
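One concrete safeguard implied by the AFST story is simply recording both the model's recommendation and the human's final call, so reviewers can detect automation bias after the fact. Below is a hypothetical sketch (not drawn from any real system; the field names and data are invented) of that kind of decision log and an agreement-rate audit.

```python
# Hypothetical sketch of an automation-bias audit: log the model's
# recommendation alongside the human decision, then check whether a
# given reviewer ever disagrees with the model. All names invented.

from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    model_recommendation: str  # e.g. "intervene" / "no-action"
    human_decision: str
    reviewer: str

def agreement_rate(log: list[Decision], reviewer: str) -> float:
    """Share of a reviewer's decisions that match the model's output."""
    own = [d for d in log if d.reviewer == reviewer]
    if not own:
        return 0.0
    agreed = sum(d.model_recommendation == d.human_decision for d in own)
    return agreed / len(own)

log = [
    Decision("a1", "intervene", "intervene", "screener-1"),
    Decision("a2", "no-action", "no-action", "screener-1"),
    Decision("a3", "no-action", "intervene", "screener-1"),
]

# A rate near 1.0 over many cases is a signal to review whether the
# model is supplanting, rather than supporting, human judgment.
print(agreement_rate(log, "screener-1"))
```

A log like this does not prevent over-reliance by itself, but it makes the "human-in-the-loop" claim auditable: if a screener agrees with the model 100% of the time, the loop is nominal.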

People’s lives depend on it.

#AI #artificialIntelligence #ChatGPT #ethics #government #machineLearning #PublicBenefits #Safety #technology

An Algorithm Told Police She Was Safe. Then Her Husband Killed Her.

Spain has become reliant on an algorithm to score how likely a domestic violence victim may be abused again and what protection to provide — sometimes leading to fatal consequences.

The New York Times
🚀 Exciting News! 🚀 Today, NIST, Beeck Center's Digital Benefits Network (DBN), and @CenDemTech (CDT) launch a 2-year R&D project to adapt NIST’s digital identity guidelines for public benefits policy and delivery. https://cdt.org/insights/cdt-launches-partnership-with-nist-beeck-center-on-digital-identity-for-public-benefits-programs/ #DigitalIdentity #PublicBenefits
CDT Launches Partnership with NIST, Beeck Center on Digital Identity for Public Benefits Programs

Today CDT, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST), and the Digital Benefits Network (DBN) at Georgetown University’s Beeck Center for Social Impact + Innovation announced a new collaboration focused on developing digital identity guidelines to support public benefits programs, such as those designed to help beneficiaries access and pay […]

Center for Democracy and Technology

Hollywood has created an entire genre of movies around the fear that humankind will be subjugated or destroyed through the takeover of a self-aware and malevolent artificial intelligence. Many of these movies were made long before artificial intelligence went mainstream, a process accelerated in recent years by the rise of Large Language Models (LLMs) powering services and applications like ChatGPT.

While it’s easy to spot the gaping plot holes in these movies now that we’re living through the time when AI takes a more intimate hold of our lives, there are still plenty of things to worry about and dangers we need to guard against. Artificial intelligence, whether we know it or not, is becoming more deeply embedded in our lives — finding its way into processes that affect people in very personal ways.

A good example of this can be seen in recent stories about health insurance providers using AI for things like processing approvals for coverage. As these stories show, AI can sometimes be used in ways that end up denying health coverage for people who are actually eligible and desperately need it. When economic incentives exist that favor a specific determination (health insurers benefit when they pay fewer claims), adding AI to these existing processes can magnify bad outcomes. These stories provide an object lesson in the need for governance structures to guide the implementation of AI.

This same need for AI oversight and guardrails exists for governments that want to use it as part of existing processes, like applications for benefits or government services. People applying for government services and benefits may be facing health or financial issues, recovering from an accident or natural disaster, or have children in need. The promise of using AI as part of these processes is that it can help streamline them, reducing the time it takes to complete them, and reducing error rates. 

But when incentives exist that favor certain outcomes in government processes, adding AI can harm people in deeply personal ways. The “success” of public benefit programs too often focuses solely on the number of ineligible claims denied. Historically, government administrators and public officials have favored a reduction in fraudulent claims, and the denial of ineligible applicants, as the best measures of success. And while these measures are certainly important, they should not be our only measures of the success of a benefit program.

As governments move to adopt AI into application processes, it is important to have clear governance structures in place to guard against negative outcomes. How do we create these structures, and what should they look like? A good example can be seen in the recent guidance from the Department of Labor to states on unemployment insurance benefits. This new guidance focuses on metrics that will better ensure equitable access to benefits for those who need them:

Identifying and preventing all forms of improper payments – including underpayments and erroneous denials – are critical to ensuring program integrity, and equitable access plays a key role in supporting these efforts. (emphasis added)

UNEMPLOYMENT INSURANCE PROGRAM LETTER NO. 01-24

By creating success metrics for states that include minimizing improper denials, this guidance sets up an important guardrail against some of the things being seen in the healthcare industry. The true success of a program meant to support those in need of assistance cannot be measured solely by efforts to deny ineligible applicants. It is also critical to ensure we minimize the number of times those who truly are eligible are improperly denied benefits. And when these improper denials do happen, they need to be rectified quickly.
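The shift the guidance makes can be shown as a small metrics sketch: count erroneous denials of eligible claimants as improper payments, alongside the traditional fraud-caught number. This is an illustrative toy, not the DOL's actual methodology; the field names and data are invented.

```python
# Hypothetical sketch of benefit-program success metrics that track
# improper denials of eligible claimants, not just fraud stopped.
# Field names and cases are invented for illustration.

def program_metrics(cases: list[dict]) -> dict:
    """Each case: {'eligible': bool, 'approved': bool}."""
    fraud_denied = sum(1 for c in cases if not c["eligible"] and not c["approved"])
    improper_denials = sum(1 for c in cases if c["eligible"] and not c["approved"])
    eligible = sum(1 for c in cases if c["eligible"])
    return {
        "ineligible_claims_denied": fraud_denied,
        "improper_denials": improper_denials,
        # The guardrail: the share of *eligible* people wrongly turned
        # away, measured alongside the fraud that was stopped.
        "improper_denial_rate": improper_denials / eligible if eligible else 0.0,
    }

cases = [
    {"eligible": True,  "approved": True},
    {"eligible": True,  "approved": False},   # an erroneous denial
    {"eligible": False, "approved": False},   # fraud correctly stopped
]
print(program_metrics(cases))
```

A program scored only on `ineligible_claims_denied` looks perfect here; adding `improper_denial_rate` surfaces that half of the eligible claimants were turned away, which is exactly the blind spot the guidance addresses.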

Even though it turns out that the dangers of an AI apocalypse were probably overblown, there are real concerns and real dangers that we need to guard against as AI becomes more ubiquitous in our lives. One of those concerns is that as AI gets adopted into government benefit processes, eligible claimants are unfairly denied benefits when they most need them.

And that’s an outcome that is scarier than any Hollywood movie.

https://civic.io/2023/12/08/additional-guardrails-for-ai-use-in-government/

#AI #artificialIntelligence #ChatGPT #machineLearning #PublicBenefits #technology

UnitedHealth uses AI model with 90% error rate to deny care, lawsuit alleges

For the largest health insurer in the US, AI's error rate is like a feature, not a bug.

Ars Technica