Thought it could be fun to re-share some #techpolicy stuff I’ve written for the folks interested in #AIpolicy #techgovernance #algorithmicaccountability and #federallaw
(I originally shared this on the bird site, so for those who were following me there back in February, this may be familiar)

Let’s talk about the Algorithmic Accountability Act of 2022!
https://www.protocol.com/enterprise/revised-algorithmic-accountability-bill-ai

This Senate bill would force companies to audit AI used for housing and loans

The Algorithmic Accountability Act gives the FTC more tech staff and follows an approach to AI accountability already promoted by key advisers inside the agency.

The 2022 version of the Algorithmic Accountability Act builds on the 2019 version of the bill (which was itself an independent introduction of text that was originally included in the Mind Your Own Business privacy bill). It’s pretty common for US federal bills to be reintroduced, remixed, and otherwise Frankensteined into different versions as people make changes, incorporate feedback, and even change office.

There’s a lot going on in the (gonna try hashtagging) #AlgorithmicAccountabilityAct, so it’s worth checking it out in full, but here’s the tl;dr:
It says that the US Federal Trade Commission needs to create and then enforce requirements for companies to assess the impacts of automated decision systems used to make critical decisions about people’s lives.

Here’s a one-pager, too:
https://www.wyden.senate.gov/imo/media/doc/2022-02-03%20Algorithmic%20Accountability%20Act%20of%202022%20One-pager.pdf

What are “automated decision systems,” you say? In this bill, they are broadly defined. This reflects research from experts like Rashida Richardson recognizing both that:
1) Technology evolves & definitions need to be robust against the rapid rate of change
&
2) Many harmful systems are… boring
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3811708
While new innovations in AI and machine learning with deep neural nets are dazzling (and sometimes terrifying!), a lot of the automation that is taking place across society is not particularly technologically advanced, but still has the power to scale harm to millions of people, especially when used to make “critical decisions” about people’s lives. So the Algorithmic Accountability Act of 2022 doesn’t specifically focus on AI or particular automation techniques.
Critical Decisions are decisions relating to consumers’ access to or the cost, terms, or availability of education & vocational training, employment, essential utilities, family planning, financial services, healthcare, housing or lodging, or legal services.
We will dig into this more, but you might notice that there are parallels in this language to the EU AI Act’s 2021 “Annex III: High-risk AI Systems Referred To In Article 6(2)”: https://www.eumonitor.nl/9353000/1/j9vvik7m1c3gyxp/vli87bognun6

So that’s some background context. Next I’ll break down some of the things that I personally find most exciting, but you can read the section-by-section summary of the bill for more info (or you can even read the full text if you’re into that 🤓) linked at the bottom of this press release: https://www.wyden.senate.gov/news/press-releases/wyden-booker-and-clarke-introduce-algorithmic-accountability-act-of-2022-to-require-new-transparency-and-accountability-for-automated-decision-systems

Federal legislation may not always have a reputation for being super exciting stuff, but I think it can be really interesting once you understand how the decisions that go into choosing its particular words shape the ways that legislation may be interpreted.

I want to share some of the things that make me excited about the approach taken in the Algorithmic Accountability Act of 2022 which might also inform other #techpolicy legislation to come, so… LET’S DIG IN! 🍽

Thing I’m excited about #1: Impact Assessment is an activity not an artifact 🏄

One thing you may notice in reading the bill is that it very rarely talks about impact assessment as a plural (“impact assessments”). This is because it treats impact assessment as a mass noun.

(Didn’t think you were gonna get a grammar lesson, did ya? 😜)

Mass nouns (also called noncount nouns) are words like diligence, management, information, feedback, hospitality, mail, poetry, software, training, or… bacon! 🥓✨

So, although one might sometimes talk about various softwares, trainings, feedbacks, or bacons, the typical use treats these more like continuous activities (or breakfast meats).

Similarly, when the Algorithmic Accountability Act talks about impact assessment, it doesn’t treat “assessments” as one-off events or as pieces of paperwork.

Rather, impact assessment is an ongoing series of activities, an integral part of responsible technology deployment.

I like to think of it like documentation of code.

When you’re developing software, there are sometimes discrete artifacts and resources that are produced (documents), but documentation is not just artifacts, it is a process. It is an ongoing activity throughout the lifecycle of software.

“Impact assessment” similarly refers to an ongoing activity that may involve a variety of artifacts and processes depending on the specific application of the term.
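If you think in code, here’s a tiny, purely illustrative Python sketch of that framing (the class and function names are mine, not anything from the bill): assessment is a recurring step across a system’s lifecycle, and the documents it produces are just snapshots of that ongoing activity.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AssessmentSnapshot:
    """One artifact produced by the ongoing activity of impact assessment."""
    assessed_on: date
    findings: list[str]


@dataclass
class AutomatedDecisionSystem:
    """Hypothetical wrapper: impact assessment happens throughout the lifecycle."""
    name: str
    snapshots: list[AssessmentSnapshot] = field(default_factory=list)

    def assess_impact(self, findings: list[str]) -> AssessmentSnapshot:
        # The activity: consult stakeholders, measure outcomes, record findings.
        # The returned snapshot is an artifact OF that activity, not the activity itself.
        snapshot = AssessmentSnapshot(assessed_on=date.today(), findings=findings)
        self.snapshots.append(snapshot)
        return snapshot


# The same system gets assessed before launch, after changes, and on a regular cadence.
ads = AutomatedDecisionSystem(name="tenant-screening-model")
ads.assess_impact(["pre-deployment review: error rates differ across zip codes"])
ads.assess_impact(["quarterly review: appeal volume doubled after the last model update"])
print(len(ads.snapshots))  # 2 snapshots so far; the assessing keeps going
```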

Big shout-outs to Jingying Yang, Timnit Gebru, Meg Mitchell, Hanna Wallach, Jenn Wortman Vaughan, and Deb Raji, whose brilliant thinking on documentation for machine learning systems first exposed me to this concept through work like #ABOUTML
https://partnershiponai.org/workstream/about-ml/

So that’s thing #1: In the Algorithmic Accountability Act, impact assessment is a process, a set of actions, an ongoing activity that is integral to deploying automated decision systems to make critical decisions.


Thing I’m excited about #2: Focus on decisions, not data types 📊

A pretty big shift from the 2019 version of the bill (and most legislation in this space) is the move away from the definition of “high-risk” systems to the frame of “critical” decisions.

2019: https://www.congress.gov/bill/116th-congress/house-bill/2231/text

I know at first this may seem like such a nerdy (and some may say pedantic) point, but to me it’s important, and this is MY list of things I’m excited about, so… there!
😜

So here’s the thing about regulating “high-risk” systems: you kinda have to already have an idea about what is risky.

Most regulation for AI & automated decision systems (ADS) defines this by the:
1) severity of impact
2) number of people impacted
3) sensitivity of data involved

Going through the list, let's talk about why each of these things can be either difficult to use or even possibly antithetical to the typical goals of these types of legislation.

1) Severity of impact
For some regulation of automated decision systems (ADS), specifically regulation that seeks to provide recourse or to address harms of a certain degree, this can be useful. It does depend, however, on knowledge about the impacts of using a system.

In Algorithmic Accountability, the goal is to UNCOVER impacts from using automated systems, to identify (& address!) harms. Because not all of the systems causing these bad impacts are even known yet, severity of impact is a less useful way to define what’s covered.

2) Number of people impacted
Don't get me wrong: I think this is an important thing to try to capture when thinking about what may make a system risky, but again, for Algorithmic Accountability the issue is that until you have assessed a system, you may not know who it impacts! You can’t define the type of system that is captured by a rule by something you don’t know until you actually apply the rule, so you have to take a different approach.
🐣
Sidebar: there's a WHOLE other conversation to be had about how to define things like "number of users" that probably needs way more standardization because, hoo boy! do people disagree on that. (Even a single company may have many different definitions & metrics for different teams!)
If you’re a staffer writing legislation trying to navigate this… Godspeed. For folks who want to make tech laws less bad, trying to write some thought-out definitions here could really go a long way.
JUST SAYINGGG
Finally: 3) Sensitivity of data involved
When the other two options don't apply, it can be really tempting to lean into this one. There is so much established literature and law about sensitive data, personally identifying information, protected health information, etc!
And there is a real, pressing need for data privacy legislation in the US. There are real harms that come from sensitive information being exposed, and the explosion of data collection about people makes this all the more urgent!

Privacy law is important and urgently needed, BUT privacy law and algorithmic accountability law are complementary, not substitutes for one another.

Not only does law and regulation for AI & ADS need to do different things, but sometimes the goals of privacy and algorithmic accountability are in tension!

This is why defining "high-risk" systems through data sensitivity is especially dangerous.

Regulating systems for making decisions based on the INPUTS to those systems rather than their outcomes creates perverse incentives to use less ~sensitive~ data, even if that data is actually the most pertinent to the situation.

If a system is making critical decisions about a person's healthcare, it probably SHOULD be using sensitive health information! Using more benign data (through proxies or straight up irrelevant info) may not only be unhelpful, it may actually harm people.

So in Algorithmic Accountability, rather than defining the automated systems of interest by 1) impact, 2) number of people, or 3) data used, it uses the framing of "critical decisions": decisions relating to consumers’ access to or the cost, terms, or availability of education & vocational training, employment, essential utilities, family planning, financial services, healthcare, housing or lodging, or legal services.
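Here’s a toy Python sketch of the difference (my own illustration, not language from the bill): coverage turns on the kind of decision a system helps make, not on how sensitive its input data is. The category names paraphrase the bill’s list; the function and its logic are mine.

```python
# Hypothetical illustration of "focus on decisions, not data types."
CRITICAL_DECISION_CATEGORIES = {
    "education_and_vocational_training",
    "employment",
    "essential_utilities",
    "family_planning",
    "financial_services",
    "healthcare",
    "housing_or_lodging",
    "legal_services",
}


def is_covered(decision_category: str, uses_sensitive_data: bool) -> bool:
    # Note what is NOT checked: whether the inputs are "sensitive."
    # A loan-underwriting model built only on "benign" proxy data is still covered;
    # a sensitive-data system that makes no critical decisions is not.
    return decision_category in CRITICAL_DECISION_CATEGORIES


print(is_covered("housing_or_lodging", uses_sensitive_data=False))  # True
print(is_covered("ad_ranking", uses_sensitive_data=True))           # False
```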

Why is this useful?

Focusing the bill on a list of what are defined as "critical decisions" makes the targets of the bill more concrete. Concreteness helps to avoid self-referential definitions.

It also narrows the scope somewhat, which may be important for potentially boring government-y reasons because, after all, this is an FTC bill.

Now, I don't have any inside knowledge on this, but I suspect this line of thinking was what prompted European legislators to add Annex III to the EU AI Act, alluded to earlier.

I think I should expand upon the Federal Trade Commission (FTC) point here as well:
You may notice that the list of things that make up these "critical decisions" does exclude some things that many people might expect to see in AI/ADS laws (like some of what's in the EU AI Act).

This is related to that boring government jurisdictional stuff. Since the FTC is about consumer protection, it doesn't cover government uses like the criminal legal system, immigration, or govt benefits administration.

So the Algorithmic Accountability Act of 2022 isn't about governing types of data or types of impact, but rather about assessing the impacts of making certain types of decisions with the help of automated technologies.
So that brings us (finally!) to Thing I’m excited about #3: three tiers of disclosure 🍰
The 2019 Algorithmic Accountability Act did not include reporting on the impact assessment behaviors of covered entities (aka companies)
The 2022 version has new requirements in three layers 😋

Reporting is an important element of accountability because it offers a level of transparency into processes and it introduces a procedural check to ensure that impact assessment is, indeed, taking place. (This is different from some other approaches like audits, pre-approval, etc.)

I imagine this is one of the SPICIER elements of the bill, so let's discuss! 🌶✨

Algorithmic Accountability layers of information disclosure (with a little code sketch right after this list):
1) internal assessment of impact within companies (ongoing process, remember?)
2) submission of summary reports (a particular sort of artifact) to the FTC
3) information shared by the FTC to the public in the form of:
- aggregated anonymized trend reports that the FTC produces
- a searchable repository with key (identifying) info about the reports received
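For the code-minded, here’s a rough Python sketch of how those layers relate as a data flow. All the names are mine; the actual report contents are spelled out in Sections 5 and 6 of the bill.

```python
from dataclasses import dataclass


@dataclass
class InternalAssessment:          # Layer 1: stays inside the company
    system_name: str
    detailed_findings: list[str]


@dataclass
class SummaryReport:               # Layer 2: submitted privately to the FTC
    covered_entity: str
    critical_decision_category: str
    summary: str


@dataclass
class RepositoryEntry:             # Layer 3: key identifying info, public & searchable
    covered_entity: str
    critical_decision_category: str
    date_reported: str


def publish(report: SummaryReport, date_reported: str) -> RepositoryEntry:
    # Only key info flows onward to the public repository; the FTC separately
    # publishes aggregated, anonymized trend reports drawn from the full set.
    return RepositoryEntry(report.covered_entity,
                           report.critical_decision_category,
                           date_reported)
```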

Before we get into why this is a Thing I'm Excited About, let's first talk about what many people want this bill to do (which it doesn't do), and then I'll tell you why I think that THIS is actually even better!
😈

⚠️ caricature incoming ⚠️
A lot of people want Algorithmic Accountability to be about catching bad actors red-handed. They want to expose and name-and-shame those who are allowing their automated systems to amplify and exacerbate harms to people.

This is righteous, and I empathize.

I also want there to be justice for those harmed, and I want there to be real consequences for causing harm that willful and feigned ignorance do not excuse. I do believe that this is a step in that direction, but this bill focuses on something slightly different.

It is less about helping the FTC catch wrongdoers (although there is that, and I’ll explain more) and more about making it easier and clearer how to do the right thing.

One of the great challenges in addressing the impacts of automated decision systems is that there is not (yet!) an agreed upon definition of what "good" (or even "good enough") looks like.

We lack standards for evaluating decision systems' performance, fairness, etc. Worse still, it's all super contextual to the type of decision being made, the types of information/data available, etc.
😵‍💫

These standards may one day exist! But they don't yet. Algorithmic Accountability is about getting there.

And part of getting there, I believe, is facilitated through the three tiers of disclosure and reporting:
1) internal assessment
2) summary reports to the FTC
3) public info sharing FROM the FTC in the form of
- aggregate anonymous trend reports
- a searchable repository

🍰 Layer 1: Internal assessment of impact within companies
This comes back to what we talked about in Exciting Thing #1: impact assessment is a process, an ongoing activity, an integral part of responsible automated decision product development & deployment. The Algorithmic Accountability Act of 2022 requires all companies that meet certain criteria to do this.
🍰 Layer 2: Private submission of summary reports to the FTC
Now here's the potentially ~spicy~ bit! The bill requires companies to submit documentation substantiating their impact assessment activities to the FTC.
(To see what's required, peep Section 5: https://www.congress.gov/bill/117th-congress/house-bill/6580/text?r=2&s=1#H5B3F11105B034F1CB978B5584E0EF168 )
This submission is done PRIVATELY, meaning that it’s between the govt and this one company, here.

This documentation is required to be submitted before a company deploys—as in sells/licenses/etc OR uses, themselves—any NEW automated decision system for the purpose of making critical decisions. It is also required annually for any existing system as long as it is deployed. This reflects that continuous nature (the mass noun!) of impact assessment. It is an ongoing activity, but these summary docs are snapshots of that activity in action.
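Here’s a rough Python sketch of that cadence as I read it (a paraphrase for illustration, not statutory text): a report before a new system is deployed, then one each year it stays deployed.

```python
from datetime import date, timedelta

# Hypothetical helper illustrating the reporting cadence described above:
# a summary report is due before a NEW system is deployed, and then annually
# for as long as the system remains deployed.


def summary_report_due(deployed_on: date | None,
                       last_report_on: date | None,
                       today: date) -> bool:
    if deployed_on is None:
        # Not yet deployed: a report must precede deployment, so one is due now.
        return last_report_on is None
    # Already deployed: due again once a year has passed since the last report.
    return last_report_on is None or (today - last_report_on) > timedelta(days=365)


print(summary_report_due(None, None, date(2022, 6, 1)))                          # True: report before deploying
print(summary_report_due(date(2021, 1, 1), date(2021, 1, 1), date(2022, 6, 1)))  # True: annual report overdue
```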

Many folks may feel these reports should be made entirely public. I get where that's coming from, but here's why I think this private reporting to the FTC is actually a kinda clever way to go about it...

For one, because we lack standards, it is premature to codify blanket requirements for which metrics (for evaluating performance, for instance) all companies should use. As such, companies will likely choose whichever ones make them look "best," meaning they won’t put out damning info.

To be clear: this kinda "metric-hacking" is to be expected, and whether the reports are private or not, companies (out of fear of accountability or at least judgement) will probably assess impacts and use the metrics that they think will likely reduce the likelihood that they get called out. Such is the nature of humans (especially within a punitive framework)!

Okay, but here's the fun part! Because these reports are submitted privately to the FTC, companies are now in a position of information asymmetry. They do not know what OTHER companies are saying they did or how they performed on THEIR metrics. They may try to do the bare minimum, but they don’t actually know what the bare minimum is!

Gotta love it when collective action problems work on our side 😜

The FTC (+ some other agencies), however, get to see across the collection. And this is super useful! Not because companies are going to "tell on themselves," but because there are really interesting lessons to be learned from how different companies fulfill these requirements.

There is as much to be learned from what particular companies do say in their reports as from what they don’t. The information asymmetry makes this more JUICY!

See, right now there's a dynamic where any company (or more like employee) that tries to really interrogate the impacts of these automated decision technologies gets called out for it.

It's a "tall poppy" situation. It's better to not know, to not try, than to find out the truth.

The companies that do the least don't make headlines. The automated decision systems that no one knows about don't feature in hashtags.

The current system rewards keeping your head down, not asking questions, & staying obscure. It often punishes those that try to ask, to measure, to identify and prevent harm. (Better to not test. Sound familiar?)

This private reporting dynamic shifts that calculus.

Now, companies aren't telling on themselves so much as they're telling on each other. The competition can reduce collusion pressures. There is an opportunity for a race to the top that doesn't exist in the current equilibrium.

But maybe you think that this is all just "going through the motions," and this reporting is just a song-and-dance that won't make any REAL difference. I guess it's possible, but even "going through the motions" can save lives. Honestly, there’s so much BASIC stuff out there that hurts people that could be avoided if people were even just a little bit conscious of it when designing/developing powerful tools.
https://www.atlasobscura.com/articles/pointing-and-calling-japan-trains
Why Japan's Rail Workers Can't Stop Pointing at Things

A seemingly silly gesture is done for the sake of safety.


Maybe you say "but still, there are things that The Public really does deserve to know!"

And it's true. Some things are really essential. Like knowing what decisions about you are being automated or knowing if there is a way to contest or correct one of these decisions.

And so... 🍰 Layer 3: information shared by the FTC to the public in the form of:
- aggregate anonymous trend reports
AND
- a searchable repository

Hop over to Section 6 of the Algorithmic Accountability Act to see detailed information about what information will be made public about companies' use of automated decision systems to make critical decisions about people's healthcare, education, and more!
https://www.congress.gov/bill/117th-congress/house-bill/6580/text?r=2&s=1#H84976DDA23F14F169EFE1A3ABC7EED84
#AlgorithmicAccountabilityAct #AIregulation #techpolicy

The Algorithmic Accountability Act is a consumer protection bill (where consumer is defined as... any person. Turns out there's no official FTC definition of consumer 🤪)

Part of that protection comes from making key information available to the public in a place where individuals, as well as awesome consumer-protection and advocacy organizations, can access it.

Cool endorsing orgs: https://www.wyden.senate.gov/imo/media/doc/Support%20for%20the%20Algorithmic%20Accountability%20Act%20of%202022.pdf

This 3rd tier of disclosure consists of two types of information. One is an information-rich, qualitative report of the findings and learnings aggregated from the multitude of individual reports. This is where the FTC can highlight differences and patterns.

Personally, I'm really interested to learn about things like... do different critical decisions (health vs employment) gravitate toward different metrics for evaluating performance? What types of stakeholders are being consulted with? How?

The second half of tier three is the public repository. This has more limited information, but it includes a record for every critical decision that has been reported, with that key information we alluded to earlier.

The repository must "allow users to sort and search the repository by multiple characteristics (such as by covered entity, date reported, or category of critical decision) simultaneously," ensuring that it can be a powerful resource for both consumers+advocates and researchers.

Together, these tiers of information disclosure can provide an opportunity to 1) catch issues early, while companies can still fix them, 2) motivate a greater "race to the top" on both how automated decision systems are used and on impact assessment itself, & 3) provide the public with essential information for making better-informed choices and for holding companies accountable.

There you have it, folks!

My big 3 Things I’m excited about in the Algorithmic Accountability Act of 2022:
#1: Impact Assessment is an activity not an artifact 🏄
#2: Focus on decisions, not data types 📊
#3: Three tiers of disclosure 🍰

There’s a lot there, but I hope this thread helps illustrate some of the clever ways that writing robust legal definitions about tech and red-teaming regulatory requirements can potentially produce better legislation for tech policy issues!

The #AlgorithmicAccountability Act of 2023 was just introduced yesterday! 🎉

Here’s a love letter I wrote to the particular strategies showcased in that bill: https://posts.bcavello.com/why-im-still-hyped-about-the-algorithmic-accountability-act/

And here’s a toot thread from my new mastodon account talking about it: https://mastodon.publicinterest.town/@b_cavello/111109500222612481

@b_cavello i'm sort of unclear about this part i guess. the examples given are things like mortgage loans and drug prescriptions, both of which are fields that already have well-defined regulatory frameworks. it feels like automated decisionmaking is being treated here as a special case in a way that's not really warranted, since automated decisions aren't actually different from human-made decisions at the level of social consequences

@b_cavello like...if a person were making these bad decisions, it'd be obvious why they were bad, and it might contravene laws. so it seems like the consequences should be the same whether or not a computer was involved

similarly, i don't think there's a strong rationale for continuing to allow (for example) human-administered loan-approval processes to be kept opaque, particularly if the automated ones are now being audited

@b_cavello sorry this is kind of tangential i guess, i just think about this non-distinction a lot

@aeonofdiscord not tangential at all! It’s a great point, I think. We should be critical of those human systems.

The thing that I think is unique about how automation technology changes things is it vastly increases the speed and scale of decision making power. A flawed human can only evaluate so many cases a day, a flawed algorithm could adjudicate millions.

@aeonofdiscord but I think it’s a question folks should continue to raise! That’s part of why the emphasis on decision processes (not just the tech) in the Algorithmic Accountability Act differs, I think. It doesn’t assume the potential harm/flaw is introduced by the technology alone, but rather that a harmful process accelerated by technology could be especially dangerous.
@aeonofdiscord I also think that, although it’s definitely true that highly regulated industries have some lines drawn in the hopes of preventing harm, sometimes, regardless of intentions, these policies can produce harmful outcomes that are hard to remedy because of how a process is designed. Because tech can obfuscate and be hard to interrogate, this can make things worse.
https://www.wired.com/story/opioid-drug-addiction-algorithm-chronic-pain/
A Drug Addiction Risk Algorithm and Its Grim Toll on Chronic Pain Sufferers

A sweeping AI has become central to how the US handles the opioid crisis. It may only be making the crisis worse.

@aeonofdiscord All that said, I think I should note that the process requirements of the Algorithmic Accountability Act don’t change or supplant any of the rules about what’s allowed (most of which are governed by other agencies). Truthfully, they could help with compliance with those other rules, but the aim, I think, is not about individual entities’ harm/performance, but more about trying to establish a new baseline for the ecosystem.
@b_cavello agh, yeah, this is like...my concern is basically that this doesn't change, it just gets rubber-stamped because the companies involved submitted the relevant FTC paperwork
@aeonofdiscord Yeah, that's legit. I think that there are a lot of folks inside of companies who are trying to push for change, so I see bills like this as helping them to have more air cover. (Also needed: actual privacy law, whistleblower office at the FTC, banishment of nondisparagement clauses, and and and)