#GoogleIO revealed two of its weirdest features as a pair.

1. Give a short summary and Google will draft an e-mail for you based on it. You can even click "elaborate" and it will make the e-mail longer.

2. When opening an e-mail, Gmail can summarize the entire thing for you so you don't have to read all of it.

Does everyone realize how fucking bizarre this is?

Both people in the conversation want to work with directness and brevity, and Google is doing textual steganography in the middle.

@rodhilton just starting the countdown until we get the first employment discrimination settlement that includes a generated email.

The writer, rightfully claiming that they did not, in fact, type the words. The reader, displaying the AI-shortened message, based on some admin configuration.

@gatesvp yeah I think that the whole AI hype cycle in general is going to really brush up against issues of responsibility in a hurry.

Your AI-powered self-driving car hits someone. Are you responsible? You just enabled a feature of the car. Is the AI responsible? It's an algorithm. Is the company who built it? No, they weren't there.

If your AI chooses who to fire and it turns out to be a bunch of protected class employees, who gets sued? The employer or the builder of LayoffBot?

Maybe there'll be responsibility issues, but I don't think those are very good examples.

So with the car, the law regarding malpractice and selling defective products is pretty well established, and I see no reason why it wouldn't continue to work as expected. That is, the manufacturer would be held responsible for selling a defective product. They would be responsible for sourcing reliable components, and I don't see a reason why a software component would be any different from a hardware component in that regard.

With AI-based firings, we've already got pretty clear legal precedents for firing people based on performance metrics. How we apply those metrics isn't very important, so long as a protected characteristic isn't one of those metrics.

Hell, AI-based HR will probably reduce liability, because it'd be easier to objectively prove that the input data didn't include any protected characteristics.

@Marvin @rodhilton

I think these are really good examples, and that you might not be digging into the details enough.

For example, a self-driving car will not be involved in zero accidents. Zero is not the metric of success for mechanical things. Or even for software.

And we don't typically hold companies liable for things that are operating within regulatory specification.

The HR stuff is even messier. Because it actually highlights a giant regulatory problem... /1

I wasn't saying zero crashes. Just that a legal standard exists and it's flexible enough to accommodate AI.

@Marvin @rodhilton

Legal standards are invented by humans like you and me. If you were tasked with creating such a standard, what would you want to see in it?

How would you write this to be "inclusive of AI"?

The legal standard about shipping defective products is exactly what I'd create.

@Marvin @rodhilton

What legal standard?

Basically every standard falls to a regulatory body. Implementation literally falls to the State level in the US. The laws use vague words like "as expected by a reasonable consumer".
https://www.findlaw.com/injury/product-liability/what-is-product-liability.html

So if you're responsible for deciding "reasonable consumer" regulations for AI cars, what are you deciding? Remember, this rule doesn't exist today. You get to make it up.

No, in the US there's a legal distinction between a regulatory body and a judicial standard. The "as expected" phrasing you're describing is handled by courts, not regulatory bodies. It's a pretty ordinary process by now.

@Marvin @rodhilton

So we've gone back and forth discussing weird legal technical details.

What you haven't contributed is an answer to the simplest question: what rules do you want to govern this?

I would even accept an example of a bad thing happening that was okay and a bad thing happening that was bad.

Do you have any of these? Or are you just posturing?

Let's roll this back to the original point.

Rod gave examples of AI introducing novel legal issues of responsibility. I disagreed that the examples he gave would test the law in novel ways.

My point was that ordinary principles of liability (such as regarding defective products or employment law) would work more or less exactly how they've always worked. Any time a new technology is introduced, juries are still answering the exact same questions in the same way.

Never mind whether we're talking about a lawsuit over, say, the safety of Teflon coatings on cookware or the safety of AI-driven cars; the legal questions are the same. The AI element won't be a clever, spooky way for a company to dodge liability that it wouldn't be able to otherwise.

That's my point.

Now if you're asking me specifically what I'd think or what I'd want if I was on one of these juries... idk? I'm sure they'd bring in experts testifying about the standards of safety of AI driving cars, and numbers and stats about their reliability and then the opposing side would bring in their experts to argue that the products had been rushed to market too quickly.

Hell, we'll probably see some UL standards published on the issue quickly enough. https://en.wikipedia.org/wiki/UL_(safety_organization)

@Marvin @rodhilton

So let's drill into Rod's example.

If your AI chooses who to fire and it turns out to be a bunch of protected class employees, who gets sued? The employer or the builder of LayoffBot?

Who's the liable party?

It can't be the employer, they outsourced the decision to LayoffBot who granted them indemnity.
It can't be LayoffBot because "they don't ingest information about protected classes".

We don't measure outcomes, only inputs...

@Marvin @rodhilton

In a normal case, we would subpoena the code and examine the algorithm. We would analyze it to figure out why it makes decisions the way it does, and then decide whether or not the code is defective.

To me, the machine learning solutions are novel because you can't really analyze the code in that way. You may not even be able to repro the results.

To me this is new territory. Do other companies commonly do this? //

Most jurisdictions in the US use "at will" employment, where you can be fired for any (or no) reason, barring a few discrimination issues. And the discrimination issues can be easily eliminated by merely cleaning the data you feed to LayoffBot.
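To make the "cleaning the data" claim concrete, here's a minimal sketch of what that preprocessing step could look like. Everything here is hypothetical: the field names, the records, and the idea that a "LayoffBot" consumes dicts of employee metrics are all invented for illustration, not taken from any real product.

```python
# Hypothetical sketch: strip fields that directly encode protected
# characteristics before feeding employee records to a scoring model.
PROTECTED_FIELDS = {"race", "sex", "age", "religion",
                    "national_origin", "disability"}

def clean_employee_records(records):
    """Return copies of the records with protected-characteristic
    fields removed, leaving only performance-style metrics."""
    return [
        {key: value for key, value in record.items()
         if key not in PROTECTED_FIELDS}
        for record in records
    ]

records = [
    {"employee_id": 101, "tps_reports_late_min": 42, "age": 57},
    {"employee_id": 102, "tps_reports_late_min": 3, "age": 29},
]

cleaned = clean_employee_records(records)
print(cleaned)  # no "age" field survives in either record
```

The obvious caveat: dropping explicit columns like this proves nothing about proxy features (surnames, zip codes, and so on) that correlate with protected characteristics, so "we cleaned the inputs" is a weaker defense than it sounds.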

Faulty products are similarly easy to handle in that the law is concerned with outcomes more than the process. The process doesn't need to be analyzed if the company demonstrates that the outcomes are good enough.

And if the outcomes are bad enough, no amount of excuses about the process will justify the situation. The company is obligated to simply not release the product if it keeps fucking people up beyond what is standard for the industry, process be damned.

It's similar to how courts handle medical malpractice. Courts don't try to be experts on medicine and give opinions on whether a particular treatment is wise. They answer a far less technical question: "did the medical practitioner breach the standard of care?" That is, did they vary from what a typical doctor would do? (whether or not what typical doctors do is stupid)

So to answer your question: if LayoffBot is some kind of unabashed racist or something, the company offering LayoffBot is probably breaching its obligations to the company using it, and the company using it is breaching its obligations to its employees.

But if LayoffBot is fed a list of employee numbers and a list of how late their TPS reports are in minutes, and then makes hiring/firing decisions based on that, LayoffBot and anyone using LayoffBot are probably fine legally. Even if TPS reports are useless and a waste of company resources.

@Marvin @rodhilton so I think I'm starting to see the gap in our discussion.

Neither of the examples you have given are the things that an AI layoff bot would do. If you were going to perform layoffs based purely on TPS reports, that would be an algorithm, not an AI.

The current batch of layoff bots are promising to analyze every digital interaction you've had on a company account. And then produce a "performance management" list without need for TPS reports...

@Marvin @rodhilton

Because the technology is unregulated and wildly misunderstood, all the "reasonable person" defenses fall down.

Can this all be sorted out in court in the future? Sure. But that court date could be decades in the future after millions and millions of people are hurt.

In my world, I would like to see some of those damages pre-empted. Doesn't that sound better?

I think the distinction you're drawing between AI and algorithm is more appropriate technically, but not legally. Legally, a court will see some kind of process that takes inputs and produces outputs. A more complicated process, even an unauditably complicated process, is still just a process.

Though I agree that the layoff bot process you propose is problematic, in that it could indeed drag in inputs that it legally cannot use in its decisions. So for example, if it's dragging in surnames, a clever civil rights lawyer could definitely make an argument that it might be making illegally discriminatory actions based on employees' ethnicities.

And beyond that just being a possibility, I'm fairly certain it's an inevitability, if the companies in question don't actually do their homework and clean their data sources. Big corporations are extremely conservative when it comes to issues like these. The legal department at any big corporation would be shitting itself about putting such a system into place without lots of preparation.

I'm just not convinced that current civil rights law doesn't cover these issues. I also don't trust legislators to write competent law on technical subjects. The laws they write will be clunky, full of stupid corner cases, and will drive away innovation. Even if a company isn't attempting to do something skeevy, just the risk of being caught up in some poorly written regulation would be enough to make entrepreneurs reconsider starting an AI company in a jurisdiction with such regulations.

Like the EU is considering AI regulations right now, and if it goes through, I bet the vast majority of AI developments going forward will be in the US, not the EU. Whether or not that's worth it is for Europeans to decide, but I wouldn't want that in the US.

@Marvin @rodhilton our current laws only protect outcomes based on input characteristics. Using AI tools as a shield allows companies to produce terrible outcomes, but claim that they are not responsible by pointing to the inputs.

So we're driving a giant truck through a huge hole in our laws and just letting it run over people for no particularly good reason.

Don't you think we can do better?//

I don't want the government to regulate outcomes. There's some very dark consequences when you start heading down that path. Like there's very good reasons behind why racial hiring quotas are illegal.

But regardless, they can already do effectively the same thing as AI, just messier and by hand. Like I said, hiring or firing based on performance metrics. The AI would do the same thing, except you can provably eliminate accusations of illegal discrimination. I don't see the problem.

@Marvin @rodhilton Except things like LLMs can't in fact fire based on performance metrics. If they did that, it would be an algorithm, not some "AI" or model.

And they can't generate performance metrics because those metrics would be literal BS.

So you're painting a weird universe of things that don't really exist here.

That's a distinction without a difference legally.