#GoogleIO revealed its two weirdest features as a pair.

1. Give Gmail a short summary and it will draft an e-mail for you based on it. You can even click "elaborate" and it will make the e-mail longer.

2. When opening an e-mail, Gmail can summarize the entire thing for you so you don't have to read all of it.

Does everyone realize how fucking bizarre this is?

Both people in the conversation want to work with directness and brevity, and Google is doing textual steganography in the middle.

@rodhilton just starting the countdown until we get the first employment discrimination settlement that includes a generated email.

The writer, rightfully claiming that they did not, in fact, type the words. The reader, seeing only the AI-shortened message, based on some admin configuration.

@gatesvp yeah I think that the whole AI hype cycle in general is going to really brush up against issues of responsibility in a hurry.

Your AI-powered self-driving car hits someone. Are you responsible? You just enabled a feature of the car. Is the AI responsible? It's an algorithm. Is the company who built it? No, they weren't there.

If your AI chooses who to fire and it turns out to be a bunch of protected class employees, who gets sued? The employer or the builder of LayoffBot?

Maybe there'll be responsibility issues, but I don't think those are very good examples.

So with the car, the law regarding malpractice and selling defective products is pretty well established, and I see no reason why it wouldn't continue to work as expected. That is, the manufacturer would be held responsible for selling a defective product. They would be responsible for sourcing reliable components, and I don't see a reason why a software component would be any different from a hardware component in that regard.

With AI based firings, we've already got pretty clear legal precedents for firing people based on performance metrics. How we apply those metrics isn't very important, so long as a protected characteristic isn't one of those metrics.

Hell, AI-based HR will probably reduce liability, because it'd be easier to objectively prove that the input data didn't include any protected characteristics.
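
(A minimal sketch of what that "provably clean inputs" claim looks like in practice. Everything here is hypothetical and invented for illustration: the column names, the values, and the idea that this table is what an HR model would see.)

```python
import pandas as pd

# Hypothetical list of columns treated as legally protected characteristics.
PROTECTED = {"race", "gender", "age", "religion"}

def clean_features(employees: pd.DataFrame) -> pd.DataFrame:
    """Drop protected characteristics so the model provably never sees them."""
    return employees[[c for c in employees.columns if c not in PROTECTED]]

# Made-up employee data, purely for illustration.
employees = pd.DataFrame({
    "race":           ["A", "B", "A", "B"],
    "tickets_closed": [40, 38, 12, 11],
    "review_score":   [4.1, 4.0, 2.2, 2.3],
})

features = clean_features(employees)
print(list(features.columns))  # ['tickets_closed', 'review_score']
# The audit trail: the model literally never received the protected column.
```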

@Marvin @rodhilton

I think these are really good examples, and that you might not be digging into the details enough.

For example, a self-driving car will not be involved in zero accidents. Zero is not the metric of success for mechanical things. Or even for software.

And we don't typically hold companies liable for things that are operating within regulatory specification.

The HR stuff is even messier. Because it actually highlights a giant regulatory problem... /1

@Marvin @rodhilton our current laws only police the input characteristics, not the outcomes. Using AI tools as a shield allows companies to produce terrible outcomes, but claim that they are not responsible by pointing to the clean inputs.

So we're driving a giant truck through a huge hole in our laws and just letting it run over people for no particularly good reason.

Don't you think we can do better?//
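
(A minimal sketch of the hole being described, assuming a hypothetical proxy variable: zip_code standing in for anything correlated with a protected class. Every name and number below is invented for illustration.)

```python
import pandas as pd

# Made-up data: two groups with identical performance, different zip codes.
employees = pd.DataFrame({
    "race":     ["A", "A", "A", "B", "B", "B"],
    "zip_code": [10001, 10001, 10002, 20001, 20001, 20002],
    "output":   [30, 31, 29, 30, 31, 29],  # identical performance
})

# The "clean" model: the protected column is dropped before training.
features = employees.drop(columns=["race"])

# Suppose the model learned, from biased historical data, that high
# zip codes predict "low performers". Race was never an input.
features["fired"] = features["zip_code"] >= 20000

# Audit by inputs: clean. Audit by outcomes: all of group B is fired.
by_race = employees.assign(fired=features["fired"]).groupby("race")["fired"].mean()
print(by_race)  # A -> 0.0, B -> 1.0, despite identical output scores
```

The point of the sketch: an input-only audit passes while the outcome is perfectly segregated, which is exactly the shield being described.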

I don't want the government to regulate outcomes. There are some very dark consequences when you start heading down that path. Like there are very good reasons why racial hiring quotas are illegal.

But regardless, companies can already do effectively the same thing as AI, just messier and by hand. Like I said, hiring or firing based on performance metrics. The AI would do the same thing, except you could provably rebut accusations of illegal discrimination. I don't see the problem.

@Marvin @rodhilton Except things like LLMs can't, in fact, fire based on performance metrics. If they did that, it would be an algorithm, not some "AI" or model.

And they can't generate performance metrics because those metrics would be literal BS.

So you're painting a weird universe of things that don't really exist here.

That's a distinction without a difference legally.