The two weirdest features revealed at #GoogleIO came as a pair.

1. Give a short summary and Google will draft an e-mail for you based on it. You can even click "elaborate" and it will make the e-mail longer.

2. When opening an e-mail, Gmail can summarize the entire thing for you so you don't have to read all of it.

Does everyone realize how fucking bizarre this is?

Both people in the conversation want to work with directness and brevity, and Google is doing textual steganography in the middle.

@rodhilton It's made for executives to have their EAs write everything.
@rodhilton I know people who want these features and I hate them.
@rodhilton Wasn't there some kind of cartoon making that point earlier this year?
AI Written, AI Read cartoon - Marketoonist | Tom Fishburne

One piece of slang that has long embodied the short attention span Internet age is TL;DR, short for “too long; didn’t read.” With the explosion of generative AI tools, we’re rapidly entering the age of TL;DW: “too long, didn’t write.” A January survey from Fishbowl found that 40% of nearly 12,000 workers have used ChatGPT

Marketoonist | Tom Fishburne - Marketoonist is the thought bubble of Tom Fishburne. Marketing cartoons, content marketing with a sense of humor, keynote speaking.
@raphael_fl @rodhilton What's the difference between satire and reality? About six months.
@rodhilton They've made features that just add and remove their own bullshit? That's kind of amazing that people can spend their days making stuff that just cancels out the other stuff they're making like that.
@kichae @rodhilton If Google is smart, they just save the old prompt and give that back to the second person ;D
@shadowwwind @kichae @rodhilton Nah, that requires storage space. Calculation is cheaper.
@kichae @rodhilton Definition: Balanced AI — the amount of bullshit produced by the generating AI is equal to the amount of bullshit destroyed by the consuming AI. You can now hand me my #IEEE Hamming medal.
@kichae @rodhilton They've made a feature that adds bullshit, and they've made another feature that replaces it with shorter bullshit. LLMs can only add BS, not remove it.
@rodhilton Plot twist: the summary of the enhanced email is the original prompt used to create it.
@Dave3307 @rodhilton If that were guaranteed, these features would only waste CPU cycles, storage, and bandwidth.
Alas, far from it.
@Dave3307 @rodhilton Exactly. They don’t need “elaborate”. They need “make this half-brained slapdash cavespeak less annoying”.
@rodhilton This is just the opposite of a compression algorithm
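The contrast is worth spelling out: an actual compression algorithm makes a hard guarantee the summarize/elaborate pair can't, namely that the round trip is the identity. A minimal Python sketch using the standard library's zlib (the repeated message is just an illustrative stand-in):

```python
import zlib

# A short email body, repeated so there is real redundancy to exploit.
message = b"Please send the Q3 numbers to the whole team by Friday. " * 20

compressed = zlib.compress(message)
restored = zlib.decompress(compressed)

# Lossless: the round trip reproduces the input byte for byte.
assert restored == message

# And the compressed form is genuinely smaller, not padded with filler.
assert len(compressed) < len(message)
```

The summarize/elaborate pair offers neither property: the expanded text is longer than the input, and the round trip is lossy.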
@rodhilton From a few weeks ago:

@leoncowle @rodhilton

The bullet point appears to say the same as the original unless you inspect it really carefully.

@leoncowle @rodhilton that's how books industry already works
@leoncowle @rodhilton and here I thought you were supposed to compress data before sending it down a link, not inflate it
@leoncowle @rodhilton AI is going to understand our emotions very well.

@leoncowle @rodhilton Was also going to post this.

The biggest change in the world is the speed at which reality imitates art.

@rodhilton @Gte well let me tell you, people don’t actually want directness, they think it’s rude.

Google is permitting them to carry on this charade, which is the real sin.

@jason @rodhilton @Gte That weird tendency to want indirectness at all costs is really quite obnoxious.
@rodhilton Google having a conversation with GMail

@rodhilton @Gte

It’s a beautiful encapsulation of Silicon Valley in general.

“Should we address this minor issue of social norms in the workplace?

I have a solution that’ll only require 2 globally distributed SRE teams, 20 FTE SWEs and 1/3rd of our org’s annual compute capacity!”

@rodhilton In similar news, Google have a feature where your phone can answer your calls for you, and Duplex which could phone someone up on your behalf. They wrote two AIs that could converse *in spoken English* complete with ums and ahs.

@andrewt Here's another one.

Introducing tools to allow AI-based generation of images, as well as tools to detect AI-generated images

This is the very definition of selling both the poison and the cure.

https://techcrunch.com/2023/05/10/google-introduces-new-features-to-help-identify-ai-images-in-search-and-elsewhere/


@rodhilton People were doing this long before Google. First automate the process; then maybe average people will see how absurd this is and learn to be more direct. And if they don’t, at least now it wastes less time :)
@rodhilton This is why I just send one line emails to begin with
@rodhilton @Gte I'm more convinced than ever that once humanity's time on this planet is over the only thing left will be a single super powerful AI that is optimized to navigate complex phone trees, and an even more powerful AI that creates more and more elaborate phone trees.
@garyowen @rodhilton @Gte

Paperclips. It's all about the paperclips.

https://www.decisionproblem.com/paperclips/index2.html

Play the web version of the #game and you'll see why this is relevant to the thread.
@rodhilton THANK YOU I feel like I've been the only person thinking this is bonkers.
@rodhilton Proposal: use these two, but the other way round, on normal emails as a form of lossless data compression.
@stecks @rodhilton I seriously doubt it’d be anywhere close to lossless. It’d be fascinating to see how many times you could go back and forth before it drifts away from whatever the point was
@ttyRazor @stecks @rodhilton
We could test this now using two different LLMs. Compressing-expanding song lyrics should yield some interesting creations

@ttyRazor @stecks @rodhilton

Trouble with trying this with BingChat is that it recognizes from which song the lyrics originated, and regurgitates the memorized summary for the entire song.

Yet, asking BingChat to write a song using its own summary as the prompt yields something that is altogether very different - at least it won't attract a copyright lawsuit from Ed Sheeran

@ttyRazor @stecks @rodhilton

Tried this with the abstract to "An aperiodic monotile" by David Smith et al.
https://arxiv.org/abs/2303.10798

Bing Chat's one sentence summary:
"a solution to a longstanding problem of finding an aperiodic monotile or “Einstein” by exhibiting a continuum of combinatorially equivalent aperiodic polygons."

Then asked for an informational paragraph with that as prompt:

An aperiodic monotile

A longstanding open problem asks for an aperiodic monotile, also known as an "einstein": a shape that admits tilings of the plane, but never periodic tilings. We answer this problem for topological disk tiles by exhibiting a continuum of combinatorially equivalent aperiodic polygons. We first show that a representative example, the "hat" polykite, can form clusters called "metatiles", for which substitution rules can be defined. Because the metatiles admit tilings of the plane, so too does the hat. We then prove that generic members of our continuum of polygons are aperiodic, through a new kind of geometric incommensurability argument. Separately, we give a combinatorial, computer-assisted proof that the hat must form hierarchical -- and hence aperiodic -- tilings.

@rodhilton @jwilker aka translating from Dutch to American
Translation:

While we commonly expect short phrases in one language to be equally short in another, sometimes short phrases are translated into surprisingly long ones: however, many shows parody this completely by having a single word become a long phrase in …

TV Tropes
@rodhilton @designatednerd This is starting to feel like encryption without keys. Convert a sentence to a PHD thesis and then back. 😅
@rodhilton Google will literally make money from you coming and going.
@rodhilton
not only that, it's lossy steganography.

@rodhilton just starting the countdown until we get the first employment discrimination settlement that includes a generated email.

The writer, rightfully claiming that they did not, in fact, type the words. The reader, displaying the AI-shortened message, based on some admin configuration.

@gatesvp yeah I think that the whole AI hype cycle in general is going to really brush up against issues of responsibility in a hurry.

Your AI-powered self-driving car hits someone. Are you responsible? You just enabled a feature of the car. Is the AI responsible? It's an algorithm. Is the company who built it? No, they weren't there.

If your AI chooses who to fire and it turns out to be a bunch of protected class employees, who gets sued? The employer or the builder of LayoffBot?

Maybe there'll be responsibility issues, but I don't think those are very good examples.

So with the car, the law regarding malpractice and selling defective products is pretty well established and I see no reason why it wouldn't continue to work as expected. That is, the manufacturer would be held responsible for selling a defective product. They would be responsible to source reliable components, and I don't see a reason why a software component would be any different from a hardware component in that regard.

With AI based firings, we've already got pretty clear legal precedents for firing people based on performance metrics. How we apply those metrics isn't very important, so long as a protected characteristic isn't one of those metrics.

Hell, AI based HR will probably reduce liability because it'd be easier to objectively prove that the input data didn't include any protected characteristics.

@Marvin @rodhilton

I think these are really good examples, and that you might not be far enough into the details.

For example, a self-driving car will not be involved in zero accidents. Zero is not the metric of success for mechanical things. Or even for software.

And we don't typically hold companies liable for things that are operating within regulatory specification.

The HR stuff is even messier, because it actually highlights a giant regulatory problem... /1

I wasn't saying zero crashes. Just that a legal standard exists and it's flexible enough to accommodate AI.

@Marvin @rodhilton

Legal standards are invented by humans like you and I. If you were tasked with creating such a standard, what would you want to see in it?

How would you write this to be "inclusive of AI"?

The legal standard about shipping defective products is exactly what I'd create.

@Marvin @rodhilton

What legal standard?

Basically every standard falls to a regulatory body. Implementation literally falls to the State level in the US. The laws use vague words like "as expected by a reasonable consumer".
https://www.findlaw.com/injury/product-liability/what-is-product-liability.html

So if you're responsible for deciding "reasonable consumer" regulations for AI cars, what are you deciding? Remember, this rule doesn't exist today. You get to make it up.

What is Product Liability? - FindLaw

No, in the US there's a legal distinction between a regulatory body and a judicial standard. The "as expected" phrasing you're describing is handled by courts, not regulatory bodies. It's a pretty ordinary process by now.

@Marvin @rodhilton

So we've gone back and forth discussing weird legal technical details.

What you haven't contributed is the simplest question. What rules do you want to govern this?

I would even accept an example of a bad thing happening that was okay and a bad thing happening that was bad.

Do you have any of these? Or are you just posturing?

Let's roll this back to the original point.

Rod gave examples of AI introducing novel legal issues of responsibility. I disagreed that the examples he gave would test the law in novel ways.

My point was that ordinary principles of liability (such as regarding defective products or employment law) would work more or less exactly how they've always worked. Any time a new technology is introduced, juries are still answering the exact same questions in the same way.

Never mind whether we're talking about a lawsuit over, say, the safety of Teflon coatings on cookware or the safety of AI-driven cars: the legal questions are the same. The AI element won't be a clever, spooky way for a company to dodge liability it couldn't dodge otherwise.

That's my point.

Now if you're asking me specifically what I'd think or what I'd want if I was on one of these juries... idk? I'm sure they'd bring in experts testifying about the standards of safety of AI driving cars, and numbers and stats about their reliability and then the opposing side would bring in their experts to argue that the products had been rushed to market too quickly.

Hell, we'll probably see some UL standards published on the issue quickly enough. https://en.wikipedia.org/wiki/UL_(safety_organization)
UL (safety organization) - Wikipedia

@Marvin @rodhilton

So let's drill into Rod's example.

If your AI chooses who to fire and it turns out to be a bunch of protected class employees, who gets sued? The employer or the builder of LayoffBot?

Who's the liable party?

It can't be the employer, they outsourced the decision to LayoffBot who granted them indemnity.
It can't be LayoffBot because "they don't ingest information about protected classes".

We don't measure only inputs, we measure outcomes...

@Marvin @rodhilton

In a normal case, we would subpoena the code and go to the algorithm. We would analyze the code to figure out why it is making decisions the way it is and then we make a decision on whether or not the code is defective.

To me, the machine learning solutions are novel because you can't really analyze the code in that way. You may not even be able to repro the results.

To me this is new territory. Do other companies commonly do this? //