Are you f* kidding me, Apple?!

After a long time, I filed another bug report using Feedback Assistant because the bug was bad enough that it’s worth the effort of writing it all down.

When uploading a sysdiagnose (or probably any other attachments) you get the usual privacy notice that there is likely a lot of private and other sensitive info in those log files. It’s not a great feeling, but it is what it is with diagnostic data; I mostly trust the folks at Apple to treat it with respect, and I trust the Logging system to redact the most serious bits.

However, when filing feedback today I noticed a new addition to the privacy notice:

"By submitting, you […] agree that Apple may use your submission to [train] Apple Intelligence models and other machine learning models."

WTF? No! I don’t want that. It’s extremely shitty behavior to a) even ask me this in a context where I entrust you with *my* sensitive data to help *you* fix your shit, b) hide it among the other privacy messaging, and c) not give me any way to opt out except not filing a bug report.

Do you really need *more* reasons for developers not to file bug reports? Are the people who decided to do this really this ignorant about the image Apple's bug reporting process has in the community? How can you even think for a single second that this is an acceptable idea?

So, WTF, Apple?!

I just took another look at the message while writing the alt text. The first sentence is just broken, right?

„The attachments in this report may contain personal or sensitive information and any content you uploaded.“

It either reads as „The attachments in this report may contain […] any content you uploaded.“, which is just … duh?
Or the part after the „and“ is supposed to be a second sentence, in which case „Any content you uploaded.“ is just … not a complete sentence and is missing some significant bit. Or am I missing something?

So not only is the behavior shitty, no one even properly proofread the message in this dialog. This is fine.

@cocoafrog you can upload additional “content” with a bug report (screenshots, etc.), so the sentence is accurate, though awkward.

Also, yeah, sucks about the AI training. I’d prefer an explicit opt out for that.

There has been a lot of discussion about using developer info for AI purposes. The benefit is probably obvious, but it should be a choice.

I guess someone eventually decided differently. 😑

@cocoafrog maybe they could use an LLM to proofread it 🙃🫠
@cwagdev @cocoafrog if their own LLMs weren’t hilariously bad, maybe…
@cwagdev @cocoafrog Nah, after all they used their LLM to write it.

I think it's to be interpreted as if they had written the two sentences "The attachments in this report may contain personal or sensitive information" and "The attachments in this report may contain any content you uploaded".

But I had to read it a few times to make any sense of it. So I don't think it is worded very carefully.

@cocoafrog I hate Apple on free/open grounds, and this wouldn't make it ok if it were the case, but I wouldn't be surprised if legal got overzealous with sticking that clause into anything about Apple receiving user data and no one thought about this.
@jbowen yep, that might well be, but it still has a highly discouraging effect on people filing bug reports. So I think they should have thought about it.
@cocoafrog With a comma after "information" it's grammatically just fine.
Still ethically and morally crappy, tho.
@cocoafrog Great way to discourage bug reports.

@cocoafrog It's not even for some greater end; Apple Intelligence is absolute garbage that is no doubt going to be canned in a few years or outsourced and redone by some startup.

I wish Apple would stop infecting all their products with this slop. I bought the 11th Gen A16 iPad specifically because it didn't have Apple Intelligence lol

@cocoafrog because no one will sue them for it and have a chance of winning.
@cocoafrog You shouldn't trust them. I hope you learn from this and start using open source alternatives that respect their users more. Apple is a pretty bad company to trust.
@H0W25 I've worked for them; I think I have a reasonable understanding of how far I can trust them and when that trust stops.
@cocoafrog But I mean, if you depend on their tools, they can change at any time, like what just happened to you.
@cocoafrog I stopped submitting bugs a LONG time ago, with the exception of one at every major update that, a decade on, they have finally fixed. It's always been a waste of time, and this is certainly no incentive to waste my time. #SackTimCook
@cocoafrog Of course Apple would do that, they are not to be trusted. Just like every tech giant they only care about their AI models that NOBODY asked for.

@cocoafrog

Not seein' a whole lotta respect there...

@cocoafrog
Yes, that sucks. Billionaires being billionaires, I guess. We don’t need them and maybe it’s time for a change.
@cocoafrog @Em0nM4stodon "Bug reports are *way* down since our last update. We must be doing great!"

@bcasiello @Em0nM4stodon I've sadly seen people treat bug reports that way. „We didn’t get any reports from users about this bug you described, can’t be a big issue.“

And every time I just thought „do you have any idea how bad the bug reporting process is and how unlikely it is that someone files a bug report about an issue they encounter?“

@cocoafrog @bcasiello @Em0nM4stodon I always like to say bug reports are like mice. You might have a mouse problem and not see any mice. But if you see one mouse, it is likely there are a lot that you don’t see and the problem is much bigger than you realize.
@cocoafrog Thanks for bringing this to my attention.
@cocoafrog It’s the least we can do to help Apple get to a quadrillion dollar valuation.

@cocoafrog Well, I don’t know what Apple puts in these fucking sysdiagnoses, but I filed a critical HealthKit bug a long time ago.
They didn’t fix it because they didn’t see anything in the sysdiagnose.

Since then, I've been begging them to add logs, but they don't care at all.

@cocoafrog just don't report bugs to ape

@cocoafrog

In their defence, they are not the only ones who would like to use AI to analyse crashes and other bugs, and they need your permission to train their model. What’s wrong with that?

@mcepl Apple Intelligence is a collection of user-facing features, not something used to analyze crashes and other bugs.

If they use AI to *analyze* the data, that’s one thing, and I don’t think I’d have an issue with it (I think it would be stupid to use LLMs to do so; a better choice would be other ML, but that’s their mistake to make). But it’s a completely different thing to use my data to *train* their models.

@cocoafrog My only more generous guess as to why they added this is that when submitting feedback about an Apple Intelligence feature, e.g. for a misprioritised notification, they want to use that notification (for example) to improve their model.

I can’t imagine them using a sysdiagnose for training, but it feels like they’ve written a crude, generalised privacy warning without much consideration for how it comes across.

@jgarnham right, I think that’s a possible generous explanation, but it would show a clear lack of care in writing this message, and I expect better from Apple.

Also, even if they did intend it this way, I no longer trust Apple not to change their mind in the future and indeed use *all* the information from *all* bug reports to train whatever models they like, which this message would give them permission to do.

This whole idea of „let’s just ask for the broadest possible permission and collect as much data as possible in case we need it in the future“ goes against the core idea of privacy and I expect Apple to do better.

@cocoafrog @jgarnham based on dealings with Apple involving NDAs, executive management, and bug reports (I found one in a flagship launch of their most important product), I think it’s exactly this.

I’ve also, in an unrelated problem with them, been involved in successive security “gates” involving my data that showed me an excellent privacy culture.

That said, it should still get shouted to the sky until a big enough person or news outlet notices, so they clarify and make the policy more concrete.

@cocoafrog Just think of all the people who don’t read this message before hitting Submit. That’s how they are doing it.

@cocoafrog

I mean, it makes sense to train models on issues and solutions for user problems. Then they can use AI assistants in the future.

@Rhababerbarbar not sure whether you are serious or sarcastic. 😅

@cocoafrog

I mean, this is a legit use case from their point of view. And dedicated training data is also way better than the random crap that generalist LLMs are trained on.

Totally valid if you don't want your interaction in there, though. So given the whole "oh, we are Apple, the privacy heroes" stance, they should make that opt-out.

Btw, why do you use Apple if you care?

@Rhababerbarbar because Apple was much better in the past about making these „it would be nice to have this data“ vs. „we need to respect the user and let them make this decision“ trade-offs.

They used to be much better about putting user choice first. And they still are in other areas, e.g. around location data. You still need to allow all of Apple's own apps to use your device's location data as well, just like any other app. And privacy used to be highly valued by Apple and I think it wasn’t just marketing speak (though they got much better at marketing their privacy stance).

So while I clearly see all the signs of enshittification in the last decade or so, I still think they are better than the alternative and hold them to a higher standard. But it keeps getting harder and harder to make that argument.

@cocoafrog

Their software is not FOSS, and their hardware is very or entirely locked down. Something like Google Pixel + AOSP + GrapheneOS modifications would not be possible in the slightest.

@Rhababerbarbar yeah, but those modifications aren’t valuable to me. I prefer a system that’s well integrated, has good UX, and respects my choices. Apple was good at all of those in the past, and I agreed with most of the trade-offs between reducing complexity and allowing configurability.

But it has been getting worse and worse.

But I don’t think a Google-based Android is an alternative at all and I’m not at the point yet where I want to spend the effort to configure my own.

@cocoafrog

Well, Apple having a single ecosystem and being product-first doesn't help here. They have a lot of stuff like AirDrop or iMessage that can be replaced, but not easily.

GrapheneOS is not Google-based, but that comes at the "cost" of no integrated cloud storage, backup, transfer, etc.

Also small things like a well-integrated, working password manager: #KeepassDX and #Bitwarden are okay replacements, probably more secure but less easy.

AirDrop can be replaced with #Localsend.

@cocoafrog Isn't it simply that they use the same tool to report bugs related to software and those related to Apple Intelligence? Which would explain this wording?
@cocoafrog
Does anyone still use products from such shitty companies, especially now that there are companies that respect their customers and don't wipe their feet on them?