Beware UserDefaults: a tale of hard to find bugs, and lost data https://christianselig.com/2024/10/beware-userdefaults/

Excuse the alarmist title, but I think it's justified: this issue has caused me a ton of pain, both in support emails and in actually tracking it down, so I want to make others aware of it so they don't get similarly burned.

Brief intro

For the uninitiated, UserDefaults (née NSUserDefaults) is the de facto iOS standard for persisting non-sensitive, non-massive data to "disk" (that is, it survives an app restart, as opposed to storing it in memory, where all the data is wiped out when the user restarts the app). In other words: are you storing some user preferences, maybe your user's favorite ice cream flavors? UserDefaults is great, and it's used extensively, from virtually every iOS app to Apple's sample code. A large amount of data, or sensitive data? Look elsewhere!
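To make the typical usage concrete, here's a minimal sketch of the kind of preference storage described above; the key name and flavor values are hypothetical, purely for illustration.

```swift
import Foundation

// Hypothetical preference key, for illustration only.
let key = "favoriteIceCreamFlavors"
let defaults = UserDefaults.standard

// Persist a small, non-sensitive value...
defaults.set(["pistachio", "mint chip"], forKey: key)

// ...and read it back later; unlike an in-memory store,
// this survives the user restarting the app.
let flavors = defaults.stringArray(forKey: key) ?? []
```

This convenience is exactly why UserDefaults is so widespread, and also why the failure mode the article describes is so easy to stumble into.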

@christianselig Good to know about this problem, but about your solution: wouldn't it be better to use an actor instead of a queue? Using sync may lock the thread anyway, right?
@Robuske It would lock a background thread only long enough to perform minor file IO, so it shouldn't be noticeable. I swear I remember reading that actors weren't recommended for file IO, nor do I believe they guarantee any form of serial execution.
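The article's actual snippets aren't quoted in this thread, but a minimal sketch of the serial-queue approach being discussed might look like the following; the class name, queue label, and JSON-on-disk representation are all assumptions, not the author's implementation.

```swift
import Foundation

// Sketch of a disk-backed store protected by a serial queue.
// The type name and file format here are hypothetical.
final class DiskBackedStore {
    // DispatchQueues are serial by default, so all access below is serialized.
    private let queue = DispatchQueue(label: "com.example.store")
    private let url: URL
    private var values: [String: String] = [:]

    init(url: URL) { self.url = url }

    // The caller blocks only for the duration of a small write.
    func set(_ value: String, forKey key: String) {
        queue.sync {
            values[key] = value
            if let data = try? JSONEncoder().encode(values) {
                // .atomic writes to a temporary file and renames it,
                // so readers never observe a half-written file.
                try? data.write(to: url, options: .atomic)
            }
        }
    }

    func value(forKey key: String) -> String? {
        queue.sync { values[key] }
    }
}
```

As the reply above notes, `queue.sync` here locks the calling thread only for the minor file IO inside the closure; within a single process, the serial queue also prevents interleaved read-modify-write sequences.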
@christianselig @Robuske Regardless of dispatch queue or actor, it only protects access within a single process. To guard against data races across multiple processes (like an app and an app extension), I believe we have to use `NSFileCoordinator`.
@taichimaster @Robuske If writing atomically, the OS should be able to handle that aspect, no?
@christianselig @taichimaster @Robuske The OS won't protect you from everything. Say both processes want to increment a value. P1 reads 0, increments in memory, is about to write 1. But now P2 does the same: it reads 0 because P1 did not write yet. In the end both processes write 1. Zero incremented twice should give 2, not 1. NSFileCoordinator can prevent this bug.
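A sketch of how NSFileCoordinator could guard the increment described above: the coordinator blocks until other coordinated readers and writers (in any process) finish, so the read and the write happen as one protected unit. The counter-file format (a plain integer as UTF-8 text) is an assumption for illustration, and this is an Apple-platform API, so the sketch presumes an Apple target.

```swift
import Foundation

// Hypothetical cross-process counter increment, coordinated so that
// two processes can't both read 0 and both write 1.
func incrementCounter(at url: URL) {
    let coordinator = NSFileCoordinator()
    var coordinationError: NSError?
    // The accessor block runs only once competing coordinated access
    // (from this or any other process) has completed.
    coordinator.coordinate(writingItemAt: url, options: [],
                           error: &coordinationError) { fileURL in
        let current = (try? String(contentsOf: fileURL, encoding: .utf8))
            .flatMap(Int.init) ?? 0
        try? String(current + 1).write(to: fileURL, atomically: true,
                                       encoding: .utf8)
    }
}
```

With both processes funneling their read-modify-write through `coordinate(writingItemAt:)`, the P1/P2 interleaving above can't happen: the second writer sees 1 and writes 2.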
@groue @taichimaster @Robuske Oh definitely, I meant more protection from data corruption, but that is a great point. I'll probably implement NSFileCoordinator now; cool little API.
@christianselig @taichimaster @Robuske Yeah, with that spicy Objective-C flavor from 2011 😅
@christianselig @taichimaster @Robuske Depends, but in general the answer is no if writes to intersecting files happen from different processes:
Atomic writing doesn't prevent a contending write from finishing first and then being overwritten by the slower writer. (Within the same process, your queue can prevent that. The missing details in the article's snippets make it impossible to judge whether that's the case.)
If you want to be sure no accidental overwrites happen, SQLite would do the trick nicely.
@danyow @taichimaster @Robuske Right, but you're only talking about out-of-sequence overwriting, right? Not actual data corruption due to concurrent writes? That's still worth protecting against, but I just want to make sure I understand.
@christianselig @taichimaster @Robuske corruption in the sense of “can no longer deserialise a given key” would be prevented by atomic writes, correct.