I think an important lesson from the UniSuper thing is that, while your cloud provider may work very hard to avoid single points of failure in their systems, your _account_ with them can also be a single point of failure.

To put it another way, major businesses historically had to war-game the scenario "your primary data centre just ceased to exist". Moving to the cloud doesn't stop you doing that, it just replaces "primary data centre" with "primary cloud provider account".

@benno @tychotithonus it's just single points of failure all the way down. The only question is how much inefficiency and redundancy are you willing to pay for in order to climb to the next rung of the "single point of failure" maturity model ladder.
@womble Sure, but I'm not sure how many organisations properly quantify that risk, especially when it comes to Google, who have a documented history of "oops, your account's gone". @tychotithonus

@benno @tychotithonus humans are terrible at risk assessment, and organisations are made up of humans, so I *am* sure how many organisations properly quantify the risk of an account disappearing, but the answer is not reassuring. 😜
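
To put a rough shape on what "properly quantify" could even mean here, a back-of-envelope expected-loss sketch; every number in it is invented for illustration and not drawn from the incident:

```python
# Back-of-envelope expected-loss sketch; all figures are assumptions
# made up for illustration, not taken from the UniSuper incident.
p_account_loss = 1 / 10_000        # assumed annual probability of losing the account
outage_cost_per_day = 500_000      # assumed cost of a day offline
recovery_days = 7                  # assumed time to rebuild from offsite backups

expected_annual_loss = p_account_loss * outage_cost_per_day * recovery_days
print(f"expected annual loss: ${expected_annual_loss:,.0f}")  # -> $350
```

The point of the exercise isn't the output number (which is only as good as the made-up inputs), it's that most organisations never write the inputs down at all.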

Apart from this UniSuper implosion, I'm not aware of any other instances of this failure mode, though. Are there previous incidents I've missed?

@womble @benno @tychotithonus but also it looks like UniSuper had non-Google backups, which have been valuable in getting back online.

That seems like evidence of some deliberate mitigation of the cloud-provider SPOF by UniSuper. Also looks like Google couldn't recover the account...

@phenidone offsite backups don't seem like they were intended as a mitigation for "GCP nuked our account", because if they had been mitigating that risk, it wouldn't have taken them several days to get back online.
@benno It was good to see that they had backups with a third party. Someone had thought about the scenario "what if our cloud provider stuffed up".
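
The mitigation being discussed boils down to keeping a copy of the data entirely outside the provider. A minimal sketch of that idea, assuming hypothetical bucket names and credentials for both clouds already configured in the environment:

```python
# Minimal sketch: mirror a GCS bucket to an S3 bucket at a different
# provider, so a lost GCP account doesn't take the only copy of the data.
# Bucket names are hypothetical; assumes credentials for both clouds are
# already set up in the environment.
import boto3
from google.cloud import storage

GCS_BUCKET = "prod-data"          # hypothetical source bucket on GCP
S3_BUCKET = "prod-data-offsite"   # hypothetical destination bucket on AWS

gcs = storage.Client()
s3 = boto3.client("s3")

for blob in gcs.list_blobs(GCS_BUCKET):
    # Naive object-by-object copy; a real job would run incrementally,
    # parallelise, and verify checksums rather than re-copy everything.
    s3.put_object(Bucket=S3_BUCKET, Key=blob.name, Body=blob.download_as_bytes())
```

A real setup would also need to rehearse the restore: a copy you've never restored from is exactly the gap that turns "we have backups" into "several days to get back online".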