These things that were being upgraded are called #Route #Reflectors. They are SUPER CRITICAL to a big network, but also SUPER SIMPLE to make redundant - you just add another one and tell it to talk to the other one(s). Zero complexity.
They listen to all the OTHER routers saying 'I can get to 1.1.1.1 via x.y.z then a.b.c' or 'I can get to 1.1.1.1 via 8 different paths', consolidate them all together, and tell every OTHER router the single best path to (in this example) 1.1.1.1.
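If you like toy code, here's a rough sketch of that "pick one best path and reflect it" job. This is NOT real BGP code - real best-path selection has a long list of tie-breakers (local-pref, origin, MED, and so on) - so this stand-in just picks the shortest AS path. All the addresses and AS numbers are made up.

```python
# Toy sketch of a route reflector consolidating many advertisements
# for one prefix into a single best path. Shortest AS path wins here;
# a real router has many more tie-breakers.

def best_path(advertisements):
    """Pick the single best path from a list of (next_hop, as_path) tuples."""
    return min(advertisements, key=lambda adv: len(adv[1]))

# Several different paths to 1.1.1.1, as in the example above
paths_to_1111 = [
    ("x.y.z", ["AS65001", "AS65002", "AS13335"]),
    ("a.b.c", ["AS65003", "AS13335"]),
]

# The reflector advertises only the winner to every other router
print(best_path(paths_to_1111))
```

The winner here is the two-hop path via a.b.c, and that's the ONLY thing the other routers ever hear about.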
They ALSO take all the internal network routes and squash them together into something that is presented to the outside world via #bgp.
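That "squashing" is route aggregation, and you can see the idea with Python's standard ipaddress module, which can collapse many specific prefixes into fewer covering ones - much like a router summarising internal routes before announcing them externally. (The prefixes here are made up for illustration.)

```python
# Toy illustration of route aggregation: many internal prefixes
# collapse into one summary prefix that gets announced to the world.
import ipaddress

internal_routes = [
    ipaddress.ip_network("10.0.0.0/25"),
    ipaddress.ip_network("10.0.0.128/25"),
    ipaddress.ip_network("10.0.1.0/24"),
]

summary = list(ipaddress.collapse_addresses(internal_routes))
print(summary)  # [IPv4Network('10.0.0.0/23')]
```

Three internal routes go in, one /23 summary comes out - that one line is all the outside world needs to know.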
Basically, if you have a MASSIVE network, you need a couple of these, and they need to be reliable, and they need to be redundant. But because they're *technically simple*, it's usually not that big a deal to upgrade them, as long as you do them one at a time and, *THIS IS THE IMPORTANT BIT THAT I THINK THEY MISSED*, make sure that the one you have just upgraded IS ACTUALLY WORKING.
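In pseudo-Python, the "one at a time, and CHECK it's working" procedure looks like this. The upgrade and health-check functions are hypothetical stand-ins for whatever vendor tooling and monitoring you actually have; the point is the hard stop before touching the next reflector.

```python
# Sketch of a rolling upgrade that refuses to continue if the
# just-upgraded route reflector isn't demonstrably healthy.
# upgrade() and is_healthy() are hypothetical placeholders.

def rolling_upgrade(reflectors, upgrade, is_healthy):
    for rr in reflectors:
        upgrade(rr)
        if not is_healthy(rr):
            # Stop HERE: one broken reflector with the rest still up
            # is an incident; all of them broken is a 7-hour outage.
            raise RuntimeError(f"{rr} unhealthy after upgrade, aborting rollout")
```

Skip that health check and keep rolling, and you've upgraded your way into every reflector being down at once.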
So, at 4am, they all failed. This is a pretty serious failure, because if they're ALL DOWN, you have no network at all. You have to use your #OOB (Out of Band) network to access the routers, and fix them.
But what if your OOB network is using Optus? Well, that's a problem if Optus is down, as you can't use the Optus network to fix the Optus network!
So then you need to get someone to physically attach themselves to the OOB network. But here's the NEXT problem - all Optus networking is offshore. There's almost no-one in Australia who can physically fix it.
So what do you do when your offshore outsourced network guys break your core network infrastructure, and you've retrenched everyone who can fix it locally?
You have a 7-hour outage, that's what you do.
Feel free to ask questions or tell me I'm wrong!