"Oct 20 8:04 AM PDT ... We have identified that the issue originated from within the EC2 internal network... " -#AWS

Investigations showed that the problem began at midnight, when the EC2 service turned into a pumpkin. We have stopped looking for a root cause and are now searching for some kind of gourd.

#EC2 #us_east_1

@tychotithonus

According to Amazon's issue page:

"Based on our investigation, the issue appears to be related to DNS resolution of the DynamoDB API endpoint in US-EAST-1." & "we recommend flushing your DNS caches."

So this was NOT caused by DNS but by a bad change by Amazon themselves. DNS worked as intended and provided the information entered by Amazon, exactly as designed.

You don't blame the database for wrongly entered information either.

It seems this outage was caused by pilot error, with the systems and procedures failing to mitigate or prevent it.
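
(For anyone who wants to see what "DNS worked as intended" means in practice, here is a minimal Python sketch that resolves the DynamoDB regional endpoint the same way any client would. The endpoint name is the standard public one; everything else is purely illustrative.)

import socket

# Resolve the DynamoDB regional endpoint the way any client would.
# The resolver faithfully returns whatever records are published for
# the name; if the published data is wrong or missing, resolution
# fails "as designed" rather than through any fault of DNS itself.
endpoint = "dynamodb.us-east-1.amazonaws.com"

try:
    for family, _, _, _, sockaddr in socket.getaddrinfo(
        endpoint, 443, proto=socket.IPPROTO_TCP
    ):
        print(family.name, sockaddr[0])
except socket.gaierror as exc:
    # An empty or missing record set (as reported during the outage)
    # surfaces here as a resolution error on the client side.
    print(f"resolution failed: {exc}")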

#us_east_1 #AWS #Amazon #NotDNS

RE: https://infosec.exchange/@ChaserSystems/115406212560577992

These were in solid demand at our @fwdcloudsec booth earlier this year and we couldn't help but spread the love among AWS users today. Get yours in the post. #us_east_1 #dns (GCP, Azure, etc. peeps can also fill the form 😛 )

My company doesn't even use AWS directly, but we're still affected because Slack and other services are down.

#us_east_1 #AmazonOutage