Cross-region backup has never made sense to me. If an entire region goes away - not a temporary outage, but GONE - then the country is probably under attack, and absolutely no one will give a shit that your SaaS product is dead.
Wildfires, hurricanes, tornadoes, blizzards and ice storms, earthquakes… many regional disasters are temporary, but it can take a very long time to bring everything back online. It's also always possible for AWS to lose a whole region for an extended period due to software and control-plane bugs.
Even if all your apps and data stores are active-active multi-region, you can be in a world of risk with no DR for a long time if your DR region fails. If your data size is small that vulnerability window might be small, but if you’ve got petabytes you’ll be without a lifeboat for days or weeks until you can take another “full” DR copy.
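To put rough numbers on that window (a back-of-the-envelope sketch; the sustained cross-region throughput figure is an assumption, not a measurement):

    # Rough estimate of how long a fresh "full" DR copy takes.
    # The sustained throughput is an assumed figure; real numbers vary widely.
    PB = 10**15  # bytes

    def full_copy_days(data_bytes: float, gbit_per_s: float) -> float:
        """Days to move data_bytes at a sustained gbit_per_s."""
        bytes_per_second = gbit_per_s * 1e9 / 8
        return data_bytes / bytes_per_second / 86_400

    print(full_copy_days(1 * PB, 10))   # ~9.3 days for 1 PB at 10 Gbit/s
    print(full_copy_days(5 * PB, 10))   # ~46 days for 5 PB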
There are more failure modes for a region than “working perfectly” and “irreversibly destroyed”. Having a cross-region backup leaves open the possibility of restoring service, or at least key data, during an extended outage.
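As a concrete (and hedged) illustration of what that backup can look like, here is a minimal boto3 sketch that turns on S3 cross-region replication. The bucket names, role ARN, and regions are placeholders; both buckets need versioning enabled and the role needs the usual replication permissions.

    # Minimal sketch: replicate a source bucket to a bucket in another region.
    # Names and the IAM role ARN below are placeholders, not real resources.
    import boto3

    s3 = boto3.client("s3", region_name="us-east-1")

    s3.put_bucket_replication(
        Bucket="example-prod-data",          # source bucket (versioning must be on)
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111111111111:role/example-replication-role",
            "Rules": [
                {
                    "ID": "dr-copy",
                    "Priority": 1,
                    "Filter": {},            # replicate everything
                    "Status": "Enabled",
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {
                        # destination bucket lives in the DR region, e.g. us-west-2
                        "Bucket": "arn:aws:s3:::example-dr-data"
                    },
                }
            ],
        },
    )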
> then the country is probably under attack, and absolutely no one will give a shit that your SaaS product is dead.
Or there’s a severe natural disaster, or a flooded data center due to unforeseen conditions, or any number of things.
If your country is attacked, all business does not immediately halt. War is not an instantaneous phenomenon where an entire country becomes destroyed overnight. People continue living their lives as best they can because they still need to put food on the table and life must go on. I have a number of friends and past coworkers in Ukraine who can attest to how you continue doing your best and pick up the pieces and continue moving back toward normalcy.
Cross-region backup isn't here to solve for meteor strikes and nuclear war. Most of the major AWS disruptions have been contained within a region. If you're unlucky enough to depend on one, your service is down and you don't know when it will be back up.
If you document and drill a cross-region recovery, in *most* (not all) cases you will be able to predict more confidently when things are going to be running again, you'll know what information is there and what isn't, and you can build processes to communicate expectations to customers and/or regulators.
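One way to make that prediction concrete is to time the drill itself, so "when will things be running" is a measured number rather than a guess. A rough sketch; restore_everything_in_dr_region is a placeholder for whatever your runbook actually does:

    # Sketch of a timed DR drill: run the documented runbook and record
    # how long recovery actually took, so RTO estimates are measured.
    import datetime
    import json
    import time

    def restore_everything_in_dr_region():
        """Placeholder for the real runbook steps (restore data, start services, ...)."""
        ...

    started = time.monotonic()
    restore_everything_in_dr_region()
    elapsed_minutes = (time.monotonic() - started) / 60

    record = {
        "drill_date": datetime.date.today().isoformat(),
        "measured_rto_minutes": round(elapsed_minutes, 1),
    }
    print(json.dumps(record))  # append to the runbook / drill log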
The most common cause of outages right now is configuration errors. Even when, procedurally, changes are supposed to be limited to a single AZ, there is always some region-shared infrastructure that can bring down the whole region.
"Configuration errors" — I'm going to include "bugs" in that —, IME, tend to be global outages more often than regional. If I recount the outages >AZ that I've seen, I think the most recent ones were:
GCP, IAM (global; just like a week and a half ago!)
GCP, VMs etc. (regional!¹)
Azure, application GW (global)
Cloudflare (global)
Azure, IAM (global)
Azure, IAM (global)
You can tell IAM is a point of weakness. (As it kinda must be.)
¹though I wasn't affected by this one, as it was in Europe.
There are plausible scenarios where a region can go down for days or more at a time, like natural disasters. I'm not terribly worried about a region going away _forever_, but during a regional outage long enough to start losing business, having data in multiple regions is important so you can restore in another region (if you aren't able to fail over quickly).
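For the "restore in another region" path, the mechanics can be as mundane as copying a snapshot across regions and restoring from it. A hedged boto3 sketch; all identifiers are placeholders and the details differ per data store:

    # Sketch: copy an RDS snapshot into the DR region, then restore from it.
    # All identifiers are placeholders; adjust instance class, VPC, KMS, etc.
    import boto3

    dr = boto3.client("rds", region_name="us-west-2")

    dr.copy_db_snapshot(
        SourceDBSnapshotIdentifier=(
            "arn:aws:rds:us-east-1:111111111111:snapshot:example-prod-daily"
        ),
        TargetDBSnapshotIdentifier="example-prod-daily-dr",
        SourceRegion="us-east-1",  # boto3 handles the cross-region presigned URL
    )

    # Wait until the copied snapshot is usable, then restore from it.
    dr.get_waiter("db_snapshot_available").wait(
        DBSnapshotIdentifier="example-prod-daily-dr"
    )
    dr.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier="example-prod-dr",
        DBSnapshotIdentifier="example-prod-daily-dr",
        DBInstanceClass="db.r6g.large",
    )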
In practice I’ve seen multiple companies benefit from having a hot standby in us-west and us-east. The threat is not destruction; the threat is the cloud provider screwing up the platform, and they typically do rolling updates, so only one region would be impacted at a time.
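A common way to wire up that hot standby is DNS failover with health checks, so the standby region takes traffic when the primary's check fails. A hedged sketch with boto3; the zone ID, domain, and addresses are placeholders:

    # Sketch: Route 53 failover records pointing at an east-coast primary
    # and a west-coast standby. Zone ID, domain, and IPs are placeholders.
    import boto3

    r53 = boto3.client("route53")

    hc = r53.create_health_check(
        CallerReference="example-primary-hc-1",
        HealthCheckConfig={
            "Type": "HTTPS",
            "FullyQualifiedDomainName": "primary.example.com",
            "ResourcePath": "/health",
            "Port": 443,
            "RequestInterval": 30,
            "FailureThreshold": 3,
        },
    )

    def failover_record(set_id, role, ip, health_check_id=None):
        """Build an UPSERT change for one leg of the failover pair."""
        rrset = {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": set_id,
            "Failover": role,            # "PRIMARY" or "SECONDARY"
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        }
        if health_check_id:
            rrset["HealthCheckId"] = health_check_id
        return {"Action": "UPSERT", "ResourceRecordSet": rrset}

    r53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={"Changes": [
            failover_record("us-east", "PRIMARY", "203.0.113.10",
                            hc["HealthCheck"]["Id"]),
            failover_record("us-west", "SECONDARY", "198.51.100.10"),
        ]},
    )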
Telecom infrastructure can and does go out. And degraded performance can impact business significantly.
There are also benefits for many apps in being closer to the customer. If you’re building out infrastructure in a remote region for that purpose, the marginal cost of getting more out of it may be compelling.
Notably, you don't have AWS on that list of >AZ outages.
AWS's definitions for AZs and Regions are by far the strongest in the industry.
GCP has AZs in the same physical complex. Azure Regions would be AZs under AWS's definition.