As winter storm Jonas bears down on the mid-Atlantic region of the U.S., Amazon’s cloud poohbahs are telling customers not to worry.
In response to customer questions, an Amazon Web Services blog post reiterated the steps the company takes to keep its data centers humming.
The people asking questions were probably remembering the big outage at AWS's U.S. East facilities in Ashburn, Virginia, which was triggered by lightning strikes in 2012. (That incident is not mentioned in the blog.) On the other hand, the authors did remind everyone that AWS was unaffected by Superstorm Sandy, which took down other data center facilities along the east coast later that year.
The thunderstorm-induced snafu affected Amazon’s (amzn) massive Ashburn data center farm, which powers Amazon’s huge U.S. East region. Not to put too fine a point on it, but Ashburn is about 35 miles from Washington, D.C., which on Friday was prepping for what could be two feet of snow on Saturday.
The blog lists the usual litany of AWS best practices, chief among them that the company replicates key components across Availability Zones (AZs). In Amazon-speak, an Availability Zone is a separate, independent set of gear; each region comprises multiple AZs, and each AZ has its own electrical supply, generators, heating, air conditioning, and so on. Customers are encouraged to deploy their applications across several such zones to ensure redundancy.
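The multi-AZ pattern the blog describes can be sketched in miniature: try the primary zone, and when it is unreachable, fail over to a replica in another zone. Everything below is a toy illustration, not real AWS API calls; the zone names, the `call_zone` stand-in, and the simulated outage are all hypothetical.

```python
# Minimal sketch of multi-AZ failover. The zones are replicas of one
# application; "down" simulates a storm knocking a zone offline.

ZONES = ["us-east-1a", "us-east-1b", "us-east-1c"]  # hypothetical zone names

def call_zone(zone, down):
    """Pretend to call the app in one zone; raise if that zone is out."""
    if zone in down:
        raise ConnectionError(f"{zone} unreachable")
    return f"served from {zone}"

def serve(request, down=frozenset()):
    """Try each Availability Zone in turn until one answers."""
    for zone in ZONES:
        try:
            return call_zone(zone, down)
        except ConnectionError:
            continue  # this zone is out; try the next replica
    raise RuntimeError("all zones down")

# A storm takes out us-east-1a; traffic shifts to us-east-1b.
print(serve("GET /", down={"us-east-1a"}))  # served from us-east-1b
```

The point of the pattern is that an application spread across several zones stays up even when an entire zone's power, cooling, or network fails at once.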
And then there are 12 data center regions worldwide, so smart customers run their applications across more than one of those regions as well.
In the public cloud model, businesses use computing, storage, and networking capability provided by Amazon, Microsoft (msft), or Google (goog). Each of those tech vendors pools massive numbers of servers, storage boxes, and networking gear, which it then rents out: computing power typically per hour of use, storage by the amount of data deposited, and networking by how much data gets shipped around.
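That three-meter billing model is simple enough to sketch. The rates below are invented placeholders, not any vendor's actual prices, and the `monthly_bill` helper is purely illustrative.

```python
# Toy public-cloud bill: compute by the hour, storage by the GB stored,
# networking by the GB transferred. All rates are invented placeholders.

RATE_PER_HOUR = 0.10       # hypothetical compute price, $ per instance-hour
RATE_PER_GB_STORED = 0.02  # hypothetical storage price, $ per GB per month
RATE_PER_GB_OUT = 0.09     # hypothetical transfer price, $ per GB shipped

def monthly_bill(instance_hours, gb_stored, gb_transferred):
    """Sum the three meters: compute time, data deposited, data moved."""
    return (instance_hours * RATE_PER_HOUR
            + gb_stored * RATE_PER_GB_STORED
            + gb_transferred * RATE_PER_GB_OUT)

# Two servers running all month (~730 hours each), 500 GB stored,
# 100 GB shipped out of the cloud:
print(round(monthly_bill(2 * 730, 500, 100), 2))  # 165.0
```

The appeal for customers is that all three meters scale down as well as up: shut the servers off and the compute charge stops.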
The underlying premise is that since every component fails at some point, the sheer number of servers deployed, plus the software that routes jobs across those servers, keeps work humming along.
As Jonas barrels up the east coast, we’ll see.