See why Amazon’s recent glitch has important implications for every leader.
If you weren’t directly affected by the recent four-hour outage of Amazon Web Services (AWS), you likely saw the impact on one (or many) of the websites you visited that day.
We’ve all become accustomed to the ‘impending disaster’ stories that gush forth after events like this, warning us about over-dependence on cloud services. But what can we truly take away from an event like this?
Here are five specific, important lessons for every business leader from the recent Amazon glitch:
1. Make an immediate inventory of your house pigs.
Believe it or not, many people keep pigs as pets. Not many keep a 650-lb house pig. Turns out, Esther the pig wasn’t a house pig after all, but her owners couldn’t bear to part with her. So she just roamed around their house, being somewhat disruptive, before finally moving to a bigger place.
AWS is Amazon’s house pig – it started as a way to make some money off of the company’s enormous tech infrastructure and then, relatively recently, ballooned into a cloud-hosting colossus.
Ask yourself: Which of our product or service offerings are our house pigs – the ones that have grown far beyond our original plans, and which we haven’t underpinned with the infrastructure, systems and processes they deserve?
2. Identify single points of failure.
By all accounts, including Amazon’s own, the AWS outage was triggered by a single errant command that shut down more servers than intended – enough to cascade across the entire AWS network.
What are the ‘single points of failure’ that could cripple your business? Maybe it’s as simple as one key person who holds so much inside their head that you couldn’t afford for them to be out sick or to leave the business. Or it could be as complex as a piece of software you depend on for a large part of the management of your business. Whatever it is, you almost certainly have at least three to five single points of failure in your infrastructure right now. Identify them and build in redundancy.
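As a starting point, a single-point-of-failure inventory can be as simple as a structured list you review regularly and flag wherever there is no backup. A minimal sketch in Python – every name and entry below is a hypothetical example, not a prescription:

```python
# Minimal single-point-of-failure inventory: flag any dependency
# (person, system, or vendor) that has no redundancy behind it.
# All entries are hypothetical examples.

dependencies = [
    {"name": "Payroll admin (Dana)", "type": "person", "backup": None},
    {"name": "Order database", "type": "system", "backup": "Nightly replica"},
    {"name": "Sole payment gateway", "type": "vendor", "backup": None},
]

def single_points_of_failure(deps):
    """Return every dependency that has no backup in place."""
    return [d["name"] for d in deps if not d["backup"]]

if __name__ == "__main__":
    for name in single_points_of_failure(dependencies):
        print(f"NO BACKUP: {name}")
```

The point isn’t the code – it’s that writing the list down forces you to answer the “what’s our backup?” question for each item, one at a time.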
3. Distinguish between core and non-core systems.
A statement of the blindingly obvious in the wake of the AWS outage (and others): The cloud is great – until it isn’t. You can afford to put your non-core services in the cloud and rely on nothing more than those systems’ built-in redundancy. (If your list of Google Contacts goes poof for a few days, your business likely won’t die.) But for your core systems, you need your own, controlled backup procedure for when things go down.
Think of it this way: What could go wrong that would immediately cripple your ability to please your clients or customers? That stuff can be stored in the cloud, but you need to be totally in control of recovery if something goes awry.
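A controlled recovery procedure for core data can start very simply: a dated copy of the critical files, stored somewhere you own, made on a schedule you control. A minimal sketch – the paths here are hypothetical placeholders, and a real procedure would also verify and test-restore the copies:

```python
# Sketch: keep your own dated copy of core business data,
# independent of any cloud provider. Paths are placeholders.

import shutil
from datetime import date
from pathlib import Path

def backup_core_data(source: Path, backup_root: Path) -> Path:
    """Copy the source directory into a dated folder you control."""
    destination = backup_root / f"core-backup-{date.today().isoformat()}"
    shutil.copytree(source, destination, dirs_exist_ok=True)
    return destination
```

The design choice that matters is ownership: the backup lands on infrastructure you control, so recovery doesn’t depend on the same provider that just went down.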
4. Run a red team / blue team exercise.
Not sure just how vulnerable you are to a cloud service going belly-up? If you have the resources, run a red team / blue team exercise to find out. After all, if you don’t know the answer, who does?
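In software terms, a lightweight version of this exercise is a failure drill: the ‘red team’ deliberately knocks out a dependency, and the ‘blue team’ verifies the fallback path still serves customers. A toy sketch of the idea – the service and function names are hypothetical:

```python
# Toy failure drill: red team forces the primary service down,
# blue team checks that the fallback still answers.
# All names here are hypothetical.

def fetch_price(primary_up: bool) -> str:
    """Return a price from the primary service, or a cached fallback."""
    if primary_up:
        return "live price from primary"
    return "cached price from fallback"

def run_drill() -> bool:
    """Red team: disable the primary. Blue team: verify the fallback."""
    answer = fetch_price(primary_up=False)
    return answer == "cached price from fallback"
```

In a real exercise you would break the dependency for real (in a staging environment) rather than pass a flag – but the pass/fail question is the same: did the customer-facing path keep working?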
5. Bring in fresh eyes.
One of the most under-used tools in business is effective deployment of high-potential employees. In other words, moving your best folks around the organization so that they get a well-rounded understanding of the business as a whole, and you get fresh eyes on problems and challenges (like an over-dependence on any one system or process).
In fact, effective deployment is so important that it’s one of 13 Key Imperatives we teach in getting organizations to Predictable Success. Yes, it’s disruptive and expensive at first, but once you’ve seen the impact and reaped the benefits, you’ll never go back to siloing your top performers.
Learn how to give your business the infrastructure it needs in the face of fast growth here.