When storm clouds gather: What’s your backup plan?

Matthew Johnston of Commvault talks about the GitLab backup failure as well as the AWS S3 outage, and explains how organizations should design and implement a holistic data management strategy


Author

Matthew Johnston is Area Vice President, ASEAN, at Commvault.

He brings excellent communication skills and the ability to develop strategic partnerships with customers and channel partners to achieve agreed targets.

A successful data strategy is not simply backup or data management – it is the right combination of both. This was clear in the first two months of 2017, when we bore witness to two major outages that affected businesses across the globe. The Amazon Web Services S3 cloud outage cut connectivity to major websites and services across North America for several hours. Earlier in February, GitLab suffered a data deletion incident and was then unable to restore the data from its backups.

These outages had a widespread effect on government, sales, marketing, academic and e-commerce sites. Beyond lost revenue, corporate reputations were at risk. Unhappy customers vented their frustration on social media, while IT departments were bombarded with questions about how an incident like this could happen in today's digital age.

The big lesson from the recent global outages: develop a comprehensive data management strategy, know where your data lives, and partner with the right provider so you can recover data in the event of an unexpected outage.

Where does your data live?

As part of a comprehensive data management strategy, it is first important to know where your data lives. CIOs today need a unified view of their organization's data, whether it sits on-premises or in the cloud. A detailed, real-time overview of your data across multiple regions, especially during an outage, needs to be available at your fingertips. A dashboard that indicates which data is affected by an outage, for instance, will be invaluable in such circumstances. This enables businesses to implement a robust disaster recovery plan that provides an effective backup and recovery strategy in an emergency.

For critical data and services native to the cloud, backups should be scheduled in, across and out of clouds so your data is always readily available. Automated backups in particular, together with the ability to verify those backups, enable a smooth data management process and should form the backbone of your data management strategy.
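The verification step above can be sketched in a few lines: after each scheduled backup, record a checksum in a manifest, then re-hash the file before you ever need to restore it. This is a minimal illustration, not Commvault's implementation; the file names (`db_dump.sql`, `manifest.json`) are hypothetical.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_backup(backup: Path, manifest: Path) -> None:
    """After each scheduled backup, store its checksum in a manifest."""
    entries = json.loads(manifest.read_text()) if manifest.exists() else {}
    entries[backup.name] = sha256_of(backup)
    manifest.write_text(json.dumps(entries, indent=2))

def verify_backup(backup: Path, manifest: Path) -> bool:
    """Re-hash the backup and compare it against the recorded checksum."""
    entries = json.loads(manifest.read_text())
    expected = entries.get(backup.name)
    return expected is not None and expected == sha256_of(backup)

# Demo with a throwaway backup file (hypothetical names).
backup = Path("db_dump.sql")
manifest = Path("manifest.json")
backup.write_bytes(b"-- nightly dump --\n")
record_backup(backup, manifest)
print(verify_backup(backup, manifest))   # True: checksum matches
backup.write_bytes(b"-- corrupted --\n")  # simulate silent corruption
print(verify_backup(backup, manifest))   # False: corruption detected
```

The point of the demo's last two lines is the GitLab lesson: a backup you have never verified may be the one that fails to restore.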

Data Recovery Plan B

If you have not been maintaining a copy of your data outside your primary region, it is time to make it a regular practice for your team. Some companies create copies on-premises and move them to the cloud, or vice versa. Whichever strategy your company takes, you must have a plan to bring your services up on another platform that is unaffected by the outage.
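The fail-over pattern described above can be sketched as follows: replicate each backup to a second location, and have the restore path fall back to that location when the primary is unreachable. This is an illustrative sketch only; the directories `region-a` and `region-b` stand in for a real primary and secondary region or platform.

```python
import shutil
from pathlib import Path

PRIMARY = Path("region-a")      # hypothetical primary region
SECONDARY = Path("region-b")    # hypothetical secondary region

def replicate(name: str) -> None:
    """Copy a backup outside the primary region so one outage cannot take both copies."""
    SECONDARY.mkdir(exist_ok=True)
    shutil.copy2(PRIMARY / name, SECONDARY / name)

def restore(name: str) -> bytes:
    """Prefer the primary copy, but fall back to the secondary during an outage."""
    for region in (PRIMARY, SECONDARY):
        candidate = region / name
        if candidate.exists():
            return candidate.read_bytes()
    raise FileNotFoundError(f"no surviving copy of {name}")

# Demo: replicate, then simulate a primary-region outage.
PRIMARY.mkdir(exist_ok=True)
(PRIMARY / "orders.db").write_bytes(b"order data")
replicate("orders.db")
(PRIMARY / "orders.db").unlink()   # the primary region goes dark
print(restore("orders.db"))        # served from region-b
```

In practice the copy step would be handled by replication tooling rather than a file copy, but the design principle is the same: the restore path must never depend on the region that just failed.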

The format and portability of the data also matter. For instance, if data resides in Amazon AMI format and your on-premises infrastructure runs Microsoft Hyper-V or VMware, your provider should be able to convert the data and make it usable.

It is heartening to note that IDC research forecasts worldwide public cloud spending will reach $203.4 billion by 2020, highlighting the importance for businesses of preserving and protecting their data at all costs. While unpleasant, any outage is a wake-up call for companies and public sector organizations to have a holistic data management strategy that addresses disaster recovery and the ability to flexibly move data across clouds or any infrastructure.
