
Traditionally, we achieved resilience through redundancy: a primary data center and a backup data center with all the same infrastructure, devices, and settings.

The idea was straightforward: If data center A became unavailable for any reason, data center B would take over. Setting up and maintaining two identical locations has served us well. However, now that infrastructure can be provided as a service by a third party, we no longer need all those underlying systems in place to achieve resilience.

Moving data storage to the cloud means no longer having to pay for, manage, and support the VPNs, internet connections, firewalls, file servers, and other systems and devices that make up storage locations. All of this is provided by the cloud service provider, so you can store a file in cloud provider A and have a mirror image stored in cloud provider B without having to hassle with the infrastructure.

When you move away from an infrastructure-focused approach, the benefits of cost and time savings are obvious, not to mention the ability to scale on demand. However, you still double up on storage fees because you are storing the same amount of data with each cloud provider. And for many organizations where high availability is paramount, the issue holding them back from a cloud-based approach is trust. While cloud providers are trustworthy, they are not immune to service outages that can be widespread and lengthy. IT leaders often wonder if the data is truly as available as providers claim.

That’s where the next step in the magic of moving to the cloud can happen. With modern technologies, such as a microshard option, you don’t need a mirrored image of a file in cloud providers A and B. Operating in the background, microsharding breaks data down into tiny pieces and distributes the microshards across multiple cloud providers and multiple regions so that each storage location holds only a fraction of the complete data set. If a location experiences an outage or failure, the self-healing feature reconstructs the affected data in real time, typically without users or applications noticing. You effectively eliminate regional outages and single points of failure because the other locations immediately kick in, which means no downtime.
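To make the mechanics concrete, here is a minimal sketch of microsharding-style distribution in Python. The four-byte shard size, the round-robin placement, and the function names are assumptions made for this example, not any vendor's implementation; real products also store metadata, encrypt the shards, and add parity or erasure coding so a lost location can be rebuilt, none of which is shown here.

```python
# A minimal sketch of the microsharding idea, for illustration only; the shard
# size, placement scheme, and function names are assumptions, not any vendor's
# implementation. Real products add metadata, encryption, and parity/erasure
# coding so a lost location can be rebuilt, which is omitted here.
from itertools import chain, zip_longest

SHARD_SIZE = 4  # bytes per microshard


def shred(data: bytes, num_locations: int) -> list[list[bytes]]:
    """Split data into tiny shards and distribute them round-robin across locations."""
    shards = [data[i:i + SHARD_SIZE] for i in range(0, len(data), SHARD_SIZE)]
    locations: list[list[bytes]] = [[] for _ in range(num_locations)]
    for index, shard in enumerate(shards):
        locations[index % num_locations].append(shard)
    return locations


def reassemble(locations: list[list[bytes]]) -> bytes:
    """Interleave the shards from every location back into the original bytes."""
    interleaved = chain.from_iterable(zip_longest(*locations))
    return b"".join(shard for shard in interleaved if shard is not None)


original = b"example customer record"
distributed = shred(original, num_locations=3)
assert reassemble(distributed) == original
```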

Not only is this approach more resilient, but it is also more cost-effective because you pay less for storage. As a simple example, say you want to distribute a 1 GB file across three cloud providers. If you keep a copy of the file with each, you're paying for 3 GB of storage. By distributing the data so that roughly 0.33 GB sits with each cloud provider, you're paying for only 1 GB in total.
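As a quick sanity check on that arithmetic, here is the same comparison in code, using a hypothetical flat price per GB-month rather than any provider's actual rate:

```python
# Back-of-the-envelope version of the storage-cost example above. The price
# per GB-month is a made-up illustrative figure, not any provider's real rate.
FILE_SIZE_GB = 1.0
NUM_PROVIDERS = 3
PRICE_PER_GB_MONTH = 0.02  # hypothetical flat rate in USD

mirrored_gb = FILE_SIZE_GB * NUM_PROVIDERS  # a full copy at every provider
sharded_gb = FILE_SIZE_GB                   # one copy, spread across providers

print(f"Mirrored: {mirrored_gb} GB -> ${mirrored_gb * PRICE_PER_GB_MONTH:.2f}/month")
print(f"Sharded:  {sharded_gb} GB -> ${sharded_gb * PRICE_PER_GB_MONTH:.2f}/month")
```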

While spreading data across multiple regions and cloud providers makes it more resilient, privacy is just as important to keep in mind. Misconfigurations and user errors can expose sensitive data. Shredding data into four-byte microshards, too small to contain anything sensitive, means that in the event of a compromise, the threat actor is left with fragments that are unintelligible and of no value.
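Continuing the hypothetical shred() sketch from earlier, a single location's view of a sensitive record is just a handful of disconnected four-byte fragments:

```python
# Continuing the hypothetical shred() sketch above: any single location holds
# only disjoint four-byte fragments, not a readable record.
locations = shred(b"card=4111111111111111;name=Jane Doe", num_locations=3)
print(locations[0])  # [b'card', b'1111', b'me=J']
```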

Modern technology has advanced to the point where we should reassess our approach to resilience. It is now possible to adopt multiple cloud providers’ services at the same time to get rid of your backup data center and strengthen resilience. It’s a win-win every IT leader should consider.