
As a patchwork of regulations evolves to cover the cloud, data protection techniques must evolve too.

Small companies often move to the cloud to lower costs, leveraging economies of scale they otherwise wouldn’t achieve. But for larger organizations there’s a bigger driver: agility from a new operating model in which they build, release, and update software every day instead of every quarter, using continuous integration and continuous deployment (CI/CD) tools.

Companies are investing lots of time and effort to cloud-enable applications and workloads, but they haven’t figured out how to cloud-enable data, and applications run on data. As a result, data security teams are pushing back. At the root of this resistance are two fundamental questions: Who owns the data and can authorize access, and who might have unauthorized access? The answers matter because a shared environment naturally presents more opportunities for inadvertent and unauthorized access by other users, cloud administrators, and attackers.

Data protection regulators share these concerns, which is why they incorporate data privacy and data breach notification policies within legislation. Alongside the EU’s General Data Protection Regulation (GDPR), Canada, California, Massachusetts, and many other countries and states have created, and are enforcing, their own regulations to protect personal data.

Regulators also care about data sovereignty and residency. Certain data must stay within a government’s borders for that government to be able to enforce its policies, and governments don’t want other jurisdictions to have (what they deem) unauthorized access without asking. Things get messy when regulations like the GDPR and the Clarifying Lawful Overseas Use of Data (CLOUD) Act in the US conflict with each other. It doesn’t help that the Internet wasn’t architected to prevent the flow of information. Data is not designed to fit within neat regulatory boundaries.

And the environment is about to get messier.

New regulators enter the fray

Recently, it seems as if a new set of regulators is moving the goal posts. On March 9, 2022, the Securities and Exchange Commission (SEC) proposed amendments to its rules to enhance and standardize cybersecurity disclosure and reporting. This includes reporting material incidents of unsanctioned data access. It also establishes the expectation that boards of directors are responsible for data, so the SEC wants to see cybersecurity expertise on boards. Already under increasing pressure to meet environmental, social, and governance (ESG) and diversity mandates, boards now have to find cyber expertise, too.

Congress also passed the Strengthening American Cybersecurity Act (SACA), which President Biden signed into law on March 15. Essentially, the law requires critical infrastructure entities to report cybersecurity incidents within 72 hours. Companies may not think of themselves as part of critical infrastructure, but if they have anything to do with food and agriculture, IT, communications, commercial facilities, or healthcare, they’re included.

Suddenly, Congress and the U.S. federal government are saying virtually every board needs to get smart about data protection. Material incidents must be publicly reported, including potentially material breaches affecting data that cloud providers hold or process on a company’s behalf. The message is clear: Companies can select vendors and outsource the work, but they can’t outsource the responsibility to protect data. Companies and boards are still on the hook.

On top of an already complicated and confusing patchwork of national and cross-national data protection and privacy rules, new reporting requirements are being layered that explicitly include breaches in the cloud. It raises the same question that has been asked for years and still needs an answer: How can an organization move data to the cloud and protect it?

An old tool being stretched to its limits

So here we are with a regulatory environment that is getting messier, an attack surface the size of the cloud, and the same solution we’ve used forever: encryption.

Key-based encryption was invented in the 1500s, just after the Middle Ages and around the same time we invented movable-type printing. A more modern block cipher, DES, is about 50 years old (and was publicly cracked roughly two decades after the US adopted it as a national standard). Over the last 50 years, encryption keys and algorithms have gotten bigger, stronger, and better, but encryption is a single tool, and it’s showing its age. Whether someone forgets to turn it on, uses an old key that is shorter than it should be, or relies on an old algorithm that is now easier to break, the result is the same. Meanwhile, attackers are getting faster computers and using new techniques to compromise data we thought was protected.
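To make the key-size point concrete, here is a rough back-of-the-envelope sketch in Python. The attacker throughput is a hypothetical figure chosen only for illustration; the point is how quickly the gap between a 56-bit DES key and a modern 256-bit key becomes astronomical.

```python
# Back-of-the-envelope brute-force math for different key sizes.
# The guesses-per-second figure is an illustrative assumption, not a benchmark.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
GUESSES_PER_SECOND = 10**12  # hypothetical attacker throughput

def years_to_exhaust(key_bits: int) -> float:
    """Worst-case years to try every key of the given length."""
    return (2 ** key_bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

for bits in (56, 128, 256):  # DES key size vs. common modern key sizes
    print(f"{bits:>3}-bit key: ~{years_to_exhaust(bits):.2e} years to exhaust")
```

Under these assumed numbers, a 56-bit keyspace falls in hours, while a 256-bit keyspace remains out of reach. That is exactly why a forgotten setting or an outdated key length matters so much: the protection is only as strong as the weakest configuration.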

What’s more, as we move to the cloud with new tools and providers, many more people have access to data by design. It’s easy for a developer to share something with an internal team member or partner by putting it in a bucket that isn’t adequately protected, or for data not to be tokenized or anonymized before it is shared. Data sensitivity also changes over time: a database approved for sharing with a third party because it did not contain sensitive data may become sensitive when a new column is added.
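As an illustration of the kind of step that is easy to skip, here is a minimal pseudonymization sketch using only the Python standard library. The field names and secret are hypothetical, and keyed hashing is just one simple technique, not a full tokenization product; the idea is only to replace direct identifiers with tokens before a dataset leaves the trusted environment.

```python
import hmac
import hashlib

# Illustrative only: the secret and field names are hypothetical, and in practice
# the secret would come from a managed key store, not source code.
TOKEN_SECRET = b"replace-with-a-managed-secret"

def tokenize(value: str) -> str:
    """Deterministic, non-reversible token for a direct identifier."""
    return hmac.new(TOKEN_SECRET, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"customer_id": "C-1029", "email": "jane@example.com", "purchase_total": 42.50}

# Tokenize direct identifiers before the record is shared; other fields pass through.
shared = dict(record,
              customer_id=tokenize(record["customer_id"]),
              email=tokenize(record["email"]))
print(shared)
```

The catch, of course, is that someone has to remember to run a step like this, and to keep rerunning it as new columns appear, which is precisely the problem.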

We need a new tool that uses a different technique from encryption and can serve as an additional layer of protection, and we need it to be automatic and invisibly applied. No one should have to do extra work to get the benefits of this new approach, because if they have to do something, they might not.

What’s the solution?

We have an idea. It’s called microsharding, and leading financial services, technology, biotech, and pharmaceutical companies are already using it today. I’ll share the origin story with you in my next blog post.