We’re excited to be an exhibitor at this year’s RSA Conference, where the theme is “Art of Possible.” As RSA notes, the theme is designed to encourage groundbreaking innovations and ironclad defenses against an ever-evolving threat landscape.
As we get ready for the conference, we’ve been thinking about all the ways that the cybersecurity world is changing — and all the fresh ideas that we’ll see in the coming months. Today, we wanted to take a deeper dive into some of these changes and the exciting new possibilities that they’ll open up.
Read on to find out what we see on the horizon for AI, ransomware, quantum computing, and more. Plus, check out the end of the article for details on our booth location and the key technology partners who will be with us.
Hint: It’s all about data protection.
Artificial intelligence and machine learning have progressed astonishingly quickly over the last year. In the not-too-distant future, we’ll be looking at a world where all data is AI data. That is, the majority of data will either be scraped for training models or be generated by models themselves. There’s going to be an absolute flood of data — and the security world will need to keep up.
That’s because AI and ML models are not just prolific; they’re also highly valuable. Companies invest significant time and resources, often amounting to hundreds of millions of dollars, to gather, refine, and structure vast amounts of data. Processes such as data acquisition, cleaning, labeling, and augmentation, as well as infrastructure investments like high-performance computing and cloud storage, all contribute to the substantial cost of these models.
Beyond their R&D costs, most large language models (LLMs) share many core similarities; the real competitive advantage for each organization lies in the quality of its data. Protecting this data, which is essentially a trade secret, is absolutely critical.
What’s more, AI companies are beginning to train their models with organization-specific data in order to provide tailored solutions for enterprises. The Harvard Business Review notes that current LLMs “don’t offer the plug-and-play solution companies might be hoping for”; instead, extensive, high-quality data is needed to fine-tune custom models and meet businesses’ individual needs.
The bottom line? Good data is both a huge investment and a major market differentiator, and it remains highly vulnerable to unauthorized access and manipulation. To secure an AI-powered future, organizations will need to implement robust data protection.
Ransomware attacks are the new normal. From the Trans-Northern Pipelines attack in February to the major UnitedHealth attack in March, ransomware is hitting every sector with increasing frequency and force.
The ransomware delivery model is also evolving to be even more effective and efficient. With the help of AI, attackers are already leveraging advanced techniques like deepfake videos and voice manipulation. They’re also using chatbots to create more convincing phishing emails and pull off more sophisticated social engineering on a larger scale.
A major driver in the rise of ransomware is the widespread availability of Ransomware-as-a-Service (RaaS) kits, which the World Economic Forum notes can cost as little as $40. Coupled with AI tools, this subscription model is allowing anyone with a laptop to become a hacker by proxy.
In the face of this highly complex threat, our approach to ransomware defense needs to change. Rather than relying solely on detection and response measures, which often lag behind attackers’ tactics, the industry needs to focus on proactivity. Instead of hoping to catch ransomware after it happens, we need to preemptively fortify our systems against the almost-certain likelihood of an attack. Consider how companies today are prepared for inevitable server outages; that’s how ransomware protection will look in the future.
One promising avenue is self-healing data. By reconstructing affected data, solutions like the ShardSecure platform can mitigate the impact of ransomware and help organizations maintain their business continuity during an attack.
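To make that idea concrete, here is a deliberately simplified, hypothetical sketch of self-healing data: the same fragment is written to several independent storage locations, every read is verified against a SHA-256 digest, and any copy that has been encrypted or tampered with is rebuilt from an intact one. The function names and the use of plain replication are illustrative assumptions only, not a description of how the ShardSecure platform actually works.

```python
import hashlib

def shard_write(data: bytes, stores: list) -> str:
    """Write the same fragment to multiple independent locations and
    record its SHA-256 digest for later integrity checks."""
    digest = hashlib.sha256(data).hexdigest()
    for store in stores:
        store["fragment"] = data
    return digest

def shard_read(stores: list, digest: str) -> bytes:
    """Return a verified copy of the fragment. If a location has been
    encrypted or tampered with, heal it from an intact copy."""
    healthy = None
    for store in stores:
        if hashlib.sha256(store.get("fragment", b"")).hexdigest() == digest:
            healthy = store["fragment"]
            break
    if healthy is None:
        raise RuntimeError("No intact copy available; reconstruction failed")
    # Self-healing step: overwrite any corrupted copies with the intact data
    for store in stores:
        if hashlib.sha256(store.get("fragment", b"")).hexdigest() != digest:
            store["fragment"] = healthy
    return healthy

# Usage: three independent storage locations (stand-ins for cloud buckets)
stores = [{}, {}, {}]
digest = shard_write(b"sensitive training data", stores)
stores[1]["fragment"] = b"\x00ENCRYPTED-BY-RANSOMWARE\x00"  # simulated attack
assert shard_read(stores, digest) == b"sensitive training data"
assert stores[1]["fragment"] == b"sensitive training data"  # corrupted copy healed
```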
Twenty years ago, we stored data on tapes and assumed it would last. When we look back in another twenty years at the ways we currently store our data, we’ll probably be just as incredulous.
The main reason? Quantum computing (QC). With its immense processing power, QC threatens to render standard encryption technologies obsolete, opening the floodgates to a wave of future cyberattacks. It will have especially dire consequences for sectors with long data retention periods, like healthcare and legal services. Once standard encryption protocols are breached by QC, these sectors will bear the brunt of an onslaught of cyber threats.
As the White House’s detailed memorandum makes clear, quantum computing is a major threat. And, as we explore in our Harvest Now, Decrypt Later brief and blog post, attackers are already collecting sensitive data to decrypt in the future. As we increasingly entrust our data to all sorts of services, we have to ask: How can we ensure the security of this data for the coming decades?
The truth is, we need a more robust strategy for data protection, one that anticipates the challenges posed by emerging technologies like QC. Post-quantum cryptography is very much an evolving field, but it’s a particularly good opportunity for exploring big ideas and embracing the art of possible.
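As one small illustration of where that exploration might lead, the sketch below shows a post-quantum key encapsulation flow. It assumes the open-source liboqs-python bindings (the oqs module) and the Kyber512 algorithm name, both of which may vary across library versions; it is not a statement about any particular vendor’s implementation.

```python
# A minimal post-quantum key-encapsulation sketch, assuming the
# liboqs-python bindings ("oqs") are installed. Algorithm names differ
# by library version; "Kyber512" is used here as an assumed example.
import oqs

kem_alg = "Kyber512"

# Receiver generates a post-quantum keypair
with oqs.KeyEncapsulation(kem_alg) as receiver:
    public_key = receiver.generate_keypair()

    # Sender encapsulates a shared secret against the receiver's public key
    with oqs.KeyEncapsulation(kem_alg) as sender:
        ciphertext, shared_secret_sender = sender.encap_secret(public_key)

    # Receiver recovers the same shared secret from the ciphertext
    shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver
# The shared secret can now key a symmetric cipher (e.g., AES-256-GCM),
# so data harvested today cannot be decrypted by a future quantum computer.
```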
One of the key ways we expect the sector to change is through increased regulation and compliance requirements. That’s partly because new technologies pose new threats to data privacy and security.
As the digital landscape continues to expand and evolve, we anticipate a growing emphasis on regulations aimed at safeguarding sensitive information, like the rising number of US state data privacy laws. This could mean tighter controls on data handling practices and heightened accountability for organizations that collect, process, and store personal data.
We’ll also have to face the question of whether data patterns are the new PII. With the help of AI, attackers will be able to piece together seemingly innocuous data points to reveal sensitive information about individuals. Experts explain that new data processing and predictive modeling tools allow bad actors to digest exponentially more data than traditional systems, infer personal behaviors and preferences, and increase the risk of personal data exposure. This poses a fresh set of challenges, if not a total paradigm shift, for data security professionals.
The same challenges will apply to IP and trade secrets. In the near future, people will be able to collect large amounts of non-sensitive information and analyze it with AI to reveal proprietary information. For example, someone could determine the recipe for Coca-Cola — including its AI-assisted 2023 flavor — by tracking down supplier data and then having an LLM aggregate and analyze that data.
The good news? The same spirit of technological innovation that produced AI (and, inadvertently, a host of new data privacy risks) will lead to brand-new tools for securing personal and enterprise data.
RSAC 2024 is just around the corner, and we’re excited to see what our cybersecurity colleagues are cooking up for the Art of Possible. From innovative AI advancements to the persistent threat of ransomware and the looming specter of quantum computing, there will be no shortage of developments to keep an eye on.
Planning to be at RSAC? Come visit the ShardSecure team at booth #5263 in Moscone North and meet some of our key technology partners to talk about our integrations. We hope to see you there!
How Data Collaboration Platforms Can Help Companies Build Better AI | Harvard Business Review
Canada’s Trans-Northern Pipelines claimed to be attacked by ALPHV/BlackCat | SC Media
Health industry struggles to recover from cyberattack on a unit of UnitedHealth | NPR
3 trends set to drive cyberattacks and ransomware in 2024 | World Economic Forum
Post-Quantum Cryptography Initiative | CISA
AI and Privacy: Safeguarding Data in the Age of Artificial Intelligence | Digital Ocean