
As the ‘Age of AI’ beckons, it’s time to get serious about data resilience

Rick Vanover, Vice President of Product Strategy, Veeam, explores the potential and the processes organisations need to consider before powering their data resilience with AI.

Almost two decades ago, Clive Humby coined the now-famous phrase "data is the new oil". With artificial intelligence (AI), we've got the new internal combustion engine. The discourse around AI has reached fever pitch, but this 'age of AI' we have entered is just a chapter in a story that's been going on for years – digital transformation.

The AI hype gripping every industry right now is understandable. The potential is big, exciting, and revolutionary, but before we run off and start our engines, organisations need to put processes in place to power data resilience and ensure their data is available, accurate, protected, and intelligent so that their business continues to run no matter what happens. Look after your data, and it will look after you.

Take control before shadow sprawl does

When it comes to something as pervasive and ever-changing as a company's data, it's far easier to manage with training and controls in place early on. You don't want to be left trying to 'unbake the cake.' The time to start is now. The latest McKinsey Global Survey on AI found that 65% of respondents reported that their organisation regularly uses generative AI – double the figure from just ten months earlier. But the stat that should give IT and security leaders pause is that nearly half of respondents said they are 'heavily customising' or developing their own models.

This is a new wave of 'shadow IT' – the unsanctioned or unknown use of software or systems across an organisation. For a large enterprise, keeping track of the tools that teams across various business units might be using is already a challenge. Departments or even individuals building or adapting large language models (LLMs) will make it even harder to manage and track data movement and risk across the organisation. The fact is, it's almost impossible to have complete control over this, but putting processes and training in place around data stewardship, data privacy, and IP will help. If nothing else, having these measures in place makes the company's position far more defensible if anything goes wrong.

Managing the risk

It's not about being the progress police. AI is a great tool that organisations and departments will get enormous value out of. But as it quickly becomes part of the tech stack, it's vital to ensure these tools fall within the rest of the business's data governance and protection principles. For most AI tools, it's about mitigating the operational risk of the data that flows through them. Broadly speaking, there are three main risk factors: security (what if an outside party accesses or steals the data?), availability (what if we lose access to the data, even temporarily?), and accuracy (what if what we're working from is wrong?).

This is where data resilience is crucial. As AI tools become integral to your tech stack, you need to ensure visibility, governance, and protection across your entire 'data landscape'. It comes back to the relatively old-school CIA triad – maintaining confidentiality, integrity, and availability of your data. Rampant or uncontrolled use of AI models across a business could create gaps. Data resilience is already a priority in most areas of an organisation, and LLMs and other AI tools need to be covered too. Across the business, you need to understand your business-critical data and where it lives. Companies might have good data governance and resilience now, but if adequate training isn't put in place, uncontrolled use of AI could cause issues. Worse still, you might not even know about them.

Building (and maintaining) data resilience

Ensuring data resilience is a big task – it covers the entire organisation, so the whole team needs to be responsible. It's also not a 'one-and-done' task: things are constantly moving and changing, and the growth of AI is just one example of something that must be reacted and adapted to. Data resilience is an all-encompassing mission that covers identity management, device and network security, and data protection principles like backup and recovery. It's a massive de-risking project, but for it to be effective it requires two things above all else: the already-mentioned visibility, and senior buy-in. Data resilience starts in the boardroom. Without that buy-in, projects fall flat, funding limits how much can be done, and protection and availability gaps appear. The fatal 'NMP' ("not my problem") can't fly anymore.

Don't let the size of the task stop you from starting. You can't do everything, but you can do something, and that is infinitely better than doing nothing. Starting now will be much easier than starting in a year, when LLMs have sprung up across the organisation. Many companies risk falling into the same trap as they did with cloud migration all those years ago: going all-in on the new tech and ending up wishing they'd planned ahead, rather than having to work backwards. Test your resilience by doing drills – the only way to learn how to swim is by swimming. When testing, make sure you include some realistic worst-case scenarios. Try running a drill without your disaster lead (they're allowed to go on vacation, after all). Have a plan B, C, and D. By doing these tests, it's easy to see how prepared you are. The most important thing is to start.


