In my last post, I explored why Data Risk Management (DRM) has become a top board-level priority, with cyberattacks and recovery readiness at the forefront. But DRM is not only about defending against hackers. The reality is that the way organisations handle their data today has far-reaching implications, especially as artificial intelligence becomes embedded into everyday business operations.
The Double-Edged Sword of AI
Back in May, I listened to an interview with Dimitri Sirota from BigID, who highlighted a critical point: when organisations lack control over their data, the risks multiply once AI comes into play.
AI models are often trained and enriched using vector databases. These databases store embeddings that are linked back to reference data. That data is effectively “memorised” by the model and carried forward into future outputs.
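To make that linkage concrete, here is a minimal, illustrative sketch of a vector store in which every embedding carries a pointer back to its source record. The embedding function, store layout, and record names are all hypothetical simplifications (a real system would use an embedding model and a dedicated vector database), but the key point holds: the reference data travels with the embedding and resurfaces at retrieval time.

```python
import hashlib

def embed(text: str, dim: int = 8) -> list[float]:
    """Toy embedding: a hash-derived vector, standing in for a real model."""
    digest = hashlib.sha256(text.encode()).digest()
    return [b / 255 for b in digest[:dim]]

# A minimal in-memory vector store: each entry links back to its source data.
vector_store: list[dict] = []

def index(record_id: str, text: str) -> None:
    vector_store.append({
        "id": record_id,
        "vector": embed(text),
        "source_text": text,  # the reference data is stored alongside the vector
    })

# Sensitive data slips in here with no classification check at all.
index("hr-001", "Jane Doe, salary: 95,000")

# At retrieval time, the original text is handed straight to the model's context.
hit = vector_store[0]
print(hit["source_text"])
```

Once indexed, nothing downstream distinguishes this record from harmless content, which is exactly why the controls have to sit in front of the store.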
Without clear policies around access, classification, and protection, sensitive information can slip in unnoticed. The consequences can be serious, ranging from staff uncovering confidential salary information to external exposures that erode customer trust.
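One concrete form such a policy can take is a classification gate applied before anything is embedded. The sketch below is purely illustrative: the pattern names and rules are assumptions for this example, and a real deployment would rely on a proper data classification or DLP service rather than two regexes. It shows only the shape of the control: classify first, index second.

```python
import re

# Hypothetical classification rules for illustration only; production systems
# would call out to a dedicated classification / DLP service instead.
SENSITIVE_PATTERNS = {
    "salary": re.compile(r"\bsalary\b", re.IGNORECASE),
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def classify(text: str) -> set[str]:
    """Return the set of sensitive labels detected in the text."""
    return {label for label, pat in SENSITIVE_PATTERNS.items() if pat.search(text)}

def safe_to_index(text: str) -> bool:
    """Gate applied before any text is embedded into a vector store."""
    return not classify(text)

print(safe_to_index("Quarterly product roadmap"))        # True
print(safe_to_index("Jane's salary was raised to 95k"))  # False
```

The design choice matters: the check runs at ingestion, not at query time, because once data is embedded and memorised it is far harder to retract.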
Adoption Moving Faster Than Governance
Generative AI is different from previous hype cycles like blockchain or quantum computing. Its adoption has been fast, widespread, and in many cases unsanctioned. Employees are already experimenting with it in ways that bypass formal governance.
The result is a gap. Human behaviour naturally introduces risk, but it is the absence of strong data foundations that turns those risks into potential crises. Even when IT and data teams put significant effort into securing sensitive information, data is often copied, moved, or used in testing environments without the same level of protection. What was once treated as the “crown jewels” of the organisation can suddenly find its way into an AI model, embedded and overlooked until it is too late.
The risks are only intensifying with the rise of Agentic AI. Unlike traditional models, Agentic AI can operate with autonomy, chaining together tasks and making independent decisions at scale. It works faster than any human can intervene, introducing a completely new layer of complexity and urgency. Without strong governance, Agentic AI can generate damaging outputs at speed, bypassing conventional control mechanisms.
Building Governance Into the Foundation
This is why guardrails, access policies, and risk frameworks must be embedded into the very foundation of model development and AI strategy. It is no longer enough to react to risks after they surface.
The responsibility starts with knowing your data, managing it intelligently, and embedding data risk awareness into every stage of your AI journey. If you are not thinking about your data risk posture today, rest assured that AI will force the conversation sooner than you think.
At Nephos, we combine technical expertise with the strategic business value of traditional professional service providers to deliver innovative data solutions. We help organisations secure their data wherever it resides, providing the visibility and control needed to reduce risk without compromising agility. Find out how.