PAID CONTENT

Quest Software powers the path to real-world AI impact

Data discipline drives results, says Quest Software’s CEO, who shares how to build confidence in a company’s AI strategy and turn AI pilots into AI success.

The enormous potential of AI is undeniable, so why are companies choosing to leave the vast majority of that potential on the table? The gap is everywhere: Businesses are spending billions on AI, yet most of that investment stays siloed in sandboxes, far from the company's real-world operations. Leaders are terrified to let it off the leash, and for good reason: they are deeply concerned about the dangers of the untrusted, unreliable data used to prompt AI.

People are used to software that is deterministic. It's predictable, auditable, and safe. Generative AI is not that—not yet. It produces confident hallucinations and smoothed-over inaccuracies, the kind that happen when a probabilistic model has no real understanding of the business context or the specific domain in which it's operating.

Quest Software's experiences with AI models confirm this. People cannot remove the human from the loop. Companies are comfortable with AI as a timid intern, asking, "I've analyzed the data. Would you like me to act?" But they are terrified of the virtual assistant that doesn't ask, the one that acts first and reports back, "I've analyzed the data, and I have already acted," at which point it's often too late. This gap between insight and autonomous action isn't a technology barrier. It comes down to a single problem: trust.

“AI is only as good as the data,” says Tim Page, CEO of Quest Software, a global leader in data management, cybersecurity, and platform modernization. “And most organizations know their data isn’t good enough. Today’s enterprise data is fragmented, untrusted, and underutilized.”

Following the rapid rise of AI, many companies find themselves trapped in what experts call "pilot purgatory." They're building sophisticated AI models, but they stop short of letting the system take the final, value-generating step: acting on its insights automatically. And who would honestly let agents loose without guardrails? The hesitation isn't about capability. It's about a fundamental deficit of trust and security.

Business leaders are rightly hesitant to hand over the keys to their AI. For many, two things are missing. First, they don’t trust their data. They know their data is fragmented, siloed, and of questionable quality. Second, they don’t have the right guardrails. They have no way to reliably control, audit, or secure the AI’s actions.

Page says that true AI readiness has little to do with the model itself and everything to do with the foundation of trust that enables autonomy. That foundation rests on three pillars: governed, AI-ready data that’s complete and consistent; secure, auditable identities to define what humans and machines can access; and a modern, resilient platform capable of operating at the speed, scale, and interoperability real-time AI execution demands—without exposing critical data.

The foundation problem

The failure rate of AI pilots has sparked debate about whether AI can handle complex or sensitive business tasks. But before pointing fingers at the technology, Page urges leaders to look inward. “One survey found 99% of AI projects encounter data quality problems,” says Page. “You’re not going to let AI automatically engage a customer when you know its data is untrustworthy. ‘Garbage in, garbage out’—at AI speed—is a disaster.”

The issue is that most organizations are still operating with a 20th-century data strategy—one that treats data as a historical record, collected for human analysis rather than machine execution. “We’ve been using about 20% of the data we collect,” says Page. “Now, the goal is to build an active, trusted data ecosystem that can utilize all of it in real time.”

That new ecosystem begins with unifying structured and unstructured data. For decades, analytics focused on structured data—spreadsheets, databases, and CRM records—while unstructured information such as emails, chat logs, and call transcripts languished in storage. Generative AI has flipped that dynamic.

“All the real context is in that unstructured data,” says Page. “Structured data tells you what happened. Unstructured data tells you why. If your AI can’t see both, it’s making blind decisions.”

From access to context

Building this unified view requires more than access—it requires context. That’s the second major shift in modern data governance. “The semantic layer becomes non-negotiable,” says Page. “It’s the Rosetta Stone that sits between the AI and your messy databases, ensuring the AI understands what the data means, how it relates, and what its quality score is.”

This contextual understanding breaks down logical silos. Even if customer revenue appears in 10 different systems, the semantic layer defines one trusted version of truth. “It’s what delivers consistency, breaks duplication, and builds trust,” says Page.

That trust is what enables AI to move beyond analysis into judgment-based automation—the ability to make real-world decisions with human-level nuance. Quest’s erwin Data Management Platform helps power that transition by discovering, classifying, and scoring metadata to ensure data is complete, consistent, and compliant. It’s the only converged data management platform to bring data intelligence, modeling, governance, and metadata management together in one integrated environment. The payoff: faster time to market, improved productivity, and measurable business value.

Security as a prerequisite

But as organizations modernize their data environments, a new challenge emerges: security. The migration from legacy systems to unified platforms solves access problems—but it also exposes new governance risks. Suddenly, a company’s most valuable gen AI context—emails, documents, and chat logs—exists in one searchable environment.

“Without proper controls, you’re one bad prompt away from the AI summarizing a sensitive HR complaint or a draft press release for the wrong employee,” says Page. “That’s the new battleground: unstructured data governance.”

Along with helping its customers deliver trusted, AI-ready data, Quest has made this battleground part of its focus. The company recently became the first to achieve Microsoft 365 Certification for migration capabilities, helping consolidate and modernize Microsoft identities and cloud tenants more securely. Its identity security solutions provide integrated protection, detection, containment, and rapid recovery—accelerating ransomware recovery by up to 90% and protecting identity assets, which Page calls “the new perimeter in AI-driven enterprises.”

“If governed data is AI’s guardrail for thinking, then identity security is the guardrail for acting,” says Page. “AI’s success comes down to two questions: ‘Can I trust what it knows?’ and ‘Can I trust what it does?’”

Why pilots don’t scale

Even with the right models and data, most AI pilots stall for one simple reason: trust. “It’s easy to build an impressive demo that works in isolation,” says Page. “The hard part is wiring it into the actual business. With most pilots, no one ever built the plumbing to connect it to the customer relationship management or enterprise resource planning platform—it’s sitting in a silo just like the data it’s using.”

That’s why Page calls the failure of most pilots a “plumbing problem” rather than a technology one. The companies breaking through aren’t chasing flashy demos—they’re investing in infrastructure. “The inspirational leaders who are actually scaling are obsessing over the power grid,” says Page. “They’re brave enough to admit their data is a mess and are making the hard, unglamorous investment to fix it.”

In other words, the real winners in the AI economy aren’t those moving the fastest—they’re those building the strongest foundation. “I talk to customers all the time. Everyone is building ‘science fair projects,’ and frankly, they should be. It’s the only way to learn,” says Page. “The failures happen when people think a pilot is the finish line. You can’t just ‘productionize’ a demo. You have to rebuild it for the real world, on trusted data, with all the plumbing. That is the only foundation that will enable the future.”