AI is already part of how enterprises operate. Built into workflows, decision-making, and security operations, it has moved well past experimentation into real use.
Our latest global survey, conducted with the Ponemon Institute, makes that clear. More than half of enterprises say they have already deployed generative AI in some form, but adoption is moving faster than the controls and systems needed to manage it. Only about one in five respondents report reaching a level of AI maturity where cybersecurity risks are clearly understood and actively managed. The rest are still building those foundations while AI is already in use.
Now, that gap is shaping real risks.
A Control Problem, Not An Adoption Constraint
There is no shortage of momentum behind AI. Enterprises are moving quickly, finding use cases, and putting tools into practice across the business.
What is less consistent is how those systems are governed once in place.
We surveyed nearly 1,900 IT and security practitioners across the globe, and fewer than half say they have a risk-based approach in place for managing AI. Only 41% report having AI-specific data privacy policies in place.
At the same time, the risks they’re working through are fundamental to how these systems operate:
- 62% say it is difficult to reduce AI model bias and ethical risks
- 58% struggle with prompt and input risks, including misleading or harmful outputs
- 56% report challenges managing user-driven risks like misinformation
Without the right controls in place, those risks are translating into security consequences.
Where The Security Impact Shows Up
Even as enterprises adopt GenAI to improve security operations, nearly six in ten say it is making privacy and security compliance more difficult. Practitioners also report ongoing challenges with model bias, unreliable outputs, and errors tied to poor-quality or incomplete data.
That lack of reliability limits trust: 51% of organizations say human oversight is still required to govern AI, not as a preference, but because systems cannot yet be relied on to operate independently.
AI is being introduced into workflows, but without a strong foundation, the outcomes are inconsistent.
Closing The Gap Between Adoption And Security
The opportunity with AI is still real, and enterprises will continue to expand how they use it. What needs to catch up is the foundation around it.
Security, governance, and information management are not supporting elements. They are core to how AI systems function and need to be built in from the start, not added later once gaps appear.
An intentional, secure approach to AI has four key pillars:
- Identity and access management: As AI agents take on more work across the enterprise, organizations need to treat them like any other privileged actor. OpenText Identity Core Foundation empowers teams to extend identity and access controls to non-human identities and AI agents, enforce least-privilege access, and apply clear policy guardrails around what agents can access and what they are allowed to do. That gives organizations stronger control over autonomous activity from the start.
- Data security: The biggest concern for many enterprises is not just what AI can generate, but what it can access, expose, change, or move. OpenText Data Privacy & Protection Foundation helps organizations protect sensitive data, repositories, and PII with data-centric security controls, so AI systems are working with the right information under the right protections. This is critical to reducing privacy, compliance, and data integrity risks as AI becomes more embedded in operations.
- Threat detection and response: In the agentic era, security teams need visibility not only into human behavior, but also into agent behavior. OpenText Core Threat Detection and Response helps organizations monitor activity in real time, detect when a user or agent is acting outside normal behavior or approved policy, and investigate threats faster. This kind of continuous monitoring is essential for maintaining trust as AI systems become more autonomous.
- Application security: AI security also starts upstream in the software itself. OpenText’s application security solutions help organizations build security into the code and applications behind AI-driven workflows, so vulnerabilities can be identified and addressed earlier in development. That secure-by-design approach is increasingly important as enterprises embed AI into more business-critical applications and processes.
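The least-privilege guardrail described in the first pillar can be sketched generically. This is an illustrative pattern only, not OpenText's API: the `AgentPolicy` class, the allow-list of (action, resource) pairs, and the `guarded_call` wrapper are all assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AgentPolicy:
    """Least-privilege policy for a non-human identity: an explicit
    allow-list of (action, resource) pairs; everything else is denied."""
    agent_id: str
    allowed: frozenset = field(default_factory=frozenset)  # {(action, resource), ...}

    def permits(self, action: str, resource: str) -> bool:
        return (action, resource) in self.allowed

def guarded_call(policy: AgentPolicy, action: str, resource: str) -> str:
    """Check the policy before every agent action; deny by default."""
    if not policy.permits(action, resource):
        raise PermissionError(
            f"agent {policy.agent_id}: '{action}' on '{resource}' denied by policy")
    return f"{action} on {resource} executed"

# Example: a summarization agent may read tickets but not modify them.
policy = AgentPolicy("summarizer-01", frozenset({("read", "tickets")}))
print(guarded_call(policy, "read", "tickets"))  # allowed by the policy
try:
    guarded_call(policy, "write", "tickets")    # not in the allow-list
except PermissionError as e:
    print(e)
```

The design choice here is deny-by-default: an agent can do only what its policy explicitly grants, which is the same posture organizations already apply to privileged human accounts.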
Together, these four pillars give organizations a stronger foundation for responsible AI adoption: governed identities, protected data, continuous monitoring, and secure applications. That is how enterprises can move from AI experimentation to AI they can trust.
To read the full report findings from OpenText and Ponemon, click here.