A secure foundation for AI content management


Mike Safar

October 07, 2025 | 3 min read


Generative AI (GenAI) is fundamentally transforming how users work and interact with information, redefining workplace productivity. While GenAI makes it look easy, it can, without careful planning, policy management, and automation, put your organization at risk. McKinsey & Company brings it into focus: “As more organizations use AI to enhance operations, they risk inadvertently introducing new cyber-related threats. Bad actors are also using AI to fuel more sophisticated cyberattacks.” [1] So what can we do to bring the productivity promise of GenAI to the workplace with a defensible AI strategy, without compromising our cybersecurity posture?

Hidden risks of an IT strategy that neglects AI governance

When you introduce GenAI as a tool, employees may be quick to apply it to their daily work product and most treasured information assets. Suddenly, they are feeding sensitive customer data, proprietary documents, and confidential strategies into systems that may not respect your carefully crafted security policies, or worse, that create data leak risks by taking information outside your firewall. Homegrown GenAI solutions may inadvertently become a super-user, exposing information the user should never have access to, or even creating a backdoor for unauthorized access.

According to Deloitte’s “2025 Technology Industry Outlook,” trust issues surrounding data privacy, security, and accuracy pose the biggest barriers to enterprise AI adoption [2]. These are warning signs that customers may be missing key elements of a safe, secure, and defensible AI strategy.

Great AI starts with great content management

A great content management program incorporates repeatable policy, built-in content security, automation, integration, and information governance in a tightly orchestrated system configured to help you stay compliant and safe. When you have such a capability as a foundation, along with a tightly coupled GenAI capability, you are well on your way to delivering secure AI to every user without losing sleep over it.

Elements of a secure AI strategy

At OpenText, we believe that GenAI doesn’t have to be a giant cybersecurity headache that’s difficult to plan around or manage. Getting GenAI right starts with some common elements:

  • Policy-bound grounding. Because GenAI acts as a digital assistant, an extension of the user, it must be grounded in contextually relevant, safe content that honors the user’s access controls and permissions to prevent inadvertent exposure, data leaks, and compliance violations. This is accomplished through a retrieval-augmented generation (RAG) architecture that is tightly coupled to the content management system’s permission structure (see the sketch after this list).
  • Trustworthy frontier large language models with zero-retention terms of service. Allowing a large language model provider to train its GenAI on your proprietary data or personally identifiable information (PII) is unacceptable to any organization and non-compliant with nearly all jurisdictions and information governance standards. A secure solution works only with GenAI services that adhere to a “zero-retention” model, where no client information is ever retained or used for training.
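
To make the first element concrete, here is a minimal sketch of permission-bound RAG retrieval. It is illustrative only: the class and function names (Document, User, retrieve, answer) are hypothetical placeholders, not OpenText APIs. The key idea is that the user’s access controls are enforced on content before anything is assembled into a prompt for the model.

```python
# Minimal sketch of permission-bound retrieval-augmented generation (RAG).
# All names here are hypothetical; the point is that retrieval is filtered
# by the *requesting user's* permissions before any content reaches the LLM.

from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set[str] = field(default_factory=set)  # ACL on the content


@dataclass
class User:
    user_id: str
    groups: set[str]


def user_can_read(user: User, doc: Document) -> bool:
    """Honor the content repository's permission model."""
    return bool(user.groups & doc.allowed_groups)


def retrieve(query: str, user: User, store: list[Document], k: int = 3) -> list[Document]:
    """Enforce ACLs first, then rank only the documents this user may read."""
    readable = [d for d in store if user_can_read(user, d)]

    # Placeholder relevance score: count of query terms appearing in the text.
    def score(d: Document) -> int:
        return sum(term.lower() in d.text.lower() for term in query.split())

    return sorted(readable, key=score, reverse=True)[:k]


def answer(query: str, user: User, store: list[Document]) -> str:
    context = retrieve(query, user, store)
    if not context:
        return "No authorized content found for this question."
    prompt = "Answer using only the context below.\n\n"
    prompt += "\n---\n".join(d.text for d in context) + f"\n\nQuestion: {query}"
    # Hand the grounded, permission-filtered prompt to a zero-retention LLM here.
    return prompt  # stand-in for the model call


if __name__ == "__main__":
    store = [
        Document("hr-001", "Salary bands for 2025...", {"hr"}),
        Document("pr-001", "Public product FAQ...", {"everyone", "hr"}),
    ]
    analyst = User("jdoe", {"everyone"})
    print(answer("What are the salary bands?", analyst, store))
    # The HR document is never retrieved for this user, so it cannot leak
    # into the prompt or the model's answer.
```

In a production system, the access check would defer to the content repository’s native permission model rather than a local allow-list, and the final prompt would be sent to a model endpoint operating under zero-retention terms.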

Build a secure AI strategy with OpenText

At OpenText, your information management success is critical to our mission. We’ve prepared more detailed information on building a secure AI strategy, including a white paper and webinar. We’d also like to hear more about your journey into GenAI and answer any questions you have, so reach out to start a conversation with us about OpenText™ Content Aviator and our larger AI content management portfolio.

Download the white paper
Watch the webinar
  1. Greis, J., Sorel, M., Fuchs-Souchon, J., & Banerjee, S. (2024, November 14). The cybersecurity provider’s next opportunity: Making AI safer. McKinsey & Company. https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-cybersecurity-providers-next-opportunity-making-ai-safer
  2. Deloitte. (2025). 2025 Technology Industry Outlook.


Mike Safar

Mike Safar leads product marketing for OpenText AI content management and information governance products and serves as a subject matter expert on the intersection of generative AI and information management. Mike has been a product marketing and product management leader in content management and information governance for more than 30 years, having previously held leadership positions at Interwoven, Hewlett Packard Enterprise, and PC DOCS Group.


