Generative AI (GenAI) is fundamentally transforming how users work and interact with information, redefining workplace productivity. But while GenAI makes adoption look easy, without careful planning, policy management, and automation it can put your organization at risk. McKinsey & Company brings the stakes into focus: “As more organizations use AI to enhance operations, they risk inadvertently introducing new cyber-related threats. Bad actors are also using AI to fuel more sophisticated cyberattacks.” [1] So how can we bring the productivity promise of GenAI to the workplace, within a defensible AI strategy, without compromising our cybersecurity posture?
Hidden risks of an IT strategy that neglects AI governance
When you introduce GenAI as a tool, employees may be quick to apply it to their daily work product and most treasured information assets. Suddenly, employees are feeding sensitive customer data, proprietary documents, and confidential strategies into systems that may not respect your carefully crafted security policies, or that create data-leak risks by taking information outside your firewall. A homegrown GenAI solution may inadvertently become a super-user, exposing information to which the user should never have access, or worse, creating a backdoor for unauthorized access.
According to Deloitte’s “2025 Technology Industry Outlook,” trust issues surrounding data privacy, security, and accuracy pose the biggest barriers to enterprise AI adoption. [2] These are warning signs that customers may be missing key elements of a safe, secure, and defensible AI strategy.
Great AI starts with great content management
A great content management program incorporates repeatable policy, built-in content security, automation, integration, and information governance in a tightly orchestrated system configured to help you stay compliant and safe. With that capability as a foundation, and a tightly coupled GenAI capability on top of it, you are well on your way to delivering secure AI to every user, without it keeping you up at night.
Elements of a secure AI strategy
At OpenText, we believe that GenAI doesn’t have to be difficult or present a giant cybersecurity headache that’s hard to plan around or manage. Getting GenAI right starts with some common elements:
- Great content management. Great AI starts with great content management that organizes and labels your unstructured data, applies role- and policy-based security and information governance, integrates with other processes and applications, and is centrally managed through a business workspace. (A minimal sketch follows this list.)
- Policy-bound grounding. Because GenAI acts as a digital assistant and extension of the user, it must be grounded in contextually relevant, safe content that honors the user’s access controls and permissions, preventing inadvertent exposure, data leaks, and compliance violations. This is accomplished through a retrieval-augmented generation (RAG) architecture that is tightly coupled to the content management system’s permission structure. (See the second sketch after this list.)
- Trustworthy frontier large language models with zero-retention terms of service. Allowing a large language model provider to train its GenAI on your proprietary data or personally identifiable information (PII) is unacceptable to any organization and runs afoul of information governance standards and privacy regulations in most jurisdictions. The solution is to work only with GenAI providers that adhere to a “zero-retention” model, in which no client information is ever retained or used for training.
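To make the first element concrete, here is a minimal sketch, in Python, of how a managed repository might attach classification labels and role-based permissions to unstructured content. Every name here (ManagedDocument, readable_by, the example roles) is a hypothetical illustration, not an OpenText API.

```python
# A minimal sketch (all names hypothetical) of content carrying the labels and
# role-based permissions that a downstream GenAI assistant can honor.
from dataclasses import dataclass, field

@dataclass
class ManagedDocument:
    doc_id: str
    workspace: str           # the business workspace the document lives in
    classification: str      # e.g., "public", "internal", "confidential"
    allowed_roles: set[str] = field(default_factory=set)

    def readable_by(self, user_roles: set[str]) -> bool:
        """A user may read the document only if they hold a permitted role."""
        return bool(self.allowed_roles & user_roles)

# Example: a confidential strategy document scoped to two roles.
doc = ManagedDocument(
    doc_id="FIN-2025-001",
    workspace="finance",
    classification="confidential",
    allowed_roles={"finance-lead", "cfo"},
)
print(doc.readable_by({"engineer"}))      # False: role not permitted
print(doc.readable_by({"finance-lead"}))  # True
```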
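And here is a minimal sketch of the second element, permission-bound RAG: retrieval is filtered by the caller’s roles before anything reaches the model, so answers can only be grounded in content the user is allowed to see. The generate() stub and the dict-based corpus are assumptions for illustration; a production system would query a vector index and call a real zero-retention LLM client.

```python
# Permission-bound RAG sketch (all names hypothetical).

def generate(prompt: str) -> str:
    """Stand-in for a call to a zero-retention LLM; not a real client library."""
    return f"[answer grounded only in the supplied context]\n{prompt}"

def answer(query: str, user_roles: set[str], corpus: list[dict]) -> str:
    # 1. Enforce permissions first: drop every document the caller cannot read.
    visible = [d for d in corpus if d["allowed_roles"] & user_roles]
    # 2. Naive keyword retrieval (a real system would query a vector index).
    hits = [d["text"] for d in visible
            if any(w in d["text"].lower() for w in query.lower().split())]
    # 3. Ground the prompt only in permitted, relevant content.
    context = "\n---\n".join(hits) or "No accessible content."
    return generate(f"Context:\n{context}\n\nQuestion: {query}")

corpus = [{"allowed_roles": {"finance-lead"},
           "text": "Q3 revenue strategy: expand premium pricing tiers."}]
print(answer("revenue strategy", {"finance-lead"}, corpus))  # grounded answer
print(answer("revenue strategy", {"engineer"}, corpus))      # doc never reaches the model
```

The key design choice is ordering: the permission check runs before retrieval and generation, so restricted content is never even a candidate for the prompt.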
Build a secure AI strategy with OpenText
At OpenText, your information management success is critical to our mission. We’ve prepared more detailed information on building a secure AI strategy, including a white paper and webinar. We’d also like to hear more about your journey into GenAI and answer any questions you have, so reach out to start a conversation with us about OpenText™ Content Aviator and our larger AI content management portfolio.
- [1] Greis, J., Sorel, M., Fuchs-Souchon, J., & Banerjee, S. (2024, November 14). The cybersecurity provider’s next opportunity: Making AI safer. McKinsey & Company. https://www.mckinsey.com/capabilities/risk-and-resilience/our-insights/the-cybersecurity-providers-next-opportunity-making-ai-safer
- [2] Deloitte. 2025 Technology Industry Outlook.