When implementing an AI strategy and placing generative AI (GenAI) in the hands of everyday users, trust and security are paramount. At OpenText™, we understand the critical importance of safeguarding your most valuable and sensitive information assets and staying true to ethical AI while unleashing GenAI's transformative power. OpenText Content Aviator has been carefully crafted and built upon our AI content management leadership, ethical AI practices, and unparalleled information governance expertise, ensuring that your data remains protected every step of the way.
Ethical AI starts with accountability
OpenText understands the growing power of AI and its potential to change the way we work. Generative AI has fundamentally transformed how we search for information and interact with unstructured content. This power requires policy and guardrails, which is why we formed a Bill of AI Obligations to guide our AI strategy and how we deliver this technology to our customers.
OpenText Bill of AI Obligations:
- Transparency builds trust
- AI and ethical AI are the same thing
- It starts with value-based design
- Your data is not our product
- Respect intellectual property, images, and likeness
- Security and privacy remain paramount
- Dedicated to accurate, verifiable AI results
- Promote the common good
The importance of strong content management and information governance
Large language models (LLMs) work primarily with unstructured data, which represents over 80% of the information held in a typical organization. Just as training an LLM requires a well-governed content management approach, so does using one, to ensure that GenAI remains safe and trustworthy.
When first engineering the OpenText Content Aviator architecture, we built upon our content management and information governance roots and the foundational belief that “great AI needs great content management.” OpenText’s longstanding strength in information governance sets us apart: we have a deep understanding of the complex compliance and security risks that arise when AI strategies lack proper governance practices.
At the core of these practices is a centralized business workspace that enforces and automates security, governance, and metadata. The workspace also provides the context for GenAI grounding: it supplies the raw data the assistant needs and limits responses to relevant documents, greatly improving response accuracy.
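As a simplified illustration of grounding (a hypothetical sketch in Python, not Content Aviator's actual implementation; the Document class and build_grounded_prompt function are illustrative names), the prompt sent to the model is built only from documents in the workspace, which constrains answers to that content:

```python
# Hypothetical sketch of workspace grounding; names are illustrative,
# not OpenText APIs.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    title: str
    text: str

def build_grounded_prompt(question: str, workspace_docs: list[Document]) -> str:
    """Constrain the model to answer only from workspace content."""
    context = "\n\n".join(
        f"[{d.doc_id}] {d.title}\n{d.text}" for d in workspace_docs
    )
    return (
        "Answer using ONLY the documents below. "
        "If the answer is not in them, say you cannot find it.\n\n"
        f"Documents:\n{context}\n\nQuestion: {question}"
    )
```

Because the model sees only workspace content, its responses stay anchored to the documents the user is actually working with rather than the open web.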
Data security is a prime directive
At OpenText, we believe that your data is yours alone, which is why it will never be used to train AI without your consent. Moreover, our use of leading-edge foundation models such as Google Gemini is engineered so that our model providers cannot and do not use your data for training. Data and prompts used for inference are never stored by the model providers. This commitment guarantees that your proprietary information remains confidential and is used solely for your intended purposes.
OpenText Cloud is engineered to the most stringent security standards, including ISO 27001 certification and compliance with other security and quality standards such as FedRAMP, GxP, and HIPAA. These standards ensure that data is never exposed unencrypted. OpenText gives customers transparency into our security protocols and architecture, along with the ability to comply with the regulatory audits necessary to manage their most important content repositories.
Protecting sensitive data and honoring security policies
As an intelligent content assistant, OpenText Content Aviator is an extension of the user and therefore has the same access to privileged information as the user, no more. All inference tasks are conducted only on permitted content. Put simply, OpenText Content Aviator cannot be used to bypass policy-based security or the explicit access controls you place on a document, folder, or business workspace.
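In simplified terms (again a hypothetical sketch, not Content Aviator's actual code; the permission table and function below are invented for illustration), permission trimming happens before inference, so content a user cannot read never reaches the model:

```python
# Hypothetical illustration: access checks run BEFORE inference, so content
# a user cannot read never reaches the LLM. The permission table and
# function are illustrative, not OpenText APIs.

PERMISSIONS = {  # doc_id -> users with read access
    "doc-1": {"alice", "bob"},
    "doc-2": {"alice"},
}

def permitted_context(user: str, candidate_doc_ids: list[str]) -> list[str]:
    """Keep only the documents the requesting user is allowed to read."""
    return [d for d in candidate_doc_ids if user in PERMISSIONS.get(d, set())]

# bob's session can ground on doc-1 only; doc-2 is filtered out before any
# prompt is built, so it cannot leak through a model response.
print(permitted_context("bob", ["doc-1", "doc-2"]))  # -> ['doc-1']
```

The design choice matters: filtering at retrieval time, rather than trying to redact a generated answer afterward, means denied content is never part of the model's input in the first place.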
Large language model behavior and choice
OpenText is partnering with providers of leading-edge foundation models, including Google Gemini, OpenAI hosted by Microsoft, and Mistral hosted by Amazon Web Services. These are all high-performing models, expertly trained on a wide variety of languages and curated content. They are designed and aligned for unbiased, accurate, and safe generation, and they are compatible with OpenText’s Bill of AI Obligations. The field of model training is changing rapidly, and our partnerships with these foundation model providers give OpenText Content Aviator customers the reassurance of the highest level of alignment with ethical and safe AI available.
No matter which LLM you choose, the secure OpenText Content Aviator architecture remains the same: customer data is isolated except at the moment of inference, and it is never stored or used for training on the model provider’s AI infrastructure.
Committed to your mission, safety, and ethical AI
Trust and security are central to our approach to delivering intelligent assistance and groundbreaking AI content strategies to our customers. OpenText is committed to safeguarding your data, ensuring that your sensitive information remains confidential and is utilized only in support of your organization’s mission.
Start your journey into AI content management and sign up for a free trial of OpenText Content Aviator today.