Many of us have been dazzled by the power and promise of generative AI since OpenAI's ChatGPT burst onto the scene late last year. A real technological disruption is underway: Large Language Models (LLMs) are rapidly maturing, and their mastery of natural human language is impressive. Generative AI is entering the workplace too. And with that, challenges like security, privacy, and data access must be tackled to make sure ITSM solutions remain secure and business-relevant.
Here at OpenText, we are excited to announce IT Operations Aviator, a secure generative AI virtual agent powered by a private large language model (LLM). To meet the needs of enterprises, Aviator acts like a subject matter expert who is always available, learns over time, and finds the most relevant information to better serve your customers or automatically resolve problems. Our private generative AI virtual assistant connects across your enterprise content to deliver human-like, contextually relevant responses to user requests.
Our solution is based on an architecture that combines the power of new generative AI technology with proven capabilities for data security, content management, unstructured data analytics, and IT Service Management. Our solution rests on four value pillars:
- Privacy and Security
- Enterprise Intelligence
- Access Control
- Ubiquitous Interface
Let’s explore these pillars in detail.
Privacy and Security
When employees start using public LLM services, proprietary, sensitive data can easily (and quickly!) enter the public domain. That’s what happened when three employees at an electronics company unwittingly prompted ChatGPT with confidential company data.
As a principle, work-related LLM interactions must never be at risk of exposure. And to provide true value for enterprises, LLM services must also interact with knowledge databases or other domain-specific content in a secure way. With these requirements in mind, we developed Aviator as a private, secure, OpenText-operated LLM that can be fully managed and controlled by our customers.
This private service leverages our research, testing, and analysis of the most advanced open-source LLM technologies. It is fine-tuned with relevant use cases and industry data. And it is founded on our deep experience in delivering trusted, secure, and scalable information and cloud solutions for customers around the globe.
Enterprise Intelligence

Another key pillar for providing essential value is enabling conversations that are enriched with domain-specific enterprise content. This content can span different enterprise functions. For example, some employees may ask questions about HR policies or may request support with an IT issue. Sales may seek insights into customer accounts and contracts. And IT Ops can benefit from analysis of their IT network topology or application health.
As a truly relevant, LLM-powered virtual agent, Aviator can help across these scenarios by accessing enterprise data and documents in real time. We refer to this as Enterprise Intelligence, and it is made possible by an innovative architecture that combines OpenText™ Content Management, pervasive OpenText IDOL indexing capabilities, and the secure, private OpenText LLM engine discussed above.
As an illustration, below is an example of an Aviator conversation with the virtual agent answering questions about a contract. Aviator provides intelligent answers based on real-time information underlying the conversation. The contract is managed in the OpenText SMAX contract management module. A PDF of the contract is stored in the OpenText content management service.
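The Enterprise Intelligence pattern described above, retrieving relevant content and grounding the model's answer in it, can be sketched roughly as follows. This is a minimal illustration, not OpenText's actual implementation: the keyword-overlap scoring stands in for a real semantic index such as IDOL, and the final LLM call is omitted.

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str

def score(query: str, doc: Document) -> int:
    # Naive keyword-overlap relevance; a real deployment would use a
    # semantic index rather than word counting.
    return len(set(query.lower().split()) & set(doc.text.lower().split()))

def retrieve(query: str, docs: list[Document], k: int = 2) -> list[Document]:
    # Pick the k documents most relevant to the user's question.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, context: list[Document]) -> str:
    # Ground the model's answer in retrieved enterprise content only.
    ctx = "\n\n".join(f"[{d.title}]\n{d.text}" for d in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    Document("Contract-042", "The support contract with Acme renews annually in March."),
    Document("HR-Policy", "Employees accrue 20 vacation days per year."),
]
question = "When does the Acme contract renew?"
prompt = build_prompt(question, retrieve(question, docs, k=1))
print(prompt)
```

With this shape, the virtual agent's answers stay anchored to documents it actually retrieved, which is what makes responses like the contract example above possible.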
Access Control

Generative AI virtual agents enabled with Enterprise Intelligence deliver clear value. But they also introduce a challenge: enterprise information is never uniformly accessible. Layers of access control govern which information is visible to which employees, and these controls vary across several dimensions:
- Information can vary by employee location. For example, an employee seeking answers about salary or medical leave policies should get answers that are accurate for that employee’s home country.
- Information may be restricted by role. For example, only managers may seek information about employee promotions or performance review policies—and this information should generally not be exposed to all levels in the enterprise.
- Information may be segmented by group membership. For example, members of a development team may tap the virtual agent for insights only related to the applications they are responsible for.
With our access control capabilities, the virtual agent has context about each user it interacts with. Aviator knows the user’s location, role(s), and group memberships. In addition, Aviator enables the tagging of information with entitlement labels that reference these access control dimensions. Together, the user context and data entitlements are used to enforce real-time access control filtering to make sure answers provided by Aviator are relevant and permissible to a particular user.
Let’s look at another example. Below, a U.S. employee asks Aviator about company-designated holidays for the calendar year and follows up with a question about sick leave.
Now, another employee, this time located in France, asks the same questions. Aviator knows how to respond according to company policies for France.
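The entitlement filtering described above can be sketched as follows. All names here are illustrative assumptions, not Aviator's actual data model: each piece of content carries a label over the three dimensions (country, role, group), an empty dimension means "unrestricted", and content is dropped before it ever reaches the prompt if any non-empty dimension fails to match the user's context.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UserContext:
    country: str
    roles: frozenset
    groups: frozenset

@dataclass(frozen=True)
class Entitlement:
    # An empty dimension means no restriction on that dimension.
    countries: frozenset = frozenset()
    roles: frozenset = frozenset()
    groups: frozenset = frozenset()

def permitted(user: UserContext, label: Entitlement) -> bool:
    # Every non-empty dimension of the label must match the user's context.
    if label.countries and user.country not in label.countries:
        return False
    if label.roles and not (user.roles & label.roles):
        return False
    if label.groups and not (user.groups & label.groups):
        return False
    return True

def filter_content(user: UserContext, tagged_docs) -> list:
    # Remove content the user is not entitled to see before it reaches the LLM.
    return [doc for doc, label in tagged_docs if permitted(user, label)]

tagged_docs = [
    ("US holiday calendar", Entitlement(countries=frozenset({"US"}))),
    ("France holiday calendar", Entitlement(countries=frozenset({"FR"}))),
    ("Promotion guidelines", Entitlement(roles=frozenset({"manager"}))),
]
employee_fr = UserContext(country="FR", roles=frozenset(), groups=frozenset())
visible = filter_content(employee_fr, tagged_docs)
print(visible)
```

Run for the French employee above, the filter keeps only the France holiday calendar; the US calendar fails the country check and the promotion guidelines fail the role check.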
The pillars described so far make sure that Aviator can provide the right level of relevant information in a way that meets security, privacy, and access requirements for the enterprise. With these pillars in place, the gates can almost open to unleash generative AI into the enterprise domain. But one final and important pillar remains: Aviator must be available to assist employees where and when they need it.
Ubiquitous Interface

Usability and interface models can make or break virtual agent adoption. Expectations and channels for interactions can vary widely across functions:
- Many employees seek advice and support by visiting the service portal.
- Collaboration tools such as Microsoft Teams have become the de facto corporate interface for collaboration and support, with channels and embedded bots available to those seeking answers.
- Service desk and HR agents who need real-time answers to assist users can benefit from smart assistance from a virtual agent like Aviator, embedded directly into their support interfaces.
Access to and interactions with Aviator must, therefore, be multichannel. We have created a virtual agent architecture that provides a variety of interfaces for smart conversations enriched by enterprise intelligence. This includes Aviator access from the SMAX service portal, Microsoft Teams, and via an open smart virtual agent API set that can be used for constructing bespoke interfaces by customers or partners. Other interfaces under consideration include a plug-in virtual agent widget for quickly accessing agent-side and back-office applications.
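As a sketch of what a bespoke integration against such an API set might look like, the snippet below builds an authenticated request to a conversation endpoint. The URL path, payload shape, and authentication scheme are entirely hypothetical placeholders for illustration, not the actual Aviator API.

```python
import json
import urllib.request

def build_aviator_request(base_url: str, token: str, message: str) -> urllib.request.Request:
    # Hypothetical endpoint and payload; the real API set may differ entirely.
    return urllib.request.Request(
        f"{base_url}/virtual-agent/conversations",
        data=json.dumps({"message": message}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_aviator_request(
    "https://aviator.example.com", "demo-token",
    "How many company holidays are there this year?",
)
print(req.full_url)
# Sending it would be urllib.request.urlopen(req), which requires a live endpoint.
```

The same request shape could back a portal widget, a Teams bot, or an agent-side plug-in, which is the point of exposing the virtual agent through an open API.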
We are excited by the opportunities that generative AI will bring to the heart of the enterprise and how it will transform service management. As ITSM and ESM solution experts, we are constructing a strong architecture built on advancing LLM capabilities together with a vital foundation that provides data security, access control, and enterprise intelligence. We look forward to sharing more news and details about Aviator in future blogs. Stay tuned!