Boost AI readiness and reduce risk

Are your GenAI tools bringing information security gaps to light?

Stephen Ludlow

July 08, 2024 5 minute read

Generative AI (GenAI) is at an inflection point, poised for widespread enterprise use and significant productivity gains. And organizations are raring to go, planning to increase GenAI spend by 2x to 5x this year, according to VC firm Andreessen Horowitz.1

The market is meeting the moment, flush with GenAI vendor solutions that put the power of conversational search in employees’ hands. The promise is alluring, taking content that has typically been obscured, exposing it to AI to transform information into answers.

There’s just one problem: the promise is not always the reality. Many GenAI tools create more noise and less relevance for users, which can lead to poor answers, poor decisions, and a loss of employee trust. Potentially even more concerning, organizations that aren’t fully AI ready expose sensitive enterprise content, opening the door to organizational risk.

Are you worried your GenAI investments may be falling flat? Let’s explore some common information security gaps and how GenAI content management can help.

Oversharing can be overwhelming

Imagine an employee uses GenAI to ask: “Am I getting fired tomorrow?” To the individual’s shock, the question is answered, referencing an internal employee list compiled by HR and management, obscured from daily view but not out of the reach of AI.

Sharing and co-editing have become standard collaboration practice. The default sharing setting in most office productivity suites grants read or write access to anybody who has the link. The problem is that GenAI has access to the link, too. That means anybody in the organization using GenAI can surface the document’s content simply by asking the right question. This problem has been classified as “oversharing,” and it affects organizations adopting AI solutions for their office productivity suites.
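To make the gap concrete, here is a minimal sketch of permission-aware retrieval. The `Document` model, group names, and `link_share` flag are illustrative assumptions, not any vendor’s API; the point is that a retrieval layer should only hand the model documents the asking user is explicitly permitted to see, rather than treating a shareable link as org-wide read access.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)  # explicit grants
    link_share: bool = False  # "anyone with the link" enabled

def retrievable_for(user_groups: set, corpus: list) -> list:
    """Return only documents the asking user is explicitly permitted to see.

    Link-shared documents are excluded unless the user also holds an
    explicit grant -- counting a shareable link as organization-wide
    read access is exactly the oversharing gap described above.
    """
    return [
        d for d in corpus
        if d.allowed_groups & user_groups  # explicit permission only
    ]

corpus = [
    Document("plan", "Workforce planning list", {"hr"}, link_share=True),
    Document("faq", "Benefits FAQ", {"hr", "all-staff"}),
]

visible = retrievable_for({"all-staff"}, corpus)
# Only "faq" survives; the link-shared HR list is never handed to the model.
```

With this filter in front of retrieval, the hypothetical “Am I getting fired tomorrow?” query returns nothing from the HR list for a regular employee, because link access never substitutes for an explicit grant.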

Cracks in permission management

Many GenAI tools are exposing gaps in enterprise permission management, failing to filter responses when sensitive content has merely been obscured rather than properly permissioned. Without correctly applied permission groups and the security capabilities built into the AI tool itself, employees get too much access, violating privacy policies and potentially leading to information security incidents.

Many GenAI tools also assume organizations have reached a high level of maturity in labeling, classifying, and monitoring content. Without the ability to curate relevant information for specific departments, projects, and use cases, these tools grant access and surface information well beyond “need to know,” generating overly broad answers instead of limiting results to the small subset of information that delivers the most relevant and helpful responses.
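The “need to know” curation described above can be sketched as a metadata filter applied before retrieval. The classification levels and metadata keys below are hypothetical examples (not an OpenText schema); the takeaway is that labels must exist and be machine-checkable before any GenAI tool can honor them.

```python
def scope_documents(docs, department=None, project=None,
                    max_classification="internal"):
    """Narrow a corpus to a 'need to know' slice before retrieval.

    Classification levels and metadata keys are illustrative -- the
    technique is simply: enforce a classification ceiling, then match
    department/project metadata so answers draw on a curated subset.
    """
    levels = ["public", "internal", "confidential", "restricted"]
    ceiling = levels.index(max_classification)
    return [
        d for d in docs
        if levels.index(d["classification"]) <= ceiling
        and (department is None or d["department"] == department)
        and (project is None or d["project"] == project)
    ]

docs = [
    {"classification": "internal",   "department": "finance", "project": "q3-close"},
    {"classification": "restricted", "department": "finance", "project": "q3-close"},
    {"classification": "public",     "department": "legal",   "project": "audit"},
]

scoped = scope_documents(docs, department="finance")
# Only the internal finance document passes; the restricted one exceeds
# the ceiling and the legal one fails the department filter.
```

The design choice worth noting: scoping happens outside the model, in deterministic code, so the answer set is auditable regardless of how the model behaves.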

Free range of consumer AI tools

When employees run a quick query through publicly available consumer AI apps, the risks outweigh the reward. Organizations have little visibility into or control over how the information is used or distributed, and a single employee misstep can put the company in jeopardy: pasting in sensitive information that results in a loss of intellectual property, or citing bogus sources that carry legal ramifications.

Eliminate GenAI shortcuts to create value at scale

Organizations are in varying stages of experimenting with GenAI but are now past the infatuation stage. A survey from Deloitte found 47 percent of organizations are moving fast with GenAI adoption—and for companies with very high expertise that number jumps to 73 percent.2

It’s time for organizations to take a step back and identify GenAI shortcuts that are leading to information security gaps. With the right information management foundation, companies can strike an ideal balance, gaining desired control while ensuring employees see the right information (and nothing more!)—cutting through the noise, mitigating risk, and maximizing value.

Templating and structuring information is the jumping-off point for AI readiness and secure GenAI. With OpenText business workspaces, content is organized across line-of-business applications and business processes: permissions are applied for information protection, and content is classified so it is easier for AI to query and to capture accurate business data, yielding more valuable and relevant GenAI responses.

Because information is contextualized to a business process, that context informs who should and should not see the information. Plus, with detailed metadata, business workspaces narrow down information feeding into AI and structure information via templated folders (for certain business transactions or cases), making it easy for employees to understand what they are querying.

Further, to help shore up GenAI, it’s important to also:

  • Conduct regular spot audits and permissions checks: When implementing AI applications, perform quality assurance and testing reviews to ensure information coming through GenAI tools is properly permissioned.
  • Understand your risk tolerance: Determine how to enforce desired permissions and security without sacrificing desired collaboration, assessing how information management can enhance productivity.
  • Work with a responsible AI vendor: Not all GenAI tools are created equal. At OpenText, we believe deeply in ethical practices, outcomes, and accountability, leading to the creation of our AI Bill of Obligations. We are also the first signatory of Canada’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems.
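The first bullet, spot audits of permissions, lends itself to automation. The sketch below assumes a hypothetical CSV export of a sharing inventory with `path`, `sharing`, and `label` columns; real admin reports from your productivity suite will use different column names, so treat this as a pattern rather than a working integration.

```python
import csv
import io

def flag_oversharing(inventory_csv: str) -> list:
    """Spot-audit a sharing-inventory export for risky defaults.

    Flags files that are both link-shared ("anyone with the link")
    and carry a sensitive label -- the combination that lets GenAI
    surface content its author assumed was obscured.
    """
    risky = []
    for row in csv.DictReader(io.StringIO(inventory_csv)):
        link_shared = row["sharing"] == "anyone-with-link"
        sensitive = row["label"] in {"confidential", "restricted"}
        if link_shared and sensitive:
            risky.append(row["path"])
    return risky

sample = """path,sharing,label
/hr/workforce-plan.xlsx,anyone-with-link,confidential
/comms/press-release.docx,anyone-with-link,public
/legal/contract.pdf,named-users,restricted
"""

flagged = flag_oversharing(sample)
# Only the confidential, link-shared HR file is flagged.
```

Run on a schedule, a report like this turns the “spot audit” bullet into a repeatable control rather than a one-time review.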

With conversational GenAI, organizations can interact with large volumes of information to speed learning and help employees work smarter and faster. But GenAI is just getting started, headed towards broad-scale productivity jumps and shaping the future of innovation.

OpenText™ Content Aviator, an AI-powered content assistant, gathers the most relevant documents from a business workspace for analysis, automatically filtering documents and delivering insights and context-aware summaries based only on permissible content to prevent information leakage and oversharing.

The time is now to boost your company’s AI readiness with GenAI content management.

OpenText™ Content Aviator interactive demo

Work smarter with a GenAI-powered intelligent assistant

Sign up now

[1] Marketing AI Institute, Enterprises Set to Increase Generative AI Spend, March 2024

[2] Deloitte AI Institute, State of Generative AI in the Enterprise, April 2024


Stephen Ludlow

Stephen is SVP, Product Management at OpenText.

