Your AI took action. Who’s accountable? ITSM knows.

AI agents need governed workflows, audit trails, and defined decision boundaries to act safely. Here are five actions every service management leader must take.

Travis Greene

March 16, 2026 · 4 min read


Picture this: your AI agent just fulfilled a user request. No ticket. No approval chain. Your management wants to know who authorized it. Your compliance team wants to know what AI changed. You don’t have a quick answer for either.

That’s not a distant scenario. It’s the operational reality service management leaders are navigating right now.

AI is only useful in the enterprise when it can take action within the framework of governance. Taking action requires workflows, approvals, auditability, and defined boundaries around what AI is allowed to do. But most IT leaders haven’t made that connection yet. They’re deploying AI agents on one side of the house and managing ITSM processes on the other, with nothing tying them together.

That gap is where risk lives.

A new OpenText™ book, Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud (by OpenText leaders Tom Jenkins, David Fraser, and Shannon Bell, 2026), lays out a clear path forward. Here are five insights every CTO should act on now.

5 insights CTOs need to act on now

1. Human approval isn’t a safeguard—it’s an architectural decision

As AI agents gain the ability to initiate actions without explicit human input, governance must expand beyond static policy compliance. When agentic AI identifies root causes and recommends fixes, execution should require human approval until trust is established through a documented track record with a predetermined success rate approaching 100%.

This isn’t about slowing AI down. It’s about deciding—deliberately—which actions AI can take on its own, and which require a human sign-off. If you haven’t made those decisions explicitly, your AI has made them for you.

2. Define your decision boundaries before an incident forces your hand

AI must know what it’s allowed to change. Without defined decision boundaries, agentic AI becomes a risk.

A tiered autonomy model works well here: in early stages, AI surfaces insights and only presents ranked options to a human for approval. As confidence grows, you selectively extend its authority to low-risk, predefined decisions. The core challenge is knowing where to draw the line between AI action and human judgment—and drawing it before something goes wrong, not after.

Map it out now: What can AI do autonomously? What needs approval? What’s off-limits? Use a risk-scoring system that is aligned with your corporate governance policies to develop those three lists, as a foundation for responsible AI operations.
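One way to make those three lists concrete is a simple risk-scoring gate. The sketch below is illustrative only: the scoring factors, thresholds, and tier names are assumptions for this example, not a prescription from the book, and real values should come from your corporate governance policy.

```python
from dataclasses import dataclass

# Illustrative thresholds; in practice these come from your
# governance policy, not from this sketch.
AUTONOMOUS_MAX = 3   # AI may act on its own at or below this score
APPROVAL_MAX = 7     # human approval required up to this score; above is off-limits

@dataclass
class ProposedAction:
    description: str
    blast_radius: int    # 1 (single user) .. 5 (enterprise-wide)
    reversibility: int   # 1 (trivially undone) .. 5 (irreversible)

def risk_score(action: ProposedAction) -> int:
    # Toy scoring: sum of two factors; real models weigh many more.
    return action.blast_radius + action.reversibility

def decision_boundary(action: ProposedAction) -> str:
    """Sort a proposed action into one of the three lists."""
    score = risk_score(action)
    if score <= AUTONOMOUS_MAX:
        return "autonomous"
    if score <= APPROVAL_MAX:
        return "needs_approval"
    return "off_limits"

print(decision_boundary(ProposedAction("restart user session", 1, 1)))   # autonomous
print(decision_boundary(ProposedAction("patch prod database", 3, 4)))    # needs_approval
print(decision_boundary(ProposedAction("delete backup archive", 5, 5)))  # off_limits
```

The point of writing the gate down, even in toy form, is that the boundary becomes an explicit, reviewable artifact rather than an emergent behavior of the agent.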

3. Your audit trail is your liability shield

In the European Union (EU), DORA and GDPR aren’t compliance checkboxes—they’re minimum operating standards for AI in regulated environments. North American regulatory pressure is moving in the same direction. Every action your AI takes needs to be recorded, traceable, and reviewable.

ITSM provides that audit trail. When an AI agent resolves an incident, closes a ticket, or initiates a configuration change, that action should flow through the same governed process as any human-initiated change. If it doesn't, you have an accountability gap that your management, legal team, and board will all be uncomfortable with.

4. Build feedback loops—or watch your AI repeat the same mistakes

AI that acts without feedback loops doesn’t learn. It repeats. Every incident resolution, every escalation, every human override needs to feed back into the system, so your AI improves over time and doesn’t recreate the same failures.

ITSM is the natural home for this feedback architecture. Incident records, change histories, and resolution data aren't just operational artifacts; they're raw material for continuous AI improvement. Organizations that close this loop will see compounding gains in accuracy and reliability. Those that don't will keep wondering why their AI keeps getting it wrong.
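A minimal version of that loop tallies outcomes per action type so autonomy can be widened or narrowed based on a documented track record. The class, threshold, and sample-size values below are illustrative assumptions, not a product feature:

```python
from collections import defaultdict

class FeedbackLoop:
    """Track human overrides per action type to gate agent autonomy.

    Illustrative sketch: a 98% success rate over at least 50 samples
    is an assumed trust bar, not a recommended standard.
    """
    def __init__(self, trust_threshold: float = 0.98, min_samples: int = 50):
        self.trust_threshold = trust_threshold
        self.min_samples = min_samples
        self.outcomes = defaultdict(lambda: {"success": 0, "override": 0})

    def record(self, action_type: str, human_override: bool) -> None:
        # Every resolution or override feeds back into the tally.
        key = "override" if human_override else "success"
        self.outcomes[action_type][key] += 1

    def trusted(self, action_type: str) -> bool:
        stats = self.outcomes[action_type]
        total = stats["success"] + stats["override"]
        if total < self.min_samples:
            return False  # not enough track record yet
        return stats["success"] / total >= self.trust_threshold

loop = FeedbackLoop()
for _ in range(60):
    loop.record("close_stale_ticket", human_override=False)
print(loop.trusted("close_stale_ticket"))  # True: 60/60 clears the bar
```

The mechanism matters more than the numbers: overrides become training signal and the trust decision becomes auditable, which is exactly the "documented track record" the approval model in insight 1 depends on.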

5. Retire MTTR as your primary success metric

Mean time to resolve (MTTR) is a reactive metric for a reactive era. We’ve entered the decade of responsible, AI-driven operations. That demands a new scorecard.

Forward-looking ITSM professionals are already redefining success around metrics that reflect AI’s governed impact: incidents prevented, percentage of incidents handled autonomously within policy boundaries, and AI decision accuracy rates. These aren’t vanity metrics—they’re indicators of whether your governance architecture is actually working.

If you’re still measuring only how fast you fix things, you’re playing the wrong game.

The bottom line

The organizations that win the AI era won't necessarily have the most sophisticated models. They'll be the ones that built the governance architecture to let those models act safely, auditably, and within defined boundaries.

ITSM is that architecture. It’s time to treat it that way.

To learn more about AI in service management, visit our webpage.


Travis Greene

Travis is the Sr. Director of Product Marketing for OpenText IT Operations Management solutions. He began his career as a US Naval Officer but switched to running data centers and managing IT operations in 2000, gaining Expert certification in ITIL. He joined OpenText in 2005, and has been published in Security Week Magazine, InfoWorld and Forbes, while speaking at Interop, RSA, itSMF and Gartner events among dozens of others. Connect with Travis at https://www.linkedin.com/in/travisgreene/

