Picture this: your AI agent just fulfilled a user request. No ticket. No approval chain. Your management wants to know who authorized it. Your compliance team wants to know what AI changed. You don’t have a quick answer for either.
That’s not a distant scenario. It’s the operational reality service management leaders are navigating right now.
AI is only useful in the enterprise when it can take action within the framework of governance. Taking action requires workflows, approvals, auditability, and defined boundaries around what AI is allowed to do. But most IT leaders haven’t made that connection yet. They’re deploying AI agents on one side of the house and managing ITSM processes on the other, with nothing tying them together.
That gap is where risk lives.
A new OpenText™ book, Enterprise Artificial Intelligence: Building Trusted AI in the Sovereign Cloud (by OpenText leaders Tom Jenkins, David Fraser, and Shannon Bell, 2026), lays out a clear path forward. Here are five insights every CTO should act on now.
5 insights CTOs need to act on now
1. Human approval isn’t a safeguard—it’s an architectural decision
As AI agents gain the ability to initiate actions without explicit human input, governance must expand beyond static policy compliance. When agentic AI identifies root causes and recommends fixes, execution should require human approval until trust is established through a documented track record with a predetermined success rate approaching 100%.
This isn’t about slowing AI down. It’s about deciding—deliberately—which actions AI can take on its own, and which require a human sign-off. If you haven’t made those decisions explicitly, your AI has made them for you.
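One way to picture that deliberate decision is as an explicit approval gate: execution stays behind human sign-off until the agent has a documented track record above a near-100% success threshold. A minimal sketch follows; the function name, threshold, and minimum sample size are illustrative assumptions, not a prescribed standard.

```python
# Hypothetical approval gate: an AI-proposed action executes only after
# human sign-off until its documented track record crosses a near-100%
# success threshold. Threshold and sample size are policy decisions.

SUCCESS_THRESHOLD = 0.99   # "approaching 100%", per your governance policy
MIN_TRACK_RECORD = 50      # require meaningful evidence before trusting

def requires_human_approval(action_type: str, history: dict) -> bool:
    """Return True if this action still needs explicit human sign-off."""
    record = history.get(action_type, {"attempts": 0, "successes": 0})
    if record["attempts"] < MIN_TRACK_RECORD:
        return True  # not enough documented evidence yet
    success_rate = record["successes"] / record["attempts"]
    return success_rate < SUCCESS_THRESHOLD
```

The point of encoding the rule is that the boundary becomes reviewable: if you can't state the threshold in code, you haven't decided it explicitly.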
2. Define your decision boundaries before an incident forces your hand
AI must know what it’s allowed to change. Without defined decision boundaries, agentic AI becomes a risk.
A tiered autonomy model works well here: in early stages, AI surfaces insights and only presents ranked options to a human for approval. As confidence grows, you selectively extend its authority to low-risk, predefined decisions. The core challenge is knowing where to draw the line between AI action and human judgment—and drawing it before something goes wrong, not after.
Map it out now: What can AI do autonomously? What needs approval? What’s off-limits? Use a risk-scoring system aligned with your corporate governance policies to develop those three lists as the foundation for responsible AI operations.
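The three lists can be derived mechanically from a risk rubric. This is a sketch under assumed factors (blast radius, reversibility, data sensitivity) and assumed cutoffs; the real rubric and weights must come from your own governance policies.

```python
# Sketch: score each action type on a simple 1-5 rubric per factor,
# then bucket it into the three lists. Factors, cutoffs, and example
# actions are illustrative assumptions.

def risk_score(blast_radius: int, reversibility: int, data_sensitivity: int) -> int:
    """Each factor is scored 1 (low risk) to 5 (high risk)."""
    return blast_radius + reversibility + data_sensitivity

def classify(score: int) -> str:
    if score <= 5:
        return "autonomous"       # AI may act within policy
    if score <= 10:
        return "needs_approval"   # human sign-off required
    return "off_limits"           # human-only territory

actions = {
    "restart_stateless_service": risk_score(1, 1, 1),
    "apply_config_change":       risk_score(3, 3, 2),
    "modify_iam_policy":         risk_score(5, 4, 5),
}
tiers = {name: classify(score) for name, score in actions.items()}
```

Because the classification is data-driven, extending AI's authority later means adjusting a cutoff under change control rather than renegotiating every action ad hoc.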
3. Your audit trail is your liability shield
In the European Union (EU), DORA and GDPR aren’t compliance checkboxes—they’re minimum operating standards for AI in regulated environments. North American regulatory pressure is moving in the same direction. Every action your AI takes needs to be recorded, traceable, and reviewable.
ITSM provides that audit trail. When an AI agent resolves an incident, closes a ticket, or initiates a configuration change, that action should flow through the same governed process as any human-initiated change. If it doesn’t, you have an accountability gap that your management, your legal team, and your board will all be uncomfortable with.
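In practice, "same governed process" means the AI action produces the same kind of traceable record a human change would. The sketch below shows the shape of such an entry; the field names are assumptions for illustration, not any specific ITSM product's schema.

```python
# Minimal sketch of an audit entry for an AI-initiated action, routed
# through a governed change record. Field names are illustrative, not a
# specific ITSM product's schema.

import json
from datetime import datetime, timezone

def record_ai_action(agent_id: str, action: str, ticket_id: str,
                     approved_by: str = None) -> str:
    """Emit a traceable, reviewable audit entry for an AI-taken action."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": agent_id,
        "actor_type": "ai_agent",       # distinguishes AI from human actors
        "action": action,
        "ticket_id": ticket_id,         # ties the action to a governed change
        "approved_by": approved_by,     # None only if within the autonomous tier
    }
    return json.dumps(entry)
```

The essential properties are that the actor is identified as an AI agent, the action is tied to a ticket, and the approval (or its deliberate absence) is on the record.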
4. Build feedback loops—or watch your AI repeat the same mistakes
AI that acts without feedback loops doesn’t learn. It repeats. Every incident resolution, every escalation, every human override needs to feed back into the system, so your AI improves over time and doesn’t recreate the same failures.
ITSM is the natural home for this feedback architecture. Incident records, change histories, and resolution data aren’t just operational artifacts; they’re raw material for continuous AI improvement. Organizations that close this loop will see compounding gains in accuracy and reliability. Those that don’t will keep wondering why their AI keeps getting it wrong.
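Closing the loop can be as simple as updating the same track record that gates autonomy every time an outcome or a human override is recorded. A sketch, with illustrative structures:

```python
# Sketch of the feedback loop: every resolution outcome and every human
# override updates the track record used to gate future autonomy.
# Structures are illustrative assumptions.

from collections import defaultdict

class FeedbackLoop:
    def __init__(self):
        self.history = defaultdict(
            lambda: {"attempts": 0, "successes": 0, "overrides": 0})

    def record_outcome(self, action_type: str, succeeded: bool,
                       human_override: bool = False):
        rec = self.history[action_type]
        rec["attempts"] += 1
        if succeeded:
            rec["successes"] += 1
        if human_override:
            rec["overrides"] += 1  # overrides signal a misplaced boundary

    def success_rate(self, action_type: str) -> float:
        rec = self.history[action_type]
        return rec["successes"] / rec["attempts"] if rec["attempts"] else 0.0
```

A rising override count for one action type is exactly the early-warning signal the article describes: it tells you the decision boundary was drawn in the wrong place before an incident does.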
5. Retire MTTR as your primary success metric
Mean time to resolve (MTTR) is a reactive metric for a reactive era. We’ve entered the decade of responsible, AI-driven operations. That demands a new scorecard.
Forward-looking ITSM professionals are already redefining success around metrics that reflect AI’s governed impact: incidents prevented, percentage of incidents handled autonomously within policy boundaries, and AI decision accuracy rates. These aren’t vanity metrics—they’re indicators of whether your governance architecture is actually working.
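Two of those metrics fall straight out of governed incident records. This sketch assumes a minimal record shape (a `handled_by` field and an `ai_decision` compared against the final outcome); incidents prevented would need a separate baseline comparison and is omitted here.

```python
# Sketch of a governed-impact scorecard computed from incident records.
# The record shape (handled_by, ai_decision, final_outcome) is an
# illustrative assumption, not a standard schema.

def governed_ai_scorecard(incidents: list) -> dict:
    total = len(incidents)
    autonomous = sum(1 for i in incidents
                     if i.get("handled_by") == "ai_autonomous")
    decided = [i for i in incidents if i.get("ai_decision") is not None]
    correct = sum(1 for i in decided
                  if i["ai_decision"] == i["final_outcome"])
    return {
        "autonomous_rate": autonomous / total if total else 0.0,
        "ai_decision_accuracy": correct / len(decided) if decided else 0.0,
    }
```

Because the inputs are the same governed records that form your audit trail, the scorecard measures the governance architecture itself, not just raw speed.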
If you’re still measuring only how fast you fix things, you’re playing the wrong game.
The bottom line
The organizations that win the AI era won’t necessarily have the most sophisticated models. They’ll be the ones that built the governance architecture to let those models act safely, auditably, and within defined boundaries.
ITSM is that architecture. It’s time to treat it that way.
To learn more about AI in service management, visit our webpage.