Performance engineering reimagined for an AI world

AI-powered performance engineering uses contextual intelligence, automation, and predictive insights to help teams test smarter, release faster, and scale confidently.

Madison McCurry

February 19, 2026 · 7 min read


Performance engineering is evolving faster than ever: Modern applications are complex, distributed, and constantly changing, with AI-driven features, microservices, and hybrid cloud environments creating unprecedented scale and interdependence.

Traditional load testing approaches simply can’t keep up with the pace, scale, and architectural complexity of modern systems.

AI transforms performance testing into a guided, intelligent process that identifies patterns, automates routine tasks, and delivers real-time insights. Teams can anticipate issues before they reach production, make smarter decisions faster, and focus on strategy instead of firefighting. The result is faster releases, more reliable software, and performance as a strategic advantage rather than just a safety net.

The reality of modern performance engineering

Scripting and analysis: Where teams struggle without AI

The promise of AI is compelling, but much of performance engineering still runs the hard way. While modern applications are more complex, dynamic, and distributed than ever, performance engineering workflows have not evolved at the same pace. As a result, teams spend more time managing tests than learning from them.

Scripting remains largely manual and fragile, with even small application changes triggering hours of rework. Analysis struggles to keep up with growing test volumes, forcing engineers to sift through dashboards and logs to find what actually matters. And when issues are detected, insights are often buried in technical artifacts that are difficult to translate into clear actions.

The cumulative effect is slower test cycles, stretched expertise, and performance becoming a bottleneck rather than a strategic input. As application complexity increases, these challenges only compound, especially in scripting and analysis.

The trust gap: The friction teams face with AI

AI promises faster scripting and smarter analysis, but real-world adoption comes with challenges: Teams hesitate to act on recommendations they cannot validate, while inconsistent data and shallow “AI-powered” features often undermine trust.

Integrating AI into established workflows can also be disruptive. Engineers want assistants, not black boxes, and tools that work in isolation rarely scale across teams and pipelines.

Even when AI finds real issues, insights must be clear, explainable, and easy to act on. Without that, recommendations lose impact and adoption stalls.

AI that transforms performance engineering

OpenText is bringing AI into performance engineering in a way that is practical, explainable, and built for real-world complexity. OpenText Performance Engineering Aviator is designed to augment engineering teams with intelligent assistance across the entire testing lifecycle, without replacing existing workflows or turning critical decisions into black boxes.

Context is king

At the core is a simple idea: AI should not just automate tasks; it should understand context.

In a recent Techstrong article focused on AI predictions, technology leaders cautioned against “context rot” that can plague AI models, adding that “the next leap in AI will come from smarter context, not bigger models.”

That is where Model Context Protocol (MCP) becomes foundational. By connecting AI directly to performance engineering systems, environments, and workflows, MCP allows Aviator to operate with real operational context, not just generic models and static data.

Together, Aviator and MCP shift performance engineering from isolated testing activities to an intelligent, connected system that learns, adapts, and scales with the application.

Faster, more accurate scripting

DevOps Aviator removes one of the biggest friction points in performance engineering: script creation and maintenance. Instead of starting from scratch or constantly repairing brittle scripts, engineers can generate and refine scripts using AI that understands application behavior, protocols, and data flows.

But the real shift is not just automation—it’s interaction. Engineers can work with Aviator using natural language, turning scripting into a guided, conversational process rather than a trial-and-error exercise.

For example, an engineer can ask:
“Why am I getting a connection timeout in this script?”
 Aviator responds with a plain-language explanation, surfaces likely root causes, such as environment configuration, authentication failures, or network limits, and provides guided steps to resolve the issue quickly. This dramatically reduces debugging time and lowers dependency on senior experts.
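As a rough illustration of the root causes Aviator points to, the sketch below shows the kind of check an engineer would otherwise run by hand; the host and port are placeholders, and this is not Aviator's own logic:

import socket
import time

# Placeholder target; substitute the host and port the script actually calls.
HOST, PORT, TIMEOUT_S = "app.example.com", 443, 5

start = time.perf_counter()
try:
    # Timing the raw TCP connect separates network limits and firewall rules
    # from application-level causes such as authentication failures.
    with socket.create_connection((HOST, PORT), timeout=TIMEOUT_S):
        print(f"TCP connect succeeded in {time.perf_counter() - start:.2f}s")
except socket.timeout:
    print(f"No connection within {TIMEOUT_S}s: suspect network limits or firewall rules")
except OSError as exc:
    print(f"Connection refused or unreachable: {exc} (check environment configuration)")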

Teams can also ask:
“Can you summarize my script?”
 Aviator breaks down what the script does in clear, non-technical language, helping engineers validate intent, onboard new team members faster, and avoid misalignment between test design and execution.

The result isn’t just faster scripting, but higher-quality tests, shared understanding across teams, and an intelligent assistant that scales expertise as applications become more complex.

Smarter analysis and predictive insights

Aviator in OpenText Core Performance Engineering Analysis transforms performance analysis from a manual, time-consuming exercise into an intelligent, guided experience. Instead of forcing engineers to sift through dashboards, logs, and raw metrics, it actively interprets test results and surfaces what matters most.

Engineers can interact with their data using natural language. For example, they can ask:
“Show me the connections graph grouped by script.”
Aviator instantly generates visual widgets and dashboards based on the request, making it easier to understand system behavior, identify dependencies, and accelerate root cause analysis without manual configuration or custom reporting.

To pinpoint critical issues, teams can ask:
“Identify the three scripts with the most errors and show the top error codes for each.”
Aviator correlates scripts, errors, and load metrics in real time, highlighting where failures are concentrated and which problems pose the greatest risk.
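Under the hood, that question is a grouping-and-counting exercise over the raw results. A minimal pandas sketch of the same correlation, using made-up records rather than real Aviator output, might look like this:

import pandas as pd

# Made-up error records: one row per failed transaction in a test run.
errors = pd.DataFrame({
    "script": ["login", "checkout", "login", "search", "checkout", "checkout"],
    "error_code": [504, 401, 504, 500, 401, 503],
})

# The three scripts with the most errors...
top_scripts = errors["script"].value_counts().head(3)

# ...and the most frequent error codes within each of them.
top_codes = (
    errors[errors["script"].isin(top_scripts.index)]
    .groupby("script")["error_code"]
    .value_counts()
    .groupby(level="script")
    .head(3)
)

print(top_scripts)
print(top_codes)

The difference with Aviator is that this correlation happens in real time against live load metrics, with no query writing or manual data wrangling.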

From there, analysis becomes action. A simple follow-up like
“Recommend remediation steps for error XYZ” guides teams from detection to resolution, with contextual recommendations grounded in actual test behavior rather than generic advice.

The impact is measurable. Internal testing shows that using Performance Engineering Aviator for analysis reduces time spent on performance investigation by 50 to 70%, allowing teams to move faster, focus on higher-value work, and shift from reactive troubleshooting to proactive optimization.

Over time, this enables a fundamentally different operating model, one where performance insights are continuous, explainable, and embedded directly into engineering workflows, not trapped in post-test reports.

Context-driven validation with MCP

MCP ensures that AI operates with real system context. Instead of relying on static models or abstract data, Aviator interacts directly with performance tools, environments, and workflows to understand how systems behave in practice.

This makes AI actionable, not just advisory. With MCP, AI can initiate real testing tasks, reason over real results, and deliver insights that are explainable and traceable. Testing reflects actual production conditions, and validation is grounded in live system behavior.
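For readers new to MCP, the pattern is easy to see in miniature. The sketch below uses the open-source MCP Python SDK to expose two hypothetical performance-testing tools that an MCP-aware assistant could discover and call; it is a generic illustration of the protocol, not OpenText's implementation:

from mcp.server.fastmcp import FastMCP  # pip install mcp

mcp = FastMCP("perf-engineering")

@mcp.tool()
def start_load_test(scenario: str, virtual_users: int, duration_minutes: int) -> str:
    """Start a load-test scenario and return a run identifier."""
    # Hypothetical stub: a real server would call the load-testing platform's
    # API here and return its run ID so the assistant can poll for results.
    return f"run-{scenario}-{virtual_users}vu-{duration_minutes}m"

@mcp.tool()
def get_run_summary(run_id: str) -> dict:
    """Return headline metrics for a completed run."""
    # Hypothetical stub values standing in for real test results.
    return {"run_id": run_id, "p95_response_ms": 420, "error_rate": 0.012}

if __name__ == "__main__":
    mcp.run()  # serve the tools over stdio so an MCP client can call them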

MCP also decouples AI from any single interface. Teams can bring AI into the environments they already use, from IDEs to enterprise platforms, without rebuilding workflows.

The result is agentic performance engineering, where AI becomes part of the workflow itself and performance validation becomes more reliable, predictable, and aligned with modern systems.

Testing applications with embedded LLMs

As more applications embed AI features like chatbots and copilots, traditional performance testing struggles to keep up with the unique behavior and performance characteristics of LLM-driven experiences. OpenText’s new LLM protocol captures model-specific metrics, like token processing, latency, and streaming behavior, while simplifying script configuration and test monitoring. Performance engineers can now measure, analyze, and optimize LLM-powered components with the same confidence as traditional workloads.
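To see what those model-specific metrics look like in practice, here is a minimal sketch that streams a completion from an OpenAI-compatible endpoint and records time to first token and streaming throughput; the endpoint URL, model name, and prompt are placeholders, and this is a hand-rolled measurement rather than the OpenText LLM protocol itself:

import time
from openai import OpenAI  # any OpenAI-compatible endpoint exposes the same streaming API

# Placeholder endpoint and model; point these at the LLM-backed service under test.
client = OpenAI(base_url="https://llm.example.com/v1", api_key="test-key")

start = time.perf_counter()
first_token_at = None
chunks = 0

stream = client.chat.completions.create(
    model="example-model",
    messages=[{"role": "user", "content": "Summarize our return policy."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        if first_token_at is None:
            first_token_at = time.perf_counter()  # latency until the first visible token
        chunks += 1
elapsed = time.perf_counter() - start

if first_token_at is not None:
    print(f"time to first token: {first_token_at - start:.2f}s")
print(f"{chunks} streamed chunks in {elapsed:.2f}s (~{chunks / max(elapsed, 1e-9):.1f} chunks/s)")
# A streamed chunk is only a rough proxy for a token; exact token accounting needs
# the model's tokenizer or the usage data the endpoint returns.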

AI with transparency and control

AI in performance engineering is most effective when it is transparent, auditable, and aligned with enterprise policies. Humans must still set strategy, approve changes, and bear ultimate responsibility for business decisions.

OpenText ensures AI testing supports regulatory requirements, secure data handling, and internal governance standards. For questions on AI capabilities, data handling, security, licensing, and legal considerations, see our Performance Engineering Aviator FAQs for practical guidance on adopting AI responsibly while keeping humans in control of critical decisions.

Performance that drives the business

AI-powered performance engineering can deliver measurable value across the organization: Teams complete testing cycles faster, accelerating time-to-market while reducing risk and minimizing production incidents. Insights are clear and actionable, improving collaboration and alignment across engineering, QA, and business teams. And because Aviator and MCP scale with your environment, performance engineering remains reliable and effective even as applications grow in complexity.

Transform your performance engineering

See how OpenText Performance Engineering solutions can transform your testing workflow.


Madison McCurry

Madison McCurry is a Product Marketing Manager for OpenText DevOps Cloud, where she leads positioning and messaging for performance engineering and service virtualization solutions. She’s passionate about helping teams build faster, smarter, and more resilient applications. A proud yellow jacket, she graduated from Georgia Tech and resides in Atlanta, GA.
