Every day, threat hunters navigate an overwhelming sea of data, sifting through countless logs from various sources. These logs, even after being translated into alerts by various analytical tools, demand constant attention and scrutiny. Security analysts must manually investigate thousands of alerts while frequently referencing external threat intelligence sources such as BrightCloud. Despite the availability of sophisticated analytics platforms, the sheer volume and complexity of data make efficient threat detection a daunting task.
By integrating AI into the threat-hunting process, alerts can be enriched with deeper contextual insights, thereby reducing the manual workload. AI-driven summarization can distill vast amounts of information into concise, actionable summaries and narratives, helping analysts focus on critical threats faster. Additionally, AI can automate report generation and even suggest response strategies, streamlining incident resolution.
Reducing alert fatigue with AI-powered enrichment
Nearly every security tool on the market today can generate alerts after analyzing logs. These alerts may be rule-based or derived from machine learning models, and they help reduce the burden from millions of log events to a more manageable number of alerts. However, even at this reduced scale, investigating the resulting volume of alerts remains a time-consuming challenge for threat hunters.
Each alert requires a deep dive into the underlying raw events to extract contextual details. Analysts must also manually cross-reference multiple sources, looking up information such as process hashes or remote IP addresses in threat intelligence databases to determine whether they appear on known blacklists. After all this effort, many alerts turn out to be false positives, leading to wasted time and analyst fatigue.

Generative AI can play a powerful role in automatically enriching security alerts with contextual intelligence, significantly easing the burden on threat hunters. For instance, when the execution of an unusual process triggers an alert, analysts typically need to investigate manually. They look up the process hash to determine if it’s linked to known malware, examine the parent and grandparent processes for anomalies, and analyze the command-line arguments used during execution.
Recent advances in generative AI let organizations automate much of this investigative work. AI can generate enhanced alert descriptions that incorporate critical details such as process lineage, command-line inputs, and real-time reputation lookups for hashes and IP addresses. This enriched information empowers analysts to make quicker, more accurate judgments about which alerts warrant deeper investigation and which are likely false positives. By minimizing manual effort and improving decision quality, AI-driven enrichment helps security teams cut through the noise and focus on genuine threats.
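To make this concrete, here is a minimal sketch of such an enrichment step. The `lookup_hash_reputation`, `lookup_ip_reputation`, and `call_llm` functions are hypothetical stand-ins for whatever threat-intelligence feeds and hosted model an organization actually uses:

```python
# Minimal sketch: enriching a process-execution alert before handing it to an LLM.
from dataclasses import dataclass

@dataclass
class ProcessAlert:
    process_name: str
    process_hash: str
    command_line: str
    parent_chain: list[str]   # e.g. ["outlook.exe", "cmd.exe", "powershell.exe"]
    remote_ip: str | None

def lookup_hash_reputation(sha256: str) -> str:
    return "unknown"          # placeholder: query a threat-intel database here

def lookup_ip_reputation(ip: str) -> str:
    return "unknown"          # placeholder: query an IP blacklist here

def call_llm(prompt: str) -> str:
    return "..."              # placeholder: call the organization's hosted model

def enrich_alert(alert: ProcessAlert) -> str:
    """Assemble lineage, command line, and reputation lookups into one prompt."""
    context = [
        f"Process: {alert.process_name} (sha256={alert.process_hash})",
        f"Hash reputation: {lookup_hash_reputation(alert.process_hash)}",
        f"Lineage: {' -> '.join(alert.parent_chain)} -> {alert.process_name}",
        f"Command line: {alert.command_line}",
    ]
    if alert.remote_ip:
        context.append(f"Remote IP {alert.remote_ip}: "
                       f"{lookup_ip_reputation(alert.remote_ip)}")
    prompt = ("Summarize the following alert for a security analyst and state "
              "whether it looks like a likely false positive:\n" + "\n".join(context))
    return call_llm(prompt)
```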
Enhancing entity-based threat analysis with AI-driven summarization
User and Entity Behavior Analytics (UEBA) tools, such as Core Threat Detection and Response, take threat detection further by aggregating alerts based on associated entities such as users, machines, and IP addresses. Instead of analyzing individual alerts in isolation, these tools compute a risk score for each entity based on its associated alerts, allowing threat hunters to assess security incidents holistically. This approach helps identify patterns that might otherwise go unnoticed, including connections between seemingly low-severity alerts that, when correlated, reveal a more significant security threat.
In this approach, threat hunters typically prioritize entities for investigation based on their risk scores and manually review the corresponding alerts to reconstruct each entity's activity timeline. However, this process still requires significant time and effort to stitch multiple alerts together into a coherent story.
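As a simple illustration, entity risk scoring can be thought of as an aggregation over each entity's alerts in a time window. The additive weighting below is purely illustrative, not the formula any particular product uses:

```python
# Minimal sketch of UEBA-style entity risk scoring: each alert carries an entity
# and a severity; an entity's risk aggregates its alerts over a time window.
from collections import defaultdict

alerts = [  # (entity, severity on a 0-10 scale)
    ("user:alice", 3), ("user:alice", 4), ("host:web-01", 9), ("user:alice", 2),
]

def entity_risk_scores(alerts):
    scores = defaultdict(float)
    for entity, severity in alerts:
        scores[entity] += severity  # simple additive aggregation
    return dict(scores)

# Hunters triage the highest-risk entities first.
for entity, score in sorted(entity_risk_scores(alerts).items(),
                            key=lambda kv: -kv[1]):
    print(entity, score)
```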

Streamlining with generative AI
Generative AI can streamline this process by automatically summarizing anomalous activities for each high-risk entity, providing a concise yet comprehensive overview alongside the risk score. The workflow typically involves the following steps (a minimal code sketch follows the list):
- Identifying high-risk entities and relevant time windows: Focus is placed on entities that accumulate higher risk scores based on their associated alerts in a given period.
- Ranking anomalies: Anomalies are prioritized based on their contribution to the entity's risk. This ranking considers factors such as the importance of associated entities, the weight of the anomaly model, and the nature of the suspicious activity.
- Selecting and compressing top anomalies: To ensure a holistic view, a curated set of significant anomalies is chosen across various behavioral dimensions, such as access and authentication patterns.
- Constructing the anomalous narrative: A large language model (LLM) generates a human-readable summary that stitches these anomalies into a coherent story. This narrative contextualizes scattered alerts into a meaningful threat storyline, helping analysts quickly understand what happened.
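The sketch below walks through the ranking, selection, and narrative steps for a single entity. The per-category diversity cap, the `top_k` limit, and the `call_llm` helper are illustrative assumptions rather than any specific product's logic:

```python
# Illustrative sketch of the anomaly-ranking and summarization steps above.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Anomaly:
    entity: str
    category: str             # behavioral dimension, e.g. "access", "authentication"
    description: str
    risk_contribution: float  # how much this anomaly added to the entity's risk

def call_llm(prompt: str) -> str:
    return "..."              # placeholder: call the organization's hosted model

def summarize_entity(entity: str, anomalies: list[Anomaly],
                     per_category: int = 2, top_k: int = 5) -> str:
    # Rank anomalies by their contribution to the entity's risk score.
    ranked = sorted((a for a in anomalies if a.entity == entity),
                    key=lambda a: a.risk_contribution, reverse=True)
    # Cap how many anomalies any one behavioral dimension contributes, so the
    # selection stays holistic rather than dominated by a single pattern.
    selected, per_dim = [], defaultdict(int)
    for a in ranked:
        if per_dim[a.category] < per_category and len(selected) < top_k:
            selected.append(a)
            per_dim[a.category] += 1
    # Ask the LLM to stitch the selected anomalies into one narrative.
    bullets = "\n".join(f"- [{a.category}] {a.description}" for a in selected)
    prompt = (f"Write a short narrative of the anomalous activity for {entity}, "
              f"connecting these observations into one storyline:\n{bullets}")
    return call_llm(prompt)
```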
By highlighting key behaviors and tying them to the broader risk picture, these AI-generated summaries enable analysts to focus their time and expertise on the entities that matter most. This approach accelerates decision-making and minimizes the risk of missing critical security threats hidden within alert noise. This narrative is further enhanced by associating potential MITRE ATT&CK techniques that map to the entity’s observed activities—a topic we’ll explore in more detail in an upcoming blog post.
From entity insights to organizational summaries
While entity-level summaries help threat hunters analyze individual users, machines, or IPs efficiently, organizations can extend the same AI-driven approach to gain a broader view of their overall security posture. By aggregating risk scores, anomalous activities, and trends across multiple entities, AI can generate a high-level summary of an organization's security state at any given moment.
This organizational-level visibility enables security teams to identify larger attack patterns, persistent threats, and areas of concern that might not be evident from individual alerts. More importantly, AI can automate the generation of executive summaries and detailed security reports, offering CISOs and other stakeholders clear insight into the company’s threat landscape.
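One way to implement this rollup, assuming per-entity scores and narratives like those in the earlier sketches, is shown below; the risk threshold and the `call_llm` stub are again illustrative assumptions:

```python
# Illustrative rollup from entity-level results to an organizational summary.
def call_llm(prompt: str) -> str:
    return "..."  # placeholder: call the organization's hosted model

def org_summary(entity_scores: dict[str, float],
                entity_narratives: dict[str, str],
                high_risk_threshold: float = 8.0) -> str:
    high_risk = {e: s for e, s in entity_scores.items() if s >= high_risk_threshold}
    sections = [f"{e} (risk {s:.1f}):\n{entity_narratives.get(e, 'no narrative')}"
                for e, s in sorted(high_risk.items(), key=lambda kv: -kv[1])]
    prompt = ("Produce an executive summary of the organization's current security "
              "posture from these entity narratives, and call out any cross-entity "
              "attack patterns:\n\n" + "\n\n".join(sections))
    return call_llm(prompt)
```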
Smarter AI-driven responses
Security teams often rely on past experiences and documented response strategies to handle recurring threats effectively. However, manually searching through past cases, incident reports, and response playbooks can be time-consuming and inefficient. Fine-tuned LLMs integrated with Retrieval-Augmented Generation (RAG) can take incident response to the next level by learning from an organization's historical security incidents: when a new incident arises, the system retrieves the most relevant past cases and playbooks and uses them to ground its recommended response.
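A minimal sketch of that retrieval step is below. The toy character-frequency embedding and the `call_llm` stub are stand-ins for a real embedding service and hosted model:

```python
# Minimal RAG sketch: embed historical incident reports once, retrieve the nearest
# neighbors for a new incident, and ground the LLM's recommendation in them.
import math

def embed(text: str) -> list[float]:
    # Placeholder for a real embedding model; here, a toy letter-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def call_llm(prompt: str) -> str:
    return "..."  # placeholder: call the organization's hosted model

past_incidents = [  # invented example reports, each ending with its resolution
    "Phishing email led to credential theft; reset passwords, blocked sender domain.",
    "Ransomware on file server; isolated host, restored from backup, rotated keys.",
]
index = [(doc, embed(doc)) for doc in past_incidents]

def recommend_response(new_incident: str, k: int = 1) -> str:
    q = embed(new_incident)
    # Rank stored incidents by cosine similarity (vectors are unit-normalized).
    scored = sorted(index, key=lambda d: -sum(a * b for a, b in zip(q, d[1])))
    context = "\n".join(doc for doc, _ in scored[:k])
    prompt = (f"New incident: {new_incident}\n\nSimilar past cases:\n{context}\n\n"
              "Recommend a response plan grounded in the past cases.")
    return call_llm(prompt)
```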
This approach speeds up incident resolution and improves response accuracy by reducing reliance on manual research. Security teams can focus on executing the best course of action rather than spending valuable time piecing together historical data. Ultimately, AI-powered response recommendations transform threat hunting from a reactive process into a proactive and adaptive cybersecurity strategy.
Addressing key challenges in AI-driven threat-hunting solutions
While AI significantly enhances threat-hunting workflows, its adoption comes with several challenges that organizations must address to ensure security, accuracy, and usability.
Data security
Security logs often contain highly sensitive information crucial to an organization's defense strategy. Allowing these logs to leave a secure environment for AI processing poses a risk that many organizations are unwilling to take. To mitigate this, AI models must be hosted within an organization's Virtual Private Cloud (VPC), ensuring that all data remains in a controlled and protected environment. This approach allows organizations to leverage AI's benefits while maintaining compliance with data security policies.
Data representation
Security data exists in various schemas, often containing unique terminologies and abbreviations specific to an organization. This inconsistency makes it challenging for AI models to interpret and process the data effectively. The retrieval mechanism must be designed to extract meaningful information while normalizing in-house terminologies into a format understandable by the AI model. Standardization ensures accurate insights and prevents misinterpretation due to data structure inconsistencies.
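As a sketch of what such normalization might look like, the alias and expansion tables below are invented examples of in-house terminology, not any standard schema:

```python
# Illustrative normalization of in-house field names and abbreviations before
# events are handed to a retrieval pipeline or an LLM.
FIELD_ALIASES = {"src_ip": "source_ip", "dst_ip": "destination_ip",
                 "usr": "user_name", "proc": "process_name"}
VALUE_EXPANSIONS = {"DC1": "domain controller DC1", "PXY": "proxy server PXY"}

def normalize_event(event: dict) -> dict:
    normalized = {}
    for key, value in event.items():
        key = FIELD_ALIASES.get(key, key)               # canonical field names
        if isinstance(value, str):
            value = VALUE_EXPANSIONS.get(value, value)  # expand in-house jargon
        normalized[key] = value
    return normalized

print(normalize_event({"usr": "alice", "src_ip": "10.0.0.5", "host": "DC1"}))
# {'user_name': 'alice', 'source_ip': '10.0.0.5', 'host': 'domain controller DC1'}
```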
Prompting and output consistency
Ensuring that AI-generated outputs adhere to a consistent structure is critical for usability. For instance, if a UI engine expects responses in a specific JSON format with predefined keys, deviations from this standard could break the interface. Similarly, reports and summaries must follow a uniform structure and language to maintain clarity and usability. Establishing strong prompt engineering practices ensures that AI outputs remain predictable and seamlessly integrate into existing security workflows.
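A small sketch of enforcing such a contract is shown below; the expected keys and the retry loop are assumptions about a hypothetical integration:

```python
# Illustrative validation of LLM output against a fixed JSON contract.
import json

EXPECTED_KEYS = {"summary", "risk_level", "recommended_action"}

def call_llm(prompt: str) -> str:
    # Placeholder: the real call goes to the organization's hosted model.
    return '{"summary": "...", "risk_level": "low", "recommended_action": "..."}'

def parse_llm_response(raw: str) -> dict:
    """Reject any response that deviates from the schema the UI expects."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    if set(data) != EXPECTED_KEYS:
        raise ValueError(f"unexpected keys: {sorted(set(data) ^ EXPECTED_KEYS)}")
    return data

def query_with_retries(prompt: str, retries: int = 2) -> dict:
    for _ in range(retries + 1):
        try:
            return parse_llm_response(call_llm(prompt))
        except (json.JSONDecodeError, ValueError):
            continue  # re-ask the model; a stricter re-prompt could be added here
    raise RuntimeError("model never produced valid JSON")
```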
Addressing these challenges is key to successfully integrating AI into threat-hunting operations.
Conclusion: The future of AI-augmented threat hunting
Fusing human expertise with AI-powered efficiency is not just an advantage; it is becoming a necessity in an ever-evolving threat landscape. As cyber threats grow more sophisticated, the demand for real-time analysis, rapid decision-making, and precise incident response is higher than ever. While AI has already demonstrated its value in enriching alerts and summarizing security events, its role in proactive threat defense, through mechanisms such as agentic AI, is still expanding. By autonomously analyzing patterns, detecting emerging threats, and taking predefined defensive actions, agentic AI transforms cybersecurity from a reactive model into a proactive and adaptive defense strategy.
Join OpenText Cybersecurity data scientists at RSA 2025, where my colleagues and fellow data scientists Nakkul Khuraana and Hari Manassery Koduvely will discuss ‘How to Use LLMs to Augment Threat Alerts with the MITRE Framework.’