As cyber threats grow more sophisticated through AI-powered malware, zero-day exploits, and state-sponsored attacks, organizations face an increasing challenge in safeguarding their digital assets. The shortage of cybersecurity expertise and the sheer volume of data to analyze have led organizations to seek a balanced approach to threat detection: one that integrates the precision of rule-based detection, the adaptability of AI/ML models, and the critical thinking of humans. This article explains the role each of these elements plays in threat detection and how their combination forms a strong defense against today’s advanced cyber threats.
This is the eleventh post in our ongoing “The Rise of the Threat Hunter” blog series. To learn more about the series and find previous posts, check out our series introduction or read last week’s post, “The future of threat hunting”.
Rule-based threat detection
Rule-based threat detection has long been an instrumental tool in cybersecurity and is one of the earliest methodologies for identifying and mitigating threats. Traditionally, this approach involves creating specific rules and signatures for known malware and threats, then scanning new data for these predefined patterns. Rule-based detection has evolved significantly in recent years; systems such as the advanced real-time event correlation engine within OpenText’s ArcSight ESM enable precise threat identification and rapid, rule-driven response.
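To make this concrete, here is a minimal sketch of rule-based matching in Python. The rule definitions and event fields are hypothetical and not drawn from any particular product:

```python
# Minimal sketch of rule-based detection: each rule is a predicate
# over an event dict, plus metadata used when raising an alert.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    severity: str
    matches: Callable[[dict], bool]  # predicate over a single event

# Hypothetical rules: a known-bad hash signature (the EICAR test file's
# MD5) and a simple failed-login threshold.
KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}

rules = [
    Rule("known-malware-hash", "high",
         lambda e: e.get("file_md5") in KNOWN_BAD_HASHES),
    Rule("excessive-failed-logins", "medium",
         lambda e: e.get("failed_logins", 0) > 10),
]

def scan(events: list[dict]) -> list[tuple[str, dict]]:
    """Return (rule name, event) pairs for every rule that fires."""
    return [(r.name, e) for e in events for r in rules if r.matches(e)]

alerts = scan([
    {"user": "alice", "failed_logins": 23},
    {"user": "bob", "file_md5": "44d88612fea8a8f36de82e1278abb02f"},
])
for name, event in alerts:
    print(f"ALERT {name}: {event}")
```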
The strength of rule-based detection lies in its high interpretability and out-of-the-box usability. The transparency of rules simplifies investigations by drawing clear connections between alerts and the events that triggered them. These systems are particularly effective at detecting known threats and can be deployed with minimal configuration.
However, rule-based approaches have limitations, particularly in adapting to new and unknown threats. Because of their rigidity, rules built around fixed thresholds can generate false positives, especially when data distributions drift due to organizational changes. Maintaining and updating these systems can also be time-consuming and often requires manual intervention.
AI/ML-based threat detection
Behavioral threat monitoring with AI/ML
Machine Learning (ML) and Artificial Intelligence (AI) are revolutionizing threat detection by using advanced techniques to identify both known threats (e.g., brute force, phishing) and unknown threats (e.g., zero-day exploits). These technologies can be tailored to specific organizations by training on their unique data. AI/ML-based systems use supervised models trained on labeled datasets, or unsupervised models that learn normal behaviors and flag deviations as potential anomalies. Platforms like OpenText’s ArcSight Intelligence use various models to proactively detect threats by analyzing different behavioral patterns within an organization.
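As an illustration of the unsupervised flavor, the sketch below learns a baseline of per-user activity and flags deviations. The features and the choice of scikit-learn’s IsolationForest are assumptions for the example, not a description of ArcSight Intelligence’s internals:

```python
# Sketch of unsupervised behavioral anomaly detection: learn "normal"
# per-user activity from a baseline window, then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical baseline features per user-day:
# [logins, bytes_uploaded_mb, distinct_hosts_accessed]
baseline = rng.normal(loc=[8, 50, 3], scale=[2, 15, 1], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(baseline)

# New observations: one typical day and one that deviates sharply.
today = np.array([
    [7, 45, 3],      # looks like the learned baseline
    [9, 900, 40],    # large upload to many hosts -> possible exfiltration
])
scores = model.decision_function(today)  # lower score = more anomalous
labels = model.predict(today)            # -1 flags an anomaly

for obs, score, label in zip(today, scores, labels):
    status = "ANOMALY" if label == -1 else "normal"
    print(f"{obs} -> {status} (score={score:.3f})")
```

In practice the baseline window would be refreshed as behavior drifts, which is what the adaptability consideration below refers to.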
There are several factors to consider when training AI models for threat detection:
- Transparency and traceability: Predictions from the models need to be backed by evidence or explanations that help threat hunters understand and trust their decisions and take effective action.
- Adaptability: As users’ and systems’ behaviors evolve, the models must adjust to the new patterns without losing accuracy.
- Scalability: AI models must handle vast and growing cybersecurity data efficiently.
- Relevance: Statistical anomalies are not always significant from a security perspective. Distinguishing the two helps reduce false positives and keeps the model’s outputs relevant and actionable (a simple filtering sketch follows this list).
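To illustrate the relevance point, this sketch post-filters raw anomaly scores with security context. The score threshold and the context fields are assumed for the example:

```python
# Sketch of relevance filtering: keep only statistical anomalies that
# also touch security-relevant context, to cut false positives.
ANOMALY_THRESHOLD = 0.8          # assumed cutoff, tuned per deployment
SENSITIVE_ASSETS = {"hr-db", "finance-share"}
PRIVILEGED_USERS = {"domain-admin", "svc-backup"}

def is_actionable(event: dict) -> bool:
    statistically_anomalous = event["anomaly_score"] >= ANOMALY_THRESHOLD
    security_relevant = (
        event.get("asset") in SENSITIVE_ASSETS
        or event.get("user") in PRIVILEGED_USERS
    )
    return statistically_anomalous and security_relevant

events = [
    # Unusual but benign: a marketing user's busy-but-harmless day.
    {"user": "marketing-01", "asset": "wiki", "anomaly_score": 0.91},
    # Both anomalous and security-relevant: surfaces to the hunter.
    {"user": "svc-backup", "asset": "hr-db", "anomaly_score": 0.95},
]
print([e for e in events if is_actionable(e)])
```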
The strengths of AI/ML in threat detection include the ability to identify unknown threats by detecting deviations from normal behavior patterns. This makes these models more effective against sophisticated attacks that evade traditional methods. They are also easier to maintain, as they adapt to changing data without constant human intervention.
However, challenges exist, such as the difficulty of interpreting complex model outputs when the transparency principle is not followed. Additionally, limited access to comprehensive threat data complicates the validation of unsupervised models. These models also require sufficient baseline data to make accurate predictions, a process that can take time. Despite these challenges, AI/ML-based threat detection has proven to be a remarkable advancement in cybersecurity, offering state-of-the-art solutions to emerging threats.
Threat hunting using generative AI
Large Language Models (LLMs) are a recent advancement in the generative AI space. These tools are designed to produce human-like text and are widely used for tasks like summarization and building chatbots. LLMs offer significant potential in cybersecurity, particularly as natural language interfaces between threat hunters and analytics systems.
LLMs for generating threat reports
Rules and AI/ML models speed up threat hunting, but large volumes of alerts can overwhelm threat hunters. LLMs ease this burden by generating reports summarizing the threat landscape and highlighting key anomalies. These summaries help security teams quickly understand critical issues, enhancing efficiency and focus.
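A minimal sketch of this workflow follows, assuming the `openai` Python client, an API key in the environment, and an illustrative model name; any LLM provider could be substituted:

```python
# Sketch of LLM-generated threat reporting: summarize top alerts
# produced by rules and ML models into a short report.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical top alerts for the reporting window.
alerts = [
    "2024-06-01 user svc-backup: anomalous 900MB upload to external host",
    "2024-06-01 user alice: 23 failed logins followed by a success",
    "2024-06-02 host web-01: process matched known-malware hash rule",
]

prompt = (
    "You are a security analyst. Summarize the following alerts into a "
    "short threat report: group related alerts, rank by risk, and "
    "suggest next investigative steps.\n\n" + "\n".join(alerts)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute your own
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```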
LLM-based threat hunting assistant
Beyond generating summaries, LLMs can serve as interfaces that let threat hunters interact with security data in natural language, facilitating pattern discovery, answering queries, and uncovering insights. This intuitive interaction with security data helps teams detect and respond to threats more effectively.
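Building on the same assumptions as the previous sketch, the example below wraps a question and a small slice of data into a single prompt; a production assistant would instead retrieve relevant records from the analytics platform:

```python
# Sketch of a natural-language Q&A layer over security data: the
# analyst's question and a serialized data slice share one prompt.
import json
from openai import OpenAI

client = OpenAI()

def ask(question: str, context_rows: list[dict]) -> str:
    prompt = (
        "Answer the analyst's question using only the JSON records "
        f"below.\nRecords: {json.dumps(context_rows)}\n"
        f"Question: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

rows = [
    {"user": "alice", "failed_logins": 23, "day": "2024-06-01"},
    {"user": "bob", "failed_logins": 1, "day": "2024-06-01"},
]
print(ask("Which users show possible brute-force activity?", rows))
```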
LLMs for code generation
LLMs’ code generation capabilities can be applied to tasks like automatic rule generation, allowing threat hunters to generate rules from natural language descriptions and making rule configuration more intuitive.
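The sketch below, under the same `openai` client assumptions, asks an LLM to draft a Sigma-style rule from a plain-English description; a threat hunter would review and tune the draft before deployment:

```python
# Sketch of LLM-assisted rule generation: turn a plain-English
# detection idea into a draft Sigma-style rule for human review.
from openai import OpenAI

client = OpenAI()

description = (
    "Alert when a single user has more than 10 failed logins "
    "within 5 minutes, followed by a successful login."
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name
    messages=[{
        "role": "user",
        "content": "Write a Sigma detection rule (YAML only) for: "
                   + description,
    }],
)
draft_rule = resp.choices[0].message.content
print(draft_rule)  # validated and tuned by a hunter before deployment
```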
Human insight in threat detection
Despite the availability of advanced threat detection tools, threat hunters remain indispensable for the critical human insight they bring. Automated tools can enhance threat hunters’ efficiency, but human judgment, critical thinking, and the ability to make decisions under pressure are irreplaceable in detecting sophisticated attacks that automated systems might overlook. Moreover, threat hunters are essential in keeping automated systems relevant and effective, whether by defining and updating rules or by collaborating with data scientists to refine AI/ML models. Their deep understanding of evolving threats ensures that detection systems remain accurate and responsive in the face of new challenges.
The integrated approach: A robust defense system
Integrating rules, AI/ML models, and LLMs creates a more comprehensive and effective threat detection system for threat hunters. Rules are particularly effective at identifying known threats and malware signatures, while AI/ML models excel at detecting unknown threats by capturing deviations across various behavioral dimensions. Layering rules on top of model outputs yields a detection system that is less noisy and more accurate than rules applied directly to raw data. Additionally, ML models can incorporate rule violation results as features, enhancing their predictive power. Finally, LLMs can serve as a user-friendly interface, presenting the combined insights from rules and models to threat hunters and facilitating more accurate, actionable threat analysis. Human threat hunters remain indispensable for their unique insights and decision-making abilities, complementing automated tools to detect complex attacks and ensuring that detection systems stay up to date and effective against evolving threats.
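Here is a minimal sketch of the layering idea, with hypothetical thresholds, features, and labels: a rule gates the model’s anomaly score, and rule verdicts are also appended as a model feature:

```python
# Sketch of the integrated approach: (1) a rule gates the model's
# anomaly score so only corroborated detections alert, and (2) rule
# verdicts are fed to a supervised model as an extra feature.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def failed_login_rule(event: dict) -> bool:
    return event.get("failed_logins", 0) > 10  # simple threshold rule

def layered_alert(event: dict, anomaly_score: float) -> bool:
    # Layering: require both a high model score AND a rule hit,
    # which is quieter than running the rule on raw data alone.
    return anomaly_score >= 0.8 and failed_login_rule(event)

# Rule violations as model features: append the rule verdict (0/1)
# to each feature vector before training.
events = [{"failed_logins": n} for n in (2, 3, 25, 30, 1, 40)]
base_features = np.array(
    [[2, 10], [3, 12], [25, 300], [30, 5], [1, 8], [40, 900]]
)
rule_hits = np.array([[failed_login_rule(e)] for e in events], dtype=float)
X = np.hstack([base_features, rule_hits])
y = np.array([0, 0, 1, 0, 0, 1])  # hypothetical investigation labels

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X))
print(layered_alert({"failed_logins": 25}, anomaly_score=0.92))  # True
```

The gating keeps threshold rules from alerting on benign drift, while the rule-hit feature lets the classifier learn when a rule firing actually matters (note the labeled-benign row where the rule fired).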
Learn more about OpenText Cybersecurity
Ready to enable your threat hunting team with products, services, and training to protect your most valuable and sensitive information? Check out our cybersecurity portfolio for a modern set of complementary security solutions that offer threat hunters and security analysts 360-degree visibility across endpoints and network traffic to proactively identify, triage, and investigate anomalous and malicious behavior.