Select the right cybersecurity AI tools with these 3 crucial steps

AI is everywhere, but can it help you with your cybersecurity? The short answer is yes, if you choose the right solution for your team.

Mari Pospelova

May 16, 2025 | 11 min read


The overwhelming impression from the just-wrapped-up RSA Conference, or any other cybersecurity conference, is that AI is everywhere. There are dozens, if not hundreds, of AI-based cybersecurity tools with look-alike UIs and extremely similar features. They all sprinkle their marketing materials with the same buzzwords: GenAI, Agentic AI, AI assistants, etc.

One thing is clear: every cybersecurity team will have to use AI tools, and sooner rather than later. Most of the people we met had the same question: how do you choose the best solution for your team? I have been building cybersecurity AI solutions for over a decade. Here is a consideration framework that can assist you with that decision.

1. Define what your team needs from a cybersecurity AI tool

Even though cybersecurity professionals are all tasked with protecting their organizations, the circumstances vary from team to team. There are four major dimensions to consider: the threats you face, the data you have, your team’s composition, and your processes and workflows.

The threats you face

Let’s start with identifying the primary threats that your organization is concerned with, whether AI or not. What are the main security problems you’re trying to solve by adding a new tool to your arsenal? Ideally, you want to compile that list using a data-driven approach. Consult the results of your red team activities. Are there any blind spots in your existing tools?

Monitor news of the main threats and breaches that affect your industry or similar organizations. What are the high-return targets or goals that malicious actors may pursue in your organization? Are there geo-political tensions and events that may spike specific types of threats aimed at your organization?

Make sure your threat prioritization accounts for how likely each threat is to strike your organization and how much damage it will cause if you miss it. For instance, the healthcare industry is experiencing a rise in phishing and ransomware attacks aimed at exfiltrating sensitive patient data. Meanwhile, the financial industry this year should be on the watch for APTs, nation-state-sponsored espionage, and supply chain and vendor vulnerabilities, in addition to ransomware attacks.
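
To make this prioritization concrete, it can be reduced to a simple likelihood-times-impact score. The sketch below is a minimal illustration; the threat names and every number in it are hypothetical placeholders, not a recommended taxonomy or real estimates.

```python
# Hypothetical likelihood (0-1) and impact (1-10) estimates for candidate threats.
threats = {
    "phishing":            {"likelihood": 0.8, "impact": 7},
    "ransomware":          {"likelihood": 0.6, "impact": 9},
    "supply-chain attack": {"likelihood": 0.3, "impact": 8},
    "nation-state APT":    {"likelihood": 0.1, "impact": 10},
}

# Risk score = likelihood x impact; sort descending to get a priority list.
ranked = sorted(threats.items(),
                key=lambda kv: kv[1]["likelihood"] * kv[1]["impact"],
                reverse=True)

for name, t in ranked:
    print(f"{name}: {t['likelihood'] * t['impact']:.1f}")
```

Even a crude score like this forces the team to make its assumptions about likelihood and impact explicit, which is the real value of the exercise.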

The data you have

Check your cybersecurity data inventory. Compose a list of all the data sources you are currently using, and add information about each source’s coverage and accuracy. For example, if your organization deploys a mixture of endpoint agents, providing only some of the logs to your AI system may result in a partial view with some aspects of an attack missing.

Inferior-quality or incomplete data will require additional cleanup and filtering; otherwise it will not only fail to produce valuable results in your AI system, but will actively degrade its quality, burying the signal in the noise. Your AI model can only be as good as the data you feed into it.
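
A first pass at this inventory can be as simple as tallying which expected sources are actually present and fresh in your log pipeline. The sketch below assumes a dictionary of last-seen timestamps per source; all source names and thresholds are hypothetical.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: each expected source and the timestamp of its
# most recent event in the log pipeline (None = never seen).
now = datetime.now(timezone.utc)
last_seen = {
    "endpoint-edr": now - timedelta(minutes=5),
    "auth-logs":    now - timedelta(hours=2),
    "firewall":     now - timedelta(days=9),
    "cloud-audit":  None,
}

STALE_AFTER = timedelta(days=7)  # sources older than this need attention

for source, ts in sorted(last_seen.items()):
    if ts is None:
        status = "MISSING"
    elif now - ts > STALE_AFTER:
        status = "STALE"
    else:
        status = "ok"
    print(f"{source:12s} {status}")
```

Any source flagged MISSING or STALE represents a blind spot that the AI tool will inherit, no matter how good its models are.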

Your team’s composition

Every cybersecurity team has a unique composition as well as distinctive styles of communication and collaboration. An AI tool should empower your team, not set a trap for it.

Consider the expertise of your team members. For example, a team of seasoned threat hunters and a team of newly hired SOC analysts will need different tools with different levels of detail and context. Is your team uniform in its areas of expertise and tasks, or is it composed of several “specialists” with deeper expertise in specific areas of threat hunting, digital forensics, or mitigation techniques?

Some AI assistants can tailor their answers to the user’s level and area of expertise. Has your team been exposed to AI in the past? Consider how much assistance your team will need to get up to speed with AI tools. Would they benefit more from a highly configurable, hands-on, direct-interaction AI tool, or from something that works behind the scenes, incorporating AI enhancements into already familiar interactions and interfaces?

Keep in mind the famous Spider-Man quote: “With great power comes great responsibility.”

Size of the team

The size of SOC and threat-hunting teams can vary from a “one-person shop” to dozens of members. Smaller teams are likely to be much faster adopters and to rely on direct communication, while larger teams are more likely to have more levels in their internal org structure. If that is the case, the ability to create specialized reports using AI agents, or to automate the generation and distribution of the latest status summaries, is a valuable feature for your team.

Distribution of the team

Some teams are collocated, while others are distributed across different geo-locations. This factor often affects communication patterns and methods. Your AI tool should enhance the type of communication that is currently prevalent in your team and solve existing problems. For example, a team distributed across a wide range of time zones will benefit from automated, detailed AI-generated reporting.

Your processes and workflows

Your team has already established “well-oiled” practices. Whatever the origin of these routines, they’ve withstood the test of time and are reasonably effective. Take an inventory of them so you don’t throw the baby out with the bathwater in your excitement to embrace AI. Consider:

  • Investigation workflows
  • Incident handling policies
  • Inter- and intra-team reporting methods
  • Response techniques

What can you optimize with AI? Where can your team save time by implementing automated Agentic AI workflows? What decision-making do you want to avoid passing on to AI?

You must set up “healthy boundaries” and determine what you can delegate to an AI system and what you want done by a human. Finally, the integration of the new tool should be organic: it should enhance and improve your team’s life. Choose evolution over revolution. If you must mandate usage of a tool, or if the same tasks take longer and the quality of the work goes down even after the “ramping up” period that is natural for new tool adoption, you should explore different options.

Now that you’ve created a tailored “wish list” of features and capabilities, prioritize it according to your needs. Make sure to note which are must-haves and which are just nice-to-haves. You may also want to consider incorporating multiple AI tools with complementary capabilities.
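
One way to make the wish list actionable is a simple weighted checklist that also flags missing must-haves. The sketch below is illustrative only; the requirement names, weights, and tool feature sets are all hypothetical.

```python
# Hypothetical requirement weights: 3 = must-have, 1 = nice-to-have.
requirements = {
    "detects ransomware":     3,
    "explainable alerts":     3,
    "trial on our data":      3,
    "automated reporting":    1,
    "configurable verbosity": 1,
}

# Which requirements each candidate tool satisfies (illustrative only).
tools = {
    "Tool A": {"detects ransomware", "explainable alerts", "automated reporting"},
    "Tool B": {"detects ransomware", "trial on our data",
               "explainable alerts", "configurable verbosity"},
}

def score(features):
    """Sum the weights of the requirements a tool covers."""
    return sum(w for req, w in requirements.items() if req in features)

for name, features in tools.items():
    gaps = [r for r, w in requirements.items() if w == 3 and r not in features]
    print(f"{name}: score {score(features)}, missing must-haves: {gaps}")
```

A missing must-have should generally disqualify a tool outright, regardless of its total score; the numeric score is most useful for comparing tools that clear that bar.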

2. Avoid the red flags of some cybersecurity AI tools

Now let’s look at what you don’t want to see in an AI solution you’re considering.

Can’t try before you buy

How do you verify that an AI solution fits your list of parameters? You can try evaluating an AI tool based on the marketing materials and demos. However, the best option is to “test drive” it. If a vendor provides an opportunity to try the AI solution on your data and provides hands-on experience with the system for your team – go for it. It is by far the best option and speaks volumes about the confidence of the vendor in their solution. If you must commit before seeing the performance of the tool on your data, this should be your first red flag.

Obscure models

Another red flag is undisclosed details about the core AI models, specifically if the vendor refuses to provide any information about the origin of the models or the data used to train or fine-tune them. Naturally, you will not get a level of detail that could benefit their competitors, but you should be able to verify that the AI model adheres to the standards and policies of your organization.

The absence of that information makes it extremely hard to evaluate the AI solution for ethics, privacy, security, and even the legality of the model’s usage for your use case. With laws and regulations tightening around the world in real time, what was acceptable yesterday may no longer be acceptable tomorrow.

Fuzzy AI ethics

The topic of ethical AI repeatedly surfaces in the headlines of major media outlets. Why is it important for your selection of AI-powered cybersecurity tools? Without ethical considerations and guardrails, AI can cause a wide range of issues:

  • Misinformation
  • Bias and discrimination
  • Compromised data privacy
  • IP Infringement
  • Insufficient governance and accountability

The consequences of these issues can pass to you if the vendor uses an AI system that does not address them.

Results lacking explainability and transparency

It is alarming if the signals are hard to explain. It isn’t sufficient for the AI system simply to indicate that some activity or user is risky; it should provide a clear explanation of the evidence on which it based that conclusion. Even better, it should let you drill down to the specific raw events that led to it.

Your top attacks aren’t there

The next red flag is less definitive. It is possible that even though your top-priority threats aren’t listed in a vendor’s coverage material, the system will still detect them. If you can, red team these attacks and test drive the tool on your data; this lets you verify experimentally how effectively the AI tool detects them. However, if you can’t assess this and your top use cases aren’t mentioned, the chances are high that you will have weak or no coverage for at least some of them.

Adoption of the tool will break your current process

Also consider compatibility between the AI tool and your current work processes. Adoption of a new tool should enhance your team’s productivity and the quality of their work. If the AI tool doesn’t align with your team’s current workflow, or if you must mandate its adoption, it is unlikely to be the right fit.

Note that there will be some learning curve. A reliable vendor should provide sufficient training, onboarding, or threat hunting services to minimize the ramping-up time and get your team up to speed without missing a beat.

You don’t see the tool saving you money

The final red flag is if you don’t see the tool saving you money, or time, which is also money. Most teams have limited resources and budgets. When you consider adopting a new AI tool, ask yourself: do you expect it to save time for your team and broader costs for your organization? If yes, be as specific as possible in your estimations; this will help you compare multiple AI tools or their combinations. Consider your existing operational costs as well as the potential costs of delayed detections or even missed attacks.
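
To make such an estimation specific, a back-of-envelope calculation like the one below can help. Every figure in it is a hypothetical placeholder to be replaced with your own numbers, not a benchmark.

```python
# Back-of-envelope annual savings estimate; all numbers are hypothetical.
analysts           = 5
hourly_cost        = 75.0   # fully loaded cost per analyst hour
hours_saved_per_wk = 4.0    # triage/reporting time saved per analyst per week
weeks_per_year     = 48

breach_cost    = 500_000.0  # expected cost of one missed attack
risk_reduction = 0.02       # estimated drop in annual breach probability

tool_annual_cost = 60_000.0

labor_savings = analysts * hourly_cost * hours_saved_per_wk * weeks_per_year
risk_savings  = breach_cost * risk_reduction  # expected-value estimate
net_benefit   = labor_savings + risk_savings - tool_annual_cost

print(f"labor savings: ${labor_savings:,.0f}")
print(f"risk savings:  ${risk_savings:,.0f}")
print(f"net benefit:   ${net_benefit:,.0f}")
```

Running the same arithmetic with each candidate tool’s price and expected time savings gives you a like-for-like basis for comparison, even if the inputs are rough.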

3. Evaluate the cybersecurity AI tool

Here are proven best practices for your evaluation methodology:

  • Assess it in conditions as close to reality as possible. Specifically, the data you try the AI tool on should come from a wide variety of sources (for instance, include at least one endpoint and one authentication data source to cover a broad variety of attacks).
  • Include an assortment of devices and user types similar to what you have in your organization.
  • “Red team it,” staying as specific and as close to your prioritized list of threats as possible.
  • Ideally, test for both quick, targeted attacks and advanced persistent threats. Let’s be honest, you don’t need a sophisticated algorithm to detect an obvious DDoS attack.
  • Bring into the evaluation process at least one of your “frontline” team members who spends most of their day monitoring alerts, threat hunting, analyzing logs, and responding to attacks.
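
After a red-team exercise, the results can be scored by comparing the attacks you injected against the alerts the tool raised. The sketch below is a minimal illustration of that bookkeeping; all scenario and alert names are hypothetical.

```python
# Score an evaluation run: which injected red-team scenarios the tool flagged,
# and what it flagged by mistake. All names are hypothetical examples.
injected_attacks = {"phishing-sim", "lateral-movement", "data-exfil", "slow-apt"}
tool_alerts      = {"phishing-sim", "data-exfil", "noisy-backup-job"}

detected        = injected_attacks & tool_alerts   # true positives
missed          = injected_attacks - tool_alerts   # false negatives
false_positives = tool_alerts - injected_attacks

recall    = len(detected) / len(injected_attacks)
precision = len(detected) / len(tool_alerts)

print(f"recall:    {recall:.0%}  missed: {sorted(missed)}")
print(f"precision: {precision:.0%}  false positives: {sorted(false_positives)}")
```

Recall tells you how much of your prioritized threat list the tool actually catches, while precision hints at how much alert noise your frontline analysts will have to wade through.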

Now you have a solid, systematic way to find the tool that helps your team succeed in your own unique combination of requirements and circumstances.

OpenText Executive Vice President, Security Products, Muhi Majzoud, recently spoke to Bank Info Security about the integration of GenAI and threat detection and response in cybersecurity strategy. Watch the interview.

You can also view presentations by OpenText data scientists at RSA on demand or use our complimentary threat detection and response checklist to assess vendors.


Mari Pospelova

Maria Pospelova is a Principal Data Scientist, leading a team of data scientists for Interset, the applied AI division for OpenText Cybersecurity. Maria has been “catching bad guys with math” for almost a decade. With profound expertise in applying data science to the cybersecurity domain, she takes an active role in the development and innovation of Interset’s technology, authoring several patents and research papers in both fields.


