My colleague Nakkul Khurana and I attended the RSA Conference 2025 (RSAC 2025) to present the work we completed at OpenText. Our talk, “How to Use LLMs to Augment Threat Alerts with the MITRE Framework,” was well received, with about 200 people attending. The OpenText booth at the Expo, which showcased our full cybersecurity product line, was also a main attraction for visitors.
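For a flavour of the approach we presented, here is a minimal sketch of the core idea: prompting an LLM to suggest likely MITRE ATT&CK techniques for a raw alert. It uses the openai Python client; the model name, alert fields, and prompt below are illustrative stand-ins, not the pipeline from our talk.

```python
# Minimal sketch: ask an LLM to tag a raw threat alert with likely
# MITRE ATT&CK techniques. Illustrative only -- not the talk's pipeline.
import json
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

alert = {
    "source": "EDR",
    "message": "powershell.exe spawned by winword.exe with encoded command",
    "host": "WS-0042",
}

prompt = (
    "You are a SOC analyst. Map the following alert to the most likely "
    "MITRE ATT&CK technique IDs and names. Respond with a JSON list of "
    'objects with "id", "name", and "rationale" fields.\n\n'
    f"Alert: {json.dumps(alert)}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model works for this sketch
    messages=[{"role": "user", "content": prompt}],
    temperature=0,        # keep the mapping deterministic
)
print(response.choices[0].message.content)
```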
The event was also packed with insightful sessions covering the latest trends and challenges in cybersecurity. A major focus this year was the intersection of artificial intelligence (AI) and cybersecurity, exploring both the benefits and the risks. This post summarizes some key takeaways from various talks presented at the conference.
AI’s dual role in cybersecurity
Several sessions highlighted AI’s evolving role. In “Harnessing AI to Enhance Cloud Security While Addressing New Attack Vectors,” George Gerchow discussed how AI-powered bots like MongoDB’s Guardian Bot (GB) are becoming essential for real-time threat response and for automating security and compliance tasks. These bots use AI to adapt to emerging threats and improve operational efficiency, significantly reducing response times.
However, AI also brings new risks. Michael Bargury’s presentation, “Your Copilot Is My Insider,” delved into vulnerabilities associated with AI copilots and plugins. He discussed potential data leakage, RAG poisoning, and new attack vectors that arise from integrating AI into business processes. The key takeaway: AI can greatly enhance security, but it also requires careful management and security controls to prevent misuse.
The importance of security in RAG systems
Akash Mukherjee and Dr. Saurabh Shintre’s “RAG-NAROK: What Poorly-Built RAGs Can Do to Data Security” emphasized the security challenges in Retrieval Augmented Generation (RAG) systems. They explained that adding private data to chatbots requires robust access controls and permissions management to prevent data leakage. Akash and Saurabh also discussed different permission enforcement methods along with the need for sensitive data protection beyond just permissions.
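To make that idea concrete, here is a minimal sketch of permission enforcement at retrieval time, so documents a user cannot read never reach the model’s context. The document store, groups, and toy ranking below are hypothetical stand-ins, not code from the session.

```python
# Minimal sketch of permission-aware retrieval for a RAG pipeline.
# The store, ACL groups, and ranking are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Doc:
    text: str
    allowed_groups: set[str] = field(default_factory=set)

STORE = [
    Doc("Q3 revenue projections (finance only)", {"finance"}),
    Doc("Company-wide security advisory", {"everyone"}),
]

def retrieve(query: str, user_groups: set[str], k: int = 3) -> list[Doc]:
    """Rank candidates, then drop anything the user cannot read.
    Filtering BEFORE documents reach the prompt is the key step: the
    LLM never sees text the requesting user is not entitled to."""
    candidates = [d for d in STORE if query.lower() in d.text.lower()]  # toy ranking
    permitted = [d for d in candidates
                 if d.allowed_groups & (user_groups | {"everyone"})]
    return permitted[:k]

print([d.text for d in retrieve("projections", {"everyone"})])  # [] -- filtered out
print([d.text for d in retrieve("projections", {"finance"})])   # projections doc
```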
Security automation with LLM-driven workflows
In the session “Fast-Track Security Automation with LLM-Driven Workflows,” Steve Povolny explored the application of Large Language Models (LLMs) in automating security operations. He covered various LLM tools, prompt engineering best practices, and real-world use cases for improving Security Operations Center (SOC) efficiency. Steve also highlighted the importance of addressing security considerations like data privacy, prompt injection risks, and model bias.
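As one concrete illustration of those considerations, an automated triage workflow might scrub obvious secrets and PII from an alert before it ever reaches the model. The patterns and field values below are illustrative assumptions, not examples from Steve’s session.

```python
# Minimal sketch: redact sensitive values from an alert before handing
# it to an LLM in an automated triage workflow. Patterns are illustrative.
import re

REDACTIONS = [
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<IP>"),           # IPv4 addresses
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "<EMAIL>"),        # email addresses
    (re.compile(r"(?i)(password|api[_-]?key)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]

def redact(text: str) -> str:
    """Apply each pattern in turn so raw secrets never enter the prompt."""
    for pattern, replacement in REDACTIONS:
        text = pattern.sub(replacement, text)
    return text

raw = "Failed login for admin@corp.example from 203.0.113.7, password: hunter2"
print(redact(raw))
# -> "Failed login for <EMAIL> from <IP>, password=<REDACTED>"
```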
Principles of GenAI security
Diana Kelley’s talk, “Principles of GenAI Security: Foundations for Building Security In,” provided an overview of Generative AI (GenAI) security. She discussed the GenAI threat landscape, architectural considerations, and security at runtime. Key takeaways include the importance of understanding the unique risks associated with AI systems and implementing security-by-design principles.
Adversarial neural patterns in LLMs
In “Beyond the Black Box: Revealing Adversarial Neural Patterns in LLMs,” Mark Cherp and Shaked Reiner focused on uncovering hidden vulnerabilities in LLMs. They discussed new jailbreak techniques and mitigations, exploring the “psychology” of models and how they can be manipulated. The talk highlighted the need for continuous research into defences against sophisticated AI attacks.
Supply chain security and emerging threats
Dr. Andrea Little Limbago’s presentation, “A Stuxnet Moment for Supply Chain Security?” addressed the emerging threat of supply chain infiltration, referencing recent incidents like the pager attacks. She discussed how digital supply chain attacks are growing and potentially shifting cyber norms. Her talk also emphasized the need for enhanced security measures and vigilance in hardware and software supply chains.
The future of security UX with Agentive AI
In “How Security UX Must Change, with Agentive AI,” Steph Hay explored how user experience (UX) in security must adapt with the rise of agentive AI. She emphasized offloading tasks, dynamic UIs, and exponential outcomes, arguing that assistive UX features like “easy buttons,” seeded prompts, and multi-turn chats will become crucial for improving security operations.
Social engineering and GenAI
Perry Carpenter’s session, “Conversations with a GenAI-Powered Virtual Kidnapper (and Other Scambots),” examined how social engineering attacks can leverage generative AI. He demonstrated how these tools create realistic scams and highlighted the need for organizations to prepare and train employees to recognize and respond to these threats.
Initial access brokers and market trends
In “Initial Access Brokers: A Deep Dive,” Amit Weigman provided insights into the world of initial access brokers (IABs), discussing their methods of operation, the types of access they sell, and current market trends. Understanding the IAB ecosystem is crucial for preventing and responding to security breaches.
The evolution of the SOC in an AI-driven universe
Dave Gold’s presentation, “The Future of the SOC in an AI-Driven Universe,” examined the current state of SOCs and how they will evolve with AI. He highlighted the shift from manual processes to semi-autonomous and autonomous SOCs, the need for scalable AI-driven platforms, and the evolution of SOC visualizations.
Safety and security of LLM agents
In “Safety and Security of LLM Agents: Challenges and Future Directions,” Dawn Song focused on the unique safety and security challenges posed by LLM agents. She discussed potential attacks, evaluation methods, risk assessment, and defences for these systems. Ensuring both safety and security is crucial for realizing the benefits of LLM agents.
Zero trust AI and multi-agent systems
In “Zero Trust AI: Securing Multi-Agent Systems for Private Data Reasoning,” Ken Huang addressed the security of multi-agent systems. He introduced the MAESTRO threat modelling approach and emphasized the need for a zero-trust security model in AI systems handling private data.
Conclusion
RSAC 2025 made it clear that AI is fundamentally changing the cybersecurity landscape. While it offers tremendous opportunities for enhancing defences, it also introduces new and complex challenges. Organizations must adapt by understanding these changes, adopting AI-driven security solutions, and proactively addressing the associated risks. Staying informed and prepared is key to navigating the future of cybersecurity.
Learn how OpenText Core Threat Detection and Response is leveraging AI-driven behavioural analytics to revolutionize SOC teams.