Pushing the Boundaries of What’s Possible with AI in Cybersecurity
The premier virtual event for cybersecurity builders leveraging GenAI and LLMs to enhance security operations.
The conference for GenAI builders in cybersecurity
Leading cybersecurity professionals building AI systems
Join leading cybersecurity practitioners and AI builders for Security Frontiers 2025, a virtual event exploring how AI is transforming cybersecurity. This free-to-attend conference brings together hands-on security professionals building AI-powered solutions to share insights, challenges, and lessons learned.
11 AM - 2:15 PM PT
March 27th Conference Agenda
11:00 - 11:10 AM PT
Welcome & Agenda
11:10 - 11:30 AM PT
AI is making a tremendous impact on cybersecurity, with both opportunities and threats. In this panel, we will discuss the current state of GenAI in security and what practitioners and security leaders can do to make sure this technology is working for them.
Where Are We Now, and Where Are We Headed?
Daniel Miessler, Caleb Sima, Edward Wu
11:30 - 11:50 AM PT
Prompt engineering, LLM flows, RAG, agents, fine-tuning, evaluations … not sure where to start? This session offers a practical roadmap for integrating LLMs into your security workflows amid the overwhelming choices and hype. Whether your specialty is AppSec, GRC, Red Teaming, or Security Operations, this talk will help you choose the right techniques, avoid common pitfalls, and apply proven patterns with real-world examples.
LLMs as a Force Multiplier: Practical Patterns for Security Teams
Dylan Williams, Co-Founder and Head of Research at Stealth
11:50 AM - 12:10 PM PT
In this session, Anshuman will share lessons learned from building AI security agents with the ReAct framework, including real-world code examples and the prompts provided to the LLM, giving others in the community a practical starting point for building their own agents.
Lessons from building ReAct Security Agents
Anshuman Bhartiya, Staff Security Engineer at Lyft
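For attendees new to the pattern this session covers, here is a minimal sketch of a ReAct-style agent loop. It is illustrative only and not the speaker's code; `call_llm` and the tool names are placeholders for whatever model and security tooling you use.

```python
# Minimal ReAct-style loop (illustrative sketch, not the speaker's code).
# `call_llm` and the tool names below are hypothetical placeholders.
import json

TOOLS = {
    # Each tool maps a name to a function taking a single string argument.
    "whois_lookup": lambda domain: f"(whois record for {domain})",
    "search_logs":  lambda query:  f"(log lines matching {query})",
}

SYSTEM_PROMPT = """You are a security analyst agent.
Respond with JSON: {"thought": "...", "action": "<tool name or finish>", "action_input": "..."}
Available tools: whois_lookup, search_logs."""

def call_llm(messages):
    """Placeholder for a chat-completion call to whichever LLM you use."""
    raise NotImplementedError

def react_agent(task, max_steps=5):
    messages = [{"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": task}]
    for _ in range(max_steps):
        step = json.loads(call_llm(messages))       # thought / action / action_input
        if step["action"] == "finish":
            return step["action_input"]             # final answer
        observation = TOOLS[step["action"]](step["action_input"])
        messages.append({"role": "assistant", "content": json.dumps(step)})
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "Step limit reached without an answer."
```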
12:10 - 12:15 PM PT
Break
12:15 - 12:35 PM PT
This session will explore the development of an in-house threat modeling assistant that leverages LLMs through AWS Bedrock and Anthropic Claude. Learn how we're building a private solution that automates and streamlines the threat modeling process while keeping sensitive security data within our control. We'll demonstrate how this proof-of-concept tool combines LangChain and Streamlit to create an interactive threat modeling experience.
LLM-Powered Private Threat Modeling
Murat Zhumagali, Security Engineer at Progress
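As a rough illustration of the kind of stack this session describes, a Bedrock-backed threat modeling assistant wired together with LangChain and Streamlit can fit in a few lines. This is a minimal sketch under stated assumptions, not the presenter's implementation; the model ID, prompt wording, and STRIDE framing are assumptions.

```python
# Illustrative sketch of a Bedrock + LangChain + Streamlit threat modeling assistant.
# Not the presenter's actual code; the model ID and prompt wording are assumptions.
import streamlit as st
from langchain_aws import ChatBedrock
from langchain_core.prompts import ChatPromptTemplate

# The model is invoked via AWS Bedrock inside your own AWS account.
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a threat modeling assistant. Given a system design, "
               "enumerate STRIDE threats and suggest mitigations."),
    ("human", "{design}"),
])
chain = prompt | llm

st.title("Threat Modeling Assistant")
design = st.text_area("Paste your system design / data-flow description")
if st.button("Analyze") and design:
    st.write(chain.invoke({"design": design}).content)
```

Because the model call goes through Bedrock in your own account, prompts and design documents stay within a boundary you control, which is the "private" aspect the abstract highlights.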
12:35 - 12:55 PM PT
Getting an agent to reliably interact with a legacy security tool can be quite challenging—especially when dealing with output parsing, state management, and error handling. But what if we designed our tools to be AI-ready from the outset? Join us in a practical demo using AI agents to interact with Reaper, an open-source, API-based intercepting web attack proxy and fuzzing tool.
Building Security Tools by Humans, for AI Agents
Josh Larsen, Co-founder and CTO at Ghost Security
12:55 PM - 1:15 PM PT
This session will describe PII Detective, an open-source tool that uses Large Language Models (LLMs) to identify and classify PII with exceptional accuracy by analyzing table metadata. Participants will learn how to leverage LLMs for security operations and implement Dynamic Data Masking to seamlessly protect sensitive data.
Using LLMs for Cost-Effective PII Detection
Kyle Polley, Member of Technical Staff at LLM Search
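The general shape of metadata-only PII classification can be sketched in a few lines. The snippet below illustrates the approach rather than reproducing PII Detective itself; `classify_with_llm`, the label set, and the `pii_mask` policy name are assumptions, and the generated statements follow Snowflake's Dynamic Data Masking syntax.

```python
# Illustrative sketch of metadata-based PII classification (not PII Detective's code).
# `classify_with_llm` is a placeholder for whatever chat-completion call you use.
import json

def classify_with_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its raw text response."""
    raise NotImplementedError

def classify_columns(table: str, columns: list[dict]) -> dict:
    """Ask the LLM to label each column (e.g. EMAIL, NAME, PHONE, NONE) from metadata only."""
    prompt = (
        "Classify each column as a PII type or NONE. Respond as JSON "
        '{"column_name": "PII_TYPE"}.\n'
        f"Table: {table}\nColumns: {json.dumps(columns)}"
    )
    return json.loads(classify_with_llm(prompt))

def masking_statements(table: str, labels: dict) -> list[str]:
    """Emit Snowflake Dynamic Data Masking assignments for columns flagged as PII."""
    return [
        f"ALTER TABLE {table} MODIFY COLUMN {col} SET MASKING POLICY pii_mask;"
        for col, label in labels.items() if label != "NONE"
    ]
```

Because only table and column metadata is sent to the model, the approach keeps row-level data out of the prompt while still driving the downstream masking decisions.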
1:15 - 1:20 PM PT
Break
1:20 - 1:40 PM PT
This session will explain how Databricks uses AI to automatically identify and rank vulnerabilities in third-party libraries based on severity and relevance to Databricks infrastructure. The VulnWatch system has significantly reduced manual effort for the security team.
AI-Enhanced Prioritization of Vulnerabilities
Anirudh Kondaveeti, Data Scientist at Databricks
1:40 - 2:00 PM PT
Leveraging OpenAI's recent findings on the malicious AI tool "Peer Review" and last year’s ISOON leak, DarkWatch explores how threat actors could weaponize AI to surveil social media for political dissent. This presentation introduces a proof-of-concept implementation using Twitter datasets, Neo4j graph algorithms, and an adaptive ReAct agent.
DarkWatch: Exploring the Risks of AI-Driven Surveillance Systems Through Hands-On Building
Jeff Sims, Senior Staff Data Scientist at Infoblox
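To give a concrete feel for the graph side of such a proof of concept, here is a small sketch of how an agent tool might query a Neo4j social graph. It is not the DarkWatch code; the connection details and the User/Tweet/Hashtag schema are assumptions about a Twitter-style dataset.

```python
# Illustrative sketch of a graph query an agent could call as a tool (not DarkWatch code).
# The node labels, relationship types, and credentials below are assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

QUERY = """
MATCH (u:User)-[:POSTED]->(t:Tweet)-[:MENTIONS]->(h:Hashtag {name: $hashtag})
RETURN u.handle AS handle, count(t) AS posts
ORDER BY posts DESC LIMIT $limit
"""

def most_active_accounts(hashtag: str, limit: int = 10):
    """Rank accounts by posting volume on a hashtag; a ReAct agent could expose this as a tool."""
    with driver.session() as session:
        return [record.data() for record in session.run(QUERY, hashtag=hashtag, limit=limit)]
```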
2:00 - 2:05 PM PT
Wrap Up

About the Event
Security Frontiers is a virtual cybersecurity conference designed for security practitioners and leaders who are leveraging GenAI to improve cybersecurity. This three-hour event highlights projects and examples of how people are automating security tasks with GenAI and shares lessons learned from building GenAI-powered security tools.
This conference is designed for security practitioners interested in building and using GenAI. You will gain practical knowledge and insights to help you incorporate GenAI into your security practices and understand its potential to drive innovation.
FAQs
Got questions? We have answers
Security Frontiers is a conference primarily for security engineers, SOC analysts, SOC lead analysts, and SOC leads who are or want to start experimenting with GenAI in their organization.
Security Frontiers is a virtual cybersecurity conference designed for security practitioners and leaders who are leveraging GenAI to enhance and streamline traditional security practices.
The conference will be live on March 27th, 2025.
The conference is about three hours long, running from 11:00 a.m. to approximately 2:15 p.m. PT. It's a fully virtual event.
Check back for details.
The event is free to attend, but registration is required to ensure you receive all the necessary access to information and updates.
Yes, there will be Q&A sessions and opportunities for live interaction with speakers. We encourage active participation and engagement from all attendees.
Registered attendees will receive updates via email. You can also follow #securityfrontiers on Blue Sky and Mastodon to join the discussion.
Yes, the event will be recorded, and the recordings will be shared with registrants after the event.
Yes! While the CFP for March 27th is now closed, we are accepting submissions from speakers who want to share their GenAI cybersecurity projects at the next Security Frontiers event.
For more information, or if you need any assistance, please contact our support team at info@securityfrontiers.ai. We’re here to help make your conference experience as smooth and enjoyable as possible.