
Artificial intelligence (AI) has altered the cybersecurity landscape, driving technological breakthroughs and increasingly sophisticated threats alike. Agentic AI is among these developments.
As organizations consider adopting agentic AI, they must weigh its benefits against its risks. Here, Security magazine talks with Diana Kelley, Chief Information Security Officer (CISO) at Noma Security, about best practices for implementing agentic AI.
Security magazine: Tell us about your background and career.
Kelley: I’ve spent my career helping organizations navigate the evolving world of cybersecurity. Today I serve as Chief Information Security Officer at Noma Security, an AI security platform. My journey has taken me through technical leadership and advisory roles at companies large and small, including Protect AI (now Palo Alto Networks), Microsoft, IBM Security, Symantec, Burton Group (now Gartner), and KPMG, as well as co-founding the consultancy SecurityCurve. Along the way, I’ve been fortunate to serve on industry boards including WiCyS, the Executive Women’s Forum, CyberFuture Foundation, TechTarget Security Editorial, and InfoSec World.
Teaching, writing, and mentoring are passions of mine. I love sharing knowledge in my public speaking and keynotes, via my LinkedIn Learning courses, and was so proud to learn that a book I co-authored, Practical Cybersecurity Architecture, had been adopted by some professors as a textbook. I’m honored to be recognized as a Global Cyber Security Hall of Fame inductee and EWF Executive of the Year, yet what matters most to me is building collaboration and inclusion. Security is at its strongest when we come together, learn from one another, and work side by side to protect what’s most important.
Security: What is agentic AI, and in what industries might the use of agentic AI be the most valuable?
Kelley: Agentic AI brings together software and language models (genAI) to create systems that can make decisions and act autonomously toward defined goals. Traditional genAI reacts to prompts, while agentic AI systems plan, adapt, and collaborate across complex tasks. It has useful applications in many industries, but early adoption is concentrated where speed and context are critical, such as financial services (fraud detection and dynamic risk modeling) and manufacturing (supply chain optimization).
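
For readers who want to see the distinction in code, a minimal sketch of that plan-act-observe loop might look like the following. The planner and tool here are illustrative stubs, not any particular vendor's API:

    # Minimal sketch of an agentic loop: the system plans, acts, observes,
    # and adapts toward a goal, rather than answering a single prompt.
    # All functions here are illustrative stubs, not a real vendor API.

    def plan(goal: str, history: list[str]) -> str:
        """Stand-in for a model call that picks the next action."""
        return "done" if history else f"lookup:{goal}"

    def act(action: str) -> str:
        """Stand-in for a tool call (search, API, script, etc.)."""
        return f"result of {action}"

    def run_agent(goal: str, max_steps: int = 5) -> list[str]:
        history: list[str] = []
        for _ in range(max_steps):       # bound autonomy with a step budget
            action = plan(goal, history)
            if action == "done":         # the agent decides the goal is met
                break
            history.append(act(action))  # observe the result and adapt
        return history

    print(run_agent("flag suspicious wire transfers"))

The step budget in the loop is a deliberate design choice: even in a toy example, autonomy should be bounded rather than open-ended.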
We’re also seeing a lot of interest in the cybersecurity industry, with a number of companies working on agents to augment or even backfill tier-one responders and improve real-time threat detection capabilities. Some organizations are also experimenting with agents that can perform low-risk or very well-defined automated remediation actions. But really, any industry that needs faster automation alongside better decision accuracy stands to benefit from agentic AI.
Security: What are the benefits of introducing agentic AI to an organization? What are the risks?
Kelley: The benefits are pretty clear and very exciting: efficiency gains, improved accuracy, faster problem resolution, and rapid action on insights from data. Agentic AI, done well, can scale processes that once required manual intervention, driving response and resilience. This is incredibly good stuff, but the risks are not to be taken lightly.
With all of that agency and automation comes real potential for downsides unless systems are designed, tested, and deployed with security built in. Risks include over-reliance on autonomous systems, shadow AI, data loss, cascading hallucinations, embedded bias, potential regulatory or ethical violations, and exposure to adversarial attacks.
Without governance, transparency, and oversight, organizations risk data breaches, system outages, reputational damage, and more. That’s why it’s so critical to build security and governance in: clear accountability, strong human-in-the-loop controls, and a focus on explainability, so that agentic AI actions operate as expected and in line with organizational values, policies, and safety requirements.
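
As a rough illustration of what a human-in-the-loop control can look like in practice, the sketch below gates high-impact agent actions behind explicit human sign-off while letting low-risk ones proceed. The action names and risk tiers are invented for the example:

    # Illustrative sketch of a human-in-the-loop control: high-impact agent
    # actions require human approval instead of executing autonomously.
    # Risk tiers and action names are hypothetical examples.

    HIGH_IMPACT = {"isolate_host", "disable_account", "delete_data"}

    def requires_approval(action: str) -> bool:
        return action in HIGH_IMPACT

    def execute(action: str, approved_by: str | None = None) -> None:
        if requires_approval(action) and approved_by is None:
            raise PermissionError(f"{action} needs human sign-off")
        # Record who is accountable for every action, for audit and explainability.
        print(f"executed {action} (approved_by={approved_by or 'policy: low-risk'})")

    execute("enrich_alert")                             # low-risk: runs autonomously
    execute("disable_account", approved_by="soc_lead")  # high-impact: gated

Logging the approver on every action is what turns "clear accountability" from a policy statement into an auditable record.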
Security: How can organizations safely and securely introduce agentic AI?
Kelley: Safe adoption starts with governance and risk management frameworks aligned with NIST’s AI RMF, ISO 42001, OWASP GenAI project resources, sector-specific standards, and regulations like the EU AI Act. The old saying, “you can’t manage what you don’t know,” applies here as well.
Organizations should start by conducting a full inventory of all of the AI in use across the business. A lot of agentic AI work is still in proof-of-concept mode, so now is the time to build those inventories and have conversations with business owners to understand what they are trying to accomplish, so you can help them do it responsibly. Weave AI-aware language into policies and provide standard operating procedures or guidelines so employees are AI-literate, responsible adopters. Pilot deployments with red-team testing, ongoing monitoring, and clear escalation paths can help uncover weaknesses early.
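
As one hypothetical way to structure such an inventory, each AI system could be tracked as a record capturing its owner, data access, and degree of autonomy. The fields and example entry below are illustrative, not a prescribed schema:

    # Hypothetical shape of an AI inventory record, so "you can't manage
    # what you don't know" becomes a concrete, queryable asset list.
    from dataclasses import dataclass

    @dataclass
    class AIAsset:
        name: str                  # e.g. "fraud-triage-agent"
        owner: str                 # accountable business owner
        model: str                 # underlying model or vendor
        data_accessed: list[str]   # systems and data the agent can touch
        autonomy: str              # "advisory", "human-approved", or "autonomous"
        status: str                # "poc", "pilot", or "production"

    inventory = [
        AIAsset("fraud-triage-agent", "payments-team", "example-llm",
                ["transactions-db"], "human-approved", "poc"),
    ]

    # Example query: which systems act fully autonomously?
    print([a.name for a in inventory if a.autonomy == "autonomous"])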
Ultimately, secure adoption will require a blend of technical safeguards and controls, AI-aware processes, and organizational readiness for cultural change.
Security: Anything else you’d like to add?
Kelley: Agentic AI is exciting! It offers remarkable promise, but its success depends on more than algorithms. Building trust with employees, customers, and regulators requires openness about how systems make decisions and who remains accountable. Equally important is inclusion: diverse perspectives in design and oversight reduce blind spots and strengthen resilience. We should also remember that AI isn’t an infallible magic bullet; it can make mistakes. And it’s a tool that should amplify human judgment, not replace it.