“AI Goes Beyond Simply Identifying Problems By Providing Comprehensive Solutions”

In an era where cyber threats are becoming more intelligent and evasive, the rise of adversarial and agentic AI is reshaping the cybersecurity landscape. In this exclusive interview, Onkar Sharma, Consulting Editor at Digital Terminal, engages in a deep-dive conversation with Jason Merrick, Senior VP of Product at Tenable.

Jason shares expert insights into how agentic AI is redefining cyber defense—moving beyond traditional automation to proactive, autonomous risk mitigation. From understanding adversarial threats to envisioning self-driving security systems, this discussion explores the evolving role of AI in fortifying digital infrastructure and aligning security with broader business resilience.

Onkar: Can you elaborate on the most significant ways adversarial AI is currently outpacing traditional security defenses, and what specific automated tactics are attackers employing today?

Jason: Adversarial AI is crafted to deceive AI systems, causing them to make incorrect or unintended predictions or decisions. Threat actors often introduce attacks through the AI's input data, altering the original data or the model itself by changing its parameters. They do this by analyzing the AI system's algorithms, data-processing methods, and decision-making patterns, then use techniques that give them a foothold in the system, letting them manipulate training data and effectively poison it. Adversaries choose the path of least resistance, going after publicly exposed, critically vulnerable, and overprivileged users and assets.

This is an especially significant risk because sensitive secrets are increasingly stored in the cloud, home to the majority of AI workloads. Cloud-native AI services, model training workloads and related stored data all rely on large, often sensitive datasets. A study by Tenable found that roughly one third (31%) of publicly exposed cloud storage resources contained data of the highest sensitivity.

The consequences of adversarial AI attacks can range from misclassification of images or text to potentially life-threatening situations in critical infrastructure sectors like healthcare and autonomous vehicles.

Onkar: With organizations deploying dozens of security tools, how does agentic AI specifically cut through this complexity to reduce blind spots and accelerate response, rather than just adding another layer?

Jason: AI goes beyond simply identifying problems by providing comprehensive solutions and addressing four key questions that commonly burden human teams: What needs to be fixed? Who should fix it? What is the optimal response process? And when should humans be involved? This strategy enhances human oversight rather than removing it, allowing humans to focus on high-impact actions while offloading repetitive triage and low-level decision-making, thereby reasserting control.

Onkar: How does "agentic AI" fundamentally differ from the "automation" that many security teams are already attempting to implement? What makes it "proactive and autonomous" in a way traditional automation isn't?

Jason: Traditional ML-powered security solutions focus on rule-based, predefined workflows, such as scanning for vulnerabilities or applying patches based on set triggers. They are inherently reactive and limited to executing programmed tasks. Agentic AI, on the other hand, introduces proactive and autonomous decision-making capabilities that elevate how cyber risk is addressed.

The key difference is agentic AI’s ability to reason, adapt, and act independently within a defined scope. For example, traditional automation might flag a critical vulnerability in a system and send an alert. Agentic AI, however, analyzes the vulnerability’s context, its exploitability, the asset’s criticality, and the organization’s threat landscape, and then autonomously prioritizes remediation, even suggesting specific mitigation strategies.

Additionally, agentic AI learns from evolving attack paths and adjusts its approach, unlike traditional automation that struggles with novel threats or complex environments. This proactive stance reduces response times, eliminates blind spots, and allows organizations to focus on strategic priorities, ultimately strengthening the organization’s resilience against cyber threats.
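The contextual prioritization Jason describes, weighing exploitability, exposure, and asset criticality rather than raw severity alone, can be sketched in a few lines. This is an illustrative toy model: the fields, weights, and thresholds are assumptions for the sake of the example, not any vendor's actual scoring.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float               # base severity score, 0-10
    exploit_available: bool   # is a public exploit known?
    publicly_exposed: bool    # is the asset reachable from the internet?
    asset_criticality: float  # business importance, 0 (low) to 1 (crown jewel)

def contextual_priority(f: Finding) -> float:
    """Blend raw severity with exploitability and business context."""
    score = f.cvss / 10.0
    if f.exploit_available:
        score *= 1.5   # weight is an illustrative assumption
    if f.publicly_exposed:
        score *= 1.3   # likewise
    return min(score * (0.5 + f.asset_criticality), 1.0)

# A moderate flaw on an exposed, business-critical asset should outrank a
# higher-severity flaw buried on a non-critical internal host.
internal = Finding(cvss=9.8, exploit_available=False,
                   publicly_exposed=False, asset_criticality=0.2)
exposed = Finding(cvss=7.5, exploit_available=True,
                  publicly_exposed=True, asset_criticality=0.9)
```

Ranking these two findings with `contextual_priority` places `exposed` above `internal`, the inversion a pure CVSS sort would miss.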

Onkar: What does a "self-driving" security system, powered by agentic AI, look like in a real-world scenario? Can you walk us through an example of how it would handle a novel threat from detection to remediation?

Jason: A fully functioning self-driving security system doesn’t exist yet. However, agentic AI is already automating many aspects of cybersecurity that only require minimal human intervention.

In a real-world scenario, a swarm of agents, each specialized in performing specific actions, would work in tandem. A detection agent would identify unusual API calls from unfamiliar IP addresses, indicating potential misuse of compromised AWS credentials. The agent correlates cloud asset inventory data and threat intelligence, classifying the activity as a novel attack without predefined signatures. The analysis agent then takes over, mapping the affected cloud resources and noting their significance in hosting critical data. It analyzes the data to confirm credential misuse and assesses the blast radius, prioritizing the threat based on its severity.

Onkar: Agentic AI keeps humans in the loop for critical decisions. How is this balance maintained to ensure both efficiency and necessary oversight, especially when dealing with autonomous remediation actions?

Jason: Given the critical nature of cybersecurity, maintaining human oversight is essential to prevent unintended consequences, ensure accountability, and leverage human expertise for complex situations. Human-in-the-loop architectures are one way to go about it. Human oversight is needed for critical decisions like shutting down a major server or granting admin access to AI training data in the cloud.

The level of autonomy AI agents have can also be dynamically adjusted based on the perceived risk of the threat, the criticality of the affected systems, and the confidence level of the AI's assessment. For example, a low-severity alert on a non-critical endpoint can be fully automated, while a high-severity threat to a core business system triggers a human review. There’s no one-size-fits-all approach to implementing agentic AI. Organizations must be crystal clear about setting the rules of engagement.
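The risk-based dial on autonomy described above can be expressed as a small policy function. The tiers, labels, and confidence threshold here are illustrative assumptions, not a vendor's actual rules of engagement:

```python
def autonomy_level(severity: str, asset_tier: str, confidence: float) -> str:
    """Decide how much latitude the agent gets for a given finding.

    severity:   "low" | "medium" | "high"
    asset_tier: "core" for business-critical systems, else e.g. "endpoint"
    confidence: the AI's self-assessed confidence, 0.0-1.0
    """
    if severity == "high" or asset_tier == "core":
        return "human_review"          # always escalate the risky cases
    if severity == "low" and confidence >= 0.9:
        return "auto_remediate"        # low stakes, high confidence
    return "auto_with_notification"    # act, but tell a human
```

Ordering matters: the escalation check runs first, so even a high-confidence finding on a core system still lands in front of a human.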

Human intervention is also essential to ensuring AI learns from its mistakes. When humans intervene or override an AI's decision, this feedback must be incorporated into the AI's learning model. This allows the AI to learn from human expertise and refine its decision-making over time, reducing future false positives or incorrect actions.

Onkar: How does agentic AI bridge the gap between cybersecurity operations and broader business priorities, moving beyond a purely technical function to one that directly supports organizational resilience and goals?

Jason: Agentic AI helps us move beyond a sea of technical vulnerabilities to pinpointing the exposures that truly matter to the business. For instance, the TenableOne platform leverages a massive data fabric of more than a trillion unique exposures, assets, and findings. It uses AI to identify complex attack paths and 'toxic' risk combinations. This allows it to prioritize remediation efforts not just by severity but by their potential impact on critical business assets, revenue streams, or operational continuity. It also helps communicate cyber risk accurately in terms that executives and board members understand, enabling smarter decision-making.

Agentic AI is also making cybersecurity proactive. AI assistants make it easier to remediate the risks that matter most, offering natural language search, clear explanations of complex attack paths, and prioritized, actionable remediation guidance. This empowers security teams to anticipate attacks and proactively reduce the organization's overall exposure, building true cyber resilience.

Onkar: What are the biggest hurdles organizations face in shifting from reactive, manual processes to embracing proactive, agentic remediation, and how can they begin to overcome them?

Jason: Organizations struggle with fragmented security ecosystems: legacy tools, cloud solutions, and varied endpoints create data silos and blind spots when integrating new agentic AI platforms. To overcome this, prioritize consolidating security tools, ideally on a unified platform like TenableOne. These platforms contextualize security findings with business criticality and threat data, making AI decisions more relevant.

Leveraging cybersecurity vendors specializing in proactive security provides essential support and training. Additionally, organizations must secure their AI systems, managing the posture of AI models, data, and pipelines to prevent exploitation and bias. Proactively addressing these challenges allows a transition to a resilient, proactive cybersecurity posture, where agentic AI empowers security teams and supports business objectives.
