
Dragan Pleskonjić on AI, Cybersecurity, and the Age of Autonomous Agents
Dragan Pleskonjić is the Founder and CEO of GLOG.AI, a high-tech entrepreneur with extensive experience in computer and information security, computer systems and networks, software and application security, and software development methodologies and architectures. He is known for his strong leadership skills and his talent for building and managing successful teams. He is the author of ten books so far, including university textbooks on topics such as cybersecurity, operating systems, and software. Dragan is an inventor with a set of patents granted by the USPTO and other patent offices. He has published more than ninety scientific and technical papers in academic journals and at international conferences. Visit his Personal Website to learn more about Dragan Pleskonjić.
Thank you for accepting our invitation to speak at the Cybersecurity Summit and to do this interview.
How is the rise of agentic AI – where AI agents operate autonomously – reshaping businesses, and what security challenges does it introduce?
The rise of agentic AI, where AI agents operate autonomously, is significantly reshaping businesses by enhancing efficiency, decision-making, and customer experiences. Agentic AI automates complex, multi-step tasks that were previously challenging to manage, leading to increased productivity across various business functions. Unlike traditional rule-based systems, agentic AI can adapt and make decisions in dynamic environments, further boosting efficiency.
In decision-making, AI agents analyze vast amounts of data to provide insights that support more informed decisions. They proactively identify opportunities and risks, enabling businesses to respond swiftly to changing market conditions. This capability is crucial for maintaining a competitive edge.
Agentic AI also plays a significant role in creating personalized customer experiences. By anticipating customer needs and offering tailored recommendations, businesses can enhance customer satisfaction and loyalty. This personalization fosters stronger customer relationships and drives business growth.
Agentic AI also facilitates innovation and the development of new business models. It enables the creation of new products and services, allowing businesses to explore possibilities that were previously out of reach.
However, the rise of agentic AI introduces several security challenges. The increased prevalence of AI agents expands the attack surface, creating new entry points for cyberattacks. Compromised AI agents can be exploited to access sensitive data and systems, posing significant risks.
The ease of deploying AI agents can lead to “shadow AI,” where agents are deployed without proper IT oversight. This lack of visibility makes it difficult to monitor and control AI agent activity, increasing the risk of unauthorized access and malicious activity.
AI agents often require access to sensitive data and systems, which escalates both privileges and risk. If an agent is compromised, attackers can gain access to highly privileged accounts, leading to substantial damage.
Human-in-the-loop processes, while crucial for oversight, can introduce vulnerabilities. Attackers may target individuals responsible for validating and approving AI agent actions, exploiting these human elements.
Managing the identities of numerous AI agents in a business environment is another challenge. Ensuring that only authorized AI agents have access to sensitive resources is critical to maintaining security.
As AI agents learn and adapt, there is a risk of “AI drift,” where they may diverge from their intended behavior. This can lead to unintended actions with security implications. Additionally, adversarial AI attacks, where malicious actors manipulate AI agents, pose a significant threat.
To mitigate these security challenges, businesses must implement robust security measures, including strong authentication, access controls, and monitoring. Establishing clear governance and policies for AI agent use, investing in AI security expertise and tools, and prioritizing ethical considerations are essential steps in defending against emerging threats.
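For illustration, the deny-by-default principle for agent access can be sketched in a few lines of Python. The names here (`AgentIdentity`, `authorize`) are hypothetical, and real deployments would enforce this through an IAM platform rather than an in-process check, but the shape of the control is the same: every agent gets an explicit identity, a minimal scope set, and an audit trail.

```python
import logging
from dataclasses import dataclass, field

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent-audit")

@dataclass
class AgentIdentity:
    """A registered AI agent with an explicit, minimal set of scopes."""
    agent_id: str
    allowed_scopes: set = field(default_factory=set)

def authorize(agent: AgentIdentity, requested_scope: str) -> bool:
    """Deny-by-default check: an agent may act only within scopes it was granted."""
    allowed = requested_scope in agent.allowed_scopes
    # Every decision is recorded so agent activity can be monitored and reviewed.
    audit_log.info("agent=%s scope=%s decision=%s",
                   agent.agent_id, requested_scope, "allow" if allowed else "deny")
    return allowed

# Example: a reporting agent may read CRM data but not export customer records.
reporting_agent = AgentIdentity("agent-report-01", {"crm:read"})
assert authorize(reporting_agent, "crm:read")
assert not authorize(reporting_agent, "crm:export")
```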
With Identity and Access Management (IAM) gaining renewed importance and least privilege strategies becoming critical, what key insights should CISOs and security leaders focus on?
The renewed emphasis on Identity and Access Management (IAM) and least privilege strategies is crucial in today’s threat landscape. CISOs and security leaders should focus on several key insights to strengthen their security posture.
Expanding IAM beyond human identities is a major shift. Non-Human Identities (NHIs), such as machine identities, service accounts, and API keys, are often overlooked, creating significant vulnerabilities. CISOs must prioritize discovering, managing, and securing NHIs with the same rigor as human identities. This includes implementing lifecycle management, least privilege, and continuous monitoring for NHIs. With the rise of agentic AI, each agent must have its own identity and strict access controls, with their actions closely monitored.
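A minimal sketch of what NHI lifecycle management means in practice: keep an inventory of machine identities and sweep it for expired or idle credentials. The names below (`NonHumanIdentity`, `review`) are illustrative; production IGA tooling does this continuously and at scale.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class NonHumanIdentity:
    """A machine identity such as a service account or API key."""
    name: str
    last_used: datetime
    expires: datetime

def review(identities, max_idle_days=30):
    """Flag NHIs that are expired or idle, so they can be rotated or revoked."""
    now = datetime.now(timezone.utc)
    for nhi in identities:
        if nhi.expires <= now:
            print(f"REVOKE  {nhi.name}: credential expired")
        elif now - nhi.last_used > timedelta(days=max_idle_days):
            print(f"REVIEW  {nhi.name}: idle for more than {max_idle_days} days")

inventory = [
    NonHumanIdentity("svc-backup", datetime(2024, 1, 1, tzinfo=timezone.utc),
                     datetime(2024, 6, 1, tzinfo=timezone.utc)),
]
review(inventory)
```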
Emphasizing least privilege at a granular level is essential. Moving beyond Role-Based Access Control (RBAC) to more granular, Attribute-Based Access Control (ABAC) is necessary for enforcing precise least privilege. Continuous access reviews should be conducted to regularly revoke unnecessary access, and automation should be employed where possible to improve efficiency and accuracy. Implementing Just-in-Time (JIT) access grants temporary privileges only when needed, minimizing the window of opportunity for attackers.
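The difference between a plain role check and ABAC, and the mechanics of a JIT grant, can be shown with a small hypothetical sketch: the decision is computed from attributes of the subject, the resource, and the request context, and any privilege that is granted carries its own expiry.

```python
from datetime import datetime, timedelta, timezone

def abac_allows(subject: dict, resource: dict, context: dict) -> bool:
    """Attribute-based check: the decision uses attributes, not just a role name."""
    return (subject["department"] == resource["owning_department"]
            and subject["clearance"] >= resource["sensitivity"]
            and context["network"] == "corporate")

def grant_jit(subject: dict, minutes: int = 15) -> dict:
    """Just-in-Time grant: a temporary privilege that expires on its own."""
    return {"subject": subject["id"],
            "expires": datetime.now(timezone.utc) + timedelta(minutes=minutes)}

analyst = {"id": "u42", "department": "finance", "clearance": 3}
ledger = {"owning_department": "finance", "sensitivity": 2}

if abac_allows(analyst, ledger, {"network": "corporate"}):
    grant = grant_jit(analyst)
    print("temporary access until", grant["expires"].isoformat())
```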
Gaining comprehensive visibility is vital. Investing in Identity Governance and Administration (IGA) solutions provides a centralized view of all identities and their access privileges. This visibility is crucial for enforcing least privilege and detecting anomalous activity. Understanding who has access to sensitive data, not just applications, is vital for data-centric security and preventing data breaches.
Automating IAM processes enhances efficiency by reducing manual errors and freeing up security teams to focus on strategic initiatives. Routine IAM tasks, such as provisioning, de-provisioning, and access reviews, should be automated. Additionally, automating responses to anomalous activity, such as revoking access or triggering alerts, enables faster incident response and reduces the impact of attacks.
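As a sketch of what such automation looks like, consider a simple event handler that de-provisions a leaver and revokes sessions on an anomalous login. The event shapes and field names here are hypothetical; a real deployment would wire this logic to HR and SIEM event feeds.

```python
def handle_event(event: dict, directory: dict) -> None:
    """Automated IAM responses: deprovision leavers, revoke on anomalies."""
    user = directory.get(event["user_id"])
    if user is None:
        return
    if event["type"] == "offboarded":
        user["access"] = set()           # de-provision all entitlements
        user["sessions_revoked"] = True
        print(f"deprovisioned {event['user_id']}")
    elif event["type"] == "anomalous_login":
        user["sessions_revoked"] = True  # contain first, investigate second
        print(f"ALERT: sessions revoked for {event['user_id']}, ticket opened")

directory = {"u42": {"access": {"crm:read"}, "sessions_revoked": False}}
handle_event({"type": "anomalous_login", "user_id": "u42"}, directory)
```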
Integrating IAM with other security controls is essential. IAM is a cornerstone of a zero-trust architecture. Integrating IAM with other security controls, such as endpoint detection and response (EDR) and security information and event management (SIEM), creates a holistic security posture. Incorporating IAM into the DevSecOps pipeline ensures that security is built into applications from the start, including securing API keys and other credentials used in development.
By focusing on these key insights, CISOs and security leaders can strengthen their IAM programs and significantly reduce the risk of identity-related breaches.
Given your work with projects like INPRESEC and Glog.AI, how do you envision artificial intelligence transforming cybersecurity practices in the next few years?
Based on work with projects like INPRESEC, Security Predictions, vSOC and Glog.AI, it is evident that AI is poised to transform cybersecurity practices in the coming years. The focus is on leveraging AI to create more proactive and intelligent cybersecurity solutions.
Predictive threat intelligence is a game-changer. AI’s ability to analyze vast datasets to anticipate cyberattacks includes detecting anomalies in network traffic and endpoints, alerting security operations center (SOC) staff to real security vulnerabilities, and enabling real-time response. AI can predict attack vectors based on threat intelligence, vulnerability analysis, and real-time network monitoring, identifying patterns of malicious behavior before attacks occur. This shift from a reactive to a predictive posture strengthens defenses.
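The detection layer of that predictive posture can be illustrated with a toy anomaly detector. The sketch below uses scikit-learn's IsolationForest on two synthetic flow features; production systems work with far richer features, streaming data, and tuned thresholds.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy network-flow features: [bytes sent, connections per minute].
rng = np.random.default_rng(0)
normal = rng.normal(loc=[500, 10], scale=[50, 2], size=(500, 2))
suspect = np.array([[9000, 120]])  # a flow unlike anything in the baseline

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 means the flow is flagged as anomalous
```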
Automated vulnerability remediation is another trend. Glog.AI’s work on automated vulnerability fixing indicates a move towards AI-powered self-healing systems. AI analyzes code and identifies vulnerabilities with greater accuracy than traditional methods, automatically generating remediation advice and, where possible, applying fixes to vulnerable code. Continuous monitoring for new vulnerabilities, with defenses adapted accordingly, significantly reduces the time and effort required for remediation.
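As a deliberately simplified illustration of automated remediation, the sketch below applies one pattern-based rule: it flags an unsafe yaml.load call and rewrites it to yaml.safe_load. This is a toy, and the rule and names here are hypothetical; Glog.AI's actual analysis goes far beyond single pattern matches.

```python
import re

# One illustrative rule: yaml.load() without a Loader is unsafe; safe_load is the fix.
RULE = (re.compile(r"yaml\.load\(([^,)]+)\)"), r"yaml.safe_load(\1)",
        "yaml.load without an explicit Loader can deserialize arbitrary objects")

def remediate(source: str) -> str:
    pattern, replacement, rationale = RULE
    if pattern.search(source):
        print(f"finding: {rationale}")
        source = pattern.sub(replacement, source)  # auto-generated fix
    return source

vulnerable = "config = yaml.load(f)"
print(remediate(vulnerable))  # -> config = yaml.safe_load(f)
```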
Enhanced Security Operations Centers (SOCs) are becoming a reality. The concept of a “Virtual Security Operations Center (vSOC)” using INPRESEC and Glog solutions highlights AI’s potential to automate and augment SOC operations. AI triages security alerts, prioritizes incidents based on severity, and automates incident response workflows. Providing analysts with real-time insights and recommendations enables SOCs to handle a greater volume of threats with increased efficiency.
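Alert triage can be sketched as a scoring problem: combine severity, asset criticality, and detection confidence into a single rank. The weights below are hand-picked for illustration; an AI-assisted SOC would learn them from historical incident outcomes.

```python
SEVERITY = {"low": 1, "medium": 3, "high": 7, "critical": 10}

def triage_score(alert: dict) -> float:
    """Rank alerts by severity, asset criticality, and model confidence."""
    return SEVERITY[alert["severity"]] * alert["asset_criticality"] * alert["confidence"]

alerts = [
    {"id": "a1", "severity": "high", "asset_criticality": 0.9, "confidence": 0.8},
    {"id": "a2", "severity": "critical", "asset_criticality": 0.3, "confidence": 0.6},
    {"id": "a3", "severity": "low", "asset_criticality": 1.0, "confidence": 0.9},
]

# Highest-priority incidents surface first for the analyst.
for alert in sorted(alerts, key=triage_score, reverse=True):
    print(alert["id"], round(triage_score(alert), 2))
```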
AI-driven behavioral analysis is crucial for detecting anomalies that may indicate malicious activity. AI establishes baseline behavior patterns and identifies deviations, detects insider threats and compromised accounts, and adapts to evolving threat landscapes.
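A minimal example of baseline-and-deviation logic: measure how far today's behavior sits from a user's historical norm, in standard deviations, and alert past a threshold. Real systems model many signals jointly, but the idea is the same.

```python
import statistics

def deviation(history: list[float], observed: float) -> float:
    """How many standard deviations the observation sits from the user's baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0  # guard against a zero-variance baseline
    return abs(observed - mean) / stdev

# Baseline: a user downloads roughly 20-35 MB per day; today they pulled 500 MB.
history = [25.0, 30.0, 22.0, 35.0, 28.0, 31.0, 27.0]
score = deviation(history, 500.0)
if score > 3.0:  # a common, tunable threshold
    print(f"anomaly: {score:.1f} standard deviations above baseline")
```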
Strengthening application security is a focus area. AI’s role in software security is increasingly embedded in DevSecOps pipelines. Automated security testing throughout the software development lifecycle and real-time feedback help developers write more secure code, reducing the risk of software vulnerabilities in deployed applications.
In essence, AI plays a central role in cybersecurity, enabling organizations to stay ahead of evolving threats, automate security operations, and strengthen defenses. By combining AI with human expertise, organizations can create a more resilient and secure digital environment.
What are the potential risks associated with the increasing use of AI in cybersecurity, and how can organizations mitigate them?
The increasing use of AI in cybersecurity offers tremendous potential but also introduces new risks that organizations must address. Adversarial attacks manipulate AI-powered security systems into misclassifying malicious activity as benign. Data poisoning, where attackers corrupt training data, leads to biased or inaccurate results, causing AI systems to make faulty decisions. AI-powered attacks leverage AI to automate and enhance attacks, making them more sophisticated and difficult to detect. The lack of transparency and explainability in AI models, particularly deep learning models, makes it challenging to understand decision-making processes, identify errors, or correct biases. Data privacy concerns arise from AI-powered security systems relying on large datasets containing sensitive personal information. Model theft, where attackers steal or replicate proprietary AI models, allows them to understand and exploit weaknesses. AI bias, resulting from biased training data, can lead to unfair or discriminatory outcomes.
To mitigate these risks, organizations should implement robust data governance policies to ensure the quality and integrity of training data. Regular audits and validation detect and correct errors or biases. Adversarial training exposes AI models to a wide range of malicious inputs, enhancing resilience. Continuously updating and refining AI models keeps them ahead of evolving attack techniques. Explainable AI (XAI) techniques make AI models more transparent and understandable, making it easier to identify and correct flawed decisions. Strong data security measures, including encryption, access controls, and regular security audits, protect sensitive data. Continuous monitoring and validation of deployed AI-powered security systems catch problems before they compound. Human oversight ensures responsible and ethical use of AI-powered security systems, with humans able to override AI decisions when necessary. Following established AI security best practices, including secure development and deployment of AI models, is essential, as is implementing controls to prevent model theft and verify model integrity.
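One of these mitigations, adversarial training, can be made concrete with a toy example: perturb each training point in the direction that most increases the loss (the FGSM idea) and train on both clean and perturbed data. The sketch below uses a simple logistic-regression model in NumPy; applying the same idea to deep models is more involved, but the principle is identical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))              # toy feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy labels
w, b, lr, eps = np.zeros(4), 0.0, 0.1, 0.2

for _ in range(100):
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w          # dLoss/dX for logistic loss
    X_adv = X + eps * np.sign(grad_x)      # FGSM: worst-case small perturbation
    X_all = np.vstack([X, X_adv])          # train on clean + adversarial points
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

print("training accuracy:", np.mean((sigmoid(X @ w + b) > 0.5) == y))
```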
By proactively addressing these risks, organizations can leverage the benefits of AI in cybersecurity while minimizing potential harm.
The second part of the interview is available here.
Note: This interview was created in collaboration between the authors and AI.