Skip to content

"AI security experts demand intensified assessment of cutting-edge models and autonomous systems:" - a clear statement highlighting the call for a concentrated review of advanced AI models and agentic systems.

Assessing the potential threats posed by AI networks that constantly adapt and change falls within the jurisdiction of cybersecurity experts, a laborious and methodical process.

Assessment of Security Risks Associated with Dynamic, Evolving AI Networks Proves a Challenging...
Assessment of Security Risks Associated with Dynamic, Evolving AI Networks Proves a Challenging Task for Cybersecurity Experts

"AI security experts demand intensified assessment of cutting-edge models and autonomous systems:" - a clear statement highlighting the call for a concentrated review of advanced AI models and agentic systems.

AI Security Risks: The Uphill Battle for Security Teams

The security and safety risks associated with adopting AI models are a significant concern, as highlighted by experts in the field at the RSAC Conference 2025. Representatives from Google DeepMind, Nvidia, and the UK AI Security Institute raised the challenges involved with evaluating these risks and keeping up with the rapidly evolving nature of AI agents and complex AI systems.

Jade Leung, the CTO at the UK AI Security Institute, emphasized that there are numerous open questions regarding the potential risks of agentic AI systems, with current safety and security assessments struggling to keep pace with the rapid development of AI systems. Despite the efforts of AI companies to adopt dangerous capability evaluations, Leung noted that this is a difficult process and the extent to which it's an "evolving science" is underappreciated.

Daniel Rohrer, VP of Software Product Security, Architecture & Research at Nvidia, argued that as AI systems become more complex, organizations will need to shift to evaluating entire AI systems, rather than individual models. He highlighted that AI agents like agentic AI and mixture of experts models are harder to assess from a security perspective, necessitating continuous monitoring to ensure organizations can still predict the behavior of the systems they've deployed.

John 'Four' Flynn, VP of Security and Privacy at Google DeepMind, agreed with the idea that security teams must repeatedly revisit the behavior of models and systems. He pointed out that AI developers can't entirely predict what a model will be like when they first start pre-training it, and even the step of testing the model's resistance to attacks such as prompt injection isn't good enough on its own.

The panel agreed that coming together as an international community to share intelligence on these risks and form better methods for benchmarking complex AI systems will be key. A recurring theme of the conference has been the evolving methods of attackers, particularly as they use AI to launch attacks more efficiently. The panel discussed the likelihood of threat actors using AI to create polymorphic malware and other sophisticated code, noting that publicly available models are becoming incredibly sophisticated at code generation.

As the AI landscape continues to evolve, it's crucial for organizations to adapt and stay ahead of potential security risks. A holistic approach to managing AI systems, focusing on governance, collaboration, and continuous risk assessment, will be essential in mitigating these risks.

  1. The challenges in evaluating the risks associated with agentic AI systems and mixture of experts models are becoming more complex as AI evolves, requiring continuous monitoring and a shift from evaluating individual models to entire AI systems.
  2. The potential risks of AI, including those related to medical-conditions, neurological-disorders, climate-change, and data-and-cloud-computing, are numerous and require the efforts of an international community to address effectively due to the evolving nature of AI and the methods of attackers.
  3. The EUROPOL's Director of Cybercrime, who attended the RSAC Conference 2025, raised concerns about the threat actors using AI to create polymorphic malware and other sophisticated code, underscoring the need for a holistic approach to managing AI systems.
  4. Security teams face an uphill battle in ensuring the safety and security of AI systems, particularly in preventing those systems from being used for malicious purposes, such as launching attacks against critical infrastructure or manipulating environmental-science data.
  5. In the context of AI Security Risks, artificial-intelligence and technology have become essential tools for detecting and mitigating threats, but their effectiveness is limited by the lack of reliable safety and security assessments and the rapid evolution of sophisticated AI agents and complex AI systems.

Read also:

    Latest