
Obstacles to Artificial Intelligence Implementation from the CISO's Perspective, and Strategies to Overcome Them

The majority of CISOs believe that the advantages of AI outweigh the risks. So what is preventing them from fully embracing AI innovation? Learn how to overcome the most significant obstacles to AI progress.


In a new report titled "CISO perspectives: separating the reality of AI from the hype", Tines offers insights into how Chief Information Security Officers (CISOs) are tackling their biggest AI challenges. The report provides valuable guidance for security leaders looking to implement AI solutions effectively.

One key strategy is the use of AI to automate risk assessments and security operations. CISOs are moving towards continuous, AI-powered risk assessments that offer real-time monitoring of security, privacy, and AI risks. This automation reduces human error, speeds threat response, and helps close vulnerabilities early, easing the operational burden and supporting compliance with requirements such as GDPR and SOC 2.
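To make the idea more concrete, the following is a minimal sketch of what a continuous, AI-assisted risk assessment loop could look like in Python. It is illustrative only: the finding feed, scoring step, and ticketing hook are hypothetical placeholders, not components described in the Tines report.

```python
"""Minimal sketch of a continuous, AI-assisted risk assessment loop.

Hypothetical example: fetch_new_findings(), score_risk(), and open_ticket()
are placeholders for whatever telemetry feed, scoring model, and ticketing
integration an organization actually uses; none of them come from the report.
"""
import time
from dataclasses import dataclass


@dataclass
class Finding:
    asset: str
    category: str      # e.g. "security", "privacy", or "ai"
    description: str


def fetch_new_findings() -> list[Finding]:
    """Placeholder for a scanner or telemetry feed."""
    return []


def score_risk(finding: Finding) -> float:
    """Placeholder for an ML/LLM scoring step; returns a value in [0, 1]."""
    return 0.0


def open_ticket(finding: Finding, score: float) -> None:
    """Placeholder for a SOAR or ticketing integration."""
    print(f"[{score:.2f}] {finding.asset}: {finding.description}")


RISK_THRESHOLD = 0.7  # escalate anything scored above this for human review


def run_continuous_assessment(poll_seconds: int = 300, max_cycles: int = 1) -> None:
    """Poll for findings, score them, and escalate high-risk items."""
    for _ in range(max_cycles):  # bounded here; in practice this runs indefinitely
        for finding in fetch_new_findings():
            score = score_risk(finding)
            if score >= RISK_THRESHOLD:
                open_ticket(finding, score)
        time.sleep(poll_seconds)


if __name__ == "__main__":
    run_continuous_assessment(poll_seconds=0, max_cycles=1)
```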

Another strategy is the deployment of AI-powered security agents and in-house AI development. Nearly 70% of companies already use or test AI agents tailored for security tasks, such as penetration testing and third-party risk assessments. AI extends expert-level capabilities across the team, helping to offset skills gaps and staff shortages and potentially reducing the headcount needed in security operations centers.
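The sketch below shows one way such tasks might be handed to an agent while keeping analysts in the loop. It is a hypothetical illustration: run_agent() stands in for whichever agent framework or model API a team actually uses, and the task types simply mirror the report's examples.

```python
"""Minimal sketch of handing security tasks to an AI agent with human review.

Hypothetical example: run_agent() is a stand-in for a real agent framework or
model API; the task kinds mirror the report's examples (penetration testing
and third-party risk assessments).
"""
from dataclasses import dataclass, field


@dataclass
class SecurityTask:
    kind: str                      # e.g. "pentest-triage" or "third-party-risk"
    target: str                    # system or vendor under review
    context: dict = field(default_factory=dict)


def run_agent(task: SecurityTask) -> dict:
    """Placeholder for a call into an agent framework or model API."""
    return {"task": task.kind, "target": task.target, "findings": []}


def dispatch(tasks: list[SecurityTask]) -> list[dict]:
    """Fan tasks out to the agent, flagging every result for analyst review."""
    results = []
    for task in tasks:
        result = run_agent(task)
        result["needs_human_review"] = True  # analysts validate before any action
        results.append(result)
    return results


if __name__ == "__main__":
    queue = [
        SecurityTask(kind="third-party-risk", target="vendor-x"),
        SecurityTask(kind="pentest-triage", target="internal-api"),
    ]
    for result in dispatch(queue):
        print(result)
```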

Outsourcing and expanding virtual CISO (vCISO) services is also a popular approach, especially among Small and Medium-sized Businesses (SMBs). vCISO services provide strategic cybersecurity leadership, including risk assessments and cyber resilience planning, thereby supplementing staff capacity and aligning security goals across organizations.

Fostering stronger partnerships between human expertise and AI is another crucial strategy. Success in AI deployment requires combining skilled cybersecurity professionals with AI capabilities, which means investing in training and cultural change so that staff can work effectively alongside AI tools rather than being replaced by them.

Ensuring transparent AI governance and regulatory alignment is equally important. CISOs emphasize the need for transparency in AI algorithms and for integration groundwork that avoids misalignment and inflexible IT systems. This includes instituting allow-listing or controlled governance models to manage shadow AI use and meet compliance requirements, helping to bridge differing views on priorities and risks among stakeholders.
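An allow-list model can be boiled down to a deny-by-default check, as in the minimal sketch below. The tool names, data classifications, and request shape are illustrative assumptions, not details drawn from the Tines report.

```python
"""Minimal sketch of an allow-list check for governing AI tool use.

Hypothetical example: the tool names, data classifications, and request
structure are illustrative only.
"""
from dataclasses import dataclass

# Approved AI services and the data classifications each may receive.
ALLOWED_AI_TOOLS = {
    "approved-llm-gateway": {"public", "internal"},
    "code-assistant": {"public"},
}


@dataclass
class AIRequest:
    tool: str
    data_classification: str       # e.g. "public", "internal", "confidential"
    user: str


def is_request_allowed(request: AIRequest) -> bool:
    """Deny by default: unknown tools and over-classified data are blocked."""
    permitted = ALLOWED_AI_TOOLS.get(request.tool)
    return permitted is not None and request.data_classification in permitted


if __name__ == "__main__":
    # A confidential document sent to an unapproved "shadow" tool is rejected.
    shadow = AIRequest(tool="shadow-chatbot", data_classification="confidential", user="alice")
    sanctioned = AIRequest(tool="approved-llm-gateway", data_classification="internal", user="alice")
    print(is_request_allowed(shadow))      # False
    print(is_request_allowed(sanctioned))  # True
```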

Upgrading legacy systems to support AI integration is another key strategy. To overcome technology inflexibility, organizations are investing in flexible infrastructure that can incorporate AI-driven tools smoothly, enabling dynamic, adaptive cybersecurity approaches rather than rigid, periodic controls.

Other strategies include forming a cross-functional group to align on AI priorities and risks across the organization, upskilling and reskilling team members to become AI subject matter experts, and looking for AI solutions with strong security guarantees to prevent unauthorized access and protect data privacy.

These strategies reflect a shift from reactive, fragmented cybersecurity towards a proactive, strategically integrated model. In this model, AI not only automates tasks in areas with acute skill shortages but also drives continuous assurance, aligns organizational risk perspectives, and adapts flexibly to evolving threats and regulations.

Despite these strategies, 98% of executives at large tech companies have paused internal genAI initiatives due to security risks. A significant number of CISOs report AI fatigue, underscoring the complexity of tapping into AI's full potential. Addressing these challenges is crucial if organizations are to reap the substantial benefits of AI while mitigating its risks.

[1] Tines, "CISO perspectives: separating the reality of AI from the hype"
[2] Tines, "AI buyer's guide for security leaders"
[3] Tines, "The state of AI in security operations"
[4] Cybersecurity Ventures, "The AI in Cybersecurity Market Report"

  1. The report by Tines, titled "CISO perspectives: separating the reality of AI from the hype," focuses on strategies used by Chief Information Security Officers (CISOs) to tackle AI challenges, including automating risk assessments and security operations, using AI-powered security agents, and outsourcing virtual CISO (vCISO) services.
  2. In the article "AI buyer's guide for security leaders," Tines provides guidance on how AI can help mitigate insufficient human skills and staff shortages in security operation centers, and offers insights into AI-powered services such as penetration testing and third-party risk assessments.
  3. According to "The state of AI in security operations" by Tines, continuous, AI-powered risk assessments offer real-time monitoring of security, privacy, and AI risks, reducing human error, speeding threat response, and easing the operational burden while supporting compliance with requirements such as GDPR and SOC 2.
  4. Cybersecurity Ventures' "The AI in Cybersecurity Market Report" highlights the benefits of a proactive, strategically integrated model of cybersecurity, where AI drives continuous assurance, aligns organizational risk perspectives, and flexibly adapts to evolving threats and regulations, while also addressing the challenges and security risk factors associated with internal genAI initiatives.
