Securing AI’s Cloud Advantage: Navigating the Risks Highlighted in Tenable’s 2025 Report

The advent of Artificial Intelligence (AI) is undeniably reshaping the technological landscape, and its natural habitat is the cloud. The scalability, computational power, and vast data storage capabilities of cloud environments provide the perfect bedrock for developing, training, and deploying AI workloads. However, as the “Tenable Cloud Security Risk Report 2025” illustrates, this powerful synergy isn’t without risk. If compromised, AI workloads within the cloud can have a profound and far-reaching impact, not only on an organisation’s cloud environment but on its entire business operation, reputation, and even its future.

The Tenable report highlights several worrying trends. It reveals that a substantial proportion of cloud AI workloads contain unaddressed vulnerabilities: approximately 70% have at least one unremediated vulnerability, and 30% carry the critical curl flaw CVE-2023-38545. It also exposes serious misconfigurations, such as overprivileged service accounts in critical AI services (e.g., Google Vertex AI Notebooks) and default root access in platforms like Amazon SageMaker (a hardening sketch for that default follows the list below). This combination of public exposure, critical vulnerabilities, and excessive privileges gives attackers a direct pathway to sensitive data, intellectual property, and even the core logic of AI models. The potential implications of such a compromise include:

  • Data Poisoning: Malicious actors injecting corrupted data into AI training sets, leading to biased or manipulated model outcomes. For instance, an AI-powered financial prediction model could be skewed, causing erroneous investment decisions, or a medical diagnostic AI could be sabotaged, leading to incorrect diagnoses.
  • Model Theft and IP Leakage: Sophisticated AI models, representing years of research and significant investment, being stolen or reverse-engineered, severely impacting competitive advantage.
  • Supply Chain Vulnerabilities: Compromised third-party AI components introducing backdoors or vulnerabilities into an organisation’s systems.
  • Data Breaches: AI workloads often process vast amounts of sensitive data. A breach here could expose personal customer information, proprietary business data, or critical operational insights.
  • Reputational Damage: A compromised AI system leading to biased, erroneous, or even harmful outputs can severely erode public trust and damage an organisation’s brand.
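
To illustrate how the SageMaker default called out above can be closed off at creation time, here is a minimal sketch using the AWS SDK for Python (boto3). The notebook name, IAM role ARN, subnet, and security group are hypothetical placeholders, and the right settings will differ per environment; treat this as a hardening sketch rather than a complete provisioning script.

    import boto3

    sagemaker = boto3.client("sagemaker")

    # Hypothetical identifiers -- substitute resources from your own environment.
    NOTEBOOK_NAME = "ai-research-notebook"
    ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # a least-privilege execution role
    SUBNET_ID = "subnet-0123456789abcdef0"       # private subnet with no route to the internet
    SECURITY_GROUP_ID = "sg-0123456789abcdef0"   # tightly scoped security group

    sagemaker.create_notebook_instance(
        NotebookInstanceName=NOTEBOOK_NAME,
        InstanceType="ml.t3.medium",
        RoleArn=ROLE_ARN,
        RootAccess="Disabled",             # override the root-access default noted in the report
        DirectInternetAccess="Disabled",   # route traffic through the VPC rather than a public endpoint
        SubnetId=SUBNET_ID,
        SecurityGroupIds=[SECURITY_GROUP_ID],
    )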

Securing the AI Cloud

At Mondas, we understand these intricate challenges. As partners of Tenable, we leverage their exposure management platform to empower organisations to navigate the complexities of cloud AI security.

Our comprehensive range of services, powered by Tenable’s robust tools, is designed to help you proactively identify, prioritise, and remediate risks within your cloud AI environments.

  • Cloud Security Posture Management (CSPM): Continuous monitoring and assessment of your cloud configurations to identify misconfigurations, over-privileged access, and compliance violations, specifically tailored to the nuances of AI workloads.
  • Cloud Infrastructure Entitlement Management (CIEM): Gaining deep visibility into user and service identities within your cloud, ensuring least-privilege access and preventing credential abuse, a critical factor given that AI services often require extensive permissions (a simplified audit sketch follows this list).
  • Cloud Workload Protection (CWP): Securing your AI applications and underlying infrastructure against vulnerabilities, malware, and unauthorised access, from development to runtime.
  • Vulnerability Management: Proactive scanning and prioritisation of vulnerabilities across your cloud-based AI systems, leveraging Tenable’s deep insights into critical exposures.
  • AI Security Posture Management (AI-SPM): A dedicated focus on the unique security risks associated with AI models, training data, and inference pipelines, as highlighted in Tenable’s report.
  • Managed Security Services: For organisations seeking to offload the burden of continuous cloud security, our expert team can provide ongoing monitoring, threat detection, and incident response, ensuring your AI initiatives remain secure.
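
To make the least-privilege theme behind CIEM concrete, the sketch below shows the kind of check such a process automates at scale: it walks customer-managed policies attached to IAM roles in an AWS account and flags any statement that allows every action. This is a simplified boto3 illustration, not the Tenable platform itself, and it ignores inline policies, resource wildcards, and trust relationships that a full entitlement review would also cover.

    import boto3

    iam = boto3.client("iam")

    def wildcard_statements(policy_document):
        """Yield Allow statements whose Action list contains a bare '*'."""
        statements = policy_document.get("Statement", [])
        if isinstance(statements, dict):   # a single statement may be a dict rather than a list
            statements = [statements]
        for statement in statements:
            if statement.get("Effect") != "Allow":
                continue
            actions = statement.get("Action", [])
            if isinstance(actions, str):
                actions = [actions]
            if "*" in actions:
                yield statement

    for page in iam.get_paginator("list_roles").paginate():
        for role in page["Roles"]:
            attached = iam.list_attached_role_policies(RoleName=role["RoleName"])
            for policy in attached["AttachedPolicies"]:
                meta = iam.get_policy(PolicyArn=policy["PolicyArn"])["Policy"]
                if meta["Arn"].startswith("arn:aws:iam::aws:policy/"):
                    continue               # skip AWS-managed policies; focus on customer-managed ones
                version = iam.get_policy_version(
                    PolicyArn=policy["PolicyArn"], VersionId=meta["DefaultVersionId"]
                )
                document = version["PolicyVersion"]["Document"]
                if next(wildcard_statements(document), None):
                    print(f"Over-broad policy '{policy['PolicyName']}' is attached to role '{role['RoleName']}'")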

Mitigating the Risks: A Proactive Approach

By working with Mondas and harnessing Tenable’s exposure management platform, organisations can adopt a proactive and comprehensive approach to mitigating cloud AI security risks:

  1. Gain Full Visibility: The first step is to understand your attack surface. Tenable’s platform provides a unified view across your multi-cloud environments, revealing all cloud assets, including AI workloads, and how they are interconnected.
  2. Contextualise and Prioritise Risks: Not all vulnerabilities are created equal. Tenable’s intelligent prioritisation capabilities, informed by real-world threat intelligence, help you focus on the critical exposures that pose the greatest threat to your AI assets. This includes identifying misconfigurations that could grant unauthorised access to AI models or training data.
  3. Implement Least Privilege and Zero Trust: Ensure that both human and machine identities (service accounts, CI/CD pipelines) accessing AI workloads and data have only the minimum necessary permissions. This significantly reduces the blast radius of a potential breach (a minimal policy sketch follows this list).
  4. Secure the AI Data Pipeline: Protect your training data from poisoning and unauthorised access. Implement strong data governance, encryption, and strict access controls for all data used in AI development and deployment (see the storage-hardening sketch after this list).
  5. Harden AI Development and Deployment Environments: Secure your development pipelines, ensuring that vulnerable components are not introduced and that secure coding practices are followed. Regularly scan and patch the underlying infrastructure that runs AI workloads (a simple version check is sketched after this list).
  6. Continuous Monitoring and Response: Threats evolve rapidly. Continuous monitoring for anomalies, suspicious activities, and changes in configuration is crucial. Automate incident response to quickly contain and remediate threats.
  7. Address Supply Chain Risks: Vet and validate all third-party AI components, libraries, and models to ensure they do not introduce new vulnerabilities (a checksum-verification sketch follows this list).
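
For step 3, the following sketch defines a narrowly scoped IAM policy that lets a training job's execution role read objects only from a single training-data prefix. The bucket, prefix, and policy name are hypothetical; a real workload will need additional, equally scoped permissions (for example, to write model artefacts), but the principle is the same: specific actions on specific resources rather than wildcards.

    import json

    import boto3

    iam = boto3.client("iam")

    # Hypothetical bucket holding the training data.
    TRAINING_DATA_BUCKET_ARN = "arn:aws:s3:::example-ai-training-data"

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadTrainingDataOnly",
                "Effect": "Allow",
                "Action": ["s3:GetObject"],
                "Resource": [f"{TRAINING_DATA_BUCKET_ARN}/datasets/*"],
            },
            {
                "Sid": "ListTrainingPrefixOnly",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [TRAINING_DATA_BUCKET_ARN],
                "Condition": {"StringLike": {"s3:prefix": ["datasets/*"]}},
            },
        ],
    }

    iam.create_policy(
        PolicyName="ai-training-data-read-only",
        PolicyDocument=json.dumps(policy_document),
        Description="Least-privilege read access to the training data prefix",
    )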
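
For step 4, this sketch applies two baseline storage controls to a hypothetical training-data bucket: default encryption with a customer-managed KMS key and a block on all forms of public access. Data governance and fine-grained access control go further than this, but these settings close off the most common exposure paths for stored training data.

    import boto3

    s3 = boto3.client("s3")

    BUCKET = "example-ai-training-data"   # hypothetical bucket name
    KMS_KEY_ARN = "arn:aws:kms:eu-west-2:123456789012:key/11111111-2222-3333-4444-555555555555"

    # Encrypt every new object with a customer-managed KMS key by default.
    s3.put_bucket_encryption(
        Bucket=BUCKET,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": KMS_KEY_ARN,
                    },
                    "BucketKeyEnabled": True,
                }
            ]
        },
    )

    # Block every form of public access to the bucket.
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )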
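
For step 5, patching starts with knowing what is installed. The rough check below flags a curl binary older than 8.4.0, the release that fixed the CVE-2023-38545 flaw highlighted in the report. Distribution-patched builds can backport the fix without changing the version number, so a proper vulnerability scanner remains the authoritative source; this is only an illustration.

    import subprocess

    FIXED_VERSION = (8, 4, 0)   # CVE-2023-38545 was fixed in curl 8.4.0

    result = subprocess.run(["curl", "--version"], capture_output=True, text=True, check=True)
    version_string = result.stdout.split()[1]   # first line looks like "curl 8.3.0 (x86_64-pc-linux-gnu) ..."
    installed = tuple(int(part) for part in version_string.split(".")[:3])   # rough parse; ignores pre-release suffixes

    if installed < FIXED_VERSION:
        print(f"curl {version_string} may be vulnerable to CVE-2023-38545; upgrade to 8.4.0 or later")
    else:
        print(f"curl {version_string} is at or above the fixed release")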
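
For step 7, the sketch below verifies a downloaded third-party model artefact against a checksum published by its provider before it is used. The file path and expected digest are placeholders; the broader point is that provenance checks for models and datasets should be as routine as they are for software packages.

    import hashlib
    from pathlib import Path

    # Hypothetical artefact and the SHA-256 digest published by its provider.
    MODEL_PATH = Path("models/third_party_classifier.onnx")
    EXPECTED_SHA256 = "replace-with-the-provider-published-digest"

    digest = hashlib.sha256()
    with MODEL_PATH.open("rb") as artefact:
        for chunk in iter(lambda: artefact.read(1024 * 1024), b""):   # hash in 1 MiB chunks
            digest.update(chunk)

    if digest.hexdigest() != EXPECTED_SHA256:
        raise RuntimeError(f"Checksum mismatch for {MODEL_PATH}; do not load this artefact")
    print(f"{MODEL_PATH} matches the published checksum")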

AI in the cloud enables unprecedented innovation and business transformation, but this powerful combination demands a renewed focus on security. By partnering with Mondas and leveraging the insights and capabilities of the Tenable platform, organisations can confidently embrace the AI revolution, secure in the knowledge that their cloud environments and critical AI workloads are protected against an ever-evolving threat landscape.