SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

Cybersecurity researchers have identified several security flaws in SAP AI Core, a cloud-based platform for creating and deploying predictive artificial intelligence (AI) workflows. These vulnerabilities could be exploited to obtain access tokens and customer data.

The five vulnerabilities, collectively dubbed SAPwned by cloud security firm Wiz, were reported to SAP on January 25, 2024, and addressed by the company as of May 15, 2024.

These flaws allow unauthorized access to private artifacts and credentials for cloud environments such as Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud. Attackers could also modify Docker images on SAP’s internal container registry and on the Google Container Registry, leading to potential supply chain attacks on SAP AI Core services.
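
The practical exposure from tampered container images is that workloads pull them by mutable tag, so any overwrite in the registry propagates silently. As a hedged illustration (not SAP's tooling; the script only walks the standard Kubernetes pod-template path), a short check can flag manifests that reference images by tag instead of an immutable `@sha256` digest:

```python
import sys
import yaml  # pip install pyyaml


def find_unpinned_images(manifest_path: str) -> list[str]:
    """Return container image references that are not pinned to a sha256 digest."""
    unpinned = []
    with open(manifest_path) as f:
        for doc in yaml.safe_load_all(f):
            if not doc:
                continue
            # Only the usual Deployment/Job pod-template location is inspected in this sketch.
            containers = (
                doc.get("spec", {})
                .get("template", {})
                .get("spec", {})
                .get("containers", [])
            )
            for container in containers:
                image = container.get("image", "")
                if "@sha256:" not in image:
                    unpinned.append(image)
    return unpinned


if __name__ == "__main__":
    for image in find_unpinned_images(sys.argv[1]):
        print(f"not digest-pinned: {image}")
```

Digest pinning does not stop a registry compromise by itself, but it turns a silent image swap into a visible manifest change.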

Furthermore, attackers could gain cluster administrator privileges on SAP AI Core’s Kubernetes cluster by exploiting an exposed Helm package manager server. This access could be used to steal sensitive data, such as models, datasets, and code, or to tamper with AI data and manipulate model inference.
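
Helm v2's server-side component, Tiller, listens on gRPC port 44134 and grants broad release-management rights to anyone who can reach it, which is what makes an exposed instance so valuable to an attacker. A minimal reachability probe, purely illustrative (the target hostname below is a placeholder, and a TCP connection alone does not prove the endpoint is unauthenticated):

```python
import socket

TILLER_PORT = 44134  # default gRPC port for Helm v2's Tiller service


def tiller_port_reachable(host: str, timeout: float = 2.0) -> bool:
    """Return True if a plain TCP connection to the Tiller port succeeds."""
    try:
        with socket.create_connection((host, TILLER_PORT), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Placeholder service address; substitute whatever is visible from your workload.
    host = "tiller-deploy.kube-system.svc.cluster.local"
    print(f"{host}:{TILLER_PORT} reachable -> {tiller_port_reachable(host)}")
```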

Wiz highlighted that the security flaws arise from the platform’s inadequate isolation and sandboxing mechanisms, which make it feasible to run malicious AI models and training procedures. This issue is common among newer AI service providers like Hugging Face and Replicate, which often rely on containerization instead of more robust isolation techniques used by veteran cloud providers.
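
When untrusted training code does run in a shared cluster, pod-level hardening at least narrows what a breakout can reach. The snippet below is a generic sketch using the Kubernetes Python client, not SAP's actual configuration; the job and image names are hypothetical:

```python
from kubernetes import client  # pip install kubernetes


def hardened_training_pod(name: str, image: str) -> client.V1Pod:
    """Build a pod spec for an untrusted training job with a restrictive security context."""
    security_context = client.V1SecurityContext(
        run_as_non_root=True,                 # refuse to run as root
        allow_privilege_escalation=False,     # block setuid-style escalation
        read_only_root_filesystem=True,       # no writes outside mounted volumes
        capabilities=client.V1Capabilities(drop=["ALL"]),
        seccomp_profile=client.V1SeccompProfile(type="RuntimeDefault"),
    )
    container = client.V1Container(name="train", image=image, security_context=security_context)
    spec = client.V1PodSpec(
        containers=[container],
        automount_service_account_token=False,  # keep cluster credentials out of the workload
        restart_policy="Never",
    )
    return client.V1Pod(metadata=client.V1ObjectMeta(name=name), spec=spec)


# Example usage with hypothetical names:
# pod = hardened_training_pod("customer-train-job", "registry.example.com/train@sha256:...")
```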

The vulnerabilities also underscore the importance of tenant isolation and proper security practices for AI service providers. Organizations should ensure they run trusted AI models from reliable sources and verify the tenant-isolation architecture of their AI service providers.
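
For the "trusted models" part of that advice, the most basic control is verifying a downloaded artifact against a digest published by its author before loading it. A minimal sketch (the file name and expected digest are placeholders):

```python
import hashlib


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file without loading it fully into memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_artifact(path: str, expected_sha256: str) -> None:
    """Refuse to use a model artifact whose digest does not match the published value."""
    actual = sha256_of(path)
    if actual != expected_sha256.lower():
        raise ValueError(f"checksum mismatch for {path}: got {actual}")


# Example usage with placeholder values:
# verify_artifact("model.safetensors", "<digest published by the model's author>")
```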

These findings coincide with a report from Netskope that highlights the increased use of generative AI in enterprises, prompting the implementation of controls to mitigate risks associated with sharing sensitive data with AI applications.

Additionally, the emergence of a new cybercriminal group, NullBulge, has targeted AI- and gaming-focused entities since April 2024, aiming to steal sensitive data and sell compromised OpenAI API keys in underground forums.

The ongoing efforts to identify and mitigate these vulnerabilities are crucial for enhancing the security of AI platforms and protecting customer data.
