Over 30 critical security flaws discovered in open source AI and ML Models

Cybersecurity researchers have identified more than 30 security vulnerabilities across various open-source artificial intelligence (AI) and machine learning (ML) models. These vulnerabilities could potentially lead to remote code execution and the theft of sensitive information.

Details about these vulnerabilities were reported through the Huntr bug bounty platform by Protect AI, affecting several tools including ChuanhuChatGPT, Lunary, and LocalAI. Among these, the two most severe vulnerabilities are associated with Lunary, a production toolkit for large language models (LLM):

  1. CVE-2024-7474 (CVSS Score: 9.1): An IDOR vulnerability that allows authenticated users to access or delete another user’s data without the necessary permissions.
  2. CVE-2024-7475 (CVSS Score: 9.1): A vulnerability linked to improper access control, enabling an attacker to alter SAML configurations and log in as another user, thereby gaining access to sensitive information.

Additionally, Lunary has another IDOR vulnerability (CVE-2024-7473, CVSS Score: 7.5) which permits an attacker to modify another user’s prompt by altering a parameter in the request.

The third critical vulnerability pertains to a flaw in the file upload handling feature in ChuanhuChatGPT (CVE-2024-5982, CVSS Score: 9.1). This flaw allows an attacker to execute arbitrary code, create new directories on the system, and access sensitive data.

Two security vulnerabilities were also discovered in LocalAI, an open-source project that enables users to run large language models (LLM) on their own servers:

  1. CVE-2024-6983 (CVSS Score: 8.8): This vulnerability allows an attacker to upload a malicious configuration file, leading to the execution of malware on the system.
  2. CVE-2024-7010 (CVSS Score: 7.5): This vulnerability involves guessing valid API keys by analyzing response times from the server. The attacker sends various requests and records the response times to infer the correct API key for each character.

Furthermore, a remote code execution vulnerability (CVE-2024-8396, CVSS Score: 7.8) was discovered in the Deep Java Library (DJL), allowing attackers to execute code remotely by improperly overwriting files.

Users are advised to update their software to the latest versions to protect their AI/ML systems and minimize the risk of attacks. This information comes as NVIDIA released a patch for another vulnerability (CVE-2024-0129, CVSS Score: 6.3) in conjunction with the introduction of NeMo Retriever, an AI generative service for enterprises.

In addition, Protect AI has launched Vulnhuntr, an open-source code analysis tool designed to detect zero-day vulnerabilities in Python. Vulnhuntr operates by breaking down code into smaller segments for analysis without overwhelming the ML system.

Alongside these security weaknesses, a new cracking technique disclosed by 0Day (0Din) indicates that malicious prompts encoded in hexadecimal format and emojis can bypass OpenAI ChatGPT’s protections, creating security vulnerabilities that allow attackers to perform actions undetected by the model.

VSEC experts assert that sustainable development in cybersecurity can only be achieved through the intelligent integration of efficient workflows, human expertise, and a comprehensive security strategy. Only then can we optimize the benefits of artificial intelligence while safeguarding information security in the current threat-laden landscape.

Source: The Hacker News