Security researchers have identified a critical vulnerability in the Keras deep learning framework that allows attackers to inject arbitrary code into ML applications through third-party models. The flaw affects models built with Keras versions earlier than 2.13: an attacker who convinces a victim to load a malicious model file can have arbitrary Python code executed on the victim's system, posing a serious threat to the security of ML pipelines.
The vulnerability stems from the Lambda layer feature in Keras, which lets developers embed arbitrary Python code in models as anonymous functions. While newer versions of Keras guard against unsafe deserialization of Lambda layers, older versions lack this protection, leaving systems exposed to code-execution attacks. Attackers exploit the weakness by distributing trojanized models that appear legitimate but contain malicious code, bypassing security controls and compromising ML applications.
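The danger can be illustrated with the standard library alone: legacy Keras serializes a Lambda layer's function as marshalled Python bytecode, and deserializing it amounts to reconstructing and executing an arbitrary code object. The sketch below mirrors that mechanism in miniature (it is not the exact Keras wire format, and `layer_fn` is a stand-in for a Lambda layer's body):

```python
import marshal
import types

# A harmless function standing in for a Lambda layer's body.
def layer_fn(x):
    return x * 2

# "Serialize": dump the raw bytecode, roughly as legacy Keras
# did for Lambda layers inside a saved model.
payload = marshal.dumps(layer_fn.__code__)

# "Deserialize": rebuild a callable from the bytecode and run it.
# Nothing at this step inspects what the bytecode actually does --
# a trojanized model could just as easily delete files or open a
# network connection the moment the model is loaded.
restored = types.FunctionType(marshal.loads(payload), {})
print(restored(21))  # prints 42
```

Because the loader blindly executes whatever code object it reconstructs, the only real defenses are refusing to deserialize such layers (safe mode) or never loading untrusted model files in the first place.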
Security researchers emphasize that developers should upgrade to Keras 2.13 or later to mitigate the risk of code injection attacks. In addition, safe loading practices should be followed: the safe_mode parameter must not be set to False when loading models. Users are advised to exercise caution with third-party models, verifying their behavior before deployment and sourcing models from trusted providers to minimize security risks.
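In practice, the safe loading advice comes down to one call. A minimal sketch of a defensive wrapper is shown below; the wrapper name is illustrative, and it assumes Keras 2.13 or later, where `keras.models.load_model` accepts a `safe_mode` argument that defaults to True and refuses to deserialize Lambda layers carrying arbitrary bytecode:

```python
from typing import Any

def load_model_safely(path: str) -> Any:
    """Load a Keras model with unsafe deserialization disabled.

    Illustrative wrapper: safe_mode=True (the default in
    Keras >= 2.13) raises an error rather than silently executing
    Python code embedded in a Lambda layer. Passing safe_mode=False
    re-enables the dangerous legacy behavior and should be avoided
    for any model from an untrusted source.
    """
    import keras  # imported lazily; requires Keras 2.13+

    return keras.models.load_model(path, safe_mode=True)
```

The key point is to never pass safe_mode=False for third-party models; leaving the parameter at its default already provides the protection.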
Furthermore, model aggregators are encouraged to distribute models in safe formats and implement scanning mechanisms to identify unsafe models. Model creators should prioritize safe-to-deserialize features and adhere to secure serialization standards to prevent the inadvertent introduction of vulnerabilities. Overall, AI/ML framework developers are urged to avoid using insecure serialization facilities and adopt measures to restrict code execution, promoting a more secure ecosystem for machine learning applications.
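A scanning mechanism of the kind recommended above can be sketched against the modern .keras format, which is a zip archive whose config.json describes every layer; flagging entries whose class_name is "Lambda" catches embedded code before a model is ever loaded. The helper below is illustrative (the function name and traversal are assumptions, and legacy HDF5 models would need a separate check):

```python
import json
import zipfile

def find_lambda_layers(model_path) -> list:
    """Return the names of Lambda layers found in a .keras archive.

    A .keras file is a zip archive containing a config.json that
    lists the model's layers. Any "Lambda" entry signals embedded
    Python code and should be treated as unsafe until reviewed.
    (zipfile also accepts an open binary file object here.)
    """
    with zipfile.ZipFile(model_path) as archive:
        config = json.loads(archive.read("config.json"))

    hits = []

    def walk(node):
        # Recursively search the (arbitrarily nested) layer config.
        if isinstance(node, dict):
            if node.get("class_name") == "Lambda":
                hits.append(node.get("config", {}).get("name", "<unnamed>"))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return hits
```

An aggregator could run such a check at upload time and quarantine or label any model that reports Lambda layers, rather than relying on every downstream user to load models carefully.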