A newly disclosed vulnerability in PyTorch, tracked as CVE-2025-32434, has raised concerns across the AI development community, as reported by The Cyber Express. The flaw allows remote attackers to execute arbitrary code by exploiting how AI models are loaded with the torch.load() function, even when the weights_only=True safeguard is in place. It affects all versions of PyTorch up to and including 2.5.1 and has been patched in version 2.6.0.

At the core of the issue is PyTorch's reliance on a long-trusted flag that has proven ineffective at preventing code execution from serialized model files. Security researcher Ji'an Zhou demonstrated how an attacker can bypass the setting and run commands on the target system, directly contradicting earlier guidance in the official documentation, which described the flag as a secure loading method.
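For context, the call pattern at issue is the one the documentation long presented as safe. The following is a minimal sketch of that pattern; the TinyNet class and checkpoint filename are illustrative stand-ins, not details from the report.

```python
import torch
import torch.nn as nn

# A tiny stand-in model; any real architecture is loaded the same way.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

# Save and reload a checkpoint the way the documentation long recommended.
# The weights_only=True flag was believed to block pickle-based code execution,
# but on PyTorch 2.5.1 and earlier it can be bypassed (CVE-2025-32434).
model = TinyNet()
torch.save(model.state_dict(), "checkpoint.pt")

state_dict = torch.load("checkpoint.pt", weights_only=True)
model.load_state_dict(state_dict)
model.eval()
```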
The risk is not limited to a narrow use case. Any environment that loads models with torch.load(), from research labs and cloud-based inference engines to collaborative model-sharing platforms, is potentially exposed. Attackers could distribute tampered models via public repositories; once such a model is loaded, the exploit could fire and hand over control of the underlying machine without requiring user interaction or elevated privileges.

This incident is a reminder of the growing security challenges in machine learning infrastructure. Users are advised to upgrade to PyTorch 2.6.0 immediately, review model sources carefully, and avoid loading untrusted models without rigorous vetting. The vulnerability underscores the importance of applying traditional security principles to AI pipelines and of ensuring that even trusted tools are regularly reviewed and updated.
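One way to operationalize that advice is to refuse to deserialize third-party checkpoints on a vulnerable PyTorch build. The sketch below assumes the packaging library is available for version comparison; the helper name and error message are our own, not part of the advisory.

```python
import torch
from packaging import version  # assumes the 'packaging' package is installed

def load_untrusted_checkpoint(path: str):
    """Load a third-party checkpoint only on a PyTorch build patched for CVE-2025-32434."""
    if version.parse(torch.__version__) < version.parse("2.6.0"):
        raise RuntimeError(
            f"PyTorch {torch.__version__} is affected by CVE-2025-32434; "
            "upgrade to 2.6.0 or later before loading untrusted models."
        )
    # Even on a patched build, weights_only=True remains a sensible default
    # for files that did not come from a vetted source.
    return torch.load(path, weights_only=True)
```

A guard like this does not replace vetting the model's origin, but it prevents an unpatched machine from silently deserializing a file that could carry the exploit.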