Could you help fix the deserialization vulnerability caused by a risky pre-trained model used in this repo?
Hi @jcheong0428, @ejolly, I'd like to report that a potentially risky pretrained model is being used in this project, which may pose a deserialization threat. Please check the following code example:
• py-feat/feat/MPDetector.py

```python
landmark_model_file = hf_hub_download(
    repo_id="py-feat/mp_facemesh_v2",
    filename="face_landmarks_detector_Nx3x256x256_onnx.pth",
    cache_dir=get_resource_path(),
)
self.landmark_detector = torch.load(
    landmark_model_file, map_location=self.device, weights_only=False
)
self.landmark_detector.eval()
self.landmark_detector.to(self.device)
```
Issue Description
As shown above, in the py-feat/feat/MPDetector.py file, the model "py-feat/mp_facemesh_v2" is first downloaded via the hf_hub_download method. The checkpoint is then loaded with torch.load with the parameter weights_only set to False (a behavior flagged as risky in the official PyTorch documentation), and the resulting module is finally switched to inference mode with eval().
This model has been flagged as risky on the Hugging Face platform. Specifically, its face_landmarks_detector_Nx3x256x256_onnx.pth file is marked as malicious and may trigger a deserialization attack. Because torch.load with weights_only=False unpickles arbitrary Python objects, the risk materializes the moment the file is loaded and may lead to arbitrary code execution.
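As a stopgap that does not require re-uploading the model, the checkpoint could be re-exported as a plain state_dict and loaded with weights_only=True, which restricts unpickling to tensors and other allow-listed types. A minimal sketch (the nn.Linear model and the file name are placeholders, not the real detector):

```python
import torch
import torch.nn as nn

# Stand-in for the real landmark detector architecture.
model = nn.Linear(4, 2)

# Simulate a checkpoint that contains only tensors (a state_dict),
# rather than a fully pickled nn.Module object.
torch.save(model.state_dict(), "landmark_state_dict.pth")

# weights_only=True refuses to unpickle arbitrary Python objects,
# so executable payloads embedded in the file cannot run.
state_dict = torch.load("landmark_state_dict.pth", weights_only=True)
model.load_state_dict(state_dict)
model.eval()
```

Loading a full pickled module with weights_only=True would fail, which is exactly the point: only tensor data gets through.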
Related Risk Report: py-feat/mp_facemesh_v2 risk report
Suggested Repair Methods
- Convert the model to a safer format such as safetensors and re-upload it
- Remove any callables not on PyTorch's safe-globals allowlist from the serialized model file
As one of the most popular machine learning projects (314 stars), every potential risk could be propagated and amplified. Could you please address the above issues?
Thanks for your help~
Best regards, Rockstar