SadTalker
Add support for macOS GPU (MPS) inference
Use `torch.backends.mps.is_available()` to check whether a GPU is available to PyTorch on macOS.
Use `.to(xx.device)` so that tensors created inside an op end up on the same device as the rest of the computation.
Use `os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1'` so that operators not yet implemented for the MPS device (e.g. `aten::grid_sampler_3d`) fall back to the CPU instead of raising an error.
I also optimized the imports in `inference.py`. A short sketch of how these pieces fit together is below.
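A minimal sketch of the approach, assuming PyTorch 1.12+ on Apple Silicon; the `get_device` helper and the tiny model are illustrative only, not taken from the PR diff:

```python
import os

# Let ops without an MPS implementation (e.g. aten::grid_sampler_3d) fall back
# to the CPU. This must be set before torch is imported to take effect.
os.environ['PYTORCH_ENABLE_MPS_FALLBACK'] = '1'

import torch

def get_device() -> str:
    """Illustrative helper: prefer CUDA, then MPS, then CPU."""
    if torch.cuda.is_available():
        return "cuda"
    if torch.backends.mps.is_available():
        return "mps"
    return "cpu"

device = get_device()
model = torch.nn.Linear(4, 4).to(device)   # move the model once
x = torch.randn(1, 4).to(device)           # inputs follow the model

# Inside ops, create auxiliary tensors on the same device as an existing
# tensor so CPU and MPS tensors never get mixed in one computation.
mask = torch.ones_like(x)                    # inherits x.device
bias = torch.zeros(x.shape[-1]).to(x.device) # or move explicitly
y = model(x) * mask + bias
```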
How fast is it?
I tested it on my M1 and it ran about 2x faster.
Thanks! It would be great if you could give it a review.