
Error while running SVD series on CPU

Open · DaBaiTuu opened this issue 2 years ago · 2 comments

To run the SVD series in a CUDA-memory-constrained environment, I switched to the CPU by changing the device argument to "cpu", as shown below:

def sample(
    input_path: str = "assets/test_image.png",  # Can either be image file or folder with image files
    num_frames: Optional[int] = None,
    num_steps: Optional[int] = None,
    version: str = "svd",
    fps_id: int = 6,
    motion_bucket_id: int = 127,
    cond_aug: float = 0.02,
    seed: int = 23,
    decoding_t: int = 14,  # Number of frames decoded at a time! This eats most VRAM. Reduce if necessary.
    device: str = "cpu",  # cuda
    output_folder: Optional[str] = None,
):
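Note that the checkpoint being restored is checkpoints/svd-fp16.safetensors, and the traceback below shows the attention inputs arriving as torch.bfloat16; several of the rejected operators complain specifically about half precision. When moving inference to CPU, the model usually also needs to be cast to float32 after loading. A minimal sketch, assuming the loaded model is held in a variable named model (the actual load step in simple_video_sample.py differs):

import torch

# Hedged sketch: after the checkpoint is restored, keep all weights and
# activations in float32 on the CPU. The fp16/bf16 paths are among the
# reasons the xformers operators below refuse to dispatch.
model = model.to(device="cpu", dtype=torch.float32)
model.eval()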

And got this error:

Restored from checkpoints/svd-fp16.safetensors with 0 missing and 0 unexpected keys
Traceback (most recent call last):
  File "/home/dongsongb/PYTHON-PROJ/openai/generative-models/scripts/sampling/simple_video_sample.py", line 284, in <module>
    Fire(sample)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/fire/core.py", line 141, in Fire
    component_trace = _Fire(component, args, parsed_flag_args, context, name)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/fire/core.py", line 475, in _Fire
    component, remaining_args = _CallAndUpdateTrace(
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
    component = fn(*varargs, **kwargs)
  File "/home/dongsongb/PYTHON-PROJ/openai/generative-models/scripts/sampling/simple_video_sample.py", line 155, in sample
    c, uc = model.conditioner.get_unconditional_conditioning(
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/sgm/modules/encoders/modules.py", line 179, in get_unconditional_conditioning
    c = self(batch_c, force_cond_zero_embeddings)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/sgm/modules/encoders/modules.py", line 132, in forward
    emb_out = embedder(batch[embedder.input_key])
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/sgm/modules/encoders/modules.py", line 1012, in forward
    out = self.encoder.encode(vid[n * n_samples : (n + 1) * n_samples])
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/sgm/models/autoencoder.py", line 472, in encode
    z = self.encoder(x)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/sgm/modules/diffusionmodules/model.py", line 594, in forward
    h = self.mid.attn_1(h)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/sgm/modules/diffusionmodules/model.py", line 263, in forward
    h = self.attention(h)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/sgm/modules/diffusionmodules/model.py", line 249, in attention
    out = xformers.ops.memory_efficient_attention(
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 223, in memory_efficient_attention
    return _memory_efficient_attention(
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 321, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/xformers/ops/fmha/__init__.py", line 337, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/xformers/ops/fmha/dispatch.py", line 120, in _dispatch_fw
    return _run_priority_list(
  File "/home/dongsongb/anaconda3/envs/tinygptv/lib/python3.9/site-packages/xformers/ops/fmha/dispatch.py", line 63, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for memory_efficient_attention_forward with inputs:
     query       : shape=(1, 9216, 1, 512) (torch.bfloat16)
     key         : shape=(1, 9216, 1, 512) (torch.bfloat16)
     value       : shape=(1, 9216, 1, 512) (torch.bfloat16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
decoderF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 128
    device=cpu (supported: {'cuda'})
    attn_bias type is <class 'NoneType'>
    bf16 is only supported on A100+ GPUs
flshattF@<version> is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 256
    device=cpu (supported: {'cuda'})
    bf16 is only supported on A100+ GPUs
tritonflashattF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 128
    device=cpu (supported: {'cuda'})
    bf16 is only supported on A100+ GPUs
    operator wasn't built - see python -m xformers.info for more info
    triton is not available
cutlassF is not supported because:
    device=cpu (supported: {'cuda'})
    bf16 is only supported on A100+ GPUs
smallkF is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    device=cpu (supported: {'cuda'})
    dtype=torch.bfloat16 (supported: {torch.float32})
    bf16 is only supported on A100+ GPUs
    unsupported embed per head: 512

DaBaiTuu · Jan 16 '24

Same problem.

Hurray0 · Mar 05 '24

Same problem here. @DaBaiTuu, did you happen to find a workaround?

LinearFalcon · Jul 05 '24