
When I run the example, I get the following error.

Open Maybeetw opened this issue 1 year ago • 5 comments

```
Traceback (most recent call last):
  File "inference.py", line 334, in <module>
    run(meta, args, starting_noise)
  File "inference.py", line 275, in run
    samples_fake = sampler.sample(S=steps, shape=shape, input=input, uc=uc, guidance_scale=config.guidance_scale, mask=inpainting_mask, x0=z0)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/autodl-tmp/RealCompo/ldm/models/diffusion/plms.py", line 128, in sample
    return self.plms_sampling(shape, input, uc, guidance_scale, mask=mask, x0=x0)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/autodl-tmp/RealCompo/ldm/models/diffusion/plms.py", line 166, in plms_sampling
    attn_layout, attn_text = self.get_attention_maps(ts, img, input)
  File "/root/autodl-tmp/RealCompo/ldm/models/diffusion/plms.py", line 78, in get_attention_maps
    e_t_text = self.text_unet(input2["x"], input2["timesteps"], input2["context"]).sample
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/diffusers/models/unet_2d_condition.py", line 970, in forward
    sample = upsample_block(
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/diffusers/models/unet_2d_blocks.py", line 2134, in forward
    hidden_states = attn(
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/diffusers/models/transformer_2d.py", line 292, in forward
    hidden_states = block(
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/diffusers/models/attention.py", line 171, in forward
    attn_output = self.attn2(
  File "/root/miniconda3/envs/RealCompo/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/autodl-tmp/RealCompo/utils/attentionmap.py", line 211, in forward
    attention_probs = controller(attention_probs, is_cross, place_in_unet)
  File "/root/autodl-tmp/RealCompo/utils/attentionmap.py", line 53, in __call__
    self.between_steps()
  File "/root/autodl-tmp/RealCompo/utils/attentionmap.py", line 85, in between_steps
    self.attention_store[key][i] += self.step_store[key][i]
RuntimeError: A view was created in no_grad mode and is being modified inplace with grad mode enabled. Given that this use case is ambiguous and error-prone, it is forbidden. You can clarify your code by moving both the view and the inplace either both inside the no_grad block (if you don't want the inplace to be tracked) or both outside (if you want the inplace to be tracked).
```
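For context, the failure mode PyTorch is complaining about can be reproduced in a few lines (a simplified sketch of the error condition, not RealCompo's actual code):

```python
import torch

# Simplified sketch (an assumption, not RealCompo's actual code) of the
# failure mode named in the RuntimeError: a view of a grad-requiring tensor
# is created while grad tracking is off, and is then modified in place
# after grad tracking is back on.
base = torch.ones(4, requires_grad=True)

with torch.no_grad():
    view = base[:2]      # view created in no_grad mode

try:
    view += 1.0          # in-place update with grad mode enabled
    raised = False
except RuntimeError:
    raised = True        # PyTorch rejects this ambiguous combination

print(raised)
```

In the traceback above, the views land in `step_store` during the no-grad sampling step, and `between_steps` then does the in-place `+=` while grad mode is enabled.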

Maybeetw avatar Mar 10 '24 09:03 Maybeetw

Thank you for your attention to our project. We have re-evaluated our code, and it seems to work well without the problem you mentioned. For this issue, please make sure your environment is configured according to our provided "Installation" instructions.

YangLing0818 avatar Mar 11 '24 01:03 YangLing0818

I am running into the same problem.

AdventureStory avatar Mar 11 '24 09:03 AdventureStory


> I meet the same problem.

Thank you for your question. We have modified the code, and the problem should no longer occur.
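For anyone still hitting this on an older checkout: the error message itself suggests the shape of the fix. A minimal sketch of the two standard options (an assumption about the kind of change, not the repository's actual patch):

```python
import torch

# Sketch of two standard ways to resolve this class of error (an assumption
# about the kind of fix, not the repository's actual patch).
base = torch.ones(4, requires_grad=True)

# Option 1: keep the view creation and the in-place update inside the same
# no_grad block, so autograd never has to track the accumulation.
with torch.no_grad():
    view = base[:2]
    view += 1.0                      # allowed: both steps are untracked

# Option 2: store detached copies instead of views, so later in-place
# updates are safe regardless of grad mode.
stored = base[:2].detach().clone()
stored += 1.0

print(stored.tolist())
```

For an attention store like the one in the traceback, Option 2 would correspond to detaching the attention maps before appending them to `step_store`, which also avoids keeping the sampling graph alive across steps.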

Cominclip avatar Mar 11 '24 14:03 Cominclip

Thank you for your quick reply~

AdventureStory avatar Mar 12 '24 06:03 AdventureStory

Thank you for your quick answer, perfect work!

Maybeetw avatar Mar 12 '24 16:03 Maybeetw