
[bug]: context.tensors.load returns an object that is modifiable in-place

Open · dunkeroni opened this issue 1 year ago · 1 comment

Is there an existing issue for this problem?

  • [X] I have searched the existing issues

Operating system

Linux

GPU vendor

Nvidia (CUDA)

GPU model

No response

GPU VRAM

No response

Version number

4.2.9

Browser

Chrome

Python dependencies

No response

What happened

Modifying a tensor loaded from the context and then saving the result updates the original stored tensor as well. Found while experimenting with noise: a Noise Adjuster invocation adds 2 to every value in the noise latent, and Analyze creates histogram plots.

[image: histograms of the two Analyze results]

Both Analyze results show the shift of 2 in this case, but one of them should show the original noise instead.

This is an issue of referencing the same object, not of reusing the same name:

[2024-09-07 15:22:09,044]::[InvokeAI]::INFO --> Loaded noise tensor with name Tensor_a8e64f3f-2288-4839-8e63-ca982eb79db6
[2024-09-07 15:22:09,067]::[InvokeAI]::INFO --> Saved noise tensor with name Tensor_a82a4b1b-f9fc-4957-b078-a870a4971164

What you expected to happen

The context should return a .clone() of the stored latent so that nodes never get direct access to the saved (cached) tensor.

How to reproduce the problem

Example for issue:

noise = context.tensors.load(self.noise.latents_name)  # returns the cached tensor itself, not a copy
for i in range(noise.shape[1]):
    subnoise = noise[:, i, :, :]
    subnoise = subnoise + self.adjust
    noise[:, i, :, :] = subnoise  # in-place write also mutates the cached tensor
name = context.tensors.save(tensor=noise)
return NoiseOutput.build(latents_name=name, latents=noise, seed=self.noise.seed)
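The aliasing can be reproduced without InvokeAI at all. The sketch below is a minimal stand-in: the `save`/`load` functions and the dict-backed cache are hypothetical, and nested lists stand in for torch tensors (in-place element writes alias the same way). Because `load` hands back the cached object itself, the in-place edits leak into the cache:

```python
# Hypothetical dict-backed cache mimicking the forward cache's behavior:
# load() returns the stored object, not a copy.
cache = {}

def save(name, tensor):
    cache[name] = tensor

def load(name):
    return cache[name]  # same object that lives in the cache

save("noise", [[0.0, 0.0], [0.0, 0.0]])
noise = load("noise")
for row in noise:
    for i in range(len(row)):
        row[i] += 2.0  # in-place edit, like noise[:, i, :, :] = subnoise

# The "original" cached tensor has shifted too:
print(cache["noise"])  # [[2.0, 2.0], [2.0, 2.0]]
```

With torch tensors the same thing happens whenever the loaded tensor shares storage with the cached one; only a `.clone()` (or deep copy) at the load boundary breaks the link.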

Additional context

No response

Discord username

dunkeroni

— dunkeroni, Sep 07 '24

I think this would be an issue for any entity we keep in memory across invocations - so, anything handled by these things:

  • https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/app/services/object_serializer/object_serializer_forward_cache.py
  • https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/app/services/image_files/image_files_disk.py#L148-L158
  • https://github.com/invoke-ai/InvokeAI/blob/main/invokeai/app/services/invocation_cache/invocation_cache_memory.py

Possible solutions:

  • Deep copy everything returned by the context API (as suggested)
  • Deep copy on read for each of the linked cache implementations
  • A transparent Readonly[T] type added to the context API method return annotations, signaling to nodes that the values shouldn't be mutated, with no runtime impact (I suspect this isn't possible in Python without some juggling; the type system has no real concept of readonly/frozen objects)
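The second option above can be sketched with a stdlib `copy.deepcopy` at both cache boundaries. This is only an illustration of the idea, not InvokeAI's actual `ObjectSerializerForwardCache` interface, and the `ForwardCache` class here is hypothetical (torch tensors would use `.clone()` instead of `deepcopy` for efficiency):

```python
import copy

class ForwardCache:
    """Hypothetical cache that deep-copies on both save and load,
    so no caller ever holds a reference to the cached object."""

    def __init__(self):
        self._cache = {}

    def save(self, name, obj):
        # Copy on write: later mutations by the saver don't reach the cache.
        self._cache[name] = copy.deepcopy(obj)

    def load(self, name):
        # Copy on read: mutations by the reader don't reach the cache.
        return copy.deepcopy(self._cache[name])

cache = ForwardCache()
cache.save("noise", {"latents": [1.0, 2.0]})

loaded = cache.load("noise")
loaded["latents"][0] += 2.0  # mutate the loaded copy

# The cached value is unaffected:
print(cache.load("noise")["latents"])  # [1.0, 2.0]
```

The trade-off is an extra copy per read/write, which matters for large latents; deep-copying only at the cache boundary (rather than everywhere in the context API) keeps the cost confined to the implementations that actually alias memory.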

— psychedelicious, Sep 08 '24