TAEWOO KIM


Looking inside the motion module's attention (`VersatileAttention`), `encoder_hidden_states` becomes the `hidden_states`, so the attention ultimately operates as self-attention. In other words, `encoder_hidden_states` is not used in the...

You should check the `TemporalTransformerBlock` in motion_module.py. When the `VersatileAttention` block is created, `cross_attention_dim` is None, since `attention_block_types` is `["Temporal_Self", "Temporal_Self"]`. Then you can find that the `encoder_hidden_states` are...
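The fallback both comments describe can be sketched as follows. This is a minimal NumPy illustration, not the real `VersatileAttention` implementation: the function name and the unprojected Q/K/V are assumptions for brevity. The only point it demonstrates is that when no `encoder_hidden_states` is passed, keys and values come from `hidden_states` itself, so the block degenerates to self-attention.

```python
import numpy as np

def simple_attention(hidden_states, encoder_hidden_states=None):
    """Minimal sketch of the cross/self-attention fallback.

    When encoder_hidden_states is None (the Temporal_Self case, where
    cross_attention_dim is None), keys and values are taken from
    hidden_states, i.e. plain self-attention.
    """
    if encoder_hidden_states is None:
        encoder_hidden_states = hidden_states  # self-attention path

    q = hidden_states
    k = v = encoder_hidden_states

    # Scaled dot-product attention with a numerically stable softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Passing nothing is identical to passing hidden_states explicitly.
x = np.arange(6.0).reshape(3, 2)
print(np.allclose(simple_attention(x), simple_attention(x, x)))  # True
```

In the actual module the same effect comes from the block being configured with `cross_attention_dim=None`, so the text encoder output is never consumed on the temporal path.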