Code release for models

Open KairosXu opened this issue 2 years ago • 7 comments

Thanks for your nice work! However, when I tried to run the training code for LL3DA, I found that the "models" module was missing. Is that correct? If so, could you tell me when your team will release the full code and the training/evaluation scripts? Hoping for your reply soon!

KairosXu avatar Dec 13 '23 02:12 KairosXu

Thanks for your interest in our work! We will gradually upload the code, weights, and training/evaluation scripts starting in late December. Please stay tuned.

ch3cook-fdu avatar Dec 13 '23 10:12 ch3cook-fdu

Sorry to bother you again. Given the excellent performance LL3DA has achieved, we would like to conduct further research based on your nice work. Could you please release your model checkpoints and training/evaluation code as soon as possible? Thanks, and hoping for your reply soon!

KairosXu avatar Dec 29 '23 08:12 KairosXu

Thank you for your recognition of our work, and sorry for the delay. As we are validating the reproducibility of our code and its extensibility to different large language model backends, it may take a few more days. After verification, we will release everything as soon as possible!

ch3cook-fdu avatar Jan 02 '24 07:01 ch3cook-fdu

Sorry to bother you again. I have some questions about the Interact3D module.

  1. Since the original Q-Former architecture in BLIP-2 requires the input feature dimension to be 1408, does the scene feature produced by the scene encoder keep the same dimension?
  2. I found that you added an extra visual prompt compared to 3D-LLM, so I would like to ask how you organized the architecture of Interact3D, and how self-attention handles the additional input in the module.
  3. Does your pipeline also need text instructions in the inference phase, or only the 3D feature and visual prompt, as in BLIP-2? If the former, does the text instruction act as a condition, and how does it work? Hoping for your reply soon!

KairosXu avatar Jan 25 '24 04:01 KairosXu

Thanks for your interest!

  1. In practice, you can customize the encoder_hidden_size within InstructBlipQFormerConfig for our multimodal transformer. We also adopt an FFN to project the scene feature (a sketch of such a projection follows this list).
from transformers import InstructBlipQFormerConfig

# self.encoder_hidden_size should match the projected scene feature dimension
InstructBlipQFormerConfig(
    num_hidden_layers=6,
    encoder_hidden_size=self.encoder_hidden_size,
)
  2. We pad the visual prompts with 0s and set the attention_mask accordingly for self-attention (a sketch of this, together with point 3, follows this list). See https://huggingface.co/docs/transformers/model_doc/instructblip#transformers.InstructBlipQFormerModel for implementation details.

  3. Yes, we need text instructions for inference. The visual prompts are optional. Text instructions play two roles in our architecture: 1) conditional feature aggregation in the multi-modal transformer, and 2) conditional text generation in the LLM.
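
For concreteness, here is a minimal sketch of the FFN projection mentioned in point 1. The dimensions and module names are placeholders for illustration, not our released code:

import torch
import torch.nn as nn

scene_feat_dim = 256        # assumed scene-encoder output dimension (illustrative only)
encoder_hidden_size = 768   # whatever you pass to InstructBlipQFormerConfig

# FFN that projects scene features to the Q-Former's encoder_hidden_size
scene_proj = nn.Sequential(
    nn.Linear(scene_feat_dim, encoder_hidden_size),
    nn.GELU(),
    nn.Linear(encoder_hidden_size, encoder_hidden_size),
)

scene_tokens = torch.randn(2, 1024, scene_feat_dim)   # (batch, num_scene_tokens, dim)
scene_embeds = scene_proj(scene_tokens)               # fed to the Q-Former as encoder_hidden_states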

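And a rough sketch of points 2 and 3: zero-padding the visual prompts, building the self-attention mask over queries, prompts, and instruction tokens, and conditioning the aggregation on the text instruction. The shapes, the stand-in tokenizer, and the prompt count are placeholders; the actual implementation may differ, so please refer to the InstructBlipQFormerModel docs linked above:

import torch
from transformers import BertTokenizer, InstructBlipQFormerConfig, InstructBlipQFormerModel

batch, num_queries, max_prompts, hidden = 2, 32, 8, 768   # assumed sizes
config = InstructBlipQFormerConfig(num_hidden_layers=6, encoder_hidden_size=768)
qformer = InstructBlipQFormerModel(config)

query_tokens = torch.zeros(batch, num_queries, hidden)    # learnable queries in the real model
visual_prompts = torch.randn(batch, 3, hidden)            # e.g. 3 encoded click/box prompts

# pad the visual prompts with 0s to a fixed length and mask out the padded slots
pad_len = max_prompts - visual_prompts.shape[1]
padded_prompts = torch.cat([visual_prompts, torch.zeros(batch, pad_len, hidden)], dim=1)
prompt_mask = torch.cat(
    [torch.ones(batch, visual_prompts.shape[1]), torch.zeros(batch, pad_len)], dim=1
)

# the text instruction is still needed at inference; it conditions feature aggregation
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")   # stand-in tokenizer
text = tokenizer(["describe the chair by the window"] * batch,
                 return_tensors="pt", padding=True)

# queries + padded prompts enter as query_embeds; the mask covers queries, prompts, and text
query_embeds = torch.cat([query_tokens, padded_prompts], dim=1)
attention_mask = torch.cat(
    [torch.ones(batch, num_queries), prompt_mask, text.attention_mask.float()], dim=1
)

out = qformer(
    input_ids=text.input_ids,
    attention_mask=attention_mask,
    query_embeds=query_embeds,
    encoder_hidden_states=torch.randn(batch, 1024, 768),   # projected scene features (see above)
)
# the query/prompt hidden states in out.last_hidden_state would then be projected and
# prepended to the instruction embeddings so the LLM generates text conditioned on them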
ch3cook-fdu avatar Jan 25 '24 05:01 ch3cook-fdu

Hello @ch3cook-fdu, thanks for your paper and code! Is there any news on the release of the main training/testing code?

gujiaqivadin avatar Feb 05 '24 05:02 gujiaqivadin

We are thrilled to announce that our paper has been accepted to CVPR 2024! The code is now released!

Please stay tuned for further updates!

ch3cook-fdu avatar Mar 04 '24 04:03 ch3cook-fdu