
Feature Request: Molmo 72B vision support

[Open] Kreijstal opened this issue 1 year ago · 18 comments

Prerequisites

  • [X] I am running the latest code. Mention the version if possible as well.
  • [X] I carefully followed the README.md.
  • [X] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [X] I reviewed the Discussions, and have a new and useful enhancement to share.

Feature Description

It would be nice if llama.cpp could support Molmo 72B (https://huggingface.co/allenai/Molmo-72B-0924; see also https://gist.github.com/cyan2k/a493b86ac6ce12bcb19cc601098ed496).

Motivation

[image]

It's good.

Possible Implementation

I need an adult.

Kreijstal · Sep 25 '24 21:09

The Molmo series comprises several different architectures, according to the authors:

  • Molmo-72B-0924: Qwen2-72B + OpenAI CLIP
  • Molmo-7B-D-0924: Qwen2-7B + OpenAI CLIP
  • Molmo-7B-O-0924: Custom architecture + OpenAI CLIP
  • MolmoE-1B-0924: Custom architecture

enthermo · Sep 25 '24 21:09

impressive

3unnycheung · Sep 26 '24 11:09

+1 for me

olumolu · Sep 28 '24 17:09

I was able to list the Molmo tensors from a transformers load. Molmo-7B-D-0924 is the one I'm trying to get a GGUF for.
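For reference, a minimal sketch of how such a tensor listing can be produced from the Hugging Face checkpoint (the exact loading arguments here are my assumption, not necessarily what was used; `trust_remote_code=True` is needed because Molmo ships its own modeling code):

```python
# Minimal sketch: enumerate tensor names and shapes from the Molmo checkpoint.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924",
    trust_remote_code=True,     # Molmo uses custom modeling code
    torch_dtype=torch.float32,  # dtype choice is arbitrary for listing shapes
)

for name, param in model.named_parameters():
    print(f"Tensor Name: {name} | Shape: {param.shape}")
```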

**EDIT:** Found this guide that explains the steps to add a custom model: https://github.com/DIGITALAX/Custom_Llama_Cpp/blob/main/docs/development/HOWTO-add-model.md

If it helps: for MolmoForCausalLM, the files where I have found we must add the architecture are convert-hf-to-gguf.py and, inside the gguf-py directory, gguf/constants.py. Possibly gguf/tensor_mapping.py as well. A rough sketch follows below.
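As a rough sketch, the registration in convert-hf-to-gguf.py could reuse the existing Qwen2 architecture for the text tower, since Molmo-7B-D's language model is Qwen2-7B. The class below is illustrative only (the tensor handling is an assumption, not a tested conversion):

```python
# Illustrative sketch for convert-hf-to-gguf.py, not a tested implementation.
# Maps Molmo's text tower onto the existing QWEN2 GGUF architecture; the
# vision backbone would need separate handling (e.g. a clip-style GGUF).
import gguf

@Model.register("MolmoForCausalLM")  # Model is the base class in this script
class MolmoModel(Model):
    model_arch = gguf.MODEL_ARCH.QWEN2

    def modify_tensors(self, data_torch, name, bid):
        # Skip the vision backbone when producing the text-only GGUF.
        if name.startswith("model.vision_backbone."):
            return []
        # Molmo names (model.transformer.blocks.N.*) would still need
        # entries in gguf-py's TensorNameMap to resolve here.
        return [(self.map_tensor_name(name), data_torch)]
```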

Molmo tensor listing (condensed: the per-block tensors repeat with identical shapes across all 28 text blocks and all 23 vision resblocks):

```
model.transformer.wte.embedding                        | torch.Size([152064, 3584])
model.transformer.wte.new_embedding                    | torch.Size([128, 3584])
model.transformer.ln_f.weight                          | torch.Size([3584])

model.transformer.blocks.{0..27}.attn_out.weight       | torch.Size([3584, 3584])
model.transformer.blocks.{0..27}.ff_out.weight         | torch.Size([3584, 18944])
model.transformer.blocks.{0..27}.attn_norm.weight      | torch.Size([3584])
model.transformer.blocks.{0..27}.ff_norm.weight        | torch.Size([3584])
model.transformer.blocks.{0..27}.att_proj.weight       | torch.Size([4608, 3584])
model.transformer.blocks.{0..27}.att_proj.bias         | torch.Size([4608])
model.transformer.blocks.{0..27}.ff_proj.weight        | torch.Size([37888, 3584])

model.transformer.ff_out.weight                        | torch.Size([152064, 3584])

model.vision_backbone.pad_embed                        | torch.Size([2, 2048])
model.vision_backbone.image_vit.class_embedding        | torch.Size([1024])
model.vision_backbone.image_vit.positional_embedding   | torch.Size([577, 1024])
model.vision_backbone.image_vit.patch_embedding.weight | torch.Size([1024, 588])
model.vision_backbone.image_vit.pre_ln.{weight,bias}   | torch.Size([1024])

# model.vision_backbone.image_vit.transformer.resblocks.{0..22}:
resblocks.{N}.attention.{wq,wk,wv,wo}.weight           | torch.Size([1024, 1024])
resblocks.{N}.attention.{wq,wk,wv,wo}.bias             | torch.Size([1024])
resblocks.{N}.feed_forward.w1.weight                   | torch.Size([4096, 1024])
resblocks.{N}.feed_forward.w1.bias                     | torch.Size([4096])
resblocks.{N}.feed_forward.w2.weight                   | torch.Size([1024, 4096])
resblocks.{N}.feed_forward.w2.bias                     | torch.Size([1024])
resblocks.{N}.attention_norm.{weight,bias}             | torch.Size([1024])
resblocks.{N}.ffn_norm.{weight,bias}                   | torch.Size([1024])

model.vision_backbone.image_pooling_2d.{wq,wk,wv}.weight  | torch.Size([1024, 2048])
model.vision_backbone.image_pooling_2d.wo.weight          | torch.Size([1024, 1024])
model.vision_backbone.image_pooling_2d.{wq,wk,wv,wo}.bias | torch.Size([1024])
model.vision_backbone.image_projector.w1.weight        | torch.Size([18944, 1024])
model.vision_backbone.image_projector.w2.weight        | torch.Size([3584, 18944])
model.vision_backbone.image_projector.w3.weight        | torch.Size([18944, 1024])
```

Module structure (as printed; the output was truncated here):

```
MolmoForCausalLM(
  (model): Molmo(
    (transformer): ModuleDict(
      (wte): Embedding()
      (emb_drop): Dropout(p=0, inplace=False)
      (ln_f): RMSLayerNorm()
      (blocks): ModuleList(
        (0-27): 28 x MolmoSequentialBlock(
          (dropout): Dropout(p=0, inplace=False)
          (act): SwiGLU()
          (attn_out): Linear(in_features=3584, out_features=3584, bias=False)
          (ff_out): Linear(in_features=18944, out_features=3584, bias=False)
          (rotary_emb): RotaryEmbedding()
          (attn_norm): RMSLayerNorm()
          (ff_norm): RMSLayerNorm()
          (att_proj): Linear(in_features=3584, out_features=4608, bias=True)
          (ff_proj): Linear(in_features=3584, …
```
out_features=37888, bias=False) ) ) (ff_out): Linear(in_features=3584, out_features=152064, bias=False) ) (vision_backbone): OLMoPretrainedVisionBackbone( (image_vit): VisionTransformer( (patch_embedding): Linear(in_features=588, out_features=1024, bias=False) (pre_ln): LayerNormFp32((1024,), eps=1e-05, elementwise_affine=True) (transformer): BlockCollection( (resblocks): ModuleList( (0-22): 23 x ResidualAttentionBlock( (attention): MultiHeadDotProductAttention( (wq): Linear(in_features=1024, out_features=1024, bias=True) (wk): Linear(in_features=1024, out_features=1024, bias=True) (wv): Linear(in_features=1024, out_features=1024, bias=True) (wo): Linear(in_features=1024, out_features=1024, bias=True) (residual_dropout): Dropout(p=0.0, inplace=False) ) (feed_forward): ViTMLP( (w1): Linear(in_features=1024, out_features=4096, bias=True) (act): QuickGELU() (w2): Linear(in_features=4096, out_features=1024, bias=True) ) (attention_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) (ffn_norm): LayerNorm((1024,), eps=1e-05, elementwise_affine=True) ) ) ) ) (image_pooling_2d): MultiHeadDotProductAttention( (wq): Linear(in_features=2048, out_features=1024, bias=True) (wk): Linear(in_features=2048, out_features=1024, bias=True) (wv): Linear(in_features=2048, out_features=1024, bias=True) (wo): Linear(in_features=1024, out_features=1024, bias=True) (residual_dropout): Dropout(p=0.0, inplace=False) ) (image_projector): MLP( (w1): Linear(in_features=1024, out_features=18944, bias=False) (w2): Linear(in_features=18944, out_features=3584, bias=False) (w3): Linear(in_features=1024, out_features=18944, bias=False) (act): LlamaSwiGLU() (dropout): Dropout(p=0.0, inplace=False) ) (image_feature_dropout): Dropout(p=0.0, inplace=False) ) ) )
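
For reference, a dump like the one above can be produced with a short loop over `named_parameters()`. A minimal sketch, assuming the checkpoint loads through Transformers with `trust_remote_code=True` (the Molmo repos ship custom modeling code):

```python
# Minimal sketch: enumerate Molmo tensor names and shapes via Transformers.
# Assumes trust_remote_code, since the checkpoint ships custom modeling code.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924",
    trust_remote_code=True,
    torch_dtype="auto",
)
for name, param in model.named_parameters():
    print(f"Tensor Name: {name} | Shape: {param.shape}")
```

One detail that stands out for anyone writing a converter: `att_proj` is `Linear(3584 -> 4608, bias=True)`, which looks like a fused QKV projection; 4608 = 3584 + 512 + 512 matches 28 query heads plus 4 KV heads at head dim 128, so it would presumably need to be split into separate Q/K/V tensors for the Qwen2-style GGUF layout.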

Xorba001 avatar Oct 04 '24 18:10 Xorba001

Sub

philfung avatar Oct 04 '24 21:10 philfung

I'd love to see support for this!

hg0428 avatar Oct 08 '24 01:10 hg0428

Me too. I'm still using it with Transformers; Molmo really needs more attention. It's VERY good at vision tasks, especially virtual ones.
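
For anyone who wants to try the same route, the pattern is roughly the one from the Molmo model card. A sketch, assuming the custom processor API (`processor.process` / `model.generate_from_batch`) hasn't changed between revisions:

```python
# Rough sketch of Molmo inference via Transformers, following the model card.
# The processor/generation API is custom code shipped with the checkpoint,
# so the exact argument names are assumptions that may drift between revisions.
import requests
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor, GenerationConfig

repo = "allenai/Molmo-7B-D-0924"
processor = AutoProcessor.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)
model = AutoModelForCausalLM.from_pretrained(
    repo, trust_remote_code=True, torch_dtype="auto", device_map="auto"
)

# Any test image works; this URL is just a placeholder.
image = Image.open(requests.get("https://picsum.photos/id/237/536/354", stream=True).raw)
inputs = processor.process(images=[image], text="Describe this image.")
inputs = {k: v.to(model.device).unsqueeze(0) for k, v in inputs.items()}  # add batch dim

output = model.generate_from_batch(
    inputs,
    GenerationConfig(max_new_tokens=200, stop_strings="<|endoftext|>"),
    tokenizer=processor.tokenizer,
)
new_tokens = output[0, inputs["input_ids"].size(1):]
print(processor.tokenizer.decode(new_tokens, skip_special_tokens=True))
```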

corporate9601 avatar Oct 23 '24 14:10 corporate9601

+1

wuhongsheng avatar Nov 01 '24 03:11 wuhongsheng

> Me too. I'm still using it with Transformers; Molmo really needs more attention. It's VERY good at vision tasks, especially virtual ones.

It's also on mlx-vlm now (main), if you have a Mac. A llama.cpp implementation would still be really useful!

Benjoyo avatar Nov 22 '24 17:11 Benjoyo

+1

minhphan88 avatar Nov 29 '24 10:11 minhphan88

+1

oddpxl avatar Dec 10 '24 20:12 oddpxl

any updates?

Ar57m avatar Dec 25 '24 04:12 Ar57m

> any updates?

You can ask allenai for support: mail them so they make a GGUF format available.

olumolu avatar Dec 25 '24 04:12 olumolu

llama.cpp has become out of date pretty quickly; this model is already ancient, lol.

Kreijstal avatar Dec 25 '24 08:12 Kreijstal

Create an issue at https://github.com/allenai/OLMo/issues asking them to make a GGUF available for llama.cpp. Contact email: [email protected] (https://allenai.org). I have sent one; you guys need to send them too...

olumolu avatar Dec 25 '24 14:12 olumolu

Any news on this?

dprokhorov17 avatar Jan 11 '25 19:01 dprokhorov17

> Any news on this?

Saw this: https://github.com/allenai/OLMo/issues/772

Ar57m avatar Jan 11 '25 21:01 Ar57m

@Ar57m OK, but how is this related to the Molmo-72B model?

dprokhorov17 avatar Jan 12 '25 12:01 dprokhorov17

This issue was closed because it has been inactive for 14 days since being marked as stale.

github-actions[bot] avatar Feb 26 '25 01:02 github-actions[bot]

rip

Kreijstal avatar Feb 26 '25 07:02 Kreijstal