
Training open_clip using pretrained timm

Open daniel-z-kaplan opened this issue 2 years ago • 0 comments

I'm trying to train a suite of open_clip models, and there are a few issues/requirements.

I'd like to use https://huggingface.co/timm/vit_base_patch8_224.augreg_in21k to start things off.

I realize that an open_clip config for this model does not exist yet, so I started with a model that should already be supported, vit_medium_patch16_gap_256: https://huggingface.co/timm/vit_medium_patch16_gap_256.sw_in12k_ft_in1k

Code/steps are below:

# Download the checkpoint from:
# https://huggingface.co/timm/vit_medium_patch16_gap_256.sw_in12k_ft_in1k/tree/main
import open_clip

model, preprocess = open_clip.create_model_from_pretrained(
    model_name='vit_medium_patch16_gap_256',
    pretrained='/data/pytorch_model.bin',
    pretrained_image=True,
)

This results in

    Missing key(s) in state_dict: "positional_embedding", "text_projection", "logit_scale", "visual.trunk.pos_embed", "visual.trunk.patch_embed.proj.weight", "visual.trunk.patch_embed.proj.bias", "visual.trunk.blocks.0.norm1.weight", "visual.trunk.blocks.0.norm1.bias", ...
    Unexpected key(s) in state_dict: "pos_embed", "patch_embed.proj.weight", "patch_embed.proj.bias", "blocks.0.norm1.weight", "blocks.0.norm1.bias", ...
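The two key lists suggest what is going wrong: the timm checkpoint stores raw backbone keys ("pos_embed", "patch_embed.proj.weight", ...), while the open_clip model expects them nested under its vision tower ("visual.trunk.*"), alongside text-tower weights ("positional_embedding", "text_projection", ...) that a vision-only checkpoint does not contain. A minimal sketch of one possible workaround, inferred from the error message rather than confirmed by open_clip docs: prefix the timm keys before loading, and use strict=False since the text tower has no matching weights.

```python
# Sketch: remap raw timm backbone keys onto open_clip's vision-tower
# namespace. The "visual.trunk." prefix is an assumption taken from the
# missing-key list in the traceback above.

def remap_timm_to_open_clip(state_dict, prefix="visual.trunk."):
    """Prefix every timm backbone key to match the open_clip vision tower."""
    return {prefix + key: value for key, value in state_dict.items()}

# Example with key names taken from the error message (dummy values):
timm_keys = {"pos_embed": 0, "patch_embed.proj.weight": 1}
remapped = remap_timm_to_open_clip(timm_keys)
# keys are now "visual.trunk.pos_embed", "visual.trunk.patch_embed.proj.weight"
```

The remapped dict would then be passed to `model.load_state_dict(remapped, strict=False)`, leaving the text tower randomly initialized for training.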

daniel-z-kaplan · Dec 27 '23 17:12