
How do I convert the fine-tuned llama3_8b_instruct_clip_vit_large_p14_336 model to HuggingFace format?

Open chalesguo opened this issue 1 year ago • 8 comments

(screenshot: 微信截图_20240713161048) The converted files differ from what the documentation describes.

chalesguo avatar Jul 13 '24 08:07 chalesguo

You can refer to the xtuner/llava-llama-3-8b-transformers page; you can chat with the model using the transformers pipeline or the model classes directly.
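
A minimal sketch of the pipeline route, in case it helps. The model id comes from the page above and the example image URL appears later in this thread; the llama3-style prompt template and the generation settings are assumptions you may need to adjust:

```python
# Minimal sketch: chat with the HF-format LLaVA-Llama-3 model via the transformers pipeline.
import requests
from PIL import Image
from transformers import pipeline

model_id = "xtuner/llava-llama-3-8b-transformers"
pipe = pipeline("image-to-text", model=model_id, device=0)  # device=0 -> first GPU

image = Image.open(requests.get(
    "https://llava-vl.github.io/static/images/view.jpg", stream=True).raw)

# Assumed llama3 chat template; <image> marks where the image features are inserted.
prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\n"
          "What is shown in this image?<|eot_id|>"
          "<|start_header_id|>assistant<|end_header_id|>\n\n")

outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
print(outputs[0]["generated_text"])
```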

Mikael17125 avatar Jul 13 '24 17:07 Mikael17125

File "/home/ubuntu/program/xtuner_llava/xtuner-main/xtuner-main/xtuner/model/llava.py", line 420, in to_huggingface_llava assert getattr(self.llm, 'hf_quantizer', None) is None,
AssertionError: This conversion format does not support quantized LLM.

chalesguo avatar Jul 14 '24 01:07 chalesguo

(screenshots: 001, 002) How do I choose the projector_weight file?

chalesguo avatar Jul 14 '24 01:07 chalesguo

Are you referring to this?

hhaAndroid avatar Jul 15 '24 01:07 hhaAndroid

Are you referring to this?

I have tried all of the fine-tuning methods. When converting the .pth checkpoint to HuggingFace or to the official LLaVA format, I get the error AssertionError: This conversion format does not support quantized LLM.

Here is my config script llava_llama3_8b_instruct_qlora_clip_vit_large_p14_336_e1_gpu1_finetune.py:

# Copyright (c) OpenMMLab. All rights reserved.
import torch
from mmengine.hooks import (CheckpointHook, DistSamplerSeedHook, IterTimerHook,
                            LoggerHook, ParamSchedulerHook)
from mmengine.optim import AmpOptimWrapper, CosineAnnealingLR, LinearLR
from peft import LoraConfig
from torch.optim import AdamW
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, CLIPImageProcessor,
                          CLIPVisionModel)

from xtuner.dataset import LLaVADataset
from xtuner.dataset.collate_fns import default_collate_fn
from xtuner.dataset.map_fns import llava_map_fn, template_map_fn_factory
from xtuner.dataset.samplers import LengthGroupedSampler
from xtuner.engine.hooks import DatasetInfoHook, EvaluateChatHook
from xtuner.engine.runner import TrainLoop
from xtuner.model import LLaVAModel
from xtuner.utils import PROMPT_TEMPLATE

#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
llm_name_or_path = '/root/autodl-tmp/models/Meta-Llama-3-8B-Instruct'
visual_encoder_name_or_path = '/root/autodl-tmp/models/clip-vit-large-patch14-336'
# Specify the pretrained pth
pretrained_pth = './work_dirs/llava_llama3_8b_instruct_clip_vit_large_p14_336_e1_gpu8_pretrain/iter_3.pth'  # noqa: E501

# Data
# data_root = './data/llava_data/'
# data_path = data_root + 'LLaVA-Instruct-150K/llava_v1_5_mix665k.json'
# image_folder = data_root + 'llava_images'
data_root = "/root/autodl-tmp/datasets/"
data_path = data_root + "LLaVA-Finetuning/custom_dataset.json"
image_folder = data_root + "LLaVA-Finetuning/"
prompt_template = PROMPT_TEMPLATE.llama3_chat
max_length = int(2048 - (336 / 14)**2)

# Scheduler & Optimizer
batch_size = 1  # per_device
accumulative_counts = 128  # number of gradient accumulation steps
dataloader_num_workers = 0
max_epochs = 1
optim_type = AdamW
lr = 2e-4
betas = (0.9, 0.999)
weight_decay = 0
max_norm = 1  # grad clip
warmup_ratio = 0.03

# Save
save_steps = 50000
save_total_limit = 2  # Maximum checkpoints to keep (-1 means unlimited)

# Evaluate the generation performance during the training
evaluation_freq = 50000
SYSTEM = ''
evaluation_images = 'https://llava-vl.github.io/static/images/view.jpg'
evaluation_inputs = ['请描述一下这张照片', 'Please describe this picture']

#######################################################################
#            PART 2  Model & Tokenizer & Image Processor              #
#######################################################################
tokenizer = dict(
    type=AutoTokenizer.from_pretrained,
    pretrained_model_name_or_path=llm_name_or_path,
    trust_remote_code=True,
    padding_side='right')

image_processor = dict(
    type=CLIPImageProcessor.from_pretrained,
    pretrained_model_name_or_path=visual_encoder_name_or_path,
    trust_remote_code=True)

model = dict(
    type=LLaVAModel,
    freeze_llm=True,
    freeze_visual_encoder=True,
    pretrained_pth=pretrained_pth,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path=llm_name_or_path,
        trust_remote_code=True,
        torch_dtype=torch.float16,
        quantization_config=dict(
            type=BitsAndBytesConfig,
            load_in_4bit=True, 
            load_in_8bit=False,
            llm_int8_threshold=6.0,
            llm_int8_has_fp16_weight=False,
            bnb_4bit_compute_dtype=torch.float16,
            bnb_4bit_use_double_quant=True,
            bnb_4bit_quant_type='nf4')),
    llm_lora=dict(
        type=LoraConfig,
        r=64,
        lora_alpha=16,
        lora_dropout=0.05,
        bias='none',
        task_type='CAUSAL_LM'),
    visual_encoder=dict(
        type=CLIPVisionModel.from_pretrained,
        pretrained_model_name_or_path=visual_encoder_name_or_path))

#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
llava_dataset = dict(
    type=LLaVADataset,
    data_path=data_path,
    image_folder=image_folder,
    tokenizer=tokenizer,
    image_processor=image_processor,
    dataset_map_fn=llava_map_fn,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    max_length=max_length,
    pad_image_to_square=True)

train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=llava_dataset,
    sampler=dict(
        type=LengthGroupedSampler,
        length_property='modality_length',
        per_device_batch_size=batch_size * accumulative_counts),
    collate_fn=dict(type=default_collate_fn))

#######################################################################
#                    PART 4  Scheduler & Optimizer                    #
#######################################################################
# optimizer
optim_wrapper = dict(
    type=AmpOptimWrapper,
    optimizer=dict(
        type=optim_type, lr=lr, betas=betas, weight_decay=weight_decay),
    clip_grad=dict(max_norm=max_norm, error_if_nonfinite=False),
    accumulative_counts=accumulative_counts,
    loss_scale='dynamic',
    dtype='float16')

# learning policy
# More information: https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/param_scheduler.md  # noqa: E501
param_scheduler = [
    # dict(
    #     type=LinearLR,
    #     start_factor=1e-5,
    #     by_epoch=True,
    #     begin=0,
    #     end=warmup_ratio * max_epochs,
    #     convert_to_iter_based=True),
    dict(
        type=CosineAnnealingLR,
        eta_min=0.0,
        by_epoch=True,
        #begin=warmup_ratio * max_epochs,
        begin=0,
        end=max_epochs,
        convert_to_iter_based=True)
]

# train, val, test setting
train_cfg = dict(type=TrainLoop, max_epochs=max_epochs)

#######################################################################
#                           PART 5  Runtime                           #
#######################################################################
# Log the dialogue periodically during the training process, optional
custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        image_processor=image_processor,
        every_n_iters=evaluation_freq,
        evaluation_inputs=evaluation_inputs,
        evaluation_images=evaluation_images,
        system=SYSTEM,
        prompt_template=prompt_template)
]

# configure default hooks
default_hooks = dict(
    # record the time of every iteration.
    timer=dict(type=IterTimerHook),
    # print log every 10 iterations.
    logger=dict(type=LoggerHook, log_metric_by_epoch=False, interval=10),
    # enable the parameter scheduler.
    param_scheduler=dict(type=ParamSchedulerHook),
    # save checkpoint per `save_steps`.
    checkpoint=dict(
        type=CheckpointHook,
        by_epoch=False,
        interval=save_steps,
        max_keep_ckpts=save_total_limit),
    # set sampler seed in distributed environment.
    sampler_seed=dict(type=DistSamplerSeedHook),
)

# configure environment
env_cfg = dict(
    # whether to enable cudnn benchmark
    cudnn_benchmark=False,
    # set multi process parameters
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    # set distributed parameters
    dist_cfg=dict(backend='nccl'),
)

# set visualizer
visualizer = None

# set log level
log_level = 'INFO'

# load from which checkpoint
load_from = None

# whether to resume training from the loaded checkpoint
resume = False

# Defaults to use random seed and disable `deterministic`
randomness = dict(seed=None, deterministic=False)

# set log processor
log_processor = dict(by_epoch=False)

TGLTommy avatar Jul 17 '24 09:07 TGLTommy

File "/home/ubuntu/program/xtuner_llava/xtuner-main/xtuner-main/xtuner/model/llava.py", line 420, in to_huggingface_llava assert getattr(self.llm, 'hf_quantizer', None) is None, AssertionError: This conversion format does not support quantized LLM.

Have you solved this problem?

TGLTommy avatar Jul 17 '24 12:07 TGLTommy

Same question here: how do you resolve AssertionError: This conversion format does not support quantized LLM.? Thanks.

ditto66 avatar Jul 23 '24 04:07 ditto66

I finally figured out what the full workflow should look like.

As far as I can tell, you cannot convert directly to HuggingFace format, nor directly to the official LLaVA format; both raise the error above. Here is how I do it instead:

After the LoRA fine-tuning, run: xtuner convert pth_to_hf path/to/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/finetune/llava_llama3_8b_qlora.py path/to/iter_12000.pth path/to/xtuner_model --safe-serialization

This generates the xtuner_model folder, which contains an llm_adapter folder and a projector folder.

Next, merge the adapter into the base LLM: xtuner convert merge path/to/models--meta-llama--Meta-Llama-3-8B-Instruct path/to/xtuner_model/llm_adapter path/to/llm_merge --safe-serialization

Then run configs/llava/llama3_8b_instruct_clip_vit_large_p14_336/convert_xtuner_weights_to_hf.py: python convert_xtuner_weights_to_hf.py --text_model_id path/to/xtuner_model/llm_merge --vision_model_id path/to/models--openai--clip-vit-large-patch14-336 --projector_weight path/to/xtuner_model/projector/model.safetensors --save_path path/to/xtuner_model/llava_finetune

The result is the llava_finetune folder, which is equivalent to xtuner/llava-llama-3-8b-v1_1-transformers on HuggingFace. You can call it the same way as the Hub model, passing the path to llava_finetune as the model_id (see the sketch below).
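
For reference, a minimal loading sketch under the assumptions above; the local path placeholder and the llama3-style prompt are illustrative and should be adjusted to your own paths and template:

```python
# Minimal sketch: load the converted llava_finetune folder like a HuggingFace LLaVA model.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_path = "path/to/xtuner_model/llava_finetune"  # output of convert_xtuner_weights_to_hf.py
model = LlavaForConditionalGeneration.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map="auto")
processor = AutoProcessor.from_pretrained(model_path)

image = Image.open(requests.get(
    "https://llava-vl.github.io/static/images/view.jpg", stream=True).raw)

# Assumed llama3 chat template, matching PROMPT_TEMPLATE.llama3_chat used in the config.
prompt = ("<|start_header_id|>user<|end_header_id|>\n\n<image>\n"
          "Please describe this picture.<|eot_id|>"
          "<|start_header_id|>assistant<|end_header_id|>\n\n")

inputs = processor(images=image, text=prompt,
                   return_tensors="pt").to(model.device, torch.float16)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(processor.decode(output[0], skip_special_tokens=True))
```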

Hellcat1005 avatar Aug 14 '24 05:08 Hellcat1005