Akshata

Results: 19 comments of Akshata

`pip uninstall transformer-engine`, then `pip install transformers`

> > Hi, I solved this with `pip uninstall transformer-engine`, then `pip install transformers`

> Got a permission error for `pip uninstall transformer-engine` then `pip install transformers`:
>
> PermissionError: [Errno 13] Permission denied: '/usr/local/lib/python3.10/dist-packages/libtransformer_engine.so' The folder you are trying to install it to does...
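The PermissionError means the packages live in a system directory (`/usr/local/lib/...`) the current user cannot write to. A minimal sketch of the two usual ways around it, assuming `sudo` is available (the package names come from the thread; everything else is a standard pip option):

```shell
# Option 1: remove the system-wide package with elevated privileges.
sudo pip uninstall -y transformer-engine

# Option 2: avoid the system directory entirely and install into the
# per-user site-packages instead (no root required).
pip install --user transformers
```

Note that `--user` installs take precedence on `sys.path` over system packages, so this also works when you cannot get root at all.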

@HAWLYQ GPU is NVIDIA A100-SXM4-40GB. Training script is:
```
#!/bin/bash
if [ $MASTER_ADDR ];then
    echo $MASTER_ADDR
    echo $MASTER_PORT
    echo $WORLD_SIZE
    echo $RANK
else
    MASTER_ADDR=127.0.0.1
    MASTER_PORT=2$(($RANDOM % 10))$(($RANDOM % 10))15
    WORLD_SIZE=1
...
```
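The `MASTER_PORT` expression in that script may look opaque; a small sketch of what it produces, assuming bash (where `$RANDOM` yields an integer in 0–32767):

```shell
# Builds a five-digit rendezvous port of the form 2XY15, where X and Y
# are independent random digits, so the result lies in 20015..29915.
MASTER_PORT=2$(($RANDOM % 10))$(($RANDOM % 10))15
echo "$MASTER_PORT"
```

Picking the port this way reduces (but does not eliminate) collisions when several single-node jobs share a machine.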

@HAWLYQ Here, I am loading the model from Hugging Face instead of a local checkpoint: `LOAD='mPLUG/DocOwl1.5-Omni'`. Also, in train_docowl.py, the code executes up to the line below:
```
data_module...
```
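The distinction matters because `from_pretrained`-style loaders accept either a local directory or a Hub repo id, and resolve them differently. A minimal sketch of the check involved (the helper name is hypothetical, not the library's code):

```python
import os

LOAD = 'mPLUG/DocOwl1.5-Omni'

def is_local_checkpoint(path):
    # A Hub repo id like 'org/name' is not an existing directory on disk,
    # so a loader given this string falls back to downloading from the Hub.
    return os.path.isdir(path)

print(is_local_checkpoint(LOAD))  # False -> resolved from the Hugging Face Hub
```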

@jimchamp I would like to work on this issue, if it is still available.

@jeejeelee Please help, it's quite urgent. Thanks! Relevant LoRA settings:
```
"lora_rank": 32,
"lora_alpha": 64,
"lora_dropout": 0.05,
"lora_bias_trainable": "none",
"lora_dtype": null,
"lora_lr_ratio": null,
"use_rslora": false,
"use_dora": false,
"init_lora_weights": true,
```
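For reference, with standard (non-rsLoRA) scaling these settings give the adapter an effective scale of alpha / r:

```python
# With use_rslora=False, LoRA applies the update as W + (alpha / r) * B @ A,
# so the effective scaling factor for these settings is:
lora_rank, lora_alpha = 32, 64
scaling = lora_alpha / lora_rank
print(scaling)  # 2.0
```

With `use_rslora=True` the denominator would instead be `sqrt(r)`, which changes how the update magnitude grows with rank.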

@jeejeelee
```
"target_modules": [
    "fc2", "w2", "output", "mlp1.3", "fc1", "mlp1.1",
    "w3", "proj", "w1", "wqkv", "wo", "qkv"
]
```
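As a rough sketch of how a `target_modules` list like this is typically applied (PEFT-style suffix matching against dotted module names; the helper below is illustrative, not the library's implementation):

```python
target_modules = ["fc2", "w2", "output", "mlp1.3", "fc1", "mlp1.1",
                  "w3", "proj", "w1", "wqkv", "wo", "qkv"]

def is_target(module_name, targets=target_modules):
    # A module qualifies when its dotted name equals a target exactly
    # or ends with ".<target>" (suffix-style matching).
    return any(module_name == t or module_name.endswith("." + t)
               for t in targets)

print(is_target("model.layers.0.attention.wqkv"))  # True
print(is_target("model.layers.0.norm"))            # False
```

This is why short entries like `"w1"` can match many layers at once, while dotted entries like `"mlp1.3"` pin down a specific submodule.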

@jeejeelee What is the easiest workaround for this? Deploying with the merged LoRA is hurting performance; I would rather deploy the original weights. Is there some alternative I can explore (fast inference)...
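One commonly used alternative, assuming vLLM is the serving stack (jeejeelee is being asked in that context), is to serve the untouched base weights and attach the LoRA adapter at runtime instead of merging it. A sketch of the launch command; the adapter name and path are placeholders, and `--max-lora-rank` is raised because the config above uses rank 32 (above vLLM's default cap of 16):

```shell
# Serve the base model unmodified; register the adapter separately so
# requests can select it by name, keeping the original weights intact.
vllm serve OpenGVLab/InternVL2-8B \
    --enable-lora \
    --max-lora-rank 32 \
    --lora-modules my-adapter=/path/to/lora/checkpoint
```

Requests that pass the adapter name get LoRA-adjusted outputs, while requests against the base model name bypass the adapter entirely.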

I have 2 LoRAs, main and adalora. Sharing the whole config here for reference:
```
{
    "model_type": "internvl2-8b",
    "model_id_or_path": "OpenGVLab/InternVL2-8B",
    "model_revision": "main",
    "full_determinism": false,
    "sft_type": "lora",
    "freeze_parameters": [],
    "freeze_vit": false,
...
```