Hacker Doge

6 comments by Hacker Doge

Same here! But I found a workaround. I've tried `mistralai/Mixtral-8x7B-Instruct-v0.1` and `NousResearch/Nous-Hermes-2-Yi-34B` (both threw this warning, if it helps: `/home/vic/.local/lib/python3.10/site-packages/peft/utils/save_and_load.py:160: UserWarning: Setting `save_embedding_layers` to `True` as the embedding...
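
For context, that warning is PEFT's "auto" behavior deciding to save the embedding layers alongside the LoRA adapter when it thinks they were resized or targeted. If you'd rather make that explicit, `save_pretrained` accepts a `save_embedding_layers` argument; a minimal sketch, assuming a recent peft and a hypothetical output directory name:

```python
# Minimal sketch, assuming peft >= 0.7 and a model already wrapped as a PeftModel.
# "my-adapter" is a hypothetical output directory, not one from the run above.
def save_adapter(peft_model, out_dir="my-adapter"):
    # save_embedding_layers defaults to "auto"; PEFT switches it to True (with the
    # warning above) when it detects resized embeddings or embedding layers in
    # target_modules. Passing True makes that choice explicit.
    peft_model.save_pretrained(out_dir, save_embedding_layers=True)
```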

@durs You can do it bro!!! I need this feature.

Tried it again today on an 8x H100 SXM:

```
Traceback (most recent call last):
  File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/envs/py3.10/lib/python3.10/runpy.py", line 86, in...
```

Tried it on 8x H100 PCIe. Found it unusual that it got stuck on this part:

```
[2024-05-08 14:11:28,264] [INFO] [axolotl.load_tokenized_prepared_datasets:410] [PID:3641] [RANK:0] merging datasets
```

Been waiting for 10...
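
If anyone else hits the same hang: judging by the `RANK:0` tag, the tokenize/merge step runs on a single rank before training starts, so on a large dataset it can sit at `merging datasets` for quite a while. One thing that may help (an assumption on my part, not verified on this exact setup) is running axolotl's preprocessing step separately first, e.g. `python -m axolotl.cli.preprocess your_config.yml`, so the multi-GPU launch only has to load the already-prepared dataset.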

A different error for 1x H100 SXM:

```
warnings.warn(
/root/miniconda3/envs/py3.10/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py:322: UserWarning: MatMul8bitLt: inputs will be cast from torch.bfloat16 to float16 during quantization
  warnings.warn(f"MatMul8bitLt: inputs will be cast from {A.dtype} to float16...
```
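
For what it's worth, that one isn't really an error: bitsandbytes' int8 matmul kernel works in fp16 internally, so bf16 activations get cast on the way in and it warns about it. A minimal sketch of the kind of setup that produces it, assuming transformers with bitsandbytes installed (`my-model` is a placeholder id, not the model from this run):

```python
# Minimal sketch, assuming transformers + bitsandbytes are installed.
# "my-model" is a placeholder model id, not the one from the run above.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model = AutoModelForCausalLM.from_pretrained(
    "my-model",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # int8 weights via bitsandbytes
    torch_dtype=torch.bfloat16,  # bf16 activations get cast to fp16 inside MatMul8bitLt -> the warning above
)
```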

@winglian Fixed it. I just had to roll back to an older commit:

```
git checkout 132eb740f036eff0fa8b239ddaf0b7a359ed1732
```
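
(Note that checking out a bare commit hash leaves the repo in a detached-HEAD state; `git checkout main`, or whatever the default branch is, gets you back onto the latest code once the upstream fix lands.)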