Frank Taylor
Did this get resolved? Seems like @lucasalavapena got to the bottom of it.
A bit disturbing to see nobody has answered this. I resolved this issue by doing what you said: encoding the categorical variables as numeric using WOE/mean encoding.
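For anyone who lands here later, this is a minimal sketch of what mean (target) encoding looks like in pandas; the DataFrame and column names are hypothetical stand-ins:

```python
import pandas as pd

# Toy data: a categorical feature "city" and a binary target "label".
df = pd.DataFrame({
    "city": ["NY", "SF", "NY", "LA", "SF", "NY"],
    "label": [1, 0, 1, 0, 1, 0],
})

# Replace each category with the mean of the target within that category.
means = df.groupby("city")["label"].mean()
df["city_encoded"] = df["city"].map(means)

print(df)
```

In practice you'd compute the means on the training split only (ideally with smoothing or out-of-fold estimates) to avoid target leakage.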
"Setting "verbose" output is not a real solution, IMO. In my case, 80% of the prints are useless; I just want to get the final prompt sent to the LLM...
That's exactly the approach I was about to take. Thanks!!
If you try offloading layers to the GPU via `--gpu-layers`, it fails with `GGML_ASSERT: ggml-cuda.cu:8148: false`. The command I ran: `./llama.cpp-master/finetune --model-base mistral-7b-openorca.Q4_0.gguf --lora-out lora-open-llama-3b-v2-q8_0-ITERATION.bin --train-data "mistral_metadata_training_data_orca2_equallysampled_col_description.txt" --save-every 41 --threads 12 --adam-iter...
Yes, I ran the code exactly as it is in the notebook. The only differences are the machine and that I ran it in the IPython REPL.
I suspected exactly that. I spun up a single-GPU machine and it works, but it's painfully slow. I'm going to see if creating a custom device_map has any...
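For reference, a minimal sketch of steering placement with `device_map` via Hugging Face transformers/accelerate; the model id and memory caps below are placeholders:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # placeholder model id
    device_map="auto",            # let accelerate shard layers across devices
    max_memory={0: "20GiB", 1: "20GiB", "cpu": "64GiB"},  # per-device caps
)

# Inspect where each module landed; to hand-tune, pass an explicit dict
# (e.g. {"model.embed_tokens": 0, ..., "lm_head": 1}) as device_map instead.
print(model.hf_device_map)
```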