Agnieszka Ciborowska

Results 4 comments of Agnieszka Ciborowska

Okay. In that case, after I complete training, can I still save the model with, e.g., `stage3_gather_16bit_weights_on_model_save`? Also, are there any plans to add checkpointing for NVMe in the near future?
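For context, a minimal sketch of how that flag is typically set in a DeepSpeed ZeRO-3 `ds_config.json`; the surrounding keys and values here are illustrative defaults, not the exact config from this discussion:

```json
{
  "zero_optimization": {
    "stage": 3,
    "stage3_gather_16bit_weights_on_model_save": true
  },
  "bf16": { "enabled": true }
}
```

With this enabled, the full fp16/bf16 weights are gathered on rank 0 at save time, so the saved checkpoint can be loaded without DeepSpeed.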

I was mostly curious about the performance trade-offs of NVMe offloading compared to CPU offloading, and how they vary with different parameters. It is very surprising to me that NVMe does...
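For reference, a minimal sketch of the kind of ZeRO-3 offload config being compared here; the `nvme_path` and buffer values are placeholders, not measured or recommended settings:

```json
{
  "zero_optimization": {
    "stage": 3,
    "offload_param": {
      "device": "nvme",
      "nvme_path": "/local_nvme",
      "buffer_count": 5
    },
    "offload_optimizer": {
      "device": "nvme",
      "nvme_path": "/local_nvme"
    }
  }
}
```

Switching `"device"` to `"cpu"` (and dropping `nvme_path`) gives the CPU-offload variant, which is the usual baseline for these trade-off comparisons.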

I am seeing the same issue with llama3.1 and llama3.2 (running with ollama) on mindsdb/mindsdb:v25.3.4.2