Oscar Key
Here are the logs from `.dvc/tmp/exps/celery/dvc-exp-worker-1.out`: [dvc-exp-worker-1.out.txt](https://github.com/iterative/dvc/files/9433199/dvc-exp-worker-1.out.txt) The start of the stack trace is:
```
[2022-08-25 22:07:00,267: ERROR/MainProcess] Task dvc.repo.experiments.queue.tasks.run_exp[1a24d875-b564-4027-9aae-7569622614cc] raised unexpected: TypeError("open() missing required argument 'flags' (pos 2)")
Traceback...
```
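As an aside (my guess, not confirmed from DVC's code): this particular TypeError usually means `os.open` was called where the builtin `open` was intended, since `os.open` requires `flags` as its second positional argument. A minimal reproduction:
```
import os

# Builtin open(): the second argument (mode) is optional.
f = open("example.txt", "w")
f.close()

# os.open(): flags is a required second positional argument, so
# calling it like the builtin raises a TypeError like the one above.
try:
    os.open("example.txt")
except TypeError as e:
    print(e)  # "... missing required argument 'flags' (pos 2)"

# Correct os.open() usage:
fd = os.open("example.txt", os.O_RDONLY)
os.close(fd)
```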
Hey Joel, I reckon you might be better off using a library like https://github.com/fengyuanchen/cropperjs which looks like it's more recently updated and has more features.
Feel free to submit a pull request if you work it out. I also remembered this, maybe it will help: https://github.com/oscarkey/cropper.js/pull/6
Thanks both! I have updated this issue to reflect the documentation fix.
Hi @sfriedowitz, this is a great feature suggestion! Unfortunately we don't have capacity internally to work on it currently, but I will leave this issue open and we will get...
Fine-tuning was recently enabled again! See the examples:
- https://github.com/PriorLabs/TabPFN/blob/main/examples/finetune_classifier.py
- https://github.com/PriorLabs/TabPFN/blob/main/examples/finetune_regressor.py

Feel free to open a new issue if you have any trouble.
No dumb questions! I think you should be able to use:
```
# Assuming these imports, which weren't shown in the original snippet:
from tabpfn import TabPFNClassifier
import tabpfn.model_loading

tabpfn.model_loading.save_tabpfn_model(my_ft_classifier, "checkpoint_name.pt")
loaded_classifier = TabPFNClassifier(model_path="checkpoint_name.pt")
```
If this doesn't work, open a new issue and we'll look into...
Hi @jutleo. I'm closing this issue for now, but feel free to re-open if you need further help :)
LGTM! I'm surprised at some of the test failures though, e.g. the ones hitting low accuracy and the embedding-shape ones. Maybe the test checkpoint we're using is weird in...
You can see what's going on in the default inference engine here: https://github.com/PriorLabs/TabPFN/blob/3dd61b0bc35faae52041cd2b611490db2178ffec/src/tabpfn/inference.py#L511 It moves the model back to CPU after inference. I guess this is done to save GPU...
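To illustrate the general pattern (a hedged sketch of the idea only, not TabPFN's actual implementation; the model and function names here are placeholders):
```
import torch

# Placeholder model; stands in for whatever the inference engine runs.
model = torch.nn.Linear(16, 2)

def predict(x: torch.Tensor) -> torch.Tensor:
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device)                     # move weights to the GPU for the forward pass
    with torch.no_grad():
        out = model(x.to(device)).cpu()  # bring results back to host memory
    model.to("cpu")                      # release GPU memory between calls,
    return out                           # mirroring what the engine does after inference

print(predict(torch.randn(4, 16)).shape)  # torch.Size([4, 2])
```
The trade-off is that each call pays a host-to-device transfer, in exchange for not holding GPU memory while the model is idle.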