Michael

That makes sense, but the code already has an unused variable `print_status` that is never set. This change makes it usable and adds the ability to make normal...

This isn't a fairseq issue; it's regular pip behaviour. See https://stackoverflow.com/questions/2861183/upgrade-package-without-upgrading-dependencies-using-pip
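
In case it helps, the approach discussed in that answer is pip's `--no-deps` flag, which upgrades a single package without touching its dependencies:

```
pip install --upgrade --no-deps fairseq
```

Note that `--no-deps` also skips installing any newly required dependencies, so those may need to be installed manually.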

@zheyang0825 does adding this line at the end make it work?

```
torch.distributed.destroy_process_group()
```
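
For context, a minimal sketch of where that call usually goes in a distributed training script (illustrative only; the backend and the usual rank/address environment setup are assumed):

```
import torch.distributed as dist

dist.init_process_group(backend="nccl")  # set up the process group
# ... training loop ...
dist.destroy_process_group()  # tear down cleanly so the process can exit
```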

I looked into it and found that there's actually already a `revision` parameter https://github.com/huggingface/peft/blob/56773b9a92b141111d65fe3548d0c30233358868/src/peft/config.py#L235 It's just not being used when calling `from_pretrained` here https://github.com/huggingface/peft/blob/56773b9a92b141111d65fe3548d0c30233358868/src/peft/auto.py#L104 I've opened PR #1658 but...
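
Roughly, the fix amounts to forwarding the stored revision when the base model is loaded (a sketch of the idea, not the exact diff; the variable names here are assumed):

```
# Inside AutoPeftModel.from_pretrained, the base model load would pass the
# revision recorded in the adapter's config instead of silently ignoring it:
base_model = target_class.from_pretrained(
    base_model_path,
    revision=peft_config.revision,  # previously never forwarded
    **kwargs,
)
```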

Adding issue #10 for the `train_gold.sql` file, which is still unfixed for the question `What is the description of the type of the company who concluded its contracts most recently?`...

resolved by #1658

I ended up changing `base_model_revision` back to `revision` to maintain backwards compatibility, as some models were already uploaded with it, including some models used for testing. I can change it back...

I changed the tests but, more importantly, realized there is a big problem. For some reason, I had assumed that `revision` was a property of `AutoModel` or `PretrainedConfig`, but it turns...

Added the `revision` argument to `get_peft_model`, and tests are now working as intended. One small thing I've noticed is that the `peft_config` passed to `get_peft_model` is modified in place and...
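
To illustrate the in-place caveat, a hypothetical usage sketch (the `revision` keyword is the one added in this PR; copying the config is just a defensive workaround, not part of the change):

```
import copy

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(task_type="CAUSAL_LM")

# get_peft_model mutates the config it receives, so pass a copy if the
# original needs to stay pristine for reuse or comparison.
model = get_peft_model(base_model, copy.deepcopy(config), revision="main")
```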

While making the last changes, I discovered there was a problem if both the base model and the peft model have a revision. In that case, calling `AutoPeftModel.from_pretrained(revision="foo")` while also...
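
To make the conflict concrete (hypothetical repo name; `AutoPeftModel` as referenced above):

```
from peft import AutoPeftModel

# The adapter's config already records a revision for the base model, so it
# is ambiguous which artifact this `revision` argument should apply to:
model = AutoPeftModel.from_pretrained("user/some-adapter", revision="foo")
```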