Lucas Liebenwein
I noticed that `isort` ignores config files as soon as any arguments are passed on the command line. Is this really the desired behavior? Especially when `isort` is called...
Depending on the backend, distributed communication may only be supported on either CPU or GPU; see the [backend support table](https://pytorch.org/docs/stable/distributed.html#backends). Right now, communication in `comm.py` always happens on the GPU, see...
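A minimal sketch of what backend-aware device selection could look like. The `comm_device` helper is hypothetical (not part of `comm.py`); the mapping follows the PyTorch backend support table, where `nccl` communicates only on GPU while `gloo` supports CPU tensors:

```python
def comm_device(backend: str) -> str:
    """Pick the device that tensors should live on before a collective call.

    Assumption (per the PyTorch backend table): nccl is GPU-only, while
    gloo and mpi support CPU tensors. This is an illustrative helper,
    not an existing API.
    """
    if backend == "nccl":
        return "cuda"
    return "cpu"
```

Call sites in `comm.py` could then move tensors with `tensor.to(comm_device(backend))` before broadcasting instead of unconditionally calling `.cuda()`.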
We should probably support lists here as well at some point, since the current behavior may be surprising when loading YAML files that contain lists of dicts. https://github.com/zhijian-liu/torchpack/blob/d3fda521bc2e2684643a46103ecece816b53842b/torchpack/utils/config.py#L45-L52
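One possible shape for the fix, sketched as a standalone function (the name `merge_configs` is hypothetical and does not match the linked `config.py`): recurse into dicts as the current code does, and additionally merge lists element-wise when their lengths match, so lists of dicts are merged rather than replaced wholesale.

```python
def merge_configs(base, update):
    """Recursively merge `update` into `base`.

    Dicts are merged key-by-key; lists of equal length are merged
    element-wise (covering the lists-of-dicts case); anything else,
    including lists of mismatched length, is replaced by `update`.
    """
    if isinstance(base, dict) and isinstance(update, dict):
        merged = dict(base)
        for key, value in update.items():
            merged[key] = merge_configs(base[key], value) if key in base else value
        return merged
    if isinstance(base, list) and isinstance(update, list) and len(base) == len(update):
        return [merge_configs(b, u) for b, u in zip(base, update)]
    return update
```

Whether mismatched-length lists should be replaced (as here) or concatenated is a design decision worth settling before implementing.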
Hi, I really love your extension; it makes `pydantic` so much more useful. :) Thank you for your work on this. I was wondering if you had considered adding an...
### Analysis When sending a `pause` to the Curtain 3 device, I get a success message back, but the curtains don't respond. They respond to all other commands I send...
Adding a few productivity improvements to our devcontainer setup - [x] Expose all GPUs in docker-compose - [x] Auto-mount HuggingFace Hub cache if available - [x] Pre-install requirements and pre-commit...
A simple sharder for BMM based on @suyoggupta's prototype
This PR improves the handling of inputs provided by the cache interface: * better switching between the original two inputs and the full inputs based on cache metadata * handling of...
This PR introduces several improvements to our HF model factory: - [x] Rename to `AutoModelForCausalLM` to reflect that there are separate factories for different auto model types - [x] Better...
Apply the approach from #4064 to attention pattern matching. This will greatly simplify the pattern matchers in this file.