Aleksey Smolenchuk
```
raw-body         ~1.1.0 → ~2.1.6
safe-json-parse  ~1.0.1 → ~4.0.0
after            ~0.7.0 → ~0.8.1
test-server      ~0.1.3 → ~0.2.1
send-data        ~1.0.1 → ~8.0.0
tape             ~2.3.0 → ~4.5.1
process          ~0.5.1 → ~0.11.2
error...
```
I think the monorepo hype can result in bad architectural decisions if not properly evaluated.
Most requested features! (by me)
Fusion-cli correctly generates `.map` files for production builds, but uses the `hidden-source-map` webpack config, which omits the `//# sourceMappingURL` comment at the end of the source files. From https://webpack.js.org/configuration/devtool/: > hidden-source-map -...
Webpack does not include polyfills for Buffer and Process unless it encounters references to them. Explicitly defaulting them to `false` makes things difficult for users whose deps and subdeps depend on...
Originally I was playing around with https://github.com/zphang/minimal-llama/ to generate an alpaca-like adaptation [here](https://github.com/lxe/llama-peft-tuner). Both use PEFT, but in slightly different ways, e.g. different parameter saving and a [custom trainer](https://github.com/zphang/minimal-llama/blob/main/finetune_peft.py#L77)....
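For context, a minimal sketch of the PEFT LoRA setup both repos build on (the checkpoint name and hyperparameters below are illustrative, not taken from either repo):

```python
# Minimal LoRA fine-tuning setup with Hugging Face peft; values are illustrative.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-7b-hf",  # assumed checkpoint; substitute your own
    torch_dtype=torch.float16,
)

lora_config = LoraConfig(
    r=8,                                  # LoRA rank
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections, alpaca-lora style
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the small LoRA matrices are trainable
```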
> Our DyLoRA method trains LoRA blocks for a range of ranks instead of a single rank by sorting out the representation learned by the adapter module at different ranks...
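My reading of that is: sample a rank each training step and truncate the LoRA factors to it, so the trained adapter works at any rank up to the maximum. A rough PyTorch sketch of that idea (an interpretation of the paper, not the authors' code):

```python
# Sketch of the DyLoRA idea: sample a rank b per step and use only the first
# b rows/columns of the LoRA factors, so one training run covers all ranks.
import torch
import torch.nn as nn

class DyLoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r_max=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.lora_A = nn.Parameter(torch.randn(r_max, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r_max))
        self.r_max, self.scale = r_max, alpha / r_max

    def forward(self, x, b=None):
        # Sample a rank during training; default to the full rank at inference.
        if b is None:
            b = torch.randint(1, self.r_max + 1, ()).item() if self.training else self.r_max
        A, B = self.lora_A[:b], self.lora_B[:, :b]  # truncate factors to rank b
        return self.base(x) + (x @ A.t() @ B.t()) * self.scale
```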
Is it possible to "unload" the PEFT LoRA weights after mutating the base model with PeftModel.from_pretrained? I'd like to load multiple LoRA models on top of a base model, and...
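One way this might work, assuming the `adapter_name` / `load_adapter` / `set_adapter` / `unload` APIs in recent peft releases (the second adapter id below is hypothetical):

```python
# Keep adapters separate instead of merging them, then switch or drop them.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")

# Load the first LoRA without merging it into the base weights.
model = PeftModel.from_pretrained(base, "tloen/alpaca-lora-7b", adapter_name="alpaca")

# Stack a second adapter and choose which one is active.
model.load_adapter("some-user/other-lora", adapter_name="other")  # hypothetical repo id
model.set_adapter("other")

# Recover the original base model, discarding the LoRA deltas entirely.
base_again = model.unload()
```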
Is there a way to add/use a repetition penalty in the generator code?
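With Hugging Face `generate()` this is just the `repetition_penalty` kwarg; for a custom sampling loop, here is a sketch of the CTRL-style penalty that transformers implements as `RepetitionPenaltyLogitsProcessor`:

```python
# Down-weight tokens that already appear in the generated sequence.
import torch

def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """logits: (batch, vocab); generated_ids: (batch, seq_len) of prior tokens."""
    score = logits.gather(-1, generated_ids)
    # Divide positive logits by the penalty, multiply negative ones by it.
    score = torch.where(score > 0, score / penalty, score * penalty)
    logits.scatter_(-1, generated_ids, score)
    return logits

# With a Hugging Face model the same effect is a single argument:
# output = model.generate(input_ids, repetition_penalty=1.2)
```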