Just merged both PRs: https://github.com/pytorch/torchchat/pull/1034 and https://github.com/pytorch/torchchat/pull/1035. @nobelchowdary, try pulling the latest changes and running the browser again. I've changed how the browser works - now the UI queries the server backend....
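For reference, here's a rough sketch of the kind of request the UI now makes against the server backend (the host, port, endpoint path, and model name are placeholders for illustration, not necessarily the exact values the browser uses):

```python
# Minimal sketch of a client hitting an OpenAI-style chat completions endpoint.
# Host, port, endpoint path, and model name are assumptions for illustration.
import requests

payload = {
    "model": "llama3.1",  # placeholder model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": False,
}

resp = requests.post(
    "http://127.0.0.1:5000/v1/chat/completions", json=payload, timeout=60
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```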
Going to add a note that we can use `hf_transfer` to "potentially double the download speed": https://huggingface.co/docs/hub/models-downloading https://huggingface.co/docs/huggingface_hub/v0.25.1/package_reference/environment_variables#hfhubenablehftransfer This is an option to use a Rust-based downloader. HF claims it's production ready,...
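For context, a minimal sketch of what opting in looks like (the repo ID is just an example; the env var has to be set before `huggingface_hub` is imported, since it's read at import time):

```python
# Enable the Rust-based hf_transfer downloader. Requires `pip install hf_transfer`.
# Set the env var *before* importing huggingface_hub, which reads it at import time.
import os

os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

from huggingface_hub import snapshot_download

# Example repo only - replace with the repo you want; gated repos also need
# `huggingface-cli login` or an HF token.
snapshot_download("meta-llama/Meta-Llama-3-8B-Instruct")
```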
Is the role ever *not* a string? If that's the case, we should find the source of this bug rather than adding a typecast here.
[PR 995](https://github.com/pytorch/torchchat/pull/995) addresses some initial concerns. Responses should now be formatted in JSON using the API dataclasses. Here are our functional gaps so far:

- **system_fingerprint/seed** implementation is incomplete. We...
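For illustration only (these are not the actual torchchat dataclasses), this is roughly what serializing a response dataclass to JSON looks like, with `system_fingerprint` stubbed out:

```python
# Illustrative sketch of a dataclass-backed, OpenAI-shaped JSON response.
# Class and field names here are hypothetical, not torchchat's real API types.
import json
import time
from dataclasses import asdict, dataclass, field

@dataclass
class Message:
    role: str
    content: str

@dataclass
class Choice:
    index: int
    message: Message
    finish_reason: str = "stop"

@dataclass
class ChatCompletionResponse:
    id: str
    model: str
    choices: list
    created: int = field(default_factory=lambda: int(time.time()))
    object: str = "chat.completion"
    system_fingerprint: str = "placeholder"  # still incomplete, per the note above

response = ChatCompletionResponse(
    id="chatcmpl-123",
    model="llama3.1",
    choices=[Choice(index=0, message=Message(role="assistant", content="Hi!"))],
)
print(json.dumps(asdict(response), indent=2))
```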
Hey Roger, thanks for reaching out! We do plan on adding LLaVA support in the coming weeks, and we'd appreciate your help on the API/server components. I'm still working on...
Landed PRs #1035, #1034, and #1042, which demonstrate that the basic completion API works as expected with the Python OpenAI client.
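For anyone who wants to reproduce the check locally, the exercise looks roughly like this with the official `openai` Python client (base URL, port, API key, and model name are assumptions, not the exact values from the tests):

```python
# Sketch of exercising a local OpenAI-compatible completion API with the
# official openai client. Base URL, API key, and model name are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="not-needed")

completion = client.chat.completions.create(
    model="llama3.1",
    messages=[{"role": "user", "content": "Write a haiku about testing."}],
)
print(completion.choices[0].message.content)
```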
Addressing @swolchok's suggestions in follow-up commits. Regarding this: > I don't see any kind of migration from the old path to the new one. How are people going to...
> Why redownload? Copy to new location would be much more efficient.

True - this was my first thought, but there were two main things that make this more complex (and...
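To make the discussion concrete, here's a hypothetical one-time migration helper (the paths are made up for illustration and are not torchchat's actual layout); moving avoids the redownload, and the extra complexity comes from the checks around it:

```python
# Hypothetical sketch of migrating models from an old directory to a new one.
# Both paths are invented for illustration; they are not torchchat's real layout.
import shutil
from pathlib import Path

OLD_DIR = Path.home() / ".torchchat" / "models"       # assumed old location
NEW_DIR = Path.home() / ".torchchat" / "model-cache"  # assumed new location

def migrate_models(old_dir: Path = OLD_DIR, new_dir: Path = NEW_DIR) -> None:
    if not old_dir.is_dir():
        return  # nothing to migrate
    new_dir.mkdir(parents=True, exist_ok=True)
    for entry in old_dir.iterdir():
        target = new_dir / entry.name
        if target.exists():
            continue  # don't clobber anything already in the new location
        shutil.move(str(entry), str(target))

migrate_models()
```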
This looks like the type of bug that occurs when we aren't including the proper EOS/BOS and role headers in the messages. The model's trying to "autocomplete" your message rather...
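To illustrate what I mean, here's a Llama 3-style formatting sketch - with the header and end-of-turn tokens in place, the model is cued to answer as the assistant instead of continuing your text (the exact template depends on the model; this is just an example):

```python
# Llama 3-style chat formatting, shown for illustration only. The model's own
# chat template is authoritative; other models use different special tokens.

def format_llama3_chat(messages):
    parts = ["<|begin_of_text|>"]
    for m in messages:
        parts.append(
            f"<|start_header_id|>{m['role']}<|end_header_id|>\n\n{m['content']}<|eot_id|>"
        )
    # Cue the model to respond as the assistant rather than continue the user's text.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

print(format_llama3_chat([{"role": "user", "content": "What's 2 + 2?"}]))
```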