By "any HuggingFace model", do you mean I could use the "mosaicml/mpt-7b-storywriter" model with this as well? Are there any restrictions on the nature of these models?
In the YouTube video, @PromtEngineer mentioned that localGPT supports Llama models and HuggingFace models like "TheBloke/Vigogne-Instruct-13B-HF". MosaicML's StoryWriter model has 7B parameters and supports a 65K-token context, extendable up to roughly 84K tokens.
@PromtEngineer, I would appreciate your input here.
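For reference, here is roughly how I was hoping to load it. This is just a sketch based on the model card, assuming the standard transformers API; MPT models ship custom modeling code, so `trust_remote_code=True` is needed, and the extended `max_seq_len` value is the one the card suggests:

```python
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "mosaicml/mpt-7b-storywriter"

# MPT uses custom modeling code, so trust_remote_code=True is required.
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)
config.max_seq_len = 83968  # extend the context window beyond the default 65K

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    config=config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```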
@ashokrs Look for model names that end with -HF. I am trying to move from LangChain to llama.cpp. That will hopefully give us the ability to run quantized versions as well. Stay tuned for that update.
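If that works out, loading a quantized model through llama-cpp-python could look something like this (just a sketch; the model path is a placeholder and the prompt format depends on the model you pick):

```python
from llama_cpp import Llama

# Placeholder path to a locally downloaded quantized model file.
llm = Llama(model_path="./models/ggml-vigogne-13b-q4_0.bin", n_ctx=2048)

output = llm(
    "### Instruction:\nSummarize the ingested documents.\n\n### Response:\n",
    max_tokens=256,
    stop=["### Instruction:"],  # stop before the model starts a new turn
)
print(output["choices"][0]["text"])
```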
@PromtEngineer Got it, I will look for models suffixed with -HF.