RWayne93
+1 for this issue as well. I have successfully hosted my own models in a registry following @jamesbrink's method; however, now I have been trying to do this with a...
I agree this would be nice, or at least being able to see which chunks the language model referenced for the final output.
Will this work similarly to the OpenAI spec? I know Ollama recently added this.
> > @RWayne93 not sure if I understand what you're saying :/ could you elaborate a bit more? I conceptualise this feature being used to either pre-process prompts or post-process...
@mikeldking how are you getting the tokens for the llamacpp model? I am currently using the LangChain libraries and instrumentor, and my token counts always show up as 0. For example...
Curious if you ever got around to working on this. If not, what would be a good starting point to implement this myself? I've been looking for an app like...
@pernutbrian I could be wrong, but based on reading this and using the healing function, I figured maybe changing the reward for exploration based on the HP of the party...
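A minimal sketch of that idea: scale the exploration reward by the party's current HP fraction, so a weakened party earns less for discovering new tiles and the policy is nudged toward healing first. All names here (`shaped_exploration_reward`, `party_hp_fraction`, `min_scale`) are illustrative assumptions, not from any specific codebase.

```python
def shaped_exploration_reward(base_explore_reward: float,
                              party_hp_fraction: float,
                              min_scale: float = 0.1) -> float:
    """Scale the reward for exploring new tiles by the party's HP.

    party_hp_fraction: total current HP over total max HP, clamped to [0, 1].
    min_scale: floor so exploration is never worth exactly zero, which would
    let the agent stall entirely when the party is fainted.
    """
    hp = max(0.0, min(1.0, party_hp_fraction))  # clamp to [0, 1]
    scale = min_scale + (1.0 - min_scale) * hp  # linear ramp from min_scale to 1
    return base_explore_reward * scale
```

With `min_scale=0.1`, a full-HP party gets the full exploration reward while a fainted party gets only 10% of it; tuning that floor trades off how strongly healing is prioritized over pushing forward.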
@Lawbayly I was also thinking about the "cut" issue recently, in trying to get to the 3rd gym. With it all being under one RL network, I don't know how it...
The issue there would be somehow getting it to learn Fly for easy backtracking.
@Iron-Bound how do you specify that it should run on the GPU?