jmccrosky
Just want to put on the record that I'd love to see support for the logit_bias parameter for the chat/completions endpoint.
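For anyone unfamiliar with the parameter: in the OpenAI-style API, logit_bias maps token IDs to an additive adjustment (conventionally clamped to roughly -100..100) applied to the corresponding logits before sampling. A minimal sketch of that behavior (illustrative only, not this project's implementation):

```python
def apply_logit_bias(logits, logit_bias):
    """Return a copy of `logits` with per-token biases added.

    `logit_bias` maps token id -> additive adjustment; in the OpenAI-style
    convention, -100 effectively bans a token and +100 effectively forces it.
    """
    adjusted = list(logits)
    for token_id, bias in logit_bias.items():
        adjusted[token_id] += bias
    return adjusted
```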
It seems I'm not the only one who looked at the README and assumed the library takes care of running the backend, which results in a "Connection Refused" error...
Currently, setting temperature to 0 results in the model always generating token 0, since all logits become infinitesimal. This patch changes the behavior so that a temperature of 0 will...
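For context, the widely used convention is to special-case temperature 0 as greedy (argmax) decoding rather than actually scaling the logits by it, since dividing by zero is undefined. A minimal sketch of that convention (illustrative only, not this patch's actual code):

```python
import math
import random

def sample_token(logits, temperature):
    # Convention used by most samplers: temperature == 0 means greedy
    # decoding (pick the highest logit), avoiding a division by zero.
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Otherwise scale the logits and sample from the softmax distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs)[0]
```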