Wallas Henrique

Results: 24 comments by Wallas Henrique

Thanks for trying it out @tjohnson31415, I'll take a look at your examples and address these issues.

Hey @tjohnson31415, I think I addressed your issues: > I see the ModelExecutionError error be raised, but then the server seems to hang, never dumping the debug info or exiting......

Hey @comaniac, do you think this feature implemented this way makes sense for vLLM? I'd love to hear your feedback. Thanks!

@comaniac thanks a lot for your feedback! I believe I addressed the quick one (marked as resolved) and answered all your comments. I'd be happy to keep working on this...

Finally all green! This PR has been open for a while, and now I am convinced to remove the V0 support. I also made some minor updates. If any of...

Thanks @njhill! Could you please point me to the comment? Is that a new one? Or did I miss it?

Hey! Sorry for the delay. I tested the same script with this fix and it worked: ``` /opt/conda/envs/py_3.9/lib/python3.9/site-packages/transformers/utils/hub.py:127: FutureWarning: Using `TRANSFORMERS_CACHE` is deprecated and will be removed in v5 of Transformers....

Thank you @russellb and @mgoin for the quick feedback. Let's discuss. > Is there a good reason we shouldn't just make any_whitespace=False the default? It seems like that should be...