Dmitrii Ioksha

Results: 10 comments by Dmitrii Ioksha

@baskaryan Hello, could you review this?

> how is this different from the default LLM implementations of acall and astream?

`acall` and `astream` use the `_astream` and `_acall` methods under the hood. As stated here https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/llms.py#L1220, `_acall`...
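For reference, here is a minimal sketch of the override points being discussed, assuming the current `langchain_core` custom-LLM interface; `EchoLLM` and its trivial method bodies are illustrative only, not the code from the PR:

```python
from typing import Any, AsyncIterator, List, Optional

from langchain_core.callbacks import (
    AsyncCallbackManagerForLLMRun,
    CallbackManagerForLLMRun,
)
from langchain_core.language_models.llms import LLM
from langchain_core.outputs import GenerationChunk


class EchoLLM(LLM):
    """Toy LLM used only to illustrate where _acall/_astream hook in."""

    @property
    def _llm_type(self) -> str:
        return "echo"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Sync path; invoke() ends up here.
        return prompt

    async def _acall(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # ainvoke()/acall() route here instead of running _call in a thread.
        return prompt

    async def _astream(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[AsyncCallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> AsyncIterator[GenerationChunk]:
        # astream() consumes this generator chunk by chunk.
        for token in prompt.split():
            yield GenerationChunk(text=token + " ")
```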

> > > how is this different from the default LLM implementations of acall and astream?
> > >
> > > `acall` and `astream` use under the hood `_astream`...

Tested with an async LangChain callback and with a FastAPI async backend; it works well. It returns a `StreamingResponse` and I can show the output appearing chunk by chunk (it also works with multiple concurrent requests).
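For context, a minimal sketch of the FastAPI pattern being tested, with a hypothetical `fake_llm_stream` standing in for the real LLM's `astream()` call:

```python
# Sketch only: streaming LLM output through FastAPI.
import asyncio
from typing import AsyncIterator

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()


async def fake_llm_stream(prompt: str) -> AsyncIterator[str]:
    # Stand-in for e.g. `async for chunk in llm.astream(prompt)`.
    for token in ["Hello", ", ", "world", "!"]:
        await asyncio.sleep(0.1)  # simulate generation latency
        yield token


@app.get("/stream")
async def stream(prompt: str) -> StreamingResponse:
    # Each yielded string becomes a chunk the client can render as it arrives.
    return StreamingResponse(fake_llm_stream(prompt), media_type="text/plain")
```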

You should add `"django.contrib.auth.backends.ModelBackend"` to your `AUTHENTICATION_BACKENDS`:

```python
AUTHENTICATION_BACKENDS = [
    "django.contrib.auth.backends.ModelBackend",
    "django_keycloak.auth.backends.KeycloakAuthorizationCodeBackend",
]
```

Hello @maxdeichmann. Could you review this? If you need any clarification, feel free to ask; I'd be glad to help.

@MKhalusova Good day, Maria. I'm waiting for your review.

> I am dealing with this right now, and unfortunately llama-cpp-server has the only completions endpoint I can find that supports logprobs properly (a random hard requirement I have)...
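For anyone with the same logprobs requirement, a hedged sketch of querying llama-cpp-python's OpenAI-compatible completions endpoint; the server address is a placeholder, and the request/response shape follows the OpenAI completions API, which may vary by server version:

```python
# Sketch: request per-token logprobs from a locally running
# llama-cpp-python server (assumed at localhost:8000).
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",  # placeholder address
    json={
        "prompt": "The capital of France is",
        "max_tokens": 5,
        "logprobs": 3,  # top-3 logprobs per generated token
    },
)
data = resp.json()
print(data["choices"][0]["text"])
print(data["choices"][0]["logprobs"]["top_logprobs"])
```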

I've just discovered Marta and it works well. Thank you! Waiting for version 0.8.2, though 👏