Jan Philipp Harries
Implemented `acompress_documents` and changed the syntax for `compress_documents` slightly to make the sync/async functions consistent. `LLMChainExtractor` as implemented in #2915 for use in the `ContextualCompressionRetriever` lacked an async method. As compression...
@vowelparrot @hwchase17 Here is a new implementation of `acompress_documents` for `LLMChainExtractor` without changes to the sync version, as you suggested in #3587 / [Async Support for LLMChainExtractor](https://github.com/hwchase17/langchain/pull/3587). I created a...
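The sync/async pairing described in the two snippets above can be sketched with a minimal stand-in class. This is an illustrative pattern, not LangChain's actual implementation: `Document`, `SimpleExtractor`, and the keyword-matching "compression" are placeholders for the real `LLMChainExtractor` and its LLM call; only the convention that `acompress_documents` mirrors `compress_documents`'s signature comes from the snippets.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Document:
    # Minimal stand-in for LangChain's Document type.
    page_content: str


class SimpleExtractor:
    """Toy compressor illustrating matching sync/async method signatures."""

    def compress_documents(self, documents, query):
        # Keep only documents mentioning the query
        # (a stand-in for the real LLM-based extraction).
        return [d for d in documents if query in d.page_content]

    async def acompress_documents(self, documents, query):
        # Async counterpart with the same signature, so async callers
        # (e.g. an async retriever) can await it directly.
        return self.compress_documents(documents, query)


docs = [Document("cats purr"), Document("dogs bark")]
extractor = SimpleExtractor()
sync_out = extractor.compress_documents(docs, "cats")
async_out = asyncio.run(extractor.acompress_documents(docs, "cats"))
```

Keeping both method signatures identical is the point of the PR: callers can switch between the sync and async paths without changing arguments.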
The deeplake integration was/is very verbose (see e.g. [the documentation example](https://python.langchain.com/en/latest/use_cases/code/code-analysis-deeplake.html)) when loading or creating a deeplake dataset, with only limited options to dial down verbosity. Additionally, the warning that...
Type: Bug If I set an additional devcontainer config path and am connected to an SSH server, the extension searches for the local devcontainer config file path but tries to...
See [here](https://platform.openai.com/docs/api-reference/batch/create) for the API and [here](https://twitter.com/jeffintime/status/1779924149755924707?t=Tmo3Aoo62N5zA5PWLNiqUw) for the (Twitter) announcement. 50% discount would be huge as most large jobs running on distilabel are not time-critical for us. However there...
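For context on the Batch API mentioned above: batch jobs are submitted as a JSONL file where each line is one request. A hedged sketch of building such a line, per the linked API reference; the model name and `custom_id` are placeholders, and the actual upload/submission steps are omitted:

```python
import json


def batch_line(custom_id: str, prompt: str) -> str:
    """Build one JSONL line in the OpenAI Batch API input format.

    custom_id ties the eventual response back to this request;
    the model name here is just an example.
    """
    return json.dumps({
        "custom_id": custom_id,
        "method": "POST",
        "url": "/v1/chat/completions",
        "body": {
            "model": "gpt-4o-mini",
            "messages": [{"role": "user", "content": prompt}],
        },
    })


line = batch_line("task-1", "Summarize this text.")
```

The file of such lines is then uploaded and submitted with a completion window (currently 24h), which is what makes the 50% discount attractive for non-time-critical jobs.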
First of all, many thanks for the release of llama 2 7b 32k and your valuable contributions! It's appreciated that you provide example scripts for fine-tuning; however, the (for me)...
### System Info GPU: RTX4090 Run 2.1.0 with docker like: `docker run -it --rm --gpus all --ipc=host -p 8080:80 -v /home/jp/.cache/data:/data ghcr.io/huggingface/text-generation-inference:2.1.0 --model-id microsoft/Phi-3-mini-128k-instruct --max-batch-prefill-tokens=8192 --max-total-tokens=8192 --max-input-tokens=8191 --trust-remote-code --revision bb5bf1e4001277a606e11debca0ef80323e5f824...
### Your current environment (latest docker image `vllm/vllm-openai:latest`) ```text root@68ac2e4db323:/vllm-workspace# python3 collect_env.py Collecting environment information... PyTorch version: 2.3.0+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used...
While even the official docs and example skills contain non-lowercase names (when the folder is lowercased): https://support.claude.com/en/articles/12512198-how-to-create-custom-skills See also official skills like this: This is really annoying, it would be...