[Bug]: v0.6.3(.post1?) regression
Your current environment
The output of `python env.py`
$ python env.py
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: Void Linux (x86_64)
GCC version: (GCC) 13.2.0
Clang version: Could not collect
CMake version: version 3.29.2
Libc version: glibc-2.39
Python version: 3.11.9 (main, Apr 7 2024, 22:28:25) [GCC 13.2.0] (64-bit runtime)
Python platform: Linux-6.6.46_1-x86_64-with-glibc2.39
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: Tesla P100-PCIE-16GB
GPU 1: Tesla P100-PCIE-16GB
GPU 2: Tesla P100-PCIE-16GB
GPU 3: Tesla P100-PCIE-16GB
GPU 4: Tesla P100-PCIE-16GB
GPU 5: Tesla P100-PCIE-16GB
GPU 6: Tesla P100-PCIE-16GB
GPU 7: Tesla P100-PCIE-16GB
Nvidia driver version: 550.107.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 56
On-line CPU(s) list: 0-55
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
CPU family: 6
Model: 79
Thread(s) per core: 2
Core(s) per socket: 14
Socket(s): 2
Stepping: 1
CPU(s) scaling MHz: 50%
CPU max MHz: 3300.0000
CPU min MHz: 1200.0000
BogoMIPS: 4800.10
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 pti intel_ppin ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts vnmi
Virtualization: VT-x
L1d cache: 896 KiB (28 instances)
L1i cache: 896 KiB (28 instances)
L2 cache: 7 MiB (28 instances)
L3 cache: 70 MiB (2 instances)
NUMA node(s): 2
NUMA node0 CPU(s): 0-13,28-41
NUMA node1 CPU(s): 14-27,42-55
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT vulnerable
Vulnerability Mds: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Meltdown: Mitigation; PTI
Vulnerability Mmio stale data: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; IBRS_FW; STIBP conditional; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Vulnerable: Clear CPU buffers attempted, no microcode; SMT vulnerable
Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] onnxruntime==1.17.3
[pip3] pytorch-triton==3.0.0+989adb9a29
[pip3] pyzmq==26.2.0
[pip3] sentence-transformers==2.7.0
[pip3] torch==2.4.0
[pip3] torchaudio==2.4.0+cu121
[pip3] torchvision==0.19.0
[pip3] transformers==4.45.2
[pip3] triton==3.0.0
[pip3] zmq==0.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
Aphrodite Version: 0.6.3.post1
Aphrodite Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX PHB PHB SYS SYS SYS SYS 0-13,28-41 0 N/A
GPU1 PIX X PHB PHB SYS SYS SYS SYS 0-13,28-41 0 N/A
GPU2 PHB PHB X PIX SYS SYS SYS SYS 0-13,28-41 0 N/A
GPU3 PHB PHB PIX X SYS SYS SYS SYS 0-13,28-41 0 N/A
GPU4 SYS SYS SYS SYS X PIX PHB PHB 14-27,42-55 1 N/A
GPU5 SYS SYS SYS SYS PIX X PHB PHB 14-27,42-55 1 N/A
GPU6 SYS SYS SYS SYS PHB PHB X PIX 14-27,42-55 1 N/A
GPU7 SYS SYS SYS SYS PHB PHB PIX X 14-27,42-55 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
Aphrodite v0.6.3.post1 will no longer run Mistral-Large for me. The following command results in a functional Mistral setup in v0.6.2 and v0.6.2.post1:
aphrodite run --api-keys [redacted] --dtype=half --max-model-len 8192 --port 5000 -tp 8 --quantization gptq --kv-cache-dtype fp8 --served-model-name mistral --tokenizer-mode mistral --disable-custom-all-reduce -gmu 1 --enforce-eager True /home/llama/mod/gptq/Mistral-Large-Instruct-2407-GPTQ
When run with v0.6.3.post1, however, I get the attached error. Removing `--tokenizer-mode mistral` from the command line does make the error go away, but any text generated with that setup is gibberish.
Happy to do any other tests that you might find handy!
Attached Error
INFO: 192.168.0.100:41270 - "POST /v1/completions HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
+ Exception Group Traceback (most recent call last):
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_utils.py", line 76, in collapse_excgroups
| yield
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 186, in __call__
| async with anyio.create_task_group() as task_group:
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 763, in __aexit__
| raise BaseExceptionGroup(
| ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
| result = await app( # type: ignore[func-returns-value]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
| return await self.app(scope, receive, send)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
| await super().__call__(scope, receive, send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/applications.py", line 113, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/errors.py", line 187, in __call__
| raise exc
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/errors.py", line 165, in __call__
| await self.app(scope, receive, _send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 185, in __call__
| with collapse_excgroups():
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/contextlib.py", line 158, in __exit__
| self.gen.throw(typ, value, traceback)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_utils.py", line 82, in collapse_excgroups
| raise exc
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 187, in __call__
| response = await self.dispatch_func(request, call_next)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/endpoints/openai/api_server.py", line 625, in authentication
| return await call_next(request)
| ^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 163, in call_next
| raise app_exc
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 149, in coro
| await self.app(scope, receive_or_disconnect, send_no_error)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/cors.py", line 85, in __call__
| await self.app(scope, receive, send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
| await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| raise exc
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
| await app(scope, receive, sender)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__
| await self.middleware_stack(scope, receive, send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
| await route.handle(scope, receive, send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 288, in handle
| await self.app(scope, receive, send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 76, in app
| await wrap_app_handling_exceptions(app, request)(scope, receive, send)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
| raise exc
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
| await app(scope, receive, sender)
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 73, in app
| response = await f(request)
| ^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/fastapi/routing.py", line 301, in app
| raw_response = await run_endpoint_function(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
| return await dependant.call(**values)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/endpoints/openai/api_server.py", line 271, in create_completion
| generator = await openai_serving_completion.create_completion(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/endpoints/openai/serving_completions.py", line 100, in create_completion
| await self._guided_decode_logits_processor(request, tokenizer))
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/endpoints/openai/serving_engine.py", line 162, in _guided_decode_logits_processor
| return await get_guided_decoding_logits_processor(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/modeling/guided_decoding/__init__.py", line 31, in get_guided_decoding_logits_processor
| return await get_lm_format_enforcer_guided_decoding_logits_processor(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/modeling/guided_decoding/lm_format_enforcer_decoding.py", line 37, in get_lm_format_enforcer_guided_decoding_logits_processor
| tokenizer_data = _cached_build_aphrodite_token_enforcer_tokenizer_data(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/modeling/guided_decoding/lm_format_enforcer_decoding.py", line 112, in _cached_build_aphrodite_token_enforcer_tokenizer_data
| return build_aphrodite_token_enforcer_tokenizer_data(tokenizer)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/modeling/guided_decoding/lm_format_enforcer_logits_processors.py", line 49, in build_aphrodite_token_enforcer_tokenizer_data
| return build_token_enforcer_tokenizer_data(tokenizer)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/lmformatenforcer/integrations/transformers.py", line 77, in build_token_enforcer_tokenizer_data
| regular_tokens = _build_regular_tokens_list(tokenizer)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/lmformatenforcer/integrations/transformers.py", line 57, in _build_regular_tokens_list
| token_0 = tokenizer.encode("0")[-1]
| ^^^^^^^^^^^^^^^^^^^^^
| TypeError: SentencePieceTokenizer.encode() missing 2 required positional arguments: 'bos' and 'eos'
+------------------------------------
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 401, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/applications.py", line 113, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/errors.py", line 187, in __call__
raise exc
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/errors.py", line 165, in __call__
await self.app(scope, receive, _send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 185, in __call__
with collapse_excgroups():
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/contextlib.py", line 158, in __exit__
self.gen.throw(typ, value, traceback)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_utils.py", line 82, in collapse_excgroups
raise exc
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 187, in __call__
response = await self.dispatch_func(request, call_next)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/endpoints/openai/api_server.py", line 625, in authentication
return await call_next(request)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 163, in call_next
raise app_exc
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/base.py", line 149, in coro
await self.app(scope, receive_or_disconnect, send_no_error)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/cors.py", line 85, in __call__
await self.app(scope, receive, send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 715, in __call__
await self.middleware_stack(scope, receive, send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 735, in app
await route.handle(scope, receive, send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 288, in handle
await self.app(scope, receive, send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 76, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
raise exc
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
await app(scope, receive, sender)
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/starlette/routing.py", line 73, in app
response = await f(request)
^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/fastapi/routing.py", line 301, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/fastapi/routing.py", line 212, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/endpoints/openai/api_server.py", line 271, in create_completion
generator = await openai_serving_completion.create_completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/endpoints/openai/serving_completions.py", line 100, in create_completion
await self._guided_decode_logits_processor(request, tokenizer))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/endpoints/openai/serving_engine.py", line 162, in _guided_decode_logits_processor
return await get_guided_decoding_logits_processor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/modeling/guided_decoding/__init__.py", line 31, in get_guided_decoding_logits_processor
return await get_lm_format_enforcer_guided_decoding_logits_processor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/modeling/guided_decoding/lm_format_enforcer_decoding.py", line 37, in get_lm_format_enforcer_guided_decoding_logits_processor
tokenizer_data = _cached_build_aphrodite_token_enforcer_tokenizer_data(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/modeling/guided_decoding/lm_format_enforcer_decoding.py", line 112, in _cached_build_aphrodite_token_enforcer_tokenizer_data
return build_aphrodite_token_enforcer_tokenizer_data(tokenizer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/aphrodite/modeling/guided_decoding/lm_format_enforcer_logits_processors.py", line 49, in build_aphrodite_token_enforcer_tokenizer_data
return build_token_enforcer_tokenizer_data(tokenizer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/lmformatenforcer/integrations/transformers.py", line 77, in build_token_enforcer_tokenizer_data
regular_tokens = _build_regular_tokens_list(tokenizer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/llama/.pyenv/versions/3.11.9/lib/python3.11/site-packages/lmformatenforcer/integrations/transformers.py", line 57, in _build_regular_tokens_list
token_0 = tokenizer.encode("0")[-1]
^^^^^^^^^^^^^^^^^^^^^
TypeError: SentencePieceTokenizer.encode() missing 2 required positional arguments: 'bos' and 'eos'
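For what it's worth, the traceback bottoms out in lm-format-enforcer's transformers integration, which probes the tokenizer with a bare HF-style `encode("0")` call; the mistral tokenizer (from mistral_common, I believe) requires explicit `bos`/`eos` arguments, hence the TypeError. A minimal sketch of the mismatch, using a stub class that just mirrors the signature shown in the traceback (not the real mistral_common implementation):

```python
# Stub that mirrors the encode() signature reported in the traceback;
# NOT the real mistral_common class, just an illustration of the mismatch.
class SentencePieceTokenizer:
    def encode(self, s: str, bos: bool, eos: bool) -> list[int]:
        # Dummy token ids: add begin/end markers when bos/eos are requested.
        ids = [ord(c) for c in s]
        return ([1] if bos else []) + ids + ([2] if eos else [])

tok = SentencePieceTokenizer()
print(tok.encode("0", bos=True, eos=False))  # mistral-style call: works

try:
    tok.encode("0")  # HF-style call made by lmformatenforcer's _build_regular_tokens_list
except TypeError as e:
    print(e)  # missing 2 required positional arguments: 'bos' and 'eos'
```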
Sorry I totally forgot to get back to you 😅
Seems like this can be solved by setting `--guided-decoding-backend outlines`. Not sure what it is about lmfe that breaks the mistral tokenizer...
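For anyone else who lands here, that works out to the original command with one extra flag appended (flag order shouldn't matter):
aphrodite run --api-keys [redacted] --dtype=half --max-model-len 8192 --port 5000 -tp 8 --quantization gptq --kv-cache-dtype fp8 --served-model-name mistral --tokenizer-mode mistral --disable-custom-all-reduce -gmu 1 --enforce-eager True --guided-decoding-backend outlines /home/llama/mod/gptq/Mistral-Large-Instruct-2407-GPTQ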