Xiangyu Li
Also getting this when converting Llama-2-7b-hf from Hugging Face with `convert-hf-to-gguf.py`
#### Update
I just tried with a local clone of https://huggingface.co/meta-llama/Llama-2-7b/tree/main, and the converted model (via `convert.py`) works fine. I think this is a more flexible workaround for now, rather...
Hello, I encountered the same issue when building and testing the Android app as instructed in https://llm.mlc.ai/docs/deploy/android.html. The error message appears as the chat UI initializes. We use prebuilt models and libs:...
> Hello, I encountered the same issue when building and testing the Android app as instructed in [llm.mlc.ai/docs/deploy/android.html](https://llm.mlc.ai/docs/deploy/android.html). The error message appears as the chat UI initializes.
>
> We use prebuilt...
@sygi Hi, the mlc-chat-config is exactly the same as you provided, but the mlc/tvm version is earlier. We actually tested the following 3 settings (Windows + WSL + Pixel 6 Pro):...
https://github.com/mlc-ai/mlc-llm/issues/2076#issuecomment-2056249208 works fine for me, thanks!

## Update
It seems this only works for the prebuilt libs. When I want to compile with customized configurations, the issue still exists. I'm...
Hi @sygi
1. Regarding the `make_object` type error, I've fixed it as you did to make it work.
2. I didn't encounter any other issue at compile time, including the...
@sygi I'm using `mlc-ai-nightly==0.15.dev275`, which should contain a prebuilt tvm package. I didn't compile tvm myself either; I only attempted to compile mlc-llm myself. `python -c "import tvm; print('\n'.join(f'{k}: {v}'...
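For reference, the truncated one-liner above appears to be formatting the key/value pairs returned by `tvm.support.libinfo()`, a TVM helper that reports build metadata as a dict. Below is a minimal sketch of the same formatting idiom using a stand-in dict with hypothetical values, so it runs even without TVM installed:

```python
# Stand-in for tvm.support.libinfo(); on a real TVM install this dict
# would hold build metadata (git hash, enabled backends, etc.).
libinfo = {
    "GIT_COMMIT_HASH": "abc123",  # hypothetical values for illustration
    "USE_CUDA": "OFF",
    "USE_OPENCL": "ON",
}

# Same idiom as the truncated `python -c` command: one "key: value" per line.
report = "\n".join(f"{k}: {v}" for k, v in libinfo.items())
print(report)
```

Comparing the `GIT_COMMIT_HASH` (and backend flags) printed this way against the library you compiled is a quick check that the Python package and the native libs actually match.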
Kernel information: ``` Linux orangepi5plus 6.1.43-rockchip-rk3588 #1.2.0 SMP Thu Nov 21 12:08:24 CST 2024 aarch64 aarch64 aarch64 GNU/Linux ```