
Results: 8 issues from wmx-github

```
wmx@wmx-ubuntu:/media/wmx/res/wmx-github/sunxi-livesuite$ ./LiveSuit.sh
Starting x86-64/LiveSuit.
QGtkStyle was unable to detect the current GTK+ theme.
Qt: Cannot set locale modifiers:
library file path: /media/wmx/res/wmx-github/sunxi-livesuite/x86-64/plgvector.dll
library file path: /media/wmx/res/wmx-github/sunxi-livesuite/x86-64/LangPlg.dll
LoadFile 24
Open 274:...
```

```python
class CausalSelfAttention(nn.Module):
    def forward(self, x):
        B, T, C = x.size()  # batch size, sequence length, embedding dimensionality (n_embd)
        # calculate query, key, values for all heads in batch and...
```
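The attention snippet above is cut off by the issue preview. As a point of reference, here is a self-contained NumPy sketch of what a causal self-attention forward pass of this shape typically computes (the weight-matrix arguments and `n_head` parameter are assumptions for illustration, not the original module's exact interface):

```python
import numpy as np

def causal_self_attention(x, c_attn_w, c_proj_w, n_head):
    """Math-only sketch of causal self-attention for one sequence.

    x: (T, C) input; c_attn_w: (C, 3C) fused QKV weight;
    c_proj_w: (C, C) output projection weight.
    """
    T, C = x.shape
    hs = C // n_head                                # per-head size
    qkv = x @ c_attn_w                              # (T, 3C)
    q, k, v = np.split(qkv, 3, axis=-1)             # each (T, C)
    # split channels into heads: (n_head, T, hs)
    q = q.reshape(T, n_head, hs).transpose(1, 0, 2)
    k = k.reshape(T, n_head, hs).transpose(1, 0, 2)
    v = v.reshape(T, n_head, hs).transpose(1, 0, 2)
    # scaled dot-product scores: (n_head, T, T)
    att = q @ k.transpose(0, 2, 1) / np.sqrt(hs)
    # causal mask: position t may only attend to positions <= t
    mask = np.tril(np.ones((T, T), dtype=bool))
    att = np.where(mask, att, -np.inf)
    # numerically stable softmax over the key dimension
    att = np.exp(att - att.max(axis=-1, keepdims=True))
    att = att / att.sum(axis=-1, keepdims=True)
    y = att @ v                                     # (n_head, T, hs)
    y = y.transpose(1, 0, 2).reshape(T, C)          # re-assemble heads
    return y @ c_proj_w                             # output projection
```

Because of the mask, changing a later token cannot affect the output at earlier positions, which is the defining property of the causal variant.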

> https://github.com/ggerganov/llama.cpp/discussions/625
> Custom BLAS: if your BLAS is installed in `C:/workspace/program/openblas`, GGML can be accelerated by compiling it against BLAS; to do so, add the following code at this line in the `ggml/src/CMakeLists.txt` file...
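The quoted discussion does not include the CMake code itself. The fragment below is a hypothetical sketch of how a custom OpenBLAS install might be wired into a GGML target; the path, the `ggml` target name, and the `GGML_USE_OPENBLAS` define are assumptions to adapt to your tree:

```cmake
# Hypothetical sketch: point the ggml target at a custom OpenBLAS install.
set(OPENBLAS_HOME "C:/workspace/program/openblas")
add_compile_definitions(GGML_USE_OPENBLAS)
target_include_directories(ggml PRIVATE "${OPENBLAS_HOME}/include")
target_link_directories(ggml PRIVATE "${OPENBLAS_HOME}/lib")
target_link_libraries(ggml PRIVATE openblas)
```

Recent llama.cpp versions may instead accept configure-time options (e.g. a BLAS backend switch) rather than edits to `CMakeLists.txt`; check the project's build documentation for your revision.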

https://isv-data.oss-cn-hangzhou.aliyuncs.com/ics/MaaS/ASR/dep_libs/ffmpeg-master-latest-win64-gpl-shared.zip
After downloading and inspecting it, it appears to have been built in November 2023, with FFmpeg version 6.1. There is no such build for my Windows on ARM64 machine; I hope you can provide one.

bug

When I use LLVM-ET-Arm-19.1.1-Linux-AArch64.tar.xz on Ubuntu AArch64, it does not work well. Can I cross-compile with the GCC compiler instead?

```
(venv) (base) wmx@wmx-ubuntu:/media/wmx/soft1/AI-model/openvino_notebooks$ pip list | grep optimum
optimum          1.25.3
optimum-intel    1.23.0
(venv) (base) wmx@wmx-ubuntu:/media/wmx/soft1/AI-model/openvino_notebooks$ pip list | grep transf
transformers     4.51.3
(venv) (base) wmx@wmx-ubuntu:/media/wmx/soft1/AI-model/openvino_notebooks$ pip...
```
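The same version check done above with `pip list | grep` can also be scripted from Python's standard library, which is handy when attaching environment details to an issue (a minimal sketch; the package names queried are just examples):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(name):
    """Return the installed version string of a package, or None if absent."""
    try:
        return version(name)
    except PackageNotFoundError:
        return None

# In the session above this would report, e.g., "1.25.3" for "optimum"
# and None for a package that is not installed.
for pkg in ("optimum", "optimum-intel", "transformers"):
    print(pkg, installed_version(pkg))
```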

**Describe the bug**
use docker: https://hub.docker.com/r/intelanalytics/ipex-llm-serving-xpu
[error.txt](https://github.com/user-attachments/files/21543167/error.txt)

**How to reproduce**
Steps to reproduce the error:
```
cd /ipex-llm/python/llm/example/GPU/HuggingFace/Multimodal/internvl2
REPO_ID_OR_MODEL_PATH=/llm/models/OpenGVLab/InternVL2-1B/
N_PREDICT=200
IMAGE_URL_OR_PATH=/llm/models/demo.png
PROMPT="描述图片内容"
python ./chat.py --repo-id-or-model-path $REPO_ID_OR_MODEL_PATH --prompt $PROMPT --n-predict...
```

user issue

Can I get the BODY_25 result?