lakako

Results 10 comments of lakako

I hit a similar problem. With redir-host, `curl -vvv "https://www.google.com"` fails with `curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to www.google.com:443`. With fake-ip it works fine. Also looking for a fix.
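For context, redir-host and fake-ip are DNS `enhanced-mode` settings in Clash-style clients; a minimal sketch of the fake-ip configuration that worked here (field names follow Clash's documented config schema, the IP range is illustrative):

```yaml
# Clash DNS section: enhanced-mode selects redir-host vs fake-ip resolution
dns:
  enable: true
  enhanced-mode: fake-ip        # redir-host triggered the SSL_ERROR_SYSCALL above
  fake-ip-range: 198.18.0.1/16  # illustrative range commonly used for fake-ip
```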

```
[INFO] 2023-12-01 14:14:40.702 +0800 - -> cat: /test.txt: No such file or directory
cat: test.txt: No such file or directory
root@295f2097dcc1:/opt/dolphinscheduler# cat /dolphinscheduler/default/resources/test.txt
hhhh
cxzcxz
```

Already fixed this in https://github.com/PaddlePaddle/PaddleSpeech/pull/3736

Docker image version v1.4.1, xllamacpp==0.1.13:

```
supervisor-1 | 2025-04-16 23:45:22,958 xinference.core.worker 142 INFO [request 89636d5a-1b57-11f0-af90-5ae6638fb738] Enter terminate_model, args: , kwargs: model_uid=Qwen2.5-7B-Instruct-GPTQ-Int4-0
supervisor-1 | 2025-04-16 23:45:22,961 xinference.model.llm.vllm.core 588 INFO Stopping vLLM engine
supervisor-1 |...
```

Upgraded to xllamacpp 0.1.14 and the problem persists.

Upgraded to v1.5.0 and it still errors:

```
supervisor-1 | 2025-04-20 22:28:20,429 xinference.core.worker 142 ERROR Failed to load model gemma-3-it-0
supervisor-1 | Traceback (most recent call last):
supervisor-1 | File "/usr/local/lib/python3.10/dist-packages/xinference/core/worker.py", line 1135, in launch_builtin_model
supervisor-1...
```

```
supervisor-1 | 2025-04-21 23:57:19,647 xinference.core.worker 142 INFO [request 08a23dc4-1f47-11f0-9a3d-d27bb3cc94bf] Enter launch_builtin_model, args: , kwargs: model_uid=gemma-3-it-0,model_name=gemma-3-it,model_size_in_billions=12,model_format=ggufv2,quantization=Q3_K_L,model_engine=llama.cpp,model_type=LLM,n_gpu=auto,request_limits=None,peft_model_config=None,gpu_idx=None,download_hub=None,model_path=None,xavier_config=None
supervisor-1 | 2025-04-21 23:57:20,683 xinference.model.llm.llm_family 142 INFO Caching from Modelscope: bartowski/google_gemma-3-12b-it-GGUF
supervisor-1 | INFO...
```

Tried pulling again this morning; the output is still garbled. ![image](https://github.com/user-attachments/assets/fa16834d-bd35-45bb-add9-c6b02aad185b)

mise supports Windows: https://mise.jdx.dev/installing-mise.html#windows-winget
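The winget route from that page can be sketched as follows (the package id is the one documented there; verify against the current docs before relying on it):

```shell
# Install mise on Windows via winget (package id from the mise install docs)
winget install jdx.mise
```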