James Zhang

Results: 10 comments of James Zhang

> Hey @shiftonetothree, thanks for the PR! 😄 Can you add tests, so we can avoid regressions when refactoring later?

ok!

This should solve it: https://github.com/bevacqua/dragula/pull/461 and https://github.com/spartez/dragula/tree/mixed-direction

`yarn add buffer -D` can solve this problem.
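
Not part of the original comment, but for context: a minimal sketch assuming the error is the common webpack 5 case where Node core polyfills (such as `Buffer`) are no longer bundled, so the userland `buffer` package has to be installed explicitly.

```shell
# Assumed scenario: a bundler error such as "Can't resolve 'buffer'".
# Installing the userland package as a dev dependency usually satisfies the resolver.
yarn add buffer -D
# Verify it is now resolvable in the project (Yarn classic).
yarn list --pattern buffer
```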

I'm hitting the same problem. Single-socket X99 motherboard with one CPU, 256 GB of RAM, and two 3080 20G GPUs.

```shell
nohup ftllm server fastllm/Qwen3-235B-A22B-INT4MIX --device cuda --moe_device "{'multicuda:0,1':15,'cpu':85}" --api_key madsnfnin1 --port 9872 \
  > ~/ftllm.log 2>&1 &
```

```
(ai) ai@ai-desktop:~$ tail -200f ftllm.log
2025-06-05 17:12:22,705...
```
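
Not from the original comment: a quick smoke test once the server is up, assuming ftllm exposes an OpenAI-compatible `/v1/chat/completions` route on the configured port and API key (adjust the path if your build differs).

```shell
# Hypothetical request; port 9872 and the API key match the launch command above.
curl http://127.0.0.1:9872/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer madsnfnin1" \
  -d '{"model": "fastllm/Qwen3-235B-A22B-INT4MIX", "messages": [{"role": "user", "content": "hello"}]}'
```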

With this command I finally got both GPUs actually in use, but the speed is exactly the same either way, so it's clearly a CPU bottleneck.

```shell
nohup ftllm server fastllm/Qwen3-235B-A22B-INT4MIX --device "cuda:1" --moe_device "{'cuda:0':15,'cpu':85}" --api_key madsnfnin1 > ~/ftllm.log 2>&1 &
```
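
Not part of the original comment: one simple way to check the CPU-bottleneck claim while a request is generating, using standard tools rather than anything fastllm-specific.

```shell
# Both GPUs should show activity with the split above, even if utilization is low.
watch -n 1 nvidia-smi
# In another terminal: if CPU cores sit near 100% while tokens/s stays flat,
# the MoE layers placed on 'cpu' are the limiting factor.
top -d 1
```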

@viemmsakh yes, it only works on versions no higher than 1.7.3.

> We specifically monkeypatch that package to work with Electron Forge. Is there a reason you want to install 1.10.0?

npm automatically upgraded my @vercel/webpack-asset-relocator-loader.
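
Not from the original thread: if npm keeps floating that dependency, one way to hold it at a known-good version (assuming 1.7.3, per the comment above, and that it lives in devDependencies) is to pin it exactly.

```shell
# --save-exact writes "1.7.3" instead of "^1.7.3" to package.json,
# so later installs will not auto-upgrade it to 1.10.0.
npm install --save-dev --save-exact @vercel/webpack-asset-relocator-loader@1.7.3
```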