Shengyu Liu
@zema1 It could be displayed after the status, e.g. `Accept 100`, like Luogu does qwq. By the way, is there any way to batch-import a large number of problems...? Right now I only have the problem statements (in .png format) and the test data (the data format meets the OJ's requirements)... Thanks.
I also have statements in pdf/jpeg/png/html formats qwq
@zema1 Thx. One more question: all the problems are private after being imported. How do I deal with that...?
I've got the same error
In cfp.py, line 55, add `.astype(int)` after `np.round(N/2)`. This works for me. I'm using python 3.9 with the latest numpy.
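For context, a minimal sketch of the failure mode and the fix; the value of `N` here is a placeholder, not the one cfp.py actually computes:

```python
import numpy as np

N = 101                              # hypothetical size; cfp.py derives its own N
half = np.round(N / 2)               # np.float64 -- recent NumPy refuses float indices
# arr[:half] would raise "TypeError: slice indices must be integers ..."
half = np.round(N / 2).astype(int)   # cast to an integer so it can be used to index/slice
arr = np.arange(10)
print(arr[:half])                    # works once half is an integer
```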
After upgrading to VSCode 1.77.3 from 1.72.2, I am able to use fcitx5 in vscode without any additional flags.
Yes, I was using XWayland. However, VSCode looks blurry on my 4K monitor with fractional scaling enabled. After adding the flags `--enable-features=UseOzonePlatform --ozone-platform=wayland`, it is no longer blurry, but I am...
I disabled CUDA Graph (`--enforce-eager`) and multi-step scheduling when benchmarking vLLM. Could you please test this configuration?
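A minimal sketch of an equivalent setup through vLLM's Python API, in case that is easier to reproduce; the model name and prompt are placeholders:

```python
from vllm import LLM, SamplingParams

# enforce_eager=True skips CUDA Graph capture, matching the --enforce-eager
# CLI flag; multi-step scheduling stays off as long as num_scheduler_steps
# is left at its default of 1.
llm = LLM(model="meta-llama/Llama-3.2-1B", enforce_eager=True)

params = SamplingParams(temperature=0.0, max_tokens=32)
outputs = llm.generate(["Hello, my name is"], params)
print(outputs[0].outputs[0].text)
```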
Probably I'll add support for parallelism one day, but it's not guaranteed... Maybe you can add it yourself.
It seems that Llama-3.2-1B does not have `lm_head.weight`. Instead, it uses `model.embed_tokens.weight` as `lm_head.weight`. For a quick fix, you may try to modify this line: https://github.com/interestingLSY/swiftLLM/blob/af7a5589fdac7b2d8b080ed34f2be706f20724a0/swiftllm/worker/weight.py#L154 . You may try...
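For illustration only, a sketch of the kind of fallback I have in mind; the function name and dict layout here are hypothetical, not swiftLLM's actual loading code in `swiftllm/worker/weight.py`:

```python
import torch

def resolve_lm_head(weights: dict[str, torch.Tensor]) -> torch.Tensor:
    # Llama-3.2-1B ties its output projection to the input embedding, so the
    # checkpoint ships no separate lm_head.weight tensor.
    if "lm_head.weight" in weights:
        return weights["lm_head.weight"]
    # Fall back to the tied embedding matrix.
    return weights["model.embed_tokens.weight"]
```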