Tmn07
I ran into this problem when downloading a certain resource as well. I retried several times, and it doesn't fail after any fixed chunk. If I share the link with someone else, they can download it successfully, so it looks like a network problem on my end? In this case the download can't be resumed; it seems the only option is to download everything again.

[2022-04-05 02:24:54] Chunk 1478/1657 finished downloading
Traceback (most recent call last):
  File "c:\users\tmn07\.pyenv\pyenv-win\versions\3.8.10\lib\site-packages\PIL\ImageFile.py", line 239, in load
    s = read(self.decodermaxblock)
  File "c:\users\tmn07\.pyenv\pyenv-win\versions\3.8.10\lib\site-packages\PIL\PngImagePlugin.py", line 923, in load_read
    cid, pos, length = self.png.read()
  File...
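A minimal sketch of retrying a single chunk, assuming the chunks are plain HTTP GETs fetched with `requests` (the actual downloader is not shown in the post; `fetch_chunk` is a hypothetical helper): each attempt also checks that the bytes decode as a valid PNG, since that is where the traceback above fails.

```python
import io
import time

import requests
from PIL import Image


def fetch_chunk(url, retries=3, backoff=2.0):
    """Download one image chunk, retrying on network errors or a corrupt PNG."""
    last_err = None
    for attempt in range(retries):
        try:
            resp = requests.get(url, timeout=30)
            resp.raise_for_status()
            data = resp.content
            Image.open(io.BytesIO(data)).load()  # raises if the PNG is truncated/corrupt
            return data
        except (requests.RequestException, OSError) as err:
            last_err = err
            time.sleep(backoff * (attempt + 1))  # simple linear backoff between retries
    raise RuntimeError(f"chunk download failed after {retries} attempts") from last_err
```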
> Hi @LinKiling, it looks like we are able to load the Codellama models and run generation, but the output seems a bit off. I'll ask our kernel devs to...
The unified identity authentication hasn't changed; it's just that the test site 乐学网 has an SSL certificate problem :joy:
The Academic Affairs Office login page was changed a while ago and now uses RSA encryption, but being a noob I haven't cracked it yet...

*****************
Update 2017-5-8: got the encryption working; I'll try it out tomorrow somewhere with campus network access~
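For reference, a minimal sketch of how such a login is usually handled once the RSA public key is known, using the `rsa` package; the key here is a throwaway one generated locally, since the real modulus/exponent would have to be scraped from the login page's JavaScript (an assumption about the site, not its confirmed mechanism):

```python
import binascii

import rsa  # pip install rsa

# Throwaway key pair so the snippet runs on its own; with values scraped from
# the login page it would instead be:
#   pub_key = rsa.PublicKey(int(modulus_hex, 16), int(exponent_hex, 16))
pub_key, _priv_key = rsa.newkeys(1024)

password = "my-password"
# PKCS#1 v1.5 padding, then hex-encode so the result can be posted as a form field.
encrypted = binascii.hexlify(rsa.encrypt(password.encode("utf-8"), pub_key)).decode()
print(encrypted)
```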
Reply to the post above: it's because the physical fitness test website has been shut down.
That site is only open for a short window each year. The server has been shut down; you'll have to wait until it opens again before you can use it.
@stricklandye @jxt1234 I'm hitting the same problem. The pb file contains quantization info for every conv and linear layer, but in the .mnn model produced by mnnconvert one linear layer was not converted and is still fp32. Is there a known fix, or a way to track this down further? It looks like mnnconvert skipped that layer during conversion. Usage follows the MNN docs: https://mnn-docs.readthedocs.io/en/latest/tools/compress.html#id26

mnnconvert --modelFile quant_model.onnx --MNNModel quant_model.mnn --framework ONNX --bizCode MNNTest --compressionParamsFile compress_params.bin
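Not an official MNN debugging path, just a sketch under the assumption that the `onnx` Python package is available: listing the Conv/Gemm/MatMul nodes in quant_model.onnx by name makes it easier to compare against compress_params.bin and against what actually ends up quantized in the .mnn file, and so to spot which layer the converter skipped.

```python
import onnx

# Load the quantization-aware ONNX model passed to mnnconvert above.
model = onnx.load("quant_model.onnx")

# Print every node that should carry per-layer quantization info so the
# names can be matched against compress_params.bin / the converted .mnn model.
for node in model.graph.node:
    if node.op_type in ("Conv", "Gemm", "MatMul"):
        print(node.op_type, node.name)
```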
```
pre-commit: vllm/model_executor/layers/quantization/blockwise_int8.py#L147 Value of type "Optional[list[int]]" is not indexable  [index]
pre-commit: vllm/model_executor/layers/quantization/blockwise_int8.py#L148 Value of type "list[int] | None" is not indexable  [index]
```

Sorry, I have no idea how...
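For what it's worth, the usual way to make mypy accept indexing an `Optional[list[int]]` is to narrow it to `list[int]` first. A minimal sketch (the `weight_block_size` name and helper are only assumptions for illustration, not the actual code in blockwise_int8.py):

```python
from typing import Optional


def block_shape(weight_block_size: Optional[list[int]]) -> tuple[int, int]:
    # Narrowing the Optional before indexing removes the
    # 'Value of type "Optional[list[int]]" is not indexable' error.
    assert weight_block_size is not None, "weight_block_size must be configured"
    return weight_block_size[0], weight_block_size[1]
```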
@mgoin Thanks for checking! Currently, I don't have sufficient GPU resources to revive this PR, but I might have access in a week or two. I'll update here once I'm...