yuxx0218
What is the point of the following function in dataset.py, which does nothing but raise an error? def get_image_id(filename: str) -> int: """ Convert a string to an integer. Make sure...
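For reference, a minimal sketch of what such a filename-to-id conversion could look like. The digit-extraction scheme below is an assumption for illustration, not necessarily the convention dataset.py expects:

```python
import os
import re


def get_image_id(filename: str) -> int:
    """Convert a filename to an integer id.

    Assumption (hypothetical): the id is the first run of digits in
    the base name, e.g. "img_000123.jpg" -> 123. The real dataset may
    use a different naming convention.
    """
    stem = os.path.splitext(os.path.basename(filename))[0]
    match = re.search(r"\d+", stem)
    if match is None:
        raise ValueError(f"no digits found in filename: {filename!r}")
    return int(match.group())


print(get_image_id("img_000123.jpg"))  # -> 123
```

The function raising an error in the repo is likely a placeholder the user is meant to fill in for their own filename scheme.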
> > > I met the same problem. Did you solve it? You can see my pull request on this project.
Begging for the SVT-P labels.
I seem to have solved it: changing conv8's padding to 0 works, and the output becomes [1, 11316, 1, 61]. Was the original padding a typo by the author? Also, does 11316 mean 11315 Chinese characters plus 1 blank? Is 61 the sequence length? And is there a sample set that covers all 11315 Chinese characters?
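To see why changing conv8's padding alters the output length, the standard convolution output-length formula can be checked directly. The kernel size, stride, and input length below are illustrative assumptions, not the model's actual conv8 parameters:

```python
def conv_out_len(in_len: int, kernel: int, stride: int = 1, pad: int = 0) -> int:
    # Standard 1-D convolution output-length formula (floor division).
    return (in_len + 2 * pad - kernel) // stride + 1


# Hypothetical numbers: a length-61 feature sequence and a 3-wide kernel.
print(conv_out_len(61, kernel=3, stride=1, pad=1))  # "same" padding: 61
print(conv_out_len(61, kernel=3, stride=1, pad=0))  # "valid" padding: 59
```

With pad=1 and kernel=3 the length is preserved, while pad=0 shortens it by kernel-1, which is why the padding choice shows up directly in the last dimension of the output tensor.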
Hi, I want to run THUDM/chatglm-6b-int4 with vLLM, but it raises a CUDA OOM error. Based on the log, it requires at least 10 GB of GPU memory. Actually, when I use Hugging Face Transformers...
> > Hi, I want to run THUDM/chatglm-6b-int4 with vLLM, but it raises a CUDA OOM error. Based on the log, it requires at least 10 GB of GPU memory. Actually, when I use...
> Any parametric monocular face reconstruction method would be an alternative, like FaceScape, DECA, 3DDFA_v2, etc. Which method did you use? Could you please upload the code?
> As I understand it, papers 1 and 2 (mentioned in paragraph 2 of Sec. 4.1) are used to extract the face landmarks. Did you just use the papers' code,...
Same problem:
Log start
main: build = 1545 (3d2730e)
main: built with cc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0 for x86_64-linux-gnu
main: seed = 1703236442
llama_model_loader: loaded meta data with 18 key-value pairs...
> `THUDM/chatglm-6b-int4` doesn't use AWQ or GPTQ for quantization and is not supported by vLLM. Thanks. Could I say that vLLM only supports ChatGLM at full precision and quantized by...