Mengqing Cao

Results 49 comments of Mengqing Cao

> Cool! How is the inference and training speed? Your speed of reply is amazing! : ) As the following pic shows, it takes around 55s to run inference with ViT-L-14 on...

> A metric we usually look at is the sample/s per accelerator. Some baselines: on one 3080 GPU - B/32 inference speed is about 1300 sample/s - L/14 is about...
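The samples-per-second metric mentioned in the quote above can be measured with a simple timing loop. This is an illustrative sketch, not the project's actual benchmark code; the `run_batch` callable here is a placeholder standing in for one forward pass of the model.

```python
import time

def throughput(run_batch, batch_size, num_iters=10, warmup=2):
    """Return samples/s for a callable that processes one batch of inputs."""
    # Warmup iterations are excluded so one-time setup costs don't skew the rate.
    for _ in range(warmup):
        run_batch()
    start = time.perf_counter()
    for _ in range(num_iters):
        run_batch()
    elapsed = time.perf_counter() - start
    return num_iters * batch_size / elapsed

# Placeholder workload: any batch computation stands in for CLIP inference here.
samples_per_sec = throughput(lambda: sum(i * i for i in range(10_000)), batch_size=32)
print(f"{samples_per_sec:.1f} samples/s")
```

On a real accelerator you would also synchronize the device before reading the clock, since GPU/NPU kernels launch asynchronously.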

@rom1504 Hi, a few weeks have passed. If there are any suggestions or concerns, please let me know and I'll address them as soon as possible.

Could anyone help with reviewing? Thanks 👍 @rom1504 @rwightman @gabrielilharco @bryant1410 @mitchellnw

cc @rwightman @rom1504 @gabrielilharco @bryant1410 @mitchellnw

> @MengqingCao hmm, yeah this needs fixing. One q though as I'm not intimately familiar with the gen code. Does the implementation here support multiple sentences in a batch? If...

@rwightman @gpucce, I have implemented option 2 and updated the code. Please give me your suggestions, thanks!

Cool! It helps NPU users like me, thanks!

> I am not opposed to having this Dockerfile but I don't think we have test cases in the library yet that would benefit from the corresponding Docker image. We...