Muyang Li

Results: 213 comments by Muyang Li

I also ran into this problem without changing the code. Have you figured out the reason?

The [nano_demo](https://github.com/mit-han-lab/litepose/tree/main/nano_demo) is tested on a Jetson Nano with TVM support. If you are using a Jetson Nano, you can follow this [guide](https://github.com/mit-han-lab/litepose/tree/main/nano_demo#installation) to install TVM. If you are using other devices,...

The model should be CPU-friendly, as we also include some results on a Raspberry Pi, where it only takes ~100 ms. But if you directly run the PyTorch model on the CPU, I...
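For reference, here is a minimal sketch of how one might time an eager PyTorch model on the CPU. The module and input resolution below are placeholders, not the actual Lite Pose network:

```python
import time
import torch

model = torch.nn.Conv2d(3, 16, 3, padding=1).eval()  # placeholder module
x = torch.randn(1, 3, 256, 256)                      # placeholder input size

with torch.no_grad():
    for _ in range(10):                              # warm-up runs
        model(x)
    start = time.perf_counter()
    for _ in range(50):
        model(x)
    latency = (time.perf_counter() - start) / 50
print(f"average CPU latency: {latency * 1000:.1f} ms")
```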

You could try TVM to optimize your CPU backend, but I think this will cost you much more time...
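As a rough sketch (not the exact recipe we used), compiling a traced PyTorch model for a CPU target with TVM's Relay frontend looks roughly like this; the module, input shape, and input name are placeholders:

```python
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

model = torch.nn.Conv2d(3, 16, 3, padding=1).eval()      # placeholder module
x = torch.randn(1, 3, 256, 256)
scripted = torch.jit.trace(model, x)                     # TVM imports a traced graph

mod, params = relay.frontend.from_pytorch(scripted, [("input", list(x.shape))])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)  # "llvm" = local CPU

dev = tvm.cpu()
rt = graph_executor.GraphModule(lib["default"](dev))
rt.set_input("input", tvm.nd.array(x.numpy()))
rt.run()
out = rt.get_output(0).numpy()
```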

Cool! I think you could first convert the PyTorch model to TensorFlow via ONNX. I am not familiar with TensorFlow, so currently I could not predict what kind of...
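A minimal sketch of that path, assuming the `onnx` and `onnx-tf` packages are installed (operator coverage varies between converters, so some layers may need manual handling; the module below is a placeholder):

```python
import torch
import onnx
from onnx_tf.backend import prepare

model = torch.nn.Conv2d(3, 16, 3, padding=1).eval()  # placeholder module
x = torch.randn(1, 3, 256, 256)

# Step 1: PyTorch -> ONNX
torch.onnx.export(model, x, "model.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)

# Step 2: ONNX -> TensorFlow SavedModel via onnx-tf
tf_rep = prepare(onnx.load("model.onnx"))
tf_rep.export_graph("model_tf")  # writes a SavedModel directory
```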

We do not support StyleGAN for now. Maybe you could check [mit-han-lab/anycost-gan](https://github.com/mit-han-lab/anycost-gan) instead. @anguoyang

Hi! I am wondering which model you used. I think there should be some latency reduction, as suggested in our [paper](https://arxiv.org/abs/2003.08936). We've also released the [code](https://github.com/mit-han-lab/gan-compression/blob/master/latency.py) for measuring the...
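If you just want a quick sanity check before running our script, something like the following compares two models with `torch.utils.benchmark`; both modules below are placeholders standing in for the original and compressed generators:

```python
import torch
from torch.utils import benchmark

full = torch.nn.Conv2d(3, 3, 9, padding=4).eval()        # placeholder "original"
compressed = torch.nn.Conv2d(3, 3, 3, padding=1).eval()  # placeholder "compressed"
x = torch.randn(1, 3, 256, 256)

for name, net in [("original", full), ("compressed", compressed)]:
    timer = benchmark.Timer(stmt="net(x)", globals={"net": net, "x": x})
    print(name, timer.timeit(20))                        # mean time over 20 runs
```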

We have released the code for running our model on a Jetson Nano with a pre-built TVM binary in [nano_demo](https://github.com/mit-han-lab/litepose/tree/main/nano_demo). To convert the torch model to a TVM binary, you may need to...
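A rough sketch of the export/load step (placeholders again, including the file name; on a Jetson Nano you would pick an aarch64 or CUDA target and build either on-device or via cross-compilation):

```python
import torch
import tvm
from tvm import relay
from tvm.contrib import graph_executor

model = torch.nn.Conv2d(3, 16, 3, padding=1).eval()       # placeholder module
x = torch.randn(1, 3, 256, 256)
mod, params = relay.frontend.from_pytorch(
    torch.jit.trace(model, x), [("input", list(x.shape))])
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)  # pick a Nano target here

lib.export_library("litepose.so")                         # serialize the binary

loaded = tvm.runtime.load_module("litepose.so")           # later, on the device
rt = graph_executor.GraphModule(loaded["default"](tvm.cpu()))
```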

> https://github.com/ermongroup/ddim/blob/main/functions/ckpt_util.py#L12 The CelebA model's resolution seems different.

Hi, could you tell me which system and device you are using so that I can figure out what is happening? Also, you could try the example in [Colab](https://colab.research.google.com/github/lmxyy/sige/blob/main/example.ipynb) first. This...