Owen

17 issues by Owen

Some of the outputs generated by your `Face Portrait v2` model have a very good anime outline: ![image](https://user-images.githubusercontent.com/21029719/173049330-7db7aaed-e058-4b58-b735-55fb17f6f3da.png) ![image](https://user-images.githubusercontent.com/21029719/173049395-7ef6b6cf-756e-4d78-907c-f60fe662e009.png) and ![image](https://user-images.githubusercontent.com/21029719/173049432-e1cbe79a-7cd0-47b9-8db1-b973f6324a2a.png) ![image](https://user-images.githubusercontent.com/21029719/173049445-cc0aaac0-44d2-4d19-92e9-b86bf350d759.png) How can I achieve this? I tried to redo...
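
For anyone trying to reproduce this, here is a minimal inference sketch, assuming the `generator` and `face2paint` torch.hub entry points published by the bryandlee/animegan2-pytorch repo; `portrait.jpg` is a placeholder path:

```python
# Minimal sketch, assuming the torch.hub entry points from
# bryandlee/animegan2-pytorch; "portrait.jpg" is a placeholder path.
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.hub.load(
    "bryandlee/animegan2-pytorch:main", "generator",
    pretrained="face_paint_512_v2", device=device,
)
face2paint = torch.hub.load(
    "bryandlee/animegan2-pytorch:main", "face2paint",
    size=512, device=device, side_by_side=False,
)

img = Image.open("portrait.jpg").convert("RGB")
out = face2paint(model, img)  # returns a PIL image with the anime style applied
out.save("portrait_anime.png")
```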

When I colorize some photos, I find the colorization model tends to color objects red! For example: ![img_1](https://user-images.githubusercontent.com/21029719/84871626-5cc0df80-b0b3-11ea-83ca-f6d6ee0ae10d.png) ![img_14](https://user-images.githubusercontent.com/21029719/84871646-621e2a00-b0b3-11ea-9b0a-9c7c314bc3fe.png) ![img_624](https://user-images.githubusercontent.com/21029719/84871660-65b1b100-b0b3-11ea-9388-cc8a165bf248.png) Something is wrong here!
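
To make the bias measurable, here is a small hedged diagnostic that compares mean RGB channel values of the colorized outputs; the file names are placeholders for the images above:

```python
# Hedged diagnostic sketch: compare mean channel values to quantify the
# reported red bias; file names are placeholders for the images above.
import numpy as np
from PIL import Image

for path in ["img_1.png", "img_14.png", "img_624.png"]:
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    r, g, b = rgb.mean(axis=(0, 1))
    print(f"{path}: R={r:.1f} G={g:.1f} B={b:.1f} (red excess: {r - (g + b) / 2:+.1f})")
```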

Has anyone trained PyTorch-YOLOv3 on pedestrian detection datasets (e.g., Caltech)? Or trained the network on just the person class of the VOC or COCO datasets?
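
For the person-only route, one hedged approach is to filter the COCO annotations down to the `person` category and regenerate YOLO-format labels. A sketch using the pycocotools API; the paths are placeholders:

```python
# Sketch: export YOLO-format labels for only COCO's "person" class.
# Paths are placeholders; the pycocotools calls are the library's real API.
from pycocotools.coco import COCO

coco = COCO("annotations/instances_train2017.json")
person_id = coco.getCatIds(catNms=["person"])[0]

for img_id in coco.getImgIds(catIds=[person_id]):
    info = coco.loadImgs(img_id)[0]
    anns = coco.loadAnns(
        coco.getAnnIds(imgIds=img_id, catIds=[person_id], iscrowd=False)
    )
    w, h = info["width"], info["height"]
    lines = []
    for a in anns:
        x, y, bw, bh = a["bbox"]  # COCO boxes are [x_min, y_min, width, height]
        # YOLO format: class x_center y_center width height, normalized to [0, 1]
        lines.append(
            f"0 {(x + bw / 2) / w:.6f} {(y + bh / 2) / h:.6f} {bw / w:.6f} {bh / h:.6f}"
        )
    label_name = info["file_name"].rsplit(".", 1)[0] + ".txt"
    with open(f"labels/{label_name}", "w") as f:
        f.write("\n".join(lines))
```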

When I train YOLOv3 on COCO, it warns: "/pytorch/aten/src/ATen/native/IndexingUtils.h:20: UserWarning: indexing with dtype torch.uint8 is now deprecated, please use a dtype torch.bool instead." There are so many of these warnings! How to...
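
Two hedged ways to deal with the flood: fix the masks at the source (the usual cause is a `ByteTensor` mask used for advanced indexing; the actual mask names in the repo's `build_targets()` may differ), or filter just this one warning:

```python
import warnings
import torch

# Root-cause fix: build masks as torch.bool instead of torch.uint8, so
# advanced indexing no longer triggers the deprecation warning.
obj_mask = torch.zeros(4, 3, 13, 13, dtype=torch.bool)  # was a ByteTensor
scores = torch.randn(4, 3, 13, 13)
_ = scores[obj_mask]  # indexes silently with a bool mask

# Stopgap: suppress only this specific warning instead of all UserWarnings.
warnings.filterwarnings(
    "ignore", message="indexing with dtype torch.uint8 is now deprecated"
)
```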

I edited faces along the Eyeglasses, Gender, Hair Color, Pose, and Smile directions using MODEL_ZOO/StyleGAN2/[ffhq-1024x1024](https://mycuhk-my.sharepoint.com/:u:/g/personal/1155082926_link_cuhk_edu_hk/EX0DNWiBvl5FuOQTF4oMPBYBNSalcxTK0AbLwBn9Y3vfgg?e=Q0sZit&download=1) ![image](https://user-images.githubusercontent.com/21029719/132298098-9299146a-4a18-4455-8c0c-686318e08597.png) ![image](https://user-images.githubusercontent.com/21029719/132298247-c029fe30-902b-44b9-9eb0-c4fc36d2eae7.png) The results show that some attributes/directions are highly correlated with each other.
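
One hedged way to reduce that entanglement is conditional manipulation in the InterFaceGAN style: project the correlated direction out of the one you want to edit. The direction vectors below are placeholders:

```python
# Sketch of conditional manipulation: remove the component of `primal`
# that lies along a correlated direction `cond`. Vectors are placeholders.
import numpy as np

def orthogonalize(primal: np.ndarray, cond: np.ndarray) -> np.ndarray:
    primal = primal / np.linalg.norm(primal)
    cond = cond / np.linalg.norm(cond)
    out = primal - primal.dot(cond) * cond
    return out / np.linalg.norm(out)

smile_dir = np.random.randn(512)   # placeholder for a learned direction
gender_dir = np.random.randn(512)  # placeholder for a correlated direction
smile_only = orthogonalize(smile_dir, gender_dir)  # edit smile with less gender drift
```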

Hi, thank you for your excellent work. So quick! I tested it and found something wrong with the GroundingDINO detection result. Here it is: Input image: ![13201678780389_...

Excellent work; the tagging quality is a big improvement over BLIP's! Nice! ![ram_grounded_sam](https://github.com/xinyu1205/Recognize_Anything-Tag2Text/assets/21029719/a2c13a52-e761-4b97-829c-c9d64bf49aad) For this image from the project homepage, the RAM result you show and call out includes the `lamp` and `door` tags, but the result from my own run does not: ![image](https://github.com/xinyu1205/Recognize_Anything-Tag2Text/assets/21029719/24415f62-b3a8-4dfb-a15e-47f9c42c5fce) What could be causing this?

Very impressive work! So I tested it locally and found that a video of the same type looks somewhat worse than your demo. The source video, [child-runs-barefoot-on-grass-park-joyful](https://www.shutterstock.com/zh-Hant/video/clip-1100574563-child-runs-barefoot-on-grass-park-joyful), comes from **shutterstock** just like your demo. The downloaded video is in webm format by default; I converted it to libx264-based MP4 with FFmpeg. https://github.com/sczhou/ProPainter/assets/21029719/35e44553-9789-40b9-a1b2-24af7ae8ae51 I used your mask directly, since the video dimensions are identical. But compared with your demo, the result is much worse: the masked region has clearly visible blurry blocks. Result video: https://github.com/sczhou/ProPainter/assets/21029719/81fb9a4b-d8f5-43d2-8557-e203b03b68ac Is there any way to improve this?
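
For reference, the webm-to-MP4 step above can be reproduced as below; a lossy re-encode can itself introduce blur, so a low CRF is worth trying. The FFmpeg flags are standard options and the file names are placeholders:

```python
# Sketch of the webm -> libx264 MP4 conversion described above.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "input.webm",
        "-c:v", "libx264",      # encode video with x264
        "-crf", "18",           # near-lossless; higher CRF means more compression artifacts
        "-pix_fmt", "yuv420p",  # widely compatible pixel format
        "output.mp4",
    ],
    check=True,
)
```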

### Describe the bug

When trying to accelerate ControlNet inference based on Stable Diffusion XL using OneDiff in ComfyUI, I found that modifying the end_percent parameter of the ControlNet module...

Request-discussion
Response-need_minites
sig-comfyui
sig-compiler

### Describe the bug

While testing DeepCache-accelerated inference, I encountered an error after changing the inference size to 720*960. However, when I modified the...

Request-bug
Response-need_days
Response-important
sig-optimzation-alg
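
Regarding the 720*960 DeepCache error above: a quick sanity check is whether the size is compatible with the VAE's x8 downsampling, a common cause of resolution errors in Stable Diffusion pipelines. 720*960 actually passes the x8 check, so a stricter multiple or shape-specialized compilation may be involved; this is only a guess:

```python
# Hedged sanity check: SD-style pipelines usually require sizes divisible
# by 8 (the VAE downsampling factor); some components want multiples of 64.
def check_size(width: int, height: int, multiple: int = 8) -> None:
    for name, value in (("width", width), ("height", height)):
        if value % multiple:
            raise ValueError(f"{name}={value} is not a multiple of {multiple}")

check_size(720, 960)  # passes the x8 check (720 = 8*90, 960 = 8*120)

try:
    check_size(720, 960, multiple=64)
except ValueError as err:
    print(err)  # width=720 is not a multiple of 64
```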