OmniParser crashes after processing 7 images.
```
2024-12-04 23:44:04 finish processing
2024-12-04 23:44:04
2024-12-04 23:44:04 image 1/1 /usr/src/app/imgs/saved_image_demo.png: 384x640 122 0s, 31.8ms
2024-12-04 23:44:04 Speed: 3.9ms preprocess, 31.8ms inference, 2.3ms postprocess per image at shape (1, 3, 384, 640)
2024-12-04 23:44:33 finish processing
2024-12-04 23:44:33
2024-12-04 23:44:33 image 1/1 /usr/src/app/imgs/saved_image_demo.png: 384x640 119 0s, 31.7ms
2024-12-04 23:44:33 Speed: 4.1ms preprocess, 31.7ms inference, 2.1ms postprocess per image at shape (1, 3, 384, 640)
2024-12-04 23:45:10 /usr/src/app/entrypoint.sh: line 8: 7 Killed python ./gradio_demo.py
```
Why does this occur? I am running OmniParser locally via Docker.
Just guessing, but it looks like the container is running out of memory. You can confirm with `docker stats <container_id>`. Try increasing the Docker memory limit, e.g. `--memory="4g"`.
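If you want a record of memory over time rather than watching `docker stats` interactively, a minimal sketch like the one below logs the container's memory usage every few seconds, so you can see whether it climbs steadily toward the limit before the kill. It assumes Docker is available on the host; `CONTAINER` is a placeholder for your actual container name or ID:

```python
import subprocess
import time

CONTAINER = "omniparser"  # placeholder: replace with your container name or ID

# Poll `docker stats` in non-streaming mode and print memory usage over time.
# If usage rises steadily with each processed image until the process is
# killed, the crash is consistent with hitting the container's memory limit.
while True:
    usage = subprocess.run(
        ["docker", "stats", "--no-stream", "--format", "{{.MemUsage}}", CONTAINER],
        capture_output=True,
        text=True,
        check=True,
    ).stdout.strip()
    print(f"{time.strftime('%H:%M:%S')} {usage}")
    time.sleep(5)
```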
What do I do? Is there any way to control this?
@abrichr Sir, I would greatly appreciate your assistance with this issue. I have been stuck on it for quite some time now. Despite carefully following all the instructions in #52, my setup continues to run out of memory (OOM) after processing 3-4 images, even on a g4dn.xl EC2 instance.
I also tried enabling swap, but after doing so, the instance stopped sending or receiving requests. Any guidance on how to resolve this would be immensely helpful. Thank you in advance.
@techsparkling from your screenshot it appears you are running multiple containers. Can you please clarify exactly what steps you took to arrive at this situation? As it stands, you haven't provided enough information to resolve your issue. Please be as detailed as possible; otherwise it will be impossible for others to help you.
@abrichr No, sir, there is only one container running.
I followed the steps mentioned in #52:
- Cloned the GitHub repository from here.
- Created a `.env` file and added the required variables.
- Executed `deploy.py start`.
- I verified that the GitHub Actions ran successfully.
After that, I used `client.py` to send images. However, after sending 5 images, the model crashes with an out-of-memory (OOM) error.
I also had the same issue. It ran fine once I upgraded from a 32 GB to a 128 GB RAM server; it then stabilised at 90 GB of RAM utilisation.
I think there is a memory leak somewhere.
I found that when I processed images one by one using `omni_parser`, the memory usage kept rising.
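For what it's worth, a sketch like the one below can help confirm whether memory really grows per image and whether explicitly releasing caches helps. It assumes a PyTorch-based pipeline; `parse_image` is a hypothetical stand-in for whatever OmniParser call you invoke per image, and `psutil` must be installed:

```python
import gc
import os

import psutil
import torch

process = psutil.Process(os.getpid())


def rss_mb() -> float:
    """Resident memory of this process in megabytes."""
    return process.memory_info().rss / 1024 ** 2


def parse_image(path: str):
    """Hypothetical placeholder: replace with the actual per-image
    OmniParser call (e.g. whatever gradio_demo.py or client.py invokes)."""
    ...


image_paths = ["imgs/saved_image_demo.png"]  # replace with your own images

for i, path in enumerate(image_paths, start=1):
    with torch.no_grad():          # avoid keeping autograd graphs alive
        result = parse_image(path)
    del result                     # drop references to per-image outputs
    gc.collect()                   # free Python-side garbage
    if torch.cuda.is_available():
        torch.cuda.empty_cache()   # release cached GPU blocks back to the driver
    print(f"after image {i}: RSS = {rss_mb():.1f} MB")
```

If RSS still climbs steadily even with the explicit cleanup between images, that points to references being retained somewhere in the processing loop (for example accumulated results or UI state) rather than normal allocator caching.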