[BUG] The Sign Language Recognition example runs very slowly on my desktop with an OAK-D PoE
Describe the bug Hi, thanks for sharing the Sign Language Recognition project. I ran hand_tracker_asl.py on my Ubuntu desktop with my OAK-D PoE device, but the frame rate is very low; the display only refreshes once every few seconds. How can I improve this? Thanks a lot!
To Reproduce Steps to reproduce the behavior:
- Run python3 hand_tracker_asl.py with an OAK-D PoE device
- Observe the very low frame rate
Hello @JimXu1989, is the problem low FPS, or the ~1 second delay you are experiencing? Since you are using the PoE device, I wonder whether your network connection throughput is the bottleneck for the frames. Are you connected to the LAN via WiFi? When I had a weak signal I experienced similar issues. Could you check that out? Thanks, Erik
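In case it helps to tell the two apart, here is a rough host-side sketch for measuring both received FPS and per-frame latency. It is illustrative only, not taken from hand_tracker_asl.py; the stream name and preview size are arbitrary, and it assumes the standard depthai-python API (including dai.Clock.now() for latency).

```python
import time
import depthai as dai

# Minimal pipeline: color preview streamed to the host over PoE/XLink
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 360)
cam.setInterleaved(False)
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("preview", maxSize=4, blocking=False)
    frames, t0 = 0, time.monotonic()
    while True:
        frame = q.get()
        # Latency: device frame timestamp vs. host clock (same time base)
        latency_ms = (dai.Clock.now() - frame.getTimestamp()).total_seconds() * 1000
        frames += 1
        elapsed = time.monotonic() - t0
        if elapsed >= 1.0:
            print(f"FPS: {frames / elapsed:.1f}  latency: {latency_ms:.0f} ms")
            frames, t0 = 0, time.monotonic()
```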
So one thing that may help here is to port the host-side logic of the sign-language example into the Script node. That way the per-frame back-and-forth doesn't have to cross the network and runs on the device instead (a rough sketch of the idea is below).
Thoughts?
Thanks, Brandon
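For illustration, a minimal sketch of the general pattern, assuming a generic NeuralNetwork node; the blob path, stream name, and pass-through script are placeholders, not the actual hand_tracker_asl.py port.

```python
import depthai as dai

pipeline = dai.Pipeline()

# Color camera feeding an on-device network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(128, 128)
cam.setInterleaved(False)

# On-device neural network (blob path is a placeholder, not the ASL model)
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("model.blob")
cam.preview.link(nn.input)

# Script node: runs Python on the device, so per-frame post-processing
# doesn't need a round trip over Ethernet
script = pipeline.create(dai.node.Script)
nn.out.link(script.inputs["nn_in"])
script.setScript("""
while True:
    nn_data = node.io['nn_in'].get()   # NN result arrives on-device
    # ... decode / post-process here instead of on the host ...
    node.io['host_out'].send(nn_data)  # ship only the final result to the host
""")

# Only the small result message crosses the network to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("result")
script.outputs["host_out"].link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("result", maxSize=4, blocking=False)
    while True:
        msg = q.get()               # small result message, not full frames
        print("got result:", msg)
```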
I'm also having a performance issue with the OAK-D PoE device. I've attached the log_system_information.json file (renamed to .json.log). I'm not sure how I would go about "porting the host-side logic of the example into the scripting node", otherwise I would try that. My desktop computer is hard-wired at gigabit to the same network switch that the OAK-D is plugged into. log_system_information.json.log
Hi @chrishem ,
Sorry about the trouble. Because Gigabit Ethernet (1,000 Mbps PHY) has lower throughput than USB3 (5,000 Mbps PHY) and significantly higher latency, many of the examples will visualize a bit slowly over PoE. The device itself can still perform almost identically, but visualizations and/or pipelines that involve going to/from the host computer will be slower because of the lower throughput and higher latency (some back-of-the-envelope numbers are below).
As for porting to the Script node - that is something we would do. On that note - is there a specific example or set of examples you would like to see run faster? We can prioritize porting those.
Thanks, Brandon
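To put rough numbers on that, assuming uncompressed 1080p BGR frames (real pipelines often stream smaller previews or encoded video, so treat these as theoretical ceilings only):

```python
# Theoretical ceiling on uncompressed 1080p BGR frame rate per link type
frame_bytes = 1920 * 1080 * 3            # ~6.2 MB per uncompressed BGR frame

gige_bytes_per_s = 1_000_000_000 / 8     # 1 Gbps PHY -> 125 MB/s
usb3_bytes_per_s = 5_000_000_000 / 8     # 5 Gbps PHY -> 625 MB/s

print(f"GigE max ~{gige_bytes_per_s / frame_bytes:.0f} fps")   # ~20 fps
print(f"USB3 max ~{usb3_bytes_per_s / frame_bytes:.0f} fps")   # ~100 fps
```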
Hi @chrishem and @JimXu1989,
I just submitted a PR that improves the DepthAI demo performance - https://github.com/luxonis/depthai/pull/433
The approach used there can also be applied to other examples / experiments we have; we'll work on applying it to those as well.