alercelik
Sorry for the late response. While creating an ImageSender, what is provided as the argument is the ip:port of the Hub. Besides, I could not see any other option to specify the port for...
If I am not wrong, binding means "allocating a **local** port that another host connects to", whereas in connect() you provide the destination's port. So, what you provide in...
There is also a random-port option. It would be really helpful if both options were available while creating an ImageSender.
```python
self.zmq_socket.bind_to_random_port('tcp://*', min_port=6001, max_port=6004, max_tries=100)
```
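A minimal sketch of how both options could be exposed when creating the sending socket (this is not imagezmq's actual API; the helper name and port range are just illustrative):
```python
import zmq

def make_sender_socket(context, port=None, min_port=6001, max_port=6004):
    """Hypothetical helper: bind to a fixed port if given, else to a random one."""
    socket = context.socket(zmq.PUB)
    if port is not None:
        # Fixed port chosen by the caller.
        socket.bind(f'tcp://*:{port}')
        chosen_port = port
    else:
        # Let pyzmq pick a free port in the requested range.
        chosen_port = socket.bind_to_random_port(
            'tcp://*', min_port=min_port, max_port=max_port, max_tries=100)
    return socket, chosen_port
```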
Did you check out standard Python logging features? https://stackoverflow.com/questions/6386698/how-to-write-to-a-file-using-the-logging-python-module
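For example, a minimal setup that writes log messages to a file with the standard library (the file name and format string are just placeholders):
```python
import logging

# Write all INFO-and-above messages to a file instead of the console.
logging.basicConfig(
    filename='app.log',
    level=logging.INFO,
    format='%(asctime)s %(levelname)s %(name)s: %(message)s',
)

logging.info('message from sender %s', 'camera-1')
```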
Also, I could not find a way to set the interpolation method, nor could I see the default one anywhere in the code. Please let me know if anyone finds it.
What batch size did you use? If you used a batch size of 1, you can easily get low performance.
We were having the low GPU utilization problem as well. We exported YOLOv7 from PyTorch to TensorRT FP32 (batch-size=32), created 2 setups with the same .engine (or .plan) file, 1 with and...
During perf_client runs, GPU utilization is around 100%. There might be a bug or a setting to enable full utilization during standard inference. We did not do any further research.
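For context, a batched request during standard inference looks roughly like the sketch below, using Triton's Python HTTP client. The model name, tensor names, and shapes here are placeholders, not the actual deployment:
```python
import numpy as np
import tritonclient.http as httpclient

# Placeholder model/tensor names and shapes; adjust to the deployed config.pbtxt.
client = httpclient.InferenceServerClient(url='localhost:8000')

batch = np.random.rand(32, 3, 640, 640).astype(np.float32)  # batch-size=32

infer_input = httpclient.InferInput('images', list(batch.shape), 'FP32')
infer_input.set_data_from_numpy(batch)

response = client.infer(model_name='yolov7', inputs=[infer_input])
detections = response.as_numpy('output')
print(detections.shape)
```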
Thanks for the answer. However, my models are independent of each other and each model will work on different data.
Thanks, I did read the documentation beforehand. I was just trying to understand whether there is any high-level logic other than round-robin-like scheduling with dynamic batching...