Changing inference model
I have been trying to replace the current person-vehicle-bike-detection-crossroad-0078 model with person-vehicle-bike-detection-crossroad-1016, as both have a similar use. But I am facing a problem where the output shows no bounding boxes or class labels; instead, it shows an undefined label with a confidence percentage where an object is expected to be.

Can you please guide me on how to deal with the GStreamer pipeline, or point me to proper documentation for other, similar model-changing scenarios?
@nnshah1, can you help?
@varunjain3 When switching the model, did you also modify the model-proc file?
https://github.com/OpenVisualCloud/Smart-City-Sample/blob/master/analytics/object/models/object_detection_2020R2/1/person-vehicle-bike-detection-crossroad-0078.json
The output layer name for person-vehicle-bike-detection-crossroad-1016 looks to be "653", based on:
https://download.01.org/opencv/2020/openvinotoolkit/2020.4/open_model_zoo/models_bin/3/person-vehicle-bike-detection-crossroad-1016/FP32/person-vehicle-bike-detection-crossroad-1016.xml
Please try changing:
https://github.com/OpenVisualCloud/Smart-City-Sample/blob/51ffca882c843c81bd2b382131de27a507633677/analytics/object/models/object_detection_2020R2/1/person-vehicle-bike-detection-crossroad-0078.json#L10
To: "layer_name": "653"
Dear @nnshah1, thanks a lot for the help. This worked exactly as we wanted it to.
But I faced another error when trying to replace the person detection model person-detection-retail-0013 with person-detection-retail-0002 in the stadium scenario. Judging from the CPU usage, the model was running; in fact, the svcq-counting stats were also showing numbers, yet no bounding boxes appeared on the resulting video. I also checked the layer_name this time; it is the same in both models. Can you help me understand where I am going wrong?
Other than this, could you please explain where the outputs of the inferences are stored? Per the documentation, they should be stored in some rec folder in the analytics container in a .json file, but I am not able to locate it.
Further, how could one add another model in series, e.g. the person-reidentification model, in the svcq pipeline?
I wasn't able to find a current person-detection-retail-0002; this seems to no longer be supported. Can you send me a pointer to the model? If the model is running, then again I suspect a model-proc related issue.
The inferences are sent from the pipeline to MQTT and then stored in the database here:
https://github.com/OpenVisualCloud/Smart-City-Sample/blob/51ffca882c843c81bd2b382131de27a507633677/analytics/mqtt2db/mqtt2db.py#L67
To add person-re-identification model to the pipeline, please see:
https://github.com/OpenVisualCloud/Smart-City-Sample/tree/51ffca882c843c81bd2b382131de27a507633677/analytics/entrance
This pipeline has both person detection and person re-identification. Note: it also includes custom logic to count people based on their re-id, which may or may not be useful for your use case.
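As a rough illustration of running two models in series with DL Streamer elements (a hedged sketch, not the exact entrance pipeline; the model paths and properties are assumptions):

```
gvadetect model=person-detection-retail-0013.xml ! \
  gvaclassify model=person-reidentification-retail-0079.xml object-class=person ! \
  gvametaconvert ! gvametapublish ! fakesink
```

Here gvadetect produces the person bounding boxes, and gvaclassify runs the second network on each detected region.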
@nnshah1 Here is a pointer to the folder for the model https://download.01.org/opencv/2020/openvinotoolkit/2020.4/open_model_zoo/models_bin/3/person-detection-retail-0002/
Since it is present in the latest Open Model Zoo directory, can I assume it is supported, or are there other criteria for a model to be supported?
Can you let me know what else should be changed in the model-proc with respect to this model, given that I checked and the layer_name is the same for both models?
As far as I can understand, the on_message function in Smart-City-Sample/analytics/mqtt2db/mqtt2db.py reads the inference results from some MQTT JSON payload. But I am not able to understand where this JSON is stored and in which container. Can we retrieve the logs of all the inferences at the end of a run, or perhaps mid-run?
Also, where should one make changes to alter the analytics (UI) being displayed? That is, how is the UI retrieving the stored inferences, and from where?
I'll take a quick look at the model. I believe I was mistaken above, the model is present here:
https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-retail-0002
The on_message callback reads the inference results from the MQTT topic (MQTT is a message broker). The analytics pipeline streams its results to the message broker in JSON format (one frame's results per message). The results are not stored in a file.
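For reference, a single frame's message typically looks roughly like this (a hedged sketch of the gvametaconvert JSON format; exact fields vary with pipeline and version):

```json
{
  "objects": [
    {
      "detection": {
        "bounding_box": {"x_min": 0.12, "y_min": 0.30, "x_max": 0.25, "y_max": 0.78},
        "confidence": 0.93,
        "label": "person",
        "label_id": 1
      }
    }
  ],
  "resolution": {"width": 1920, "height": 1080},
  "timestamp": 1602571234
}
```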
To make changes to the UI - the inference results are stored in the database:
https://github.com/OpenVisualCloud/Smart-City-Sample/blob/51ffca882c843c81bd2b382131de27a507633677/analytics/mqtt2db/mqtt2db.py#L92
To start prototyping, you can modify mqtt2db.py to print the analytics it receives and to modify the results being stored in the database (to get a sense of how the system works).
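To illustrate, a minimal debug hook you could drop into on_message (a hypothetical sketch; the actual signature and surrounding code in mqtt2db.py may differ):

```python
import json

def on_message(client, userdata, message):
    # Decode one frame's inference results from the MQTT payload
    try:
        results = json.loads(message.payload.decode("utf-8"))
    except ValueError:
        return  # ignore malformed payloads

    # Print what the pipeline published, before it is indexed into the database
    print(json.dumps(results, indent=2))
```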
If you need to change the visualization itself, that is done in
https://github.com/OpenVisualCloud/Smart-City-Sample/tree/51ffca882c843c81bd2b382131de27a507633677/cloud
I am using gvainference with a custom model, and it is able to detect the locations of objects. But I am facing a problem where the output does not show any bounding boxes on the frame. I have also updated the analytics.js file with our model's classes, but it is still not showing any bounding boxes.
@Gsarg18 you will need to add a JSON message to the frame, converting the detection output to the message format expected by the rest of the solution. Are you attaching JSON metadata?
Thank you Neelay, I am not attaching JSON metadata; I just add a GVA::RegionOfInterest to the frames. I also want to know whether it is possible to run two detection models in the pipeline.
If you are running two detectors across the whole frame, that should be possible now. Running a secondary detection (i.e., on top of a bounding box detected by a primary detector) is not currently directly supported, but it is a feature on the roadmap.
If you have GVA::RegionOfInterest and gvametaconvert in the pipeline, can you verify that the JSON data is well formed and as expected?
I would first compare it to a working case just to double-check whether any required fields are missing.
I have GVA::RegionOfInterest and gvametaconvert in the pipeline, but the JSON data is not formed as expected: the detection and confidence keys are missing from the JSON data.
I believe you will also need to add a detection tensor to the RegionOfInterest.
Another approach would be to add JSON metadata directly (via add_message), creating your own message to match the expected format (and removing gvametaconvert).
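A hedged sketch of that second approach in a gvapython callback (the message structure below is an assumption modeled on the detection format the rest of the solution expects):

```python
import json

class AddDetectionMessage:
    def process_frame(self, frame):
        # Build a message in the format the downstream components expect
        message = {
            "objects": [{
                "detection": {
                    "bounding_box": {"x_min": 0.1, "y_min": 0.1,
                                     "x_max": 0.3, "y_max": 0.5},
                    "confidence": 0.9,
                    "label": "person",
                    "label_id": 1
                }
            }]
        }
        # Attach the JSON directly; gvametaconvert can then be dropped
        frame.add_message(json.dumps(message))
        return True
```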
I am using a YOLOv3 model; it is still not working.
Can you provide more details (pipeline.json, gvapython code, DL Streamer version) and sample output?
If you are using gvainference (YOLOv3) + gvapython to add regions of interest + gvametaconvert, you should be quite close.
1. I am using DL Streamer version 2020.2.
2. I use the add_region(self, x, y, w, h, label: str = "", confidence: float = 0.0, normalized: bool = False) function to add a RegionOfInterest in the gvapython file.
3. Pipeline: "rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name="videoconvert" ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvainference ie-config=CPU_BIND_THREAD=NO model="{models[tire_detection_2020R2][1][network]}" model-proc="{models[tire_detection_2020R2][1][proc]}" name="detection" ! gvapython name="boundingbox" module="postproc_callbacks/bounding_box.py" class="BoundingBox" ! gvametaconvert name="metaconvert" ! queue ! gvametapublish name="destination" ! appsink name=appsink t. ! queue ! splitmuxsink max-size-time=60500000000 name="splitmuxsink""
4. Output message: [h:value, roi_type:label, x:value, y:value]
@Gsarg18
In the 2020.2 version of DL Streamer, in order for the JSON metadata to be added to the frame correctly, you will need to add a label_id in addition to calling add_region. Note this is not needed in later versions (specifically, I tried 2021.1).
```python
def process_frame(self, frame):
    # add_region attaches the bounding box, label, and confidence
    region = frame.add_region(0, 0, 100, 100, "BlueMonday", 1.0)
    # In 2020.2, label_id must be set explicitly for the detection
    # to appear in the converted JSON message
    region.detection()["label_id"] = 1
    return True
```
I confirmed this works with the current Smart Cities sample. As you already mentioned, you'll need to add your label into analytics.js as well for the bounding box to be displayed.
Thank you Neelay, now I am getting bounding boxes.