"GPU" name is not registered in the InferenceEngine
I am not able to run the Smart City inference model on a GPU device.
More details please. What GPU device do you use? Currently the sample only supports the VCAC-A accelerator. To support other GPU devices, you have to extend the sample to use a different base analytics image, for example, replacing xeon-ubuntu1804-analytics-gst with xeone3-ubuntu1804-analytics-gst under analytics/common. There are also other changes needed in the pipeline configuration to utilize VAAPI for decoding; see the sketch below.
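For illustration, a GPU pipeline in that image might look roughly like this (a sketch only, assuming the analytics image provides gstreamer-vaapi and the DL Streamer gvadetect element; the media and model paths are placeholders, not files from the sample):

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Sketch of a GPU pipeline: VAAPI hardware decode feeding a DL Streamer
# detection element that runs inference on the GPU device.
# "sample.mp4" and "model.xml" are placeholders.
pipeline = Gst.parse_launch(
    "filesrc location=sample.mp4 ! qtdemux ! h264parse ! vaapih264dec ! "
    "gvadetect model=model.xml device=GPU ! fakesink"
)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```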
Currently I am using the Intel® UHD Graphics 630 (CFL GT2) integrated GPU.
Does the xeone3-ubuntu1804-analytics-gst image support the Intel® UHD Graphics 630 (CFL GT2)? I am using an Intel® Core™ i7-8700 CPU @ 3.20GHz × 12.
You have to try it. It should work.
I have a branch here, but I have not yet been able to enable inference successfully. Something is still missing.
@xwu2git Do you have a system that reproduces the issue? We recently ran into an issue with Comet Lake GPUs and the XeonE3 container. I will send the details to you offline.
See issue #662 that I posted on the OVC Dockerhub repo, but it is specific to Comet Lake. The issue shows how to check for the OpenCL device.
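For reference, a quick way to check from inside the container whether the GPU plugin is visible (a sketch, assuming the pre-2022 openvino.inference_engine Python API that ships with these analytics images):

```python
import os
from openvino.inference_engine import IECore

# List the devices the Inference Engine can load plugins for.
# If "GPU" is missing here, the OpenCL runtime is not usable in the
# container, which matches the "GPU is not registered" error.
ie = IECore()
print("Available devices:", ie.available_devices)

# The GPU plugin also needs a DRI render node mapped into the container.
print("DRI nodes:", os.listdir("/dev/dri") if os.path.isdir("/dev/dri") else "none")
```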