
Results: 8 issues by jstumpin

As of now, it is either zero or N/A. I don't believe it's the former. For completeness (so we may have finer granularity by breaking down immunity category into vaccine-immunity,...

Official/formal reporting of Malaysia's AEFI cases is virtually non-existent: ![IMG_20221117_192928](https://user-images.githubusercontent.com/27394196/202434984-9c342a0c-6bc7-4803-ad06-82b0ac365fc8.jpg) compared to Singapore's: ![IMG_20221117_193004](https://user-images.githubusercontent.com/27394196/202435023-664c9b5a-1898-432f-a937-18918b98531e.jpg) (https://knollfrank.github.io/HowBadIsMyBatch/batchCodeTable.html). Does MOH/NPRA not sync it up against VAERS? Also, why haven't [aefi.csv](https://github.com/MoH-Malaysia/covid19-public/blob/main/vaccination/aefi.csv) and [aefi_serious.csv](https://github.com/MoH-Malaysia/covid19-public/blob/main/vaccination/aefi_serious.csv)...

Ever since the [Big update](https://github.com/marcoslucianops/DeepStream-Yolo/commit/07feae9509ffb581fa85f65bec653d1c1d001056) commit, we no longer have to build the engine via the TensorRT API. Empirically (tested on YOLOv5 & YOLOv8), conversion via **trtexec** yields identical performance &...
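For anyone trying the same route, a minimal sketch of the **trtexec** conversion (model file names and the install path are placeholders, not from the original; `trtexec` ships with TensorRT, typically under `/usr/src/tensorrt/bin`):

```shell
# Build an FP16 engine directly from an exported ONNX model.
# yolov8s.onnx / yolov8s.engine are hypothetical names -- use your own.
/usr/src/tensorrt/bin/trtexec \
    --onnx=yolov8s.onnx \
    --saveEngine=yolov8s.engine \
    --fp16
```

The resulting `.engine` file can then be pointed to from the DeepStream config in place of a TensorRT-API-built engine.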

I believe EmguCV, like OpenCV, does not provide GStreamer support out of the box. Previously, I successfully built/tested OpenCV v3.3.1 with GStreamer, but now I'm facing difficulty extending it...

As an extension to the [preliminary benchmark](https://github.com/NVIDIA-AI-IOT/yolov4_deepstream/issues/3#issuecomment-757589640) for [_tensorrt_yolov4_](https://github.com/NVIDIA-AI-IOT/yolov4_deepstream/tree/master/tensorrt_yolov4), batch inference performance is provided as follows:

| repo. | batch=1 | batch=2 | batch=4 | batch=8 |
| ------------- |...

How do we extend the [inference function](https://github.com/CaoWGG/TensorRT-YOLOv4/blob/master/src/trt.cpp#L204) to support batchSize > 1? For batched inputs, I'm using OpenCV's _blobFromImages_. It seems to work just fine when batchSize = 1 (using...
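For illustration, a minimal numpy sketch of the NCHW batch layout that `blobFromImages` produces (no mean subtraction or scaling applied; `blob_from_images` is a hypothetical stand-in, not the OpenCV call itself). The key point for batchSize > 1 is that the input binding must hold batch × C × H × W floats and the execution call must be passed the actual batch size:

```python
import numpy as np

def blob_from_images(images, size=(416, 416)):
    """Mimic the layout of cv2.dnn.blobFromImages: stack HxWxC images
    into a single float32 NCHW tensor (no mean/scale applied here)."""
    batch = []
    for img in images:
        assert img.shape[:2] == size, "resize beforehand in this sketch"
        batch.append(img.astype(np.float32).transpose(2, 0, 1))  # HWC -> CHW
    return np.stack(batch, axis=0)  # N x C x H x W

# Two dummy 416x416 BGR frames:
imgs = [np.zeros((416, 416, 3), np.uint8), np.ones((416, 416, 3), np.uint8)]
blob = blob_from_images(imgs)
print(blob.shape)  # (2, 3, 416, 416)
```

On the TensorRT side, the device input buffer then needs to be allocated for the maximum batch, with the blob copied in contiguously and the batch count passed to `IExecutionContext::execute(batchSize, buffers)`.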

Is there a quick and dirty way to print out the performance (FPS) for the `cpp` counterpart? https://github.com/prominenceai/deepstream-services-library/blob/master/docs/examples-diagnaostics-and-utilities.md#pipeline-with-source-meter-pad-probe-handler-and-component-queue-management Thanks in advance.
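Lacking a built-in meter on the `cpp` side, one quick-and-dirty option is a rolling-average FPS counter ticked from a per-frame callback. A Python sketch of the logic (generic, not the DSL meter PPH; trivially portable to C++ with `std::chrono` and a `std::deque`):

```python
import time
from collections import deque

class FpsMeter:
    """Rolling-average FPS probe: call tick() once per frame."""

    def __init__(self, window=30, clock=time.monotonic):
        self.clock = clock                    # injectable for testing
        self.stamps = deque(maxlen=window)    # timestamps of last N frames

    def tick(self):
        self.stamps.append(self.clock())

    def fps(self):
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0

# Simulate frames arriving every 30 ms -> ~33.3 FPS
fake_clock = iter(i * 0.03 for i in range(100))
meter = FpsMeter(window=30, clock=lambda: next(fake_clock))
for _ in range(60):
    meter.tick()
print(f"{meter.fps():.1f} FPS")
```

Calling `tick()` from a pad-probe handler and printing `fps()` every few seconds gives a serviceable readout without touching the pipeline itself.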

Hi folks, just a quick update on importing a fine-tuned RTMDet model from mmdetection into this repo. Here's how it goes: 1. Fine-tuning: install mmdetection on an RTX 50xx-series GPU: https://gitee.com/Wilson_Lws/MuseTalk-50Series-Adaptation/blob/master/README.md Import/export...