
Slow Inference Pipeline with M3U8 Format

Open emma-smashvision opened this issue 10 months ago • 3 comments

Search before asking

  • [x] I have searched the Inference issues and found no similar feature requests.

Question

Hi,

Introduction: While working with the inference pipeline, I ran into a significant performance problem when processing M3U8 (HTTP Live Streaming) inputs. Unlike MP4 files, which process smoothly, M3U8 inputs cause the pipeline to skip frames. Could this be investigated? M3U8 support matters for all live-streaming services, and I would really like to use it in my workflow as well.

I am not testing live; instead, I have a video that is served as a 25 fps livestream in M3U8 format. When I manually read and process the frames with OpenCV, everything works and looks fine. When I use the inference pipeline, it starts skipping a lot of the frames. I also tried different max_fps values (from 1 to 100), but the problem persists.
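A rough capacity check may help frame the question (the numbers below are hypothetical, since the model's sustained throughput isn't stated in the report): if the stream delivers 25 fps but inference only keeps up at, say, 10 fps, a real-time pipeline has to discard the difference no matter what max_fps is set to.

```python
stream_fps = 25   # the m3u8 stream's frame rate (from the report above)
infer_fps = 10    # hypothetical sustained model throughput
dropped_fraction = max(0.0, 1 - infer_fps / stream_fps)
print(f"{dropped_fraction:.0%} of frames must be skipped")
```

Under these assumed numbers, 60% of frames would be skipped; raising max_fps cannot change that, since it caps throughput rather than raising it.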

Question: How can I make sure that none of the frames from my M3U8 stream are skipped when using the inference pipeline?

My code is the following:

self.pipeline = InferencePipeline.init_with_workflow(
    api_key=self.api_key,
    workspace_name=self.workspace_name,
    workflow_id=self.workflow_id,
    video_reference=video_url,
    max_fps=100,
    on_prediction=self.process_frame,
)
       
self.pipeline.start()
self.pipeline.join()
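If the pipeline treats the m3u8 URL as a live source and buffers incoming frames with a drop-oldest policy (an assumption about the root cause, not confirmed in this thread), skipping is the expected behaviour whenever inference runs slower than the stream's 25 fps. A minimal stand-alone simulation of that mechanism, with hypothetical rates and buffer size:

```python
from collections import deque

def simulate_drop_oldest(stream_fps=25, infer_fps=10, buffer_size=64, n_frames=1000):
    """Produce one frame per tick; consume at a slower inference rate; evict
    the oldest frame whenever the bounded buffer is full (drop-oldest policy)."""
    buf = deque()
    processed = dropped = 0
    consume_every = stream_fps / infer_fps  # ticks between inference steps
    next_consume = 0.0
    for tick in range(n_frames):
        if len(buf) == buffer_size:  # buffer full: evict the oldest frame
            buf.popleft()
            dropped += 1
        buf.append(tick)
        # the consumer keeps its own, slower pace
        while next_consume <= tick and buf:
            buf.popleft()
            processed += 1
            next_consume += consume_every
    return processed, dropped, len(buf)

processed, dropped, remaining = simulate_drop_oldest()
print(processed, dropped, remaining)
```

Under these assumed numbers the consumer only ever sees 40% of the produced frames, regardless of any max_fps cap, which matches the symptom described above; plain OpenCV reading shows no loss because it blocks on every frame instead of buffering and dropping.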

Any help would be appreciated!

With kind regards, Emma

Additional

No response

emma-smashvision avatar Apr 08 '25 11:04 emma-smashvision

Interesting extra information:

I have the same problem if I don't use init_with_workflow but the general init instead:

self.pipeline = InferencePipeline.init(
    model_id=model,
    video_reference=video_url,
    on_prediction=self.process_frame,
    api_key=self.api_key,
    confidence=0.5,
)

It again skips frames. I have no problem reading the stream with plain OpenCV VideoCapture, so the stream itself is fine. It's really inside the inference pipeline that things go wrong.

Could you help me out?

emma-smashvision avatar Apr 09 '25 11:04 emma-smashvision

Thanks for the report, we will take a look when we have some time. I have a suspect for the root cause, but I would need to find some time to investigate.

PawelPeczek-Roboflow avatar Apr 09 '25 17:04 PawelPeczek-Roboflow

Okay, thank you in advance! 💯

emma-smashvision avatar Apr 09 '25 17:04 emma-smashvision