Video output when using --frame-skip should include all frames
As identified in #70, using --frame-skip currently causes the skipped frames to be omitted from the resulting video output as well. This is undesirable, and will require some changes so that frame skipping is avoided while we are inside a motion event.
To accomplish this correctly, proper seeking needs to be added, since we need to go back and re-process the frames we skipped so that --time-before can be respected. Another issue arises when using --bounding-box, since we would need to interpolate the box across the skipped frames.
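For illustration, a rough sketch of what seeking backwards might look like with OpenCV's VideoCapture (the variable names here, e.g. `frames_before`, are placeholders, not the actual implementation):

```python
import cv2

cap = cv2.VideoCapture('input.avi')
frames_before = 30   # number of frames implied by --time-before (assumed)

current = int(cap.get(cv2.CAP_PROP_POS_FRAMES))
seek_target = max(0, current - frames_before)

# Seek backwards; with many codecs this lands on the nearest keyframe,
# which is part of why "proper" seeking support is needed here.
cap.set(cv2.CAP_PROP_POS_FRAMES, seek_target)
for _ in range(current - seek_target):
    ret, frame = cap.read()
    if not ret:
        break
    # re-process `frame` here: compute its mask without updating the
    # background model, draw the interpolated bounding box, and append
    # it to the event output
```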
A better and more robust solution might be to start processing every frame once motion is detected, and only use frame skipping while looking for the next event. Although this incurs a slight performance hit, encoding the output video is more computationally expensive than the background subtraction itself, so this approach has some merit.
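A minimal sketch of that scan loop, assuming OpenCV's MOG2 subtractor and a hypothetical `write_frame_to_event()` hook (the motion score and threshold below are purely illustrative):

```python
import cv2

def write_frame_to_event(frame):
    """Hypothetical hook that appends `frame` to the current event video."""

cap = cv2.VideoCapture('input.avi')
subtractor = cv2.createBackgroundSubtractorMOG2()
frame_skip = 4   # value of --frame-skip (assumed)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    mask = subtractor.apply(frame)
    # crude motion score: fraction of foreground pixels (threshold is a placeholder)
    in_motion_event = cv2.countNonZero(mask) / mask.size > 0.01
    if in_motion_event:
        # inside an event: process and write every frame, never skip
        write_frame_to_event(frame)
    else:
        # still searching for the next event: frame skipping is safe here
        for _ in range(frame_skip):
            cap.grab()   # grab() advances without fully retrieving the frame
```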
This might actually be easier if video output is first moved to a separate process as part of #52, alongside integration with PySceneDetect's VideoManager (pinned to v0.5.6.1 for now).
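If output does move to its own process, the scanning loop could hand frames to a writer over a queue; a rough sketch using multiprocessing (all names hypothetical, and assuming frames are numpy arrays written via OpenCV's VideoWriter):

```python
import multiprocessing as mp
import cv2

def video_writer_proc(queue, path, fourcc, fps, size):
    """Consume frames from the queue and encode them; exit on a None sentinel."""
    writer = cv2.VideoWriter(path, fourcc, fps, size)
    while True:
        frame = queue.get()
        if frame is None:   # sentinel: no more frames for this event
            break
        writer.write(frame)
    writer.release()

# Usage from the scanning process:
# queue = mp.Queue(maxsize=64)   # bounded, so the scanner can't outrun the encoder
# proc = mp.Process(target=video_writer_proc,
#                   args=(queue, 'event001.avi',
#                         cv2.VideoWriter_fourcc(*'XVID'), 30.0, (640, 480)))
# proc.start()
# ... queue.put(frame) for each event frame ...
# queue.put(None); proc.join()
```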
To go about this, when we seek backwards, all computations of the frame mask should use a learning rate of 0 so we don't update the background model. From that point forward, everything should proceed as normal.
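OpenCV's `BackgroundSubtractor.apply()` already accepts a `learningRate` argument, so freezing the model during re-processing could look something like this (a sketch, not the final implementation):

```python
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2()

def process_seeked_frame(frame):
    # learningRate=0: classify the frame against the existing background
    # model without updating it, so re-processing old frames does not
    # corrupt the model state built up so far.
    return subtractor.apply(frame, learningRate=0)

def process_live_frame(frame):
    # learningRate=-1 lets OpenCV pick the rate automatically, i.e. the
    # normal adaptive behaviour once we catch back up to the live position.
    return subtractor.apply(frame, learningRate=-1)
```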
This isn't fully supported yet using the default OpenCV output mode, but as of v1.5 it functions correctly when using the -m ffmpeg or -m copy flags.
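For example, an invocation along the lines of `dvr-scan -i input.mp4 --frame-skip 2 -m ffmpeg` (assuming `-i` as the input flag) should include every frame of each motion event in the output.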