Can your code achieve real-time video stitching?
Hello! I am very interested in your code. I currently need to do real-time video stitching, and in your introduction you mention that the code can achieve real-time stitching. If I use two videos as input, can the stitched result be output in real time? If so, roughly what frame rate can be achieved? And would you be able to provide test code for evaluating the video stitching results?
The source code in this repo was developed for simulation only. It is neither optimized nor pipelined to run on an FPGA at this point. I initially planned to run it on an embedded platform, but the project is currently dormant. Since I have not performed any testing on hardware, I cannot comment on the frame rates you can expect.
Please use the algorithm implementation as an inspiration for your project.
Does the Verilog code used for simulation have the potential for synthesis?
In its current state, no; the memory required would be too large. But there are avenues for optimization, after which synthesis should be possible.
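For context, the usual memory optimization for streaming image pipelines on FPGAs is to replace full-frame buffers with line buffers: a 3x3 convolution only needs the two previous rows on-chip, not the whole frame. The sketch below is not code from this repo; the module name and parameters are hypothetical and it only illustrates the technique.

```verilog
module window3x3 #(
    parameter PIXEL_W = 8,    // bits per grayscale pixel (assumed)
    parameter IMG_W   = 640   // image width in pixels (assumed)
)(
    input  wire                 clk,
    input  wire                 rst,
    input  wire                 pix_valid,  // one pixel per clock when high
    input  wire [PIXEL_W-1:0]   pix_in,
    output wire [9*PIXEL_W-1:0] window      // flattened 3x3 window
);
    // Only two previous rows are stored, instead of a whole frame.
    reg [PIXEL_W-1:0] line0 [0:IMG_W-1];   // previous row
    reg [PIXEL_W-1:0] line1 [0:IMG_W-1];   // row before that
    reg [$clog2(IMG_W)-1:0] col;

    // 3-pixel shift registers, one per window row.
    reg [PIXEL_W-1:0] r0 [0:2];  // top row of the window (oldest)
    reg [PIXEL_W-1:0] r1 [0:2];
    reg [PIXEL_W-1:0] r2 [0:2];  // bottom row (current)

    integer i;
    always @(posedge clk) begin
        if (rst) begin
            col <= 0;
        end else if (pix_valid) begin
            // Shift the window one column to the left...
            for (i = 2; i > 0; i = i - 1) begin
                r0[i] <= r0[i-1];
                r1[i] <= r1[i-1];
                r2[i] <= r2[i-1];
            end
            // ...and insert the new column from the line buffers.
            r0[0] <= line1[col];  // pixel from two rows above
            r1[0] <= line0[col];  // pixel from one row above
            r2[0] <= pix_in;      // incoming pixel

            // Age the line buffers at this column position.
            line1[col] <= line0[col];
            line0[col] <= pix_in;

            col <= (col == IMG_W - 1) ? 0 : col + 1;
        end
    end

    // 3x3 window, left-to-right within each row, top row first.
    assign window = {r0[2], r0[1], r0[0],
                     r1[2], r1[1], r1[0],
                     r2[2], r2[1], r2[0]};
endmodule
```

With this structure the on-chip storage drops from a full frame to two image rows, which is what makes the convolution stages a realistic synthesis target.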
If I use DDR3 to store all the data I need (image data, grayscale data, and convolution data), would synthesizing this code be feasible?
Yes, it would be feasible.
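To give a sense of what that involves: each data stream would need a small state machine that issues requests to the DDR3 controller's user interface. The sketch below shows a read-request unit; the `app_*` signal names follow the style of Xilinx's MIG user interface, but that choice of controller (and all names here) is an assumption for illustration, not part of this repo.

```verilog
module ddr3_pixel_fetch #(
    parameter ADDR_W = 28,   // DDR3 address width (assumed)
    parameter DATA_W = 128   // controller read-data width (assumed)
)(
    input  wire              clk,
    input  wire              rst,
    // Request side: one read address per handshake.
    input  wire              req_valid,
    input  wire [ADDR_W-1:0] req_addr,
    output wire              req_ready,
    // MIG-style user interface (signal names assumed).
    output reg  [ADDR_W-1:0] app_addr,
    output reg  [2:0]        app_cmd,
    output reg               app_en,
    input  wire              app_rdy,
    input  wire [DATA_W-1:0] app_rd_data,
    input  wire              app_rd_data_valid,
    // Returned pixel data.
    output wire [DATA_W-1:0] data_out,
    output wire              data_valid
);
    localparam CMD_READ = 3'b001;

    // A new request is accepted when no command is pending,
    // or when the pending command is being taken this cycle.
    assign req_ready = ~app_en | app_rdy;

    always @(posedge clk) begin
        if (rst) begin
            app_en <= 1'b0;
        end else begin
            if (req_valid && req_ready) begin
                // Launch a read; hold it until the controller is ready.
                app_addr <= req_addr;
                app_cmd  <= CMD_READ;
                app_en   <= 1'b1;
            end else if (app_rdy) begin
                app_en <= 1'b0;  // command accepted, nothing new queued
            end
        end
    end

    // Read data returns later, flagged by the controller.
    assign data_out   = app_rd_data;
    assign data_valid = app_rd_data_valid;
endmodule
```

Note that the write path and arbitration between the image, grayscale, and convolution streams would be additional work, and DDR3 read latency means the pipeline needs enough buffering (FIFOs) to hide it if you want a steady output rate.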