Can This Be Used in Real Time?
Can this algorithm be used to render a 2D video stream into 3D in real time?
This was amazing in its time, but if you're starting now, you're probably better off looking at DenseDepth or MiDaS for that.
Thank you so much! I'll look into those.
I'm only a novice. I checked out DenseDepth, and it's not apparent to me how I would get a 3D video output from it.
Look at RunwayML
Alright, I'll look at that now. Thanks again for your help!
Do you have an idea of which model could help? I wasn't able to find a model that converts video into 3D. The application I'm trying to build requires live 2D video from a camera to be rendered into 3D and then viewed on a VR headset.
It's possible to do that, but it would mean several fiddly pipeline steps and a massive GPU to get a decent framerate at the end of it. It's much easier to use a live feed from a depth camera for that, like a Kinect or RealSense.
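To make the "fiddly pipeline steps" concrete: after a monocular depth network (DenseDepth, MiDaS, etc.) produces a per-frame depth map, one of those steps is back-projecting each pixel into 3D using a pinhole camera model so the result can be rendered as a point cloud in VR. A minimal sketch of that step, assuming illustrative intrinsics (a real camera, especially a variable-zoom microscope camera, would need calibration):

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project an (H, W) depth map into an (H*W, 3) point cloud
    using a simple pinhole model. fx, fy, cx, cy are assumed intrinsics
    for illustration -- a real setup would need camera calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx  # back-project horizontally
    y = (v - cy) * z / fy  # back-project vertically
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Toy example: a flat 4x4 depth map, everything 1 m from the camera.
pts = depth_to_point_cloud(np.ones((4, 4)), fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(pts.shape)  # (16, 3)
```

Running this per frame, at headset refresh rates, on top of the depth network itself is where the GPU cost comes from; a depth camera hands you the depth map directly and skips the network entirely.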
Hmmm, alright. I should have access to some pretty beefy GPUs at my university if I have to go that route. The whole point of this project is to create a VR microscope, so I'm not sure a depth camera would be an option, since I was going to use a variable-zoom microscope camera. Thank you for the advice!