[SUGGESTION] provide 16bit or 12bit processed RGB arrays from PISP pipeline on Pi5

[Open] jlprojects opened this issue 2 years ago • 7 comments

This may not be possible, so feel free to close as necessary. It really depends on the bit depth at which the new image processing pipeline on the Pi 5 handles the image processing (raw conversion/debayering etc.). From a cursory read of the draft PISP datasheet, it looks like it could be possible.

Describe the solution you'd like
If the new PISP works at 12-bit or 16-bit depth on the raw data, it would be good to be able to get a 12- or 16-bit-per-channel developed image array into the Python application, taking full advantage of all the nice new features of the pipeline (temporal denoise, HDR, etc.) while allowing further processing at a higher bit depth.

Describe alternatives you've considered
My project is a cine film scanner... For it, I wrote a very basic raw processing routine (using Cython for performance) that works in 16 bits per channel, taking the 12-bit Bayer arrays from the HQ camera and applying gains, CCM and gamma; OpenCV is used to debayer the image in the middle of the raw processing (a rough sketch follows below). This produces images pretty close to the visible (8-bit) preview. Once the raw image is converted, it is sent on for further processing where 16-bit data is desired.

For example, correcting colour negative or badly faded film can require some extreme modification of the colour channels, which can lead to banding if 8-bit RGB data is used.
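
Roughly, the routine does something like the following. This is a simplified NumPy/OpenCV sketch rather than the actual Cython code; the Bayer order, the example gains and the identity CCM are placeholders, and the 12-bit input is assumed to be already unpacked into a uint16 array.

```python
import numpy as np
import cv2

def develop_raw(bayer12, gains=(2.0, 1.0, 1.8), ccm=None, gamma=2.2):
    """Develop an unpacked 12-bit Bayer array into 16-bit RGB (illustrative only)."""
    # Scale the 12-bit values into the full 16-bit range.
    img = bayer12.astype(np.uint16) << 4

    # Debayer with OpenCV (works on 16-bit input); the Bayer order depends on the sensor.
    rgb = cv2.cvtColor(img, cv2.COLOR_BayerRG2RGB).astype(np.float32)

    # White-balance gains per channel, then a 3x3 colour correction matrix.
    rgb *= np.array(gains, dtype=np.float32)
    if ccm is None:
        ccm = np.eye(3, dtype=np.float32)  # placeholder CCM
    rgb = rgb @ ccm.T

    # Gamma encode and return to 16-bit integers.
    rgb = np.clip(rgb, 0.0, 65535.0) / 65535.0
    rgb = (rgb ** (1.0 / gamma)) * 65535.0
    return rgb.astype(np.uint16)
```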

jlprojects avatar Nov 10 '23 13:11 jlprojects

Yes, it's a good idea, and I've certainly thought about it: you could get linear pixel data out of the pipeline and have enough bits to do any gamma or tone mapping later (not dissimilar to your own use case). And it shouldn't be that hard, because we have the 16-bit pixel values. The obstacle at the moment is that V4L2 doesn't define 16-bit RGB pixel formats, so we'd have to start by getting that approved and upstreamed into the Linux kernel - so the whole thing is just a little bit more involved than one might have expected!

davidplowman avatar Nov 10 '23 14:11 davidplowman

So near, yet so far! Thanks for the info - I was sure it had been thought of before. Good to have it confirmed that the pipeline works at 16-bit, so there's hope for the future. Is there no scope for intercepting the 16-bit values, bypassing V4L2? I'm out of my depth here, but hoping that a simple "read 16-bit RGB into a NumPy (or C) array for further processing" might be possible... :grin:

jlprojects avatar Nov 10 '23 16:11 jlprojects

Unfortunately no, V4L2 is the fundamental kernel driver framework, so there's no way round it. But it shouldn't be that hard, and I think we're fairly motivated to do this at some point for Pi 5.

davidplowman avatar Nov 10 '23 16:11 davidplowman

Fair enough. No way of getting in between the PISP and V4L2. :frowning_face: Never mind, I'll look out for 16-bit image support in the kernel at some point.

jlprojects avatar Nov 10 '23 17:11 jlprojects

I have added 48-bpp RGB support in our downstream kernel and libcamera here. Eventually this will get upstreamed, but you can test it out if you can build the kernel and libcamera trees yourself.

However, note that this output is 48-bpp RGB after gamma, so is not linear data.
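
If and when this gets exposed through Picamera2, usage might look something like the sketch below. The format name "RGB161616" is an assumption based on libcamera's naming convention and may differ (or not be available) in your build; the size is the HQ camera's full resolution.

```python
from picamera2 import Picamera2

picam2 = Picamera2()
# "RGB161616" is an assumed name for the new 48-bpp RGB format.
config = picam2.create_still_configuration(
    main={"format": "RGB161616", "size": (4056, 3040)})
picam2.configure(config)
picam2.start()

# capture_array() should then return a (height, width, 3) uint16 array.
rgb16 = picam2.capture_array("main")
print(rgb16.dtype, rgb16.shape)
picam2.stop()
```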

naushir avatar Dec 07 '23 15:12 naushir

> However, note that this output is 48-bpp RGB after gamma, so is not linear data.

Of course, gamma is easily disabled in the tuning file!
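
For anyone who wants to try that, something along these lines should be close, using Picamera2's tuning-file helpers. The file name assumes the HQ camera (imx477), and the "rpi.contrast" algorithm name, the ce_enable flag and the identity gamma_curve are assumptions about the tuning layout; check your actual tuning file.

```python
from picamera2 import Picamera2

# Load the camera's tuning file (imx477 = HQ camera) and flatten the gamma curve.
tuning = Picamera2.load_tuning_file("imx477.json")
contrast = Picamera2.find_tuning_algo(tuning, "rpi.contrast")
contrast["ce_enable"] = 0                       # assumed: turn off adaptive contrast enhancement
contrast["gamma_curve"] = [0, 0, 65535, 65535]  # assumed: identity curve, i.e. no gamma

# Start the camera with the modified tuning.
picam2 = Picamera2(tuning=tuning)
```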

davidplowman avatar Dec 07 '23 16:12 davidplowman

Wow, thanks for your work on this! Looking forward to trying it out, probably when it lands in Raspberry Pi OS, though I may investigate compiling it myself so I can play with it during the holidays. Thanks again!

jlprojects avatar Dec 07 '23 23:12 jlprojects