shape, origin, and dimension order of images are not documented
When a CameraDevice returns an image, it returns a numpy array. There are methods to get back the width and height of the sensor and of ROIs. However, we have never documented which dimension of the numpy array is the width and which is the height, what order the data is in within the array, or where the origin is.
I just came across this issue while trying to generate a non-square image for BeamDelta. Some of the patterns had the width/height inverted. I notice that `_fetch_data` on TestCamera applies a transpose, although I'm not sure why. The image that reaches cockpit still needs a transformation applied (I'm guessing the OpenGL origin is different).
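A minimal sketch of the ambiguity, using plain numpy (the shapes here are only an illustration, not what any particular device returns):

```python
import numpy as np

# A sensor 640 pixels wide and 480 pixels tall.
width, height = 640, 480

# numpy convention: shape is (rows, columns), i.e. (height, width)...
image_rows_first = np.zeros((height, width))

# ...but nothing stops a device from returning (width, height) instead.
image_cols_first = np.zeros((width, height))

# Both are valid numpy arrays; without documentation a client cannot
# tell which axis is which, and a transpose silently "fixes" one of them.
assert image_rows_first.shape == image_cols_first.T.shape
```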
Some of this is addressed in #90.
There's a transform system on the cameras (see the sketch after this list) to compensate for:
- orientation of the camera (fixed, but can vary between instances of the same camera type depending on how they're bolted to a table or microscope body);
- changes in the optical path (e.g. introducing an additional mirror with a mirror-flipper);
- changes on the camera (e.g. changing readout mode to one that reads from the other edge of the chip).
I decided to do this in microscope rather than the client, because:
- the client shouldn't have to care about readout-mode related changes;
- some cameras support fast transforms on the hardware.
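As a rough sketch of what such a transform amounts to, assuming it can be expressed as a left-right flip, an up-down flip, and a 90° rotation (the function and flag names here are illustrative, not the actual microscope API):

```python
import numpy as np

def apply_transform(image, lr=False, ud=False, rot90=False):
    """Apply an orientation transform to a (rows, cols) image.

    The three flags are illustrative stand-ins for whatever the
    camera's transform setting actually encodes.
    """
    if lr:
        image = np.fliplr(image)
    if ud:
        image = np.flipud(image)
    if rot90:
        image = np.rot90(image)
    return image

# Example: a camera bolted on its side plus an extra mirror in the path.
raw = np.arange(12).reshape(3, 4)
corrected = apply_transform(raw, lr=True, rot90=True)
```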
I looked into this recently when I fixed up Binning and ROI in both cockpit and microscope. Here's a table of dimension orders and (where appropriate) their Cartesian equivalents in various packages.
| package | dim 0    | dim 1    | dim 2   | dim 3   |
|---------|----------|----------|---------|---------|
| scikit  | plane    | row (y)  | col (x) | channel |
| numpy   | whatever | whatever | row (y) | col (x) |
| Pillow  |          |          | x       | y       |
scikit chose that order because it makes multi-channel arithmetic fast and efficient.
I just went with the numpy order.
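For example, a quick check of the numpy versus Pillow orders (assuming Pillow is available):

```python
import numpy as np
from PIL import Image

height, width = 480, 640            # numpy order: (rows, cols) == (y, x)
array = np.zeros((height, width), dtype=np.uint8)

pil_image = Image.fromarray(array)
print(array.shape)                  # (480, 640) -> (height, width)
print(pil_image.size)               # (640, 480) -> Pillow's (width, height), i.e. (x, y)
```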
GL doesn't matter so much: when you render a texture, you have to specify vertex co-ordinates for the quad and texture co-ordinates into the allocated texture, and we can specify them in whatever order we need.
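For instance, a sketch of the kind of quad data involved (plain arrays, no particular GL binding assumed); because GL's texture origin is the bottom-left corner, flipping the v co-ordinate is enough to handle image data whose row 0 is the top of the image:

```python
# Vertex positions of a full-screen quad (x, y), counter-clockwise.
vertices = [(-1.0, -1.0), (1.0, -1.0), (1.0, 1.0), (-1.0, 1.0)]

# Texture co-ordinates for the same four corners.  Pick whichever set
# matches where the image data's origin actually is.
texcoords_bottom_left_origin = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
texcoords_top_left_origin    = [(0.0, 1.0), (1.0, 1.0), (1.0, 0.0), (0.0, 0.0)]
```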