Mapping of cells output with actual inference image
Hi, I am using inference to extract cell objects. The output of detect is cropped and passed as input to recognize, which produces cell objects for the cropped table image. I want to map these cell bboxes back onto the actual image that was fed into the detection model. Could you provide some guidance on how to map them back to the original image?
You will need to resize the bboxes. You can use the following formulas:

old_width, old_height → size of your cropped image
new_width, new_height → size of your original page image

new_xmin = int((xmin / old_width) * new_width)
new_ymin = int((ymin / old_height) * new_height)
new_xmax = int((xmax / old_width) * new_width)
new_ymax = int((ymax / old_height) * new_height)
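As a minimal sketch, the formulas above could be wrapped in a small helper (the function name `rescale_bboxes` and the tuple conventions are my own assumptions, not part of the library API):

```python
def rescale_bboxes(bboxes, old_size, new_size):
    """Scale (xmin, ymin, xmax, ymax) boxes from old_size to new_size.

    old_size / new_size are (width, height) tuples, e.g. the cropped
    table image size and the original page image size.
    """
    old_w, old_h = old_size
    new_w, new_h = new_size
    scaled = []
    for xmin, ymin, xmax, ymax in bboxes:
        scaled.append((
            int((xmin / old_w) * new_w),
            int((ymin / old_h) * new_h),
            int((xmax / old_w) * new_w),
            int((ymax / old_h) * new_h),
        ))
    return scaled

# Example: a cell box on a 400x200 crop mapped to an 800x600 page
print(rescale_bboxes([(100, 50, 300, 150)], (400, 200), (800, 600)))
# → [(200, 150, 600, 450)]
```

Note that this scaling is appropriate when the crop was resized relative to the page. If instead the crop is a same-resolution sub-region of the original page, you would map the boxes back by adding the crop's top-left offset `(x0, y0)` to each coordinate rather than scaling.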