test_disp.py min/max depth value issue
Hi! Thank you for your nice implementation.
I have a question about the clipping of depth values in test_disp.py: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/test_disp.py
Currently, the predicted depth is clipped to the min/max depth values *before* the scale factor is applied, as shown below:
```python
pred_depth_zoomed = zoom(pred_depth,
                         (gt_depth.shape[0]/pred_depth.shape[0],
                          gt_depth.shape[1]/pred_depth.shape[1])
                         ).clip(args.min_depth, args.max_depth)
( ... )
if seq_length > 1:
    ( ... )
    scale_factor = np.mean(sample['displacements'] / displacement_magnitudes)
    errors[0,:,j] = compute_errors(gt_depth, pred_depth_zoomed*scale_factor)
( ... )
scale_factor = np.median(gt_depth)/np.median(pred_depth_zoomed)
errors[1,:,j] = compute_errors(gt_depth, pred_depth_zoomed*scale_factor)
```
But shouldn't the clipping be applied *after* the scale factor, as shown below?
```python
pred_depth_zoomed = zoom(pred_depth,
                         (gt_depth.shape[0]/pred_depth.shape[0],
                          gt_depth.shape[1]/pred_depth.shape[1])
                         )
( ... )
if seq_length > 1:
    ( ... )
    scale_factor = np.mean(sample['displacements'] / displacement_magnitudes)
    errors[0,:,j] = compute_errors(gt_depth, (pred_depth_zoomed*scale_factor).clip(args.min_depth, args.max_depth))
( ... )
scale_factor = np.median(gt_depth)/np.median(pred_depth_zoomed)
errors[1,:,j] = compute_errors(gt_depth, (pred_depth_zoomed*scale_factor).clip(args.min_depth, args.max_depth))
```
The difference this makes, when the pose network is not used, is shown below.
| | abs_diff | abs_rel | sq_rel | rms | log_rms | abs_log | a1 | a2 | a3 |
|---|---|---|---|---|---|---|---|---|---|
| current | 2.3598 | 0.1233 | 0.8640 | 4.8908 | 0.1987 | 0.1258 | 0.8569 | 0.9561 | 0.9815 |
| fixed | 2.3391 | 0.1229 | 0.8205 | 4.7596 | 0.1981 | 0.1256 | 0.8572 | 0.9564 | 0.9817 |
In particular, the sq_rel and rms metrics are sensitive to this issue.
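To make the effect concrete, here is a standalone toy sketch (not the repo's code; the depth values are made up, and the 1e-3/80 bounds are assumed to match the repo's defaults). Clipping before scaling lets a clipped prediction be scaled back out of range, which inflates the squared-error metrics:

```python
import numpy as np

min_depth, max_depth = 1e-3, 80      # assumed bounds, matching the repo's defaults
pred = np.array([0.5, 2.0, 100.0])   # toy predictions; one exceeds max_depth
gt = np.array([1.0, 4.0, 90.0])      # toy ground truth

# Current order: clip first, then scale -- the clipped value (80) is scaled
# back out of range (80 * 2.0 = 160), inflating sq_rel and rms.
clipped = pred.clip(min_depth, max_depth)
out_current = clipped * np.median(gt) / np.median(clipped)

# Fixed order: scale first, then clip -- the result stays within the bounds.
out_fixed = (pred * np.median(gt) / np.median(pred)).clip(min_depth, max_depth)

print(out_current)  # [  1.   4. 160.]
print(out_fixed)    # [ 1.  4. 80.]
```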
Thank you.
Hi, indeed you are right!
Thanks for the catch! If you submit a PR, I'd happily merge it.
Thank you for your quick reply!
OK, I see. I will submit a PR to your repo.
I also noticed the same issue in loss_functions.py, shown below: https://github.com/ClementPinard/SfmLearner-Pytorch/blob/master/loss_functions.py
```python
@torch.no_grad()
def compute_depth_errors(gt, pred, crop=True):
    ( ... )
    valid_pred = current_pred[valid].clamp(1e-3, 80)
    valid_pred = valid_pred * torch.median(valid_gt)/torch.median(valid_pred)
```
Should it also be fixed as below?
```python
@torch.no_grad()
def compute_depth_errors(gt, pred, crop=True):
    ( ... )
    valid_pred = current_pred[valid]
    valid_pred = (valid_pred * torch.median(valid_gt)/torch.median(valid_pred)).clamp(1e-3, 80)
```
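As a quick sanity check of the proposed order (a standalone sketch with made-up tensors, not the repo's code), median-scaling first and clamping last keeps the final prediction inside the valid depth range:

```python
import torch

valid_gt = torch.tensor([1.0, 4.0, 90.0])     # toy ground-truth depths
valid_pred = torch.tensor([0.5, 2.0, 100.0])  # toy predictions

# Median-scale first, then clamp: the output respects the [1e-3, 80] range.
scaled = valid_pred * torch.median(valid_gt) / torch.median(valid_pred)
print(scaled.clamp(1e-3, 80))  # tensor([ 1.,  4., 80.])
```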
Thank you!
Yes, the problem is there as well. I will validate this change in your PR :)