Sergey Sandler
Hi, have you considered using filterpy.common.Q_discrete_white_noise() to initialize self.kf.Q in KalmanBoxTracker.__init__()? In my understanding, Q_discrete_white_noise() is designed for kinematic problems, which is exactly the MOT domain, isn't it? You will see...
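For reference, a minimal sketch of what such an initialization could look like, assuming a toy constant-velocity state [x, x', y, y'] rather than SORT's actual seven-dimensional state; the dt and var values are illustrative:

```python
# Hypothetical sketch, not the SORT/KalmanBoxTracker code: building Q with
# filterpy for a constant-velocity kinematic model. State ordering, dt and
# var below are illustrative assumptions.
from filterpy.common import Q_discrete_white_noise
from filterpy.kalman import KalmanFilter

kf = KalmanFilter(dim_x=4, dim_z=2)   # toy state [x, x', y, y']
dt = 1.0                              # one frame per update

# dim=2 -> one position/velocity pair per block; block_size=2 stacks a block
# for each coordinate on the diagonal, giving a 4x4 process-noise matrix.
kf.Q = Q_discrete_white_noise(dim=2, dt=dt, var=0.01, block_size=2)
```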
With ONNX 1.13.1, an fp32 model passes onnx.checker.check_model() without warnings or errors: `import onnx` `onnx_model = onnx.load("/models/ResNet50.onnx")` `onnx.checker.check_model(onnx_model)`, but after conversion to fp16, onnx.checker.check_model() `from onnxconverter_common import float16` `onnx_model_fp16...
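For context, a minimal sketch of the conversion flow being described, assuming onnxconverter_common's float16.convert_float_to_float16() is the conversion step and that the output path is illustrative:

```python
import onnx
from onnxconverter_common import float16

onnx_model = onnx.load("/models/ResNet50.onnx")
onnx.checker.check_model(onnx_model)        # passes for the fp32 model

# Convert the model to fp16 and re-run the same check on the result.
onnx_model_fp16 = float16.convert_float_to_float16(onnx_model)
onnx.checker.check_model(onnx_model_fp16)
onnx.save(onnx_model_fp16, "/models/ResNet50_fp16.onnx")
```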
With my current configuration that follows [requirements](https://github.com/SysCV/MaskFreeVIS#requirements), _bash scripts/visual_video.sh_ fails with ``` ModuleNotFoundError: No module named 'MultiScaleDeformableAttention' Please compile MultiScaleDeformableAttention CUDA op with the following commands: `cd mask2former/modeling/pixel_decoder/ops` `sh make.sh`...
While making the torch TAPIR model compatible with _TorchScript tracing_ is easy by changing `TAPIR.forward()` in https://github.com/google-deepmind/tapnet/blob/main/torch/tapir_model.py#L196-L209 from ``` out = dict( occlusion=torch.mean( torch.stack(trajectories['occlusion'][p::p]), dim=0 ), tracks=torch.mean(torch.stack(trajectories['tracks'][p::p]), dim=0), expected_dist=torch.mean( torch.stack(trajectories['expected_dist'][p::p]),...
Making the code torch.jit.script() friendly; torch.jit.trace() is also supported. See [issues/83](https://github.com/google-deepmind/tapnet/issues/83).
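For illustration only, a rough sketch of how a traced export might be exercised once forward() returns tensors; the import path, constructor arguments, input shapes, and the (frames, queries) calling convention are assumptions here, not the repository's documented API:

```python
import torch
from tapnet.torch import tapir_model  # assumed import path

model = tapir_model.TAPIR()            # assumed default construction
model.eval()

# Illustrative shapes: a short clip and a few (t, y, x) query points.
frames = torch.zeros(1, 8, 3, 256, 256)
queries = torch.zeros(1, 5, 3)

with torch.no_grad():
    traced = torch.jit.trace(model, (frames, queries))
traced.save("tapir_traced.pt")
```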
_torch.stack(template_imgs).float().div(255)_ on line 103 in /MixSort/yolox/mixsort_oc_tracker/mixformer.py

```
template_imgs = normalize(
    torch.stack(template_imgs).float().div(255),
    self.cfg.DATA.MEAN,
    self.cfg.DATA.STD,
)
```

causes _TypeError: expected Tensor as element 0 in argument 0, but got tuple_. I've followed...
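A minimal reproduction of that TypeError and one possible workaround, assuming (this is a guess, not verified against MixSort) that each element of template_imgs is a tuple carrying the image tensor plus extra data:

```python
import torch

tensors = [torch.zeros(3, 128, 128, dtype=torch.uint8) for _ in range(2)]
torch.stack(tensors)                    # works: every element is a Tensor

tuples = [(t, {"frame_id": i}) for i, t in enumerate(tensors)]
# torch.stack(tuples)  # TypeError: expected Tensor as element 0 in argument 0, but got tuple

# Possible workaround: pull the tensor out of each tuple before stacking.
stacked = torch.stack([item[0] for item in tuples]).float().div(255)
```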