[Feature Request] Learning values for tracker drift
I've built a few SlimeVR trackers for some friends.
During the final testing, I noticed that the IMUs (SlimeVR IMU Module BNO085) show some variation in their behavior.
- One IMU has almost no drifting and remains perfectly stable.
- Another IMU starts drifting in the Y-axis.
- The next IMU starts drifting in the X-axis.
For my tests, I placed the trackers on a smooth, level surface. The starting point was the same for all of them. The rest period on the surface right after powering on was also the same. After that, I moved each tracker in a figure-eight motion twice and returned them to the starting position.
The interesting part is that while the behavior can slightly change after a restart, the general tendency of each tracker remains the same. A tracker that tends to drift in the negative Y direction will do so repeatedly. A tracker with almost no drift will continue to have minimal drift later on.
Wouldn't it make sense for the SlimeVR server to have a workflow for tracker calibration? That way, the server could store the baseline tendencies of each tracker, allowing for improved drift compensation.
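To make the proposal concrete, here's a minimal sketch of what "storing a baseline tendency" could look like: a per-tracker yaw drift rate learned during a rest calibration, subtracted over time. The class and names are hypothetical, not actual SlimeVR server APIs, and a constant-rate model is the simplest possible assumption.

```python
# Hypothetical sketch: compensate yaw using a per-tracker drift rate
# learned during a calibration session. Not SlimeVR server code.

class YawDriftCompensator:
    def __init__(self, drift_rate_deg_per_s: float):
        # drift_rate_deg_per_s: baseline yaw drift measured while the
        # tracker rested on a level surface (positive = drifts toward +yaw).
        self.drift_rate = drift_rate_deg_per_s
        self.start = None

    def compensate(self, yaw_deg: float, t: float) -> float:
        # Subtract the drift accumulated since the first sample.
        if self.start is None:
            self.start = t
        return yaw_deg - self.drift_rate * (t - self.start)

# Example: a tracker known to drift +0.02 deg/s in yaw.
comp = YawDriftCompensator(0.02)
print(comp.compensate(10.0, t=0.0))    # first sample: unchanged
print(comp.compensate(10.0, t=100.0))  # 2 degrees of accumulated drift removed
```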
Drift generally only happens with yaw. The problem with trying to compensate for it is that the BNO085 is essentially a black box, and SlimeVR can't really do much with it. If you just do it in the server after the fact, it's basically just the old method of drift compensation, which doesn't work well for the BNO.
Black box or not – in the end, each tracker with its own black box still has a specific behavior. One drifts more in one direction, while another drifts more in a different direction.
If this were just a random pattern every time, I wouldn’t argue. But each tracker shows an individual, repeating pattern. The intensity may vary, but the pattern is still there.
Is there really nothing useful that can be done with this?
So far, no implementation that actually works well has been created, and the lack of a repeatable methodology for measuring and testing drift makes the efforts a ghost hunt based on vibes and inconsistent results. Right now we're working on a drift tester that will let us do repeatable and precise testing of different algorithms, and maybe something will come out of it.
If you have more solid ideas on algorithms we can try to implement, we can try it.
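One way to make such testing repeatable, sketched here under assumptions of my own (this is not the drift tester mentioned above): record (time, yaw) samples while a tracker sits still, fit the drift rate by least squares, and repeat the trial several times. The mean tells you the tracker's tendency; the spread tells you whether that tendency is actually stable enough to compensate for.

```python
import statistics

def drift_rate(samples):
    # samples: list of (t_seconds, yaw_degrees) recorded while the
    # tracker sits still. Least-squares slope = yaw drift in deg/s.
    n = len(samples)
    mt = sum(t for t, _ in samples) / n
    my = sum(y for _, y in samples) / n
    num = sum((t - mt) * (y - my) for t, y in samples)
    den = sum((t - mt) ** 2 for t, _ in samples)
    return num / den

# Three hypothetical rest trials for one tracker (made-up numbers):
trials = [
    [(0, 0.0), (60, 1.1), (120, 2.3)],
    [(0, 0.0), (60, 0.9), (120, 2.0)],
    [(0, 0.0), (60, 1.2), (120, 2.2)],
]
rates = [drift_rate(s) for s in trials]
# Mean rate = the tracker's tendency; stdev = how repeatable it is.
print(round(statistics.mean(rates), 4), round(statistics.stdev(rates), 4))
```

A small stdev relative to the mean across many trials would support the "individual, repeating pattern" claim; a large one would mean constant-rate compensation can't help much.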
> The intensity may vary, but the pattern is still there.
That's the worst part, because for anything basic, that may as well mean the pattern isn't there. The BNO is specifically a black box that does its own fusion, so we don't read raw data. Trying to compensate for already-fused data has so far had mixed results, aside from specific things like Stay Aligned & Skeleton Constraints.
In general, I don't think that trying to compensate on the server for each IMU will bear results. Ideas like algorithms that take into account how people move and the other trackers on the body should be more promising. Once again, I'd love to be proven wrong, but so far we don't have good ideas...