Jonathan
Hey, I am working on modular robotics with pinocchio, which means I build lots of robots composed of the same modules. It would be convenient if it were possible to...
Hey, I have a short question regarding the Jacobian matrix: looking through the code, I am still not entirely sure which Jacobian exactly is computed. In [the second-to-last line](https://github.com/UM-ARM-Lab/pytorch_kinematics/blob/master/src/pytorch_kinematics/jacobian.py#L59) of...
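One way to pin down which Jacobian a library computes is to check it against finite differences of the forward kinematics: the columns of the base-frame geometric Jacobian are the partial derivatives of the end-effector position w.r.t. each joint angle. A minimal sketch on a planar 2-link arm (illustration only, not pytorch_kinematics code):

```python
import math

# Planar 2-link arm used only as an illustration:
# FK of the end effector expressed in the base frame.
def fk(q, l1=1.0, l2=0.8):
    x = l1 * math.cos(q[0]) + l2 * math.cos(q[0] + q[1])
    y = l1 * math.sin(q[0]) + l2 * math.sin(q[0] + q[1])
    return [x, y]

# Analytic base-frame Jacobian d(x, y)/d(q1, q2).
def jacobian(q, l1=1.0, l2=0.8):
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

# Central finite differences of the FK, column by column.
def fd_jacobian(q, eps=1e-6):
    cols = []
    for j in range(2):
        qp, qm = list(q), list(q)
        qp[j] += eps
        qm[j] -= eps
        fp, fm = fk(qp), fk(qm)
        cols.append([(fp[i] - fm[i]) / (2 * eps) for i in range(2)])
    return [[cols[j][i] for j in range(2)] for i in range(2)]

q = [0.3, -0.7]
J = jacobian(q)
J_fd = fd_jacobian(q)
err = max(abs(J[i][j] - J_fd[i][j]) for i in range(2) for j in range(2))
```

If a library's Jacobian does not match this check, it is most likely expressed in a different frame (e.g. the end-effector frame) or maps to a spatial rather than a body velocity.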
Following #28, here's a first draft of an implementation of forward kinematics that is differentiable w.r.t. joint offsets/link offsets. This PR implements:
- An abstract ParameterizedTransform interface that allows...
Dear pytorch kinematics team, thank you for this amazing repo! I am interested in computing the FK gradients not only w.r.t. joint angles, but also with respect to the...
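To illustrate what a gradient w.r.t. a kinematic parameter means (a sketch only, not the ParameterizedTransform interface from the PR above), here is forward-mode differentiation of a planar 2-link FK w.r.t. a link length, using minimal dual numbers:

```python
import math

# Minimal forward-mode dual numbers, just enough to differentiate a
# scalar FK expression w.r.t. one kinematic parameter.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def dcos(d):  # cosine of a dual number
    return Dual(math.cos(d.val), -math.sin(d.val) * d.dot)

# x-coordinate of a planar 2-link arm's end effector; l1 and l2 are
# passed as duals so they can carry derivative seeds.
def fk_x(q1, q2, l1, l2):
    return l1 * dcos(Dual(q1)) + l2 * dcos(Dual(q1 + q2))

l1 = Dual(1.0, 1.0)          # seed: differentiate w.r.t. l1
x = fk_x(0.4, 1.1, l1, Dual(0.8))
# x.val is the FK value, x.dot is d x / d l1 = cos(q1)
```

The same idea extends to joint-axis offsets and mount transforms; a reverse-mode framework like PyTorch does this bookkeeping automatically once the offsets are tensors with `requires_grad=True`.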
Dear ADAM developers, I just stumbled across your package and it looks amazing! However, there's a question I couldn't find an answer to in your examples and the README: If...
## Category

- [ ] Report an error in the documentation.
- [x] Request for something to be documented.
- [ ] Suggestion to improve the documentation.
- [x] Other:...
[Question] How to deal with large gradients caused by `joint_attach` constraints in Euler integration
### Description

The SemiImplicitIntegrator enforces "joint attachment" by modeling the attachments as spring-damper systems with parameters `joint_attach_ke` and `joint_attach_kd`. Sometimes these must be set relatively high, especially in kinematic...
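To see why a large `joint_attach_ke` inflates gradients, consider a toy 1-DoF spring-damper integrated with semi-implicit Euler (plain Python, not warp code). The backward pass multiplies the per-step Jacobians together, and once `dt**2 * ke / m` exceeds the explicit stability limit of 4, their spectral radius exceeds 1 and the gradient of the final state w.r.t. the initial state grows exponentially with the number of steps:

```python
# One semi-implicit Euler step of a 1-DoF spring-damper:
#   v' = v + dt * (-ke * x - kd * v) / m
#   x' = x + dt * v'
# Its Jacobian d(x', v')/d(x, v) governs how gradients propagate backward.
def step_jacobian(ke, kd, m, dt):
    dv_dx = -dt * ke / m
    dv_dv = 1.0 - dt * kd / m
    return [[1.0 + dt * dv_dx, dt * dv_dv],
            [dv_dx, dv_dv]]

def matmul2(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Largest entry of the product of `steps` step Jacobians, i.e. the
# magnitude of d(final state)/d(initial state).
def grad_magnitude(ke, kd=0.0, m=1.0, dt=1e-2, steps=100):
    j = step_jacobian(ke, kd, m, dt)
    acc = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(steps):
        acc = matmul2(j, acc)
    return max(abs(acc[r][c]) for r in range(2) for c in range(2))

soft = grad_magnitude(ke=1e3)   # dt**2 * ke = 0.1 -> bounded gradients
stiff = grad_magnitude(ke=1e5)  # dt**2 * ke = 10  -> exploding gradients
```

This suggests the usual mitigations: shrink `dt` (substepping) so `dt**2 * ke / m` stays well below 4, add damping via `joint_attach_kd`, or clip/normalize gradients downstream.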
### Description

As far as I can see, `wp.sim` joints are implemented as frictionless joints. It would be great if they supported a simple friction model to enable more realistic...
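As a sketch of what such a model could look like (plain Python, not `wp.sim` API), here is a Coulomb-plus-viscous joint friction torque with a clamp that keeps the friction impulse from reversing the joint's motion within a step:

```python
import math

# Simple joint friction model: a constant Coulomb torque opposing motion
# plus a viscous term proportional to joint velocity. Illustration only.
def friction_torque(qd, mu_coulomb=0.5, mu_viscous=0.1):
    if qd == 0.0:
        return 0.0
    return -mu_coulomb * math.copysign(1.0, qd) - mu_viscous * qd

# Spin down a free 1-DoF joint under friction with semi-implicit Euler.
def simulate(qd0=5.0, inertia=1.0, dt=1e-3, steps=10000):
    qd = qd0
    for _ in range(steps):
        tau = friction_torque(qd)
        qd_new = qd + dt * tau / inertia
        # Coulomb friction can only stop the joint, never reverse it:
        if qd_new * qd < 0.0:
            qd_new = 0.0
        qd = qd_new
    return qd

final = simulate()  # the joint coasts to rest
```

The sign-flip clamp is the important detail: without it, a large `dt` or Coulomb coefficient makes the joint oscillate around zero velocity instead of sticking.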
### Bug Description

**EDIT**: This issue is not limited to long time horizons. Upon closer inspection, even for short horizons, the gradients of inertial properties are wrong with the FeatherstoneIntegrator...
### Description

It would be great if warp offered a feature similar to PyTorch's `with torch.no_grad()` context.

### Context

There are various use cases for such a feature. An...
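Warp records kernel launches on a `wp.Tape` for the backward pass, so a `no_grad`-style context would amount to temporarily suspending that recording. A minimal pure-Python sketch of the pattern, using a hypothetical stand-in tape rather than warp's real one:

```python
from contextlib import contextmanager

# Stand-in for a gradient tape (hypothetical; warp's real tape is wp.Tape).
class Tape:
    def __init__(self):
        self.recording = True
        self.ops = []

    def record(self, op_name):
        if self.recording:
            self.ops.append(op_name)

# The requested feature, sketched: suspend recording inside the block and
# restore the previous state even if the body raises.
@contextmanager
def no_grad(tape):
    prev, tape.recording = tape.recording, False
    try:
        yield
    finally:
        tape.recording = prev

tape = Tape()
tape.record("loss_kernel")         # recorded
with no_grad(tape):
    tape.record("logging_kernel")  # skipped: no adjoint would be generated
tape.record("step_kernel")         # recorded again
```

Saving and restoring the previous flag (rather than hard-coding `True` on exit) is what makes such contexts safely nestable, which is also how PyTorch's `torch.no_grad` behaves.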