OliverAh
After the training loop (`for i in range(50):`), the model is switched to evaluation mode via `model.eval()`. Then, after activating the `with ...verbose_linalg():` context, it is switched back to `model.train()` just to be...
I did, using 3 runs each. Without switching back and forth it is approx. 5% faster when **not specifying the contexts** (`with gpytorch.settings...`). When specifying the contexts, the runtime seemed...
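The mode-switching sequence described above can be sketched in plain Python. Note that `ToyModel` and the `verbose_linalg` stub below are hypothetical stand-ins, not the real GPyTorch API; only the train/eval toggling pattern is the point:

```python
import contextlib

class ToyModel:
    """Hypothetical stand-in for a GPyTorch model: it only tracks
    whether it is in training or evaluation mode."""
    def __init__(self):
        self.training = True

    def train(self):
        self.training = True
        return self

    def eval(self):
        self.training = False
        return self

@contextlib.contextmanager
def verbose_linalg():
    """Stand-in for gpytorch.settings.verbose_linalg(); the real
    context manager logs linear-algebra calls while active."""
    yield

model = ToyModel()

for i in range(50):
    pass  # training iterations (optimizer steps) would go here

model.eval()            # switch to evaluation mode after training
with verbose_linalg():  # activate the (stand-in) settings context
    model.train()       # switch back to training mode inside the context
    assert model.training
```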
I opened the issue [mentioned above](https://github.com/sympy/sympy/pull/24848#ref-issue-2357576490) (#26717). My attention was brought to this PR a few days ago; since then, the status of the checks has remained "Expected - Waiting for...
Indeed, I just assumed `dsolve_system` would do what you suggested with the combination of `solve` and `dsolve`. Is there any plan to go further with the PR you mentioned? As...
This issue persists in release 0.11.0 (`nvcr.io/nvidia/quantum/cuda-quantum:cu12-0.11.0`). Would still highly appreciate a fix.
Awesome, I just tested and it works like a charm. Thanks a lot!
Related issues:
* #2803
* #2485

Potentially related issues:
* #2535
* #2346
* #2137
I ran into the same issue and found a messy workaround. Nevertheless, it seems to get the job done.

1. Compose a string that holds the `cudaq.register_operation(...)` as well as...
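The string-composition part of the workaround can be illustrated generically. Everything below is hypothetical: the operation name and matrix are made up, `cudaq` is replaced by a small recording stub so the sketch is self-contained, and using `exec` to evaluate the composed string is only an assumption about how such a workaround would run:

```python
class _CudaqStub:
    """Stand-in for the cudaq module: records registered operations
    instead of actually registering custom gates."""
    def __init__(self):
        self.registered = {}

    def register_operation(self, name, matrix):
        self.registered[name] = matrix

cudaq = _CudaqStub()

# 1. Compose a string that holds the cudaq.register_operation(...) call.
op_name = "my_custom_op"
matrix = "[1, 0, 0, 1]"  # placeholder unitary, flattened row-major
source = f"cudaq.register_operation({op_name!r}, {matrix})"

# 2. Execute the composed string in a namespace exposing the (stub) module.
exec(source, {"cudaq": cudaq})

assert "my_custom_op" in cudaq.registered
```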
If I make a pull request with that, does it have a chance to be merged?