haidarazzam
Thank you for submitting your issue. We have identified the problem; the current workaround is to disable cuTensorNet's internal preprocessing (simplification) step via `CUTENSORNET_CONTRACTION_OPTIMIZER_CONFIG_SIMPLIFICATION_DISABLE_DR`, and then you would get what you are...
Also note that paths from Cotengra or other packages can run with cuTensorNet, but not always, because either slicing is missing or the path generated by such...
@sss441803 hyperedges have been supported in cuTensorNet for a while now. The FLOP-count issue you observed was because cuTensorNet's simplification phase was trying to simplify the contraction before the path optimizer...
Dear @namehta4, was this issue resolved for you? Thanks
@jeffhammond @albandil @langou I think there are two main issues here. 1. First, let's ignore MPI. The BLACS functions `BI_XXX` are ScaLAPACK functions and are called from within the Fortran...
Suggestion: I would suggest replacing all `MpiInt` (originally `int`) with `Int` (similar to when `int` was replaced by `Int`), so we know all integer computation and indices work with...
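To illustrate the suggestion, here is a minimal sketch (Python as a stand-in for the C/Fortran types; the names `Int`, `MpiInt`, and `to_mpi_int` are illustrative, not from the codebase): keep all index computation in a wide integer type and narrow to the 32-bit type MPI/BLACS expects only at the call boundary, with an explicit range check instead of a silent truncation.

```python
# "Int" = wide internal index type; "MpiInt" = 32-bit type MPI expects.
INT32_MAX = 2**31 - 1

def to_mpi_int(n: int) -> int:
    """Narrow a wide index to a 32-bit MpiInt, failing loudly on overflow."""
    if not (0 <= n <= INT32_MAX):
        raise OverflowError(f"{n} does not fit in a 32-bit MpiInt")
    return n

rows = cols = 100_000
elements = rows * cols       # 10**10: would silently overflow a 32-bit int
print(to_mpi_int(rows))      # per-call counts fit, narrowing is safe here
try:
    to_mpi_int(elements)     # the global element count does not fit
except OverflowError as e:
    print(e)
```

The point of the check is that overflow becomes an immediate, diagnosable error at the boundary rather than a wrong answer deep inside a BLACS call.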
If you want to add/subtract/sum one tensor elementwise, you can use the cuTENSOR library. cuQuantum performs contraction of a tensor network. If you want to add/subtract a tensor network...
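A minimal NumPy sketch of the distinction (NumPy as a CPU stand-in; cuTENSOR/cuTensorNet would perform the equivalent operations on GPU):

```python
import numpy as np

a = np.arange(8.0).reshape(2, 2, 2)
b = np.ones((2, 2, 2))

# Elementwise add: same-shaped tensors, no index summation
# (this kind of operation is cuTENSOR territory).
elementwise = a + b

# Tensor-network contraction: shared indices j, k are summed away
# (this is what cuTensorNet is for).
contracted = np.einsum('ijk,jkl->il', a, b)

print(elementwise.shape)  # (2, 2, 2)
print(contracted.shape)   # (2, 2)
```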
Hi, thank you very much for reporting your observations; we will be happy to work with you to resolve the issue. First, the LOGGING prints every call to workspace_needed, and it...
Your problem seems to be very large; I can see it requires workspace ranging in the exabytes. > What is the optimiser doing when CONFIG_NUM_HYPER_SAMPLES is left to its default...
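For a sense of scale, a back-of-envelope sketch (the index counts below are assumed for illustration, not taken from this issue) of how a single unsliced intermediate tensor can reach exabytes:

```python
# An intermediate with n open dimension-2 indices in complex128
# occupies 2**n * 16 bytes.
def intermediate_bytes(n_open_indices, dim=2, itemsize=16):
    return dim**n_open_indices * itemsize

for n in (30, 40, 57):
    gb = intermediate_bytes(n) / 1e9
    print(f"{n} open indices -> {gb:,.0f} GB")
# 57 open dimension-2 indices already exceed an exabyte, far beyond any
# single GPU, which is why the optimizer must slice the contraction.
```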
Yes, if a contraction doesn't fit on one GPU, then cuTensorNet will slice it to make it fit on one GPU. Similarly, for multi-node multi-GPU runs, slicing is the technique...
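A small NumPy sketch of what slicing means (NumPy as a stand-in for the GPU execution): fixing one summed index turns the big contraction into a sum of smaller, independent contractions, each with a smaller working set, and the independent slices are what can be spread across GPUs or nodes.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8, 3))   # indices i, s, j
b = rng.standard_normal((8, 3, 5))   # indices s, j, k

# Full contraction over shared indices s and j.
full = np.einsum('isj,sjk->ik', a, b)

# Slice over index s: each term contracts tensors with s fixed, so the
# per-slice operands are 8x smaller; slices are independent of each other.
sliced = sum(np.einsum('ij,jk->ik', a[:, s, :], b[s, :, :])
             for s in range(8))

print(np.allclose(full, sliced))  # True
```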