Support for arbitrary input states in TensorNetworkBackend
**Is your feature request related to a problem? Please describe.**
TensorNetworkBackend currently only supports |+> input states.
**Describe the feature you'd like**
We should be able to pass arbitrary states as input_states to TensorNetworkBackend.
**Additional context**
Support for input states has been added for other back-ends in https://github.com/TeamGraphix/graphix/pull/135.
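Something like the following, hypothetically (the `input_state` keyword is illustrative and does not exist for this backend; the transpile/simulate calls are sketched from memory):

```python
# Hypothetical usage sketch: the `input_state` keyword is illustrative and
# does not exist for TensorNetworkBackend yet.
import numpy as np
from graphix import Circuit

circuit = Circuit(2)
circuit.cnot(0, 1)
pattern = circuit.transpile()  # may be .transpile().pattern on newer graphix

# a Bell-like input instead of the implicit |+>^n
state = np.array([1, 0, 0, 1]) / np.sqrt(2)
result = pattern.simulate_pattern(backend="tensornetwork", input_state=state)
```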
Hi! Thanks for your feature request.
Let me ask: do you have in mind a highly entangled state or a sparsely entangled state as the input_state? Do you want to initialize with an arbitrary state vector?
Hi Masato,
That will be one of the topics of our discussion in July with @shinich1 and @thierry-martinez. We want to get a better grasp of this backend (why it works, when it works, its relation to more traditional TN backends, how it can be extended, etc.).
Indeed, we are currently working on refactoring the backends to make the API simpler and safer. Additionally, since this backend is very specific, the latest features are not implemented there (some of them most likely can't be, like truly arbitrary input states), so it needs special treatment.
Hi Maxime! Thank you for your response.
> That will be one of the topics of our discussion in July with @shinich1 and @thierry-martinez. We want to get a better grasp of this backend (why it works, when it works, its relation to more traditional TN backends, how it can be extended, etc.).
I'm sorry for not being able to join the discussion. Here are quick answers to your questions:
- why it works
  - TN is just a generalization of the usual statevector sim from the perspective of matrix multiplication. The only difference is in the contraction order (see the toy sketch after this list).
- when it works
  - When we calculate a unitary, because we can then skip the intermediate measurement probability calculations and change the contraction order relative to the SV backend. This constraint is strict especially when a pattern is strongly deterministic, because the probability of 0/1 for all the measurement planes must be 50:50. The important point is whether it is permitted to skip the probability calculations.
- its relation to more traditional TN backends
  - The TN sim in graphix is just a naive implementation (connecting tensors one by one following the pattern commands; we use the quimb contraction backend). There is currently no truncation option as in some TN backends (like MPS).
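To make the contraction-order point concrete, here is a toy numpy sketch (not graphix code): applying random gates one by one, statevector style, and fusing the gates first, i.e. a different contraction order, give the same result; only the cost and memory profiles change.

```python
# Toy illustration: two contraction orders of the same network agree.
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(d):
    # QR of a random complex matrix gives a unitary
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

n = 3                              # qubits
d = 2**n
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
gates = [random_unitary(d) for _ in range(4)]

# statevector order: G3 (G2 (G1 (G0 psi)))
sv = psi.copy()
for g in gates:
    sv = g @ sv

# reordered contraction: fuse the gates first, then apply once
fused = np.linalg.multi_dot(gates[::-1])   # G3 G2 G1 G0
tn = fused @ psi

assert np.allclose(sv, tn)
```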
> Indeed, we are currently working on refactoring the backends to make the API simpler and safer.
Thanks a lot for your commitment.
> Additionally, since this backend is very specific, the latest features are not implemented there (some of them most likely can't be, like truly arbitrary input states), so it needs special treatment.
Yes, I agree with you. I recognize that the coding of this backend is not very sophisticated, and some unused methods are still present (the graph_prep op.). I can refactor this backend and implement arbitrary input states. Has your team already started refactoring the TN backend? If so, please let me know; I do not want to interfere.
> some of them most likely can't be, like truly arbitrary input states
For this part, everything implemented in the SV backend, except for the probability calculation, is possible. We just need to prepare the corresponding input tensor. I was just concerned about large-qubit-number states (>30) in my earlier question.
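For illustration, a minimal sketch (assuming quimb; index names are illustrative) of turning a flat state vector into such an input tensor, one dangling leg per qubit:

```python
# Minimal sketch (not graphix code): reshape a flat state vector into an
# n-leg tensor so each qubit gets its own dangling index.
import numpy as np
import quimb.tensor as qtn

n = 4
rng = np.random.default_rng(0)
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

t = qtn.Tensor(psi.reshape([2] * n), inds=[f"k{i}" for i in range(n)])

# memory is the real constraint for large n: a dense vector stores 2**n
# amplitudes (~16 GiB of complex128 already at n = 30)
```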
Thanks a lot!
Yes indeed, too bad you can't be there, but we might schedule a call at some point, or one later to let you know what happened.
> - why it works
>   - TN is just a generalization of the usual statevector sim from the perspective of matrix multiplication. The only difference is in the contraction order.
> - when it works
>   - When we calculate a unitary, because we can then skip the intermediate measurement probability calculations and change the contraction order relative to the SV backend. This constraint is strict especially when a pattern is strongly deterministic, because the probability of 0/1 for all the measurement planes must be 50:50. The important point is whether it is permitted to skip the probability calculations.
> - its relation to more traditional TN backends
>   - The TN sim in graphix is just a naive implementation (connecting tensors one by one following the pattern commands; we use the quimb contraction backend). There is currently no truncation option as in some TN backends (like MPS).
That's what I had in mind, thanks. However, I don't yet see how you generalized the paper by Eisert et al. to arbitrary graphs (maybe we can talk about it via Discord?). I planned to discuss that with Shinichi in July anyway.
> Has your team already started refactoring the TN backend? If so, please let me know; I do not want to interfere.
No, we're not touching this backend until we understand it better. Just trying to maintain compatibility and keeping in mind that we don't want it to be too far behind!
> For this part, everything implemented in the SV backend, except for the probability calculation, is possible. We just need to prepare the corresponding input tensor. I was just concerned about large-qubit-number states (>30) in my earlier question.
Great, that's very helpful.
I have tried to implement arbitrary TN input in my private repo, but it seems that we need to remove the pattern object from the TN backend, or move the pattern simulator method into a different file, because it causes a circular-import problem.
I would like to note that arbitrary state-vector input is possible but not very useful on the TN backend, because there is almost no benefit to using a TN simulator with a highly entangled state. Therefore, this backend will receive an arbitrary tensor network object, with as many dangling edges as qubits, as an input.
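For instance, a minimal quimb sketch of such an object (not graphix API): a random MPS whose open indices are the per-qubit dangling edges:

```python
# Sketch of the proposed input type: any quimb tensor network with one
# dangling (open) index per input qubit, e.g. a low-bond-dimension MPS.
import quimb.tensor as qtn

n = 5
mps = qtn.MPS_rand_state(n, bond_dim=4)   # sparsely entangled input state

# the open indices are the per-qubit legs the backend would wire into
assert len(mps.outer_inds()) == n
print(mps.outer_inds())
```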
@masa10-f thanks!
- circular import: can we take a quimb tensor object instead of a TN backend object?
- usefulness: in the case of entangled states, I thought the standard procedure (accepting errors) was to use SVD to reduce the computational cost?
We should avoid directly receiving a quimb object. Because it is not a standard library like numpy and has really complicated APIs, we would want to have a wrapper to use inside graphix (hypothetically sketched below). In addition, a bare quimb object does not carry the necessary information for the tensor network backend (might be resolvable, but not in an elegant way).
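A hypothetical sketch of such a wrapper (class and method names are illustrative, not graphix API):

```python
# Hypothetical, API-limited wrapper: accepts an arbitrary quimb tensor
# network and records the information the simulator layer needs, keeping
# the TN engine decoupled from the simulator.
import quimb.tensor as qtn

class TNInputState:
    """Wrap a quimb TensorNetwork used as an MBQC input state."""

    def __init__(self, tn: qtn.TensorNetwork):
        open_inds = tn.outer_inds()        # one dangling edge per qubit
        self.n_qubits = len(open_inds)
        self.qubit_indices = tuple(open_inds)
        self._tn = tn.copy()               # keep the engine object private

    def tensor_network(self) -> qtn.TensorNetwork:
        """Hand a copy to the backend; callers never mutate the original."""
        return self._tn.copy()
```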
Of course SVD is available in TN and really effective for sparsely entangled states, but there is generally a limitation on SVD, especially for highly entangled states; in the worst case, we cannot approximate at all (see the toy illustration below). I think we should not introduce errors without user instruction. Another way is to explicitly create an MPS class to allow SVD errors, but that should be done in another issue or PR. For this problem, I believe a good way is to have a self-contained and API-limited wrapper that just receives an arbitrary TN object from quimb and records the necessary information for simulation. We can decouple the TN engine from the simulator layer.
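As a toy numpy illustration of that limitation: a GHZ state has Schmidt rank 2 across any bipartition, so SVD truncation is exact and cheap, while a generic random state is essentially full rank, so truncation necessarily discards weight.

```python
# Schmidt spectra of a GHZ state vs a random state across the half chain.
import numpy as np

n = 10
half = 2 ** (n // 2)

ghz = np.zeros(2**n)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)

rng = np.random.default_rng(0)
rnd = rng.normal(size=2**n)
rnd /= np.linalg.norm(rnd)

for name, psi in [("GHZ", ghz), ("random", rnd)]:
    s = np.linalg.svd(psi.reshape(half, half), compute_uv=False)
    rank = int(np.sum(s > 1e-12))
    kept = np.sum(s[:2] ** 2)   # weight kept when truncating to chi = 2
    print(f"{name}: Schmidt rank {rank}, weight kept at chi=2: {kept:.3f}")
```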
Thanks! Indeed, I think we need to think and discuss more about what to do with this backend. For input states we could try different TN architectures. Also, there are several ways of doing the computation in tensor networks (contrasted in the sketch after this list):
- either express the whole network and contract it at the end
- or perform the computation command by command and compress the network as you go
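As a toy numpy contrast of the two strategies (not graphix code): strategy (a) builds the whole operator and contracts once at the end; strategy (b) applies commands one by one and compresses the state after each step with a truncated SVD across a fixed bipartition, which is where approximation error enters.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_unitary(d):
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

n, chi = 8, 8                      # qubits, truncation rank
d, half = 2**n, 2 ** (n // 2)
psi0 = np.zeros(d, dtype=complex)
psi0[0] = 1.0
gates = [random_unitary(d) for _ in range(5)]

# (a) express the whole network, contract at the end (exact)
exact = np.linalg.multi_dot(gates[::-1]) @ psi0

# (b) command by command, compressing after each step (approximate)
psi = psi0.copy()
for g in gates:
    psi = g @ psi
    u, s, vh = np.linalg.svd(psi.reshape(half, half), full_matrices=False)
    psi = ((u[:, :chi] * s[:chi]) @ vh[:chi]).reshape(d)
    psi /= np.linalg.norm(psi)

# random dense gates entangle heavily, so the truncated run drifts away
print("overlap |<exact|compressed>|:", abs(np.vdot(exact, psi)))
```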
This can be delayed a bit. For now we would need explanations of what is done in tensornet.py https://github.com/TeamGraphix/graphix/blob/364be19de0b1cc3a9be559eb2891f2a2e56ff6b7/graphix/sim/tensornet.py#L412, especially regarding the degree <= 5 limit.