Add array.__matmul__(), "@" operator
My understanding is that this functionality is already provided by the np.linalg.dot function, but for compatibility's sake could this be supported?
Yes. The code has to be re-organised a bit, but it is definitely doable.
@CallumJHays Cal, do you want an implementation that resolves the dtypes? At the moment, for reasons of firmware size, linalg.dot always returns a float, irrespective of the input types. In this regard, the code is not completely numpy-compatible, but doing this properly is expensive.
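For reference, this is how CPython's NumPy resolves result dtypes for dot (a quick illustration of what "resolving the dtypes" would mean, not ulab code):

```python
import numpy as np

a = np.array([[1, 2], [3, 4]], dtype=np.int8)
b = np.array([[5, 6], [7, 8]], dtype=np.int8)

# NumPy keeps the common input dtype: int8 . int8 -> int8
print(np.dot(a, b).dtype)    # int8

# Mixed inputs are promoted: int8 with float32 -> float32
c = b.astype(np.float32)
print(np.dot(a, c).dtype)    # float32
```

ulab's linalg.dot, as noted above, would instead return float in both cases.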
@v923z I don't think it's necessary. At least for my purposes, floats are the main use case.
@CallumJHays Is the @ operator simply an alias for numpy.dot, or are there small nuances that we have to keep in mind?
@v923z I'm not entirely sure, but a quick Stack Overflow search reveals:
matmul differs from dot in two important ways.
- Multiplication by scalars is not allowed.
- Stacks of matrices are broadcast together as if the matrices were elements.
The last point makes it clear that the dot and matmul methods behave differently when passed 3D (or higher dimensional) arrays. Quoting from the documentation some more:
For matmul: If either argument is N-D, N > 2, it is treated as a stack of matrices residing in the last two indexes and broadcast accordingly.
For np.dot: For 2-D arrays it is equivalent to matrix multiplication, and for 1-D arrays to inner product of vectors (without complex conjugation). For N dimensions it is a sum product over the last axis of a and the second-to-last of b.
I'm working on defining some tests for these differences in terms of code, because that explanation isn't super clear to me. Will post them here when I'm done.
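In the meantime, the two documented differences can be demonstrated with stock NumPy (a sketch with made-up shapes, not the promised test suite):

```python
import numpy as np

a = np.ones((2, 3, 4))
b = np.ones((2, 4, 5))

# matmul treats the leading dimension as a stack of 2-D matrices
print((a @ b).shape)        # (2, 3, 5)

# dot sums over the last axis of a and the second-to-last of b,
# producing a cross product of the stacked leading dimensions
print(np.dot(a, b).shape)   # (2, 3, 2, 5)

# scalar multiplication: dot allows it, matmul rejects it
print(np.dot(a, 2).shape)   # (2, 3, 4)
try:
    a @ 2
except (TypeError, ValueError):
    print("matmul rejects scalars")
```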
@CallumJHays You do realise that implementing the generic case is going to be expensive, don't you? Since I am not using this function/method frequently, you have to choose the case that might be most relevant.
@v923z Would you be happy with an implementation that just maps @ to numpy.linalg.dot?
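In Python-level terms, that mapping is just a dunder alias. A toy sketch of the idea (ulab's real implementation is in C; the Array class and its use of np.dot here are purely illustrative):

```python
import numpy as np

class Array:
    """Toy wrapper showing how "@" can be aliased to an existing dot."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=float)

    def dot(self, other):
        # stand-in for linalg.dot, which always returns float
        return Array(np.dot(self.data, other.data))

    def __matmul__(self, other):
        # the "@" operator simply delegates to .dot()
        return self.dot(other)

a = Array([[1, 2], [3, 4]])
b = Array([[5, 6], [7, 8]])
print((a @ b).data)   # same result as a.dot(b)
```

With this mapping, `@` would inherit all of dot's deviations from numpy's matmul (scalars accepted, N-D handled dot-style).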
@thetazero You've seen the comments above. Complying with the numpy standard might be expensive, and if we go for a partial implementation, I'd like to discuss first which subset of the standard we keep. But I would like to see the @ operator in ulab.
There might actually be a better way than calling numpy.linalg.dot. Namely, we could iterate over the rows of the matrix and call the multiplication function. That would at least solve the problem of type resolution, which the .dot method does not handle.
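A rough pure-Python sketch of that row-iteration idea (the name matmul2d is hypothetical, and the real implementation would be C operating on ulab's typed buffers):

```python
def matmul2d(a, b):
    """Multiply two 2-D matrices (nested lists) row by row.

    Each element is accumulated with the ordinary multiplication
    operator, so the result type follows normal numeric promotion:
    integer inputs yield integer outputs. This is the type resolution
    that an always-float linalg.dot does not provide.
    """
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "shapes not aligned"
    out = []
    for i in range(rows):
        row = []
        for j in range(cols):
            acc = 0
            for k in range(inner):
                acc += a[i][k] * b[k][j]
            row.append(acc)
        out.append(row)
    return out

# integer inputs stay integer
print(matmul2d([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```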