Broadcasting for elementwise operations
Hi there, first of all, thank you for creating this library; it seems like exactly what I need for the project I'm working on at the moment. My application is a bit different from what your library is designed for: I'm working on a differential algebra toolbox that should only need relatively small tensors. I want these to be static (or at least to have that option) and to live in contiguous memory, since in my case the tensors are almost always dense. None of the existing packages I know of, such as DACE, offer this combination.
One thing I would need is a way to do element-wise multiplication with broadcasting. Here's an example, which won't compile:
```cpp
tensori<float, storage_vec<ny>> d0(0);
tensori<float, storage_vec<ny>, storage_sym<nx>> d2(0), e2(0);
d2(i,j,k) = (e2(i,j,k) * d0(i)); // even though i appears twice, I don't want it summed, as the Einstein-notation interpretation of this expression implies
```
I am working around this with the following function, which is a solution I am more than OK with for the time being. I would just like your opinion on whether there is a better way to use your library for this and similar purposes:
```cpp
template<typename T1, typename T2>
T1 elemMul_3_1(T1 const & a, T2 const & b) {
	return T1([&](int i, int j, int k) -> typename T1::Scalar {
		return a(i,j,k) * b(i);
	});
}
```
Once again, thank you for creating this library!
Thanks very much, I'm glad you like it.
It looks like you found elemMul, which is as close to what you're looking for as the library gets so far.
Here are some options for fixing this:
One option is to use the diagonal() function, which turns a vector into a symmetric matrix with the elements along the diagonal. This mathematically does what you're looking for, but produces a bunch of off-diagonal zeroes ($n(n-1)/2$ extra), which you can optimistically trust a good compiler to optimize away... but probably not.
Your operation could look like d2 = diagonal(d0) * e2.
This requires no changes to the Tensor library.
I could lift the constraint on elemMul so that the two inputs no longer have to be the same type; then your operation could just be d2 = elemMul(d0, e2).
This requires changing the elemMul function, and would take a bit of work.
Another option is I could make a "diagonal" rank-2 storage for a diagonal matrix, similar to how "ident" is represented as a single scalar, i.e. $a_{ij} := \delta_{ij} \cdot s_i$ (no sum over $i$). I didn't add this originally because of the improper index gymnastics it invites in differential geometry, but it would definitely have its uses, such as in code like this. You'd have to store d0 as a rank-2 object, but then your operation would just be d2 = d0 * e2.
This requires creating an additional storage class, and would take a bit more work.
Also you can shorthand your tensor creation as just:
```cpp
tensorx<float, ny> d0;
tensorx<float, ny, -'s', nx> d2, e2;
```
Hey there, thanks so much for replying!
Please don't spend time creating functionality just because of my question - I'm still figuring out exactly how to do what I want to do, maybe I won't end up using this package. I was just checking that I wasn't missing some functionality.
I'll keep playing around with it, using my own versions of elemMul just to get an MVP. If I end up going somewhere with this I'll let you know, otherwise there's no need to waste your time.
In any case, the best option for me would be option 2, since it would cover my needs; I also have an elemMul_2_1 which takes a rank-2 and a rank-1 tensor. If I understand correctly, the change would be as simple as allowing the types to differ from each other, with no other changes to the code? If that's the case, I could try to do it just on my side.
And thank you for the info on that shorthand!
For your code it would be as simple as relaxing the type, but for the general case I would have to mess around with the internal storage a bit.