Initial PyTorch support
## Changes in this pull request
- Link and compile with PyTorch
## Testing performed
- `make tests`
## Related issues
## Checklist before requesting a review
- [ ] I have performed a self-review of my code
- [ ] I have added docstrings/doxygen in line with the guidance in the developer guide
- [ ] I have implemented unit tests that cover any new or modified functionality (if applicable)
- [ ] The code builds and runs on my machine
- [ ] documentation/release_XXX.md has been updated with any functionality change (if applicable)
This is where we are now:
```
Running tests...
Test project /local_mount/space/celer/2/users/pytorch_playground/STIR.b
    Start 1: test_Array
1/2 Test #1: test_Array .......................   Passed    2.58 sec
    Start 2: test_VectorWithOffset
2/2 Test #2: test_VectorWithOffset ............   Passed    0.50 sec

100% tests passed, 0 tests failed out of 2
```
I know, it says absolutely nothing :)
Made quite a bit of progress, a few more boring parts left:
```
Running tests...
Test project /local_mount/space/celer/2/users/pytorch_playground/STIR.b
    Start 1: test_Array
1/3 Test #1: test_Array .......................   Passed    8.19 sec
    Start 2: test_VectorWithOffset
2/3 Test #2: test_VectorWithOffset ............   Passed    0.49 sec
    Start 3: test_Tensor
3/3 Test #3: test_Tensor ......................   Passed    0.50 sec

100% tests passed, 0 tests failed out of 3

Total Test time (real) =   9.18 sec
```
See #1589. Possibly move some stuff from here to there if I didn't cover everything yet. Let's merge #1589 quickly though (before 6.3).
@KrisThielemans Hi, I wrote the 2D `next()` check test for the TensorWrapper. Can you please confirm that it is correct (BasicCoordinate always gives me doubts):
```cpp
std::cerr << "\tTest on next() with regular array\n";
BasicCoordinate<2, int> index;
index[1] = test.get_min_index(0);
index[2] = test.get_min_index(1);
cerr << ">>>>" << index[1] << " " << index[2] << std::endl;
TensorWrapper<2, int>::const_full_iterator iter = test.begin_all_const();
do
  {
    cerr << ">>" << index[1] << " " << index[2] << std::endl;
    check(*iter == test.at(index), "test on next(): element out of sequence?");
    ++iter;
    index[2] += 1;
    if (index[2] > test.get_max_index(1))
      {
        index[2] = test.get_min_index(1);
        index[1] += 1;
      }
  }
while (iter != test.end_all());
check(iter == test.end_all(), "test on next() : did we cover all elements?");
```
Looks ok, but I guess I'd write it with a loop in terms of the index (which is closer to how STIR will be using it):
```cpp
auto iter = test.begin_all_const();
for (index[1] = test.get_min_index(0); index[1] <= test.get_max_index(0); ++index[1])
  for (index[2] = test.get_min_index(1); index[2] <= test.get_max_index(1); ++index[2])
    {
      check(*iter == test.at(index), "test on next(): element out of sequence?");
      ++iter;
    }
```
The choice to start from 1 was a very bad one... At some point, we'll have to introduce a new index-type that starts from 0 (could probably just use std::array), maybe auto-converting the BasicCoordinate to the new type.
> Looks ok, but I guess I'd write it with a loop in terms of the index (which is closer to how STIR will be using it)
I know. I wanted to closely replicate the structure of that test; it is based on a similar `do .. while`.
@KrisThielemans in the NumericVector operators you have this line:
```cpp
this->grow(std::min(this->get_min_index(), v.get_min_index()),
           std::max(this->get_max_index(), v.get_max_index()));
```
This automatically resizes the vector if it is smaller. Do you want this behavior? Shouldn't the user handle this?
I did it like this:
```cpp
if (!tensor.sizes().equals(v.tensor.sizes()))
  {
    if (!is_on_gpu())
      {
        this->grow(v.get_index_range());
      }
    else
      throw std::invalid_argument("Tensors must have the same shape for element-wise addition.");
  }
```