
Using pyEIT for larger models

Open jareer22 opened this issue 4 years ago • 8 comments

Hi @liubenyuan, I'm an undergraduate biomedical student, new to EIT, and interested in the pyEIT project. I looked into all the examples that are given; however, I'm curious whether pyEIT can be applied to larger models such as human lung datasets, or whether it is only coded for small prototypes. If it can be applied to larger models, may I know how it is done? Thank you.

jareer22 avatar May 27 '21 13:05 jareer22

You mean running pyEIT with a realistic mesh of a thorax or brain, with millions of tetrahedrons? pyEIT has not yet been optimized for running on such large models. I started coding on the NY-head years ago but stopped working on that model due to some personal issues.

Is your model the same size as that one? If so, we can start tweaking on this.

Sorry for the late reply. I was working on other projects last year.

liubenyuan avatar Feb 12 '22 10:02 liubenyuan

#46 is working on a demo using the NY-HEAD phantom (I do not know whether EITForward will fit in memory).

liubenyuan avatar May 10 '22 14:05 liubenyuan

It could be possible to run the calculations I implemented with NumPy on the GPU, if one is detected, but that means we would need to install and import a new package (e.g., TensorFlow, CuPy, etc.). It is not totally impossible, and could even be done easily with TensorFlow, since they have added an experimental API (see the TensorFlow docs). However, I am a bit reluctant to use it, as it can slow down the overall program (at least at startup), and/or make it memory-hungry and inefficient if not used correctly. CuPy, on the other hand, might be better for this use case (see the CuPy docs), but I don't know how to use it. It is also possible to interchange CuPy and NumPy (and TensorFlow as well, but that one is handled automatically by TensorFlow itself) at startup, by checking whether a GPU is detected or not.

I am not sure if this was the original question, but this is a lead I can check on if needed.
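The startup switch described above can be sketched roughly like this. This is a hypothetical pattern, not pyEIT code; the `xp` alias and `asnumpy` helper are my own names, and the sketch falls back to NumPy whenever CuPy or a usable CUDA device is missing:

```python
# Hypothetical backend switch (not part of pyEIT): bind `xp` to CuPy when a
# GPU is usable, otherwise to NumPy, so downstream array code stays identical.
try:
    import cupy
    if cupy.cuda.is_available():
        xp = cupy
    else:
        import numpy as xp  # CuPy installed, but no usable CUDA device
except ImportError:
    import numpy as xp  # CuPy not installed at all


def asnumpy(a):
    """Return a host-side NumPy array regardless of backend.

    CuPy arrays live in VRAM; cupy.asnumpy copies them back. With NumPy
    there is no such function, so the array is returned unchanged.
    """
    return xp.asnumpy(a) if hasattr(xp, "asnumpy") else a


# Backend-agnostic array code:
A = xp.eye(3)
b = xp.ones(3)
x = xp.linalg.solve(A, b)
print(asnumpy(x))  # [1. 1. 1.]
```

The key design choice is that only the import site knows which backend is active; everything else calls through `xp`, which is the interchange idea mentioned above.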

ChabaneAmaury avatar Sep 20 '22 07:09 ChabaneAmaury

Hi, CuPy deploys the computation load on the GPU, which might be a better choice. I have collected some articles/projects on precise 3D EIT simulation, though my current priority is to implement a complete electrode model (CEM) in the existing pyEIT. The CEM would make the simulation much more accurate.

liubenyuan avatar Sep 20 '22 13:09 liubenyuan

I am currently trying to implement CuPy, though it needs some heavy installation on the CUDA side. Unfortunately, it is not entirely possible to automate this during package import. I am also trying to implement a fallback method in case the installation is incomplete.
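A runtime check for an incomplete install could look something like the sketch below. This is an assumption on my part, not an existing pyEIT function: importing CuPy can succeed even when the CUDA runtime underneath is broken, so the check also queries the device count and performs a small allocation before trusting the GPU path:

```python
# Hypothetical helper (not in pyEIT): verify that CuPy *and* its CUDA
# runtime actually work before committing to the GPU backend.
def gpu_backend_ok():
    try:
        import cupy
        cupy.cuda.runtime.getDeviceCount()  # raises if the CUDA runtime is broken
        cupy.arange(1)                      # smoke test: allocate on the device
        return True
    except Exception:
        # Covers ImportError (CuPy missing) and CUDA runtime errors
        # (driver/toolkit mismatch, no device, incomplete install).
        return False
```

Calling this once at startup lets the package silently fall back to NumPy instead of crashing deep inside a solver when the CUDA installation is only half there.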

ChabaneAmaury avatar Sep 20 '22 13:09 ChabaneAmaury

Are you trying to use some big models like NY-HEAD or openSAHE? If you need any help, or there is anything I could do, please PM me.

liubenyuan avatar Sep 20 '22 14:09 liubenyuan

Not right now; first I aim to set up a working installation as easily as possible (since it must be set up manually by the user). Once that is done and everything can be performed on the GPU, I will take a look at this.

ChabaneAmaury avatar Sep 20 '22 15:09 ChabaneAmaury

Update: it seems the GPU implementation requires reworking the module entirely, as well as limiting its use to Linux users only (specifically because of at least the scipy.spatial.Delaunay import). The GPU may, in theory, be a good idea, at least for accelerating calculations, but my findings so far are as follows:

  • The available memory is drastically reduced, since it is limited to the GPU memory (VRAM)
  • The installation process is heavier and longer, and needs some adjustments from the user if the module is not in a Conda env
  • Some necessary dependencies cannot implicitly work with both NumPy and CuPy, meaning we need to adjust each use of them in pyEIT (not only the imports, of course)
  • It works only with Nvidia GPUs
  • Partial use of the GPU (possible with only minimal code adjustments) makes the program slower, as data must constantly move between the GPU and RAM
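The last point is worth illustrating. The sketch below is hypothetical and uses NumPy as a stand-in for CuPy (so it runs anywhere); the comments mark where, with CuPy, each hop would be a real PCIe copy between VRAM and RAM. Both pipelines compute the same thing, but the first one crosses the device boundary at every step:

```python
import numpy as np  # stand-in for cupy; with cupy each marked hop is a PCIe transfer


def slow_pipeline(a):
    """Anti-pattern: pull data back to the host between every step.

    With CuPy, each cp.asnumpy()/cp.asarray() pair copies the whole array
    across the PCIe bus, which can dominate the runtime for small kernels.
    """
    for _ in range(3):
        host = np.asarray(a)   # device -> host (cp.asnumpy with CuPy)
        host = host * 2.0      # step done on the host
        a = np.asarray(host)   # host -> device (cp.asarray with CuPy)
    return a


def fast_pipeline(a):
    """Keep data on the device for the whole chain, copy back once."""
    for _ in range(3):
        a = a * 2.0            # every step stays on the device
    return np.asarray(a)       # single device -> host copy at the end
```

With NumPy the two are equivalent; with CuPy, the first version pays six transfers per call and the second pays one, which is why a partial port of pyEIT's solvers would likely end up slower than no port at all.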

I will try to take a look at the problem itself with a big model like NY-HEAD, find the bottleneck, and assess the possibility of improving it.

ChabaneAmaury avatar Sep 20 '22 23:09 ChabaneAmaury