
enable simultaneous installation of real and complex builds

Open drew-parsons opened this issue 5 years ago • 7 comments

Currently the dolfinx build is set up to produce a single build with a single installation, which can be configured for either real-number or complex-number support. But both cannot be installed at the same time: there is only one libdolfinx.so.

It would probably be a good idea to enable both real and complex versions to be installed side by side and accessible to applications. I've got some patches doing this in the debian build, see https://salsa.debian.org/science-team/fenics/dolfinx/-/tree/master/debian . Build configuration is in debian/rules; patches are in debian/patches.

The important patch, which shouldn't be too invasive, is lib_rename.patch, attached here as lib_rename.patch.txt. It renames the shared library to, for example, libdolfinx_real.so or libdolfinx_complex.so instead of libdolfinx.so. It operates by appending ${LIB_NAME_EXT} as a suffix to libdolfinx wherever the dolfinx library name is referenced. In the patch I set LIB_NAME_EXT to '_real' or '_complex' depending on the value of PETSC_SCALAR_COMPLEX, but in principle it could be handled differently, for instance as a variable controlled at the cmake command line (e.g. "cmake -DLIB_NAME_EXT=special_build ..").
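The suffix selection can be sketched in shell terms (the patch does this inside CMake; the variable names follow the patch, but the snippet itself is only illustrative):

```shell
# Illustrative only: map the PETSc scalar type to a library-name suffix,
# as lib_rename.patch does inside CMake.
PETSC_SCALAR_COMPLEX=1   # 1 for a complex PETSc build, 0 for real
if [ "$PETSC_SCALAR_COMPLEX" = 1 ]; then
  LIB_NAME_EXT=_complex
else
  LIB_NAME_EXT=_real
fi
echo "libdolfinx${LIB_NAME_EXT}.so"
```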

My patch also handles the pkgconfig file, creating dolfinx_real.pc or dolfinx_complex.pc rather than dolfinx.pc. Some further (simple) work might be needed to select which of these is preferred (in Debian it's easy to set up alternatives, with dolfinx.pc created as a link pointing to the preferred dolfinx_real.pc or dolfinx_complex.pc).
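The alternatives idea amounts to making dolfinx.pc a symlink to whichever build is preferred. A minimal sketch, run in a scratch directory with dummy .pc contents (the real files would be generated by the build):

```shell
# Scratch directory standing in for the pkgconfig install location.
pcdir=$(mktemp -d)
printf 'Name: dolfinx\nDescription: real build\nVersion: 0.1\n'    > "$pcdir/dolfinx_real.pc"
printf 'Name: dolfinx\nDescription: complex build\nVersion: 0.1\n' > "$pcdir/dolfinx_complex.pc"
# Select the real build as the default; pkg-config sees it as dolfinx.pc.
ln -s dolfinx_real.pc "$pcdir/dolfinx.pc"
grep 'Description' "$pcdir/dolfinx.pc"   # resolves through the link
```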

What I do not yet have in place is handling the DOLFINX cmake files, e.g. DOLFINXConfig.cmake and friends installed in /usr/share/dolfinx/cmake/. This needs to be handled to deal with lines like

find_package(DOLFINX REQUIRED)

in the CMakeLists.txt file for applications (e.g. tests and demos). This last point is connected to Issues #888 and #893, being addressed in Francesco's branch francesco/issue888, PR #897.

drew-parsons avatar Mar 23 '20 05:03 drew-parsons

Since the dolfinx build is essentially controlled by PETSC_DIR, one pathway is to hijack the PETSc installation infrastructure, i.e. install dolfinx components under $PETSC_DIR. This is what I've done with the dolfinx python modules, installing them into $PETSC_DIR/lib/python3/dist-packages/ following the pattern suggested by Lisandro Dalcin, the petsc4py author, for real and complex builds of petsc4py. I place a python path file dolfinx.pth in the standard python module directory redirecting at runtime to the required python module according to the value of PETSC_DIR.
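The .pth mechanism can be sketched as follows; the file content here is hypothetical (the actual Debian file differs in detail), but it shows the principle: site.py executes "import ..." lines in *.pth files at interpreter startup, so a one-liner can append the PETSC_DIR-specific dist-packages directory to sys.path.

```shell
# Scratch directory standing in for /usr/lib/python3/dist-packages.
scratch=$(mktemp -d)
cat > "$scratch/dolfinx.pth" <<'EOF'
import os, sys; p = os.environ.get('PETSC_DIR'); p and sys.path.append(os.path.join(p, 'lib/python3/dist-packages'))
EOF
# Simulate interpreter startup with that directory registered as a site dir:
# the module search path then follows whichever build PETSC_DIR points at.
PETSC_DIR=/opt/petsc-real python3 -c "
import site, sys
site.addsitedir('$scratch')
assert any(p.startswith('/opt/petsc-real') for p in sys.path)
"
```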

In the same way, one way of handling the build-specific DOLFINX cmake files could be to install, say, into $PETSC_DIR/share/dolfinx/cmake. Then some wrapper cmake files installed in the standard cmake location could redirect to the build-specific files according to the value of PETSC_DIR. cmake is complicated, so perhaps there are other ways of handling it, but this way seems clean enough.
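The wrapper idea could look something like this; the wrapper file below is hypothetical (dolfinx does not ship it), but it shows how a DOLFINXConfig.cmake installed in the standard cmake search path could chain to the build-specific config under $PETSC_DIR:

```shell
# Scratch directory standing in for the standard cmake package location.
wrapdir=$(mktemp -d)
cat > "$wrapdir/DOLFINXConfig.cmake" <<'EOF'
# Hypothetical wrapper: defer to the config of the build PETSC_DIR selects.
if(NOT DEFINED ENV{PETSC_DIR})
  message(FATAL_ERROR "PETSC_DIR must be set to select a dolfinx build")
endif()
include("$ENV{PETSC_DIR}/share/dolfinx/cmake/DOLFINXConfig.cmake")
EOF
```

With this in place, find_package(DOLFINX REQUIRED) in an application would resolve against whichever build PETSC_DIR points at, without the application's CMakeLists.txt needing to know about the real/complex split.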

drew-parsons avatar Mar 23 '20 05:03 drew-parsons

I do not see an easy way to do this. How would this work on the python side? We can only have one python package named dolfinx. Upon installation the other package would be removed. Furthermore, at runtime it would have to switch between two pybind11 libraries (either dolfinx-real or dolfinx-complex) which would have to refer to the appropriate C++ {real, complex} library.

francesco-ballarin avatar Mar 23 '20 06:03 francesco-ballarin

I wouldn't bother with this. We'd be turning ourselves inside out to work around a PETSc shortcoming. Most don't use complex builds, and for those who do we have Docker containers.

We're working to untangle PETSc from DOLFINX, partly to overcome the fixed float and integer types, so there may be a better solution coming.

garth-wells avatar Mar 23 '20 07:03 garth-wells

@francesco-ballarin it's already done for the python modules. Copy the modules to a convenient location (I've used $PETSC_DIR/lib/python3/dist-packages/) and get access to them via a dolfinx.pth. Disambiguating libdolfinx_real.so from libdolfinx_complex.so helps make it work.

@garth-wells It's a subtler challenge if you're trying to decouple from PETSc anyway, although the question of providing access to real vs complex builds can be handled without relying on PETSc as such. You've got docker as a solution for that. For cloud computing based on Ubuntu images I think it would be useful to have system packages, so I'll keep playing with the CMake scripts and look for a tidy patch.

drew-parsons avatar Mar 23 '20 07:03 drew-parsons

Copy the modules to a convenient location (I've used $PETSC_DIR/lib/python3/dist-packages/)

I am not sure I follow. Do you mean copying dolfinx modules inside petsc4py dir?

francesco-ballarin avatar Mar 23 '20 08:03 francesco-ballarin

No, I mean creating a python path file inside the PETSc dir. Copying inside petsc, not petsc4py; petsc4py just happens to be installed in the same location, so the dolfinx module is a neighbour module alongside petsc4py. To be explicit, in a "normal" installation (not distinguishing real from complex, i.e. a real-number build if you like), you would have the modules installed in

/usr/lib/python3/dist-packages/petsc4py/
/usr/lib/python3/dist-packages/dolfinx/

alongside all of the other usual python modules found in /usr/lib/python3/dist-packages/

What Lisandro proposed was to install petsc4py not in /usr/lib/python3/dist-packages/, but in $PETSC_DIR/lib/python3/dist-packages/, say in

/usr/lib/petscdir/petsc3.12/x86_64-linux-gnu-real/lib/python3/dist-packages/

That has the advantage of making petsc4py tightly bound to the PETSc it was built against, as it should be. (From a packaging perspective, it also allows petsc4py-3.8 (for PETSc 3.8) to be provided at the same time as petsc4py-3.12 (for PETSc 3.12), in case PETSc versions are important for particular applications.)

Here I'm suggesting to use the same mechanism for the same kind of reason, to enable different dolfinx builds (e.g. real and complex) to be installed at the same time. So under the PETSC_DIR we'd end up with both

/usr/lib/petscdir/petsc3.12/x86_64-linux-gnu-real/lib/python3/dist-packages/petsc4py

and

/usr/lib/petscdir/petsc3.12/x86_64-linux-gnu-real/lib/python3/dist-packages/dolfinx

with /usr/lib/python3/dist-packages/petsc4py.pth and /usr/lib/python3/dist-packages/dolfinx.pth using PETSC_DIR to sort out which build is which.

If it were desirable to liberate dolfinx from such a tight dependency on PETSc, then the same kind of mechanism could be set up with a DOLFINX_DIR, DOLFINX_BUILD or DOLFINX_TYPE environment variable to distinguish dolfinx builds rather than using PETSC_DIR, and installations of different builds could be made under /usr/lib/dolfinx_dir (just as PETSC_DIR points at petsc builds in /usr/lib/petscdir, on Debian at least).

drew-parsons avatar Mar 23 '20 08:03 drew-parsons

Looking through the cmake files, none of those in /usr/share/dolfinx/cmake are build-specific, so there's no need to make special installation arrangements for them.

The exception is DOLFINXTargets-relwithdebinfo.cmake which identifies the shared library (libdolfinx.so). With a touch of surgery this can be made build-adaptive. This operation seems to do the trick, selecting libdolfinx_real.so or libdolfinx_complex.so depending on the flag:

sed "s/set(CMAKE_IMPORT_FILE_VERSION 1)/set(CMAKE_IMPORT_FILE_VERSION 1)\n\nif(PETSC_SCALAR_COMPLEX)\n  set(LIB_NAME_EXT \"_complex\")\nelse()\n  set(LIB_NAME_EXT \"_real\")\nendif()/; \
      s/libdolfinx_real.so/libdolfinx${LIB_NAME_EXT}.so/g"  -i ${rootdir}/usr/share/dolfinx/cmake/DOLFINXTargets-relwithdebinfo.cmake

i.e. injecting

if(PETSC_SCALAR_COMPLEX)
  set(LIB_NAME_EXT "_complex")
else()
  set(LIB_NAME_EXT "_real")
endif()

and then using libdolfinx${LIB_NAME_EXT}.so to define the library.

It's a little messy in the sense that DOLFINXTargets-relwithdebinfo.cmake is itself generated automatically by the cmake build, but it seems to work as intended once installed.

drew-parsons avatar Mar 24 '20 03:03 drew-parsons

We template over the scalar type, so the only substantive issue is PETSc. We can run more-or-less without PETSc, so I don't think dealing with PETSc real and complex (or float32 and float64 for that matter) is worth the effort.

garth-wells avatar Oct 25 '23 12:10 garth-wells