
MPI Friendly GC

jkozdon opened this issue on Dec 07 '21 · 4 comments

We should consider implementing what GridapPETSc.jl has done for GC with MPI objects.

Basically, the Julia finalizer registers the object for destruction with PetscObjectRegisterDestroy; see for example PETScLinearSolverNS.
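
For concreteness, here is a minimal sketch of what such a finalizer could look like. The wrapper struct, field names, and the PETSC.PetscObjectRegisterDestroy spelling are placeholders for illustration, not the actual PETSc.jl / GridapPETSc.jl API:

# Hedged sketch: the wrapper type and PETSC.* call spellings are assumptions.
mutable struct PetscObjectWrapper
  ptr::Ptr{Cvoid}   # underlying PETSc object handle
end

function register_lazy_destroy!(obj::PetscObjectWrapper)
  finalizer(obj) do o
    # Destroying a parallel PETSc object is collective, so it cannot be done
    # safely from the GC of a single rank. Instead, register the object for
    # destruction when PETSc is finalized (or when RegisterDestroyAll runs).
    if o.ptr != C_NULL
      @check_error_code PETSC.PetscObjectRegisterDestroy(o.ptr)
      o.ptr = C_NULL
    end
  end
  return obj
end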

Of course, this means the object is not destroyed until PETSc is finalized. If the user wants to destroy things sooner, they can call a function gridap_petsc_gc:

# In an MPI environment context,
# this function has global collective semantics.
function gridap_petsc_gc()
  GC.gc()
  @check_error_code PETSC.PetscObjectRegisterDestroyAll()
end

By first calling GC.gc(), all objects are properly registered via PetscObjectRegisterDestroy, and the call to PetscObjectRegisterDestroyAll actually destroys them.
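
A hedged usage sketch (create_petsc_objects and the variable names are placeholders; the key point is that every rank must reach the call, since it has collective semantics):

using MPI
MPI.Init()
A, x = create_petsc_objects()   # hypothetical setup that creates PETSc objects
# ... use A and x ...
A = nothing; x = nothing        # drop the Julia references so finalizers can run
gridap_petsc_gc()               # collective: GC.gc() registers, RegisterDestroyAll destroys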

The only change I would make is to still allow manual destruction of objects if this is desired for performance reasons (though I don't know if this is really ever needed).
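
Something like the following hedged sketch would cover that case; destroy! and the PETSC.PetscObjectDestroy spelling are assumptions, not the real bindings. The eager path destroys the object immediately, which is collective, so all ranks must call it at the same point:

# Hedged sketch of an opt-in eager destroy for the wrapper type above.
function destroy!(obj::PetscObjectWrapper)
  if obj.ptr != C_NULL
    r = Ref(obj.ptr)                              # destroy routines take the address
    @check_error_code PETSC.PetscObjectDestroy(r) # collective on the object's communicator
    obj.ptr = C_NULL
  end
  return nothing
end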

h/t: @amartinhuertas in https://github.com/JuliaParallel/PETSc.jl/issues/146#issuecomment-987425710

jkozdon · Dec 07 '21 17:12

@jkozdon Note there is a caveat here with the use of PetscObjectRegisterDestroy. PETSc holds a global data structure of objects registered for lazy destruction, and this structure has a maximum capacity. By default it is 256 (although it can be increased via the corresponding CPP macro at configuration time). If you exceed that size, an error is produced (see https://github.com/gridap/GridapPETSc.jl/pull/42 for more details). Our workaround is to inject calls to gridap_petsc_gc() at strategic points within GridapPETSc.jl. I know it is far from ideal, but this is the best idea that came to my mind given such constraints.
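
As an illustration of the "strategic points" idea (run_simulation and solve_step are hypothetical, just to show where the call goes): placing gridap_petsc_gc() at a deterministic point that every rank reaches keeps the collective call synchronized and bounds how many objects sit in the lazy-destroy table.

# Hedged sketch: flush lazily-registered objects once per iteration of a
# deterministic loop, so the (default) 256-slot table cannot overflow and
# every rank makes the collective call at the same point.
function run_simulation(steps)
  for s in 1:steps
    solve_step(s)        # hypothetical work that creates and drops PETSc objects
    gridap_petsc_gc()    # collective flush of registered objects
  end
end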

amartinhuertas · Dec 07 '21 21:12

Good to know. Thanks!

jkozdon · Dec 07 '21 22:12