createdb out of memory
Expected Behavior
createdb creates db without error
Current Behavior
The createdb command is killed after processing 261 million genes on a node with 500 GB of memory.
Steps to Reproduce (for bugs)
Please make sure to execute the reproduction steps with newly recreated and empty tmp folders.
It will be difficult to send you my sequence file.
MMseqs Output (for bugs)
Please make sure to also post the complete output of MMseqs. You can use gist.github.com for large output.
createdb genecatalog/input.faa genecatalog/input_mmseqdb/db
MMseqs Version: 11.e1a1c
Converting sequences
[=================================================================================================== 1 Mio. sequences processed
=================================================================================================== 2 Mio. sequences processed
=================================================================================================== 3 Mio. sequences processed
...
=================================================================================================== 260 Mio. sequences processed
=================================================================================================== 261 Mio. sequences processed
=======================================================================================
Context
Providing context helps us come up with a solution and improve our documentation for the future.
slurm error:
slurmstepd: error: Detected 1 oom-kill event(s) in step 33218922.batch cgroup. Some of your processes may have been killed by the cgroup out-of-memory handler.
Your Environment
Include as many relevant details about the environment you experienced the bug in.
- Git commit used (The string after "MMseqs Version:" when you execute MMseqs without any parameters):
- Which MMseqs version was used (Statically-compiled, self-compiled, Homebrew, etc.):
- For self-compiled and Homebrew: Compiler and Cmake versions used and their invocation:
- Server specifications (especially CPU support for AVX2/SSE and amount of system memory):
- Operating system and version:
Is createdb expected to use so much memory?
Would compressing the input or output help?
@SilasK there is only one allocation in createdb, so this should have been at most ~500 MB at the time the process was killed:
sourceLookup[splitIdx].emplace_back(fileIdx);
Maybe writing to disk was so slow that the write buffer grew without bound. My server has slow I/O.
I solved it by using an older version of MMseqs.
This might be an issue on our side. Another user reported similar behavior. We need to fix this. Which version works for you?
It’s mmseqs2=3
I know it’s old but it’s the one I started with.
Hi @martin-steinegger, any luck on this? Thanks in advance.
Thanks to @b-tierney I got a clue about where it crashes. It seems the lookup-writing function has an issue. The newest MMseqs (in git) has a flag, --write-lookup, which can skip creating the lookup file. Since the lookup file is not needed for the majority of use cases, there should be no harm in omitting it:
mmseqs createdb input.fasta input --write-lookup 0
@martin-steinegger Hello, is this still the preferred way to deal with this issue? From which version on is it implemented?
I wonder if it has to do with a slow shared-filesystem?