Memory error
Description of bug
It looks like SPAdes is killing my run due to a memory allocation error. As per the uploaded log file, you can see I have allocated 900 GB of RAM, yet it only needs about 470 GB at the stage where it runs into this error. Why is this happening, and what can I do to prevent it?
Thank you very much for your help!
spades.log
params.txt
Command line: /shared/c3/apps/centos8/SPAdes-3.15.3-Linux/bin/spades.py --pe1-1 /scratch/u12034652_aad/clean.fastq.to.assemble/402626_S24_L001_R1_filtered.fq --pe1-2 /scratch/u12034652_aad/clean.fastq.to.assemble/402626_S24_L001_R2_filtered.fq --pe1-1 /scratch/u12034652_aad/clean.fastq.to.assemble/402626_S24_L002_R1_filtered.fq --pe1-2 /scratch/u12034652_aad/clean.fastq.to.assemble/402626_S24_L002_R2_filtered.fq -o /scratch/u12034652_aad/clean.fastq.to.assemble/spades.assembly.output/402626_S24_output --meta -t 12 -m 900
System information:
  SPAdes version: 3.15.3
  Python version: 3.8.3
  OS: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28

Output dir: /scratch/u12034652_aad/clean.fastq.to.assemble/spades.assembly.output/402626_S24_output
Mode: read error correction and assembling
Debug mode is turned OFF

Dataset parameters:
  Metagenomic mode
  Reads:
    Library number: 1, library type: paired-end
      orientation: fr
      left reads: ['/scratch/u12034652_aad/clean.fastq.to.assemble/402626_S24_L001_R1_filtered.fq', '/scratch/u12034652_aad/clean.fastq.to.assemble/402626_S24_L002_R1_filtered.fq']
      right reads: ['/scratch/u12034652_aad/clean.fastq.to.assemble/402626_S24_L001_R2_filtered.fq', '/scratch/u12034652_aad/clean.fastq.to.assemble/402626_S24_L002_R2_filtered.fq']
      interlaced reads: not specified
      single reads: not specified
      merged reads: not specified
Read error correction parameters:
  Iterations: 1
  PHRED offset will be auto-detected
  Corrected reads will be compressed
Assembly parameters:
  k: [21, 33, 55]
  Repeat resolution is enabled
  Mismatch careful mode is turned OFF
  MismatchCorrector will be SKIPPED
  Coverage cutoff is turned OFF
Other parameters:
  Dir for temp files: /scratch/u12034652_aad/clean.fastq.to.assemble/spades.assembly.output/402626_S24_output/tmp
  Threads: 12
  Memory limit (in Gb): 900
SPAdes version
SPAdes-3.15.3
Operating System
Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28
Python Version
python-3.8.3
Method of SPAdes installation
conda
No errors reported in spades.log
- [X] Yes
Hello
At the time of the failure, SPAdes had used ~700 GB of RAM, and your OS failed to fulfil SPAdes' request to allocate another 4 MB. I would suggest ensuring that 900 GB of RAM is indeed available to your SPAdes job (note that the -m option does not allocate anything; it only sets the upper memory bound that SPAdes will never exceed).
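Since `-m` is only a ceiling, it is worth confirming what the scheduler actually granted before SPAdes starts. A minimal sketch, assuming a SLURM cluster (the `#SBATCH` directives and cgroup paths are assumptions; adjust to whatever your HPC uses):

```shell
#!/bin/bash
#SBATCH --mem=900G          # hypothetical directive: request the full 900 GB
#SBATCH --cpus-per-task=12

# Print the node's total memory and the job's actual cgroup limit:
free -g
if [ -r /sys/fs/cgroup/memory.max ]; then                      # cgroup v2
    echo "job memory limit: $(cat /sys/fs/cgroup/memory.max)"
elif [ -r /sys/fs/cgroup/memory/memory.limit_in_bytes ]; then  # cgroup v1
    echo "job memory limit: $(cat /sys/fs/cgroup/memory/memory.limit_in_bytes)"
fi

# Leaving headroom below the allocation (e.g. -m 850 under a 900 GB job)
# keeps the OS and other processes from pushing the job over its limit:
# spades.py ... -m 850
```

If the printed limit is below 900 GB, the job received less memory than requested, which would explain the failed 4 MB allocation.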
As far as I can see, you have quite a complex and tangled assembly graph, hence the elevated memory consumption (though the dataset itself is also huge). One way to potentially reduce memory consumption is to perform some heavy quality trimming before assembly.
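As a sketch of the heavy trimming suggested above, here is one possible fastp invocation. The tool choice, thresholds, and file names are my assumptions, not a SPAdes recommendation; Trimmomatic or bbduk would work equally well:

```shell
# Aggressive quality trimming: sliding-window cut from the 3' end at
# mean Q20, then drop read pairs shorter than 50 bp.
fastp \
  -i 402626_S24_L001_R1_filtered.fq -I 402626_S24_L001_R2_filtered.fq \
  -o 402626_S24_L001_R1_trimmed.fq -O 402626_S24_L001_R2_trimmed.fq \
  --cut_right --cut_right_window_size 4 --cut_right_mean_quality 20 \
  --qualified_quality_phred 20 --length_required 50 \
  --thread 12
```

Repeat per lane, then feed the trimmed files to `spades.py` in place of the `*_filtered.fq` inputs. Fewer, cleaner reads generally mean fewer erroneous k-mers and a smaller graph.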
Okay, great! Yeah, I am running this on a university HPC. I am allocating myself the full amount of memory when setting up my job, but perhaps I'm getting bumped for some reason.
Thank you for the advice!
Nathan