DRAM2 fails on large batches due to ARG_MAX
Hello, I've run into an issue running ~1200 genomes through DRAM2. Specifically, the combine_annotations.py step errors out because ARG_MAX has been exceeded. Looking at the .command.sh, the command line is enormous, since every annotation file generated for each genome is passed as an argument. I'm not sure what the workaround would be other than changing how this step takes its input (e.g. a list file, sketched below) or running it in smaller batches. Anyway, I thought I'd flag this since it puts an upper limit on batch size.
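To illustrate the list-as-input idea, here's a minimal sketch; the --input-list flag and read_manifest helper are hypothetical, not DRAM2's actual CLI. ARG_MAX caps the combined size of argv and the environment (getconf ARG_MAX shows the limit on a given node), so reading the paths from a manifest file sidesteps the limit regardless of batch size.

```python
# Hypothetical sketch of a file-list interface for combine_annotations.py.
# Instead of receiving thousands of paths via argv (which trips ARG_MAX),
# the script reads one annotation path per line from a manifest file.
import argparse
from pathlib import Path


def read_manifest(manifest: Path) -> list[Path]:
    """Return one annotation path per non-empty line of the manifest."""
    return [Path(line.strip()) for line in manifest.read_text().splitlines()
            if line.strip()]


def main() -> None:
    parser = argparse.ArgumentParser(description="Combine per-genome annotations")
    parser.add_argument("--input-list", type=Path, required=True,
                        help="text file listing one annotation path per line")
    args = parser.parse_args()
    files = read_manifest(args.input_list)
    print(f"would combine {len(files)} annotation files")


if __name__ == "__main__":
    main()
```

On the Nextflow side, something like collectFile(name: 'annotation_paths.txt', newLine: true) on the channel of annotation files could produce that manifest, though I haven't tested this against the DRAM2 workflow.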
/project/thrash_89/zjhenning/cfg/CAREER_MAGs/Final_nGOM_MAGs/DRAM/work/70/a55b8c8460dae363ca5d2224abe63a/.command.sh: line 9: /opt/conda/envs/dram2-env/bin/python: Argument list too long
/var/spool/slurm/d/job29422379/slurm_script: line 9: /home1/zjhennin/miniconda3/bin/python: Argument list too long
Above is the error from combine_annotations.log, and below is the corresponding entry from .nextflow.log:
Command exit status:
126
Command output:
(empty)
Command error:
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
Work dir:
/project/thrash_89/zjhenning/cfg/CAREER_MAGs/Final_nGOM_MAGs/DRAM/work/70/a55b8c8460dae363ca5d2224abe63a
To follow up: are there defined cutoffs for the HMM results, etc. that I could use to merge the annotations myself?
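For context, a manual merge along the lines below is what I have in mind; the glob pattern and the assumption that each genome produced one TSV with a shared header are placeholders, and it applies no score filtering, hence the question about cutoffs.

```python
# Sketch of concatenating per-genome annotation tables by hand, assuming
# each genome produced one TSV with a shared header. The file layout is a
# guess, and no HMM score cutoffs are applied here.
import glob

import pandas as pd

paths = sorted(glob.glob("annotations/*_annotations.tsv"))  # placeholder layout
frames = [pd.read_csv(path, sep="\t") for path in paths]
combined = pd.concat(frames, ignore_index=True)
combined.to_csv("combined_annotations.tsv", sep="\t", index=False)
```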
Sorry I didn't get to this until now, but #447 fixes this.