xl5525
> Huh... I followed exactly what you did by pasting your data and commands and was able to load the domains.txt file.
>
> Did you get any...
```
#!/bin/bash
# this file is snakemake.sh

snakemake -j 18 --keep-going --verbose \
    --jobs 10 --profile config --executor slurm \
    --latency-wait 120 --use-conda --conda-frontend mamba --conda-base-path "../mambaforge"
```
If config/config.v8+.yaml...
```
sbatch snakemake.sh
```
```
Using profile config for setting default command line arguments.
Building DAG of jobs...
shared_storage_local_copies: True
remote_exec: False
SLURM run ID: 36940d04-e904-4fa8-916d-26ba4fb2c04b
Using shell: /usr/bin/bash
Provided remote nodes: 10
Job stats:...
```
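Regarding the config/config.v8+.yaml mentioned above: here is a minimal sketch of how such a Snakemake >=8 SLURM profile could be laid out. The resource keys are standard executor-plugin names, but every value (account, partition, memory, runtime) is a placeholder assumption, not something taken from this thread.

```
#!/bin/bash
# hypothetical sketch: create a minimal Snakemake >=8 profile in ./config
# (account, partition and memory values are placeholders, not from this thread)
mkdir -p config
cat > config/config.v8+.yaml <<'EOF'
executor: slurm
jobs: 10
default-resources:
  slurm_account: "my_account"    # placeholder account
  slurm_partition: "general"     # placeholder partition
  mem_mb_per_cpu: 4000           # maps to sbatch --mem-per-cpu
  runtime: 120                   # minutes
EOF
```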
> ```
> #!/bin/bash
> # this file is snakemake.sh
>
> snakemake -j 18 --keep-going --verbose \
>     --jobs 10 --profile config --executor slurm \
>     --latency-wait 120 --use-conda...
> ```
I see your point. A job that runs in the foreground for several hours can be killed at any time for no reason, so I will simply use "mem_mb_per_cpu", which works fine for sbatch snakemake...
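In case it is useful, here is a rough sketch of how that setup could look when the orchestrating Snakemake process itself is submitted with sbatch and mem_mb_per_cpu is set as a default resource. The #SBATCH values and the memory number are assumptions for illustration, not the actual settings used here.

```
#!/bin/bash
# hypothetical variant of snakemake.sh submitted via sbatch, so the
# orchestrating process gets its own allocation instead of running interactively
#SBATCH --job-name=snakemake-main   # placeholder job name
#SBATCH --time=2-00:00:00           # placeholder walltime for the orchestrator
#SBATCH --mem-per-cpu=2G            # placeholder memory for the orchestrator itself

# mem_mb_per_cpu is applied per submitted rule/job and maps to sbatch --mem-per-cpu;
# the value below is only an example
snakemake --jobs 10 --profile config --executor slurm \
    --keep-going --latency-wait 120 --use-conda \
    --default-resources mem_mb_per_cpu=4000
```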