Sam Nooij
I ran into the same issue with FastANI version 1.33, installed via conda. Downgrading gsl did not work and instead gave the error:

```
Encountered problems while solving:
  - package fastani-1.33-h0fdf51a_1...
```
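Rather than downgrading gsl in an existing environment, a fresh environment with both packages pinned together might be worth trying. A minimal sketch (the gsl pin below is an assumption, not a confirmed fix):

```
# Hypothetical workaround: create a clean environment that pins
# FastANI and gsl together, instead of downgrading gsl in place.
# The gsl pin (2.6) is a guess; adjust to whatever the solver accepts.
conda create -n fastani -c bioconda -c conda-forge fastani=1.33 "gsl=2.6"
conda activate fastani
fastANI --help  # confirm the binary actually runs against the pinned gsl
```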
Even though it may be a bit late, I'd like to share my experience teaching the Shell Genomics lesson. (This is the workshop link: https://aschuerch.github.io/2017-12-13-amsterdam/) We had planned to...
I have altered the Snakemake rules in https://github.com/DennisSchmitz/Jovian/commit/30a5ec0a49714964c809ca7e53611912fde2a1c1 and https://github.com/DennisSchmitz/Jovian/commit/28c1cf1a2f176a869daa711544eb02e90f98a79d. These changes should gzip all intermediate fastq files. I am still running tests to see how well this...
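For anyone reviewing those commits, the pattern boils down to writing the intermediates as `.fastq.gz` instead of plain fastq. A minimal Snakemake sketch of the idea (the rule name, paths, and trimming command here are hypothetical, not the actual Jovian rules):

```
# Minimal sketch of the gzipped-intermediates pattern; names are
# hypothetical, see the linked commits for the real Jovian rules.
rule trim_reads:
    input:
        "data/raw/{sample}_R1.fastq.gz",
    output:
        # Written as .fastq.gz so downstream rules read compressed data;
        # temp() deletes it once all consumers have run.
        temp("data/trimmed/{sample}_R1.fastq.gz"),
    shell:
        # cutadapt gzips its output automatically based on the extension
        "cutadapt -q 20 -o {output} {input}"
```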
I am done with the benchmark of 9 bacterial metagenomic datasets. In short, the conclusions are:

1. Total processing time per sample increases by about 50 minutes (from 150 minutes...
I just noticed two other rules that depend on these intermediate fastq files: `Fragment_length_analysis` and `quantify_output`. The former uses BWA, which should be able to handle gzipped fastq files...
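For reference, BWA reads gzipped fastq transparently, so `Fragment_length_analysis` probably needs no change on the input side. A minimal check (reference and file names are placeholders):

```
# BWA accepts gzipped fastq directly; no decompression step needed.
# Paths below are placeholders.
bwa index reference.fasta
bwa mem -t 4 reference.fasta sample_R1.fastq.gz sample_R2.fastq.gz > sample.sam
```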
Yes, an extra 50 minutes per sample is not a very desirable change. Also, it might be a bit of a hassle to adapt those other two rules to work with...
Yes, I think such a file will be necessary, especially because this cluster kills jobs that take more than the reserved amount of RAM, and will soon be...
I managed to get the pipeline running with a 'manual' snakemake command:

```
snakemake --profile profile --cluster "qsub -q all.q -pe BWA {threads} -l h_vmem={cluster.vmem} -cwd -j Y -V" -p --cluster-config...
```
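For completeness, the file passed to `--cluster-config` is a YAML file mapping rule names to resource requests, which the `--cluster` string picks up as `{cluster.vmem}` and so on. A minimal sketch (the values are illustrative, not our actual settings):

```
# Hypothetical cluster config; values are illustrative.
__default__:
    vmem: "4G"
Fragment_length_analysis:
    vmem: "12G"
```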
I came across an article that describes another interesting approach to supporting different schedulers, [Xenon](https://zenodo.org/record/3404191): https://peerj.com/articles/8214/#p-13. I think we had heard of this already, but never really considered it. Maybe...
I also think the table becomes a bit too long like that. For read-based quantifications, one may want to check `results/profile_read_counts.csv`, which is generated by the rule `quantify_output` (`bin/quantify_profiles.py`). These are also...
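For a quick look at that table from the command line (assuming it is a plain comma-separated file; adjust if the script writes something else):

```
# Peek at the first rows; the column layout is whatever
# bin/quantify_profiles.py writes, so adjust as needed.
head -n 5 results/profile_read_counts.csv | column -t -s,
```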