tdlong
/share/adl/tdlong/peromyscus/Progressive/PCwork/progressiveAlignment/Anc0/Anc0/Anc0_DB/ktout.log
2018-05-01T09:45:48.483191-08:00: [SYSTEM]: ================ [START]: pid=33941
2018-05-01T09:45:48.483330-08:00: [SYSTEM]: opening a database: path=:#opts=ls#bnum=30m#msiz=50g#ktopts=p
2018-05-01T09:45:48.486521-08:00: [SYSTEM]: starting the server: expr=10.1.255.117:1978
2018-05-01T09:45:48.486666-08:00: [SYSTEM]: server socket opened: expr=10.1.255.117:1978 timeout=200000.0
2018-05-01T09:45:48.486707-08:00: [SYSTEM]: listening server socket...
I was able to get a job that had worked before to run to completion again, so perhaps the problem has something to do with the length of my input sequences...
I am not sure how to use the --restart flag:

runProgressiveCactus.sh --maxThreads 32 --restart pero.txt PCwork PCwork/pero.hal
...
Usage: runProgressiveCactus.sh [options]
Required Arguments:
  File containing newick tree and sequence paths...
It has been running for a few days now since the restart. It seems to be running a number of lastz jobs in parallel. I will see whether it crashes again...
Yes, it is running out of memory despite being on a node with 500GB (see below). My computer people offered me a 1.5TB node, but in my experience if 500GB...
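To pin down how close the job actually gets to the node's RAM before it dies, a minimal sketch like the following can help (Linux-only; the script and its default PID are my own illustration, not something from the thread — in practice you would pass the cactus or ktserver PID):

```shell
#!/bin/sh
# Snapshot a process's current and peak resident memory from /proc,
# so crash times can be matched against actual memory use.
PID=${1:-$$}   # default to this shell; substitute the job's PID in practice
grep -E 'VmPeak|VmRSS' "/proc/$PID/status"
```

Running it periodically (e.g. from a watch loop or cron) during the alignment shows whether the process really approaches 500GB or whether something else kills it first.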
All 4 genomes are soft-masked (with lower-case letters). I think my next steps are to:
1. rerun without the new scaffolded assembly
2. rerun with the newer assembly as...
I am running some experiments now. I am aligning rat to my scaffolded genome, and rat to contigs only. Right now the program is in the phase where lastz is...
Progressive Cactus continues to crash. It runs for about 24 hours (depending on the number of cores) and then bad things happen; log below. It is so strange, as it used...
My BAM file is 123GB! I have a few flow cells of RNA-seq data from several different tissues that I wish to use to annotate a de novo genome assembly....
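One common way to tame a BAM that size is to work per reference sequence rather than on the whole file at once. A sketch, assuming samtools and a coordinate-sorted BAM (the file and scaffold names are placeholders, not from the thread):

# Index the merged BAM (requires coordinate sorting), then pull one
# scaffold's reads into a much smaller BAM for annotation runs.
samtools index merged.bam
samtools view -b merged.bam scaffold_1 > scaffold_1.bam

The per-scaffold BAMs can then be fed to the annotation pipeline independently, which also parallelizes nicely across a cluster.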
It is on the big memory node now. Here is the script I submitted to my SGE queuing software:

#$ -N strawberry
#$ -q bigmemory
#$ -pe openmp 80
#$ ...
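For context, a fuller version of such an SGE submission script might look like the following; the job name, queue, and core count echo the thread, while the body lines (environment setup, file paths, output names) are assumptions for illustration only:

#!/bin/bash
#$ -N strawberry
#$ -q bigmemory
#$ -pe openmp 80
#$ -cwd
#$ -j y

# Assumed body: activate the cactus environment, then run the alignment,
# letting SGE's $NSLOTS drive the thread count.
source ./environment
runProgressiveCactus.sh --maxThreads $NSLOTS seqFile.txt workDir workDir/out.hal

Submitted with qsub, the #$ lines are read by the scheduler as embedded options, so the resource request travels with the script itself.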