
Error: Floating point exception (core dumped)

Open Enorya opened this issue 7 months ago • 2 comments

Dear,

I'm trying to run HMM-flagger on a human genome assembly, but every time I run it (regardless of the amount of memory or number of cores), the process crashes with the following error message: Floating point exception (core dumped)

Here is the command I'm using to run HMM-flagger:

singularity exec --cleanenv -H $PWD -B /lustre1,/staging,/data,${VSC_SCRATCH},${TMPDIR},${VSC_SCRATCH}/tmp:/tmp flagger.v1.1.0.img hmm_flagger --input flagger_flye_coverage.cov.gz --outputDir ./ --alphaTsv ./flagger/misc/alpha_tsv/ONT_R1041_Dorado/alpha_optimum_trunc_exp_gaussian_w_8000_n_50.ONT_R1041_Dorado_DEC_2024.v1.1.0.tsv --labelNames Err,Dup,Hap,Col --threads 8

And here is the complete output message:

[2025-06-23 10:07:16] Parsing/Creating coverage chunks.
[2025-06-23 10:07:16] The given input file is not binary so chunks will be constructed from cov file.
[2025-06-23 10:07:16] Index file exists: ../flagger-step1_flye/flagger_flye_coverage.cov.gz.index
[2025-06-23 10:07:16] Parsing index ...
[2025-06-23 10:07:16] Index is parsed from disk.
[2025-06-23 10:07:16] Parsing header info for ChunksCreator.
[2025-06-23 10:07:18] Truth tag was set to false (or not defined) so truth labels will not be parsed from file.
[2025-06-23 10:07:18] Prediction tag was set to false (or not defined) so prediction labels will not be parsed from file.
[2025-06-23 10:07:18] Creating empty chunks.
[2025-06-23 10:07:18] Created a thread pool with 8 threads for parsing chunks
[2025-06-23 10:07:18] Queued 2523 jobs for the thread pool (no more than 8 jobs will be processed at a time)
[2025-06-23 10:12:57] Chunks are constructed from cov file.
[2025-06-23 10:12:57] 2523 chunks are parsed covering total length of 2891852816 bases.
[2025-06-23 10:12:57] Determining the number of components for the 'collapsed' state.
Floating point exception (core dumped)

I also tried running with more threads (up to 36) and more memory (up to 1800G), but I always end up with the same error message. I'm running the tool on a Slurm cluster, and I converted the Docker image to a Singularity image using: singularity build flagger.v1.1.0.img docker://mobinasri/flagger:v1.1.0

Do you have any idea why I end up with this error message?

Thank you in advance for your help, Enora

Enorya avatar Jun 23 '25 08:06 Enorya

Hi @Enorya, if you could share your cov.gz file, I would be able to reproduce your error on my side and help you better.

mobinasri avatar Jul 15 '25 21:07 mobinasri

Hi @mobinasri, I tried to upload the file via GitHub, but it failed because the file is too big (108 MB). I sent it to you using a Belnet Filesender; I hope you will receive it correctly. Let me know if you need anything else to investigate the issue.

Enorya avatar Jul 16 '25 06:07 Enorya