Sorting by SLA does not work in some cases since 77d6bdd0 change
**Bug description**
When a product has an SLA profile assigned which defines expiration days for certain vulnerabilities, the SLA counter starts. Some findings may expire, some may not. It is important to have those which are close to expiration sorted on top.
This worked previously (although on the client side).
After the 77d6bdd0 change, sorting the list of open findings in the product view (`defectdojo.host/product/{ID}/finding/open?o=sla_age_days`) no longer works.
However:
- in some cases it works;
- clicking the SLA column changes the order of findings, but the pattern is unclear.
**Steps to reproduce**
Unfortunately, the main problem is defining the conditions under which the sorting breaks.
In our case, we have some products where it works, and some where it does not.
Can you advise on what to look for to provide you with more information? Which product or finding properties are essential for SLA sorting?
**Expected behavior**
The list of open findings in the product view can be sorted by the SLA counter.
**Deployment method** (select with an X)
- [ ] Docker Compose
- [X] Kubernetes
- [ ] GoDojo
**Environment information**
DefectDojo: 2.52.2
Helm chart: 1.8.2
**Logs**
N/A
**Sample scan files**
N/A
**Screenshots**
Sorting by SLA:
**Additional context (optional)**
N/A
My first instinct would be to look at the database query being executed for this list. Are you able to do some sort of logging or inspection? There are also some guides on how to configure Django to log (all) queries: https://www.geeksforgeeks.org/python/how-to-log-all-sql-queries-in-django/
And what is the URL when sorting? It will contain the sort order as a parameter.
> My first instinct would be to look at the database query being executed for this list. Are you able to do some sort of logging or inspection? There are also some guides on how to configure Django to log (all) queries: https://www.geeksforgeeks.org/python/how-to-log-all-sql-queries-in-django/
I tried this approach. The article you linked is not sufficient; in the case of DefectDojo one additionally needs to set `DD_DEBUG` to `True`. It would also be nice to have this kind of logging fully controlled via environment variables, without needing to rebuild container images... A sketch of what I mean is below.
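For anyone else debugging this, a minimal sketch of the override I have in mind, assuming it lives somewhere like `dojo/settings/local_settings.py` where the base `LOGGING` dict is already defined (the `DD_LOG_SQL` variable is my own invention, not an existing DefectDojo setting):

```python
import os

# Sketch: enable logging of all SQL queries, gated on a hypothetical
# environment variable so it can be toggled without rebuilding the image.
# LOGGING is assumed to be defined by the base settings module.
# Note: Django only emits django.db.backends DEBUG records when
# settings.DEBUG is True, which is why DD_DEBUG=True is required on top.
if os.getenv("DD_LOG_SQL", "false").lower() == "true":
    LOGGING["loggers"]["django.db.backends"] = {
        "handlers": ["console"],
        "level": "DEBUG",
        "propagate": False,
    }
```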
So, on to the SQL queries issued when sorting by SLA. It turned out to be tricky to correlate uwsgi requests with the queries in the "console" handler, as the HTTP request line appears after the SQL query messages.
I believe the query below is the one of interest:
[01/Dec/2025 21:12:13] DEBUG [django.db.backends:151] (0.007) SELECT "dojo_finding"."id", "dojo_finding"."title", "dojo_finding"."date", "dojo_finding"."sla_start_date", "dojo_finding"."sla_expiration_date", "dojo_finding"."cwe", "dojo_finding"."cve", "dojo_finding"."epss_score", "dojo_finding"."epss_percentile", "dojo_finding"."known_exploited", "dojo_finding"."ransomware_used", "dojo_finding"."kev_date", "dojo_finding"."cvssv3", "dojo_finding"."cvssv3_score", "dojo_finding"."cvssv4", "dojo_finding"."cvssv4_score", "dojo_finding"."url", "dojo_finding"."severity", "dojo_finding"."description", "dojo_finding"."mitigation", "dojo_finding"."fix_available", "dojo_finding"."impact", "dojo_finding"."steps_to_reproduce", "dojo_finding"."severity_justification", "dojo_finding"."refs", "dojo_finding"."test_id", "dojo_finding"."active", "dojo_finding"."verified", "dojo_finding"."false_p", "dojo_finding"."duplicate", "dojo_finding"."duplicate_finding_id", "dojo_finding"."out_of_scope", "dojo_finding"."risk_accepted", "dojo_finding"."under_review", "dojo_finding"."last_status_update", "dojo_finding"."review_requested_by_id", "dojo_finding"."under_defect_review", "dojo_finding"."defect_review_requested_by_id", "dojo_finding"."is_mitigated", "dojo_finding"."thread_id", "dojo_finding"."mitigated", "dojo_finding"."mitigated_by_id", "dojo_finding"."reporter_id", "dojo_finding"."numerical_severity", "dojo_finding"."last_reviewed", "dojo_finding"."last_reviewed_by_id", "dojo_finding"."param", "dojo_finding"."payload", "dojo_finding"."hash_code", "dojo_finding"."line", "dojo_finding"."file_path", "dojo_finding"."component_name", "dojo_finding"."component_version", "dojo_finding"."static_finding", "dojo_finding"."dynamic_finding", "dojo_finding"."created", "dojo_finding"."scanner_confidence", "dojo_finding"."sonarqube_issue_id", "dojo_finding"."unique_id_from_tool", "dojo_finding"."vuln_id_from_tool", "dojo_finding"."sast_source_object", "dojo_finding"."sast_sink_object", "dojo_finding"."sast_source_line", "dojo_finding"."sast_source_file_path", "dojo_finding"."nb_occurences", "dojo_finding"."publish_date", "dojo_finding"."service", "dojo_finding"."planned_remediation_date", "dojo_finding"."planned_remediation_version", "dojo_finding"."effort_for_fixing", EXTRACT(DAY FROM (interval '1 day' * ("dojo_finding"."sla_expiration_date" - COALESCE("dojo_finding"."sla_start_date", ("dojo_finding"."created" AT TIME ZONE 'Europe/Zurich')::date)))) AS "sla_age_days" FROM "dojo_finding" INNER JOIN "dojo_test" ON ("dojo_finding"."test_id" = "dojo_test"."id") INNER JOIN "dojo_engagement" ON ("dojo_test"."engagement_id" = "dojo_engagement"."id") WHERE ("dojo_engagement"."product_id" = 142 AND "dojo_finding"."active") ORDER BY "dojo_finding"."date" ASC LIMIT 1; args=('Europe/Zurich', 142); alias=default
Another one next to it:
[01/Dec/2025 21:12:13] DEBUG [django.db.backends:151] (0.014) SELECT "dojo_finding"."id", "dojo_finding"."title", "dojo_finding"."date", "dojo_finding"."sla_start_date", "dojo_finding"."sla_expiration_date", "dojo_finding"."cwe", "dojo_finding"."cve", "dojo_finding"."epss_score", "dojo_finding"."epss_percentile", "dojo_finding"."known_exploited", "dojo_finding"."ransomware_used", "dojo_finding"."kev_date", "dojo_finding"."cvssv3", "dojo_finding"."cvssv3_score", "dojo_finding"."cvssv4", "dojo_finding"."cvssv4_score", "dojo_finding"."url", "dojo_finding"."severity", "dojo_finding"."description", "dojo_finding"."mitigation", "dojo_finding"."fix_available", "dojo_finding"."impact", "dojo_finding"."steps_to_reproduce", "dojo_finding"."severity_justification", "dojo_finding"."refs", "dojo_finding"."test_id", "dojo_finding"."active", "dojo_finding"."verified", "dojo_finding"."false_p", "dojo_finding"."duplicate", "dojo_finding"."duplicate_finding_id", "dojo_finding"."out_of_scope", "dojo_finding"."risk_accepted", "dojo_finding"."under_review", "dojo_finding"."last_status_update", "dojo_finding"."review_requested_by_id", "dojo_finding"."under_defect_review", "dojo_finding"."defect_review_requested_by_id", "dojo_finding"."is_mitigated", "dojo_finding"."thread_id", "dojo_finding"."mitigated", "dojo_finding"."mitigated_by_id", "dojo_finding"."reporter_id", "dojo_finding"."numerical_severity", "dojo_finding"."last_reviewed", "dojo_finding"."last_reviewed_by_id", "dojo_finding"."param", "dojo_finding"."payload", "dojo_finding"."hash_code", "dojo_finding"."line", "dojo_finding"."file_path", "dojo_finding"."component_name", "dojo_finding"."component_version", "dojo_finding"."static_finding", "dojo_finding"."dynamic_finding", "dojo_finding"."created", "dojo_finding"."scanner_confidence", "dojo_finding"."sonarqube_issue_id", "dojo_finding"."unique_id_from_tool", "dojo_finding"."vuln_id_from_tool", "dojo_finding"."sast_source_object", "dojo_finding"."sast_sink_object", "dojo_finding"."sast_source_line", "dojo_finding"."sast_source_file_path", "dojo_finding"."nb_occurences", "dojo_finding"."publish_date", "dojo_finding"."service", "dojo_finding"."planned_remediation_date", "dojo_finding"."planned_remediation_version", "dojo_finding"."effort_for_fixing", EXTRACT(DAY FROM (interval '1 day' * ("dojo_finding"."sla_expiration_date" - COALESCE("dojo_finding"."sla_start_date", ("dojo_finding"."created" AT TIME ZONE 'Europe/Zurich')::date)))) AS "sla_age_days", COALESCE((SELECT COUNT(*) AS "c" FROM "dojo_endpoint_status" U0 WHERE (U0."finding_id" = ("dojo_finding"."id") AND NOT U0."mitigated") GROUP BY U0."finding_id" LIMIT 1), 0) AS "active_endpoint_count", COALESCE((SELECT COUNT(*) AS "c" FROM "dojo_endpoint_status" U0 WHERE (U0."finding_id" = ("dojo_finding"."id") AND U0."mitigated") GROUP BY U0."finding_id" LIMIT 1), 0) AS "mitigated_endpoint_count" FROM "dojo_finding" INNER JOIN "dojo_test" ON ("dojo_finding"."test_id" = "dojo_test"."id") INNER JOIN "dojo_engagement" ON ("dojo_test"."engagement_id" = "dojo_engagement"."id") WHERE ("dojo_engagement"."product_id" = 142 AND "dojo_finding"."active") ORDER BY 71 DESC LIMIT 121; args=('Europe/Zurich', Int4(0), Int4(0), 142); alias=default
The HTTP request which produced the queries above:
[pid: 24|app: -|req: -/-] 10.134.202.186 (admin) {62 vars in 1364 bytes} [Mon Dec 1 21:12:13 2025] GET /product/142/finding/open?o=-sla_age_days&page_size=150 => generated 2437280 bytes in 1725 msecs (HTTP/1.1 200) 8 headers in 379 bytes (14 switches on core 0)
The resulting sort order was incorrect. From top to bottom: -27, 27, 27, ..., 27, 85, 85, ..., 85, 57.
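One note on the second query: its `ORDER BY 71 DESC` is a positional reference, and counting the select list shows that column 71 is the `sla_age_days` annotation, so the intended sort key does reach the database. For readers less fluent in SQL, here is my rough reconstruction of that annotation in Django ORM terms (names are guessed from the logged SQL, not taken from DefectDojo's source):

```python
from django.db.models import DateField, DurationField, ExpressionWrapper, F
from django.db.models.functions import Coalesce, ExtractDay, TruncDate

from dojo.models import Finding

# Rough reconstruction of the logged annotation: days between the SLA start
# (falling back to the finding's creation date) and the SLA expiration date.
sla_age_days = ExtractDay(
    ExpressionWrapper(
        F("sla_expiration_date")
        - Coalesce(F("sla_start_date"), TruncDate("created"),
                   output_field=DateField()),
        output_field=DurationField(),
    )
)

findings = (
    Finding.objects
    .filter(test__engagement__product_id=142, active=True)  # product 142 as in the logs
    .annotate(sla_age_days=sla_age_days)
    .order_by("-sla_age_days")
)
```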
Are you able to provide a way for us to reproduce this? I can't reproduce it, and the query that gets generated looks good to me. In https://github.com/DefectDojo/django-DefectDojo/pull/13918 I made some changes to handle None/Null values better. Can you try with those changes?
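For context (a sketch of the general technique, not necessarily what that PR implements): in PostgreSQL, NULL values sort first under `DESC`, so findings whose `sla_age_days` comes out as NULL would float to the top of a descending sort. The usual ORM-level remedy, continuing the queryset sketch above, is to pin them explicitly:

```python
from django.db.models import F

# Sketch only: pin findings with a NULL sla_age_days to the bottom of a
# descending sort, overriding PostgreSQL's default of NULLS FIRST for DESC.
findings = findings.order_by(F("sla_age_days").desc(nulls_last=True))
```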
> Are you able to provide a way for us to reproduce this? I can't reproduce it, and the query that gets generated looks good to me. In #13918 I made some changes to handle None/Null values better. Can you try with those changes?
Thanks for the update!
I checked on the test instance with your changes and, unfortunately, cannot provide much valuable input here.
My observations:
- The products I had in the test instance, with hundreds of findings accumulated over several test re-imports, still suffered from inconsistent sorting by SLA.
- A newly created Product Type / Product / Engagement / Test did not show any inconsistencies. I re-imported several (five) Trivy exports of different versions of nginx. Each import had its own date, so the SLA numbers varied.
- Then I deleted the old collection of products and their findings, which were not sorted correctly, and imported the fresh data in the same way as before. No sorting issues.
I'll check whether re-importing the next day changes the situation.
Sounds like maybe the `sla_expiration_date` is empty for those older findings. You can recalculate that field by modifying the SLA Configuration and saving it.
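If it helps to check that hypothesis, a quick probe from a Django shell (`./manage.py shell`); this is a sketch whose lookup path mirrors the joins in the logged query, with the product id taken from the logs:

```python
from dojo.models import Finding

# Count active findings of the affected product (142 in the logs above) that
# have no SLA expiration date; these would yield a NULL sla_age_days and
# could scramble the sort.
print(
    Finding.objects.filter(
        test__engagement__product_id=142,
        active=True,
        sla_expiration_date__isnull=True,
    ).count()
)
```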
> Sounds like maybe the `sla_expiration_date` is empty for those older findings. You can recalculate that field by modifying the SLA Configuration and saving it.
I've tried that; it did not help...
I now see that on the test instance SLA sorting breaks for the products which use the `/api/v2/reimport-scan/` API endpoint to import scan results. The first import via the API works. A re-import on the second day (to have date+1 as `scan_date`) breaks the sorting, although the actual SLA calculation is correct.
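To make the reproduction concrete, roughly what our pipeline does, as a sketch (host, token, and the engagement/test ids are placeholders; the `scan_type`/`scan_date`/`file` fields follow the v2 API):

```python
import requests

BASE = "https://defectdojo.example.com/api/v2"  # placeholder host
HEADERS = {"Authorization": "Token <api-key>"}  # placeholder key

# Day 1: initial import into an existing engagement.
with open("trivy-day1.json", "rb") as f:
    requests.post(
        f"{BASE}/import-scan/",
        headers=HEADERS,
        data={"scan_type": "Trivy Scan", "engagement": 42,  # placeholder id
              "scan_date": "2025-12-01"},
        files={"file": f},
    ).raise_for_status()

# Day 2: re-import into the resulting test with scan_date = date + 1;
# after this step the SLA sort on the product's open-findings view breaks.
with open("trivy-day2.json", "rb") as f:
    requests.post(
        f"{BASE}/reimport-scan/",
        headers=HEADERS,
        data={"scan_type": "Trivy Scan", "test": 1337,  # placeholder id
              "scan_date": "2025-12-02"},
        files={"file": f},
    ).raise_for_status()
```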
If you can show me how to reproduce it, for example in the demo instance, we can probably fix it.