[New Rule] Large data exfiltration from elasticsearch clusters
Description
While dumping data from our production cluster to a local cluster for troubleshooting purposes, I realised it might be worth having a detection for unusually large data exports from Elasticsearch.
I'm not sure which rule type would fit best (threshold, ES|QL, ML, ...). It could be one or several rules.
It could also be useful to reference this rule whenever Elasticsearch hits the news for a massive data leak (usually caused by a poorly configured cluster).
Target Ruleset
other
Target Rule Type
None
Tested ECS Version
No response
Query
No response
New fields required in ECS/data sources for this rule?
Probably not
Related issues or PRs
No response
References
https://cybernews.com/security/risika-swedish-data-exposed/
Redacted Example Data
No response
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This has been closed due to inactivity. If you feel this is an error, please re-open and include a justifying comment.
@elastic/threat-research-and-detection-engineering I’m surprised to see this issue being closed without any review from your side. If you believe it's not relevant or not a priority, that’s fine, we can keep it closed. However, it would be helpful to have your analysis.
@clement-fouque sorry it was closed automatically, we will review this internally. Do you have example of logs ?
@Samirbous I don't have specific logs in mind. It's a general rule, not dependent on a particular source.
I'm thinking of those rules:
- X% of the cluster's total data has been exported to a new IP
- More than X MB of data has been exported to a new IP
- Unusual amount of data exported from a specific user account
- ...
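As a starting point, the second idea (more than X MB exported to a single IP) could be sketched as an ES|QL rule summing outbound bytes per destination. The index pattern, field names, and the ~100 MB threshold below are illustrative assumptions, not a tested rule, and would need to be adapted to the actual data source (audit logs, Packetbeat, firewall logs, ...):

```esql
// Hypothetical sketch: sum bytes sent to each client IP over the rule's
// lookback window and keep clients that pulled more than ~100 MB.
// Index pattern, fields, and threshold are assumptions for illustration.
FROM logs-network_traffic-*
| WHERE destination.port == 9200
| STATS total_bytes = SUM(source.bytes) BY source.ip
| WHERE total_bytes > 100000000
| SORT total_bytes DESC
```

The "new IP" variants would additionally need a baseline comparison (e.g. a new terms rule type), and the per-user variant could aggregate by `user.name` instead of `source.ip`.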