Error purging backups on S3-compatible Google Cloud Storage
**Describe the bug**
When purging previous backups on Google Cloud Storage (using HMAC keys for the S3-compatible XML API), an error occurs.
**To Reproduce**
Steps to reproduce the behavior:
- Create a storage bucket on Google Cloud Storage
- Set up an HMAC key to access the bucket using the S3-compatible XML API
- Set up docker-volume-backup with the following env variables (see the compose sketch after the log output below):

```
BACKUP_CRON_EXPRESSION="0 1/6 * * *"
AWS_ENDPOINT=storage.googleapis.com
AWS_S3_BUCKET_NAME=<your-bucket-name>
AWS_ACCESS_KEY_ID=<your-HMAC-key-Access-ID>
AWS_SECRET_ACCESS_KEY=<your-HMAC-key-secret>
BACKUP_RETENTION_DAYS=1
```
- Backups start being created
- After 1 day, older backups should be purged, but the logs show the following error:

```
Now running script on schedule 0 1/6 * * *
Created backup of `/backup` at `/tmp/backup-2025-02-17T13-00-00.tar.gz`.
Encrypted backup using "gpg", saving as "/tmp/backup-2025-02-17T13-00-00.tar.gz.gpg"
Uploaded a copy of backup `/tmp/backup-2025-02-17T13-00-00.tar.gz.gpg` to bucket `<my-bucket>`. storage=S3
Removed tar file `/tmp/backup-2025-02-17T13-00-00.tar.gz`.
Removed encrypted file `/tmp/backup-2025-02-17T13-00-00.tar.gz.gpg`.
Unexpected error running schedule 0 1/6 * * *: A header or query you provided requested a function that is not implemented.\nEOF" error="main.runScript.func4: error running script: main.(*script).pruneBackups: error pruning backups: A header or query you provided requested a function that is not implemented.\nEOF"
```
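For reference, the setup above corresponds to a compose file roughly like the following sketch. The image tag, volume names, and the `GPG_PASSPHRASE` variable (implied by the encryption log line) are assumptions, not part of the original report:

```yaml
services:
  backup:
    image: offen/docker-volume-backup:v2.43.2
    environment:
      BACKUP_CRON_EXPRESSION: "0 1/6 * * *"
      AWS_ENDPOINT: storage.googleapis.com
      AWS_S3_BUCKET_NAME: <your-bucket-name>
      AWS_ACCESS_KEY_ID: <your-HMAC-key-Access-ID>
      AWS_SECRET_ACCESS_KEY: <your-HMAC-key-secret>
      BACKUP_RETENTION_DAYS: "1"
      GPG_PASSPHRASE: <your-passphrase>  # the logs show GPG encryption is enabled
    volumes:
      - app_data:/backup/app_data:ro

volumes:
  app_data:
```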
**Expected behavior**
Backups are created on the Google Cloud Storage bucket and are successfully purged once they are older than 1 day.
**Version (please complete the following information):**
- Image Version: v2.43.2
- Docker Version: 28.0.0
- Docker Compose Version (if applicable): v2.31.0
**Additional context**
It seems GCS does not support bulk deletion of objects as per the S3 spec: https://issuetracker.google.com/issues/162653700
I'm not entirely sure how to handle this here, as the basic contract for S3 storage backends is the S3 API, so implementing compatibility is up to GCS in this scenario. The code could try to fall back to one-by-one deletion in this case, but I'm a bit worried this opens a can of worms, trying to iron out vendor inconsistencies all over the place.
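For illustration, a minimal sketch of what such a fallback might look like with the minio-go client. The function name, wiring, and error check are assumptions for this sketch, not the project's actual `pruneBackups` implementation:

```go
package s3fallback

import (
	"context"
	"fmt"

	"github.com/minio/minio-go/v7"
)

// pruneBackups is a hypothetical helper: it first attempts the bulk
// DeleteObjects call and, if the backend rejects it as NotImplemented
// (as GCS's XML API does), retries by deleting each object individually.
func pruneBackups(ctx context.Context, client *minio.Client, bucket string, keys []string) error {
	objectsCh := make(chan minio.ObjectInfo, len(keys))
	for _, key := range keys {
		objectsCh <- minio.ObjectInfo{Key: key}
	}
	close(objectsCh)

	// RemoveObjects streams per-object errors back on a channel; drain
	// it fully and remember the first error encountered.
	var bulkErr error
	for rmErr := range client.RemoveObjects(ctx, bucket, objectsCh, minio.RemoveObjectsOptions{}) {
		if rmErr.Err != nil && bulkErr == nil {
			bulkErr = rmErr.Err
		}
	}
	if bulkErr == nil {
		return nil
	}
	if minio.ToErrorResponse(bulkErr).Code != "NotImplemented" {
		return fmt.Errorf("error pruning backups: %w", bulkErr)
	}

	// Fallback: one DELETE request per object. Deleting an already
	// removed key is a no-op in S3, so retrying the full list is safe.
	for _, key := range keys {
		if err := client.RemoveObject(ctx, bucket, key, minio.RemoveObjectOptions{}); err != nil {
			return fmt.Errorf("error removing object %s: %w", key, err)
		}
	}
	return nil
}
```

This keeps the bulk path as the default and only pays the per-request cost on backends that reject multi-object delete.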
@m90 totally understandable.
As a workaround, I implemented a lifecycle rule on my Google Cloud Storage bucket that deletes objects with a certain prefix once they are older than X days. It works, but it's not ideal: if docker-volume-backup stops working unnoticed for whatever reason, the lifecycle rule will keep deleting old backups until all of them are gone. A workaround to the workaround (and good practice anyway) would be adding a custom command that runs when backups complete and pings a monitoring service such as Healthchecks.io, so as to be promptly alerted if backups stop running.
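A sketch of such a lifecycle rule, for anyone landing here; the `backup-` prefix matches the default file names visible in the logs above, and the age is a placeholder. It can be applied with e.g. `gcloud storage buckets update gs://<my-bucket> --lifecycle-file=lifecycle.json`:

```json
{
  "rule": [
    {
      "action": { "type": "Delete" },
      "condition": {
        "age": 7,
        "matchesPrefix": ["backup-"]
      }
    }
  ]
}
```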