Announcement: S3 default integrity change
In AWS CLI v2.23.0, we released changes to the S3 client that adopt new default integrity protections. For more information on the default integrity behavior, please refer to the official SDK documentation. From this release on, clients default to enabling an additional checksum on all Put calls and enabling validation on Get calls.
You can disable the default integrity protections for S3. We do not recommend this, because checksums are important to S3's integrity posture. Integrity protections can be disabled by setting the request_checksum_calculation and response_checksum_validation configuration options to when_required, either in the AWS shared config file or via the corresponding environment variables.
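For example, a minimal sketch of the shared config file settings (the [default] profile is assumed; the corresponding environment variables, AWS_REQUEST_CHECKSUM_CALCULATION and AWS_RESPONSE_CHECKSUM_VALIDATION, come up later in this thread):
# ~/.aws/config
[default]
request_checksum_calculation = when_required
response_checksum_validation = when_required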
Disclaimer: The AWS SDKs and CLI are designed for use with official AWS services. We may introduce and enable new features by default, such as these new default integrity protections, before they are supported or handled by third-party service implementations. You can disable the new behavior with the WHEN_REQUIRED value for the request_checksum_calculation and response_checksum_validation configuration options covered in Data Integrity Protections for Amazon S3.
Not sure if it's related, but since users started using AWS CLI 2.23.0, the put operation to S3-compatible Ceph storage doesn't work.
upload failed: ./online.txt to s3://bucket/online.txt An error occurred (MissingContentLength) when calling the PutObject operation: Unknown
Config looks like this:
cat /root/.aws/config
[default]
region = data
output = json
response_checksum_validation = when_required
How I try to upload:
/usr/local/bin/aws --endpoint=https://endpoint --no-verify-ssl s3 cp online.txt s3://bucket/
The object is 4MB.
Also tried defining --content-length 14, but the option seems to be skipped.
This breaks uploading to Backblaze as well:
An error occurred (InvalidArgument) when calling the PutObject operation: Unsupported header 'x-amz-sdk-checksum-algorithm' received for this API call.
Hello. I have also migrated to 2.23.0, and now I get this kind of error when trying to synchronise files:
~$ aws s3 sync folder_to_sync s3://custom/ --profile default --endpoint-url https://my-endpoint --acl public-read
upload failed: folder_to_sync/path/to/zip/file.zip to s3://custom/path/to/zip/file.zip argument of type 'NoneType' is not iterable
Did I miss something? Is there a breaking change I'm not aware of?
Yeah, my CI has also started to fail with
An error occurred (InvalidArgument) when calling the UploadPart operation: x-amz-content-sha256 must be UNSIGNED-PAYLOAD, STREAMING-AWS4-HMAC-SHA256-PAYLOAD, or a valid sha256 value.
We also have issues on multiple servers and different endpoints. Errors:
upload failed: .ploi/asset-db-2025-01-16-113817.tar to s3://ploibackups/asset-db-2025-01-16-113817.tar An error occurred (MissingContentLength) when calling the PutObject operation: Unknown
An error occurred (InvalidArgument) when calling the CreateMultipartUpload operation: Unsupported header 'x-amz-checksum-algorithm' received for this API call.
upload failed: .ploi/test55-db-2025-01-16-095732.zip to s3://ploitesting213/test55-db-2025-01-16-095732.zip An error occurred (InvalidRequest) when calling the PutObject operation: The algorithm type you specified in x-amz-checksum- header is invalid.
adding: APPNAME-wp-1737017385.sql (deflated 86%)
urllib3/connectionpool.py:1064: InsecureRequestWarning: Unverified HTTPS request is being made to host 's3.eu-central-1.wasabisys.com'. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/1.26.x/advanced-usage.html#ssl-warnings
upload failed: .ploi/APPNAME-wp-db-2025-01-16-094945.zip to s3://web01-prod-CLIENTNAME-nl/web01-CLIENTNAME-db/APPNAME-wp-db-2025-01-16-094945.zip An error occurred (InvalidRequest) when calling the CreateMultipartUpload operation: Checksum algorithm provided is unsupported. Please try again with any of the valid types: [CRC32, CRC32C, SHA1, SHA256]
I've debugged this a bit. Version 2.23 works just fine if you use the AWS S3 service, but if you're using your own endpoint/service, you'll hit the errors described above. For people who absolutely need to get this working as soon as possible and who used snap to install the CLI, run the following script as root.
#!/bin/bash
# Remove the current aws-cli snap and pin the last pre-2.23.0 revision for this architecture.
snap remove aws-cli
if [[ $(uname -m) == "x86_64" ]]; then
    snap install aws-cli --classic --revision=1148   # v2.22.35 (amd64)
else
    snap install aws-cli --classic --revision=1149   # v2.22.35 (arm64)
fi
# Keep snap from auto-refreshing back to 2.23.x.
snap refresh --hold aws-cli
This pins version 2.22.35 so you can get moving; it will also disable the automatic refresh of the aws-cli snap package. If you want to resume updates later, run: sudo snap refresh --unhold aws-cli
A list of the revisions:
1152 = v2.23.0 (amd64) - Latest version
1151 = v2.23.0 (arm64)
1150 = v1.36.40 (amd64)
1149 = v2.22.35 (arm64) - The version we want for ARM64
1148 = v2.22.35 (amd64) - The version we want for AMD64
1147 = v1.36.39 (arm64)
1146 = v1.36.39 (amd64)
This buys you a bit of time to figure out how to upgrade and get 2.23 working properly.
If you need to download a specific version of the CLI for any reason, please do so using the official AWS installation instructions which can be found at the links below:
- v1: https://docs.aws.amazon.com/cli/v1/userguide/cli-chap-install.html
- v2: https://docs.aws.amazon.com/cli/latest/userguide/getting-started-version.html
For those using third-party service implementations, please see the updated disclaimer above:
Disclaimer: The AWS SDKs and CLI are designed for use with official AWS services. We may introduce and enable new features by default, such as these new default integrity protections, before they are supported or handled by third-party service implementations. You can disable the new behavior with the WHEN_REQUIRED value for the request_checksum_calculation and response_checksum_validation configuration options covered in Data Integrity Protections for Amazon S3.
Hi @RyanFitzSimmonsAK & @jonathan343 - The request_checksum_calculation and response_checksum_validation configuration options do work correctly with the low-level s3api get_object and put_object CLI operations, but the high-level s3 cp operation does not work with third-party services, even if both request_checksum_calculation and response_checksum_validation are set to WHEN_REQUIRED.
What seems to be happening is that the S3 Transfer Manager ensures that ChecksumAlgorithm is set, either to the user-specified value, or CRC64NVME (see the set_default_checksum_algorithm() function, added just a couple of days ago, here).
Now, in Botocore's resolve_request_checksum_algorithm() function, request_checksum_required evaluates to False, as you might expect. But since ChecksumAlgorithm is set in the incoming params, the check if algorithm_member and algorithm_member in params: evaluates to True, and the checksum headers are calculated and sent in the request.
The issue seems to be in S3 Transfer Manager - I think that, if request_checksum_calculation is set to WHEN_REQUIRED, it should not set a default ChecksumAlgorithm at all.
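To make the asymmetry concrete, here is a rough repro sketch with request_checksum_calculation = when_required set in ~/.aws/config (endpoint and bucket names are placeholders):
# Low-level call: botocore honors when_required and adds no checksum headers
aws s3api put-object --bucket my-bucket --key online.txt --body online.txt --endpoint-url https://my-endpoint
# High-level transfer: the S3 Transfer Manager injects a default ChecksumAlgorithm,
# so checksum headers are still calculated and sent, and third-party services reject them
aws s3 cp online.txt s3://my-bucket/ --endpoint-url https://my-endpoint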
Hi @metadaddy, thanks for bringing this to our attention. Our investigation matches what you described. A default value we specified in s3transfer is taking precedence over the “when_required” config and setting a CRC32 default checksum. We’ll work on addressing this issue and provide more updates when we have them. We will use the issue you made in s3transfer (https://github.com/boto/s3transfer/issues/327) to track this.
Thanks for the confirmation, @RyanFitzSimmonsAK. I just submitted a PR with a fix, and tests: https://github.com/boto/s3transfer/pull/328.
Got this error: argument of type 'NoneType' is not iterable. Then I downgraded my version, and it seems to be working as expected now: pip install awscli==1.36.0. (Bucket hosted in DigitalOcean Spaces)
We are using the AWS CLI to push data to SwiftStack Client, using three commands: aws s3api create-multipart-upload, aws s3api upload-part, and aws s3api complete-multipart-upload.
Using AWS CLI version 2.23.1, we're suddenly seeing a previously mentioned error with the upload-part command:
An error occurred (InvalidArgument) when calling the UploadPart operation: x-amz-content-sha256 must be UNSIGNED-PAYLOAD, or a valid sha256 value.
Setting the checksum type flags is not working, and I don't see an option to use "when_required". How can we fix this?
Hi @ssolanki38 , in order to deactivate this feature, you can set these environment variables:
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
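For a one-off command, you can also set them inline instead of exporting them (endpoint and bucket are placeholders):
AWS_REQUEST_CHECKSUM_CALCULATION=when_required \
AWS_RESPONSE_CHECKSUM_VALIDATION=when_required \
aws s3 cp online.txt s3://bucket/ --endpoint-url https://endpoint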
Hi @ssolanki38 , in order to deactivate this feature, you can set these environment variables:
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
Note: these are documented in the v1 CLI docs.
The v2 CLI docs about environment variables mention these variables too, but they don't seem to work.
I just tested using aws s3 cp to upload a file to a bucket hosted on Ceph Object Gateway using aws-cli v2.23.2 on macOS. It fails with upload failed: <file> to <bucket> argument of type 'NoneType' is not iterable, even when AWS_REQUEST_CHECKSUM_CALCULATION and AWS_RESPONSE_CHECKSUM_VALIDATION are set to when_required.
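One way to check whether the setting is actually being picked up is to run with --debug and grep for checksum-related headers; a sketch (endpoint, bucket, and file names are placeholders):
aws s3 cp online.txt s3://bucket/ --endpoint-url https://endpoint --debug 2>&1 | grep -i checksum
# If x-amz-sdk-checksum-algorithm or an x-amz-checksum-* header still appears in the
# request headers, the when_required setting is not taking effect for that code path.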
Got this error:
argument of type 'NoneType' is not iterable. Then downgraded my version, seems to be working as expected now: pip install awscli==1.36.0. (Bucket hosted in DigitalOcean Spaces)
Hi @BlazeIsClone - just wanted to let you know, as of January 21, DigitalOcean Spaces Object Storage has resolved the AWS CLI/SDK incompatibility issues for the vast majority of Spaces customers.
https://status.digitalocean.com/incidents/zbrpd3j7hrrd
If you're still having issues after you upgrade to the latest AWS CLI/SDK (or if you need us to confirm on our side that your Spaces buckets are no longer affected by this incompatibility), please open a support ticket from within your account, and we'll be happy to help.
Keshav Attrey, Sr. Product Manager, DigitalOcean Spaces
This breaks uploading to Backblaze as well:
An error occurred (InvalidArgument) when calling the PutObject operation: Unsupported header 'x-amz-sdk-checksum-algorithm' received for this API call.
How did you fix this? Setting when_required doesn't work for me.
This breaks uploading to Backblaze as well:
An error occurred (InvalidArgument) when calling the PutObject operation: Unsupported header 'x-amz-sdk-checksum-algorithm' received for this API call.
How did you fix this? Setting when_required doesn't work for me.
None of the possible solutions worked in my case, so I reverted to a previous AWS CLI version until this is sorted.
Hi @BlazeIsClone - just wanted to let you know, as of January 21, DigitalOcean Spaces Object Storage has resolved the AWS CLI/SDK incompatibility issues for the vast majority of Spaces customers.
https://status.digitalocean.com/incidents/zbrpd3j7hrrd
Out of curiosity: how did you resolve this?
For anyone else who wants to stay on AWS CLI v2...
I was able to stop the argument of type 'NoneType' is not iterable bug by pinning to 2.22.35 (all releases), the last available version before 2.23.0, where this change went into effect. It's available here:
- https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.22.35.zip
Install directions (via https://docs.aws.amazon.com/cli/latest/userguide/getting-started-version.html):
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64-2.22.35.zip" -o "awscliv2.zip"
$ unzip awscliv2.zip
$ sudo ./aws/install
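It's worth confirming the pin took effect afterwards; note that if a v2 install already exists, the bundled installer needs the --update flag:
$ sudo ./aws/install --update    # only needed when overwriting an existing install
$ aws --version                  # expect: aws-cli/2.22.35 ...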
Hi
I captured the data transferred on the socket and aws-cli seems to now apply two chunked encodings:
Transfer-Encoding: chunked
Content-Encoding: aws-chunked
Is that on purpose? This was not the case before and doesn't seem like a good idea to do. If it is just for the checksum trailer - wouldn't it be better to send it as standard TE trailer, rather than as custom CE aws-chunked trailer?
Here is the full request dump:
PUT /bucket/key HTTP/1.1
Host: localhost:3333
Accept-Encoding: identity
x-amz-sdk-checksum-algorithm: CRC64NVME
User-Agent: aws-cli/2.23.11 md/awscrt#0.23.4 ua/2.0 os/macos#24.2.0 md/arch#arm64 lang/python#3.12.8 md/pyimpl#CPython cfg/retry-mode#standard md/installer#source md/prompt#off md/command#s3.cp
Expect: 100-continue
Transfer-Encoding: chunked
Content-Encoding: aws-chunked
X-Amz-Trailer: x-amz-checksum-crc64nvme
X-Amz-Decoded-Content-Length: 5
X-Amz-Date: 20250203T234047Z
X-Amz-Content-SHA256: STREAMING-UNSIGNED-PAYLOAD-TRAILER
Authorization: AWS4-HMAC-SHA256 ...
a
5
hello
2c
0
x-amz-checksum-crc64nvme:M3eFcAZSQlc=
0
Folks, this project follows the Amazon Open Source Code of Conduct. As a reminder, we will remove comments that don't adhere to the Code of Conduct. Please keep your comments productive, inclusive, and civil. Thank you!
Hi @ssolanki38 , in order to deactivate this feature, you can set these environment variables:
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
I have GitHub Actions workflows on ubuntu-latest and windows-latest that had stopped working. Adding these environment variables got them working again on both operating systems.
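In a workflow file, that can look like the following sketch (the file name and workflow layout are assumptions; a top-level env block applies to every job and step):
# .github/workflows/deploy.yml (hypothetical file)
env:
  AWS_REQUEST_CHECKSUM_CALCULATION: when_required
  AWS_RESPONSE_CHECKSUM_VALIDATION: when_required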
I captured the data transferred on the socket and aws-cli seems to now apply two chunked encodings:
Transfer-Encoding: chunked
Content-Encoding: aws-chunked
Is that on purpose? This was not the case before and doesn't seem like a good idea to do. If it is just for the checksum trailer - wouldn't it be better to send it as standard TE trailer, rather than as custom CE aws-chunked trailer?
Hey @guymguym, this is intended. This is in line with the S3 spec for chunked uploads (https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-streaming.html).
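For what it's worth, the byte counts in the capture above are consistent with that nesting; each outer Transfer-Encoding chunk wraps part of the inner aws-chunked stream (my annotation, counting each \r\n as two bytes):
a    <- outer HTTP chunk of 0xa = 10 bytes: the inner data chunk "5\r\nhello\r\n"
2c   <- outer HTTP chunk of 0x2c = 44 bytes: the inner terminator plus trailer "0\r\nx-amz-checksum-crc64nvme:M3eFcAZSQlc=\r\n\r\n"
0    <- terminator of the outer chunked stream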
Just curious: will this eventually be fixed so that it works again without needing the following flags?
export AWS_REQUEST_CHECKSUM_CALCULATION=when_required
export AWS_RESPONSE_CHECKSUM_VALIDATION=when_required
This buys you a bit of time to figure out how to upgrade and get 2.23 working properly.
Hi @Cannonb4ll , this version is not working from GitHub Actions:
You can now run: /usr/local/bin/aws --version
[LOG] Tue Mar 4 11:46:49 UTC 2025 :: Installation completed
[LOG] Tue Mar 4 11:46:49 UTC 2025 :: Printing AWS CLI installed version
aws-cli/2.23.15 Python/3.12.6 Linux/5.10.233-223.887.amzn2.x86_64 exe/x86_64.ubuntu.20
Run echo "version=$(aws --version)" >> $GITHUB_OUTPUT
echo "version=$(aws --version)" >> $GITHUB_OUTPUT
shell: /usr/bin/bash --noprofile --norc -e -o pipefail {0}
env:
AWS_CLI_VERSION: 2.23.15
AWS_CLI_ARCH: amd64
VERBOSE: false
LIGHTSAILCTL: false
BINDIR: /usr/local/bin
INSTALLROOTDIR: /usr/local
ROOTDIR:
WORKDIR:
upload failed: xxxxxx An error occurred (InvalidArgument) when calling the CreateMultipartUpload operation: Unsupported header 'x-amz-checksum-algorithm' received for this API call.
Error: Process completed with exit code 1.
Any fixes? I already tried the latest version as well, but from the GitHub Action it throws the same error. From my local machine, there is no issue.
I expect we also ran into issues because of this:
"upload failed: ../data/DFM_OUTPUT_Vietnam/Vietnam_map.nc to s3://oidc-jveenstra/DFM_OUTPUT/Vietnam_map.nc An error occurred (InvalidArgument) when calling the CreateMultipartUpload operation: Invalid arguments provided for oidc-jveenstra/DFM_OUTPUT/Vietnam_map.nc: (invalid/unknown checksum sent: invalid checksum)"
Used in a job.yaml (behind login), specifically calling something like:
aws s3 cp local_source_dir s3_target_dir --endpoint-url https://$(AWS_S3_ENDPOINT) --recursive
The job runs successfully again if we do one of these:
- enforce an older aws-cli version
- set AWS_RESPONSE_CHECKSUM_VALIDATION=WHEN_REQUIRED, as suggested by the AWS docs and the issue description
The error message is different from the ones reported by others, but as far as I can judge it has the same origin.
The failing file is a 1.5GB NetCDF file. Smaller NetCDF files work fine, and other files also work just fine. I guess CreateMultipartUpload is only used for this larger NetCDF file. Maybe our NetCDF format is inconvenient for CreateMultipartUpload, or for the checksums. Either way, we can now work around it, but it does make sense (I guess) to validate the upload with checksums. Therefore, I am eager to learn about any updates to aws-cli that might resolve this issue. Or does it just make sense to disable it?
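For context on why only the large file is affected: the high-level aws s3 commands switch from a single PutObject to a multipart upload (which begins with CreateMultipartUpload) once a file exceeds the configured multipart_threshold, 8 MB by default, so the file's format shouldn't matter, only its size. A sketch of where that threshold lives in the shared config (the value shown is the default):
# ~/.aws/config
[default]
s3 =
  multipart_threshold = 8MB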
@veenstrajelmer, are you using AWS S3, or a third-party S3-like service?
If you're using AWS S3, could you cut us a new issue including debug logs? Thanks!
@RyanFitzSimmonsAK sorry, we have resolved the issue with the environment variable and now cannot easily reproduce it. The test case took >10 minutes to run, and fixing it already took several hours of iterating and testing several options. We use instances on EDITO. Based on the commands I shared, I think I use AWS S3 directly, but I am not 100% sure, to be honest.