HDDS-13740. AbortMultipartUpload should return 204 No Content for non-existent keys
What changes were proposed in this pull request?
On AWS S3, repeating AbortMultipartUpload with the same upload id always gets 204 No Content. But on MinIO, when the client repeats the same AbortMultipartUpload request, the server returns 404 Not Found because the resource has already been deleted. The behaviour should be consistent with AWS S3: if an AbortMultipartUpload request is given a non-existent key, it should respond with 204 No Content (no exception) instead of 404.
Related discussion: https://github.com/minio/minio/discussions/13495
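The proposed semantics can be sketched as follows (a minimal Python sketch of the idea, not the actual Ozone gateway code; the store and function names are hypothetical):

```python
def abort_multipart_upload(uploads, key, upload_id):
    """Return the HTTP status code this sketch of the proposed change emits."""
    if (key, upload_id) in uploads:
        del uploads[(key, upload_id)]   # normal abort: remove the in-flight upload
        return 204
    # Previously this path surfaced NoSuchUpload as 404 Not Found; the
    # proposal is to answer 204 so that repeated aborts are idempotent.
    return 204

uploads = {("mykey", "uid-1"): "in-flight"}
assert abort_multipart_upload(uploads, "mykey", "uid-1") == 204  # first abort
assert abort_multipart_upload(uploads, "mykey", "uid-1") == 204  # repeat abort
```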
What is the link to the Apache JIRA
https://issues.apache.org/jira/browse/HDDS-13740
How was this patch tested?
Updated unit tests: TestAbortMultipartUpload and TestS3GatewayMetrics
Updated acceptance test: MultipartUpload.robot
@Gargi-jais11 The AWS S3 reference says NoSuchUpload should return 404. The discussion you mentioned says AWS is inconsistent and returns 404 after 24 hours. That discussion is 4 years old; AWS behavior may have changed since then.
I'm not sure this change is needed.
AWS S3 reference says NoSuchUpload should return 404.
You are correct, but this NoSuchUpload exception is used in multiple places where it should return 404. That's why, rather than changing the exception's status code, I changed AbortMultipartUpload to return 204 No Content for this exception.
The discussion you mentioned says AWS is inconsistent and returns 404 after 24 hours.
I did notice this statement as well; it says the opposite, which makes it nuanced.
I changed AbortMultipartUpload to return 204 No Content for this exception.
But the AWS doc I mentioned is specifically for AbortMultipartUpload
The discussion you mentioned says AWS is inconsistent and returns 404 after 24 hours.
I did notice this statement as well; it says the opposite, which makes it nuanced.
Sorry, I missed going through this. The whole point of the old discussion seemed to be consistency with AWS S3, so I don't think we should change the status code from 404 to 204. And if I am not wrong, that's what we should be doing.
I believe that the behavior of 204 No Content for an already-aborted upload is not legacy behavior tied to eventual consistency. The goal of AbortMultipartUpload is to ensure the upload is gone. If it's already gone, the goal is achieved successfully. To maintain a consistent client experience, the API must not raise an error.
The AWS-SDK-PHP documentation describes this idempotency; for Ozone to be compatible with AWS SDKs, it must adhere to it. The original test failed because aws-sdk-php received a 404 and threw a client exception, showing it expects a non-error code (204) in this scenario.
the question is why does the mint test expect 204 no content? If Minio also requires returning 404, then it might be the Mint test bug?
The Mint test expects 204 No Content because it is validating compliance with the idempotency of the AWS S3 API, which all AWS SDKs are built to follow.
@Gargi-jais11 Thanks for the reply.
From the reply (https://github.com/minio/minio/discussions/13495#discussioncomment-1518062)
I want to confirm the behavior of AbortMultipartUpload, whether it is idempotent:
- If AbortMultipartUpload is idempotent, then maybe it should not return NoSuchUpload even if the multipart upload has been aborted.
- If AbortMultipartUpload is non-idempotent, maybe the comments should be updated.
I get the impression that returning 404 means that it is non-idempotent. Afterwards, the discussion continued by saying that MinIO returns NoSuchUpload (404) instead of 204 because it is consistent.
The odd thing is that Mint is created by MinIO itself, so this suggests that MinIO is not S3-compatible?
The AWS-SDK-PHP documentation describes this idempotency; for Ozone to be compatible with AWS SDKs, it must adhere to it. The original test failed because aws-sdk-php received a 404 and threw a client exception, showing it expects a non-error code (204) in this scenario.
The doc you attached seems to be about x-amz-if-match-initiated-time for directory bucket, not for general-purpose bucket.
My understanding about "idempotency" is that abort multipart upload is always idempotent since there is no side effect on sending duplicate abort multipart upload request (only one multipart upload will be aborted).
I think for 204 vs 404, we might need to check these two cases:
- Abort multipart upload that was initiated before (i.e. it exists before): This might return 204 to show that there is a successful abort multipart upload before
- Abort multipart upload that was never initiated (i.e. it does not exist): This might return 404 NoSuchUpload since there was no successful abort itself.
If there is a distinction between the two, then we might need a separate way to check which multipart upload ID was aborted recently.
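The two cases above can be sketched as follows (a hypothetical Python model of the distinction, assuming some record of recently aborted upload ids; this is not Ozone's actual code):

```python
def abort_status(active, aborted, upload_id):
    """Return the status code for an abort request under the two-case model."""
    if upload_id in active:
        active.remove(upload_id)
        aborted.add(upload_id)   # remember the abort so a repeat stays idempotent
        return 204
    if upload_id in aborted:
        return 204               # was aborted before -> idempotent success
    return 404                   # never initiated -> NoSuchUpload

active, aborted = {"uid-1"}, set()
assert abort_status(active, aborted, "uid-1") == 204  # existed, first abort
assert abort_status(active, aborted, "uid-1") == 204  # repeat abort
assert abort_status(active, aborted, "uid-9") == 404  # never initiated
```

The catch, as noted above, is that distinguishing the two cases requires keeping the `aborted` record somewhere persistent.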
I think the best way for us to check is to simply create a real AWS S3 account and verify the behavior. Since the official AWS S3 documentation only mentions 404 for no multipart uploads, changing to 204 might not be the right way to go. The Mint commit https://github.com/minio/mint/commit/a1b4fb2c255e6dcb846d2c9010372c84b69ae313 also does not add any reference to why it needs to return 204.
Also in Ceph S3 compatibility test, it should return 404 instead (https://github.com/ceph/s3-tests/blob/2805e0cab406fe1408777033e6f4be37db425e5f/s3tests/functional/test_s3.py#L6474). So we might fail Ceph S3 test if we change the behavior to 204.
I think for 204 vs 404, we might need to check these two cases:
- Abort multipart upload that was initiated before (i.e. it exists before): This might return 204 to show that there is a successful abort multipart upload before
- Abort multipart upload that was never initiated (i.e. it does not exist): This might return 404 NoSuchUpload since there was no successful abort itself.

If there is a distinction between the two, then we might need a separate way to check which multipart upload ID was aborted recently.
Yes, I agree these two cases need to be checked separately.
I think the best way for us to check is to simply create a real AWS S3 account and verify the behavior. Since the official AWS S3 documentation only mentions 404 for no multipart uploads, changing to 204 might not be the right way to go. The Mint commit https://github.com/minio/mint/commit/a1b4fb2c255e6dcb846d2c9010372c84b69ae313 also does not add any reference to why it needs to return 204.
I was trying to get a real AWS S3 account, but even the free tier seems to require payment, so I am not able to confirm this scenario.
I think for 204 vs 404, we might need to check these two cases:
Do you want me to do this check as part of this PR, or should I raise a different one?
@Gargi-jais11 I'll get back to you with my own AWS S3 testing first.
@ivandika3 just wanted to check in regarding the AWS S3 testing for AbortMultipartUpload behaviour. Please share an update when convenient.
I have tested it in the AWS S3 SDK, the behaviors are as follow
- Abort multipart upload that was initiated before (i.e. it exists before): This will return 204 to show that there is a successful abort multipart upload before (idempotent)
- Abort multipart upload that was never initiated (i.e. it does not exist): This will return 404 NoSuchUpload since there was no successful abort itself.
Since there is this distinction, we cannot simply return 204 for a non-existent MPU key.
I asked Claude on Cursor about this and the result is as follows
cursor_s3_abort_multipart_upload_http_s.md
This means that abort multipart uploads can return 204 first and then 404 after some time. I will try to run abort again after a few hours / one day to see when it turns to 404 (Edit: Even after 1-2 days, it still returns 204).
IMO we should keep the current 404 behavior, since returning 204 means we would need to keep some kind of persistent data (e.g. in the OM DB) to record that the multipart upload was aborted recently and return 204 instead. However, I'm open to ideas to handle this as long as it doesn't introduce unnecessary complexity.
FYI, these are the commands I used:
Abort multipart upload that never exists
AWS_ACCESS_KEY_ID=REDACTED AWS_SECRET_ACCESS_KEY=REDACTED REGION=us-east-2 aws --debug s3api abort-multipart-upload \
--bucket test-ivan-andika \
--key multipart/01 \
--upload-id dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R
2025-12-20 20:14:56,539 - MainThread - urllib3.connectionpool - DEBUG - https://test-ivan-andika.s3.us-east-2.amazonaws.com:443 "DELETE /multipart/01?uploadId=dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R HTTP/1.1" 404 None
2025-12-20 20:14:56,541 - MainThread - botocore.hooks - DEBUG - Event before-parse.s3.AbortMultipartUpload: calling handler <function _handle_200_error at 0x110588400>
2025-12-20 20:14:56,541 - MainThread - botocore.hooks - DEBUG - Event before-parse.s3.AbortMultipartUpload: calling handler <function handle_expires_header at 0x110588220>
2025-12-20 20:14:56,541 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-request-id': 'MZPG73CN5097BBSB', 'x-amz-id-2': 'NB0Lv0hTEVPrLM79pmyDkA0BXQozLcx9IgmOkE7OP3GTxpwEsHpFDMGCGWJAkvGz7sNFLav4GZkOtc6sw4owatueozuqNrKu', 'Content-Type': 'application/xml', 'Transfer-Encoding': 'chunked', 'Date': 'Sat, 20 Dec 2025 12:14:56 GMT', 'Server': 'AmazonS3'}
2025-12-20 20:14:56,541 - MainThread - botocore.parsers - DEBUG - Response body:
b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>NoSuchUpload</Code><Message>The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.</Message><UploadId>dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R</UploadId><RequestId>MZPG73CN5097BBSB</RequestId><HostId>NB0Lv0hTEVPrLM79pmyDkA0BXQozLcx9IgmOkE7OP3GTxpwEsHpFDMGCGWJAkvGz7sNFLav4GZkOtc6sw4owatueozuqNrKu</HostId></Error>'
2025-12-20 20:14:56,547 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-request-id': 'MZPG73CN5097BBSB', 'x-amz-id-2': 'NB0Lv0hTEVPrLM79pmyDkA0BXQozLcx9IgmOkE7OP3GTxpwEsHpFDMGCGWJAkvGz7sNFLav4GZkOtc6sw4owatueozuqNrKu', 'Content-Type': 'application/xml', 'Transfer-Encoding': 'chunked', 'Date': 'Sat, 20 Dec 2025 12:14:56 GMT', 'Server': 'AmazonS3'}
2025-12-20 20:14:56,547 - MainThread - botocore.parsers - DEBUG - Response body:
b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>NoSuchUpload</Code><Message>The specified upload does not exist. The upload ID may be invalid, or the upload may have been aborted or completed.</Message><UploadId>dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R</UploadId><RequestId>MZPG73CN5097BBSB</RequestId><HostId>NB0Lv0hTEVPrLM79pmyDkA0BXQozLcx9IgmOkE7OP3GTxpwEsHpFDMGCGWJAkvGz7sNFLav4GZkOtc6sw4owatueozuqNrKu</HostId></Error>'
Abort multipart upload that exists
Init
AWS_ACCESS_KEY_ID=REDACTED AWS_SECRET_ACCESS_KEY=REDACTED REGION=us-east-2 aws --debug s3api create-multipart-upload \
--bucket test-ivan-andika \
--key test-multipart-key
{
"ServerSideEncryption": "AES256",
"Bucket": "test-ivan-andika",
"Key": "test-multipart-key",
"UploadId": "NoRtyL9CXYLPCDUMviHts491NsNlJzeRnOx.6ADN9bT6iEpeNYUcFR.WQ_xX7rqlsgwQcJbAkgDDdeFUq4qOBx4DYyqFZFGygI3vUUru6UM-"
}
First abort (204)
AWS_ACCESS_KEY_ID=REDACTED AWS_SECRET_ACCESS_KEY=REDACTED REGION=us-east-2 aws --debug s3api abort-multipart-upload \
--bucket test-ivan-andika \
--key test-multipart-key \
--upload-id NoRtyL9CXYLPCDUMviHts491NsNlJzeRnOx.6ADN9bT6iEpeNYUcFR.WQ_xX7rqlsgwQcJbAkgDDdeFUq4qOBx4DYyqFZFGygI3vUUru6UM-
2025-12-20 20:22:30,115 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=DELETE, url=https://test-ivan-andika.s3.us-east-2.amazonaws.com/test-multipart-key?uploadId=NoRtyL9CXYLPCDUMviHts491NsNlJzeRnOx.6ADN9bT6iEpeNYUcFR.WQ_xX7rqlsgwQcJbAkgDDdeFUq4qOBx4DYyqFZFGygI3vUUru6UM-, headers={'User-Agent': b'aws-cli/2.27.2 md/awscrt#0.25.4 ua/2.1 os/macos#24.5.0 md/arch#x86_64 lang/python#3.13.2 md/pyimpl#CPython cfg/retry-mode#standard md/installer#exe md/prompt#off md/command#s3api.abort-multipart-upload', 'X-Amz-Date': b'20251220T122230Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=REDACTED/20251220/us-east-2/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=101f571ec687f53daf7336e9b19cbdaa418dd4f06de451405f9192612987d197', 'Content-Length': '0'}>
2025-12-20 20:22:30,116 - MainThread - botocore.httpsession - DEBUG - Certificate path: /usr/local/aws-cli/awscli/botocore/cacert.pem
2025-12-20 20:22:30,116 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): test-ivan-andika.s3.us-east-2.amazonaws.com:443
2025-12-20 20:22:30,976 - MainThread - urllib3.connectionpool - DEBUG - https://test-ivan-andika.s3.us-east-2.amazonaws.com:443 "DELETE /test-multipart-key?uploadId=NoRtyL9CXYLPCDUMviHts491NsNlJzeRnOx.6ADN9bT6iEpeNYUcFR.WQ_xX7rqlsgwQcJbAkgDDdeFUq4qOBx4DYyqFZFGygI3vUUru6UM- HTTP/1.1" 204 0
2025-12-20 20:22:30,979 - MainThread - botocore.hooks - DEBUG - Event before-parse.s3.AbortMultipartUpload: calling handler <function _handle_200_error at 0x10e2d8540>
2025-12-20 20:22:30,981 - MainThread - botocore.hooks - DEBUG - Event before-parse.s3.AbortMultipartUpload: calling handler <function handle_expires_header at 0x10e2d8360>
2025-12-20 20:22:30,982 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': 'OwpZO5xYKfeuFO7meLGjdMi6PwMAV2P2i1RItY0dYZ4x5nGwLfBDgr5qahFc3EoTYTlYZMofAzw=', 'x-amz-request-id': 'HTA67X9B7ER7EZX2', 'Date': 'Sat, 20 Dec 2025 12:22:31 GMT', 'Server': 'AmazonS3'}
2025-12-20 20:22:30,982 - MainThread - botocore.parsers - DEBUG - Response body:
b''
2025-12-20 20:22:30,985 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.AbortMultipartUpload: calling handler <function _update_status_code at 0x10e2d8680>
2025-12-20 20:22:30,985 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.AbortMultipartUpload: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x112002a50>>
2025-12-20 20:22:30,985 - MainThread - botocore.retries.standard - DEBUG - Not retrying request.
2025-12-20 20:22:30,985 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.AbortMultipartUpload: calling handler <bound method S3RegionRedirectorv2.redirect_from_error of <botocore.utils.S3RegionRedirectorv2 object at 0x112002ba0>>
2025-12-20 20:22:30,986 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.AbortMultipartUpload: calling handler <function enhance_error_msg at 0x10ffba2a0>
2025-12-20 20:22:30,986 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.AbortMultipartUpload: calling handler <bound method RetryQuotaChecker.release_retry_quota of <botocore.retries.standard.RetryQuotaChecker object at 0x112001940>>
2025-12-20 20:22:30,986 - MainThread - awscli.formatter - DEBUG - RequestId: HTA67X9B7ER7EZX2
Second abort (also 204)
AWS_ACCESS_KEY_ID=REDACTED AWS_SECRET_ACCESS_KEY=REDACTED REGION=us-east-2 aws --debug s3api abort-multipart-upload \
--bucket test-ivan-andika \
--key test-multipart-key \
--upload-id NoRtyL9CXYLPCDUMviHts491NsNlJzeRnOx.6ADN9bT6iEpeNYUcFR.WQ_xX7rqlsgwQcJbAkgDDdeFUq4qOBx4DYyqFZFGygI3vUUru6UM-
2025-12-20 20:23:16,862 - MainThread - botocore.endpoint - DEBUG - Sending http request: <AWSPreparedRequest stream_output=False, method=DELETE, url=https://test-ivan-andika.s3.us-east-2.amazonaws.com/test-multipart-key?uploadId=NoRtyL9CXYLPCDUMviHts491NsNlJzeRnOx.6ADN9bT6iEpeNYUcFR.WQ_xX7rqlsgwQcJbAkgDDdeFUq4qOBx4DYyqFZFGygI3vUUru6UM-, headers={'User-Agent': b'aws-cli/2.27.2 md/awscrt#0.25.4 ua/2.1 os/macos#24.5.0 md/arch#x86_64 lang/python#3.13.2 md/pyimpl#CPython cfg/retry-mode#standard md/installer#exe md/prompt#off md/command#s3api.abort-multipart-upload', 'X-Amz-Date': b'20251220T122316Z', 'X-Amz-Content-SHA256': b'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'Authorization': b'AWS4-HMAC-SHA256 Credential=REDACTED/20251220/us-east-2/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=a93be57eb5736f616ed763ab9417b7db8eebec6e8b2c4b261227b6e550e63dbf', 'Content-Length': '0'}>
2025-12-20 20:23:16,862 - MainThread - botocore.httpsession - DEBUG - Certificate path: /usr/local/aws-cli/awscli/botocore/cacert.pem
2025-12-20 20:23:16,862 - MainThread - urllib3.connectionpool - DEBUG - Starting new HTTPS connection (1): test-ivan-andika.s3.us-east-2.amazonaws.com:443
2025-12-20 20:23:17,709 - MainThread - urllib3.connectionpool - DEBUG - https://test-ivan-andika.s3.us-east-2.amazonaws.com:443 "DELETE /test-multipart-key?uploadId=NoRtyL9CXYLPCDUMviHts491NsNlJzeRnOx.6ADN9bT6iEpeNYUcFR.WQ_xX7rqlsgwQcJbAkgDDdeFUq4qOBx4DYyqFZFGygI3vUUru6UM- HTTP/1.1" 204 0
2025-12-20 20:23:17,710 - MainThread - botocore.hooks - DEBUG - Event before-parse.s3.AbortMultipartUpload: calling handler <function _handle_200_error at 0x110c78540>
2025-12-20 20:23:17,710 - MainThread - botocore.hooks - DEBUG - Event before-parse.s3.AbortMultipartUpload: calling handler <function handle_expires_header at 0x110c78360>
2025-12-20 20:23:17,710 - MainThread - botocore.parsers - DEBUG - Response headers: {'x-amz-id-2': '/I+m1LdKqfAjQXI6mZG8vn5+6b+I/icFckf36ixwvK4DpuIkHBFFAIpDEC9BxHNIywUJY/22PLY=', 'x-amz-request-id': 'EZ0C366D8H536DCT', 'Date': 'Sat, 20 Dec 2025 12:23:18 GMT', 'Server': 'AmazonS3'}
2025-12-20 20:23:17,711 - MainThread - botocore.parsers - DEBUG - Response body:
b''
2025-12-20 20:23:17,712 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.AbortMultipartUpload: calling handler <function _update_status_code at 0x110c78680>
2025-12-20 20:23:17,713 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.AbortMultipartUpload: calling handler <bound method RetryHandler.needs_retry of <botocore.retries.standard.RetryHandler object at 0x1149d2a50>>
2025-12-20 20:23:17,713 - MainThread - botocore.retries.standard - DEBUG - Not retrying request.
2025-12-20 20:23:17,713 - MainThread - botocore.hooks - DEBUG - Event needs-retry.s3.AbortMultipartUpload: calling handler <bound method S3RegionRedirectorv2.redirect_from_error of <botocore.utils.S3RegionRedirectorv2 object at 0x1149d2ba0>>
2025-12-20 20:23:17,713 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.AbortMultipartUpload: calling handler <function enhance_error_msg at 0x11296e2a0>
2025-12-20 20:23:17,713 - MainThread - botocore.hooks - DEBUG - Event after-call.s3.AbortMultipartUpload: calling handler <bound method RetryQuotaChecker.release_retry_quota of <botocore.retries.standard.RetryQuotaChecker object at 0x1149d1940>>
2025-12-20 20:23:17,714 - MainThread - awscli.formatter - DEBUG - RequestId: EZ0C366D8H536DCT
list-multipart-upload
AWS_ACCESS_KEY_ID=REDACTED AWS_SECRET_ACCESS_KEY=REDACTED REGION=us-east-2 aws --debug s3api list-multipart-uploads \
--bucket test-ivan-andika
{
"RequestCharged": null,
"Prefix": null
}
Thanks @ivandika3 for the testing and analysis. Ah, I see, now the picture is clear to us as follows. AWS S3 behavior:
- Abort an upload that existed → 204
- Abort the same upload again → 204 (idempotent)
- Abort the same upload again after 1-2 days → still 204; we don't know the time period after which it will show a 404 error for an aborted upload.
- Abort an upload that never existed → 404
Current Ozone implementation:
- After abort, entries are deleted from openKeyTable and multipartInfoTable
- A subsequent abort finds no metadata and throws NO_SUCH_MULTIPART_UPLOAD_ERROR
- We can't distinguish "was aborted" vs "never existed" because the metadata is gone.
The current code correctly returns 404 for uploads that never existed. To match AWS's idempotent 204, we'd need to track aborted uploads, which adds unnecessary complexity. To match AWS exactly, we would need to roughly follow the steps below:
- Track aborted uploads (e.g., a new table or flag in existing metadata)
- Check this record when NO_SUCH_MULTIPART_UPLOAD_ERROR is thrown
- Return 204 if previously aborted, 404 if never existed
- Implement cleanup/expiration for this tracking data
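The four steps above could be sketched roughly as follows (a hypothetical Python model only; Ozone's OM tables are not modelled here, and the grace period is an assumption since AWS's actual window is unknown):

```python
import time

GRACE_PERIOD_SECONDS = 24 * 3600  # assumption: AWS's real expiry window is unknown

class AbortTracker:
    """Step 1: track aborted upload ids with a timestamp."""

    def __init__(self, now=time.time):
        self._now = now
        self._aborted = {}            # upload_id -> abort timestamp

    def record_abort(self, upload_id):
        self._aborted[upload_id] = self._now()

    def status_for_missing(self, upload_id):
        """Steps 2-3: consulted when NO_SUCH_MULTIPART_UPLOAD_ERROR is hit."""
        ts = self._aborted.get(upload_id)
        if ts is not None and self._now() - ts < GRACE_PERIOD_SECONDS:
            return 204                # previously aborted -> idempotent 204
        return 404                    # never existed (or record expired)

    def cleanup(self):
        """Step 4: drop expired tracking entries."""
        cutoff = self._now() - GRACE_PERIOD_SECONDS
        self._aborted = {u: t for u, t in self._aborted.items() if t >= cutoff}

clock = [0.0]
tracker = AbortTracker(now=lambda: clock[0])
tracker.record_abort("uid-1")
assert tracker.status_for_missing("uid-1") == 204   # recently aborted
assert tracker.status_for_missing("uid-9") == 404   # never existed
clock[0] = GRACE_PERIOD_SECONDS + 1
tracker.cleanup()
assert tracker.status_for_missing("uid-1") == 404   # record expired
```

As discussed below, the unknowable grace period is exactly why this extra state (and its expiry policy) may not be worth the complexity.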
I asked Claude on Cursor about this and the result is as follows
cursor_s3_abort_multipart_upload_http_s.md
Since the grace period is not known to us (i.e. after how long AWS starts returning the 404 error), we can't fix the expiration time for previously aborted uploads, and defining this expiry on our own could again cause inconsistency.
I too prefer keeping the current behaviour unless AWS compatibility is required. Since the Ozone implementation of abort multipart upload removes entries from openKeyTable and multipartInfoTable, there is no metadata left, so we can keep the 404 error.
I too prefer keeping the current behaviour unless AWS compatibility is required. Since the Ozone implementation of abort multipart upload removes entries from openKeyTable and multipartInfoTable, there is no metadata left, so we can keep the 404 error.
@Gargi-jais11 Agreed, in that case, should we close this?
Yes, sure, we can close this.