amazon-s3-encryption-client-java

Requests causing excessive AEADBadTagException logging

Open tbaeg opened this issue 9 months ago • 2 comments

Problem:

I've noticed excessive spamming of the logs with messages similar to the trace below. ~~I haven't been able to find time to replicate this locally, but if legacy unauthenticated mode (with a range query) is executed, should this type of exception ever occur? It seems authentication is still occurring when it shouldn't be.~~

Of note, we have legacy unauthenticated mode, delayed authentication, and legacy wrapping algorithms turned on.

The delayed authentication flag was turned on because the default buffer size was not large enough; we opted for delayed authentication over increasing the buffer size.
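
For reference, the configuration described above corresponds roughly to the following builder flags (a minimal sketch; the KMS key ARN is a placeholder and the rest of our client wiring is omitted):

```java
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.encryption.s3.S3AsyncEncryptionClient;

public class EncryptionClientConfigSketch {
    public static void main(String[] args) {
        // Placeholder KMS key ARN; substitute your own key material or keyring.
        String kmsKeyId = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID";

        // The three options mentioned above, enabled on the async client
        // (the stack trace below comes from the Netty-based async path).
        S3AsyncClient client = S3AsyncEncryptionClient.builder()
                .kmsKeyId(kmsKeyId)
                .enableLegacyWrappingAlgorithms(true)    // legacy wrapping algorithms
                .enableLegacyUnauthenticatedModes(true)  // permits CTR/CBC suites and ranged gets
                .enableDelayedAuthenticationMode(true)   // chosen instead of raising the buffer size
                .build();

        // ... use `client` for getObject/putObject calls, then close it.
        client.close();
    }
}
```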

EDIT: I misinterpreted the legacy unauthenticated setting; it refers to allowing the ALG_AES_256_CTR_IV16_TAG16_NO_KDF and ALG_AES_256_CBC_IV16_NO_KDF algorithm suites. That said, there don't seem to be any issues reading content, but the logs are still being spammed with these exceptions. I still suspect range queries are part of the issue here.

```
software.amazon.encryption.s3.S3EncryptionClientSecurityException: Input data too short to contain an expected tag length of 16bytes
	at software.amazon.encryption.s3.internal.CipherSubscriber.onComplete(CipherSubscriber.java:110)
	at software.amazon.awssdk.core.async.listener.SubscriberListener$NotifyingSubscriber.onComplete(SubscriberListener.java:97)
	at software.amazon.awssdk.services.s3.internal.checksums.S3ChecksumValidatingPublisher$ChecksumValidatingSubscriber.onComplete(S3ChecksumValidatingPublisher.java:148)
	at software.amazon.awssdk.core.internal.metrics.BytesReadTrackingPublisher$BytesReadTracker.onComplete(BytesReadTrackingPublisher.java:74)
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$DataCountingPublisher$1.onComplete(ResponseHandler.java:519)
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.runAndLogError(ResponseHandler.java:254)
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler.access$600(ResponseHandler.java:77)
	at software.amazon.awssdk.http.nio.netty.internal.ResponseHandler$PublisherAdapter$1.onComplete(ResponseHandler.java:375)
	at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.publishMessage(HandlerPublisher.java:402)
	at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.flushBuffer(HandlerPublisher.java:338)
	at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.receivedDemand(HandlerPublisher.java:291)
	at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher.access$200(HandlerPublisher.java:61)
	at software.amazon.awssdk.http.nio.netty.internal.nrs.HandlerPublisher$ChannelSubscription$1.run(HandlerPublisher.java:495)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:173)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:166)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:566)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at java.base/java.lang.Thread.run(Thread.java:1575)
Caused by: javax.crypto.AEADBadTagException: Input data too short to contain an expected tag length of 16bytes
	at java.base/com.sun.crypto.provider.GaloisCounterMode$GCMDecrypt.doFinal(GaloisCounterMode.java:1477)
	at java.base/com.sun.crypto.provider.GaloisCounterMode.engineDoFinal(GaloisCounterMode.java:415)
	at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2130)
	at software.amazon.encryption.s3.internal.CipherSubscriber.onComplete(CipherSubscriber.java:104)
	... 19 more
```
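
For context on the underlying exception: the JDK's GCM implementation throws `AEADBadTagException` when `doFinal` is handed fewer bytes than one full 16-byte authentication tag, which is consistent with the decrypting subscriber seeing only part of an object (for example, a range slice). A minimal, standalone sketch of that behavior, unrelated to the S3 client itself (the exact message text may vary by JDK version):

```java
import java.security.SecureRandom;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

public class ShortGcmInputDemo {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));

        // Fewer than 16 bytes of "ciphertext" cannot contain the GCM tag,
        // so doFinal is expected to throw AEADBadTagException.
        cipher.doFinal(new byte[5]);
    }
}
```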

tbaeg commented on Apr 24, 2025

Hey @tbaeg,

Thanks for the bug report. For valid ranged get requests, this behavior is fixed in the recently released version 3.3.3. Please upgrade and let us know if you still have the issue. Note that there are some edge cases where "invalid" (but modeled) ranged get requests may still log spurious exceptions, specifically when the end of the range exceeds the content length. These cases are considered lower priority; if they are a problem for you, let us know so that we can prioritize cleaning up the logging accordingly. Thanks!
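
To make that distinction concrete, here is a rough sketch of the two kinds of ranged gets described above (bucket, key, and key ARN are placeholders, and the client setup is abbreviated):

```java
import software.amazon.awssdk.core.async.AsyncResponseTransformer;
import software.amazon.awssdk.services.s3.S3AsyncClient;
import software.amazon.awssdk.services.s3.model.GetObjectRequest;
import software.amazon.encryption.s3.S3AsyncEncryptionClient;

public class RangedGetSketch {
    public static void main(String[] args) {
        // Placeholder client configuration; ranged gets require legacy unauthenticated modes.
        S3AsyncClient client = S3AsyncEncryptionClient.builder()
                .kmsKeyId("arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID")
                .enableLegacyUnauthenticatedModes(true)
                .build();

        // Valid ranged get: both ends of the range fall inside the object.
        GetObjectRequest validRange = GetObjectRequest.builder()
                .bucket("example-bucket")
                .key("example-object")
                .range("bytes=0-499")
                .build();

        // "Invalid" but modeled ranged get: the end exceeds the content length,
        // which per the comment above may still produce spurious exception logging.
        GetObjectRequest overshootRange = GetObjectRequest.builder()
                .bucket("example-bucket")
                .key("example-object")
                .range("bytes=0-999999999")
                .build();

        client.getObject(validRange, AsyncResponseTransformer.toBytes()).join();
        client.getObject(overshootRange, AsyncResponseTransformer.toBytes()).join();
        client.close();
    }
}
```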

kessplas commented on May 9, 2025

> Hey @tbaeg,
>
> Thanks for the bug report. For valid ranged get requests, this behavior is fixed in the recently released version 3.3.3. Please upgrade and let us know if you still have the issue. Note that there are some edge cases where "invalid" (but modeled) ranged get requests may still log spurious exceptions, specifically when the end of the range exceeds the content length. These cases are considered lower priority; if they are a problem for you, let us know so that we can prioritize cleaning up the logging accordingly. Thanks!

Upgrading did help reduce the excessive logging, but we started seeing failures reading the data in our tasks and had to revert to our known working version, 3.2.2. I can try to get more details as I have time.

tbaeg commented on May 13, 2025

Hi @tbaeg, this behavior is fixed in the recently released version 3.3.5. Please upgrade and let us know if you still have the issue.

rishav-karanjit commented on May 22, 2025