
Timeout uploading S3 objects immediately after starting to produce messages

Open · Chillax-0v0 opened this issue on Oct 12, 2023 · 0 comments

server.log (excerpt; duplicate stack traces and repeated warnings are elided below):

[2023-10-12 09:28:08,713] INFO log append cost, permitAcquireFail={count=0, avg=0ns, max=0ns}, remainingPermit=1073712638/1073741824 ,append={count=17692, avg=91us, max=19ms}, callback={count=17694, avg=1ms, max=30ms}, ack={count=15819, avg=385us, max=39ms} (kafka.log.streamaspect.ElasticLog)
[2023-10-12 09:28:08,713] INFO [KafkaApi-0] produce cost, produce={count=7293, avg=3ms, max=40ms} callback={count=7293, avg=2ms, max=40ms} ack={count=7293, avg=58us, max=3ms} (kafka.server.KafkaApis)
[2023-10-12 09:28:08,844] INFO Reporting metrics. (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-10-12 09:28:08,848] INFO Finished reporting metrics, total metrics size: 106, merged size: 69. (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-10-12 09:28:13,584] WARN [DefaultObjectWriter objId=16] CreateMultipartUpload for object 01000000/_kafka_rZdE0DjZSrqy96PXrMUZVw/16 fail, retry later (com.automq.stream.s3.operator.DefaultS3Operator)
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Acquire operation took longer than the configured maximum time. This indicates that a request cannot get a connection from the pool within the specified maximum time. This can be due to high request rate.
Consider taking any of the following actions to mitigate the issue: increase max connections, increase acquire timeout, or slowing the request rate.
Increasing the max connections can increase client throughput (unless the network interface is already fully utilized), but can eventually start to hit operation system limitations on the number of file descriptors used by the process. If you already are fully utilizing your network interface or cannot further increase your connection count, increasing the acquire timeout gives extra time for requests to acquire a connection before timing out. If the connections doesn't free up, the subsequent requests will still timeout.
If the above mechanisms are not able to fix the issue, try smoothing out your requests so that large traffic bursts cannot overload the client, being more efficient with the number of times you need to call AWS, or by increasing the number of hosts sending requests.
	at software.amazon.awssdk.utils.CompletableFutureUtils.errorAsCompletionException(CompletableFutureUtils.java:65)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncExecutionFailureExceptionReportingStage.lambda$execute$0(AsyncExecutionFailureExceptionReportingStage.java:51)
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeAttemptExecute(AsyncRetryableStage.java:103)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeRetryExecute(AsyncRetryableStage.java:184)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.lambda$attemptExecute$1(AsyncRetryableStage.java:159)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at software.amazon.awssdk.utils.CompletableFutureUtils.lambda$forwardExceptionTo$0(CompletableFutureUtils.java:79)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$null$0(MakeAsyncHttpRequestStage.java:103)
	at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
	at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.completeResponseFuture(MakeAsyncHttpRequestStage.java:240)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.MakeAsyncHttpRequestStage.lambda$executeHttpRequest$3(MakeAsyncHttpRequestStage.java:163)
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
	at java.base/java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:482)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: software.amazon.awssdk.core.exception.SdkClientException: Unable to execute HTTP request: Acquire operation took longer than the configured maximum time. This indicates that a request cannot get a connection from the pool within the specified maximum time. This can be due to high request rate.
Consider taking any of the following actions to mitigate the issue: increase max connections, increase acquire timeout, or slowing the request rate.
Increasing the max connections can increase client throughput (unless the network interface is already fully utilized), but can eventually start to hit operation system limitations on the number of file descriptors used by the process. If you already are fully utilizing your network interface or cannot further increase your connection count, increasing the acquire timeout gives extra time for requests to acquire a connection before timing out. If the connections doesn't free up, the subsequent requests will still timeout.
If the above mechanisms are not able to fix the issue, try smoothing out your requests so that large traffic bursts cannot overload the client, being more efficient with the number of times you need to call AWS, or by increasing the number of hosts sending requests.
	at software.amazon.awssdk.core.exception.SdkClientException$BuilderImpl.build(SdkClientException.java:111)
	at software.amazon.awssdk.core.exception.SdkClientException.create(SdkClientException.java:47)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:223)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.utils.RetryableStageHelper.setLastException(RetryableStageHelper.java:218)
	at software.amazon.awssdk.core.internal.http.pipeline.stages.AsyncRetryableStage$RetryingExecutor.maybeRetryExecute(AsyncRetryableStage.java:182)
	... 23 more
Caused by: java.lang.Throwable: Acquire operation took longer than the configured maximum time. This indicates that a request cannot get a connection from the pool within the specified maximum time. This can be due to high request rate.
Consider taking any of the following actions to mitigate the issue: increase max connections, increase acquire timeout, or slowing the request rate.
Increasing the max connections can increase client throughput (unless the network interface is already fully utilized), but can eventually start to hit operation system limitations on the number of file descriptors used by the process. If you already are fully utilizing your network interface or cannot further increase your connection count, increasing the acquire timeout gives extra time for requests to acquire a connection before timing out. If the connections doesn't free up, the subsequent requests will still timeout.
If the above mechanisms are not able to fix the issue, try smoothing out your requests so that large traffic bursts cannot overload the client, being more efficient with the number of times you need to call AWS, or by increasing the number of hosts sending requests.
	at software.amazon.awssdk.http.nio.netty.internal.utils.NettyUtils.decorateException(NettyUtils.java:69)
	at software.amazon.awssdk.http.nio.netty.internal.NettyRequestExecutor.handleFailure(NettyRequestExecutor.java:309)
	at software.amazon.awssdk.http.nio.netty.internal.NettyRequestExecutor.makeRequestListener(NettyRequestExecutor.java:189)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
	at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:491)
	at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:616)
	at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:609)
	at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
	at software.amazon.awssdk.http.nio.netty.internal.CancellableAcquireChannelPool.lambda$acquire$1(CancellableAcquireChannelPool.java:58)
	at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:578)
	at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:552)
	at io.netty.util.concurrent.DefaultPromise.access$200(DefaultPromise.java:35)
	at io.netty.util.concurrent.DefaultPromise$1.run(DefaultPromise.java:502)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:503)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	... 1 more
Caused by: java.util.concurrent.TimeoutException: Acquire operation took longer than 10000 milliseconds.
	at software.amazon.awssdk.http.nio.netty.internal.HealthCheckedChannelPool.timeoutAcquire(HealthCheckedChannelPool.java:77)
	at software.amazon.awssdk.http.nio.netty.internal.HealthCheckedChannelPool.lambda$acquire$0(HealthCheckedChannelPool.java:67)
	at io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)
	at io.netty.util.concurrent.ScheduledFutureTask.run(ScheduledFutureTask.java:153)
	at io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
	... 3 more
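
The SDK's guidance in the message above boils down to three knobs: more connections, a longer acquire timeout, or a slower request rate. The "10000 milliseconds" in the root cause matches the Netty async HTTP client's default connection-acquisition timeout, and that client's default pool size is 50 connections. As a minimal sketch of the first two knobs (the builder calls are standard AWS SDK v2 for Java; the concrete values, and the idea of wiring such a client into AutoMQ's DefaultS3Operator, are assumptions on my part, not AutoMQ configuration):

```java
import java.time.Duration;

import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;
import software.amazon.awssdk.services.s3.S3AsyncClient;

public class TunedS3Client {
    public static S3AsyncClient build() {
        return S3AsyncClient.builder()
                .httpClientBuilder(NettyNioAsyncHttpClient.builder()
                        // Default is 50; raise it so a burst of multipart
                        // uploads doesn't queue up waiting for a connection.
                        .maxConcurrency(256)
                        // Default is 10 s, which is exactly the timeout in the
                        // stack trace above; give queued requests more headroom.
                        .connectionAcquisitionTimeout(Duration.ofSeconds(60)))
                .build();
    }
}
```

After the first failure, the same acquire-timeout error shows up on the subsequent UploadPart requests:
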
[2023-10-12 09:28:13,855] INFO Reporting metrics. (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-10-12 09:28:13,860] INFO Finished reporting metrics, total metrics size: 106, merged size: 69. (kafka.autobalancer.metricsreporter.AutoBalancerMetricsReporter)
[2023-10-12 09:28:15,278] WARN [DefaultObjectWriter objId=14] UploadPart for object e0000000/_kafka_rZdE0DjZSrqy96PXrMUZVw/14-1 fail, retry later (com.automq.stream.s3.operator.DefaultS3Operator)
	(same CompletionException / acquire-timeout stack trace as above, again rooted in "java.util.concurrent.TimeoutException: Acquire operation took longer than 10000 milliseconds."; elided)
[2023-10-12 09:28:15,569] WARN log cache size 1073767219 is larger than 1073741824, wait 100ms (com.automq.stream.s3.S3Storage)
[2023-10-12 09:28:15,570] WARN log cache size 1073894998 is larger than 1073741824, wait 100ms (com.automq.stream.s3.S3Storage)
[2023-10-12 09:28:15,573] WARN log cache size 1073903354 is larger than 1073741824, wait 100ms (com.automq.stream.s3.S3Storage)
	(the same warning repeats every few milliseconds for the next ~1.5 s, with the cache size pinned at 1073903354; ~60 near-identical lines elided)
[2023-10-12 09:28:17,154] WARN [DefaultObjectWriter objId=11] UploadPart for object b0000000/_kafka_rZdE0DjZSrqy96PXrMUZVw/11-1 fail, retry later (com.automq.stream.s3.operator.DefaultS3Operator)
	(same CompletionException / acquire-timeout stack trace as above; elided)
[2023-10-12 09:28:17,172] WARN log cache size 1073903354 is larger than 1073741824, wait 100ms (com.automq.stream.s3.S3Storage)
[2023-10-12 09:28:17,172] WARN log cache size 1073903354 is larger than 1073741824, wait 100ms (com.automq.stream.s3.S3Storage)
[2023-10-12 09:28:17,177] WARN log cache size 1073903354 is larger than 1073741824, wait 100ms (com.automq.stream.s3.S3Storage)
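
The cache warnings look like a knock-on effect rather than a separate bug: while the multipart uploads sit blocked on the connection pool, nothing drains S3Storage's log cache, so it stays just above its 1073741824-byte (1 GiB) limit and every append is throttled in 100 ms waits. Schematically (this is only an illustration of the wait loop the WARN lines describe, not AutoMQ's actual code):

```java
import java.util.concurrent.atomic.AtomicLong;

class LogCacheBackpressure {
    // The 1 GiB threshold that appears verbatim in the WARN lines.
    static final long CACHE_LIMIT_BYTES = 1073741824L;

    private final AtomicLong cacheSizeBytes = new AtomicLong();

    // Appends block here until uploads to S3 free cache space; if uploads
    // are stuck on the connection pool, this loops indefinitely, logging
    // "log cache size X is larger than ..., wait 100ms" on each pass.
    void waitForSpace() throws InterruptedException {
        while (cacheSizeBytes.get() > CACHE_LIMIT_BYTES) {
            Thread.sleep(100);
        }
    }
}
```

Note the cache only overshot the limit by about 160 KB (1073903354 − 1073741824 = 161530 bytes), so resolving the connection-pool exhaustion (or smoothing the initial produce burst) should also make these warnings disappear.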
