[Question] FileStoreCommitImpl.tryCommitOnce failed
### Search before asking

- [X] I searched in the issues and found nothing similar.
### Paimon version

0.8.0
### Compute Engine

Flink
### Minimal reproduce step

The `docker-compose.yml` is taken from flink-sql-client-with-session-cluster; the only change is the image tag, from `latest` to `1.19.0-java17`:
```yaml
version: "2.2"
services:
  jobmanager:
    image: flink:1.19.0-java17
    ports:
      - "18081:8081"
    command: jobmanager
    container_name: jobmanager
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
  taskmanager:
    image: flink:1.19.0-java17
    depends_on:
      - jobmanager
    command: taskmanager
    container_name: taskmanager
    scale: 1
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        taskmanager.numberOfTaskSlots: 2
  sql-client:
    image: flink:1.19.0-java17
    command: bin/sql-client.sh
    depends_on:
      - jobmanager
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        rest.address: jobmanager
```
```shell
docker-compose up -d
docker cp paimon-flink-1.19-0.8.0.jar taskmanager:/opt/flink/lib
docker cp paimon-flink-1.19-0.8.0.jar jobmanager:/opt/flink/lib
docker cp paimon-flink-action-0.8.0.jar taskmanager:/opt/flink/lib
docker cp paimon-flink-action-0.8.0.jar jobmanager:/opt/flink/lib
docker cp flink-shaded-hadoop-2-uber-2.8.3-10.0.jar taskmanager:/opt/flink/lib
docker cp flink-shaded-hadoop-2-uber-2.8.3-10.0.jar jobmanager:/opt/flink/lib
docker-compose restart
docker exec -it jobmanager bash
sql-client.sh
```
Then execute the following statements in the SQL session:

```sql
CREATE CATALOG my_catalog WITH (
  'type' = 'paimon',
  'warehouse' = 'file:/tmp/paimon'
);

USE CATALOG my_catalog;

-- create a word count table
CREATE TABLE word_count (
  word STRING PRIMARY KEY NOT ENFORCED,
  cnt BIGINT
);

SET 'sql-client.execution.result-mode' = 'tableau';
SET 'execution.runtime-mode' = 'batch';

SELECT * FROM word_count;

INSERT INTO word_count (word, cnt) VALUES ('A', 1);
```
### What doesn't meet your expectations?

The `INSERT INTO` statement fails to execute, and the Flink web UI shows this stack trace:
```text
2024-06-10 11:44:29,451 INFO org.apache.flink.runtime.executiongraph.ExecutionGraph [] - Job insert-into_my_catalog.default.word_count (ac6d92709bc2c76e45cb318e852dc290) switched from state FAILING to FAILED.
org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
    at org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:180) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:107) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:277) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:268) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:261) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:787) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:764) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:83) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:488) ~[flink-dist-1.19.0.jar:1.19.0]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:?]
    at jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
    at jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source) ~[?:?]
    at java.lang.reflect.Method.invoke(Unknown Source) ~[?:?]
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:309) ~[flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:307) ~[flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:222) ~[flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:85) ~[flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:168) ~[flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at scala.PartialFunction.applyOrElse(PartialFunction.scala:127) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253) [flink-rpc-akkaee4fde85-8aea-407f-989b-1d2dfded6bff.jar:1.19.0]
    at java.util.concurrent.ForkJoinTask.doExec(Unknown Source) [?:?]
    at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(Unknown Source) [?:?]
    at java.util.concurrent.ForkJoinPool.scan(Unknown Source) [?:?]
    at java.util.concurrent.ForkJoinPool.runWorker(Unknown Source) [?:?]
    at java.util.concurrent.ForkJoinWorkerThread.run(Unknown Source) [?:?]
Caused by: java.lang.RuntimeException: Exception occurs when preparing snapshot #1 (path file:/tmp/paimon/default.db/word_count/snapshot/snapshot-1) by user 48f05c3a-2380-4c1e-b713-7fd966f2cf1f with hash 9223372036854775807 and kind APPEND. Clean up.
    at org.apache.paimon.operation.FileStoreCommitImpl.tryCommitOnce(FileStoreCommitImpl.java:886) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.operation.FileStoreCommitImpl.tryCommit(FileStoreCommitImpl.java:659) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.operation.FileStoreCommitImpl.commit(FileStoreCommitImpl.java:267) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.table.sink.TableCommitImpl.commitMultiple(TableCommitImpl.java:206) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.flink.sink.StoreCommitter.commit(StoreCommitter.java:100) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.flink.sink.CommitterOperator.commitUpToCheckpoint(CommitterOperator.java:171) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.flink.sink.CommitterOperator.endInput(CommitterOperator.java:158) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.endOperatorInput(StreamOperatorWrapper.java:96) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$finish$0(StreamOperatorWrapper.java:149) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.finish(StreamOperatorWrapper.java:149) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.finish(StreamOperatorWrapper.java:156) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.finishOperators(RegularOperatorChain.java:115) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.endData(StreamTask.java:636) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:594) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:909) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:858) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:751) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) ~[flink-dist-1.19.0.jar:1.19.0]
    at java.lang.Thread.run(Unknown Source) ~[?:?]
Caused by: java.util.NoSuchElementException: No value present
    at java.util.Optional.get(Unknown Source) ~[?:?]
    at org.apache.paimon.operation.FileStoreCommitImpl.tryCommitOnce(FileStoreCommitImpl.java:839) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.operation.FileStoreCommitImpl.tryCommit(FileStoreCommitImpl.java:659) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.operation.FileStoreCommitImpl.commit(FileStoreCommitImpl.java:267) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.table.sink.TableCommitImpl.commitMultiple(TableCommitImpl.java:206) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.flink.sink.StoreCommitter.commit(StoreCommitter.java:100) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.flink.sink.CommitterOperator.commitUpToCheckpoint(CommitterOperator.java:171) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.paimon.flink.sink.CommitterOperator.endInput(CommitterOperator.java:158) ~[paimon-flink-1.19-0.8.0.jar:0.8.0]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.endOperatorInput(StreamOperatorWrapper.java:96) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.lambda$finish$0(StreamOperatorWrapper.java:149) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.runThrowing(StreamTaskActionExecutor.java:50) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.finish(StreamOperatorWrapper.java:149) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamOperatorWrapper.finish(StreamOperatorWrapper.java:156) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.finishOperators(RegularOperatorChain.java:115) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.endData(StreamTask.java:636) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:594) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:909) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:858) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:958) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:937) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:751) ~[flink-dist-1.19.0.jar:1.19.0]
    at org.apache.flink.runtime.taskmanager.Task.run(Task.java:566) ~[flink-dist-1.19.0.jar:1.19.0]
    at java.lang.Thread.run(Unknown Source) ~[?:?]
```
```text
2024-06-10 11:44:29,455 INFO org.apache.flink.runtime.dispatcher.StandaloneDispatcher [] - Job ac6d92709bc2c76e45cb318e852dc290 reached terminal state FAILED.
```

The dispatcher then logs the same `JobException` / `NoSuchElementException` stack trace a second time.
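The innermost cause is easy to reproduce in isolation: calling `Optional.get()` on an empty `Optional` throws exactly this `NoSuchElementException: No value present`. The sketch below is purely illustrative and not Paimon's actual code; it only shows where the error message comes from.

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        // An empty Optional stands in for "expected metadata not found".
        Optional<String> missing = Optional.empty();
        try {
            missing.get(); // throws NoSuchElementException
        } catch (NoSuchElementException e) {
            System.out.println(e.getMessage()); // prints "No value present"
        }
    }
}
```

In other words, `FileStoreCommitImpl.tryCommitOnce` unwrapped an `Optional` that was empty because the expected files were not visible on that node's filesystem.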
### Anything else?

No response
### Are you willing to submit a PR?

- [ ] I'm willing to submit a PR!
I have the same problem, buddy. Have you already solved it?
I am facing the same problem too.

I am facing the same problem too.

Same here.
Hi.

At this point this answer won't help the original reporter, but just in case, and for anyone else who lands here with the same problem:

The issue is that the JobManager and the TaskManagers do not share storage. When the catalog is created on a local file path, the TaskManagers have no access to it and cannot retrieve the table and its schema.

The simplest fix is to declare a volume in the docker-compose file and mount it in every service so that they share storage. For example, using a bind mount backed by a local folder:
```yaml
services:
  jobmanager:
    image: flink:1.19.0-java17
    ports:
      - "18081:8081"
    command: jobmanager
    container_name: jobmanager
    volumes:
      - ./shared-data:/tmp/flink-shared-data
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
  taskmanager:
    image: flink:1.19.0-java17
    depends_on:
      - jobmanager
    command: taskmanager
    container_name: taskmanager
    scale: 1
    volumes:
      - ./shared-data:/tmp/flink-shared-data
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        taskmanager.numberOfTaskSlots: 2
  sql-client:
    image: flink:1.19.0-java17
    command: bin/sql-client.sh
    volumes:
      - ./shared-data:/tmp/flink-shared-data
    depends_on:
      - jobmanager
    environment:
      - |
        FLINK_PROPERTIES=
        jobmanager.rpc.address: jobmanager
        rest.address: jobmanager
```
Then create the catalog under the shared path:

```sql
CREATE CATALOG my_catalog WITH (
  'type' = 'paimon',
  'warehouse' = 'file:/tmp/flink-shared-data/paimon'
);
```
That way, the `/tmp/flink-shared-data` path is shared by all the services, and the structures created from the SQL Client can also be accessed from the TaskManagers during execution.