github-raphael-douyere
What could be done is to run Spark's streaming `foreachBatch` and initialize the `AbrisConfig` inside it, but that is a really limiting approach. I'm a bit surprised that there is...
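A minimal sketch of that workaround, assuming the ABRiS Confluent-Avro reader API; the topic name, registry URL, and sink write are placeholders, and rebuilding the config per micro-batch is the point of the example, not a recommendation:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col
import za.co.absa.abris.avro.functions.from_avro
import za.co.absa.abris.config.AbrisConfig

// Hypothetical values for illustration only.
val topic = "my-topic"
val registryUrl = "http://localhost:8081"

df.writeStream
  .foreachBatch { (batch: DataFrame, batchId: Long) =>
    // Re-create the Abris config on every micro-batch so the latest
    // reader schema is fetched from the schema registry each time.
    val abrisConfig = AbrisConfig
      .fromConfluentAvro
      .downloadReaderSchemaByLatestVersion
      .andTopicNameStrategy(topic)
      .usingSchemaRegistry(registryUrl)

    val decoded = batch.select(from_avro(col("value"), abrisConfig).as("data"))
    // ... write `decoded` to the sink of your choice
  }
  .start()
```

The limitation mentioned above still applies: even though the config is rebuilt per batch, the streaming DataFrame's schema is fixed when the query starts, so a schema change still requires a restart.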
Thanks for the answer. I see the issue: Spark infers the DataFrame schema only once and it cannot be changed afterwards. An alternative would be to be able to stop the app...
Sure, the stack trace is as follows: ``` [2023-05-03 15:08:11,082] ERROR WorkerSinkTask{id=test-partitioned-table-sink-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually...