bigmancomeon
When I use Spark 3.3.3, Hadoop 2.7.1, and Scala 2.12 and run the Spark job shown in the attached picture, an error occurs: java.lang.StackOverflowError. Here is my Spark configuration ...
Spark version: 3.3.3

Spark conf with Blaze:
spark.executor.memory 5g
spark.executor.memoryOverhead 3072
spark.blaze.memoryFraction 0.7
spark.blaze.enable.caseconvert.functions true
spark.blaze.enable.smjInequalityJoin false
spark.blaze.enable.bhjFallbacksToSmj false

Spark conf without Blaze:
spark.executor.memory 6g
spark.executor.memoryOverhead...
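For reference, here is a minimal sketch of how the "with Blaze" settings above could be applied when building the session. Normally these values would live in spark-defaults.conf or be passed via --conf at submit time; the app name here is a placeholder:

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch: applying the "with Blaze" configuration quoted above
// programmatically. The spark.blaze.* keys and values mirror the post;
// the app name is a placeholder.
val spark = SparkSession.builder()
  .appName("blaze-job")
  .config("spark.executor.memory", "5g")
  .config("spark.executor.memoryOverhead", "3072")
  .config("spark.blaze.memoryFraction", "0.7")
  .config("spark.blaze.enable.caseconvert.functions", "true")
  .config("spark.blaze.enable.smjInequalityJoin", "false")
  .config("spark.blaze.enable.bhjFallbacksToSmj", "false")
  .getOrCreate()
```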
I use Anolis OS, which is an RHEL-like system; can Blaze work on it? I use Spark version 3.3.3. Common Spark conf settings: num-executors 15, executor-cores 2, driver-memory 2g. This is Spark...
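For clarity, a small sketch of the standard conf-key equivalents of those spark-submit flags (these mappings are plain Spark and independent of the OS question):

```scala
import org.apache.spark.SparkConf

// Sketch: the quoted spark-submit flags and their standard conf-key equivalents.
//   --num-executors  -> spark.executor.instances
//   --executor-cores -> spark.executor.cores
//   --driver-memory  -> spark.driver.memory
val conf = new SparkConf()
  .set("spark.executor.instances", "15")
  .set("spark.executor.cores", "2")
  .set("spark.driver.memory", "2g")
```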
Does this Blaze optimization only support Parquet files? Does it also support text and ORC files?
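One way to probe this empirically is to read each format and inspect the executed physical plan. This is only a sketch: the "Native" substring is an assumption about how Blaze names its converted plan nodes, and the paths are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: read each format and check whether the physical plan contains
// native (Blaze-converted) operators. The "Native" substring is an assumed
// naming convention, not confirmed here; paths are placeholders.
val spark = SparkSession.builder().appName("blaze-format-check").getOrCreate()

val parquetDf = spark.read.parquet("/path/to/data.parquet")
val orcDf     = spark.read.orc("/path/to/data.orc")
val textDf    = spark.read.text("/path/to/data.txt")

Seq("parquet" -> parquetDf, "orc" -> orcDf, "text" -> textDf).foreach {
  case (fmt, df) =>
    val accelerated = df.queryExecution.executedPlan.toString.contains("Native")
    println(s"$fmt scan converted to native: $accelerated")
}
```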