WIP
Shows merge conflicts at the moment.
The JVM unit tests look good; `com.nec.spark.cgescape.CodegenEscapeSpec` fails, but I believe that's unrelated to this PR.
CMake scope: if possible, it might be more appropriate to load the lib programmatically rather than via SBT; then it should be more consistent across the different scopes. See the sketch below.
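A minimal sketch of what programmatic loading could look like, assuming the shared object is bundled as a classpath resource; the resource name and object name here are hypothetical, not the actual project API:

```scala
import java.nio.file.{Files, StandardCopyOption}

// Sketch: load a native lib shipped as a classpath resource instead of
// relying on SBT to set up java.library.path for each scope.
object NativeLibLoader {
  @volatile private var loaded = false

  def load(resourceName: String = "/libveofunctions.so"): Unit = synchronized {
    if (!loaded) {
      val in = getClass.getResourceAsStream(resourceName)
      require(in != null, s"Native library $resourceName not found on classpath")
      // Copy to a temp file because System.load needs an absolute filesystem path.
      val tmp = Files.createTempFile("native-", ".so")
      try Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING)
      finally in.close()
      System.load(tmp.toAbsolutePath.toString)
      loaded = true
    }
  }
}
```

That way every scope (tests, benchmarks, cluster jobs) goes through the same code path at runtime instead of depending on how the build tool was invoked.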
Further finding: the WSCG operators (joins & aggregates) extend `BlockingOperatorWithCodegen`, which marks the places where all input data needs to be loaded before anything can be produced. We could rewrite the codegen parts...
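For illustration, a small sketch of how we could spot those blocking operators in a physical plan, assuming `BlockingOperatorWithCodegen` (in `org.apache.spark.sql.execution`) is reachable from user code:

```scala
import org.apache.spark.sql.execution.{BlockingOperatorWithCodegen, SparkPlan}

// Sketch: collect the plan nodes that extend BlockingOperatorWithCodegen,
// i.e. the points where whole-stage codegen must consume all of its input
// before emitting any output (e.g. HashAggregateExec, SortExec).
object BlockingOps {
  def find(plan: SparkPlan): Seq[SparkPlan] =
    plan.collect { case p: BlockingOperatorWithCodegen => p }
}

// Usage (assuming a SparkSession `spark` and a table `t`):
//   val plan = spark.sql("SELECT key, SUM(value) FROM t GROUP BY key")
//     .queryExecution.executedPlan
//   BlockingOps.find(plan).foreach(p => println(p.nodeName))
```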
@Wosin `libarrow-glib.so` would need to be compiled for the Aurora architecture as well; you probably won't be able to link x86 libs into an Aurora project.
More notes:

```scala
object VeoGenericPlanExtractor {
  def matchPlan(sparkPlan: SparkPlan): Option[GenericSparkPlanDescription] = {
    PartialFunction.condOpt(sparkPlan) {
      case first @ HashAggregateExec(
            requiredChildDistributionExpressions,
            groupingExpressions,
            exprs,
            aggregateAttributes,
            initialInputBufferOffset,
            resultExpressions,
            org.apache.spark.sql.execution.exchange.ShuffleExchangeExec(
              outputPartitioning,
              f @ org.apache.spark.sql.execution.aggregate...
```