jammman
Thanks @imback82 . Local Spark was 2.4.1; I upgraded it to 2.4.5 and it still works locally. The above was sample code. Here is the real code, but it contains some of...
Reviewing the Spark code, it looks like this NullPointerException would only happen if [ArrowColumnVector.childColumns is null](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java#L131), and since it seems to be [initialized in the constructor](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/java/org/apache/spark/sql/vectorized/ArrowColumnVector.java#L170) and only [de-initialized...
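To make that failure mode concrete, here is a minimal, hypothetical Java sketch of the initialize-in-constructor / null-on-close pattern described above. This is not the actual Spark source; the names and types are simplified for illustration:

```java
// Sketch of the pattern in ArrowColumnVector (simplified, hypothetical):
// childColumns is assigned in the constructor and the only place it is set
// back to null is close(), so an NPE on access implies close() already ran.
public class ArrowColumnVectorSketch {
    private Object[] childColumns; // stands in for ArrowColumnVector[] in Spark

    public ArrowColumnVectorSketch(int numChildren) {
        // initialized eagerly in the constructor
        childColumns = new Object[numChildren];
    }

    public Object getChild(int ordinal) {
        // throws NullPointerException only if close() has already nulled the array
        return childColumns[ordinal];
    }

    public void close() {
        // the only de-initialization point for the field
        childColumns = null;
    }
}
```

If that reading is right, the exception would suggest the vector is being accessed after it has been closed, rather than never having been initialized.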
I tried an Azure HDInsight Spark cluster to compare, and I can confirm this **does** work correctly on HDInsight with no code changes, as expected (and hoped), so...
Thanks @elvaliuliuliu and @imback82 , so far I have only tried:

- Local, spark-2.4.1-bin-hadoop2.7: successful
- Local, spark-2.4.5-bin-hadoop2.7: successful
- Databricks 6.4 (includes Apache Spark 2.4.5, Scala 2.11): failed
- HDInsight...
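Since the comparison above hinges on which Spark version each environment is actually running (the version executing the job can differ from the client libraries on the submitting machine), a small check like the following can confirm it. This is a minimal sketch, assuming the job is submitted via spark-submit so the master is supplied by the environment:

```java
import org.apache.spark.sql.SparkSession;

public class PrintSparkVersion {
    public static void main(String[] args) {
        // master/deploy settings are expected to come from spark-submit
        SparkSession spark = SparkSession.builder()
                .appName("version-check")
                .getOrCreate();
        // Reports the version of the Spark runtime actually executing the job
        System.out.println("Running against Spark " + spark.version());
        spark.stop();
    }
}
```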