Xiao Li
@skrawcz Please check this link https://www.mail-archive.com/[email protected]/msg281210.html
I think we have no plans to support it in DBConnect in the near future. cc @juliuszsompolski @youngbink
@yosifkit Today we published the latest release, Spark 3.4.0: https://spark.apache.org/releases/spark-release-3-4-0.html I am wondering when we can have our own official Docker image?
PySpark follows ANSI SQL, although we have not supported all the data types yet. We are currently revisiting the [type coercion rules](https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TypeCoercion.scala) and type casting. Below is the...
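As a rough illustration of what implicit type coercion means here, the sketch below models widening along a numeric precedence chain in plain Python. This is a hypothetical simplification for discussion, not Spark's actual implementation; the `widen` helper and the `NUMERIC_PRECEDENCE` list are assumptions, and the real rules in `TypeCoercion.scala` cover many more cases (strings, decimals, timestamps, etc.).

```python
# Hypothetical sketch (NOT Spark's implementation): mixing two numeric
# types resolves to the wider of the two along a precedence chain.
NUMERIC_PRECEDENCE = ["byte", "short", "int", "long", "float", "double"]

def widen(t1: str, t2: str) -> str:
    """Return the wider of two numeric types (illustrative helper)."""
    return max(t1, t2, key=NUMERIC_PRECEDENCE.index)

print(widen("int", "double"))  # -> double
print(widen("byte", "long"))   # -> long
```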
FYI, PySpark follows the NULL semantics defined in ANSI SQL. We documented the behavior in http://spark.apache.org/docs/latest/sql-ref-null-semantics.html
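For readers unfamiliar with ANSI NULL semantics, the core idea is three-valued logic: a comparison involving NULL evaluates to NULL (unknown), not to true or false. A minimal Python sketch of that rule, with NULL modeled as `None` (the `sql_eq`/`sql_not` helpers are illustrative, not a Spark API):

```python
# Sketch of ANSI SQL three-valued logic: any comparison with NULL
# (modeled as None) is unknown, and NOT of unknown is still unknown.
def sql_eq(a, b):
    if a is None or b is None:
        return None  # comparison with NULL yields NULL, not a boolean
    return a == b

def sql_not(x):
    return None if x is None else (not x)

print(sql_eq(None, None))        # -> None (NULL = NULL is NOT true)
print(sql_not(sql_eq(1, None)))  # -> None
print(sql_eq(1, 1))              # -> True
```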
@mihailom-db how about splitting it into multiple PRs?