Decimal truncate strategy is incorrect
Previously, when truncating a decimal type whose precision is greater than 38, we kept the scale and only truncated the precision down to 38. This preserves as much of the data's precision as possible. However, it also turns some decimal values into null when truncated_precision - scale is less than the data's precision - scale, i.e. when the integer part no longer fits. When the corresponding column is a primary key or a non-null column, this causes problems.
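A minimal Scala sketch of the failure mode, using Spark's `org.apache.spark.sql.types.Decimal` and a hypothetical `DECIMAL(40, 5)` source column (the column type and value are illustrative, not taken from a real table):

```scala
import org.apache.spark.sql.types.Decimal

object TruncateExample {
  def main(args: Array[String]): Unit = {
    // Hypothetical source column DECIMAL(40, 5): 35 integer digits, 5 fractional digits.
    val value = Decimal(BigDecimal("1" * 35 + ".12345"))

    // Old strategy: truncate precision to Spark's max of 38 but keep the scale,
    // producing DECIMAL(38, 5), which leaves room for only 38 - 5 = 33 integer digits.
    val fits = value.changePrecision(38, 5)

    // changePrecision returns false because the 35-digit integer part does not fit,
    // so the value would be surfaced as null -- a problem for PK / non-null columns.
    println(s"fits in DECIMAL(38, 5): $fits") // prints: fits in DECIMAL(38, 5): false
  }
}
```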
We talked with the Spark team, and we might consider a PR to support a larger decimal range, as long as it stays compatible with older versions of Spark (in both behavior and performance). @birdstorm
Is the Spark team doing this?
/lifecycle frozen