
Writing to multiple GeoParquet files will not output _metadata


Expected behavior

When writing a GeoParquet DataFrame that produces multiple output files, the _metadata summary file is not created even though the writer is configured to produce it.

import sedona
from sedona.spark import *

sedona = SedonaContext.create(spark)

print("spark version: {}".format(spark.version))
print("sedona version: {}".format(sedona.version))

# Ask Parquet to produce the _metadata and _common_metadata summary files
spark.conf.set("parquet.summary.metadata.level", "ALL")

def write_geoparquet(df, path):
    # maxRecordsPerFile caps each output file, forcing a multi-file
    # write once the DataFrame exceeds 10M records
    df.write.format("geoparquet") \
        .option("geoparquet.version", "1.0.0") \
        .option("geoparquet.crs", "") \
        .option("compression", "zstd") \
        .option("parquet.block.size", 16 * 1024 * 1024) \
        .option("maxRecordsPerFile", 10000000) \
        .mode("overwrite").save(path)

df = sedona.read.format("geoparquet").option("mergeSchema", "true").load(input_path)
write_geoparquet(df, output_path)

If the number of records exceeds maxRecordsPerFile so that more than one file is written, the _metadata and _common_metadata files will not be written. When there are few enough records that only one file is written, _metadata and _common_metadata are created as expected.

However, if I change the above to write plain parquet instead of geoparquet:

def write_parquet(df, path):
    df.write.format("parquet") \
        .option("compression", "zstd") \
        .option("parquet.block.size", 16 * 1024 * 1024) \
        .option("maxRecordsPerFile", 10000000) \
        .mode("overwrite").save(path)

write_parquet(df, output_path)

Then _metadata and _common_metadata will be written even with multiple files. Is there a setting or other way to enable writing the common metadata files?

I'd like these files to be written because readers such as pyarrow can then plan a full-dataset read from the summary file instead of scanning every file's footer, which can be time-consuming for large datasets.
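For example, pyarrow can construct a dataset straight from the _metadata file, so planning and row-group filtering never touch the individual part files. A minimal sketch, with a hypothetical path and filter column:

import pyarrow.dataset as ds

# Build the dataset from the consolidated _metadata file instead of
# listing and opening every part file (path and column are hypothetical)
dataset = ds.parquet_dataset("/path/to/output/_metadata")
table = dataset.to_table(filter=ds.field("id") > 100)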

Settings

Sedona version = 3.4.1
Apache Spark version = 3.4.1

Environment = Databricks

jwass commented Mar 29, 2024

The geo metadata in the Parquet footers may not be the same across all written GeoParquet files, especially the bbox field. This makes the default Parquet footer metadata merging process fail with the following exception:

java.lang.RuntimeException: could not merge metadata: key geo has conflicting values: [{"version":"1.0.0","primary_column":"geom","columns":{"geom":{"encoding":"WKB","geometry_types":["Polygon"],"bbox":[1.0,1.0,9998.0,9998.0],"crs":null}}}, {"version":"1.0.0","primary_column":"geom","columns":{"geom":{"encoding":"WKB","geometry_types":["Polygon"],"bbox":[0.0,0.0,10000.0,10000.0],"crs":null}}}]
	at org.apache.parquet.hadoop.metadata.StrictKeyValueMetadataMergeStrategy.merge(StrictKeyValueMetadataMergeStrategy.java:36)
	at org.apache.parquet.hadoop.metadata.GlobalMetaData.merge(GlobalMetaData.java:106)
	at org.apache.parquet.hadoop.ParquetFileWriter.mergeFooters(ParquetFileWriter.java:1451)
	at org.apache.parquet.hadoop.ParquetFileWriter.mergeFooters(ParquetFileWriter.java:1422)
	at org.apache.parquet.hadoop.ParquetFileWriter.writeMetadataFile(ParquetFileWriter.java:1383)
	at org.apache.parquet.hadoop.ParquetOutputCommitter.writeMetaDataFile(ParquetOutputCommitter.java:84)
	at org.apache.parquet.hadoop.ParquetOutputCommitter.commitJob(ParquetOutputCommitter.java:50)
	at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:192)
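For illustration, inspecting two part files with pyarrow shows each footer carrying a bbox that covers only that file's geometries (file names are hypothetical):

import json
import pyarrow.parquet as pq

# Each file's "geo" footer entry reflects only the geometries written
# to that particular file, so the values differ across files
for path in ["part-00000.parquet", "part-00001.parquet"]:
    geo = json.loads(pq.read_metadata(path).metadata[b"geo"])
    print(path, geo["columns"]["geom"]["bbox"])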

We would have to implement an output committer for GeoParquet to merge the geo metadata properly. If your use case does not need to read the geo metadata from the _common_metadata or _metadata files, we could simply omit the geo metadata when generating them.
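A GeoParquet-aware merge would union the per-file values instead of requiring them to be byte-identical. A minimal sketch of what such a merge could look like (a hypothetical helper, not existing Sedona code):

import json

def merge_geo(values):
    # Hypothetical merge of per-file GeoParquet "geo" footer values
    merged = json.loads(values[0])
    for value in values[1:]:
        for name, col in json.loads(value)["columns"].items():
            m = merged["columns"][name]
            # Union the bounding boxes component-wise
            m["bbox"] = [
                min(m["bbox"][0], col["bbox"][0]),
                min(m["bbox"][1], col["bbox"][1]),
                max(m["bbox"][2], col["bbox"][2]),
                max(m["bbox"][3], col["bbox"][3]),
            ]
            # Union the geometry types
            m["geometry_types"] = sorted(
                set(m["geometry_types"]) | set(col["geometry_types"])
            )
    return json.dumps(merged)

# Applied to the two conflicting values in the exception above, this
# yields bbox [0.0, 0.0, 10000.0, 10000.0] and geometry_types ["Polygon"]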

Kontinuation commented Apr 1, 2024

@Kontinuation Thanks. I think it would be totally fine to leave the geo metadata out of the combined _metadata and/or _common_metadata files, as long as it is still present in the individual GeoParquet files.

Since GeoParquet doesn't define these single _metadata summary files, I don't think that would be an issue at all. The spec may standardize a definition in the future, but for now the file will only be used for row-group filtering, and the geo metadata isn't needed for that.
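In the meantime, one possible workaround is to build the _metadata file after the fact with pyarrow. A sketch, assuming the output files are visible on a local path: write_metadata takes its key-value metadata from a single schema, so the conflicting per-file geo values never get merged.

import glob
import pyarrow.parquet as pq

out = "/path/to/output"  # hypothetical output directory
files = sorted(glob.glob(out + "/*.parquet"))

# Take the schema (including its "geo" key-value entry) from the first file
schema = pq.read_schema(files[0])

collector = []
for path in files:
    md = pq.read_metadata(path)
    md.set_file_path(path.rsplit("/", 1)[-1])  # paths relative to the dataset root
    collector.append(md)

# Each file's row groups are appended; the "geo" key comes only from
# the schema above, so no merge conflict occurs
pq.write_metadata(schema, out + "/_metadata", metadata_collector=collector)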

jwass commented Apr 3, 2024