Rishabh Bandi
**Hudi Config**

```
"hoodie.datasource.write.recordkey.field" = "a,b"
"hoodie.datasource.write.partitionpath.field" = "c,d"
"hoodie.datasource.write.precombine.field" = "e"
"hoodie.datasource.write.operation" = "upsert"
"hoodie.datasource.write.table.type" = "COPY_ON_WRITE"
"hoodie.table.name" = "dbname.tablename"
"hoodie.datasource.write.keygenerator.class" = "org.apache.hudi.keygen.ComplexKeyGenerator"
"hoodie.datasource.write.hive_style_partitioning" = "true"
"hoodie.datasource.hive_sync.support_timestamp" = "true"
"hoodie.cleaner.commits.retained"...
```
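For reference, a minimal Scala sketch of how these options could be passed to a DataFrame write from spark-shell. Here `df` and `basePath` are hypothetical placeholders (not from the original config), and the truncated `hoodie.cleaner.commits.retained` value is left out:

```scala
// Sketch: passing the Hudi options above to a DataFrame upsert.
// `df` and `basePath` are hypothetical; adjust to your table and storage path.
val hudiOptions = Map(
  "hoodie.datasource.write.recordkey.field"         -> "a,b",
  "hoodie.datasource.write.partitionpath.field"     -> "c,d",
  "hoodie.datasource.write.precombine.field"        -> "e",
  "hoodie.datasource.write.operation"               -> "upsert",
  "hoodie.datasource.write.table.type"              -> "COPY_ON_WRITE",
  "hoodie.table.name"                               -> "dbname.tablename",
  "hoodie.datasource.write.keygenerator.class"      -> "org.apache.hudi.keygen.ComplexKeyGenerator",
  "hoodie.datasource.write.hive_style_partitioning" -> "true",
  "hoodie.datasource.hive_sync.support_timestamp"   -> "true"
)

df.write
  .format("hudi")
  .options(hudiOptions)
  .mode("append")   // append mode performs the upsert against an existing table
  .save(basePath)   // e.g. a GCS path such as gs://bucket/path/tablename
```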
@hassan-ammar could we have a working session if possible? Please let me know what time works for you.
@hassan-ammar the command below is being used to create the spark shell:

```
spark-shell --jars gs://spark-lib/bigquery/spark-bigquery-with-dependencies_2.12-0.22.2.jar,/edge_data/code/svcordrdats/pipeline-resources/hudi-support-jars/hudi-spark-bundle_2.12-0.11.0.jar \
  --conf spark.serializer=org.apache.spark.serializer.KryoSerializer \
  --conf spark.kryoserializer.buffer.max=512m \
  --conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
  --conf 'spark.sql.catalogImplementation=hive'
```

You can save the hudi config as mentioned in...
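Once the shell is up, a quick sketch for verifying that the upsert landed by reading the table back. The `basePath` here is a hypothetical placeholder for the table's actual storage location:

```scala
// Sketch: reading the Hudi table back inside spark-shell to verify the write.
// `basePath` is hypothetical; point it at the real table location.
val basePath = "gs://your-bucket/path/to/tablename"

val hudiDF = spark.read
  .format("hudi")
  .load(basePath)

// Record key and precombine fields from the config above.
hudiDF.select("a", "b", "e").show(10, truncate = false)
```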
Hi Team, we changed our approach and created a separate custom Java class to perform the partial update instead. Therefore, I am closing this issue from my side.