baiyangtx

Results: 40 issues by baiyangtx

1. iceberg#OverwriteFiles: implement UnkeyedHiveTable.OverwriteFiles, altering the Hive partition location on commit (see the commit sketch below)
2. iceberg#OverwriteFiles: determine the location of the Hive partition via addDataFile
3. Use UnkeyedHiveTable as the base store of KeyedHiveTable
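As a rough illustration of the first two items, here is a minimal sketch of an Iceberg OverwriteFiles commit, assuming `table` is an `org.apache.iceberg.Table` and the `DataFile` handles are already built; this shows the shape only, not the Arctic implementation, and the Hive partition location update would hook into this commit path.

```scala
// Minimal sketch, assuming an Iceberg Table handle and prebuilt DataFiles;
// not the Arctic implementation. A Hive-backed table would additionally
// alter the Hive partition location once this commit succeeds.
import org.apache.iceberg.{DataFile, Table}

def overwritePartitionFiles(table: Table, removed: DataFile, added: DataFile): Unit = {
  table.newOverwrite()    // org.apache.iceberg.OverwriteFiles
    .deleteFile(removed)  // drop the old file from the overwritten partition
    .addFile(added)       // the added file's path implies the new partition location
    .commit()             // atomically publish the new snapshot
}
```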

core

This is a sub-issue; the parent task is [#173](https://github.com/NetEase/arctic/issues/173). Some questions should be clarified in the design docs: i. pos-delete or eq-delete for the unkeyed table? ii. should batch update write log...

Support Insert/Delete/Update/MergeInto SQL in Spark. Task list (link each sub-issue at each subtask):
* Design Docs #174: some questions should be clarified in the design docs. i. pos-delete or eq-delete for unkeyed...

spark

Support insert into / insert overwrite for the unkeyed table and adapt the Hive catalog. An unkeyed table under Hive cannot reuse the Iceberg SparkTable code, so make ArcticSparkTable wrap HiveUnkeyedTable and support a reader/writer for the table.
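A rough DSv2 skeleton of the wrapping described above, assuming Spark 3's connector API; `UnkeyedHiveTable` is stubbed here and the scan/write builders are left unimplemented, so this is a shape sketch, not the Arctic code:

```scala
import java.util
import org.apache.spark.sql.connector.catalog.{SupportsRead, SupportsWrite, Table, TableCapability}
import org.apache.spark.sql.connector.read.ScanBuilder
import org.apache.spark.sql.connector.write.{LogicalWriteInfo, WriteBuilder}
import org.apache.spark.sql.types.StructType
import org.apache.spark.sql.util.CaseInsensitiveStringMap

// Stand-in for the Arctic table handle; the real interface differs.
trait UnkeyedHiveTable {
  def name(): String
  def sparkSchema(): StructType
}

// Wraps the Hive unkeyed table as a Spark DSv2 Table so Spark can plan
// reads and writes against it without the Iceberg SparkTable code.
class ArcticSparkTable(delegate: UnkeyedHiveTable)
    extends Table with SupportsRead with SupportsWrite {

  override def name(): String = delegate.name()
  override def schema(): StructType = delegate.sparkSchema()

  override def capabilities(): util.Set[TableCapability] =
    util.EnumSet.of(TableCapability.BATCH_READ, TableCapability.BATCH_WRITE,
      TableCapability.OVERWRITE_DYNAMIC)

  // Reader/writer construction elided in this sketch.
  override def newScanBuilder(options: CaseInsensitiveStringMap): ScanBuilder =
    throw new UnsupportedOperationException("reader sketch omitted")

  override def newWriteBuilder(info: LogicalWriteInfo): WriteBuilder =
    throw new UnsupportedOperationException("writer sketch omitted")
}
```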

spark

Run the command via Git Bash:

```
mvn install -pl '!trino'
```

The unit tests fail:

```
Tests run: 2, Failures: 2, Errors: 0, Skipped: 0, Time elapsed: 7.277 sec
```

type:bug
module:ams-server
priority:major
stale

Use the MySQL 8.0 JDBC driver.
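For context, the 8.0 driver changed its class name from `com.mysql.jdbc.Driver` (5.x) to `com.mysql.cj.jdbc.Driver`; a minimal sketch with a placeholder connection URL and credentials:

```scala
// Minimal sketch; the host, database, and credentials are placeholders.
import java.sql.DriverManager

Class.forName("com.mysql.cj.jdbc.Driver") // 8.x class name (5.x used com.mysql.jdbc.Driver)
val conn = DriverManager.getConnection(
  "jdbc:mysql://localhost:3306/arctic?useSSL=false&serverTimezone=UTC",
  "user", "password")
```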

enhancement

Add Spark conf:

```scala
conf.set("arctic.catalog.url", "thrift://xxxx/catalog_name")
conf.set("arctic.resolve-all-identifier", "true")
conf.set("spark.sql.extensions", "com.netease.arctic.spark.ArcticSparkSessionExtensions")
```

Note the new Spark conf keys:
* arctic.catalog.url: the default Arctic catalog URL used to look up table metadata.
* arctic.resolve-all-identifier...
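A minimal usage sketch of these confs when building a session, assuming the Arctic extension jar is on the classpath; the thrift URL below is a placeholder:

```scala
import org.apache.spark.sql.SparkSession

// Placeholder AMS endpoint and catalog name; substitute your own.
val spark = SparkSession.builder()
  .appName("arctic-demo")
  .config("arctic.catalog.url", "thrift://ams-host:1260/catalog_name")
  .config("arctic.resolve-all-identifier", "true")
  .config("spark.sql.extensions", "com.netease.arctic.spark.ArcticSparkSessionExtensions")
  .getOrCreate()
```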

enhancement
spark

```scala
df.write.format("arctic").save("xxxx")
df.write.format("arctic").mode("overwrite").save("xxx")
val df = spark.read.format("arctic").load("xxx")
```

Keep the same behavior as Spark 3.

enhancement
spark

Support the v1 DataFrame API:

```scala
df.write.format("arctic").save("xxxx")
df.write.format("arctic").mode("overwrite").save("xxx")
val df = spark.read.format("arctic").load("xxx")
```

Support the v2 DataFrame API:

```scala
df.writeTo("xxx").append()
df.writeTo("xxx").overwritePartitions()
val df = spark.read.table("xxx")
```

enhancement
spark

Support INSERT INTO SQL and DataFrame append for keyed tables. The insert into and append actions for a keyed table append to the keyed table's **change store**.
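A minimal sketch of the two write paths, assuming a Spark 3 session `spark` with the Arctic extensions enabled; the catalog, table, and schema names are hypothetical:

```scala
import spark.implicits._

// SQL path: INSERT INTO appends to the keyed table's change store.
spark.sql("INSERT INTO arctic_catalog.db.keyed_table VALUES (1, 'a')")

// DataFrame path: a v2 append should land in the same change store.
val df = Seq((2, "b"), (3, "c")).toDF("id", "data")
df.writeTo("arctic_catalog.db.keyed_table").append()
```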

enhancement
spark