# chenliang

Results: 11 issues by chenliang

## What is the purpose of the pull request

The existing `KeyRangeLookupTree` implementation is a binary search tree. Although keys are shuffled before insertion, the tree may still end up unevenly distributed. This...

priority:major
writer-core
index
size:XL
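A minimal Python sketch of the problem described above (a naive `Node`/`insert` stand-in, not the actual `KeyRangeLookupTree` code): shuffling keys before inserting into an unbalanced binary search tree still leaves the depth well above the balanced optimum, and varying from run to run.

```python
import random

class Node:
    """Naive unbalanced BST node, standing in for a lookup-tree entry."""
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def depth(root):
    if root is None:
        return 0
    return 1 + max(depth(root.left), depth(root.right))

keys = list(range(1024))
depths = []
for seed in range(20):
    random.seed(seed)
    shuffled = keys[:]
    random.shuffle(shuffled)
    root = None
    for k in shuffled:
        root = insert(root, k)
    depths.append(depth(root))

# A balanced tree over 1024 keys has depth 11 (ceil(log2(1025))); shuffled
# insertion into an unbalanced BST typically lands around 2-3x that.
print(min(depths), max(depths))
```

This is why a self-balancing structure (or a sorted array with binary search) gives more predictable lookup cost than shuffled insertion alone.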

Double-clicking the AppImage does nothing, and installing the rpm fails: `[root@myserver soft]# rpm -ivh electron-ssr-0.2.6.rpm` reports `error: Failed dependencies: libappindicator is needed by electron-ssr-0.2.6-119.x86_64`.

When the metadata file no longer exists, the drop table operation fails. This PR aims to support deleting the table even if the table was corrupted.

hive
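A hedged Python sketch of the fix's idea (the `drop_table`/`load_metadata` helpers and the in-memory catalog are hypothetical stand-ins, not Iceberg's HiveCatalog code): treat a missing metadata file as a corrupted table and remove the catalog entry anyway, instead of letting the load failure abort the drop.

```python
class MetadataMissingError(Exception):
    """Stand-in for the exception raised when the metadata JSON is gone."""

def load_metadata(path, files):
    # `files` is a dict simulating the filesystem.
    if path not in files:
        raise MetadataMissingError(path)
    return files[path]

def drop_table(name, catalog, files):
    """Drop `name` from `catalog` even if its metadata file is missing."""
    entry = catalog.get(name)
    if entry is None:
        return False
    try:
        load_metadata(entry["metadata_location"], files)
    except MetadataMissingError:
        # Before the fix, this exception propagated and the drop failed.
        # Now we fall through so the catalog entry is still removed.
        pass
    del catalog[name]
    return True

catalog = {"db.t": {"metadata_location": "s3://bucket/meta.json"}}
files = {}  # the metadata file no longer exists
print(drop_table("db.t", catalog, files))
```

The design choice is that data-file cleanup is best-effort for a corrupted table, but the catalog must never be left with an undropable entry.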

Fixes https://github.com/apache/iceberg/issues/5163. Adds an optional 'force' option when registering an existing table. Usage: `CALL mycatalogname.system.register_table('mycatalogname.mydb.mytablename','xx://xxxxx/metadata/xx.metadata.json','force')`

API
spark
core
build
hive
AWS
NESSIE
DELL

This PR is intended for quick discussion here; for reproduction steps, see https://issues.apache.org/jira/browse/SPARK-40320. When an Executor plugin fails to initialize, the Executor shows as active but never accepts tasks, just...

CORE
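A minimal Python sketch of the failure mode (all names here are hypothetical; this is not Spark's Executor code): if a plugin's init exception leaves the executor registered but unable to run tasks, it looks "active" forever; marking it dead on init failure lets the scheduler replace it.

```python
class PluginInitError(Exception):
    pass

class Executor:
    def __init__(self, plugins, fail_fast):
        self.alive = True
        self.usable = True
        for plugin in plugins:
            try:
                plugin()  # plugin initialization hook
            except PluginInitError:
                if fail_fast:
                    # Desired behavior: mark the executor dead so the
                    # scheduler stops offering it tasks.
                    self.alive = False
                # Either way, this executor can never actually run tasks.
                self.usable = False

def bad_plugin():
    raise PluginInitError("boom")

stuck = Executor([bad_plugin], fail_fast=False)  # "active" but accepts nothing
fixed = Executor([bad_plugin], fail_fast=True)   # fails fast, can be replaced
```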

### What changes were proposed in this pull request?

Add a config `spark.broadcast.cleanAfterExecution.enabled` (default: false) to clean up the broadcast data generated when a SQL execution ends (only suitable for long-running...

SQL
CORE
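A hedged Python sketch of the proposed behavior (the config key comes from the PR; the `BroadcastManager` class below is a simplified stand-in, not Spark's implementation): broadcasts created during an execution are tracked per execution id and eagerly unpersisted when that execution ends, instead of waiting for GC-driven cleanup.

```python
class BroadcastManager:
    def __init__(self, conf):
        self.clean_after_execution = conf.get(
            "spark.broadcast.cleanAfterExecution.enabled", "false") == "true"
        self.live = {}          # broadcast id -> data
        self.by_execution = {}  # execution id -> [broadcast ids]

    def broadcast(self, execution_id, bc_id, data):
        self.live[bc_id] = data
        self.by_execution.setdefault(execution_id, []).append(bc_id)

    def on_execution_end(self, execution_id):
        # Only clean eagerly when the flag is on; otherwise rely on
        # GC-driven cleanup, which can lag in long-running applications.
        if self.clean_after_execution:
            for bc_id in self.by_execution.pop(execution_id, []):
                self.live.pop(bc_id, None)

mgr = BroadcastManager({"spark.broadcast.cleanAfterExecution.enabled": "true"})
mgr.broadcast("exec-1", 0, b"lookup table")
mgr.on_execution_end("exec-1")
print(len(mgr.live))  # 0: broadcast freed as soon as the execution ends
```

Keeping the flag off by default preserves the old behavior, since eager cleanup would hurt workloads that reuse broadcasts across executions.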

This PR adds a filter pushdown API for `endsWith` (`like '%x'`) and `contains` (`like '%x%'`). Before this PR, Iceberg only supported pushing down the `startsWith` filter. Spark's Parquet reader supports pushing down `endsWith` and...

API
parquet
core
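A small Python sketch of the predicate semantics involved (plain functions, not the Iceberg or Spark API), plus a hypothetical `can_skip_row_group` helper showing why pushdown pays off: once the predicate reaches the file reader, a whole row group can be skipped when no dictionary-page value satisfies it.

```python
def starts_with(value, prefix):   # like 'x%'
    return value.startswith(prefix)

def ends_with(value, suffix):     # like '%x'
    return value.endswith(suffix)

def contains(value, infix):       # like '%x%'
    return infix in value

def can_skip_row_group(dictionary, predicate):
    """With the predicate pushed down to the reader, skip the whole row
    group when no value in its dictionary page satisfies the predicate."""
    return not any(predicate(v) for v in dictionary)

dictionary = {"a.orc", "b.orc", "c.orc"}
print(can_skip_row_group(dictionary, lambda v: ends_with(v, ".parquet")))  # True
print(can_skip_row_group(dictionary, lambda v: contains(v, "b")))          # False
```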

If we can make an accurate judgment from the min/max statistics, we no longer need to load the dictionary from the filesystem and compare entries one by one. Similarly, the Bloom filter needs to load...
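A hedged Python sketch of that short-circuit (the helper names are hypothetical): for an equality predicate, the min/max statistics can prove a value is absent without any I/O; the dictionary (or Bloom filter) is only loaded when the statistics are inconclusive.

```python
def might_contain(stats, value, load_dictionary):
    """stats: (min, max) for the column chunk.
    load_dictionary: callable standing in for an expensive filesystem read."""
    lo, hi = stats
    if value < lo or value > hi:
        # Conclusive: the value cannot be present, skip the I/O entirely.
        return False, "pruned by min/max, dictionary never loaded"
    # Inconclusive: pay one dictionary load and scan it.
    return value in load_dictionary(), "dictionary consulted"

calls = []
def load_dictionary():
    calls.append(1)  # track how often we actually pay the I/O cost
    return {3, 7, 42}

print(might_contain((3, 42), 100, load_dictionary))  # pruned, no load
print(might_contain((3, 42), 7, load_dictionary))    # inconclusive, loaded
print(len(calls))
```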

### What changes were proposed in this pull request?

A new session-level config, `spark.sql.execution.coresLimitNumber`, to configure the maximum number of cores that can be used by SQL.

### Why are...

SQL
CORE
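A minimal Python sketch of the capping logic (the config key comes from the PR; the function and its unset-means-unlimited convention are assumptions for illustration): the cores granted to a SQL execution are the requested amount clamped to the session-level limit.

```python
def cores_for_execution(requested, conf):
    """Cap the cores granted to a SQL execution at the session-level limit."""
    limit = int(conf.get("spark.sql.execution.coresLimitNumber", 0))
    if limit <= 0:  # assumed convention: 0/unset means no cap
        return requested
    return min(requested, limit)

print(cores_for_execution(64, {}))  # 64: unlimited by default
print(cores_for_execution(64, {"spark.sql.execution.coresLimitNumber": "16"}))  # 16
```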

### Why are the changes needed?

The following SQL statements produce a wrong column lineage result: `create table table0(a int, b string, c string)` `create table table1(a int, b...`

module:spark
module:extensions