gm

Results: 9 comments of gm

> @saimigo : NessieCatalog cannot understand S3 configurations if they are set in properties.
>
> Can you try passing them in Hadoop configuration[1]? You are already creating a fresh...
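A minimal sketch of what "passing them in Hadoop configuration" could look like. The `fs.s3a.*` keys are standard Hadoop S3A properties; the credential and endpoint values are placeholders, and the comment about `setConf` describes the usual Iceberg catalog wiring rather than a confirmed NessieCatalog API for every version:

```java
import java.util.HashMap;
import java.util.Map;

public class HadoopS3Conf {
    // Standard Hadoop S3A property keys; values here are placeholders.
    public static Map<String, String> s3aProperties() {
        Map<String, String> conf = new HashMap<>();
        conf.put("fs.s3a.access.key", "<access-key>");
        conf.put("fs.s3a.secret.key", "<secret-key>");
        conf.put("fs.s3a.endpoint", "https://mys3.com");
        return conf;
    }

    public static void main(String[] args) {
        // In a real job these entries would be copied into an
        // org.apache.hadoop.conf.Configuration and handed to the catalog
        // (e.g. via setConf) before initialize() is called.
        s3aProperties().forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```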

> The method that I suggested above is not working?

@ajantha-bhat Creating the database and table is OK; by default, Flink SQL loads the S3 access key/secret key/region from the awscli profile. ![Screenshot from 2022-06-13...

> # code
>
> ```
> Map properties = new HashMap();
> properties.put("type", "iceberg");
> // properties.put("catalog-impl", "org.apache.iceberg.nessie.NessieCatalog");
> properties.put("io-impl", "org.apache.iceberg.aws.s3.S3FileIO");
> properties.put("uri", "http://nessie:19120/api/v1");
> properties.put("ref", "main");
> properties.put("s3.endpoint", "https://mys3.com");
> ...
> ```

> Same here! It seems that within the Spark context, a different DNS resolver is used. We currently did an ugly workaround by giving the container a fixed IP and...

> Can you try setting:
>
> ```
> s3.path-style-access=true
> ```
>
> This can be done on a catalog level:
>
> ```shell
> spark.sql.extensions org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
> spark.sql.catalog.demo...
> ```
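A hypothetical sketch of what the full catalog-level configuration could look like in `spark-defaults.conf`. The catalog name `demo`, the REST URI, and the endpoint are placeholders (the quoted snippet is truncated before those values); the property keys themselves are the documented Iceberg Spark catalog options:

```shell
spark.sql.extensions                          org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions
spark.sql.catalog.demo                        org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.demo.type                   rest
spark.sql.catalog.demo.uri                    http://rest:8181
spark.sql.catalog.demo.io-impl                org.apache.iceberg.aws.s3.S3FileIO
spark.sql.catalog.demo.s3.endpoint            http://minio:9000
spark.sql.catalog.demo.s3.path-style-access   true
```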

> > Is there anyone who solved this issue?
>
> Try to add env variable to the rest container: `CATALOG_S3_PATH__STYLE__ACCESS: true`
>
> It'll be [converted](https://github.com/tabular-io/iceberg-rest-image/blob/2e4d04184e6db38f23a98498151aa18bb6c148ab/src/main/java/org/apache/iceberg/rest/RESTCatalogServer.java#L54) to `s3.path-style-access=true` when...
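The env-var-to-property conversion can be sketched as below. This is a paraphrase (an assumption, not the verbatim source) of the linked `RESTCatalogServer` logic: strip the `CATALOG_` prefix, turn `__` into `-`, turn the remaining `_` into `.`, and lowercase:

```java
public class EnvToCatalogProperty {
    // Assumed conversion rule, paraphrasing the iceberg-rest-image code:
    // drop the CATALOG_ prefix, "__" -> "-", "_" -> ".", then lowercase.
    public static String toProperty(String envVar) {
        if (!envVar.startsWith("CATALOG_")) {
            throw new IllegalArgumentException("not a CATALOG_ variable: " + envVar);
        }
        return envVar.substring("CATALOG_".length())
                .replace("__", "-")
                .replace("_", ".")
                .toLowerCase();
    }

    public static void main(String[] args) {
        // CATALOG_S3_PATH__STYLE__ACCESS -> s3.path-style-access
        System.out.println(toProperty("CATALOG_S3_PATH__STYLE__ACCESS"));
    }
}
```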

> > > Is there anyone who solved this issue?
> >
> > Try to add env variable to the rest container: `CATALOG_S3_PATH__STYLE__ACCESS:...

> To access the cluster you may have to configure the API gateway (Envoy/ingress, for instance). Add the rules to make it accessible.
>
> Or if you...

@joejztang In my scenario, the Istio AuthorizationPolicy is configured to allow all traffic within the cluster, so it is not limited to the service accounts (sa) in the loshu namespace....