chenbodeng719
@dpadhiar You can click resubmit on the workflow multiple times in a short window; the error will happen sometimes. I used the namespace-install.yaml in the argo namespace.
@dpadhiar Anyway, it looks like passing data through Argo's own artifact storage is not stable: sometimes it can't resolve the artifact source. So I changed my approach...
@clumsy456 Hi, I used the approach you mentioned, but I still get the error sometimes.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
  - name: artifact-example
    steps:
    ...
```
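For context, my template follows the upstream Argo artifact-passing example. A minimal complete version (container images and file paths are taken from that example, not necessarily my exact spec) looks like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-passing-
spec:
  entrypoint: artifact-example
  templates:
  - name: artifact-example
    steps:
    - - name: generate-artifact       # step 1: produce the artifact
        template: generate
    - - name: consume-artifact        # step 2: consume it via an input artifact
        template: consume
        arguments:
          artifacts:
          - name: message
            from: "{{steps.generate-artifact.outputs.artifacts.hello-art}}"
  - name: generate
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["echo hello world > /tmp/hello_world.txt"]
    outputs:
      artifacts:
      - name: hello-art
        path: /tmp/hello_world.txt    # uploaded to the configured artifact repository
  - name: consume
    inputs:
      artifacts:
      - name: message
        path: /tmp/message            # downloaded from the artifact repository
    container:
      image: alpine:latest
      command: [sh, -c]
      args: ["cat /tmp/message"]
```

The intermittent failure is in resolving `{{steps.generate-artifact.outputs.artifacts.hello-art}}` on resubmit, not in the template structure itself.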
I have the same problem. @Limess It seems that Spark can't insert overwrite with bucket index when the data is large.
@nsivabalan I have the same issue. Below is my Flink Hudi config. I use pyspark to read the table, but I get duplicate data.

```
# flink write
...
```
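As a sanity check on the read side, this is the kind of deduplication I would expect Hudi's upsert/precombine to have already done. A small standalone sketch (the field names `record_key` and `ts` are assumptions, not my real schema):

```python
from collections import Counter

# Hypothetical rows read back from the Hudi table; in a correct upsert,
# no record key should appear twice.
rows = [
    {"record_key": "a", "ts": 1, "val": 10},
    {"record_key": "a", "ts": 2, "val": 11},  # duplicate key: the ts=1 row should have been replaced
    {"record_key": "b", "ts": 1, "val": 20},
]

def duplicate_keys(rows):
    """Return record keys that appear more than once (should be empty after a correct upsert)."""
    counts = Counter(r["record_key"] for r in rows)
    return sorted(k for k, c in counts.items() if c > 1)

def dedupe_latest(rows):
    """Keep only the row with the highest precombine field (ts) per key,
    mimicking what Hudi's precombine is supposed to do on write."""
    latest = {}
    for r in rows:
        k = r["record_key"]
        if k not in latest or r["ts"] > latest[k]["ts"]:
            latest[k] = r
    return sorted(latest.values(), key=lambda r: r["record_key"])

print(duplicate_keys(rows))  # ['a'] indicates duplicates made it into the table
print(dedupe_latest(rows))
```

Running the same duplicate-key check with a `groupBy` on the real table is how I confirmed the duplicates.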
@LucaCanali

> - Which version of HBase are you using?

2.4.8

> - Did you add the 2 jars listed in the note to $HBASE_HOME/lib?

I'm using EMR HBase; $HBASE_HOME/lib is /usr/lib/hbase/lib....
> On the client side:
>
> * which version of Spark do you use?
> * do you run it with `--jars $JAR1,$JAR2 --packages org.apache.hbase:hbase-shaded-mapreduce:2.4.9`?

- which version...
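For reference, this is the shape of the launch command I use (the two jar paths are placeholders, not my actual paths):

```shell
# Hypothetical spark-shell invocation; JAR1/JAR2 stand in for the two
# connector jars from the note, and --packages pulls the shaded mapreduce artifact.
spark-shell \
  --jars /path/to/JAR1.jar,/path/to/JAR2.jar \
  --packages org.apache.hbase:hbase-shaded-mapreduce:2.4.9
```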
> Does it work from the spark-shell?

Same error.
My error is different from the one in the md (`java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/spark/datasources/JavaBytesEncoder`). Maybe it's a server-side config error?
> Can you try using HBase 2.3.x?

There is no 2.3.x HBase in any AWS EMR release.