zhihanz
cc @Xuanwo @PsiACE
> Perhaps we could add some GitHub robots/apps to handle tasks like auto assign, auto add label, check title, etc. I found two related examples:
>
> * Install a GitHub App directly, e.g. the probot-based [auto assign](https://probot.github.io/apps/auto-assign/) (this one assigns reviewers randomly).
> * Alternatively, set up a robot, e.g. [tichi](https://github.com/ti-community-infra/tichi), which is based on [prow](https://github.com/kubernetes/test-infra/tree/master/prow); see the [prow deployment guide](https://www.servicemesher.com/blog/prow-quick-start-guide/).

how about using GitHub Actions just like...
> @ZhiHanZ thx, we will take a look at github actions.

Great, I am maintaining a GitHub Actions-based CI pipeline for https://github.com/datafuselabs/databend, and I am willing to contribute to issues in this...
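To make the GitHub Actions option concrete, here is a minimal workflow sketch that auto-assigns a newly opened pull request to its author using the official `actions/github-script` action; the workflow name and trigger choice are illustrative, not an existing workflow in the repo:

```yaml
# .github/workflows/auto-assign.yml (hypothetical file name)
name: Auto Assign
on:
  pull_request_target:
    types: [opened]
jobs:
  assign:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            // Assign the PR to its author so every PR has an owner.
            await github.rest.issues.addAssignees({
              owner: context.repo.owner,
              repo: context.repo.repo,
              issue_number: context.issue.number,
              assignees: [context.payload.pull_request.user.login],
            });
```

The same `github-script` step could be extended to add labels or check the PR title, which covers most of what the probot/prow bots above do, without running extra infrastructure.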
| Feature       | Snowflake | Databend |
|---------------|-----------|----------|
| Kafka Connect | Yes       | [Yes](https://github.com/databendcloud/databend-kafka-connect) |
| Flink CDC     | No        | [Yes](https://github.com/databendcloud/flink-connector-databend) |
| dbt           | Yes       | [Yes](https://github.com/databendcloud/dbt-databend) |

...
Any plans for SQL transactions and stored procedures?
Besides, I think we could also support query queueing, automatic warehouse scaling based on the pending queue, and a separate coordinator component for dispatching physical plans to warehouse compute nodes.
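As a rough illustration of the queue-based scaling idea, the policy below picks a warehouse node count from the pending and running query counts. The function name, the per-node concurrency limit, and the min/max bounds are all hypothetical knobs, not anything Databend exposes today:

```python
import math


def desired_cluster_size(pending_queries: int,
                         running_queries: int,
                         max_concurrency_per_node: int = 8,
                         min_nodes: int = 1,
                         max_nodes: int = 16) -> int:
    """Sketch of a scaling policy: size the warehouse so that all
    running plus pending queries fit within the per-node concurrency
    limit, clamped to configured bounds."""
    total = pending_queries + running_queries
    needed = math.ceil(total / max_concurrency_per_node)
    return max(min_nodes, min(needed, max_nodes))
```

A coordinator could evaluate this on every queue change and ask the orchestrator (e.g. Kubernetes) to scale the compute node set accordingly.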
I think it could be a derived issue from implementing a native ODBC driver for Databend.
And it primarily serves to estimate the proper warehouse size for a certain query category; if a customer finds that the spilled bytes are pretty high, they could consider upgrading the warehouse...
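The sizing heuristic above can be sketched as a simple check on the spill ratio. The function and the 20% threshold are illustrative assumptions for the sake of the example, not an actual Databend metric API:

```python
def recommend_upgrade(spilled_bytes: int,
                      scanned_bytes: int,
                      spill_ratio_threshold: float = 0.2) -> bool:
    """Suggest a larger warehouse when a large fraction of the data
    scanned by a query category spilled to disk (threshold is a
    made-up default for illustration)."""
    if scanned_bytes == 0:
        return False
    return spilled_bytes / scanned_bytes > spill_ratio_threshold
```

In practice the spill statistics would come from per-query execution profiles aggregated per query category.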
It makes sense. Basically, in Kubernetes, if we keep any state (e.g. session state carried in headers) while running Databend as a Deployment for the onboarding app, query forwarding has to be handled by several workarounds. Either...
I think this issue would be suitable as a good first issue? cc @flaneur2020