case-k
The Dataflow timestamp format is wrong: use `HH` instead of `hh`. Please read this documentation:
```
h   clock-hour-of-am-pm (1-12)   number   12
H   hour-of-day (0-23)           number   0
```
https://docs.oracle.com/javase/jp/8/docs/api/java/time/format/DateTimeFormatter.html Please look at...
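To see the difference between the two patterns concretely, here is a minimal `DateTimeFormatter` sketch (the class name is illustrative):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class HourPatternDemo {
    public static void main(String[] args) {
        // 13:05, i.e. 1:05 PM
        LocalDateTime t = LocalDateTime.of(2023, 1, 1, 13, 5, 0);
        // "hh" is clock-hour-of-am-pm (1-12): 13:05 formats as "01:05"
        System.out.println(DateTimeFormatter.ofPattern("hh:mm").format(t));
        // "HH" is hour-of-day (0-23): 13:05 formats as "13:05"
        System.out.println(DateTimeFormatter.ofPattern("HH:mm").format(t));
    }
}
```

With `hh` the afternoon hour silently wraps to the 1-12 range, which is why timestamps after noon come out wrong unless `HH` is used.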
- Refactor the JDBC IO template. The JDBC IO date and timestamp ETL is specialized for MySQL; some date-type schemas, such as SQL Server's, should not be converted. So refactor the existing JDBC...
Add a PubsubToPubsubMaskedEventKeyValue template that masks secret values based on Pub/Sub secret attribute keys.
```
mvn -Pdataflow-runner compile exec:java \
  -Dexec.mainClass=com.google.cloud.teleport.templates.PubsubToPubsubMaskedEventKeyValue \
  -Dexec.args="--project='' --tempLocation='' \
  --templateLocation=gs://beam-test-kinesis/templates/PubsubToPubsubMaskedEventKeyValue \
  --experiments=enable_stackdriver_agent_metrics \
  --enableStreamingEngine \
  --runner=DataflowRunner \
...
```
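The masking itself can be sketched as a pure function over the message attribute map. The names below (`mask`, `secretKeys`, the `"****"` placeholder) are illustrative assumptions, not the template's actual API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class MaskSecretAttributes {
    // Replace the value of every attribute whose key is in the secret-key set.
    static Map<String, String> mask(Map<String, String> attrs, Set<String> secretKeys) {
        Map<String, String> out = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            out.put(e.getKey(), secretKeys.contains(e.getKey()) ? "****" : e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new LinkedHashMap<>();
        attrs.put("userId", "42");
        attrs.put("apiKey", "s3cr3t");
        // Only the value under the secret key is replaced
        System.out.println(mask(attrs, Set.of("apiKey")));
    }
}
```

In the template this function would run per message inside a `DoFn` before re-publishing to the output topic.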
Add a DynamicDestinations option to determine the destination table based on the Pub/Sub message value. Using this option reduces Dataflow instance cost.
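A minimal sketch of the routing idea behind DynamicDestinations: pick a table name from the message content so one pipeline can fan out to many tables instead of running one pipeline per table. The attribute name `eventType` and the table-name prefix are assumptions for illustration, not part of the template:

```java
import java.util.Map;

public class TableRouter {
    // Choose a BigQuery table name from a message attribute,
    // falling back to a default table when the attribute is absent.
    static String route(Map<String, String> attributes, String defaultTable) {
        String eventType = attributes.get("eventType");
        return eventType == null ? defaultTable : "events_" + eventType;
    }

    public static void main(String[] args) {
        System.out.println(route(Map.of("eventType", "click"), "events_default"));
        System.out.println(route(Map.of(), "events_default"));
    }
}
```

In Beam this logic would live in the `getDestination`/`getTable` methods of a `DynamicDestinations` subclass passed to `BigQueryIO.write().to(...)`.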
Use the idAttribute method to remove message duplication. Dataflow removes duplicate messages automatically based on the generated Pub/Sub message ID (at-least-once policy), but messages will be duplicated if the publisher sends the same...
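The effect of `withIdAttribute` can be sketched as a first-wins filter over (id, payload) pairs; this is illustrative only, not how Dataflow implements deduplication internally:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DedupById {
    // Keep only the first message seen for each publisher-supplied ID attribute.
    // Each message is represented as a [id, payload] pair for brevity.
    static List<String> dedup(List<String[]> messages) {
        Set<String> seen = new HashSet<>();
        List<String> out = new ArrayList<>();
        for (String[] m : messages) {
            if (seen.add(m[0])) {  // add() is false if the ID was already seen
                out.add(m[1]);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<String[]> msgs = List.of(
            new String[]{"id-1", "first"},
            new String[]{"id-1", "retry of first"},  // publisher resend, same ID
            new String[]{"id-2", "second"});
        System.out.println(dedup(msgs));
    }
}
```

This is what `PubsubIO.readMessages().withIdAttribute("...")` enables: duplicates that share the publisher-set attribute are dropped even though Pub/Sub assigned them different internal message IDs.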
Add Cloud Pub/Sub to AWS Kinesis and Kinesis to BigQuery IO for use in DataflowTemplates.
https://github.com/apache/beam/blob/243128a8fc52798e1b58b0cf1a271d95ee7aa241/sdks/java/io/kinesis/src/main/java/org/apache/beam/sdk/io/kinesis/KinesisIO.java

PubsubToKinesis
```
mvn -Pdataflow-runner compile exec:java \
  -Dexec.mainClass=com.google.cloud.teleport.templates.PubsubToKinesis \
  -Dexec.args="--project=${project_id} \
  --gcpTempLocation='' \
  --templateLocation=''
...
```
I would like to get a secret value using this databricks-cli repository, but currently there is no such method or API. https://github.com/databricks/databricks-cli/blob/main/databricks_cli/secrets/api.py I have checked the API doc and it seems...
### Describe the feature
Using Spark UDFs from dbt would be helpful. As discussed in dbt-spark, something like using...
- [x] I understand that this repository is auto-generated and my pull request may not be merged

## Changes being requested
Fix these issues:
https://github.com/openai/openai-python/issues/1082
https://github.com/openai/openai-python/issues/1196

Sample code to reproduce...
resolves #

### Problem
Fix the same issue that was solved by dbt-databricks in https://github.com/databricks/dbt-databricks/pull/580. When processing incrementally, newly added columns are ignored under the ignore setting. However, when a SQL model...