Problem with oracle-connector reading a read-only Oracle standby database in real time
Our current situation is that the company only provides a read-only Oracle standby environment for data synchronization. Since the CDC connector needs to create its metadata tables in the connected Oracle database during deployment, this cannot be done on a read-only standby. In this situation, how can we use the oracle-connector to complete data synchronization?
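For context, a sketch of the kind of setup the Debezium-based Oracle connector's documentation calls for on the source database, which illustrates why a read-only standby cannot host it. The user name `flinkuser` and the connection string are placeholders; the exact grant list is in the Debezium Oracle connector documentation.

```shell
# Hedged sketch: typical LogMiner-oriented setup steps, none of which a
# read-only standby can execute. "flinkuser" is a placeholder user.
sqlplus sys@ORCLCDB as sysdba <<'SQL'
-- Redo logs must carry enough detail for LogMiner to reconstruct rows
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
-- The connector user needs privileges that imply write access,
-- e.g. creating its flush/metadata table and running LogMiner
GRANT CREATE SESSION, CREATE TABLE TO flinkuser;
GRANT EXECUTE ON DBMS_LOGMNR TO flinkuser;
GRANT EXECUTE ON DBMS_LOGMNR_D TO flinkuser;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO flinkuser;
SQL
```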
DBZ-3866 describes a similar issue.
-
The standby database cannot use LogMiner. Core cause: DBMS_LOGMNR_D and DBMS_LOGMNR need write permission to populate the V$LOGMNR_CONTENTS view. An active/standby deployment is meant for disaster recovery, not for LogMiner-based data capture and analysis.
-
The official recommendation is to deploy a separate downstream mining database for data analysis, rather than mining on the primary/standby pair. This is the official document: https://docs.oracle.com/goldengate/c1230/gg-winux/GGODB/configuring-downstream-mining-database.htm#GGODB-GUID-E265AB7E-6255-496E-896F-32E943C362D9
-
The official LogMiner documentation: https://docs.oracle.com/cd/B19306_01/server.102/b14215/logminer.htm
@wangdabin1216 I hope this can help you
Thank you very much. Deploying a separate mining database for data analysis is not a good option for us, because our data analysis may later be built on Flink or Spark. At this stage we only need to synchronize the upstream data in real time, and the subsequent analysis requirements are still undetermined. A separate mining database would double the capacity of the original Oracle database, and we do not do data analysis based on Oracle. @molsionmo
Have you solved this problem? I am also stuck on the read-only standby issue...
No solution at the moment; you can only use OGG. @afsun1996
Oracle 19c supports automatically redirecting DML issued on the standby to the primary. Could this solve the problem of flink-cdc creating its metadata tables?
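The 19c feature referred to here is Active Data Guard DML redirection (the `ADG_REDIRECT_DML` parameter). A minimal sketch of how it is enabled, assuming a configured Active Data Guard standby; whether redirection actually covers the connector's metadata-table writes would need to be verified:

```shell
# Hedged sketch: enabling Oracle 19c ADG DML redirection on a standby.
sqlplus / as sysdba <<'SQL'
-- Per-session: redirect DML issued on this standby to the primary
ALTER SESSION ENABLE ADG_REDIRECT_DML;
-- Instance-wide alternative (set on both primary and standby)
ALTER SYSTEM SET ADG_REDIRECT_DML = TRUE SCOPE = BOTH;
SQL
```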
Has 2.3 solved the standby database problem?
Considering collaboration with developers around the world, please re-create your issue in English on Apache Jira under project Flink with component tag Flink CDC. Thank you!
cc @GOODBOY008
Closing this issue because it was created before version 2.3.0 (2022-11-10). Please try the latest version of Flink CDC to see if the issue has been resolved. If the issue is still valid, kindly report it on Apache Jira under project Flink with component tag Flink CDC. Thank you!