[Bug][InnoDB] Setting up InnoDB cluster as Devlake DB
Search before asking
- [X] I had searched in the issues and found no similar issues.
What happened
Hi all, we tried configuring an InnoDB cluster as the DevLake DB and ran into an issue: a few tables do not have primary keys, while Group Replication requires every table to have one.
Error shown -
ERROR: The following tables do not have a Primary Key or equivalent column:
lake._devlake_locking_stub, lake._tool_teambition_connections, lake._tool_bitbucket_pipeline_steps
Group Replication requires tables to use InnoDB and have a PRIMARY KEY or PRIMARY KEY Equivalent (non-null unique key). Tables that do not follow these requirements will be readable but not updateable when used with Group Replication. If your applications make updates (INSERT, UPDATE or DELETE) to these tables, ensure they use the InnoDB storage engine and have a PRIMARY KEY or PRIMARY KEY Equivalent.
If you can't change the tables structure to include an extra visible key to be used as PRIMARY KEY, you can make use of the INVISIBLE COLUMN feature available since 8.0.23: https://dev.mysql.com/doc/refman/8.0/en/invisible-columns.html
Checking instance configuration...
{
"status": "error"
}
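The same check that Group Replication performs can be reproduced directly against the database. The query below is a sketch against `information_schema` (schema name `lake` taken from the error above); note it treats any UNIQUE constraint as sufficient, whereas Group Replication additionally requires the unique key to be non-null.

```sql
-- List base tables in the `lake` schema that have neither a PRIMARY KEY
-- nor a UNIQUE constraint (an approximation of the Group Replication check).
SELECT t.table_schema, t.table_name
FROM information_schema.tables t
LEFT JOIN information_schema.table_constraints c
       ON c.table_schema = t.table_schema
      AND c.table_name   = t.table_name
      AND c.constraint_type IN ('PRIMARY KEY', 'UNIQUE')
WHERE t.table_schema = 'lake'
  AND t.table_type = 'BASE TABLE'
GROUP BY t.table_schema, t.table_name
HAVING COUNT(c.constraint_name) = 0;
```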
What do you expect to happen
The InnoDB cluster should have accepted the DevLake MySQL instance.
How to reproduce
- Deploy DevLake with a single MySQL instance.
- Deploy mysql-operator using its Helm chart. (link)
- Deploy the InnoDB cluster using its Helm chart. (link)
- Validate the MySQL instance from step 1 for the InnoDB cluster (used `dba.checkInstanceConfiguration('Mysql instance')`).
Anything else
We can work around this for now by adding primary keys to these tables, but it may grow into a bigger problem once DevLake holds all our data and we try to upgrade DevLake to a newer version.
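As a sketch of that workaround, the invisible-column feature mentioned in the error message (MySQL >= 8.0.23) lets you add a primary key without changing what existing queries see. The table name below is taken from the error output; the `my_row_id` column name is an assumption:

```sql
-- Assumed column name. INVISIBLE keeps the column out of SELECT * results,
-- so existing DevLake queries are unaffected (requires MySQL >= 8.0.23).
ALTER TABLE lake._devlake_locking_stub
  ADD COLUMN my_row_id BIGINT UNSIGNED AUTO_INCREMENT INVISIBLE PRIMARY KEY FIRST;
```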
Version
v0.19.0-beta2
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
Code of Conduct
- [X] I agree to follow this project's Code of Conduct
@pratiyush05 Confirmed, it's a bug. I'm working on it, and it will be fixed in v0.20.0-beta*. Thanks for your feedback!
@pratiyush05 I have fixed the missing primary keys in these tables. Can you describe your operations? When did you hit this problem (while bootstrapping DevLake, or while scaling your database)?
Hi @d4x1, we hit this while setting up DevLake for the first time with an InnoDB cluster as its DB. The Group Replication feature of InnoDB cluster needs primary keys to function. We faced these compatibility issues -
- We tried to add the MySQL instance to the InnoDB cluster. The instance already had tables and old data, and Group Replication failed on it.
- We then tried deploying InnoDB first and letting DevLake create its tables. In that case, DevLake could not create a few of its tables (the cicd_deployment_commits table among others). We followed the InnoDB installation along with the MySQL Operator mentioned here.
@pratiyush05 Regarding situation 2: cicd_deployment_commits has primary keys, so theoretically it should be created successfully.
Can you paste some useful logs to help us debug this issue?
I'm afraid my fix cannot help if DevLake cannot initialize its tables in the InnoDB cluster, but it should work for situation 1.
@d4x1 I re-performed step 2. I don't see any warning or error while performing it. Steps -
- Installed MySQL InnoDB along with the MySQL operator.
- Installed DevLake v0.19.0-beta2 with the InnoDB service.
I captured a few logs -
Migration started on installing DevLake. Logs of the lake container -
After completion, a few domain-layer tables were missing. Only these tables were created -
Cannot open DevLake in the browser; it shows that a migration script was detected. The following logs were generated -
time="2023-11-24 06:47:19" level=info msg="path /plugins/github/connections will continue"
time="2023-11-24 06:47:19" level=error msg="HTTP 428 error\n\tcaused by: \n\tNew migration scripts detected. Database migration is required to launch DevLake.\n\tWARNING: Performing migration may wipe collected data for consistency and re-collecting data may be required.\n\tTo proceed, please send a request to <config-ui-endpoint>/api/proceed-db-migration (or <devlake-endpoint>/proceed-db-migration).\n\tAlternatively, you may downgrade back to the previous DevLake version.\n\t (428)"
@pratiyush05 The DB migration is triggered from the Config UI; did you trigger it manually? (Just to make sure.)
No, I didn't trigger it. It started automatically.
@pratiyush05 OK. I tested with a new InnoDB database (not a cluster), and all tables were initialized without any manual trigger. I think there is something wrong with your DB cluster. Does DevLake connect to the primary instance (read and write)?
Yes, DevLake connects to the InnoDB cluster via its primary instance, but it cannot write all tables and remains stuck in migration indefinitely.
@pratiyush05 Can you share the status or a screenshot of the config-ui?
Sure, status -
status:
phase: Running
conditions:
- type: Initialized
status: 'True'
lastProbeTime: null
lastTransitionTime: '2023-11-24T09:22:06Z'
- type: Ready
status: 'True'
lastProbeTime: null
lastTransitionTime: '2023-11-24T09:22:13Z'
- type: ContainersReady
status: 'True'
lastProbeTime: null
lastTransitionTime: '2023-11-24T09:22:13Z'
- type: PodScheduled
status: 'True'
lastProbeTime: null
lastTransitionTime: '2023-11-24T09:22:06Z'
hostIP: X.X.X.X
podIP: X.X.X.X
podIPs:
- ip: X.X.X.X
startTime: '2023-11-24T09:22:06Z'
containerStatuses:
- name: config-ui
state:
running:
startedAt: '2023-11-24T09:22:13Z'
lastState: {}
ready: true
restartCount: 0
image: devlake.docker.scarf.sh/apache/devlake-config-ui:v0.19.0-beta2
imageID: >-
docker-pullable://devlake.docker.scarf.sh/apache/devlake-config-ui@sha256:15fb0426d0a81c8c69e1852241449b8279d8e138323ff7085baf0dd9bf4ee161
containerID: >-
docker://XYZ
started: true
qosClass: BestEffort
Screenshot of config-ui -
Hi, just raising another small migration issue. In parallel, we started trying another HA mode of MySQL for DevLake and came across this Bitnami MySQL chart. On using it for DevLake, we faced something strange -
2023-11-24T09:26:31.278866932Z Version: v0.19.0-beta2@
2023-11-24T09:26:31.292442173Z panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x118d198]
2023-11-24T09:26:31.292453882Z
2023-11-24T09:26:31.292457310Z goroutine 1 [running]:
2023-11-24T09:26:31.292461198Z database/sql.(*DB).Close(0x0)
DevLake is also stuck in migration here. DevLake version used - v0.19.0-beta2. I also followed the process of redeploying DevLake as mentioned here but got the same result.
If this needs to be raised as a separate issue, I will do the needful. If this InnoDB cluster issue gets resolved first, we will surely use it over this Bitnami one.
@pratiyush05
- What does the front-end interface look like? Can it be displayed normally?
- I want to see which step of the migration execution went wrong or got stuck. Can you give me a screenshot, like this:
In the browser I am seeing this -
On clicking "Proceed Database Migration" it shows -
Full log - DEvlake_pod.log
@pratiyush05 Add a custom_field6 column with type text to the _tool_tapd_stories table, then restart and perform the migration operation again.
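The suggestion above translates to a statement like the following (column name and type exactly as suggested; run it against your DevLake schema):

```sql
-- Recreates the column that the failing migration's ALTER statement
-- expects to find (assumption based on the discussion above).
ALTER TABLE _tool_tapd_stories
  ADD COLUMN custom_field6 text COLLATE utf8mb4_bin;
```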
If you still have problems after the above, you can also run these SQL statements:
```sql
INSERT INTO `_devlake_migration_history` (created_at, script_version, script_name, comment)
VALUES ('2023-11-24 07:39:01.113', 20230411000004,
        X'6D6F64696679207461706420637573746F6D206669656C64206E616D65', X'74617064');

DROP TABLE IF EXISTS `_tool_tapd_stories`;

CREATE TABLE `_tool_tapd_stories` (
  `connection_id` bigint unsigned NOT NULL,
  `id` bigint NOT NULL,
  `workitem_type_id` bigint unsigned DEFAULT NULL,
  `name` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `description` longtext COLLATE utf8mb4_bin,
  `workspace_id` bigint unsigned DEFAULT NULL,
  `creator` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `created` datetime(3) DEFAULT NULL,
  `modified` datetime(3) DEFAULT NULL,
  `status` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `owner` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `cc` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `begin` datetime(3) DEFAULT NULL,
  `due` datetime(3) DEFAULT NULL,
  `size` smallint DEFAULT NULL,
  `priority` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `developer` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `iteration_id` bigint unsigned DEFAULT NULL,
  `test_focus` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `type` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `source` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `module` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `version` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `completed` datetime(3) DEFAULT NULL,
  `category_id` bigint DEFAULT NULL,
  `path` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `parent_id` bigint unsigned DEFAULT NULL,
  `children_id` text COLLATE utf8mb4_bin,
  `ancestor_id` bigint unsigned DEFAULT NULL,
  `business_value` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `effort` float DEFAULT NULL,
  `effort_completed` float DEFAULT NULL,
  `exceed` float DEFAULT NULL,
  `remain` float DEFAULT NULL,
  `release_id` bigint unsigned DEFAULT NULL,
  `confidential` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `templated_id` bigint unsigned DEFAULT NULL,
  `created_from` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `feature` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `std_status` varchar(20) COLLATE utf8mb4_bin DEFAULT NULL,
  `std_type` varchar(20) COLLATE utf8mb4_bin DEFAULT NULL,
  `url` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `attachment_count` smallint DEFAULT NULL,
  `has_attachment` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `bug_id` bigint unsigned DEFAULT NULL,
  `follower` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `sync_type` text COLLATE utf8mb4_bin,
  `predecessor_count` smallint DEFAULT NULL,
  `is_archived` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `modifier` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `progress_manual` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `successor_count` smallint DEFAULT NULL,
  `label` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `custom_field_one` text COLLATE utf8mb4_bin, `custom_field_two` text COLLATE utf8mb4_bin,
  `custom_field_three` text COLLATE utf8mb4_bin, `custom_field_four` text COLLATE utf8mb4_bin,
  `custom_field_five` text COLLATE utf8mb4_bin, `custom_field_six` text COLLATE utf8mb4_bin,
  `custom_field_seven` text COLLATE utf8mb4_bin, `custom_field_eight` text COLLATE utf8mb4_bin,
  `custom_field_9` text COLLATE utf8mb4_bin, `custom_field_10` text COLLATE utf8mb4_bin,
  `custom_field_11` text COLLATE utf8mb4_bin, `custom_field_12` text COLLATE utf8mb4_bin,
  `custom_field_13` text COLLATE utf8mb4_bin, `custom_field_14` text COLLATE utf8mb4_bin,
  `custom_field_15` text COLLATE utf8mb4_bin, `custom_field_16` text COLLATE utf8mb4_bin,
  `custom_field_17` text COLLATE utf8mb4_bin, `custom_field_18` text COLLATE utf8mb4_bin,
  `custom_field_19` text COLLATE utf8mb4_bin, `custom_field_20` text COLLATE utf8mb4_bin,
  `custom_field_21` text COLLATE utf8mb4_bin, `custom_field_22` text COLLATE utf8mb4_bin,
  `custom_field_23` text COLLATE utf8mb4_bin, `custom_field_24` text COLLATE utf8mb4_bin,
  `custom_field_25` text COLLATE utf8mb4_bin, `custom_field_26` text COLLATE utf8mb4_bin,
  `custom_field_27` text COLLATE utf8mb4_bin, `custom_field_28` text COLLATE utf8mb4_bin,
  `custom_field_29` text COLLATE utf8mb4_bin, `custom_field_30` text COLLATE utf8mb4_bin,
  `custom_field_31` text COLLATE utf8mb4_bin, `custom_field_32` text COLLATE utf8mb4_bin,
  `custom_field_33` text COLLATE utf8mb4_bin, `custom_field_34` text COLLATE utf8mb4_bin,
  `custom_field_35` text COLLATE utf8mb4_bin, `custom_field_36` text COLLATE utf8mb4_bin,
  `custom_field_37` text COLLATE utf8mb4_bin, `custom_field_38` text COLLATE utf8mb4_bin,
  `custom_field_39` text COLLATE utf8mb4_bin, `custom_field_40` text COLLATE utf8mb4_bin,
  `custom_field_41` text COLLATE utf8mb4_bin, `custom_field_42` text COLLATE utf8mb4_bin,
  `custom_field_43` text COLLATE utf8mb4_bin, `custom_field_44` text COLLATE utf8mb4_bin,
  `custom_field_45` text COLLATE utf8mb4_bin, `custom_field_46` text COLLATE utf8mb4_bin,
  `custom_field_47` text COLLATE utf8mb4_bin, `custom_field_48` text COLLATE utf8mb4_bin,
  `custom_field_49` text COLLATE utf8mb4_bin, `custom_field_50` text COLLATE utf8mb4_bin,
  `created_at` datetime(3) DEFAULT NULL,
  `updated_at` datetime(3) DEFAULT NULL,
  `_raw_data_params` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `_raw_data_table` varchar(255) COLLATE utf8mb4_bin DEFAULT NULL,
  `_raw_data_id` bigint unsigned DEFAULT NULL,
  `_raw_data_remark` longtext COLLATE utf8mb4_bin,
  PRIMARY KEY (`connection_id`, `id`),
  KEY `idx__tool_tapd_stories_modified` (`modified`),
  KEY `idx__tool_tapd_stories_raw_data_params` (`_raw_data_params`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_bin;
```
Then restart and perform the migration operation again.
@abeizn Why should users run such sql scripts?
Yes, I'm not sure what happened earlier that caused the database field to be deleted. It can only be fixed manually, or you can just delete the database and redeploy it.
Thanks @abeizn, it worked after I re-created the table and redeployed.
For debugging: it shows that an ALTER TABLE command is run which hits the unknown custom_field6 column, as shown in the picture above,
and the migration code linked to it is - link. But this code seems to be linked to the "_tool_tapd_tasks" table only, not "_tool_tapd_stories". Is this the intended workflow?
@pratiyush05 https://github.com/apache/incubator-devlake/blob/0f990b109c52dfaf7103405f043d449452a751e8/backend/plugins/tapd/models/migrationscripts/20230411_modify_custom_field_name.go#L42
There is a fallthrough here, so when issuesName is "_tool_tapd_stories", the program continues into the "_tool_tapd_tasks" case.
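A minimal sketch of that control flow (illustrative only, simplified from the linked migration code; the function and return values here are made up for the example):

```go
package main

import "fmt"

// migratedTables illustrates Go's fallthrough: matching the stories case
// continues execution into the tasks case, so both migration branches run.
func migratedTables(table string) []string {
	var ran []string
	switch table {
	case "_tool_tapd_stories":
		ran = append(ran, "stories migration")
		fallthrough // deliberately continue into the next case
	case "_tool_tapd_tasks":
		ran = append(ran, "tasks migration")
	}
	return ran
}

func main() {
	fmt.Println(migratedTables("_tool_tapd_stories")) // prints [stories migration tasks migration]
}
```

This is why the stories table is handled by a script that appears to target only "_tool_tapd_tasks".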
@d4x1 @abeizn Any update on this? Why does the migration get stuck for the InnoDB cluster?
@pratiyush05 I'm not sure why it crashed. Maybe there were changes in the database, maybe the migration failed during execution, or maybe a developer changed this value.
This issue has been automatically marked as stale because it has been inactive for 60 days. It will be closed in next 7 days if no further activity occurs.
This issue has been closed because it has been inactive for a long time. You can reopen it if you encounter a similar problem in the future.