AWS::RDS::DBCluster with its associated AWS::Logs::LogGroup error
This is not specifically about a new attribute or option to support, so the issue template does not really fit this case.
When a new AWS::RDS::DBCluster is created, the cluster immediately starts logging to /aws/rds/cluster/${DBCluster}/error, and it then becomes impossible to explicitly declare that AWS::Logs::LogGroup in the CloudFormation template, because the log group already exists as soon as the DBCluster is created.
Here is an example that fails with this error:
CREATE_FAILED /aws/rds/cluster/apviz-api-sql-test-dbcluster-xxxxx/error already exists
DBCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    DatabaseName: foo
    Engine: aurora
    EngineMode: serverless
    EngineVersion: 5.6.10a

DBClusterLogGroupError:
  Type: AWS::Logs::LogGroup
  Properties:
    LogGroupName: !Sub /aws/rds/cluster/${DBCluster}/error
    RetentionInDays: 90
As you can see, in order to create the log group I need the DB cluster name... but the DB cluster creates the log group implicitly before I can create it explicitly with AWS::Logs::LogGroup.
Is there a workaround?
Or should we think about a new CloudFormation feature? Something like a way to "upsert" an AWS::Logs::LogGroup resource, which would not fail if the log group already exists but update it instead? I'm not sure that's a good idea. 🤔
Or maybe a new resource that sets the default properties of any AWS::Logs::LogGroup created with a given prefix, e.g. /aws/rds/cluster/* 👍
For example, we could have:
LogGroupDefaultProperties:
  Type: AWS::Logs::LogGroupDefaultProperties
  Properties:
    LogGroupPattern: /aws/rds/cluster/*/error
    RetentionInDays: 90
Edit: Maybe this is much more of an AWS::Logs::LogGroup issue.
I have not been able to reproduce this error; I was able to create RDS log groups with CloudFormation for both serverless and provisioned clusters. Has this issue perhaps been (silently) resolved, or is it a race condition between CloudFormation and RDS that produces unpredictable behaviour?
I ran into this today.
Having an Aurora MySQL database with audit logs enabled, I am unable to update my existing CFN stack with an AWS::Logs::LogGroup to specify a log retention period as infrastructure as code.
I think a nice solution would be to give AWS::RDS::DBCluster an explicit attribute for referencing an AWS::Logs::LogGroup instead of defining the log group name implicitly.
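Purely as a sketch of that idea: the CloudwatchLogsExportLogGroups property below is invented for illustration and does not exist on AWS::RDS::DBCluster today, but it shows roughly how an explicit reference could replace the implicit log group name.

AuditLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    RetentionInDays: 90

DBCluster:
  Type: AWS::RDS::DBCluster
  Properties:
    Engine: aurora-mysql
    EnableCloudwatchLogsExports:
      - audit
    # Hypothetical property: map each exported log type to an explicitly
    # managed log group instead of letting RDS create /aws/rds/cluster/<name>/audit
    CloudwatchLogsExportLogGroups:
      audit: !Ref AuditLogGroup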
Definitely still happening as of today. It seems like a race condition, depending on how quickly the DB server starts logging compared to how slowly CloudFormation deploys the resources.
Still happening today when trying to deploy a Postgres-based Aurora Serverless RDS. Is there a solution yet?
Still happening to me too. The only workaround I found is to remove the EnableCloudwatchLogsExports property from RDS until after the initial deployment. Then add it and update the stack to enable logging on the RDS instance.
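For what it's worth, here is a minimal sketch of how I read that workaround (the cluster identifier, engine, and log type are illustrative, and I'm assuming RDS simply reuses the pre-created log group once exports are enabled): step 1 deploys the cluster with no EnableCloudwatchLogsExports at all, and step 2 adds the explicit log group together with the export setting.

# Step 2 only; step 1 is the same cluster without EnableCloudwatchLogsExports,
# so RDS never creates the log group on its own.
DBClusterLogGroupError:
  Type: AWS::Logs::LogGroup
  Properties:
    # Hard-coded to match DBClusterIdentifier below, so there is no implicit
    # dependency on the cluster and the log group can be created first.
    LogGroupName: /aws/rds/cluster/my-cluster/error
    RetentionInDays: 90

DBCluster:
  Type: AWS::RDS::DBCluster
  DependsOn: DBClusterLogGroupError   # create the log group before exports are enabled
  Properties:
    DBClusterIdentifier: my-cluster   # illustrative; unchanged from step 1
    Engine: aurora-mysql
    MasterUsername: testuser
    MasterUserPassword: testpassword
    EnableCloudwatchLogsExports:      # added only in this second update
      - error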
By design, CFN resource implementations are scoped down in visibility and permissions. There is no way for the DBCluster handlers to know about other resources in the stack, such as the LogGroup, and handle them properly, so no enhancement can be made at the resource layer. However, I feel your pain; I have hit "already exists" exceptions in many cases besides the one mentioned here while using CFN. I opened the feature request https://github.com/aws-cloudformation/cloudformation-coverage-roadmap/issues/1402, which would be a good fit here: marking the LogGroup resource as AutoImportIfExists would mean CFN picks up the already existing resource and starts modifying it directly to reach its desired state. Currently, Resource Import is only available via the AWS Management Console, which is a manual step.
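To make that idea concrete, here is a rough sketch of how such a marker might read in a template. This syntax is imagined; the roadmap issue does not define one and CloudFormation does not support it today.

DBClusterLogGroupError:
  Type: AWS::Logs::LogGroup
  # Imagined flag: instead of failing with "already exists", CFN would import
  # the log group RDS created and then apply the declared properties.
  AutoImportIfExists: true
  Properties:
    LogGroupName: !Sub /aws/rds/cluster/${DBCluster}/error
    RetentionInDays: 90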
Possible ways to avoid this issue
- Extract the cluster identifier as a parameter, so you can create the LogGroup first and use DependsOn on the DBCluster. Here is an example stack. Cons: you will need to supply a new parameter value whenever you modify any DBCluster property that requires replacement, otherwise the stack update will fail.
Parameters:
  ClusterId:
    Type: String

Resources:
  DBCluster:
    Type: AWS::RDS::DBCluster
    DependsOn: DBClusterLogGroupError
    Properties:
      DBClusterIdentifier: !Ref ClusterId
      DatabaseName: foo
      Engine: aurora
      EngineMode: serverless
      EngineVersion: 5.6.10a
      MasterUsername: testuser
      MasterUserPassword: testpassword

  DBClusterLogGroupError:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Sub /aws/rds/cluster/${ClusterId}/error
      RetentionInDays: 90
- Randomly generate the DBCluster identifier. This takes only one step and does not need any manual intervention each time you modify the stack.
Cons: any change to this identifier will always trigger resource replacement.
Resources:
  DBCluster:
    Type: AWS::RDS::DBCluster
    DependsOn: DBClusterLogGroupError
    Properties:
      DBClusterIdentifier: !Select [4, !Split ["/", !Ref DBClusterLogGroupError]]
      DatabaseName: foo
      Engine: aurora
      EngineMode: serverless
      EngineVersion: 5.6.10a
      MasterUsername: testuser
      MasterUserPassword: testpassword

  DBClusterLogGroupError:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: !Join ['', ["/aws/rds/cluster/id-prefix-", !Select [0, !Split ['-', !Select [2, !Split ['/', !Ref AWS::StackId]]]], '/error']]
      RetentionInDays: 90
- Solution mentioned by JeffRausch: do it in two steps.
A. Create the stack with the cluster and EnableCloudwatchLogsExports omitted.
B. Update the stack again with EnableCloudwatchLogsExports set.