Question about reference usage
**Question:** Hello,
With the goal of enforcing encryption of AWS RDS clusters using Customer Managed KMS keys only, I am trying to use terraform-compliance to enforce a naming convention on the name of a data source referenced in a resource.
Feature: RDS cluster must be encrypted with Encryption Keys generated by Keytrade CISO
  In order to improve security
  As engineers
  We will only use storage_encrypted as true in aws_rds_cluster

  Scenario: Enforce encryption for RDS Cluster
    Given I have aws_rds_cluster defined
    Then it must contain storage_encrypted
    And its value must be true

  Scenario Outline: Forbid usage of AWS-managed encryption keys in RDS clusters
    Given I have <variable_name> variable configured
    Then its value must not match the "aws/*" regex

    Examples:
      | variable_name  |
      | rds_kms_key-v1 |
      | rds_kms_key-v2 |
      | rds_kms_key-v3 |
      | rds_kms_key-v4 |
      | rds_kms_key-v5 |

  Scenario: Enforce naming convention
    Given I have aws_rds_cluster defined
Here I would like to make sure that the `name` of the `data.aws_kms_key` used in the `aws_rds_cluster` always matches the regex `^rds_kms_key-v[0-9]*$`, so that people cannot bypass the previous scenario.
More precisely, I would like to enforce that when an `aws_rds_cluster` resource is defined, the reference used in `kms_key_id` must follow the naming convention `^rds_kms_key-v[0-9]*$`:
resource "aws_rds_cluster" "my_aws_rds_cluster" {
...
kms_key_id = data.aws_kms_key.rds_key-v1.arn
...
}
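Something along these lines is what I have in mind; this is only a rough sketch, since I am not sure the regex would be applied to the reference string (`data.aws_kms_key....`) rather than to the resolved ARN:

```gherkin
Scenario: Enforce naming convention on the referenced KMS key
  Given I have aws_rds_cluster defined
  When it contains kms_key_id
  Then its value must match the "^data\.aws_kms_key\.rds_kms_key-v[0-9]*$" regex
```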
Do you see a way to do this?
Feel free to ask if I can provide more information!
Thanks in advance, Niels
Hi @Nielsoux
I'm having trouble recreating your issue. Is it possible to share the plan file?
Hi @Kudbettin,
Thanks for taking the time to have a look at my question!
Unfortunately the plan is full of information my CISO would not be happy about me sharing ^^
Is it sufficient if I share the Terraform resources I use?
KMS key definition
data "aws_kms_key" "rds_key-v1" {
key_id = "alias/${var.rds_kms_key-v1}"
}
variable "rds_kms_key-v1" {
default = "aws/rds"
}
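For context, a compliant configuration on our side would point the variable at a customer-managed key alias instead of the AWS-managed `aws/rds` one; a sketch, with a made-up alias name:

```hcl
# Hypothetical compliant value: a customer-managed key alias,
# so "alias/${var.rds_kms_key-v1}" does not match "aws/*"
variable "rds_kms_key-v1" {
  default = "rds_kms_key-v1"
}
```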
Cluster definition
resource "aws_db_subnet_group" "cluster1-mysql" {
name = "cluster1-mysql"
subnet_ids = [id1, id2]
tags = "${merge(
local.common_tags,
map(
"Name", "Subnet group for cluster1 MySQL"
)
)}"
}
resource "aws_rds_cluster_parameter_group" "mysql-pg" {
name = "cluster1-mysql-pg"
family = "aurora-mysql5.7"
parameter {
name = "binlog_format"
value = "ROW"
apply_method = "pending-reboot"
}
}
resource "aws_db_parameter_group" "mysql-pg" {
name = "cluster1-mysql-pg"
family = "aurora-mysql5.7"
}
resource "aws_security_group" "cluster1-ingress-mysql" {
name = "cluster1-ingress-mysql"
vpc_id = data.aws_vpc.vpc.id
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = var.sg_subnets
}
}
resource "aws_rds_cluster" "aws_rds_cluster1" {
cluster_identifier = "cluster1"
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.04.5"
db_subnet_group_name = aws_db_subnet_group.cluster1.name
database_name = "cluster1"
master_username = "master"
master_password = "AStrongPassword"
vpc_security_group_ids = [aws_security_group.cluster1-ingress-mysql.id]
skip_final_snapshot = false
deletion_protection = true
backup_retention_period = 30
preferred_backup_window = "01:00-03:00"
preferred_maintenance_window = "sat:03:30-sat:04:30"
storage_encrypted = true
kms_key_id = data.aws_kms_key.rds_key-v1.arn
db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.mysql-pg.name
tags = merge(
local.common_tags,
map(
"Name", "A MySQL Aurora Cluster"
)
)
}
It would result in this (excerpts from the plan JSON):
Cluster
{
  "address": "aws_rds_cluster.aws_rds_cluster1",
  "mode": "managed",
  "type": "aws_rds_cluster",
  "name": "aws_rds_cluster1",
  "provider_config_key": "aws",
  "expressions": {
    ...
    "kms_key_id": {
      "references": [
        "data.aws_kms_key.rds_key-v1"
      ]
    },
    ...
  },
  "schema_version": 0
},
KMS key
{
  "address": "data.aws_kms_key.rds_key-v1",
  "mode": "data",
  "type": "aws_kms_key",
  "name": "rds_key-v1",
  "provider_config_key": "aws",
  "expressions": {
    "key_id": {
      "references": [
        "var.rds_kms_key-v1"
      ]
    }
  },
  "schema_version": 0
},
Variables
"variables": {
  ...
  "rds_kms_key-v1": {
    "default": "aws/rds"
  },
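For completeness, the plan JSON above can be reproduced with something like this (file names are just examples):

```sh
terraform plan -out=plan.out
terraform show -json plan.out > plan.out.json
terraform-compliance -p plan.out.json -f features/
```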
So with all that, my three scenarios are meant to check:
1. The aws_rds_cluster has storage_encrypted defined and set to true.
2. If I have a variable named rds_kms_key-v[1-5] configured, its value must not match the "aws/*" regex, so that it is not set to an AWS-managed key.
3. People are prevented from referencing any variable other than var.rds_kms_key-v[1-5] inside the kms_key_id of an aws_rds_cluster, the goal being that they cannot bypass scenario 2.
I hope this is sufficient for you to reproduce; feel free to ask if I can provide more information!
Kind regards, Niels
Hello @Kudbettin
Have you had any time to have a look at this recently?
The subject of enforcing CMKs is back on the table on our side and we would like to push a compliance rule for it :)
Thanks for your help!
Niels
Looking into this, I think we had a similar problem with EBS and KMS as well. That was due to the inconsistent structure of resources within the Terraform AWS provider, where those resources were not properly linked within the plan output.
Let me try to reproduce the problem you had, then I will update here. Hopefully this afternoon.
Sounds like there was only time for one release this afternoon. I am still on this one and will have a look over the weekend.
Thanks for the feedback @eerkunt, really appreciated!
Hi again,
It looks like this requires an enhancement in terraform-compliance. I have a feeling this could be done if the kms_key were a resource rather than a data source, because we keep the references between resources that link to each other (we call it resource mounting). This functionality only works for resources, since a module or a data source could literally be anything that might violate the provider's resource structure. I thought about it for a while and I think we can enable module and data mounting somehow. I need to try it first with a generic use case, then we can enable the solution for your case.
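To illustrate the idea, purely as a conceptual sketch and not our actual internal data model: mounting would inline the referenced entity under the referring resource, so a check on kms_key_id could traverse into the data source, roughly like this:

```jsonc
// conceptual sketch only, not the real terraform-compliance structure
{
  "address": "aws_rds_cluster.aws_rds_cluster1",
  "type": "aws_rds_cluster",
  "values": {
    "kms_key_id": {
      "data.aws_kms_key.rds_key-v1": {
        "name": "rds_key-v1",
        "key_id": "alias/${var.rds_kms_key-v1}"
      }
    }
  }
}
```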
It looks like this will take a while, sorry. I must say, this is a very good issue :) An eye-opening one.
Hello @eerkunt
Thank you very much for your time and the explanation, I'm glad you think my issue is a very good one! 👍 ^^
Kind regards, Niels