
Failed to marshal state to json: unsupported attribute "public_network_access"

Open MrSimonC opened this issue 3 years ago • 4 comments

Hi everyone. I'm trying to import a resource group "digitalplatform-dev" (which obviously exists) into state held in an Azure storage account with this command:

aztfy resource `
  --backend-type=azurerm `
  --backend-config=resource_group_name=digitalplatform-state-dev `
  --backend-config=storage_account_name=digitalplatformstatedev `
  --backend-config=container_name=development `
  --backend-config=key=commoninfrastructure.tfstate `
  --name=main-resource-group `
  /subscriptions/000mysubscriptionguid000/resourceGroups/digitalplatform-dev

But I keep getting this error:

Error: generating Terraform configuration: converting from state to configurations: converting terraform state to config for resource azurerm_resource_group.res-0: show state: exit status 1
Failed to marshal state to json: unsupported attribute "public_network_access"

Any pointers? I can't see a reference to public_network_access anywhere in the resource group docs. The resource group I'm terraforming has resources inside it, but I'd like to think that's not connected.

... Update: interestingly, although the prompt shows a hard stop / error, after inspecting the remote state in the Azure storage account, I found it has updated/written the remote state correctly! So it's more a warning than an error, and this issue is actually less severe now I've found the remote state is written fine.
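
(For anyone wanting to double-check the same thing, a minimal sketch, assuming the backend values from the command above and a directory whose config declares the azurerm backend, is to initialise against the same backend and list the state:)

# backend.tf in this directory must declare: terraform { backend "azurerm" {} }
terraform init `
  -backend-config=resource_group_name=digitalplatform-state-dev `
  -backend-config=storage_account_name=digitalplatformstatedev `
  -backend-config=container_name=development `
  -backend-config=key=commoninfrastructure.tfstate
# list everything the remote state now tracks
terraform state list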

MrSimonC avatar Sep 09 '22 16:09 MrSimonC

@MrSimonC This is really weird... The error occurred when running terraform state show, after terraform import of the resource group. Could you please manually do the import and state show by following this guide, to see whether the issue still exists?
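
(For concreteness, a minimal sketch of that manual reproduction, assuming an otherwise empty working directory; the empty resource block is the usual pattern for terraform import, and the address/ID follow the error and command above:)

# main.tf:
#   provider "azurerm" { features {} }
#   resource "azurerm_resource_group" "res-0" {}
terraform init
terraform import azurerm_resource_group.res-0 /subscriptions/000mysubscriptionguid000/resourceGroups/digitalplatform-dev
# this is the step that fails with the marshal error
terraform state show azurerm_resource_group.res-0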

magodo avatar Sep 10 '22 00:09 magodo

(I'm back at work Monday, will try it then)

MrSimonC avatar Sep 10 '22 08:09 MrSimonC

So, apologies, work (as usual) takes over, but a few more things:

  1. I'm using PowerShell, not Git Bash, as aztfy seems not to like forward slashes at the start of the resource ID
  2. The issue occurs (today) when importing a single resource to remote Azure state
  3. Again, importing a single resource works flawlessly (directly against remote state held in Azure storage) but results in that reported error message
  4. No error is seen when importing locally (to an empty directory), only to remote Azure storage
  5. Where that remote Azure storage has other entries, and we're adding another resource into that existing state file, the error always appears.
  6. terraform state show azurerm_kubernetes_cluster_node_pool.cluster_node_pool (as an example) shows what you'd expect, e.g.
resource "azurerm_kubernetes_cluster_node_pool" "cluster_node_pool" {
    enable_auto_scaling    = true
    enable_host_encryption = false
    enable_node_public_ip  = false
    fips_enabled           = false
    id                     = "/subscriptions/0000mySubscription000/resourceGroups/digitalplatform-dev/providers/Microsoft.ContainerService/managedClusters/digitalplatform-cluster/agentPools/agentpool"
    kubelet_disk_type      = "OS"
    kubernetes_cluster_id  = "/subscriptions/0000mySubscription000/resourceGroups/digitalplatform-dev/providers/Microsoft.ContainerService/managedClusters/digitalplatform-cluster"
    max_count              = 6
    max_pods               = 110
    min_count              = 2
    mode                   = "System"
    name                   = "agentpool"
    node_count             = 2
    node_labels            = {}
    node_taints            = []
    os_disk_size_gb        = 128
    os_disk_type           = "Managed"
    os_sku                 = "Ubuntu"
    os_type                = "Linux"
    priority               = "Regular"
    scale_down_mode        = "Delete"
    spot_max_price         = -1
    tags                   = {}
    ultra_ssd_enabled      = false
    vm_size                = "Standard_DS2_v2"
    zones                  = [
        "1",
        "2",
        "3",
    ]

    timeouts {}
}
  7. I've found the issue comes from the contents of the current remote Azure state file we're appending to using aztfy resource ...

e.g. today I got: Error: generating Terraform configuration: converting from state to configurations: converting terraform state to config for resource azurerm_kubernetes_cluster_node_pool.main-resource-group: show state: exit status 1 Failed to marshal state to json: unsupported attribute "public_network_access"

... yet it imports OK. In the remote state file, the only existing entry which mentions "public_network_access" is:

{
    "version": 4,
    "terraform_version": "1.2.9",
    "serial": 21,
    "lineage": "b5d34b85-5f9d-1046-4605-f20852a1a77b",
    "outputs": {},
    "resources": [
      {
        "mode": "managed",
        "type": "azurerm_app_configuration",
        "name": "appconf",
        "provider": "provider[\"registry.terraform.io/hashicorp/azurerm\"]",
        "instances": [
          {
            "schema_version": 0,
            "attributes": {
              "endpoint": "https://myExampleEndPoint.azconfig.io",
              "id": "/subscriptions/000mySubscription000/resourceGroups/digitalplatform-dev/providers/Microsoft.AppConfiguration/configurationStores/myExampleEndPoint",
              "identity": [],
              "location": "westeurope",
              "name": "myExampleEndPoint",
              "primary_read_key": [
                {
                  "connection_string": "Endpoint=https://myExampleEndPoint.azconfig.io;Id=REDACTED;Secret=REDACTED",
                  "id": "REDACTED",
                  "secret": "REDACTED"
                }
              ],
              "primary_write_key": [
                {
                  "connection_string": "Endpoint=https://myExampleEndPoint.azconfig.io;Id=REDACTED;Secret=REDACTED",
                  "id": "REDACTED",
                  "secret": "REDACTED"
                }
              ],
              "public_network_access": "",
              "resource_group_name": "digitalplatform-dev",
              "secondary_read_key": [
                {
                  "connection_string": "Endpoint=https://myExampleEndPoint.azconfig.io;Id=REDACTED;Secret=REDACTED",
                  "id": "REDACTED",
                  "secret": "REDACTED"
                }
              ],
              "secondary_write_key": [
                {
                  "connection_string": "Endpoint=https://myExampleEndPoint.azconfig.io;Id=REDACTED;Secret=REDACTED",
                  "id": "REDACTED",
                  "secret": "REDACTED"
                }
              ],
              "sku": "free",
              "tags": {
                "environment": "development",
                "terraformed": "True"
              },
              "timeouts": null
            },
            "sensitive_attributes": [],
            "private": "REDACTED=",
            "dependencies": [
              "azurerm_resource_group.main_resource_group"
            ]
          }
        ]
      },
...
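
(As an aside, an illustrative way to locate entries like this without opening the state blob by hand is to stream the remote state and filter it, e.g. from PowerShell:)

# dump the remote state to stdout and filter for the offending attribute
terraform state pull | Select-String "public_network_access"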

MrSimonC avatar Sep 14 '22 13:09 MrSimonC

@MrSimonC Much appreciated for the detailed information above, which is quite useful!

The error is from the terraform show, which actually implies that the provider used by aztfy can't unmarshal what is stored in the remote state for azurerm_app_configuration. The reason is that the public_network_access property was introduced in v3.21.0, and each aztfy release is bound to a specific provider version (which you can see from the file provider.tf in the output directory of aztfy). I believe if you use the latest aztfy (v0.7.0), which is bound to provider v3.22.0, the issue should be resolved.
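
(Illustratively, the bound version shows up in the generated provider.tf like this; the contents below are indicative of the shape, not copied from a real run:)

Get-Content .\provider.tf

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "3.22.0"  # the version this aztfy release is bound to
    }
  }
}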

Meanwhile, the reason the current implementation of the state-to-config conversion also shows the state of unrelated resources is that the terraform-exec library doesn't yet support terraform state show, which has been requested in: https://github.com/hashicorp/terraform-exec/issues/336
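
(In other words, the conversion can only go through the whole-state command that terraform-exec already wraps, rather than scoping to a single address; roughly:)

# wrapped by terraform-exec (Show), so it decodes the entire state at once
terraform show -json
# per-address variant, not yet wrapped by the library (see the issue above)
terraform state show azurerm_resource_group.res-0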

magodo avatar Sep 15 '22 01:09 magodo

@magodo Is it possible to specify the azurerm provider version for aztfy? I have my existing plan at 3.20 and aztfy runs 3.46, which causes issues during import. It seems aztfy doesn't respect the provider setting at all; is it possible to somehow force it to use version 3.20?

volver-13 avatar Mar 09 '23 14:03 volver-13

@brysk If you are appending to an existing workspace where you have the terraform block defined (you can simply define it if it doesn't exist), it will use that version (i.e. v3.20.0). But that might cause issues, as aztfy assumes the provider's schema matches the bound version.

magodo avatar Mar 09 '23 14:03 magodo

@magodo yes, I actually have a terraform block defined in my existing workspace:

terraform {
  required_providers {

    azurerm = {
      source  = "hashicorp/azurerm"
      version = "= 3.20.0"
    }
  }

  required_version = ">= 1.1.0"
}

For some reason it always uses the latest, which is 3.46, and I'm not able to find a solution here.

EDIT: I'm using v0.10 since v0.11 is not yet available on the Homebrew tap

aztfy --version
aztfy version v0.10.0(c11238f)

volver-13 avatar Mar 09 '23 14:03 volver-13

@brysk You are right; aztfy generates a couple of temp directories to import in parallel, and each such directory creates a Terraform config with the bound provider version. While there is an escape hatch for this (--dev-provider), it needs special setup. I'll create a new issue (#375) to track this feature request.
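
(For the record, the "special setup" is presumably along the lines of Terraform's provider development overrides; a hypothetical CLI config follows, with the path purely illustrative, so check the aztfy docs for the exact expectations:)

# ~/.terraformrc (terraform.rc on Windows); the override path below is hypothetical
provider_installation {
  dev_overrides {
    "hashicorp/azurerm" = "C:/Users/me/go/bin"
  }
  direct {}
}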

magodo avatar Mar 10 '23 01:03 magodo

Fixed by #376

magodo avatar May 24 '23 07:05 magodo