
google_cloudfunctions2_function event_filter causes "Provider produced inconsistent final plan" #20969

Open
dan-drew opened this issue Jan 20, 2025 · 10 comments

@dan-drew

Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request.
  • Please do not leave +1 or me too comments, they generate extra noise for issue followers and do not help prioritize the request.
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment.
  • If an issue is assigned to a user, that user is claiming responsibility for the issue.
  • Customers working with a Google Technical Account Manager or Customer Engineer can ask them to reach out internally to expedite investigation and resolution of this issue.

Terraform Version & Provider Version(s)

Terraform v1.9.8
on linux_amd64

  • provider registry.terraform.io/hashicorp/google v6.12.0

Affected Resource(s)

google_cloudfunctions2_function

Terraform Configuration

resource "google_cloudfunctions2_function" "setup" {
  # ...

  event_trigger {
      event_type   = "google.cloud.audit.log.v1.written"
      retry_policy = "RETRY_POLICY_RETRY"

      event_filters {
        attribute = "serviceName"
        value     = "compute.googleapis.com"
      }
      event_filters {
        attribute = "methodName"
        value     = "v1.compute.instances.start"
      }
      event_filters {
        attribute = "resourceName"
        value     = some_gcp_instance_resource.id
      }
  }
}

Debug Output

Let me know if needed; I haven't run with full debug logging yet.

Expected Behavior

Changes should apply successfully every time.

Actual Behavior

Every other time the plan runs, the following errors occur:

│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.synamedia.module.setup.module.gcp[0].google_cloudfunctions2_function.setup to include new values learned so far during apply,
│ provider "registry.terraform.io/hashicorp/google" produced an invalid new value for .event_trigger[0].event_filters: planned set element
│ cty.ObjectVal(map[string]cty.Value{"attribute":cty.StringVal("methodName"), "operator":cty.NullVal(cty.String),
│ "value":cty.StringVal("v1.compute.instances.start")}) does not correlate with any element in actual.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

│ Error: Provider produced inconsistent final plan
│
│ When expanding the plan for module.synamedia.module.setup.module.gcp[0].google_cloudfunctions2_function.setup to include new values learned so far during apply,
│ provider "registry.terraform.io/hashicorp/google" produced an invalid new value for .event_trigger[0].event_filters: planned set element
│ cty.ObjectVal(map[string]cty.Value{"attribute":cty.StringVal("serviceName"), "operator":cty.NullVal(cty.String),
│ "value":cty.StringVal("compute.googleapis.com")}) does not correlate with any element in actual.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

Steps to reproduce

  1. Apply for the first time to create the function:

     terraform apply

  2. Make a change that requires an update (for example, to the function code), then apply again:

     terraform apply

     The apply fails with the errors described in Actual Behavior.

  3. Immediately run apply again:

     terraform apply

     The apply succeeds as expected.

Important Factoids

Authenticating as a user with full permissions to run the required operations.

References

No response

@dan-drew dan-drew added the bug label Jan 20, 2025
@github-actions github-actions bot added forward/review In review; remove label to forward service/cloudfunctions labels Jan 20, 2025
@ggtisc ggtisc self-assigned this Jan 21, 2025
@ggtisc
Collaborator

ggtisc commented Jan 22, 2025

Hi @dan-drew

I tried to replicate this issue, but everything looks good without errors. Have you assigned the correct roles to use event_trigger on your function?

You may check the following used code (based on this example: link here) to check if you have the correct configuration:

resource "google_storage_bucket" "bucket_20969" {
  name     = "bucket-20969"
  location = "us-central1"
}

resource "google_storage_bucket" "trigger_bucket_20969" { # The trigger must be in the same location as the bucket
  name     = "trigger-bucket-20969"
  location = "us-central1"
}

resource "google_storage_bucket_object" "bucket_object_20969" {
  name   = "index20969.zip"
  bucket = google_storage_bucket.bucket_20969.name
  source = "./utils/google_cloud_repository/index.zip"
}

# you need the following roles to manage the event_trigger filters
resource "google_project_iam_member" "run_invoker_20969" {
  project = "project-20969"
  role    = "roles/run.invoker"
  member  = "user:user-20969@domain-20969.com"
}

resource "google_project_iam_member" "eventarc_receiver_20969" {
  project     = "project-20969"
  role        = "roles/eventarc.eventReceiver"
  member      = "user:user-20969@domain-20969.com"
  depends_on  = [google_project_iam_member.run_invoker_20969]
}

resource "google_project_iam_member" "artifactregistry_reader_20969" {
  project     = "project-20969"
  role        = "roles/artifactregistry.reader"
  member      = "user:user-20969@domain-20969.com"
  depends_on  = [google_project_iam_member.eventarc_receiver_20969]
}

resource "google_cloudfunctions2_function" "function_20969" {
  name     = "cf2-function-20969"
  location = "us-central1"

  build_config {
    runtime     = "nodejs16"
    entry_point = "helloGET"

    source {
      storage_source {
        bucket     = google_storage_bucket.bucket_20969.name
        object     = google_storage_bucket_object.bucket_object_20969.name
      }
    }
  }

  event_trigger { # you also need Eventarc permissions for your IAM user at the project and/or organization level
    event_type   = "google.cloud.audit.log.v1.written"
    retry_policy = "RETRY_POLICY_RETRY"

    event_filters {
      attribute = "serviceName"
      value     = "compute.googleapis.com"
    }

    event_filters {
      attribute = "methodName"
      value     = "v1.compute.instances.start"
    }

    event_filters {
      attribute = "resourceName"
      value     = google_storage_bucket.trigger_bucket_20969.name # The trigger must be in the same location as the bucket
    }
  }
}

If you are still having issues after this, please share the full code of all involved resources WITHOUT USING MODULES, LOCALS, OR VARIABLES so we can try again. For sensitive data you can use placeholders like:

  1. project = "project-20969"
  2. member = "user:user-20969@domain-20969.com"

@dan-drew
Author

Hi @dan-drew

I tried to replicate this issue, but everything looks good without errors. have you assigned the correct roles to use event_trigger on your function?

Yes, everything works great when I don't hit this error. As I listed in the repro instructions, I just have to run the exact same apply after getting the error, and it succeeds the second time around.

One thing that's different is that, for some reason, the resource ID in my case has been tagged as sensitive. I never do this explicitly, so it may be an issue somewhere else in the google provider. A fuller example would look like the one below, with a couple of module layers, but maybe you could try something simpler, like adding a fake sensitive variable?

FYI I can try and find a simpler repro myself too, just slammed trying to get the project done. This isn't blocking me since the workaround is to run apply again, but wanted to get the issue filed.

variable "method_name" {
  type      = string
  default   = "v1.compute.instances.start"
  sensitive = true # fake sensitive variable to mimic the behavior described above
}

resource "google_cloudfunctions2_function" "function" {
  # ...

  event_trigger {
    event_filters {
      attribute = "methodName"
      value     = var.method_name
    }
  }
}

Full Example

terraform-module-1

resource "google_compute_instance" "instance" {
   # ...
}

output "gcp_instance" {
  value = google_compute_instance.instance
}

terraform-module-2

module "gcp_instance" {
   source = ".../terraform-module-1"
}

output "gcp_instance" {
  value = module.gcp_instance.gcp_instance
}

my-project

module "instance" {
  source = "../terraform-module-2"
}

# NOTE: For some reason module.instance.gcp_instance is now marked SENSITIVE

resource "google_cloudfunctions2_function" "setup" {
  # ...

  event_trigger {
      event_type   = "google.cloud.audit.log.v1.written"
      retry_policy = "RETRY_POLICY_RETRY"

      event_filters {
        attribute = "serviceName"
        value     = "compute.googleapis.com"
      }
      event_filters {
        attribute = "methodName"
        value     = "v1.compute.instances.start"
      }
      event_filters {
        attribute = "resourceName"
        value     = module.instance.gcp_instance.id   # Marked SENSITIVE
      }
  }
}

@ggtisc
Collaborator

ggtisc commented Jan 22, 2025

Sorry @dan-drew, this is not clear at all. Were you able to solve it?

@dan-drew
Author

Sorry @dan-drew, this is not clear at all. Were you able to solve it?

No, just that I'm not blocked because the workaround is to run apply again. This was stated in the steps to reproduce:

  1. Run apply first time
  2. Apply fails with errors about inconsistent plan.
  3. Run apply a second time
  4. Apply succeeds now, no errors.

The main additional info I added above is about whether "sensitive" values were tripping it up somehow. Previous errors I've gotten were specifically about the sensitive flag being inconsistent.

If this isn't making sense or you can't repro, feel free to punt back to me until I have a chance to better repro myself with debug logging on.
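One stopgap I have not actually verified (so treat it as an assumption): if the sensitive marking is what trips the provider up, Terraform's built-in nonsensitive() function could strip it before the value reaches the set-typed event_filters block:

```hcl
event_filters {
  attribute = "resourceName"
  # Hypothetical workaround: unmark the value, which is not actually secret here.
  # nonsensitive() is a built-in Terraform function.
  value = nonsensitive(module.instance.gcp_instance.id)
}
```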

@ggtisc
Collaborator

ggtisc commented Jan 22, 2025

I tried again, but everything works fine without errors with just one terraform apply. Finally, have you tried without using modules? Maybe this is something related to your own configuration, because with the base Terraform code everything is good.

@micahjsmith

I'm experiencing something similar. I upgraded from provider v5.29.1 to v6.17.0 and now get an issue with a google_cloud_run_v2_service resource (elided):

╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for module.xyz.google_cloud_run_v2_service.abc to include new values learned so far
│ during apply, provider "registry.terraform.io/hashicorp/google" produced an invalid new value for
│ .template[0].containers[0].env: planned set element
│ cty.ObjectVal(map[string]cty.Value{"name":cty.StringVal("CLIENT_SECRET"), "value":cty.NullVal(cty.String),
│ "value_source":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"secret_key_ref":cty.ListVal([]cty.Value{cty.ObjectVal(map[string]cty.Value{"secret":cty.StringVal("ABC__CLIENT_SECRET"),
│ "version":cty.StringVal("latest")})})})})}) does not correlate with any element in actual.
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.

The plan shows as deleting all the existing env vars and creating new ones.

Here is an elided service definition

resource "google_cloud_run_v2_service" "abc" {
  name     = "abc"
  location = "us-central1"
  ingress  = "INGRESS_TRAFFIC_ALL"

  template {
    service_account       = var.abc_service_account
    execution_environment = "EXECUTION_ENVIRONMENT_GEN2"
    scaling {
      min_instance_count = 1
      max_instance_count = 2
    }
    timeout = "120s"
    containers {
      image = data.google_container_registry_image.abc_image.image_url
      env {
        name = "CLIENT_SECRET"
        value_source {
          secret_key_ref {
            secret  = data.google_secret_manager_secret_version.abc_client_secret_version.secret
            version = "latest"
          }
        }
      }
      ports {
        container_port = 8000
        name           = "http1"
      }
      resources {
        limits = {
          cpu    = "4"
          memory = "8Gi"
        }

        startup_cpu_boost = true
      }

      liveness_probe {
        http_get {
          path = "/liveness"
          port = 8000
        }
      }

      startup_probe {
        http_get {
          path = "/readiness"
          port = 8000
        }
        initial_delay_seconds = 20
        failure_threshold     = 4
        period_seconds        = 5
        timeout_seconds       = 5
      }
    }

    vpc_access {
      egress = "ALL_TRAFFIC"
      network_interfaces {
        network    = "default"
        subnetwork = "default"
      }
    }
  }
}

Similarly, I was able to run terraform apply, hit the error above, then run terraform apply again, and it succeeded.

@dan-drew
Author

dan-drew commented Jan 23, 2025

The plan shows as deleting all the existing env vars and creating new ones.

That's interesting, I've definitely also seen this when the only change was to the function env vars.

@ggtisc
Collaborator

ggtisc commented Jan 23, 2025

Have you tried with the base terraform resources without using modules?

@micahjsmith

Have you tried with the base terraform resources without using modules?

@ggtisc is this question for me? no I have not tried that

Adding to my report: the "double apply" workaround does not actually solve it for me.

Or rather, any time there is a change to my Cloud Run service, the initial apply fails and a second apply is needed (same error messages relating to env vars). This means my CI pipelines continually fail unless I build the double apply into the pipelines, which doesn't seem great.
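For anyone else stuck on this in CI, the least-bad stopgap I can think of is a retry wrapper (a sketch, assuming the second apply reliably succeeds, which matches my experience; the function name is made up):

```shell
# apply_with_retry: run a command and retry it exactly once on failure.
# Intended CI use: apply_with_retry terraform apply -auto-approve
apply_with_retry() {
  if "$@"; then
    return 0
  fi
  echo "first attempt failed; retrying once..." >&2
  "$@"
}
```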

@ggtisc
Collaborator

ggtisc commented Jan 24, 2025

Please do it @micahjsmith. I've tried many times with the base code without modules, which means this is just a mistake in the configuration.
