
Provider produced inconsistent final plan with automated logpush ownership challenge #3459

Closed
3 tasks done
Kaitou786 opened this issue Jul 15, 2024 · 2 comments
Labels
triage/duplicate Indicates an issue is a duplicate of other open issue.

Comments


Kaitou786 commented Jul 15, 2024

Confirmation

  • This is a bug with an existing resource and is not a feature request or enhancement. Feature requests should be submitted with Cloudflare Support or your account team.
  • I have searched the issue tracker and my issue is not already reported.
  • I have replicated my issue using the latest version of the provider and it is still present.

Terraform and Cloudflare provider version

Terraform v1.9.2
on darwin_arm64
+ provider registry.terraform.io/cloudflare/cloudflare v4.37.0
+ provider registry.terraform.io/hashicorp/google v5.37.0
+ provider registry.terraform.io/hashicorp/google-beta v5.37.0

Affected resource(s)

  • cloudflare_logpush_job
  • cloudflare_logpush_ownership_challenge

Terraform configuration files

resource "google_storage_bucket" "cloudflare_logpush" {
  count = local.is_gcp ? 1 : 0
  name     = "${var.tenant_id}-cloudflare-logs"
  location = "US"
  project  = var.gcp_project_id
  force_destroy = true
  lifecycle_rule {
    condition {
      age            = var.retention_period_days
      matches_prefix = ["v2/"]
    }
    action {
      type = "Delete"
    }
  }
}

resource "google_storage_bucket_iam_member" "logpush_cloudflare" {
  count = local.is_gcp  ? 1 : 0
  bucket = google_storage_bucket.cloudflare_logpush[0].name
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:logpush@cloudflare-data.iam.gserviceaccount.com"
  depends_on = [ google_storage_bucket.cloudflare_logpush[0] ]
}


resource "cloudflare_logpush_ownership_challenge" "gcp_challenge" {
  zone_id          = var.cloudflare_zone_id
  destination_conf = "${local.destination_conf}/v2/spectrum/{DATE}"
  depends_on = [ google_storage_bucket_iam_member.logpush_cloudflare ]
}



data "google_storage_bucket_object_content" "challenge" {
  name   = cloudflare_logpush_ownership_challenge.gcp_challenge.ownership_challenge_filename
  bucket = google_storage_bucket.cloudflare_logpush[0].name
  depends_on = [ cloudflare_logpush_ownership_challenge.gcp_challenge ]
}

resource "cloudflare_logpush_job" "v2_spectrum" {
  enabled             = true
  zone_id             = var.cloudflare_zone_id
  name                = "Logpush-v2-${var.tenant_id}-Spectrum"
  logpull_options     = "fields=Application,ClientAsn,ClientBytes,ClientCountry,ClientIP,ClientMatchedIpFirewall,ClientPort,ClientProto,ColoCode,ConnectTimestamp,DisconnectTimestamp,Event,OriginBytes,OriginIP,OriginPort,OriginProto,Status,Timestamp,IpFirewall,ProxyProtocol,ClientTcpRtt,ClientTlsCipher,ClientTlsClientHelloServerName,ClientTlsProtocol,ClientTlsStatus,OriginTcpRtt,OriginTlsCipher,OriginTlsFingerprint,OriginTlsMode,OriginTlsProtocol,OriginTlsStatus&timestamps=rfc3339"
  destination_conf    = "${local.destination_conf}/v2/spectrum/{DATE}"
  ownership_challenge = data.google_storage_bucket_object_content.challenge.content
  dataset             = "spectrum_events"
  depends_on = [ data.google_storage_bucket_object_content.challenge, cloudflare_logpush_ownership_challenge.gcp_challenge ]
}

Link to debug output

https://gist.github.com/Kaitou786/2c4534a610b3abdb2ba167930dfb2f49

Panic output

No response

Expected output

Apply proceeds and creates the logpush job without any problem.

Actual output

╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for module.cloudflare-logpush[0].cloudflare_logpush_job.v2_spectrum to include new values learned so far during apply, provider "registry.terraform.io/cloudflare/cloudflare" produced an invalid new value for .ownership_challenge: was null, but now
│ cty.StringVal("REDACTED_VALUE").
│ 
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
╵

Steps to reproduce

The minimum steps required for reproducing this problem would be:

  1. Create an ownership challenge:
resource "cloudflare_logpush_ownership_challenge" "gcp_challenge" {
  zone_id          = var.cloudflare_zone_id
  destination_conf = "${local.destination_conf}/v2/spectrum/{DATE}"
}
  2. Read that ownership challenge from the bucket, using the data block.
data "google_storage_bucket_object_content" "challenge" {
  name   = cloudflare_logpush_ownership_challenge.gcp_challenge.ownership_challenge_filename
  bucket = google_storage_bucket.cloudflare_logpush[0].name
  depends_on = [ cloudflare_logpush_ownership_challenge.gcp_challenge ]
}
  3. Use the above data when creating the logpush job, explicitly depending on the above resources:
resource "cloudflare_logpush_job" "v2_spectrum" {
  enabled             = true
  zone_id             = var.cloudflare_zone_id
  name                = "Logpush-v2-${var.tenant_id}-Spectrum"
  logpull_options     = "fields=Application,ClientAsn,ClientBytes,ClientCountry,ClientIP,ClientMatchedIpFirewall,ClientPort,ClientProto,ColoCode,ConnectTimestamp,DisconnectTimestamp,Event,OriginBytes,OriginIP,OriginPort,OriginProto,Status,Timestamp,IpFirewall,ProxyProtocol,ClientTcpRtt,ClientTlsCipher,ClientTlsClientHelloServerName,ClientTlsProtocol,ClientTlsStatus,OriginTcpRtt,OriginTlsCipher,OriginTlsFingerprint,OriginTlsMode,OriginTlsProtocol,OriginTlsStatus&timestamps=rfc3339"
  destination_conf    = "${local.destination_conf}/v2/spectrum/{DATE}"
  ownership_challenge = data.google_storage_bucket_object_content.challenge.content
  dataset             = "spectrum_events"
  depends_on = [ data.google_storage_bucket_object_content.challenge, cloudflare_logpush_ownership_challenge.gcp_challenge ]
}

Additional factoids

  • The initial plan for creating the logpush job also doesn't include ownership_challenge, i.e. it comes out as null instead of (known after apply)

ref:


  + resource "cloudflare_logpush_job" "v2_spectrum" {
      + dataset          = "spectrum_events"
      + destination_conf = "gs://maintkhandelwal3rev-cloudflare-logs/v2/spectrum/{DATE}"
      + enabled          = true
      + frequency        = "high"
      + id               = (known after apply)
      + logpull_options  = "fields=Application,ClientAsn,ClientBytes,ClientCountry,ClientIP,ClientMatchedIpFirewall,ClientPort,ClientProto,ColoCode,ConnectTimestamp,DisconnectTimestamp,Event,OriginBytes,OriginIP,OriginPort,OriginProto,Status,Timestamp,IpFirewall,ProxyProtocol,ClientTcpRtt,ClientTlsCipher,ClientTlsClientHelloServerName,ClientTlsProtocol,ClientTlsStatus,OriginTcpRtt,OriginTlsCipher,OriginTlsFingerprint,OriginTlsMode,OriginTlsProtocol,OriginTlsStatus&timestamps=rfc3339"
      + name             = "Logpush-v2-maintkhandelwal3rev-Spectrum"
      + zone_id          = "REDACTED"
    }
  • For some reason this only happens when using GCS; it works fine with S3

References

The issues below describe the same problem; in the first one it is suggested to use depends_on, which I already have, but I still get the error.
FWIW, I have tried every combination of the depends_on array, i.e.:

  depends_on = [ data.google_storage_bucket_object_content.challenge, cloudflare_logpush_ownership_challenge.gcp_challenge ]
  depends_on = [ cloudflare_logpush_ownership_challenge.gcp_challenge ]
  depends_on = [ data.google_storage_bucket_object_content.challenge ]
@Kaitou786 Kaitou786 added kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 15, 2024

Community Note

Voting for Prioritization

  • Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
  • Please do not leave "+1" or other comments that do not add relevant new information or questions; they generate extra noise for issue followers and do not help prioritize the request.

Volunteering to Work on This Issue

  • If you are interested in working on this issue, please leave a comment.
  • If this would be your first contribution, please review the contribution guide.

@github-actions github-actions bot added the triage/debug-log-attached Indicates an issue or PR has a complete Terraform debug log. label Jul 15, 2024
@jacobbednarz
Member

similar to the linked issues, this isn't a provider problem but an issue with ordering and consistency in GCP. you'll need to either:

  1. insert an artificial wait so the challenge file contents are available; or
  2. do it in two runs.

this works as expected with backends like S3 or R2, so unfortunately there is little we can do here without one of the above workarounds for GCP.
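The first workaround above can be sketched with the hashicorp/time provider's time_sleep resource, which delays the data-source read until some time after the challenge is created. This is a hypothetical sketch, not the maintainers' prescribed fix: the resource name wait_for_challenge and the 30s duration are assumptions you would tune for your environment.

```hcl
# Sketch of workaround 1: wait for GCS to make the challenge file
# readable before the data source fetches it.
# Requires: terraform { required_providers { time = { source = "hashicorp/time" } } }
# The 30s value is a guess; adjust as needed.
resource "time_sleep" "wait_for_challenge" {
  create_duration = "30s"
  depends_on      = [cloudflare_logpush_ownership_challenge.gcp_challenge]
}

data "google_storage_bucket_object_content" "challenge" {
  name   = cloudflare_logpush_ownership_challenge.gcp_challenge.ownership_challenge_filename
  bucket = google_storage_bucket.cloudflare_logpush[0].name

  # Depend on the sleep instead of the challenge directly, so the
  # read happens only after the delay.
  depends_on = [time_sleep.wait_for_challenge]
}
```

The second workaround amounts to a targeted first apply (e.g. terraform apply -target=cloudflare_logpush_ownership_challenge.gcp_challenge) followed by a full apply once the challenge file exists in the bucket.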

@jacobbednarz jacobbednarz closed this as not planned Won't fix, can't repro, duplicate, stale Jul 15, 2024
@jacobbednarz jacobbednarz added triage/duplicate Indicates an issue is a duplicate of other open issue. and removed kind/bug Categorizes issue or PR as related to a bug. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. triage/debug-log-attached Indicates an issue or PR has a complete Terraform debug log. labels Jul 15, 2024