CloudSQL instances with auto disk resizing fail to update on the first try but succeed on retrying #2307

Open
parth-da opened this issue Aug 8, 2024 · 3 comments
Labels
awaiting-upstream: The issue cannot be resolved without action in another repository (may be owned by Pulumi).
kind/bug: Some behavior is incorrect or out of spec.

Comments

parth-da commented Aug 8, 2024

Describe what happened

CloudSQL for Postgres instances created with diskAutoresize: true always fail to update on the first attempt if the disk has grown since the last pulumi update, with:

    error: 1 error occurred:
    	* updating urn:pulumi:cluster.scratchc::cluster::gcp:sql/databaseInstance:DatabaseInstance::test-pg: 1 error occurred:
    	* Error, failed to update instance settings for : googleapi: Error 400: Invalid request: The disk size cannot decrease. Current size: 31 GB, requested: 10 GB.., invalid

Re-running pulumi up, however, succeeds.

It looks like pulumi caches the old disk size (not sure where, since I couldn't find it in the output of pulumi stack export) and sends it to GCP, which then rejects the request. The first (failed) run updates this cache, allowing subsequent attempts to succeed.

This is very similar to #549 except for the ignoreChanges part.

Sample program

Do the same steps as in #549 (comment) (removing the ignoreChanges lines).

Then re-run pulumi up after making a small change to the program; I was trying to set a database flag:

       databaseVersion: "POSTGRES_13",
+      databaseFlags: [{"temp_file_limit", 500000}],
       settings: {
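
For reference, here is a minimal sketch of the kind of program involved, assuming the usual @pulumi/gcp TypeScript API. The resource name matches the URN in the error above, but the region and tier are placeholders, not taken from the original report:

    import * as gcp from "@pulumi/gcp";

    // Hypothetical reproduction: a CloudSQL Postgres instance whose disk is
    // allowed to grow on its own. Once GCP autoresizes the disk past the
    // 10 GB declared here, the next `pulumi up` sends the stale 10 GB value
    // and GCP rejects it as a disk shrink.
    const instance = new gcp.sql.DatabaseInstance("test-pg", {
        region: "us-central1",        // placeholder region
        databaseVersion: "POSTGRES_13",
        settings: {
            tier: "db-f1-micro",      // placeholder tier
            diskSize: 10,             // initial size in GB
            diskAutoresize: true,     // GCP may grow the disk beyond 10 GB
        },
    });

The sample in #549 additionally passed an ignoreChanges resource option (presumably covering the disk size); per the steps above, that is omitted here so that pulumi keeps managing the disk size.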

Log output

    error: 1 error occurred:
    	* updating urn:pulumi:cluster.scratchc::cluster::gcp:sql/databaseInstance:DatabaseInstance::test-pg: 1 error occurred:
    	* Error, failed to update instance settings for : googleapi: Error 400: Invalid request: The disk size cannot decrease. Current size: 31 GB, requested: 10 GB.., invalid

Affected Resource(s)

CloudSQL for Postgres instances

Output of pulumi about

CLI
Version      3.112.0
Go Version   go1.22.1
Go Compiler  gc

Plugins
NAME    VERSION
nodejs  unknown

Host
OS       darwin
Version  14.6.1
Arch     arm64

This project is written in nodejs: executable='/nix/store/9zli090ri6wlhjla6bb51dg326ann92x-nodejs-20.12.2/bin/node' version='v20.12.2'

Additional context

No response

Contributing

Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).

parth-da added the kind/bug and needs-triage labels on Aug 8, 2024
iwahbe (Member) commented Aug 12, 2024

Hi @parth-da. Thanks for reporting an issue. I suspect you can work around this by running pulumi up --refresh instead of pulumi up. That will fetch the current disk size from GCP before applying the update.

iwahbe added the awaiting-upstream label, removed the needs-triage label, and removed their assignment on Aug 12, 2024
parth-da (Author) commented

Yes, we can, but is there a way to avoid refreshing the entire stack when running pulumi up? That is, is there a way to mark a particular resource as always refreshed when changes are applied? (I was unable to find one.) For now, configuring retries in our automated deployments works for us, but it is not ideal.

iwahbe (Member) commented Aug 13, 2024

It's not ideal, but you can use a targeted refresh:

pulumi refresh --target urn:pulumi:cluster.scratchc::cluster::gcp:sql/databaseInstance:DatabaseInstance::test-pg

(replacing the URN with your actual URN)

It's definitely a bug in the resource. There is no "in-code" resource option to specify a refresh before an update.
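
For automated deployments, one way to bake the refresh into the run itself is Pulumi's Automation API. A minimal sketch, assuming a pre-existing stack; the stack name and workDir are placeholders, and note this refreshes the whole stack rather than a single resource:

    import { LocalWorkspace } from "@pulumi/pulumi/automation";

    async function deploy() {
        // Select the existing stack; stackName and workDir are placeholders.
        const stack = await LocalWorkspace.selectStack({
            stackName: "cluster.scratchc",
            workDir: ".",
        });

        // Refresh first so the state picks up the autoresized disk size,
        // then apply the update: the in-code equivalent of
        // `pulumi refresh && pulumi up`.
        await stack.refresh({ onOutput: console.log });
        await stack.up({ onOutput: console.log });
    }

    deploy().catch((err) => {
        console.error(err);
        process.exit(1);
    });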
