Google Cloud - Cloud Storage Bucket - gc-bucket-alsf-predicting-nuclear-speckles

```mermaid
flowchart LR
    subgraph upload
      files[("file(s)")]
      data-provider["👤 Data Provider"]
    end
    subgraph download
      data-receiver["👤 Data Receiver"]
      files2[("file(s)")]
    end
    bucket[\"Cloud Storage\n Bucket"/]
    files --> |to| bucket
    data-provider --> |uploads| files
    bucket --> |provides\naccess to| files2
    files2 --> |received by| data-receiver
```


A template leveraging Cookiecutter to create a Cloud Storage bucket on Google Cloud, along with a service account and related key, enabling data or file upload and use.

This repository uses Terraform to maintain cloud resources. See terraform/README.md for documentation on Terraform elements.

👥 Roles

See below for an overview of roles that provide important context for various parts of this repository.

  • Terraform Administrator: this role involves administering the cloud resources created with Terraform. Content found under the terraform directory and the steps under Tutorial: Bucket Infrastructure apply to this role.
  • Data Provider: this role involves using content under utilities/data-provider to synchronize (add, update, or remove) data in the bucket created by a Terraform Administrator. Instructions specific to this role are provided under utilities/data-provider/README.md.
  • Data Receiver: this role involves downloading content from the bucket after it has been uploaded by the Data Provider. Associated content may be found under utilities/data-receiver/README.md.

🛠️ Install

See below for the steps required for installation.

  1. Create a repository from this template.

  2. Clone the repository to your development environment.

  3. Install Terraform.

  4. Configure Terraform as necessary for your Google Cloud environment.
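The clone-and-verify portion of the steps above can be sketched as a shell function. The repository URL below is a placeholder, not a real repository; substitute the one you create from this template.

```shell
# Sketch of install steps 2-3; the URL is a placeholder.
install_steps() {
    # Step 2: clone the repository created from this template.
    git clone https://github.com/your-org/your-bucket-repo.git &&
    cd your-bucket-repo &&
    # Step 3: confirm Terraform is installed and on PATH.
    terraform version
}
```

Run `install_steps` from your workspace directory after substituting the real repository URL.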

📚 Tutorial

See below for brief tutorials on how to apply the work found in this repository to your needs.

🎛️ State Management

These steps cover how to store Terraform state files in association with the bucket infrastructure below. Terraform tracks the cloud resources it creates in a state file (.tfstate). If multiple people want to manage the same resources at the same time, they all need access to the same state file; otherwise they risk overwriting or corrupting state data. One option for sharing this state file is a Google Cloud Storage bucket, which is the approach used here.

Note: Terraform cloud state management must be set up before it is referenced as a backend.
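As an illustration of what that backend reference looks like, a gcs backend block along these lines (the bucket name and prefix here are hypothetical placeholders, not values from this repository) tells Terraform where to store its state:

```hcl
terraform {
  backend "gcs" {
    # Hypothetical values; use the bucket created by the
    # state-management step and a prefix of your choosing.
    bucket = "example-terraform-state-bucket"
    prefix = "terraform/state"
  }
}
```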

  1. Make adjustments to the content as necessary (for example, this readme file).
  2. Fill in terraform.tfvars with values that make sense for your initiative (note: these are filled in by default from Cookiecutter values).
  3. Terraform init: to initialize Terraform, use the command `terraform -chdir=terraform/state-management init`.
  4. Terraform plan: to plan the work and observe any needed changes, use the command `terraform -chdir=terraform/state-management plan`.
  5. Terraform apply: to apply the work and create resources, use the command `terraform -chdir=terraform/state-management apply`.
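The three Terraform commands above can be wrapped into one guarded shell function — a sketch that assumes you run it from the repository root with Google Cloud credentials already configured:

```shell
# Sketch: run the state-management workflow end to end.
run_state_management() {
    command -v terraform >/dev/null 2>&1 || { echo "terraform not found on PATH"; return 1; }
    [ -d terraform/state-management ] || { echo "run this from the repository root"; return 1; }
    terraform -chdir=terraform/state-management init &&
    terraform -chdir=terraform/state-management plan &&
    terraform -chdir=terraform/state-management apply
}
```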

🏗️ Bucket Infrastructure

These steps cover how to control the infrastructure found within this repository.

❗ Please note: after applying the Terraform code with the steps below, a service-account.json file is added to your local /utilities/data-provider directory. This file contains sensitive data that may enable access to your cloud resources and should not be checked into source control!
  1. Make adjustments to the content as necessary (for example, this readme file).
  2. Fill in terraform.tfvars with values that make sense for your initiative (note: these are filled in by default from Cookiecutter values).
  3. Terraform init: to initialize Terraform, use the command `terraform -chdir=terraform/operations init`.
  4. Terraform plan: to plan the work and observe any needed changes, use the command `terraform -chdir=terraform/operations plan`.
  5. Terraform apply: to apply the work and create resources, use the command `terraform -chdir=terraform/operations apply`.
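One way to reduce the risk of committing the generated key noted above is to make sure its path is listed in .gitignore — a sketch assuming the repository root as the working directory:

```shell
# Append the service account key path to .gitignore if it is not already there.
KEY_PATH="utilities/data-provider/service-account.json"
touch .gitignore
grep -qxF "$KEY_PATH" .gitignore || echo "$KEY_PATH" >> .gitignore
```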

When finished with the work, optionally use the following step.

  • OPTIONAL: Terraform destroy: to destroy all created resources, use the command `terraform -chdir=terraform/operations destroy`.

📁 Using the Bucket

These steps cover an example of how to use the bucket after creating the surrounding infrastructure.

⚠️ Please note: be certain any data you upload to Google Cloud abides by the data governance or privacy restrictions applicable to your environment. The steps below do not inherently check or validate that the data, the bucket, or the Google Cloud environment follows these policies.
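As an illustrative sketch only — the bucket name and file path below are placeholders, and utilities/data-provider/README.md documents the actual workflow — uploading a file with the gcloud CLI and the generated service account key might look like:

```shell
# Hypothetical upload example; bucket name and file path are placeholders.
upload_example() {
    command -v gcloud >/dev/null 2>&1 || { echo "gcloud CLI not installed"; return 1; }
    # Authenticate as the service account created by Terraform.
    gcloud auth activate-service-account \
        --key-file=utilities/data-provider/service-account.json &&
    # Copy a local file into the bucket.
    gcloud storage cp ./local-data.csv gs://your-bucket-name/local-data.csv
}
```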

🧑‍💻 Development

Development for this repository is assisted by the following technologies:

  • Poetry: Used to help configure pre-commit for local development work.
  • Pre-commit: Used for performing checks within the local development environment and via GitHub Actions automated testing. The following sub-items are used as checks through pre-commit-terraform and require local installation when testing outside of Dagger:
    • terraform_docs: Used to automatically generate Terraform-specific documentation.
    • tflint: Used to perform static analysis (linting) on Terraform content.
    • tfsec: Used to perform security-focused static analysis (security linting) on Terraform content.
  • Dagger: Used to help orchestrate reproducible testing within the local development environment and for automated testing.

About

GCP bucket for ALSF collaborations on predicting nuclear speckles.
