Terraform module for Snowflake database management.
- Creates Snowflake database
- Can create custom Snowflake database roles with role-to-role assignments
- Can create a set of default database roles to simplify access management:
  - `READONLY` - granted `USAGE` and `MONITOR` privileges on the database
  - `TRANSFORMER` - allows creating schemas and some Snowflake objects in them
  - `ADMIN` - full access, including database options like `data_retention_time_in_days`
- Can create a number of schemas in the database with their specific stages and access roles
- Can grant database ownership to a specified account role
```hcl
module "snowflake_database" {
  source = "getindata/database/snowflake"
  # version = "x.x.x"

  name = "MY_DB"

  is_transient                = false
  data_retention_time_in_days = 1

  create_default_roles = true
}
```
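The module also accepts `roles` and `schemas` maps. The sketch below is illustrative only - the nested attributes shown (such as `granted_to_roles`) are assumptions; consult the full `roles` and `schemas` type definitions in `variables.tf` for the supported fields:

```hcl
module "snowflake_database" {
  source = "getindata/database/snowflake"
  # version = "x.x.x"

  name                 = "ANALYTICS_DB"
  create_default_roles = true

  # Grant ownership of the database to an existing account role.
  database_ownership_grant = "SYSADMIN"

  # Illustrative custom database role and schemas - adapt to your needs.
  roles = {
    custom_access = {
      granted_to_roles = ["EXAMPLE_ACCOUNT_ROLE"]
    }
  }

  schemas = {
    raw     = {}
    staging = {}
  }
}
```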
Due to breaking changes in the Snowflake provider and additional code optimizations, breaking changes were introduced in v2.0.0 of this module.
List of code and variable (API) changes:

- Switched to the `snowflake_database_role` module to leverage the new `database_roles` mechanism
- Database `default_roles` and `custom_roles` are now managed by the `getindata/database_role/snowflake` module
- The `snowflake_database` resource was updated to use newly introduced changes in the Snowflake provider
- The `snowflake_schema` resource was updated to use newly introduced changes in the Snowflake provider
- The variable `add_grants_to_existing_objects` was removed as it is no longer needed
- The minimum Snowflake provider version is `0.90.0`
For more information, refer to `variables.tf`, the list of inputs below, and the Snowflake provider documentation.

When upgrading from v1.x, expect most of the resources to be recreated. If recreation is impossible, it is possible to import some existing resources.
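As a sketch of that import path - assuming Terraform >= 1.5 `import` blocks, a module instance named `snowflake_database`, and an existing database named `MY_DB` (adjust both to your configuration):

```hcl
# Hypothetical addresses - verify the resource address with
# `terraform state list` or the plan output before importing.
import {
  to = module.snowflake_database.snowflake_database.this
  id = "MY_DB"
}
```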
Due to the replacement of null-label (`context.tf`) with the context provider, some breaking changes were introduced in v3.0.0 of this module.
List of code and variable (API) changes:

- Removed the `context.tf` file (a single-file module with additional variables), which implied the removal of all of its variables (except `name`):
  - `descriptor_formats`
  - `label_value_case`
  - `label_key_case`
  - `id_length_limit`
  - `regex_replace_chars`
  - `label_order`
  - `additional_tag_map`
  - `tags`
  - `labels_as_tags`
  - `attributes`
  - `delimiter`
  - `stage`
  - `environment`
  - `tenant`
  - `namespace`
  - `enabled`
  - `context`
- Removed support for the `enabled` flag - this might cause some backward compatibility issues with the Terraform state (proper `moved` clauses were added to minimize the impact), but proceed with caution
- Additional `context` provider configuration
- New variables were added to allow naming configuration via the `context` provider:
  - `context_templates`
  - `name_scheme`
- New variable `drop_public_schema_on_creation`, which is `true` by default
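A minimal sketch of the new naming configuration; the template key, the template string, and the `name_scheme` attributes shown are illustrative assumptions about the context provider's templating, not values prescribed by this module:

```hcl
module "snowflake_database" {
  source = "getindata/database/snowflake"
  # version = "x.x.x"

  name = "MY_DB"

  # Illustrative: a context template keyed by an arbitrary name,
  # referenced from name_scheme below.
  context_templates = {
    snowflake-database = "{{.environment}}_{{.name}}"
  }

  name_scheme = {
    context_template_name = "snowflake-database"
    extra_values = {
      environment = "dev"
    }
  }

  drop_public_schema_on_creation = true
}
```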
## Inputs

Name | Description | Type | Default | Required |
---|---|---|---|---|
catalog | The database parameter that specifies the default catalog to use for Iceberg tables | string | null | no |
comment | Specifies a comment for the database | string | null | no |
context_templates | Map of context templates used for naming conventions - this variable supersedes naming_scheme.properties and naming_scheme.delimiter configuration | map(string) | {} | no |
create_default_roles | Whether the default roles should be created | bool | false | no |
data_retention_time_in_days | Number of days for which Snowflake retains historical data for performing Time Travel actions (SELECT, CLONE, UNDROP) on the object. A value of 0 effectively disables Time Travel for the specified database, schema, or table | number | null | no |
database_ownership_grant | The name of the account role to which database privileges will be granted | string | null | no |
default_ddl_collation | Specifies a default collation specification for all schemas and tables added to the database | string | null | no |
drop_public_schema_on_creation | Whether the PUBLIC schema should be dropped after the database creation | bool | true | no |
enable_console_output | If true, enables stdout/stderr fast path logging for anonymous stored procedures | bool | null | no |
external_volume | The database parameter that specifies the default external volume to use for Iceberg tables | string | null | no |
is_transient | Specifies a database as transient. Transient databases do not have a Fail-safe period so they do not incur additional storage costs once they leave Time Travel; however, this means they are also not protected by Fail-safe in the event of a data loss | bool | null | no |
log_level | Specifies the severity level of messages that should be ingested and made available in the active event table. Valid options are: [TRACE DEBUG INFO WARN ERROR FATAL OFF] | string | null | no |
max_data_extension_time_in_days | Object parameter that specifies the maximum number of days for which Snowflake can extend the data retention period for tables in the database to prevent streams on the tables from becoming stale | number | null | no |
name | Name of the resource | string | n/a | yes |
name_scheme | Naming scheme configuration for the resource. This configuration is used to generate names using the context provider: - properties - list of properties to use when creating the name - superseded by var.context_templates - delimiter - delimiter used to create the name from properties - superseded by var.context_templates - context_template_name - name of the context template used to create the name - replace_chars_regex - regex to use for replacing characters in property-values created by the provider - any characters that match the regex will be removed from the name - extra_values - map of extra label-value pairs, used to create a name | object({…}) | {} | no |
quoted_identifiers_ignore_case | If true, the case of quoted identifiers is ignored | bool | null | no |
replace_invalid_characters | Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for an Iceberg table | bool | null | no |
roles | Roles created in the database scope | map(object({…})) | {} | no |
schemas | Schemas to be created in the database | map(object({…})) | {} | no |
storage_serialization_policy | The storage serialization policy for Iceberg tables that use Snowflake as the catalog. Valid options are: [COMPATIBLE OPTIMIZED] | string | null | no |
suspend_task_after_num_failures | How many times a task must fail in a row before it is automatically suspended. 0 disables auto-suspending | number | null | no |
task_auto_retry_attempts | Maximum automatic retries allowed for a user task | number | null | no |
trace_level | Controls how trace events are ingested into the event table. Valid options are: [ALWAYS ON_EVENT OFF] | string | null | no |
user_task_managed_initial_warehouse_size | The initial size of warehouse to use for managed warehouses in the absence of history | string | null | no |
user_task_minimum_trigger_interval_in_seconds | Minimum amount of time between Triggered Task executions in seconds | number | null | no |
user_task_timeout_ms | User task execution timeout in milliseconds | number | null | no |
## Modules

Name | Source | Version |
---|---|---|
roles_deep_merge | Invicton-Labs/deepmerge/null | 0.1.5 |
snowflake_custom_role | getindata/database-role/snowflake | 2.0.1 |
snowflake_default_role | getindata/database-role/snowflake | 2.0.1 |
snowflake_schema | getindata/schema/snowflake | 3.0.0 |
## Outputs

Name | Description |
---|---|
catalog | The database parameter that specifies the default catalog to use for Iceberg tables |
data_retention_time_in_days | Data retention days for the database |
database_ownership_grant | The name of the account role to which database ownership will be granted |
database_roles | Snowflake Database roles |
default_ddl_collation | Specifies a default collation specification for all schemas and tables added to the database. |
enable_console_output | If true, enables stdout/stderr fast path logging for anonymous stored procedures |
external_volume | The database parameter that specifies the default external volume to use for Iceberg tables |
is_transient | Specifies a database as transient. Transient databases do not have a Fail-safe period so they do not incur additional storage costs once they leave Time Travel; however, this means they are also not protected by Fail-safe in the event of a data loss |
log_level | Specifies the severity level of messages that should be ingested and made available in the active event table. Valid options are: [TRACE DEBUG INFO WARN ERROR FATAL OFF] |
max_data_extension_time_in_days | Object parameter that specifies the maximum number of days for which Snowflake can extend the data retention period for tables in the database to prevent streams on the tables from becoming stale |
name | Name of the database |
quoted_identifiers_ignore_case | If true, the case of quoted identifiers is ignored |
replace_invalid_characters | Specifies whether to replace invalid UTF-8 characters with the Unicode replacement character (�) in query results for an Iceberg table |
schemas | This database's schemas |
storage_serialization_policy | The storage serialization policy for Iceberg tables that use Snowflake as the catalog. Valid options are: [COMPATIBLE OPTIMIZED] |
suspend_task_after_num_failures | How many times a task must fail in a row before it is automatically suspended. 0 disables auto-suspending |
task_auto_retry_attempts | Maximum automatic retries allowed for a user task |
trace_level | Controls how trace events are ingested into the event table. Valid options are: [ALWAYS ON_EVENT OFF] |
user_task_managed_initial_warehouse_size | The initial size of warehouse to use for managed warehouses in the absence of history |
user_task_minimum_trigger_interval_in_seconds | Minimum amount of time between Triggered Task executions in seconds |
user_task_timeout_ms | User task execution timeout in milliseconds |
## Providers

Name | Version |
---|---|
context | >=0.4.0 |
snowflake | ~> 0.95 |
## Requirements

Name | Version |
---|---|
terraform | >= 1.3 |
context | >=0.4.0 |
snowflake | ~> 0.95 |
## Resources

Name | Type |
---|---|
snowflake_database.this | resource |
snowflake_grant_ownership.database_ownership | resource |
context_label.this | data source |
Contributions are very welcome!
Start by reviewing the contribution guide and our code of conduct. After that, start coding and ship your changes by creating a new PR.
Apache 2 Licensed. See LICENSE for full details.
Made with contrib.rocks.