# Genesis DSL
Genesis uses a custom DSL to describe the environment life cycle. One such life cycle declaration is called an environment template.
An environment life cycle is defined as a set of workflows. One workflow describes a particular set of actions performed over an environment. For example, one environment could have 'create', 'destroy', 'scale-up', 'scale-down' and 'redeploy' workflows. Generally speaking, everything that happens to the environment must be described as an environment workflow.
Groovy is used as the host language for environment template declarations, so any valid Groovy code is acceptable.
```groovy
template {
    name("template_name")                      // required
    version("template_version")                // required
    dataSources {                              // optional
        dependentList("list") {
            list = [1,2,3,4]
        }
    }
    createWorkflow("create_workflow_name")     // required
    destroyWorkflow("destroy_workflow_name")   // required
    workflow("create_workflow") {
        variables {
            // workflow variables block
        }
        steps {
            // workflow steps block
        }
        onError {                              // optional
            // rescue steps block
        }
    }
    ...
}
```
The onError declaration is optional. Steps declared in this section are executed only if execution of any of the main steps (from the "steps" declaration) fails. The name and version fields cannot contain the symbol '/'.
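For example, a workflow with a rescue step might look like the following sketch. The step type teststep and its text parameter are borrowed from the sample steps used later in this document; any real step type would work the same way:

```groovy
workflow("create") {
    steps {
        teststep {
            text = "main action"             // if this step fails...
        }
    }
    onError {
        teststep {
            text = "cleanup after failure"   // ...this rescue step is executed
        }
    }
}
```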
### Data sources
Data sources provide values for variables. They should refer to Spring beans declared in Genesis. Example declaration:
```groovy
dataSources {
    dependentList("list") {
        values = [1,2,3,4]
    }
}
```
In this example, variables can use a data source of type dependentList with the name list. Data sources must extend the trait VarDataSource and have at least a getData method returning an associative array of values.
### staticList
Just a list of possible values. Example:
```groovy
staticList("list") {
    values = [1,2,3,4]
}
```
Parameter | Description
---|---
values | List of possible values in Groovy notation ([])
### nexus
```groovy
nexus("genesisNexus") {
    query = "http://nexus.domain/path-to-repo/path-to-artifacts"
    filter = "artifact-*-*.tar.gz"
    credential = "nexus-credential"
}
```
Parameter | Description
---|---
query | URL of a concrete artifact folder on Nexus
filter | Filename filter for artifact selection
credential | Name of the project-level credential to use when Nexus is configured to use basic authorization for access
### jenkins
```groovy
jenkins("artifactRepo") {
    url = "http://osg-ci.vm.griddynamics.net:8080/"
    jobName = "GDMaster_GD"
    filter = ".*1\\.1\\.0-SNAPSHOT\\.tar\\.gz.*"
}
```
Parameter | Description
---|---
url | Base URL where Jenkins is located
jobName | Name of the job that generates artifacts
filter | (optional) Regexp expression to filter artifacts generated by Jenkins
credentials | (optional) Name of a credentials record in the credentials store (its cloud provider must be "jenkins")
showBuildNumber | (optional) Indicates whether the title should contain the build number. Default: false
### nuget
```groovy
nuget {
    url = "http://nuget.example.com/v2/api"
    artifactId = "MyPackage"
    auth = "digest"
    credential = "nuget"
}
```
Parameter | Description
---|---
url | URL of the NuGet repository API root. Both v1 and v2 endpoints are supported
artifactId | Full name of the artifact
auth | HTTP authentication scheme (only basic and digest are supported for now). Skip it if there is no authentication on the NuGet side
credential | Name of the project-level credential to use when NuGet is configured to use HTTP authentication (its cloud provider must be set to 'nuget'). This parameter has no effect if auth is not set
### databags
```groovy
databags("genesisDatabases") {
    tags = ["genesis", "database"]
    source = "system"
}
```
Parameter | Description
---|---
tags | (optional) Tag filter (only databags that have all specified tags will be selected by the data source)
source | (optional) Databag source ("project" or "system"). Default: "project"
### cloudImages
```groovy
cloudImages("amazon") {
    filter = "us-west-1.*"
    account = [ "identity"   : "login",
                "credential" : "password",
                "provider"   : "aws-ec2",
                "endpoint"   : "https://ec2.us-east-1.amazonaws.com" ]
}
```
Parameter | Description
---|---
filter | (optional) Regexp to filter image ids
account | (optional) Cloud provider API account to be used. See the provisionVm step 'account' parameter description
### hardwareProfiles
```groovy
hardwareProfiles("amazonHardwares") {
    filter = "us-west-1.*"
    account = [ "identity"   : "login",
                "credential" : "password",
                "provider"   : "aws-ec2",
                "endpoint"   : "https://ec2.us-east-1.amazonaws.com" ]
}
```
Parameter | Description
---|---
filter | (optional) Regexp to filter hardware profile ids
account | (optional) Cloud provider API account to be used. See the provisionVm step 'account' parameter description
### Workflow declaration
A workflow declaration consists of two parts: variables and steps.
A workflow may be parameterized with a set of variables. For example, for a 'scale-up' workflow an appropriate variable could be the number of nodes to scale, and for a 'redeploy' workflow it could be the version of the application to redeploy.
```groovy
variables {
    variable("nodesCount").as(Integer).validator {it > 0 && it < 4}.description("Nodes count to scale")
    variable("appVersion").as(String).validator {["0.1","0.2"].contains(it)}.description("Version to redeploy")
}
```
All variables described in the variables declaration block will be available in the steps declaration block under the provided names, as Groovy variables bound to the execution scope. A variable can be optional and can have a default value:
```groovy
variable("optional").as(String).optional().defaultValue("foo")
```
A variable's value can be limited to a list of values. This list can be declared as a closure returning an associative array (String -> String only) of possible values:
```groovy
variable("fromClosure").oneOf {(["1": "Value One", "2": "Value Two"])}
```
In this case the value of the variable will be one of the keys of this associative array.
You can also declare a variable's values to come from a data source described above:
```groovy
variable("fromDS").dataSource("list")
```
In this case the value of the variable fromDS must be one of the keys of the associative array returned by the "list" data source's getData method.
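Putting the pieces together, a sketch combining a staticList data source declaration with a variable bound to it (the names sizes and nodeSize are illustrative):

```groovy
dataSources {
    staticList("sizes") {
        values = ["small", "medium", "large"]
    }
}
...
variables {
    // the user must pick one of the keys provided by the "sizes" data source
    variable("nodeSize").dataSource("sizes").description("Node size")
}
```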
Variables connected to a data source can depend on other variables:
```groovy
variable("fromDS1").dataSource("list")
variable("fromDS2").dataSource("list").dependsOn("fromDS1")
variable("fromDS3").dataSource("list").dependsOn("fromDS1", "fromDS2")
```
In this case the data source list must extend the trait DependentDataSource (the latter is required only for data sources used by dependent variables) and have three getData methods with the following signatures:
```scala
def getData: Map[String, String]
def getData(param: Any): Map[String, String]
def getData(param: Any, param1: Any): Map[String, String]
```
Of course, you can use different data sources for different variables. Note that in the last signature the parameter order must correspond to the order of the dependsOn parameters.
Since Genesis 2.1, variable names live in a separate namespace, so a variable named x must be referenced as:
```groovy
$vars.x
```
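For example, a variable declared as nodesCount would be consumed in a step declaration like this (teststep is the sample step type used later in this document):

```groovy
workflow("scale-up") {
    variables {
        variable("nodesCount").as(Integer).description("Nodes count to scale")
    }
    steps {
        teststep {
            text = $vars.nodesCount   // references the workflow variable "nodesCount"
        }
    }
}
```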
### Alternative variable declaration syntax
The alternative syntax allows you to:
- declare inline data sources
- declare variable dependencies in a clearer way
- use default values for mandatory variables

NOTE: it is possible to combine both syntaxes in the same "variables" block, but variables of one kind cannot be referenced from the other.
```groovy
variables {
    nodesCount = {
        description = "Nodes count to scale"
        isOptional = true
        clazz = Integer
        validator = {it > 0 && it < 4}
    }
}
```
Name | Mandatory | Description
---|---|---
description | No | Text to be shown for this variable
clazz | No | Type of the variable. Currently fully supported values are Integer and String. Default: String
isOptional | No | Specifies whether the variable is optional for workflow execution. Default: false
defaultValue | No | Default value of the variable
validator | No | Closure returning a boolean flag indicating whether the value should be considered valid. The value of the variable is accessible by the name "it" within the closure
dataSource | No | Can be one of two kinds: a reference to a globally declared data source, or an inline declaration. An inline declaration conforms to most of the global data source declaration (see the example below)
```groovy
variables {
    foo = {
        description = "Provide foo"
    }
    bar = {
        description = "Provide bar"
    }
    list = {
        description = "Select foo or bar"
        dataSource = staticList {    // inline declaration
            values = [ foo, bar ]    // references to previously defined variables
        }
    }
}
```
The example above demonstrates how to declare an inline data source with dependencies on other variables: the variable "list" depends on two variables, "foo" and "bar", which means that the possible values of "list" will be calculated only after foo and bar are provided.
A more complex example:
```groovy
variables {
    cloudAccount = {
        description = "Select cloud api account"
        dataSource = databags { tags = ["cloud", "account"] }
    }
    hardId = {
        description = "Supported hardware id within cloud api"
        dataSource = hardwareProfiles {
            account = $project.databag( cloudAccount )   // variable "cloudAccount" value is used to access databag properties
        }
    }
}
```
### Variable groups
Variable declarations can be included in groups. At most one variable per group can have a value. If a group is required, exactly one variable must have a value. Variable declarations inside groups have the same syntax as top-level ones: both syntax forms are supported. Nested groups are not supported.
Group declaration parameters:
Name | Type | Mandatory | Description
---|---|---|---
name | String | Yes | Name of the group, used to identify the group internally. Names must be unique
description | String | No | Text to be shown in the UI for this group. Equal to name by default (if not specified)
required | Boolean | No | If true, exactly one variable in the group must have a value
default | String | No | Name of the variable selected by default in the UI. If not set, no variable is selected initially
Example:
```groovy
variables {
    variable("a").description("A variable").as(Integer).validator([
        "A must be > 0" : { it > 0 }
    ])
    group(name: "group1") {
        b = {
            description = "B variable"
            clazz = Boolean
        }
        variable("c").description("C variable").as(Integer)
    }
    group(name: "group2", description: "group 2") {
        variable("x").description("string var")
    }
    group(name: "requiredGroup", required: true) {
        y = {clazz = Integer}
    }
    group(name: "groupWithDefault", default: "var1") {
        var1 = {clazz = Integer}
        var2 = {description = "variable 2"}
    }
}
```
### Steps
Each workflow consists of a set of dependent steps. One step describes a particular action performed over the environment. Steps may be executed in parallel. Each step has two common attributes, phase and preceding phases, plus a set of step-specific attributes.
```groovy
step_name {
    phase = "phaseC"                         // common attribute
    precedingPhases = ["phaseA", "phaseB"]   // common attribute
    skip = $vars.a > 1                       // common attribute (optional)
    attrInt = 1                              // step specific attribute
    attrString = "str"                       // step specific attribute
}
```
A step's phase determines when the step will be executed. The general rule is that a step is executed after all steps whose phases are listed in the step's precedingPhases have completed. If this parameter is omitted, Genesis assigns the step an automatic phase of the form "auto_<index>", where index is the position of the step in the workflow. If precedingPhases is also omitted, Genesis sets it to the previous phase in order. For example, this definition:
```groovy
steps {
    step {}
    step {}
}
```
is considered as:
```groovy
steps {
    step {
        phase = "auto_0"
    }
    step {
        phase = "auto_1"
        precedingPhases = ["auto_0"]
    }
}
```
The optional skip attribute can be specified if a step should not be executed under some condition. In the example above, the step is skipped if the numeric value of the variable named 'a' is greater than one.
If you have several steps with the same phase, they can be grouped in a phase container like this:
```groovy
steps {
    phase(name: "initial") {
        teststep {
            text = "test input"
        }
        teststep {
            text = "another input"
        }
    }
    phase(name: "second", after: ["initial"]) {
        teststep {
            text = "foo"
        }
    }
    teststep {
        text = "bbb"
        phase = "final"
        precedingPhases = ["initial", "second"]
    }
}
```
### Step results
Steps may have results of their execution (declared as properties in a concrete StepResult implementation).
Within a Genesis template it is possible to store the result of a step and then reference its value in other steps.
Every step implementation based on GenesisStep has a special parameter, exportTo. exportTo can contain a list of (source, target) pairs that defines which step result fields should be exported to the execution context.
Example:
The result of the provision step is represented by the ProvisionStepResult class:
```scala
case class ProvisionStepResult(step: Step, virtualMachines: Seq[VirtualMachine]) extends StepResult
```
This declaration states that the result contains a sequence of virtual machines.
The creator of a particular template can specify that this list should be stored in the execution context under a specific name, "databaseVms":
```groovy
provisionVms {
    phase = "provision"
    roleName = "dbnode"
    hardwareId = "2"
    imageId = "16"
    quantity = 1
    exportTo = ["virtualMachines" : "databaseVms"]
}
```
If ActionStepResult is used as the result of a step, the export accessor implicitly assumes that the ActionResult is the real value holder.
NOTE: During export the following conversions might take place:
- scala.collection.Traversable will be converted to java.util.List
- scala.collection.Map will be converted to java.util.Map
To access the context, a special closure notation is used. Within the closure block every exported attribute is accessible by name as a local variable. Special syntactic sugar allows accessing Scala Option content directly (note: if an Option[_] value is None, null will be returned).
Example:
To access the IP property of the first virtual machine from the list:
```groovy
{ databaseVms[0].ip.address }
```
This value can be used as an input parameter of any other step.
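For instance, the address exported by the provisionVms example above could feed another step; teststep here is the sample step type used elsewhere in this document:

```groovy
teststep {
    precedingPhases = ["provision"]       // run after the step that exported "databaseVms"
    text = { databaseVms[0].ip.address }  // closure resolved against the execution context
}
```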
### Project databags
To access databags defined for a project, use the notation $project.databag("bagname").property. This element can be used in a data source declaration, a variable default value declaration, or as an input parameter for any step:
```groovy
dataSources {
staticList("list") {
list = [$project.databag("bagname").key1]
}
}
...
variable("name").optional().defaultValue($project.databag("bagname").key2)
...
step {
value = $project.databag("bagname").key3
}
```
### Data bags: system-wide properties
A data bag is a named holder of related properties. Only a system administrator can create, read and modify databags. Databags may have tags.
To access databag properties from a template, use the notation *$system.databag['bag_name'].property*, or to get all properties at once as a java.util.Map use *$system.databag("bag_name")* (the latter notation might be useful when a property key contains special characters):
```groovy
dataSources {
jenkins("artifactRepo") {
url = $system.databag["global jenkins"].url
jobName = $system.databag("global jenkins")["genesis.app.jobname"] //as long as property key has dots, map access notation is used
}
}
```
### Current environment properties (since 1.4.0)
String attributes and other properties (such as **name**, **creator**, etc.) of the current environment (the environment on which the current workflow is executed) can be accessed in step declarations using the *$env* notation within a closure block, for example:
```groovy
step {
envName = {$env.name}
envAttr = {$env['attribute']}
}
```
Includes for groovy files stored in template repository
------------------------------------
Groovy definitions from external files (stored in the same template repository) can be included into template files using the **include** keyword. For example:
```groovy
include "file1.groovy"
template {
name("Includes")
version("0.1")
createWorkflow("create")
destroyWorkflow("destroy")
workflow("create") {
f("Included from create.")
}
workflow("destroy") {
f("Included from destroy.")
}
}
```
Here the function f() is called inside the **create** and **destroy** workflows. The definition of this function should be specified in **file1.groovy**, and a file named **file1.groovy** must exist in the template repository.
Limitations:
* The name specified in the include statement must match a filename stored in the template repository. Most template repositories don't store full absolute paths, so specifying relative or absolute paths will not work.
* The included file may contain any valid Groovy code, but not Genesis DSL.
* Definitions from an included file can be used only inside workflow declarations, not in the template declaration.
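As an illustration, file1.groovy for the example above might contain a plain Groovy function like the following sketch (only the name f is taken from the template above; the body is hypothetical):

```groovy
// file1.groovy -- plain Groovy code, not Genesis DSL
def f(String message) {
    println "f() called: ${message}"
}
```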