Merge branch 'sarah/add-octopus-integration' of github.com:DataDog/integrations-core into sarah/add-waiting-task
sarah-witt committed Dec 19, 2024
2 parents 5d192c9 + e0312a0 commit e414f2f
Showing 53 changed files with 1,971 additions and 211 deletions.
113 changes: 109 additions & 4 deletions octopus_deploy/assets/configuration/spec.yaml
@@ -20,7 +20,45 @@ files:
- name: spaces
display_priority: 5
description: |
Filter your integration by spaces.
Optional configuration specifying which spaces to process. If not configured,
all spaces and their corresponding project groups and projects will be processed.
The 'include' key lists the regular expressions of the spaces for which metrics are to be reported,
along with the configuration to be applied to each of them. Each space may have a 'project_groups'-like
configuration, enabling or disabling metric collection for the project groups which match that condition;
for further details see the 'project_groups' section below. If no configuration is specified alongside a
regular expression, matching spaces are processed with the default configuration.
The spaces will be processed in the order indicated in the 'include'.
If a space is matched on an 'include' key, it will only be processed there and not in a later 'include'
that it might match on.
The 'exclude' key will indicate the regular expressions of those spaces for which metrics
are not to be reported.
The excludes will have priority over the includes, that is, if a space matches an exclude, it will not be
processed even if it matches an include. The 'include' key must be used if using the 'exclude' key.
The 'limit' key will allow limiting the number of spaces processed to avoid a combinatorial explosion of tags
associated with a metric.
The 'interval' key will indicate the validity time of the last list of spaces obtained through the endpoint.
If 'interval' is not indicated, the list of spaces will be obtained each time the check is executed
and will not be cached.
In the following example, only the space named "default" will be collected. Additionally, only the project groups
starting with "test" in that space will be collected. All other project groups and spaces will be ignored.
Furthermore, the cache will be valid for 1 minute.
spaces:
  limit: 3
  include:
    - 'default':
        project_groups:
          limit: 5
          include:
            - 'test.*'
          interval: 60
  interval: 60
value:
type: object
properties:
@@ -44,7 +82,42 @@ files:
- name: project_groups
display_priority: 5
description: |
Filter your integration by project groups and projects.
Optional configuration specifying which project groups to process. If not configured,
all project groups will be processed.
The 'include' key lists the regular expressions of the project groups for which metrics are to be
reported, along with the configuration to be applied to each of them. Each project group may have a
'projects'-like configuration, enabling or disabling metric collection for the projects which match
that condition; for further details see the 'projects' section. If no configuration is specified
alongside a regular expression, matching project groups are processed with the default configuration.
The project groups will be processed in the order indicated in the 'include'.
If a project group is matched on an 'include' key, it will only be processed there and not in a later 'include'
that it might match on.
The 'exclude' key will indicate the regular expressions of those project groups for which metrics
are not to be reported.
The excludes will have priority over the includes, that is, if a project group matches an exclude, it will not be
processed even if it matches an include. The 'include' key must be used if using the 'exclude' key.
The 'limit' key will allow limiting the number of project groups processed to avoid a combinatorial explosion of
tags associated with a metric.
The 'interval' key will indicate the validity time of the last list of project groups obtained through the
endpoint. If 'interval' is not indicated, the list of project groups will be obtained each time the check is
executed and will not be cached.
In the following example, all project groups will be processed except those whose name begins with 'tmp_'
up to a maximum of 10 project groups.
Furthermore, the cache will be valid for 1 minute.
project_groups:
  limit: 10
  include:
    - '.*'
  exclude:
    - 'tmp_.*'
  interval: 60
value:
type: object
properties:
@@ -68,7 +141,32 @@ files:
- name: projects
display_priority: 5
description: |
Filter your integration by projects.
Optional configuration specifying which projects to process. If not configured,
all projects will be processed.
The 'include' key lists the regular expressions of the projects for which metrics are to be reported.
The projects will be processed in the order indicated in the 'include'.
If a project is matched on an 'include' key, it will only be processed there and not in a later 'include'
that it might match on.
The 'exclude' key will indicate the regular expressions of those projects for which metrics
are not to be reported.
The excludes will have priority over the includes, that is, if a project matches an exclude, it will not be
processed even if it matches an include. The 'include' key must be used if using the 'exclude' key.
The 'limit' key will allow limiting the number of projects processed to avoid a combinatorial explosion of tags
associated with a metric.
The 'interval' key will indicate the validity time of the last list of projects obtained through the endpoint.
If 'interval' is not indicated, the list of projects will be obtained each time the check is executed
and will not be cached.
In the following example, only the project named 'my-project' will be collected.
projects:
  include:
    - 'my-project'
value:
type: object
properties:
@@ -89,6 +187,13 @@ files:
- name: interval
type: integer
example: {}
- name: paginated_limit
description: |
Sets the number of items API calls should return at a time. Default is 30.
value:
example: 30
type: integer
required: false
- template: instances/default
- template: instances/http
overrides:
@@ -106,4 +211,4 @@ files:
example:
- type: integration
source: octopus-deploy
service: <SERVICE_NAME>
service: <SERVICE_NAME>
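
The spaces, project_groups, and projects options above all share the same include/exclude semantics, and the check wires them into the base package's Discovery helper (visible in the check.py diff below). The following standalone sketch only illustrates the documented matching rules; filter_names is a hypothetical helper, re.search matching is an assumption, and the per-pattern configuration that 'include' entries may carry is ignored here.

import re

def filter_names(names, include, exclude=(), limit=None):
    """Keep names per the documented rules: excludes beat includes,
    includes are tried in order, and 'limit' caps the result."""
    kept = []
    for name in names:
        if limit is not None and len(kept) >= limit:
            break  # 'limit' bounds how many names are processed
        if any(re.search(p, name) for p in exclude):
            continue  # an exclude match wins even if an include also matches
        if any(re.search(p, name) for p in include):
            kept.append(name)  # the first matching include claims the name
    return kept

# Mirrors the project_groups example above: everything except 'tmp_*', capped at 10.
print(filter_names(['backend', 'tmp_scratch', 'frontend'],
                   include=['.*'], exclude=['tmp_.*'], limit=10))
# -> ['backend', 'frontend']
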
62 changes: 44 additions & 18 deletions octopus_deploy/datadog_checks/octopus_deploy/check.py
@@ -58,6 +58,7 @@ def _update_times(self):

def _process_endpoint(self, endpoint, params=None, report_service_check=False):
try:
params = {} if params is None else params
response = self.http.get(f"{self.config.octopus_endpoint}/{endpoint}", params=params)
response.raise_for_status()
if report_service_check:
@@ -73,10 +74,31 @@ def _process_endpoint(self, endpoint, params=None, report_service_check=False):
self.warning("Failed to access endpoint: %s: %s", endpoint, e)
return {}

    def _process_paginated_endpoint(self, endpoint, params=None, report_service_check=False):
        skip = 0
        take = self.config.paginated_limit
        num_pages = 1  # assume a single page until the first response reports otherwise
        num_pages_seen = 0
        all_items = []
        params = {} if params is None else params
        while num_pages_seen < num_pages:
            params['skip'] = skip
            params['take'] = take

            response_json = self._process_endpoint(endpoint, params=params, report_service_check=report_service_check)
            if response_json == {}:
                # A failed request: propagate the empty payload instead of partial results.
                return response_json
            items = response_json.get("Items", [])  # default to [] so a page without Items cannot break concatenation
            num_pages_seen += 1
            num_pages = response_json.get("NumberOfPages", num_pages)
            skip += self.config.paginated_limit
            all_items = all_items + items
        return {"Items": all_items}

def _init_spaces_discovery(self):
self.log.info("Spaces discovery: %s", self.config.spaces)
self._spaces_discovery = Discovery(
lambda: self._process_endpoint("api/spaces", report_service_check=True).get('Items', []),
lambda: self._process_paginated_endpoint("api/spaces", report_service_check=True).get('Items', []),
limit=self.config.spaces.limit,
include=normalize_discover_config_include(self.config.spaces),
exclude=self.config.spaces.exclude,
@@ -88,9 +110,9 @@ def _init_default_project_groups_discovery(self, space_id):
self.log.info("Default Project Groups discovery: %s", self.config.project_groups)
if space_id not in self._default_project_groups_discovery:
self._default_project_groups_discovery[space_id] = Discovery(
lambda: self._process_endpoint(f"api/{space_id}/projectgroups", report_service_check=True).get(
'Items', []
),
lambda: self._process_paginated_endpoint(
f"api/{space_id}/projectgroups", report_service_check=True
).get('Items', []),
limit=self.config.project_groups.limit,
include=normalize_discover_config_include(self.config.project_groups),
exclude=self.config.project_groups.exclude,
@@ -102,9 +124,9 @@ def _init_project_groups_discovery(self, space_id, project_groups_config):
self.log.info("Project Groups discovery: %s", project_groups_config)
if space_id not in self._project_groups_discovery:
self._project_groups_discovery[space_id] = Discovery(
lambda: self._process_endpoint(f"api/{space_id}/projectgroups", report_service_check=True).get(
'Items', []
),
lambda: self._process_paginated_endpoint(
f"api/{space_id}/projectgroups", report_service_check=True
).get('Items', []),
limit=project_groups_config.limit,
include=normalize_discover_config_include(project_groups_config),
exclude=project_groups_config.exclude,
@@ -118,8 +140,9 @@ def _init_default_projects_discovery(self, space_id, project_group_id):
self._default_projects_discovery[space_id] = {}
if project_group_id not in self._default_projects_discovery[space_id]:
self._default_projects_discovery[space_id][project_group_id] = Discovery(
lambda: self._process_endpoint(
f"api/{space_id}/projectgroups/{project_group_id}/projects", report_service_check=True
lambda: self._process_paginated_endpoint(
f"api/{space_id}/projectgroups/{project_group_id}/projects",
report_service_check=True,
).get('Items', []),
limit=self.config.projects.limit,
include=normalize_discover_config_include(self.config.projects),
@@ -134,8 +157,9 @@ def _init_projects_discovery(self, space_id, project_group_id, projects_config):
self._projects_discovery[space_id] = {}
if project_group_id not in self._projects_discovery[space_id]:
self._projects_discovery[space_id][project_group_id] = Discovery(
lambda: self._process_endpoint(
f"api/{space_id}/projectgroups/{project_group_id}/projects", report_service_check=True
lambda: self._process_paginated_endpoint(
f"api/{space_id}/projectgroups/{project_group_id}/projects",
report_service_check=True,
).get('Items', []),
limit=projects_config.limit,
include=normalize_discover_config_include(projects_config),
@@ -152,7 +176,7 @@ def _process_spaces(self):
else:
spaces = [
(None, space.get("Name"), space, None)
for space in self._process_endpoint("api/spaces", report_service_check=True).get('Items', [])
for space in self._process_paginated_endpoint("api/spaces", report_service_check=True).get('Items', [])
]
self.log.debug("Monitoring %s spaces", len(spaces))
for _, _, space, space_config in spaces:
@@ -178,7 +202,9 @@ def _process_project_groups(self, space_id, space_name, project_groups_config):
else:
project_groups = [
(None, project_group.get("Name"), project_group, None)
for project_group in self._process_endpoint(f"api/{space_id}/projectgroups").get('Items', [])
for project_group in self._process_paginated_endpoint(f"api/{space_id}/projectgroups").get(
'Items', []
)
]
self.log.debug("Monitoring %s Project Groups", len(project_groups))
for _, _, project_group, project_group_config in project_groups:
@@ -209,7 +235,7 @@ def _process_projects(self, space_id, space_name, project_group_id, project_grou
else:
projects = [
(None, project.get("Name"), project, None)
for project in self._process_endpoint(
for project in self._process_paginated_endpoint(
f"api/{space_id}/projectgroups/{project_group_id}/projects"
).get('Items', [])
]
@@ -230,7 +256,7 @@ def _process_projects(self, space_id, space_name, project_group_id, project_grou
def _process_queued_and_running_tasks(self, space_id, space_name, project_id, project_name):
self.log.debug("Collecting running and queued tasks for project %s", project_name)
params = {'project': project_id, 'states': ["Queued", "Executing"]}
response_json = self._process_endpoint(f"api/{space_id}/tasks", params)
response_json = self._process_paginated_endpoint(f"api/{space_id}/tasks", params)
self._process_tasks(space_id, space_name, project_name, response_json.get('Items', []))

def _process_completed_tasks(self, space_id, space_name, project_id, project_name):
@@ -240,7 +266,7 @@ def _process_completed_tasks(self, space_id, space_name, project_id, project_nam
'fromCompletedDate': self._from_completed_time,
'toCompletedDate': self._to_completed_time,
}
response_json = self._process_endpoint(f"api/{space_id}/tasks", params)
response_json = self._process_paginated_endpoint(f"api/{space_id}/tasks", params)
self._process_tasks(space_id, space_name, project_name, response_json.get('Items', []))

def _calculate_task_times(self, task):
@@ -309,7 +335,7 @@ def _process_tasks(self, space_id, space_name, project_name, tasks_json):
def _collect_server_nodes_metrics(self):
self.log.debug("Collecting server node metrics.")
url = "api/octopusservernodes"
response_json = self._process_endpoint(url)
response_json = self._process_paginated_endpoint(url)
server_nodes = response_json.get('Items', [])

for server_node in server_nodes:
@@ -352,7 +378,7 @@ def _collect_new_events(self, space_id, space_name):
'to': self._to_completed_time,
'eventCategories': list(EVENT_TO_ALERT_TYPE.keys()),
}
events = self._process_endpoint(url, params=params).get('Items', [])
events = self._process_paginated_endpoint(url, params=params).get('Items', [])
tags = self._base_tags + [f"space_name:{space_name}"]

for event in events:
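
Taken together, these call-site changes route every list endpoint through _process_paginated_endpoint, which drives the Octopus API's skip/take pagination: it requests paginated_limit items per call, reads NumberOfPages from each response, and concatenates the Items arrays. Below is a minimal self-contained sketch of that loop with the HTTP layer stubbed out; fetch_page is a hypothetical stand-in for the check's _process_endpoint.

def fetch_all_items(fetch_page, take=30):
    # Mirrors the loop above: assume one page until the first response
    # reports otherwise, then keep paging with an increasing 'skip' offset.
    skip = 0
    num_pages = 1
    pages_seen = 0
    all_items = []
    while pages_seen < num_pages:
        page = fetch_page(skip=skip, take=take)
        if not page:
            return {}  # a failed request discards partial results, as in the check
        all_items.extend(page.get("Items", []))
        num_pages = page.get("NumberOfPages", num_pages)
        pages_seen += 1
        skip += take
    return {"Items": all_items}

# With 75 items and the default take of 30 (paginated_limit), the loop issues
# skip=0, skip=30, skip=60 and returns all 75 items in one combined payload.
items = list(range(75))

def fake_page(skip, take):
    return {"Items": items[skip:skip + take], "NumberOfPages": -(-len(items) // take)}

assert fetch_all_items(fake_page)["Items"] == items
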
octopus_deploy/datadog_checks/octopus_deploy/config_models/defaults.py
@@ -52,6 +52,10 @@ def instance_min_collection_interval():
return 15


def instance_paginated_limit():
return 30


def instance_persist_connections():
return False

octopus_deploy/datadog_checks/octopus_deploy/config_models/instance.py
@@ -110,6 +110,7 @@ class InstanceConfig(BaseModel):
min_collection_interval: Optional[float] = None
ntlm_domain: Optional[str] = None
octopus_endpoint: str
paginated_limit: Optional[int] = None
password: Optional[str] = None
persist_connections: Optional[bool] = None
project_groups: Optional[ProjectGroups] = None
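
With the new model field, paginated_limit is validated as an optional integer on the instance config, and the generated default shown above (instance_paginated_limit) supplies 30 when the option is omitted. The snippet below is a trimmed, hypothetical illustration of that behavior: the shipped model carries many more fields, and the real defaulting happens inside the generated config models rather than at the call site.

from typing import Optional

from pydantic import BaseModel

class InstanceConfig(BaseModel):  # trimmed to the fields relevant here
    octopus_endpoint: str
    paginated_limit: Optional[int] = None

config = InstanceConfig(octopus_endpoint='http://localhost:80')
# Fall back to the generated default when the option is unset:
take = config.paginated_limit if config.paginated_limit is not None else 30
print(take)  # -> 30
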