Release 1.7.0
Merge branch develop@bfa12b006fd326cb89201ad63c8328e78175710f into main
lae committed Jan 13, 2022
2 parents 0e91945 + bfa12b0 commit f3bcd26
Showing 38 changed files with 568 additions and 1,102 deletions.
2 changes: 2 additions & 0 deletions .ansible-lint
@@ -0,0 +1,2 @@
skip_list:
- no-handler
25 changes: 25 additions & 0 deletions LICENSE_IMPORTS
@@ -0,0 +1,25 @@
==============================================================================

The following files are licensed under APL2:

library/pve_ceph_volume.py (This is a combined version of the original files module_utils/ca_common.py and library/ceph_volume.py)

The license text from ceph/ceph-ansible is as follows:

Copyright [2014] [Sébastien Han]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

==============================================================================

# Licenses for libraries imported in the future should go here
168 changes: 107 additions & 61 deletions README.md
@@ -1,18 +1,36 @@
[![Build Status](https://travis-ci.org/lae/ansible-role-proxmox.svg?branch=master)](https://travis-ci.org/lae/ansible-role-proxmox)
[![Galaxy Role](https://img.shields.io/badge/ansible--galaxy-proxmox-blue.svg)](https://galaxy.ansible.com/lae/proxmox/)

lae.proxmox
===========

Installs and configures a Proxmox 5.x/6.x cluster with the following features:
Installs and configures Proxmox Virtual Environment 6.x/7.x on Debian servers.

- Ensures all hosts can connect to one another as root
- Ability to create/manage groups, users, access control lists and storage
- Ability to create or add nodes to a PVE cluster
- Ability to setup Ceph on the nodes
- IPMI watchdog support
- BYO HTTPS certificate support
- Ability to use either `pve-no-subscription` or `pve-enterprise` repositories
This role allows you to deploy and manage single-node PVE installations and PVE
clusters (3+ nodes) on Debian Buster (10) and Bullseye (11). You are able to
configure the following with the assistance of this role:

- PVE RBAC definitions (roles, groups, users, and access control lists)
- PVE Storage definitions
- [`datacenter.cfg`][datacenter-cfg]
- HTTPS certificates for the Proxmox Web GUI (BYO)
- PVE repository selection (e.g. `pve-no-subscription` or `pve-enterprise`)
- Watchdog modules (IPMI and NMI) with applicable pve-ha-manager config
- ZFS module setup and ZED notification email

With clustering enabled, this role does (or allows you to do) the following:

- Ensure all hosts can connect to one another as root over SSH
- Initialize a new PVE cluster (or possibly adopt an existing one)
- Create or add new nodes to a PVE cluster
- Setup Ceph on a PVE cluster
- Create and manage high availability groups

## Support/Contributing

For support or if you'd like to contribute to this role but want guidance, feel
free to join this Discord server: https://discord.gg/cjqr6Fg. Please note, this
is a temporary invite, so you'll need to wait for @lae to assign you a role,
otherwise Discord will remove you from the server when you log out.

## Quickstart

@@ -30,20 +48,15 @@ Copy the following playbook to a file like `install_proxmox.yml`:
- hosts: all
become: True
roles:
- {
role: geerlingguy.ntp,
ntp_manage_config: true,
ntp_servers: [
clock.sjc.he.net,
clock.fmt.he.net,
clock.nyc.he.net
]
}
- {
role: lae.proxmox,
pve_group: all,
pve_reboot_on_kernel_update: true
}
    - role: geerlingguy.ntp
      ntp_manage_config: true
      ntp_servers:
        - clock.sjc.he.net
        - clock.fmt.he.net
        - clock.nyc.he.net
    - role: lae.proxmox
      pve_group: all
      pve_reboot_on_kernel_update: true

Install this role and a role for configuring NTP:
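
Assuming the role names used above, the installation would look something like:

```
ansible-galaxy install lae.proxmox geerlingguy.ntp
```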

@@ -63,12 +76,7 @@ file containing a list of hosts).
Once complete, you should be able to access your Proxmox VE instance at
`https://$SSH_HOST_FQDN:8006`.

## Support/Contributing

For support or if you'd like to contribute to this role but want guidance, feel
free to join this Discord server: https://discord.gg/cjqr6Fg

## Deploying a fully-featured PVE 5.x cluster
## Deploying a fully-featured PVE 7.x cluster

Create a new playbook directory. We call ours `lab-cluster`. Our playbook will
eventually look like this, but yours does not have to follow all of the steps:
@@ -195,10 +203,6 @@ pvecluster. Here, a file lookup is used to read the contents of a file in the
playbook, e.g. `files/pve01/lab-node01.key`. You could possibly just use host
variables instead of files, if you prefer.
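
For example, a sketch of such a lookup using the HTTPS certificate variables
described below (the `pve01` path is illustrative):

```
pve_ssl_certificate: "{{ lookup('file', 'pve01/' + inventory_hostname + '.crt') }}"
pve_ssl_private_key: "{{ lookup('file', 'pve01/' + inventory_hostname + '.key') }}"
```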

`pve_ssl_letsencrypt` allows to obtain a Let's Encrypt SSL certificate for
pvecluster. The Ansible role [systemli.letsencrypt](https://galaxy.ansible.com/systemli/letsencrypt/)
needs to be installed first in order to use this function.

`pve_cluster_enabled` enables the role to perform all cluster management tasks.
This includes creating a cluster if it doesn't exist, or adding nodes to the
existing cluster. There are checks to make sure you're not mixing nodes that
@@ -209,8 +213,8 @@ must already exist) to access PVE and gives them the Administrator role as part
of the `ops` group. Read the **User and ACL Management** section for more info.
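
As a rough sketch of what those definitions can look like (the names are
illustrative; see that section for the full schema):

```
pve_groups:
  - name: ops
    comment: Operations Team
pve_users:
  - name: admin1@pam
    groups: [ "ops" ]
```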

`pve_storages` allows you to create different types of storage and configure them.
The backend needs to be supported by [Proxmox](https://pve.proxmox.com/pve-docs/chapter-pvesm.html).
Read the **Storage Management** section for more info.
The backend needs to be supported by [Proxmox][pvesm]. Read the **Storage
Management** section for more info.

`pve_ssh_port` allows you to change the SSH port. If your SSH is listening on
a port other than the default 22, please set this variable. If a new node is
Expand All @@ -220,7 +224,7 @@ joining the cluster, the PVE cluster needs to communicate once via SSH.
would make to your SSH server config. This is useful if you use another role
to manage your SSH server. Note that setting this to false is not officially
supported, you're on your own to replicate the changes normally made in
ssh_cluster_config.yml.
`ssh_cluster_config.yml` and `pve_add_node.yml`.
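
For example (values are illustrative):

```
pve_ssh_port: 2222
pve_manage_ssh: false
```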

`interfaces_template` is set to the path of a template we'll use for configuring
the network on these Debian machines. This is only necessary if you want to
@@ -354,29 +358,24 @@ serially during a maintenance period.) It will also enable the IPMI watchdog.
- hosts: pve01
become: True
roles:
- {
role: geerlingguy.ntp,
ntp_manage_config: true,
ntp_servers: [
clock.sjc.he.net,
clock.fmt.he.net,
clock.nyc.he.net
]
}
- {
role: lae.proxmox,
pve_group: pve01,
pve_cluster_enabled: yes,
pve_reboot_on_kernel_update: true,
    - role: geerlingguy.ntp
      ntp_manage_config: true
      ntp_servers:
        - clock.sjc.he.net
        - clock.fmt.he.net
        - clock.nyc.he.net
    - role: lae.proxmox
      pve_group: pve01
      pve_cluster_enabled: yes
      pve_reboot_on_kernel_update: true
      pve_watchdog: ipmi
}

## Role Variables

```
[variable]: [default] #[description/purpose]
pve_group: proxmox # host group that contains the Proxmox hosts to be clustered together
pve_repository_line: "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" # apt-repository configuration - change to enterprise if needed (although TODO further configuration may be needed)
pve_repository_line: "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" # apt-repository configuration - change to enterprise if needed (although TODO further configuration may be needed)
pve_remove_subscription_warning: true # patches the subscription warning messages in proxmox if you are using the community edition
pve_extra_packages: [] # Any extra packages you may want to install, e.g. ngrep
pve_run_system_upgrades: false # Let role perform system upgrades
@@ -391,8 +390,9 @@ pve_watchdog_ipmi_timeout: 10 # Number of seconds the watchdog should wait
pve_zfs_enabled: no # Specifies whether or not to install and configure ZFS packages
# pve_zfs_options: "" # modprobe parameters to pass to zfs module on boot/modprobe
# pve_zfs_zed_email: "" # Should be set to an email to receive ZFS notifications
pve_zfs_create_volumes: [] # List of ZFS Volumes to create (to use as PVE Storages). See section on Storage Management.
pve_ceph_enabled: false # Specifies whether or not to install and configure Ceph packages. See below for an example configuration.
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/ceph-nautilus buster main" # apt-repository configuration. Will be automatically set for 5.x and 6.x (Further information: https://pve.proxmox.com/wiki/Package_Repositories)
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/ceph-pacific bullseye main" # apt-repository configuration. Will be automatically set for 6.x and 7.x (Further information: https://pve.proxmox.com/wiki/Package_Repositories)
pve_ceph_network: "{{ (ansible_default_ipv4.network +'/'+ ansible_default_ipv4.netmask) | ipaddr('net') }}" # Ceph public network
# pve_ceph_cluster_network: "" # Optional, if the ceph cluster network is different from the public network (see https://pve.proxmox.com/pve-docs/chapter-pveceph.html#pve_ceph_install_wizard)
pve_ceph_nodes: "{{ pve_group }}" # Host group containing all Ceph nodes
@@ -405,7 +405,6 @@ pve_ceph_fs: [] # List of CephFS filesystems to create
pve_ceph_crush_rules: [] # List of CRUSH rules to create
# pve_ssl_private_key: "" # Should be set to the contents of the private key to use for HTTPS
# pve_ssl_certificate: "" # Should be set to the contents of the certificate to use for HTTPS
pve_ssl_letsencrypt: false # Specifies whether or not to obtain a SSL certificate using Let's Encrypt
pve_roles: [] # Added more roles with specific privileges. See section on User Management.
pve_groups: [] # List of group definitions to manage in PVE. See section on User Management.
pve_users: [] # List of user definitions to manage in PVE. See section on User Management.
@@ -454,8 +453,8 @@ pve_cluster_ha_groups:
restricted: 0
```

All configuration options supported in the datacenter.cfg file are documented in the
[Proxmox manual datacenter.cfg section][datacenter-cfg].
All configuration options supported in the datacenter.cfg file are documented
in the [Proxmox manual datacenter.cfg section][datacenter-cfg].
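
A minimal sketch (the keys shown are illustrative; verify them against the
manual for your PVE version):

```
pve_datacenter_cfg:
  keyboard: en-us
  console: xtermjs
```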

In order for live reloading of network interfaces to work via the PVE web UI,
you need to install the `ifupdown2` package. Note that this will remove
@@ -537,14 +536,14 @@ pve_acls:
- test_users
```

Refer to `library/proxmox_role.py` [link][user-module] and
`library/proxmox_acl.py` [link][acl-module] for module documentation.

## Storage Management

You can use this role to manage storage within Proxmox VE (both in
single server deployments and cluster deployments). For now, the only supported
types are `dir`, `rbd`, `nfs`, `cephfs` ,`lvm` and `lvmthin`.
types are `dir`, `rbd`, `nfs`, `cephfs`, `lvm`, `lvmthin`, and `zfspool`.
Here are some examples.

```
@@ -588,6 +587,26 @@ pve_storages:
      - 10.0.0.1
      - 10.0.0.2
      - 10.0.0.3
  - name: zfs1
    type: zfspool
    content: [ "images", "rootdir" ]
    pool: rpool/data
    sparse: true
```

Currently the `zfspool` type can be used only for `images` and `rootdir` contents.
If you want to store other content types on a ZFS volume, you need to specify
them with type `dir`, path `/<POOL>/<VOLUME>`, and add an entry in
`pve_zfs_create_volumes`. This example adds an `iso` storage on a ZFS pool:

```
pve_zfs_create_volumes:
  - rpool/iso
pve_storages:
  - name: iso
    type: dir
    path: /rpool/iso
    content: [ "iso" ]
```

Refer to `library/proxmox_storage.py` [link][storage-module] for module documentation.
@@ -627,7 +646,8 @@ pve_ceph_osds:
block.db: /dev/sdb1
encrypted: true
# Crush rules for different storage classes
# By default 'type' is set to host, you can find valid types at (https://docs.ceph.com/en/latest/rados/operations/crush-map/)
# By default 'type' is set to host, you can find valid types at
# (https://docs.ceph.com/en/latest/rados/operations/crush-map/)
# listed under 'TYPES AND BUCKETS'
pve_ceph_crush_rules:
- name: replicated_rule
@@ -675,15 +695,40 @@ pve_ceph_fs:
`pve_ceph_network` by default uses the `ipaddr` filter, which requires the
`netaddr` library to be installed and usable by your Ansible controller.
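
If it is missing, it can typically be installed on the controller with pip:

```
pip install netaddr
```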

`pve_ceph_nodes` by default uses `pve_group`, this parameter allows to specify on which nodes install Ceph (e.g. if you don't want to install Ceph on all your nodes).
`pve_ceph_nodes` by default uses `pve_group`; this parameter allows you to
specify which nodes to install Ceph on (e.g. if you don't want to install
Ceph on all your nodes).
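
For instance, to restrict Ceph to a dedicated inventory group (the group name
is illustrative):

```
pve_ceph_nodes: "ceph_nodes"
```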

`pve_ceph_osds` by default creates unencrypted ceph volumes. To use encrypted
volumes, the parameter `encrypted` has to be set to `true` per drive.
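
A sketch of an encrypted OSD entry (the device path is illustrative):

```
pve_ceph_osds:
  - device: /dev/sdc
    encrypted: true
```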

## Developer Notes

When developing new features or fixing something in this role, you can test out
your changes by using Vagrant (only libvirt is supported currently). The
playbook can be found in `tests/vagrant` (so be sure to modify group variables
as needed). Be sure to test any changes on both Debian 10 and 11 (update the
Vagrantfile locally to use `debian/buster64`) before submitting a PR.

You can also specify an apt caching proxy (e.g. `apt-cacher-ng`; it must run
on port 3142) with the `APT_CACHE_HOST` environment variable to speed up
package downloads if you have one running locally in your environment. The
vagrant playbook will detect whether or not the caching proxy is available and
only use it if it is accessible from your network, so you could just
permanently set this variable in your development environment if you prefer.

For example, you could run the following to show verbose/easier-to-read output,
use a caching proxy, and keep the VMs running if you run into an error (so that
you can troubleshoot it and/or run `vagrant provision` after fixing):

APT_CACHE_HOST=10.71.71.10 ANSIBLE_STDOUT_CALLBACK=debug vagrant up --no-destroy-on-error

## Contributors

Musee Ullah ([@lae](https://github.com/lae), <lae@lae.is>) - Main developer
Fabien Brachere ([@Fbrachere](https://github.com/Fbrachere)) - Storage config support
Gaudenz Steinlin ([@gaudenz](https://github.com/gaudenz)) - Ceph support, etc
Richard Scott ([@zenntrix](https://github.com/zenntrix)) - Ceph support, PVE 7.x support, etc
Thoralf Rickert-Wendt ([@trickert76](https://github.com/trickert76)) - PVE 6.x support, etc
Engin Dumlu ([@roadrunner](https://github.com/roadrunner))
Jonas Meurer ([@mejo-](https://github.com/mejo-))
@@ -695,6 +740,7 @@ Michael Holasek ([@mholasek](https://github.com/mholasek))
[pve-cluster]: https://pve.proxmox.com/wiki/Cluster_Manager
[install-ansible]: http://docs.ansible.com/ansible/intro_installation.html
[pvecm-network]: https://pve.proxmox.com/pve-docs/chapter-pvecm.html#_separate_cluster_network
[pvesm]: https://pve.proxmox.com/pve-docs/chapter-pvesm.html
[user-module]: https://github.com/lae/ansible-role-proxmox/blob/master/library/proxmox_user.py
[group-module]: https://github.com/lae/ansible-role-proxmox/blob/master/library/proxmox_group.py
[acl-module]: https://github.com/lae/ansible-role-proxmox/blob/master/library/proxmox_acl.py
7 changes: 4 additions & 3 deletions Vagrantfile
@@ -1,10 +1,11 @@
Vagrant.configure("2") do |config|
  config.vm.box = "debian/buster64"
  config.vm.box = "debian/bullseye64"

  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = 2048
    libvirt.memory = 2560
    libvirt.cpus = 2
    libvirt.storage :file, :size => '2G'
    libvirt.storage :file, :size => '128M'
    libvirt.storage :file, :size => '128M'
  end

  N = 3
5 changes: 3 additions & 2 deletions defaults/main.yml
@@ -16,8 +16,9 @@ pve_watchdog_ipmi_timeout: 10
pve_zfs_enabled: no
# pve_zfs_options: "parameters to pass to zfs module"
# pve_zfs_zed_email: "email address for zfs events"
pve_zfs_create_volumes: []
pve_ceph_enabled: false
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/{% if ansible_distribution_release == 'stretch' %}ceph-luminous stretch{% else %}ceph-nautilus buster{% endif %} main"
pve_ceph_repository_line: "deb http://download.proxmox.com/debian/{% if ansible_distribution_release == 'buster' %}ceph-nautilus buster{% else %}ceph-pacific bullseye{% endif %} main"
pve_ceph_network: "{{ (ansible_default_ipv4.network +'/'+ ansible_default_ipv4.netmask) | ipaddr('net') }}"
pve_ceph_nodes: "{{ pve_group }}"
pve_ceph_mon_group: "{{ pve_group }}"
@@ -36,7 +37,6 @@ pve_manage_hosts_enabled: yes
# pve_cluster_addr1: "{{ ansible_eth1.ipv4.address }}"
pve_datacenter_cfg: {}
pve_cluster_ha_groups: []
pve_ssl_letsencrypt: false
# additional roles for your cluster (f.e. for monitoring)
pve_roles: []
pve_groups: []
@@ -45,3 +45,4 @@ pve_acls: []
pve_storages: []
pve_ssh_port: 22
pve_manage_ssh: true
pve_hooks: {}
13 changes: 0 additions & 13 deletions files/00_remove_checked_command_buster.patch

This file was deleted.

