Automate VM Deployment On Proxmox VE With Ansible
In the video below, we show you how to automate VM creation on Proxmox VE using Ansible
Proxmox VE provides a really useful graphical user interface that makes managing a cluster and its virtual machines relatively straightforward
But manually creating virtual machines takes time and usually you’re just repeating the same process over and over again
No doubt those virtual machines are important so you’ll buy more storage so you can make backup copies of them locally, and then pay some more money so that you can have offline copies as well
But if you consider the time it would take to recover your backup copies from offline storage and then how long it would take to restore your virtual machines, automation with a tool like Ansible makes more sense
Because if you can create and manage your virtual machines from code, then aside from any data that’s generated, that code is all you need to back up in order to rebuild your virtual machines
Useful links:
https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_kvm_module.html
https://docs.ansible.com/ansible/latest/collections/community/general/proxmox_disk_module.html
https://pve.proxmox.com/wiki/Cloud-Init_Support
Assumptions:
Now, because this video is specifically about creating virtual machines using Ansible, I’m going to have to make some assumptions
Firstly that you already have Ansible installed or at least know how to install it. If not, then I do have a video that provides a crash course on Ansible, including how to install it
Secondly, I’m going to assume you know how to set up Proxmox VE servers to allow Ansible to manage them. If not, then I do have a video that covers that
And lastly, that you know how to use Ansible Vault to encrypt sensitive information like user credentials. If you don’t, well I also have a video which shows how you can create, manage and use Ansible Vault files
Create Playbook:
Since there is quite a bit to cover here, we’ll start with the main Playbook
I’ve broken the tasks down into roles that we’ll call from our Playbook
This allows you to create most of the code once but call it from other playbooks
It also makes tracking changes easier
In other words, if something stops working and the last change was to the code in a role, it’s easier to find and fix that code in the role than to search through a single Playbook that does everything
First though we’ll create a project folder for this in my /opt/ansible folder where I keep my Ansible files
mkdir vms
Then we’ll switch to that folder
cd vms
Next, we’ll create the Playbook itself
nano build_vms.yml
- hosts: localhost
  become: false
  gather_facts: false
  tasks:
    # Create variables files for images and VMs
    - name: Create Variable Files
      include_role:
        name: create_variables_file
      vars:
        variable_name: '{{item}}'
      loop: '{{variable_files}}'

- hosts: pvenodes
  become: true
  gather_facts: false
  roles:
    # Download cloud-init images
    - download_cloud_init_images
    # Create cloud-init files
    - create_user_files

- hosts: pvenodes[0]
  become: false
  gather_facts: false
  roles:
    # Create VMs
    - create_vms
Now save and exit
The first play will create some variable files, which we’ll cover in more detail later
But one file will define which cloud-init images to download, which we’ll use to perform unattended installations
A second file will provide most of the details to create our virtual machines
This is all done on the Ansible computer itself, hence why we target localhost
At the time of recording, this is the preferred method I’ve come across to loop a role
I’d prefer to see an alternative option to using include_role, but I’d still rather do this than have to create multiple entries for every file you want assembled
The other play will connect to all of the Proxmox VE servers in a group in my inventory file called pvenodes
[pvenodes]
192.168.102.10
192.168.102.11
192.168.102.12
This will download the cloud-init images and create user files for cloud-init
The last play connects to the first Proxmox VE server in the group to create the virtual machines
Things would soon get complicated if you try to target multiple nodes when creating virtual machines, so that’s why I target only one of them
For this then to work, the assumption is that all of the nodes in the cluster will be up and running
Create Image Files:
Normally I’d be creating templates to clone computers from to save time and provide standardisation
But because everything will be managed by Ansible and we’ll be using cloud-init images as the base OS, we’ll skip straight to creating virtual machines instead
For this to be unattended, we’ll need to provide details of the images to be used
Rather than storing this information in one single file, we’ll maintain separate files for each image to make the administration easier
And we’ll get Ansible to assemble them all together into a single file that it can use
Essentially, we’ll be handing Ansible a dictionary so we can set up one role to download all the images that we need
First we’ll create a folder
mkdir -p variable_files/images
Then we’ll create what I’ll call the header file
nano variable_files/images/Aheader.yml
os_images:
Now save and exit
This single line is just the definition of a variable we’ll reference for information about the images
You can call the file something else if you want, but alphabetically it must be the first file in the folder so that it appears at the top when all these files are assembled together
Then we’ll create the individual image files
nano variable_files/images/Debian11.yml
- name: Debian11-Image
  cloud_init_file: debian-11-generic-amd64.qcow2
  image_url: 'https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-amd64.qcow2'
  image_checksum: sha512:ab971a745b5081e7a36df56f589e1c11077c0fbec799f5eb1e57acf17236715df4575aa02118dbcf076cbe372ebe9dedf1d1457b7f5873bdcf08bc5f3ac0233c
  state: present
Now save and exit
NOTE: The formatting of these files must be correct
nano variable_files/images/Debian12.yml
- name: Debian12-Image
  cloud_init_file: debian-12-generic-amd64.qcow2
  image_url: 'https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2'
  image_checksum: sha512:ad93b29870f370b34b8bbd138b92608ec7a1817ee64d3b4602143dd68713e5e225a7fec0509e98e64ed5d570c27f3a6864e7ab4aa99d7cdfad2158f4b84d364f
  state: present
Now save and exit
NOTE: When an image is updated on the website, its checksum will change, so make sure to update these before running the Playbook
In this example, I’ve created a file for Debian 11 and another for Debian 12
Each file contains the information needed to download that image, but it also has a state
You usually build computers using the latest OS
But after a while, a newer version will be released and so an image for that will need to be downloaded
In which case, a new file for that version will have to be created in the folder
So although I’m not actually using Debian 11 anymore, I wanted to demonstrate handling multiple image versions
When an image is no longer needed, it could be phased out by setting its state to absent, and Ansible could then delete that image and no longer download it
Eventually the image file that provides the details for that download can be deleted from the folder as well
But I haven’t included that in this Playbook because it’s always possible a virtual machine needs an older image and it’s easier to just edit the file and change the state to present
Create Computer Files:
Similar to what was done for the cloud-init images, we need to create files detailing the virtual machines to build
This would be even more difficult to maintain if it was a single file, so we’ll create individual files to make life easier for us, but have Ansible assemble them together into a dictionary it can use
For practical reasons, each file will be named after the VMID
By keeping track of computers and VMIDs in a spreadsheet, for instance, it should be easier to manage and repurpose VMIDs over time
First we’ll create a folder
mkdir variable_files/vms
Then we’ll create the header file
nano variable_files/vms/0.yml
computers:
Now save and exit
As before, this has to be the first file alphabetically in the folder and so I’ve called it 0.yml
Then we’ll create the individual virtual machine files
nano variable_files/vms/100.yml
- name: vmtest1
  vmid: 100
  node: pvedemo1
  image_file: debian-11-generic-amd64.qcow2
  cores: 1
  memory: 1024
  vlan: 1
  ipv4mode: static
  ipv4_address: 192.168.102.40/24
  ipv4_gateway: 192.168.102.254
  state: new
Now save and exit
NOTE: The formatting of these files must be correct
nano variable_files/vms/101.yml
- name: vmtest2
  vmid: 101
  node: pvedemo2
  image_file: debian-12-generic-amd64.qcow2
  cores: 2
  memory: 2048
  vlan: 1
  ipv4mode: dhcp
  state: new
Now save and exit
In these examples, I’ve created one file for a virtual machine that will have a static IP address and another for one that will use DHCP
Bear in mind that the specific image file being referenced has to match a cloud-init image that has been downloaded
The settings in these files are ones that I feel could be unique to a virtual machine; others will either be hard coded into the role or defined as group variables
I’ve also assumed that not all of the virtual machines can run on a single node at once, so the node to create each virtual machine on is defined as well
You can of course modify this approach by updating the settings in these files and the role to suit your own requirements
I’ve included a setting for the state because at some point a virtual machine needs to be created and later in time it may no longer be required
By setting the state to new to begin with, it will let the Playbook know that the virtual machine can be created
A setting like absent, on the other hand, could be used by another Playbook that is used to delete unused virtual machines
And once a virtual machine is deleted, you could repurpose the VMID and create a new computer
Create Variables:
Variables make maintenance much easier and Ansible will look for them in multiple places
Some variables might be used by multiple groups in the inventory and they can go in the group_vars/all folder
In this demo, I also have a group in the inventory file called pvenodes, for the Proxmox VE nodes, so I need a folder for that group as well
The first thing to do is to create the folders
mkdir -p group_vars/{all,pvenodes}
Next we’ll create the variables that can be used for all devices
nano group_vars/all/vars
ansible_key_file: /opt/ansible/files/public-keys/ansible-key.pub
project_path: '/opt/ansible/vms/'
image_path: '/var/lib/vz/images/0/'
domain_name: '.homelab.lan'
timezone: 'Europe/London'
locale: 'en_GB.UTF-8'
group_name: pvenodes
variable_files:
- images
- vms
These cover things like where to find the public key for the Ansible user account and what the project path is
These will likely be different for you so do make sure these are set correctly
I also have a variable for the image path and this is where cloud-init images will be downloaded to on the Proxmox server
You can change this to something else, but bear in mind that the API does not provide access to absolute paths and you’ll have to account for that
So while Ansible will login using SSH and download the cloud-init images to the /var/lib/vz/images/0/ folder, when we use the API we’ll reference this as local:0
There are also details for the domain name, timezone and locale to configure each OS to use, so set these according to your needs
The group name is the one I’m using in my inventory file to group Proxmox VE nodes. Again, set this to what’s relevant to you
I’ve also created a list for the variables files discussed earlier so that I can loop a role that assembles files together
Now we’ll create the variables that relate to the Proxmox VE nodes
nano group_vars/pvenodes/vars
api_host: '{{inventory_hostname}}'
drive_storage: VMS
drive_format: qcow2
snippets_storage: local
snippets_path: '/var/lib/vz/snippets/'
image_storage: local
linux_bridge: vmbr0
The API host defines which PVE node to connect to through the API. I’ve opted to use whichever host Ansible has already logged into using SSH to keep things simple
The drive_storage variable is the storage location where virtual machines should be installed into
I’ve assumed all virtual machines will be kept in the same storage and so I made this a group variable, but you can do this on a per-VM basis if you prefer
The PVE servers I have connect to a NAS using NFS and the storage has been labelled VMS, so you’ll likely want to change this to suit your setup
The drive_format variable should ideally be qcow2 because that supports snapshots for instance. However, if you’re using a local drive formatted for LVM, you’ll be restricted to using raw which doesn’t support snapshots
Because we’ll be using cloud-init images and unattended files to create our virtual machines, we need a storage location for snippets
I’ll be using the PVE node itself to store these on, though they will be temporary files, so that’s why the snippets_storage and snippets_path variables are set this way
You can of course change these to something else, but as I’ll show later, the storage location does need to support snippets
Where you want to store cloud-init images is up to you, but that’s defined by the image_storage variable
As before, bear in mind that the API has limited drive access and it needs to be pointed to a storage location as opposed to an absolute path
For me, all the virtual machines will be on the same Linux bridge, which is why I’ve used a group variable to define which bridge to use
Just to point out, roles can have their own variables
But some of these variables might be used across roles, and it’s also easier to maintain them in one place, which is why I’m focusing on group variables
Create Vaults:
Some variables are better stored in encrypted files and Ansible allows us to setup Vaults that it can reference
As before, I’m going to assume you already know how to create and manage vaults
In my case I’ve setup a password file and I have an ansible.cfg file configured that references it
[defaults]
interpreter_python=auto_silent
host_key_checking=False
private_key_file=~/.ssh/ansible-key
remote_user=ansible
vault_password_file=~/.myvaultkey
inventory=/opt/ansible/inventory
If you don’t do that, you will need to create a password and enter it whenever you access a Vault
In addition, I’ve set up Bash to use nano as the text editor because otherwise it defaults to using Vi
If you don’t know how to do all that, check out my blog on Ansible Vaults https://www.techtutorials.tv/sections/ansible/how-to-use-ansible-vault/
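If you haven’t already done that, here’s a minimal sketch of the setup I’m describing on the Ansible computer. The password shown is obviously just a placeholder, and the file paths are simply the ones referenced in my ansible.cfg above
echo 'MyVaultPassword' > ~/.myvaultkey
chmod 600 ~/.myvaultkey
echo 'export EDITOR=nano' >> ~/.bashrc
source ~/.bashrc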
First we’ll create a Vault to store details that might be needed by all hosts
ansible-vault create group_vars/all/vault
ansible_user: ansible
ansible_ip: '192.168.200.10'
Now save and exit
In this case, I’m storing the ansible user account name and IP address of the ansible computer
Next we’ll create a Vault for the pvenodes group
ansible-vault create group_vars/pvenodes/vault
api_user: ansible@pam
api_token_id: ansible-token
api_token_secret: b8a56da2-80fe-49d3-9d03-52f32dad9caa
Now save and exit
These are the login credentials to gain access to the API in Proxmox VE
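If you haven’t created that API token yet, it can be done from a node’s shell with pveum. This is just a sketch and assumes the ansible@pam user already exists and that you want privilege separation disabled so the token inherits the user’s permissions; the token secret is printed once when the command runs, so note it down for the Vault
pveum user token add ansible@pam ansible-token --privsep 0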
Assemble Files Role:
There are two variable files that we need to assemble so we’ll create a role to do this
First we’ll create the folder and sub-folders
mkdir -p roles/create_variables_file/tasks
Then we’ll create the role itself
nano roles/create_variables_file/tasks/main.yml
# This will assemble several files into one variable file
# The expectation is the original files will be found in a sub-folder of one called variable_files
# The variable file will then be created in a sub-folder within the group_vars folder
- name: Create Variables File
  assemble:
    src: '{{project_path}}variable_files/{{variable_name}}/'
    dest: '{{project_path}}group_vars/{{group_name}}/{{variable_name}}'
  delegate_to: localhost
Now save and exit
Basically this takes all of the files in the source folder and appends them together in alphabetical order
The resulting file is then created in the destination folder
Because we’re creating dictionaries, it’s important to have an extra header file in the source folder that defines the variable and this has to be the file that Ansible begins with
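As a rough illustration, once this role has run for the images folder, the assembled group_vars/pvenodes/images file should end up looking something like this (checksums shortened here for readability)
os_images:
- name: Debian11-Image
  cloud_init_file: debian-11-generic-amd64.qcow2
  image_url: 'https://cloud.debian.org/images/cloud/bullseye/latest/debian-11-generic-amd64.qcow2'
  image_checksum: sha512:ab971a745b5081e7a3...
  state: present
- name: Debian12-Image
  cloud_init_file: debian-12-generic-amd64.qcow2
  image_url: 'https://cloud.debian.org/images/cloud/bookworm/latest/debian-12-generic-amd64.qcow2'
  image_checksum: sha512:ad93b29870f370b34b...
  state: present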
Download Cloud-Init Images Role:
Rather than creating templates, we’ll download pre-built Debian images that support cloud-init so we can do unattended installations
First we’ll create a folder and sub-folder for the role
mkdir -p roles/download_cloud_init_images/tasks
Then we’ll create the role
nano roles/download_cloud_init_images/tasks/main.yml
# Create a download folder
- name: Create download folder
  file:
    path: '{{image_path}}'
    state: directory

# Download cloud-init images
- name: Download cloud-init images
  get_url:
    url: '{{item.image_url}}'
    dest: '{{image_path}}{{item.cloud_init_file}}'
    checksum: '{{item.image_checksum}}'
  loop: '{{os_images}}'
  when: 'item.state == "present"'

# Delete cloud-init images that are no longer needed
- name: Delete unused cloud-init images
  file:
    path: '{{image_path}}{{item.cloud_init_file}}'
    state: absent
  loop: '{{os_images}}'
  when: 'item.state == "absent"'
Now save and exit
The first thing this role does is to create a folder to download the images to
Then it downloads an image
Because we’ve created a dictionary which defines all the cloud-init images we need, we can run this process through a loop
However, this will only happen if the state in the file is set to present
In other words, once you’re finished with an image set its state to absent
Not only does that prevent the image from being downloaded, but in the last process Ansible deletes images that have their state set to absent
Create Snippets Folder:
As part of the creation of a virtual machine, we’re going to be using cloud-init images
Now Proxmox VE allows us to attach answer files to complete the build process unattended
But to be able to do that, we need to place these files in a storage location that supports snippets
For this demo we’ll use the local storage on the Proxmox VE nodes
The files are only intended to be temporary, although they won’t take up much space anyway
A useful feature of the local storage is that you can update it to support snippets on the fly, whereas with other storage types you might have to remove and recreate them
Because these files contain sensitive information though, I prefer to delete them once the virtual machines are built
To setup support for snippets, in the GUI we need to navigate to Datacenter | Storage
Now select the local storage and click Edit
Click on the Content drop-down menu, select Snippets and then click OK
To double check this has worked, select a node in the left hand pane then click Shell
Now we should have a folder called snippets that we can use
ls -l /var/lib/vz
TIP: If you look in the Storage section in the GUI you’ll see that local storage is actually a folder called /var/lib/vz
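If you’d rather avoid the GUI, the same change can be made from a node’s shell with pvesm. The content list below is only an example, so include whatever content types your local storage already serves in addition to snippets; the current list can be seen in /etc/pve/storage.cfg
pvesm set local --content iso,vztmpl,backup,snippets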
Create Cloud-Init Files Role:
Cloud-init images are a bit like having a hard drive supplied with a pre-built operating system
However, some questions still need answering to complete the build process and make a computer unique
Each virtual machine will have some slight differences and so we need to create individual answer files for them
First we’ll create a folder and sub-folder for the role
mkdir -p roles/create_user_files/tasks
Then we’ll create the role
nano roles/create_user_files/tasks/main.yml
# Create user cloud-init files
# NOTE: The storage used for these files has to support snippets
- name: Create user cloud-init files
  template:
    src: user_template.j2
    dest: '{{snippets_path}}{{item.vmid}}-user.yml'
    owner: root
    group: root
    mode: 0644
  loop: '{{computers}}'
  when: 'item.state == "new"'
  no_log: yes
Now save and exit
What the role does is to copy a template to a storage location that supports snippets and names it based on the VMID of the virtual machine
It also sets the permissions for the file
Because we’re dealing with multiple virtual machines, we’re taking advantage of a dictionary file so that we can run this process through a loop and create all of the virtual machine files we need
One thing to point out is that logging is disabled to prevent sensitive information being sent to the screen for instance
Because the role relies on a template, we need to create another sub-folder to store that in
mkdir roles/create_user_files/templates
Then we create the template file which is based on Jinja2
nano roles/create_user_files/templates/user_template.j2
#cloud-config
hostname: {{item.name}}
manage_etc_hosts: true
fqdn: {{item.name}}{{domain_name}}
user: {{ansible_user}}
ssh_authorized_keys:
  - {{ lookup("file", ansible_key_file) }}
chpasswd:
  expire: False
users:
  - default
package_update: true
package_upgrade: true
timezone: {{timezone}}
packages:
  - qemu-guest-agent
  - ufw
runcmd:
  - systemctl enable qemu-guest-agent
  - systemctl start qemu-guest-agent
  - ufw limit proto tcp from {{ansible_ip}} to any port 22
  - ufw enable
  - locale-gen {{locale}}
  - localectl set-locale LANG={{locale}}
  - chfn -f Ansible {{ansible_user}}
Now save and exit
Some of the settings are to make a virtual machine unique, whilst others are standard settings we want to see on all of our virtual machines
It’s also worth pointing out that Debian isn’t as flexible as Ubuntu when it comes to cloud-init
As a result, some of this information was pulled from a dump of a cloud-init drive created by Proxmox VE
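If you’re curious, you can produce that sort of dump yourself from a node’s shell for any virtual machine that already has a cloud-init drive attached, for example (VMID 100 is just the one from this demo)
qm cloudinit dump 100 user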
What these settings here will do is to setup the hostname and hosts file
It also creates a user account for Ansible to log in with using SSH key authentication. A password isn’t defined, but its expiry will still be disabled
We don’t want a default user account being created, but I’ve included a line for this all the same, as that’s what the Proxmox VE dump contains, even if you opt to create an alternative user account
The default account doesn’t get created though, which is exactly what I want
There are then some settings for updates and upgrades as well as to set the timezone, although in reality we should have the latest version of the OS anyway
Cloud-init allows you to install packages and run commands so I’ve taken advantage of that, but only for what I’d call the bare essentials
As part of the initial build, we’ll install the Qemu Guest Agent and UFW packages
We’ll make sure the Guest Agent will start at boot and is also started after installation
Then we’ll setup a UFW rule to allow SSH access, but only from the Ansible computer
By enabling UFW straight away, we restrict remote access to the virtual machine as soon as possible
While we’re here, we’ll also download and configure the locale we want
Finally we’ll change the name of our Ansible account because otherwise it will be called Debian
Create VMs Role:
The last role we need is one that takes all this information we have and uses it to create the virtual machines
First we’ll create a folder and sub-folder for the role
mkdir -p roles/create_vms/tasks
Then we’ll create the role itself
nano roles/create_vms/tasks/main.yml
# Create VMs
- name: Create VMs
  proxmox_kvm:
    api_user: '{{api_user}}'
    api_token_id: '{{api_token_id}}'
    api_token_secret: '{{api_token_secret}}'
    api_host: '{{api_host}}'
    node: '{{item.node}}'
    name: '{{item.name}}'
    vmid: '{{item.vmid}}'
    vga: serial0
    net: '{"net0":"virtio,bridge={{linux_bridge}},tag={{item.vlan}},firewall=1"}'
    serial: '{"serial0":"socket"}'
    scsihw: virtio-scsi-single
    scsi:
      scsi0: '{{drive_storage}}:0,import-from={{image_storage}}:0/{{item.image_file}},format={{drive_format}},iothread=1'
    ide:
      ide2: '{{drive_storage}}:cloudinit'
    ostype: 'l26'
    onboot: 'no'
    cpu: 'host'
    cores: '{{item.cores}}'
    sockets: 1
    memory: '{{item.memory}}'
    balloon: 0
    boot: order=scsi0
    ipconfig:
      ipconfig0: 'ip=dhcp'
    cicustom: 'user={{snippets_storage}}:snippets/{{item.vmid}}-user.yml'
    agent: 'enabled=1'
    timeout: 700
  loop: '{{computers}}'
  when: 'item.state == "new"'
  no_log: yes

# Update network settings
# A custom file would have been preferred to update the network settings but the problem is it includes a MAC address
# So all VMs will be configured to use DHCP by default, those that need a static IP will be updated
- name: Update network settings
  proxmox_kvm:
    api_user: '{{api_user}}'
    api_token_id: '{{api_token_id}}'
    api_token_secret: '{{api_token_secret}}'
    api_host: '{{api_host}}'
    node: '{{item.node}}'
    vmid: '{{item.vmid}}'
    ipconfig:
      ipconfig0: 'ip={{item.ipv4_address}},gw={{item.ipv4_gateway}}'
    timeout: 60
    update: true
  loop: '{{computers}}'
  when: (item.state == "new") and
        (item.ipv4mode == 'static')
  no_log: yes

# Resize the disk
# The Cloud-Init disk is only 2GB
- name: Resize disk
  community.general.proxmox_disk:
    api_user: '{{api_user}}'
    api_token_id: '{{api_token_id}}'
    api_token_secret: '{{api_token_secret}}'
    api_host: '{{api_host}}'
    vmid: '{{item.vmid}}'
    disk: 'scsi0'
    size: '32G'
    state: 'resized'
    timeout: 60
  loop: '{{computers}}'
  when: 'item.state == "new"'
  no_log: yes

# Start VMs
- name: Start VMs
  proxmox_kvm:
    api_user: '{{api_user}}'
    api_token_id: '{{api_token_id}}'
    api_token_secret: '{{api_token_secret}}'
    api_host: '{{api_host}}'
    node: '{{item.node}}'
    vmid: '{{item.vmid}}'
    state: 'started'
  loop: '{{computers}}'
  when: 'item.state == "new"'
  no_log: yes
Now save and exit
What we’re doing here is making use of the Proxmox VE API to make all of these changes
First we create the virtual machine by pulling in information from a number of variables, although some I’ve chosen to hard code
Because this is a cloud-init image, we’ll give it a serial terminal for console access
I’ve opted for the VirtIO SCSI single controller and the hard drive will use the SCSI bus
We’ll attach the relevant cloud-init image as a hard drive and set the virtual machine to boot from this hard drive
In addition, we’ll add an IDE drive that cloud-init will use to complete the installation
And we’ll attach the relevant user file that goes with that
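To make that a little more concrete, with the group variables used in this demo, the rendered values for the Debian 12 virtual machine (VMID 101) would come out roughly like this
scsi0: 'VMS:0,import-from=local:0/debian-12-generic-amd64.qcow2,format=qcow2,iothread=1'
ide2: 'VMS:cloudinit'
cicustom: 'user=local:snippets/101-user.yml'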
Each virtual machine has its own settings for things like a VLAN tag, cpus, memory etc.
NOTE: If you aren’t using VLANs, or the Linux bridge in Proxmox VE is not VLAN aware, then do not include the VLAN tag option
Other settings like the ostype, number of sockets, disabling of memory ballooning, etc. have been hard coded here as for me these will be the same for all virtual machines
Of course, feel free to change these decisions how you like
We don’t want a virtual machine to immediately start after creation, so that option is disabled
As a default option I’ve chosen to set the IP addressing using DHCP
Feel free to alter the timeout to suit. In this demo it’s quite high because of the delays of nested virtualisation and if it does timeout, the playbook will halt
Because Ansible created a dictionary for all our virtual machines, we can loop this process to get all of our virtual machines built when this task is run
Lastly, I’ve disabled logging to avoid sensitive information being sent to the terminal
At least some computers in the network should have a static IP address, for example a DNS and DHCP server
For that reason, the next process is to alter the cloud-init settings for those virtual machines that have been assigned a static IP address
Although if all you use is DHCP you could remove this step
Likewise, if all you use is static IP addressing you could merge this task into the main one
Again, we’ll loop this process and disable logging
As an aside, my preference would have been to have network files instead, but one thing I noticed in the cloud-init dump was that the MAC address was being defined
I’d rather leave that choice to Proxmox VE, so I opted to store static IP addressing in the virtual machine file instead
The cloud-init image provided is only 2GB in capacity. For that reason, the next process increases the hard drive size
Typically, Proxmox VE gives Linux computers a default drive of 32GB but feel free to pick a different value
NOTE: Unlike the GUI, this Ansible module requires the final size. In the GUI, for instance, you would add 30GB to increase the size from 2GB to 32GB
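For comparison, resizing from a node’s shell behaves like the GUI and takes a relative increment, so growing the 2GB image to 32GB would look something like this, whereas the module above is simply given the final 32G figure
qm resize 100 scsi0 +30G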
Again, because we have a dictionary we can loop this process
The last thing to do in this role is to start the virtual machines and we can loop this process to start all of the virtual machines up
When a virtual machine starts for the first time, the cloud-init drive will be generated with the settings we’ve provided and used to finalise the installation
NOTE: This is a one-off process, which is why we don’t want a virtual machine to boot up as soon as it’s created. Once cloud-init has done its job, the operating system is finalised and cloud-init will no longer be used
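Before running the Playbook for real, a quick syntax check is a cheap way to catch any indentation mistakes in the YAML files we’ve just created
ansible-playbook build_vms.yml --syntax-check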
Now that everything’s defined we’ll run our playbook
ansible-playbook build_vms.yml
Bear in mind this can take a while to complete, especially because Proxmox VE has to import the cloud-init image for every virtual machine you ask it to create
NOTE: As shown earlier, the ansible.cfg file I use provides all the information Ansible needs and so I don’t have to add extra parameters to this command
Post Cleanup:
Because these virtual machines are new, they had to be started for the OS setup to run
However, once cloud-init has done its job, the IDE drive and user file are no longer needed, and I’d prefer to remove them as I don’t like information like this being left lying around
To do that we’ll create a Playbook to do some post cleanup work
nano post_cleanup.yml
- hosts: pvenodes
  become: true
  gather_facts: false
  tasks:
    # Remove cloud-init file
    - name: Remove cloud-init file
      become: true
      file:
        path: '{{snippets_path}}{{item.vmid}}-user.yml'
        state: absent
      loop: '{{computers}}'
      when: 'item.state == "new"'
      no_log: yes

- hosts: pvenodes[0]
  become: false
  gather_facts: false
  tasks:
    # Stop VMs
    - name: Stop VMs
      proxmox_kvm:
        api_user: '{{api_user}}'
        api_token_id: '{{api_token_id}}'
        api_token_secret: '{{api_token_secret}}'
        api_host: '{{api_host}}'
        vmid: '{{item.vmid}}'
        state: 'stopped'
        timeout: 180
      loop: '{{computers}}'
      when: 'item.state == "new"'
      no_log: yes

    # Remove cloud-init drive
    - name: Remove cloud-init drive
      community.general.proxmox_disk:
        api_user: '{{api_user}}'
        api_token_id: '{{api_token_id}}'
        api_token_secret: '{{api_token_secret}}'
        api_host: '{{api_host}}'
        vmid: '{{item.vmid}}'
        disk: 'ide2'
        state: absent
      loop: '{{computers}}'
      when: 'item.state == "new"'
      no_log: yes

    # Pause before starting VM again
    # Without a pause the task may indicate it's already started
    # When that happens, the VM doesn't get started
    - name: Pause
      pause:
        seconds: 30

    # Start VMs
    - name: Start VMs
      proxmox_kvm:
        api_user: '{{api_user}}'
        api_token_id: '{{api_token_id}}'
        api_token_secret: '{{api_token_secret}}'
        api_host: '{{api_host}}'
        vmid: '{{item.vmid}}'
        state: 'started'
      loop: '{{computers}}'
      when: 'item.state == "new"'
      no_log: yes

    # Update State
    - name: Change state
      replace:
        path: '{{project_path}}variable_files/vms/{{item.vmid}}.yml'
        regexp: 'state: new'
        replace: 'state: present'
      loop: '{{computers}}'
      when: 'item.state == "new"'
      delegate_to: localhost
      no_log: yes

    # Update Variable File
    - name: Create Variable Files
      include_role:
        name: create_variables_file
      vars:
        variable_name: 'vms'
The first thing that’s done is to delete the user files that were created in the snippets folder on the PVE servers
Changing the hardware of a virtual machine in Proxmox VE can require a shutdown, so we shut down the virtual machines and then remove the cloud-init IDE drive from them
NOTE: This is why a state of new is important. We only want to shut down computers that have only just been created
The pause is necessary to avoid a situation where a virtual machine is shut down, but Ansible can’t start it back up
It looks like there’s a bit of lag in reporting the virtual machine’s state and pausing helps resolve that. Feel free to change the duration depending on your experience
The virtual machine files are then updated to change the state from new to present
This state reflects a virtual machine which can be considered operational
Lastly, the dictionary file for the virtual machines is rebuilt as the intention is that other playbooks will likely use this file
NOTE: You should wait until the Guest Agent is ready on the virtual machines before running this playbook
You can confirm it’s running on the Summary page for a computer where it should report the IP address
Without the Guest Agent, Proxmox VE won’t be able to shutdown a virtual machine and Ansible will timeout
While you can add a forced shutdown option, it’s better to perform a clean shutdown to avoid data loss or corruption
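Besides the Summary page, you can also check that the agent is responding from a node’s shell. The VMID here is just one from this demo, and the command returns silently when the agent answers
qm agent 100 ping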
Now we’ll run this Playbook to finish things off
ansible-playbook post_cleanup.yml
Testing:
Even though we have some virtual machines created and they appear to be up and running, it makes sense to do a manual check
One computer has a static IP address so we’ll SSH into that first
ssh ansible@192.168.102.40 -i ~/.ssh/ansible-key
We’ll check the firewall rules
sudo ufw status
And we’ll also check the locale details
localectl
Then we’ll check the timezone
timedatectl status
So we know the virtual machine works and that the packages and commands were run
The other virtual machine obtained its IP address through DHCP, so to find that you can check the Summary page or your DHCP server
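Alternatively, if the Guest Agent is running, you can ask it for the address from a node’s shell; again, the VMID is just the one from this demo
qm agent 101 network-get-interfaces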
Then we’ll log in to that computer
ssh ansible@192.168.102.100 -i ~/.ssh/ansible-key
And again we’ll check its firewall rules, locale and timezone details
sudo ufw status
localectl
timedatectl status
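While you’re logged in, it’s also worth confirming that cloud-init finished without errors; this is just an extra sanity check rather than something covered in the video
cloud-init status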
Summary:
Now obviously all we have here are some virtual machines, and although they only have an operating system installed, they are ready for Ansible to work on
That means we just need another playbook to install and maintain applications, services and firewall rules, as well as to keep the operating system up to date
At that point, we’ll have the whole process automated and only need a backup copy of the Ansible files and any data we need to keep
This should save a lot of time in the event of a disaster recovery but it also reduces the cost of keeping local and offsite backups because we aren’t backing up operating systems or software for lots of computers
And another benefit of automation is that it maintains the computers to a set standard, which includes their security settings
Sharing is caring!