On Sunday 21 December 2025
To establish context: I have a Terraform script that automatically creates a new Droplet on DigitalOcean running Ubuntu 22.04. The droplet is then set up to run nginx and certbot by a simple Ansible playbook.
As a side note, I have a small script that reads Terraform's state file to get the IP address of the droplet:
const fs = require('fs');

fs.readFile('./terraform.tfstate', 'utf8', (err, data) => {
  if (err) {
    console.log(`Error reading file from disk: ${err}`);
    return;
  }
  const tfstate = JSON.parse(data);
  if (tfstate.resources.length === 0) {
    console.log("no IP yet");
    return;
  }
  console.log(tfstate.resources[0].instances[0].attributes.ipv4_address);
});
Everything is orchestrated by a big Makefile, which just has to read the IP from this script and add it to the Ansible inventory.
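For illustration, the Makefile glue could be as minimal as this sketch (the script name `get-ip.js` and the `[web]` inventory group are assumptions, not my actual setup):

```make
# Regenerate the Ansible inventory whenever the Terraform state changes.
inventory.ini: terraform.tfstate
	echo "[web]" > $@
	node get-ip.js >> $@
```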
I modified a few nginx configuration files, and some files were renamed in the process. As destroying and recreating a droplet takes time, I often run my Ansible playbook several times against a single droplet.
Which means that the old config files were still there, and, because I rely on certbot's ability to automatically modify nginx config files to prove domain ownership, those checks ended up breaking.
To fix this I added a step that removes all the old config files.
- name: Remove every old Nginx enabled config
  ansible.builtin.file:
    state: "{{ item }}"
    path: /etc/nginx/sites-enabled
    owner: 0 # root
    group: 0
    mode: '0755' # I checked that these are the default permissions for these folders
  with_items:
    - absent    # destroys
    - directory # recreates

- name: Remove every old Nginx available config
  ansible.builtin.file:
    state: "{{ item }}"
    path: /etc/nginx/sites-available
    owner: 0
    group: 0
    mode: '0755'
  with_items:
    - absent
    - directory
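Looping the file module over `absent` and then `directory` is just a declarative wipe-and-recreate; the loop order guarantees the delete happens before the create. In shell terms, the first task boils down to something like this sketch (using a scratch directory instead of the real /etc/nginx path):

```shell
dir="$(mktemp -d)/sites-enabled"   # stand-in for /etc/nginx/sites-enabled
mkdir -p "$dir"
touch "$dir/old.conf"              # a stale config left over from a previous run

rm -rf "$dir"                      # state: absent    -- destroys the directory
mkdir -p "$dir"                    # state: directory -- recreates it empty
ls -A "$dir"                       # prints nothing: the old config is gone
```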