This is a slight departure from FreeBSD land, but there is no reason why it can't be applied to FreeBSD as well with a few modifications. Here we will use Saltstack to automatically partition, format, and mount a volume on Ubuntu Linux 20.04.
Requirements
- Saltstack version 3004.1 or later
- Ubuntu 20.04 (due to major differences between releases, this may or may not work with other versions)
- The "module.run" option must be enabled to allow individual execution module calls to be made via states (see the snippet below)
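Newer Salt releases gate the new-style module.run state syntax (which we use later in this guide) behind a feature flag in the minion configuration. A minimal sketch of enabling it, assuming the stock config layout (the drop-in file name is my own; any file under minion.d works):

# /etc/salt/minion.d/module_run.conf
# Enable the new-style module.run state syntax
use_superseded:
  - module.run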
Assumptions
There are countless ways to get a functional Salt environment running. We won't cover that here, but perhaps in a future story. If you don't have a functional Salt environment, check out the getting started with Salt guide.
We'll also assume you are familiar with and using AWS and Terraform to provision infrastructure. Even though I would rather avoid cloud services, VMware support in Terraform is still unreliable, so AWS it is. Finally, in my examples I will be using Consul to store Pillar data, but you can easily replace that with a YAML file; you just won't have the fully automatic workflow that's possible when using the three tools together (Terraform, Salt, Consul).
Terraform Infrastructure
Within Terraform, let's say we have the following EBS volume provisioned:
resource "aws_ebs_volume" "data" {
  availability_zone = "us-east-1a"
  size              = 50
}
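Provisioning the volume alone isn't enough; it also needs to be attached to the instance before the OS can see it. Here's a minimal sketch, assuming the instance is declared as aws_instance.server (that resource name is an assumption):

resource "aws_volume_attachment" "data" {
  # The API requires a device_name, but Nitro instances expose the volume
  # as an NVMe device regardless, hence the lookup script later on
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.data.id
  instance_id = aws_instance.server.id
}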
Using the Consul provider for Terraform, we can automatically save some Pillar data describing the EBS volume where Salt can access it. This of course requires that you also have the Salt Consul Pillar module configured to use Consul as a source for Pillar data.
resource "consul_key_prefix" "volumes" {
  path_prefix = format("saltstack/private/%s/", "server.domain.tld")

  subkey {
    path  = "ebs/data"
    value = yamlencode({
      ebs_id       = aws_ebs_volume.data.id
      volume_label = "data"
      mount_point  = "/var/db/data"
    })
  }
}
The Terraform above is simply an example, and several assumptions are made with regard to how exactly your Salt environment is set up. Essentially, we are recording the EBS ID of the volume and giving it a mount point and label. This is convenient because it keeps the hardware configuration in one spot.
Pillar Data
Since we placed the EBS information under "private/MINION_ID", the Pillar data is accessible only to the specific minion with that ID, which by default is the FQDN of the host. This makes it possible to define EBS volumes on a per-host basis.
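For reference, the Consul Pillar wiring on the master could look something like this (a sketch based on Salt's consul_pillar module; the host, port, and root path are assumptions for this environment):

# /etc/salt/master
consul_config:
  consul.host: 127.0.0.1
  consul.port: 8500

ext_pillar:
  - consul: consul_config root=saltstack/private/%(minion_id)s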
# salt 'server.domain.tld' pillar.items
ebs:
    ----------
    data:
        ----------
        ebs_id:
            vol-0b1532b92bd79899a
        mount_point:
            /var/db/data
        volume_label:
            data
If you are not using Consul, you could easily convert the above into a YAML file and manage it however you have your Pillars configured.
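For instance, here is the same data as a static Pillar file (the file name and top file wiring are up to you):

# pillar/ebs.sls
ebs:
  data:
    ebs_id: vol-0b1532b92bd79899a
    volume_label: data
    mount_point: /var/db/data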
Getting the Block Device Name
We have a way to communicate the EBS volume information from Terraform to Salt, but there is one slight problem. For whatever reason, and unlike FreeBSD, Ubuntu Linux fails to include a utility on its AWS images that lets you look up the block device name for a given AWS EBS volume ID. That's easy to solve with a simple shell script that will get installed on the machine using Saltstack.
Let's start with the Salt state (packages.sls) that installs the OS packages and the script:
baseline packages:
  pkg.installed:
    - pkgs:
      - nvme-cli
      - jq

ebstodev:
  file.managed:
    - name: /usr/local/sbin/ebstodev
    - source: salt://{{ slspath }}/sbin/ebstodev.sh
    - mode: 755
    - require_in:
      - sls: .volume
    - require:
      - pkg: baseline packages
Pretty straightforward. The above installs the nvme-cli utility and the jq CLI tool for working with JSON from the shell, along with our custom wrapper script named ebstodev under "/usr/local/sbin/". It also tells Salt how to manage the dependencies among these states: our script requires the packages, and our volume states will require the script.
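For context, the states in this guide assume a file layout roughly like the following (the storage/ directory name is my own placeholder; the ".volume" require_in above is relative to it):

storage/
  packages.sls   # installs the packages and wrapper script (above)
  volume.sls     # partitions, formats, and mounts (below)
  sbin/
    ebstodev.sh  # the wrapper script itself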
Here's what our script looks like:
#! /bin/sh
#
# Prints out the Linux device name for the corresponding AWS EBS volume ID
# Requires the 'jq' and 'nvme' executables
#
# FreeBSD:
# - pkg install -y nvme-cli jq
#
# Ubuntu Linux 20.04 and later:
# - apt install nvme-cli jq
#
volume_id=$1

# Exit with an error if no volume ID was provided
if [ -z "$volume_id" ]; then
    exit 1
fi

# Silly tool for some reason doesn't include the '-', remove it
nvme_volume_id=$(echo "$volume_id" | tr -d '-')

# Use JSON output to easily parse the device list and select the device path
nvme list -o json | jq -r '.Devices | .[] | select(.SerialNumber | contains("'"${nvme_volume_id}"'")) | .DevicePath'
It's very crude and minimal. I won't explain it in any great detail; just know that when we give it an EBS volume ID, it returns the corresponding block device name.
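A quick sanity check on the minion, using the volume ID from our Pillar data (the device path shown is hypothetical and will vary per instance):

# ebstodev vol-0b1532b92bd79899a
/dev/nvme1n1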
Everything is in place to begin constructing our volume.sls file. This is the Salt state that will do the partitioning, formatting, and mounting of our EBS volume on the host. We use Salt's Jinja templating to iterate through all the EBS volumes (if any) and do the needful.
{%- for volume_name, config in salt['pillar.get']('ebs', {}).items() -%}
{%- set block_device_name = salt['cmd.shell']('ebstodev {}'.format(config.ebs_id)) %}
{% if block_device_name is defined and block_device_name|length %}

disk_label_{{ config.volume_label }}:
  module.run:
    - partition.mklabel:
      - device: {{ block_device_name }}
      - label_type: gpt
    - unless: "parted {{ block_device_name }} print | grep -i '^Partition Table: gpt'"

disk_partition_{{ config.volume_label }}:
  module.run:
    - partition.mkpart:
      - device: {{ block_device_name }}
      - fs_type: ext4
      - part_type: primary
      - start: 0%
      - end: 100%
    - unless: parted {{ block_device_name }} print 1
    - require:
      - module: disk_label_{{ config.volume_label }}

disk_name_{{ config.volume_label }}:
  module.run:
    - partition.name:
      - device: {{ block_device_name }}
      - partition: 1
      - name: {{ config.volume_label }}
    - unless: parted {{ block_device_name }} print | grep {{ config.volume_label }}
    - require:
      - module: disk_partition_{{ config.volume_label }}

disk_format_{{ config.volume_label }}:
  module.run:
    - extfs.mkfs:
      - device: {{ block_device_name }}p1
      - fs_type: ext4
      - label: {{ config.volume_label }}
    - unless: blkid --label {{ config.volume_label }}
    - require:
      - module: disk_name_{{ config.volume_label }}

disk_mount_{{ config.volume_label }}:
  mount.mounted:
    - name: {{ config.mount_point }}
    - device: LABEL={{ config.volume_label }}
    - fstype: ext4
    - mkmnt: True
    - require:
      - module: disk_format_{{ config.volume_label }}
    - onlyif:
      - blkid --label {{ config.volume_label }}

{% endif %}
{%- endfor %}
For each EBS volume defined in the Pillar data, the state attempts to look up the corresponding block device name for the given EBS volume ID. If one is found, it generates the states to do the partitioning, formatting, and mounting. Since this mostly makes use of execution modules and not actual "states", the definitions carry unless conditions to prevent executing them if the volume is already prepared.
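With the layout sketched earlier, applying everything directly would look something like this (the storage module name is an assumption):

# salt 'server.domain.tld' state.apply storage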
You can include these SLS files in your top file, or include them from another state module.
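For example, a minimal top file targeting our example minion (again assuming the storage/ layout from earlier):

# top.sls
base:
  'server.domain.tld':
    - storage.packages
    - storage.volume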
Closing Thoughts
I normally do not work with Ubuntu or Linux in general. I look forward to publishing a similar guide on how to achieve the same result on FreeBSD with ZFS. I know for sure Salt has built-in state modules for ZFS, so none of those hacks done with the execution module will be necessary.