Installing MAAS CLI on WSL 2
This is a short post on how to install the MAAS CLI on WSL2 so you can administer your MAAS environments from your Windows system.
If anyone reading this is wondering what MAAS (Metal as a Service) is: it's an open source cloud platform from Canonical that lets you manage bare-metal infrastructure, such as networking and server deployments, within your DC. https://maas.io/how-it-works
There are a number of blog posts going into more depth from my colleague Matthew Quickenden here: https://www.cryingcloud.com/blog/tag/%23MAAS
TL;DR
Here are all the commands you need to run:
# Setup repo
MAAS_VERSION=3.5
sudo apt-add-repository ppa:maas/${MAAS_VERSION}
# Install OpenSSL Python module
sudo apt install python3-pip
pip install pyOpenSSL
# Install MAAS CLI
sudo apt install maas-cli
Installing MAAS CLI
The official documentation for installing the CLI tells you to initially run this command:
sudo apt install maas-cli
When you attempt this on WSL2 for the first time, you're likely to see the following error:
E: Unable to locate package maas-cli
To get around this you need to run the following:
MAAS_VERSION=3.5
sudo apt-add-repository ppa:maas/${MAAS_VERSION}
Running sudo apt install maas-cli will give you something similar to this:
Depending on your setup, you may need to run the following to install the OpenSSL Python module:
# Install pip if not available
sudo apt install python3-pip
# Install OpenSSL module
pip install pyOpenSSL
If the module isn't present, you'll get an error like this:
When the pre-reqs are in place, running the maas command should return something similar to this:
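From there you can connect the CLI to your MAAS server. A minimal sketch, assuming a MAAS region controller at maas.example.local and an API key generated for your user (both placeholders; substitute your own values):
# Generate an API key in the MAAS UI (or with 'sudo maas apikey --username <user>' on the MAAS server itself)
PROFILE="admin"
MAAS_URL="http://maas.example.local:5240/MAAS/api/2.0/"
API_KEY="<your-api-key>"
# Create a login profile, then use it to query your environment
maas ${PROFILE} machines read | head
maas login ${PROFILE} ${MAAS_URL} ${API_KEY}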
Azure Container Storage for Azure Arc Edge Volumes - deploying on Azure Local AKS
Late last year, Microsoft released the latest version of the snappily titled ‘Azure Container Storage enabled by Azure Arc’ (ACSA), which is a solution to make it easier to get data from your container workloads to Azure Blob Storage. You can read the overview here, but in essence it’s pretty configurable, allowing you to set up local resilient storage for your container apps, or to use it for cloud ingest: sending data to Azure and purging it locally once the transfer is confirmed.
The purpose of this post is to give an example of the steps needed to get this set up on an Azure Local AKS cluster.
If you have an existing cluster you want to deploy to, take heed of the pre-reqs:
Single-node or 2-node cluster, per node:
- 4 CPUs
- 16 GB RAM

Multi-node cluster, per node:
- 8 CPUs
- 32 GB RAM

16 GB RAM should be fine, but in more active scenarios, 32 GB is recommended.
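If you want to sanity-check an existing cluster against those numbers, something like this (a quick sketch using kubectl, assuming your kubeconfig already points at the AKS Arc cluster) will show per-node capacity:
# Show CPU and memory capacity for each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory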
Prepare AKS enabled by Azure Arc cluster
Make sure you have the latest AZ CLI extensions installed.
Azure Arc Kubernetes Extensions Documentation
# Make sure the az extensions are installed
az extension add --name connectedk8s --upgrade
az extension add --name k8s-extension --upgrade
az extension add -n k8s-runtime --upgrade
az extension add --name aksarc --upgrade
# Login to Azure
az login
az account set --subscription <subscription-id>
At the time of writing, here are the versions of the extensions:
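You can check what you actually have installed with az extension list; the query below just filters down to the four extensions used in this post:
az extension list \
  --query "[?name=='connectedk8s' || name=='k8s-extension' || name=='k8s-runtime' || name=='aksarc'].{Name:name, Version:version}" \
  --output table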
If you have a virgin cluster, you will need to install the Load Balancer.
# Check you have relevent Graph permissions
az ad sp list --filter "appId eq '087fca6e-4606-4d41-b3f6-5ebdf75b8b4c'" --output json
# If that command returns an empty result, use the alternative method: https://learn.microsoft.com/en-us/azure/aks/aksarc/deploy-load-balancer-cli#option-2-enable-arc-extension-for-metallb-using-az-k8s-extension-add-command
# Enable the extension
RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME" # name of the resource group where the AKS Arc cluster is deployed
CLUSTER_NAME="YOUR_CLUSTER_NAME"
AKS_ARC_CLUSTER_URI=$(az aksarc show --resource-group ${RESOURCE_GROUP_NAME} --name ${CLUSTER_NAME} --query id -o tsv | cut -d'/' -f1-9)
az k8s-runtime load-balancer enable --resource-uri $AKS_ARC_CLUSTER_URI
# Deploy the Load Balancer
LB_NAME="al-lb-01" # must be lowercase, alphanumeric, '-' or '.' (RFC 1123)
IP_RANGE="192.168.1.100-192.168.1.150"
ADVERTISE_MODE="ARP" # Options: ARP, BGP, Both
az k8s-runtime load-balancer create --load-balancer-name $LB_NAME \
--resource-uri $AKS_ARC_CLUSTER_URI \
--addresses $IP_RANGE \
--advertise-mode $ADVERTISE_MODE
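The load balancer is delivered via the MetalLB Arc extension, so once the create command completes you can do a rough check that the pods are running (a hedged sketch; the exact namespace depends on the extension version, hence the grep across all namespaces):
# Look for the MetalLB controller/speaker pods deployed by the Arc extension
kubectl get pods --all-namespaces | grep -i metallb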
Open Service Mesh (OSM) is used to deliver the ACSA capabilities, so to deploy it on the connected AKS cluster, use the following commands:
RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"
az k8s-extension create --resource-group $RESOURCE_GROUP_NAME \
--cluster-name $CLUSTER_NAME \
--cluster-type connectedClusters \
--extension-type Microsoft.openservicemesh \
--scope cluster \
--name osm \
--config "osm.osm.featureFlags.enableWASMStats=false" \
--config "osm.osm.enablePermissiveTrafficPolicy=false" \
--config "osm.osm.configResyncInterval=10s" \
--config "osm.osm.osmController.resource.requests.cpu=100m" \
--config "osm.osm.osmBootstrap.resource.requests.cpu=100m" \
--config "osm.osm.injector.resource.requests.cpu=100m"
Deploy IoT Operations Dependencies
The official documentation says to deploy the IoT Operations extension, specifically the cert-manager component. It doesn't say whether you can skip this if you're not using Azure IoT Operations, so I deployed it anyway.
RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"
az k8s-extension create --cluster-name "${CLUSTER_NAME}" \
--name "${CLUSTER_NAME}-certmgr" \
--resource-group "${RESOURCE_GROUP_NAME}" \
--cluster-type connectedClusters \
--extension-type microsoft.iotoperations.platform \
--scope cluster \
--release-namespace cert-manager
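If you want to confirm the cert-manager component is up before moving on, a simple check is:
# The extension installs into the cert-manager namespace specified above
kubectl get pods -n cert-manager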
Deploy the container storage extension
RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"
az k8s-extension create --resource-group "${RESOURCE_GROUP_NAME}" \
--cluster-name "${CLUSTER_NAME}" \
--cluster-type connectedClusters \
--name azure-arc-containerstorage \
--extension-type microsoft.arc.containerstorage
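At this point you can list all the extensions on the cluster in one go and make sure everything shows as Succeeded:
az k8s-extension list \
  --resource-group "${RESOURCE_GROUP_NAME}" \
  --cluster-name "${CLUSTER_NAME}" \
  --cluster-type connectedClusters \
  --output table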
Now it's time to deploy the edge storage configuration. As my cluster is deployed on Azure Local AKS and is connected to Azure Arc, I went with the Arc config option detailed in the docs.
cat <<EOF > edgeConfig.yaml
apiVersion: arccontainerstorage.azure.net/v1
kind: EdgeStorageConfiguration
metadata:
  name: edge-storage-configuration
spec:
  defaultDiskStorageClasses:
    - "default"
    - "local-path"
  serviceMesh: "osm"
EOF
kubectl apply -f "edgeConfig.yaml"
Once it's deployed, you can list the storage classes available to the cluster:
kubectl get storageclass
Setting up cloud ingest volumes
Now we're ready to configure permissions on the Azure Storage Account so that the Edge Volume provider has access to upload data to the blob container.
You can use the script below to get the extension identity and then assign the necessary role to the storage account:
RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"
export EXTENSION_TYPE=${1:-"microsoft.arc.containerstorage"}
EXTENSION_IDENTITY_PRINCIPAL_ID=$(az k8s-extension list \
--cluster-name ${CLUSTER_NAME} \
--resource-group ${RESOURCE_GROUP_NAME} \
--cluster-type connectedClusters \
| jq --arg extType ${EXTENSION_TYPE} 'map(select(.extensionType == $extType)) | .[] | .identity.principalId' -r)
STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME"
STORAGE_ACCOUNT_RESOURCE_GROUP="YOUR_STORAGE_ACCOUNT_RESOURCE_GROUP"
STORAGE_ACCOUNT_ID=$(az storage account show --name ${STORAGE_ACCOUNT_NAME} --resource-group ${STORAGE_ACCOUNT_RESOURCE_GROUP} --query id --output tsv)
az role assignment create --assignee ${EXTENSION_IDENTITY_PRINCIPAL_ID} --role "Storage Blob Data Contributor" --scope ${STORAGE_ACCOUNT_ID}
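If you want to double-check the assignment landed on the right scope, you can list role assignments for the extension identity:
az role assignment list \
  --assignee ${EXTENSION_IDENTITY_PRINCIPAL_ID} \
  --scope ${STORAGE_ACCOUNT_ID} \
  --output table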
Create a deployment to test the cloud ingest volume
Now we can test transferring data from edge to cloud. I'm using the demo from Azure Arc Jumpstart: Deploy demo from Azure Arc Jumpstart
First off, create a container on the storage account to store the data from the edge volume.
export STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME"
export STORAGE_ACCOUNT_CONTAINER="fault-detection"
STORAGE_ACCOUNT_RESOURCE_GROUP="YOUR_STORAGE_ACCOUNT_RESOURCE_GROUP"
az storage container create --name ${STORAGE_ACCOUNT_CONTAINER} --account-name ${STORAGE_ACCOUNT_NAME} --resource-group ${STORAGE_ACCOUNT_RESOURCE_GROUP}
Next, create a file called acsa-deployment.yaml using the following content:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  ### Create a name for your PVC ###
  name: acsa-pvc
  ### Use a namespace that matches your intended consuming pod, or "default" ###
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: cloud-backed-sc
---
apiVersion: "arccontainerstorage.azure.net/v1"
kind: EdgeSubvolume
metadata:
  name: faultdata
spec:
  edgevolume: acsa-pvc
  path: faultdata # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash.
  auth:
    authType: MANAGED_IDENTITY
  storageaccountendpoint: "https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net/"
  container: ${STORAGE_ACCOUNT_CONTAINER}
  ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acsa-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: acsa-webserver
  template:
    metadata:
      labels:
        app: acsa-webserver
    spec:
      containers:
        - name: acsa-webserver
          image: mcr.microsoft.com/jumpstart/scenarios/acsa_ai_webserver:1.0.0
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "200m"
              memory: "256Mi"
          ports:
            - containerPort: 8000
          env:
            - name: RTSP_URL
              value: rtsp://virtual-rtsp:8554/stream
            - name: LOCAL_STORAGE
              value: /app/acsa_storage/faultdata
          volumeMounts:
            ### This name must match the volumes.name attribute below ###
            - name: blob
              ### This mountPath is where the PVC will be attached to the pod's filesystem ###
              mountPath: "/app/acsa_storage"
      volumes:
        ### User-defined 'name' that will be used to link the volumeMounts. This name must match volumeMounts.name as specified above. ###
        - name: blob
          persistentVolumeClaim:
            ### This claimName must refer to the PVC resource 'name' as defined in the PVC config. This name will match what your PVC resource was actually named. ###
            claimName: acsa-pvc
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virtual-rtsp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: virtual-rtsp
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: virtual-rtsp
    spec:
      initContainers:
        - name: init-samples
          image: busybox
          resources:
            limits:
              cpu: "200m"
              memory: "256Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"
          command:
            - wget
            - "-O"
            - "/samples/bolt-detection.mp4"
            - https://github.com/ldabas-msft/jumpstart-resources/raw/main/bolt-detection.mp4
          volumeMounts:
            - name: tmp-samples
              mountPath: /samples
      containers:
        - name: virtual-rtsp
          image: "kerberos/virtual-rtsp"
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "200m"
              memory: "256Mi"
          imagePullPolicy: Always
          ports:
            - containerPort: 8554
          env:
            - name: SOURCE_URL
              value: "file:///samples/bolt-detection.mp4"
          volumeMounts:
            - name: tmp-samples
              mountPath: /samples
      volumes:
        - name: tmp-samples
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: virtual-rtsp
  labels:
    app: virtual-rtsp
spec:
  type: LoadBalancer
  ports:
    - port: 8554
      targetPort: 8554
      name: rtsp
      protocol: TCP
  selector:
    app: virtual-rtsp
---
apiVersion: v1
kind: Service
metadata:
  name: acsa-webserver-svc
  labels:
    app: acsa-webserver
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP
  selector:
    app: acsa-webserver
Once created, apply the deployment:
export STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME" # we need to export the storage account name so envsubst can substitute it
envsubst < acsa-deployment.yaml | kubectl apply -f -
[!NOTE]
This will deploy into the default namespace.
This will create the deployment and the volumes, substituting the values for the storage account name with the variables previously set.
If you want to check the status of the edge volume, such as if it's connected or how many files are in the queue, you can use the following command:
# List the edge subvolumes
kubectl get edgesubvolume
kubectl describe edgesubvolume faultdata
Testing
Assuming everything has deployed without errors, you should be able to access the web server at its external IP address. You can find the IP address by running:
kubectl get svc acsa-webserver-svc
Obtain the EXTERNAL-IP and port (should be 80) and use that to access the web server.
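For example, to pull the external IP straight out of the service object and build the URL (a small sketch using a jsonpath query):
WEB_IP=$(kubectl get svc acsa-webserver-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${WEB_IP}/"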
Take a look at the edge subvolume for metrics:
kubectl get edgesubvolume
And that's how simple (?!) it is to set up. As long as you've met the pre-reqs and set permissions properly, it's pretty smooth to implement.
HCI Box on a Budget. Leverage Azure Spot & Hybrid Use Benefits. Up to 93% savings.
Do you want to take HCI Box for a test drive but don't have $2,681 in the budget? Me neither. How about the same box for $178?
This is the price for 730 hours
Following the general instructions from the Azure Arc Jumpstart, once you have the git repo, edit the host.bicep file:
...\azure_arc\azure_jumpstart_hcibox\bicep\host\host.bicep
In the host virtualMachine resource (resource vm 'Microsoft.Compute/virtualMachines@2022-03-01'), add the following to its properties:
priority: 'Spot'
evictionPolicy: 'Deallocate'
billingProfile: { maxPrice: -1 }
You can review different regions for either a cheaper price per hour or a lower eviction rate (a quick way to compare prices is sketched below).
$0.24393 per hour × 730 hours ≈ $178
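One way to compare spot prices across regions is the public Azure Retail Prices API. A hedged sketch below: the VM size and region are purely illustrative (substitute the SKU the HCIBox template actually deploys), and jq is assumed to be installed:
# Query spot pricing from the (unauthenticated) Retail Prices API
curl -sG "https://prices.azure.com/api/retail/prices" \
  --data-urlencode "\$filter=serviceName eq 'Virtual Machines' and armRegionName eq 'eastus' and skuName eq 'E32s v5 Spot' and priceType eq 'Consumption'" \
  | jq '.Items[] | {skuName, armRegionName, retailPrice, unitOfMeasure}'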
If you are eligible for Hybrid Use Benefits through your EA, or have licenses, you can also enable HUB in the Bicep template under the virtual machine properties:
licenseType: 'Windows_Server'
Code changes
...
resource vm 'Microsoft.Compute/virtualMachines@2022-03-01' = {
  name: vmName
  location: location
  tags: resourceTags
  properties: {
    licenseType: 'Windows_Server'
    priority: 'Spot'
    evictionPolicy: 'Deallocate'
    billingProfile: {
      maxPrice: -1
    }
    ...
Good luck, enjoy HCI’ing
Importing Root CA to Azure Stack Linux VM at provisioning time.
Deploying Linux VMs on Azure Stack Hub with Enterprise CA Certificates? Here's a Solution!
When deploying Linux VMs on Azure Stack Hub in a corporate environment, you may encounter issues with TLS endpoint certificates signed by an internal Enterprise CA. In this post, we'll explore a technique for importing the root CA certificate into the truststore of your Linux VMs, enabling seamless access to TLS endpoints. We'll also show you how to use Terraform to automate the process, including provisioning a VM, importing the CA certificate, and running a custom script.
In a loose continuation of my previous post on using Terraform with Azure Stack Hub, I describe a technique for those deploying Linux VMs in an environment where an Enterprise CA has been used to sign the endpoint SSL certs.
Problem statement
Normally, when a trusted third-party Certificate Authority is used to sign the TLS endpoint certs, the root/intermediate/signing CA public certificate is already present in the OS truststore, so you don't have to add it manually. This means you should be able to access TLS endpoints (https sites) without errors being thrown.
As Azure Stack Hub is typically deployed in corporate environments, many use an internal Enterprise CA or self-signed CA to create the mandatory certificates for the public endpoints. The devices accessing services hosted on ASH should have the internal enterprise root CA public cert in the local trusted cert store, so there will be no problems from the client side.
The problem, however, is that if you deploy Marketplace VMs (e.g. images you've downloaded as Marketplace items), they won't have your signing root CA in their truststore. This is an issue for automation, as typically the install script is uploaded to a Storage Account, which the Azure Linux VM Agent downloads and then runs. If the storage account endpoint TLS certificate is untrusted, an error is thrown and you can't run your script :(
Importing the root CA into the truststore
If you're building VMs, there are two options to ensure that the internal root CA is baked into the OS at provisioning time:
- create a custom image and publish it to the marketplace. Import the root CA and use the generalized VHD as the base OS disk.
- run a command at build time to import the root CA
The first option is quite involved, so I prefer the second :) Thankfully, the command can be distilled to a one-liner (the first below is for Debian/Ubuntu-based distros, the second for RHEL-based):
sudo cp /var/lib/waagent/Certificates.pem /usr/local/share/ca-certificates/ash-ca-trust.crt && sudo update-ca-certificates
sudo cp /var/lib/waagent/Certificates.pem /etc/pki/ca-trust/source/anchors/ && sudo update-ca-trust
The Azure Linux VM Agent has a copy of the root CA in the waagent directory, hence making the one-liner possible.
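Once the cert is imported, a quick way to prove the trust chain works from inside the VM is to hit any of your Hub's TLS endpoints with curl; the endpoint below is purely a placeholder for your own storage account blob endpoint:
# Hypothetical endpoint - replace with your storage account's blob endpoint on your Hub
ENDPOINT="https://mystorageaccount.blob.region.azurestack.contoso.com/"
# curl exits 0 when the TLS handshake succeeds (whatever the HTTP status), 60 on an untrusted certificate
curl -sI "${ENDPOINT}" > /dev/null && echo "certificate trusted" || echo "TLS validation failed"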
Using Terraform
So, we want to provision a VM and then run a script once the VM is up and running to configure it. We need to import the root CA before we can download the script and run it. It's all fairly straightforward, but we do have one consideration to make.
Using the azurestack_virtual_machine_extension resource, we need to define the publisher and type. Typically this would be:
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
This would allow us to run a command (we don't have to use a script!):
settings = <<SETTINGS
{
"commandToExecute": "sudo cp /var/lib/waagent/Certificates.pem /etc/pki/ca-trust/source/anchors/ && sudo update-ca-trust"
}
SETTINGS
In theory, we can deploy another VM extension which has a dependency on the CA import resource being completed. That is correct, but we are restricted to deploying only one CustomScript extension type per VM, otherwise running terraform plan will fail and tell us so. We need to find an alternative type which will achieve the same objective. Here's where we can use CustomScriptForLinux. It's essentially the same as the CustomScript type, but it allows us to get around the restriction.
Here's how it would look:
locals {
vm_name = "example-machine"
storage_account_name = "assets"
}
resource "azurestack_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurestack_virtual_network" "example" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = azurestack_resource_group.example.location
resource_group_name = azurestack_resource_group.example.name
}
resource "azurestack_subnet" "example" {
name = "internal"
resource_group_name = azurestack_resource_group.example.name
virtual_network_name = azurestack_virtual_network.example.name
address_prefix = "10.0.2.0/24"
}
resource "azurestack_network_interface" "example" {
name = "example-nic"
location = azurestack_resource_group.example.location
resource_group_name = azurestack_resource_group.example.name
ip_configuration {
name = "internal"
subnet_id = azurestack_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "tls_private_key" "ssh_key" {
algorithm = "RSA"
rsa_bits = 4096
}
# Storage account to store the custom script
resource "azurestack_storage_account" "vm_sa" {
name = local.storage_account_name
resource_group_name = azurestack_resource_group.example.name
location = azurestack_resource_group.example.location
account_tier = "Standard"
account_replication_type = "LRS"
}
# the container to store the custom script
resource "azurestack_storage_container" "assets" {
name = "assets"
storage_account_name = azurestack_storage_account.vm_sa.name
container_access_type = "private"
}
# upload the script to the storage account (located in same dir as the main.tf)
resource "azurestack_storage_blob" "host_vm_install" {
name = "install_host_vm.sh"
storage_account_name = azurestack_storage_account.vm_sa.name
storage_container_name = azurestack_storage_container.assets.name
type = "Block"
source = "install_host_vm.sh"
}
# Create the VM
resource "azurestack_virtual_machine" "example" {
name = "example-machine"
resource_group_name = azurestack_resource_group.example.name
location = azurestack_resource_group.example.location
vm_size = "Standard_F2"
network_interface_ids = [
azurestack_network_interface.example.id,
]
os_profile {
computer_name = local.vm_name
admin_username = "adminuser"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/adminuser/.ssh/authorized_keys"
key_data = tls_private_key.ssh_key.public_key_openssh
}
}
storage_image_reference {
publisher = "Canonical"
offer = "0001-com-ubuntu-server-jammy"
sku = "22_04-lts"
version = "latest"
}
storage_os_disk {
name = "${local.vm_name}-osdisk"
create_option = "FromImage"
caching = "ReadWrite"
managed_disk_type = "Standard_LRS"
os_type = "Linux"
disk_size_gb = 60
}
}
# import the CA certificate to truststore
resource "azurestack_virtual_machine_extension" "import_ca_bundle" {
name = "import_ca_bundle"
virtual_machine_id = azurestack_virtual_machine.example.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScriptForLinux"
type_handler_version = "2.0"
depends_on = [
azurestack_virtual_machine.example
]
protected_settings = <<PROTECTED_SETTINGS
{
"commandToExecute": "sudo /var/lib/waagent/Certificates.pem /usr/local/share/ca-certificates/ash-ca-trust.crt && sudo update-ca-certificates"
}
PROTECTED_SETTINGS
}
# install the custom script using different extension type
resource "azurestack_virtual_machine_extension" "install_vm_config" {
name = "install_vm_config"
virtual_machine_id = azurestack_virtual_machine.example.id
publisher = "Microsoft.Azure.Extensions"
type = "CustomScript"
type_handler_version = "2.0"
depends_on = [
azurestack_virtual_machine_extension.import_ca_bundle
]
settings = <<SETTINGS
{
"fileUris": "${azurestack_storage_blob.host_vm_install.id}"
}
SETTINGS
protected_settings = <<PROTECTED_SETTINGS
{
"storageAccountName": "${azurestack_storage_account.vm_sa.name}",
"storageAccountKey": "${azurestack_storage_account.vm_sa.primary_access_key}"
"commandToExecute": "bash install.sh"
}
PROTECTED_SETTINGS
}
Thanks for reading and I hope it's given some inspiration!
Terraform with Azure Stack Hub - Creating a VM with multiple data disks
I've recently been working with Azure Stack Hub (ASH) and needed to create some VMs with a variable number of managed data disks. It's not actually as straightforward as it should be, so here's how I achieved it.
azurerm vs. azurestack Providers
Due to differences with the ARM management endpoints for Azure and Azure Stack Hub, Hashicorp provide separate providers for each system. If anyone has used ASH, they will know that the resource providers available are a subset of Azure and are typically an older version, hence the need for different providers.
An interesting thing to check out is how often the providers are updated.
[Screenshots comparing the release history of the azurerm and azurestack providers]
As you can see, the azurerm provider is regularly maintained, whereas azurestack is not. Why's this relevant? Well, if we want to use Terraform as our infra-as-code tool, then we have to work within the limitations.
Deploying a VM with a variable number of managed data disks
With the azurerm provider, this is quite straightforward:
- Create Network interface
- Create Managed Disk(s)
- Create VM
- Attach Managed data disks to VM
- (Optional) Run the CustomScript extension on the VM to configure the running VM
locals {
data_disk_count = 4
}
resource "azurerm_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurerm_virtual_network" "example" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
}
resource "azurerm_subnet" "example" {
name = "internal"
resource_group_name = azurerm_resource_group.example.name
virtual_network_name = azurerm_virtual_network.example.name
address_prefixes = ["10.0.2.0/24"]
}
resource "azurerm_network_interface" "example" {
name = "example-nic"
location = azurerm_resource_group.example.location
resource_group_name = azurerm_resource_group.example.name
ip_configuration {
name = "internal"
subnet_id = azurerm_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "tls_private_key" "ssh_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "azurerm_linux_virtual_machine" "example" {
name = "example-machine"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
size = "Standard_F2"
admin_username = "adminuser"
network_interface_ids = [
azurerm_network_interface.example.id,
]
admin_ssh_key {
username = "adminuser"
public_key = tls_private_key.ssh_key.public_key_openssh
}
os_disk {
caching = "ReadWrite"
storage_account_type = "Standard_LRS"
}
source_image_reference {
publisher = "Canonical"
offer = "0001-com-ubuntu-server-jammy"
sku = "22_04-lts"
version = "latest"
}
}
resource "azurerm_managed_disk" "example" {
count = local.data_disk_count
name = "${azurerm_linux_virtual_machine.example.name}-data-${count.index}"
resource_group_name = azurerm_resource_group.example.name
location = azurerm_resource_group.example.location
storage_account_type = "Premium_LRS"
create_option = "Empty"
disk_size_gb = 256
}
resource "azurerm_virtual_machine_data_disk_attachment" "example" {
depends_on = [
azurerm_managed_disk.example,
azurerm_linux_virtual_machine.example
]
count = local.data_disk_count
managed_disk_id = azurerm_managed_disk.example[count.index].id
virtual_machine_id = azurerm_linux_virtual_machine.example.id
lun = count.index
caching = "ReadWrite"
}
resource "null_resource" "output_ssh_key" {
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = "echo '${tls_private_key.ssh_key.private_key_pem}' > ./${azurerm_linux_virtual_machine.example.name}.pem"
}
}
The code above uses the azurerm_virtual_machine_data_disk_attachment resource. When using azurerm_linux_virtual_machine, this is the only option available to us. Reading the documentation notes:
⚠️ NOTE: Data Disks can be attached either directly on the azurerm_virtual_machine resource, or using the azurerm_virtual_machine_data_disk_attachment resource - but the two cannot be used together. If both are used against the same Virtual Machine, spurious changes will occur.
There's no way to attach data disks directly within the azurerm_linux_virtual_machine resource itself.
If we check the resources available with the azurestack provider, we'll see that we can't use the above technique, as an equivalent of azurerm_virtual_machine_data_disk_attachment does not exist.
That means the only option is to use the azurestack_virtual_machine resource and attach the disks directly when the VM is created.
Implementation for Azure Stack Hub
We could just create multiple storage_data_disk blocks within the azurestack_virtual_machine resource, but we want to account for a variable number of disks.
To do this we need to use the dynamic blocks capability to generate nested blocks, as the count meta-argument does not work in this instance.
I first set up a map object with the name of each data disk and its LUN, as can be seen in the locals block in the code below. This map of objects can then be iterated through to generate the nested blocks using the for_each meta-argument.
The code block in question:
dynamic "storage_data_disk" {
for_each = {for count, value in local.disk_map : count => value}
content {
name = storage_data_disk.value.disk_name
managed_disk_type = "Standard_LRS"
create_option = "Empty"
disk_size_gb = 256
lun = storage_data_disk.value.lun
}
}
Example
locals {
data_disk_count = 4
vm_name = "example-machine"
disk_map = [
for i in range(local.data_disk_count) : {
disk_name = format("%s_disk_%02d", local.vm_name, i+1)
lun = i
}
]
}
resource "azurestack_resource_group" "example" {
name = "example-resources"
location = "West Europe"
}
resource "azurestack_virtual_network" "example" {
name = "example-network"
address_space = ["10.0.0.0/16"]
location = azurestack_resource_group.example.location
resource_group_name = azurestack_resource_group.example.name
}
resource "azurestack_subnet" "example" {
name = "internal"
resource_group_name = azurestack_resource_group.example.name
virtual_network_name = azurestack_virtual_network.example.name
address_prefix = "10.0.2.0/24"
}
resource "azurestack_network_interface" "example" {
name = "example-nic"
location = azurestack_resource_group.example.location
resource_group_name = azurestack_resource_group.example.name
ip_configuration {
name = "internal"
subnet_id = azurestack_subnet.example.id
private_ip_address_allocation = "Dynamic"
}
}
resource "tls_private_key" "ssh_key" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "azurestack_virtual_machine" "example" {
name = "example-machine"
resource_group_name = azurestack_resource_group.example.name
location = azurestack_resource_group.example.location
vm_size = "Standard_F2"
network_interface_ids = [
azurestack_network_interface.example.id,
]
os_profile {
computer_name = local.vm_name
admin_username = "adminuser"
}
os_profile_linux_config {
disable_password_authentication = true
ssh_keys {
path = "/home/adminuser/.ssh/authorized_keys"
key_data = tls_private_key.ssh_key.public_key_openssh
}
}
storage_image_reference {
publisher = "Canonical"
offer = "0001-com-ubuntu-server-jammy"
sku = "22_04-lts"
version = "latest"
}
storage_os_disk {
name = "${local.vm_name}-osdisk"
create_option = "FromImage"
caching = "ReadWrite"
managed_disk_type = "Standard_LRS"
os_type = "Linux"
disk_size_gb = 60
}
dynamic "storage_data_disk" {
for_each = {for count, value in local.disk_map : count => value}
content {
name = storage_data_disk.value.disk_name
managed_disk_type = "Standard_LRS"
create_option = "Empty"
disk_size_gb = 256
lun = storage_data_disk.value.lun
}
}
}
resource "null_resource" "output_ssh_key" {
triggers = {
always_run = "${timestamp()}"
}
provisioner "local-exec" {
command = "echo '${tls_private_key.ssh_key.private_key_pem}' > ./${azurestack_virtual_machine.example.name}.pem"
}
}
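From there it's the standard Terraform workflow (assuming your azurestack provider block is already configured with the Hub's management endpoint and credentials):
terraform init      # pulls the azurestack and tls providers
terraform plan -out=tfplan
terraform apply tfplan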