HCI Box on a Budget. Leverage Azure Spot & Hybrid Use Benefits. Up to 93% savings.

Do you want to take HCI Box for a test drive but don't have $2,681 in the budget? Me neither. How about the same box for $178?

That's the price for 730 hours of runtime (roughly one month).

Follow the general deployment instructions from the Azure Arc Jumpstart project.
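If you don't already have the repo locally, clone it first (this is the public Jumpstart repo; use your own fork if you prefer):

git clone https://github.com/microsoft/azure_arc.git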

Once you have cloned the Git repo, edit the host.bicep file:

...\azure_arc\azure_jumpstart_hcibox\bicep\host\host.bicep

Add the following to the properties of the host virtual machine (the resource vm 'Microsoft.Compute/virtualMachines@2022-03-01'):

priority: 'Spot'
evictionPolicy: 'Deallocate'
billingProfile: {
  maxPrice: -1
}

You can review different regions for either a cheaper price per hour or a lower eviction rate.

$0.24393 per hour × 730 hours ≈ $178
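If you want to compare regions programmatically, one option is the public Azure Retail Prices API. A minimal sketch using curl and jq, with a placeholder SKU (substitute the VM size your HCIBox host actually uses):

# List consumption prices for a given VM size across regions, keeping only Spot meters
curl -s -G "https://prices.azure.com/api/retail/prices" \
  --data-urlencode "\$filter=serviceName eq 'Virtual Machines' and priceType eq 'Consumption' and armSkuName eq 'Standard_E8s_v5'" \
  | jq '.Items[] | select(.skuName | test("Spot")) | {armRegionName, retailPrice}'

Eviction rates aren't exposed there, so check those in the portal when you select the Spot option.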

If you are eligible for Hybrid Use Benefits through your EA, or have existing licenses, you can also enable HUB in the Bicep template under the virtual machine properties:

licenseType: 'Windows_Server'

Code changes

...
resource vm 'Microsoft.Compute/virtualMachines@2022-03-01' = {
  name: vmName
  location: location
  tags: resourceTags
  properties: {
    licenseType: 'Windows_Server'
    priority: 'Spot'
    evictionPolicy: 'Deallocate'
    billingProfile: {
        maxPrice: -1
    }
...

Good luck, enjoy HCI’ing

Importing Root CA to Azure Stack Linux VM at provisioning time.

Deploying Linux VMs on Azure Stack Hub with Enterprise CA Certificates? Here's a Solution!

When deploying Linux VMs on Azure Stack Hub in a corporate environment, you may encounter issues with TLS endpoint certificates signed by an internal Enterprise CA. In this post, we'll explore a technique for importing the root CA certificate into the truststore of your Linux VMs, enabling seamless access to TLS endpoints. We'll also show you how to use Terraform to automate the process, including provisioning a VM, importing the CA certificate, and running a custom script.
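To give a flavour of the approach, here's a minimal cloud-init sketch of the idea, assuming an Ubuntu image (RHEL-family distros use /etc/pki/ca-trust/source/anchors/ and update-ca-trust instead). In the Terraform walkthrough, config like this can be passed to the VM as custom data:

#cloud-config
# Sketch: add an internal enterprise root CA to the Ubuntu trust store at first boot.
# The file name below is illustrative - paste your own PEM-encoded root CA.
write_files:
  - path: /usr/local/share/ca-certificates/corp-root-ca.crt
    permissions: "0644"
    content: |
      -----BEGIN CERTIFICATE-----
      <your enterprise root CA certificate>
      -----END CERTIFICATE-----
runcmd:
  - update-ca-certificates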

Terraform with Azure Stack Hub - Creating a VM with multiple data disks

I've recently been working with Azure Stack Hub (ASH) and needed to create some VMs with a variable number of managed data disks. It's not actually as straightforward as it should be, so here's how I achieved it.

azurerm vs. azurestack Providers

Due to differences between the ARM management endpoints for Azure and Azure Stack Hub, HashiCorp provides separate providers for each system. Anyone who has used ASH will know that the available resource providers are a subset of Azure's and are typically an older version, hence the need for different providers.

An interesting thing to check out is how often the providers are updated.

(Compare the release histories of the azurerm and azurestack providers on the Terraform Registry.)

The azurerm provider is regularly maintained, whereas azurestack is not. Why is this relevant? Well, if we want to use Terraform as our infra-as-code tool, then we have to work within those limitations.

Deploying a VM with a variable number of managed data disks

With the azurerm provider, this is quite straightforward:

  1. Create Network interface
  2. Create Managed Disk(s)
  3. Create VM
  4. Attach Managed data disks to VM
  5. (Optional) Run the CustomScript extension to configure the running VM

locals {
  data_disk_count = 4
}

resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "example" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "tls_private_key" "ssh_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "azurerm_linux_virtual_machine" "example" {
  name                = "example-machine"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  size                = "Standard_F2"
  admin_username      = "adminuser"
  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]

  admin_ssh_key {
    username   = "adminuser"
    public_key = tls_private_key.ssh_key.public_key_openssh 
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}

resource "azurerm_managed_disk" "example" {
  count                = local.data_disk_count
  name                 = "${azurerm_linux_virtual_machine.example.name}-data-${count.index}"
  resource_group_name  = azurerm_resource_group.example.name
  location             = azurerm_resource_group.example.location
  storage_account_type = "Premium_LRS"
  create_option        = "Empty"
  disk_size_gb         = 256
}

resource "azurerm_virtual_machine_data_disk_attachment" "example" {
  depends_on = [
    azurerm_managed_disk.example,
    azurerm_linux_virtual_machine.example
  ]
  count              = local.data_disk_count
  managed_disk_id    = azurerm_managed_disk.example[count.index].id
  virtual_machine_id = azurerm_linux_virtual_machine.example.id
  lun                = count.index
  caching            = "ReadWrite"
}


resource "null_resource" "output_ssh_key" { 
  triggers = {
    always_run = "${timestamp()}"
  }
   provisioner "local-exec" {
    command = "echo '${tls_private_key.ssh_key.private_key_pem}' > ./${azurerm_linux_virtual_machine.example.name}.pem"
  }
}

The code above uses the azurerm_virtual_machine_data_disk_attachment resource. When using azurerm_linux_virtual_machine, this is the only option available to us. The documentation notes:

⚠️ NOTE:

Data Disks can be attached either directly on the azurerm_virtual_machine resource, or using the azurerm_virtual_machine_data_disk_attachment resource - but the two cannot be used together. If both are used against the same Virtual Machine, spurious changes will occur.

There's no way to attach data disks directly within the azurerm_linux_virtual_machine resource, so the attachment resource is the only route available.

If we check the resources available with the azurestack provider, we'll see that we can't use the above technique, as an equivalent of azurerm_virtual_machine_data_disk_attachment does not exist.


That means the only option is to use the azurestack_virtual_machine resource and attach the disks directly when the VM is created.

Implementation for Azure Stack Hub

We could just create multiple storage_data_disk blocks within the azurestack_virtual_machine resource, but we want to account for a variable number of disks.
To do this we need to use dynamic blocks to generate the nested blocks, as the count meta-argument does not work in this instance.

I first set up a list of objects with the name and LUN of each data disk, as can be seen in the locals block in the code below.

This list of objects can then be iterated through to generate the nested blocks, using the for_each argument of the dynamic block.

The code block in question:

dynamic "storage_data_disk" {
    for_each = {for count, value in local.disk_map :   count => value}
    content {
      name              = storage_data_disk.value.disk_name
      managed_disk_type = "Standard_LRS"
      create_option     = "Empty"
      disk_size_gb      = 256
      lun               = storage_data_disk.value.lun
    }
  }

Example

locals {
  data_disk_count = 4
  vm_name         = "example-machine"
  disk_map = [
    for i in range(local.data_disk_count) :  {
      disk_name = format("%s_disk_%02d", local.vm_name, i+1)
      lun  = i 
    }
  ]
}

resource "azurestack_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurestack_virtual_network" "example" {
  name                = "example-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurestack_resource_group.example.location
  resource_group_name = azurestack_resource_group.example.name
}

resource "azurestack_subnet" "example" {
  name                 = "internal"
  resource_group_name  = azurestack_resource_group.example.name
  virtual_network_name = azurestack_virtual_network.example.name
  address_prefix       = "10.0.2.0/24"
}

resource "azurestack_network_interface" "example" {
  name                = "example-nic"
  location            = azurestack_resource_group.example.location
  resource_group_name = azurestack_resource_group.example.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurestack_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "tls_private_key" "ssh_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "azurestack_virtual_machine" "example" {
  name                  = "example-machine"
  resource_group_name   = azurestack_resource_group.example.name
  location              = azurestack_resource_group.example.location
  vm_size               = "Standard_F2"
  network_interface_ids = [
    azurestack_network_interface.example.id,
  ]
  
  os_profile {
    computer_name  = local.vm_name
    admin_username = "adminuser"
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path     = "/home/adminuser/.ssh/authorized_keys"
      key_data = tls_private_key.ssh_key.public_key_openssh
    }
  }

  storage_image_reference {
    publisher         = "Canonical"
    offer             = "0001-com-ubuntu-server-jammy"
    sku               = "22_04-lts"
    version           = "latest"
  }
  storage_os_disk {
    name              = "${local.vm_name}-osdisk"
    create_option     = "FromImage"
    caching           = "ReadWrite"
    managed_disk_type = "Standard_LRS"
    os_type           = "Linux"
    disk_size_gb      = 60
  }

  dynamic "storage_data_disk" {
    for_each = {for count, value in local.disk_map :   count => value}
    content {
      name              = storage_data_disk.value.disk_name
      managed_disk_type = "Standard_LRS"
      create_option     = "Empty"
      disk_size_gb      = 256
      lun               = storage_data_disk.value.lun
    }
  }


resource "null_resource" "output_ssh_key" { 
  triggers = {
    always_run = "${timestamp()}"
  }
   provisioner "local-exec" {
    command = "echo '${tls_private_key.ssh_key.private_key_pem}' > ./${azurestack_linux_virtual_machine.example.name}.pem"
  }
}

AKS Edge Essentials options for persistent storage

Out of the box, AKS Edge Essentials does not have the capability to host persistent storage. That's OK if you're running stateless apps, but more often than not you'll need to run stateful apps. There are a couple of options you can use to enable this:

  1. Manually create a PersistentVolume using the 'manual' storage class for local storage on the node
  2. Deploy a StorageClass to dynamically provision the persistent storage

First, I'm checking that no storage classes already exist. This is a newly deployed AKS-EE cluster, so I'm just double-checking:

kubectl get storageclasses --all-namespaces

Next, check that no persistent volumes exist:

kubectl get pv --all-namespaces

Manual storage class method

Create a local manual persistent volume

Create a YAML file with the following config: (local-host-pv.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Now deploy it:

kubectl apply -f .\local-host-pv.yaml
kubectl get pv --all-namespaces

Create persistent volume claim

Create a YAML file with the following config: (pv-claim.yaml)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Now deploy it:

kubectl apply -f .\pv-claim.yaml
kubectl get pvc --all-namespaces

The problem with the above approach!

The issue with the first method is that the persistent volume has to be created manually each time. Helm charts and deployment YAML files typically expect a default StorageClass to handle the provisioning, so that you don't have to refactor the config each time and the code stays portable.

As an example of the problem, I tried to deploy Keycloak using a Helm chart; it uses a PostgreSQL database, which needs a PVC.
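The deployment was along these lines (using the Bitnami chart here is my assumption; any chart that requests a PVC will show the same behaviour):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install keycloak bitnami/keycloak --namespace keycloak --create-namespace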

Using kubectl describe pvc -n keycloak, I can see the underlying problem; the persistent volume claim stays in Pending because there are no persistent volumes or StorageClasses available:

Create a Local Path provisioner StorageClass

So, to fix this, we need to deploy a storage class to our cluster. For this example, I'm using the Local Path provisioner sample from the AKS-Edge repo:

kubectl apply -f https://raw.githubusercontent.com/Azure/AKS-Edge/main/samples/storage/local-path-provisioner/local-path-storage.yaml

Once deployed, you can check that it exists as a StorageClass:

kubectl get sc
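Many charts expect the class to be the default, so you may want to mark it as the default StorageClass. A small sketch, assuming the sample manifest created a class named local-path:

kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'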

Once the storage class is available and I deploy the Helm chart again, the persistent volume and claim are created successfully:

kubectl get pv
kubectl get pvc --all-namespaces

Conclusion

My advice is to deploy a StorageClass as part of the AKS Edge Essentials installation, to deal with provisioning volumes and claims for persistent data. As well as the Local Path provisioner, there is an example that uses NFS storage binding.

Interesting changes to Arc Agent 1.34 with expanded detected properties

Microsoft just pushed out a change in Azure Arc Connected Agent 1.34, and with it comes some enrichment of hybrid servers' detected properties.

This is what the properties looked like prior to the update.

Agent 1.33 and earlier

Okay… so what’s new and different?

New detected properties for Azure Arc Connected Agent 1.34

serialNumber, processorNames and totalPhysicalMemory. The following Azure Resource Graph query surfaces them:

resources
| where ['type'] == "microsoft.hybridcompute/machines" 
| extend processorCount = properties.detectedProperties.processorCount,
    serialNumber = properties.detectedProperties.serialNumber,
    manufacturer= properties.detectedProperties.manufacturer,
    processorNames= properties.detectedProperties.processorNames,
    logicalCoreCount = properties.detectedProperties.logicalCoreCount,
    smbiosAssetTag = properties.detectedProperties.smbiosAssetTag,
    totalPhysicalMemoryInBytes = properties.detectedProperties.totalPhysicalMemoryInBytes,
    totalPhysicalMemoryInGigabytes = properties.detectedProperties.totalPhysicalMemoryInGigabytes
| project name,serialNumber,logicalCoreCount,manufacturer,processorCount,processorNames,totalPhysicalMemoryInBytes,totalPhysicalMemoryInGigabytes
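If you prefer the command line to Resource Graph Explorer, the same kind of query can be run with the az CLI (this assumes the resource-graph extension is installed):

az extension add --name resource-graph
az graph query -q "resources | where type == 'microsoft.hybridcompute/machines' | project name, properties.detectedProperties.serialNumber, properties.detectedProperties.totalPhysicalMemoryInGigabytes"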

This makes it simple for organizations to collect processor, serial number and memory information via the Azure Arc infrastructure. It can be used for things like consolidation and migration planning, decommissioning aging hardware, or even warranty lookups if you don't have a current hardware CMDB.