Azure Container Storage for Azure Arc Edge Volumes - deploying on Azure Local AKS

Late last year, Microsoft released the latest version of the snappily titled ‘Azure Container Storage enabled by Azure Arc’ (ACSA), which is a solution to make it easier to get data from your container workloads to Azure Blob Storage. You can read the overview here, but in essence it’s pretty configurable, allowing you to set up locally resilient storage for your container apps, or to use it for cloud ingest; sending data to Azure and purging it locally once the transfer is confirmed.

The purpose of this post is to give an example of the steps needed to get this set up on an Azure Local AKS cluster.

If you have an existing cluster you want to deploy to, take heed of the pre-reqs:

Single-node or 2-node cluster

per node:

  • 4 CPUs
  • 16 GB RAM

Multi-node cluster

per node:

  • 8 CPUs
  • 32 GB RAM

16 GB RAM should be fine for smaller setups, but 32 GB is recommended for more active scenarios.
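
If you're deploying to an existing cluster and want to confirm the nodes meet these requirements, a quick check with kubectl (assuming you already have a kubeconfig for the cluster):

# Show CPU and memory capacity per node
kubectl get nodes -o custom-columns='NAME:.metadata.name,CPU:.status.capacity.cpu,MEMORY:.status.capacity.memory'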

Prepare AKS enabled by Azure Arc cluster

Make sure you have the latest AZ CLI extensions installed.

Azure Arc Kubernetes Extensions Documentation

# Make sure the az extensions are installed
az extension add --name connectedk8s --upgrade
az extension add --name k8s-extension --upgrade
az extension add -n k8s-runtime --upgrade
az extension add --name aksarc --upgrade

# Login to Azure
az login
az account set --subscription <subscription-id>

As of the time of writing, you can confirm the versions of the installed extensions with az extension list:
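
# List the installed az CLI extensions and their versions
az extension list --query "[].{Name:name, Version:version}" --output table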

If you have a fresh cluster, you will need to install the load balancer.

Install MetalLB

# Check you have the relevant Graph permissions
az ad sp list --filter "appId eq '087fca6e-4606-4d41-b3f6-5ebdf75b8b4c'" --output json 

# If that command returns an empty result, use the alternative method: https://learn.microsoft.com/en-us/azure/aks/aksarc/deploy-load-balancer-cli#option-2-enable-arc-extension-for-metallb-using-az-k8s-extension-add-command


# Enable the extension
RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME" # name of the resource group where the AKS Arc cluster is deployed
CLUSTER_NAME="YOUR_CLUSTER_NAME"
AKS_ARC_CLUSTER_URI=$(az aksarc show --resource-group ${RESOURCE_GROUP_NAME} --name ${CLUSTER_NAME} --query id -o tsv | cut -d'/' -f1-9)

az k8s-runtime load-balancer enable --resource-uri $AKS_ARC_CLUSTER_URI

# Deploy the Load Balancer

LB_NAME="al-lb-01" # must be lowercase, alphanumeric, '-' or '.' (RFC 1123)
IP_RANGE="192.168.1.100-192.168.1.150"
ADVERTISE_MODE="ARP" # Options: ARP, BGP, Both

az k8s-runtime load-balancer create --load-balancer-name $LB_NAME \
--resource-uri $AKS_ARC_CLUSTER_URI \
--addresses $IP_RANGE \
--advertise-mode $ADVERTISE_MODE
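
To sanity-check the rollout, the commands below (a rough check; pod, namespace and CRD names can vary with the MetalLB extension version) should show the MetalLB pods and the address pool that was just created:

# MetalLB pods deployed by the Arc extension (namespace varies by version)
kubectl get pods -A | grep -i metallb

# The address pool created from IP_RANGE (assumes the MetalLB IPAddressPool CRD is installed)
kubectl get ipaddresspools.metallb.io -A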

Open Service Mesh (OSM) is used to deliver the ACSA capabilities. To deploy it on the Arc-connected AKS cluster, use the following commands:

RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"

az k8s-extension create --resource-group $RESOURCE_GROUP_NAME \
--cluster-name $CLUSTER_NAME \
--cluster-type connectedClusters \
--extension-type Microsoft.openservicemesh \
--scope cluster \
--name osm \
--config "osm.osm.featureFlags.enableWASMStats=false" \
--config "osm.osm.enablePermissiveTrafficPolicy=false" \
--config "osm.osm.configResyncInterval=10s" \
--config "osm.osm.osmController.resource.requests.cpu=100m" \
--config "osm.osm.osmBootstrap.resource.requests.cpu=100m" \
--config "osm.osm.injector.resource.requests.cpu=100m"

Deploy IoT Operations Dependencies

The official documentation says to deploy the IoT Operations extension, specifically the cert-manager component. It doesn't say whether you can skip this if you're not using Azure IoT Operations, so I deployed it anyway.

RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"

az k8s-extension create --cluster-name "${CLUSTER_NAME}" \
--name "${CLUSTER_NAME}-certmgr" \
--resource-group "${RESOURCE_GROUP_NAME}" \
--cluster-type connectedClusters \
--extension-type microsoft.iotoperations.platform \
--scope cluster \
--release-namespace cert-manager
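
A quick check that cert-manager is up, assuming it landed in the cert-manager release namespace specified above:

# cert-manager pods should be Running before deploying the container storage extension
kubectl get pods -n cert-manager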

Deploy the container storage extension

RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"

az k8s-extension create --resource-group "${RESOURCE_GROUP_NAME}" \
--cluster-name "${CLUSTER_NAME}" \
--cluster-type connectedClusters \
--name azure-arc-containerstorage \
--extension-type microsoft.arc.containerstorage
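
As with the other extensions, you can check the provisioning state before carrying on:

# Should return Succeeded once the extension has finished installing
az k8s-extension show --resource-group "${RESOURCE_GROUP_NAME}" \
--cluster-name "${CLUSTER_NAME}" \
--cluster-type connectedClusters \
--name azure-arc-containerstorage \
--query provisioningState -o tsv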

Now it's time to deploy the edge storage configuration. As my cluster is deployed on Azure Local AKS and is connected to Azure Arc, I went with the Arc config option detailed in the docs.

cat <<EOF > edgeConfig.yaml
apiVersion: arccontainerstorage.azure.net/v1
kind: EdgeStorageConfiguration
metadata:
  name: edge-storage-configuration
spec:
  defaultDiskStorageClasses:
    - "default"
    - "local-path"
  serviceMesh: "osm"
EOF

kubectl apply -f "edgeConfig.yaml"

Once it's deployed, you can list the storage classes available to the cluster:

kubectl get storageclass

Setting up cloud ingest volumes

Now we're ready to configure permissions on the Azure Storage Account so that the Edge Volume provider has access to upload data to the blob container.

Official Documentation

You can use the script below to get the extension identity and then assign the necessary role to the storage account:

RESOURCE_GROUP_NAME="YOUR_RESOURCE_GROUP_NAME"
CLUSTER_NAME="YOUR_CLUSTER_NAME"
export EXTENSION_TYPE=${1:-"microsoft.arc.containerstorage"}
EXTENSION_IDENTITY_PRINCIPAL_ID=$(az k8s-extension list \
--cluster-name ${CLUSTER_NAME} \
--resource-group ${RESOURCE_GROUP_NAME} \
--cluster-type connectedClusters \
| jq --arg extType ${EXTENSION_TYPE} 'map(select(.extensionType == $extType)) | .[] | .identity.principalId' -r)

STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME"
STORAGE_ACCOUNT_RESOURCE_GROUP="YOUR_STORAGE_ACCOUNT_RESOURCE_GROUP"

STORAGE_ACCOUNT_ID=$(az storage account show --name ${STORAGE_ACCOUNT_NAME} --resource-group ${STORAGE_ACCOUNT_RESOURCE_GROUP} --query id --output tsv)

az role assignment create --assignee ${EXTENSION_IDENTITY_PRINCIPAL_ID} --role "Storage Blob Data Contributor" --scope ${STORAGE_ACCOUNT_ID}
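
If you want to double-check the assignment landed on the right scope:

# List role assignments for the extension identity on the storage account
az role assignment list --assignee ${EXTENSION_IDENTITY_PRINCIPAL_ID} --scope ${STORAGE_ACCOUNT_ID} --output table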

Create a deployment to test the cloud ingest volume

Now we can test transferring data from edge to cloud. I'm using the demo from Azure Arc Jumpstart: Deploy demo from Azure Arc Jumpstart

First off, create a container on the storage account to store the data from the edge volume.

export STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME"
export STORAGE_ACCOUNT_CONTAINER="fault-detection"
STORAGE_ACCOUNT_RESOURCE_GROUP="YOUR_STORAGE_ACCOUNT_RESOURCE_GROUP"

az storage container create --name ${STORAGE_ACCOUNT_CONTAINER} --account-name ${STORAGE_ACCOUNT_NAME} --auth-mode login

Next, create a file called acsa-deployment.yaml using the following content:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  ### Create a name for your PVC ###
  name: acsa-pvc
  ### Use a namespace that matched your intended consuming pod, or "default" ###
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: cloud-backed-sc
---
apiVersion: "arccontainerstorage.azure.net/v1"
kind: EdgeSubvolume
metadata:
  name: faultdata
spec:
  edgevolume: acsa-pvc
  path: faultdata # If you change this path, line 33 in deploymentExample.yaml must be updated. Don't use a preceding slash.
  auth:
    authType: MANAGED_IDENTITY
  storageaccountendpoint: "https://${STORAGE_ACCOUNT_NAME}.blob.core.windows.net/"
  container: ${STORAGE_ACCOUNT_CONTAINER}
  ingestPolicy: edgeingestpolicy-default # Optional: See the following instructions if you want to update the ingestPolicy with your own configuration
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acsa-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: acsa-webserver
  template:
    metadata:
      labels:
        app: acsa-webserver
    spec:
      containers:
        - name: acsa-webserver
          image: mcr.microsoft.com/jumpstart/scenarios/acsa_ai_webserver:1.0.0
          resources:
            limits:
              cpu: "1"
              memory: "1Gi"
            requests:
              cpu: "200m"
              memory: "256Mi"
          ports:
            - containerPort: 8000
          env:
            - name: RTSP_URL
              value: rtsp://virtual-rtsp:8554/stream
            - name: LOCAL_STORAGE
              value: /app/acsa_storage/faultdata
          volumeMounts:
            ### This name must match the volumes.name attribute below ###
            - name: blob
              ### This mountPath is where the PVC will be attached to the pod's filesystem ###
              mountPath: "/app/acsa_storage"
      volumes:
        ### User-defined 'name' that will be used to link the volumeMounts. This name must match volumeMounts.name as specified above. ###
        - name: blob
          persistentVolumeClaim:
            ### This claimName must refer to the PVC resource 'name' as defined in the PVC config. This name will match what your PVC resource was actually named. ###
            claimName: acsa-pvc

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: virtual-rtsp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: virtual-rtsp
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: virtual-rtsp
    spec:
      initContainers:
        - name: init-samples
          image: busybox
          resources:
            limits:
              cpu: "200m"
              memory: "256Mi"
            requests:
              cpu: "100m"
              memory: "128Mi"
          command:
          - wget
          - "-O"
          - "/samples/bolt-detection.mp4"
          - https://github.com/ldabas-msft/jumpstart-resources/raw/main/bolt-detection.mp4
          volumeMounts:
          - name: tmp-samples
            mountPath: /samples
      containers:
        - name: virtual-rtsp
          image: "kerberos/virtual-rtsp"
          resources:
            limits:
              cpu: "500m"
              memory: "512Mi"
            requests:
              cpu: "200m"
              memory: "256Mi"
          imagePullPolicy: Always
          ports:
            - containerPort: 8554
          env:
            - name: SOURCE_URL
              value: "file:///samples/bolt-detection.mp4"
          volumeMounts:
            - name: tmp-samples
              mountPath: /samples
      volumes:
        - name: tmp-samples
          emptyDir: { }
---
apiVersion: v1
kind: Service
metadata:
  name: virtual-rtsp
  labels:
    app: virtual-rtsp
spec:
  type: LoadBalancer
  ports:
    - port: 8554
      targetPort: 8554
      name: rtsp
      protocol: TCP
  selector:
    app: virtual-rtsp
---
apiVersion: v1
kind: Service
metadata:
  name: acsa-webserver-svc
  labels:
    app: acsa-webserver
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8000
      protocol: TCP
  selector:
    app: acsa-webserver

Once created, apply the deployment:

export STORAGE_ACCOUNT_NAME="YOUR_STORAGE_ACCOUNT_NAME" # we need to export the storage account name so envsubst can substitute it
envsubst < acsa-deployment.yaml | kubectl apply -f -

[!NOTE]
This will deploy into the default namespace.

This will create the deployment and the volumes, substituting the values for the storage account name with the variables previously set.

If you want to check the status of the edge volume, such as if it's connected or how many files are in the queue, you can use the following command:

# List the edge subvolumes
kubectl get edgesubvolume

kubectl describe edgesubvolume faultdata

Testing

Assuming everything has deployed without errors, you should be able to access the web server at its external IP address. You can find it by running:

kubectl get svc acsa-webserver-svc

Obtain the EXTERNAL-IP and port (should be 80) and use that to access the web server.
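
If you'd rather script it, something like this pulls the external IP straight out of the service (assuming MetalLB has already assigned one):

# Grab the external IP assigned by the load balancer and build the URL
WEBSERVER_IP=$(kubectl get svc acsa-webserver-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${WEBSERVER_IP}/"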

Take a look at the edge subvolume for transfer metrics:

kubectl get edgesubvolume

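To confirm data is actually arriving in the cloud, you can list the blobs in the container once a few files have been ingested (this assumes the identity you're logged in with has data-plane read access, e.g. Storage Blob Data Reader, on the account):

# List blobs uploaded to the fault-detection container
az storage blob list --account-name ${STORAGE_ACCOUNT_NAME} \
--container-name ${STORAGE_ACCOUNT_CONTAINER} \
--auth-mode login \
--query "[].{Name:name, Size:properties.contentLength}" -o table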

And that’s how simple (?!) it is to set up. As long as you’ve met the pre-reqs and set permissions properly, it’s pretty smooth to implement.

HCI Box on a Budget. Leverage Azure Spot & Hybrid Use Benefits. Up to 93% savings.

Do you want to take HCI Box for a test drive but don't have $2,681 in the budget? Me neither. How about the same box for $178?

This is the price for 730 hours

Follow the general instructions from the Azure Arc Jumpstart.

Once you have the git repo, edit the host.bicep file:

...\azure_arc\azure_jumpstart_hcibox\bicep\host\host.bicep

Add the following to the properties of the host virtual machine resource (vm 'Microsoft.Compute/virtualMachines@2022-03-01'):

priority: 'Spot'
evictionPolicy: 'Deallocate'
billingProfile: {
  maxPrice: -1
}

You can review different regions for either a cheaper price per hour or a lower eviction rate.

$0.24393 per hour * 730 hours = ~$178

If you are eligible for Hybrid Use Benefit (HUB) through your EA, or have existing licenses, you can also enable HUB in the Bicep template under the virtual machine properties:

licenseType: 'Windows_Server'

Code changes

...
resource vm 'Microsoft.Compute/virtualMachines@2022-03-01' = {
  name: vmName
  location: location
  tags: resourceTags
  properties: {
    licenseType: 'Windows_Server'
    priority: 'Spot'
    evictionPolicy: 'Deallocate'
    billingProfile: {
        maxPrice: -1
    }
...

Good luck, enjoy HCI’ing

Importing Root CA to Azure Stack Linux VM at provisioning time.

Deploying Linux VMs on Azure Stack Hub with Enterprise CA Certificates? Here's a Solution!

When deploying Linux VMs on Azure Stack Hub in a corporate environment, you may encounter issues with TLS endpoint certificates signed by an internal Enterprise CA. In this post, we'll explore a technique for importing the root CA certificate into the truststore of your Linux VMs, enabling seamless access to TLS endpoints. We'll also show you how to use Terraform to automate the process, including provisioning a VM, importing the CA certificate, and running a custom script.
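
For context, the trust store part of the technique is simple once the certificate is on the VM; on a Debian/Ubuntu image it boils down to something like the sketch below (the certificate filename is just a placeholder, and RHEL-based distros use update-ca-trust with different paths):

# Copy the enterprise root CA into the system trust store and refresh it (Debian/Ubuntu)
sudo cp enterprise-root-ca.crt /usr/local/share/ca-certificates/enterprise-root-ca.crt
sudo update-ca-certificates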

Terraform with Azure Stack Hub - Creating a VM with multiple data disks

I've recently been working with Azure Stack Hub (ASH) and needed to create some VMs with a variable number of managed data disks. It's not as straightforward as it should be, so here's how I achieved it.

azurerm vs. azurestack Providers

Due to differences with the ARM management endpoints for Azure and Azure Stack Hub, Hashicorp provide separate providers for each system. If anyone has used ASH, they will know that the resource providers available are a subset of Azure and are typically an older version, hence the need for different providers.

An interesting thing to check out is how often the providers are updated.

As you can see, the azurerm provider is regularly maintained, whereas azurestack is not. Why's this relevant? Well, if we want to use Terraform as our infra-as-code tool, then we have to work within the limitations.

Deploying a VM with a variable number of managed data disks

With the azurerm provider, this is quite straightforward:

  1. Create Network interface
  2. Create Managed Disk(s)
  3. Create VM
  4. Attach Managed data disks to VM
  5. (Optional) Run CustomScript extension on the VM to configure the running VM

locals {
  data_disk_count = 4
}
resource "azurerm_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurerm_virtual_network" "example" {
  name                = "example-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
}

resource "azurerm_subnet" "example" {
  name                 = "internal"
  resource_group_name  = azurerm_resource_group.example.name
  virtual_network_name = azurerm_virtual_network.example.name
  address_prefixes     = ["10.0.2.0/24"]
}

resource "azurerm_network_interface" "example" {
  name                = "example-nic"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "tls_private_key" "ssh_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "azurerm_linux_virtual_machine" "example" {
  name                = "example-machine"
  resource_group_name = azurerm_resource_group.example.name
  location            = azurerm_resource_group.example.location
  size                = "Standard_F2"
  admin_username      = "adminuser"
  network_interface_ids = [
    azurerm_network_interface.example.id,
  ]

  admin_ssh_key {
    username   = "adminuser"
    public_key = tls_private_key.ssh_key.public_key_openssh 
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}

resource "azurerm_managed_disk" "example" {
  count                = local.data_disk_count
  name                 = "${azurerm_linux_virtual_machine.example.name}-data-${count.index}"
  resource_group_name  = azurerm_resource_group.example.name
  location             = azurerm_resource_group.example.location
  storage_account_type = "Premium_LRS"
  create_option        = "Empty"
  disk_size_gb         = 256
}
resource "azurerm_virtual_machine_data_disk_attachment" "example" {
  depends_on = [
    azurerm_managed_disk.example,
    azurerm_linux_virtual_machine.example
  ]
  count              = local.data_disk_count
  managed_disk_id    = azurerm_managed_disk.example[count.index].id
  virtual_machine_id = azurerm_linux_virtual_machine.example.id
  lun                = count.index
  caching            = "ReadWrite"
}


resource "null_resource" "output_ssh_key" { 
  triggers = {
    always_run = "${timestamp()}"
  }
   provisioner "local-exec" {
    command = "echo '${tls_private_key.ssh_key.private_key_pem}' > ./${azurerm_linux_virtual_machine.example.name}.pem"
  }
}

The code above uses the azurerm_virtual_machine_data_disk_attachment resource. When using azurerm_linux_virtual_machine, this is the only option available to us. The documentation notes:

⚠️ NOTE:

Data Disks can be attached either directly on the azurerm_virtual_machine resource, or using the azurerm_virtual_machine_data_disk_attachment resource - but the two cannot be used together. If both are used against the same Virtual Machine, spurious changes will occur.

With azurerm_linux_virtual_machine there is no way to define the data disks inline, so the attachment resource is the only option.

If we check the resources available with the azurestack provider, we'll see that we can't use the above technique as azurerm_virtual_machine_data_disk_attachment does not exist.


That means the only option is to use azurestack_virtual_machine resource and attach the disks directly when the VM is created.

Implementation for Azure Stack Hub

We could just create multiple storage_data_disk blocks within the azurestack_virtual_machine resource, but we want to account for a variable number of disks.
To do this, we need to use dynamic blocks to generate the nested blocks, as the count meta-argument does not work in this instance.

I first set up an object for each data disk containing its name and LUN, as can be seen in the locals block in the code below.

These objects can then be iterated over to generate the nested blocks using the for_each meta-argument.

The code block in question:

dynamic "storage_data_disk" {
    for_each = {for count, value in local.disk_map :   count => value}
    content {
      name              = storage_data_disk.value.disk_name
      managed_disk_type = "Standard_LRS"
      create_option     = "Empty"
      disk_size_gb      = 256
      lun               = storage_data_disk.value.lun
    }
  }

Example

locals {
  data_disk_count = 4
  vm_name         = "example-machine"
  disk_map = [
    for i in range(local.data_disk_count) :  {
      disk_name = format("%s_disk_%02d", local.vm_name, i+1)
      lun  = i 
    }
  ]
}
resource "azurestack_resource_group" "example" {
  name     = "example-resources"
  location = "West Europe"
}

resource "azurestack_virtual_network" "example" {
  name                = "example-network"
  address_space       = ["10.0.0.0/16"]
  location            = azurestack_resource_group.example.location
  resource_group_name = azurestack_resource_group.example.name
}

resource "azurestack_subnet" "example" {
  name                 = "internal"
  resource_group_name  = azurestack_resource_group.example.name
  virtual_network_name = azurestack_virtual_network.example.name
  address_prefix       = "10.0.2.0/24"
}

resource "azurestack_network_interface" "example" {
  name                = "example-nic"
  location            = azurestack_resource_group.example.location
  resource_group_name = azurestack_resource_group.example.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurestack_subnet.example.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "tls_private_key" "ssh_key" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "azurestack_virtual_machine" "example" {
  name                  = "example-machine"
  resource_group_name   = azurestack_resource_group.example.name
  location              = azurestack_resource_group.example.location
  vm_size               = "Standard_F2"
  network_interface_ids = [
    azurestack_network_interface.example.id,
  ]
  
  os_profile {
    computer_name  = local.vm_name
    admin_username = "adminuser"
  }

  os_profile_linux_config {
    disable_password_authentication = true
    ssh_keys {
      path     = "/home/adminuser/.ssh/authorized_keys"
      key_data = tls_private_key.ssh_key.public_key_openssh
    }
  }

  storage_image_reference {
    publisher         = "Canonical"
    offer             = "0001-com-ubuntu-server-jammy"
    sku               = "22_04-lts"
    version           = "latest"
  }
  storage_os_disk {
    name              = "${local.vm_name}-osdisk"
    create_option     = "FromImage"
    caching           = "ReadWrite"
    managed_disk_type = "Standard_LRS"
    os_type           = "Linux"
    disk_size_gb      = 60
  }

  dynamic "storage_data_disk" {
    for_each = {for count, value in local.disk_map :   count => value}
    content {
      name              = storage_data_disk.value.disk_name
      managed_disk_type = "Standard_LRS"
      create_option     = "Empty"
      disk_size_gb      = 256
      lun               = storage_data_disk.value.lun
    }
  }
}


resource "null_resource" "output_ssh_key" { 
  triggers = {
    always_run = "${timestamp()}"
  }
   provisioner "local-exec" {
    command = "echo '${tls_private_key.ssh_key.private_key_pem}' > ./${azurestack_virtual_machine.example.name}.pem"
  }
}

AKS Edge Essentials options for persistent storage

Out of the box, AKS Edge Essentials does not have the capability to host persistent storage. That's OK if you're running stateless apps, but more often than not you'll need to run stateful apps. There are a couple of options you can use to enable this:

  1. Create a manual storage class for local storage on the node
  2. Create a StorageClass to provision the persistent storage

First, I'm checking that no storage classes already exist. This is on a newly deployed AKS-EE cluster, so I'm just double-checking:

kubectl get storageclasses --all-namespaces

Next, check that no persistent volumes already exist:

kubectl get pv --all-namespaces

Manual storage class method

Create a local manual persistent volume

Create a YAML file with the following config: (local-host-pv.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Now deploy it:

kubectl apply -f .\local-host-pv.yaml
kubectl get pv --all-namespaces

Create persistent volume claim

Create a YAML file with the following code: (pv-claim.yaml)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Now deploy it:

kubectl apply -f .\pv-claim.yaml
kubectl get pvc --all-namespaces

The problem with the above approach!

The issue with the first method is that the persistent volume has to be created manually each time. Helm charts and deployment YAML files generally expect a default StorageClass to handle the provisioning, so that you don't have to refactor the config each time, keeping the code portable.

As an example of the problem, I tried to deploy Keycloak using a Helm chart; it uses a PostgreSQL DB which needs a PVC.

Using kubectl describe pvc -n keycloak, I can see the underlying problem; the persistent volume claim stays in Pending because there are no persistent volumes or StorageClasses available:

Create a Local Path provisioner StorageClass

So, to fix this, we need to deploy a storage class for our cluster. For this example, I'm using the Local Path provisioner example.

kubectl apply -f https://raw.githubusercontent.com/Azure/AKS-Edge/main/samples/storage/local-path-provisioner/local-path-storage.yaml

Once deployed, you can check that it exists as a StorageClass:

kubectl get sc
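
Most Helm charts expect a default StorageClass, so it's worth marking the new class as the default. Assuming the provisioner created a class called local-path (check the name in the output above):

# Mark local-path as the default StorageClass so PVCs without an explicit class bind to it
kubectl patch storageclass local-path -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'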

Once the storage class is available, when I deploy the helm chart again, the persistent volume and claim are created successfully:

kubectl get pv
kubectl get pvc --all-namespaces

Conclusion

My advice is to deploy a StorageClass as part of the AKS Edge Essentials installation to handle provisioning volumes and claims for persistent data. As well as the Local Path provisioner, there is an example that uses NFS storage binding.