
AKS Edge Essentials options for persistent storage

Out of the box, AKS Edge Essentials doesn't ship with any storage classes or provisioners for persistent storage. That's OK if you're running stateless apps, but more often than not you'll need to run stateful apps. There are a couple of options you can use to enable this:

  1. Create a manual storage class and local persistent volumes on the node
  2. Create a StorageClass provisioner to handle the persistent storage dynamically

First, I'm checking that no storage classes already exist. This is a newly deployed AKS-EE cluster, so I'm just double-checking:

kubectl get storageclasses

Next, check that no persistent volumes exist (like storage classes, these are cluster-scoped):

kubectl get pv

Manual storage class method

Create a local persistent volume

Create a YAML file with the following config: (local-host-pv.yaml)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Note that hostPath points at a directory on the node; in AKS Edge Essentials the node runs inside a Linux VM, so /mnt/data lives inside that VM rather than on the Windows host.

Now deploy it:

kubectl apply -f .\local-host-pv.yaml
kubectl get pv

Create persistent volume claim

Create a YAML file with the following config: (pv-claim.yaml)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Now deploy it. The claim requests 100Mi and binds to the 1Gi volume above, since the volume is large enough and the storageClassName and access modes match:

kubectl apply -f .\pv-claim.yaml
kubectl get pvc --all-namespaces
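
To actually consume the claim, a pod references it by name under persistentVolumeClaim. Here's a minimal sketch; the pod name and nginx image are illustrative rather than part of the original walkthrough:

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: web
      image: nginx
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage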

The problem with the above approach!

The issue with the first method is that the persistent volume has to be created manually each time. Helm charts and deployment YAML files generally expect a default StorageClass to handle the provisioning, so that you don't have to refactor the config each time and the code stays portable.

As an example to show the problem, I tried to deploy Keycloak using a Helm chart; it uses a PostgreSQL database, which needs a PVC.
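
For reference, the install was along these lines; I'm assuming the Bitnami Keycloak chart here, so the chart source and release name are illustrative:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm install keycloak bitnami/keycloak --namespace keycloak --create-namespace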

Using kubectl describe pvc -n keycloak, I can see the underlying problem: the persistent volume claim stays in Pending because there are no persistent volumes or storage classes available.

Create a Local Path provisioner StorageClass

So, to fix this, we need to deploy a storage class for our cluster. For this example, I'm using the Local Path provisioner sample from the AKS-Edge repo.

kubectl apply -f https://raw.githubusercontent.com/Azure/AKS-Edge/main/samples/storage/local-path-provisioner/local-path-storage.yaml

Once deployed, you can check that it exists as a StorageClass:

kubectl get sc
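
Many Helm charts don't name a StorageClass explicitly and instead rely on the cluster default. If the new class (named local-path in this sample, but confirm with the command above) isn't already marked as the default, you can annotate it:

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'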

Once the storage class is available, when I deploy the Helm chart again, the persistent volume and claim are created successfully:

kubectl get pv
kubectl get pvc --all-namespaces

Conclusion

My advice is, as part of the AKS Edge Essentials installation, to deploy a StorageClass to handle provisioning volumes and claims for persistent data. As well as the Local Path provisioner, there is an example using NFS storage binding.

Installing AKS Edge Essentials public preview

UPDATE: Check out this later post for lessons learnt and some configuration information that you will want to use https://www.cryingcloud.com/blog/2023/2/3/aks-edge-essentials-diving-deeper


Microsoft announced the public preview of AKS Edge Essentials (I'm going to abbreviate to AKS-EE!) a few months ago and I wanted to try it out. I think it's a great idea to be able to run and manage containerized workloads on smaller but capable compute systems such as Intel NUCs, a Windows desktop or laptop, and so on. Using Azure Arc to manage this ecosystem is quite a compelling prospect.

https://learn.microsoft.com/en-gb/azure/aks/hybrid/aks-edge-overview

To try it out, I looked through the public preview documentation, using my Windows 11 laptop to run AKS-EE. The docs are pretty good, but they make some assumptions or miss some steps, especially if you're coming to this cold. I decided to use K8s, rather than K3s, so this blog is geared towards that.

There is a recently published Azure Arc jumpstart scenario which creates a VM in Azure and does the same (but automated!).

Prep your machine

https://learn.microsoft.com/en-gb/azure/aks/hybrid/aks-edge-howto-setup-machine


First of all, you need a system running Windows 10/11 or Windows Server 2019/2022. I chose my laptop as it is capable enough, but you could use an Azure VM running Windows Server 2019/2022.

I'm using VS Code, so I'd recommend installing that if you haven't already done so. You'll also need Git installed, as you will be cloning the AKS-Edge project to your local system.

If Hyper-V isn't already enabled, you can either do this manually, or let the AKS Edge setup script do this for you. Check it's enabled (from an elevated PowerShell prompt):

Get-WindowsOptionalFeature -Online -FeatureName *hyper*
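
The wildcard matches a few Hyper-V features; the State property is what you're checking. To narrow the output down (optional):

Get-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V | Select-Object FeatureName, State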

If you need to install it (from an elevated PowerShell prompt):

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All

The docs say to disable power standby settings. If you're deploying AKS-EE in a remote location, you'll want to keep the system running 24/7, so this makes sense. For a kick-the-tires scenario, you probably don't need to do this.

To check the current power settings of your system, run:

powercfg /a

You can see above that I've already disabled the power standby settings. If any of them are enabled, run the following commands:

powercfg /x -standby-timeout-ac 0 
powercfg /x -standby-timeout-dc 0 
powercfg /hibernate off

reg add HKLM\System\CurrentControlSet\Control\Power /v PlatformAoAcOverride /t REG_DWORD /d 0

If power standby settings were enabled, reboot your system and check they’ve been disabled.

Installing AKS-Edge Essentials tools

Once the machine prep is complete, go ahead and download the AKS-Edge K8s tool from https://aka.ms/aks-edge/k8s-msi

It’s an MSI file, so go ahead and install on your system however you prefer (next, next, next… ;) ).

Once that’s done, we need to clone the AKS-EE repo to the local system. I’ve done this via VS Code.

  1. Open the Source Control blade

  2. Click on Clone Repository

  3. Paste the url https://github.com/Azure/AKS-Edge.git

  4. Click on the Clone from URL option

Select a folder in the window that pops up to choose where you want to store the repo on your local system.

In my example, I cloned to a directory called C:\repos

A message will pop up in VS Code stating that the repo is cloning.

Click Open to display the folder in VS Code.

Open a PowerShell terminal within VS Code, and navigate to the tools directory in the repo you've just cloned.
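
For example, with the repo cloned to C:\repos as above:

cd C:\repos\AKS-Edge\tools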

From the PowerShell terminal window, run the following command:

.\AKSEdgePrompt.cmd

It will open a new PowerShell window as Admin, check that Hyper-V is enabled, and check that AKS Edge Essentials for K8s is installed. Remember to run further commands in this newly instantiated PowerShell window.

Within this window, you can check that the AKSEdge module has been imported:

Get-Command -Module AKSEdge | Format-Table Name, Version

Deploying the cluster

I use the term cluster loosely, as I’m running this from my laptop, but you get my meaning.

Given this, I need to edit the config to reflect a single-machine scenario.

From VS Code:

  1. Open the aksedge-config.json file in the tools directory.

  2. As I'm using K8s, edit the NetworkPlugin parameter to use calico: "NetworkPlugin": "calico",

  3. You need to select a spare IP on your local network for the VM that’s provisioned. My home router uses 192.168.1.0/24, so I selected a free address not in the DHCP lease range.

  4. Select another free IP on your network for the API endpoint

  5. …and now enter the start and end of the range for the Kubernetes services (see the example snippet below)

If you're interested, you can check out the config options and a further description of what each setting does here: https://learn.microsoft.com/en-us/azure/aks/hybrid/aks-edge-deployment-config-json
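
For illustration, here's roughly what the network section of my config looked like. The field names follow the schema doc linked above but may differ between preview releases, so treat this as a sketch; the addresses are from my 192.168.1.0/24 example, and the node VM's own IP is set elsewhere in the file:

"Network": {
    "NetworkPlugin": "calico",
    "ControlPlaneEndpointIp": "192.168.1.21",
    "Ip4GatewayAddress": "192.168.1.1",
    "Ip4PrefixLength": 24,
    "ServiceIPRangeStart": "192.168.1.31",
    "ServiceIPRangeEnd": "192.168.1.40"
}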

Once you’ve saved the config file, go ahead to the PowerShell window that you opened via the AKSEdgePrompt.cmd and run the following to start the deployment (assuming you are still in the tools directory in the repo):

New-AksEdgeDeployment -JsonConfigFilePath .\aksedge-config.json

If everything is in place, that should start the deployment.

When prompted, choose whether you want to send optional or only required diagnostics data.

.. eventually it will complete…

Let's prove we can use kubectl to talk to the API and get some info:

kubectl get nodes -o wide

kubectl get pods -A -o wide

Cool, the base cluster is deployed :)

Something to be aware of:

In the repo’s tools directory, a file will be created called servicetoken.txt.

Take care of this file, as it is a highly privileged token used to administer the K8s cluster. Make sure not to commit the file to your repo. You will need this token for certain activities later on.
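
One simple guard against an accidental commit is to ignore the file locally; this is my own suggestion rather than anything from the docs. From the repo root:

Add-Content -Path .\.gitignore -Value "tools/servicetoken.txt"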

Connecting AKS Edge Essentials to Azure Arc

The provided repo has some nice tools to help you get connected to Azure Arc, but we need to gather some configuration information before running the scripts. To make it easy, here are some az CLI commands to get what you need:

az login
#optional - run if you have multiple subscriptions and want to select which one to use
az account set --subscription '<name of the subscription>'

#SubscriptionId
az account show --query id -o tsv

#TenantId
az account show --query tenantId -o tsv
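
Since you're already in a shell, you can capture these values into variables to paste from later; this is just a convenience, not something the docs require:

$subId = az account show --query id -o tsv
$tenantId = az account show --query tenantId -o tsv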

Edit the aide-userconfig.json file in the tools directory.

Change the following parameters using the values captured:

"SubscriptionName": "<Azure Subscription Name>"
"SubscriptionId": "<SubscriptionId>",
"TenantId": "<TenantId>",

There's no indication in the docs whether to change the AksEdgeProduct value from K3s to K8s. I did, just in case:

"AksEdgeProduct": "AKS Edge Essentials - K8s (Public Preview)"

Go ahead and name your resource group, service principal, and region to deploy to. If the resource group and service principal don't exist, the deployment routine will create them for you; in the case of the service principal, it will assign Contributor rights at the resource group scope.
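
In my file, those values ended up looking something like this; the names and region below are purely illustrative, so double-check the exact field names against your copy of aide-userconfig.json:

"ResourceGroupName": "rg-aksedge-demo",
"ServicePrincipalName": "sp-aksedge-demo",
"Location": "eastus",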

Once the config file is saved, we're ready to set up Arc.

From the PowerShell window, I decided to move to the root of the repo (not tools as previously), and ran the following command to set up the resource group, service principal permissions, and role assignment:

.\tools\scripts\AksEdgeAzureSetup\AksEdgeAzureSetup.ps1 .\tools\aide-userconfig.json -spContributorRole

A warning is shown that the output includes credentials that you must protect. If you open the aide-userconfig.json you’ll notice it is populated with a service principal id and corresponding password created when the script was run.

If you want to test the credentials, run the following:

.\tools\scripts\AksEdgeAzureSetup\AksEdgeAzureSetup-Test.ps1 .\tools\aide-userconfig.json

To test the user config prior to initialization:

Read-AideUserConfig

Get-AideUserConfig

Time to initialize…

Initialize-AideArc

…and now to connect…

Connect-AideArc

Once complete, we can check the Azure portal to see that the Arc resources are present in the resource group I specified in the config file. There will be one entry for 'Server' (my laptop) and another for Kubernetes.

Select the Kubernetes - Azure Arc resource and then select ‘Namespaces’.

We need to paste in the service token mentioned earlier, which is located in .\tools\servicetoken.txt.
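
To copy it without opening the file, Set-Clipboard (available in PowerShell 5.1 and later) does the trick:

Get-Content .\tools\servicetoken.txt | Set-Clipboard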

Paste it in the Service account bearer token field.

You’ll now be able to query resources in your cluster.

All in all, the tools and scripts developed for AKS Edge Essentials work really well and I didn’t come across any issues. It bodes well for when it moves to GA.

Next steps for me are to investigate further scenarios and see what I can do on the platform.