Using Azure File Shares to mount a volume in Kubernetes

As of Kubernetes version 1.14 and Windows Server 2019, it's now possible to expose an Azure File Share as a PersistentVolume in Kubernetes and mount it into a Windows-based pod.

Side note: all of these commands also work just fine on a Linux pod/node; you just need to install the "cifs-utils" package with your distro's package manager, for example "apt-get install cifs-utils".
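
As a quick sketch, installing that package on the node would look something like this (Debian/Ubuntu shown first, with the Red Hat-family equivalent below):

	
# Debian/Ubuntu
sudo apt-get update && sudo apt-get install -y cifs-utils

# RHEL/CentOS/Fedora
sudo yum install -y cifs-utils
	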

Let's walk through the process to get it working!

First, you'll want to create a storage account in Azure, and then a file share in that storage account.  Create the account and make note of the account name and the account access key:

Storage account name and access key
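
If you'd rather script this than click through the portal, a rough Azure CLI sketch looks like the following (the resource group and storage account names here are just placeholders):

	
# Create a resource group and a general-purpose storage account
az group create --name fileshare-rg --location eastus
az storage account create --name mystorageacct --resource-group fileshare-rg --sku Standard_LRS

# Grab the first access key for the account
az storage account keys list --account-name mystorageacct --resource-group fileshare-rg --query "[0].value" -o tsv
	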

After that, go to the "Files" section of your storage account's overview page and create a new file share called "configfiles":

Creating the 'configfiles' file share
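
The CLI equivalent of that step would look roughly like this, reusing the account name and key from above:

	
az storage share create --name configfiles --account-name mystorageacct --account-key YourAzureStorageAccountKeyHere
	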

Now we can jump over to the kubectl command line and start creating objects in Kubernetes.  Let's start off by making a namespace to hold everything.

	
kubectl create ns filesharetest
	

Next, we create a secret that Kubernetes will use to access and mount the file share.  NOTE:  This secret MUST be in the same namespace as the PersistentVolumeClaim and Pods that will mount the volume!  Fill in your values for "YourAzureStorageAccountNameHere" and "YourAzureStorageAccountKeyHere" based on what you obtained earlier.

	
kubectl create secret generic azure-fileshare-secret --from-literal=azurestorageaccountname=YourAzureStorageAccountNameHere --from-literal=azurestorageaccountkey=YourAzureStorageAccountKeyHere -n filesharetest
	
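
If you want to double-check that the secret landed in the right namespace, a quick sanity check looks like this:

	
kubectl get secret azure-fileshare-secret -n filesharetest
kubectl describe secret azure-fileshare-secret -n filesharetest
	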

Now we can go ahead and create the PersistentVolume that will map to our Azure File Share.  This is what the YAML would look like for that:

	
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileshare-pv
  labels:
    usage: fileshare-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-fileshare-secret
    shareName: configfiles
    readOnly: false
	

A few important things to note here:

  • You can call it whatever you want, but make sure you note down what you put for the "usage" label - that's what we'll reference in our PersistentVolumeClaim's selector so it binds directly to this PersistentVolume
  • You can change accessModes if you want - Azure File Share supports the "ReadWriteMany" mode, which means multiple pods can mount this volume and read from/write to it at the same time.  That's how we've set it up above
  • Make sure the "secretName" matches up with what you called your secret, and the "shareName" matches up with the file share object you created in your storage account
  • You don't specify a namespace when creating a PV - PVs are cluster-level resources

Go ahead create the file, save it as "pv.yaml", and apply it with:

	
kubectl apply -f pv.yaml
	
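
You can check on the PersistentVolume right away; before a claim binds to it, the STATUS column should show "Available":

	
kubectl get pv fileshare-pv
	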

Now that we have a PersistentVolume, we can create a PersistentVolumeClaim to bind to it.  This is what the YAML would look like for that:

	
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fileshare-pvc
  namespace: filesharetest
  # Set this annotation to NOT let Kubernetes automatically create
  # a persistent volume for this volume claim.
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    # To make sure we match the claim with the exact volume, match the label
    matchLabels:
      usage: fileshare-pv
	

Note here that we ARE defining the namespace (a PVC is a namespaced resource), and make sure that your "accessModes" value matches that of your PV.  Additionally, make sure the "usage" value under selector/matchLabels matches what you put on your PV - that's how this PVC knows which volume to bind to.

Go ahead create the file, save it as "pvc.yaml", and apply it with:

	
kubectl apply -f pvc.yaml
	

Once those resources have been created, you can verify with the following:

kubectl get pv
NAME           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         STORAGECLASS   REASON   AGE
fileshare-pv   10Gi       RWX            Retain           Bound    filesharetest/fileshare-pvc                           66s

And then we can check our PersistentVolumeClaim with the following:

kubectl get pvc -n filesharetest
NAME             STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
fileshare-pvc    Bound    fileshare-pv   10Gi       RWX                           93s

Perfect!  Next up - let's make a deployment with multiple pods and have them all mount this PVC - here's the YAML:

	
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fileshare-deployment
  namespace: filesharetest
  labels:
    app: fileshare-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: fileshare-deployment
  template:
    metadata:
      labels:
        app: fileshare-deployment
    spec:
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: fileshare-pvc
      containers:
      - name: main
        image: mcr.microsoft.com/windows/servercore:ltsc2019
        command: ["powershell", "Start-Sleep", "-s", "86400"]
        volumeMounts:
        - name: azure
          mountPath: "/configfiles"
	

Here we're creating a deployment that will spin up three pods.  Each of them just runs a Windows Server Core 2019 image with a PowerShell Start-Sleep command for one day - the only purpose of that is to keep the pods running so we can connect to them and run commands.

Note the spec.volumes and spec.containers.volumeMounts sections.  We're declaring a volume backed by the fileshare-pvc PVC we created earlier.  Then, in the container spec, we're mounting that volume at "/configfiles", which ends up as C:\configfiles on a Windows node or /configfiles on a Linux node.
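
If you wanted to run this from Linux pods instead (remember the cifs-utils note from earlier), only the image and command in the pod template need to change - the volume and claim wiring stays exactly the same.  A minimal sketch, with busybox as an arbitrary stand-in image:

	
    spec:
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: fileshare-pvc
      containers:
      - name: main
        image: busybox
        command: ["sleep", "86400"]
        volumeMounts:
        - name: azure
          mountPath: "/configfiles"
	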

Go ahead create the file, save it as "fileshare-deployment.yaml", and apply it with:

	
kubectl apply -f fileshare-deployment.yaml
	

Once everything is deployed and running, let's find the names of our pods:

kubectl get pods -n filesharetest
NAME                                   READY   STATUS    RESTARTS   AGE
fileshare-deployment-f8fdc848b-r94pk   1/1     Running   0          23s
fileshare-deployment-f8fdc848b-xvg8z   1/1     Running   0          23s
fileshare-deployment-f8fdc848b-zbnk7   1/1     Running   0          23s

Now we can exec into one of our pods to see the mount in action:

	
kubectl exec -it YOUR_POD_NAME -n filesharetest -- powershell
	

Once connected, we can run some PowerShell commands to see the volume mount:

	
cd C:\

# Note the "configfiles" folder in the root of C when you perform the ls command
ls

# Change to the directory and write a file called "pod1.txt"
cd configfiles
echo "Hello from first pod" >> pod1.txt

# Note that the "pod1.txt" file is present when you perform the ls command
ls

	

If you jump back to the Azure Portal and look at your file share, you'll now see the file we just created:

Azure File Share

If you run the same kubectl exec steps against a different pod and look at that directory, you'll see the same file!  That's all there is to it, pretty simple!
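
For instance, picking one of the other pod names from the kubectl get pods output above (the name below is just a placeholder):

	
kubectl exec -it ANOTHER_POD_NAME -n filesharetest -- powershell

# From inside the second pod - pod1.txt should already be there
ls C:\configfiles
	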

One last note: since we set the "persistentVolumeReclaimPolicy" on our PersistentVolume to "Retain", we can delete the PersistentVolume in Kubernetes and our file share in Azure will be untouched.  Run the following to delete everything you just created in Kubernetes:

	
kubectl delete ns filesharetest && kubectl delete pv fileshare-pv
	

Check back in the Azure Portal on your file share and you should see that the share and the pod1.txt file are still there.
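
You can also confirm this from the CLI rather than the portal - roughly something like this, again with a placeholder account name:

	
az storage file list --share-name configfiles --account-name mystorageacct --account-key YourAzureStorageAccountKeyHere -o table
	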

PersistentVolumes and PersistentVolumeClaims in Kubernetes, along with support for multiple cloud vendors' storage solutions, can prove useful in many cases.  Hopefully this helps you get up and running!

If you're interested in learning more about PersistentVolumes and PersistentVolumeClaims, be sure to check out the docs at https://kubernetes.io/docs/concepts/storage/persistent-volumes/

Thanks,
Justin
