Setting up your own ELK stack in Kubernetes with Azure AKS

Lately I've posted a few articles that show how to do certain things with your own hosted ELK stack.  In this post I'll walk through how to easily set up and configure your own ELK stack using Azure AKS, along with the Helm package manager.

Since my intent here is just testing and proof-of-concept work, I'm skipping some things that would be necessary in a production environment, such as user authentication, TLS/SSL, and more.  I plan to cover those in another post, but keep in mind they are not covered here!

First we'll create an AKS instance that we can deploy our ELK stack into.  I'll be using the az CLI from a PowerShell script to deploy these resources:

(Make sure to save off the values that are output at the end of the script)

	
#******************************************************************************
# Script parameters - Set these to your own values!
#******************************************************************************
$resourceGroup = "Your-Resource-Group-Name"
$clusterName = "Your-AKS-Cluster-Name"
$subscriptionName = "Your-Azure-Subscription-Name"
$location = "eastus"

#******************************************************************************
# Defined functions
#******************************************************************************

function Create-ServicePrincipal() {
    [HashTable]$servicePrincipalDetails = @{ }
	
    # Prompt for the user's initials; they'll be used to build a unique name for our AAD application
    $userInitials = Read-Host -Prompt 'Enter your initials'
    if (!$userInitials) {
        Write-Host 'User initials were not supplied - script is aborting!' -ForegroundColor Red
        throw "Unable to continue - user initials not supplied"
    }

    $servicePrincipalDetails.UserInitials = $userInitials.ToUpper()

    # Determine subscription ID
    $subscriptionID = (az account show | ConvertFrom-Json).id

    # Create service principal for RBAC and assign permissions
    $servicePrincipalName = "aks_{0}_{1}_{2}" -f $userInitials.ToUpper(), $resourceGroup, $clusterName
    $servicePrincipalResponse = az ad sp create-for-rbac --name $servicePrincipalName --role contributor --scopes /subscriptions/$subscriptionID/resourceGroups/$resourceGroup | ConvertFrom-Json

    # Assign the appId and password to the return value
    $servicePrincipalDetails.ApplicationId = $servicePrincipalResponse.appId
    $servicePrincipalDetails.Password = $servicePrincipalResponse.password
	
    # Get the details of the newly created service principal so we can obtain the objectId
    $spDetailResponse = az ad sp list --display-name $servicePrincipalName | ConvertFrom-Json
    # NOTE: newer versions of the Azure CLI (2.37+) return this property as 'id' rather than 'objectId'
    $servicePrincipalDetails.ObjectId = $spDetailResponse.objectId
	
    return $servicePrincipalDetails
}

#******************************************************************************
# Script body
# Execution begins here
#******************************************************************************
$ErrorActionPreference = "Stop"
Write-Host ("Script Started " + [System.Datetime]::Now.ToString()) -ForegroundColor Green

# Login
Write-Host "Logging in..."
az login
		
# Select subscription
Write-Host "Selecting subscription '$subscriptionName'"
az account set --subscription $subscriptionName

Write-Host ("Creating Resource Group '$resourceGroup' " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
az group create -n $resourceGroup -l $location

# Create Service Principal
Write-Host ("Creating Service Principal " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
$servicePrincipalDetails = Create-ServicePrincipal

# Creating AKS Cluster
Write-Host ("Creating AKS Cluster " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
az aks create -g $resourceGroup -n $clusterName --location $location -c 3 --network-plugin azure --service-principal $servicePrincipalDetails.ApplicationId --client-secret $servicePrincipalDetails.Password --generate-ssh-keys

Write-Host ("Generating kubernetes credentials and updating .kubeconfig " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
az aks get-credentials -g $resourceGroup -n $clusterName

Write-Host " "
Write-Host "Please make note of the following values, as you will not be able to obtain the password after closing this script:"
Write-Host ("Service Principal AppId: '{0}'" -f $servicePrincipalDetails.ApplicationId)
Write-Host ("Service Principal Secret: '{0}'" -f $servicePrincipalDetails.Password)
Write-Host ("Script Completed " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
	

This may take some time.  I've seen it take upwards of 15 minutes to actually create the AKS cluster, so be patient!  Assuming it all completes without error, you should now have your cluster configured, and your local .kubeconfig file should also have been set up to communicate with it.  Check that it's up and running with the following:

	
# Verify your current context was appropriately set
kubectl config current-context
        
# Check that you have three nodes and that they're all in "Ready" status
kubectl get nodes
	

We're going to use Helm to deploy our ELK stack into our cluster.  Helm is basically a package manager (like NPM, NuGet, etc.) for Kubernetes.  

To install Helm on Windows, we can run Chocolatey from an elevated shell:

	
choco install kubernetes-helm
	

(You used to have to also install a component called "Tiller" into your cluster to work with Helm.  As of version 3.0 of Helm, that is no longer necessary. See https://helm.sh/docs/faq/#removal-of-tiller for more details.)
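
Once Helm is installed, a quick version check confirms it's on your path and that you're running 3.x (so no Tiller needed):

	
helm version
	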

Now we're ready to start deploying stuff!

With Helm, different users and organizations can publish their own charts.  For our ELK stack, Elastic.co has made official Helm charts available that we can use.  If you want, you can browse all of their charts in the elastic/helm-charts repository on GitHub (https://github.com/elastic/helm-charts).

First we need to add a repo to helm to tell it where to search for our charts:

	
helm repo add elastic https://helm.elastic.co
	
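
After adding the repo, it's worth refreshing your local chart index and taking a quick look at what Elastic publishes (the exact versions you see will vary):

	
# Refresh the locally cached chart versions
helm repo update

# List the charts available in the elastic repo (elasticsearch, kibana, logstash, the various Beats, etc.)
helm search repo elastic
	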

We'll install Elasticsearch with all the default chart values, except we'll override the service.type value to set it as a LoadBalancer.  This will have Azure automatically create an external load balancer so we can access our Elasticsearch endpoint from outside the cluster:

NOTE: By doing this you're exposing your Elasticsearch endpoint to the entire internet, and it's not secured.  This is ONLY for testing purposes!!!

	
helm install elasticsearch elastic/elasticsearch --set service.type=LoadBalancer
	
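
As a side note, if you'd rather keep overrides in a file instead of on the command line (handy once you start tweaking more settings), the same install can be expressed with a small values file.  This is just an equivalent sketch, and the file name is whatever you want it to be:

	
# values-elasticsearch.yaml
service:
  type: LoadBalancer
	

	
# Equivalent install using the values file instead of --set
helm install elasticsearch elastic/elasticsearch -f values-elasticsearch.yaml
	

Either way you end up with the same deployment.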

When it's done you should get some output that looks like this:

	
NAME: elasticsearch
LAST DEPLOYED: Wed Feb 12 11:19:38 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=default -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch 
	

Run the "kubectl get pods" command that is output.  It will keep the command shell open indefinitely.  You'll want to wait until you see all the pods have a "READY" value of "1/1" and a STATUS of "Running".  It has to spin up some new data volumes the first time it runs, so it could take a while to complete.  When I did it, it took around 7 minutes for everything to get to a good state.  Once it's there, you can just "Ctrl + C" to exit out of the command.

Let's find out the IP address for our external Elasticsearch endpoint:

	
kubectl get svc elasticsearch-master
	

Make a note of the "EXTERNAL-IP" that you find here.  Then open a browser and go to http://<YOUR-EXTERNAL-IP>:9200.  You should see something like this:

[Image: Elasticsearch endpoint in browser]

Elasticsearch is all set up and ready to go!
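
If you prefer a terminal over a browser, you can hit the same endpoint with curl and also ask for the cluster health (substitute your own external IP; note that in Windows PowerShell, curl is an alias for Invoke-WebRequest, so use curl.exe there):

	
# Basic banner response from Elasticsearch
curl "http://<YOUR-EXTERNAL-IP>:9200"

# Cluster health - you want to see a status of "green" once all nodes have joined
curl "http://<YOUR-EXTERNAL-IP>:9200/_cluster/health?pretty"
	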

One last component we're going to install is Kibana, so once we start putting data into our Elasticsearch instance, we'll be able to visualize it.  Installing Kibana with Helm is almost identical to what we did with Elasticsearch:

	
helm install kibana elastic/kibana --set service.type=LoadBalancer
	

For some reason, the output for this doesn't give you a nice "watch" command like the previous one did.  But you can use the following to see the status:

	
kubectl get pods --namespace=default -l app=kibana -w
	

When that's all ready, we can check the service to find the external IP for Kibana:

	
kubectl get svc kibana-kibana
	

Take the "EXTERNAL-IP" and go back to your browser and go to http://<YOUR-EXTERNAL-IP>:5601.  You should see something like this:

[Image: Kibana home page]
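
If you want to check Kibana from a terminal as well, it exposes a status API you can curl (again, substitute your own external IP):

	
# Returns Kibana's overall status and plugin health as JSON
curl "http://<YOUR-EXTERNAL-IP>:5601/api/status"
	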

And now you're all set up to play around!  You can always optionally install Logstash if you want to use that, or you can just start pushing data to your stack with one of the Beats.
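
Before wiring up Logstash or a Beat, a quick way to smoke-test the whole pipeline is to push a document straight into Elasticsearch and then look for it in Kibana.  This is just a sketch from a bash-style shell, and the index name and document body are completely arbitrary:

	
# Index a single test document into an arbitrary index called "test-index"
curl -X POST "http://<YOUR-EXTERNAL-IP>:9200/test-index/_doc" -H "Content-Type: application/json" -d '{"message": "hello from AKS"}'

# Confirm the document landed
curl "http://<YOUR-EXTERNAL-IP>:9200/test-index/_search?pretty"
	

Once that's indexed, you can create an index pattern for test-index in Kibana and see the document under Discover.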

Thanks,
Justin
