<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Talk Cloudly To Me]]></title><description><![CDATA[All things cloud and development]]></description><link>https://talkcloudlytome.com/</link><image><url>https://talkcloudlytome.com/favicon.png</url><title>Talk Cloudly To Me</title><link>https://talkcloudlytome.com/</link></image><generator>Ghost 2.8</generator><lastBuildDate>Sun, 12 Apr 2026 08:46:25 GMT</lastBuildDate><atom:link href="https://talkcloudlytome.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Securing your hosted ELK stack]]></title><description><![CDATA[Secure your hosted ELK stack with TLS communication and user authentication]]></description><link>https://talkcloudlytome.com/securing-your-hosted-elk-stack/</link><guid isPermaLink="false">5e44711e2d8d6b040adf6d31</guid><category><![CDATA[Azure]]></category><category><![CDATA[ELK]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Logging]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Wed, 26 Feb 2020 15:23:34 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2020/02/elk-lock-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2020/02/elk-lock-1.png" alt="Securing your hosted ELK stack"><p>In my previous post, I walked through the process to deploy an ELK stack in AKS to play around with / learn / test stuff.  In that example there was NO TLS/SSL communication, and nothing was protected by passwords.</p><p>I went through and read all the documentation and figured out how to do the same thing, but to deploy a fully-secured cluster for both Elasticsearch and Kibana, and wanted to share that process as well.</p><p>You would want to read through the previous post to deploy an AKS instance to use and get chocolatey/helm installed, unless you've already done so.  For reference, you can check back here:  <a href="https://talkcloudlytome.com/setting-up-your-own-elk-stack-in-kubernetes-with-azure-aks/">https://talkcloudlytome.com/setting-up-your-own-elk-stack-in-kubernetes-with-azure-aks/</a></p><p>Once we have an AKS cluster to deploy to, we're going to have to create some certificates to secure our ELK stack with.  The point of my blog post here is to show how to setup and secure your ELK stack.  However - it would take far longer than I want to spend to setup an AppGateway/LoadBalancer with the proper DNS names so I could get correct certificate validation.  So with this example, everything will be secured via TLS/SSL, but if you try to access it via the external IP from outside of the cluster, you WILL get certificate validation errors (as expected).</p><p>On other thing to note is that I'm using .pem certificates.  If you're using PKCS#12 certificates, there are some different settings that you use for Elasticsearch, which is all covered in the documentation listed in the reference section below.</p><p>We're going to use <a href="https://github.com/cloudflare/cfssl">cfssl and cfssljson</a> to create our certs.  Run this and save off the ca-key.pem, ca.pem, security-master-key.pem, and security-master.pem files that are created:</p><pre>
	<code class="language-bash">
cat > ca-config.json &lt;&lt;EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "elasticsearch": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF

cat > ca-csr.json &lt;&lt;EOF
{
  "CN": "Elasticsearch",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Hartland",
      "O": "Elasticsearch",
      "OU": "CA",
      "ST": "Wisconsin"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

cat > security-master-csr.json &lt;&lt;EOF
{
  "CN": "security-master",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Hartland",
      "O": "Elasticsearch",
      "OU": "Elasticsearch",
      "ST": "Wisconsin"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=elasticsearch \
  security-master-csr.json | cfssljson -bare security-master
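
# (Optional) sanity-check that the signed cert chains back to the CA;
# this assumes openssl is installed and isn't required for the rest of the steps
openssl verify -CAfile ca.pem security-master.pem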
	</code>
</pre><p></p><p>Now create a file in your current directory (containing all the certificates you just created) called "kustomization.yaml" with the following contents:<br></p><pre>
	<code class="language-yaml">
secretGenerator:
- name: elastic-certificates
  files:
  - ca.pem
  - security-master.pem
  - security-master-key.pem
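# Note: the secretGenerator appends a hash of the contents to the generated
# secret's name (e.g. elastic-certificates-&lt;hash&gt;), which is why the exact
# name has to be copied from the "kubectl apply -k ." output below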
	</code>
</pre><p></p><p>Run the following command to apply this secret to Kubernetes:</p><pre>
	<code class="language-powershell">
kubectl apply -k .
	</code>
</pre><p></p><p>Make a note of the secret name that is output here, as you'll need it later.  You can see it in the output - it will look something like this:</p><pre>
	<code class="language-powershell">
PS > kubectl apply -k .
secret/elastic-certificates-7ft8hkbftk created
	</code>
</pre><p></p><p>Next we need to create a secret that will hold the credentials that Elasticsearch will run under.  Replace "##PASSWORD##" with your actual password you want to use:<br></p><pre>
	<code class="language-powershell">
kubectl create secret generic elastic-credentials --from-literal=password=##PASSWORD## --from-literal=username=elastic
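
# (Optional) confirm the secret exists - the values shown are base64-encoded:
kubectl get secret elastic-credentials -o yaml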
	</code>
</pre><p></p><p>Now we're going to make and modify a values.yml file for Helm to deploy Elasticsearch with our security configuration settings.  Previously we didn't bother with a values file for our deployment, since we were only overriding one value (service.type).  However, we have to modify quite a few values here, so we'll put them all into one file.</p><p>Add the following into a file and save it as "elasticsearch-helm-values.yml".  <strong>Make sure you replace "elastic-certificates-7ft8hkbftk" in the secretMounts.secretName section with the name of the secret you created above.</strong></p><pre>
	<code class="language-yaml">
---
clusterName: "security"
nodeGroup: "master"

roles:
  master: "true"
  ingest: "true"
  data: "true"

protocol: https
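# ("protocol" is what the chart's readiness probe uses when checking the
# cluster, so it needs to match the xpack.security.http.ssl settings below)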

esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/certs/security-master-key.pem
    xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/certs/security-master.pem
    xpack.security.transport.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/certs/ca.pem" ]
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.key: /usr/share/elasticsearch/config/certs/security-master-key.pem
    xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/certs/security-master.pem
    xpack.security.http.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/certs/ca.pem" ]
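
# the extraEnvs below pull the built-in "elastic" user's credentials from
# the elastic-credentials secret created earlier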
    
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username

secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates-7ft8hkbftk
    path: /usr/share/elasticsearch/config/certs
    
service:
  labels: {}
  labelsHeadless: {}
  type: LoadBalancer
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
	</code>
</pre><p></p><p>Now we're good to install it via Helm with these values:</p><pre>
	<code class="language-powershell">
helm install elasticsearch elastic/elasticsearch --values elasticsearch-helm-values.yml
	</code>
</pre><p></p><p>You should get a similar output as in the previous post:</p><pre>
	<code class="language-powershell">
NAME: elasticsearch
LAST DEPLOYED: Fri Feb 21 19:26:40 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=default -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch 
	</code>
</pre><p></p><p>Run the "get pods" command until all the pods have a "READY" value of "1/1" and a STATUS of "Running".</p><p>Find out the External IP address of your elasticsearch service:</p><pre>
	<code class="language-powershell">
kubectl get svc security-master
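
# The service is named &lt;clusterName&gt;-&lt;nodeGroup&gt; from the values file -
# "security-master" here, rather than the "elasticsearch-master" default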
	</code>
</pre><p></p><p>Make a note of the "EXTERNAL-IP" that you find here.  Then go over to a browser and go to https://&lt;YOUR-EXTERNAL-IP&gt;:9200.  Note that we're specifying "https" instead of "http" for the protocol now!  You should see something like this:<br></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-23.png" class="kg-image" alt="Securing your hosted ELK stack"><figcaption>TLS and Authentication credentials for Elasticsearch</figcaption></figure><p></p><p>Note that it does say "Not secure".  It IS secured with TLS, however, due to how we setup our load balancer infrastructure, we currently have to access it via IP.  If we were to access it with the host name (security-master), like it does with in-cluster communication, it would work fine and show as secure.   In a production environment, you would probably configure this a little differently.  But you can click on the "Not secure" link to see the certificate and validate that we are in fact securing communications with TLS:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-24.png" class="kg-image" alt="Securing your hosted ELK stack"><figcaption>The elasticsearch certificate&nbsp;</figcaption></figure><p></p><p> Another thing you'll note that's different from our unsecured installation - we're now prompted for credentials when logging in!   This is due to us setting "xpack.security.enabled" to true in our elasticsearch.yml configuration.  To log in, use "elastic" for the username, and whatever you specified for your "elastic-credentials" secret password you created earlier.</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-25.png" class="kg-image" alt="Securing your hosted ELK stack"><figcaption>Elasticsearch web page</figcaption></figure><p></p><p>Success!  We now have an elasticsearch installation that's using encrypted TLS both for node-to-node and HTTP communications, and also requires a login to authenticate.</p><p>Next we're going to setup Kibana to talk to this instance.  This is going to require additional configuration as well since we want Kibana to be accessed over HTTPS from the web browser, and we'll also need to configure it to communicate with our Elasticsearch instance over HTTPS as well.</p><p>We're going to go back to the same directory where we created our elasticsearch certs and make one for Kibana.  Run this to create the certs.  Save off the kibana-kibana.pem and kibana-kibana-key.pem files that are created:</p><pre>
	<code class="language-bash">
cat > kibana-kibana-csr.json &lt;&lt;EOF
{
  "CN": "kibana-kibana",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Hartland",
      "O": "Kibana",
      "OU": "Kibana",
      "ST": "Wisconsin"
    }
  ]
}
EOF

cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -profile=elasticsearch \
  kibana-kibana-csr.json | cfssljson -bare kibana-kibana
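
# (The CN "kibana-kibana" matches the service name the Kibana chart creates
# for a Helm release named "kibana", so the cert lines up with in-cluster DNS)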
	</code>
</pre><p></p><p>Copy those .pem files into the same directory as all your other certificates, and modify the "kustomization.yaml" file you created earlier to contain only the following contents:</p><pre>
	<code class="language-yaml">
secretGenerator:
- name: kibana-certificates
  files:
  - kibana-kibana.pem
  - kibana-kibana-key.pem
	</code>
</pre><p></p><p>Run the following command to apply this secret to Kubernetes:</p><pre>
	<code class="language-powershell">
kubectl apply -k .
	</code>
</pre><p></p><p>Make a note of the secret name that is output here, as you'll need it later.  You can see it in the output - it will look something like this:</p><pre>
	<code class="language-powershell">
PS > kubectl apply -k .
secret/kibana-certificates-mkt5m8644t created
	</code>
</pre><p></p><p>Now we're going to make and modify a values.yml file to deploy Kibana with our security config settings. Add the following into a file and save it as "kibana-helm-values.yml".  <strong>Make sure you replace "elastic-certificates-7ft8hkbftk" and "kibana-certificates-mkt5m8644t" secret names with the respective names of the secrets you've created so far in this process!</strong></p><pre>
	<code class="language-yaml">
---
elasticsearchHosts: "https://security-master:9200"
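# "security-master" is the in-cluster DNS name of the Elasticsearch service
# deployed above - note the https:// scheme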

extraEnvs:
  - name: 'ELASTICSEARCH_USERNAME'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: 'ELASTICSEARCH_PASSWORD'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password

kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs/kibana-kibana-key.pem
      certificate: /usr/share/kibana/config/certs/kibana-kibana.pem
    elasticsearch.ssl:
      certificateAuthorities: /usr/share/kibana/config/elasticsearchcerts/ca.pem
      verificationMode: certificate
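      # "certificate" validates the CA chain but skips hostname verification,
      # mirroring the transport-layer setting on the Elasticsearch side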

protocol: https

secretMounts:
  - name: kibana-certificates
    secretName: kibana-certificates-mkt5m8644t
    path: /usr/share/kibana/config/certs
  - name: elastic-certificates
    secretName: elastic-certificates-7ft8hkbftk
    path: /usr/share/kibana/config/elasticsearchcerts
    
service:
  type: LoadBalancer
  port: 5601
  nodePort: ""
  labels: {}
  annotations: {}
  loadBalancerSourceRanges: []
	</code>
</pre><p></p><p>Now we're good to install it via Helm with these values:</p><pre>
	<code class="language-powershell">
helm install kibana elastic/kibana --values kibana-helm-values.yml
	</code>
</pre><p></p><p>You can run the following to watch the status of the Kibana pod:</p><pre>
	<code class="language-powershell">
kubectl get pods --namespace=default -l app=kibana -w
	</code>
</pre><p></p><p>Once this command shows the pod(s) with a "READY" value of "1/1" and a STATUS of "Running", we're all good to go.</p><p>Find out the External IP address of the Kibana service:</p><pre>
	<code class="language-powershell">
kubectl get svc kibana-kibana
	</code>
</pre><p></p><p>Make a note of the "EXTERNAL-IP" that you find here.  Then go over to a browser and go to https://&lt;YOUR-EXTERNAL-IP&gt;:5601.  Note that we're specifying "https" instead of "http" for the protocol now!  All the same comments about the certificate being 'wrong' are still the same here, but you can view the certificate itself to see that we're indeed using TLS:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-26.png" class="kg-image" alt="Securing your hosted ELK stack"><figcaption>Kibana certificate</figcaption></figure><p>If you click through the security warnings, you'll see this:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-27.png" class="kg-image" alt="Securing your hosted ELK stack"><figcaption>Kibana login page</figcaption></figure><p>Kibana uses the same credentials for login that Elasticsearch does, so you can use the same username/password you used to login to Elasticsearch earlier:<br></p><p>And we're in!</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-28.png" class="kg-image" alt="Securing your hosted ELK stack"><figcaption>Kibana web portal</figcaption></figure><p></p><p>Lastly - I didn't go through and setup any Beats with this instance, but the changes are relatively minor to setup.  Basically you just need to specify an "https" when you're setting up your elasticsearch output and provide a path to the CA of your elasticsearch certificate.  You can read up some examples here if you're interested: <a href="https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html">https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html</a></p><p></p><p><strong><strong>Additional Resources:</strong></strong></p><p>I used the following resources when researching this process and building out this blog post:</p><ul><li><a href="https://www.elastic.co/guide/en/elasticsearch/reference/7.6/configuring-tls.html#tls-http">https://www.elastic.co/guide/en/elasticsearch/reference/7.6/configuring-tls.html#tls-http</a></li><li><a href="https://www.elastic.co/guide/en/kibana/current/configuring-tls.html">https://www.elastic.co/guide/en/kibana/current/configuring-tls.html</a></li><li><a href="https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/security">https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/security</a></li><li><a href="https://github.com/elastic/helm-charts/tree/master/kibana/examples/security">https://github.com/elastic/helm-charts/tree/master/kibana/examples/security</a></li></ul><p>I hope this helps you out - thanks for reading!</p><p>-Justin</p><p></p><p></p><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Setting up your own ELK stack in Kubernetes with Azure AKS]]></title><description><![CDATA[Read up to learn how to quickly and easily deploy your own ELK stack in an AKS kubernetes cluster for testing!]]></description><link>https://talkcloudlytome.com/setting-up-your-own-elk-stack-in-kubernetes-with-azure-aks/</link><guid isPermaLink="false">5e42c9ef2d8d6b040adf6c9f</guid><category><![CDATA[Azure]]></category><category><![CDATA[ELK]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[Logging]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Sun, 16 Feb 2020 18:37:13 GMT</pubDate><media:content 
url="https://talkcloudlytome.com/content/images/2020/02/elk-4.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2020/02/elk-4.png" alt="Setting up your own ELK stack in Kubernetes with Azure AKS"><p>Lately I've posted a few articles that show how to do certain things with your own hosted ELK stack.  I wanted to go through and show a process on how to easily setup and configure your own ELK stack by using Azure AKS, along with the Helm package manager.</p><p>Since my intent doing this is just for doing some testing and proof-of-concept work, I'm skipping some stuff that would be necessary for production environments, such as user authentication, TLS/SSL, and more.  I plan to eventually cover that in another post, but just keep in mind that is not covered here!</p><p>First we'll create an AKS instance that we can deploy our ELK stack into.  I'll be using the az CLI to deploy these resources:</p><p><em>(Make sure to save off the values that are output at the end of the script)</em></p><pre>
	<code class="language-powershell">
#******************************************************************************
# Script parameters - Set these to your own values!
#******************************************************************************
$resourceGroup = "Your-Resource-Group-Name"
$clusterName = "Your-AKS-Cluster-Name"
$subscriptionName = "Your-Azure-Subscription-Name"
$location = "eastus"

#******************************************************************************
# Defined functions
#******************************************************************************

function Create-ServicePrincipal() {
    [HashTable]$servicePrincipalDetails = @{ }
	
    # come up with a random name for our AAD application, including the users initials
    $userInitials = Read-Host -Prompt 'Enter your initials'
    if (!$userInitials) {
        Write-Host 'User initials were not supplied - script is aborting!' -ForegroundColor Red
        throw "Unable to continue - user initials not supplied"
    }

    $servicePrincipalDetails.UserInitials = $userInitials.ToUpper()

    # Determine subscription ID
    $subscriptionID = (az account show | ConvertFrom-Json).id

    # Create service principal for RBAC and assign permissions
    $servicePrincipalName = "aks_{0}_{1}_{2}" -f $userInitials.ToUpper(), $resourceGroup, $clusterName
    $servicePrincipalResponse = az ad sp create-for-rbac --name $servicePrincipalName --role contributor --scopes /subscriptions/$subscriptionID/resourceGroups/$resourceGroup | ConvertFrom-Json

    # Assign the appId and password to the return value
    $servicePrincipalDetails.ApplicationId = $servicePrincipalResponse.appId
    $servicePrincipalDetails.Password = $servicePrincipalResponse.password
	
    # Get the details of the newly created service principal so we can obtain the objectId
    $spDetailResponse = az ad sp list --display-name $servicePrincipalName | ConvertFrom-Json
    $servicePrincipalDetails.ObjectId = $spDetailResponse.objectId
	
    return $servicePrincipalDetails
}

#******************************************************************************
# Script body
# Execution begins here
#******************************************************************************
$ErrorActionPreference = "Stop"
Write-Host ("Script Started " + [System.Datetime]::Now.ToString()) -ForegroundColor Green

# Login
Write-Host "Logging in..."
az login
		
# Select subscription
Write-Host "Selecting subscription '$subscriptionName'"
az account set --subscription $subscriptionName

Write-Host ("Creating Resource Group '$resourceGroup' " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
az group create -n $resourceGroup -l $location

# Create Service Principal
Write-Host ("Creating Service Principal " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
$servicePrincipalDetails = Create-ServicePrincipal

# Creating AKS Cluster
Write-Host ("Creating AKS Cluster " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
az aks create -g $resourceGroup -n $clusterName --location $location -c 3 --network-plugin azure --service-principal $servicePrincipalDetails.ApplicationId --client-secret $servicePrincipalDetails.Password --generate-ssh-keys

Write-Host ("Generating kubernetes credentials and updating .kubeconfig " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
az aks get-credentials -g $resourceGroup -n $clusterName

Write-Host " "
Write-Host "Please make note of the following values, as you will not be able to obtain the password after closing this script:"
Write-Host ("Service Principal AppId: '{0}'" -f $servicePrincipalDetails.ApplicationId)
Write-Host ("Service Principal Secret: '{0}'" -f $servicePrincipalDetails.Password)
Write-Host ("Script Completed " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
	</code>
</pre><p></p><p>This may take some time.  I've seen it take upwards of 15 minutes to actually create the AKS cluster, so be patient!  Assuming it all goes well and completes without error, you should now have your cluster configured, and your local .kubeconfig file should have also been setup to be able to communicate with it.  Check that it's up and running with the following:</p><pre>
	<code class="language-powershell">
# Verify your current context was appropriately set
kubectl config current-context
        
# Check that you have three nodes and that they're all in "Ready" status
kubectl get nodes
	</code>
</pre><p></p><p>We're going to use <a href="https://helm.sh/">Helm</a> to deploy our ELK stack into our cluster.  Helm is basically a package manager (like NPM, NuGet, etc.) for Kubernetes.  </p><p>To install Helm on Windows, we can run <a href="https://chocolatey.org/">Chocolatey</a> from an elevated shell:</p><pre>
	<code class="language-powershell">
choco install kubernetes-helm
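
# Verify the install - this walkthrough assumes Helm 3.x:
helm version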
	</code>
</pre><p></p><p><em>(You used to have to also install a component called "Tiller" into your cluster to work with Helm.  As of version 3.0 of Helm, that is no longer necessary. See <a href="https://helm.sh/docs/faq/#removal-of-tiller">https://helm.sh/docs/faq/#removal-of-tiller</a> for more details.)</em></p><p>Now we're ready to start deploying stuff!</p><p>With Helm, different users/organizations can publish their own charts.  For our ELK stack, Elastic.co has made some helm charts available that we can use.  If you want, you can check out all the specific charts <a href="https://github.com/elastic/helm-charts">here</a>.</p><p>First we need to add a repo to helm to tell it where to search for our charts:</p><pre>
	<code class="language-powershell">
helm repo add elastic https://helm.elastic.co
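
# Refresh the local chart index so the newly added repo's charts are visible:
helm repo update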
	</code>
</pre><p></p><p>We'll install an Elasticsearch service with all the default values, except we'll override the service.type flag to set it as a LoadBalancer.  This will have Azure automatically create an external Load Balancer so we can access our Elasticsearch endpoint from outside our cluster:</p><p><em>NOTE: By doing this you're exposing your Elasticsearch endpoint to the entire internet, and it's not secured.  This is ONLY for testing purposes!!!</em></p><pre>
	<code class="language-powershell">
helm install elasticsearch elastic/elasticsearch --set service.type=LoadBalancer
	</code>
</pre><p></p><p>When it's done you should get some output that looks like this:<br></p><pre>
	<code class="language-powershell">
NAME: elasticsearch
LAST DEPLOYED: Wed Feb 12 11:19:38 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
  $ kubectl get pods --namespace=default -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
  $ helm test elasticsearch 
	</code>
</pre><p></p><p>Run the "kubectl get pods" command that is output.  It will keep the command shell open indefinitely.  You'll want to wait until you see all the pods have a "READY" value of "1/1" and a STATUS of "Running".  It has to spin up some new data volumes the first time it runs, so it could take a while to complete.  When I did it, it took around 7 minutes for everything to get to a good state.  Once it's there, you can just "Ctrl + C" to exit out of the command.</p><p>Let's find out the IP address for our external Elasticsearch endpoint:</p><pre>
	<code class="language-powershell">
kubectl get svc elasticsearch-master
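
# The EXTERNAL-IP column may show &lt;pending&gt; for a minute or two while
# Azure provisions the load balancer - just re-run until it's populated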
	</code>
</pre><p></p><p>Make a note of the "EXTERNAL-IP" that you find here.  Then go over to a browser and go to http://&lt;YOUR-EXTERNAL-IP&gt;:9200.  You should see something like this:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-21.png" class="kg-image" alt="Setting up your own ELK stack in Kubernetes with Azure AKS"><figcaption>Elasticsearch endpoint in browser</figcaption></figure><p></p><p></p><p>Elasticsearch is all setup and ready to go!</p><p>One last component we're going to install is Kibana, so once we start putting data into our Elasticsearch instance, we'll be able to visualize it in Kibana.  Installation of Kibana with helm is almost identical as to what we did with Elasticsearch:<br></p><pre>
	<code class="language-powershell">
helm install kibana elastic/kibana --set service.type=LoadBalancer
	</code>
</pre><p></p><p>For some reason, the output for this doesn't give you a nice "watch" command like the previous one did.  But you can use the following to see the status:<br></p><pre>
	<code class="language-powershell">
kubectl get pods --namespace=default -l app=kibana -w
	</code>
</pre><p></p><p>When that's all ready we can check the services for our external IP for Kibana:<br></p><pre>
	<code class="language-powershell">
kubectl get svc kibana-kibana
	</code>
</pre><p></p><p>Take the "EXTERNAL-IP" and go back to your browser and go to http://&lt;YOUR-EXTERNAL-IP&gt;:5601.  You should see something like this:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-22.png" class="kg-image" alt="Setting up your own ELK stack in Kubernetes with Azure AKS"><figcaption>Kibana home page</figcaption></figure><p></p><p>And now you're all setup to play around!  You can always optionally install Logstash if you want to use that, or you can just start pushing data to your stack with one of the <a href="https://www.elastic.co/beats">Beats</a>.</p><p>Thanks,<br>Justin</p>]]></content:encoded></item><item><title><![CDATA[Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs]]></title><description><![CDATA[Learn how to setup and configure Azure Log data to be sent to your ELK stack by utilizing Azure Event Hubs and Azure Functions]]></description><link>https://talkcloudlytome.com/elk-with-functions-event-hubs/</link><guid isPermaLink="false">5e41a0f82d8d6b040adf6bd4</guid><category><![CDATA[Azure]]></category><category><![CDATA[ELK]]></category><category><![CDATA[Logging]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Wed, 12 Feb 2020 14:35:31 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2020/02/functions.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2020/02/functions.png" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><p>I've recently been playing around with my own hosted ELK stack.  When I looked at some of the third party SaaS solutions, I saw that they had certain plugins that would gather data from say, Azure, and import it into their hosted stack.   Looking through the different Beats available with Elastic.co's offering, I didn't see anything out of the box that would do that for me.  It looks like they're trying to go down that path with the "<a href="https://www.elastic.co/guide/en/beats/functionbeat/current/index.html">Functionbeat</a>", however, that only has limited support for AWS logs, and nothing for Azure.</p><p>So I did some research and came up with a way to get some of the Azure logs to be imported into my ELK stack.  For the purposes of this walkthrough, I'm going to assume you already have your own ELK stack setup, that you're not using TLS or Authentication for ELK, and that your Elasticsearch endpoint is accessible from Azure.</p><p>Our goal here is to get the Azure Activity Log data into ELK stack.  Here's a quick overview of what we're going to do to accomplish that:</p><ul><li>Setup a resource group, storage account, and event hub namespace in Azure</li><li>Create a new Azure Function project in Visual Studio</li><li>Add in our code to post the data to the ELK stack in the function</li><li>Deploy the function to Azure with a publish profile from Visual Studio</li><li>Setup Application Insights so we can monitor our Azure function</li><li>Configure Azure Activity logs to export to an event hub</li><li>View our results!</li></ul><p>Let's get started!  First, run the following Powershell script to generate a resource group, storage account, and event hub namespace.  You'll need to specify your own values for the $subscriptionName and $resourceGroupName.  Make sure to save the values off that are output at the end:</p><pre>
	<code class="language-powershell">
#******************************************************************************
# Script parameters - Set these to your own values!
#******************************************************************************
$subscriptionName = "Your Subscription Name"
$resourceGroupName = "Your Resource Group Name"
$location = "eastus"

#******************************************************************************
# Script body
# Execution begins here
#******************************************************************************
Write-Host "Importing Azure Modules..."
Import-Module -Name Az

$ErrorActionPreference = "Stop"
Write-Host ("Script Started " + [System.Datetime]::Now.ToString()) -ForegroundColor Green

# Sign in to Azure account
Write-Host "Logging in..."
$currentContext = Get-AzContext
if ($null -eq $currentContext.Subscription) {
	$verboseMessage = Connect-AzAccount
	Write-Verbose $verboseMessage

	# reload context
	$currentContext = Get-AzContext
}

# Select subscription
Write-Host "Selecting subscription '$subscriptionName'"
$verboseMessage = Select-AzSubscription -SubscriptionName $subscriptionName
Write-Verbose $verboseMessage

# Create resource group
Write-Host ("Creating Resource Group '$resourceGroupName' " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
$verboseMessage = New-AzResourceGroup -Name $resourceGroupName -Location $location
Write-Verbose $verboseMessage

# Get initials to prepend our resource names
$userInitials = Read-Host -Prompt 'Enter your initials'
if (!$userInitials) {
	Write-Host 'User initials were not supplied - script is aborting!' -ForegroundColor Red
	throw "Unable to continue - user initials not supplied"
}

$userInitials = $userInitials.ToLower()

# Create storage account
$storageAccountName = "{0}evthubstorage" -f $userInitials
Write-Host ("Creating Storage Account '$storageAccountName' " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
$verboseMessage = New-AzStorageAccount -ResourceGroupName $resourceGroupName -Name $storageAccountName -Location $location -SkuName Standard_LRS
Write-Host $verboseMessage

# Create event hub namespace
$eventHubNamespaceName = "{0}eventhub" -f $userInitials
Write-Host ("Creating Event Hub Namespace '$eventHubNamespaceName' " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
$verboseMessage = New-AzEventHubNamespace -ResourceGroupName $resourceGroupName -Name $eventHubNamespaceName -Location $location -SkuName Standard
Write-Host $verboseMessage

# Get the primary key connection string for our newly created event hub
$key = Get-AzEventHubKey -ResourceGroupName $resourceGroupName -Namespace $eventHubNamespaceName -AuthorizationRuleName "RootManageSharedAccessKey"

Write-Host "Save off the following values for use later:"
Write-Host ("Resource Group: '{0}'" -f $resourceGroupName)
Write-Host ("Storage Account: '{0}'" -f $storageAccountName)
Write-Host ("Event Hub Namespace: '{0}'" -f $eventHubNamespaceName)
Write-Host ("Event Hub Connection String: '{0}'" -f $key.PrimaryConnectionString)
Write-Host ("Script Completed " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
	</code>
</pre><p></p><p>Now that we have our main resources set up in Azure, we can start creating our function code.  I'm using Visual Studio 2019.  Create a new project and choose the "Azure Functions" template:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-2.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Selecting Azure Functions new project template in VS2019</figcaption></figure><p><br>For the options, choose the "Event Hub trigger" type, enter the name of the storage account you created earlier, enter "<strong><em>EventHubConnectionString</em></strong>" for the Connection string setting name, and enter "<strong><em>insights-operational-logs</em></strong>" for the event hub name.  Note that at this point an event hub with this name does NOT exist in our namespace - but it will after a few more steps.<br></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-3.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Project settings for new Azure Functions project</figcaption></figure><p><br>Once the project is created, you'll need to open up the "local.settings.json" file and add a new item in the "Values" object with a key of "EventHubConnectionString", and a value of the Event Hub connection string that was output by the initial Powershell script.</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-4.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Setting connection string in local.settings.json</figcaption></figure><p><br>At this point, you could leave the code as-is (the 'template' code that VS puts in the Run(...) function) and follow the remaining steps, and it would just output each message to the logger for your function.  However, we want to be able to take these messages and POST them into an Elasticsearch endpoint, so we can then view/query them in Kibana.</p><p>To do that we will use the <a href="https://www.elastic.co/guide/en/elasticsearch/client/net-api/current/elasticsearch-net.html">Elasticsearch.Net low level client</a>.  This allows lower-level access to write directly to an Elasticsearch endpoint.  Remember - I'm not using SSL or any kind of authentication for this example, so if you have that enabled you'll probably need to build in more code than I have to make it work, but it does look like those options are supported with this client.</p><p>We'll install the needed NuGet package in the package manager console with the following command:<br></p><pre>
	<code class="language-powershell">
Install-Package Elasticsearch.Net
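
# Or, if you prefer the dotnet CLI:
# dotnet add package Elasticsearch.Net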
	</code>
</pre><p></p><p>Add in a "using Elasticsearch.Net" statement in your usings section, and replace the entirety of the "Run" method with the following code:<br></p><pre>
	<code class="language-csharp">
[FunctionName("Function1")]
public static async Task Run([EventHubTrigger("insights-operational-logs", Connection = "EventHubConnectionString")] EventData[] events, ILogger log)
{
	var exceptions = new List&lt;Exception&gt;();

	var elasticsearchIndex = "azureactivitylog";
	// Replace this with the actual address of your elasticsearch endpoint!
	var elasticsearchAddress = "http://1.2.3.4:9200";
	var settings = new ConnectionConfiguration(new Uri(elasticsearchAddress));
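	// (If your endpoint used TLS or basic auth, this settings object is where
	// you'd configure it - e.g. settings.BasicAuthentication(user, pass) -
	// but that's not needed for this unsecured example.)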
	var client = new ElasticLowLevelClient(settings);

	foreach (EventData eventData in events)
	{
		try
		{
			string messageBody = Encoding.UTF8.GetString(eventData.Body.Array, eventData.Body.Offset, eventData.Body.Count);

			// you probably wouldn't want this log message in a production instance, but we'll keep it here for our testing purposes
			log.LogInformation($"Raw Data From Function: {messageBody}");
			var response = client.Index&lt;StringResponse&gt;(elasticsearchIndex, PostData.String(messageBody));
			if (!response.Success)
			{
				throw response.ApiCall.OriginalException;
			}

			await Task.Yield();
		}
		catch (Exception e)
		{
			// We need to keep processing the rest of the batch - capture this exception and continue.
			// Also, consider capturing details of the message that failed processing so it can be processed again later.
			exceptions.Add(e);
		}
	}

	// Once processing of the batch is complete, if any messages in the batch failed processing throw an exception so that there is a record of the failure.

	if (exceptions.Count > 1)
		throw new AggregateException(exceptions);

	if (exceptions.Count == 1)
		throw exceptions.Single();
}
	</code>
</pre><p><br>All we're really doing here is setting up a connection to our Elasticsearch endpoint near the top of the function, and then making a call to "client.Index(...)" for each event in the event hub.  Since the data coming from Azure is already in JSON format, we don't have to do anything special to process it.</p><p>What happens now is that every time an item is put on the "insights-operational-logs" event hub in the event hub namespace we've defined with our connection string, this function will be triggered and run the code above.</p><p>Now let's go ahead and make a Publish Profile in Visual Studio to publish this as a new function.  Right click on your project and select "Publish".</p><p>Choose "Azure Functions Consumption Plan", "Create New", and check the "Run from package file" checkbox, then click "Create Profile":</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-5.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Setting up your publish target</figcaption></figure><p><br>Give your app service a name - then select the appropriate subscription, resource group, location, and storage account.  Note that these should match up with the values you used/created via the initial Powershell script.  When ready, click "Create":</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-6.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Setting up the details for your publish profile</figcaption></figure><p></p><p>Now that we've created our publish profile, we just need to do an actual deployment with it and specify the production parameters it needs.  Remember we defined the "EventHubConnectionString" in our local.settings.json file?  Well, that's only used when you're running the function locally in debug mode.  We have to provide a value for the deployment so it knows what to use.  Click on the "Edit Azure App Service settings" link:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-7.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Edit Azure App Service settings</figcaption></figure><p>Copy the value you put in the "Local" field for EventHubConnectionString into the Remote field as well, and hit OK.</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-8.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Setting your remote deployment parameters</figcaption></figure><p><br>Click "Publish" back on the main screen and wait for all your resources to publish to Azure.  Once completed, you should see a new App Service and App Service plan in your resource group that correspond to the function you just published:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-9.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"></figure><p><br>Open up the App Service record and expand Functions, and you'll see the "Function1" we created.  If you go to "Monitor", you can configure Application Insights to enable logging for the function.
Click "Configure" and then on the next screen just select "Create new resource" and give a name for the Application Insights resource to be created:<br></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-10.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Configure Application Insights for your function</figcaption></figure><p></p><p>Once that's setup you can go back to the "Monitor" section and you should now see this:<br></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-11.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Monitor screen for your function</figcaption></figure><p>We're basically all setup now!  All we need is to create an event hub in our namespace and have it start pushing messages to it, so our function can pick them up.</p><p>For this setup, we're going to stream Azure Activity Logs.  These are the access/audit logs that Azure maintains to show who added/deleted/edited different resources in the Azure Portal or via the CLI.  <em>(For more detailed information, you can check out this link:  <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-export">https://docs.microsoft.com/en-us/azure/azure-monitor/platform/activity-log-export</a>)</em></p><p>To set this up, we will go to the "Activity Log" in the Azure portal (just search for 'Activity Log', open it up, and click on the "Diagnostic settings" button.  Then, click on the purple banner for the "Legacy experience" to get to this screen:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-1.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"></figure><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Setup Azure activity log export</figcaption></figure><p></p><p>Select your subscription, and whatever regions you want to monitor.  Then, check the box for "Export to an event hub", and for the Service bus namespace, you'll specify the Subscription where your event hub namespace is, along with the namespace name itself, and then specify "RootManageSharedAccessKey" from the policy name drop down.  Click OK, then click Save.</p><p>Let's go load up our Event Hub namespace now and look at the Event Hub entities:<br></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-12.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Event hubs</figcaption></figure><p>Look at that - Azure created a new hub for us in our namespace called "insights-operational-logs"!</p><p>Now let's do something to force an audit change.  I'm just going to create a storage account, making sure I do it in the subscription/region that I specified when setting up the activity log export.  
After your resource is successfully created, wait 5 minutes or so (there is a slight delay before the event hub log data shows up in the portal).</p><p>Go back to the App Service for your function and load up the "Monitor" section again:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-13.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Monitor data for function showing successful processing</figcaption></figure><p></p><p>Click on the row there and you can see the actual logged raw data!</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-14.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Raw logging data from function</figcaption></figure><p></p><p>Since we're not seeing any error messages here, we can assume our function was able to successfully parse the records and send them to Elasticsearch.  Let's go view the Elasticsearch Index Management page in the Kibana portal:<br></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-15.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>New index showing up in Elasticsearch</figcaption></figure><p></p><p>There's our new index!  Next we can add an index pattern in Kibana:<br></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-16.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Define index pattern</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-17.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Set records.time as the Time Filter field</figcaption></figure><p></p><p>And then go to the discover page to view the raw index data:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-18.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"><figcaption>Viewing your Azure Activity Log data in Kibana!</figcaption></figure><p><br>That's it - you're done!  I've never really done anything with Azure Functions or the Event Hub, so this was definitely a fun learning experience.  Some other things to eventually focus on for improvements might be:</p><ul><li>Figure out how to communicate with TLS using the Elasticsearch.Net client</li><li>Figure out how to use Basic Auth with the Elasticsearch.Net client to communicate with a protected Elasticsearch endpoint</li><li>Consider parsing out the data a little better in the Azure function and only sending certain elements to Elasticsearch, instead of the entire (giant) JSON message that Azure sends</li><li>Look into all the other types of Azure resources that allow streaming of Diagnostic data to event hubs so we could consume those in ELK as well (see details on how here:  <a href="https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-settings">https://docs.microsoft.com/en-us/azure/azure-monitor/platform/diagnostic-settings</a>).
The following is a short list of what I found that looks like it would be available to set up in a similar way to what we just did with the Azure Activity Logs.</li></ul><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/02/image-20.png" class="kg-image" alt="Exporting Azure Log data to the ELK stack with Azure Functions and Event Hubs"></figure><p></p><p>Hopefully you've found this useful - if you have any questions let me know!</p><p>Thanks,<br>Justin</p>]]></content:encoded></item><item><title><![CDATA[Utilizing AzureAD Application Roles for added security when using Service-to-Service authentication]]></title><description><![CDATA[Implement AzureAD application roles for greater security when utilizing service-to-service authentication]]></description><link>https://talkcloudlytome.com/azuread-application-roles-for-service-to-service-authentication/</link><guid isPermaLink="false">5e29d4382d8d6b040adf6b33</guid><category><![CDATA[Azure]]></category><category><![CDATA[RBAC]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Mon, 03 Feb 2020 17:38:49 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2020/02/azuread.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2020/02/azuread.png" alt="Utilizing AzureAD Application Roles for added security when using Service-to-Service authentication"><p>Recently I had to deal with a scenario where I had several different service principals that were allowed to access certain endpoints on a web service.  This code was already in place and was working fine.  However - I had a new requirement come up where I needed only ONE service principal to be able to access certain secure endpoints, while the others could only access the "normal" endpoints.</p><p>After doing some research, I found two concepts that apply to OAuth and Azure AD application registration:  scopes and roles.  These concepts allow you to grant granular permissions to certain users, and they can then be returned as claims in the JWT that's issued on a successful authentication.  Scopes apply when you're working with an actual AD user, or a service that's accessing an endpoint on behalf of a user (delegated auth).  In my scenario, it was all backend service daemons calling the endpoints, so that wouldn't apply.  However - roles will work in that case.</p><p>I found bits and pieces of examples from multiple different websites (listed below in the "Additional Resources" section), but wanted to put together a full step-by-step guide for how to set this up as a proof of concept.</p><p>For this example, we will have two application registrations / service principals.  One will be the "TodoService", which will serve as the endpoint that's being called by the other service principal, the "TodoClient".  Imagine that the "TodoService" is the resource that contains the sensitive/secure endpoints and it's going to be called by the "TodoClient" service.</p><p>The below Powershell script will create both of the application registrations / service principals, as well as define and create a "CallSecureEndpoints" role on the TodoService application.  You will need to enter your Azure subscription name and directory domain in the "$subscriptionName" and "$directoryDomain" parameters for it to work, and you can (optionally) change the display names and identifier URIs to your own initials.</p><pre>
	<code class="language-powershell">
#******************************************************************************
# Script parameters
#******************************************************************************
$subscriptionName = "Your Subscription Here"
$directoryDomain = "yourazuredomain.onmicrosoft.com"

$todoClientAppDisplayName = "jdc-todo-client"
$todoServiceAppDisplayName = "jdc-todo-service"

$todoClientAppIdentifierUri = "https://$directoryDomain/jdc/todo/client"
$todoServiceAppIdentifierUri = "https://$directoryDomain/jdc/todo/service"
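
# Note: the service's identifier URI doubles as the OAuth "resource" value
# used when requesting a token later in this post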

#******************************************************************************
# Defined functions
#******************************************************************************
function Create-AppRegistration([String]$identifierUri, [String]$displayName, [Boolean]$assignApplicationRoleToAppRegistration) {
	[HashTable]$appRegistrationDetails = @{ }
	
	try
	{
		# Remove existing application registration if it exists
	    $app = Get-AzureADApplication -Filter "identifierUris/any(uri:uri eq '$identifierUri')"
		if ($app)
		{
			Write-Host ("Removing Application with IdentifierUri: {0}" -f $identifierUri) -ForegroundColor Green
			Remove-AzureADApplication -ObjectId $($app.ObjectId)
		}
		
		# Create application registration
		Write-Host ("Creating Application with IdentifierUri: {0}..." -f $identifierUri) -ForegroundColor Green
		$applicationRegistration = New-AzureADApplication `
			-DisplayName $displayName `
			-IdentifierUris $identifierUri `
			-AvailableToOtherTenants $true
			
		# Create service principal and credentials
		Write-Host "Creating service principal..."
		$applicationRegistrationServicePrincipal = New-AzureADServicePrincipal -AppId $applicationRegistration.AppId
		$passwordParams = @{ CustomKeyIdentifier = "AccessKey" }
		$applicationRegistrationPasswordCredential = New-AzureADApplicationPasswordCredential -ObjectId $applicationRegistration.ObjectId @passwordParams
		
		if ($assignApplicationRoleToAppRegistration) {
			Write-Host "Creating and assigning application roles..."
			
			$callSecureEndpointsAppRole = Create-AppRole -roleName "CallSecureEndpoints" -roleDescription "Applications are allowed to call sensitive and secure endpoints on the service"
			
			$applicationRoles = $applicationRegistration.AppRoles
			$applicationRoles.Add($callSecureEndpointsAppRole)
			Set-AzureADApplication -ObjectId $applicationRegistration.ObjectId -AppRoles $applicationRoles
		}
		
		# fill in our values to return to the caller
		$appRegistrationDetails.Uri = $identifierUri
		$appRegistrationDetails.DisplayName = $displayName
		$appRegistrationDetails.ApplicationId = $applicationRegistration.AppId
		$appRegistrationDetails.Secret = $applicationRegistrationPasswordCredential.Value
		$appRegistrationDetails.ServicePrincipalObjectId = $applicationRegistrationServicePrincipal.ObjectId

		return $appRegistrationDetails
	}
	catch [Exception]
	{
		Write-Output ($_)
		exit 1
	}
}

Function Create-AppRole([string] $roleName, [string] $roleDescription) {
    $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole
    $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string]
    $appRole.AllowedMemberTypes.Add("Application");
    $appRole.DisplayName = $roleName
    $appRole.Id = New-Guid
    $appRole.IsEnabled = $true
    $appRole.Description = $roleDescription
    $appRole.Value = $roleName;
    return $appRole
}

#******************************************************************************
# Script body
# Execution begins here
#******************************************************************************
Write-Host "Importing Azure Modules..."
Import-Module -Name Az
Import-Module -Name AzureAD

$ErrorActionPreference = "Stop"
Write-Host ("Script Started " + [System.Datetime]::Now.ToString()) -ForegroundColor Green

# Sign in to Azure account
Write-Host "Logging in..."
$currentContext = Get-AzContext
if ($null -eq $currentContext.Subscription)
{
	$verboseMessage = Connect-AzAccount
	Write-Verbose $verboseMessage
	
	# reload context
	$currentContext = Get-AzContext
}

# Select subscription
Write-Host "Selecting subscription '$subscriptionName'"
$verboseMessage = Select-AzSubscription -SubscriptionName $subscriptionName
Write-Verbose $verboseMessage

# Connect to AzureAD (needed to call any of the AD functions)
Connect-AzureAD -TenantId $currentContext.Tenant.Id -AccountId $currentContext.Account.Id

# Create the application registration for TodoService
$todoServiceApplication = Create-AppRegistration -identifierUri $todoServiceAppIdentifierUri -displayName $todoServiceAppDisplayName -assignApplicationRoleToAppRegistration $true

# Create the application registration for TodoClient
$todoClientApplication = Create-AppRegistration -identifierUri $todoClientAppIdentifierUri -displayName $todoClientAppDisplayName -assignApplicationRoleToAppRegistration $false

# Output and display the values
Write-Host "TODO SERVICE APPLICATION DETAILS:"
Write-Host "-------------------------------"
Write-Host ("DisplayName: {0}" -f $todoServiceApplication.DisplayName) -ForegroundColor Yellow
Write-Host ("Uri: {0}" -f $todoServiceApplication.Uri) -ForegroundColor Yellow
Write-Host ("AppId: {0}" -f $todoServiceApplication.ApplicationId) -ForegroundColor Yellow
Write-Host ("Secret: {0}" -f $todoServiceApplication.Secret) -ForegroundColor Yellow
Write-Host ("Service Principal ObjectId: {0}" -f $todoServiceApplication.ServicePrincipalObjectId) -ForegroundColor Yellow

Write-Host "TODO CLIENT APPLICATION DETAILS:"
Write-Host "-------------------------------"
Write-Host ("DisplayName: {0}" -f $todoClientApplication.DisplayName) -ForegroundColor Yellow
Write-Host ("Uri: {0}" -f $todoClientApplication.Uri) -ForegroundColor Yellow
Write-Host ("AppId: {0}" -f $todoClientApplication.ApplicationId) -ForegroundColor Yellow
Write-Host ("Secret: {0}" -f $todoClientApplication.Secret) -ForegroundColor Yellow
Write-Host ("Service Principal ObjectId: {0}" -f $todoClientApplication.ServicePrincipalObjectId) -ForegroundColor Yellow

# Complete
Write-Host ("Creation complete " + [System.Datetime]::Now.ToString()) -ForegroundColor Green
	</code>
</pre><p></p><p>After successfully running the script, several values are output for both app registrations - including the AppId, Secret, and Service Principal ObjectId.  You will want to save those off for later steps.</p><p>Now, we should be able to use our credentials for the TodoClient service to obtain an OAuth token from AzureAD.  You can do this in Postman by making a POST to "https://login.microsoftonline.com/&lt;Your-Azure-Tenant-ID&gt;/oauth2/token".  Provide the following values in the Body section:</p><p><strong>grant_type:</strong>          client_credentials<br><strong>client_id:</strong>              The "AppId" for your TodoClient AzureAD application<br><strong>client_secret:</strong>     The "Secret" for your TodoClient AzureAD application<br><strong>resource:</strong>               The "IdentifierUri" for your TodoService AzureAD application</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://talkcloudlytome.com/content/images/2020/01/image-6.png" class="kg-image" alt="Utilizing AzureAD Application Roles for added security when using Service-to-Service authentication"><figcaption>Authenticating with your service principal to get an OAuth access_token</figcaption></figure><p>You can copy out the value in "access_token" and head over to <a href="https://jwt.io/">https://jwt.io/</a> to decode it.  You should see values like this in the payload section:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/01/image-7.png" class="kg-image" alt="Utilizing AzureAD Application Roles for added security when using Service-to-Service authentication"><figcaption>JWT payload without roles claim</figcaption></figure><p>Now that we know that's all set up properly, we need to actually assign the "TodoClient" service the permissions to use the "CallSecureEndpoints" role on the "TodoService" service, as well as grant Admin consent.  There may be a way to do this via PowerShell scripting, but it's also very easy to just do it manually in the Azure portal:<br></p><ol><li>Open the "TodoClient" application</li><li>Go to "API permissions"</li><li>Select "Add a permission"</li><li>Choose "APIs my organization uses"</li><li>Find and select the "TodoService" application from the list</li><li>Choose "Application Permissions"</li><li>Select the "CallSecureEndpoints" role</li><li>Click "Add Permissions"</li></ol><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/01/image-11.png" class="kg-image" alt="Utilizing AzureAD Application Roles for added security when using Service-to-Service authentication"><figcaption>Adding the CallSecureEndpoints role assignment to the TodoClient service principal</figcaption></figure><p></p><p>Now you just need to grant admin consent to the permissions:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/01/image-10.png" class="kg-image" alt="Utilizing AzureAD Application Roles for added security when using Service-to-Service authentication"><figcaption>Granting admin consent to the permissions</figcaption></figure><p></p><p>Now, go back and send another request in Postman and get a new access_token.  Go back to <a href="https://jwt.io/">https://jwt.io/</a> and decode it.</p>
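<p>If you'd rather script that token request than click through Postman, here's a minimal sketch using curl - the YOUR_* placeholders are assumptions, so substitute your own tenant ID and the values from your own app registrations:</p><pre>
	<code class="language-bash">
# Request a client_credentials token from AzureAD
# (all of the YOUR_* values below are placeholders)
curl -X POST "https://login.microsoftonline.com/YOUR_AZURE_TENANT_ID/oauth2/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=YOUR_TODO_CLIENT_APP_ID" \
  -d "client_secret=YOUR_TODO_CLIENT_SECRET" \
  -d "resource=https://yourazuredomain.onmicrosoft.com/jdc/todo/service"
	</code>
</pre><p>The JSON response contains the same "access_token" field you would copy out of Postman.</p>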
<p>You should see it now looks slightly different:</p><p></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/01/image-8.png" class="kg-image" alt="Utilizing AzureAD Application Roles for added security when using Service-to-Service authentication"><figcaption>JWT payload with roles claim</figcaption></figure><p>You can now see that we get the "roles" returned in our JWT from AzureAD, and our "CallSecureEndpoints" role is present in the claim!</p><p>It's now pretty straightforward to check for the presence of that roles claim when validating your token in your backend code:</p><pre>
	<code class="language-csharp">
// requires the System.IdentityModel.Tokens.Jwt and System.Security.Claims namespaces
var jwtTokenHandler = new JwtSecurityTokenHandler();
var validationParameters = new Microsoft.IdentityModel.Tokens.TokenValidationParameters
{
	// ENTER YOUR PARAMETERS SPECIFIC TO YOUR SCENARIO HERE
};

// replace "YourAccessToken" with the raw access_token value you got from Postman
// (just the token itself - do NOT include the "Bearer " prefix)
var claims = jwtTokenHandler.ValidateToken("YourAccessToken", validationParameters, out var foundToken);

// Find the roles from the roles claim on the token
var assignedRoles = new List<string>();
foreach (var claim in claims.FindAll("http://schemas.microsoft.com/ws/2008/06/identity/claims/role"))
{
	assignedRoles.Add(claim.Value);
}

// Allow or deny actions based on whether you found your role in the claim
if (!assignedRoles.Contains("CallSecureEndpoints"))
{
	// deny the request - the caller was not granted the role
}
	</code>
</pre><p></p><p>And you're all done!</p><p><strong>Additional Resources:</strong></p><p>I used the following resources when researching this process and building out this blog post:</p><ul><li><a href="https://joonasw.net/view/defining-permissions-and-roles-in-aad">https://joonasw.net/view/defining-permissions-and-roles-in-aad</a></li><li><a href="https://stackoverflow.com/questions/26497365/azure-api-management-scope-claim-null">https://stackoverflow.com/questions/26497365/azure-api-management-scope-claim-null</a></li><li><a href="https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps">https://docs.microsoft.com/en-us/azure/active-directory/develop/howto-add-app-roles-in-azure-ad-apps</a></li><li><a href="https://stackoverflow.com/questions/51651889/how-to-add-app-roles-under-manifest-in-azure-active-directory-using-powershell-s">https://stackoverflow.com/questions/51651889/how-to-add-app-roles-under-manifest-in-azure-active-directory-using-powershell-s</a></li></ul><p></p><p>Hopefully you find this helpful - if you have any questions feel free to let me know.</p><p>Thanks,<br>Justin</p>]]></content:encoded></item><item><title><![CDATA[Posting metrics to the ELK stack via REST API]]></title><description><![CDATA[Learn how to post data directly to Elasticsearch in the ELK stack via the REST API to capture and store your own metrics!]]></description><link>https://talkcloudlytome.com/posting-metrics-to-the-elk-stack-via-rest-api/</link><guid isPermaLink="false">5e1f51f72d8d6b040adf6ad9</guid><category><![CDATA[Metrics]]></category><category><![CDATA[ELK]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Wed, 15 Jan 2020 20:04:53 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2020/01/kibana.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2020/01/kibana.png" alt="Posting metrics to the ELK stack via REST API"><p>Recently I was working with the ELK stack and had a co-worker ask if there would be a way to have a scheduled task that ran a SQL query, and then output those values into ELK so we could build a visualization showing some of those metrics over time.   It was a pretty straightforward data type, basically something like this:</p><pre>
	<code class="language-json">
{
    "post_date": "2020-01-15T13:14:59",
    "client_name": "ClientABCD",
    "number_of_events": 72
}
	</code>
</pre><p></p><p>The intent here is that we would want to trend that data over time, per client, to see the "number_of_events" they had, and how that value increased or decreased over time.  </p><p>The most common use case with ELK is to use Logstash or Filebeat to parse log files into the system.  However, the ELK stack does provide a full <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/rest-apis.html">REST API</a> that you can use to directly interact with it.  My thought was to create a PowerShell script that would run a SQL query, and then iterate through the result set and manually POST each record to the ELK endpoint.</p><p>I ended up finding the <a href="https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-index_.html">Index API</a>, which provided me with the syntax on how to make a POST to an Elasticsearch index.  It's worth noting that in my ELK stack, the system was set up to automatically create indices if they didn't already exist when inserting a document.  If this is NOT the case for your instance, my samples here will likely fail and you'll have to set the index up on your own.</p><p>I started with a simple test trying to POST my JSON body to the new index I wanted, "/metrictest/eventcounter":</p><pre>
	<code class="language-powershell">
$metrics_date = (Get-Date -Format "yyyy-MM-ddTHH:mm:ss").ToString()

$post_body_raw = @{
    post_date = $metrics_date
    client_name = 'ClientABCD'
    number_of_events = 155
}

$post_body_json = ConvertTo-Json ([System.Management.Automation.PSObject] $post_body_raw)

Invoke-WebRequest -UseBasicParsing -Uri http://your-elk-server-name:9200/metrictest/eventcounter -ContentType "application/json" -Method POST -Body $post_body_json
	</code>
</pre><p></p><p>When I tried to run that, I got this:<br></p><pre>
	<code class="language-powershell">
Invoke-WebRequest : {"error":{"root_cause":[{"type":"security_exception","reason":"missing authentication credentials for REST request [/metrictest/eventcounter]","header":
{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"missing authentication credentials     
for REST request [/metrictest/eventcounter]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
At line:1 char:1                                                                                                                                                                       
+ Invoke-WebRequest -UseBasicParsing -Uri http://your-elk-server-name:9200/metri...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo          : InvalidOperation: (System.Net.HttpWebRequest:HttpWebRequest) [Invoke-WebRequest], WebException 
+ FullyQualifiedErrorId : WebCmdletWebResponseException,Microsoft.PowerShell.Commands.InvokeWebRequestCommand 
	</code>
</pre><p></p><p>Uh-oh!   Looks like my ELK stack requires authentication!  In my setup, ELK was configured for "native" authentication, which means that ELK controls the usernames and passwords.  When configured that way, you can simply pass the username and password as a Base64-encoded string in an Authorization header.  </p><p>Let's update the PowerShell script to add that in:<br></p><pre>
	<code class="language-powershell">
$user = "elasticUser"
$password = "elasticPassword"
$credential = "${user}:${password}"
$credentialBytes = [System.Text.Encoding]::ASCII.GetBytes($credential)
$base64Credential = [System.Convert]::ToBase64String($credentialBytes)
$basicAuthHeader = "Basic $base64Credential"
$headers = @{ Authorization = $basicAuthHeader }

$metrics_date = (Get-Date -Format "yyyy-MM-ddTHH:mm:ss").ToString()
    
$post_body_raw = @{
	post_date=$metrics_date;
    client_name='ClientABCD';
    number_of_events=155
}
    
$post_body_json = ConvertTo-Json ([System.Management.Automation.PSObject] $post_body_raw)
    
Invoke-WebRequest -UseBasicParsing -Uri http://your-elk-server-name:9200/metrictest/eventcounter -ContentType "application/json" -Headers $headers -Method POST -Body $post_body_json
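
# Optional sanity check after a successful POST - query the index back to confirm
# the document was stored (this assumes the same host and credentials as above):
# Invoke-WebRequest -UseBasicParsing -Uri http://your-elk-server-name:9200/metrictest/_search -Headers $headers -Method GET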
	</code>
</pre><p></p><p>Success!</p><pre>
	<code class="language-powershell">
StatusCode        : 201
StatusDescription : Created
Content           : {"_index":"metrictest","_type":"eventcounter","_id":"dASMqm8BSqDFEHEt7te7","_version":1,"result":"created","_shards":{"total":2,"successful":2,"failed":0},"_seq_no":0,"_primary_term":1}
RawContent        : HTTP/1.1 201 Created
					Content-Length: 185
					Content-Type: application/json; charset=UTF-8
					Location: /metrictest/eventcounter/dASMqm8BSqDFEHEt7te7
					
					{"_index":"metrictest","_type":"eventcounter","_id"...   
Forms             :                                                                                                                                                                                              
Headers           : {[Content-Length, 185], [Content-Type, application/json; charset=UTF-8], [Location, /metrictest/eventcounter/dASMqm8BSqDFEHEt7te7]}  
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        :
RawContentLength  : 185
	</code>
</pre><p></p><p>Now we can go into Kibana and go to "Management --&gt; Create Index Pattern" and add our "metrictest" index:<br></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/01/image.png" class="kg-image" alt="Posting metrics to the ELK stack via REST API"><figcaption>Create index pattern in Kibana</figcaption></figure><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/01/image-1.png" class="kg-image" alt="Posting metrics to the ELK stack via REST API"><figcaption>Select "post_date" as your Time Filter field</figcaption></figure><p></p><p>And then select that index pattern in the Discover section of Kibana and you'll be able to see your data:</p><p></p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2020/01/image-2.png" class="kg-image" alt="Posting metrics to the ELK stack via REST API"><figcaption>Viewing metrictest index in Kibana</figcaption></figure><p></p><p>Hopefully you find this useful.  I just started digging into the ELK stack recently and there's definitely a ton of cool stuff there!</p><p>Thanks,<br>Justin</p>]]></content:encoded></item><item><title><![CDATA[Using RBAC with Service Principals for Azure Storage]]></title><description><![CDATA[Learn how to securely manage access to Azure Blob Storage utilizing Role Based Access Control (RBAC) instead of access keys]]></description><link>https://talkcloudlytome.com/using-rbac-with-service-principals-for-azure-storage/</link><guid isPermaLink="false">5d52c86e1d587d03f268c3b6</guid><category><![CDATA[Azure]]></category><category><![CDATA[RBAC]]></category><category><![CDATA[Security]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Tue, 13 Aug 2019 17:50:31 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2019/08/rbac.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2019/08/rbac.png" alt="Using RBAC with Service Principals for Azure Storage"><p>Most of the time you'll see examples and tutorials online of accessing Azure Blob Storage programmatically using the master storage account key(s), or generating SAS keys and using those instead.   While this certainly works, it does have some drawbacks:</p><ul><li>The master storage key gives far more access than is needed (in most cases)</li><li>If a master storage key is compromised and you regenerate it, all SAS keys that were created off of that master key are now invalid and must be recreated</li></ul><p>It turns out there's a better way to do it!   Azure Blob Storage now supports the use of RBAC to control access.  You can do this with a regular Azure AD user as well, but for the purposes of this post, we will create a Service Principal and show how to use that.   
The benefit of going this route is you never have to give out any keys to your storage account, and revoking access is as simple as removing the roles/permissions assigned to the particular service principal!</p><p>Here's a high-level list of the steps to be performed:<br><em>(Note: You'll have to be a Global Administrator in your Azure account to do some of this!)</em></p><ul><li><a href="#create-service-principal-section">Create a service principal</a></li><li><a href="#create-resource-group-section">Create a resource group</a></li><li><a href="#create-storage-account-section">Create a storage account with a few containers</a></li><li><a href="#create-custom-roles-section">Create two custom RBAC roles - one which allows only READ access to containers, and one which allows WRITE access to containers</a></li><li><a href="#assign-custom-roles-section">Assign the roles with the appropriate permission scopes to your service principal record</a></li><li><a href="#access-via-az-cli-section">Show how to access those resources via the az CLI</a></li><li><a href="#access-via-csharp-section">Show how to access those resources via C#</a></li></ul><p>
    <strong id="create-service-principal-section">Create the service principal via az CLI:</strong>
    <br>
    <em>(Replace "YOUR_SERVICE_PRINCIPAL_NAME" with the name you want to use)</em>
</p><pre>
	<code class="language-bash">
az ad sp create-for-rbac -n "YOUR_SERVICE_PRINCIPAL_NAME" --skip-assignment
	</code>
</pre><p></p><p>This command will output some values that are important to note - make sure you save off the "PASSWORD" and "APPLICATION_ID" values from the output!</p><p>
    <strong id="create-resource-group-section">Create the resource group via az CLI:</strong>
    <br>
    <em>(Replace "YOUR_RESOURCE_GROUP_NAME" with the name you want to use)</em>
</p><pre>
	<code class="language-bash">
az group create -l eastus -n YOUR_RESOURCE_GROUP_NAME
	</code>
</pre>
<br><p>
    <strong id="create-storage-account-section">Create the storage account and some containers via az CLI:</strong>
    <br>
    <em>(Replace "YOUR_RESOURCE_GROUP_NAME" with the name of your created resource group, and "YOUR_STORAGE_ACCOUNT_NAME" with the name you want to use)</em>
</p><pre>
	<code class="language-bash">
az storage account create -n YOUR_STORAGE_ACCOUNT_NAME -g YOUR_RESOURCE_GROUP_NAME -l eastus --sku Standard_LRS

az storage container create -n readonly --account-name YOUR_STORAGE_ACCOUNT_NAME

az storage container create -n writeonly --account-name YOUR_STORAGE_ACCOUNT_NAME
	</code>
</pre>
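<p>Before moving on, you can do a quick sanity check that both containers exist - a simple sketch, run as your own logged-in admin account (not the service principal):</p><pre>
	<code class="language-bash">
az storage container list --account-name YOUR_STORAGE_ACCOUNT_NAME --output table
	</code>
</pre>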
<br><p>
    <strong id="create-custom-roles-section">Create the custom RBAC roles via az CLI:</strong>
    <br>
</p><p>You can technically skip this part and just use some of the <a href="https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles">core Azure RBAC roles</a> for assignment if you want.  However, in my case I wanted to make one container read-only (i.e., you can only view and download the blobs, but not put anything there), and one container write-only (you can only write a new blob there, not read or download anything).  I didn't find a core role for the "write-only" option, so I decided to make my own custom RBAC roles for both of them.</p><p>First, you'll want to create two files, "storage-reader-role-definition.json" and "storage-writer-role-definition.json", with the following contents:</p><p><em>storage-reader-role-definition.json</em><br><em>(Replace "YOUR_SUBSCRIPTION_ID" with the id of your Azure subscription)</em></p><pre>
	<code class="language-json">
{
  "Name": "custom-blob-storage-reader",
  "IsCustom": true,
  "Description": "Ability to list and download blobs from a given container",
  "Actions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/read"
  ],
  "NotActions": [],
  "DataActions": [
	"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/YOUR_SUBSCRIPTION_ID"
  ]
}
	</code>
</pre><p></p><p><em>storage-writer-role-definition.json</em><br><em>(Replace "YOUR_SUBSCRIPTION_ID" with the id of your Azure subscription)</em></p><pre>
	<code class="language-json">
{
  "Name": "custom-blob-storage-writer",
  "IsCustom": true,
  "Description": "Ability to write blobs to a given container",
  "Actions": [
	"Microsoft.Storage/storageAccounts/blobServices/containers/read"
  ],
  "NotActions": [],
  "DataActions": [
	"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
	"Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/YOUR_SUBSCRIPTION_ID"
  ]
}
	</code>
</pre><p></p><p>Now you can use the az CLI to create those custom roles in your Azure AD tenant:</p><pre>
	<code class="language-bash">
az role definition create --role-definition "storage-reader-role-definition.json"
az role definition create --role-definition "storage-writer-role-definition.json"
	</code>
</pre><p></p><p>Run a quick check to ensure your roles were successfully created:</p><pre>
	<code class="language-bash">
az role definition list --custom-role-only true
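
# Optionally, narrow the output down to just the role names with a JMESPath query
az role definition list --custom-role-only true --query "[].roleName" --output tsv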
	</code>
</pre>
<br><p>
    <strong id="assign-custom-roles-section">Assign roles with appropriate permissions scopes to service principal record</strong>
    <br>
</p><p>Now we can assign these roles, with the appropriate permission scopes, to our service principal account.  We want to assign the "storage reader" one with access to our "readonly" container we created, and we want to assign the "storage writer" one with access to our "writeonly" container we created.</p><pre>
	<code class="language-bash">
az role assignment create --role "custom-blob-storage-reader" --assignee "YOUR_SERVICE_PRINCIPAL_APPLICATION_ID" --scope "/subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/YOUR_RESOURCE_GROUP_NAME/providers/Microsoft.Storage/storageAccounts/YOUR_STORAGE_ACCOUNT_NAME/blobServices/default/containers/readonly"

az role assignment create --role "custom-blob-storage-writer" --assignee "YOUR_SERVICE_PRINCIPAL_APPLICATION_ID" --scope "/subscriptions/YOUR_SUBSCRIPTION_ID/resourceGroups/YOUR_RESOURCE_GROUP_NAME/providers/Microsoft.Storage/storageAccounts/YOUR_STORAGE_ACCOUNT_NAME/blobServices/default/containers/writeonly"	
	</code>
</pre>
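<p>If you want to double-check that the assignments landed at the right scopes, you can list them for the service principal - a quick sketch, using the same placeholder application ID as above:</p><pre>
	<code class="language-bash">
az role assignment list --assignee "YOUR_SERVICE_PRINCIPAL_APPLICATION_ID" --all --output table
	</code>
</pre>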
<br><p>
    <strong id="access-via-az-cli-section">Login and access storage resources via the az CLI:</strong>
    <br>
</p><p>We're now going to check and make sure we can log in with our service principal credentials and are able to access the resources as we expect.</p><p><em>Before starting the rest of the test, manually upload an empty file called "testreadonly.txt" to the "readonly" container in your storage account.</em></p><p>First we need to log in.  When using service principals (instead of a general Azure AD user record), there is no "dynamic" UI login.  You can only log in by specifying the credentials to the az login command - so let's do that:</p><p><em>Replace the "YOUR_SERVICE_PRINCIPAL_CLIENT_ID" value with the "APPLICATION_ID" you obtained from the output of the create-for-rbac command.  Replace the "YOUR_SERVICE_PRINCIPAL_CLIENT_SECRET" value with the "PASSWORD" value you obtained from the create-for-rbac command.  And lastly replace "YOUR_TENANT_ID" with your appropriate Azure AD tenant ID as well.</em></p><pre>
	<code class="language-bash">
az login --service-principal --username YOUR_SERVICE_PRINCIPAL_CLIENT_ID --password YOUR_SERVICE_PRINCIPAL_CLIENT_SECRET --tenant YOUR_TENANT_ID
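
# Confirm you're now operating in the context of the service principal
az account show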
	</code>
</pre><p></p><p>We should now be logged in as our SP.  Let's test out our access and ensure our custom roles and permission scopes are working as expected:</p><p><em>NOTE:  For all scenarios below, replace "YOUR_STORAGE_ACCOUNT_NAME" with the storage account name you created above.</em></p><p>1) Try to list files in the "readonly" container, and try to download them as well.  Both of these operations <strong><em>should</em> </strong>work.  Note the "--auth-mode login" parameter in these (and all subsequent) commands - it is needed to tell the CLI to use the context of our currently logged-in user (the SP).</p><pre>
	<code class="language-bash">
az storage blob list --account-name YOUR_STORAGE_ACCOUNT_NAME --container readonly --auth-mode login 

az storage blob download --account-name YOUR_STORAGE_ACCOUNT_NAME --container readonly --name "testreadonly.txt" --file "C:\path\to\file\testreadonly-downloaded.txt" --auth-mode login 
	</code>
</pre><p></p><p>2) Try to upload a file to the "writeonly" container.  This operation <strong><em>should</em> </strong>work.</p><pre>
	<code class="language-bash">
az storage blob upload --account-name YOUR_STORAGE_ACCOUNT_NAME --container writeonly --name "testwriteonly.txt" --file "C:\path\to\file\testwriteonly.txt" --auth-mode login 
	</code>
</pre><p></p><p>3) Try to upload a file to the "readonly" container.  This operation <strong>should fail</strong>.</p><pre>
	<code class="language-bash">
az storage blob upload --account-name YOUR_STORAGE_ACCOUNT_NAME --container readonly --name "testwriteonly.txt" --file "C:\path\to\file\testwriteonly.txt" --auth-mode login
	</code>
</pre><p></p><p>4) Try to download a file from the "writeonly" container.  This operation <strong>should fail</strong>.</p><pre>
	<code class="language-bash">
az storage blob download --account-name YOUR_STORAGE_ACCOUNT_NAME --container writeonly --name "testwriteonly.txt" --file "C:\path\to\file\testwriteonly-downloaded.txt" --auth-mode login 
	</code>
</pre>
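<p>One last note on the CLI tests - at this point you're still logged in as the service principal, so when you're finished testing, log out and sign back in with your normal account:</p><pre>
	<code class="language-bash">
az logout
az login
	</code>
</pre>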
<br><p>
    <strong id="access-via-csharp-section">Access storage resources with a service principal via C#:</strong>
    <br>
</p><p>The CLI access method is fine if you just want to use this as a manual process, or perhaps as a scheduled task.  But you may want to have a background service access and authenticate against Azure storage using the SP as well.  This is also very easy to do utilizing some of the Microsoft NuGet libraries.</p><p>I'm doing this with the .NET Framework, but the libraries to do so should also be available for .NET Core/.NET Standard.</p><p>Install the following NuGet libraries into your solution:</p><ul><li>WindowsAzure.Storage@9.3.3</li><li>Microsoft.IdentityModel.Clients.ActiveDirectory@5.1.1</li></ul><p><em>Note that below you will want to fill in your own values in the "Fields" section with the values appropriate to your use case.</em></p><p>And then here's the code!</p><pre>
	<code class="language-csharp">
namespace AzureStorageTest
{
    using System;
    using System.Threading.Tasks;
    using Microsoft.IdentityModel.Clients.ActiveDirectory;
    using Microsoft.WindowsAzure.Storage.Auth;
    using Microsoft.WindowsAzure.Storage.Blob;

    public static class Program
    {
        #region Fields

        private const string TenantID = "YOUR_AZURE_TENANT_ID";
        private const string StorageAccountName = "YOUR_STORAGE_ACCOUNT_NAME";
        private const string ClientID = "YOUR_SERVICE_PRINCIPAL_CLIENT_ID";
        private const string ClientSecret = "YOUR_SERVICE_PRINCIPAL_CLIENT_SECRET";

        #endregion

        #region Methods

        public static void Main(string[] args) 
        {
	        // container to iterate over
            var containerName = "readonly";

            Task.Run(async () =>
            {
                var token = await Program.GetAccessToken();
                TokenCredential tokenCredential = new TokenCredential(token);
                StorageCredentials storageCredentials = new StorageCredentials(tokenCredential);

                CloudBlobClient client = new CloudBlobClient(new Uri($"https://{Program.StorageAccountName}.blob.core.windows.net"), storageCredentials);
                CloudBlobContainer container = client.GetContainerReference(containerName);
                foreach (var blob in container.ListBlobs())
                {
                    Console.WriteLine(blob.StorageUri.PrimaryUri.ToString());
                }
            }).Wait();

            Console.WriteLine("Program Completed");
            Console.ReadKey();
        }

        private static async Task<string> GetAccessToken()
        {
            var authContext = new AuthenticationContext($"https://login.windows.net/{Program.TenantID}");
            var credential = new ClientCredential(Program.ClientID, Program.ClientSecret);
            var result = await authContext.AcquireTokenAsync("https://storage.azure.com", credential);

            if (result == null)
            {
                throw new Exception("Failed to authenticate via ADAL");
            }

            return result.AccessToken;
        }

        #endregion
    }
}
	</code>
</pre><p></p><p>Hopefully this helps get you started on securing your Azure Blob Storage with RBAC instead of hard-coded keys or SAS keys!</p><p>For more resources on RBAC access in Azure - check out the following links:<br></p><ul><li><a href="https://docs.microsoft.com/en-us/azure/role-based-access-control/role-assignments-portal">Manage access to Azure resources using RBAC and the Azure portal</a></li><li><a href="https://docs.microsoft.com/en-us/azure/role-based-access-control/built-in-roles">Built-in roles for Azure resources</a></li><li><a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-rbac-portal">Grant access to Azure blob and queue data with RBAC in the Azure portal</a></li><li><a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-auth-aad-app">Authenticate with Azure Active Directory from an application for access to blobs and queues</a></li></ul><p></p><p>Thanks,<br>Justin<br></p>]]></content:encoded></item><item><title><![CDATA[Using Azure File Shares to mount a volume in Kubernetes]]></title><description><![CDATA[Kubernetes has many options for integrating with various cloud vendor storage solutions for volume mounts.  Here we'll take a look at how to use Azure Storage File Shares in Kubernetes.]]></description><link>https://talkcloudlytome.com/using-azure-file-shares-to-mount-a-volume-in-kubernetes/</link><guid isPermaLink="false">5c9d007c36e67e03c052bb55</guid><category><![CDATA[Azure]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Tue, 02 Apr 2019 01:39:24 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2019/04/create-file-share-portal4.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2019/04/create-file-share-portal4.png" alt="Using Azure File Shares to mount a volume in Kubernetes"><p>As of Kubernetes version 1.14 and Windows Server 2019, it's now possible to mount an Azure File Share object as a PersistentVolume in Kubernetes, and mount it into a Windows-based pod.</p><p><em>Side Note:  All of these commands will also work just fine on a Linux pod/node as well, you just need to install the "cifs-utils" package with your distro's package manager, such as "apt-get install cifs-utils".</em></p><p>Let's walk through the process to get it working!</p><p>First, you'll want to create a storage account in Azure, and then a File Share folder in that storage account.  Create the account and make note of the account name and the account access key:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2019/04/createStorageAcct.png" class="kg-image" alt="Using Azure File Shares to mount a volume in Kubernetes"><figcaption>Storage account name and access key</figcaption></figure><p>After that go over to the "Files" section from the overview page of your storage account and make a new file share called "configfiles":</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2019/04/image-4.png" class="kg-image" alt="Using Azure File Shares to mount a volume in Kubernetes"><figcaption>Creating the 'configfiles' file share</figcaption></figure><p>Now we can jump on over to the kubectl command-line and start making objects in Kubernetes.  Let's start off by making a namespace that can contain everything.</p><pre>
	<code class="language-bash">
kubectl create ns filesharetest
	</code>
</pre><p></p><p>Next, we create a secret that k8s will use to be able to access and mount the fileshare.  NOTE:  This secret MUST be in the same namespace as the PersistentVolumeClaim and Pods that will mount the volume!  Fill in your values for the "YourAzureStorageAccountNameHere" and "YourAzureStorageAccountKeyHere" based on what you obtained earlier.</p><pre>
	<code class="language-bash">
kubectl create secret generic azure-fileshare-secret --from-literal=azurestorageaccountname=YourAzureStorageAccountNameHere --from-literal=azurestorageaccountkey=YourAzureStorageAccountKeyHere -n filesharetest
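
# Quick check that the secret landed in the right namespace
kubectl get secret azure-fileshare-secret -n filesharetest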
	</code>
</pre><p></p><p>Now we can go ahead and create the PersistentVolume that will map to our Azure File Share.  This is what the YAML would look like for that:</p><pre>
	<code class="language-yaml">
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileshare-pv
  labels:
    usage: fileshare-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-fileshare-secret
    shareName: configfiles
    readOnly: false
	</code>
</pre><p></p><p>A few important things to note here:</p><ul><li>You can call it whatever you want, but make sure you note down what you put for the "usage" label - that's the label our PersistentVolumeClaim's selector will match on to bind directly to this volume</li><li>You can change accessModes if you want - Azure File Share does support the "ReadWriteMany" mode, which means you can have multiple pods mounting this volume and reading/writing to it at the same time.  This is what we've set it up for above</li><li>Make sure the "secretName" matches up with what you called your secret, and the "shareName" matches up with the file share object you created in your storage account</li><li>You don't specify a namespace when creating a PV - they are cluster-level resources</li></ul><p>Go ahead and create the file, save it as "pv.yaml", and apply it with:</p><pre>
	<code class="language-bash">
kubectl apply -f pv.yaml
	</code>
</pre><p></p><p>Now that we have a PersistentVolume, we can create a PersistentVolumeClaim to bind to it.  This is what the YAML would look like for that:</p><pre>
	<code class="language-yaml">
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fileshare-pvc
  namespace: filesharetest
  # Set this annotation to NOT let Kubernetes automatically create
  # a persistent volume for this volume claim.
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    # To make sure we match the claim with the exact volume, match the label
    matchLabels:
      usage: fileshare-pv
	</code>
</pre><p></p><p>Note here that we ARE defining the namespace (a PVC must be tied to a namespace), and make sure that your "accessModes" value matches that of your PV.  Additionally, make sure in the selector/matchLabels, your "usage" value matches what you put for your PV - that's how this PVC will know what to bind to.</p><p>Go ahead and create the file, save it as "pvc.yaml", and apply it with:</p><pre>
	<code class="language-bash">
kubectl apply -f pvc.yaml
	</code>
</pre><p></p><p>Once those resources have been created, you can verify with the following:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-4"><code class="language-bash">kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                         		STORAGECLASS   REASON   AGE
fileshare-pv   	 10Gi       RWX            Retain           Bound    filesharetest/fileshare-pvc                       	        66s
</code></pre><p></p><p>And then we can check our PersistentVolumeClaim with the following:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-4"><code class="language-bash">kubectl get pvc -n filesharetest
NAME             STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
fileshare-pvc    Bound    fileshare-pv   10Gi       RWX                           93s
</code></pre><p></p><p>Perfect!  Next up - let's make a deployment with multiple pods, and have them all bind to this PVC - here's the YAML:</p><pre>
	<code class="language-yaml">
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fileshare-deployment
  namespace: filesharetest
  labels:
    app: fileshare-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: fileshare-deployment
  template:
    metadata:
      labels:
        app: fileshare-deployment
    spec:
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: fileshare-pvc
      containers:
      - name: main
        image: mcr.microsoft.com/windows/servercore:ltsc2019
        command: ["powershell", "Start-Sleep", "-s", "86400"]
        volumeMounts:
        - name: azure
          mountPath: "/configfiles"
	</code>
</pre><p></p><p>Here we're creating a deployment that will create three pods.  Each of them is just running a Server Core 2019 image that runs the Start-Sleep PowerShell command for one day - the purpose of this is to just keep the pod running so we can connect to it to run commands.</p><p>Note the spec.volumes and spec.containers.volumeMounts.  We're saying we want a volume matching the fileshare-pvc PVC, which we created earlier.  Then in the container spec, we're mounting that volume to "/configfiles", which would be available at C:\configfiles on a Windows box or /configfiles on a Linux box.</p><p>Go ahead and create the file, save it as "fileshare-deployment.yaml", and apply it with:</p><pre>
	<code class="language-bash">
kubectl apply -f fileshare-deployment.yaml
	</code>
</pre><p></p><p>Once everything is deployed and up and running, let's find the names of our pods:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-6"><code class="language-bash">kubectl get pods -n filesharetest
NAME                                   READY   STATUS    RESTARTS   AGE
fileshare-deployment-f8fdc848b-r94pk   1/1     Running   0          23s
fileshare-deployment-f8fdc848b-xvg8z   1/1     Running   0          23s
fileshare-deployment-f8fdc848b-zbnk7   1/1     Running   0          23s
</code></pre><p></p><p>Now we can exec to one of our pods to see the mount in action:</p><pre>
	<code class="language-bash">
kubectl exec -it YOUR_POD_NAME -n filesharetest powershell
	</code>
</pre><p></p><p>Once connected, we can run some PowerShell commands to see the volume mount:</p><pre>
	<code class="language-powershell">
cd C:\

# Note the "configfiles" folder in the root of C when you perform the ls command
ls

# Change to the directory and write a file called "pod1.txt"
cd configfiles
echo "Hello from first pod" >> pod1.txt

# Note that the "pod1.txt" file is present when you perform the ls command
ls
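
# Read the file back to confirm the content
cat pod1.txt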

	</code>
</pre><p></p><p>If you jump back to the Azure Portal and look at your file share, you'll now see the file we just created:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2019/04/image.png" class="kg-image" alt="Using Azure File Shares to mount a volume in Kubernetes"><figcaption>Azure File Share</figcaption></figure><p></p><p>If you do the same kubectl exec steps and evaluate that directory on a different pod, you'll see the same file!  That's all there is to it, pretty simple!</p><p>One last note: Since we set the "persistentVolumeReclaimPolicy" on our PersistentVolume object to "Retain", we can delete the PersistentVolume in Kubernetes and our FileShare in Azure will be untouched.  Run the following to delete all the stuff you just created in Kubernetes:</p><pre>
	<code class="language-bash">
kubectl delete ns filesharetest && kubectl delete pv fileshare-pv
	</code>
</pre><p></p><p>Check back in the Azure Portal on your file share and you should see the share and the pod1.txt file are still there.</p><p>PersistentVolume and PersistentVolumeClaims in Kubernetes, along with the support of multiple cloud vendors' storage solutions, can prove useful in many cases.  Hopefully this helps you get up and running!</p><p>If you're interested in learning more about PersistentVolumes and PersistentVolumeClaims, be sure to check out the docs at <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a></p><p>Thanks,<br>Justin</p>]]></content:encoded></item><item><title><![CDATA[Implementing an audit webhook for Kubernetes]]></title><description><![CDATA[Kubernetes allows for auditing to a "webhook", which allows you to post the audit data directly to your own web service and consume it how you see fit.  Let's take a look at how to set it up.]]></description><link>https://talkcloudlytome.com/implementing-an-audit-webhook-for-kubernetes/</link><guid isPermaLink="false">5c68d249a7815d03b79e9895</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Tue, 19 Feb 2019 14:54:01 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2019/02/kube-audit.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2019/02/kube-audit.png" alt="Implementing an audit webhook for Kubernetes"><p>I recently was looking into the auditing available for Kubernetes.  Kubernetes has the option to enable built-in auditing that can show almost anything that is done to the system - who added pods, who deleted services, who viewed secrets, etc.  You can see the changes done by users via the kubectl CLI, as well as changes that the system itself (such as the kube-scheduler or kube-controller) made.</p><p>There are two main components to enabling auditing in Kubernetes.  First of all, you need an "Audit Policy", which essentially tells the kube-apiserver component what you want audited.  Secondly, you need to define WHERE you want the audits reported to.  For the purposes of this example, I'm using a cluster deployed by aks-engine.  In the default configuration, auditing is already turned on and writes the logs to disk on the kube-apiserver component.  You can read more about how to do this <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/audit/">here</a>.</p><p>So, I already have auditing enabled and writing to a log file on the local Linux master node.  However, there's a pretty neat feature called the "webhook backend", where you can have the kube-apiserver pass the audits to a web endpoint.  
I was struggling to find some good documentation on how to do this, so I figured I would set up an example and get it working.</p><p>Essentially - how it works is as follows:</p><ul><li>Enable auditing in your cluster via an Audit Policy (already done for us in our aks-engine deployment)</li><li>Deploy a web endpoint somewhere (language doesn't matter - I wrote mine in ASP.NET core, but you can use anything) that is capable of accepting a POST with a JSON body (and then doing something with it)</li><li>Create a kubeconfig file that has the address of your endpoint in it</li><li>Pass the "--audit-webhook-config-file" parameter to the kube-apiserver startup parameters, pointing to the kubeconfig file you just created</li></ul><p>Let's walk through an example!</p><p>First of all - this is what will be posted to your webhook endpoint - an "EventList" in the audit.k8s.io/v1 namespace, which has an "items" array, where each object in that array is an "Event" type in the audit.k8s.io/v1 namespace:</p><p><em>(If you're interested in seeing the structure of the actual class used, you can see it in the kube-apiserver source code <a href="https://github.com/kubernetes/kubernetes/blob/4c5e6156525b96b72961b86ff5bd82c44ea0cd96/staging/src/k8s.io/apiserver/pkg/apis/audit/v1/types.go">here</a>)</em></p><pre>
	<code class="language-json">
{
  "kind": "EventList",
  "apiVersion": "audit.k8s.io/v1",
  "metadata": {},
  "items": [
    {
      "level": "Request",
      "auditID": "b7699b7e-e876-4c97-9b18-f7e7adc18841",
      "stage": "ResponseComplete",
      "requestURI": "/api/v1/nodes?limit=500",
      "verb": "list",
      "user": {
        "username": "https://sts.windows.net/147a2b71-5ce9-4933-94c4-2054328de565/#a7b4eb91-181f-4b91-a405-c4d904f1af0f",
        "groups": [
          "29243834-aec2-4872-b903-661512b6ec08",
          "0e22bf18-6762-4e50-b489-6eb39b652962",
          "system:authenticated"
        ]
      },
      "sourceIPs": [
        "74.203.144.5"
      ],
      "userAgent": "kubectl.exe/v1.10.11 (windows/amd64) kubernetes/637c7e2",
      "objectRef": {
        "resource": "nodes",
        "apiVersion": "v1"
      },
      "responseStatus": {
        "metadata": {},
        "status": "Failure",
        "reason": "Forbidden",
        "code": 403
      },
      "requestReceivedTimestamp": "2019-02-06T14:45:25.277447Z",
      "stageTimestamp": "2019-02-06T14:45:25.277756Z",
      "annotations": {
        "authorization.k8s.io/decision": "forbid",
        "authorization.k8s.io/reason": ""
      }
    },
    {
      "level": "Request",
      "auditID": "2cf8ad1e-25b1-49d7-bb0d-b56341587c12",
      "stage": "ResponseComplete",
      "requestURI": "/api/v1/nodes?limit=500",
      "verb": "list",
      "user": {
        "username": "https://sts.windows.net/147a2b71-5ce9-4933-94c4-2054328de565/#a7b4eb91-181f-4b91-a405-c4d904f1af0f",
        "groups": [
          "29243834-aec2-4872-b903-661512b6ec08",
          "0e22bf18-6762-4e50-b489-6eb39b652962",
          "system:authenticated"
        ]
      },
      "sourceIPs": [
        "74.203.144.5"
      ],
      "userAgent": "kubectl.exe/v1.10.11 (windows/amd64) kubernetes/637c7e2",
      "objectRef": {
        "resource": "nodes",
        "apiVersion": "v1"
      },
      "responseStatus": {
        "metadata": {},
        "status": "Failure",
        "reason": "Forbidden",
        "code": 403
      },
      "requestReceivedTimestamp": "2019-02-06T14:46:36.735833Z",
      "stageTimestamp": "2019-02-06T14:46:36.735963Z",
      "annotations": {
        "authorization.k8s.io/decision": "forbid",
        "authorization.k8s.io/reason": ""
      }
    }
  ]
}
	</code>
</pre><p></p><p>For my web backend, I created a simple ASP.NET WebAPI application that has a single endpoint, which takes a POST with an input parameter of "dynamic", as shown here:</p><pre>
	<code class="language-csharp">
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Newtonsoft.Json;
using Newtonsoft.Json.Linq;

namespace kubernetes_audit_webhook.Controllers
{
    [Route("api/[controller]")]
    [ApiController]
    public class AuditsController : ControllerBase
    {
        // POST api/audits
        [HttpPost]
        public void Post(dynamic auditBody)
        {
            Console.WriteLine($"Received Audit Webhook Post at {DateTime.UtcNow.ToString("yyyy-MM-dd HH:mm:ss")}");
            try
            {
                var parsedJson = JObject.Parse(auditBody.ToString());
                JArray itemsArray = (JArray)parsedJson["items"];

                foreach (var auditEvent in itemsArray.Children())
                {
                    var timestamp = auditEvent["stageTimestamp"];
                    var level = auditEvent["level"];
                    var stage = auditEvent["stage"];
                    var requestURI = auditEvent["requestURI"];
                    var verb = auditEvent["verb"];

                    var username = "UNKNOWN";
                    var userElement = auditEvent["user"];
                    if (userElement != null)
                    {
                        var userNameElement = userElement["username"];
                        if (userNameElement != null)
                        {
                            username = userNameElement.ToString();
                        }
                    }

                    string sourceIPValue = "";
                    JArray sourceIPArray = (JArray)auditEvent["sourceIPs"];
                    if (sourceIPArray.Count == 0)
                    {
                        sourceIPValue = "UNKNOWN";
                    }
                    else if (sourceIPArray.Count == 1)
                    {
                        sourceIPValue = sourceIPArray[0].ToString();
                    }
                    else
                    {
                        var ipAddresses = new List<string>();
                        foreach (JToken ipAddress in sourceIPArray)
                        {
                            ipAddresses.Add(ipAddress.ToString());
                        }

                        sourceIPValue = String.Join(",", ipAddresses);
                    }

                    var resourceType = "UNKNOWN";
                    var resourceName = "UNKNOWN";

                    var objectRefElement = auditEvent["objectRef"];
                    if (objectRefElement != null)
                    {
                        var resourceTypeElement = objectRefElement["resource"];
                        if (resourceTypeElement != null)
                        {
                            resourceType = resourceTypeElement.ToString();
                        }

                        var resourceNameElement = objectRefElement["name"];
                        if (resourceNameElement != null)
                        {
                            resourceName = resourceNameElement.ToString();
                        }
                    }

                    var authorizationDecision = "UNKNOWN";
                    var authorizationReason = "UNKNOWN";

                    var annotationsElement = auditEvent["annotations"];
                    if (annotationsElement != null)
                    {
                        authorizationDecision = annotationsElement["authorization.k8s.io/decision"].ToString();
                        authorizationReason = annotationsElement["authorization.k8s.io/reason"].ToString();
                    }

                    // starting each actual record line with "###" so I can distinguish them in the log file from any errors
                    Console.WriteLine($"###{timestamp},{level},{stage},{requestURI},{verb},{username},{sourceIPValue},{resourceType},{resourceName},{authorizationDecision},{authorizationReason}");
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine($"UNABLE TO PROCESS RECORD DUE TO ERROR: {ex.Message} - THE FULL POST BODY WILL BE SHOWN BELOW:");
                Console.WriteLine(auditBody.ToString());
            }
        }
    }
}
	</code>
</pre><p></p><p>You can see the whole source code for the project <a href="https://github.com/carlsoncoder/kubernetes-audit-webhook">here</a>.</p><p>Basically, this will take that JSON that was shown earlier, use Newtonsoft.Json to parse it out into values I care about, and just write out a line to STDOUT.  You could build and deploy this wherever you want, but I chose to deploy it into my actual Kubernetes cluster that I'm auditing.</p><p>First, clone the repo and build the Docker image:<br><em>(This is built on Windows Server build 1803, so you'll need a server 1803 box to build it, but it's .NET core so you should be able to change it to build on Linux if you desire - just change the Dockerfile to use the appropriate tags)</em></p><pre>
	<code class="language-bash">
git clone https://github.com/carlsoncoder/kubernetes-audit-webhook.git
cd kubernetes-audit-webhook
docker build -t kubernetes-audit-image .
	</code>
</pre><p></p><p>Then, you'll want to upload this to a container registry somewhere so you can pull it down.  I chose to use Azure Container Registry:</p><pre>
	<code class="language-powershell">
docker login yourcontainerregistry.azurecr.io -u userName -p password
$auditImage = "yourcontainerregistry.azurecr.io/k8s-audit-webhook:build-1803-v1"
docker tag kubernetes-audit-image $auditImage
docker push $auditImage
	</code>
</pre><p></p><p></p><p>Now we can deploy our image into a service and deployment in Kubernetes via kubectl:</p><p>First create a namespace to put everything in:</p><pre>
	<code class="language-bash">
kubectl create namespace auditing
	</code>
</pre><p></p><p>Then, make and submit a YAML file to create the service that we'll use to access the pods:</p><pre>
	<code class="language-yaml">
apiVersion: v1
kind: Service
metadata:
  name: audit-webhook-service
  namespace: auditing
spec:
  selector:
    app: audit-webhook
  ports:
    - protocol: TCP
      port: 80
	</code>
</pre><p></p><p>Lastly, create the deployment YAML file to make the deployment and submit it via kubectl.  Note that you'll want to modify the "imagePullSecrets" to be the actual secret name that you have in your cluster to reach your container registry:</p><p>(<em>Side note - the "command" portion in the spec really shouldn't be necessary - it's exactly the same as what's defined in the ENTRYPOINT of the Dockerfile for the image.  But for some reason .NET core would fail without this...?</em>)</p><pre>
	<code class="language-yaml">
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: audit-webhook-deployment
  namespace: auditing
  labels:
    app: audit-webhook
spec:
  replicas: 1
  selector:
    matchLabels:
      app: audit-webhook
  template:
    metadata:
      labels:
        app: audit-webhook
    spec:
      containers:
      - name: audit-webhook-application
        image: yourcontainerregistry.azurecr.io/k8s-audit-webhook:build-1803-v1
        command: ["dotnet.exe", "kubernetes-audit-webhook.dll"]
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: regcred
	</code>
</pre><p></p><p>Now we have our deployment backed by a service up and running.  If you recall from the code sample, the POST endpoint is available at /api/audits.  We have a service called "audit-webhook-service" in the "auditing" namespace, so with kube-dns running in our cluster, the full address for a pod to reach our service is:</p><p><strong>http://audit-webhook-service.auditing.svc.cluster.local/api/audits</strong></p><p>We're almost there!  Now create a file called "audit-webhook-kubeconfig" in the /etc/kubernetes directory on your master node, and use vim or a text editor to put the following text in it:</p><pre>
	<code class="language-yaml">
apiVersion: v1
clusters:
- cluster:
    server: http://audit-webhook-service.auditing.svc.cluster.local/api/audits
  name: audit-webhook-service
contexts:
- context:
    cluster: audit-webhook-service
    user: ""
  name: default-context
current-context: default-context
kind: Config
preferences: {}
users: []
	</code>
</pre><p><em>NOTE: For some reason, kube-dns doesn't seem to always work - I've run into issues where the kube-apiserver logs show that the kube-apiserver cannot resolve the "audit-webhook-service.auditing.svc.cluster.local" endpoint, while the host node can.  If you run into this issue, you can just replace the FQDN host name with the IP of your "audit-webhook-service" service entry.</em></p><p>All we're really doing here is defining the endpoint.  I believe you can also use the "users" section to do some basic auth if your endpoint requires it, but I really didn't look into that.</p><p>Next we need to set the "--audit-webhook-config-file" parameter for the kube-apiserver.  In aks-engine (and pretty much all other deployments), kube-apiserver is launched by the kubelet as a static pod.  This means the config for kube-apiserver will likely be in a YAML file, somewhere in a directory like /etc/kubernetes/manifests.  Find the YAML definition, and add the following line there to the startup parameters:</p><pre>
	<code class="language-yaml">
"--audit-webhook-config-file=/etc/kubernetes/audit-webhook-kubeconfig"
	</code>
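</pre><p>After saving the manifest, a quick grep is an easy way to double-check the flag made it in (adjust the path if your manifests live somewhere other than /etc/kubernetes/manifests):</p><pre>
	<code class="language-bash">
# Should print the line you just added
grep -n "audit-webhook-config-file" /etc/kubernetes/manifests/kube-apiserver.yaml
	</code>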
</pre><p></p><p>And lastly we need to restart the kubelet service and delete the kube-apiserver pod to make the changes take effect:</p><pre>
	<code class="language-bash">
sudo systemctl restart kubelet.service
kubectl get pods -n kube-system # locate the name of the kube-apiserver pod
kubectl delete pod name-of-apiserver-pod -n kube-system
	</code>
</pre><p></p><p>Deleting the pod will force the kubelet to recreate it with our new parameters.</p><p>Since all our application does is parse the JSON and then output it to STDOUT, we can just view the kubectl logs for that pod to see that it's working:<br></p><pre>
	<code class="language-bash">
kubectl get pods -n auditing # locate the name of the audit-webhook pod
kubectl logs audit-webhook-pod-name -n auditing
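
# Add -f to stream new audit events to your terminal as they arrive
kubectl logs audit-webhook-pod-name -n auditing -f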
	</code>
</pre><p></p><p>You should see some output like this:</p><pre>
	<code class="language-bash">
###2/18/2019 6:55:56 PM,Metadata,ResponseComplete,/apis/events.k8s.io/v1beta1/events?resourceVersion=1442977&timeout=7m55s&timeoutSeconds=475&watch=true,watch,client,10.255.255.5,events,UNKNOWN,allow,
###2/18/2019 6:55:57 PM,Request,ResponseComplete,/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s,get,client,10.255.255.5,endpoints,kube-controller-manager,allow,
###2/18/2019 6:55:57 PM,Request,ResponseComplete,/api/v1/namespaces/kube-system/endpoints/kube-scheduler?timeout=10s,get,client,10.255.255.5,endpoints,kube-scheduler,allow,
###2/18/2019 6:55:58 PM,Metadata,ResponseComplete,/apis/policy/v1beta1/poddisruptionbudgets?resourceVersion=1095915&timeout=9m38s&timeoutSeconds=578&watch=true,watch,client,10.255.255.5,poddisruptionbudgets,UNKNOWN,allow,
###2/18/2019 6:55:59 PM,Request,ResponseComplete,/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s,get,client,10.255.255.5,endpoints,kube-controller-manager,allow,
	</code>
</pre><p></p><p>And that's all you need to get it going!  So what are some other things that could be done to further improve this process?</p><ul><li>Configure the webhook service to run with TLS/SSL instead of just plain HTTP over port 80</li><li>Use credentials or a certificate to authenticate to your web endpoint prior to POSTing</li><li>Integrate one of the Kubernetes client libraries so you can cast the JSON object directly to an object spec instead of doing all the JSON parsing yourself</li><li>Have your pod image post the data to OMS instead of writing to STDOUT</li><li>Other fun things!</li></ul><p>There are so many cool things you can do with Kubernetes.  It's really exciting every time I find out about some new feature like this and figure out how to make it work.  I hope you enjoyed the walkthrough, and feel free to reach out if you have any questions!</p><p>-Justin</p><p><br></p>]]></content:encoded></item><item><title><![CDATA[Obtain raw Kubernetes metrics with kubectl]]></title><description><![CDATA[Did you know that in most deployments of Kubernetes, there's a default set of performance metrics already being captured for you?   Let's see how to use kubectl to get at that metric data]]></description><link>https://talkcloudlytome.com/raw-kubernetes-metrics-with-kubectl/</link><guid isPermaLink="false">5c28f25fef5cbf0ceefe4c7d</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Metrics]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Tue, 15 Jan 2019 15:56:00 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2018/12/node-metrics-output.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2018/12/node-metrics-output.png" alt="Obtain raw Kubernetes metrics with kubectl"><p>Recently I was trying to find out the best metrics reporting and graphing solution for a hybrid Linux/Windows Kubernetes cluster.  Unfortunately, the support for Windows in many great open source tools still leaves something to be desired.</p><p>I'm still playing around with different options and will eventually settle on something (which will probably be its own blog post), but I wanted to share something I found while digging around that's pretty neat in its own right.</p><p>Most Kubernetes deployments have the "metrics-server" pod running.  Check for it yourself with this command:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-8"><code class="language-bash">kubectl get pods -n kube-system | grep metrics-server
kube-system          metrics-server-67b4964794-l7z5q                         1/1       Running   0          24d
</code></pre><p></p><p>If you see this pod running (it will have a different identifier at the end), then you'll be able to use kubectl to query the raw metrics for either your nodes or your pods:<br></p><pre>
	<code class="language-bash">
# Get the metrics for all nodes
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
        
# Get the metrics for all pods
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
	</code>
</pre><p></p><p>If you run this, you'll get raw, unformatted JSON back, which looks terrible in a terminal.  We can use a pretty cool utility, <a href="https://stedolan.github.io/jq/">jq</a>, to parse it out.  Run the same commands but pipe the output through jq:</p><pre>
	<code class="language-bash">
# Get the metrics for all nodes formatted through jq
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq '.'
        
# Get the metrics for all pods formatted through jq
kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods | jq '.'
	</code>
</pre><p></p><p>This will give you nicer output, such as this for the node metrics:</p><pre>
	<code class="language-json">
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes"
  },
  "items": [
    {
      "metadata": {
        "name": "k8s-master-13487264-1",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s-master-13487264-1",
        "creationTimestamp": "2018-12-31T20:15:19Z"
      },
      "timestamp": "2018-12-31T20:15:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "265m",
        "memory": "2684280Ki"
      }
    },
    {
      "metadata": {
        "name": "k8s-master-13487264-2",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/nodes/k8s-master-13487264-2",
        "creationTimestamp": "2018-12-31T20:15:19Z"
      },
      "timestamp": "2018-12-31T20:15:00Z",
      "window": "1m0s",
      "usage": {
        "cpu": "189m",
        "memory": "2663640Ki"
      }
    }
  ]
}
	</code>
</pre><p></p><p>Or this for the pod metrics:</p><pre>
	<code class="language-json">
{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "selfLink": "/apis/metrics.k8s.io/v1beta1/pods"
  },
  "items": [
      "metadata": {
        "name": "webportal-deployment-79785448db-dnvcq",
        "namespace": "mhwi285-production",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/mhwi285-production/pods/webportal-deployment-79785448db-dnvcq",
        "creationTimestamp": "2018-12-31T20:16:38Z"
      },
      "timestamp": "2018-12-31T20:16:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "webportal-application",
          "usage": {
            "cpu": "0",
            "memory": "270908Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "heapster-f4fbb999d-b8k6c",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/heapster-f4fbb999d-b8k6c",
        "creationTimestamp": "2018-12-31T20:16:38Z"
      },
      "timestamp": "2018-12-31T20:16:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "heapster",
          "usage": {
            "cpu": "0",
            "memory": "27724Ki"
          }
        },
        {
          "name": "heapster-nanny",
          "usage": {
            "cpu": "0",
            "memory": "10908Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "kube-apiserver-k8s-master-13487264-2",
        "namespace": "kube-system",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-apiserver-k8s-master-13487264-2",
        "creationTimestamp": "2018-12-31T20:16:38Z"
      },
      "timestamp": "2018-12-31T20:16:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "kube-apiserver",
          "usage": {
            "cpu": "20m",
            "memory": "679700Ki"
          }
        }
      ]
    },
    {
      "metadata": {
        "name": "webportal-deployment-79785448db-6vtbt",
        "namespace": "mhwi285-production",
        "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/mhwi285-production/pods/webportal-deployment-79785448db-6vtbt",
        "creationTimestamp": "2018-12-31T20:16:38Z"
      },
      "timestamp": "2018-12-31T20:16:00Z",
      "window": "1m0s",
      "containers": [
        {
          "name": "webportal-application",
          "usage": {
            "cpu": "0",
            "memory": "395576Ki"
          }
        }
      ]
    }    
  ]
}
	</code>
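</pre><p>As an aside, this same Metrics API is what backs the "kubectl top" convenience commands, so if you just want a quick human-readable summary (and your kubectl version includes the command), you can skip the raw queries entirely:</p><pre>
	<code class="language-bash">
# Summarized CPU/memory usage per node
kubectl top nodes

# Summarized CPU/memory usage per pod, across all namespaces
kubectl top pods --all-namespaces
	</code>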
</pre><p></p><p>You'll see some basic metric data come back - for the nodes you get the node name, the timestamp for when the metrics were gathered, and the CPU and memory usage of the node.  For the pods, you get the pod name, namespace, the timestamp the metrics were created, as well as the name, CPU usage, and memory usage for each container inside the pod.</p><p>It's important to note that historical values aren't persisted anywhere.  The output you get from this command just shows what the data was the last time it was collected. If you want to see historical trends of this data, you'll need to store the output yourself or use some other open source tool to get at it.</p><p>Lastly - just to show off how cool that jq tool is, here are some samples of how you can pull just specific bits of data out of the JSON and format it the way you want:</p><p>Here we get just the name, CPU, and memory usage of each node:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-100"><code class="language-bash">kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes \
| jq '[.items [] | {nodeName: .metadata.name, nodeCpu: .usage.cpu, nodeMemory: .usage.memory}]'
[
  {
    "nodeName": "k8s-master-13487264-1",
    "nodeCpu": "210m",
    "nodeMemory": "2491580Ki"
  },
  {
    "nodeName": "k8s-master-13487264-2",
    "nodeCpu": "157m",
    "nodeMemory": "2465016Ki"
  },
  {
    "nodeName": "k8s-master-13487264-0",
    "nodeCpu": "137m",
    "nodeMemory": "3352384Ki"
  },
  {
    "nodeName": "1348k8s002",
    "nodeCpu": "372m",
    "nodeMemory": "1363444Ki"
  },
  {
    "nodeName": "1348k8s001",
    "nodeCpu": "242m",
    "nodeMemory": "1156788Ki"
  },
  {
    "nodeName": "1348k8s000",
    "nodeCpu": "615m",
    "nodeMemory": "1512472Ki"
  }
]
</code></pre><p></p><p>And here we can get the name and namespace of each pod, along with each container in the pod, with its CPU and memory usage:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-100"><code class="language-bash">kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods  \
| jq '[.items [] | {podName: .metadata.name, podNamespace: .metadata.namespace, containers: [.containers[] | {name: .name, cpu: .usage.cpu, memory: .usage.memory}]}]'

[
  {
    "podName": "kube-addon-manager-k8s-master-13487264-0",
    "podNamespace": "kube-system",
    "containers": [
      {
        "name": "kube-addon-manager",
        "cpu": "1m",
        "memory": "262564Ki"
      }
    ]
  },
  {
    "podName": "kubernetes-metrics-reader-deployment-78954dbf7b-llt7b",
    "podNamespace": "kube-system",
    "containers": [
      {
        "name": "kubernetes-metrics-reader",
        "cpu": "0",
        "memory": "52132Ki"
      }
    ]
  },
  {
    "podName": "kube-addon-manager-k8s-master-13487264-1",
    "podNamespace": "kube-system",
    "containers": [
      {
        "name": "kube-addon-manager",
        "cpu": "16m",
        "memory": "454192Ki"
      }
    ]
  }
]
</code></pre><p></p><h2 id="summary-">Summary:</h2><p>While you probably will want to use some well-known open source tools to actually track your metrics (<a href="https://prometheus.io/">Prometheus</a> and <a href="https://grafana.com/">Grafana</a> are two pretty good ones), this is a quick and dirty way to get at some core metrics to see how your pods and nodes are performing.</p>]]></content:encoded></item><item><title><![CDATA[Understanding Azure container offerings]]></title><description><![CDATA[It can be challenging at first to understand all the different container and container orchestration offerings available in the Azure cloud.  In this post I'll cover all the different options you have available]]></description><link>https://talkcloudlytome.com/understanding-azure-container-offerings/</link><guid isPermaLink="false">5c258e6a462a6603c0c0fd49</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Containers]]></category><category><![CDATA[Azure]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Tue, 01 Jan 2019 06:30:00 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2018/12/azure-container-image-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2018/12/azure-container-image-1.png" alt="Understanding Azure container offerings"><p>The Microsoft Azure platform offers several different solutions for container management as well as container orchestration:</p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2018/12/azure-container-offerings.png" class="kg-image" alt="Understanding Azure container offerings"><figcaption>What should I pick??</figcaption></figure><p>We'll go through each of the different services, and provide some detail about what they're used for, and also note what you probably shouldn't use due to planned sunsetting.</p><h2 id="what-are-the-different-options">What are the different options?<br></h2><ul><li>Azure Container Instances - Also referred to as "ACI" or "Container instances"</li><li>Azure Container Registry - Also referred to as "ACR" or "Container registry"</li><li>Azure Container Services - Also referred to as "ACS" or "Container services"</li><li>acs-engine - An open source project used to generate ARM templates to deploy Azure resources with orchestrator binaries already configured</li><li>Azure Kubernetes Services - Also referred to as "AKS" or "Kubernetes services"</li><li>aks-engine - An open source project used to generate ARM templates to deploy Azure resources with Kubernetes binaries already configured</li><li>Virtual kubelet - An experimental open source project used in conjunction with AKS and ACI</li></ul><h2 id="azure-container-instances-">Azure Container Instances:</h2><p><a href="https://azure.microsoft.com/en-us/services/container-instances/">Azure Container Instances</a> can be thought of as "container-infrastructure-as-a-service".  With ACI, you don't really have to worry about what container runtime you're dealing with (docker, rkt, etc.), or what container orchestrator (Kubernetes, DC/OS, etc.) is running them.   All you do is supply a container image and the specs (CPU/RAM/etc.) you need, and Azure takes care of the rest for you.</p><p>One upside to this is it's extremely easy to get started hosting a container in the cloud.  
However, digging in further, you can see this really is built more for one-off types of workloads, not for something more complex where you need the control.  For example, in a Kubernetes environment, you can easily specify that you need X instances of a container image to run.  However, with ACI, you only get one container in a container group, and to "scale up", you either need to scale vertically by increasing the size (and cost) of your container group, or add more container groups with the same image.</p><p>Azure container instances would probably not be the go-to if you have a full-scale distributed application that you wanted to host.  However, if you have something like an infrequent batch processing job, or something that is triggered by an event in, say, an Azure queue or Azure function, ACI might be more useful to you there.</p><p><em>NOTE: ACI gets more interesting when used in conjunction with AKS and the Virtual Kubelet, which we'll talk about later in the article.</em></p><h2 id="azure-container-registry-">Azure Container Registry:</h2><p><a href="https://azure.microsoft.com/en-us/services/container-registry/">Azure Container Registry</a> is pretty much what it sounds like: a container registry to store your Docker images in, similar to the Docker Hub.  If you just need something simple to store your images, and you don't care if they're private or not, it's probably easier to just use the Docker Hub.  However, you only get one private repository on the Docker Hub, unless you start paying for their upgraded version.</p><p>One benefit of using ACR is you get your images very close to the data center where you're going to deploy them.  If you're using ACI/ACS/AKS to deploy your containers/clusters, you can have a geo-replicated container registry that's in the same datacenter as your VMs.  This would result in faster image download times when you spin up new containers or add new nodes to your cluster.</p><h2 id="azure-container-services">Azure Container Services</h2><p><a href="https://docs.microsoft.com/en-us/azure/container-service/">Azure Container Service</a> (ACS) is an offering that will deploy a container orchestrator system for you on Azure resources.  You can spin up a Kubernetes, DC/OS, or Docker Swarm cluster with this option.</p><p>This spins up a fully functional and deployed cluster.  For example, if we look at the Kubernetes option, it will deploy the master nodes, as well as the worker nodes.  The worker nodes can be split into multiple "pools", and can run either Windows or Linux OS's.  However, Kubernetes itself still doesn't officially support Windows containers, so you may run into issues with that deployment.  You should also note that YOU are responsible for maintaining both the master and worker nodes in this environment.</p><p>This is a great option to quickly get a fully-functional Kubernetes cluster up and running in Azure.  It's also your only option if you want to get a DC/OS or Docker Swarm cluster running in Azure.</p><p>However - you should NOT use this moving forward.  
In December 2018, <a href="https://azure.microsoft.com/en-us/updates/azure-container-service-will-retire-on-january-31-2020/">Microsoft announced</a> that they would be ending support for ACS on January 31, 2020.</p><h2 id="acs-engine-">acs-engine:</h2><p><a href="https://github.com/Azure/acs-engine">acs-engine</a> is an open source tool that is used to generate Azure ARM templates to deploy clusters on Azure VMs, using either Kubernetes, DC/OS, OpenShift, Docker Swarm, or Swarm orchestrators.  It gives you a lot of variety and options in how you want to configure your cluster and resources.  In fact, the ACS offering actually utilizes the acs-engine tool behind the scenes.</p><p>Getting started is pretty easy - first we generate a resource group and create a service principal with access to that resource group:<br></p><pre>
	<code class="language-powershell">
# Fill in your variables:
$tenantId = 'Your-Azure-Tenant-ID'
$subscriptionName = 'Your-Subscription-Name'
$resourceGroupName = 'Name-of-resource-group-to-create'
$resourceGroupLocation = 'Azure-location'
$password = 'Your-password'
$applicationName = 'kubernetes-acs-engine-ABC'
$homePage = "http://{0}/{1}" -f $tenantId,$applicationName
$identifierUri = $homePage

# Login
Login-AzureRmAccount

# Select your desired subscription
Select-AzureRmSubscription -SubscriptionName $subscriptionName

# Create the resource group
New-AzureRmResourceGroup -Name $resourceGroupName -Location $resourceGroupLocation

# create secure password
$securePassword = $password | ConvertTo-SecureString -AsPlainText -Force

# Create the Azure AAD application
$aadApplication = New-AzureRmADApplication -DisplayName $applicationName -HomePage $homePage -IdentifierUris $identifierUri -Password $securePassword -Verbose

# Create the SPN and sleep for a while to ensure SPN propagates through AAD
$servicePrincipal = New-AzureRmADServicePrincipal -ApplicationId $aadApplication.ApplicationId
Start-Sleep 30

# Assign contributor access for the AAD application to your resource group
New-AzureRmRoleAssignment -ResourceGroupName $resourceGroupName -ObjectId $servicePrincipal.Id -RoleDefinitionName Contributor

# Output these values and note them down - you will need them later
Write-Host "clientId: $aadApplication.ApplicationId"
Write-Host "secret: $password"
	</code>
</pre><p>Once we have this, we can create our cluster definition JSON file.  We can start with something simple - this will deploy a Kubernetes Linux cluster on version 1.12 with 1 master node and 3 worker nodes, all using the "Standard_D2_V2" VM image (<em>make sure you fill in your dnsPrefix, ssh public key, clientId, and secret):</em></p><pre>
	<code class="language-json">
{
  "apiVersion": "vlabs",
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "orchestratorRelease": "1.12"
    },
    "masterProfile": {
      "count": 1,
      "dnsPrefix": "your-custom-dns-prefix",
      "vmSize": "Standard_D2_v2"
    },
    "agentPoolProfiles": [
      {
        "name": "linuxagentpool1",
        "count": 3,
        "vmSize": "Standard_D2_v2",
        "availabilityProfile": "AvailabilitySet"
      }
    ],
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
          {
            "keyData": "your-ssh-public-key-data"
          }
        ]
      }
    },
    "servicePrincipalProfile": {
      "clientId": "clientId from powershell output",
      "secret": "secret from powershell output"
    }
  }
}
	</code>
</pre><p>At this point it's just a few simple commands to generate the ARM template from the JSON file and deploy it to your resource group:<br></p><pre>
	<code class="language-powershell">
    $dnsPrefix = 'your-custom-dns-prefix'
    $resourceGroup = 'your-resource-group-name'
    acs-engine.exe generate path-to-cluster-definition.json
    
    $templateFile = '_output\{0}\azuredeploy.json' -f $dnsPrefix
    $parametersFile = '_output\{0}\azuredeploy.parameters.json' -f $dnsPrefix
    
    New-AzureRmResourceGroupDeployment -Name 'your-deployment-name' -ResourceGroupName $resourceGroup -TemplateFile $templateFile -TemplateParameterFile $parametersFile
	</code>
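</pre><p>Once the deployment completes, acs-engine will have generated kubeconfig files under the _output directory (one per Azure region - the exact file name below is based on my own runs, so verify it against your generated output).  Point kubectl at the right one to confirm your cluster is up:</p><pre>
	<code class="language-bash">
# Path follows the pattern _output/DNS_PREFIX/kubeconfig/kubeconfig.REGION.json
kubectl get nodes --kubeconfig _output/your-custom-dns-prefix/kubeconfig/kubeconfig.eastus.json
	</code>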
</pre><p>Wait a while (it usually takes 20-30 minutes to fully deploy), and you'll have a complete Kubernetes cluster up and running!</p><p>The only real differences between ACS and acs-engine are:</p><ul><li>acs-engine gives you much more flexibility</li><li>ACS actually creates an "ACS" resource in the Azure portal, but all you can do with it is scale your node count up or down.</li></ul><p><em>NOTE: While ACS is being de-supported in 2020, a cluster created by acs-engine can't really be de-supported since it's just creating some Azure resources for you.  However, Microsoft has effectively killed the acs-engine project at this point.  All future development, features, and bug fixes will instead be done in the aks-engine project, which we will talk about later.</em></p><h2 id="azure-kubernetes-services-">Azure Kubernetes Services:</h2><p><a href="https://azure.microsoft.com/en-us/services/kubernetes-service/">Azure Kubernetes Service</a> (AKS) is the next evolution from ACS.  There are two major differences.  First of all, unlike ACS, which supported Kubernetes, DC/OS, and Swarm, AKS ONLY supports Kubernetes as an orchestrator.  Additionally, AKS abstracts away the Kubernetes control plane ("master nodes") from you.  All setup, configuration, security, maintenance, etc. of the master nodes is handled for you - you don't even have access to SSH into them yourself.  You are, however, still responsible for maintaining your worker nodes.</p><p>Also, at the time of this writing, ONLY Linux worker nodes are supported.  You are not able to deploy Windows worker nodes into an AKS cluster.  Hopefully Microsoft will add this support once Kubernetes officially supports Windows nodes, which is <em>currently</em> planned for Kubernetes version 1.14 - see GitHub issue <a href="https://github.com/kubernetes/enhancements/issues/116">here</a>.</p><p>As previously noted, Microsoft is ending support for ACS in January 2020, so you should plan to move to this (or use the acs-engine/aks-engine options) before then.</p><h2 id="aks-engine">aks-engine</h2><p><a href="https://github.com/Azure/aks-engine">aks-engine</a> is the evolution of acs-engine.  From my evaluation, it looks like they just cloned the acs-engine repo and made some very minor changes.  The most important change is they've removed support for all the other orchestrators besides Kubernetes.</p><p>If you want to get started using it, you can literally use the exact same sample I provided up above in the acs-engine section, just replacing the call to "acs-engine.exe" with "aks-engine.exe".  This is the open source tool Microsoft will be actively developing, so if you're currently using acs-engine to generate your templates, you should move to this ASAP.</p><p>I'm not exactly sure how the AKS managed service uses this tool.  It would be neat if you could use aks-engine to deploy a managed system with the control plane managed for you, but I'm not sure of a way to do that with aks-engine.  Perhaps Microsoft will add that in the future.</p><h2 id="virtual-kubelet">Virtual Kubelet</h2><p>The <a href="https://github.com/virtual-kubelet/virtual-kubelet">virtual kubelet</a> is a pretty cool open source project.  It is essentially a virtual implementation of the kubelet, the main node agent that runs on every Kubernetes node.  In theory, you can use this to mesh in VMs from many different providers and have them "act" like an available node in Kubernetes.   
The example Microsoft provides is using them for "burst" workloads in conjunction with an AKS cluster.</p><p>When you install the virtual kubelet into your cluster, it shows up as a node in 'kubectl get nodes', and you can schedule pods onto it by providing a specific nodeSelector and tolerations in your YAML files.  This uses Azure Container Instances (ACI) behind the scenes to spin up your containers.  More specific details on how to hook it up with AKS can be found <a href="https://docs.microsoft.com/en-us/azure/aks/virtual-kubelet">here</a>.</p><p>One thing to note, though, directly from the GitHub page:<br><em>"Please note this software is experimental and should not be used for anything resembling a production workload."</em></p><p>So while it might be neat to play around with and see how you could use it, you should probably hold off on using it in production.</p><h2 id="summary-recommendations-">Summary/Recommendations:</h2><ul><li>If you just want to run a few containers and don't need a full-scale orchestrator - use <strong>Azure Container Instances (ACI)</strong></li><li>If you want to run strictly Linux containers and don't need to stray too far from the defaults provided - use <strong>Azure Kubernetes Service (AKS)</strong></li><li>If you need to run Windows containers or need a higher level of control over what your cluster configuration looks like - use <strong>aks-engine</strong></li></ul><p>Hopefully this helps you understand some of the Azure offerings a bit better - feel free to reach out if you have any questions!<br></p>]]></content:encoded></item><item><title><![CDATA[Thoughts on the Kubernetes CKA and CKAD certifications]]></title><description><![CDATA[Recently I took both the Certified Kubernetes Administrator (CKA) and Certified Kubernetes Application Developer (CKAD) exams and am proud to say I passed both of them.  I wanted to share some of my thoughts on the certification process and what my training consisted of.]]></description><link>https://talkcloudlytome.com/thoughts-on-the-kubernetes-cka-and-ckad-certifications/</link><guid isPermaLink="false">5c1da4aac3bde103e0de9272</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[Certifications]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Sat, 22 Dec 2018 04:07:54 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2018/12/cka-ckad-combined.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2018/12/cka-ckad-combined.png" alt="Thoughts on the Kubernetes CKA and CKAD certifications"><p>Earlier this month, I passed both the exams for the <a href="https://www.cncf.io/certification/cka/">Certified Kubernetes Administrator (CKA) </a>and the <a href="https://www.cncf.io/certification/ckad/">Certified Kubernetes Application Developer (CKAD)</a> certifications offered by the Cloud Native Computing Foundation.</p><p>The <a href="https://www.cncf.io/">Cloud Native Computing Foundation</a> is the host of the Kubernetes open source project, as well as many other cloud-first open source technologies:</p><blockquote><em>CNCF is an open source software foundation dedicated to making cloud native computing universal and sustainable. Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. 
Cloud native technologies enable software developers to build great products faster.</em></blockquote><p>These are both pretty challenging exams.  There is no multiple choice or fill-in-the-blank here.  You essentially get a terminal and have to actually perform real operations and fix real problems on a real Kubernetes cluster.   You are allowed to open one tab in your browser to reference the docs at <a href="https://kubernetes.io/">https://kubernetes.io/</a>, which has some of the best documentation out there!   However, don't think you can skate by with just some base knowledge and referring to the documentation.  You only get 3 hours for the CKA exam and 2 hours for the CKAD exam, which means you have at most 5-10 minutes per question!</p><p>The certifications really do complement each other, and which one you can/should take will depend on your role and how you use Kubernetes.  The CKA exam focuses more on what an actual cluster administrator would do, such as actually installing/configuring a cluster, setting up networking, setting up storage, etc.  Someone in primarily a DevOps type of role would benefit most from this.  The CKAD exam, on the other hand, is geared more towards a developer who is building their applications to be Kubernetes-friendly, and consuming the cluster resources made available by the administrator.  My opinion is if you study enough to pass the CKA exam, you will probably already know most of the content on the CKAD, so if you want to take them both I would suggest taking the CKA first, followed by the CKAD.</p><p>You have to agree to an NDA when taking the exam that you won't discuss the content, so I can't go into specifics as to what the questions were like.  However, I will say that you definitely are getting your skills tested well.  You REALLY need to know your stuff!  Fortunately the curriculum they provide does a really good job of outlining the content you need to know, so you really shouldn't be caught off guard with any of the questions.</p><p>Time management is also hugely important.  I only had a small number of questions that sort of stumped me where I had to refer to the docs, and I ran out of time with 1-2 questions unanswered!  I learned from the first exam, and modified my approach with the second one.   You get to use a "notepad" type tool in the exam window to take notes.  I went through each question and if I wasn't 100% sure what approach to take, I would simply skip it and make a note in the notepad to come back to it.  Once I had burned through the ones I was confident on, I would go back to the ones I skipped.   It's also important to know that the questions are weighted, so some are more valuable than others.  As such, when going through the ones I skipped, I would start with the ones with a higher weight first.</p><h2 id="training-resources">Training Resources</h2><p>There are quite a few good resources out there that can help get you up to speed on Kubernetes:</p><ul>
<li><a href="https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x">Introduction to Kubernetes (edX online course - FREE)</a></li>
<li><a href="https://training.linuxfoundation.org/linux-courses/system-administration-training/kubernetes-fundamentals">LFS258 - Kubernetes Fundamentals (CNCF online course)</a>
<ul>
<li>This course is mapped to the CKA exam objectives</li>
<li>This is a paid course - it's $299, but if you order it and the exam ($299) at the same time, you can get the package for $499 and save some money</li>
</ul>
</li>
<li><a href="https://training.linuxfoundation.org/training/kubernetes-for-developers/">LFD259 - Kubernetes for Developers (CNCF online course)</a>
<ul>
<li>This course is mapped to the CKAD exam objectives</li>
<li>Same note on the cost as noted above for CKA course</li>
</ul>
</li>
</ul>
<p>Additionally, <a href="https://kubernetes.io">kubernetes.io</a> has a ton of great material, specifically in the <a href="https://kubernetes.io/docs/tutorials/">tutorials</a> section.</p><p>
    I would also strongly recommend picking up a copy of "Kubernetes In Action" by Marko Luk&#353;a, reading it cover to cover, and going through every code example and tutorial:
</p><p> </p><figure class="kg-card kg-image-card"><img src="https://talkcloudlytome.com/content/images/2018/12/image-3.png" class="kg-image" alt="Thoughts on the Kubernetes CKA and CKAD certifications"><figcaption>Source: https://www.manning.com/books/kubernetes-in-action</figcaption></figure><p>Seriously, if you want to master Kubernetes, go out and buy this book now.  It's currently $56.99 on <a href="https://www.amazon.com/Kubernetes-Action-Marko-Luksa/dp/1617293725">Amazon</a>, but it's worth every single penny.</p><p>And lastly, go through Kelsey Hightower's <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way">Kubernetes the Hard Way</a> GitHub repo.  This is a step by step walkthrough on how to setup a cluster from scratch - no setup tools, no scripts, just a few bare VM's and you doing every single step of the process.  I learned so much from doing that all the way through 5-6 times, and I honestly think I might not have passed the exams without the knowledge I gained from that experience.</p><h2 id="summary">Summary</h2><p>Overall, these certifications were definitely challenging, but I feel they were completely worth it.  If you're looking to prove your Kubernetes knowledge, start studying and good luck with your exam!</p>]]></content:encoded></item><item><title><![CDATA[Troubleshooting Kubernetes Master Nodes]]></title><description><![CDATA[Kubernetes can be really easy to start working with, but can be hard to track down when things go wrong.

Let's dig into some troubleshooting steps to determine the state of the master node components and how we can view the logs and figure out what's going on.]]></description><link>https://talkcloudlytome.com/troubleshooting-kubernetes-master-nodes/</link><guid isPermaLink="false">5c1945922120474876bc05f9</guid><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Justin Carlson]]></dc:creator><pubDate>Wed, 19 Dec 2018 14:53:29 GMT</pubDate><media:content url="https://talkcloudlytome.com/content/images/2018/12/kubernetes-master-node.png" medium="image"/><content:encoded><![CDATA[<img src="https://talkcloudlytome.com/content/images/2018/12/kubernetes-master-node.png" alt="Troubleshooting Kubernetes Master Nodes"><p>Some distributions of Kubernetes hide the master nodes away from you so you don't need to worry about them.  Some examples of this are Azure AKS or Google Kubernetes Engine.  In those instances, you're paying for the vendor to manage the master nodes for you, so there's no need for you to monitor or troubleshoot them.</p><p>However, in some instances you will be responsible for the master nodes.  For example, if you've deployed a cluster using acs-engine, or built your own kubernetes cluster from scratch.  This post will cover how to troubleshoot the main components of the master nodes.</p><h2 id="what-are-the-different-components-on-the-master-nodes-and-what-do-they-do-">What are the different components on the master nodes and what do they do:</h2><ul>
<li>kubelet
<ul>
<li>The kubelet is one of the main kubernetes components.  This is responsible for reporting node ready status and actually interacting with the container runtime to execute pods</li>
</ul>
</li>
<li>etcd
<ul>
<li>etcd is the key-value store that stores the state of the Kubernetes cluster.  The kube-apiserver uses this to read/write and store its data</li>
</ul>
</li>
<li>kube-apiserver
<ul>
<li>This is the API web server that kubectl and other kubernetes components interact with</li>
</ul>
</li>
<li>kube-controller-manager
<ul>
<li>This component is used by Kubernetes to ensure the system converges to the desired state. For example, if you request 3 pods of a certain type, and one pod fails, the controller-manager will ensure another pod is started up</li>
</ul>
</li>
<li>kube-scheduler
<ul>
<li>This component is used by kubernetes to determine the appropriate node(s) to deploy pods to</li>
</ul>
</li>
</ul>
<p>There are a few reasons you may notice something is wrong with your master nodes.  Some possible scenarios that could occur if one or more components are down are:</p><ul><li>Unable to access kube-apiserver via kubectl</li><li>Pods are not getting deployed to nodes</li><li>Deployments are not creating an appropriate number of pods</li><li>One or more nodes are not showing as Ready via kubectl</li></ul><p>It's important to keep in mind that there are multiple different ways that a master node can be deployed.  However, the most common setup you'll see today has a kubelet managed via the systemd startup process, with all the other components launched from static pod manifests so they actually run as containers themselves.   That configuration is what I'll talk about below:</p><h2 id="checking-node-status">Checking node status</h2><p>First of all you'll want to check the node status - you can do that via kubectl:</p><pre>
	<code class="language-bash">
# Get the status for all known nodes
kubectl get nodes
        
# Or get detailed information for a specific node
kubectl describe node NODE_NAME
	</code>
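</pre><p>If you have a lot of nodes and just want the Ready condition at a glance, a jsonpath query can trim the output down (this is purely a formatting convenience - the data is the same as above):</p><pre>
	<code class="language-bash">
# Print each node name and its Ready condition status
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	</code>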
</pre><p>Here's some sample output you might get - while some components could fail and still leave the node in a "Ready" state, if you see one as "NotReady", there's definitely a problem to look into.</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-8"><code class="language-bash">kubectl get nodes
NAME                    STATUS    ROLES     AGE       VERSION
1348k8s000              Ready     agent     11d       v1.12.2
1348k8s001              Ready     agent     11d       v1.12.2
1348k8s002              Ready     agent     11d       v1.12.2
k8s-master-13487264-0   Ready     master    11d       v1.12.2
k8s-master-13487264-1   NotReady  master    11d       v1.12.2
k8s-master-13487264-2   Ready     master    11d       v1.12.2</code></pre><p>Digging in to a specific node can give you some more information - you're most interested in the "Conditions" and "Events" output of this command:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-12"><code class="language-bash">kubectl describe node k8s-master-13487264-1
...
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 19 Dec 2018 07:53:01 -0600   Fri, 07 Dec 2018 08:13:33 -0600   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 19 Dec 2018 07:53:01 -0600   Fri, 07 Dec 2018 08:13:33 -0600   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 19 Dec 2018 07:53:01 -0600   Fri, 07 Dec 2018 08:13:33 -0600   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 19 Dec 2018 07:53:01 -0600   Fri, 07 Dec 2018 08:13:33 -0600   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 19 Dec 2018 07:53:01 -0600   Fri, 07 Dec 2018 08:13:33 -0600   KubeletReady                 kubelet is posting ready status. AppArmor enabled
...
Events:         none
</code></pre><p></p><h2 id="checking-the-status-of-the-kubelet">Checking the status of the kubelet</h2><p>The kubelet will almost always be deployed as a systemd process.  That means we can use the systemctl commands to see the state:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-8"><code class="language-bash">systemctl status kubelet.service
● kubelet.service - Kubelet
   Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-12-18 20:17:39 UTC; 17h ago
</code></pre><p>The most important thing is that you see the status of "loaded" and "active (running)" in the output.  If you don't see that, you can always try to stop/restart/start the service with the following systemctl commands:</p><pre>
	<code class="language-bash">
# Start the service
systemctl start kubelet.service

# Stop the service
systemctl stop kubelet.service

# Restart the service
systemctl restart kubelet.service
	</code>
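</pre><p>Two other quick checks that can save you a trip through the full status output - whether the unit is currently running, and whether it's set to start at boot:</p><pre>
	<code class="language-bash">
# Prints "active" if the kubelet is currently running
systemctl is-active kubelet.service

# Prints "enabled" if the kubelet will start on boot
systemctl is-enabled kubelet.service
	</code>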
</pre><p>To check the configuration values passed to the kubelet, you'll want to note the .service file it's using.  This was noted in the following line from the systemctl status output you ran earlier:</p><p><em>Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: enabled)</em></p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-30"><code class="language-bash">cat /etc/systemd/system/kubelet.service
[Unit]
Description=Kubelet
ConditionPathExists=/usr/local/bin/kubelet
Requires=kms.service

[Service]
Restart=always
EnvironmentFile=/etc/default/kubelet
SuccessExitStatus=143
ExecStartPre=/bin/bash /opt/azure/containers/kubelet.sh
ExecStartPre=/bin/mkdir -p /var/lib/kubelet
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/bin/bash -c "if [ $(mount | grep \"/var/lib/kubelet\" | wc -l) -le 0 ] ; then /bin/mount --bind /var/lib/kubelet /var/lib/kubelet ; fi"
ExecStartPre=/bin/mount --make-shared /var/lib/kubelet
# This is a partial workaround to this upstream Kubernetes issue:
#  https://github.com/kubernetes/kubernetes/issues/41916#issuecomment-312428731
ExecStartPre=/sbin/sysctl -w net.ipv4.tcp_retries2=8
ExecStartPre=-/sbin/ebtables -t nat --list
ExecStartPre=-/sbin/iptables -t nat --list
ExecStart=/usr/local/bin/kubelet \
        --enable-server \
        --node-labels="${KUBELET_NODE_LABELS}" \
        --v=2 \
        --volume-plugin-dir=/etc/kubernetes/volumeplugins \
        $KUBELET_CONFIG $KUBELET_OPTS \
        $KUBELET_REGISTER_NODE $KUBELET_REGISTER_WITH_TAINTS

[Install]
WantedBy=multi-user.target
</code></pre><p>Down in the "ExecStart" you'll see what parameters are being used for the kubelet, and you'll notice there's also an "EnvironmentFile".  This is what's likely to have the configuration values we need, so let's take a look at that:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="2-8"><code class="language-bash">cat /etc/default/kubelet

KUBELET_OPTS=

KUBELET_CONFIG=--address=0.0.0.0 --allow-privileged=true --anonymous-auth=false --authorization-mode=Webhook --azure-container-registry-config=/etc/kubernetes/azure.json --cgroups-per-qos=true --client-ca-file=/etc/kubernetes/certs/ca.crt --cloud-config=/etc/kubernetes/azure.json --cloud-provider=azure --cluster-dns=10.0.0.10 --cluster-domain=cluster.local --enforce-node-allocatable=pods --event-qps=0 --feature-gates=PodPriority=true --image-gc-high-threshold=85 --image-gc-low-threshold=80 --image-pull-progress-deadline=30m --keep-terminated-pod-volumes=false --kubeconfig=/var/lib/kubelet/kubeconfig --max-pods=30 --network-plugin=cni --node-status-update-frequency=10s --non-masquerade-cidr=0.0.0.0 --pod-infra-container-image=k8s.gcr.io/pause-amd64:3.1 --pod-manifest-path=/etc/kubernetes/manifests --pod-max-pids=100
KUBELET_IMAGE=k8s.gcr.io/hyperkube-amd64:v1.12.2
KUBELET_NODE_LABELS=kubernetes.io/role=master,node-role.kubernetes.io/master=,kubernetes.azure.com/cluster=jdc-k8s-poc
</code></pre><p><em>Make a note of the value for the "--pod-manifest-path" (set to /etc/kubernetes/manifests) as we'll be using that later to troubleshoot the other components.</em></p><h2 id="viewing-kubelet-logs">Viewing kubelet logs</h2><p>Units deployed via systemd use journald for their logging.  This means we have some simple command line tools we can use to see the log details for our kubelet.</p><pre>
	<code class="language-bash">
# View ALL known logs for the kubelet
journalctl -u kubelet.service

# View all logs for the kubelet since the last system boot
journalctl -b -u kubelet.service

# View all logs for the kubelet in the last hour
journalctl -b -u kubelet.service --since "1 hour ago"

# View all logs for the kubelet in the last fifteen minutes
journalctl -b -u kubelet.service --since "15 minutes ago"

# View all logs for the kubelet since a given date/time
journalctl -b -u kubelet.service --since "2018-12-10 15:30:00"
	</code>
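</pre><p>When you're actively reproducing an issue, it's often handier to follow the log live rather than query time ranges:</p><pre>
	<code class="language-bash">
# Show the last 100 lines, then stream new entries as they arrive (Ctrl+C to stop)
journalctl -u kubelet.service -n 100 -f
	</code>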
</pre><p>For some more details on the command syntax while using systemctl and journalctl, check out these helpful guides:</p><ul><li><a href="https://www.digitalocean.com/community/tutorials/systemd-essentials-working-with-services-units-and-the-journal">https://www.digitalocean.com/community/tutorials/systemd-essentials-working-with-services-units-and-the-journal</a></li><li><a href="https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs">https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs</a></li><li><a href="https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files">https://www.digitalocean.com/community/tutorials/understanding-systemd-units-and-unit-files</a></li></ul><h2 id="checking-the-status-of-other-components-running-as-static-pods">Checking the status of other components running as static pods</h2><p>As noted earlier, the most common kubernetes deployment involves the kubelet being managed by systemd, and the other components being run as static pods by the kubelet.  If you remember before, we saw that our kubelet was being passed a parameter of "--pod-manifest-path", with a value of "/etc/kubernetes/manifests".  So let's take a look at what we find in that directory:</p><pre class="command-line" data-user="root" data-host="localhost" data-output="3"><code class="language-bash">cd /etc/kubernetes/manifests
ls
kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
</code></pre><p>If you open any of these YAML files, you'll see they are just Pod definitions that can be understood by kubernetes.  Open up and view any of the files to see the parameters and value being sent to that particular component.</p><h2 id="checking-the-status-of-the-static-pods-and-viewing-logs">Checking the status of the static pods and viewing logs</h2><p>In almost every case, these pods will be running in the "kube-system" namespace.  You can verify this by evaluating the metadata.namespace value in the YAML files as well.  We can then use kubectl to see the status of the pods, as well as gathering their logs:</p><pre>
	<code class="language-bash">
# See the status of the pod, and verify what node it's running on
kubectl get pods -n kube-system -o wide

# Once you've found a specific pod you want to review, describe it
kubectl describe pod POD_NAME -n kube-system

# Review the logs for your pod
kubectl logs POD_NAME -n kube-system
	</code>
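</pre><p>One shortcut worth knowing: many deployment tools label their control plane static pods, so you can often filter straight to them.  kubeadm, for example, uses a "tier=control-plane" label - check your own pods with --show-labels first, since labels vary by deployment tool:</p><pre>
	<code class="language-bash">
# See what labels your kube-system pods carry
kubectl get pods -n kube-system --show-labels

# If your pods use the kubeadm-style label, filter directly on it
kubectl get pods -n kube-system -l tier=control-plane -o wide
	</code>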
</pre><p></p><h2 id="what-if-kubectl-isn-t-working-and-i-need-to-view-logs">What if kubectl isn't working and I need to view logs?</h2><p>Let's say that you're unable to execute kubectl to view logs - perhaps the kube-apiserver isn't running and as such you can't even use any kubectl commands.  You can validate the logs of these components by SSH'ing directly to the given node and looking at the docker logs themselves.</p><p>First we need to find our container ID:</p><pre>
	<code class="language-bash">
# Replace "kube-apiserver" with the name of the pod you're looking for
docker ps -a | grep kube-apiserver
	</code>
</pre><p>You'll likely see two containers returned here.  This is because every pod in Kubernetes actually launches 1 additional container, in addition to however many containers you specify.  This is the "pause" container, and the purpose of it is to glue together the network/storage stack of all the containers in your pod.  For our purposes, we don't really care about it.  So you'll want to ignore the container that's running the "/pause" command, and instead get the container ID of the other one.</p><p>Once you have the ID, you can just use the Docker CLI to query the logs:</p><pre>
	<code class="language-bash">
docker logs CONTAINER_ID
docker logs CONTAINER_ID --tail 500
	</code>
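</pre><p>If you'd rather not copy container IDs around, you can combine the two steps - the kubelet names its Docker containers with a "k8s_" prefix followed by the container name, so a name filter usually gets you there (verify the naming on your own node first with "docker ps"):</p><pre>
	<code class="language-bash">
# Tail the kube-apiserver container logs in one shot
docker logs $(docker ps -q --filter "name=k8s_kube-apiserver") --tail 100
	</code>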
</pre><p>If for whatever reason the docker commands aren't working, you can always view the raw log files directly.  These are usually located in /var/lib/docker/containers/CONTAINER_ID.</p><h2 id="summary">Summary</h2><p>Hopefully this helps you get started in digging into your master nodes.  As time goes on, it's going to be more and more likely that these will be abstracted from the consumer (as is the case with GKE and AKS solutions), but having an understanding of how they work and how to troubleshoot them will still be an important skill until then.   If you have any questions or run into any problems, feel free to reach out!</p><p>-Justin</p><p></p>]]></content:encoded></item></channel></rss>