In my previous post, I walked through the process to deploy an ELK stack in AKS to play around with / learn / test stuff. In that example there was NO TLS/SSL communication, and nothing was protected by passwords.
I went through and read all the documentation and figured out how to do the same thing, but to deploy a fully-secured cluster for both Elasticsearch and Kibana, and wanted to share that process as well.
If you haven't already done so, you'll want to read through the previous post to deploy an AKS instance to use and to get Chocolatey/Helm installed. For reference, you can check back here: https://talkcloudlytome.com/setting-up-your-own-elk-stack-in-kubernetes-with-azure-aks/
Once we have an AKS cluster to deploy to, we're going to have to create some certificates to secure our ELK stack with. The point of my blog post here is to show how to set up and secure your ELK stack. However - it would take far longer than I want to spend to set up an AppGateway/LoadBalancer with the proper DNS names so I could get correct certificate validation. So with this example, everything will be secured via TLS/SSL, but if you try to access it via the external IP from outside of the cluster, you WILL get certificate validation errors (as expected).
One other thing to note is that I'm using .pem certificates. If you're using PKCS#12 certificates, there are some different settings to use for Elasticsearch, all of which is covered in the documentation listed in the reference section below.
We're going to use cfssl and cfssljson to create our certs. Run this and save off the ca-key.pem, ca.pem, security-master-key.pem, and security-master.pem files that are created:
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "8760h"
    },
    "profiles": {
      "elasticsearch": {
        "usages": ["signing", "key encipherment", "server auth", "client auth"],
        "expiry": "8760h"
      }
    }
  }
}
EOF
cat > ca-csr.json <<EOF
{
  "CN": "Elasticsearch",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Hartland",
      "O": "Elasticsearch",
      "OU": "CA",
      "ST": "Wisconsin"
    }
  ]
}
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cat > security-master-csr.json <<EOF
{
  "CN": "security-master",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Hartland",
      "O": "Elasticsearch",
      "OU": "Elasticsearch",
      "ST": "Wisconsin"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=elasticsearch \
security-master-csr.json | cfssljson -bare security-master
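If you want to double-check what was generated before moving on, you can optionally inspect the certificates with openssl (assuming you have openssl available on your machine):
# Show the CA's subject and validity window
openssl x509 -in ca.pem -noout -subject -dates
# Confirm the node certificate chains back to our CA - should print "security-master.pem: OK"
openssl verify -CAfile ca.pem security-master.pem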
Now, in the same directory that contains all the certificates you just created, create a file called "kustomization.yaml" with the following contents:
secretGenerator:
- name: elastic-certificates
  files:
  - ca.pem
  - security-master.pem
  - security-master-key.pem
Run the following command to apply this secret to Kubernetes:
kubectl apply -k .
Make a note of the secret name that is output here, as you'll need it later. You can see it in the output - it will look something like this:
PS > kubectl apply -k .
secret/elastic-certificates-7ft8hkbftk created
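If you lose track of that generated name later, you can always list the secrets in the namespace to find it again - kustomize appends the hash suffix automatically:
# Look for the entry that starts with "elastic-certificates-"
kubectl get secrets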
Next we need to create a secret that will hold the credentials that Elasticsearch will run under. Replace "##PASSWORD##" with the actual password you want to use:
kubectl create secret generic elastic-credentials --from-literal=password=##PASSWORD## --from-literal=username=elastic
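If you want to confirm the secret was created with both keys (without printing the actual values), you can describe it:
# Shows the data keys (password, username) and their sizes, but not the values themselves
kubectl describe secret elastic-credentials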
Now we're going to make and modify a values.yml file for Helm to deploy Elasticsearch with our security configuration settings. Previously we didn't bother with a values file for our deployment, since we were only overriding one value (service.type). However, we have to modify quite a few values here, so we'll put them all into one file.
Add the following into a file and save it as "elasticsearch-helm-values.yml". Make sure you replace "elastic-certificates-7ft8hkbftk" in the secretMounts.secretName section with the name of the secret you created above.
---
clusterName: "security"
nodeGroup: "master"
roles:
  master: "true"
  ingest: "true"
  data: "true"
protocol: https
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: true
    xpack.security.transport.ssl.enabled: true
    xpack.security.transport.ssl.verification_mode: certificate
    xpack.security.transport.ssl.key: /usr/share/elasticsearch/config/certs/security-master-key.pem
    xpack.security.transport.ssl.certificate: /usr/share/elasticsearch/config/certs/security-master.pem
    xpack.security.transport.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/certs/ca.pem" ]
    xpack.security.http.ssl.enabled: true
    xpack.security.http.ssl.key: /usr/share/elasticsearch/config/certs/security-master-key.pem
    xpack.security.http.ssl.certificate: /usr/share/elasticsearch/config/certs/security-master.pem
    xpack.security.http.ssl.certificate_authorities: [ "/usr/share/elasticsearch/config/certs/ca.pem" ]
extraEnvs:
  - name: ELASTIC_PASSWORD
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
  - name: ELASTIC_USERNAME
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates-7ft8hkbftk
    path: /usr/share/elasticsearch/config/certs
service:
  labels: {}
  labelsHeadless: {}
  type: LoadBalancer
  nodePort: ""
  annotations: {}
  httpPortName: http
  transportPortName: transport
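Before installing, you can optionally render the chart locally with these values to sanity-check that the esConfig and secretMounts sections come out the way you expect - nothing gets deployed by this command:
helm template elasticsearch elastic/elasticsearch --values elasticsearch-helm-values.yml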
Now we're good to install it via Helm with these values:
helm install elasticsearch elastic/elasticsearch --values elasticsearch-helm-values.yml
You should get output similar to the previous post:
NAME: elasticsearch
LAST DEPLOYED: Fri Feb 21 19:26:40 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Watch all cluster members come up.
$ kubectl get pods --namespace=default -l app=elasticsearch-master -w
2. Test cluster health using Helm test.
$ helm test elasticsearch
Run the "get pods" command until all the pods have a "READY" value of "1/1" and a STATUS of "Running".
Find out the External IP address of your elasticsearch service:
kubectl get svc security-master
Make a note of the "EXTERNAL-IP" that you find here. Then go over to a browser and go to https://<YOUR-EXTERNAL-IP>:9200. Note that we're specifying "https" instead of "http" for the protocol now! You should see something like this:

Note that it does say "Not secure". It IS secured with TLS; however, due to how we set up our load balancer infrastructure, we currently have to access it via IP. If we were to access it with the host name (security-master), as in-cluster communication does, it would work fine and show as secure. In a production environment, you would probably configure this a little differently. But you can click on the "Not secure" link to see the certificate and validate that we are in fact securing communications with TLS:

Another thing you'll note that's different from our unsecured installation - we're now prompted for credentials when logging in! This is because we set "xpack.security.enabled" to true in our elasticsearch.yml configuration. To log in, use "elastic" for the username, and whatever password you specified for the "elastic-credentials" secret you created earlier.

Success! We now have an elasticsearch installation that's using encrypted TLS both for node-to-node and HTTP communications, and also requires a login to authenticate.
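If you'd rather verify from the command line than a browser, a quick check like this should work too (the -k flag skips certificate validation since we're connecting by IP, and ##PASSWORD## is whatever you put in the elastic-credentials secret):
# Basic node info - should return the cluster name "security" and the Elasticsearch version
curl -k -u elastic:##PASSWORD## https://<YOUR-EXTERNAL-IP>:9200
# Overall cluster health
curl -k -u elastic:##PASSWORD## "https://<YOUR-EXTERNAL-IP>:9200/_cluster/health?pretty"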
Next we're going to set up Kibana to talk to this instance. This requires additional configuration too, since we want Kibana to be accessed over HTTPS from the web browser, and we also need it to communicate with our Elasticsearch instance over HTTPS.
We're going to go back to the same directory where we created our elasticsearch certs and make one for Kibana. Run this to create the certs. Save off the kibana-kibana.pem and kibana-kibana-key.pem files that are created:
cat > kibana-kibana-csr.json <<EOF
{
  "CN": "kibana-kibana",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "US",
      "L": "Hartland",
      "O": "Kibana",
      "OU": "Kibana",
      "ST": "Wisconsin"
    }
  ]
}
EOF
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=elasticsearch \
kibana-kibana-csr.json | cfssljson -bare kibana-kibana
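As before, you can optionally confirm that the new certificate was signed by the same CA (assuming openssl is available):
# Should print "kibana-kibana.pem: OK"
openssl verify -CAfile ca.pem kibana-kibana.pem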
Copy those .pem files into the same directory as all your other certificates, and modify the "kustomization.yaml" file you created earlier to contain only the following contents:
secretGenerator:
- name: kibana-certificates
  files:
  - kibana-kibana.pem
  - kibana-kibana-key.pem
Run the following command to apply this secret to Kubernetes:
kubectl apply -k .
Make a note of the secret name that is output here, as you'll need it later. You can see it in the output - it will look something like this:
PS > kubectl apply -k .
secret/kibana-certificates-mkt5m8644t created
Now we're going to make and modify a values.yml file to deploy Kibana with our security config settings. Add the following into a file and save it as "kibana-helm-values.yml". Make sure you replace the "elastic-certificates-7ft8hkbftk" and "kibana-certificates-mkt5m8644t" secret names with the respective names of the secrets you've created so far in this process!
---
elasticsearchHosts: "https://security-master:9200"
extraEnvs:
  - name: 'ELASTICSEARCH_USERNAME'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: username
  - name: 'ELASTICSEARCH_PASSWORD'
    valueFrom:
      secretKeyRef:
        name: elastic-credentials
        key: password
kibanaConfig:
  kibana.yml: |
    server.ssl:
      enabled: true
      key: /usr/share/kibana/config/certs/kibana-kibana-key.pem
      certificate: /usr/share/kibana/config/certs/kibana-kibana.pem
    elasticsearch.ssl:
      certificateAuthorities: /usr/share/kibana/config/elasticsearchcerts/ca.pem
      verificationMode: certificate
protocol: https
secretMounts:
  - name: kibana-certificates
    secretName: kibana-certificates-mkt5m8644t
    path: /usr/share/kibana/config/certs
  - name: elastic-certificates
    secretName: elastic-certificates-7ft8hkbftk
    path: /usr/share/kibana/config/elasticsearchcerts
service:
  type: LoadBalancer
  port: 5601
  nodePort: ""
  labels: {}
  annotations: {}
  loadBalancerSourceRanges: []
Now we're good to install it via Helm with these values:
helm install kibana elastic/kibana --values kibana-helm-values.yml
You can run the following to watch the status of the Kibana pod:
kubectl get pods --namespace=default -l app=kibana -w
Once this command shows the pod(s) with a "READY" value of "1/1" and a STATUS of "Running", we're all good to go.
Find out the External IP address of the Kibana service:
kubectl get svc kibana-kibana
Make a note of the "EXTERNAL-IP" that you find here. Then go over to a browser and go to https://<YOUR-EXTERNAL-IP>:5601. Note that we're specifying "https" instead of "http" for the protocol now! The same caveats about certificate validation errors apply here, but you can view the certificate itself to see that we're indeed using TLS:

If you click through the security warnings, you'll see this:

Kibana uses the same credentials for login that Elasticsearch does, so you can use the same username/password you used to log in to Elasticsearch earlier:
And we're in!
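If you'd like a command-line check here as well, Kibana exposes a status API you can hit the same way (again, -k skips certificate validation because we're connecting by IP):
# Returns a JSON document describing Kibana's overall status
curl -k -u elastic:##PASSWORD## https://<YOUR-EXTERNAL-IP>:5601/api/status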

Lastly - I didn't go through and set up any Beats with this instance, but the changes needed are relatively minor. Basically, you just need to specify "https" when configuring your Elasticsearch output and provide a path to the CA certificate for your Elasticsearch instance. You can read up on some examples here if you're interested: https://www.elastic.co/guide/en/beats/filebeat/current/elasticsearch-output.html
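To give a rough idea - and this is just a sketch I haven't deployed as part of this post - the output section of a filebeat.yml would look something like the following. The CA path is hypothetical and needs to point at wherever you make ca.pem available to the Beat:
output.elasticsearch:
  hosts: ["https://security-master:9200"]
  username: "elastic"
  password: "##PASSWORD##"
  # Hypothetical path - mount or copy ca.pem to wherever makes sense for your Beat deployment
  ssl.certificate_authorities: ["/usr/share/filebeat/certs/ca.pem"]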
Additional Resources:
I used the following resources when researching this process and building out this blog post:
- https://www.elastic.co/guide/en/elasticsearch/reference/7.6/configuring-tls.html#tls-http
- https://www.elastic.co/guide/en/kibana/current/configuring-tls.html
- https://github.com/elastic/helm-charts/tree/master/elasticsearch/examples/security
- https://github.com/elastic/helm-charts/tree/master/kibana/examples/security
I hope this helps you out - thanks for reading!
-Justin