The Azure Kubernetes Workshop

Welcome to the Azure Kubernetes Workshop. In this lab, you’ll go through tasks that will help you master the basic and more advanced topics required to deploy a multi-container application to Kubernetes on Azure Kubernetes Service (AKS).

You can use this guide as a Kubernetes tutorial and as study material to help you get started to learn Kubernetes.

If you are new to Kubernetes, start with the Kubernetes Learning Path to learn Kubernetes basics, then go through the concepts of what Kubernetes is and what it isn’t.

If you are a more experienced Kubernetes developer or administrator, you may have a look at the Kubernetes best practices ebook.

Some of the things you’ll be going through:

  • Kubernetes deployments, services and ingress
  • Deploying MongoDB using Helm
  • Azure Monitor for Containers, Horizontal Pod Autoscaler and the Cluster Autoscaler
  • Building CI/CD pipelines using Azure DevOps and Azure Container Registry
  • Scaling using Virtual Nodes, setting up SSL/TLS for your deployments, using Azure Key Vault for secrets

You can review the changelog for what has recently changed.

Prerequisites

Tools

You can use the Azure Cloud Shell accessible at https://shell.azure.com once you login with an Azure subscription. The Azure Cloud Shell has the Azure CLI pre-installed and configured to connect to your Azure subscription as well as kubectl and helm.

Azure subscription

If you have an Azure subscription

Please use your username and password to log in to https://portal.azure.com.

Also authenticate your Azure CLI by running az login on your machine (shown below) and following the instructions; az account show verifies which subscription you are connected to.

az login
az account show

If you have been given access to a subscription as part of a lab, or you already have a Service Principal you want to use

If you have lab environment credentials similar to the ones below, or you already have a Service Principal you will use with this workshop:

Lab environment credentials

Please then perform an az login on your machine using the command below, passing in the Application Id, the Application Secret Key and the Tenant Id.

az login --service-principal --username APP_ID --password "APP_SECRET" --tenant TENANT_ID

Azure Cloud Shell

You can use the Azure Cloud Shell accessible at https://shell.azure.com once you login with an Azure subscription.

Head over to https://shell.azure.com and sign in with your Azure Subscription details.

Select Bash as your shell.

Select Bash

Select Show advanced settings

Select show advanced settings

Set the Storage account and File share names to your resource group name (all lowercase, without any special characters), then hit Create storage

Azure Cloud Shell

You should now have access to the Azure Cloud Shell

Set the storage account and fileshare names

Uploading and editing files in Azure Cloud Shell

  • You can use code <file you want to edit> in Azure Cloud Shell to open the built-in text editor.
  • You can upload files to the Azure Cloud Shell by dragging and dropping them
  • You can also do a curl -o filename.ext https://file-url/filename.ext to download a file from the internet.

Kubernetes basics

There is an assumption of some prior knowledge of Kubernetes and its concepts.

Application Overview

You will be deploying a customer-facing order placement and fulfillment application that is containerized and is architected for a microservice implementation.

Application diagram

The application consists of 3 components:

  • A public facing Order Capture swagger enabled API
  • A public facing frontend
  • A MongoDB database

Tasks

Useful resources are provided to help you work through each task. To keep a good pace, divide the workload between team members where possible; this may mean anticipating work that will be required in a later task.

Hint: If you get stuck, you can ask for help from the proctors. You may also choose to peek at the solutions.

Core tasks

You are expected to at least complete the Getting up and running section. This involves setting up a Kubernetes cluster, deploying the application containers from Docker Hub, setting up monitoring and scaling your application.

DevOps tasks

Once you’re done with the above, next would be to include some DevOps. Complete as many tasks as you can. You’ll be setting up a Continuous Integration and Continuous Delivery pipeline for your application and then using Helm to deploy it.

Advanced cluster tasks

If you’re up to it, explore configuring the Azure Kubernetes Service cluster with Virtual Nodes, enabling MongoDB replication, using HashiCorp’s Terraform to deploy AKS and your application and more.

Getting up and running

Deploy Kubernetes with Azure Kubernetes Service (AKS)

Azure has a managed Kubernetes service, Azure Kubernetes Service (AKS). We’ll use it to easily deploy and stand up a Kubernetes cluster.

Tasks

Get the latest Kubernetes version available in AKS

Get the latest available Kubernetes version in your preferred region into a bash variable. Replace <region> with the region of your choosing, for example eastus.

version=$(az aks get-versions -l <region> --query 'orchestrators[-1].orchestratorVersion' -o tsv)

Create a Resource Group

Note You don’t need to create a resource group if you’re using the lab environment. You can use the resource group created for you as part of the lab. To retrieve the resource group name in the managed lab environment, run az group list.

az group create --name <resource-group> --location <region>
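
If you are using the managed lab environment instead, a minimal sketch for picking up the pre-created resource group name into a variable (assuming it is the only resource group visible to you; the variable name is my choice):

RESOURCE_GROUP=$(az group list --query '[0].name' -o tsv)
echo $RESOURCE_GROUP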

Create the AKS cluster

Task Hints

  • It’s recommended to use the Azure CLI and the az aks create command to deploy your cluster. Refer to the docs linked in the Resources section, or run az aks create -h for details
  • The size and number of nodes in your cluster are not critical, but two or more nodes of size DS2_v2 or larger are recommended

Note You can create AKS clusters that support the cluster autoscaler.

Note If you’re using the provided lab environment, you’ll not be able to create the Log Analytics workspace required to enable monitoring while creating the cluster from the Azure Portal unless you manually create the workspace in your assigned resource group. Additionally, if you’re running this on an Azure Pass, please add --load-balancer-sku basic to the flags, as the Azure Pass only supports the basic Azure Load Balancer.

Option 1 Create AKS using the latest version

  az aks create --resource-group <resource-group> \
    --name <unique-aks-cluster-name> \
    --location <region> \
    --kubernetes-version $version \
    --generate-ssh-keys

Option 2 Create an AKS cluster with the cluster autoscaler

Note If you’re running this on an Azure Pass, please add --load-balancer-sku basic to the flags, as the Azure Pass only supports the basic Azure Load Balancer.

AKS clusters that support the cluster autoscaler must use virtual machine scale sets and run Kubernetes version 1.12.4 or later.

Use the az aks create command specifying the --enable-cluster-autoscaler parameter, and a node --min-count and --max-count.

  az aks create --resource-group <resource-group> \
    --name <unique-aks-cluster-name> \
    --location <region> \
    --kubernetes-version $version \
    --generate-ssh-keys \
    --vm-set-type VirtualMachineScaleSets \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3

Ensure you can connect to the cluster using kubectl

Task Hints

  • kubectl is the main command line tool you will be using for working with Kubernetes and AKS. It is already installed in the Azure Cloud Shell
  • Refer to the AKS docs which includes a guide for connecting kubectl to your cluster (Note. using the cloud shell you can skip the install-cli step).
  • A good sanity check is listing all the nodes in your cluster: kubectl get nodes.
  • This is a good cheat sheet for kubectl.
  • If you run kubectl in PowerShell ISE, you can also define aliases:
    function kubectl([Parameter(ValueFromRemainingArguments = $true)]$params) { Write-Output "> kubectl $(@($params | ForEach-Object {$_}) -join ' ')"; & kubectl.exe $params; }
    function k([Parameter(ValueFromRemainingArguments = $true)]$params) { Write-Output "> k $(@($params | ForEach-Object {$_}) -join ' ')"; & kubectl.exe $params; }

Note kubectl, the Kubernetes CLI, is already installed on the Azure Cloud Shell.

Authenticate

az aks get-credentials --resource-group <resource-group> --name <unique-aks-cluster-name>

List the available nodes

kubectl get nodes

Resources

Deploy MongoDB

You need to deploy MongoDB in a way that is scalable and production ready. There are a couple of ways to do so.

Task Hints

  • Use Helm and a standard provided Helm chart to deploy MongoDB.
  • Be careful with the authentication settings when creating MongoDB. It is recommended that you create a standalone username/password. The username and password can be anything you like, but make a note of them for the next task.

Important: If you install MongoDB using Helm and then delete the release, the MongoDB data and configuration persist in a Persistent Volume Claim. If you redeploy using the same release name, the stale authentication configuration will not match and you will run into authentication issues. If you need to delete the Helm release and start over, make sure you also delete the Persistent Volume Claims it created. Find those claims using kubectl get pvc.

Tasks

Setup Helm

Helm is an application package manager for Kubernetes and a way to easily deploy applications and services into Kubernetes via what are called charts. To use Helm you will need the helm command (already installed in the Azure Cloud Shell), the Tiller component in your cluster (created with the helm init command) and a chart to deploy.

Initialize the Helm components on the AKS cluster

Task Hints

  • Refer to the AKS docs which includes a guide for setting up Helm in your cluster
  • You will have RBAC enabled in your AKS cluster, unless you specifically disabled it when creating it (very unlikely)
  • You can ignore the instructions regarding TLS

Unless you specified otherwise your cluster will be RBAC enabled, so you have to create the appropriate ServiceAccount for Tiller (the server side Helm component) to use.

Save the YAML below as helm-rbac.yaml or download it from helm-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

And deploy it using

kubectl apply -f helm-rbac.yaml

Initialize Tiller (omit the --service-account flag if your cluster is not RBAC enabled). Setting --history-max on helm init is recommended, as the configmaps and other objects Helm keeps for release history can grow large in number if not capped. Without a maximum, history is kept indefinitely, leaving a large number of records for Helm and Tiller to maintain.

helm init --history-max 200 --service-account tiller --node-selectors "beta.kubernetes.io/os=linux"

Deploy an instance of MongoDB to your cluster

Helm provides a standard repository of charts for many different software packages, and it has one for MongoDB that is easily replicated and horizontally scalable.

Task Hints

  • When installing a chart Helm uses a concept called a “release”, and this release needs a name. You should give your release a name (using --name); it is strongly recommended you use orders-mongo as the name, as we’ll need to refer to it later
  • When deploying a chart you provide parameters with the --set switch and a comma separated list of key=value pairs. There are MANY parameters you can provide to the MongoDB chart, but pay attention to the mongodbUsername, mongodbPassword and mongodbDatabase parameters

Note The application expects a database called akschallenge. Please DO NOT modify it.

The recommended way to deploy MongoDB would be to use Helm.

After Tiller is initialized in the cluster, wait a short while, then install the MongoDB chart and take note of the username, password and endpoint created. The command below creates a user called orders-user with a password of orders-password.

helm install stable/mongodb --name orders-mongo --set mongodbUsername=orders-user,mongodbPassword=orders-password,mongodbDatabase=akschallenge

Hint By default, the service load balancing the MongoDB cluster will be accessible at orders-mongo-mongodb.default.svc.cluster.local

You’ll need to use the user created in the command above when configuring the deployment environment variables.

Create a Kubernetes secret to hold the MongoDB details

In the previous step, you installed MongoDB using Helm, with a specified username, password and a hostname where the database is accessible. You’ll now create a Kubernetes secret called mongodb to hold those details, so that you don’t need to hard-code them in the YAML files.

Task Hints

  • A Kubernetes secret can hold several items, indexed by key. The name of the secret isn’t critical, but you’ll need three keys stored in your secret:
    • mongoHost
    • mongoUser
    • mongoPassword
  • The values for the username & password will be what you used on the helm install command when deploying MongoDB.
  • Run kubectl create secret generic -h for help on how to create a secret; clue: use the --from-literal parameter to provide the secret values directly on the command line in plain text.
  • The value of mongoHost will depend on the name of the MongoDB service. The service was created by the Helm chart and its name will start with the release name you gave. Run kubectl get service and you should see it listed, e.g. orders-mongo-mongodb
  • All services in Kubernetes get DNS names; these are assigned automatically by Kubernetes, so there’s no need for you to configure anything. You can use the short form, which is simply the service name (e.g. orders-mongo-mongodb), or better, the “fully qualified” form orders-mongo-mongodb.default.svc.cluster.local

kubectl create secret generic mongodb --from-literal=mongoHost="orders-mongo-mongodb.default.svc.cluster.local" --from-literal=mongoUser="orders-user" --from-literal=mongoPassword="orders-password"

You’ll need to use the user created in the command above when configuring the deployment environment variables.

Resources

Architecture Diagram

If you want a picture of how the system should look at the end of this challenge click below

Deploy the Order Capture API

You need to deploy the Order Capture API (azch/captureorder). This requires an external endpoint exposing the API on port 80, and it needs to write to MongoDB.

Container images and source code

In the table below, you will find the Docker container images provided by the development team on Docker Hub as well as their corresponding source code on GitHub.

Component | Docker Image | Source Code | Build Status
Order Capture API | azch/captureorder | source-code | Build Status

Environment variables

The Order Capture API requires certain environment variables to run properly and track your progress. Make sure you set those environment variables in your deployment (you don’t need to set them locally in your shell).

  • MONGOHOST="<hostname of mongodb>"
    • MongoDB hostname. Read from the Kubernetes secret you created.
  • MONGOUSER="<mongodb username>"
    • MongoDB username. Read from the Kubernetes secret you created.
  • MONGOPASSWORD="<mongodb password>"
    • MongoDB password. Read from the Kubernetes secret you created.

Hint: The Order Capture API exposes the following endpoint for health-checks once you have completed the tasks below: http://[PublicEndpoint]:[port]/healthz

Tasks

Provision the captureorder deployment

Task Hints

  • Read the Kubernetes docs in the resources section below for details on how to create a deployment, you should create a YAML file and use the kubectl apply -f command to deploy it to your cluster
  • You provide environment variables to your container using the env key in your container spec. By using valueFrom and secretKeyRef you can reference values stored in a Kubernetes secret (i.e. the one you created holding the MongoDB host, username and password)
  • The container listens on port 8080
  • If your pods are not starting, not ready or are crashing, you can view their logs using kubectl logs <pod name> and/or kubectl describe pod <pod name>
  • Advanced: You can define a readinessProbe and livenessProbe using the /healthz endpoint exposed by the container and the port 8080, this is optional
Deployment

Save the YAML below as captureorder-deployment.yaml or download it from captureorder-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: captureorder
spec:
  selector:
      matchLabels:
        app: captureorder
  replicas: 2
  template:
      metadata:
        labels:
            app: captureorder
      spec:
        containers:
        - name: captureorder
          image: azch/captureorder
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              port: 8080
              path: /healthz
          livenessProbe:
            httpGet:
              port: 8080
              path: /healthz
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
          - name: MONGOHOST
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: mongoHost
          - name: MONGOUSER
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: mongoUser
          - name: MONGOPASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: mongoPassword
          ports:
          - containerPort: 8080

And deploy it using

kubectl apply -f captureorder-deployment.yaml
Verify that the pods are up and running
kubectl get pods -l app=captureorder -w

Wait until you see pods are in the Running state.

Hint If the pods are not starting, not ready or are crashing, you can view their logs using kubectl logs <pod name> and kubectl describe pod <pod name>.

Expose the captureorder deployment with a service

Task Hints

  • Read the Kubernetes docs in the resources section below for details on how to create a service, you should create a YAML file and use the kubectl apply -f command to deploy it to your cluster
  • Pay attention to the port, targetPort and the selector
  • Kubernetes has several types of services (described in the docs), specified in the type field. You will need to create a service of type LoadBalancer
  • The service should export port 80
Service

Save the YAML below as captureorder-service.yaml or download it from captureorder-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: captureorder
spec:
  selector:
    app: captureorder
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer

And deploy it using

kubectl apply -f captureorder-service.yaml
Retrieve the External-IP of the Service

Use the command below. Make sure to allow a couple of minutes for the Azure Load Balancer to assign a public IP.

kubectl get service captureorder -o jsonpath="{.status.loadBalancer.ingress[*].ip}" -w
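
Once an IP appears, a minimal sketch for capturing it into a shell variable for the tests below (drop the -w flag so the command returns immediately; the variable name is my choice):

CAPTUREORDER_IP=$(kubectl get service captureorder -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo $CAPTUREORDER_IP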

Ensure orders are successfully written to MongoDB

Task Hints

  • The IP of your service will be publicly available on the internet
  • The service has a Swagger/OpenAPI definition: http://[Your Service Public LoadBalancer IP]/swagger
  • The service has an orders endpoint which accepts GET and POST: http://[Your Service Public LoadBalancer IP]/v1/order
  • Orders take the form {"EmailAddress": "email@domain.com", "Product": "prod-1", "Total": 100} (The values are not validated)

Hint: You can test your deployed API either by using Postman or Swagger with the following endpoint : http://[Your Service Public LoadBalancer IP]/swagger/

Send a POST request using Postman or curl to the IP of the service you got from the previous command

curl -d '{"EmailAddress": "email@domain.com", "Product": "prod-1", "Total": 100}' -H "Content-Type: application/json" -X POST http://[Your Service Public LoadBalancer IP]/v1/order

Once your order has been written to MongoDB successfully, the API returns the order ID:

{
    "orderId": "5beaa09a055ed200016e582f"
}

Hint: You may have noticed that the deployment YAML for the Order Capture API defines a readinessProbe and a livenessProbe. In Kubernetes, readiness probes define when a container is ready to start accepting traffic, while liveness probes monitor container health. Both use the following endpoint for a simple health check: http://[PublicEndpoint]:[port]/healthz
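
A quick sanity check against that endpoint, assuming you stored the service’s external IP in the CAPTUREORDER_IP variable as sketched earlier (or substitute your LoadBalancer IP directly):

curl -i http://$CAPTUREORDER_IP/healthz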

Resources

Architecture Diagram

If you want a picture of how the system should look at the end of this challenge click below

Deploy the frontend using Ingress

You need to deploy the Frontend (azch/frontend). This requires an external endpoint exposing the website on port 80, and it needs to connect to the Order Capture API’s public IP so it can display the number of orders in the system.

Container images and source code

In the table below, you will find the Docker container images provided by the development team on Docker Hub as well as their corresponding source code on GitHub.

Component | Docker Image | Source Code | Build Status
Frontend | azch/frontend | source-code | Build Status

Environment variables

The frontend requires the CAPTUREORDERSERVICEIP environment variable to be set to the external public IP address of the captureorder service deployed in the previous step. Make sure you set this environment variable in your deployment file.

  • CAPTUREORDERSERVICEIP="<public IP of order capture service>"

Tasks

Provision the frontend deployment

Task Hints

  • As with the captureorder deployment you will need to create a YAML file describing your deployment. Making a copy of your captureorder deployment YAML is a good start, but beware that you will need to change:
    • image
    • readinessProbe (if you have one) endpoint; clue: use the root URL ‘/’
    • livenessProbe (if you have one) endpoint; clue: use the root URL ‘/’
  • As before, you need to provide environment variables to your container using env, but this time nothing is stored in a secret
  • The container listens on port 8080
  • If your pods are not starting, not ready or are crashing, you can view their logs using kubectl logs <pod name> and/or kubectl describe pod <pod name>
Deployment

Save the YAML below as frontend-deployment.yaml or download it from frontend-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
      matchLabels:
        app: frontend
  replicas: 1
  template:
      metadata:
        labels:
            app: frontend
      spec:
        containers:
        - name: frontend
          image: azch/frontend
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              port: 8080
              path: /
          livenessProbe:
            httpGet:
              port: 8080
              path: /
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
          - name: CAPTUREORDERSERVICEIP
            value: "<public IP of order capture service>" # Replace with your captureorder service IP
          ports:
          - containerPort: 8080

And deploy it using

kubectl apply -f frontend-deployment.yaml
Verify that the pods are up and running
kubectl get pods -l app=frontend -w

Hint If the pods are not starting, not ready or are crashing, you can view their logs using kubectl logs <pod name> and kubectl describe pod <pod name>.

Expose the frontend on a hostname

Instead of accessing the frontend through an IP address, you would like to expose the frontend over a hostname. Explore using Kubernetes Ingress to achieve this purpose.

As there are many ingress controllers to choose from, we will stick to the tried and true nginx-ingress controller, which is the most popular, albeit not the most feature-rich, option.

  • Ingress controller: The Ingress controller is exposed to the internet by using a Kubernetes service of type LoadBalancer. The Ingress controller watches and implements Kubernetes Ingress resources, which creates routes to application endpoints.

We will leverage the nip.io reverse wildcard DNS resolver service to map our ingress controller LoadBalancerIP to a proper DNS name.

Task Hints

  • When placing services behind an ingress you don’t expose them directly with the LoadBalancer type; instead you use ClusterIP. In this network model, external clients access your services via the public IP of the ingress controller
  • This picture helps explain how this looks and hangs together
  • Use Helm to deploy the NGINX ingress controller. The Helm chart for the NGINX ingress controller requires no options/values when deploying it.
  • ProTip: Place the ingress controller in a different namespace, e.g. ingress, with the --namespace option.
  • Use kubectl get service (add --namespace if you deployed it to a different namespace) to discover the public/external IP of your ingress controller; make a note of it.
  • nip.io is not related to Kubernetes or Azure; it simply provides a useful service that maps any IP address to a hostname, saving you from having to create public DNS records. If your ingress controller had IP 12.34.56.78, you could access it via http://anythingyouwant.12.34.56.78.nip.io
  • The Kubernetes docs have an example of creating an Ingress object, except you will only be specifying a single host rule. Use nip.io and your ingress controller IP to set the host field. As with the deployment and service, you create this object via a YAML file and kubectl apply
Service

Save the YAML below as frontend-service.yaml or download it from frontend-service.yaml

Note Since you’re going to expose the deployment using an Ingress, there is no need to use a public IP for the Service, hence you can set the type of the service to be ClusterIP instead of LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP

And deploy it using

kubectl apply -f frontend-service.yaml
Deploy the ingress controller with helm

NGINX ingress controller is easily deployed with helm:

helm repo update

helm upgrade --install ingress stable/nginx-ingress --namespace ingress

In a couple of minutes, a public IP address will be allocated to the ingress controller; retrieve it with:

kubectl get svc  -n ingress    ingress-nginx-ingress-controller -o jsonpath="{.status.loadBalancer.ingress[*].ip}"
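
A minimal sketch, assuming the service name above, for capturing that IP into a shell variable and previewing the nip.io hostname you’ll use in the Ingress below (the variable name is my choice):

INGRESS_IP=$(kubectl get svc -n ingress ingress-nginx-ingress-controller -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo "frontend.${INGRESS_IP}.nip.io"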
Ingress

Create an Ingress resource, add any required annotations, and make sure to replace _INGRESS_CONTROLLER_EXTERNAL_IP_ with the IP address you retrieved from the previous command.

Additionally, make sure that the serviceName and servicePort match the Service you deployed previously.

Save the YAML below as frontend-ingress.yaml or download it from frontend-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
  - host: frontend._INGRESS_CONTROLLER_EXTERNAL_IP_.nip.io
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
        path: /

And create it using

kubectl apply -f frontend-ingress.yaml

Browse to the public hostname of the frontend and watch as the number of orders changes

Once the Ingress is deployed, you should be able to access the frontend at http://frontend.[cluster_specific_dns_zone], for example http://frontend.52.255.217.198.nip.io

If it doesn’t work on the first try, give it a few more minutes or try a different browser.

Note: you might need to allow the blocked scripts to run in your browser; click on the shield icon in the address bar (in Chrome) and allow the unsafe scripts to be executed.

Orders frontend

Resources

Architecture Diagram

If you want a picture of how the system should look at the end of this challenge click below

Enable SSL/TLS on ingress

You want to enable connecting to the frontend website over SSL/TLS. In this task, you’ll use Let’s Encrypt free service to generate valid SSL certificates for your domains, and you’ll integrate the certificate issuance workflow into Kubernetes.

Important After you finish this task for the frontend, you may either see browser warnings about “mixed content” or the orders may not load at all, because the calls happen via JavaScript. To fix this, use the same concepts to create an ingress for the captureorder service and secure it with SSL/TLS.

Tasks

Install cert-manager

cert-manager is a Kubernetes add-on that automates the management and issuance of TLS certificates from various issuing sources. It periodically ensures certificates are valid and up to date, and attempts to renew them at an appropriate time before expiry.

Task Hints

Install cert-manager using Helm and configure it to use letsencrypt as the certificate issuer.

# Install the CustomResourceDefinition resources separately
kubectl apply --validate=false -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.11/deploy/manifests/00-crds.yaml

# Create the namespace for cert-manager
kubectl create namespace cert-manager

# Add the Jetstack Helm repository
helm repo add jetstack https://charts.jetstack.io

# Update your local Helm chart repository cache
helm repo update

# Install the cert-manager Helm chart
helm install \
  --name cert-manager \
  --namespace cert-manager \
  --version v0.11.0 \
  jetstack/cert-manager

Create a Let’s Encrypt ClusterIssuer

In order to begin issuing certificates, you will need to set up a ClusterIssuer.

Task Hints

  • cert-manager uses custom Kubernetes objects called Issuer or ClusterIssuer to act as the interface between you and the certificate issuing service (in our case Let’s Encrypt). There are many ways to create an issuer, but the cert-manager docs provide a working example YAML for Let’s Encrypt. It will require some small modifications: you must change the type to ClusterIssuer or it will not work, and the recommendation is to call the issuer letsencrypt
  • Check the status with kubectl describe clusterissuer.cert-manager.io/letsencrypt (or other name if you didn’t call your issuer letsencrypt)

Save the YAML below as letsencrypt-clusterissuer.yaml or download it from letsencrypt-clusterissuer.yaml.

Note Make sure to replace _YOUR_EMAIL_ with your email.

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory # production
    #server: https://acme-staging-v02.api.letsencrypt.org/directory # staging
    email: _YOUR_EMAIL_ # replace this with your email
    privateKeySecretRef:
      name: letsencrypt
    solvers:
       - http01:
           ingress:
             class:  nginx

And apply it using

kubectl apply -f letsencrypt-clusterissuer.yaml

Update the ingress resource to automatically request a certificate

Issuing certificates can be done automatically by properly annotating the ingress resource.

Task Hints

  • You need to make changes to the frontend ingress; you can modify your existing frontend ingress YAML file or make a copy under a new name
  • The quick start guide for cert-manager provides guidance on the changes you need to make. Note the following:
    • The annotation cert-manager.io/issuer: "letsencrypt-staging" in the metadata should refer to your issuer; since you created a ClusterIssuer named letsencrypt, use cert-manager.io/cluster-issuer: letsencrypt instead
    • The new tls: section; the host field here should match the host in your rules section, and secretName can be anything you like, as it will be the name of the secret holding the issued certificate (see the next step)
  • Reapply your changed frontend ingress using kubectl

Save the YAML below as frontend-ingress-tls.yaml or download it from frontend-ingress-tls.yaml.

Note Make sure to replace _INGRESS_CONTROLLER_EXTERNAL_IP_ with your cluster ingress controller external IP. Also make note of the secretName: frontend-tls-secret as this is where the issued certificate will be stored as a Kubernetes secret.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - frontend._INGRESS_CONTROLLER_EXTERNAL_IP_.nip.io
    secretName: frontend-tls-secret
  rules:
  - host: frontend._INGRESS_CONTROLLER_EXTERNAL_IP_.nip.io
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
        path: /

And apply it using

kubectl apply -f frontend-ingress-tls.yaml

Verify the certificate is issued and test the website over SSL

Task Hints

  • You can list custom objects such as certificates with regular kubectl commands, e.g. kubectl get cert and kubectl describe cert, use the describe command to validate the cert has been issued and is valid
  • Access the frontend in your browser as before, e.g. http://frontend.{ingress-ip}.nip.io. You might be automatically redirected to the https:// version; if not, modify the URL to use https://
  • You will probably see nothing in the Orders view, and many errors in the dev console (F12). To fix this you will need to make the orders API accessible over HTTPS with TLS:
    • Repeat the work you did when creating the frontend ingress, but this time for the captureorder service, i.e. direct traffic via the ingress (create a second Ingress object using YAML; a sketch follows this list)
    • The hostname will need to be different, but it must still point at your ingress controller IP, e.g. orders.{ingress-ip}.nip.io. You must set up TLS for it as you did with the frontend
    • Modify the CAPTUREORDERSERVICEIP environment variable in the frontend deployment YAML so it refers to the hostname of your orders ingress, then re-deploy the frontend to make the change live
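
A sketch of what that second ingress could look like, following the same pattern as the frontend ingress above. The orders hostname, the captureorder-tls-secret name and the assumption that the captureorder Service exposes port 80 (as in the earlier Service manifest) are choices you may adjust:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: captureorder
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  tls:
  - hosts:
    - orders._INGRESS_CONTROLLER_EXTERNAL_IP_.nip.io
    secretName: captureorder-tls-secret
  rules:
  - host: orders._INGRESS_CONTROLLER_EXTERNAL_IP_.nip.io
    http:
      paths:
      - backend:
          serviceName: captureorder
          servicePort: 80
        path: /

Save it, replace _INGRESS_CONTROLLER_EXTERNAL_IP_ with your ingress controller IP as before, and apply it with kubectl apply -f.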

Let’s Encrypt should automatically verify the hostname in a few seconds. Make sure that the certificate has been issued by running:

kubectl describe certificate frontend

You should get back something like:

Name:         frontend
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"cert-manager.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"frontend","namespace":"default"},"sp...
API Version:  cert-manager.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-02-13T02:40:40Z
  Generation:          1
  Resource Version:    11448
  Self Link:           /apis/cert-manager.io/v1alpha1/namespaces/default/certificates/frontend
  UID:                 c0a620ee-2f38-11e9-adae-0a58ac1f1147
Spec:
  Acme:
    Config:
      Domains:
        frontend.52.255.217.198.nip.io
      Http 01:
        Ingress Class:  addon-http-application-routing
  Dns Names:
    frontend.52.255.217.198.nip.io
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt
  Secret Name:  frontend-tls-secret

Verify that the frontend is accessible over HTTPS and that the certificate is valid.

Let's Encrypt SSL certificate

Note: even if the certificate is valid, you may still get a warning in your browser because of the unsafe scripts mentioned earlier.

Resources

Architecture Diagram

If you want a picture of how the system should look at the end of this challenge click below

Monitoring

You would like to monitor the performance of different components in your application, view logs and get alerts whenever your application availability goes down or some components fail.

Use a combination of the available tools to setup alerting capabilities for your application.

Tasks

Create Log Analytics workspace

Task Hints

If you are running this lab as part of the managed lab environment, you may not be able to create the required resources to enable monitoring due to insufficient permissions on the subscription. You’ll need to pre-create the Log Analytics workspace in your assigned environment resource group.

Follow the Create a Log Analytics workspace in the Azure portal instructions.

Alternatively you can create the workspace using the CLI with the command below, ensure you pick a unique name for the workspace

az resource create --resource-type Microsoft.OperationalInsights/workspaces \
 --name <workspace-name> \
 --resource-group <resource-group> \
 --location <region> \
 --properties '{}' -o table

Enable the monitoring addon

Task Hints

First get the resource id of the workspace you created, by running

az resource show --resource-type Microsoft.OperationalInsights/workspaces --resource-group <resource-group> --name <workspace-name> --query "id" -o tsv

Next enable the monitoring add-on by running the command below, replace the placeholder values and the workspace-resource-id value with the output from the previous command

az aks enable-addons --resource-group <resource-group> --name <unique-aks-cluster-name> --addons monitoring --workspace-resource-id <workspace-resource-id>
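
A hedged sketch that chains the two steps together so you don’t have to paste the resource ID by hand (the variable name is my choice; the placeholders are the same as above):

workspace_id=$(az resource show --resource-type Microsoft.OperationalInsights/workspaces --resource-group <resource-group> --name <workspace-name> --query "id" -o tsv)
az aks enable-addons --resource-group <resource-group> --name <unique-aks-cluster-name> --addons monitoring --workspace-resource-id "$workspace_id"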

Leverage the integrated Azure Kubernetes Service monitoring to figure out if requests are failing, inspect Kubernetes events and logs, and monitor your cluster health

Task Hints

  • View the utilization reports and charts in the Azure portal, via the “Insights” view on your AKS cluster
  • It might be several minutes before the data appears
  • Check the cluster utilization under load (see the Cluster utilization view)
  • Identify which pods are causing trouble (see the Pod utilization view)

View the live container logs and Kubernetes events

Task Hints

  • You can view live log data from the ‘Containers’ tab in the Insights view, with the “View live data (preview)” button.
  • You will get an error at first; this can be fixed by setting up some RBAC roles and accounts in your cluster, as covered in the AKS documentation. You might need to refresh the page in the portal for the changes to take effect.

If the cluster is RBAC enabled, which is the default, you have to create the appropriate ClusterRole and ClusterRoleBinding.

Save the YAML below as logreader-rbac.yaml or download it from logreader-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1 
kind: ClusterRole 
metadata: 
   name: containerHealth-log-reader 
rules: 
   - apiGroups: [""] 
     resources: ["pods/log", "events"] 
     verbs: ["get", "list"]  
--- 
apiVersion: rbac.authorization.k8s.io/v1 
kind: ClusterRoleBinding 
metadata: 
   name: containerHealth-read-logs-global 
roleRef: 
    kind: ClusterRole 
    name: containerHealth-log-reader 
    apiGroup: rbac.authorization.k8s.io 
subjects: 
   - kind: User 
     name: clusterUser 
     apiGroup: rbac.authorization.k8s.io

And deploy it using

kubectl apply -f logreader-rbac.yaml

If you have a Kubernetes cluster that is not configured with Kubernetes RBAC authorization or integrated with Azure AD single sign-on, you do not need to follow the steps above; because Kubernetes authorization uses the kube-api, contributor access is required.

Head over to the AKS cluster on the Azure portal, click on Insights under Monitoring, click on the Containers tab and pick a container to view its live logs or event logs and debug what is going on.

Azure Monitor for Containers: Live Logs

Collect Prometheus metrics (optional)

Note The minimum agent version supported by this feature is microsoft/oms:ciprod07092019 or later.

  1. Run a demo application called “prommetrics-demo” which already has a Prometheus endpoint exposed. Save the YAML below as prommetrics-demo.yaml or download it from prommetrics-demo.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: prommetrics-demo
    labels:
      app: prommetrics-demo
  spec:
    selector:
      app: prommetrics-demo
    ports:
    - name: metrics
      port: 8000
      protocol: TCP
      targetPort: 8000
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080
    type: ClusterIP
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: prommetrics-demo
    labels:
      app: prommetrics-demo
  spec:
    replicas: 4
    selector:
      matchLabels:
        app: prommetrics-demo
    template:
      metadata:
        annotations:
          prometheus.io/scrape: "true"
          prometheus.io/path: "/"
          prometheus.io/port: "8000"
        labels:
          app: prommetrics-demo
      spec:
        containers:
        - name: prommetrics-demo
          image: vishiy/tools:prommetricsv5
          imagePullPolicy: Always
          ports:
          - containerPort: 8000
          - containerPort: 8080

And deploy it using

  kubectl apply -f prommetrics-demo.yaml

This application intentionally returns HTTP 500 (Internal Server Error) responses when traffic is generated, and it exposes a Prometheus metric called prommetrics_demo_requests_counter_total.

  2. Generate traffic to the application by running curl.

Find the pod you just created.

  kubectl get pods | grep prommetrics-demo

  prommetrics-demo-7f455766c4-gmpjb   1/1       Running   0          2m
  prommetrics-demo-7f455766c4-n7554   1/1       Running   0          2m
  prommetrics-demo-7f455766c4-q756r   1/1       Running   0          2m
  prommetrics-demo-7f455766c4-vqncw   1/1       Running   0          2m

Select one of the pods and open a shell in it.

  kubectl exec -it prommetrics-demo-7f455766c4-gmpjb bash

From inside the pod, run curl in a loop to generate traffic.

  while (true); do curl 'http://prommetrics-demo.default.svc.cluster.local:8080'; sleep 5; done 

Note Leave the window open and keep this running. You will see “Internal Server Error” responses, but do not close the window.

  3. Download the configmap template YAML file and apply it to start scraping the metrics.

This configmap is pre-configured to scrape the application pods and collect the Prometheus metric prommetrics_demo_requests_counter_total from the demo application at a 1-minute interval.

Download configmap from configmap.yaml

  interval = "1m"
  fieldpass = ["prommetrics_demo_requests_counter_total"]
  monitor_kubernetes_pods = true

And deploy it using

  kubectl apply -f configmap.yaml

  4. Query the Prometheus metrics and plot a chart.

To access Log Analytics, go to the AKS overview page and click Logs under Monitoring. Copy the query below and run it.

  InsightsMetrics
  | where Name == "prommetrics_demo_requests_counter_total"
  | extend dimensions=parse_json(Tags)
  | extend request_status = tostring(dimensions.request_status)
  | where request_status == "bad"
  | project request_status, Val, TimeGenerated | render timechart

You should be able to plot a chart based on the Prometheus metrics collected.

Azure Monitor for Containers: Prometheus

Resources

Scaling

As popularity of the application grows the application needs to scale appropriately as demand changes. Ensure the application remains responsive as the number of order submissions increases.

Tasks

Run a baseline load test

Task Hints

  • A pre-built image called azch/loadtest is available on Docker Hub; it uses a tool called ‘hey’ to send a large amount of traffic to the capture order API
  • Azure Container Instances can be used to run this image as a container, e.g. using the az container create command.
  • When running it as a Container Instance we don’t want it to restart once it has finished, so set --restart-policy Never
  • Provide the endpoint of your capture order service in the SERVICE_ENDPOINT environment variable, e.g. -e SERVICE_ENDPOINT=https://orders.{ingress-ip}.nip.io
  • You can watch the orders come in on the frontend page, and view the detailed output of the load test with the az container logs command
  • Make a note of the results, response times etc

There is a container image on Docker Hub (azch/loadtest) that is preconfigured to run the load test. You can run it in Azure Container Instances using the command below.

az container create -g <resource-group> -n loadtest --image azch/loadtest --restart-policy Never -e SERVICE_ENDPOINT=https://<hostname order capture service>

This will fire off a series of increasing loads of concurrent users (100, 400, 1600, 3200, 6400) POSTing requests to your Order Capture API endpoint with some wait time in between to simulate an increased pressure on your application.

You may view the logs of the Azure Container Instance streaming logs by running the command below. You may need to wait for a few minutes to get the full logs, or run this command multiple times.

az container logs -g <resource-group> -n loadtest

When you’re done, you may delete it by running

az container delete -g <resource-group> -n loadtest

Make a note of the results (sample below) and figure out the breaking point for the number of users.

Phase 5: Load test - 30 seconds, 6400 users.

Summary:
  Total:	41.1741 secs
  Slowest:	23.7166 secs
  Fastest:	0.8882 secs
  Average:	9.7952 secs
  Requests/sec:	569.1929

  Total data:	1003620 bytes
  Size/request:	43 bytes

Response time histogram:
  0.888 [1]	|
  3.171 [1669]	|■■■■■■■■■■■■■■
  5.454 [1967]	|■■■■■■■■■■■■■■■■■
  7.737 [4741]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.020 [3660]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  12.302 [3786]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  14.585 [4189]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  16.868 [2583]	|■■■■■■■■■■■■■■■■■■■■■■
  19.151 [586]	|■■■■■
  21.434 [151]	|■
  23.717 [7]	|

Status code distribution:
  [200]	23340 responses

Error distribution:
  [96]	Post http://23.96.91.35/v1/order: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

You may use Azure Monitor (previous task) to view the logs and figure out where you need to optimize to increase throughput (requests/sec), reduce average latency and lower the error count.

Azure Monitor container insights

Create Horizontal Pod Autoscaler

Most likely in your initial test, the captureorder container was the bottleneck. So the first step would be to scale it out. There are two ways to do so, you can either manually increase the number of replicas in the deployment, or use Horizontal Pod Autoscaler.

Horizontal Pod Autoscaler allows Kubernetes to detect when your deployed pods need more resources and then it schedules more pods onto the cluster to cope with the demand.

Task Hints

  • The Horizontal Pod Autoscaler (or HPA) is a way for deployments to scale their pods out automatically based on metrics such as CPU.
  • There are two versions of the HPA object, v1 and v2beta2; for this workshop you can work with the v1 version.
  • The kubectl autoscale command can easily set up a HPA for any deployment (a one-line example follows this list); this walkthrough guide has an example you can re-use.
  • Alternatively you can define the HPA object in a YAML file.
  • For the HPA to work, you must add resource requests and limits to your captureorder deployment, if you haven’t already done so. Good values to use are cpu: "500m" (equivalent to half a CPU core) and memory: "256Mi".
  • Validate the HPA with kubectl get hpa and make sure the TARGETS column is not showing <unknown>
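
If you prefer the one-liner, a minimal sketch using kubectl autoscale with the same values as the manifest below (deployment name as created earlier):

kubectl autoscale deployment captureorder --cpu-percent=50 --min=4 --max=10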

Save the YAML below as captureorder-hpa.yaml or download it from captureorder-hpa.yaml

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: captureorder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: captureorder
  minReplicas: 4
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

And deploy it using

kubectl apply -f captureorder-hpa.yaml

Important For the Horizontal Pod Autoscaler to work, you MUST remove the explicit replicas: 2 count from your captureorder deployment and redeploy it and your pods must define resource requests and resource limits.

Run a load test again after applying Horizontal Pod Autoscaler

Task Hints

  • Delete your load test container instance (az container delete) and re-create it to run another test, with the same parameters as before
  • Watch the behavior of the HPA with kubectl get hpa and use kubectl get pod to see the new captureorder pods start, when auto-scaling triggers more replicas
  • Observe the change in load test results

If you didn’t delete the load testing Azure Container Instance, delete it now

az container delete -g <resource-group> -n loadtest

Running the load test again

az container create -g <resource-group> -n loadtest --image azch/loadtest --restart-policy Never -e SERVICE_ENDPOINT=https://<hostname order capture service>

Observe your Kubernetes cluster reacting to the load by running

kubectl get pods -l  app=captureorder
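
You can also watch the autoscaler itself react (assuming the HPA is named captureorder, as in the manifest above):

kubectl get hpa captureorder -w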

Check if your cluster nodes need to scale/autoscale

Task Hints

  • As the HPA scales out with more and more pods, eventually the cluster will run out of resources and you will see pods stuck in the Pending state.
  • You may have to artificially force this situation by increasing the resource request and limit for memory in the captureorder deployment to memory: "4G" or even memory: "2G" (and re-deploy/apply the deployment)
  • If you enabled the cluster autoscaler, you might be able to get the cluster to scale automatically; check the node count with kubectl get nodes
  • If you didn’t enable the autoscaler, you can try manually scaling with the az aks scale command and the --node-count parameter

If your AKS cluster is not configured with the cluster autoscaler, scale the cluster nodes using the command below to the required number of nodes

az aks scale --resource-group <resource-group> --name <unique-aks-cluster-name> --node-count 4

Otherwise, if you configured your AKS cluster with cluster autoscaler, you should see it dynamically adding and removing nodes based on the cluster utilization. To change the node count, use the az aks update command and specify a minimum and maximum value. The following example sets the --min-count to 1 and the --max-count to 5:

az aks update \
  --resource-group <resource-group> \
  --name <unique-aks-cluster-name> \
  --update-cluster-autoscaler \
  --min-count 1 \
  --max-count 5

Resources

Create private highly available container registry

Instead of using the public Docker Hub registry, create your own private container registry using Azure Container Registry (ACR).

Tasks

Create an Azure Container Registry (ACR)

Task Hints

az acr create --resource-group <resource-group> --name <unique-acr-name> --sku Standard --location eastus

Use Azure Container Registry Build to push the container images to your new registry

Task Hints

  • ACR includes a feature called ‘ACR Tasks’ which can build container images remotely in Azure, and store the resulting image in the registry. This guide in the ACR documentation covers how to run a quick build task
  • The source code of the captureorder project is on GitHub at https://github.com/Azure/azch-captureorder.git and can be checked out to your Cloud Shell session with git. It includes the Dockerfile used to build the image.
  • There is no need to run az acr login when working in the Cloud Shell.
  • When running az acr build, be careful with the last parameter, which is the Docker build context directory; it’s usual to cd into the directory where the Dockerfile is located and pass . (a single dot) as the build context
  • You can tag and name the image however you wish, e.g. captureorder:v1 or you might want to use a variable to dynamically tag the image

Note The Azure Cloud Shell is already authenticated against Azure Container Registry. You don’t need to do az acr login which also won’t work on the Cloud Shell because this requires the Docker daemon to be running.

Clone the application code on Azure Cloud Shell

git clone https://github.com/Azure/azch-captureorder.git
cd azch-captureorder

Use Azure Container Registry Build to build and push the container images

az acr build -t "captureorder:{{.Run.ID}}" -r <unique-acr-name> .

Note You’ll get a build ID in a message similar to Run ID: ca3 was successful after 3m14s. Use ca3 in this example as the image tag in the next step.

Configure your application to pull from your private registry

Before you can use an image stored in a private registry you need to ensure your Kubernetes cluster has access to that registry.

Grant the AKS-generated Service Principal access to ACR

Authorize the AKS cluster to connect to the Azure Container Registry.

Task Hints

az aks update -n <aks-cluster-name> -g <resource-group> --attach-acr <your-acr-name>

After you grant your Kubernetes cluster access to your private registry, you can update your deployment with the image you built in the previous step.

Kubernetes is declarative and keeps a manifest of all object resources. Edit your deployment object with the updated image.

Task Hints

  • You can edit a live deployment object in Kubernetes with kubectl edit deployment. It is suggested to run export KUBE_EDITOR="nano" to stop kubectl from using the vi editor.
  • The image name will need to be fully qualified and include your registry name, which will be of the form <name>.azurecr.io/<image>:<tag>

From your Azure Cloud Shell run:

kubectl edit deployment captureorder

Replace the image tag with the location of the new image on Azure Container Registry. Replace <build id> with the ID you got from the message similar to Run ID: ca3 was successful after 3m14s after the build was completed.

Hint kubectl edit launches vim. To search in vim you can type /image: azch/captureorder. Go into insert mode by typing i, and replace the image with <unique-acr-name>.azurecr.io/captureorder:<build id>

Quit the editor by hitting Esc then typing :wq and run kubectl get pods -l app=captureorder -w.
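
Alternatively, if you’d rather avoid an interactive editor, a kubectl set image one-liner achieves the same result (the deployment and container names match the earlier deployment YAML; the registry name and build ID are placeholders for your own values):

kubectl set image deployment/captureorder captureorder=<unique-acr-name>.azurecr.io/captureorder:<build id>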

If access to your private registry was properly granted to your cluster, you will see one pod terminating and a new one being created; the new pod should be up and running within 10 seconds.

Resources

DevOps tasks

Continuous Integration and Continuous Delivery

Your development team are making an increasing number of modifications to your application code. It is no longer feasible to manually deploy updates.

You are required to create a robust DevOps pipeline supporting CI/CD to deploy code changes.

Hint The source code repositories on GitHub contain an azure-pipelines.yml definition that you can use with Azure Pipelines to build and deploy the containers.

Tasks

Fork the captureorder project on GitHub

You’ll need to work on your own copy of the code. Fork the project https://github.com/Azure/azch-captureorder/ on GitHub.

Fork on GitHub

Update the manifest to point to your Azure Container Registry

After you fork the project, make sure to update manifests/deployment.yaml to point to your Azure Container Registry.

Update Azure Container Registry in manifest

Create an Azure DevOps account

Go to https://dev.azure.com and sign-in with your Azure subscription credentials.

If this is your first time to provision an Azure DevOps account, you’ll be taken through a quick wizard to create a new organization.

Getting started with Azure DevOps

Create a project on Azure DevOps

Create a new private project, call it captureorder

Create Azure DevOps project

Enable the new multistage pipeline and new service connection (Preview)

This pipeline is based on the new multi-stage pipelines feature of Azure DevOps Pipelines which is still in preview. Make sure to enable it on your Azure Pipelines account.

Click on your profile icon (top-left), then click on Preview features.

Preview features

Enable the multistage pipelines and new service connection experiences.

Enable multistage pipelines

Create Docker service connection

Now that you have a project, you need to set it up to authenticate to your Azure Container Registry through a service connection.

Go to your project settings page.

Project settings

Open the service connections page from the project settings page and create a new service connection.

Create a service connection

Add a Docker Registry service connection. Select Azure Container Registry and pick your ACR from the list. Name the connection containerRegistryConnection.

Create a service connection

Create a variable group to hold configuration

Now, instead of hard-coding configuration values and secrets into the pipeline or the Kubernetes YAML configuration, you’re going to use a variable group to hold this information.

Go to Pipelines -> Library and create a new variable group.

Create variable group

Name the group captureorder-variables and create a mongoPassword secret holding your MongoDB password. Make sure to click on the lock icon to designate this as a secret.

Create mongoPassword

Afterwards, also add the rest of the variables mongoHost and mongoUser.

Create rest of variables

Save and proceed.
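
If you prefer the command line, the Azure DevOps CLI extension can create an equivalent group. A minimal sketch, assuming you have added the extension with az extension add --name azure-devops and configured your organization and project defaults:

az pipelines variable-group create --name captureorder-variables --variables mongoHost=<mongo host> mongoUser=orders-user

az pipelines variable-group variable create --group-id <group id from the previous command> --name mongoPassword --value <mongo password> --secret true

The portal flow above is what the rest of the workshop assumes; the CLI is just an alternative.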

Create an Environment

Environments are a new concept in Azure DevOps Pipelines. An environment represents a collection of resources such as namespaces within Kubernetes clusters, Azure Web Apps, virtual machines, databases, which can be targeted by deployments from a pipeline. Typical examples of environments include Dev, Test, QA, Staging and Production.

Currently, only the Kubernetes resource type is supported; the environments roadmap includes support for other resources such as virtual machines and databases.

Create your first environment

Create an environment

Create a Kubernetes environment called aksworkshop. This will create a Service Principal on your AKS cluster and store it in the service connections.

Create an aksworkshop environment

Select your AKS cluster, and create a new namespace called dev

Create a dev namespace
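
From your Azure Cloud Shell, you can confirm that the namespace was created on the cluster:

kubectl get namespace dev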

Create a pipeline

Go back to pipelines and create a new pipeline.

Create a pipeline

Walk through the steps of the wizard by first selecting GitHub as the location of your source code.

Where is your code?

You might be redirected to GitHub to sign in. If so, enter your GitHub credentials. When the list of repositories appears, select your repository.

Select repository

You might be redirected to GitHub to install the Azure Pipelines app. If so, select Approve and install.

Install GitHub app

The source code repository already contains a multi-stage pipeline, which Azure Pipelines picks up automatically. Inspect the azure-pipelines.yml file for the full pipeline. We’re relying on a new Azure DevOps feature called “Multi-stage pipelines”. Make sure the Multi-stage pipelines experience is turned on.

Review pipeline

The pipeline is broken into two main stages:

  1. Build
  2. Deploy

Each stage then has one or more jobs, which in turn have one or more tasks. A task is the building block for defining automation in a pipeline. A task is simply a packaged script or procedure that has been abstracted with a set of inputs.
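
As an illustration only (this is not the workshop’s actual azure-pipelines.yml), a multi-stage pipeline with a deployment job targeting the aksworkshop environment follows this general shape:

pool:
  vmImage: ubuntu-latest

stages:
- stage: Build
  jobs:
  - job: Build
    steps:
    - task: Docker@2
      # build and push inputs go here
- stage: Deploy
  jobs:
  - deployment: DeployToAKS
    environment: 'aksworkshop.dev'  # environment name . Kubernetes resource (namespace)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            # deploy inputs go here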

Inspect the variables

There are a couple of variables that are referenced in the pipeline.

variables:
  - group: captureorder-variables # Variable Group containing 'mongoHost', 'mongoUser' and the secret 'mongoPassword'

  - name: dockerRegistryServiceConnection
    value: 'containerRegistryConnection' # make sure it matches the name you used in the service connection
  
  - name: acrEndpoint
    value: 'uniqueacrname.azurecr.io' # replace with container registry endpoint

  - name: tag
    value: '$(Build.BuildId)' # computed at build time

Important Make sure to change the value of the acrEndpoint variable from uniqueacrname.azurecr.io to your Azure Container Registry details, for example acr73505.azurecr.io.

Inspect the Build stage

The Build stage does two things. First, it uses the Docker task to build an image, authenticating with the Docker service connection and tagging the image with the build number.

- task: Docker@2
  displayName: Build and push an image to container registry
  inputs:
    command: buildAndPush
    repository: $(imageRepository)
    dockerfile: '**/Dockerfile'
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: $(tag)

It then publishes the manifests folder, containing the Kubernetes YAML definitions as a pipeline artifact. This is used later in the deployment stage to deploy to Kubernetes.

- task: PublishPipelineArtifact@0
  inputs:
    artifactName: 'manifests'
    targetPath: 'manifests'

Hint

  • Inspect the YAML files in the manifests folder and make sure to replace them with your own deployment files.
  • Following best practices, deployment.yaml loads the MongoDB hostname, username and password from a Kubernetes secret. Make sure to modify your configuration files to follow the same mechanism.

Inspect the Deploy stage and edit the imageRepository variable

The Deploy stage does multiple things. First, it downloads the pipeline artifacts (the manifests folder).

- task: DownloadPipelineArtifact@1
  inputs:
    artifactName: 'manifests'
    downloadPath: '$(System.ArtifactsDirectory)/manifests'

Then, using the Kubernetes manifest task createSecret action, it creates a Kubernetes secret holding the MongoDB host, username and password. Note that the secretArguments follows the same syntax as when creating a secret using kubectl. The values are coming from the variable group you created before.

- task: KubernetesManifest@0
  displayName: Create secret for MongoDB
  inputs:
    action: createSecret
    secretName: mongodb
    secretType: generic
    namespace: $(k8sNamespace)
    secretArguments: --from-literal=mongoHost=$(mongoHost) --from-literal=mongoUser=$(mongoUser) --from-literal=mongoPassword=$(mongoPassword)
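
For reference, the equivalent one-off command from kubectl would look similar to the below (assuming the dev namespace and with the real values substituted):

kubectl create secret generic mongodb --namespace dev --from-literal=mongoHost=<mongo host> --from-literal=mongoUser=<mongo user> --from-literal=mongoPassword=<mongo password>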

The next task then deploys all the YAML files in the manifests folder. It automatically overrides the image name you defined in the deployment.yaml file to append the current build ID as a tag to the image.

Note For this to work, make sure the value of acrEndpoint in the pipeline matches the image repository in your deployment.yaml file, for example uniqueacrname.azurecr.io.

- task: KubernetesManifest@0
  displayName: Deploy to Kubernetes cluster
  inputs:
    action: deploy
    namespace: $(k8sNamespace)
    manifests: $(System.ArtifactsDirectory)/manifests/*
    containers: '$(acrEndpoint)/captureorder:$(tag)'

Run your pipeline

To save your pipeline, click on Run. You will be able to view the status of the pipeline execution.

Run the pipeline

The build stage will build the container and push it to Azure Container Registry.

Build stage

The deploy stage will create the secrets and deploy the manifests to the dev namespace of your AKS cluster.

Deploy stage

Troubleshooting: If your deployment times out, you can inspect what’s going on with kubectl get pods --namespace dev and kubectl get events --namespace dev. If you get an ImagePullBackOff, this probably means you missed updating the repository with your own Azure Container Registry in manifests/deployment.yaml. Make sure to update this in your code fork.

Update deployment repository
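
For convenience, the troubleshooting commands mentioned above are repeated here; kubectl describe on a failing pod is also often useful:

kubectl get pods --namespace dev
kubectl get events --namespace dev
kubectl describe pod <pod name> --namespace dev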

Verify environments

If you go back to the environments tab, you should see that your environment has the latest build deployed.

Verify environments

Click through to the workloads, you should also see the Kubernetes deployment with 2 pods running.

Verify workload

Click through to the services, you should see the deployed services.

Verify service

You can also view the pod logs to make sure that everything is working as planned.

Verify service

Resources

Advanced tasks

The below tasks can be done in any order. You’re not expected to do all of them, pick what you’d like to try out!

New cluster: AKS with Virtual Nodes

To rapidly scale application workloads in an Azure Kubernetes Service (AKS) cluster, you can use Virtual Nodes. With Virtual Nodes, you have quick provisioning of pods, and only pay per second for their execution time. You don’t need to wait for Kubernetes cluster autoscaler to deploy VM compute nodes to run the additional pods.

Note

  • We will be using virtual nodes to scale out our API using Azure Container Instances (ACI).
  • These ACI instances will be in a private VNET, so we must deploy a new AKS cluster with advanced networking.
  • Take a look at the documentation for regional availability and known limitations

Tasks

Create a virtual network and subnet

Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To provide this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using advanced networking.

Create a VNET

az network vnet create \
    --resource-group <resource-group> \
    --name myVnet \
    --address-prefixes 10.0.0.0/8 \
    --subnet-name myAKSSubnet \
    --subnet-prefix 10.240.0.0/16

And an additional subnet

az network vnet subnet create \
    --resource-group <resource-group> \
    --vnet-name myVnet \
    --name myVirtualNodeSubnet \
    --address-prefix 10.241.0.0/16

Create a service principal and assign permissions to VNET

Create a service principal

az ad sp create-for-rbac --skip-assignment

Output will look similar to the below. You will use the appId and password in the next step.

{
  "appId": "7248f250-0000-0000-0000-dbdeb8400d85",
  "displayName": "azure-cli-2017-10-15-02-20-15",
  "name": "http://azure-cli-2017-10-15-02-20-15",
  "password": "77851d2c-0000-0000-0000-cb3ebc97975a",
  "tenant": "72f988bf-0000-0000-0000-2d7cd011db47"
}

Assign permissions. We will use this same SP to create our AKS cluster.

APPID=<replace with above>
PASSWORD=<replace with above>

VNETID=$(az network vnet show --resource-group <resource-group> --name myVnet --query id -o tsv)

az role assignment create --assignee $APPID --scope $VNETID --role Contributor

Get the latest Kubernetes version available in AKS

Get the latest available Kubernetes version in your preferred region into a bash variable. Replace <region> with the region of your choosing, for example eastus.

VERSION=$(az aks get-versions -l <region> --query 'orchestrators[-1].orchestratorVersion' -o tsv)

Register the Azure Container Instances service provider

If you have not previously used ACI, register the service provider with your subscription. You can check the status of the ACI provider registration using the az provider list command, as shown in the following example:

az provider list --query "[?contains(namespace,'Microsoft.ContainerInstance')]" -o table

The Microsoft.ContainerInstance provider should report as Registered, as shown in the following example output:

Namespace                    RegistrationState
---------------------------  -------------------
Microsoft.ContainerInstance  Registered

If the provider shows as NotRegistered, register the provider using the az provider register command, as shown in the following example:

az provider register --namespace Microsoft.ContainerInstance

Create the new AKS Cluster

Set the SUBNET variable to the one created above.

SUBNET=$(az network vnet subnet show --resource-group <resource-group> --vnet-name myVnet --name myAKSSubnet --query id -o tsv)

Create the cluster. Replace the name with a new, unique name.

Note: You may need to validate the variables below to ensure they are all set properly.
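
A quick way to validate VERSION, APPID and SUBNET is to echo them; every value should be non-empty before you continue (PASSWORD should also still be set from the earlier step):

echo $VERSION
echo $APPID
echo $SUBNET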

az aks create \
    --resource-group <resource-group> \
    --name <unique-aks-cluster-name> \
    --node-count 3 \
    --kubernetes-version $VERSION \
    --network-plugin azure \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --vnet-subnet-id $SUBNET \
    --service-principal $APPID \
    --client-secret $PASSWORD \
    --no-wait

Once completed, validate that your cluster is up and get your credentials to access the cluster.

az aks get-credentials -n <your-aks-cluster-name> -g <resource-group>
kubectl get nodes

Enable virtual nodes

Add Azure CLI extension.

az extension add --source https://aksvnodeextension.blob.core.windows.net/aks-virtual-node/aks_virtual_node-0.2.0-py2.py3-none-any.whl

Enable the virtual node in your cluster.

az aks enable-addons \
    --resource-group <resource-group> \
    --name <your-aks-cluster-name> \
    --addons virtual-node \
    --subnet-name myVirtualNodeSubnet

Verify the node is available.

kubectl get node

NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-30482081-0   Ready    agent   30m   v1.11.5
aks-nodepool1-30482081-1   Ready    agent   30m   v1.11.5
aks-nodepool1-30482081-2   Ready    agent   30m   v1.11.5
virtual-node-aci-linux     Ready    agent   11m   v1.13.1-vk-v0.7.4-44-g4f3bd20e-dev

Deploy MongoDB and the Capture Order API on the new cluster

Repeat the steps in the Deploy MongoDB to deploy the database on your new cluster. Repeat the steps in the Deploy Order Capture API to deploy the API on your new cluster on traditional nodes.
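
As a reminder, the Helm command used earlier to deploy MongoDB looked similar to the below (adjust the values if you used different credentials):

helm install orders-mongo stable/mongodb --set mongodbUsername=orders-user,mongodbPassword=orders-password,mongodbDatabase=akschallenge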

Create a new Capture Order API deployment targeting the virtual node

Save the YAML below as captureorder-deployment-aci.yaml or download it from captureorder-deployment-aci.yaml

Be sure to replace the environment variables in the YAML to match your environment:

  • TEAMNAME
  • CHALLENGEAPPINSIGHTS_KEY
  • MONGOHOST
  • MONGOUSER
  • MONGOPASSWORD

apiVersion: apps/v1
kind: Deployment
metadata:
  name: captureorder-aci
spec:
  selector:
      matchLabels:
        app: captureorder
  template:
      metadata:
        labels:
            app: captureorder
      spec:
        containers:
        - name: captureorder
          image: azch/captureorder
          imagePullPolicy: Always
          env:
          - name: TEAMNAME
            value: "team-azch"
          #- name: CHALLENGEAPPINSIGHTS_KEY # uncomment and set value only if you've been provided a key
          #  value: "" # uncomment and set value only if you've been provided a key
          - name: MONGOHOST
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: mongoHost
          - name: MONGOUSER
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: mongoUser
          - name: MONGOPASSWORD
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: mongoPassword
          ports:
          - containerPort: 8080
        nodeSelector:
          kubernetes.io/role: agent
          beta.kubernetes.io/os: linux
          type: virtual-kubelet
        tolerations:
        - key: virtual-kubelet.io/provider
          operator: Exists
        - key: azure.com/aci
          effect: NoSchedule

Deploy it.

kubectl apply -f captureorder-deployment-aci.yaml

Note the added nodeSelector and tolerations sections, which tell Kubernetes to schedule this deployment on the virtual node backed by Azure Container Instances (ACI).

Validate ACI instances

You can browse in the Azure Portal and find your Azure Container Instances deployed.

You can also see them in your AKS cluster:

kubectl get pod -l app=captureorder

NAME                                READY   STATUS    RESTARTS   AGE
captureorder-5cbbcdfb97-wc5vd       1/1     Running   1          7m
captureorder-aci-5cbbcdfb97-tvgtp   1/1     Running   1          2m

You can scale each deployment up or down and validate that each is functioning.

kubectl scale deployment captureorder --replicas=0

kubectl scale deployment captureorder-aci --replicas=5

Test the endpoint.

curl -d '{"EmailAddress": "email@domain.com", "Product": "prod-1", "Total": 100}' -H "Content-Type: application/json" -X POST http://[Your Service Public LoadBalancer IP]/v1/order
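
If you need to look up the public IP again, you can query the service (this assumes the Capture Order API service created earlier is named captureorder):

kubectl get service captureorder -o jsonpath='{.status.loadBalancer.ingress[0].ip}'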

MongoDB replication using a StatefulSet

Now that you have scaled the API replicas, it may be time to scale MongoDB as well. If you used the typical command to deploy MongoDB using Helm, you most likely deployed a single instance of MongoDB running in a single container. In this section, you’ll redeploy the chart with “replicaSet” enabled.

The MongoDB chart supports deploying a MongoDB replica set, using a Kubernetes StatefulSet for the secondary nodes. A replica set in MongoDB provides redundancy and high availability and, in some cases, increased read capacity, as clients can send read operations to different servers.

Tasks

Upgrade the MongoDB Helm release to use replication

helm upgrade orders-mongo stable/mongodb --set replicaSet.enabled=true,mongodbUsername=orders-user,mongodbPassword=orders-password,mongodbDatabase=akschallenge

Verify how many secondaries are running

kubectl get pods -l app=mongodb

You should get a result similar to the below

NAME                               READY   STATUS    RESTARTS   AGE
orders-mongo-mongodb-arbiter-0     1/1     Running   1          3m
orders-mongo-mongodb-primary-0     1/1     Running   0          2m
orders-mongo-mongodb-secondary-0   1/1     Running   0          3m

Now scale the secondaries using the command below.

kubectl scale statefulset orders-mongo-mongodb-secondary --replicas=3

You should now end up with 3 MongoDB secondary replicas similar to the below

NAME                               READY   STATUS              RESTARTS   AGE
orders-mongo-mongodb-arbiter-0     1/1     Running             3          8m
orders-mongo-mongodb-primary-0     1/1     Running             0          7m
orders-mongo-mongodb-secondary-0   1/1     Running             0          8m
orders-mongo-mongodb-secondary-1   0/1     Running             0          58s
orders-mongo-mongodb-secondary-2   0/1     Running             0          58s
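
The new secondaries take a minute or two to become Ready; you can watch them with:

kubectl get pods -l app=mongodb -w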

Resources

Use Azure Key Vault for secrets

Kubernetes provides a primitive, Secrets, which can be used to store sensitive information and later retrieve it as an environment variable or a volume mounted into memory. If you have tighter security requirements that Kubernetes Secrets don’t quite meet yet, for example an audit trail of all interactions with the keys, version control, or FIPS compliance, you’ll need to use an external key vault.

There are a couple of options to accomplish this including Azure Key Vault and HashiCorp Vault. In this task, you’ll use Azure Key Vault to store the MongoDB password.

The captureorder application can be configured to read the MongoDB password from either an environment variable or from the file system. This task is focused on configuring the captureorder container running in AKS to read the MongoDB password from a secret stored in Azure Key Vault using the Kubernetes FlexVolume plugin for Azure Key Vault.

Key Vault FlexVolume for Azure allows you to mount multiple secrets, keys, and certs stored in Azure Key Vault into pods as an in-memory volume. Once the volume is attached, the data in it is mounted into the container’s file system in tmpfs.

Tasks

Create an Azure Key Vault

Azure Key Vault names are unique. Replace <unique keyvault name> with a unique name between 3 and 24 characters long.

az keyvault create --resource-group <resource-group> --name <unique keyvault name>

Store the MongoDB password as a secret

Replace orders-password with the password for MongoDB.

az keyvault secret set --vault-name <unique keyvault name> --name mongo-password --value "orders-password"

Create Service Principal to access Azure Key Vault

The Key Vault FlexVolume driver offers two modes for accessing a Key Vault instance: Service Principal and Pod Identity. In this task, we’ll create a Service Principal that the driver will use to access the Azure Key Vault instance.

Replace <name> with a service principal name that is unique in your organization.

az ad sp create-for-rbac --name "http://<name>" --skip-assignment

You should get back something like the below; make a note of the appId and password.

{
  "appId": "9xxxxxb-bxxf-xx4x-bxxx-1xxxx850xxxe",
  "displayName": "<name>",
  "name": "http://<name>",
  "password": "dxxxxxx9-xxxx-4xxx-bxxx-xxxxe1xxxx",
  "tenant": "7xxxxxf-8xx1-41af-xxxb-xx7cxxxxxx7"
}

Ensure the Service Principal has all the required permissions to access secrets in your Key Vault instance

Retrieve your Azure Key Vault ID and store it in a variable KEYVAULT_ID, replacing <unique keyvault name> with your Azure Key Vault name.

KEYVAULT_ID=$(az keyvault show --name <unique keyvault name> --query id --output tsv)

Create the role assignment, replacing "http://<name>" with your service principal name that was created earlier, for example "http://sp-captureorder".

az role assignment create --role Reader --assignee "http://<name>" --scope $KEYVAULT_ID

Configure Azure Key Vault to allow access to secrets using the Service Principal you created

Apply the policy on the Azure Key Vault, replacing the <unique keyvault name> with your Azure Key Vault name, and <appId> with the appId above.

az keyvault set-policy -n <unique keyvault name> --secret-permissions get --spn <appId>

Create a Kubernetes secret to store the Service Principal created earlier

Add your service principal credentials as a Kubernetes secret accessible by the Key Vault FlexVolume driver. Replace the <appId> and <password> with the values you got above.

kubectl create secret generic kvcreds --from-literal clientid=<appId> --from-literal clientsecret=<password> --type=azure/kv

Deploy Key Vault FlexVolume for Kubernetes into your AKS cluster

Install the KeyVault FlexVolume driver

kubectl create -f https://raw.githubusercontent.com/Azure/kubernetes-keyvault-flexvol/master/deployment/kv-flexvol-installer.yaml

To validate the installer is running as expected, run the following commands:

kubectl get pods -n kv

You should see the keyvault flexvolume pods running on each agent node:

keyvault-flexvolume-f7bx8   1/1       Running   0          3m
keyvault-flexvolume-rcxbl   1/1       Running   0          3m
keyvault-flexvolume-z6jm6   1/1       Running   0          3m

Retrieve the Azure subscription/tenant ID where the Azure Key Vault is deployed

You’ll need both to configure the Key Vault FlexVolume driver in the next step.

Retrieve your Azure subscription ID and keep it.

az account show --query id --output tsv

Retrieve your Azure tenant ID and keep it.

az account show --query tenantId --output tsv
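
Alternatively, capture both values into variables so you can reuse them when filling in the placeholders later, for example (the variable names here are only illustrative):

SUBSCRIPTION_ID=$(az account show --query id --output tsv)
TENANT_ID=$(az account show --query tenantId --output tsv)
echo $SUBSCRIPTION_ID $TENANT_ID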

Modify the captureorder deployment to read the secret from the FlexVolume

The captureorder application can read the MongoDB password from an environment variable MONGOPASSWORD or from a file on disk at /kvmnt/mongo-password if the environment variable is not set (see code if you’re interested).

In this task, you’re going to modify the captureorder deployment manifest to remove the MONGOPASSWORD environment variable and add the FlexVol configuration.

Edit your captureorder-deployment.yaml by removing the MONGOPASSWORD from the env: section of the environment variables.

- name: MONGOPASSWORD
  valueFrom:
    secretKeyRef:
      name: mongodb
      key: mongoPassword

Add the below volumes definition to the configuration, which defines a FlexVolume called mongosecret using the Azure Key Vault driver. The driver will look for a Kubernetes secret called kvcreds which you created in an earlier step in order to authenticate to Azure Key Vault.

volumes:
  - name: mongosecret
    flexVolume:
      driver: "azure/kv"
      secretRef:
        name: kvcreds
      options:
        usepodidentity: "false"
        keyvaultname: <unique keyvault name>
        keyvaultobjectnames: mongo-password # Name of Key Vault secret
        keyvaultobjecttypes: secret
        resourcegroup: <kv resource group>
        subscriptionid: <kv azure subscription id>
        tenantid: <kv azure tenant id>

Mount the mongosecret volume to the pod at /kvmnt

volumeMounts:
  - name: mongosecret
    mountPath: /kvmnt
    readOnly: true

You’ll need to replace the placeholders with the values mapping to your configuration.

The final deployment file should look like so. Save the YAML below as captureorder-deployment-flexvol.yaml or download it from captureorder-deployment-flexvol.yaml. Make sure to replace the placeholders with values for your configuration.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: captureorder
spec:
  selector:
      matchLabels:
        app: captureorder
  replicas: 2
  template:
      metadata:
        labels:
            app: captureorder
      spec:
        containers:
        - name: captureorder
          image: azch/captureorder
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              port: 8080
              path: /healthz
          livenessProbe:
            httpGet:
              port: 8080
              path: /healthz
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
          - name: TEAMNAME
            value: "team-azch"
          #- name: CHALLENGEAPPINSIGHTS_KEY # uncomment and set value only if you've been provided a key
          #  value: "" # uncomment and set value only if you've been provided a key
          - name: MONGOHOST
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: mongoHost
          - name: MONGOUSER
            valueFrom:
              secretKeyRef:
                name: mongodb
                key: mongoUser
          ports:
          - containerPort: 8080
          volumeMounts:
          - name: mongosecret
            mountPath: /kvmnt
            readOnly: true
        volumes:
        - name: mongosecret
          flexVolume:
            driver: "azure/kv"
            secretRef:
              name: kvcreds
            options:
              usepodidentity: "false"
              keyvaultname: <unique keyvault name>
              keyvaultobjectnames: mongo-password # Name of Key Vault secret
              keyvaultobjecttypes: secret
              keyvaultobjectversions: ""     # [OPTIONAL] list of KeyVault object versions (semi-colon separated), will get latest if empty
              resourcegroup: <kv resource group>
              subscriptionid: <kv azure subscription id>
              tenantid: <kv azure tenant id>

And deploy it using

kubectl apply -f captureorder-deployment-flexvol.yaml

Apply your changes.

Verify that everything is working

Once you apply the configuration, validate that the captureorder pod loaded the secret from Azure Key Vault and that you can still process orders. You can also exec into one of the captureorder pods and verify that the MongoDB password has been mounted at /kvmnt/mongo-password.

# Get the pod name.
kubectl get pod -l app=captureorder

# Exec into the pod and view the mounted secret.
kubectl exec <podname> cat /kvmnt/mongo-password

The last command will return "orders-password".

Resources

Clean up

Once you’re done with the workshop, make sure to delete the resources you created. You can read through manage Azure resources by using the Azure portal or manage Azure resources by using Azure CLI for more details.
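
For example, if you created everything inside a single resource group, deleting that group removes all of the resources in it (this cannot be undone):

az group delete --name <resource-group> --yes --no-wait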

Proctor notes

Creating the Challenge Application Insights resource

You can quickly create the Application Insights resource required to track the challenge progress and plot the results by running the below code:

az resource create \
    --resource-group <resource-group> \
    --resource-type "Microsoft.Insights/components" \
    --name akschallengeproctor \
    --location eastus \
    --properties '{"Application_Type":"web"}'  

Note Provide the Instrumentation Key to the attendees so that they can use it to fill in the CHALLENGEAPPINSIGHTS_KEY environment variable.

Throughput scoring

On the Azure Portal, navigate to the Application Insights resource you created, click on Analytics

Click on Analytics

Then you can periodically run the query below, to view which team has the highest successful requests per second (RPS).

requests
| where success == "True"
| where customDimensions["team"] != "team-azch"
| summarize rps = count(id) by bin(timestamp, 1s), tostring(customDimensions["team"])
| summarize maxRPS = max(rps) by customDimensions_team
| order by maxRPS desc
| render barchart

Bar chart of requests per second

Changelog

All notable changes to this workshop will be documented in this file.

2019-11-13

Changed

  • Removed preview references to the AKS Cluster Auto Scaler now GA.

2019-10-10

Changed

  • Added “Task Hints” for each task with links and guidance to hopefully make the task achievable without clicking “toggle solution”
  • Added diagrams showing how the system should look after each part (section 2 only)
  • Changed ACR connection method using new --attach-acr command

2019-10-04

Changed

  • Updated AKS and ACR authentication instructions

2019-07-15

Changed

  • Updated Log Analytics instructions

2019-07-11

Added

  • Prometheus metric collection using Azure Monitor

2019-07-09

Added

  • Missing instruction to update ACR repository in the forked code

2019-06-24

Added

  • Instructions to enable multi-stage pipeline preview

Changed

  • Updated capture order app to load MongoDB details from a Kubernetes secret

2019-05-24

Added

  • Added changelog

Removed

  • Removed Helm section from DevOps until it is reworked into the new multi-stage pipelines experience

Changed

  • Updated Azure DevOps section with the new multi-stage pipelines experience

2019-04-29

Fixed

  • Missing enable-vmss flag

Changed

  • Clarified when to use the liveness check hint

2019-04-24

Fixed

  • Added missing line breaks “\” to az aks create command

Changed

  • Updated proctor notes for using Application Insights key
  • Updated Azure Key Vault FlexVol instructions

2019-04-23

Changed

  • Updated prerequisite section
  • Simplified scoring section

2019-04-19

Added

  • Added cluster autoscaler section
  • Added clean up section

Removed

  • Load testing with VSTS load tests because it is deprecated

Fixed

  • Fixed command to get the latest Kubernetes version

Changed

  • Workshop can be done exclusively using Azure Cloud Shell

2019-04-02

Added

  • Screenshots to Azure DevOps variables tab

Fixed

  • Fixed command to observe pods after load to use the correct label app=captureorder

2019-03-15

Added

  • Added Azure Key Vault Flex Vol task

2019-03-14

Changed

  • Updated commands to always use the latest Kubernetes version

Fixed

  • Fixed sed command syntax in CI/CD task

2019-02-21

Changed

  • Updated scoring to use Requests Per Second (RPS)

2019-02-13

Added

  • Added Ingress section
  • Added SSL/TLS section

Changed

  • Updated DNS section

2019-01-24

Added

  • Virtual Nodes section

Contributors

The following people have contributed to this workshop, thanks!