NB: Suggestions, constructive criticism, and (especially) PRs are most welcome!
Tested on AWS and Azure with:
- Konvoy v1.4.2
- KUDO v0.12.0
- Kafka Operator Version v1.2.0
- Zookeeper Operator Version v0.3.0
- This demo assumes you have `kubectl` installed and connected to a Konvoy cluster.
- Install the KUDO CLI plugin for `kubectl` via `brew install kudo-cli` (or `brew upgrade kudo-cli`), OR via `kubectl krew install kudo` (or `kubectl krew upgrade kudo`).
- Confirm you are running the latest KUDO CLI (v0.12.0) via `kubectl kudo --version`.
- To execute in an airgapped environment, please follow the airgapped instructions before proceeding.
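A quick preflight sketch for the prerequisites above (it only checks that a tool is on your PATH; it does not verify cluster connectivity):

```shell
# Print "ok" or "MISSING" for a CLI tool this demo relies on.
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: ok"
  else
    echo "$1: MISSING"
  fi
}

check_cmd kubectl
```

To verify the KUDO plugin itself, run `kubectl kudo --version` as described above.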
- Navigate to Grafana
- Hover over `+` in the left-hand side nav bar
- Select 'Import'
- Copy and paste the JSON found here
- Click `Upload`
- Select `Prometheus` as the data source
- Confirm you do not have an earlier version of KUDO deployed to your cluster by running `kubectl get ns kudo-system`. If kubectl responds that the namespace was not found, proceed to the next step. If you discover that KUDO is already installed in your cluster, please begin by deleting the KUDO instance on your cluster: `kubectl kudo init --dry-run -o yaml | kubectl delete -f -`
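The namespace check above can be wrapped in a small guard; a minimal sketch, assuming `kubectl` is already pointed at the target cluster:

```shell
# Succeeds only when the kudo-system namespace does not exist,
# i.e. when it is safe to run `kubectl kudo init`.
kudo_absent() {
  ! kubectl get ns kudo-system >/dev/null 2>&1
}

if kudo_absent; then
  echo "no existing KUDO install found - safe to proceed"
else
  echo "KUDO already installed - delete it first (see above)"
fi
```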
- Install KUDO on your cluster: `kubectl kudo init --wait`
- Next, install Zookeeper, which is a dependency for Kafka: `kubectl kudo install zookeeper --wait`
- Wait for all 3 Zookeeper pods to be `RUNNING` and `READY`
- Install Kafka with its service monitor enabled: `kubectl kudo install kafka --instance=kafka -p ADD_SERVICE_MONITOR=true --wait`
- Wait for all 3 Kafka brokers to be `RUNNING` and `READY`
- Install Cassandra with its Prometheus exporter enabled: `kubectl kudo install cassandra --instance=cassandra -p PROMETHEUS_EXPORTER_ENABLED=true --wait`
- Wait for all 3 Cassandra nodes to be `RUNNING` and `READY`
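The three "wait until `RUNNING` and `READY`" steps above can be scripted instead of watched by hand. A minimal sketch; the label selectors for the KUDO-created pods are assumptions, so confirm the real labels with `kubectl get pods --show-labels`:

```shell
# Count pods matching a label selector whose Ready condition is True.
count_ready() {
  kubectl get pods -l "$1" \
    -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' \
    | grep -c True
}

# Block until at least $2 pods matching selector $1 are Ready.
wait_ready() {
  until [ "$(count_ready "$1")" -ge "$2" ]; do
    sleep 5
  done
}

# Example (selector assumed): wait_ready app=zookeeper 3
```

`kubectl wait --for=condition=Ready pod -l <selector> --timeout=600s` is a built-in alternative if you prefer a one-liner.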
- Deploy the demo application components:

```
kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/kafka-client-api/kafka-client-api.yaml
kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/svelte-ui/svelte-client.yaml
kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/kafka-node-js-api/kafka-node-js-api.yaml
kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/kafka-dummy-actors/kafka-dummy-actor.yaml
kubectl apply -f https://raw.githubusercontent.com/tbaums/konvoy-kudo-studio/master/kafka-cassandra-connector/kafka-cassandra-connector.yaml
```
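Optionally, confirm the workloads rolled out before moving on. A sketch using the two deployment names that appear in the scaling steps later in this demo (the other deployment names are not listed here, so check `kubectl get deploy` for the full set):

```shell
# Wait for each named deployment to finish rolling out.
check_rollouts() {
  for d in "$@"; do
    kubectl rollout status deploy/"$d" --timeout=300s || return 1
  done
}

# check_rollouts kafka-client-api kafka-dummy-actor
```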
- Visit <your AWS elb>/svelte
- Click 'Manufacturing & IoT' in the Nav bar
- Explain demo architecture
- Click '-' button to collapse architecture diagram
- Click button 'Click me to start fetch'
- Run `kubectl scale deploy kafka-dummy-actor --replicas=1`
- Observe a single actor on the map (left) and in the actor list (right).
- Run `kubectl scale deploy kafka-dummy-actor --replicas=7` to see the list fill in real time and observe the actors moving around the map.
- Click 'User Research' in the Nav bar
- Explain demo architecture
- Click '-' button to collapse architecture diagram
- Open browser 'Network' panel and reload the page. (Right click on the page, and select "Inspect Element". Then, select the 'Network' panel tab. Reload the page to start capturing network traffic.)
- Move mouse across left-hand screenshot
- Explain that each mouse movement captured by the browser is posted directly to the Python Kafka API server, via an endpoint exposed through Traefik
- Observe Node.js Kafka API reading from Kafka queue and returning the mouse movements in the right-hand screenshot
- Observe POST request duration (should be ~500ms)
To demonstrate the power of granular microservice scaling, first we need to generate more load on the Python Kafka API. We will then observe POST request times increase. Lastly, we will scale the Python Kafka API and observe POST request times return to normal.
From the User Research screen (assumes the demo steps above are completed):

- Run `kubectl scale deploy kafka-dummy-actor --replicas=70`
- Move mouse across left-hand panel
- Observe POST request duration in browser's Network panel (should be >1000ms)
- Scale the Python Kafka API: `kubectl scale deploy kafka-client-api --replicas=5`
- Observe POST request duration (should return to ~500ms)
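The scaling sequence above can also be driven from the CLI rather than the browser's Network panel; a sketch in which the POST endpoint URL is a placeholder (the real path behind `<your AWS elb>` is not specified here):

```shell
# Print total request time (seconds) for one POST to the given URL.
post_latency() {
  curl -s -o /dev/null -w '%{time_total}\n' -X POST "$1"
}

# 1. Generate load:   kubectl scale deploy kafka-dummy-actor --replicas=70
# 2. Measure latency: post_latency "http://<your AWS elb>/<api-path>"
# 3. Scale the API:   kubectl scale deploy kafka-client-api --replicas=5
# 4. Re-measure:      post_latency "http://<your AWS elb>/<api-path>"
```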