46 Rolling Back or Rolling Forward
In this lesson, we will discuss different scenarios to help us decide whether to roll back the Deployment or roll
forward.
After all, how much time would it take you to fix a problem caused by only a
few hours of work (maybe a day) and discovered minutes after you
committed? Probably not much. The problem was introduced by a very recent
change that is still fresh in the engineer's head. Fixing it should not take long,
and we should be able to deploy a new release soon.
On the other hand, you might not have frequent releases, or a release might
include more than a couple of hundred lines of changed code. In such a case,
rolling forward might not be as fast as it should be. Even then, rolling back
might not be possible at all (for example, when a release includes
irreversible database schema changes).
We did our best to discourage you from rolling back. Still, in some cases that is
a better option. In others, that might be the only option. Luckily, rolling back
is reasonably straightforward with Kubernetes.
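A minimal sketch of a rollback, assuming the Deployment is defined in the deploy/go-demo-2-api.yml file used throughout this chapter:

```shell
# Revert the Deployment to the previous revision.
# Kubernetes scales the old ReplicaSet up and the new one down.
kubectl rollout undo \
    -f deploy/go-demo-2-api.yml

# Wait until the rollback finishes before proceeding.
kubectl rollout status \
    -f deploy/go-demo-2-api.yml
```

Both subcommands accept the same `-f` (`--filename`) argument as the commands we used to create and update the Deployment, so there is no need to type the resource name by hand.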
kubectl describe \
-f deploy/go-demo-2-api.yml
The output of the describe command, limited to the last few lines, is as follows.
OldReplicaSets: <none>
NewReplicaSet: go-demo-2-api-68df567fb5 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 6m deployment-controller Scaled up replica set go-de
Normal ScalingReplicaSet 6m deployment-controller Scaled down replica set go-
Normal ScalingReplicaSet 6m deployment-controller Scaled up replica set go-de
Normal ScalingReplicaSet 6m deployment-controller Scaled down replica set go-
Normal ScalingReplicaSet 6m deployment-controller Scaled up replica set go-de
Normal ScalingReplicaSet 6m deployment-controller Scaled down replica set go-
Normal DeploymentRollback 1m deployment-controller Rolled back deployment "go-
Normal ScalingReplicaSet 1m deployment-controller Scaled up replica set go-de
Normal ScalingReplicaSet 1m deployment-controller Scaled down replica set go-
Normal ScalingReplicaSet 1m (x2 over 6m) deployment-controller Scaled up replica set go-de
Normal ScalingReplicaSet 1m (x3 over 1m) deployment-controller (combined from similar even
We can see from the events section that the Deployment initiated a rollback
and, from there on, the process we experienced before was reversed. It
started increasing the replicas of the older ReplicaSet and decreasing those
of the newer one. Once the process finished, the older ReplicaSet became
active with all the replicas, and the newer one was scaled down to zero.
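You can confirm that scaling pattern yourself by listing the ReplicaSets; a sketch:

```shell
# After the rollback, the older ReplicaSet should report the
# desired number of replicas, and the newer one should show zero.
kubectl get rs
```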
The end result might be easier to see from the NewReplicaSet entry located
just above Events. Before we undid the rollout, the value was
go-demo-2-api-68c75f4f5, and now it's go-demo-2-api-68df567fb5.
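The revision list that follows can be produced with the rollout history subcommand; a sketch, assuming the same Deployment file:

```shell
# Show the recorded revisions of the Deployment, together with the
# change causes (populated because we used --record earlier).
kubectl rollout history \
    -f deploy/go-demo-2-api.yml
```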
REVISION CHANGE-CAUSE
2 kubectl set image api=vfarcic/go-demo-2:2.0 --filename=deploy/go-demo-2-api.yml
3 kubectl create --filename=deploy/go-demo-2-api.yml --record=true
If you look at the third revision, you'll notice that the change cause is the same
command we used to create the Deployment the first time. Before we
executed kubectl rollout undo, we had two revisions: 1 and 2. The undo
command rolled us back to the second-to-last revision (1), which was then
renumbered as the new latest revision (3).
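By default, undo reverts to the previous revision. If you ever need to jump to an older one, the subcommand accepts an explicit revision number; a sketch (the revision 2 below is only an illustration):

```shell
# Roll back to a specific revision instead of the previous one.
kubectl rollout undo \
    -f deploy/go-demo-2-api.yml \
    --to-revision=2
```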
In the next lesson, we will try playing around further with the existing
Deployment in our cluster.