DevOps Engineer Questions
By Aaradhya Bodade
When looking to fill DevOps roles, organizations look for a clear set of skills. The most
important of these are:
Experience with infrastructure automation tools like Chef, Puppet, Ansible, SaltStack or
Windows PowerShell DSC.
Fluency in web languages like Ruby, Python, PHP or Java.
Interpersonal skills that help you communicate and collaborate across teams and roles.
If you have the above skills, then you are ready to start preparing for your DevOps interview!
If not, don’t worry – our DevOps certification training will help you master DevOps.
In order to structure the questions below, I put myself in your shoes. Most of the answers in
this blog are written from your perspective, i.e. someone who is a potential DevOps expert. I
have also segregated the questions in the following manner:
Q1. What are the fundamental differences between DevOps & Agile?
The differences between the two are listed in the table below.

| Features | DevOps | Agile |
| --- | --- | --- |
| Agility | Agility in both Development & Operations | Agility in only Development |
| Processes/Practices | Involves processes such as CI, CD, CT, etc. | Involves practices such as Agile Scrum, Agile Kanban, etc. |
| Key Focus Area | Timeliness & quality have equal priority | Timeliness is the main priority |
| Release Cycles/Development Sprints | Smaller release cycles with immediate feedback | Smaller release cycles |
| Source of Feedback | Feedback is from self (monitoring tools) | Feedback is from customers |
| Scope of Work | Agility & need for Automation | Agility only |
Q2. What is the need for DevOps?
According to me, this answer should start by explaining the general market trend. Instead of
releasing big sets of features, companies are trying to see if small features can be delivered
to their customers through a series of release trains. This has many advantages, like quick
feedback from customers and better software quality, which in turn leads to higher customer
satisfaction. To achieve this, companies need to automate and streamline their software
delivery process. DevOps fulfills these requirements and helps in achieving seamless software delivery.
You can give examples of companies like Etsy, Google and Amazon which have
adopted DevOps to achieve levels of performance that were unthinkable even five years ago.
They are doing tens, hundreds or even thousands of code deployments per day while
delivering world-class stability, reliability and security.
If the interviewer wants to test your knowledge of DevOps, you should know the difference
between Agile and DevOps. The next question is directed toward that.
Q3. How is DevOps different from Agile?
Agile is a set of values and principles about how to produce, i.e. develop, software. Example:
if you have some ideas and you want to turn those ideas into working software, you can use
the Agile values and principles as a way to do that. But that software might only be working
on a developer’s laptop or in a test environment. You want a way to quickly, easily and
repeatably move that software into production infrastructure, in a safe and simple way. To do
that you need DevOps tools and techniques.
You can summarize by saying that the Agile software development methodology focuses on the
development of software, while DevOps is responsible for the development as well as the
deployment of the software in the safest and most reliable way possible. Here’s a blog
that will give you more information on the evolution of DevOps.
Now remember, you have included DevOps tools in your previous answer so be prepared to
answer some questions related to that.
Q4. Which are the top DevOps tools? Which tools have you worked on?
You can also mention any other tool if you want, but make sure you include the above tools
in your answer.
The second part of the answer has two possibilities:
1. If you have experience with all the above tools, then you can say that you have worked on
all these tools for developing good-quality software and deploying that software
easily, frequently, and reliably.
2. If you have experience with only some of the above tools, then mention those tools and
say that you specialize in them and have an overview of the rest of the tools.
Given below is a generic logical flow where everything gets automated for seamless delivery.
However, this flow may vary from organization to organization as per the requirement.
1. Developers write the code, and this source code is managed by Version Control
System tools like Git.
2. Developers push this code to the Git repository, and any change made to the code is
committed to this repository.
3. Jenkins pulls this code from the repository using the Git plugin and builds it using tools
like Ant or Maven.
4. Configuration management tools like Puppet deploy and provision the testing environment,
and then Jenkins releases the code to the test environment, where it is tested
using tools like Selenium.
5. Once the code is tested, Jenkins sends it for deployment to the production server (the
production server, too, is provisioned and maintained by tools like Puppet).
6. After deployment, the application is continuously monitored by tools like Nagios.
7. Docker containers provide the testing environment in which the build features are tested.
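The flow above can be sketched end-to-end in a few shell commands. This is a toy illustration, not a real pipeline: the "build" and "deploy" steps are stand-ins for tools like Maven and Puppet, and every path is a throwaway temporary directory.

```shell
#!/bin/sh
# Toy walk-through of the flow above. A real pipeline would invoke
# mvn/ant for the build and Selenium for tests; simple shell commands
# stand in for those tools here.
set -e

work=$(mktemp -d) && cd "$work"

# Steps 1-2: a developer commits source code to a Git repository
git init -q app && cd app
git config user.email dev@example.com && git config user.name dev
echo 'echo hello' > app.sh
git add app.sh && git commit -qm "feat: first version"

# Step 3: the CI server checks out the latest commit and builds it
build_output=$(sh app.sh)            # stand-in for 'mvn package'

# Steps 4-5: tests gate the deployment
if [ "$build_output" = "hello" ]; then
  deploy_status="deployed"           # stand-in for releasing to production
else
  deploy_status="failed"
fi
echo "$deploy_status"                # prints "deployed"
```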
For this answer, you can use your past experience and explain how DevOps helped you in
your previous job. If you don’t have any such experience, then you can mention the below
advantages.
Technical benefits:
Business benefits:
According to me, the most important thing that DevOps helps us achieve is getting changes
into production as quickly as possible while minimizing risks in software quality assurance
and compliance. This is the primary objective of DevOps. Learn more in this DevOps
tutorial blog.
However, you can add many other positive effects of DevOps. For example, clearer
communication and better working relationships between teams, i.e. the Ops team and the
Dev team collaborating to deliver good-quality software, which in turn leads to higher
customer satisfaction.
Q9. Explain with a use case where DevOps can be used in industry/ real-life.
There are many industries using DevOps, so you can mention any of those use cases;
you can also refer to the example below:
Etsy is a peer-to-peer e-commerce website focused on handmade or vintage items and
supplies, as well as unique factory-manufactured items. Etsy struggled with slow, painful site
updates that frequently caused the site to go down. This affected sales for the millions of Etsy
users who sold goods through the online marketplace and risked driving them to competitors.
With the help of a new technical management team, Etsy transitioned from its waterfall
model, which produced four-hour full-site deployments twice weekly, to a more agile
approach. Today, it has a fully automated deployment pipeline, and its continuous delivery
practices have reportedly resulted in more than 50 deployments a day with fewer disruptions.
For this answer, share your past experience and try to explain how flexible you were in your
previous job. You can refer to the example below:
DevOps engineers almost always work in a 24/7 business-critical online environment. I was
adaptable to on-call duties and was available to take up real-time, live-system responsibility. I
successfully automated processes to support continuous software deployments. I have
experience with public/private clouds, tools like Chef or Puppet, scripting and automation
with tools like Python and PHP, and a background in Agile.
A pattern is a common practice usually followed by others. If a pattern commonly adopted by
others does not work for your organization and you continue to blindly follow it, you are
essentially adopting an anti-pattern. There are several myths about DevOps. Some of them include:
DevOps is a process
Agile equals DevOps?
We need a separate DevOps group
DevOps will solve all our problems
DevOps means Developers Managing Production
DevOps is Development-driven release management
1. DevOps is not development driven.
2. DevOps is not IT Operations driven.
We can’t do DevOps – We’re Unique
We can’t do DevOps – We’ve got the wrong people
Plan – In this stage, all the requirements of the project and everything regarding the
project, like the time for each stage, cost, etc., are discussed. This helps everyone in the
team get a brief idea about the project.
Code – The code is written here according to the client’s requirements. The code is
written in small pieces called units.
Build – Building of the units is done in this step.
Test – Testing is done in this stage, and if mistakes are found, the code is returned for a
re-build.
Integrate – All the units of code are integrated in this step.
Deploy – The code is deployed in this step to the client’s environment.
Operate – Operations are performed on the code if required.
Monitor – Monitoring of the application is done here in the client’s environment.
Q14. What are the KPIs that are used for gauging the success of a DevOps
team?
KPI stands for Key Performance Indicator. KPIs are used to measure the performance of a DevOps
team, identify mistakes, and rectify them. This helps the DevOps team increase
productivity, which directly impacts revenue.
There are many KPIs which one can track in a DevOps team. Following are some of them:
As we know, before DevOps there were two other software development models:
Waterfall model
Agile model
In the waterfall model, we have the limitations of one-way working and a lack of communication
with customers. Agile overcame this by introducing communication between the customer and
the company through feedback. But in this model another issue arose: poor communication
between the development team and the operations team, which slowed down production. This is
where DevOps was introduced. It bridges the gap between the development team and the
operations team by adding automation, which increases the speed of production. With
automation, testing is integrated into the development stage, so bugs are found at a very
early stage, which increases speed and efficiency.
AWS [Amazon Web Services] is one of the most popular cloud providers. AWS provides several
benefits for DevOps:
Flexible resources: AWS provides all the DevOps resources, and they are flexible to use.
Scaling: We can create several instances on AWS with a lot of storage and computation
power.
Automation: AWS provides automation capabilities such as CI/CD.
Security: AWS provides security features, such as IAM, when we create an instance.
Q17. Name three important DevOps KPIs
1. Lead time for changes: It measures the time taken from committing a change to the code
repository to the time it becomes available in production.
2. Deployment frequency: It measures the number of times changes are deployed to
production in a given period of time.
3. Mean time to recover (MTTR): It measures the average time taken to recover from a
service disruption or failure.
Q18. What are some technical and business benefits of DevOps work culture?
DevOps work culture brings many technical and business benefits, including:
Technical Benefits:
1. Faster time to market: DevOps enables faster release cycles and reduces the time it
takes to go from development to deployment.
2. Improved software quality: DevOps emphasizes collaboration between development
and operations teams, resulting in better-quality software.
3. Increased reliability and scalability: DevOps automates processes, reduces human
error, and allows for better resource allocation, resulting in increased reliability and
scalability.
Business Benefits:
Q19. Is DevOps considered an Agile methodology?
DevOps is not considered an Agile methodology, but it can be used in conjunction with Agile
practices to improve the software development process. Agile methodology focuses on
delivering small increments of working software frequently, while DevOps focuses on
improving collaboration and communication between development and operations teams to
increase efficiency and reduce the time it takes to release software to production. DevOps can
be seen as a complementary approach to Agile, as it can help Agile teams to better achieve
their goals of delivering software quickly and reliably.
This is probably the easiest question you will face in the interview. My suggestion is to first
give a definition of version control: it is a system that records changes to a file or set of files
over time so that you can recall specific versions later. Version control systems consist of a
central shared repository where teammates can commit changes to a file or set of files. Then
you can mention the uses of version control.
1. With Version Control System (VCS), all the team members are allowed to work freely
on any file at any time. VCS will later allow you to merge all the changes into a
common version.
2. All the past versions and variants are neatly packed up inside the VCS. When you need
it, you can request any version at any time and you’ll have a snapshot of the complete
project right at hand.
3. Every time you save a new version of your project, your VCS requires you to provide a
short description of what was changed. Additionally, you can see what exactly was
changed in the file’s content. This allows you to know who has made what change in the
project.
4. A distributed VCS like Git allows all the team members to have complete history of the
project so if there is a breakdown in the central server you can use any of your
teammate’s local Git repository.
This question is asked to test your branching experience, so tell them how you have
used branching in your previous job and what purpose it serves. You can refer to the points
below:
Feature branching
A feature branch model keeps all of the changes for a particular feature inside of a
branch. When the feature is fully tested and validated by automated tests, the branch is
then merged into master.
Task branching
In this model each task is implemented on its own branch with the task key included in
the branch name. It is easy to see which code implements which task, just look for the
task key in the branch name.
Release branching
Once the develop branch has acquired enough features for a release, you can clone that
branch to form a Release branch. Creating this branch starts the next release cycle, so no
new features can be added after this point, only bug fixes, documentation generation,
and other release-oriented tasks should go in this branch. Once it is ready to ship, the
release gets merged into master and tagged with a version number. In addition, it should
be merged back into develop branch, which may have progressed since the release was
initiated.
In the end, tell them that branching strategies vary from one organization to another, and that
you know the basic branching operations like delete, merge, checking out a branch, etc.
You can just mention the VCS tool that you have worked on like this: “I have worked on Git
and one major advantage it has over other VCS tools like SVN is that it is a distributed
version control system.”
Distributed VCS tools do not necessarily rely on a central server to store all the versions of a
project’s files. Instead, every developer “clones” a copy of a repository and has the full
history of the project on their own hard drive.
I suggest that you attempt this question by first explaining the architecture of Git as
shown in the diagram below. You can refer to the explanation given below:
Git is a Distributed Version Control system (DVCS). It can track changes to a file and
allows you to revert back to any particular change.
Its distributed architecture provides many advantages over other Version Control
Systems (VCS) like SVN. One major advantage is that it does not rely on a central server
to store all the versions of a project’s files. Instead, every developer “clones” a copy of the
repository (shown in the diagram below as the “Local repository”) and has the full
history of the project on their own hard drive, so when there is a server outage, all you need
for recovery is one of your teammates’ local Git repositories.
There is a central cloud repository as well, where developers can commit changes and
share them with other teammates; you can see it in the diagram, where all collaborators are
committing changes to the “Remote repository”.
Q7. In Git how do you revert a commit that has already been pushed and
made public?
There can be two answers to this question so make sure that you include both because any of
the below options can be used depending on the situation:
Remove or fix the bad file in a new commit and push it to the remote repository. This is
the most natural way to fix an error. Once you have made the necessary changes to the file,
commit and push it to the remote repository. For that I will use
git commit -m "commit message"
Create a new commit that undoes all the changes that were made in the bad commit. To
do this, I will use the command
git revert <name of bad commit>
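Here is the second option demonstrated in a throwaway repository (the file name and commit messages are invented for the example):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev

echo good > file.txt && git add file.txt && git commit -qm "good commit"
echo bad  > file.txt && git commit -qam "bad commit"

# git revert creates a NEW commit that undoes the bad one,
# so the already-published history is preserved
git revert --no-edit HEAD >/dev/null
cat file.txt   # prints "good"
```

Note that, unlike `git reset`, this never rewrites history that teammates may already have pulled.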
There are two options to squash last N commits into a single commit. Include both of the
below mentioned options in your answer:
If you want to write the new commit message from scratch, use the following command
git reset --soft HEAD~N &&
git commit
If you want to start editing the new commit message with a concatenation of the existing
commit messages, you need to extract those messages and pass them to git commit.
For that I will use
git reset --soft HEAD~N &&
git commit --edit -m "$(git log --format=%B --reverse HEAD..HEAD@{1})"
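The first option looks like this in a throwaway repository: after the soft reset, the changes from the last N commits remain staged and are recorded again as a single commit (file and message names are invented).

```shell
#!/bin/sh
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev

git commit -q --allow-empty -m "base"
echo one > f  && git add f && git commit -qm "part 1"
echo two >> f && git commit -qam "part 2"

# Move HEAD back 2 commits but keep their changes staged,
# then record them as a single commit
git reset --soft HEAD~2
git commit -qm "feature: parts 1 and 2"

git rev-list --count HEAD   # prints 2 (base + the squashed commit)
```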
Q9. What is Git bisect? How can you use it to determine the source of a
(regression) bug?
I suggest you first give a small definition of Git bisect: it is used to find the
commit that introduced a bug by using binary search. The command for Git bisect is
git bisect <subcommand> <options>
Now, since you have mentioned the command above, explain what it will do.
This command uses a binary search algorithm to find which commit in your project’s history
introduced a bug. You use it by first telling it a “bad” commit that is known to contain the
bug, and a “good” commit from before the bug was introduced. Then Git bisect
picks a commit between those two endpoints and asks you whether the selected commit is
“good” or “bad”. It continues narrowing down the range until it finds the exact commit that
introduced the change.
Q10. What is Git rebase and how can it be used to resolve conflicts in a
feature branch before merge?
According to me, you should start by saying that git rebase is a command that replays the
commits of your current branch on top of another branch, moving all of the local
commits that are ahead of the rebased branch to the top of the history on that branch.
Now, once you have defined Git rebase, it is time for an example to show how it can be used to
resolve conflicts in a feature branch before a merge: if a feature branch was created from
master, and the master branch has since received new commits, Git rebase can be used to
move the feature branch to the tip of master.
The command effectively will replay the changes made in the feature branch at the tip of
master, allowing conflicts to be resolved in the process. When done with care, this will allow
the feature branch to be merged into master with relative ease and sometimes as a simple fast-
forward operation.
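A minimal reproduction of that scenario in a throwaway repository (branch and file names are invented; this run has no conflicts, but any conflict would pause the rebase for resolution at exactly this step):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev

git commit -q --allow-empty -m "base"
git branch -M master                   # normalize the branch name

git checkout -qb feature
echo feature > feat.txt && git add feat.txt && git commit -qm "feature work"

git checkout -q master
echo mainline > m.txt && git add m.txt && git commit -qm "master advances"

# Replay the feature commits on top of master's new tip
git checkout -q feature
git rebase -q master

git rev-list --count HEAD   # prints 3: base, master's commit, feature work
```

After the rebase, merging `feature` into `master` is a simple fast-forward.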
Q11. How do you configure a Git repository to run code sanity checking tools
right before making commits, and preventing them if the test fails?
I suggest you first give a small introduction to sanity checking: a sanity or smoke test
determines whether it is possible and reasonable to continue testing.
Now explain how to achieve this. It can be done with a simple script tied to the pre-commit
hook of the repository. The pre-commit hook is triggered right before a commit is
made, even before you are required to enter a commit message. In this script one can run
other tools, such as linters, and perform sanity checks on the changes being committed into
the repository.
Finally, give an example; you can refer to the below script:
#!/bin/sh
# List the staged .go files
files=$(git diff --cached --name-only --diff-filter=ACM | grep '\.go$')
if [ -z "$files" ]; then
  exit 0
fi
# Find staged .go files that gofmt would reformat
unfmtd=$(gofmt -l $files)
if [ -z "$unfmtd" ]; then
  exit 0
fi
echo "Some .go files are not fmt'd"
exit 1
This script checks to see if any .go file that is about to be committed needs to be passed
through the standard Go source code formatting tool gofmt. By exiting with a non-zero status,
the script effectively prevents the commit from being applied to the repository.
Q12. How do you find a list of files that have changed in a particular commit?
For this answer, instead of just telling them the command, explain what exactly the command
will do. So you can say that to get a list of files that have changed in a particular commit, use
the command
git diff-tree -r {hash}
Given the commit hash, this will list all the files that were changed or added in that commit.
The -r flag makes the command list individual files, rather than collapsing them into root
directory names only.
You can also include the point mentioned below; it is optional, but it will help in
impressing the interviewer.
The output will also include some extra information, which can be easily suppressed by
including two flags:
git diff-tree --no-commit-id --name-only -r {hash}
Here --no-commit-id will suppress the commit hashes from appearing in the output, and
--name-only will print only the file names instead of their paths.
Q13. How do you setup a script to run every time a repository receives new
commits through push?
There are three ways to configure a script to run every time a repository receives new
commits through a push: one needs to define either a pre-receive, an update, or a post-receive
hook, depending on when exactly the script needs to be triggered.
Pre-receive hook in the destination repository is invoked when commits are pushed to it.
Any script bound to this hook will be executed before any references are updated. This
is a useful hook to run scripts that help enforce development policies.
Update hook works in a similar manner to pre-receive hook, and is also triggered before
any updates are actually made. However, the update hook is called once for every
commit that has been pushed to the destination repository.
Finally, post-receive hook in the repository is invoked after the updates have been
accepted into the destination repository. This is an ideal place to configure simple
deployment scripts, invoke some continuous integration systems, dispatch notification
emails to repository maintainers, etc.
Hooks are local to every Git repository and are not versioned. Scripts can either be created
within the hooks directory inside the “.git” directory, or they can be created elsewhere and
links to those scripts can be placed within the directory.
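As a small illustration of the hook interface, here is a sketch of a post-receive script. Git feeds it one "<old-sha> <new-sha> <ref-name>" line per updated ref on standard input; below we simulate that input by hand instead of doing a real push, and the SHA values are placeholders.

```shell
#!/bin/sh
set -e
hook=$(mktemp)

# Body of a hypothetical .git/hooks/post-receive script
cat > "$hook" <<'EOF'
#!/bin/sh
# Git pipes "<old-sha> <new-sha> <ref-name>" lines to this hook
while read oldrev newrev refname; do
  echo "updated $refname: $oldrev -> $newrev"
  # a real hook might trigger CI or send a notification mail here
done
EOF

# Simulate what Git would pipe to the hook on a push
out=$(printf 'aaa bbb refs/heads/master\n' | sh "$hook")
echo "$out"   # prints "updated refs/heads/master: aaa -> bbb"
```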
Q14. How will you know in Git if a branch has already been merged into
master?
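One common way to answer this is `git branch --merged`, which lists the branches whose tips are reachable from the named branch; `--no-merged` lists the rest. A throwaway demonstration (branch names are invented):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "base"
git branch -M master

# One branch that gets merged, one that does not
git checkout -qb done-feature    && git commit -q --allow-empty -m "done work"
git checkout -q  master          && git merge  -q done-feature
git checkout -qb pending-feature && git commit -q --allow-empty -m "wip"
git checkout -q  master

git branch --merged master      # lists done-feature (and master itself)
git branch --no-merged master   # lists pending-feature
```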
Q16. What is the difference between Git Merge and Git Rebase?
Here, both are merging mechanisms, but the difference between Git merge and Git rebase
is that in Git merge the logs show the complete history of commits.
However, when one does a Git rebase, the logs are rearranged. The rearrangement is done to
make the logs look linear and simple to understand. This is also a drawback, since other team
members may not understand how the different commits were merged into one another.
Q17. Explain the difference between git fetch and git pull?
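The short answer: `git fetch` downloads new commits from the remote but leaves your working branch untouched, while `git pull` is effectively a fetch followed by a merge. The difference can be seen with two throwaway repositories (all names and contents are invented for the example):

```shell
#!/bin/sh
set -e
work=$(mktemp -d) && cd "$work"

# An "upstream" repository with one commit
git init -q upstream
(cd upstream \
  && git config user.email dev@example.com && git config user.name dev \
  && echo v1 > f && git add f && git commit -qm "v1")
git clone -q upstream clone

# Upstream moves on while our clone stays at v1
(cd upstream && echo v2 > f && git commit -qam "v2")

cd clone
git config user.email dev@example.com && git config user.name dev
branch=$(git symbolic-ref --short HEAD)

git fetch -q                    # downloads v2; working tree untouched
fetched=$(cat f)                # still "v1"
git merge -q "origin/$branch"   # fetch + merge is what 'git pull' does
pulled=$(cat f)                 # now "v2"
echo "$fetched -> $pulled"      # prints "v1 -> v2"
```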
Shift left is a concept used in DevOps to achieve a better level of security, performance, etc.
Let us get into the details with an example: if we look at all the phases in DevOps, security is
normally tested just before the deployment step. Using the shift-left method, we can include
security in the development phase, which is on the left [as will be shown in the diagram]. And
not only in development: we can integrate it with all phases, like before development and in
the testing phase too. This considerably increases the level of security by finding errors in the
very initial stages.
Yes, here are some examples of version control systems that are in use today:
Git
Subversion (SVN)
Mercurial
Microsoft Team Foundation Server (TFS)
Bitbucket (a hosting service for Git repositories)
GitHub (a hosting service for Git repositories)
Q20. How would you go about creating a branch for an existing project?
Note: Before creating a branch, it is recommended to sync your local repository with the
remote repository to ensure you are working on the latest version of the project.
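A minimal sketch of the commands involved (the branch name is a placeholder, and the push step is shown as a comment because it needs a real remote):

```shell
#!/bin/sh
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email dev@example.com && git config user.name dev
git commit -q --allow-empty -m "base"

# Create the branch and switch to it in one step
git checkout -qb new-feature
git branch --show-current       # prints "new-feature"

# On a real project you would then publish it:
#   git push -u origin new-feature
```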
Tags are labels or keywords that describe the content of a web page or a blog post. They are
used in HTML or XML code to categorize the content and make it easier for search engines
and users to find relevant information.
For example, a blog post about cooking may have tags such as “recipe,” “food,” “cooking,”
and “dinner.” This allows users to search for and find similar posts based on those tags.
In addition, tags can also be used to format and style text, such as headings, links, and
images. The use of tags helps to organize and structure the content, making it easier for both
users and search engines to understand.
I will advise you to begin this answer by giving a small definition of Continuous Integration
(CI). It is a development practice that requires developers to integrate code into a shared
repository several times a day. Each check-in is then verified by an automated build, allowing
teams to detect problems early.
I suggest that you explain how you have implemented it in your previous job. You can refer
to the example given below:
For this answer, you should focus on the need of Continuous Integration. My suggestion
would be to mention the below explanation in your answer:
Continuous Integration of Dev and Testing improves the quality of software, and reduces the
time taken to deliver it, by replacing the traditional practice of testing after completing all
development. It allows the Dev team to easily detect and locate problems early, because
developers need to integrate code into a shared repository several times a day (more
frequently). Each check-in is then automatically tested.
Here you have to mention the requirements for Continuous Integration. You could include the
following points in your answer:
Q4. Explain how you can move or copy Jenkins from one server to another?
I will approach this task by copying the jobs directory from the old server to the new one.
There are multiple ways to do that; I have mentioned them below:
You can:
Move a job from one installation of Jenkins to another by simply copying the
corresponding job directory.
Make a copy of an existing job by cloning the job directory under a different name.
Rename an existing job by renaming a directory. Note that if you change a job name you
will need to change any other job that tries to call the renamed job.
Q5. Explain how you can create a backup and copy files in Jenkins?
The answer to this question is straightforward. To create a backup, all you need to do is
periodically back up your JENKINS_HOME directory. This contains all of your build job
configurations, your slave node configurations, and your build history. To create a backup of
your Jenkins setup, just copy this directory. You can also copy a job directory to clone or
replicate a job, or rename the directory.
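A sketch of such a periodic backup as a small shell script. The JENKINS_HOME path here is a temporary stand-in created just for the demonstration; a real installation often lives at a path like /var/lib/jenkins.

```shell
#!/bin/sh
set -e

# Stand-in for the real JENKINS_HOME (e.g. /var/lib/jenkins)
JENKINS_HOME=$(mktemp -d)
mkdir -p "$JENKINS_HOME/jobs/my-job"
echo '<project/>' > "$JENKINS_HOME/jobs/my-job/config.xml"

# Archive the whole directory: job configs, node configs, build history
backup=$(mktemp -d)/jenkins-backup.tar.gz
tar -czf "$backup" -C "$JENKINS_HOME" .

tar -tzf "$backup" | grep config.xml   # the job config made it into the archive
```

In practice this would run from cron, with the archive shipped to storage outside the Jenkins host.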
My approach to this answer will be to first mention how to create a Jenkins job: go to the
Jenkins top page, select “New Job”, then choose “Build a free-style software project”.
Then you can tell the elements of this freestyle job:
Optional SCM, such as CVS or Subversion where your source code resides.
Optional triggers to control when Jenkins will perform builds.
Some sort of build script that performs the build (ant, maven, shell script, batch file,
etc.) where the real work happens.
Optional steps to collect information out of the build, such as archiving the artifacts
and/or recording javadoc and test results.
Optional steps to notify other people/systems with the build result, such as sending
e-mails or IMs, updating the issue tracker, etc.
Maven 2 project
Amazon EC2
HTML publisher
Copy artifact
Join
Green Balls
These, I feel, are the most useful plugins. If you want to include any other plugin that
is not mentioned above, you can add it as well. But make sure you first mention the
above-stated plugins and then add your own.
The way I secure Jenkins is mentioned below. If you have any other way of doing it, please
mention it in the comments section below:
Jenkins is one of the many popular tools that are used extensively in DevOps.
This is a continuous deployment strategy that is generally used to decrease downtime; it
works by transferring traffic from one instance to another.
For example, let us take a situation where we want to release a new version of the code. Now
we have to replace the old version with the new version. The old version is considered to be
in the blue environment, and the new version is considered the green environment: we made
some changes to the existing old version, which turned into the new version with minimal
changes.
Now, to run the new version of the instance, we need to transfer the traffic from the old
instance to the new instance, i.e. from the blue environment to the green environment. The
new version will be running on the green instance, and traffic is gradually transferred to it.
The blue instance is kept idle and used for rollback.
In Blue-Green deployment, the application is not deployed in the same environment. Instead,
a new server or environment is created where the new version of the application is deployed.
Once the new version of the application is deployed in a separate environment, the traffic to
the old version of the application is redirected to the new version of the application.
We follow the Blue-Green Deployment model so that if any problem is detected in the
production environment with the new application, the traffic can be immediately redirected to
the previous blue environment, with minimal or no impact on the business. The following
diagram shows Blue-Green Deployment.
A pattern can be defined as an ideology for how to solve a problem. An anti-pattern can be
defined as a method that seems to solve the problem now but may end up damaging your
system [i.e., it shows how not to approach a problem].
Q11. How will you approach a project that needs to implement DevOps?
First, if we want to approach a project that needs DevOps, we need to know a few concepts
like :
Any programming language [C, C++, Java, Python, etc.] relevant to the project.
Get an idea of operating systems for management purposes [like memory management,
disk management, etc.].
Get an idea about networking and security concepts.
Get an idea of what DevOps is: what continuous integration, continuous development,
continuous delivery, continuous deployment, and monitoring are, and the tools used in
the various phases [like Git, Docker, Jenkins, etc.].
Now, after this, interact with the other teams and design a roadmap for the process.
Once all the teams are clear, create a proof of concept and start according to the
project plan.
Now the project is ready to go through the phases of DevOps: version control,
integration, testing, deployment, delivery, and monitoring.
CI/CD (Continuous Integration and Continuous Deployment) and DevOps are related
concepts, but they are not the same thing.
CI/CD refers to the process of automating the building, testing, and deployment of software.
The goal of CI/CD is to make it easier to release new software changes and bug fixes to users
quickly and reliably.
DevOps, on the other hand, is a cultural and technical movement focused on improving
collaboration and communication between development and operations teams. DevOps
emphasizes the automation of processes and the use of technology to enable organizations to
deliver software faster and more reliably.
In summary, CI/CD is a set of practices for software development, while DevOps is a cultural
movement and set of practices aimed at improving collaboration and communication between
development and operations teams to deliver software faster and more reliably.
Automation testing, or test automation, is the process of automating a manual process to test
the application/system under test. Automation testing involves the use of separate testing
tools that let you create test scripts which can be executed repeatedly and don't require any
manual intervention.
I have listed down some advantages of automation testing. Include these in your answer and
you can add your own experience of how Continuous Testing helped your previous company:
I have mentioned a generic flow below which you can refer to:
In DevOps, developers are required to commit all the changes made in the source code to a
shared repository. Continuous Integration tools like Jenkins will pull the code from this
shared repository every time a change is made in the code and deploy it for Continuous
Testing that is done by tools like Selenium as shown in the below diagram.
In this way, any change in the code is continuously tested unlike the traditional approach.
You can answer this question by saying, “Continuous Testing allows any change made in the
code to be tested immediately. This avoids the problems created by having “big-bang” testing
left to the end of the cycle such as release delays and quality issues. In this way, Continuous
Testing facilitates more frequent and good quality releases.”
Risk Assessment: It covers risk mitigation tasks, technical debt, quality assessment and
test coverage optimization to ensure the build is ready to progress toward the next stage.
Policy Analysis: It ensures all processes align with the organization's evolving business
needs and that compliance demands are met.
Requirements Traceability: It ensures true requirements are met and rework is not
required. An objective assessment is used to identify which requirements are at risk,
working as expected, or require further validation.
Advanced Analysis: It uses automation in areas such as static code analysis, change
impact analysis and scope assessment/prioritization to prevent defects in the first place
and accomplish more within each iteration.
Test Optimization: It ensures tests yield accurate outcomes and provide actionable
findings. Aspects include Test Data Management, Test Optimization Management and
Test Maintenance.
Service Virtualization: It ensures access to real-world testing environments. Service
virtualization enables access to a virtual form of the required testing stages, cutting the
wasted time for test environment setup and availability.
Q7. Which Testing tool are you comfortable with and what are the benefits of
that tool?
Here mention the testing tool that you have worked with and accordingly frame your answer.
I have mentioned an example below:
I have worked on Selenium to ensure high quality and more frequent releases.
The Assert command checks whether the given condition is true or false. Let's say we
assert whether a given element is present on the web page or not. If the condition is
true, program control will execute the next test step. But if the condition is false,
execution stops and no further tests are executed.
The Verify command also checks whether the given condition is true or false. Irrespective
of the condition being true or false, the program execution doesn't halt, i.e. any failure
during verification does not stop the execution, and all the test steps are
executed.
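The difference can be illustrated outside Selenium with a small shell sketch (the function names are illustrative, not Selenium APIs): a verify-style check records the failure and continues, while an assert-style check halts the run.

```shell
failures=0

verify_equals() {            # verify: log the mismatch, keep executing
  if [ "$1" != "$2" ]; then
    echo "VERIFY failed: expected '$1', got '$2'"
    failures=$((failures + 1))
  fi
}

assert_equals() {            # assert: stop the whole run on mismatch
  if [ "$1" != "$2" ]; then
    echo "ASSERT failed: expected '$1', got '$2'"
    exit 1
  fi
}

verify_equals "Home Page" "Login Page"   # fails, but execution continues
assert_equals "Home Page" "Home Page"    # passes, so the next step runs
echo "reached the step after the failed verification"
```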
For this answer, my suggestion would be to give a small definition of Selenium Grid. It can
be used to execute the same or different test scripts on multiple platforms and browsers
concurrently to achieve distributed test execution. This allows testing under different
environments and saves execution time remarkably.
Revise capability,
Improve performance, reliability, or maintainability,
Extend life,
Reduce cost,
Reduce risk and liability, or
Correct defects.
Given below are few differences between Asset Management and Configuration
Management:
According to me, you should first explain an Asset. It has a financial value along with a
depreciation rate attached to it. IT assets are just a subset of this. Anything that has a cost
and that the organization uses for its asset value calculation and related benefits in tax
calculation falls under Asset Management, and such an item is called an asset.
A Configuration Item, on the other hand, may or may not have a financial value assigned to
it, and it will not have any depreciation linked to it. Thus, its life does not depend on its
financial value but on the time until that item becomes obsolete for the organization.
Now you can give an example that can showcase the similarity and differences between both:
1) Similarity:
Server – It is both an asset as well as a CI.
2) Difference:
Building – It is an asset but not a CI.
Document – It is a CI but not an asset
Q4. What do you understand by “Infrastructure as code”? How does it fit into
the DevOps methodology? What purpose does it achieve?
Infrastructure as Code (IaC) is a type of IT infrastructure that operations teams can manage
and provision automatically through code, rather than using a manual process. For faster
deployments, companies treat infrastructure like software: as code that can be managed with
DevOps tools and processes. These tools let you make infrastructure changes more easily,
rapidly, safely and reliably.
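At its smallest, the idea can be sketched in shell: the desired state lives in a version-controlled file, and an idempotent script converges the machine to it. This is a toy stand-in for what tools like Terraform, Puppet or Ansible do at much larger scale; the file and directory names are illustrative.

```shell
# Desired state, checked into version control alongside application code.
cat > desired-dirs.txt <<'EOF'
/tmp/iac-demo/app
/tmp/iac-demo/logs
EOF

# Converge: only act where reality differs from the declared state.
while read -r dir; do
  [ -d "$dir" ] || mkdir -p "$dir"
done < desired-dirs.txt

# Running the script again is a no-op: the state is already correct.
```

Because the state is described in code, the same change can be reviewed, versioned, and applied identically across many machines.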
Q5. Which among Puppet, Chef, SaltStack and Ansible is the best
Configuration Management (CM) tool? Why?
This depends on the organization's needs, so mention a few points about each of these tools:
Puppet is the oldest and most mature CM tool. Puppet is a Ruby-based Configuration
Management tool, but while it has some free features, much of what makes Puppet great is
only available in the paid version. Organizations that don’t need a lot of extras will find
Puppet useful, but those needing more customization will probably need to upgrade to the
paid version.
Chef is written in Ruby, so it can be customized by those who know the language. It also
includes free features, plus it can be upgraded from open source to enterprise-level if
needed.
I will advise you to first give a small definition of Puppet. It is a Configuration Management
tool which is used to automate administration tasks.
Now you should describe its architecture and how Puppet manages its Agents. Puppet has a
Master-Slave architecture in which the Slave first sends a certificate signing request to the
Master, and the Master signs that certificate to establish a secure connection between
Puppet Master and Puppet Slave. The Puppet Slave sends a request to the Puppet Master,
and the Puppet Master then pushes the configuration to the Slave.
Refer to the diagram below, which explains the above description.
Q7. Before a client can authenticate with the Puppet Master, its certs need to
be signed and accepted. How will you automate this task?
Firewall your Puppet Master – restrict port tcp/8140 to only networks that you trust.
Create Puppet Masters for each 'trust zone', and only include the trusted nodes in that
Puppet Master's manifest.
Never use a full wildcard such as *.
Q8. Describe the most significant gain you made from automating a process
through Puppet.
For this answer, I suggest you explain your past experience with Puppet. You can refer to
the example below:
I automated the configuration and deployment of Linux and Windows machines using
Puppet. In addition to shortening the processing time from one week to 10 minutes, I used the
roles-and-profiles pattern and documented the purpose of each module in a README to
ensure that others could update the module using Git. The modules I wrote are still being
used, but they've been improved by my teammates and members of the community.
Q9. Which open source or community tools do you use to make Puppet more
powerful?
Over here, you need to mention the tools and how you have used those tools to make Puppet
more powerful. Below is one example for your reference:
Changes and requests are ticketed through Jira and we manage requests through an internal
process. Then, we use Git and Puppet’s Code Manager app to manage Puppet code in
accordance with best practices. Additionally, we run all of our Puppet changes through our
continuous integration pipeline in Jenkins using the beaker testing framework.
It is a very important question so make sure you go in the correct flow. According to me, you
should first define Manifests. Every node (or Puppet Agent) has got its configuration details
in Puppet Master, written in the native Puppet language. These details are written in the
language which Puppet can understand and are termed as Manifests. They are composed of
Puppet code and their filenames use the .pp extension.
Now give an example. You can write a manifest in Puppet Master that creates a file and
installs Apache on all Puppet Agents (Slaves) connected to the Puppet Master.
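That example manifest might look like the sketch below. The file path and package name are illustrative assumptions (on Debian-family agents the Apache package is apache2; on RedHat-family agents it is httpd):

```puppet
# site.pp -- applied to every agent that checks in
package { 'apache2':
  ensure => installed,
}

file { '/tmp/status.txt':
  ensure  => file,
  content => "Configured by Puppet\n",
}
```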
For this answer, you can go with the below mentioned explanation:
A Puppet Module is a collection of Manifests and data (such as facts, files, and templates),
and they have a specific directory structure. Modules are useful for organizing your Puppet
code, because they allow you to split your code into multiple Manifests. It is considered best
practice to use Modules to organize almost all of your Puppet Manifests.
Puppet programs are called Manifests which are composed of Puppet code and their file
names use the .pp extension.
You are expected to answer what exactly Facter does in Puppet so according to me, you
should say, “Facter gathers basic information (facts) about Puppet Agent such as hardware
details, network settings, OS type and version, IP addresses, MAC addresses, SSH keys, and
more. These facts are then made available in Puppet Master’s Manifests as variables.”
Begin this answer by defining Chef. It is a powerful automation platform that transforms
infrastructure into code. Chef is a tool for which you write scripts that are used to automate
processes. What processes? Pretty much anything related to IT.
Now you can explain the architecture of Chef, it consists of:
Chef Server: The Chef Server is the central store of your infrastructure’s configuration
data. The Chef Server stores the data necessary to configure your nodes and provides
search, a powerful tool that allows you to dynamically drive node configuration based
on data.
Chef Node: A Node is any host that is configured using Chef-client. Chef-client runs on
your nodes, contacting the Chef Server for the information necessary to configure the
node. Since a Node is a machine that runs the Chef-client software, nodes are sometimes
referred to as “clients”.
Chef Workstation: A Chef Workstation is the host you use to modify your cookbooks
and other configuration data.
For this answer, I suggest you first define a Recipe. A Recipe is a collection of Resources
that describes a particular configuration or policy. A Recipe describes everything that is
required to configure part of a system.
After the definition, explain the functions of Recipes by including the following points:
The answer to this is pretty direct. You can simply say, “a Recipe is a collection of
Resources, and primarily configures a software package or some piece of infrastructure. A
Cookbook groups together Recipes and other information in a way that is more manageable
than having just Recipes alone.”
My suggestion is to first give a direct answer: when you don’t specify a resource’s action,
Chef applies the default action.
Now explain this with an example, the below resource:
file 'C:\Users\Administrator\chef-repo\settings.ini' do
  content 'greeting=hello world'
end
is the same as the resource below:
file 'C:\Users\Administrator\chef-repo\settings.ini' do
  action :create
  content 'greeting=hello world'
end
because :create is the file resource's default action.
Modules are considered to be the units of work in Ansible. Each module is mostly standalone
and can be written in a standard scripting language such as Python, Perl, Ruby, Bash, etc.
One of the guiding properties of modules is idempotency, which means that even if an
operation is repeated multiple times (e.g. upon recovery from an outage), it will always place
the system into the same state.
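Idempotency is the same property that `mkdir -p` has in shell: repeating the operation leaves the system unchanged rather than failing or duplicating work. A minimal illustration (the path is arbitrary):

```shell
mkdir -p /tmp/idempotent-demo          # first run: creates the directory
before=$(ls -ld /tmp/idempotent-demo)

mkdir -p /tmp/idempotent-demo          # repeat: no error, no change
after=$(ls -ld /tmp/idempotent-demo)

[ "$before" = "$after" ] && echo "same state after repeating the operation"
```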
Playbooks are Ansible’s configuration, deployment, and orchestration language. They can
describe a policy you want your remote systems to enforce, or a set of steps in a general IT
process. Playbooks are designed to be human-readable and are developed in a basic text
language.
At a basic level, playbooks can be used to manage configurations of and deployments to
remote machines.
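A minimal playbook sketch is shown below; the host group, package name and file contents are illustrative assumptions, not taken from the article:

```yaml
# playbook.yml -- apply with: ansible-playbook -i inventory playbook.yml
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present

    - name: Deploy index page
      copy:
        content: "Deployed by Ansible\n"
        dest: /var/www/html/index.html
```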
Ansible by default gathers “facts” about the machines under management, and these facts can
be accessed in Playbooks and in templates. To see a list of all of the facts that are available
about a machine, you can run the “setup” module as an ad-hoc action:
ansible hostname -m setup
This will print out a dictionary of all of the facts that are available for that particular host.
WebLogic Server 8.1 allows you to select the load order for applications. See the Application
MBean Load Order attribute in Application. WebLogic Server deploys server-level resources
(first JDBC and then JMS) before deploying applications. Applications are deployed in this
order: connectors, then EJBs, then Web Applications. If the application is an EAR, the
individual components are loaded in the order in which they are declared in the
application.xml deployment descriptor.
Yes, you can use weblogic.Deployer to specify a component and target a server, using the
following syntax:
java weblogic.Deployer -adminurl http://admin:7001 -name appname -targets server1,server2
-deploy jsps/*.jsp
The auto-deployment feature checks the applications folder every three seconds to determine
whether there are any new applications or any changes to existing applications and then
dynamically deploys these changes.
The auto-deployment feature is enabled for servers that run in development mode. To disable
the auto-deployment feature, use one of the following methods to place servers in production
mode:
In the Administration Console, click the name of the domain in the left pane, then select
the Production Mode checkbox in the right pane.
At the command line, include the following argument when starting the domain’s
Administration Server:
-Dweblogic.ProductionModeEnabled=true
Production mode is set for all WebLogic Server instances in a given domain.
Set -external_stage using weblogic.Deployer if you want to stage the application yourself,
and prefer to copy it to its target by your own means.
Ansible and Puppet are two of the most popular configuration management tools among
DevOps engineers.
Generally, SSH is used for connecting to computers and working on them remotely.
SSH is mostly used by the operations team, as they deal with management tasks that
require remote admin access to systems. Developers also use SSH, but comparatively less
than the operations team, as most of the time they work on local systems. As we know, in
DevOps the development team and operations team collaborate and work together; SSH is
used, for example, when the operations team faces a problem and needs some assistance
from the development team.
This is generally used to manage memory in dynamic web applications by caching data in
RAM. It helps reduce the frequency of fetching data from external sources, and it also speeds
up dynamic web applications by alleviating database load.
As the name suggests, this is a type of meeting conducted at the end of a project.
In this meeting, all the teams come together and discuss the failures in the current project.
Finally, they conclude how to avoid them and what measures need to be taken in the
future to avoid such failures.
Functional testing is a type of testing that verifies if a system meets the specified functional
requirements and works as intended. It tests the functionality of the software, including
inputs, outputs, and processes.
Non-functional testing, on the other hand, is a type of testing that evaluates the
non-functional aspects of a system, such as performance, security, reliability, scalability, and
usability. It ensures that the software is not only functional, but also meets the performance,
security, and other quality criteria required by the user.
In summary, functional testing focuses on the functionality of the software, while
non-functional testing focuses on the quality criteria of the software.
Black box testing is a method of software testing that examines the functionality of an
application without looking at its internal structures or codes. The tester is only concerned
with inputs and expected outputs and does not have any knowledge of the internal workings
of the application. The techniques used in black box testing include:
1. Functional testing: testing the functions or features of the application to ensure they are
working as specified.
2. Integration testing: testing how different components of the application work together.
3. System testing: testing the complete system to ensure it meets the specified
requirements.
4. Acceptance testing: testing to determine if the system is acceptable for delivery to the
end user.
5. Usability testing: testing to determine the ease of use of the application for end users.
6. Load testing: testing the application’s performance under various load conditions.
7. Security testing: testing the application for potential security risks or vulnerabilities.
8. Compatibility testing: testing the application’s compatibility with different hardware,
software, and operating systems.
The goal of automated testing in DevOps is to increase the speed and efficiency of the testing
process while also reducing the risk of bugs being introduced into the production
environment.
continuous audit
continuous controls monitoring
continuous transaction inspection
You can answer this question by first mentioning that Nagios is one of the monitoring tools.
It is used for continuous monitoring of systems, applications, services, and business
processes in a DevOps culture. In the event of a failure, Nagios can alert technical staff to
the problem, allowing them to begin remediation processes before outages affect business
processes, end users, or customers. With Nagios, you don't have to explain why an unseen
infrastructure outage affects your organization's bottom line.
Now once you have defined what is Nagios, you can mention the various things that you can
achieve using Nagios.
By using Nagios you can:
This completes the answer to this question. Further details like advantages etc. can be added
as per the direction where the discussion is headed.
I will advise you to follow the below explanation for this answer:
Nagios runs on a server, usually as a daemon or service. Nagios periodically runs plugins
residing on the same server; they contact hosts or servers on your network or on the internet.
You can view the status information using the web interface, and you can also receive email
or SMS notifications if something happens.
The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It
stores the results of those scripts and will run other scripts if these results change.
Now expect a few questions on Nagios components like Plugins, NRPE etc..
Begin this answer by defining Plugins. They are scripts (Perl scripts, Shell scripts, etc.) that
can run from a command line to check the status of a host or service. Nagios uses the results
from Plugins to determine the current status of hosts and services on your network.
Once you have defined Plugins, explain why we need them. Nagios will execute a Plugin
whenever there is a need to check the status of a host or service. The Plugin performs the
check and simply returns the result to Nagios. Nagios then processes the result it receives
from the Plugin and takes the necessary actions.
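A plugin is just a script that prints one line of status text and reports its state through the exit code (0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN). The sketch below hard-codes a usage value and wraps the logic in a function for illustration; a real plugin is a standalone script that measures the value (e.g. with df) and uses exit instead of return:

```shell
check_disk() {
  usage=$1; warn=80; crit=90
  if [ "$usage" -ge "$crit" ]; then
    echo "CRITICAL - disk usage ${usage}%"; return 2
  elif [ "$usage" -ge "$warn" ]; then
    echo "WARNING - disk usage ${usage}%"; return 1
  else
    echo "OK - disk usage ${usage}%"; return 0
  fi
}

check_disk 75    # prints: OK - disk usage 75%
```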
For this answer, give a brief definition of NRPE. The NRPE addon is designed to allow you
to execute Nagios plugins on remote Linux/Unix machines. The main reason for doing this is
to allow Nagios to monitor "local" resources (like CPU load, memory usage, etc.) on remote
machines. Since these resources are not usually exposed to external machines, an
agent like NRPE must be installed on the remote Linux/Unix machines.
I will advise you to explain the NRPE architecture on the basis of diagram shown below. The
NRPE addon consists of two pieces:
There is a SSL (Secure Socket Layer) connection between monitoring host and remote host
as shown in the diagram below.
According to me, the answer should start by explaining Passive checks. They are initiated and
performed by external applications/processes and the Passive check results are submitted to
Nagios for processing.
Then explain the need for passive checks. They are useful for monitoring services that are
Asynchronous in nature and cannot be monitored effectively by polling their status on a
regularly scheduled basis. They can also be used for monitoring services that are Located
behind a firewall and cannot be checked actively from the monitoring host.
Make sure that you stick to the question during your explanation, so I advise you to
follow the flow mentioned below. Nagios checks for external commands under the following
conditions:
For this answer, first point out the basic difference between Active and Passive checks. The
major difference is that Active checks are initiated and performed by Nagios, while Passive
checks are performed by external applications.
If your interviewer is looking unconvinced with the above explanation then you can also
mention some key features of both Active and Passive checks:
Passive checks are useful for monitoring services that are:
The interviewer will be expecting an answer related to the distributed architecture of Nagios.
So, I suggest that you answer it in the below mentioned format:
With Nagios you can monitor your whole enterprise by using a distributed monitoring
scheme in which local slave instances of Nagios perform monitoring tasks and report the
results back to a single master. You manage all configuration, notification, and reporting
from the master, while the slaves do all the work. This design takes advantage of Nagios’s
ability to utilize passive checks i.e. external applications or processes that send results back to
Nagios. In a distributed configuration, these external applications are other instances of
Nagios.
First mention what this main configuration file contains and its function. The main
configuration file contains a number of directives that affect how the Nagios daemon
operates. This config file is read by both the Nagios daemon and the CGIs (It specifies the
location of your main configuration file).
Now you can tell where it is present and how it is created. A sample main configuration file
is created in the base directory of the Nagios distribution when you run the configure script.
The default name of the main configuration file is nagios.cfg. It is usually placed in the etc/
subdirectory of your Nagios installation (i.e. /usr/local/nagios/etc/).
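A few representative directives from a nagios.cfg are shown below; the paths are the conventional defaults and may differ per installation:

```ini
log_file=/usr/local/nagios/var/nagios.log
cfg_file=/usr/local/nagios/etc/objects/commands.cfg
cfg_dir=/usr/local/nagios/etc/servers
check_external_commands=1
```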
I advise you to explain Flapping first. Flapping occurs when a service or host changes state
too frequently, which causes a lot of problem and recovery notifications.
Once you have defined Flapping, explain how Nagios detects Flapping. Whenever Nagios
checks the status of a host or service, it will check to see if it has started or stopped flapping.
Nagios follows the below given procedure to do that:
Storing the results of the last 21 checks of the host or service
Analyzing the historical check results and determining where state changes/transitions
occur
Using the state transitions to determine a percent state change value (a measure of
change) for the host or service
Comparing the percent state change value against low and high flapping thresholds
A host or service is determined to have started flapping when its percent state change first
exceeds the high flapping threshold, and to have stopped flapping when its percent state
change goes below the low flapping threshold. For example, with 21 stored check results
there are 20 possible state transitions; if 6 of them were actual state changes, the
(unweighted) percent state change is 30%.
Q12. What are the three main variables that affect recursion and inheritance
in Nagios?
Name
Use
Register
Then give a brief explanation of each of these variables. Name is a placeholder that is used
by other objects. Use defines the "parent" object whose properties should be used. Register
can have a value of 0 (indicating it is only a template) or 1 (an actual object). The register
value is never inherited.
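These three variables look like this in an object definition (the host name and values are illustrative):

```
define host {
    name            generic-host    ; template, referenced via "use"
    check_interval  5
    register        0               ; 0 = template only, not a real host
}

define host {
    use             generic-host    ; inherit the template's properties
    host_name       web01
    address         10.0.0.5        ; register defaults to 1 (real object)
}
```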
The answer to this question is pretty direct. I would answer this by saying, "One of the
features of Nagios is its object configuration format: you can create object definitions that
inherit properties from other object definitions, hence the name. This simplifies and clarifies
relationships between the various components."
I will advise you to first give a small introduction on State Stalking. It is used for logging
purposes. When Stalking is enabled for a particular host or service, Nagios will watch that
host or service very carefully and log any changes it sees in the output of check results.
Depending on the discussion between you and interviewer you can also add, “It can be very
helpful in later analysis of the log files. Under normal circumstances, the result of a host or
service check is only logged if the host or service has changed state since it was last
checked.”
Nagios is considered object-oriented because it uses a modular design, where elements in the
system are represented as objects with specific properties and behaviors. These objects can
interact with each other to produce a unified monitoring system. This design philosophy
allows for easier maintenance and scalability, as well as allowing for more efficient data
management.
A Nagios backend refers to the component of Nagios that stores and manages the data
collected by the monitoring process, such as monitoring results, configuration information,
and event history. The backend is usually implemented as a database or a data store, and is
accessed by the Nagios frontend to display the monitoring data. The backend is a crucial
component of Nagios, as it enables the persistence of monitoring data and enables historical
analysis of the monitored systems.
My suggestion is to explain the need for containerization first: containers are used to provide
a consistent computing environment from a developer's laptop to a test environment, and
from a staging environment into production.
Now give a definition of containers, a container consists of an entire runtime environment: an
application, plus all its dependencies, libraries and other binaries, and configuration files
needed to run it, bundled into one package. Containerizing the application platform and its
dependencies removes the differences in OS distributions and underlying infrastructure.
Containers provide real-time provisioning and scalability but VMs provide slow
provisioning
Containers are lightweight when compared to VMs
VMs have limited performance when compared to containers
Containers have better resource utilization compared to VMs
Q3. How exactly are containers (Docker in our case) different from
hypervisor virtualization (vSphere)? What are the benefits?
Given below are some differences. Make sure you include these differences in your answer:
This is a very important question so just make sure you don’t deviate from the topic. I advise
you to follow the below mentioned format:
Docker containers include the application and all of its dependencies but share the kernel
with other containers, running as isolated processes in user space on the host operating
system. Docker containers are not tied to any specific infrastructure: they run on any
computer, on any infrastructure, and in any cloud.
Now explain how to create a Docker container: Docker containers can be created either by
creating a Docker image and then running it, or by using images that are already present on
Docker Hub.
Docker containers are basically runtime instances of Docker images.
The answer to this question is pretty direct. Docker Hub is a cloud-based registry service
which allows you to link to code repositories, build your images and test them, store
manually pushed images, and link to Docker Cloud so you can deploy images to your hosts.
It provides a centralized resource for container image discovery, distribution and change
management, user and team collaboration, and workflow automation throughout the
development pipeline.
You should start this answer by explaining Docker Swarm. It is native clustering for Docker
which turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm
serves the standard Docker API, any tool that already communicates with a Docker daemon
can use Swarm to transparently scale to multiple hosts.
I also suggest you include some supported tools:
Dokku
Docker Compose
Docker Machine
Jenkins
This answer according to me should begin by explaining the use of Dockerfile. Docker can
build images automatically by reading the instructions from a Dockerfile.
Now I suggest you give a small definition of a Dockerfile. A Dockerfile is a text document
that contains all the commands a user could call on the command line to assemble an image.
Using docker build users can create an automated build that executes several command-line
instructions in succession.
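A minimal Dockerfile sketch is shown below; the base image, file names and command are illustrative assumptions:

```dockerfile
# Build with: docker build -t myapp .
FROM python:3.11-slim
WORKDIR /opt/app
# Copy the application into the image
COPY app.py .
# Default command when a container starts from this image
CMD ["python", "app.py"]
```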
You can use JSON instead of YAML for your Compose file. To use a JSON file with
Compose, specify the filename, e.g.:
docker-compose -f docker-compose.json up
Explain how you have used Docker to help rapid deployment. Explain how you have scripted
Docker and used Docker with other tools like Puppet, Chef or Jenkins. If you have no past
practical experience in Docker and have past experience with other tools in similar space, be
honest and explain the same. In this case, it makes sense if you can compare other tools to
Docker in terms of functionality.
I would suggest giving a direct answer to this. We can use a Docker image to create a Docker container with the following command:
docker run -t -i <image name> <command name>
This command creates and starts a container from the named image.
You should also add: if you want to list all containers on a host, running or stopped, along with their status, use the command:
docker ps -a
In order to stop the Docker container you can use the below command:
docker stop <container ID>
Now to restart the Docker container you can use:
docker restart <container ID>
Large web deployments like Google and Twitter, and platform providers such as Heroku and
dotCloud all run on container technology, at a scale of hundreds of thousands or even
millions of containers running in parallel.
I will start this answer by saying that Docker runs on Linux distributions and on cloud platforms, and then mention the cloud platforms below:
Cloud:
Amazon EC2
Google Compute Engine
Microsoft Azure
Rackspace
You can answer this by saying, no I won’t lose my data when the Docker container exits.
Any data that your application writes to disk gets preserved in its container until you
explicitly delete the container. The file system for the container persists even after the
container halts.
A DevOps pipeline can be defined as a set of tools and processes through which the development team and operations team work together. In DevOps automation, CI/CD plays an important role. Looking at the flow of DevOps: first, continuous integration completes; this triggers the next step, continuous delivery; and after continuous delivery, the continuous deployment step is triggered. The connection of all these functions can be described as a pipeline.
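In practice such a pipeline is usually expressed as configuration. The hypothetical GitHub Actions workflow below sketches the CI-then-CD flow described above; the job names and scripts are invented for illustration:

```yaml
# Hypothetical workflow: build and test on every push (CI),
# then deploy only when the main branch passes (CD).
name: ci-cd
on: [push]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build.sh   # compile / package
      - run: ./scripts/test.sh    # continuous integration gate
  deploy:
    needs: build-and-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh  # continuous delivery / deployment step
```

The `needs:` key is what chains the stages together: deployment cannot start until integration and testing have succeeded, which is the pipeline idea in miniature.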
Q18. How to check for Docker Client and Docker Server version?
You can check the versions of both the Docker Client and the Docker Server (also known as the Docker Engine) by running the following command in your terminal:
docker version
The output lists the client and server versions in separate sections. Alternatively, you can run:
docker info
The output of this command also includes the Docker Engine version, along with other details about the installation.
Q19. If you vaguely remember the command and you’d like to confirm it, how
will you get help on that particular command?
You can use the man command or the --help flag to get information on a specific command on most Unix-like systems. For example, if you're trying to recall the syntax for the grep command, you can type "man grep" or "grep --help" in the terminal. This will display the manual page (or a usage summary) for the command, which includes options and usage examples.
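For instance, to refresh your memory on grep without leaving the terminal:

```shell
# Print the one-line usage summary shipped with the command itself.
grep --help | head -n 1

# For the full manual page (opens in a pager; press q to quit):
# man grep
```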
I hope these DevOps interview questions help you crack your interview. If you're searching for a demanding and rewarding career, whether you've already worked in DevOps or are new to the field, the Post Graduate Program in DevOps can teach you what you need to succeed. From the basics to the most advanced techniques, we cover everything.
Got a question for us? Please mention it in the comments section and we will get back to you.