Microsoft Official Course

AZ-400T00
Designing and Implementing Microsoft DevOps Solutions
Disclaimer

 
Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in 
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
 
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
 
The names of manufacturers, products, or URLs are provided for informational purposes only, and
Microsoft makes no representations and warranties, either expressed, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft, and Microsoft is
not responsible for the contents of any linked site, any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
 
© 2019 Microsoft Corporation. All rights reserved.
 
Microsoft and the trademarks listed at http://www.microsoft.com/trademarks1 are trademarks of the
Microsoft group of companies. All other trademarks are property of their respective owners.
 
 

1 http://www.microsoft.com/trademarks

MICROSOFT LICENSE TERMS


MICROSOFT INSTRUCTOR-LED COURSEWARE
These license terms are an agreement between Microsoft Corporation (or based on where you live, one
of its affiliates) and you. Please read them. They apply to your use of the content accompanying this
agreement which includes the media on which you received it, if any. These license terms also apply to
Trainer Content and any updates and supplements for the Licensed Content unless other terms accompa-
ny those items. If so, those terms apply.
BY ACCESSING, DOWNLOADING OR USING THE LICENSED CONTENT, YOU ACCEPT THESE TERMS.
IF YOU DO NOT ACCEPT THEM, DO NOT ACCESS, DOWNLOAD OR USE THE LICENSED CONTENT.
If you comply with these license terms, you have the rights below for each license you acquire.
1. DEFINITIONS.
1. “Authorized Learning Center” means a Microsoft Imagine Academy (MSIA) Program Member,
Microsoft Learning Competency Member, or such other entity as Microsoft may designate from
time to time.
2. “Authorized Training Session” means the instructor-led training class using Microsoft Instruc-
tor-Led Courseware conducted by a Trainer at or through an Authorized Learning Center.
3. “Classroom Device” means one (1) dedicated, secure computer that an Authorized Learning Center
owns or controls that is located at an Authorized Learning Center’s training facilities that meets or
exceeds the hardware level specified for the particular Microsoft Instructor-Led Courseware.
4. “End User” means an individual who is (i) duly enrolled in and attending an Authorized Training
Session or Private Training Session, (ii) an employee of an MPN Member (defined below), or (iii) a
Microsoft full-time employee, a Microsoft Imagine Academy (MSIA) Program Member, or a
Microsoft Learn for Educators – Validated Educator.
5. “Licensed Content” means the content accompanying this agreement which may include the
Microsoft Instructor-Led Courseware or Trainer Content.
6. “Microsoft Certified Trainer” or “MCT” means an individual who is (i) engaged to teach a training
session to End Users on behalf of an Authorized Learning Center or MPN Member, and (ii) current-
ly certified as a Microsoft Certified Trainer under the Microsoft Certification Program.
7. “Microsoft Instructor-Led Courseware” means the Microsoft-branded instructor-led training course
that educates IT professionals, developers, students at an academic institution, and other learners
on Microsoft technologies. A Microsoft Instructor-Led Courseware title may be branded as MOC,
Microsoft Dynamics, or Microsoft Business Group courseware.
8. “Microsoft Imagine Academy (MSIA) Program Member” means an active member of the Microsoft
Imagine Academy Program.
9. “Microsoft Learn for Educators – Validated Educator” means an educator who has been validated
through the Microsoft Learn for Educators program as an active educator at a college, university,
community college, polytechnic or K-12 institution.
10. “Microsoft Learning Competency Member” means an active member of the Microsoft Partner
Network program in good standing that currently holds the Learning Competency status.
11. “MOC” means the “Official Microsoft Learning Product” instructor-led courseware known as
Microsoft Official Course that educates IT professionals, developers, students at an academic
institution, and other learners on Microsoft technologies.
12. “MPN Member” means an active Microsoft Partner Network program member in good standing.
13. “Personal Device” means one (1) personal computer, device, workstation or other digital electronic
device that you personally own or control that meets or exceeds the hardware level specified for
the particular Microsoft Instructor-Led Courseware.
14. “Private Training Session” means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led
Courseware. These classes are not advertised or promoted to the general public and class attend-
ance is restricted to individuals employed by or contracted by the corporate customer.
15. “Trainer” means (i) an academically accredited educator engaged by a Microsoft Imagine Academy
Program Member to teach an Authorized Training Session, (ii) an academically accredited educator
validated as a Microsoft Learn for Educators – Validated Educator, and/or (iii) a MCT.
16. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and
additional supplemental content designated solely for Trainers’ use to teach a training session
using the Microsoft Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint
presentations, trainer preparation guide, train the trainer materials, Microsoft One Note packs,
classroom setup guide and Pre-release course feedback form. To clarify, Trainer Content does not
include any software, virtual hard disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one
copy per user basis, such that you must acquire a license for each individual that accesses or uses the
Licensed Content.
●● 2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
1. If you are a Microsoft Imagine Academy (MSIA) Program Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instruc-
tor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:

1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User who is enrolled in the Authorized Training Session, and only immediately
prior to the commencement of the Authorized Training Session that is the subject matter
of the Microsoft Instructor-Led Courseware being provided, or
2. provide one (1) End User with the unique redemption code and instructions on how they
can access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. provide one (1) Trainer with the unique redemption code and instructions on how they
can access one (1) Trainer Content.
3. For each license you acquire, you must comply with the following:

1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure each End User attending an Authorized Training Session has their own
valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the
Authorized Training Session,
3. you will ensure that each End User provided with the hard-copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject
to the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agree-
ment in a manner that is enforceable under local law prior to their accessing the Micro-
soft Instructor-Led Courseware,
4. you will ensure that each Trainer teaching an Authorized Training Session has their own
valid licensed copy of the Trainer Content that is the subject of the Authorized Training
Session,
5. you will only use qualified Trainers who have in-depth knowledge of and experience with
the Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware
being taught for all your Authorized Training Sessions,
6. you will only deliver a maximum of 15 hours of training per week for each Authorized
Training Session that uses a MOC title, and
7. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer
resources for the Microsoft Instructor-Led Courseware.
2. If you are a Microsoft Learning Competency Member:
1. Each license acquired may only be used to review one (1) copy of the Microsoft Instruc-
tor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Course-
ware is in digital format, you may install one (1) copy on up to three (3) Personal Devices.
You may not install the Microsoft Instructor-Led Courseware on a device you do not own or
control.
2. For each license you acquire on behalf of an End User or MCT, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Authorized Training Session and only immediately prior to
the commencement of the Authorized Training Session that is the subject matter of the
Microsoft Instructor-Led Courseware provided, or
2. provide one (1) End User attending the Authorized Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) MCT with the unique redemption code and instructions on how
they can access one (1) Trainer Content.
3. For each license you acquire, you must comply with the following:
1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure that each End User attending an Authorized Training Session has their
own valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of
the Authorized Training Session,
3. you will ensure that each End User provided with a hard-copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject
to the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agree-
ment in a manner that is enforceable under local law prior to their accessing the Micro-
soft Instructor-Led Courseware,
4. you will ensure that each MCT teaching an Authorized Training Session has their own
valid licensed copy of the Trainer Content that is the subject of the Authorized Training
Session,
5. you will only use qualified MCTs who also hold the applicable Microsoft Certification
credential that is the subject of the MOC title being taught for all your Authorized
Training Sessions using MOC,
6. you will only provide access to the Microsoft Instructor-Led Courseware to End Users,
and
7. you will only provide access to the Trainer Content to MCTs.
3. If you are a MPN Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instruc-
tor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:

1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Private Training Session, and only immediately prior to the
commencement of the Private Training Session that is the subject matter of the Micro-
soft Instructor-Led Courseware being provided, or
2. provide one (1) End User who is attending the Private Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) Trainer who is teaching the Private Training Session with the
unique redemption code and instructions on how they can access one (1) Trainer
Content.
3. For each license you acquire, you must comply with the following:

1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure that each End User attending a Private Training Session has their own
valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the
Private Training Session,
3. you will ensure that each End User provided with a hard copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject
to the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agree-
ment in a manner that is enforceable under local law prior to their accessing the Micro-
soft Instructor-Led Courseware,
4. you will ensure that each Trainer teaching a Private Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Private Training Session,
5. you will only use qualified Trainers who hold the applicable Microsoft Certification
credential that is the subject of the Microsoft Instructor-Led Courseware being taught
for all your Private Training Sessions,
6. you will only use qualified MCTs who hold the applicable Microsoft Certification creden-
tial that is the subject of the MOC title being taught for all your Private Training Sessions
using MOC,
7. you will only provide access to the Microsoft Instructor-Led Courseware to End Users,
and
8. you will only provide access to the Trainer Content to Trainers.
4. If you are an End User:
For each license you acquire, you may use the Microsoft Instructor-Led Courseware solely for
your personal training use. If the Microsoft Instructor-Led Courseware is in digital format, you
may access the Microsoft Instructor-Led Courseware online using the unique redemption code
provided to you by the training provider and install and use one (1) copy of the Microsoft
Instructor-Led Courseware on up to three (3) Personal Devices. You may also print one (1) copy
of the Microsoft Instructor-Led Courseware. You may not install the Microsoft Instructor-Led
Courseware on a device you do not own or control.
5. If you are a Trainer:
1. For each license you acquire, you may install and use one (1) copy of the Trainer Content in
the form provided to you on one (1) Personal Device solely to prepare and deliver an
Authorized Training Session or Private Training Session, and install one (1) additional copy
on another Personal Device as a backup copy, which may be used only to reinstall the
Trainer Content. You may not install or use a copy of the Trainer Content on a device you do
not own or control. You may also print one (1) copy of the Trainer Content solely to prepare
for and deliver an Authorized Training Session or Private Training Session.
2. If you are an MCT, you may customize the written portions of the Trainer Content that are
logically associated with instruction of a training session in accordance with the most recent
version of the MCT agreement.
3. If you elect to exercise the foregoing rights, you agree to comply with the following: (i)
customizations may only be used for teaching Authorized Training Sessions and Private
Training Sessions, and (ii) all customizations will comply with this agreement. For clarity, any
use of “customize” refers only to changing the order of slides and content, and/or not using
all the slides or content; it does not mean changing or modifying any slide or content.
●● 2.2 Separation of Components. The Licensed Content is licensed as a single unit and you
may not separate its components and install them on different devices.
●● 2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights
above, you may not distribute any Licensed Content or any portion thereof (including any permit-
ted modifications) to any third parties without the express written permission of Microsoft.
●● 2.4 Third Party Notices. The Licensed Content may include third party code that Micro-
soft, not the third party, licenses to you under this agreement. Notices, if any, for the third party
code are included for your information only.
●● 2.5 Additional Terms. Some Licensed Content may contain components with additional
terms, conditions, and licenses regarding its use. Any non-conflicting terms in those conditions
and licenses also apply to your use of that respective component and supplement the terms
described in this agreement.
3. LICENSED CONTENT BASED ON PRE-RELEASE TECHNOLOGY. If the Licensed Content’s subject
matter is based on a pre-release version of Microsoft technology (“Pre-release”), then in addition to
the other provisions in this agreement, these terms also apply:
1. Pre-Release Licensed Content. This Licensed Content’s subject matter is based on the Pre-release
version of the Microsoft technology. The technology may not work the way a final version of the
technology will and we may change the technology for the final version. We also may not release a
final version. Licensed Content based on the final version of the technology may not contain the
same information as the Licensed Content based on the Pre-release version. Microsoft is under no
obligation to provide you with any further content, including any Licensed Content based on the
final version of the technology.
2. Feedback. If you agree to give feedback about the Licensed Content to Microsoft, either directly
or through its third party designee, you give to Microsoft without charge, the right to use, share
and commercialize your feedback in any way and for any purpose. You also give to third parties,
without charge, any patent rights needed for their products, technologies and services to use or
interface with any specific parts of a Microsoft technology, Microsoft product, or service that
includes the feedback. You will not give feedback that is subject to a license that requires Micro-
soft to license its technology, technologies, or products to third parties because we include your
feedback in them. These rights survive this agreement.
3. Pre-release Term. If you are a Microsoft Imagine Academy Program Member, Microsoft Learn-
ing Competency Member, MPN Member, Microsoft Learn for Educators – Validated Educator, or
Trainer, you will cease using all copies of the Licensed Content on the Pre-release technology upon
(i) the date which Microsoft informs you is the end date for using the Licensed Content on the
Pre-release technology, or (ii) sixty (60) days after the commercial release of the technology that is
the subject of the Licensed Content, whichever is earlier (“Pre-release term”). Upon expiration or
termination of the Pre-release term, you will irretrievably delete and destroy all copies of the
Licensed Content in your possession or under your control.
4. SCOPE OF LICENSE. The Licensed Content is licensed, not sold. This agreement only gives you some
rights to use the Licensed Content. Microsoft reserves all other rights. Unless applicable law gives you
more rights despite this limitation, you may use the Licensed Content only as expressly permitted in
this agreement. In doing so, you must comply with any technical limitations in the Licensed Content
that only allow you to use it in certain ways. Except as expressly permitted in this agreement, you
may not:
●● access or allow any individual to access the Licensed Content if they have not acquired a valid
license for the Licensed Content,
●● alter, remove or obscure any copyright or other protective notices (including watermarks), brand-
ing or identifications contained in the Licensed Content,
●● modify or create a derivative work of any Licensed Content,
●● publicly display, or make the Licensed Content available for others to access or use,
●● copy, print, install, sell, publish, transmit, lend, adapt, reuse, link to or post, make available or
distribute the Licensed Content to any third party,
●● work around any technical limitations in the Licensed Content, or
●● reverse engineer, decompile, remove or otherwise thwart any protections or disassemble the
Licensed Content except and only to the extent that applicable law expressly permits, despite this
limitation.
5. RESERVATION OF RIGHTS AND OWNERSHIP. Microsoft reserves all rights not expressly granted to
you in this agreement. The Licensed Content is protected by copyright and other intellectual property
laws and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property
rights in the Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regula-
tions. You must comply with all domestic and international export laws and regulations that apply to
the Licensed Content. These laws include restrictions on destinations, end users and end use. For
additional information, see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is provided “as is”, we are not obligated to
provide support services for it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you
fail to comply with the terms and conditions of this agreement. Upon termination of this agreement
for any reason, you will immediately stop all use of and delete and destroy all copies of the Licensed
Content in your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible
for the contents of any third party sites, any links contained in third party sites, or any changes or
updates to third party sites. Microsoft is not responsible for webcasting or any other form of trans-
mission received from any third party sites. Microsoft is providing these links to third party sites to
you only as a convenience, and the inclusion of any link does not imply an endorsement by Microsoft
of the third party site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
11. APPLICABLE LAW.
1. United States. If you acquired the Licensed Content in the United States, Washington state law
governs the interpretation of this agreement and applies to claims for breach of it, regardless of
conflict of laws principles. The laws of the state where you live govern all other claims, including
claims under state consumer protection laws, unfair competition laws, and in tort.
2. Outside the United States. If you acquired the Licensed Content in any other country, the laws of
that country apply.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the
laws of your country. You may also have rights with respect to the party from whom you acquired the
Licensed Content. This agreement does not change your rights under the laws of your country if the
laws of your country do not permit it to do so.
13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS
AVAILABLE." YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE AFFILIATES GIVE NO
EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY HAVE ADDITIONAL CON-
SUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO
THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND ITS RESPECTIVE AFFILI-
ATES EXCLUDE ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICU-
LAR PURPOSE, AND NON-INFRINGEMENT.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO
US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST
PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
This limitation applies to


●● anything related to the Licensed Content, services, content (including code) on third party Internet
sites or third-party programs; and
●● claims for breach of contract, breach of warranty, guarantee or condition, strict liability, negligence,
or other tort to the extent permitted by applicable law.
It also applies even if Microsoft knew or should have known about the possibility of the damages. The
above limitation or exclusion may not apply to you because your country may not allow the exclusion
or limitation of incidental, consequential, or other damages.
Please note: As this Licensed Content is distributed in Quebec, Canada, some of the clauses in this
agreement are provided below in French.
Remarque : Ce contenu sous licence étant distribué au Québec, Canada, certaines des clauses
dans ce contrat sont fournies ci-dessous en français.
EXONÉRATION DE GARANTIE. Le contenu sous licence visé par une licence est offert « tel quel ». Toute
utilisation de ce contenu sous licence est à votre seule risque et péril. Microsoft n’accorde aucune autre
garantie expresse. Vous pouvez bénéficier de droits additionnels en vertu du droit local sur la protection
des consommateurs, que ce contrat ne peut modifier. Là où elles sont permises par le droit local, les
garanties implicites de qualité marchande, d’adéquation à un usage particulier et d’absence de contre-
façon sont exclues.
LIMITATION DES DOMMAGES-INTÉRÊTS ET EXCLUSION DE RESPONSABILITÉ POUR LES DOMMAG-
ES. Vous pouvez obtenir de Microsoft et de ses fournisseurs une indemnisation en cas de dommages
directs uniquement à hauteur de 5,00 $ US. Vous ne pouvez prétendre à aucune indemnisation pour les
autres dommages, y compris les dommages spéciaux, indirects ou accessoires et pertes de bénéfices.
Cette limitation concerne:
●● tout ce qui est relié au le contenu sous licence, aux services ou au contenu (y compris le code)
figurant sur des sites Internet tiers ou dans des programmes tiers; et.
●● les réclamations au titre de violation de contrat ou de garantie, ou au titre de responsabilité stricte, de
négligence ou d’une autre faute dans la limite autorisée par la loi en vigueur.
Elle s’applique également, même si Microsoft connaissait ou devrait connaître l’éventualité d’un tel
dommage. Si votre pays n’autorise pas l’exclusion ou la limitation de responsabilité pour les dommages
indirects, accessoires ou de quelque nature que ce soit, il se peut que la limitation ou l’exclusion ci-dessus
ne s’appliquera pas à votre égard.
EFFET JURIDIQUE. Le présent contrat décrit certains droits juridiques. Vous pourriez avoir d’autres droits
prévus par les lois de votre pays. Le présent contrat ne modifie pas les droits que vous confèrent les lois
de votre pays si celles-ci ne le permettent pas.
Revised April 2019
Contents

■■ Module 0 Welcome  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
Start here  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  1
■■ Module 1 Get started on a DevOps transformation journey  . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  9
Introduction to DevOps  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  9
Choose the right project  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  25
Describe team structures  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  31
Choose the DevOps tools  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  40
Plan Agile with GitHub Projects and Azure Boards  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  46
Introduction to source control  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  66
Describe types of source control systems  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  71
Work with Azure Repos and GitHub  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  87
Labs  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  94
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  96
■■ Module 2 Development for enterprise DevOps  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  107
Structure your Git Repo  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  107
Manage Git branches and workflows  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  111
Collaborate with pull requests in Azure Repos  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  129
Explore Git hooks  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  139
Plan foster inner source  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  144
Manage Git repositories  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  153
Identify technical debt  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  162
Lab  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  170
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  171
■■ Module 3 Implement CI with Azure Pipelines and GitHub Actions  . . . . . . . . . . . . . . . . . . . . . . .  181
Explore Azure Pipelines  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  181
Manage Azure Pipeline agents and pools  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  188
Describe pipelines and concurrency  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  198
Explore Continuous integration  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  206
Implement a pipeline strategy  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  212
Integrate with Azure Pipelines  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  218
Introduction to GitHub Actions  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  241
Learn continuous integration with GitHub Actions  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  250
Design a container build strategy  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  258
Labs  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  267
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  271
■■ Module 4 Design and implement a release strategy  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  283
Introduction to continuous delivery  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  283
Create a release pipeline  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  291
Explore release recommendations  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  312
Provision and test environments  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  331
Manage and modularize tasks and templates  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  348
Automate inspection of health  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  360
Labs  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  387
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  389
■■ Module 5 Implement a secure continuous deployment using Azure Pipelines  . . . . . . . . . . . . .  401
Introduction to deployment patterns  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  401
Implement blue-green deployment and feature toggles  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  406
Implement canary releases and dark launching  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  424
Implement A-B testing and progressive exposure deployment  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  428
Integrate with identity management systems  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  436
Manage application configuration data  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  440
Labs  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  453
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  456
■■ Module 6 Manage infrastructure as code using Azure and DSC  . . . . . . . . . . . . . . . . . . . . . . . . . .  467
Explore infrastructure as code and configuration management  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  467
Create Azure resources using Azure Resource Manager templates  . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  474
Create Azure resources by using Azure CLI  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  486
Explore Azure Automation with DevOps  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  492
Implement Desired State Configuration (DSC)  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  516
Implement Bicep  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  529
Labs  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  546
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  547
■■ Module 7 Implement security and validate code bases for compliance  . . . . . . . . . . . . . . . . . . .  557
Introduction to Secure DevOps  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  557
Implement open-source software  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  571
Software Composition Analysis  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  577
Static analyzers  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  587
OWASP and Dynamic Analyzers  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  589
Security Monitoring and Governance  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  594
Labs  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  605
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  607
■■ Module 8 Design and implement a dependency management strategy  . . . . . . . . . . . . . . . . . . .  617
Explore package dependencies  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  617
Understand package management  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  622
Migrate, consolidate, and secure artifacts  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  632
Implement a versioning strategy  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  638
Introduction to GitHub Packages  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  646
Lab  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  655
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  656
■■ Module 9 Implement continuous feedback  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  663
Implement tools to track usage and flow  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  663
Develop monitor and status dashboards  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  694
Share knowledge within teams  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  700
Design processes to automate application analytics  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  710
Manage alerts, Blameless retrospectives and a just culture  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  716
Labs  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  725
Module review and takeaways  . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .  728
Module 0 Welcome

Start here
Microsoft DevOps curriculum
Welcome to the Designing and Implementing Microsoft DevOps Solutions course. This course will
help you prepare for the AZ-400, Designing and Implementing Microsoft DevOps Solutions1 certifica-
tion exam.

The DevOps certification exam is for DevOps professionals who combine people, processes, and technol-
ogies to continuously deliver valuable products and services that meet end-user needs and business
objectives. DevOps professionals streamline delivery by optimizing practices, improving communications
and collaboration, and creating automation. They design and implement strategies for application code
and infrastructure that allow for continuous integration, continuous testing, continuous delivery, and
continuous monitoring and feedback.
Exam candidates must be proficient with Agile practices. They must be familiar with both Azure
administration and Azure development, and expert in at least one of these areas. DevOps professionals must be able to
design and implement DevOps practices for version control, compliance, infrastructure as code, configu-
ration management, build, release, and testing by using Azure technologies.
There are five exam study areas.

AZ-400 Domain Area Weight
Configure Processes and Communications. 10 - 15%
Design and Implement Source Control. 15 - 20%
Design and Implement Build and Release Pipelines. 40 - 45%
Develop a Security and Compliance Plan. 10 - 15%
Implement an Instrumentation Strategy. 10 - 15%

1 https://docs.microsoft.com/en-us/learn/certifications/exams/AZ-400

About this course


This course provides the knowledge and skills to design and implement DevOps processes and practices.
Students will learn how to plan for DevOps, use source control, scale Git for an enterprise, consolidate
artifacts, design a dependency management strategy, manage secrets, implement continuous integration,
implement a container build strategy, design a release strategy, set up a release management workflow,
implement a deployment pattern, and optimize feedback mechanisms.
Level: Advanced
Audience
Students in this course are interested in designing and implementing DevOps processes or in passing the
Microsoft Azure DevOps Solutions certification exam.
Prerequisites
Successful learners will have prior knowledge and understanding of:
●● Cloud computing concepts, including an understanding of PaaS, SaaS, and IaaS implementations.
●● Both Azure administration and Azure development with proven expertise in at least one of these
areas.
●● Version control, Agile software development, and core software development principles. It would be
helpful to have experience in an organization that delivers software.
If you are new to Azure and cloud computing, consider one of the following resources:
●● Free online: Azure Fundamentals2
●● Instructor-led course: AZ-900: Azure Fundamentals3
If you are new to Azure Administration, consider taking:
●● Free online: Prerequisites for Azure Administrators4
●● Instructor-led courses: AZ-104: Microsoft Azure Administrator5 and AZ-010: Azure Administra-
tion for AWS SysOps6
If you are new to Azure Developer, consider taking:
●● Free online: Create serverless applications7
●● Instructor-led courses: AZ-204: Developing Solutions for Microsoft Azure8 and AZ-020: Microsoft
Azure Solutions for AWS Developers9
You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps10.
●● If you already have your organization created, use the Azure DevOps Demo Generator (https://
azuredevopsdemogenerator.azurewebsites.net) and create a new Team Project called “Parts Unlimited”
using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project -
Azure DevOps11. A command-line alternative is sketched below.

2 https://docs.microsoft.com/en-us/learn/paths/azure-fundamentals/
3 https://docs.microsoft.com/en-us/learn/certifications/courses/az-900t01
4 https://docs.microsoft.com/en-us/learn/paths/az-104-administrator-prerequisites/
5 https://docs.microsoft.com/en-us/learn/certifications/courses/az-104t00
6 https://docs.microsoft.com/en-us/learn/certifications/courses/az-010t00
7 https://docs.microsoft.com/en-us/learn/paths/create-serverless-applications/
8 https://docs.microsoft.com/en-us/learn/certifications/courses/az-204t00
9 https://docs.microsoft.com/en-us/learn/certifications/courses/az-020t00
10 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
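If you prefer the command line, the following is a minimal sketch using the Azure DevOps extension for the Azure CLI. It assumes the Azure CLI is already installed and that your organization already exists (organizations can only be created through the web portal); <your-organization> is a placeholder for your own organization name.

   # Install the Azure DevOps extension for the Azure CLI (one time only)
   az extension add --name azure-devops

   # Sign in, then create the Team Project in your existing organization
   az login
   az devops project create --name "Parts Unlimited" --organization https://dev.azure.com/<your-organization>

Note that the "PartsUnlimited" template is only available through the Demo Generator web tool; the CLI creates an empty project.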
You need to create a GitHub account at GitHub.com and a project for some exercises. If you don't have it
yet, see:
●● Join GitHub · GitHub12
●● If you already have your GitHub account, create a new repository. See Creating a new repository -
GitHub Docs13. A command-line sketch follows below.
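If you already use the GitHub CLI, the following minimal sketch is an alternative to the web flow described above; the repository name my-devops-exercises is only an example.

   # Authenticate the GitHub CLI against GitHub.com (one time only)
   gh auth login

   # Create a new private repository and clone it locally
   gh repo create my-devops-exercises --private --clone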
Expected learning
After completing this course, students will be able to:
●● Plan for the transformation with shared goals and timelines
●● Select a project and identify project metrics and Key Performance Indicators (KPIs)
●● Create a team and agile organizational structure
●● Design a tool integration strategy
●● Design a license management strategy (e.g., Azure DevOps and GitHub users)
●● Design a strategy for end-to-end traceability from work items to working software
●● Design an authentication and access strategy
●● Design a strategy for integrating on-premises and cloud resources
●● Describe the benefits of using Source Control
●● Describe Azure Repos and GitHub
●● Migrate from TFVC to Git
●● Manage code quality, including technical debt, SonarCloud, and other tooling solutions
●● Build organizational knowledge on code quality
●● Explain how to structure Git repos
●● Describe Git branching workflows
●● Leverage pull requests for collaboration and code reviews
●● Leverage Git hooks for automation
●● Use Git to foster inner source across the organization
●● Explain the role of Azure Pipelines and its components
●● Configure Agents for use in Azure Pipelines
●● Explain why continuous integration matters
●● Implement continuous integration using Azure Pipelines
●● Design processes to measure end-user satisfaction and analyze user feedback
●● Design processes to automate application analytics
●● Manage alerts and reduce meaningless and non-actionable alerts

11 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
12 https://github.com/signup
13 https://docs.github.com/repositories/creating-and-managing-repositories/creating-a-new-repository

●● Carry out blameless retrospectives and create a just culture


●● Define an infrastructure and configuration strategy and appropriate toolset for a release pipeline and
application infrastructure
●● Implement compliance and security in your application infrastructure
●● Describe the potential challenges with integrating open-source software
●● Inspect open-source software packages for security and license compliance
●● Manage organizational security and compliance policies
●● Integrate license and vulnerability scans into build and deployment pipelines
●● Configure build pipelines to access package security and license ratings

Course syllabus
This course includes content that will help you prepare for the Microsoft DevOps Solutions certification
exam. Other content is included to ensure you have a complete picture of DevOps. The course content
consists of graphics, reference links, module review questions, and optional hands-on labs.
Module 1 – Get started on a DevOps transformation journey
●● Lesson 1: Introduction to DevOps
●● Lesson 2: Choose the right project
●● Lesson 3: Describe team structures
●● Lesson 4: Choose the DevOps tools
●● Lesson 5: Plan Agile with GitHub Projects and Azure Boards
●● Lesson 6: Introduction to source control
●● Lesson 7: Describe types of source control systems
●● Lesson 8: Work with Azure Repos and GitHub
●● Labs

●● Lab 01: Agile planning and portfolio management with Azure Boards
●● Lab 02: Version controlling with Git in Azure Repos
Module 2 – Development for enterprise DevOps
●● Lesson 1: Structure your Git Repo
●● Lesson 2: Manage Git branches and workflows
●● Lesson 3: Collaborate with pull requests in Azure Repos
●● Lesson 4: Explore Git hooks
●● Lesson 5: Plan foster inner source
●● Lesson 6: Manage Git repositories
●● Lesson 7: Identify technical debt

●● Lab

●● Lab 03: Version controlling with Git in Azure Repos


Module 3 – Implement CI with Azure Pipelines and GitHub Actions
●● Lesson 1: Explore Azure Pipelines
●● Lesson 2: Manage Azure Pipeline agents and pools
●● Lesson 3: Describe pipelines and concurrency
●● Lesson 4: Explore Continuous integration
●● Lesson 5: Implement a pipeline strategy
●● Lesson 6: Integrate with Azure Pipelines
●● Lesson 7: Introduction to GitHub Actions
●● Lesson 8: Learn continuous integration with GitHub Actions
●● Lesson 9: Design a container build strategy
●● Labs

●● Lab 04: Configuring agent pools and understanding pipeline styles


●● Lab 05: Enabling continuous integration with Azure Pipelines
●● Lab 06: Integrating external source control with Azure Pipelines
●● Lab 07: Implementing GitHub Actions by using DevOps Starter
●● Lab 08: Deploying Docker Containers to Azure App Service web apps
Module 4 – Design and implement a release strategy
●● Lesson 1: Introduction to continuous delivery
●● Lesson 2: Create a release pipeline
●● Lesson 3: Explore release recommendations
●● Lesson 4: Provision and test environments
●● Lesson 5: Manage and modularize tasks and templates
●● Lesson 6: Automate inspection of health
●● Labs

●● Lab 09: Control Deployments using Release Gates
●● Lab 10: Creating a release dashboard
Module 5 - Implement a secure continuous deployment using Azure Pipelines
●● Lesson 1: Introduction to deployment patterns
●● Lesson 2: Implement blue-green deployment and feature toggles
●● Lesson 3: Implement canary releases and dark launching
●● Lesson 4: Implement A/B testing and progressive exposure deployment
●● Lesson 5: Integrate with identity management systems
●● Lesson 6: Manage application configuration data
●● Labs

●● Lab 11: Configuring pipelines as code with YAML
●● Lab 12: Setting up and running functional tests
●● Lab 13: Integrate Azure Key Vault with Azure DevOps
Module 6 - Manage infrastructure as code using Azure and DSC
●● Lesson 1: Explore infrastructure as code and configuration management
●● Lesson 2: Create Azure resources using Azure Resource Manager templates
●● Lesson 3: Create Azure resources by using Azure CLI
●● Lesson 4: Explore Azure Automation with DevOps
●● Lesson 5: Implement Desired State Configuration (DSC)
●● Lesson 6: Implement Bicep
●● Lab

●● Lab 14: Azure deployments using Azure Resource Manager templates


Module 7 - Implement security and validate code bases for compliance
●● Lesson 1: Introduction to Secure DevOps
●● Lesson 2: Implement open-source software
●● Lesson 3: Software Composition Analysis
●● Lesson 4: Static analyzers
●● Lesson 5: OWASP and Dynamic Analyzers
●● Lesson 6: Security Monitoring and Governance
●● Labs

●● Lab 15: Implement security and compliance in Azure Pipelines


●● Lab 16: Managing technical debt with SonarQube and Azure DevOps
Module 8 - Design and implement a dependency management strategy
●● Lesson 1: Explore package dependencies
●● Lesson 2: Understand package management
●● Lesson 3: Migrate, consolidate, and secure artifacts
●● Lesson 4: Implement a versioning strategy
●● Lesson 5: Introduction to GitHub Packages

●● Lab

●● Lab 17: Package management with Azure Artifacts


Module 9 - Implement continuous feedback
●● Lesson 1: Implement tools to track usage and flow
●● Lesson 2: Develop monitor and status dashboards
●● Lesson 3: Share knowledge within teams
●● Lesson 4: Design processes to automate application analytics
●● Lesson 5: Manage alerts, Blameless retrospectives, and a just culture
●● Labs

●● Lab 18: Monitoring application performance with Application Insights


●● Lab 19: Integration between Azure DevOps and Microsoft Teams
●● Lab 20: Sharing Team Knowledge using Azure Project Wikis

Lab 00: Validate Lab Environment


Lab overview
In this lab, you will set up the lab environment for the rest of the course.
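As a quick sanity check before starting the labs, you can verify from a terminal that the main tools used throughout the course are available. This is only a suggested check, not part of the official lab instructions:

   # Verify the tooling used throughout the course is installed
   git --version
   az --version
   az extension list --output table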

Lab updates
The labs are updated regularly. For the latest information, visit:

14 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Module 1 Get started on a DevOps transformation journey

Introduction to DevOps
Introduction
"DevOps is the union of people, process, and products to enable continuous delivery of value to
our end users." - According to Donovan Brown in What is DevOps?1
The DevOps learning paths will help you prepare for a DevOps journey. You'll learn the main
characteristics of the DevOps process and the tools and people involved during the lifecycle. The paths
also prepare you for the Microsoft DevOps Solutions certification exam and include other content to
ensure you have a complete picture of DevOps. The module's content consists of graphics, reference
links, module review questions, and optional hands-on labs.
You'll learn:
●● How to plan for DevOps.
●● Use source control.
●● Scale Git for an enterprise.
●● Combine artifacts.
●● Design a dependency management strategy.
●● Manage secrets.
●● Implement continuous integration.
●● Implement a container build strategy.
●● Design a release strategy.
●● Set up a release management workflow.
●● Implement a deployment pattern.

1 https://www.donovanbrown.com/post/what-is-devops

●● Optimize feedback mechanisms.


Plan before you act. This module will help you understand what DevOps is and how to plan for a DevOps
transformation journey.

What is the DevOps transformation journey?


The DevOps transformation journey is a series of 10 learning paths. It familiarizes you with Azure DevOps
and GitHub and with their many services, features, and integrations with tools that support your DevOps
process.

Why should I take the DevOps learning path?


These modules are for people interested in designing and implementing DevOps processes and in
preparing for the AZ-400 - Designing and Implementing Microsoft DevOps Solutions2 certification
exam.
The certification exam is for DevOps professionals who combine people, processes, and technologies to
continuously deliver valuable products and services that meet end-user needs and business goals.
DevOps professionals streamline delivery by optimizing practices, improving communications and
collaboration, and creating automation.
They design and implement application code and infrastructure strategies that allow continuous integration, testing, delivery, monitoring, and feedback.
Exam candidates must be proficient with Agile practices. They must be familiar with Azure administration and Azure development, and be experts in at least one of these areas.
DevOps professionals must design and implement DevOps practices for version control, compliance, infrastructure as code, configuration management, build, release, and testing, using Azure technologies.
There are five domain areas.

AZ-400 Domain Area | Weight
Configure Processes and Communications | 13%
Design and Implement Source Control | 19%
Design and Implement Build and Release Pipelines | 42%
Develop a Security and Compliance Plan | 14%
Implement an Instrumentation Strategy | 13%

Learning objectives
After completing this module, students and professionals can:
●● Plan for the transformation with shared goals and timelines.
●● Select a project and identify project metrics and Key Performance Indicators (KPIs).
●● Create a team and agile organizational structure.
●● Design a tool integration strategy.
●● Design a license management strategy (for example, Azure DevOps and GitHub users).
●● Design a plan for end-to-end traceability from work items to working software.
●● Design an authentication and access strategy.
●● Design a strategy for integrating on-premises and cloud resources.

2 https://docs.microsoft.com/en-us/learn/certifications/exams/az-400

Prerequisites
Successful learners will have prior knowledge and understanding of:
●● Cloud computing concepts, including an understanding of PaaS, SaaS, and IaaS implementations.
●● Azure administration and Azure development with proven expertise in at least one of these areas.
●● Version control, Agile software development, and core software development principles. It would be
helpful to have experience in an organization that delivers software.
If you're new to Azure and cloud computing, consider one of the following resources:
●● Free online: Azure Fundamentals3.
●● Instructor-led course: AZ-900: Azure Fundamentals4.
If you're new to Azure Administration, consider taking:
●● Free online: Prerequisites for Azure Administrators5.
●● Instructor-led courses: AZ-104: Microsoft Azure Administrator6 and AZ-010: Azure Administra-
tion for AWS SysOps7.
If you're new to Azure Developer, consider taking:
●● Free online: Create serverless applications8.
●● Instructor-led courses: AZ-204: Developing Solutions for Microsoft Azure9 and AZ-020: Microsoft
Azure Solutions for AWS Developers10.
You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps11.
●● If you already have your organization created, use the Azure DevOps Demo Generator [https://azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called "Parts Unlimited" using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project - Azure DevOps12.
You need to create a GitHub account at GitHub.com and a project for some exercises. If you don't have it
yet, see:
●● Join GitHub · GitHub13
●● If you already have your GitHub account, create a new repository Creating a new repository -
GitHub Docs14.

3 https://docs.microsoft.com/en-us/learn/paths/az-900-describe-cloud-concepts/
4 https://docs.microsoft.com/en-us/learn/certifications/courses/az-900t01
5 https://docs.microsoft.com/en-us/learn/paths/az-104-administrator-prerequisites/
6 https://docs.microsoft.com/en-us/learn/certifications/courses/az-104t00
7 https://docs.microsoft.com/en-us/learn/certifications/courses/az-010t00
8 https://docs.microsoft.com/en-us/learn/paths/create-serverless-applications/
9 https://docs.microsoft.com/en-us/learn/certifications/courses/az-204t00
10 https://docs.microsoft.com/en-us/learn/certifications/courses/az-020t00
11 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
12 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
13 https://github.com/signup
14 https://docs.github.com/repositories/creating-and-managing-repositories/creating-a-new-repository

What is DevOps?
DevOps, a contraction of "Dev" and "Ops," refers to replacing siloed Development and Operations. The idea is to create multidisciplinary teams that work together with shared and efficient practices and tools.
Essential DevOps practices include agile planning, continuous integration, continuous delivery, and
monitoring of applications. DevOps is a constant journey.

Understand your cycle time


Let us start with a basic assumption about software development. We will describe it with the OODA
(Observe, Orient, Decide, Act) loop. Originally designed to keep fighter pilots from being shot out of the
sky, the OODA loop is an excellent way to think about staying ahead of your competitors. You start with
observing business, market, needs, current user behavior, and available telemetry data. Then you orient
with the enumeration of options for what you can deliver, perhaps with experiments. Next, you decide
what to pursue, and you act by delivering working software to real users. All of this occurs within some cycle time.

Become data-informed
We recommend you use data to inform what to do in your next cycle. Many experience reports tell us
that roughly one-third of the deployments will have negative business results. Approximately one-third
will have positive results, and one-third will make no difference. Fail fast on changes that do not advance the business and double down on outcomes that support the business. This approach is sometimes called pivot or persevere.

Strive for validated learning


How quickly you can fail fast or double down is determined by how long that loop takes, or in lean terms, by your cycle time. Your cycle time determines how quickly you can gather feedback to determine what happens in the next loop. The feedback you collect with each cycle should be factual, actionable data. We call it validated learning.

Shorten your cycle time


When you adopt DevOps practices, you shorten your cycle time by:
●● Working in smaller batches.
●● Using more automation.
●● Hardening your release pipeline.
●● Improving your telemetry.
●● Deploying more frequently.

Optimize validated learning


The more frequently you deploy, the more you can experiment, the more opportunity you have to pivot or persevere, and the more validated learning you gain each cycle. This acceleration in validated learning is the value of the improvement. Think of it as the sum of the progress you achieve and the failures you avoid.

Explore the DevOps journey


Remember, the goal is to shorten cycle time. Start with the release pipeline. How long does it take you to
deploy a change of one line of code or configuration? Ultimately, that is the brake on your velocity.
1. Continuous Integration drives the ongoing merging and testing of code, which leads to finding
defects early. Other benefits include less time wasted on fighting merge issues and rapid feedback for
development teams.

2. Continuous Delivery of software solutions to production and testing environments helps organizations quickly fix bugs and respond to ever-changing business requirements.

3. Version Control, usually with a Git-based repository, enables teams located anywhere in the world to communicate effectively during daily development activities and to integrate with software development tools for monitoring activities such as deployments.

4. Use Agile planning and lean project management techniques to:
●● Plan and isolate work into sprints.
●● Manage team capacity and help teams quickly adapt to changing business needs.
●● Meet a DevOps Definition of Done: working software collecting telemetry against the intended business goals.

5. Monitoring and Logging of running applications, including production environments, for application health and customer usage. It helps organizations create a hypothesis and quickly validate or disprove strategies. Rich data is captured and stored in various logging formats.

6. Public and Hybrid Clouds have made the impossible easy. The cloud has removed traditional bottlenecks and helped commoditize infrastructure. You can use Infrastructure as a Service (IaaS) to lift and shift your existing apps or Platform as a Service (PaaS) to gain unprecedented productivity. The cloud gives you a data center without limits.

7. Infrastructure as Code (IaC): Enables the automation and validation of the creation and teardown of environments to help with delivering secure and stable application hosting platforms. See the sketch after this list.

8. Use Microservices architecture to isolate business use cases into small reusable services that communicate via interface contracts. This architecture enables scalability and efficiency.

9. Containers are the next evolution in virtualization. They are much more lightweight than virtual machines, allow much faster hydration, and are easily configured through files.
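As a minimal sketch of the IaC idea from item 7, the following Python snippet creates (and can tear down) an environment's resource group with the Azure SDK. It assumes the azure-identity and azure-mgmt-resource packages are installed, that you're already signed in, and that the subscription ID is a placeholder:

```python
# A minimal IaC sketch using the Azure SDK for Python (placeholder values).
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

# Creating (or updating) a resource group is idempotent, so the same
# script can both stand up and re-validate an environment.
rg = client.resource_groups.create_or_update(
    "demo-environment-rg", {"location": "eastus"}
)
print(rg.name, rg.location)

# Teardown is a single call, which is what makes automated
# create/teardown cycles practical (commented out to avoid accidents).
# client.resource_groups.begin_delete("demo-environment-rg").result()
```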

DevOps may hurt at first


If it hurts, do it more often. Just like going to the gym, adopting new practices is likely to hurt at first. The
more often you exercise the new techniques, the easier they will become.
Like training at the gym, where you first exercise large muscles before small muscles, adopt practices that
have the most significant impact. Cross-train to develop synergy.

Note: The source article is What is DevOps?15.

Identify transformation teams


Unless you're building an entirely new organization, one of the significant challenges with any DevOps transformation project is taking actions that conflict, at least in some way, with ongoing business activities.
The first challenge is the availability of staff. If the staff members leading the transformation project are also involved in existing day-to-day work within the organization, it will be challenging for them to focus on the transformation when their current role directly impacts customer outcomes.
We all know that desperate situations involving customers will always win over a long-term project like a DevOps transformation.
Another issue is how the organization currently operates: existing processes and procedures were implemented to support current business outcomes.
The disruption required for a true DevOps transformation will usually challenge existing processes and procedures. Doing that is often difficult.
In the book "Beyond the Idea: How to Execute Innovation," Dr. Vijay Govindarajan and Dr. Chris Trimble researched what is involved in allowing innovation to occur in organizations.

15 https://docs.microsoft.com/azure/devops/learn/what-is-devops

They noted that when innovation succeeds, it is often despite the existing organizational processes, and concluded that it only works when a separate team is created to pursue the transformation.
For DevOps transformations, the separate team should be composed of staff members focused on and measured by the transformation outcomes, not involved in the day-to-day operational work. The team might also include some external experts who can fill knowledge gaps and advise on processes that are new to the existing staff members.
Ideally, the staff members recruited should already be well regarded throughout the organization. As a
group, they should offer a broad knowledge base to think outside the box.

Explore shared goals and define timelines


Explore shared goals
A DevOps transformation should deliver agreed outcomes. These outcomes should include specific, measurable targets like:
●● Reduce the time spent on fixing bugs by 60%.
●● Reduce the time spent on unplanned work by 70%.
●● Reduce the out-of-hours work required by staff to no more than 10% of total working time.
●● Remove all direct patching of production systems.
Note: One of the aims of DevOps is to provide greater customer value, so outcomes should have a customer value focus.

Define timelines for goals


Measurable goals also need timelines. While it is easy to set longer-term goals, it is also easy to put off work when you do not require it for a while.
It is essential to have an ongoing series of short-term goals. Overall, a DevOps transformation project should have a timeline that spans anywhere from a few months to a year or two.
Every few weeks, the improvements should be clear and measurable. Ideally, evident to the organization
or its customers.
The timeline should not be too short and should always be challenging yet achievable. A review should
occur after each short-term goal to help plan the next one.
There are several advantages of the shorter timelines:
●● It is easier to change plans or priorities when necessary.
●● The reduced delay between doing work and getting feedback helps ensure that the learnings and
feedback are incorporated quickly.
●● It is easier to keep organizational support when positive outcomes are clear.

Summary
This module explored the key areas that organizations must address to start their DevOps transformation journey: changing the team's mindset and defining timelines and goals.

You learned how to:
●● Understand what DevOps is and the steps to accomplish it.
●● Identify teams to implement the process.
●● Plan for the transformation with shared goals and timelines.
●● Plan and define timelines for goals.

Learn more
●● Donovan Brown | What is DevOps?16
●● What is DevOps? - Azure DevOps | Microsoft Docs17
●● Getting started with GitHub - GitHub Docs18
●● View of features and epics on the Feature Timeline - Azure DevOps | Microsoft Docs19
●● Plan and track work in Azure Boards with Basic or Agile processes - Azure Boards | Microsoft
Docs20
●● Agile Manifesto for Software Development | Agile Alliance21
●● 12 Principles Behind the Agile Manifesto | Agile Alliance22

16 https://www.donovanbrown.com/post/what-is-devops
17 https://docs.microsoft.com/devops/what-is-devops
18 https://docs.github.com/get-started
19 https://docs.microsoft.com/azure/devops/boards/extensions/feature-timeline
20 https://docs.microsoft.com/azure/devops/boards/get-started/plan-track-work
21 https://www.agilealliance.org/agile101/the-agile-manifesto
22 https://www.agilealliance.org/agile101/12-principles-behind-the-agile-manifesto

Choose the right project


Introduction
Are you interested in implementing DevOps practices but not sure what they can do for you?
This module helps organizations decide how to apply the DevOps process and tools to minimize initial
resistance.

Learning objectives
After completing this module, students and professionals can:
●● Understand different projects and systems to guide the journey.
●● Select a project to start the DevOps transformation.
●● Identify groups to minimize initial resistance.
●● Identify project metrics and Key Performance Indicators (KPIs).

Prerequisites
●● Understanding of what DevOps is and its concepts.

Explore greenfield and brownfield projects


The terms greenfield and brownfield have their origins in residential and industrial building projects.
A greenfield project is one done on a green field: undeveloped land. A brownfield project is done on land that has been used for other purposes.
Because of the previous land use, there could be challenges reusing the land. Some, like existing buildings, would be obvious; others, like polluted soil, less so.

Applied to software or DevOps Projects


The same terms are routinely applied to software projects and commonly describe DevOps projects.
On the surface, it can seem that a greenfield DevOps project would be easier to manage and to achieve success:
●● There is no existing codebase.
●● No existing team dynamics or politics. Possibly no current, rigid processes.
A common misconception is that DevOps is only for greenfield projects and suits startups best. However,
DevOps can also succeed with brownfield projects.
The beauty of these projects is that there's often a large gap between customer expectations and
delivery.
The teams involved may well realize that the status quo needs to change. They've lived the challenges
and the limitations associated with what they're currently doing.

Decide when to use greenfield and brownfield projects
When starting a DevOps transformation, you might need to choose between greenfield and brownfield
projects.
There is a common misconception that we need to demystify: that DevOps suits greenfield projects better than brownfield projects.

Greenfield projects
A greenfield project will always appear to be a more accessible starting point. A blank slate offers the
chance to implement everything the way that you want.
You might also have a better chance of avoiding existing business processes that do not align with your
project plans.
Suppose current IT policies do not allow the use of cloud-based infrastructure. In that case, the project might be eligible to have entirely new applications designed for that environment from scratch.
For example, you can sidestep internal political issues that are well entrenched.

Brownfield projects
Usually, brownfield projects come with:
●● The baggage of existing codebases.
●● Existing teams.
●● A significant amount of technical debt.
But, they can still be ideal projects for DevOps transformations.
When your teams spend large percentages of their time just maintaining existing brownfield applications,
you have limited ability to work on new code.
It is essential to find a way to reduce that time and to make software release less risky. A DevOps transformation can provide that.

The limitations will often have worn down the existing team members; they feel like they are working in the past and are keen to experiment with new ideas.
The system is often crucial for organizations. It might also be easier to gain more robust management
buy-in for these projects because of the potential benefits delivered.
Management might also have a stronger sense of urgency to point brownfield projects in an appropriate
direction when compared to greenfield projects that do not currently exist.

Take the first step


Eventually, the goal will be to evolve your entire organization. In looking to take the first step, many
organizations start with a greenfield project and then move on.

Decide when to use systems of record versus systems of engagement
When selecting systems as candidates for starting a DevOps transformation, it is necessary to consider
the types of systems that you operate.
Some researchers suggest that organizations often use Bimodal IT, a practice of managing two separate,
coherent modes of IT delivery - one focused on stability and predictability and the other on agility.

Systems of record
Systems that provide the truth about data elements are often called systems of record. These systems have historically evolved slowly and carefully. For example, it is crucial that a banking system accurately reflects your bank balance. Systems of record emphasize accuracy and security.

Systems of engagement
Many organizations have other systems that are more exploratory. These often use experimentation to
solve new problems. Systems of engagement are modified regularly. Usually, it is a priority to make quick
changes over ensuring that the changes are correct.
There is a perception that DevOps suits systems of engagement more than systems of record. The lessons
from high-performing companies show that is not the case.
Sometimes, the criticality of doing things right with a system of record is an excuse for not implementing
DevOps practices.
Worse, given the way that applications are interconnected, an issue in a system of engagement might
end up causing a problem in a system of record anyway.
Both types of systems are important. At the same time, it might be easier to start with a system of engagement when first beginning a DevOps transformation.
DevOps practices apply to both types of systems. The most significant outcomes often come from transforming systems of record.

Identify groups to minimize initial resistance


Not all staff members within an organization will be receptive to the change required for a DevOps
transformation.
In discussions around continuous delivery, we usually categorize users into three general buckets:
●● Canary users voluntarily test bleeding edge features as soon as they're available.
●● Early adopters who voluntarily preview releases that are considered more refined than the code canary users are exposed to.
●● Users who consume the products after they've passed through canary users and early adopters.
When choosing canary users, it's essential to find staff members who are keen to see new features as soon as they're available and are highly tolerant of issues.
Early adopters have similar characteristics to the Canaries. They often have work requirements that make
them less tolerant of issues and interruptions to work.
While development and IT operations staff might generally be less conservative than users, staff will still range from traditional users to early adopters and others happy to work at the innovative edge.

Ideal target improvements


It's also important to roll out changes incrementally. There is an old saying in the industry that any
successful large IT system was previously a successful small IT system.
Large-scale systems rolled out all at once have an abysmal record of success. Most fail, no matter how
much support management has provided.
When starting, it is essential to find an improvement goal that:
●● Can be used to gain early wins.
●● Is small enough to be achievable in a reasonable timeframe.
●● Has benefits significant enough to be evident to the organization.
●● Allows constant learning from rapid feedback and recovering from mistakes quickly.
Note: The aim is to build a snowball effect where each new successful outcome adds to previous successful results. It will maximize the buy-in from all affected.

Identify project metrics and key performance indicators (KPIs)
We spoke earlier about the importance of shared goals. Team members also agreed that the goals needed to be specific, measurable, and time-bound.
It is essential to establish (and agree upon) appropriate metrics and Key Performance Indicators (KPIs) to
ensure these goals are measurable.
While there is no specific list of metrics and KPIs that apply to all DevOps Projects, the following are
commonly used:

Faster outcomes
●● Deployment Frequency. Increasing the frequency of deployments is often a critical driver in DevOps
Projects.
●● Deployment Speed. It is necessary to reduce the time that they take.
●● Deployment Size. How many features, stories, and bug fixes are being deployed each time?
●● Lead Time. How long does it take from the creation of a work item until it is completed?

Efficiency
●● Server to Admin Ratio. Are the projects reducing the number of administrators required for a given
number of servers?
●● Staff Member to Customers Ratio. Is it possible for fewer staff members to serve a given number of
customers?
●● Application Usage. How busy is the application?
●● Application Performance. Is the application performance improving or dropping (based on application metrics)?

Quality and security


●● Deployment failure rates. How often do deployments (or applications) fail?
●● Application failure rates. How often do application failures occur, such as configuration failures,
performance timeouts, and so on?
●● Mean time to recover. How quickly can you recover from a failure?
●● Bug report rates. You do not want customers finding bugs in your code. Is the amount they are
seeing increasing or lowering?
●● Test pass rates. How well is your automated testing working?
●● Defect escape rate. What percentage of defects are being found in production?
●● Availability. What percentage of time is the application truly available for customers?
●● Service level agreement achievement. Are you meeting your service level agreements (SLAs)?
●● Mean time to detection. If there is a failure, how long does it take for it to be detected?

Culture
●● Employee morale. Are employees happy with the transformation and where the organization is heading? Are they still willing to respond to further changes? This metric can be challenging to measure but is often done by periodic, anonymous employee surveys.
●● Retention rates. Is the organization losing staff?
Note: It is crucial to choose metrics that focus on specific business outcomes and achieve a return on
investment and increased business value.

Summary
In this module, you learned how to decide how to apply the DevOps process and tools to minimize initial resistance. Also, how to identify teams, plan for a DevOps transformation culture, and define timelines for your goals.
You learned how to:
●● Understand different projects and systems to guide the journey.
●● Select a project to start the DevOps transformation.
●● Identify groups to minimize initial resistance.
●● Identify project metrics and Key Performance Indicators (KPIs).

Learn more
●● Greenfield project - Wikipedia23

23 https://en.wikipedia.org/wiki/Greenfield_project

●● Brownfield (software development) - Wikipedia24


●● About teams & Agile tools | Microsoft Docs25
●● Best practices for Agile project management | Microsoft Docs26

24 https://en.wikipedia.org/wiki/Brownfield_%28software_development%29
25 https://docs.microsoft.com/azure/devops/organizations/settings/about-teams-and-settings
26 https://docs.microsoft.com/azure/devops/boards/best-practices-agile-project-management

Describe team structures


Introduction
Starting to apply Agile practices in your company is not easy. It demands collaboration from your teams,
stakeholder support, and training.
This module explores agile development practices and helps define and configure teams and tools for collaboration.

Learning objectives
After completing this module, students and professionals can:
●● Understand agile practices and principles of agile development.
●● Create a team and agile organizational structure.
●● Identify ideal DevOps team members.
●● Select and configure tools for collaboration.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with Agile software development and core software development principles is helpful but is
not necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore agile development practices


Waterfall
Traditional software development practices involve:
●● Determining a problem.
●● Analyzing the requirements.
●● Building and testing the required code.
●● Delivering the outcome to users.
Together, these steps are usually referred to as a waterfall approach.
The waterfall model follows a sequential order. A project development team only moves to the next
development phase or testing if the previous step is completed successfully.
It's what an engineer would do when building a bridge or a building. So, it might seem appropriate for
software projects as well.
However, the waterfall methodology has some drawbacks. One relates to the customer requirements.
Even if the customer requirements are defined accurately at the start of a project, the project usually takes a long time, and the outcome may no longer match the customer's needs.

There's also a real challenge with gathering customer requirements in the first place.
Even if you built exactly what the customer asked for, after a long delivery time the result would often differ from what the customer actually needs.
Customers often don't know what they want until they see it, or can't explain what they need.

Agile
By comparison, Agile methodology constantly emphasizes adaptive planning and early delivery with
continual improvement.
Rather than restricting development to rigid specifications, it encourages rapid and flexible responses to
changes as they occur.
In 2001, highly regarded developers published a manifesto for Agile software development.
They said that:
●● Development needs to favor individuals and interactions over processes and tools.
●● Working software over comprehensive documentation.
●● Customer collaboration over contract negotiation.
●● Respond to changes over following a plan.
Agile software development methods are based on releases and iterations:
●● One release might consist of several iterations.
●● Each iteration is like a small independent project.
●● After estimation and prioritization:
●● Features, bug fixes, enhancements, and refactoring work are assigned to a release.
●● And then assigned again to a specific iteration within the release, generally on a priority basis.
●● At the end of each iteration, there should be tested working code.
●● In each iteration, the team must focus on the outcomes of the previous iteration and learn from them.
A benefit of having teams focused on shorter-term outcomes is that they are less likely to waste time over-engineering features or allowing unnecessary scope creep.
Agile software development helps teams keep focused on business outcomes.

Comparison of the waterfall and agile methodologies

Waterfall | Agile
Divided into distinct phases. | Separates the project development lifecycle into sprints.
It can be rigid. | Known for flexibility.
All project development phases, such as design, development, and test, are completed once. | It follows an iterative development approach, so each phase may appear more than once.
Define requirements at the start of the project, with little change expected. | Requirements are expected to change and evolve.
Focus on completing the project. | Focus on meeting customers' demands.

Explore principles of agile development


The Agile Alliance27 says its mission is to support people who explore and apply agile values, principles, and practices to make building software solutions more effective, humane, and sustainable.
They have published a Manifesto for Agile Software Development28.
From the manifesto, they have distilled the 12 Principles Behind the Agile Manifesto29.
●● Our highest priority is to satisfy the customer through early and continuous delivery of valuable
software.
●● Welcome changing requirements, even late in development. Agile processes harness change for the
customer's competitive advantage.
●● Deliver working software frequently, from a couple of weeks to a couple of months, with a preference
to the shorter timescale.
●● Businesspeople and developers must work together daily throughout the project.
●● Build projects around motivated individuals. Give them the environment and support they need and
trust them to get the job done.
●● The most efficient and effective method of conveying information to and within a development team
is face-to-face conversation.
●● Working software is the primary measure of progress.
●● Agile processes promote sustainable development. The sponsors, developers, and users should be
able to maintain a constant pace indefinitely.
●● Continuous attention to technical excellence and good design enhances agility.
●● Simplicity - the art of maximizing the amount of work not done - is essential.
●● The best architectures, requirements, and designs emerge from self-organizing teams.
●● At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its
behavior accordingly.

Define organization structure for agile practices


For most organizations, reorganizing to be agile is difficult. It requires a mind-shift and a culture-shift
that challenges many existing policies and processes within the organization.
Good governance in organizations, particularly in large organizations, often leads to many relatively rigid
rules, operating structures, and methods. It also tends to avoid a broad delegation of authority.
While most large organizations have not moved to an agile structure, most are now experimenting with
doing so.
Their business environments are volatile and complex, and they have seen the limitations of their current
systems, mainly an inability to cope with change quickly.
They realize that it is common today for long-term established businesses and their industries to be
disrupted by startups.

27 https://www.agilealliance.org/
28 https://www.agilealliance.org/agile101/the-agile-manifesto/
29 https://www.agilealliance.org/agile101/12-principles-behind-the-agile-manifesto/

Horizontal vs. vertical teams


Traditionally, horizontal team structures divide teams according to the software architecture. In this
example, the teams have been divided into the user interface, service-oriented architecture, and data
teams:

By comparison, vertical team structures span the architecture and are aligned with skillsets or disciplines:

Vertical teams have been shown to provide better outcomes in Agile projects. Each product must have an identified owner.
Another key benefit of the vertical team structure is that scaling can occur by adding teams. In this
example, feature teams have been created rather than just project teams:

Explore ideal DevOps team members


For a successful DevOps transformation, the aim is to find team members with the following characteristics:
●● They already think there is a need to change.
●● They have previously shown an ability to innovate.
●● They are already well respected within the organization.
●● They have a broad knowledge of the organization and how it operates.
●● Ideally, they already believe that DevOps practices are what is needed.

Mentoring team members on agile practices


While it's desirable to have formal agile training for staff members, no matter how good any agile course
is, there is a world of difference between learning a concept within a few days and putting it into practice.

When they first start an agile transformation, many teams hire external coaches or mentors.
Agile coaches help teams or individuals to adopt agile methods or to improve the current techniques and
practices.
They must be agents of change by helping people understand how they work and encouraging them to
adopt new approaches.
Agile coaches typically work with more than one team and remove any roadblocks from inside or outside
the organization.
This work requires various skills, including coaching, mentoring, teaching, and facilitating. Agile coaches must be both trainers and consultants.
There is more than one type of agile coach.
●● Some coaches are technical experts who aim to show staff members how to apply specific concepts, for example, test-driven development and continuous integration or deployment.

●● These coaches might do pair programming sessions with staff members.


●● Other coaches are focused on agile processes, determining requirements, and managing work
activities.

●● They might help teams learn how to run effective stand-up and review meetings.
●● Some coaches may themselves act as scrum masters.

●● They might mentor staff in how to fill these roles.


Over time, though, team members need to develop an ability to mentor each other. Teams should aim to be self-organizing. Team members are often expected to learn as they work and to acquire skills from each other. To make it effective, the work itself needs to be done collaboratively, not by individuals working by themselves.

Enable in-team and cross-team collaboration


Effective collaboration is critical for well-functioning Agile teams. Enabling it requires cultural changes, cross-functional team collaboration, and tooling.

Cultural changes
Over recent decades, offices have often become open spaces with few walls. At the time of writing, a significant shift to working from home has occurred, initially as a response to the pandemic. Both situations can limit collaboration, and ambient noise and distractions often also reduce productivity. Staff tend to work better when they have comfortable working environments. Defined meeting times and locations let teams choose when they want to interact with others.
Asynchronous communication should be encouraged, but there should not be an expectation that all
communications will be responded to urgently. Staff should focus on their primary tasks without feeling
like they are being left out of important decisions.
All meetings should have strict timeframes, and more importantly, have a plan. If there is no plan, there
should be no meeting.
As it is becoming harder to find the required staff, great teams will be as comfortable with remote or work-from-home workers as with those in the office.
To be successful, though, collaboration via communication should become part of the organization's
DNA.
Staff should be encouraged to communicate openly and frankly. Learning to deal with conflict is essential
for any team, as there will be disagreements at some point. Mediation skills training would be helpful.

Cross-functional teams
Team members need good collaboration. It is also essential to have a great partnership with wider teams
to bring people with different functional expertise together to work toward a common goal.
Often, there will be people from other departments within an organization.
Faster and better innovation can occur in these cross-functional teams.
People from different areas of the organization will have different views of the same problem, and they
are more likely to come up with alternate solutions to problems or challenges. Existing entrenched ideas
are more likely to be challenged.
Cross-functional teams can also minimize turf wars within organizations. The more widely a project appears to have ownership, the easier it will be for it to be widely accepted. Bringing cross-functional teams together also helps to spread knowledge across an organization.
Recognizing and rewarding collective behavior across cross-functional teams can also help to increase
team cohesion.

Collaboration tooling
Agile teams commonly use the following collaboration tools:
Teams (Microsoft)30: A group chat application from Microsoft. It provides a combined location with workplace chat, meetings, notes, and storage of file attachments. A user can be a member of many teams.
Slack31: A commonly used tool for collaboration in Agile and DevOps teams. From a single interface, it provides a series of separate communication channels that can be organized by project, team, or topic. Conversations are kept and are searchable. It is straightforward to add both internal and external team members. Slack integrates with many third-party tools like GitHub32 for source code and DropBox33 for document and file storage.
Jira34: A commonly used tool that allows planning, tracking, releasing, and reporting.
Asana35: A standard tool designed to keep team plans, progress, and discussions in a single place. It has strong capabilities around timelines and boards.
Glip36: An offering from RingCentral that provides chat, video, and task management.
Other standard tools that include collaboration offerings are ProofHub, RedBooth, Trello, DaPulse, and many others.

30 https://products.office.com/microsoft-teams/group-chat-software
31 https://slack.com/
32 https://github.com/
33 https://dropbox.com/
34 https://www.atlassian.com/software/jira
35 https://asana.com/
36 https://glip.com/

Select tools and processes for agile practices


While agile development methods don't require specific tooling, tools can often enhance the outcomes achieved.
It is essential to realize that the vital tool for agile development is the process itself.
Become familiar with the procedures you need to follow before working out how to implement tools.
Several categories of tools are commonly used.

Physical tools
Not all tools need to be digital tools. Many teams use whiteboards to collaborate on ideas, index cards
for recording stories, and sticky notes for moving around tasks.
Even when digital tools are available, it might be more convenient to use these physical tools during
stand-up and other meetings.

Collaboration tools
We described collaboration tools in the previous topic.

Project management tools


These tools usually include:
●● Project planning and execution monitoring abilities (including how to respond to impediments).
●● Automation for stand-up meetings.
●● Management and tracking of releases.
●● A way to record and work with the outcomes of retrospectives.
●● Many include Kanban boards and detailed sprint planning options.
These tools will also provide detailed visualizations, often as a graphic dashboard that shows team
progress against assigned goals and targets.
Some tools integrate directly with code repositories and CI/CD tools and add code-related metrics,
including quality metrics and direct support for code reviews.

As complete CI/CD systems, Azure DevOps and GitHub include:
●● Flexibility in Kanban boards.
●● Traceability through Backlogs.

●● Customizability in dashboards.
●● Built-in scrum boards.
●● Integrability directly with code repositories.
●● Code changes can be linked directly to tasks or bugs.
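For example, with the Azure Boards app for GitHub installed, mentioning a work item in the AB#{ID} form in a commit message or pull request links that change to the work item. A sketch, where the work item number 123 is a placeholder:

```
git commit -m "Correct the login timeout handling AB#123"
```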
Apart from Azure DevOps and GitHub, other standard tools include:
●● Jira.
●● Trello.
●● Active Collab.
●● Agilo for Scrum.
●● SpiraTeam.
●● Icescrum.
●● SprintGround.
●● Gravity.
●● Taiga.
●● VersionOne.
●● Agilean.
●● Wrike.
●● Axosoft.
●● Assembla.
●● PlanBox.
●● Asana.
●● Binfire.
●● Proggio.
●● VivifyScrum, and many others.

Screen recording tools


It might seem odd to add screen recording tools to this list. Still, they are beneficial when:
●● Working with remote team members.
●● Recording bugs in action.
●● Building walkthroughs and tutorials that demonstrate actual or potential features.
A screen recorder is built into Windows, but other common ones include SnagIt, Camtasia, OBS, and
Loom.

Summary
This module explored agile development practices and helped define and configure teams and tools for
collaboration.

You learned how to:
●● Understand agile practices and principles of agile development.
●● Create a team and agile organizational structure.
●● Identify ideal DevOps team members.
●● Select and configure tools for collaboration.

Learn more
●● DevOps vs. Agile | Microsoft Azure37.
●● Best practices for Agile project management - Azure Boards | Microsoft Docs38.
●● Agile Manifesto for Software Development | Agile Alliance39.
●● Agile Board | Trello40.
●● Agile Alliance41.
●● 12 Principles Behind the Agile Manifesto | Agile Alliance42.
●● Jira | Issue & Project Tracking Software | Atlassian43.

37 https://azure.microsoft.com/overview/devops-vs-agile/
38 https://docs.microsoft.com/azure/devops/boards/best-practices-agile-project-management
39 https://www.agilealliance.org/agile101/the-agile-manifesto
40 https://trello.com/b/DnZvFigA/agile-board
41 https://www.agilealliance.org/
42 https://www.agilealliance.org/agile101/12-principles-behind-the-agile-manifesto
43 https://www.atlassian.com/software/jira

Choose the DevOps tools


Introduction
Do you currently use tools to manage daily work and plan for the future? Using such tools is essential when implementing DevOps if you want to increase your maturity level.
This module explores Azure DevOps and GitHub tools and helps organizations define their work management tool and licensing strategy.

Learning objectives
After completing this module, students and professionals can:
●● Design a tool integration strategy.
●● Design a license management strategy (for example, Azure DevOps and GitHub users).
●● Design a strategy for end-to-end traceability from work items to working software.
●● Design an authentication and access strategy.
●● Design a strategy for integrating on-premises and cloud resources.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with DevOps tools is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

What is Azure DevOps?


Azure DevOps is a Software as a service (SaaS) platform from Microsoft that provides an end-to-end DevOps toolchain for developing and deploying software.
It also integrates with most of the leading tools on the market and is an excellent option for orchestrating a DevOps toolchain.

What does Azure DevOps provide?


Azure DevOps includes a range of services covering the complete development life cycle.
●● Azure Boards: agile planning, work item tracking, visualization, and reporting tool.
●● Azure Pipelines: a language, platform, and cloud-agnostic CI/CD platform supporting containers or Kubernetes (see the sketch after this list).
●● Azure Repos: provides cloud-hosted private git repos.
●● Azure Artifacts: provides integrated package management with support for Maven, npm, Python, and
NuGet package feeds from public or private sources.
●● Azure Test Plans: provides an integrated planned and exploratory testing solution.
Also, you can use Azure DevOps to orchestrate third-party tools.
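As a quick feel for the YAML-based configuration used by Azure Pipelines, a minimal azure-pipelines.yml might look like the following sketch; the trigger branch, agent image, and script step are placeholders for a real build:

```yaml
# Minimal sketch of an azure-pipelines.yml file (placeholder steps).
trigger:
  - main                     # run the pipeline on pushes to main

pool:
  vmImage: ubuntu-latest     # Microsoft-hosted Linux agent

steps:
  - script: echo "Run your build and tests here"
    displayName: Build and test
```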

What if we are not a Microsoft / Microsoft .NET organization?
Azure DevOps is not focused on organizations that are end-to-end Microsoft or Windows.
Azure DevOps provides a platform that is:
●● Flexible: you do not have to go ‘all in’ on Azure DevOps. It is possible to adopt each of the services
independently and integrate them with your existing toolchain; most popular tools are supported.
●● Cross-Platform: designed to work with any platform (Linux, macOS, and Windows) or language (including Node.js, Python, Java, PHP, Ruby, C/C++, .NET, Android, and iOS apps). Azure DevOps is not aimed only at organizations building and shipping on the Microsoft technology stack.
●● Cloud Agnostic: continuous delivery is supported to AWS, GCP, and Azure.

What is GitHub?
GitHub is a Software as a service (SaaS) platform from Microsoft that provides Git-based repositories and
DevOps tooling for developing and deploying software.
It has a wide range of integrations with other leading tools.

What does GitHub provide?


GitHub provides a range of services for software development and deployment.
●● Codespaces: Provides a cloud-hosted development environment (based on Visual Studio Code) that
can be operated from within a browser or external tools. Eases cross-platform development.
●● Repos: Public and private repositories based upon industry-standard Git commands.
●● Actions: Allows for the creation of automation workflows. These workflows can include environment variables and customized scripts (see the sketch after this list).
●● Packages: The majority of the world's open-source projects are already contained in GitHub reposito-
ries. GitHub makes it easy to integrate with this code and with other third-party offerings.
●● Security: Provides detailed code scanning and review features, including automated code review
assignment.
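As a quick feel for the Actions service listed above, a minimal workflow file (stored under .github/workflows/) might look like the following sketch; the workflow name, trigger, and steps are placeholders for a real build:

```yaml
# Minimal sketch of a GitHub Actions workflow (placeholder steps).
name: ci
on: [push]                        # run on every push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4 # fetch the repository contents
      - name: Build and test
        run: echo "Run your build and tests here"
```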

Explore an authorization and access strategy


Azure DevOps Services uses enterprise-grade authentication. To protect and secure your data, you can
use:
●● Microsoft account.
●● GitHub account.
●● Azure Active Directory (Azure AD).
Tools like Visual Studio and Azure DevOps natively support the use of Microsoft Accounts and Azure AD.
Eclipse can also support this form of authentication if you install a Team Explorer Everywhere plug-in.

Personal access tokens


Use personal access tokens (PATs) for tools that do not directly support Microsoft accounts or Azure AD for authentication but that you want to integrate with Azure DevOps.

For example, tools like:


●● Git-based repositories.
●● NuGet.
●● Xcode.

These tokens can be set up using Git Credential managers, or you can create them manually.
Personal access tokens are also helpful when establishing access to command-line tools, external tools,
and tasks in build pipelines.
Also, when calling REST-based APIs, there is no UI popping up to do the authentication; a PAT fills that gap. When access is no longer required, you can revoke the personal access token.
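As a minimal sketch of that REST scenario, the following Python snippet lists the projects in an organization by sending a PAT over HTTP Basic authentication. The organization name and token value are placeholders; real tokens belong in a secret store, not in code:

```python
# Call the Azure DevOps REST API with a PAT (placeholder values below).
import base64
import urllib.request

organization = "my-org"            # placeholder organization name
pat = "<personal-access-token>"    # placeholder PAT with read scope

# Azure DevOps accepts a PAT via Basic auth with an empty user name.
token = base64.b64encode(f":{pat}".encode()).decode()

url = f"https://dev.azure.com/{organization}/_apis/projects?api-version=7.0"
request = urllib.request.Request(
    url, headers={"Authorization": f"Basic {token}"}
)

with urllib.request.urlopen(request) as response:
    print(response.read().decode())  # JSON document listing the projects
```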

Security groups
Azure DevOps is pre-configured with default security groups. Default permissions are assigned to the
default security groups. But you can also configure access at the organization level, the collection level,
and the project or object level.
In the organization settings in Azure DevOps, you can configure app access policies. Based on your
security policies, you might allow alternate authentication methods, enable third-party applications to
access via OAuth, or even allow anonymous access to some projects.
For even tighter control, you can set conditional access to Azure DevOps. It offers simple ways to help
secure resources when using Azure Active Directory for authentication.

Multifactor authentication
Conditional access policies such as multifactor authentication can help minimize the risk of compromised credentials.
As part of a conditional access policy, you might require:
●● Security group membership.
●● A location or network identity.
●● A specific operating system.
●● A managed device, or other criteria.

Migrate or integrate existing work management tools
Both Azure DevOps and GitHub can be integrated with different kinds of work management tools.
As an example, in the Visual Studio Marketplace, Microsoft offers Trello integration tooling.
Migrating from other work management tools to Azure DevOps takes considerable planning.
Most work management tools are highly configurable by the end user, so there might not be a tool available that will do the migration without further configuration.

Jira
Jira is a commonly used work management tool.
In the Visual Studio Marketplace, Solidify offers a tool for Jira to Azure DevOps migration. It migrates in
two phases. Jira issues are exported to files, and then the files are imported to Azure DevOps.
If you decide to write the migration code yourself, the following blog post provides sample code that might help you get started: Migrate your project from Jira to Azure DevOps44.

Other applications
Third-party organizations offer commercial tooling to assist with migrating other work management tools to Azure DevOps, like:
●● Aha.
●● BugZilla.
●● ClearQuest.
●● And others.

Migrate or integrate existing test management tools
Azure Test Plans track manual testing for sprints and milestones, allowing you to follow when that testing
is complete.
Azure DevOps also has a Test Feedback extension available in the Visual Studio Marketplace. The extension is used to help teams do exploratory testing and provide feedback.
All team members and other stakeholders can use the extension to submit bugs or provide feedback. For
example:
●● Developers.
●● Product owners.
●● Managers.
●● UX.
●● UI engineers.
●● Marketing teams.
●● Early adopters.
For load tests, you can use Azure Load Testing. For more information, see What is Azure Load Testing?45.

Other helpful testing tools:


Apache JMeter is open-source software, written in Java, designed to load test and measure performance.
Pester is a tool that can automate the testing of PowerShell code.

44 http://www.azurefieldnotes.com/2018/10/01/migrate-your-project-from-jira-to-azure-devops/
45 https://docs.microsoft.com/en-us/azure/load-testing/overview-what-is-azure-load-testing

SoapUI is another testing framework for SOAP and REST testing.


If you are using Microsoft Test Manager, you should plan to migrate to Azure Test Plans instead.
For more information, search for Test Management at Visual Studio Marketplace.

Design a license management strategy


When designing a license management strategy, you first need to understand your progress in the
DevOps implementation phase.
If you have a sketch of the architecture you're planning for the DevOps implementation, you already know part of the resources you'll consume.
For example, you started with version control, implementing Git, and created some pipelines to build and release your code.
If you have multiple teams building their solutions, you don't want to wait in the queue to start building yours.
You probably want to pay for parallel jobs and make your builds run in parallel without depending on queue availability.
To consider:
●● What phase are you in?
●● How many people are using the feature?
●● How long are you willing to wait in the queue for pipelines? Is it urgent? Is it a validation only?
●● Should all users access all features? Are they Stakeholders? Basic users? Do they already have a Visual
Studio license?
●● Do you have an advanced Package Management strategy? Maybe you need more space for Artifacts.
For the latest, most up-to-date Azure DevOps pricing information, visit Azure DevOps Pricing46.
For the latest, most up-to-date GitHub pricing information, visit GitHub Pricing47.

Summary
This module explored the basics of Azure DevOps and GitHub and how to start using them to implement
DevOps.
You learned how to:
●● Design a tool integration strategy.
●● Design a license management strategy (for example, Azure DevOps and GitHub users).
●● Design a strategy for end-to-end traceability from work items to working software.
●● Design an authentication and access strategy.
●● Design a strategy for integrating on-premises and cloud resources.

46 https://azure.microsoft.com/pricing/details/devops/azure-devops-services/
47 https://github.com/pricing/

Learn more
●● Azure DevOps Services | Microsoft Azure48.
●● Features | GitHub49.
●● Pricing Calculator | Microsoft Azure50.
●● Azure DevOps Server to Services Migration overview - Azure DevOps | Microsoft Docs51.
●● Azure DevOps Services Pricing | Microsoft Azure52.
●● GitHub Pricing53.

48 https://azure.microsoft.com/services/devops/
49 https://github.com/features
50 https://azure.microsoft.com/pricing/calculator/
51 https://docs.microsoft.com/azure/devops/migrate/migration-overview?view=azure-devops
52 https://azure.microsoft.com/pricing/details/devops/azure-devops-services/
53 https://github.com/pricing/

Plan Agile with GitHub Projects and Azure Boards
Introduction
This module introduces you to GitHub Projects, GitHub Project Boards, and Azure Boards. It explores
ways to link Azure Boards and GitHub, configure GitHub Projects and Project views, and manage work
with GitHub Projects.

Learning objectives
After completing this module, students and professionals can:
●● Describe GitHub Projects and Azure Boards.
●● Link Azure Boards and GitHub.
●● Configure and Manage GitHub Projects and boards.
●● Customize Project views.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create a GitHub account at GitHub.com and a project for some exercises. If you don't
have it yet, see: Join GitHub · GitHub54. If you already have your GitHub account, create a new
repository Creating a new repository - GitHub Docs55.

Introduction to GitHub Projects and Project boards
Project Boards
During the application or project lifecycle, it's crucial to plan and prioritize work. With Project boards, you
can control specific feature work, roadmaps, release plans, etc.
Project boards are made up of issues, pull requests, and notes categorized as cards that you can drag and
drop into your chosen columns. The cards contain relevant metadata for issues and pull requests, like
labels, assignees, the status, and who opened it.

54 https://github.com/signup
55 https://docs.github.com/repositories/creating-and-managing-repositories/creating-a-new-repository

There are different types of project boards:


●● User-owned project boards: Can contain issues and pull requests from any personal repository.
●● Organization-wide project boards: Can contain issues and pull requests from any repository that
belongs to an organization.
●● Repository project boards: Are scoped to issues and pull requests within a single repository.
To create a project board for your organization, you must be an organization member.
It's possible to use templates to set up a new project board that will include columns and cards with tips.
The templates can be automated and already configured.

Templates | Description
Basic kanban | Track your tasks with To do, In progress, and Done columns.
Automated kanban | Cards automatically move between To do, In progress, and Done columns.
Automated kanban with review | Cards automatically move between To do, In progress, and Done columns, with extra triggers for pull request review status.
Bug triage | Triage and prioritize bugs with To do, High priority, Low priority, and Closed columns.

For more information about Project boards, see:


●● Creating a project board56.
●● Editing a project board57.
●● Copying a project board58.
●● Adding issues and pull requests to a project board59.
●● Project board permissions for an organization60.

Projects (beta)
Projects (beta) are a new, customizable and flexible tool version of projects for planning and tracking
work on GitHub.
Note: Projects (beta) are currently in public beta and subject to change.
A project is a customizable spreadsheet in which you can configure the layout by filtering, sorting, and grouping your issues and PRs, and by adding custom fields to track metadata.
You can use different views such as Board or spreadsheet/table.

If you make changes in your pull request or issue, your project reflects that change.
You can use custom fields in your tasks. For example:
●● A date field to track target ship dates.
●● A number field to track the complexity of a task.
●● A single select field to track whether a task is Low, Medium, or High priority.
●● A text field to add a quick note.
●● An iteration field to plan work week-by-week, including support for breaks.
For more information about Projects (beta), see:
●● Creating a project (beta)61.

56 https://docs.github.com/articles/creating-a-project-board
57 https://docs.github.com/articles/editing-a-project-board
58 https://docs.github.com/articles/copying-a-project-board
59 https://docs.github.com/articles/adding-issues-and-pull-requests-to-a-project-board
60 https://docs.github.com/articles/project-board-permissions-for-an-organization
61 https://docs.github.com/issues/trying-out-the-new-projects-experience/creating-a-project

●● Managing iterations in projects (beta)62.


●● Customizing your project (beta) views63.
●● Automating projects (beta)64.

Introduction to Azure Boards


Azure Boards is a customizable tool to manage software projects supporting Agile, Scrum, and Kanban
processes by default. Track work, issues, and code defects associated with your project. Also, you can
create your custom process templates and use them to create a better and more customized experience
for your company.
You have multiple features and configurations to support your teams, such as calendar views, configurable dashboards, and integrated reporting.
The Kanban board is one of several tools that allows you to add, update, and filter user stories, bugs,
features, and epics.

You can track your work using the default work item types such as user stories, bugs, features, and epics.
It's possible to customize these types or create your own. Each work item provides a standard set of
system fields and controls, including Discussion for adding and tracking comments, History, Links, and
Attachments.

62 https://docs.github.com/issues/trying-out-the-new-projects-experience/managing-iterations
63 https://docs.github.com/issues/trying-out-the-new-projects-experience/customizing-your-project-views
64 https://docs.github.com/issues/trying-out-the-new-projects-experience/automating-projects

If you need to create reports or a list of work with specific filters, you can use the queries hub to generate
custom lists of work items.
Queries support the following tasks:
●● Find groups of work items with something in common.
●● Triage work to assign to a team member or sprint and set priorities.
●● Perform bulk updates.
●● View dependencies or relationships between work items.
●● Create status and trend charts that you can optionally add to dashboards.

Delivery plans
It's possible to create another view with deliverables and track dependencies across several teams in a
calendar view using Delivery Plans.

Delivery plans are fully interactive, supporting the following tasks:


●● View up to 15 team backlogs, including a mix of backlogs and teams from different projects.
●● View custom portfolio backlogs and epics.
●● View work that spans several iterations.
●● Add backlog items from a plan.
●● View rollup progress of features, epics, and other portfolio items.
●● View dependencies that exist between work items.
For more information about Azure Boards, see:
●● Azure Boards documentation | Microsoft Docs65.
●● Reasons to start using Azure Boards | Microsoft Docs66.

65 https://docs.microsoft.com/azure/devops/boards
66 https://docs.microsoft.com/azure/devops/boards/get-started/why-use-azure-boards

●● GitHub and Azure Boards67.

Link GitHub to Azure Boards


Use GitHub, track work in Azure Boards
Use Azure Boards to plan and track your work and GitHub as source control for software development.
Connect Azure Boards with GitHub repositories to enable linking GitHub commits, pull requests, and issues to work items in Boards.

Azure Boards App


The integration is created using the Azure Boards App, which acts as a bridge between Azure Boards and GitHub.
To install the app, you must be an administrator or owner of the GitHub repository or the GitHub organization.
The app is installed from the GitHub Marketplace. Azure Boards App68

67 https://docs.microsoft.com/azure/devops/boards/github
68 https://github.com/marketplace/azure-boards

Authenticating to GitHub
Azure Boards can connect to GitHub. For GitHub in the cloud, when adding a GitHub connection, the
authentication options are:
●● Username/Password
●● Personal Access Token (PAT)
For a walkthrough on making the connection, see: Connect Azure Boards to GitHub69.
You can configure other Azure Boards/Azure DevOps Projects, GitHub.com repositories, or change the
current configuration from the Azure Boards app page.
Once you've integrated Azure Boards with GitHub using the Azure Boards app, you can add or remove
repositories from the web portal for Azure Boards.

Supported integration scenarios


Azure Boards-GitHub integration supports the following connections:
●● From GitHub:
●● Support integration for all repositories for a GitHub account or organization or select repositories.
●● Add or remove GitHub repositories participating in the integration and configure the project they
connect to.
●● Suspend Azure Boards-GitHub integration or uninstall the app.
●● From Azure Boards:
●● Connect one or more GitHub repositories to an Azure Boards project.
●● Add or remove GitHub repositories from a GitHub connection within an Azure Boards project.
●● Completely remove a GitHub connection for a project.
●● Allow a GitHub repository to connect to one or more Azure Boards projects within the same Azure
DevOps organization or collection.

69 https://docs.microsoft.com/azure/devops/boards/github/connect-to-github

Azure Boards-GitHub integration supports the following operational tasks:


●● Create links between work items and GitHub commits, pull requests, and issues based on GitHub
mentions.
●● Support state transitions of work items to a Done or Completed state when a GitHub mention uses fix, fixes, or fixed (see the example after this list).
●● Support full traceability by posting a discussion comment to GitHub when linking from a work item to
a GitHub commit, pull request, or issue.
●● Show linked GitHub code artifacts within the work item Development section.
●● Show linked GitHub artifacts as annotations on Kanban board cards.
●● Support status badges of Kanban board columns added to GitHub repositories.
The following tasks aren't supported at this time:
●● Query for work items with links to GitHub artifacts. However, you can query for work items with an
External Link Count greater than 0.
Note: Reference: Azure Boards-GitHub integration70.
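For example, assuming a work item with ID 123 exists in the connected Azure Boards project, a commit message like the one below links the commit to the work item, and the fixes keyword transitions the work item when the change is merged. The message text and ID are illustrative:

git commit -m "Correct typo on landing page. Fixes AB#123"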
For more information, see:
●● Change GitHub repository access, or suspend or uninstall the integration71.
●● Add or remove GitHub repositories72.
●● Link GitHub commits, pull requests, and issues to work items for details on linking to work
items73.

Configure GitHub Projects


Create a project
To start working with GitHub Projects, you first need to create an organization project or a user project.

70 https://docs.microsoft.com/azure/devops/boards/github
71 https://docs.microsoft.com/azure/devops/boards/github/change-azure-boards-app-github-repository-access
72 https://docs.microsoft.com/azure/devops/boards/github/add-remove-repositories
73 https://docs.microsoft.com/azure/devops/boards/github/link-to-from-github

To create an organization project:


1. On GitHub, navigate to the main page of your organization.
2. Click Projects.
3. Select the New project drop-down menu and click New project (Beta).
To create a user project:
1. On any GitHub page, click on your avatar, then select Your projects.
2. Select the New project drop-down menu and click New project (Beta).
Create a project description or a README file:
1. Navigate to your project.
2. In the top-right, click the menu icon (…) to open the menu.
3. In the menu, click Settings.
4. Under Add a description, type your description in the text box and click Save.
5. To update your project's README, under README, type your content in the text box.
6. Click Save.

Adding issues
When your new project initializes, it prompts you to add items.
Click on the plus (+) sign to add more issues.

Project settings and permissions


You can view and make quick changes to your project description and README by navigating to your project and clicking the menu icon in the top right.

You can create custom fields to add to your project.



Manage access and permissions by adding collaborators.



For more information about Projects (Beta), see:


●● Quickstart for projects (beta) - GitHub Docs74.
●● Creating an issue - GitHub Docs75.

Manage work with GitHub Project boards


GitHub Projects allow you to control project deliverables, release dates, and iterations to plan upcoming
work.
You can create an iteration to:
●● Associate items with specific repeating blocks of time.
●● Set to any length of time.
●● Include breaks.
It's possible to configure your project to group by iteration to visualize the balance of upcoming work.

74 https://docs.github.com/issues/trying-out-the-new-projects-experience/quickstart
75 https://docs.github.com/issues/tracking-your-work-with-issues/creating-an-issue

When you first create an iteration field, three iterations are automatically created. You can add other
iterations if needed.

Iteration field
You can use the command palette or the project's interface to create an iteration field.
Tip: To open the project command palette, press Ctrl+K (Windows/Linux) or Command+K (Mac).
Start typing any part of “Create new field”. When "Create new field" displays in the command palette,
select it.
Or follow the steps using the interface:
1. Navigate to your project.
2. Click the plus (+) sign in the rightmost field header. A drop-down menu with the project fields will appear.
3. Click New field.
4. Enter a name for the new iteration field.
5. Select the dropdown menu below and click Iteration.
6. (Optional) Change the starting date from the current day, select the calendar dropdown next to Starts
on, and click on a new starting date.
7. To change the duration of each iteration, type a new number, then select the dropdown and click
either days or weeks.

8. Click Save and create.

Adding new iterations


1. Navigate to your project.
2. In the top-right, click the menu icon (…) to open the menu.
3. In the menu, click Settings to access the project settings.
4. Click the name of the iteration field you want to adjust.
5. To add a new iteration of the same duration, click Add iteration.
6. (Optional) Customize the duration of the new iteration and when it starts.
1. Click the dropdown next to Add iteration.
2. Select a starting date and duration.
3. Click Add.
7. Click Save changes.

Also, you can insert breaks into your iterations to communicate when you're taking time away from
scheduled work.
For more information about iterations, see:
●● Managing iterations in projects (beta) - GitHub Docs76.
●● Best practices for managing projects (beta) - GitHub Docs77.

Customize Project views


Using Projects views, you can organize information by changing the layout, grouping, sorting, and
filtering your work.
You can create and use different visualizations, for example, Board view:

76 https://docs.github.com/issues/trying-out-the-new-projects-experience/managing-iterations
77 https://docs.github.com/issues/trying-out-the-new-projects-experience/best-practices-for-managing-projects

Project command palette


Use the project command palette to change settings and run commands in your project.
1. Open the project command palette by pressing Command + K (Mac) or Ctrl + K (Windows/Linux).
2. Type any command part or navigate through the command palette window to find a command.
You have multiple commands to apply, such as:
●● Switch layout: Table.
●● Show: Milestone.
●● Sort by: Assignees, asc.
●● Remove sort-by.
●● Group by: Status.
●● Remove group-by.
●● Column field by: Status.
●● Filter by Status.
●● Delete view.
Note: For more information about GitHub Command Palette, see GitHub Command Palette - GitHub
Docs78.
Also, you can perform changes using the interface.

78 https://docs.github.com/get-started/using-github/github-command-palette

Creating a project view


Project views allow you to view specific aspects of your project. Each view is displayed on a separate tab
in your project.
For example, you can have:
●● A view that shows all items not yet started (filter on Status).
●● A view that shows the workload for each team (group by a custom Team field).
●● A view that shows the items with the earliest target ship date (sort by a date field).
To add a new view:
1. To open the project command palette, press Command + K (Mac) or Ctrl + K (Windows/Linux).
2. Start typing New view (to create a new view) or Duplicate view (to duplicate the current view).
3. Choose the required command.
4. The new view is automatically saved.
For more information about projects (beta), see:
●● About projects (beta)79.
●● Creating a project (beta)80.

Collaborate using team discussions


GitHub discussions can help your team plan together, update one another, or talk about any topic you'd like in discussion posts on your team's page in an organization.
You can use team discussions for conversations that span across projects or repositories (issues, pull
requests, etc.). Instead of opening an issue in a repository to discuss an idea, you can include the entire
team by having a conversation in a team discussion.

79 https://docs.github.com/issues/trying-out-the-new-projects-experience/about-projects
80 https://docs.github.com/issues/trying-out-the-new-projects-experience/creating-a-project

With team discussions, you can:


●● Post on your team's page or participate in a public discussion.
●● Link to any team discussion to reference it elsewhere.
●● Pin important posts to your team's page.
●● Receive email or web notifications.
Team discussions are available in organizations by default.
You can also use organization discussions to facilitate conversations across your organization.
For more information about team discussion, see:
●● Enabling or disabling GitHub Discussions for an organization81.
●● Quickstart for communicating on GitHub82.
●● About teams83.
●● Creating a team discussion84.
●● Editing or deleting a team discussion85.

81 https://docs.github.com/organizations/managing-organization-settings/enabling-or-disabling-github-discussions-for-an-organization
82 https://docs.github.com/github/collaborating-with-issues-and-pull-requests/quickstart-for-communicating-on-github
83 https://docs.github.com/articles/about-teams
84 https://docs.github.com/organizations/collaborating-with-your-team/creating-a-team-discussion
85 https://docs.github.com/organizations/collaborating-with-your-team/editing-or-deleting-a-team-discussion

Summary
This module introduced you to GitHub Projects, GitHub Project Boards, and Azure Boards. It explored
ways to link Azure Boards and GitHub, configure GitHub Projects and Project views, and manage work
with GitHub Projects.
You learned how to describe the benefits and usage of:
●● Describe GitHub Projects and Azure Boards.
●● Link Azure Boards and GitHub.
●● Configure and Manage GitHub Projects and boards.
●● Customize Project views.

Learn more
●● Quickstart for projects (beta) - GitHub Docs86.
●● About project boards - GitHub Docs87.
●● What is Azure Boards? Tools to manage software development projects. - Azure Boards |
Microsoft Docs88.
●● Azure Boards-GitHub integration - Azure Boards | Microsoft Docs89.
●● Managing iterations in projects (beta) - GitHub Docs90.
●● Quickstart for projects (beta) - GitHub Docs91.
●● Best practices for managing projects (beta) - GitHub Docs92.
●● Customizing your project (beta) views - GitHub Docs93.
●● About team discussions - GitHub Docs94.

86 https://docs.github.com/issues/trying-out-the-new-projects-experience/quickstart
87 https://docs.github.com/issues/organizing-your-work-with-project-boards/managing-project-boards/about-project-boards
88 https://docs.microsoft.com/azure/devops/boards/get-started/what-is-azure-boards
89 https://docs.microsoft.com/azure/devops/boards/github
90 https://docs.github.com/issues/trying-out-the-new-projects-experience/managing-iterations
91 https://docs.github.com/issues/trying-out-the-new-projects-experience/quickstart
92 https://docs.github.com/issues/trying-out-the-new-projects-experience/best-practices-for-managing-projects
93 https://docs.github.com/issues/trying-out-the-new-projects-experience/customizing-your-project-views
94 https://docs.github.com/organizations/collaborating-with-your-team/about-team-discussions

Introduction to source control


Introduction
You can think of source control as an essential everyday practice. Versioning is a standard part of the
developer's routine and, if used correctly, can save organizations enormous costs and resources.
Source control is a common-sense aspect of programming. It is essential from time to time to look at why we do what we do and how versioning impacts the entire value stream at an organization.
This module introduces you to the basics of source control, exploring benefits and best practices.

Learning objectives
After completing this module, students and professionals can:
●● Understand source control.
●● Apply best practices for source control.
●● Describe the benefits of using source control.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore DevOps foundational practices


The State of DevOps Report 202195 highlights version control in almost all stages of DevOps evolution.

It is helpful for non-developers in an organization to understand the fundamentals of the discipline, because version control is deeply rooted in the daily life of software engineers. It is essential knowledge for those individuals who decide which version control tools and platforms to use.

95 https://puppet.com/resources/report/2021-state-of-devops-report

Version control is essential for all software development projects and is vital at large businesses and
enterprises.
Enterprises have many stakeholders. For example:
●● Distributed teams.
●● Strict processes and workflows.
●● Siloed organizations.
●● Hierarchical organizations.
All those characteristics represent coordination and integration challenges when it comes to merging and
deploying code.
Companies within highly regulated industries need a practical way to ensure that all standards are met
appropriately and mitigate risk—for example, banking and healthcare.

What is source control?


A Source control system (or version control system) allows developers to collaborate on code and track
changes. Use version control to save your work and coordinate code changes across your team. Source
control is an essential tool for multi-developer projects.
The version control system saves a snapshot of your files (history) so that you can review and even roll
back to any version of your code with ease. Also, it helps to resolve conflicts when merging contributions
from multiple sources.
For most software teams, the source code is a repository of invaluable knowledge and understanding
about the problem domain that the developers have collected and refined through careful effort.
Source control protects source code from catastrophe and the casual degradation of human error and
unintended consequences.

Without version control, you're tempted to keep multiple copies of code on your computer. This is dangerous: it's easy to change or delete a file in the wrong copy of the code, potentially losing work.
Version control systems solve this problem by managing all versions of your code but presenting you
with a single version at a time.
Tools and processes alone aren't enough to accomplish the above. Practices such as Agile, Continuous Integration, and DevOps all rely on a solid version control foundation.
Version control is about keeping track of every change to software assets—tracking and managing the
who, what, and when. Version control is the first step needed to assure quality at the source, ensure flow
and pull value, and focus on the process. All of these create value not just for the software teams but
ultimately for the customer.
Version control is a solution for managing and saving changes made to any manually created assets. If changes are made to the source code, you can go back in time and easily roll back to previous working versions.
Version control tools will enable you to see who made changes, when, and what exactly was changed.
Version control also makes experimenting easy and, most importantly, makes collaboration possible.
Without version control, collaborating over source code would be a painful operation.
There are several perspectives on version control.
●● For developers, it's a daily enabler for work and collaboration to happen. It's part of the daily job, one
of the most-used tools.
●● For management, the critical value of version control is in:

●● IP security.
●● Risk management.
●● Time-to-market speed through Continuous Delivery, where version control is a fundamental
enabler.

Explore benefits of source control


“Code does not exist unless it is committed into source control. Source control is the fundamental
enabler of continuous delivery.”

Whether writing code professionally or personally, you should always version your code using a source control management system. Some of the advantages of using source control are:
●● Create workflows. Version control workflows prevent the chaos of everyone using their development
process with different and incompatible tools. Version control systems provide process enforcement
and permissions, so everyone stays on the same page.
●● Work with versions. Every version has a description in the form of a comment. These descriptions
help you follow changes in your code by version instead of by individual file changes. Code stored in
versions can be viewed and restored from version control at any time as needed. It makes it easy to
base new work on any version of code.
●● Collaboration. Version control synchronizes versions and makes sure that your changes do not
conflict with other changes from your team. Your team relies on version control to help resolve and
prevent conflicts, even when people make changes simultaneously.
●● Maintains history of changes. Version control keeps a record of changes as your team saves new
versions of your code. This history can be reviewed to find out who, why, and when changes were
made. The history gives you the confidence to experiment since you can roll back to a previous good
version at any time. The history lets you base work on any code version, such as fixing a bug in an earlier release.
●● Automate tasks. Version control automation features save your team time and generate consistent
results. Automate testing, code analysis, and deployment when new versions are saved to version
control.

Common software development values


●● Reusability – why do the same thing twice? Reuse of code is a common practice and makes building
on existing assets simpler.
●● Traceability – Audits are not just for fun; in many industries, it is a legal matter. All activities must be
traced, and managers can produce reports when needed. Traceability also makes debugging and
identifying root cause easier. Additionally, it helps with feature reuse as developers can link requirements to implementation.
●● Manageability – Can team leaders define and enforce workflows, review rules, create quality gates
and enforce QA throughout the lifecycle?
●● Efficiency – are we using the right resources for the job, minimizing time and effort? This one is
self-explanatory.

●● Collaboration – When teams work together, quality tends to improve. We catch one another's
mistakes and can build on each other's strengths.
●● Learning – Organizations benefit when they invest in employees learning and growing. It is important
for onboarding new team members, the lifelong learning of seasoned members, and the opportunity
for workers to contribute to the bottom line and the industry.

Explore best practices for source control


●● Make small changes. In other words, commit early and commit often. Be careful not to commit any
unfinished work that could break the build.
●● Do not commit personal files. It could include application settings or SSH keys. Often personal files
are committed accidentally but cause problems later when other team members work on the same
code.
●● Update often and right before pushing to avoid merge conflicts.
●● Verify your code change before pushing it to a repository; ensure it compiles and tests are
passing.
●● Pay close attention to commit messages, as they tell you why a change was made. Consider commit messages as a mini form of documentation for the change.
●● Link code changes to work items. It will concretely link what was created to why it was created—or
modified by providing traceability across requirements and code changes.
●● No matter your background or preferences, be a team player and follow agreed conventions
and workflows. Consistency is essential and helps ensure quality, making it easier for team members
to pick up where you left off, review your code, debug, and so on.
Using version control of some kind is necessary for any organization, and following the guidelines can
help developers avoid needless time spent fixing errors and mistakes.
These practices also help organizations reap more significant benefits from having a good version control
system.
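For example, a small commit with a descriptive message that links back to a work item follows several of these practices at once. The file path and issue number here are illustrative:

git add src/pagination.js
git commit -m "Fix off-by-one error in pagination (fixes #452)"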

Summary
This module explored the basics of source control and how to work with it daily to benefit team collaboration and code maintenance.
You learned how to describe the benefits and usage of:
●● Understand source control.
●● Apply best practices for source control.
●● Describe the benefits of using source control.

Learn more
●● Understand source control - Azure DevOps96.
●● Using source control in your codespace - GitHub Docs97.

96 https://docs.microsoft.com/azure/devops/user-guide/source-control
97 https://docs.github.com/codespaces/developing-in-codespaces/using-source-control-in-your-codespace

Describe types of source control systems


Introduction
This module describes different source control systems, such as Git and Team Foundation Version Control (TFVC), and helps with the initial steps for Git utilization.

Learning objectives
After completing this module, students and professionals can:
●● Apply source control practices in your development process.
●● Explain the differences between centralized and distributed version control.
●● Understand Git and Team Foundation Version Control (TFVC).
●● Develop using Git.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

Understand centralized source control

Centralized source control systems are based on the idea that there's a single “central” copy of your
project somewhere (probably on a server). Programmers will check in (or commit) their changes to this
central copy.
“Committing” a change means to record the difference in the central system. Other programmers can
then see this change.
Also, it's possible to pull down the change. The version control tool will automatically update the contents of any files that were changed.
Most modern version control systems deal with “changesets,” which are groups of changes (possibly spanning many files) that should be treated as a cohesive whole.
Programmers no longer must keep many copies of files on their hard drives manually. The version control
tool can talk to the central copy and retrieve any version they need on the fly.

Some of the most common centralized version control systems you may have heard of or used are Team Foundation Version Control (TFVC), CVS, Subversion (or SVN), and Perforce.

A typical centralized source control workflow


If working with a centralized source control system, your workflow for adding a new feature or fixing a
bug in your project will usually look something like this:
●● Get the latest changes other people have made from the central server.
●● Make your changes, and make sure they work correctly.
●● Check in your changes to the main server so that other programmers can see them.
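For illustration, with Subversion (a centralized system) that workflow maps to a pair of commands. This is only a sketch, and the commit message is hypothetical:

svn update
# ...make your changes and verify they work correctly...
svn commit -m "Fix login validation bug"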

Understand distributed source control

Over time, so-called “distributed” source control or version control systems (DVCS for short) have become increasingly important. The three most popular are Git, Mercurial, and Bazaar.
These systems do not necessarily rely on a central server to store all the versions of a project's files.
Instead, every developer “clones” a copy of a repository and has the project's complete history on their
hard drive.
This copy (or “clone”) has all the metadata of the original.
This method may sound wasteful, but in practice, it is not a problem. Most programming projects consist
primarily of plain text files (and maybe a few images).
The disk space is so cheap that storing many copies of a file does not create a noticeable dent in a hard
drive's free space. Modern systems also compress the files to use even less space.
The act of getting new changes from a repository is called “pulling.”
The act of moving your changes to a repository is called “pushing.”
In both cases, you move changesets (changes to file groups as coherent wholes), not single-file diffs.
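In Git, for example, these operations map directly to commands; this sketch assumes a remote named origin and a branch named main:

git pull origin main    # get new changesets from the remote repository
git push origin main    # move your local changesets to the remote repository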
One common misconception about distributed version control systems is that there cannot be a central
project repository. It is not true. Nothing is stopping you from saying, “this copy of the project is the
authoritative one.”
It means that, instead of being required by the tools you use, a central repository is now optional.

Advantages over centralized source control


The act of cloning an entire repository gives distributed source control tools several advantages over
centralized systems:
●● Doing actions other than pushing and pulling changesets is fast because the tool only needs to access
the hard drive, not a remote server.
●● Committing new changesets can be done locally without anyone else seeing them. Once you have a
group of changesets ready, you can push all of them at once.
●● Everything but pushing and pulling can be done without an internet connection. So, you can work on
a plane, and you will not be forced to commit several bug fixes as one large changeset.
●● Since each programmer has a full copy of the project repository, they can share changes with one or
two other people to get some feedback before showing the changes to everyone.

Disadvantages compared to centralized source control


There are almost no disadvantages to using a distributed source control system over a centralized one.
Distributed systems do not prevent you from having a single “central” repository; they provide more
options on top of it.
There are only two major inherent disadvantages to using a distributed system:
●● If your project contains many large, binary files that cannot be efficiently compressed, the space
needed to store all versions of these files can accumulate quickly.
●● If your project has a long history (50,000 changesets or more), downloading the entire history can
take an impractical amount of time and disk space.
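Git offers partial mitigations for the long-history case. For example, a shallow clone downloads only recent history; the repository URL below is a placeholder:

git clone --depth 1 https://example.com/big-project.git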

Explore Git and Team Foundation Version Control
Git (distributed)
Git is a distributed version control system. Each developer has a copy of the source repository on their
development system. Developers can commit each set of changes on their dev machine.
Branches are lightweight. When you need to switch contexts, you can create a private local branch. You
can quickly switch from one branch to another to pivot among different variations of your codebase.
Later, you can merge, publish, or dispose of the branch.

Team Foundation Version Control (TFVC-centralized)


Team Foundation Version Control (TFVC) is a centralized version control system.
Typically, team members have only one version of each file on their dev machines. Historical data is
maintained only on the server. Branches are path-based and created on the server.
TFVC has two workflow models:
●● Server workspaces - Before making changes, team members publicly check out files. Most operations require developers to be connected to the server. This system helps lock workflows. Other software that works this way includes Visual Source Safe, Perforce, and CVS. You can scale up to huge codebases with millions of files per branch—also, large binary files with server workspaces.
●● Local workspaces - Each team member copies the latest codebase version with them and works
offline as needed. Developers check in their changes and resolve conflicts as necessary. Another
system that works this way is Subversion.

Examine and choose Git


Switching from a centralized version control system to Git changes the way your development team
creates software.
If you are a company that relies on its software for mission-critical applications, altering your development workflow impacts your entire business.
Developers would gain the following benefits by moving to Git.

Community
In many circles, Git has come to be the expected version control system for new projects.
If your team is using Git, odds are you will not have to train new hires on your workflow because they will
already be familiar with distributed development.

Also, Git is popular among open-source projects. It is easy to use 3rd-party libraries and encourage
others to fork your open-source code.

Distributed development
In TFVC, each developer gets a working copy that points back to a single central repository. Git, however, is a distributed version control system. Instead of a working copy, each developer gets their local repository, complete with an entire history of commits.

Having a complete local history makes Git fast since it means you do not need a network connection to
create commits, inspect previous versions of a file, or do diffs between commits.
Distributed development also makes it easier to scale your engineering team. If someone breaks the production branch in SVN, other developers cannot check in their changes until it is fixed. With Git, this kind of blocking does not exist. Everybody can continue going about their business in their local repositories.
And, like feature branches, distributed development creates a more reliable environment. Even if developers obliterate their repository, they can clone from someone else and start afresh.

Trunk-based development
One of the most significant advantages of Git is its branching capabilities. Unlike centralized version
control systems, Git branches are cheap and easy to merge.

Trunk-based development provides an isolated environment for every change to your codebase. When
developers want to start working on something—no matter how large or small—they create a new
branch. It ensures that the master branch always contains production-quality code.
Using trunk-based development is more reliable than directly editing production code, but it also provides organizational benefits.
Branches let you represent development work at the same granularity as your agile backlog.
For example, you might implement a policy where each work item is addressed in its feature branch.

Pull requests
Many source code management tools such as Azure Repos enhance core Git functionality with pull
requests.
A pull request is a way to ask another developer to merge one of your branches into their repository.
It makes it easier for project leads to keep track of changes and lets developers start discussions around
their work before integrating it with the rest of the codebase.

Since they are essentially a comment thread attached to a feature branch, pull requests are incredibly
versatile.
When a developer gets stuck with a complex problem, they can open a pull request to ask for help from
the rest of the team.
Meanwhile, junior developers can be confident that they are not destroying the entire project by treating pull requests as a formal code review.

Faster release cycle


A faster release cycle is the ultimate result of feature branches, distributed development, pull requests,
and a stable community.
These capabilities promote an agile workflow where developers are encouraged to share more minor
changes more frequently.
In turn, changes can get pushed down the deployment pipeline faster than the monolithic releases that are standard with centralized version control systems.

As you might expect, Git works well with continuous integration and continuous delivery environments.
Git hooks allow you to run scripts when certain events occur inside a repository, which lets you automate
deployment to your heart’s content.
You can even build or deploy code from specific branches to different servers.
For example, you might want to configure Git to deploy the most recent commit from the develop branch
to a test server whenever anyone merges a pull request into it.
Combining this kind of build automation with peer review means you have the highest possible confidence in your code as it moves from development to staging to production.
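As a minimal sketch of the deployment idea above, a server-side post-receive hook can run a script only when the develop branch is updated. The script path is hypothetical and stands in for your own deployment tooling:

#!/bin/sh
# hooks/post-receive: Git runs this after a push is accepted,
# passing one line per updated ref on standard input.
while read oldrev newrev refname; do
  if [ "$refname" = "refs/heads/develop" ]; then
    # /opt/deploy/deploy.sh is a placeholder for your deployment script
    /opt/deploy/deploy.sh "$newrev"
  fi
done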

Understand objections to using Git


There are three common objections I often hear to migrating to Git:
●● I can overwrite history.
●● I have large files.
●● There is a steep learning curve.

Overwriting history
Git technically does allow you to overwrite history - but like any powerful feature, it can cause problems if misused.

If your teams are careful, they should never have to overwrite history.
If you are synchronizing to Azure Repos, you can also add a security rule that prevents developers from
overwriting history by using the explicit “Force Push” permissions.
Every source control system works best when developers understand how it works and which conventions
work.
While you cannot overwrite history with Team Foundation Version Control (TFVC), you can still overwrite
code and do other painful things.

Large files
Git works best with repos that are small and do not contain large files (or binaries).
Every time you (or your build machines) clone the repo, they get the entire repo with its history from the
first commit.
It is great for most situations but can be frustrating if you have large files.
Binary files are even worse because Git cannot optimize how they are stored.
That is why Git LFS98 was created.
It lets you separate large files out of your repos while keeping all the benefits of versioning and comparing.
Also, if you are used to storing compiled binaries in your source repos, stop!
Use Azure Artifacts99 or some other package management tool to store binaries for which you have
source code.
However, teams with large files (like 3D models or other assets) can use Git LFS to keep the code repo
slim and trimmed.
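For example, to have Git LFS manage Photoshop assets in a repository (the file pattern is only an illustration):

git lfs install                # set up Git LFS for your user account
git lfs track "*.psd"          # store matching files with LFS instead of plain Git
git add .gitattributes         # the tracking rule lives in .gitattributes
git commit -m "Track PSD files with Git LFS"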

Learning curve
There is a learning curve. If you have never used source control before, you are probably better off starting with Git. I have found that users of centralized source control (TFVC or Subversion) initially struggle to make the mental shift, especially around branches and synchronizing.
Once developers understand how Git branches work and get over the fact that they must commit and
then push, they have all the basics they need to succeed in Git.

Describe working with Git locally


Git and Continuous Delivery are one of those delicious chocolate and peanut butter combinations we occasionally find in the software world: two great tastes that taste great together!
Continuous Delivery of software demands a significant level of automation. It is hard to deliver continuously if you do not have a quality codebase.
Git provides you with the building blocks to take charge of quality in your codebase. It allows you to automate most of the checks in your codebase, even before committing the code into your repository.

98 https://git-lfs.github.com/
99 https://azure.microsoft.com/services/devops/artifacts/

To fully appreciate the effectiveness of Git, you must first understand how to carry out basic operations
on Git. For example, clone, commit, push, and pull.
The natural question is, how do we get started with Git?
One option is to go native with the command line or look for a code editor that supports Git natively.
Visual Studio Code is a cross-platform, open-source code editor that provides powerful developer tooling
for hundreds of languages.
To work in open-source, you need to embrace open-source tools.
This recipe will start by:
●● Setting up the development environment with Visual Studio Code.
●● Creating a new Git repository.
●● Committing code changes locally.
●● Pushing changes to a remote repository on Azure DevOps.

Getting ready
In this tutorial, we'll learn how to initialize a Git repository locally.
Then we will use the ASP.NET Core MVC project template to create a new project and version it in the
local Git repository.
We will then use Visual Studio Code to interact with the Git repository to do basic commit, pull, and push
operations.
You will need to set up your working environment with:
●● .NET Core 3.1 SDK or later: Download .NET100.
●● Visual Studio Code: Download Visual Studio Code101.
●● C# Visual Studio Code extension: C# programming with Visual Studio Code102.
●● Git: Git - Downloads103

●● Git for Windows (if you are using Windows): Git for Windows104
The Visual Studio Marketplace features several extensions for Visual Studio Code that you can install to
enhance your experience of using Git:
●● Git Lens105: This extension brings visualization for code history by using Git blame annotations and
code lens. The extension enables you to seamlessly navigate and explore the history of a file or
branch. Also, the extension allows you to gain valuable insights via powerful comparison commands
and so much more.
●● Git History106: Brings visualization and interaction capabilities to view the Git log, file history and
compare branches or commits.

100 https://dotnet.microsoft.com/download
101 https://code.visualstudio.com/Download
102 https://code.visualstudio.com/docs/languages/csharp
103 https://git-scm.com/downloads
104 https://gitforwindows.org/
105 https://gitlens.amod.io/
106 https://github.com/DonJayamanne/gitHistoryVSCode/blob/master/README.md

How to do it
1. Open the Command Prompt and create a new working folder:
mkdir myWebApp
cd myWebApp

2. In myWebApp, initialize a new Git repository:


git init

3. Configure global settings for the name and email address to be used when committing in this Git
repository:
git config --global user.name "John Doe"
git config --global user.email "john.doe@contoso.com"

If you are working behind an enterprise proxy, you can make your Git repository proxy-aware by
adding the proxy details in the Git global configuration file.
Different variations of this command will allow you to set up an HTTP/HTTPS proxy (with username/
password) and optionally bypass SSL verification.
Run the below command to configure a proxy in your global git config.
git config --global http.proxy http://proxyUsername:proxyPassword@proxy.server.com:port

4. Create a new ASP.NET core application. The new command offers a collection of switches that can be
used for language, authentication, and framework selection. More details can be found on Microsoft
docs107.
dotnet new mvc

Launch Visual Studio Code in the context of the current working folder:


code .

5. When the project opens in Visual Studio Code, select Yes for the Required assets to build and
debug are missing from ‘myWebApp.’ Add them? Warning message. Select Restore for the There
are unresolved dependencies info message. Hit F5 to debug the application, then myWebApp will
load in the browser, as shown in the following screenshot:

107 https://docs.microsoft.com/dotnet/core/tools/dotnet-new

If you prefer to use the command line, you can run the following commands in the context of the git
repository to run the web application.

dotnet build
dotnet run

You will notice the “.vscode” folder is added to your working folder. To avoid committing this folder
into your Git repository, you can include it in the .gitignore file. With the ".vscode" folder selected, hit
F1 to launch the command window in Visual Studio Code, type gitIgnore, and accept the option to
include the selected folder in the .gitIgnore file:

6. To stage and commit the newly created myWebApp project to your Git repository from Visual Studio Code, navigate to the Git icon in the left panel. Add a commit comment and commit the changes by clicking the checkmark icon. It will stage and commit the changes in one operation:

Open Program.cs; you will notice GitLens decorates the classes and functions with the commit history and brings this information inline on every line of code:

7. Now launch cmd in the context of the git repository and run git branch --list. It will show you
that currently, only the main branch exists in this repository. Now run the following command to
create a new branch called feature-devops-home-page.
git branch feature-devops-home-page
git checkout feature-devops-home-page
git branch --list

With these commands, you have created a new branch and checked it out. The --list keyword shows you a list of all branches in your repository. The green color represents the branch that is currently checked out.
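Tip: you can create and check out a new branch in a single step with the -b switch:

git checkout -b feature-devops-home-page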
8. Now navigate to the file ~\Views\Home\Index.cshtml and replace the contents with the text
below.
@{
ViewData["Title"] = "Home Page";
}

<div class="text-center">
<h1 class="display-4">Welcome</h1>
<p>Learn about <a href="https://azure.microsoft.com/services/devops/">Azure DevOps</a>.</p>
</div>

9. Refresh the web app in the browser to see the changes.



10. In the context of the git repository, execute the following commands. These commands will stage the
changes in the branch and then commit them.
git status

git add .

git commit -m "updated welcome page."

git status

11. To merge the changes from the feature-devops-home-page into main, run the following commands
in the context of the git repository.

git checkout main

git merge feature-devops-home-page

12. Run the below command to delete the feature branch.

git branch --delete feature-devops-home-page

How it works
The easiest way to understand the outcome of the steps done earlier is to check the history of the
operation. Let us have a look at how to do it.
1. In Git, committing changes to a repository is a two-step process. Running git add . stages the changes but does not commit them. Finally, running git commit promotes the staged changes into the repository.
2. To see the history of changes in the main branch, run the command git log -v

3. To investigate the actual changes in the commit, you can run the command git log -p

There is more
Git makes it easy to back out changes. Following our example, suppose you want to take out the changes made to the welcome page.
You can do it by hard-resetting the main branch to a previous commit using the following command.
git reset --hard 5d2441f0be4f1e4ca1f8f83b56dee31251367adc

Running the above command would reset the branch to the project init change.
If you run git log -v, you will see that the changes done to the welcome page are removed from the
repository.
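Note that git reset --hard rewrites the branch history. If the commit has already been pushed and shared with others, a safer alternative is git revert, which undoes the changes with a new commit:

git revert HEAD    # create a new commit that reverses the most recent commit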

Summary
This module described different source control systems, such as Git and Team Foundation Version Control (TFVC), and helped with the initial steps for Git utilization.
You learned how to describe the benefits and usage of:
●● Apply source control practices in your development process.
●● Explain the differences between centralized and distributed version control.
●● Understand Git and Team Foundation Version Control (TFVC).
●● Develop using Git.

Learn more
●● What is Team Foundation Version Control - Azure Repos | Microsoft Docs108.
●● Migrate from TFVC to Git - Azure DevOps | Microsoft Docs109.
●● Git and TFVC version control - Azure Repos | Microsoft Docs110.

108 https://docs.microsoft.com/azure/devops/repos/tfvc/what-is-tfvc
109 https://docs.microsoft.com/devops/develop/git/migrate-from-tfvc-to-git
110 https://docs.microsoft.com/azure/devops/repos/tfvc/comparison-git-tfvc

●● Get started with Git and Visual Studio - Azure Repos | Microsoft Docs111.

111 https://docs.microsoft.com/azure/devops/repos/git/gitquickstart

Work with Azure Repos and GitHub


Introduction
This module introduces you to Azure Repos and GitHub. It explores ways to link Azure Boards and
GitHub, migrate from Team Foundation Version Control (TFVC) to Git, and work with GitHub Codespaces
for development.

Learning objectives
After completing this module, students and professionals can:
●● Work with Azure Repos and GitHub.
●● Link Azure Boards and GitHub.
●● Plan and migrate from TFVC to Git.
●● Work with GitHub Codespaces.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with Git and version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

Introduction to Azure Repos


Azure Repos is a set of version control tools that you can use to manage your code.
Whether your software project is large or small, using version control is a good idea.
Azure Repos provides two types of version control:
●● Git: distributed version control
●● Team Foundation Version Control (TFVC): centralized version control

What do I get with Azure Repos?


●● Use of free private Git repositories, pull requests, and code search: Get unlimited private Git repository
hosting and support for TFVC that scales from a hobby project to the world’s largest repository.
●● Support for any Git client: Securely connect with and push code into your Git repos from any IDE,
editor, or Git client.
●● Web hooks and API integration: Add validations and extensions from the marketplace or build your
own-using web hooks and REST APIs.
●● Semantic code search: Quickly find what you are looking for with a code-aware search that understands classes and variables.
●● Collab to build better code: Do more effective Git code reviews with threaded discussion and continuous integration for each change. Use forks to promote collaboration with inner source workflows.

●● Automation with built-in CI/CD: Set up continuous integration/continuous delivery (CI/CD) to trigger builds, tests, and deployments automatically with every completed pull request, using Azure Pipelines or your own tools.
●● Protection of your code quality with branch policies: Keep code quality high by requiring code reviewer sign-off, successful builds, and passing tests before merging pull requests. Customize your branch policies to maintain your team’s high standards.
●● Usage of your favorite tools: Use Git and TFVC repositories on Azure Repos with your favorite editor
and IDE.
For further reference on using git in Azure Repos, refer to Microsoft Docs.112

Introduction to GitHub
GitHub is the largest open-source community in the world. Microsoft owns GitHub. GitHub is a development platform inspired by the way you work.
You can host and review code, manage projects, and build software alongside 40 million developers from
open source to business.
GitHub is a Git repository hosting service, but it adds many features of its own.
While Git is a command-line tool, GitHub provides a Web-based graphical interface.
It also provides access control and several collaboration features, such as wikis and essential task management tools for every project.
So what are the main benefits of using GitHub? Nearly every open-source project uses GitHub to manage
its project.
Using GitHub is free if your project is open source and includes a wiki and issue tracker that makes it easy
to have more in-depth documentation and get feedback about your project.

What are some of the features offered by GitHub?


●● Automate from code to cloud: Cycle your production code faster and simplify your workflow with
GitHub Packages and built-in CI/CD using GitHub Actions.

●● Automate your workflows: Build, test, deploy, and run CI/CD the way you want in the same place
you manage code. Trigger Actions from any GitHub event to any available API. Build your Actions
in the language of your choice or choose from thousands of workflows and Actions created by the
community.
●● Packages at home with their code: Use Actions to publish new package versions to GitHub Packages automatically. Install packages and images hosted on GitHub Packages or your preferred package registry in your CI/CD workflows. It is always free for open source, and data transfer within Actions is unlimited for everyone.
112 https://docs.microsoft.com/azure/devops/repos

●● Securing software together: GitHub plays a role in securing the world's code—developers, maintainers, researchers, and security teams. On GitHub, development teams everywhere can work together to secure the world's software supply chain, from fork to finish.

●● Get alerts about vulnerabilities in your code: GitHub continuously scans security advisories for
popular languages. Also, sends security alerts to maintainers of affected repositories with details
so they can remediate risks.
●● Automatically update vulnerabilities: GitHub monitors your project dependencies and automatically opens pull requests to update dependencies to the minimum version that resolves known vulnerabilities.
●● Stay on top of CVEs: Stay up to date with the latest Common Vulnerabilities and Exposures (CVEs),
and learn how they affect you with the GitHub Advisory Database.
●● Find vulnerabilities that other tools miss: CodeQL is the industry's leading semantic code analysis
engine. GitHub's revolutionary approach treats code as data to identify security vulnerabilities
faster.
●● Eliminate variants: Never make the same mistake twice. Proactive vulnerability scanning prevents
vulnerabilities from ever reaching production.
●● Keep your tokens safe: Accidentally committed a token to a public repository? GitHub has you covered. With support from 20 service providers, GitHub takes steps to keep you safe.
●● Seamless code review: Code review is the surest path to better code, and it is fundamental to how
GitHub works. Built-in review tools make code review an essential part of your team's process.

●● Propose changes: Better code starts with a Pull Request, a living conversation about changes
where you can talk through ideas, assign tasks, discuss details, and conduct reviews.
●● Request reviews: If you are on the other side of a review, you can request reviews from your peers
to get the detailed feedback you need.
●● See the difference: Reviews happen faster when you know exactly what is changed. Diffs compare
versions of your source code side by side, highlighting the new, edited, or deleted parts.
●● Comment in context: Discussions happen in comment threads, right within your code. Bundle
comments into one review or reply to someone else are inline to start a conversation.
●● Give clear feedback: Your teammates should not have to think too hard about what a thumbs-up
emoji means. Specify whether your comments are required changes or just a few suggestions.
●● Protect branches: Only merge the highest-quality code. You can configure repositories to require
status checks, reducing both human error and administrative overhead.
●● All your code and documentation in one place: There are hundreds of millions of private, public, and open-source repositories hosted on GitHub. Every repository is equipped with tools to help you host, version, and release code and documentation.

●● Code where you collaborate: Repositories keep code in one place and help your teams collaborate
with the tools they love, even if you work with large files using Git LFS. With unlimited private
repositories for individuals and teams, you can create or import as many projects as you would
like.
●● Documentation alongside your code: Host your documentation directly from your repositories
with GitHub Pages. Use Jekyll as a static site generator and publish your Pages from the /docs
folder on your master branch.

●● Manage your ideas: Coordinate early, stay aligned, and get more done with GitHub's project management tools.

●● See your project's big picture: See everything happening in your project and choose where to focus your team's efforts with Projects: task boards that live right where they belong, close to your code.
●● Track and assign tasks: Issues help you identify, assign, and keep track of tasks within your team.
You can open an Issue to track a bug, discuss an idea with an @mention, or start distributing work.
●● The human side of software: Building software is as much about managing teams and communities as it is about code. Whether you are on a team of two or 2,000, GitHub has the support your people need.

●● Manage and grow teams: Help people get organized with GitHub teams, level up to access
administrative roles, and fine-tune your permissions with nested teams.
●● Keep conversations on track: Moderation tools, like issue and pull request locking, help your team stay focused on code. And if you maintain an open-source project, user blocking reduces noise and keeps conversations productive.
●● Set community guidelines: Set roles and expectations without starting from scratch. Customize
standard codes of conduct to create the perfect one for your project. Then choose a pre-written
license right from your repository.
GitHub offers excellent learning resources for its platform. You can find everything from introductory Git training to deep dives on publishing static pages and doing DevOps on GitHub, right here113.

Migrate from TFVC to Git


Migrating the tip
Most teams wish they could reorganize their source control structure.
Typically, the structure the team is using today was set up by a well-meaning developer a decade ago,
but it is not optimal.
Migrating to Git could be an excellent opportunity to restructure your repo.
In this case, it probably does not make sense to migrate history anyway, since you are going to restructure the code (or break the code into multiple repos).
The process is simple:
●● Create an empty Git repo (or multiple empty repos).
●● Get-latest from TFS.
●● Copy/reorganize the code into the empty Git repos.
●● Commit and push, and you are there!
If you have shared code, you need to create builds of the shared code to publish to a package feed, and then consume those packages in downstream applications; but the Git part is straightforward.
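The same steps in command form, as a minimal sketch (tf.exe is the TFVC command-line client included with Visual Studio); the project name, folder, and remote URL are hypothetical placeholders, and the default branch may be master rather than main depending on your Git configuration:
tf get $/MyProject /recursive
git init MyProject-git

(copy or reorganize the code from the TFVC workspace into MyProject-git, then:)

cd MyProject-git
git add .
git commit -m "Initial import of latest TFVC sources"
git remote add origin https://dev.azure.com/<organization>/<project>/_git/MyProject
git push -u origin main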

113 https://lab.github.com/

Single branch import


If you are on TFVC and in Azure DevOps, you have the option of a simple single-branch import. Click on Import repository from the Azure Repos top-level drop-down menu to open the dialog. Then enter the path to the branch you are migrating (yes, you can only choose one branch). Select whether you want history (up to 180 days). Add a name for the repo, and the import will be triggered.

Import repository
Import repository also allows you to import a Git repository, which is useful for moving your Git repositories from GitHub or any other public or private hosting space into Azure Repos.

There are some limitations here (that apply only when migrating source type TFVC): a single branch and
only 180 days of history.
However, if you only care about one branch and are already in Azure DevOps, it is an effortless but
effective way to migrate.

Use GIT-TFS
What if you need to migrate more than a single branch and keep branch relationships? Or do you need to bring all the history with you?
In that case, you are going to have to use GIT-TFS. It is an open-source project built to synchronize Git and TFVC repos.
You can also use it to do a one-off migration using git tfs clone.
GIT-TFS has the advantage that it can migrate multiple branches and preserve the relationships to merge
branches in Git after you migrate.
Be warned that it can take a while to do this conversion - especially for large repos or repos with a long
history.
You can quickly dry run the migration locally, iron out any issues, and then do it for real.

There is lots of flexibility with this tool, so I highly recommend it.
If you are on Subversion, then you can use GIT-SVN to import your Subversion repo similarly to using
GIT-TFS.

Migrating from TFVC to Git using GIT-TFS


If Chocolatey is already installed on your computer, run choco install gittfs
Add the GIT-TFS folder path to your PATH. You could also set it temporarily (for the duration of your current terminal session) using: set PATH=%PATH%;%cd%\GitTfs\bin\Debug
You need .NET 4.5.2, and maybe the 2012 or 2013 version of Team Explorer installed (or Visual Studio), depending on the version of TFS you want to target.
Clone the whole repository (this can take a while):
git tfs clone http://tfs:8080/tfs/DefaultCollection $/some_project
Advanced use cases of cloning the TFVC repository into Git are documented here114.
cd some_project
git log
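The clone produces a regular local Git repository. A minimal sketch of publishing it to a new, empty remote, where the URL is a hypothetical placeholder:
git remote add origin https://dev.azure.com/<organization>/<project>/_git/some_project
git push --all origin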

Develop online with GitHub Codespaces


GitHub Codespaces addresses several issues that developers regularly suffer from.
First, many developers are working with old hardware and software systems, which are not refreshed.
Second, developers are often tied to individual development systems. Moving from location to location
or system to system is inconvenient or slow to configure.
A problem for the developers' organizations is the proliferation of intellectual property across all these
machines.

What is GitHub Codespaces?


Codespaces is a cloud-based development environment that GitHub hosts. It is essentially an online
implementation of Visual Studio Code.
Codespaces allows developers to work entirely in the cloud.
Codespaces even enables developers to contribute from tablets and Chromebooks.
Because it is based on Visual Studio Code, the development environment is still rich with:
●● Syntax highlighting.
●● Autocomplete.
●● Integrated debugging.
●● Direct Git integration.
Developers can create a codespace (or multiple codespaces) for a repository. Each codespace is associated with a specific branch of a repository.

114 https://github.com/git-tfs/git-tfs/blob/master/doc/commands/clone.md

Using GitHub Codespaces


You can do all your work in codespaces within a browser.
For an even more responsive experience, you can connect to a codespace from a local copy of Visual
Studio Code.
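Codespaces can also be created and opened from the GitHub CLI. A minimal sketch, assuming a recent gh version with the codespace commands available and using a hypothetical repository name:
gh codespace create --repo my-org/my-web-app --branch main
gh codespace code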

Summary
This module introduced Azure Repos and GitHub tools, ways to link Azure Boards and GitHub, how to migrate from Team Foundation Version Control (TFVC) to Git, and how to work with GitHub Codespaces for development.
You learned how to describe the benefits and usage of:
●● Work with Azure Repos and GitHub.
●● Link Azure Boards and GitHub.
●● Plan and migrate from TFVC to Git.
●● Work with GitHub Codespaces.

Learn more
●● Integration of Azure Repos and Git Repositories115.
●● Integration of Azure Boards and GitHub116.
●● Import repositories from TFVC to Git - Azure Repos | Microsoft Docs117.
●● GitHub Codespaces118.

115 https://azure.microsoft.com/services/devops/repos/
116 https://docs.microsoft.com/azure/devops/boards/github/
117 https://docs.microsoft.com/azure/devops/repos/git/import-from-tfvc
118 https://github.com/features/codespaces

Labs
Lab 01: Agile planning and portfolio management with Azure Boards

Lab overview
In this lab, you will learn about the agile planning and portfolio management tools and processes provided by Azure Boards and how they can help you quickly plan, manage, and track work across your entire team. You will explore the product backlog, sprint backlog, and task boards, which can be used to track the flow of work during an iteration. We will also take a look at how the tools have been enhanced in this release to scale for larger teams and organizations.

Objectives
After you complete this lab, you will be able to:
●● Manage teams, areas, and iterations
●● Manage work items
●● Manage sprints and capacity
●● Customize Kanban boards
●● Define dashboards
●● Customize team process

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions119

119 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Lab 02: Version controlling with Git in Azure Repos
Lab overview
Azure DevOps supports two types of version control, Git and Team Foundation Version Control (TFVC).
Here is a quick overview of the two version control systems:
●● Team Foundation Version Control (TFVC): TFVC is a centralized version control system. Typically,
team members have only one version of each file on their dev machines. Historical data is maintained
only on the server. Branches are path-based and created on the server.
●● Git: Git is a distributed version control system. Git repositories can live locally (such as on a develop-
er's machine). Each developer has a copy of the source repository on their dev machine. Developers
can commit each set of changes on their dev machine and perform version control operations such as
history and compare without a network connection.
Git is the default version control provider for new projects. You should use Git for version control in your
projects unless you have a specific need for centralized version control features in TFVC.
In this lab, you will learn how to establish a local Git repository, which can easily be synchronized with a
centralized Git repository in Azure DevOps. In addition, you will learn about Git branching and merging
support. You will use Visual Studio Code, but the same processes apply for using any Git-compatible
client.

Objectives
After you complete this lab, you will be able to:
●● Clone an existing repository
●● Save work with commits
●● Review history of changes
●● Work with branches by using Visual Studio Code

Lab duration
●● Estimated time: 50 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions120

120 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which of the following choices best describes DevOps?
†† DevOps is the role of the person who manages source control, pipelines, and monitoring environments to continue delivering value to the software project.
†† DevOps is the union of people, process, and products to enable continuous delivery of value to our
end users.
†† DevOps is the new process of creating continuous delivery and continuous integration for software
projects.

Multiple choice
Which of the following choices drives the ongoing merging and testing of code that leads to finding defects
early?
†† Continuous Integration.
†† Continuous Delivery.
†† Continuous Feedback.

Multiple choice
Which of the following choices is a practice that enables the automated creation of environments?
†† Infrastructure as a Service (IaaS).
†† Infrastructure as Code (IaC).
†† Software as a Service (SaaS).

Multiple choice
In which of the following choices would you find large amounts of technical debt?
†† Greenfield project.
†† Brownfield project.
†† Bluefield project.

Multiple choice
Which of the following choices would a system that manages inventory in a warehouse be considered?
†† System of Record.
†† System of Engagement.
†† System of History.

Multiple choice
Which of the following choices are the user group categories most adopted in Continuous Delivery?
†† Canaries, Early adopters, and Users.
†† Alpha and Beta Users.
†† Blue and Green Users.

Multiple choice
An Agile tool manages and visualizes work by showing tasks moving from left to right across columns
representing stages. What is this tool commonly called?
†† Backlog.
†† Kanban Board.
†† Delivery Plans.

Multiple choice
Which of the following choices isn't describing processes and methodologies correctly?
†† It is required to implement Waterfall to evolve and implement Agile.
†† It is possible to start with Waterfall and evolve to Agile.
†† Agile is the best and most used methodology and the best to start your project.

Multiple choice
Which of the following choices isn't an Agile/Project management tool?
†† Azure DevOps.
†† GitHub.
†† Camtasia.

Multiple choice
Which of the following choices isn't an Azure DevOps service/feature?
†† Azure Boards.
†† Azure Monitor.
†† Azure Repos.

Multiple choice
Which of the following is a correct statement about Azure DevOps?
†† Azure DevOps only works in on-premises environments. If you want to use a cloud service, you need
to choose GitHub.
†† Azure DevOps and GitHub are only available in the cloud.
†† Azure DevOps and GitHub provide options to work on-premises and in the cloud.

Multiple choice
Which of the following choices does Azure DevOps Services enterprise-grade authentication support?
†† Microsoft Account, Azure Active Directory (Azure AD), GitHub Account.
†† Microsoft Account, Azure Active Directory (Azure AD).
†† GitHub Account.

Multiple choice
Which of the following choices isn't a benefit of source control?
†† Manageability.
†† Efficiency.
†† Accountability.

Multiple choice
Which of the following choices isn't a source control best practice?
†† Make small changes.
†† Commit personal and secure files.
†† Link code changes to work items.

Multiple choice
Which of the following choices correctly describes one of the most valuable version control features?
†† Version control is a solution for managing and saving changes made to manually created assets. If you
make changes to the source code, you can go back in time and easily roll back to previous-working
versions.
†† Version control is a solution for automatically incrementing the version number for deployments.
†† Version control is a solution for managing and saving changes made to manually created assets. If you
make changes to the source code, be careful, you can't go back in time and easily roll back to previ-
ous-working versions.

Multiple choice
Which of the following choices correctly describes what is source control?
†† Source control is the practice of controlling what is deployed to test and production environments.
†† Source control is the practice of controlling security through code files.
†† Source control is the practice of tracking and managing changes to code.

Multiple choice
Which of the following choices isn't a benefit of using distributed version control?
†† Complete offline support
†† Allows exclusive file locking.
†† Portable history.

Multiple choice
Which of the following choices isn't a benefit of using centralized version control?
†† Easily scales for large codebases.
†† An open-source friendly code review model via pull requests.
†† Allows exclusive file locking.

Multiple choice
Which of the following choices isn't a supported Azure Repos version control system?
†† Git.
†† Team Foundation Version Control (TFVC).
†† Source Safe.

Multiple choice
Which of the following choices is the exact number of days of history you can import from Team Foundation Version Control (TFVC) to Git using the "Import repository" feature in Azure DevOps?
†† 90 days.
†† 180 days.
†† 365 days.

Multiple choice
Which of the following choices describe GitHub Codespaces?
†† It's a platform for hosting and managing packages, including containers and other dependencies.
†† It's an AI pair programmer that helps you write code faster and with less work.
†† It's an online implementation of Visual Studio Code.

Multiple choice
Which of the following choices best describes the Azure DevOps and GitHub integration?
†† Azure Boards has direct integration with Azure Repos, but it can also be integrated with GitHub to plan and track work, linking commits, PRs, and issues.
†† Azure Boards has direct integration with Azure Repos. Azure Repos only integrates with GitHub using
extensions from Marketplace for the read-only track.
†† Azure Boards has direct integration with GitHub for tracking activities and moving tasks on both sides
Azure Boards to GitHub and GitHub to Azure Repos.

Multiple choice
Which of the following project board types can contain issues and pull requests from any personal repository?
†† User-owned project boards.
†† Organization-wide project boards.
†† Repository project boards

Multiple choice
Which of the following choices isn't a Project Boards supported template by default?
†† Basic kanban.
†† Automated kanban with review.
†† Automated CMMI.

Answers
Multiple choice
Which of the following choices best describes DevOps?
†† DevOps is the role of the person who manages source control, pipelines, and monitoring environments to continue delivering value to the software project.
■■ DevOps is the union of people, process, and products to enable continuous delivery of value to our
end users.
†† DevOps is the new process of creating continuous delivery and continuous integration for software
projects.
Explanation
According to Donovan Brown, "DevOps is the union of people, process, and products to enable continuous
delivery of value to our end users."
Multiple choice
Which of the following choices drives the ongoing merging and testing of code that leads to finding
defects early?
■■ Continuous Integration.
†† Continuous Delivery.
†† Continuous Feedback.
Explanation
Continuous Integration drives the ongoing merging and testing of code, which leads to finding defects early.
Multiple choice
Which of the following choices is a practice that enables the automated creation of environments?
†† Infrastructure as a Service (IaaS).
■■ Infrastructure as Code (IaC).
†† Software as a Service (SaaS).
Explanation
Infrastructure as Code (IaC) is a practice that enables the automation and validation of the creation and
teardown of environments to help with delivering secure and stable application hosting platforms.
Multiple choice
In which of the following choices would you find large amounts of technical debt?
†† Greenfield project.
■■ Brownfield project.
†† Bluefield project.
Explanation
A Brownfield Project comes with the baggage of existing codebases, existing teams, and often a significant
technical debt. They can still be ideal projects for DevOps transformations.

Multiple choice
Which of the following choices would a system that manages inventory in a warehouse be considered?
■■ System of Record.
†† System of Engagement.
†† System of History.
Explanation
Systems that provide the truth about data elements are often called Systems of Record.
Multiple choice
Which of the following choices are the user group categories most adopted in Continuous Delivery?
■■ Canaries, Early adopters, and Users.
†† Alpha and Beta Users.
†† Blue and Green Users.
Explanation
In discussions around continuous delivery, users are often categorized into three general buckets: Canaries,
Early adopters, and Users.
Multiple choice
An Agile tool manages and visualizes work by showing tasks moving from left to right across columns
representing stages. What is this tool commonly called?
†† Backlog.
■■ Kanban Board.
†† Delivery Plans.
Explanation
A Kanban Board lets you visualize the flow of work and constrain the amount of work in progress. Your
Kanban board turns your backlog into an interactive signboard, providing a visual flow of work.
Multiple choice
Which of the following choices isn't describing processes and methodologies correctly?
■■ It is required to implement Waterfall to evolve and implement Agile.
†† It is possible to start with Waterfall and evolve to Agile.
†† Agile is the best and most used methodology and the best to start your project.
Explanation
The waterfall methodology is a traditional software development practice not related to Agile. You can
implement Agile without any dependencies. Also, you can evolve your current process to Agile.

Multiple choice
Which of the following choices isn't an Agile/Project management tool?
†† Azure DevOps.
†† GitHub.
■■ Camtasia.
Explanation
Camtasia is a screen recorder tool, not an Agile or Project Management tool. All others are Agile-based.
Multiple choice
Which of the following choices isn't an Azure DevOps service/feature?
†† Azure Boards.
■■ Azure Monitor.
†† Azure Repos.
Explanation
Azure DevOps includes a range of services covering the complete development life cycle like Azure Boards,
Azure Pipelines, Azure Repos, Azure Artifacts, Azure Test Plans. Azure Monitor is an Azure feature.
Multiple choice
Which of the following is a correct statement about Azure DevOps?
†† Azure DevOps only works in on-premises environments. If you want to use a cloud service, you need
to choose GitHub.
†† Azure DevOps and GitHub are only available in the cloud.
■■ Azure DevOps and GitHub provide options to work on-premises and in the cloud.
Explanation
Azure DevOps provides both on-premises and cloud options, named Azure DevOps Server (on-premises)
and Azure DevOps Services (SaaS). Also, the same applies to GitHub with GitHub (SaaS) and GitHub
Enterprise (On-premises and Cloud).
Multiple choice
Which of the following choices does Azure DevOps Services enterprise-grade authentication support?
■■ Microsoft Account, Azure Active Directory (Azure AD), GitHub Account.
†† Microsoft Account, Azure Active Directory (Azure AD).
†† GitHub Account.
Explanation
To protect and secure your data, Azure DevOps supports: Microsoft Account, GitHub Account, and Azure Active Directory (Azure AD).

Multiple choice
Which of the following choices isn't a benefit of source control?
†† Manageability.
†† Efficiency.
■■ Accountability.
Explanation
Source control is the practice of tracking and managing changes to code. Benefits include but are not
limited to manageability and efficiency.
Multiple choice
Which of the following choices isn't a source control best practice?
†† Make small changes.
■■ Commit personal and secure files.
†† Link code changes to work items.
Explanation
Committing personal and secure files is insecure. Do not commit personal or secure files. These could include application settings or SSH keys. Often these are committed accidentally but cause problems later when other team members are working on the same code.
Multiple choice
Which of the following choices correctly describes one of the most valuable version control features?
■■ Version control is a solution for managing and saving changes made to manually created assets. If you
make changes to the source code, you can go back in time and easily roll back to previous-working
versions.
†† Version control is a solution for automatically incrementing the version number for deployments.
†† Version control is a solution for managing and saving changes made to manually created assets. If you
make changes to the source code, be careful, you can't go back in time and easily roll back to previ-
ous-working versions.
Explanation
Version control tools allow you to see who made changes, when, and what exactly was changed, allowing
you to revert it when needed.
Multiple choice
Which of the following choices correctly describes what is source control?
†† Source control is the practice of controlling what is deployed to test and production environments.
†† Source control is the practice of controlling security through code files.
■■ Source control is the practice of tracking and managing changes to code.
Explanation
Source control is the practice of tracking and managing changes to code.

Multiple choice
Which of the following choices isn't a benefit of using distributed version control?
†† Complete offline support
■■ Allows exclusive file locking.
†† Portable history.
Explanation
Distributed version control supports full offline versioning; it is cross-platform and has a portable history.
Multiple choice
Which of the following choices isn't a benefit of using centralized version control?
†† Easily scales for large codebases.
■■ An open-source friendly code review model via pull requests.
†† Allows exclusive file locking.
Explanation
An open-source friendly code review model via pull requests is a characteristic of distributed version control based on Git, not of centralized version control.
Multiple choice
Which of the following choices isn't a supported Azure Repos version control system?
†† Git.
†† Team Foundation Version Control (TFVC).
■■ Source Safe.
Explanation
Azure Repos provides two types of version control systems. Git: distributed version control and Team
Foundation Version Control (TFVC): centralized version control.
Multiple choice
Which of the following choices is the exact number of days of history you can import from Team Foundation Version Control (TFVC) to Git using the "Import repository" feature in Azure DevOps?
†† 90 days.
■■ 180 days.
†† 365 days.
Explanation
There are some limitations here (that apply only when migrating source type TFVC): a single branch and only 180 days of history.

Multiple choice
Which of the following choices describe GitHub Codespaces?
†† It's a platform for hosting and managing packages, including containers and other dependencies.
†† It's an AI pair programmer that helps you write code faster and with less work.
■■ It's an online implementation of Visual Studio Code.
Explanation
Codespaces is a cloud-based development environment that GitHub hosts. It's essentially an online implementation of Visual Studio Code.
Multiple choice
Which of the following choices best describes the Azure DevOps and GitHub integration?
■■ Azure Boards has direct integration with Azure Repos, but it can also be integrated with GitHub to plan and track work, linking commits, PRs, and issues.
†† Azure Boards has direct integration with Azure Repos. Azure Repos only integrates with GitHub using
extensions from Marketplace for the read-only track.
†† Azure Boards has direct integration with GitHub for tracking activities and moving tasks on both sides
Azure Boards to GitHub and GitHub to Azure Repos.
Explanation
Integrating GitHub with Azure Boards lets you plan and track your work by linking GitHub commits, pull requests, and issues directly to work items in Boards.
Multiple choice
Which of the following project board types can contain issues and pull requests from any personal repository?
■■ User-owned project boards.
†† Organization-wide project boards.
†† Repository project boards
Explanation
User-owned project boards can contain issues and pull requests from any personal repository.
Multiple choice
Which of the following choices isn't a Project Boards supported template by default?
†† Basic kanban.
†† Automated kanban with review.
■■ Automated CMMI.
Explanation
CMMI isn't a valid Project Boards template. The templates that can be automated and already configured
are: Basic kanban, Automated kanban, Automated kanban with review, and Bug triage.
Module 2 Development for enterprise DevOps

Structure your Git Repo


Introduction
As a version control system, Git is easy to get started with but challenging to master.
While there's no single right way to implement Git, many techniques can help you scale the implementation of Git across the organization.
Simple things like structuring your code into micro repos, selecting a lean branching and merging model,
and using pull requests for code review can make your teams more productive.
This module examines Git repositories structure, explains the differences between mono versus multiple
repos, and helps you create a changelog.

Learning objectives
After completing this module, students and professionals can:
●● Understand Git repositories.
●● Implement mono repo or multiple repos.
●● Explain how to structure Git Repos.
●● Implement a changelog.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore monorepo versus multiple repos


A repository is simply a place where the history of your work is stored. It often lives in a .git subdirectory
of your working copy.
However, what's the best way to organize your code repository? Software development teams start with
the best intentions to keep a clear separation of concerns in both the software being developed and their
code repositories.
It's not uncommon for code repositories to become bloated with unrelated code and artifacts over time.
There are two philosophies on organizing your repos: Monorepo or Multiple repos.
●● Monorepo is a source control pattern where all the source code is kept in a single repository. It's super simple to give all your employees access to everything in one shot. Just clone it down, and you are done.
●● Multiple repositories refer to organizing your projects each into their separate repository.
The fundamental difference between the monorepo and multiple repos philosophies boils down to a
difference about what will allow teams working together on a system to go fastest.
The multiple repos view, in its extreme form, is that if you let every subteam live in its own repo, they have the flexibility to work in their area however they want, using whatever libraries, tools, development workflow, and so on, will maximize their productivity.
The cost is that anything not developed within a given repo must be consumed as if it were a third-party library or service, even if it was written by the person sitting one desk over.
If you find a bug in a library you use, you must fix it in the appropriate repo, get a new artifact published, and then return to your repo to change your code.
You must deal with a different code base, various libraries, and tools, or even a different workflow in the
other repo.
Or maybe you must ask someone who owns that system to make the change for you and wait for them
to get around to it.
The monorepo view is that this friction, especially when dealing with more complicated dependency graphs, is much more costly than multiple repos advocates recognize.
The productivity gains to be had by letting different teams go their own way aren't that significant.
While some teams may find a locally optimal way of working, it's also likely that other teams will offset their gains by choosing a suboptimal way of working.
By putting all your eggs in the one basket of the monorepo, you can then afford to invest in watching
that basket carefully.
The friction of having to make changes in other repos, or, worse, having to wait for other teams to make changes for you, is primarily avoided in a monorepo because anyone can (and is encouraged to) change anything.
If you find a bug in a library, you can fix it and get on with your life. It's no more friction than if you had
found a bug in your code.

Azure DevOps project repositories


In Azure DevOps, a project can contain multiple repositories. It's common to use one repository for each
associated solution.
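For example, the Azure DevOps CLI can add another repository to an existing project. A minimal sketch, with hypothetical organization, project, and repository names:
az repos create --name MySecondRepo --organization https://dev.azure.com/myorg --project MyProject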

Implement a change log


The concept of a changelog is simple enough: It's a file that has a list of changes made to a project,
usually in date order. The typical breakdown is to separate a list of versions, and then within each version,
show:
●● Added features
●● Modified/Improved features
●● Deleted features
Some teams will post changelogs as blog posts; others will create a CHANGELOG.md file in a GitHub
repository.

Automated change log tooling


While changelogs can be created and manually maintained, you might want to consider using an automated changelog creation tool, at least as a starting point.

Using native GitHub commands


The git log command can be useful for automatically creating content. Example: create a new section per
version:
git log [options] vX.X.X..vX.X.Y | helper-script > projectchangelogs/X.X.Y
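For instance, a minimal sketch, with hypothetical tags and output path, that turns commit subjects between two releases into a bulleted section without a helper script:
git log --pretty=format:"- %s (%h)" v1.0.0..v1.1.0 > projectchangelogs/1.1.0.md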

Git changelog
One standard tool is gitchangelog1. This tool is based on Python.
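A minimal usage sketch, assuming Python and pip are available; gitchangelog writes the generated changelog to standard output:
pip install gitchangelog
gitchangelog > CHANGELOG.md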

GitHub changelog generator


Another standard tool is called github-changelog-generator.2
$ github_changelog_generator -u github-changelog-generator -p TimerTrend-3.0

This tool is based on Ruby and distributed as a gem.

Should you use autogenerated log-based data?


The preference is always to avoid dumping log entries into a changelog. Logs are “noisy,” so it's easy to generate a mess that isn't helpful.

1 https://pypi.org/project/gitchangelog/
2 https://github.com/github-changelog-generator/github-changelog-generator

Summary
This module examined Git repositories structure, explained the differences between mono versus multiple
repos, and helped you create a changelog.
You learned how to describe the benefits and usage of:
●● Understand Git repositories.
●● Implement mono repo or multiple repos.
●● Explain how to structure Git Repos.
●● Implement a changelog.

Learn more
●● Understand source control - Azure DevOps3.
●● Build Azure Repos Git repositories - Azure Pipelines | Microsoft Docs4.
●● Check out multiple repositories in your pipeline - Azure Pipelines | Microsoft Docs5.

3 https://docs.microsoft.com/azure/devops/user-guide/source-control
4 https://docs.microsoft.com/azure/devops/pipelines/repos/azure-repos-git
5 https://docs.microsoft.com/azure/devops/pipelines/repos/multi-repo-checkout

Manage Git branches and workflows


Introduction
This module explores Git branching types, concepts, and models for the continuous delivery process. It
helps companies define their branching strategy and organization.

Learning objectives
After completing this module, students and professionals can:
●● Describe Git branching workflows.
●● Implement feature branches.
●● Fork a repo.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore branch workflow types


What is a successful Git branch workflow?
When evaluating a workflow for your team, you must consider your team's culture. You want the workflow to enhance your team's effectiveness and not be a burden that limits productivity. Some things to consider when evaluating a Git workflow are:
●● Does this workflow scale with team size?
●● Is it easy to undo mistakes and errors with this workflow?
●● Does this workflow impose any new unnecessary cognitive overhead on the team?

Common branch workflows


Most popular Git workflows will have some sort of centralized repo that individual developers will push
and pull from.
Below is a list of some popular Git workflows that we'll go into more detail about in the next section. These comprehensive workflows offer more specialized patterns for managing branches for feature development, hotfixes, and eventual releases.

Trunk-based development
Trunk-based development is a logical extension of Centralized Workflow.

The core idea behind the Feature Branch Workflow is that all feature development should take place in a dedicated branch instead of the main branch.
This encapsulation makes it easy for multiple developers to work on a particular feature without disturbing the main codebase.
It also means the main branch should never contain broken code, which is a huge advantage for continuous integration environments.

Forking workflow
The Forking Workflow is fundamentally different from the other workflows discussed in this tutorial.
Instead of using a single server-side repository to act as the “central” codebase, it gives every developer a server-side repository.
It means that each contributor has two Git repositories:
●● A private local one.
●● A public server-side one.

Explore feature branch workflow


The core idea behind the Feature Branch Workflow is that all feature development should take place in a dedicated branch instead of the main branch.
The encapsulation makes it easy for multiple developers to work on a particular feature without disturbing the main codebase.
It also means the main branch will never contain broken code, which is a huge advantage for continuous integration environments.
Encapsulating feature development also makes it possible to use pull requests, which are a way to start discussions around a branch.
They allow other developers to sign off on a feature before it gets integrated into the official project.
Or, if you get stuck in the middle of a feature, you can open a pull request asking for suggestions from
your colleagues.
The point is, pull requests make it incredibly easy for your team to comment on each other's work.
Also, feature branches can (and should) be pushed to the central repository.
It makes it possible to share a feature with other developers without touching any official code.
Since the main is the only “special” branch, storing several feature branches on the central repository
doesn't pose any problems.
It's also a convenient way to back up everybody's local commits.

Trunk-based development workflow


The trunk-based development workflow assumes a central repository, and main represents the official project history.
Instead of committing directly to their local main branch, developers create a new branch every time they
start work on a new feature.

Feature branches should have descriptive names, like new-banner-images or bug-91. The idea is to give a
clear, highly focused purpose to each branch.
Git makes no technical distinction between the main branch and feature branches, so developers can
edit, stage, and commit changes to a feature branch.

Create a branch

When you're working on a project, you're going to have many different features or ideas in progress at
any given time – some of which are ready to go and others that aren't.
Branching exists to help you manage this workflow.
When you create a branch in your project, you're creating an environment where you can try out new
ideas.
Changes you make on a branch don't affect the main branch, so you're free to experiment and commit
changes, safe in the knowledge that your branch won't be merged until it's ready to be reviewed by
someone you're collaborating with.
Branching is a core concept in Git, and the entire branch flow is based upon it. There's only one rule:
anything in the main branch is always deployable.
Because of this, your new branch must be created off main when working on a feature or a fix.
Your branch name should be descriptive (for example, refactor-authentication, user-content-cache-key,
make-retina-avatars) so that others can see what is being worked on.
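A minimal sketch of creating a descriptive branch off an up-to-date main; the branch name is a hypothetical example:
git checkout main
git pull origin main
git checkout -b refactor-authentication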

Add commits

Once your branch has been created, it's time to start making changes. Whenever you add, edit, or delete a file, you make a commit and add it to your branch.
This process of adding commits keeps track of your progress as you work on a feature branch.
Commits also create a transparent history of your work that others can follow to understand what you've
done and why.
Each commit has an associated commit message, which explains why a particular change was made.
Furthermore, each commit is considered a separate unit of change. It lets you roll back changes if a bug is
found or you decide to head in a different direction.
Commit messages are essential, especially since Git tracks your changes and then displays them as
commits once they're pushed to the server.
By writing clear commit messages, you can make it easier for other people to follow along and provide
feedback.
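A minimal sketch of staging a change and committing it with a clear message; the file and branch names are hypothetical examples continuing the sketch above:
git add src/Services/AuthService.cs
git commit -m "Validate tokens before creating a session"
git push origin refactor-authentication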

Open a pull request

Pull Requests start discussion about your commits. Because they're tightly integrated with the underlying
Git repository, anyone can see exactly what changes would be merged if they accept your request.
You can open a Pull Request at any point during the development process when:
●● You've little or no code but want to share some screenshots or general ideas.
●● You're stuck and need help or advice.
●● You're ready for someone to review your work.
Using the @mention system in your Pull Request message, you can ask for feedback from specific people
or teams, whether they're down the hall or 10 time zones away.
Pull Requests help contribute to projects and for managing changes to shared repositories.
If you're using a Fork & Pull Model, Pull Requests provide a way to notify project maintainers about the
changes you'd like them to consider.
If you're using a Shared Repository Model, Pull Requests help start code review and conversation about
proposed changes before they're merged into the main branch.

Discuss and review your code

Once a Pull Request has been opened, the person or team reviewing your changes may have questions or
comments.
Perhaps the coding style doesn't match project guidelines, the change is missing unit tests, or maybe
everything looks excellent, and props are in order.
Pull Requests are designed to encourage and capture this type of conversation.
You can also continue to push to your branch in light of discussion and feedback about your commits.
If someone comments that you forgot to do something, or there's a bug in the code, you can fix it in your branch and push up the change.
Git will show your new commits and any other feedback you may receive in the unified Pull Request view.
Pull Request comments are written in Markdown, so you can embed images and emoji, use pre-formatted text blocks, and other lightweight formatting.

Deploy

With Git, you can deploy from a branch for final testing in an environment before merging to main.
Once your pull request has been reviewed and the branch passes your tests, you can deploy your changes to verify them. If your branch causes issues, you can roll it back by deploying the existing main.

Merge

Now that your changes have been verified, it's time to merge your code into the main branch.
Once merged, Pull Requests preserve a record of the historical changes to your code. Because they're
searchable, they let anyone go back in time to understand why and how a decision was made.
By incorporating specific keywords into the text of your Pull Request, you can associate issues with code.
When your Pull Request is merged, the related issues can also close.
This workflow helps organize and track branches that are focused on business domain feature sets.
Other Git workflows like the Git Forking Workflow and the Gitflow Workflow are repo-focused and can
use the Git Feature Branch Workflow to manage their branching models.

Explore Git branch model for continuous delivery
The purpose of writing code is to ship enhancements to your software.
A branching model that introduces too much process overhead doesn't help increase the speed of getting changes to customers. The goal is a branching model that gives you enough padding not to ship poor-quality changes, but at the same time doesn't introduce so much process that it slows you down.
The internet is full of branching strategies for Git; while there's no right or wrong, a perfect branching strategy is one that works for your team!
You'll learn to always use the combination of feature branches and pull requests to keep a ready-to-ship main branch.

Getting ready
Let's cover the principles of what we suggest:
●● The main branch:
●● The main branch is the only way to release anything to production.
●● The main branch should always be in a ready-to-release state.
●● Protect the main branch with branch policies.
●● Any changes to the main branch flow through pull requests only.
●● Tag all releases in the main branch with Git tags.

●● The feature branch:


●● Use feature branches for all new features and bug fixes.
●● Use feature flags to manage long-running feature branches.
●● Changes from feature branches to the main only flow through pull requests.
●● Name your feature to reflect its purpose.
List of branches:
bugfix/description
features/feature-name
features/feature-area/feature-name
hotfix/description
users/username/description
users/username/workitem

●● Pull requests:
●● Review and merge code with pull requests.
●● Automate what you inspect and validate as part of pull requests.
●● Track pull request completion duration and set goals to reduce the time it takes.
We'll be using the myWebApp created in the previous exercises. In this recipe, we'll be using three popular tools:
●● Azure CLI6: is a command-line interface for Azure.
●● Azure DevOps CLI7: It's an extension for the Azure CLI for working with Azure DevOps and Azure
DevOps Server. It's designed to seamlessly integrate with Git, CI pipelines, and Agile tools. With the
Azure DevOps CLI, you can contribute to your projects without leaving the command line. CLI runs on
Windows, Linux, and Mac.
●● Git Pull Request Merge Conflict: This open-source extension created by Microsoft DevLabs allows you
to review and resolve pull request merge conflicts on the web. Before a Git pull request can complete,
any conflicts with the target branch must be resolved. With this extension, you can resolve these
conflicts on the web as part of the pull request merge instead of doing the merge and resolving
conflicts in a local clone.
The Azure DevOps CLI supports returning the query results in JSON, JSONC, YAML, YAMLC, table, TSV,
and none. You can configure your preference by using the configure command.

How to do it
1. After you've cloned the main branch into a local repository, create a new feature branch, myFeature-1:
myWebApp >
git checkout -b feature/myFeature-1

Output:
Switched to a new branch ‘feature/myFeature-1’.

6 https://docs.microsoft.com/cli/azure/install-azure-cli
7 https://docs.microsoft.com/azure/devops/cli

2. Run the git branch command to see all the branches. The branch showing up with an asterisk in the output is the “currently-checked-out” branch:
myWebApp >
git branch

Output:
* feature/myFeature-1
main
3. Make a change to the Program.cs file in the feature/myFeature-1 branch:
myWebApp >
notepad Program.cs

4. Stage your changes and commit locally, then publish your branch to remote:
myWebApp >
git status

Output:
On branch feature/myFeature-1 Changes not staged for commit: (use “git add <file>...” to update what will be committed) (use “git checkout -- <file>...” to discard changes in working directory) modified: Program.cs
myWebApp >
git add .
git commit -m "Feature 1 added to Program.cs"

Output:
[feature/myFeature-1 70f67b2] feature 1 added to program.cs 1 file changed, 1 insertion(+).
myWebApp >
git push -u origin feature/myFeature-1

Output:
Delta compression using up to 8 threads. Compressing objects: 100% (3/3), done. Writing objects: 100%
(3/3), 348 bytes | 348.00 KiB/s, done. Total 3 (delta 2), reused 0 (delta 0) remote: Analyzing objects…
(3/3) (10 ms) remote: Storing packfile… done (44 ms) remote: Storing index… done (62 ms) To https://
dev.azure.com/Geeks/PartsUnlimited/_git/MyWebApp * [new branch] feature/myFeature-1 -> feature/
myFeature-1 Branch feature/myFeature-1 set up to track remote branch feature/myFeature-1 from
origin.
The remote shows the history of the changes:

5. Configure Azure DevOps CLI for your organization and project. Replace organization and project
name:
az devops configure --defaults organization=https://dev.azure.com/organization project="project
name"

6. Create a new pull request (using the Azure DevOps CLI) to review the changes in the feature-1 branch:
az repos pr create --title "Review Feature-1 before merging to main" --work-items 38 39 `
--description "#Merge feature-1 to main" `
--source-branch feature/myFeature-1 --target-branch main `
--repository myWebApp --open

Use the --open switch when raising the pull request to open the pull request in a web browser after it has been created. The --delete-source-branch switch can be used to delete the branch after the pull request is complete. Also, consider using --auto-complete to complete the pull request automatically when all policies have passed and the source branch can be merged into the target branch.
Note: For more information about the az repos pr create parameters, see Create a pull request to review and merge code8.
The team jointly reviews the code changes and approves the pull request:

8 https://docs.microsoft.com/azure/devops/repos/git/pull-requests

Main is now ready to release. The team tags the main branch with the release number:

7. Start work on Feature 2. Create a branch on remote from the main branch and do the checkout
locally:
myWebApp >
git push origin origin:refs/heads/feature/myFeature-2

Output:
Total 0 (delta 0), reused 0 (delta 0) To https://dev.azure.com/Geeks/PartsUnlimited/_git/MyWebApp *
[new branch] origin/HEAD -> refs/heads/feature/myFeature-2.
myWebApp >
git checkout feature/myFeature-2

Output:
Switched to a new branch ‘feature/myFeature-2’ Branch feature/myFeature-2 set up to track remote
branch feature/myFeature-2 from origin.
8. Modify Program.cs by changing the same comment line in the code changed in feature-1.
public class Program
{
    // Editing the same line (file from feature-2 branch)
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}

9. Commit the changes locally, push them to the remote repository, and then raise a pull request to
merge the changes from feature/myFeature-2 to the main branch:
az repos pr create --title "Review Feature-2 before merging to main" --work-items 40 42 `
--description "#Merge feature-2 to main" `
--source-branch feature/myFeature-2 --target-branch main `
--repository myWebApp --open

A critical bug is reported in production against the feature-1 release while the feature-2 pull request is still in flight. To investigate the issue, you need to debug against the version of code currently deployed in production, so create a new fof branch using the release_feature1 tag:
myWebApp >
git checkout -b fof/bug-1 release_feature1

Output:
Switched to a new branch ‘fof/bug-1’.
10. Modify Program.cs by changing the same line of code that was changed in the feature-1 release:
public class Program
{
    // Editing the same line (file from feature-FOF branch)
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}

11. Stage and commit the changes locally, then push changes to the remote repository:
myWebApp >
git add .
git commit -m "Adding FOF changes."
git push -u origin fof/bug-1

Output:
To https://dev.azure.com/Geeks/PartsUnlimited/_git/MyWebApp * [new branch] fof/bug-1 -> fof/bug-1
Branch fof/bug-1 set up to track remote branch fof/bug-1 from origin.
12. Immediately after the changes have been rolled out to production, tag the fof/bug-1 branch with the release_bug-1 tag, then raise a pull request to merge the changes from fof/bug-1 back into main:
az repos pr create --title "Review Bug-1 before merging to main" --work-items 100 `
--description "#Merge Bug-1 to main" `
--source-branch fof/bug-1 --target-branch main `
--repository myWebApp --open

As part of the pull request, the branch is deleted. However, you can still reference the entire history
using the tag.
With the critical bug fix out of the way, let's go back to the review of the feature-2 pull request.
The branches page makes it clear that the feature/myFeature-2 branch is one change ahead of main and two changes behind main:

If you tried to approve the pull request, you'd see an error message informing you of a merge conflict:

13. The Git Pull Request Merge Conflict resolution extension makes it possible to resolve merge conflicts
right in the browser. Navigate to the conflicts tab and click on Program.cs to resolve the merge
conflicts:

The user interface allows you to take the source version, take the target version, add custom changes, review, and submit the merge. With the changes merged, the pull request is completed.

How it works
We learned how the Git branching model gives you the flexibility to work on features in parallel by
creating a branch for each feature.
The pull request workflow allows you to review code changes using the branch policies.
Git tags are a great way to record milestones, such as the version of code released; tags give you a way to
create branches from tags.
We created a branch from a previous release tag to fix a critical bug in production.
The branches view in the web portal makes it easy to identify branches ahead of main. It also forces a merge conflict if any ongoing pull request tries to merge to main without resolving the merge conflicts.
A lean branching model allows you to create short-lived branches and push quality changes to production faster.

Explore GitHub flow


GitHub is the best tool to enable collaboration in your projects. GitHub flow is a branch-based workflow
suggested for GitHub.

Note: To implement GitHub flow, you'll need a GitHub account and a repository. See “Signing up for
GitHub9” and "Create a repo10."
Tip: You can complete all steps of GitHub flow through the GitHub web interface, command line, GitHub
CLI11, or GitHub Desktop12.
The first step is to create a branch in your repository so you can work without affecting the default branch, and give collaborators a chance to review your work.
For more information, see “Creating and deleting branches within your repository13.”

Make any desired changes to the repository. If you make a mistake, you can revert or push extra changes
to fix it.
Commit and push your changes to your branch to back up your work to remote storage, giving each
commit a descriptive message. Each commit should contain an isolated, complete change making it easy
to revert if you take a different approach.
Anyone collaborating with your project can see your work, answer questions, and make suggestions or
contributions. Continue to create, commit, and push changes to your branch until you're ready to ask for
feedback.
Tip: You can make a separate branch for each change to make it easy for reviewers to give feedback or
for you to understand the differences.
Once you're ready, you can create a pull request to ask collaborators for feedback on your changes. See
“Creating a pull request14.”
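For example, the GitHub CLI can raise a pull request from the current branch. A minimal sketch, with a hypothetical title and issue number:
gh pr create --title "Refactor authentication" --body "Resolves #42" --draft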
Pull request review is one of the most valuable features of collaboration. You can require approval from
your peers and team before merging changes. Also, you can mark it as a draft in case you want early
feedback or advice before you complete your changes.

9 https://docs.github.com/en/github/getting-started-with-github/signing-up-for-github
10 https://docs.github.com/en/github/getting-started-with-github/create-a-repo
11 https://cli.github.com/
12 https://docs.github.com/en/free-pro-team@latest/desktop
13 https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository
14 https://docs.github.com/en/articles/creating-a-pull-request

Describe the pull request as thoroughly as possible: the suggested changes and what problem you're resolving. You can add images, links, related issues, or any information to document your change and help reviewers understand the PR without opening each file. See “Basic writing and formatting syntax15” and “Linking a pull request to an issue16.”

Another way to improve PR quality and documentation, and explicitly point something out to the reviewers, is to use the comment section. Also, you can @mention or request a review from specific people or teams.

15 https://docs.github.com/en/github/writing-on-github/basic-writing-and-formatting-syntax
16 https://docs.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue

There are other pull request configurations, such as automatically requesting a review from specific teams or users when a pull request is created, or checks that run on pull requests. For more information, see “About status checks17” and “About protected branches18.”
After the reviewers' comments and checks validation, the changes should be ready to be merged, and the reviewers can approve the Pull Request. See “Merging a pull request19.”
If you have any conflicts, GitHub will prompt you to resolve them. See “Addressing merge conflicts20.”

After a successful pull request merges, there's no need for the remote branch to stay there. You can
delete your branch to prevent others from accidentally using old branches. For more information, see
“Deleting and restoring branches in a pull request21.”
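With GitHub CLI, a hedged sketch of completing this step looks like the following; the squash strategy is one possible choice, not a requirement:
gh pr merge --squash --delete-branch    # merge the pull request and delete the remote branch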
Note: GitHub keeps the commit and merge history if you need to restore or revert your pull request.

Explore fork workflow


The forking workflow is fundamentally different from other popular Git workflows.
Instead of using a single server-side repository to act as the “central” codebase, it gives every developer their own server-side repository.
It means that each contributor has two Git repositories:
●● A private local repository.

17 https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/about-status-checks
18 https://docs.github.com/en/github/administering-a-repository/about-protected-branches
19 https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/merging-a-pull-
request
20 https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/addressing-merge-conflicts
21 https://docs.github.com/en/github/administering-a-repository/deleting-and-restoring-branches-in-a-pull-request

●● A public server-side repository.
The forking workflow is most often seen in public open-source projects.
The main advantage of the forking workflow is that contributions can be integrated without the need for
everybody to push to a single central repository.
Developers push to their server-side repositories, and only the project maintainer can push to the official
repository.
It allows the maintainer to accept commits from any developer without giving them written access to the
official codebase.
Work done in the forking workflow is typically intended for merging into the original project maintainer's repository.
The result is a distributed workflow that provides a flexible way for large, organic teams (including untrusted third parties) to collaborate securely.
This also makes it an ideal workflow for open-source projects.

How it works
As in the other Git workflows, the forking workflow begins with an official public repository stored on a
server.
But when a new developer wants to start working on the project, they don't directly clone the official
repository.
Instead, they fork the official repository to create a copy of it on the server.
This new copy serves as their personal public repository—no other developers can push to it, but they
can pull changes from it (we'll see why this is necessary in a moment).
After they've created their server-side copy, the developer does a git clone to get a copy of it onto their
local machine.
It serves as their private development environment, just like in the other workflows.
When they're ready to publish a local commit, they push the commit to their public repository—not the
official one.
Then, they file a pull request with the main repository, which lets the project maintainer know that an
update is ready to be integrated.
The pull request also serves as a convenient discussion thread if there are issues with the contributed
code.
The following is a step-by-step example of this workflow:
●● A developer ‘forks’ an 'official' server-side repository. It creates their server-side copy.
●● The new server-side copy is cloned to their local system.
●● A Git remote path for the ‘official’ repository is added to the local clone.
●● A new local feature branch is created.
●● The developer makes changes to the new branch.
●● New commits are created for the changes.
●● The branch gets pushed to the developer's server-side copy.

●● The developer opens a pull request from the new branch to the ‘official’ repository.
●● The pull request gets approved for merge and is merged into the original server-side repository.
To integrate the feature into the official codebase:
●● The maintainer pulls the contributor's changes into their local repository.
●● Checks to make sure it doesn't break the project.
●● Merges it into their local main branch.
●● Pushes the main branch to the official repository on the server.
The contribution is now part of the project, and other developers should pull from the official repository
to synchronize their local repositories.
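A minimal command-line sketch of the contributor's side of these steps, assuming placeholder URLs and the branch name new-feature:
git clone https://example.com/you/project.git        # clone your server-side fork
cd project
git remote add upstream https://example.com/official/project.git
git checkout -b new-feature                          # create a local feature branch
git commit -a -m "Add new feature"                   # commit the changes
git push origin new-feature                          # push the branch to your fork
Then open a pull request from new-feature into the 'official' repository.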
It's essential to understand that the notion of an “official” repository in the forking workflow is merely a convention.
The only thing that makes the official repository official is that it's the repository of the project maintainer.

Forking vs. cloning


It's essential to note that “forked” repositories and "forking" aren't special operations.
Forked repositories are created using the standard git clone command. Forked repositories are generally
“server-side clones” managed and hosted by a Git service provider such as Azure Repos.
There's no unique Git command to create forked repositories.
A clone operation is essentially a copy of a repository and its history.

Summary
This module explored Git branching types, concepts, and models for the continuous delivery process. It can help companies define their branching strategy and organization.
You learned how to describe the benefits and usage of:
●● Describe Git branching workflows.
●● Implement feature branches.
●● Fork a repo.

Learn more
●● Git branching guidance - Azure Repos | Microsoft Docs22.
●● Create a new Git branch from the web - Azure Repos | Microsoft Docs23.
●● How Microsoft develops modern software with DevOps - Azure DevOps | Microsoft Docs24.
●● Fork your repository - Azure Repos | Microsoft Docs25.

22 https://docs.microsoft.com/azure/devops/repos/git/git-branching-guidance
23 https://docs.microsoft.com/azure/devops/repos/git/create-branch
24 https://docs.microsoft.com/devops/develop/how-microsoft-develops-devops
25 https://docs.microsoft.com/azure/devops/repos/git/forks

Collaborate with pull requests in Azure Repos


Introduction
This module presents pull requests for collaboration and code reviews using Azure DevOps and GitHub
mobile for pull request approvals.
It helps understand how pull requests work and how to configure them.

Learning objectives
After completing this module, students and professionals can:
●● Use pull requests for collaboration and code reviews.
●● Give feedback using pull requests.
●● Configure branch policies.
●● Use GitHub mobile for pull request approvals.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Collaborate with pull requests


Pull requests let you tell others about changes you've pushed to a GitHub repository.
Once a pull request is sent, interested parties can review the set of changes, discuss potential modifica-
tions, and even push follow-up commits if necessary.
Pull requests are commonly used by teams and organizations collaborating using the Shared Repository
Model.
Everyone shares a single repository, and topic branches are used to develop features and isolate changes.
Many open-source projects on GitHub use pull requests to manage changes from contributors.
They provide a way to notify project maintainers about changes one has made and to start code review and general discussion about a set of changes before they're merged into the main branch.
Pull requests combine the review and merge of your code into a single collaborative process.
Once you're done fixing a bug or new feature in a branch, create a new pull request.
Add the team members to the pull request so they can review and vote on your changes.
Use pull requests to review works in progress and get early feedback on changes.
There's no commitment to merge the changes as the owner can abandon the pull request at any time.

Get your code reviewed


The code review done in a pull request isn't just to find bugs; that's what your tests are for.
A good code review catches less obvious problems that could lead to costly issues later.
Code reviews help protect your team from bad merges and broken builds that sap your team's productiv-
ity.
The review catches these problems before the merge, protecting your essential branches from unwanted
changes.
Cross-pollinate expertise and spread problem-solving strategies by using a wide range of reviewers in
your code reviews.
Diffusing skills and knowledge makes your team more robust and more resilient.

Give great feedback


High-quality reviews start with high-quality feedback. The keys to great feedback in a pull request are:
●● Have the right people review the pull request.
●● Make sure that reviewers know what the code does.
●● Give actionable, constructive feedback.
●● Reply to comments promptly.
When assigning reviewers to your pull request, make sure you select the right set of reviewers.
You want reviewers who know how your code works, but also try to include developers working in other areas so they can share their ideas.
Help them by providing a clear description of your changes and a build of your code with your fix or feature running in it.
Reviewers should try to provide feedback on changes they disagree with. Identify the issue and give a
specific suggestion on what you would do differently.
This feedback has clear intent and is easy for the owner of the pull request to understand.
The pull request owner should reply to the comments, accept the suggestion, or explain why the suggest-
ed change isn't ideal.
Sometimes a suggestion is good, but the changes are outside the scope of the pull request.

Take these suggestions and create new work items and feature branches separate from the pull request
to make those changes.

Protect branches with policies


There are a few critical branches in your repo that the team relies on to always be in good shape, such as your main branch.
Require pull requests to make any changes on these branches. Developers pushing changes directly to
the protected branches will have their pushes rejected.
Add more conditions to your pull requests to enforce a higher level of code quality in your key branches.
A clean build of the merged code and approval from multiple reviewers are extra requirements you can
set to protect your key branches.

Exercise Azure Repos collaborating with pull requests


Code issues that are found sooner are both easier and cheaper to fix. So development teams strive to
push code quality checks as far left into the development process as possible.
As the name suggests, branch policies give you a set of out-of-the-box policies that can be applied to the
branches on the server.
Any changes being pushed to the server branches need to follow these policies before the changes can
be accepted.
Policies are a great way to enforce your team's code quality and change-management standards. In this
recipe, you'll learn how to configure branch policies on your main branch.

Getting ready
The out-of-the-box branch policies include several policies, such as build validation and enforcing a
merge strategy. We'll only focus on the branch policies needed to set up a code-review workflow in this
recipe.

How to do it
1. Open the branches view for the myWebApp Git repository in the parts-unlimited team portal. Select
the main branch, and from the pull-down, context menu choose Branch policies:

2. The policies view presents the out-of-the-box policies. Set the minimum number of reviewers to 1:

The Allow requestors to approve their own changes option allows the submitter to self-approve their
changes.
It's OK for mature teams, where branch policies are used as a reminder for the checks that need to be
performed by the individual.
3. Use the comment-resolution policy with the review policy. It allows you to enforce that code review comments are resolved before the changes are accepted. The requester can take the feedback from a comment, create a new work item, and resolve the changes. It at least guarantees that code review comments aren't lost with the acceptance of the code into the main branch:

4. A requirement instigates a code change in the team project. If the work item that triggered the work isn't linked to the change, it becomes hard to understand why the change was made over time. It's especially useful when reviewing the history of changes. Configure the Check for linked work items policy to block changes that don't have a work item linked to them:

5. Select the option to automatically include reviewers when a pull request is raised. You can map which reviewers are added based on the area of the code being changed:

How it works
With the branch policies in place, the main branch is now fully protected.
The only way to push changes to the main branch is by first making the changes in another branch and
then raising a pull request to trigger the change-acceptance workflow.
Choose to create a new branch from one of the existing user stories in the work item hub.
By creating a new branch from a work item, that work item automatically gets linked to the branch.
You can optionally include more than one work item with a branch as part of the create workflow:

Include a prefix in the branch name when creating the branch (for example, a folder name followed by a slash) to make a folder for the branch to go in. In the preceding example, the branch will go in that folder. It's a great way to organize branches in busy environments.
With the newly created branch selected in the web portal, edit the HomeController.cs file to include the
following code snippet and commit the changes to the branch.
In the image below, you'll see that you can directly commit the changes after editing the file by clicking
the commit button.
The file path control in the team portal supports search. Start typing the file path, and all files in your Git repository whose paths begin with those letters show up in the file path search results dropdown.

The code editor in the web portal has several new features in Azure DevOps Server, such as support for bracket matching and toggling white space.
You can also load the command palette. Among many other new options, you can now toggle the file mini-map, collapse and expand sections, and perform other standard operations.
To push these changes from the new branch into the main branch, create a pull request from the pull
request view.
Select the new branch as the source and the main as the target branch.
The new pull request form supports markdown, so you can add the description using the markdown
syntax.
The description window also supports @ mentions and # to link work items:

The pull request is created; the overview page summarizes the changes and the status of the policies.
The Files tab shows you a list of changes and the difference between the previous and the current
versions.
Any updates pushed to the code files will show up in the Updates tab, and a list of all the commits is
shown under the Commits tab:

Open the Files tab: this view supports code comments at the line level, file level, and overall.

The comments support both @ for mentions and # to link work items, and the text supports markdown
syntax:
The code comments are persisted in the pull request workflow; the code comments support multiple
iterations of reviews and work well with nested responses.
The reviewer policy allows for a code review workflow as part of the change acceptance.
It's an excellent way for the team to collaborate on any code changes pushed into the main branch.
When the required number of reviewers approves the pull request, it can be completed.
You can also mark the pull request to autocomplete after your review. It autocompletes the pull request once all the policies have been successfully satisfied.

There's more
Have you ever been in a state where a branch has been accidentally deleted? It can be hard to figure out what happened.
Azure DevOps Server now supports searching for deleted branches. It helps you understand who deleted
it and when. The interface also allows you to recreate the branch.
Deleted branches are only shown if you search for them by their exact name to cut out the noise from the
search results.
To search for a deleted branch, enter the full branch name into the branch search box. It will return any
existing branches that match that text.
You'll also see an option to search for an exact match in the list of deleted branches.
If a match is found, you'll see who deleted it and when. You can also restore the branch. Restoring the branch will re-create it at the commit to which it last pointed.
However, it won't restore policies and permissions.

Examine GitHub mobile for pull request approvals



Using a mobile app in combination with Git is a convenient option, particularly when urgent pull request
approvals are required.
●● The app can render markdown, images, and PDF files directly on the mobile device.
●● Pull requests can be managed within the app, along with marking files as viewed and collapsing files.
●● Comments can be added.
●● Emoji short codes are rendered.

Summary
This module presented pull requests for collaboration and code reviews using Azure DevOps and GitHub mobile for pull request approvals.
It helped you understand how pull requests work and how to configure them.
You learned how to describe the benefits and usage of:
●● Use pull requests for collaboration and code reviews.
●● Give feedback using pull requests.
●● Configure branch policies.
●● Use GitHub mobile for pull request approvals.

Learn more
●● About pull requests and permissions - Azure Repos | Microsoft Docs26.

26 https://docs.microsoft.com/azure/devops/repos/git/about-pull-requests

●● Create a pull request to review and merge code - Azure Repos | Microsoft Docs27.
●● Review and comment on pull requests - Azure Repos | Microsoft Docs28.
●● Protect your Git branches with policies - Azure Repos | Microsoft Docs29.
●● Creating an issue or pull request - GitHub Docs30.

27 https://docs.microsoft.com/azure/devops/repos/git/pull-requests
28 https://docs.microsoft.com/azure/devops/repos/git/review-pull-requests
29 https://docs.microsoft.com/azure/devops/repos/git/branch-policies
30 https://docs.github.com/desktop/contributing-and-collaborating-using-github-desktop/working-with-your-remote-repository-on-github-
or-github-enterprise/creating-an-issue-or-pull-request

Explore Git hooks


Introduction
This module describes Git hooks and their usage during the development process, implementation, and
behavior.

Learning objectives
After completing this module, students and professionals can:
●● Understand Git hooks.
●● Identify when to use Git hooks.
●● Implement Git hooks for automation.
●● Explain Git hooks' behavior.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

Introduction to Git hooks


Continuous delivery demands a significant level of automation. You can't be continuously delivering if you don't have a quality codebase. It's where Git fares so well.
It lets you automate most of the checks in your codebase, even before committing the code into your local repository, let alone the remote.

Git hooks
Git hooks are a mechanism that allows code to be run before or after certain Git lifecycle events.
For example, one could hook into the commit-msg event to validate that the commit message structure
follows the recommended format.
The hooks can be any executable code, including shell, PowerShell, Python, or other scripts. Or they may
be a binary executable. Anything goes!
The only criteria are that hooks must be stored in the .git/hooks folder in the repo root. Also, they must
be named to match the related events (Git 2.x):
●● applypatch-msg
●● pre-applypatch
●● post-applypatch
●● pre-commit
●● prepare-commit-msg
●● commit-msg

●● post-commit
●● pre-rebase
●● post-checkout
●● post-merge
●● pre-receive
●● update
●● post-receive
●● post-update
●● pre-auto-gc
●● post-rewrite
●● pre-push
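As an example of hooking the commit-msg event mentioned above, here's a minimal sketch of a hook that rejects commit messages shorter than ten characters (the threshold and message wording are illustrative assumptions, not a Git convention):
#!/bin/sh
# .git/hooks/commit-msg - Git passes the path of the commit message file as $1
message=$(cat "$1")
if [ ${#message} -lt 10 ]; then
echo "Error: commit message is too short; describe the change."
exit 1   # any non-zero exit code aborts the commit
fi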

Practical use cases for using Git hooks


Since Git hooks execute the scripts on the specific event type they're called on, you can do pretty much anything with Git hooks.
Some examples of where you can use hooks to enforce policies, ensure consistency, and control your environment:
●● Enforcing preconditions for merging
●● Verifying work item ID association in your commit message
●● Preventing you and your team from committing faulty code
●● Sending notifications to your team's chat room (Teams, Slack, HipChat, etc.)
In the next unit, you will see how to implement Git Hooks.

Implement Git hooks


When pushing quality left into the development process, you want developers to identify and catch code quality issues while developing code locally, even before raising the pull request that triggers the branch policies.
Git hooks allow you to run custom scripts whenever certain important events occur in the Git life cycle—
for example, committing, merging, and pushing.
Git ships with several sample hook scripts in the repo .git\hooks directory. Git executes a script's contents when the corresponding event occurs. You can do practically anything with Git hooks.
Here are a few instances of where you can use Git hooks to enforce policies, ensure consistency, and control your environment:
●● Enforcing preconditions for merging
●● Verifying work item ID association in your commit message
●● Preventing you and your team from committing faulty code
●● Sending notifications to your team's chat room (Teams, Slack, HipChat)
In this recipe, we'll use the pre-commit Git hook to scan the commit for keywords from a predefined list
to block the commit if it contains any of these keywords.

Getting ready
Let's start by exploring client-side Git hooks. Navigate to the repo .git\hooks directory – you'll find that
there are a bunch of samples, but they're disabled by default.

Note: If you open that folder, you'll find a file called pre-commit.sample. To enable it, rename it to pre-commit by removing the .sample extension and making the script executable.
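On Linux or macOS, a minimal sketch of enabling the sample hook looks like this:
mv .git/hooks/pre-commit.sample .git/hooks/pre-commit   # drop the .sample extension
chmod +x .git/hooks/pre-commit                          # make the script executable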
The script is found and executed when you attempt to commit using git commit. You commit successfully
if your pre-commit script exits with a 0 (zero). Otherwise, the commit fails. If you're using Windows,
simply renaming the file won't work.
Git will fail to find the shell in the chosen path specified in the script.
The problem is lurking in the first line of the script, the shebang declaration:
#!/bin/sh

On Unix-like OSs, the #! tells the program loader that it's a script to be interpreted, and /bin/sh is the path to the interpreter you want to use, sh in this case.
Windows isn't a Unix-like OS. Git for Windows supports Bash commands and shell scripts via Cygwin.
By default, what does it find when it looks for sh.exe at /bin/sh?
Nothing, nothing at all. Fix it by providing the path to the sh executable on your system. If you're using the 64-bit version of Git for Windows, the shebang line looks like this:
#!C:/Program\ Files/Git/usr/bin/sh.exe

How to do it
How could Git hooks stop you from accidentally leaking Amazon AWS access keys to GitHub?
You can invoke a script at pre-commit that uses Git hooks to scan the increment of code being committed into your local repository for specific keywords:
Replace the code in this pre-commit shell file with the following code.

#!C:/Program\ Files/Git/usr/bin/sh.exe
matches=$(git diff-index --patch HEAD | grep '^+' | grep -Pi 'password|keyword2|keyword3')
if [ ! -z "$matches" ]
then
cat <<\EOT
Error: Words from the blocked list were present in the diff:
EOT
echo $matches
exit 1
fi

You don't have to build the complete keyword scan list in this script.
You can keep it in a different file and refer to it here, encrypting or scrambling that file if you want to.

How it works
In the script, git diff-index identifies the code increment being committed. This increment is then compared against the list of specified keywords. If any matches are found, an error is raised to block the commit, and the script returns an error message with the list of matches. Because the pre-commit script doesn't return 0 (zero), the commit fails.

There's more
The repo .git\hooks folder isn't committed into source control. You may wonder how you share the goodness of the automated scripts you create with the team.
The good news is that, from Git version 2.9, you can now map Git hooks to a folder that can be committed into source control.
You could do that by updating the global settings configuration for your Git repository:
git config --global core.hooksPath '~/.githooks'

If you ever need to bypass the Git hooks you have set up on the client-side, you can do so by using the no-verify switch:
git commit --no-verify

Server-side service hooks with Azure Repos


So far, we've looked at the client-side Git hooks on Windows. Azure Repos also exposes server-side hooks. Azure DevOps uses the same mechanism itself to create pull requests. You can read more about it at the Server hooks event reference31.

Summary
This module described Git hooks and their usage during the development process, implementation, and
behavior.

31 https://docs.microsoft.com/azure/devops/service-hooks/events

You learned how to describe the benefits and usage of:


●● Understand Git hooks.
●● Identify when to use Git hooks.
●● Implement Git hooks for automation.
●● Explain Git hooks' behavior.

Learn more
●● Creating a pre-receive hook script - GitHub Docs32.
●● Service hooks event reference - Azure DevOps | Microsoft Docs33.

32 https://docs.github.com/enterprise-server@3.1/admin/policies/enforcing-policy-with-pre-receive-hooks/creating-a-pre-receive-hook-
script
33 https://docs.microsoft.com/azure/devops/service-hooks/events

Plan fostering inner source


Introduction
This module explains how to use Git to foster inner source and how to implement forks and their workflows.

Learning objectives
After completing this module, students and professionals can:
●● Use Git to foster inner source across the organization.
●● Implement fork workflow.
●● Choose between branches and forks.
●● Share code between forks.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore foster inner source


The fork-based pull request workflow is popular with open-source projects because it allows anybody to
contribute to a project.
You don't need to be an existing contributor or write access to a project to offer your changes.
This workflow isn't just for open source: forks also help support inner source workflows within your
company.
Before forks, you could contribute to a project using pull requests.
The workflow is simple enough: push a new branch up to your repository, open a pull request to get a
code review from your team, and have Azure Repos evaluate your branch policies.
You can click one button to merge your pull request into main and deploy when your code is approved.
This workflow is great for working on your projects with your team. But what if you notice a simple bug in
a different project within your company and you want to fix it yourself?
What if you're going to add a feature to a project that you use, but another team develops?
It's where forks come in; forks are at the heart of inner source practices.

Inner source
Inner source – sometimes called “internal open source” – brings all the benefits of open-source software
development inside your firewall.
It opens your software development processes so that your developers can easily collaborate on projects
across your company.
It uses the same processes that are popular throughout the open-source software communities.

But it keeps your code safe and secure within your organization.
Microsoft uses the inner source approach heavily.
As part of the efforts to standardize a one-engineering system throughout the company – backed by
Azure Repos – Microsoft has also opened the source code to all our projects to everyone within the
company.
Before the move to the inner source, Microsoft was “siloed”: only engineers working on Windows could
read the Windows source code.
Only developers working on Office could look at the Office source code.
So, if you were an engineer working on Visual Studio and you thought that you found a bug in Windows or Office – or wanted to add a new feature – you were out of luck.
But by moving to offer inner sources throughout the company, powered by Azure Repos, it's easy to fork
a repository to contribute back.
As an individual making the change, you don't need to write access to the original repository, just the
ability to read it and create a fork.

Implement the fork workflow


A fork is a copy of a repository. Forking a repository allows you to freely experiment with changes without affecting the original project.
Most commonly, forks are used to propose changes to someone else's project. Or use someone else's
project as a starting point for your idea.
A fork is a complete copy of a repository, including all files, commits, and (optionally) branches.
Forks are a great way to support an Inner Source workflow: you can create a fork to suggest changes
when you don't have permission to write to the original project directly.
Once you're ready to share those changes, it's easy to contribute them back-using pull requests.

What's in a fork?
A fork starts with all the contents of its upstream (original) repository.
You can include all branches or limit them to only the default branch when you create a fork.
None of the permissions, policies, or build pipelines are applied.
The new fork acts as if someone cloned the original repository, then pushed it to a new, empty reposito-
ry.
After a fork has been created, new files, folders, and branches aren't shared between the repositories
unless a Pull Request (PR) carries them along.

Sharing code between forks


You can create PRs in either direction: from fork to upstream or upstream to fork.
The most common approach will be from fork to upstream.
The destination repository's permissions, policies, builds, and work items will apply to the PR.

Choosing between branches and forks


For a small team (2-5 developers), we recommend working in a single repo.
Everyone should work in a topic branch, and the main should be protected with branch policies.
As your team grows more significant, you may find yourself outgrowing this arrangement and prefer to
switch to a forking workflow.
We recommend the forking workflow if your repository has many casual or infrequent committers (like an open-source project).
Typically, only core contributors to your project have direct commit rights into your repository.
Ask collaborators from outside this core set of people to work from a fork of the repository.
Also, it will isolate their changes from yours until you've had a chance to vet the work.

The forking workflow


●● Create a fork.
●● Clone it locally.
●● Make your changes locally and push them to a branch.
●● Create and complete a PR to upstream.
●● Sync your fork to the latest from upstream.

Create the Fork


1. Navigate to the repository to fork and choose fork.
2. Specify a name and choose the project where you want the fork to be created. If the repository
contains many topic branches, we recommend you fork only the default branch.
3. Choose the ellipsis, then Fork to create the fork.

Note: You must have the Create Repository permission in your chosen project to create a fork. We
recommend you create a dedicated project for forks where all contributors have the Create Repository
permission. For an example of granting this permission, see Set Git repository permissions.

Clone your fork locally


Once your fork is ready, clone it using the command line or an IDE like Visual Studio. The fork will be your
origin remote.
For convenience, after cloning, you'll want to add the upstream repository (where you forked from) as a
remote named upstream.
git remote add upstream {upstream_url}

Make and push changes


It's possible to work directly in main - after all, this fork is your copy of the repo.
We recommend you still work in a topic branch, though.
It allows you to maintain multiple independent workstreams simultaneously.
Also, it reduces confusion later when you want to sync changes into your fork.
Make and commit your changes as you normally would. When you're done with the changes, push them
to origin (your fork).
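A brief command-line sketch of this step (the branch name and commit message are illustrative placeholders):
git checkout -b my-topic-branch      # work in a topic branch rather than main
git add .
git commit -m "Describe your change"
git push origin my-topic-branch      # push the branch to your fork (origin)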

Create and complete a PR


Open a pull request from your fork to the upstream. All the policies required reviewers and builds will be
applied in the upstream repo. Once all policies are satisfied, the PR can be completed, and the changes
become a permanent part of the upstream repo.

Important: Anyone with the Read permission can open a PR to upstream. If a PR build pipeline is config-
ured, the build will run against the code introduced in the fork.

Sync your fork to the latest


When you've gotten your PR accepted into upstream, you'll want to make sure your fork reflects the
latest state of the repo.
We recommend rebasing on upstream's main branch (assuming main is the main development branch).
git fetch upstream main
git rebase upstream/main
git push origin

The forking workflow lets you isolate changes from the main repository until you're ready to integrate
them. When you're ready, integrating code is as easy as completing a pull request.

Describe inner source with forks


People fork repositories when they want to change the code in a repository that they don't have write
access to.
If you don't have write access, you aren't part of the team contributing to that repository, so why would
you modify the code repository?
In our line of work, we tend to look for technical reasons to improve something.
You may find a better way of implementing the solution or enhancing the functionality by contributing to
or improving an existing feature.
You can fork repositories in the following situations:
●● I want to make a change.
●● I think the project is exciting and may want to use it in the future.
●● I want to use some code in that repository as a starting point for my project.
Software teams are encouraged to contribute to all projects internally, not just their software projects.
Forks are a great way to foster a culture of inner open source.
Forks are a recent addition to the Azure DevOps Git repositories.
This recipe will teach you to fork an existing repository and contribute changes upstream via a pull
request.

Getting ready
A fork starts with all the contents of its upstream (original) repository.
When you create a fork in the Azure DevOps, you can include all branches or limit them to only the
default branch.
A fork doesn't copy the permissions, policies, or build definitions of the repository being forked.
After a fork has been created, the newly created files, folders, and branches aren't shared between the
repositories unless you start a pull request.
Pull requests are supported in either direction: from fork to upstream or upstream to fork.
The most common approach for a pull request will be from fork to upstream.

How to do it
1. Choose the Fork button (1), and then select the project where you want the fork to be created (2).
Give your fork a name and choose the Fork button (3).

2. Once your fork is ready, clone it using the command line or an IDE, such as Visual Studio. The fork will
be your origin remote. For convenience, you'll want to add the upstream repository (where you forked
from) as a remote named upstream. On the command line, type:
git remote add upstream {upstream_url}

3. It's possible to work directly in the main – after all, this fork is your copy of the repo. We recommend
you still work in a topic branch, though. It allows you to maintain multiple independent workstreams
simultaneously. Also, it reduces confusion later when you want to sync changes into your fork. Make
and commit your changes as you normally would. When you're done with the changes, push them to
origin (your fork).

4. Open a pull request from your fork to the upstream. All the policies, required reviewers, and builds will
be applied in the upstream repo. Once all the policies are satisfied, the PR can be completed, and the
changes become a permanent part of the upstream repo:

5. When your PR is accepted into upstream, you'll want to make sure your fork reflects the latest state of
the repo. We recommend rebasing on the upstream's main branch (assuming the main is the main
development branch). On the command line, run:
git fetch upstream main
git rebase upstream/main
git push origin

How it works
The forking workflow lets you isolate changes from the main repository until you're ready to integrate
them.
When you're ready, integrating code is as easy as completing a pull request.
For more information, see:
●● Clone an Existing Git repo34.
●● Azure Repos Git Tutorial35.

Summary
This module explained how to use Git to foster inner source and how to implement forks and their workflows.
You learned how to describe the benefits and usage of:
●● Use Git to foster inner source across the organization.
●● Implement fork workflow.
●● Choose between branches and forks.
●● Share code between forks.

34 https://docs.microsoft.com/azure/devops/repos/git/clone
35 https://docs.microsoft.com/azure/devops/repos/git/gitworkflow

Learn more
●● Fork your repository - Azure Repos | Microsoft Docs36.
●● Clone an existing Git repo - Azure Repos | Microsoft Docs37.

36 https://docs.microsoft.com/azure/devops/repos/git/forks
37 https://docs.microsoft.com/azure/devops/repos/git/clone

Manage Git repositories


Introduction
This module explores how to work with large repositories and purge repository data.

Learning objectives
After completing this module, students and professionals can:
●● Understand large Git repositories.
●● Explain Git Virtual File System (GVFS).
●● Use Git Large File Storage (LFS).
●● Purge repository data.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Work with large repositories


Git is a great version control system, widely adopted and recommended, but a few concerns should be considered and addressed when working with large repositories.
While having a local copy of each repository is how a distributed version control system works, that can be a significant problem when large repositories are in place.
For example, Microsoft discovered this issue when migrating a repository with over 300 GB of data from an internal system to Git.

Why repositories become large


There are two primary causes for large repositories:
●● Long history
●● Large binary files

Shallow clone
If developers don't need all the available history in their local repositories, a good option is to implement
a shallow clone.
It saves both space on local development systems and the time it takes to sync.
You can specify the depth of the clone that you want to execute:
git clone --depth [depth] [clone-url]

You can also reduce clones by filtering branches or cloning only a single branch.
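For example, here are two hedged variations (the URL is a placeholder):
git clone --depth 1 https://example.com/org/big-repo.git
git clone --single-branch --branch main https://example.com/org/big-repo.git
The first truncates history to the most recent commit; the second downloads only the main branch.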

VFS for Git


VFS for Git helps with large repositories. It requires a Git LFS client.
Typical Git commands are unaffected, but Git LFS works with the standard filesystem to download necessary files in the background when you need files from the server.
The Git LFS client was released as open-source. The protocol is a straightforward one, with four endpoints similar to REST endpoints.
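As a minimal sketch of using the Git LFS client (the *.psd pattern and filename are illustrative assumptions):
git lfs install               # set up Git LFS for your user account
git lfs track "*.psd"         # keep matching binaries in LFS instead of the repo history
git add .gitattributes        # the tracking rule is recorded in .gitattributes
git add design.psd
git commit -m "Add design file via Git LFS"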
For more information on large repositories, see: Working with large files38 and Virtual File System for
Git: Enable Git at Enterprise Scale39.

Scalar

Scalar is a .NET Core application available for Windows and macOS. It provides tools and extensions for Git that allow very large repositories to maximize your Git command performance. Microsoft uses it for the Windows and Office repositories.
If Azure Repos hosts your repository, you can clone a repository using the GVFS protocol40.
It achieves this by enabling some advanced Git features, such as:
●● Partial clone: reduces time to get a working repository by not downloading all Git objects right away.
●● Background prefetch: downloads Git object data from all remotes every hour, reducing the time for
foreground git fetch calls.
●● Sparse-checkout: limits the size of your working directory.
●● File system monitor: tracks the recently modified files and eliminates the need for Git to scan the
entire work tree.
●● Commit-graph: accelerates commit walks and reachability calculations, speeding up commands like
git log.
●● Multi-pack-index: enables fast object lookups across many pack files.

38 https://docs.github.com/repositories/working-with-files/managing-large-files
39 https://github.com/microsoft/VFSForGit
40 https://github.com/microsoft/VFSForGit/blob/master/Protocol.md#the-gvfs-protocol-v1

●● Incremental repack: Repacks the packed Git data into fewer pack files without disrupting concurrent
commands using the multi-pack-index.
Note: We update the list of features that Scalar automatically configures as a new Git version is released.
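As a minimal sketch, assuming Scalar is installed and the URL is a placeholder, you can clone with Scalar or enroll an existing clone:
scalar clone https://dev.azure.com/fabrikam/contoso/_git/big-repo   # partial clone with Scalar's defaults
scalar register                                                     # run inside an existing repository to enroll it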
For more information, see:
●● microsoft/scalar: Scalar41.
●● Introducing Scalar: Git at scale for everyone42.

Purge repository data


While one of the benefits of Git is its ability to hold long histories for repositories efficiently, there are
times when you need to purge data.
The most common situations are where you want to:
●● Significantly reduce the size of a repository by removing history.
●● Remove a large file that was accidentally uploaded.
●● Remove a sensitive file that shouldn't have been uploaded.
If you commit sensitive data (for example, password, key) to Git, it can be removed from history. Two
tools are commonly used:

git filter-repo tool


The git filter-repo tool rewrites repository history.
Its core, filter-repo, contains a library for creating history rewriting tools. Users with specialized needs can quickly create entirely new history rewriting tools.
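For example, a hedged sketch of removing a file from the entire history (the filename is a placeholder):
git filter-repo --invert-paths --path passwords.txt   # rewrite history, dropping passwords.txt everywhere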
Note: More details are in the repository git-filter-repo43.

BFG Repo-Cleaner
BFG Repo-Cleaner is a commonly used open-source tool for deleting or “fixing” content in repositories.
It's easier to use than the git filter-branch command. For a single file or set of files, use the --delete-files option:
$ bfg --delete-files file_I_should_not_have_committed

The following shows how to replace, throughout the repository's history, all occurrences of the text listed in a file called passwords.txt by executing the --replace-text option:
$ bfg --replace-text passwords.txt

For more information, see:
●● Quickly rewrite git repository history44.

41 https://github.com/microsoft/scalar/
42 https://devblogs.microsoft.com/devops/introducing-scalar/
43 https://github.com/newren/git-filter-repo
44 https://github.com/newren/git-filter-repo/

●● Removing files from Git Large File Storage45.
●● Removing sensitive data from a repository46.
●● BFG Repo-Cleaner47.

Manage releases with GitHub Repos


Part of the release process starts with your version control. You'll understand how to manage releases in
the repository using GitHub.
In the following modules, you'll see details about deploying a piece of software after packaging your
code, binary files, release notes, and related tasks.
Releases in GitHub are based on Git tags48. You can think of a tag as a snapshot of your repository's current state. If you need to mark an important phase of your code, or when your next deliverable is done, you can create a tag and use it during the build and release process to package and deploy that specific version. For more information, see Viewing your repository's releases and tags.49
When creating new releases with release notes, it's possible to @mention contributors, add links to binary files, and edit or delete existing releases.

45 https://docs.github.com/repositories/working-with-files/managing-large-files/removing-files-from-git-large-file-storage
46 https://docs.github.com/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository
47 https://rtyley.github.io/bfg-repo-cleaner
48 https://git-scm.com/book/en/Git-Basics-Tagging
49 https://docs.github.com/repositories/releasing-projects-on-github/viewing-your-repositorys-releases-and-tags

Image reference: Releases · Microsoft/azure-pipelines-agent (github.com)50


Also, you can:
●● Publish an action from a specific release in GitHub Marketplace.
●● Choose whether Git LFS objects are included in the ZIP files and tarballs GitHub creates for each
release.
●● Receive notifications when new releases are published in a repository.

Creating a release
To create a release, use the gh release create command. Replace the tag with the desired tag name for
the release and follow the interactive prompts.

50 https://github.com/Microsoft/azure-pipelines-agent/releases

gh release create tag

To create a prerelease with a specified title and notes, you can pass extra flags (the tag, title, and notes here are illustrative placeholders):


gh release create v1.2.1 --prerelease --title "v1.2.1 (beta)" --notes "Early preview build"

If you @mention any GitHub users in the notes, the published release on GitHub.com will include a
Contributors section with an avatar list of all the mentioned users.
You can check other commands and arguments from the GitHub CLI manual51.

Editing a release
You can't edit Releases with GitHub CLI.
To edit, use the Web Browser:
1. Navigate to the main repository page on GitHub.com.
2. Click Releases to the right of the list of files.
3. Click on the edit icon on the right side of the page, next to the release you want to edit.
4. Edit the details for the release, then click Update release.

Deleting a release
To delete a release, use the following command, replace the tag with the release tag to delete, and use
the -y flag to skip confirmation.
gh release delete tag -y

For more information, see:


●● Managing releases in a repository - GitHub Docs52 - If you want to perform the same steps from
Web Browser instead of GitHub CLI.
●● Publishing an action in the GitHub Marketplace53.
●● Managing Git LFS objects in archives of your repository54.
●● Viewing your subscriptions55.

Automate release notes with GitHub


After learning how to create and manage release tags in your repository, you'll learn how to configure the
automatically generated release notes template from your GitHub releases.

51 https://cli.github.com/manual/gh_release_create
52 https://docs.github.com/repositories/releasing-projects-on-github/managing-releases-in-a-repository
53 https://docs.github.com/actions/creating-actions/publishing-actions-in-github-marketplace
54 https://docs.github.com/github/administering-a-repository/managing-git-lfs-objects-in-archives-of-your-repository
55 https://docs.github.com/github/managing-subscriptions-and-notifications-on-github/viewing-your-subscriptions

You can generate an overview of the contents of a release, and you can also customize your automated
release notes.
It's possible to use labels to create custom categories to organize pull requests you want to include or
exclude specific labels and users from appearing in the output.

Creating automatically generated release notes


While configuring your release, you'll see the option Auto-generate release notes to include all changes between your tag and the last release. If you've never created a release, it will include all changes from your repository.

You can choose if you want to customize it or leave it as it is.
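If you work from the command line, a hedged equivalent uses GitHub CLI's --generate-notes flag (the tag name is a placeholder):
gh release create v2.0.0 --generate-notes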

Configuring automatically generated release notes template


You can customize the auto-generate release notes template by using the following steps.
1. Navigate to your repository and create a new file.

2. You can use the name .github/release.yml to create the release.yml file in the .github directory.

3. Specify in YAML the pull request labels and authors you want to exclude from this release. You can
also create new categories and list the pull request labels in each. For more information about
configuration options, see Automatically generated release notes - GitHub Docs.56
Example configuration:
# .github/release.yml

changelog:
  exclude:
    labels:
      - ignore-for-release
    authors:
      - octocat
  categories:
    - title: Breaking Changes 🛠
      labels:
        - Semver-Major
        - breaking-change
    - title: Exciting New Features 🎉
      labels:
        - Semver-Minor
        - enhancement
    - title: Other Changes
      labels:
        - "*"

4. Commit your new file.

56 https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes#configuration-options

5. Try to create a new release and click + Auto-generate release notes to see the template structure.
For more information, see:
●● About releases - GitHub Docs57
●● Linking to releases - GitHub Docs58
●● Automation for release forms with query parameters - GitHub Docs59

Summary
This module explored how to work with large repositories and purge repository data.
You learned how to describe the benefits and usage of:
●● Understand large Git repositories.
●● Explain Git Virtual File System (GVFS).
●● Use Git Large File Storage (LFS).
●● Purge repository data.

Learn more
●● Get started with Git and Visual Studio - Azure Repos | Microsoft Docs60.
●● Using Git LFS and VFS for Git introduction - Code With Engineering Playbook (microsoft.github.
io)61.
●● Work with large files in your Git repo - Azure Repos | Microsoft Docs62.
●● Delete a Git repo from your project - Azure Repos | Microsoft Docs63.

57 https://docs.github.com/repositories/releasing-projects-on-github/about-releases
58 https://docs.github.com/repositories/releasing-projects-on-github/linking-to-releases
59 https://docs.github.com/repositories/releasing-projects-on-github/automation-for-release-forms-with-query-parameters
60 https://docs.microsoft.com/azure/devops/repos/git/gitquickstart
61 https://microsoft.github.io/code-with-engineering-playbook/source-control/git-guidance/git-lfs-and-vfs/
62 https://docs.microsoft.com/azure/devops/repos/git/manage-large-files
63 https://docs.microsoft.com/azure/devops/repos/git/delete-existing-repo

Identify technical debt


Introduction
This module examines technical debt, complexity, quality metrics, and plans for effective code reviews
and code quality validation.

Learning objectives
After completing this module, students and professionals can:
●● Identify and manage technical debt.
●● Integrate code quality tools.
●● Plan code reviews.
●● Describe complexity and quality metrics.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Examine code quality


The quality of code shouldn't be measured subjectively.
A developer writing code would likely rate the quality of their own code highly, but that isn't a great way to measure code quality.
Different teams may use different definitions based on context.
Code that is considered high quality may mean one thing for an automotive developer and another for a web application developer.
The quality of the code is essential, as it impacts the overall software quality.
A study on “Software Defect Origins and Removal Methods” found that individual programmers are less than 50% efficient at finding bugs in their software, and most forms of testing are only 35% efficient. It makes it difficult to determine quality.
There are five key traits to measure for higher quality.

Reliability
Reliability measures the probability that a system will run without failure over a specific period of opera-
tion.
It relates to the number of defects and availability of the software. The number of defects can be measured by running a static analysis tool.
Software availability can be measured using the mean time between failures (MTBF).
Low defect counts are crucial for developing a reliable codebase.

Maintainability
Maintainability measures how easily software can be maintained. It relates to the size, consistency,
structure, and complexity of the codebase.
And ensuring maintainable source code relies on several factors, such as testability and understandability.
You can't use a single metric to ensure maintainability.
Some metrics you may consider to improve maintainability are the number of stylistic warnings and Halstead complexity measures.
Both automation and human reviewers are essential for developing maintainable codebases.

Testability
Testability measures how well the software supports testing efforts.
It relies on how well you can control, observe, isolate, and automate testing, among other factors.
Testability can be measured based on how many test cases you need to find potential faults in the
system.
The size and complexity of the software can impact testability.
So, applying methods at the code level—such as cyclomatic complexity—can help you improve the
testability of the component.

Portability
Portability measures how usable the same software is in different environments. It relates to platform
independence.
There isn't a specific measure of portability. But there are several ways you can ensure portable code.
It's essential to regularly test code on different platforms rather than waiting until the end of develop-
ment.
It's also a good idea to set your compiler warning levels as high as possible—and use at least two
compilers.
Enforcing a coding standard also helps with portability.

Reusability
Reusability measures whether existing assets—such as code—can be used again.
Assets are more easily reused if they have characteristics such as modularity or loose coupling.
Reusability can be measured by the number of interdependencies.
Running a static analyzer can help you identify these interdependencies.

Examine complexity and quality metrics


While there are various quality metrics, a few of the most important ones are listed here.
Complexity metrics can help in measuring quality. Cyclomatic complexity measures the number of linearly independent paths through a program's source code. Another way to understand quality is through calculating Halstead complexity measures.

These measures include:
●● Program vocabulary.
●● Program length.
●● Calculated program length.
●● Volume.
●● Difficulty.
●● Effort.
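As a brief sketch of the standard Halstead definitions behind this list (using n1 for distinct operators, n2 for distinct operands, N1 for total operator occurrences, and N2 for total operand occurrences):
●● Vocabulary: n = n1 + n2
●● Length: N = N1 + N2
●● Calculated length: N' = n1 × log2(n1) + n2 × log2(n2)
●● Volume: V = N × log2(n)
●● Difficulty: D = (n1 / 2) × (N2 / n2)
●● Effort: E = D × V
For comparison, cyclomatic complexity is computed over a program's control-flow graph as M = E - N + 2P, where E is the number of edges, N the number of nodes, and P the number of connected components.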
Code analysis tools can be used to check for considerations such as security, performance, interoperability, language usage, and globalization, and should be part of every developer's toolbox and software build process.
Regularly running a static code analysis tool and reading its output is a great way to improve as a
developer because the things caught by the software rules can often teach you something.

Common-quality-related metrics
One of the promises of DevOps is to deliver software both faster and with higher quality. Previously,
these two metrics have been almost opposites. The more quickly you went, the lower the quality. The
higher the quality, the longer it took. But DevOps processes can help you find problems earlier, which
usually means that they take less time to fix.
We've previously talked about some general project metrics and KPIs. The following is a list of metrics
that directly relate to the quality of the code being produced and the build and deployment processes.
●● Failed builds percentage - Overall, what percentage of builds are failing?
●● Failed deployments percentage - Overall, what percentage of deployments are failing?
●● Ticket volume - What is the overall volume of customer or bug tickets?
●● Bug bounce percentage - What percentage of customer or bug tickets are being reopened?
●● Unplanned work percentage - What percentage of the overall work being performed is unplanned?

Introduction to technical debt


Technical debt is a term that describes the future cost that will be incurred by choosing an easy solution
today instead of using better practices because they would take longer to complete.
The term technical debt was chosen for its comparison to financial debt. It's common for people in financial debt to make decisions that seem appropriate or the only option at the time, but in so doing, interest increases.
The more interest that accumulates, the harder it is for them in the future, and the fewer options available to them later. With financial debt, interest soon accrues on interest, creating a snowball effect.
Similarly, technical debt can build up to the point where developers spend almost all their time sorting
out problems and doing rework, either planned or unplanned, rather than adding value.
So, how does it happen?
The most common excuse is tight deadlines. When developers are forced to create code quickly, they'll often take shortcuts. For example, instead of refactoring a method to include new functionality, a developer copies it to create a new version. Then only the new code gets tested, avoiding the level of testing required when changing the original method, because other parts of the code use it.

Now there are two copies of the same code to modify in the future instead of one, with the risk of the logic diverging. There are many causes. For example, there might be a lack of technical skills and maturity among the developers or no clear product ownership or direction.
The organization might not have coding standards at all, so the developers don't even know what they should be producing. The developers might not have precise requirements to target, or they might be subject to last-minute requirement changes.
Necessary refactoring work might be delayed. There might not be any code quality testing, manual or
automated. In the end, it just makes it harder and harder to deliver value to customers in a reasonable
time frame and at a reasonable cost.
Technical debt is one of the main reasons that projects fail to meet their deadlines.
Over time, it increases in much the same way that monetary debt does. Common sources of technical
debt are:
●● Lack of coding style and standards.
●● Lack of or poor design of unit test cases.
●● Ignoring or not understanding object-oriented design principles.
●● Monolithic classes and code libraries.
●● Poorly envisioned use of technology, architecture, and approach (forgetting that all system attributes, such as maintainability, user experience, and scalability, need to be considered).
●● Over-engineering code (adding or creating code that isn't required, adding custom code when
existing libraries are sufficient, or creating layers or components that aren't needed).
●● Insufficient comments and documentation.
●● Not writing self-documenting code (including class, method, and variable names that are descriptive
or indicate intent).
●● Taking shortcuts to meet deadlines.
●● Leaving dead code in place.

Note: Over time, the technical debt must be paid back. Otherwise, the team's ability to fix issues and
implement new features and enhancements will take longer and eventually become cost-prohibitive.

We have seen that technical debt adds a set of problems during development and makes it much more
difficult to add extra customer value.
Having technical debt in a project saps productivity, frustrates development teams, makes code both hard to understand and fragile, and increases the time to make changes and validate those changes. Unplanned work frequently gets in the way of planned work.
Longer-term, it also saps the organization's strength. Technical debt tends to creep up on an organiza-
tion. It starts small and grows over time. Every time a quick hack is made or testing is circumvented
because changes need to be rushed through, the problem grows worse and worse. Support costs get
higher and higher, and invariably, a serious issue arises.
Eventually, the organization can't respond to its customers' needs in a timely and cost-efficient way.

Automated measurement for monitoring


One key way to minimize the constant acquisition of technical debt is to use automated testing and
assessment.
In the demos that follow, we'll look at one of the common tools used to assess the debt: SonarCloud.
(The original on-premises version was SonarQube).
There are other tools available, and we'll discuss a few of them.
Later, in the next hands-on lab, you'll see how to configure your Azure Pipelines to use SonarCloud,
understand the analysis results, and finally how to configure quality profiles to control the rule sets that
are used by SonarCloud when analyzing your projects.
For more information, see SonarCloud64.

To review:
Azure DevOps can be integrated with a wide range of existing tooling used to check code quality during
builds.
Which code quality tools do you currently use (if any)?
What do you like or dislike about them?

Measure and manage technical debt


Integrating the assessment and measurement of technical debt and code quality overall is essential as
part of your Continuous Integration and Deployment pipelines in Azure DevOps.
In the Continuous Integration course in this series, we showed how to add support for SonarCloud into
an Azure DevOps pipeline.
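As a hedged sketch (not the exact pipeline from the course; the service connection name, organization, and project key below are placeholder values for your own setup), the YAML additions typically use the SonarCloud marketplace tasks like this:

steps:
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: 'SonarCloudConnection'   # placeholder: your service connection name
    organization: 'my-org'               # placeholder: your SonarCloud organization
    scannerMode: 'MSBuild'
    projectKey: 'my-project-key'         # placeholder: your SonarCloud project key
- task: DotNetCoreCLI@2                  # the build runs between prepare and analyze
  inputs:
    command: 'build'
- task: SonarCloudAnalyze@1              # run the analysis on the built code
- task: SonarCloudPublish@1              # publish the quality gate result
  inputs:
    pollingTimeoutSec: '300'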
After it's added and a build is done, you can see an analysis of your code:

If you drill into the issues, you can see what the issues are, along with suggested remedies and estimates
of the time required to apply a remedy.

64 https://sonarcloud.io/about

Integrate other code quality tools


Many tools can be used to assess aspects of your code quality and technical debt.

NDepend
For .NET developers, a common tool is NDepend.
NDepend is a Visual Studio extension that assesses the amount of technical debt a developer has added
during a recent development period, typically in the last hour.
With this information, the developer might make the required corrections before ever committing the
code.
NDepend lets you create code rules expressed as C# LINQ queries, but it has many built-in rules that
detect a wide range of code smells.

Resharper Code Quality Analysis


Resharper can perform code quality analysis from the command line. It can also be set to automatically fail builds when code quality issues are found.
Rules can be configured for enforcement across teams.

Search in Azure DevOps


To find other code quality tools that are easy to integrate with Azure DevOps, search for the word Quality
when adding a task into your build pipeline.

For more information, you can see:


●● NDepend65
●● Visual Studio marketplace
●● Resharper Code Quality Analysis

Plan effective code reviews


Most developers would agree that code reviews can improve the quality of the applications they produce,
but only if the process for the code reviews is effective. It is essential, upfront, to agree that everyone is
trying to achieve better code quality.
Achieving code quality can seem challenging because there's no single best way to write any piece of code, at least code with any complexity. Everyone wants to do good work and to be proud of what they create.
This means it's easy for developers to become overprotective of their code. The organizational culture
must let all involved feel that the code reviews are more like mentoring sessions where ideas about

65 https://www.ndepend.com

improving code are shared than interrogation sessions where the aim is to identify problems and blame
the author.
The knowledge-sharing that can occur in mentoring-style sessions can be one of the most important
outcomes of the code review process. It often happens best in small groups (even two people) rather
than in large team meetings. And it's important to highlight what has been done well, not just what
needs improvement.
Developers will often learn more in effective code review sessions than they will in any formal training.
Reviewing code should be an opportunity for all involved to learn, not just as a chore that must be
completed as part of a formal process.
It's easy to see two or more people working on a problem and think that one person could have com-
pleted the task by themselves. That is a superficial view of the longer-term outcomes.
Team management needs to understand that improving the code quality reduces the cost of code, not
increases it. Team leaders need to establish and foster an appropriate culture across their teams.

Summary
This module examined technical debt, complexity, quality metrics, and plans for effective code reviews
and code quality validation.
You learned how to:
●● Identify and manage technical debt.
●● Integrate code quality tools.
●● Plan code reviews.
●● Describe complexity and quality metrics.

Learn more
●● Technical Debt – The Anti-DevOps Culture - Developer Support (microsoft.com)66.
●● Microsoft Security Code Analysis documentation overview | Microsoft Docs67.
●● Build Quality Indicators report - Azure DevOps Server | Microsoft Docs68.

66 https://devblogs.microsoft.com/premier-developer/technical-debt-the-anti-devops-culture/
67 https://docs.microsoft.com/azure/security/develop/security-code-analysis-overview
68 https://docs.microsoft.com/azure/devops/report/sql-reports/build-quality-indicators-report

Lab
Lab 03: Version controlling with Git in Azure Repos
Lab overview
Azure DevOps supports two types of version control, Git and Team Foundation Version Control (TFVC).
Here is a quick overview of the two version control systems:
●● Team Foundation Version Control (TFVC): TFVC is a centralized version control system. Typically,
team members have only one version of each file on their dev machines. Historical data is maintained
only on the server. Branches are path-based and created on the server.
●● Git: Git is a distributed version control system. Git repositories can live locally (such as on a develop-
er's machine). Each developer has a copy of the source repository on their dev machine. Developers
can commit each set of changes on their dev machine and perform version control operations such as
history and compare without a network connection.
Git is the default version control provider for new projects. You should use Git for version control in your
projects unless you have a specific need for centralized version control features in TFVC.
In this lab, you will learn how to work with branches and repositories in Azure DevOps.

Objectives
After you complete this lab, you will be able to:
●● Work with branches in Azure Repos
●● Work with repositories in Azure Repos

Lab duration
●● Estimated time: 30 minutes

Lab updates
The labs are updated on a regular basis. For the latest information, please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions69

69 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which of the following choices isn't a repository strategy?
†† Monorepo.
†† Multiple Repo.
†† Feature Repo.

Multiple choice
Which of the following choices is the fundamental difference between the monorepo and multiple repos
philosophies?
†† Allow teams to have their own repositories to develop.
†† Allow teams working together on a system to go fastest.
†† Avoid teams losing their changes.

Multiple choice
Which of the following choices is a common tool to create a changelog?
†† gitchangelog.
†† gitlogchange.
†† gitloggenerator.

Multiple choice
Which of the following choices isn't a branch workflow type?
†† Forking workflow.
†† Trunk-based development.
†† SmartFlow workflow.

Multiple choice
Which of the following choices gives every developer a server-side repository?
†† Feature branching.
†† Trunk-based.
†† Forking.

Multiple choice
Which of the following choices creates a feature branch using git commands?
†† git checkout develop, and git checkout -b feature_branch.
†† git checkout -n feature_branch.
†† git flow feature start feature_branch.

Multiple choice
Which of the following statements best describe a pull request?
†† Pull requests are commonly used by teams and organizations collaborating, where everyone shares a
single repository, and topic branches are used to develop features and isolate changes.
†† Pull requests are used to publish new changes from feature branches into production branches and
update the environments with the latest code version.
†† Pull request creates reports from the latest branch updates and generates remediation summary for
resolving issues during development.

Multiple choice
Which of the following choices isn't a benefit of using pull requests?
†† Protect branches with policies.
†† Give feedback.
†† Create reports about code security and quality.

Multiple choice
Which of the following choices is the most effective way to avoid direct updates from feature branches into
main?
†† Branch Policies.
†† Branch Security.
†† Branch Lock.

Multiple choice
Which of the following choices isn't a Git hook event?
†† Post-checkout.
†† Pre-commit.
†† After-commit.

Multiple choice
Which of the following choices is a practical use case for using Git hooks?
†† Enforce preconditions for merging applying custom code with the event post-merge.
†† Send notifications to your team’s chat room.
†† Validate code and merge branches automatically using the commit-msg event.

Multiple choice
Which of the following choices is a valid event to verify Work Item ID association in your commit message
before commit?
†† Commit-msg.
†† Commit-msg-id.
†† Msg-commit.

Multiple choice
Which of the following is a correct statement about a fork?
†† A fork is a copy of a repository.
†† A fork is a branch for future development.
†† A fork is a branching strategy when working with centralized version control.

Multiple choice
Which of the following choices are the correct-forking workflow steps?
†† Create a fork, make your changes locally and push them to a branch, create and complete a PR to the
upstream.
†† Create a fork, clone it locally, make your changes locally, push them to a branch, create and complete
a PR to the upstream, and sync your fork to the latest from upstream.
†† Create a fork, clone it locally, make your changes locally, create and complete a PR to the upstream,
push the changes to a branch.

Multiple choice
Which of the following choices is the recommended team size to use a single repo instead of fork workflow?
†† Large teams.
†† 10-20 developers.
†† 2-5 developers.

Multiple choice
Which of the following choices is for working with large files in repositories?
†† Package Management.
†† Git LFS.
†† Git.

Multiple choice
Which of the following choices isn't a common situation when you need to purge data from repositories?
†† When you need to reduce the size of a repository significantly by removing history.
†† When you need to remove a sensitive file that should not have been uploaded.
†† When you need to hide code and history without removing permissions.

Multiple choice
Which of the following choices is the built-in Git command for removing files from the repository?
†† git remove-branch command.
†† git filter-branch command.
†† git purge-branch command.

Answers
Multiple choice
Which of the following choices isn't a repository strategy?
†† Monorepo.
†† Multiple Repo.
■■ Feature Repo.
Explanation
There are two philosophies on how to organize your repos: Monorepo or Multiple repos.
Multiple choice
Which of the following choices is the fundamental difference between the monorepo and multiple repos
philosophies?
†† Allow teams to have their own repositories to develop.
■■ Allow teams working together on a system to go fastest.
†† Avoid teams losing their changes.
Explanation
The fundamental difference between the monorepo and multiple repos philosophies boils down to a
difference about what will allow teams working together on a system to go fastest.
Multiple choice
Which of the following choices is a common tool to create a changelog?
■■ gitchangelog.
†† gitlogchange.
†† gitloggenerator.
Explanation
One common tool is gitchangelog. This tool is based on Python. Another common tool is called
github-changelog-generator.
Multiple choice
Which of the following choices isn't a branch workflow type?
†† Forking workflow.
†† Trunk-based development.
■■ SmartFlow workflow.
Explanation
There are a few branching workflow types to consider, such as Trunk-based development, and Forking
workflow.

Multiple choice
Which of the following choices gives every developer a server-side repository?
†† Feature branching.
†† Trunk-based.
■■ Forking.
Explanation
Forking Workflow gives every developer a server-side repository.
Multiple choice
Which of the following choices creates a feature branch using git commands?
■■ git checkout develop, and git checkout -b feature_branch.
†† git checkout -n feature_branch.
†† git flow feature start feature_branch.
Explanation
Create a new feature branch by using git checkout -b feature_branch.
Multiple choice
Which of the following statements best describe a pull request?
■■ Pull requests are commonly used by teams and organizations collaborating, where everyone shares a
single repository, and topic branches are used to develop features and isolate changes.
†† Pull requests are used to publish new changes from feature branches into production branches and
update the environments with the latest code version.
†† Pull request creates reports from the latest branch updates and generates remediation summary for
resolving issues during development.
Explanation
Pull requests are commonly used by teams and organizations collaborating, where everyone shares a single
repository, and topic branches are used to develop features and isolate changes.
Multiple choice
Which of the following choices isn't a benefit of using pull requests?
†† Protect branches with policies.
†† Give feedback.
■■ Create reports about code security and quality.
Explanation
The common benefits of collaborating with pull requests are: Get your code reviewed, Give great feedback,
and Protect branches with policies.

Multiple choice
Which of the following choices is the most effective way to avoid direct updates from feature branches
into main?
■■ Branch Policies.
†† Branch Security.
†† Branch Lock.
Explanation
Policies are a great way to enforce your team's code quality and change-management standards.
Multiple choice
Which of the following choices isn't a Git hook event?
†† Post-checkout.
†† Pre-commit.
■■ After-commit.
Explanation
Hooks must be stored in the .git/hooks folder in the repo root, and they must be named to match the
corresponding events like post-checkout, pre-commit, post-commit, update, and so on.
Multiple choice
Which of the following choices is a practical use case for using Git hooks?
†† Enforce preconditions for merging applying custom code with the event post-merge.
■■ Send notifications to your team’s chat room.
†† Validate code and merge branches automatically using the commit-msg event.
Explanation
There are some examples of where you can use hooks to enforce policies and one is sending notifications to
your team’s chat room (Teams, Slack, HipChat, and so on).
Multiple choice
Which of the following choices is a valid event to verify Work Item ID association in your commit message
before commit?
■■ Commit-msg.
†† Commit-msg-id.
†† Msg-commit.
Explanation
For example, you could have a hook into the commit-msg event to validate that the commit message
structure follows the recommended format, or it's associated with a work item.

Multiple choice
Which of the following is a correct statement about a fork?
■■ A fork is a copy of a repository.
†† A fork is a branch for future development.
†† A fork is a branching strategy when working with centralized version control.
Explanation
A fork is a copy of a repository.
Multiple choice
Which of the following choices are the correct-forking workflow steps?
†† Create a fork, make your changes locally and push them to a branch, create and complete a PR to the
upstream.
■■ Create a fork, clone it locally, make your changes locally, push them to a branch, create and complete
a PR to the upstream, and sync your fork to the latest from upstream.
†† Create a fork, clone it locally, make your changes locally, create and complete a PR to the upstream,
push the changes to a branch.
Explanation
The forking workflow is to create a fork, clone it locally, make your changes locally, push them to a branch,
create and complete a PR to the upstream, and sync your fork to the latest from upstream.
Multiple choice
Which of the following choices is the recommended team size to use a single repo instead of fork
workflow?
†† Large teams.
†† 10-20 developers.
■■ 2-5 developers.
Explanation
For a small team (2-5 developers), we recommend working in a single repo.
Multiple choice
Which of the following choices is for working with large files in repositories?
†† Package Management.
■■ Git LFS.
†† Git.
Explanation
When you have source files with large differences between versions and frequent updates, you can use Git
LFS.

Multiple choice
Which of the following choices isn't a common situation when you need to purge data from repositories?
†† When you need to reduce the size of a repository significantly by removing history.
†† When you need to remove a sensitive file that should not have been uploaded.
■■ When you need to hide code and history without removing permissions.
Explanation
Hiding code and history without removing permissions isn't a common reason to purge repository data. Purging is typically needed to reduce a repository's size by removing history or to remove a sensitive file that shouldn't have been uploaded.
Multiple choice
Which of the following choices is the built-in Git command for removing files from the repository?
†† git remove-branch command.
■■ git filter-branch command.
†† git purge-branch command.
Explanation
The git filter-branch command is the built-in Git command for rewriting history and removing files from a repository.
Module 3 Implement CI with Azure Pipelines
and GitHub Actions

Explore Azure Pipelines


Introduction
Azure Pipelines is a fully featured service used to create cross-platform CI (Continuous Integration) and
CD (Continuous Deployment). It works with your preferred Git provider and can deploy to most major
cloud services, including Azure. Azure DevOps provides comprehensive Pipelines capabilities.
This module introduces Azure Pipelines concepts and explains key terms and components of the tool,
helping you decide your pipeline strategy and responsibilities.

Learning objectives
After completing this module, students and professionals can:
●● Describe Azure Pipelines.
●● Explain the role of Azure Pipelines and its components.
●● Decide Pipeline automation responsibility.
●● Understand Azure Pipeline key terms.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore the concept of pipelines in DevOps


Business demands continuous delivery of value. Value is created only when a product is delivered to a
satisfied customer.
It's not created when one silo in the process is completed.
This demands that you shift focus from silos to an end-to-end flow of value.
The core idea is to create a repeatable, reliable, and incrementally-improving process for taking software
from concept to customer.
The goal is to enable a constant flow of changes into production via an automated software production
line.
Think of it as a pipeline. The pipeline breaks down the software delivery process into stages.
Each stage aims to verify the quality of new features from a different angle to validate the new function-
ality and prevent errors from affecting your users.
The pipeline should provide feedback to the team and visibility into the flow of changes to everyone involved in delivering the new feature(s).
A delivery pipeline enables smaller changes to flow more frequently, with a focus on flow.
Your teams can concentrate on optimizing the delivery of changes that bring quantifiable value to the
business.
This approach leads teams to continuously monitor and learn where they're finding obstacles, resolve
those issues, and gradually improve the pipeline's flow.
As the process continues, the feedback loop provides new insights into new issues and barriers to be
resolved.
The pipeline is the focus of your continuous improvement loop.
A typical pipeline will include the following stages: build automation and continuous integration, test
automation, and deployment automation.
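As an illustrative sketch only (the stage names, pool, and script contents are placeholder choices, not a prescribed layout), such a pipeline can be expressed in Azure Pipelines YAML like this:

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: Build            # build automation and continuous integration
  jobs:
  - job: BuildAndUnitTest
    steps:
    - script: echo "Compile the code and run unit tests"

- stage: Test             # test automation
  dependsOn: Build
  jobs:
  - job: AutomatedTests
    steps:
    - script: echo "Run integration, security, and performance tests"

- stage: Deploy           # deployment automation
  dependsOn: Test
  jobs:
  - job: Release
    steps:
    - script: echo "Deploy the validated build to an environment"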

Build automation and continuous integration


The pipeline starts by building the binaries to create the deliverables passed to the following stages. New
features implemented by the developers are integrated into the central code base, built, and unit tested.
It's the most direct feedback cycle that informs the development team about the health of their applica-
tion code.

Test automation
The new version of an application is rigorously tested throughout this stage to ensure that it meets all desired system qualities. It's crucial that all relevant aspects—whether functionality, security, performance,
or compliance—are verified by the pipeline. The stage may involve different types of automated or
(initially, at least) manual activities.

Deployment automation
A deployment is required every time the application is installed in an environment for testing, but the
most critical moment for deployment automation is rollout time.
Since the preceding stages have verified the overall quality of the system, it's a low-risk step.

The deployment can be staged, with the new version being initially released to a subset of the production
environment and monitored before being rolled out.
The deployment is automated, allowing for the reliable delivery of new functionality to users within
minutes if needed.

Your pipeline needs platform provisioning and configuration management

The deployment pipeline is supported by platform provisioning and system configuration management.
It allows teams to create, maintain, and tear down complete environments automatically or at the push of
a button.
Automated platform provisioning ensures that your candidate applications are deployed to, and tests carried out against, correctly configured and reproducible environments.
It also helps horizontal scalability and allows the business to try out new products in a sandbox environ-
ment at any time.

Orchestrating it all: release and pipeline orchestration


The multiple stages in a deployment pipeline involve different groups of people collaborating and
supervising the release of the new version of your application.
Release and pipeline orchestration provide a top-level view of the entire pipeline, allowing you to define
and control the stages and gain insight into the overall software delivery process.
By carrying out value stream mappings on your releases, you can highlight any remaining inefficiencies
and hot spots and pinpoint opportunities to improve your pipeline.
These automated pipelines need infrastructure to run on. The efficiency of this infrastructure will have a
direct impact on the effectiveness of the pipeline.

Describe Azure Pipelines


Azure Pipelines is a cloud service that automatically builds and tests your code project and makes it
available to other users. It works with just about any language or project type.
Azure Pipelines combines continuous integration (CI) and continuous delivery (CD) to test and build your
code and ship it to any target constantly and consistently.

Does Azure Pipelines work with my language and tools?


Azure Pipelines is a fully featured cross-platform CI and CD service. It works with your preferred Git
provider and can deploy to most major cloud services, including Azure services.

Languages
You can use many languages with Azure Pipelines, such as Python, Java, PHP, Ruby, C#, and Go.

Version control systems


Before you use continuous integration and continuous delivery practices for your applications, you must
have your source code in a version control system. Azure Pipelines integrates with GitHub, GitLab, Azure
Repos, Bitbucket, and Subversion.

Application types
You can use Azure Pipelines with most application types, such as Java, JavaScript, Python, .NET, PHP, Go,
XCode, and C++.

Deployment targets
Use Azure Pipelines to deploy your code to multiple targets. Targets including:
●● Container registries.
●● Virtual machines.
●● Azure services, or any on-premises or cloud target such:

●● Microsoft Azure.
●● Google Cloud.
●● Amazon Web Services (AWS).

Package formats
To produce packages that others can consume, you can publish NuGet, npm, or Maven packages to the
built-in package management repository in Azure Pipelines.
You also can use any other package management repository of your choice.
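As a hedged sketch, pushing a NuGet package to a built-in (Azure Artifacts) feed might look like the following in YAML; MyFeed is a hypothetical feed name:

steps:
- task: NuGetCommand@2
  inputs:
    command: 'push'
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'  # packages produced earlier in the build
    nuGetFeedType: 'internal'   # push to a feed within this organization
    publishVstsFeed: 'MyFeed'   # placeholder: your Azure Artifacts feed name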

Why should I use CI and CD, and Azure Pipelines?


Implementing CI and CD pipelines help to ensure consistent and quality code that's readily available to
users.
Azure Pipelines is a quick, easy, and safe way to automate building your projects and making them availa-
ble to users.

Use CI and CD for your project


Continuous integration is used to automate tests and builds for your project. CI helps to catch bugs or
issues early in the development cycle when they're easier and faster to fix. Items known as artifacts are
produced from CI systems. The continuous delivery release pipelines use them to drive automatic
deployments.
Continuous delivery is used to automatically deploy and test code in multiple stages to help drive quality.
Continuous integration systems produce deployable artifacts, which include infrastructure and apps.
Automated release pipelines consume these artifacts to release new versions and fixes to the target of
your choice.

Continuous integration (CI):
●● Increase code coverage.
●● Build faster by splitting test and build runs.
●● Automatically ensure you don't ship broken code.
●● Run tests continually.
Continuous delivery (CD):
●● Automatically deploy code to production.
●● Ensure deployment targets have the latest code.
●● Use tested code from the CI process.

Use Azure Pipelines for CI and CD


There are several reasons to use Azure Pipelines for your CI and CD solution. You can use it to:
●● Work with any language or platform.
●● Deploy to different types of targets at the same time.
●● Integrate with Azure deployments.
●● Build on Windows, Linux, or macOS machines.
●● Integrate with GitHub.
●● Work with open-source projects.

Understand Azure Pipelines key terms


Understanding the basic terms and parts of Azure Pipelines helps you further explore how it can help you
deliver better code more efficiently and reliably.

Agent
When your build or deployment runs, the system begins one or more jobs. An agent is installable software that runs a build or deployment job.

Artifact
An artifact is a collection of files or packages published by a build. Artifacts are made available for next
tasks, such as distribution or deployment.

Build
A build represents one execution of a pipeline. It collects the logs associated with running the steps and
the results of running tests.

Continuous delivery
Continuous delivery (CD) (also known as Continuous Deployment) is a process by which code is built,
tested, and deployed to one or more test and production stages. Deploying and testing in multiple
stages helps drive quality. Continuous integration systems produce deployable artifacts, which include
infrastructure and apps. Automated release pipelines consume these artifacts to release new versions and
fixes to existing systems. Monitoring and alerting systems constantly run to drive visibility into the entire
CD process. This process ensures that errors are caught often and early.

Continuous integration
Continuous integration (CI) is the practice used by development teams to simplify the testing and
building of code. CI helps to catch bugs or problems early in the development cycle, making them more
accessible and faster to fix. Automated tests and builds are run as part of the CI process. The process can
run on a set schedule, whenever code is pushed, or both. Items known as artifacts are produced from CI
systems. The continuous delivery release pipelines use them to drive automatic deployments.

Deployment target
A deployment target is a virtual machine, container, web app, or any service used to host the application
being developed. A pipeline might deploy the app to one or more deployment targets after the build is
completed and tests are run.

Job
A build contains one or more jobs. Most jobs run on an agent. A job represents an execution boundary of
a set of steps. All the steps run together on the same agent.
For example, you might build two configurations - x86 and x64. In this case, you have one build and two
jobs.
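A sketch of that x86/x64 example in YAML, using a matrix strategy (buildPlatform is an illustrative variable name); one run of this pipeline is one build containing two jobs:

jobs:
- job: Build
  strategy:
    matrix:
      x86:
        buildPlatform: 'x86'
      x64:
        buildPlatform: 'x64'
  steps:
  - script: echo "Building for $(buildPlatform)"   # runs once per matrix entry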

Pipeline
A pipeline defines the continuous integration and deployment process for your app. It's made up of steps
called tasks.
It can be thought of as a script that describes how your test, build, and deployment steps are run.

Release
When you use the visual designer, you create a release pipeline in addition to a build pipeline. A release is a term used to describe one execution of a release pipeline. It's made up of deployments to multiple
stages.

Stage
Stages are the primary divisions in a pipeline: “build the app,” “run integration tests,” and “deploy to user acceptance testing” are good examples of stages.

Task
A task is the building block of a pipeline. For example, a build pipeline might consist of build tasks and
test tasks. A release pipeline consists of deployment tasks. Each task runs a specific job in the pipeline.

Trigger
A trigger is something that's set up to tell the pipeline when to run. You can configure a pipeline to run upon a push to a repository, at scheduled times, or upon completing another build. All these actions are known as triggers.
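The following minimal YAML sketch shows how several of these terms fit together; the job name, path, and script content are placeholders:

trigger:                       # trigger: tells the pipeline when to run (here, pushes to main)
- main

pool:                          # the agent pool that supplies an agent for the job
  vmImage: 'ubuntu-latest'

jobs:
- job: BuildJob                # job: an execution boundary; its steps share one agent
  steps:
  - script: echo "A task runs here"                    # task: the building block of a pipeline
  - publish: '$(System.DefaultWorkingDirectory)/drop'  # placeholder path
    artifact: drop             # artifact: files published by the build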

Summary
This module introduced Azure Pipelines concepts and explained key terms and components of the tool. It
helped you decide your pipeline strategy and responsibilities.
You learned how to:
●● Describe Azure Pipelines.
●● Explain the role of Azure Pipelines and its components.
●● Decide Pipeline automation responsibility.
●● Understand Azure Pipeline key terms.

Learn more
●● What is Azure Pipelines? - Azure Pipelines | Microsoft Docs1.
●● Use Azure Pipelines - Azure Pipelines | Microsoft Docs2.

1 https://docs.microsoft.com/azure/devops/pipelines/get-started/what-is-azure-pipelines
2 https://docs.microsoft.com/azure/devops/pipelines/get-started/pipelines-get-started

Manage Azure Pipeline agents and pools


Introduction
This module explores differences between Microsoft-hosted and self-hosted agents, details job types, and introduces agent pool configuration.
You will understand typical situations for using agent pools and how to manage their security.

Learning objectives
After completing this module, students and professionals can:
●● Choose between Microsoft-hosted and self-hosted agents.
●● Install and configure Azure Pipelines Agents.
●● Configure agent pools.
●● Make the agents and pools secure.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Choose between Microsoft-hosted and self-hosted agents

To build your code or deploy your software, you generally need at least one agent.
As you add more code and people, you'll eventually need more.
When your build or deployment runs, the system begins one or more jobs.
An agent is installable software that runs one build or deployment job at a time.

Microsoft-hosted agent
If your pipelines are in Azure Pipelines, then you've got a convenient option to build and deploy using a
Microsoft-hosted agent.
With a Microsoft-hosted agent, maintenance and upgrades are automatically done.
Each time a pipeline is run, a new virtual machine (instance) is provided. The virtual machine is discarded
after one use.
For many teams, this is the simplest way to build and deploy.
You can try it first and see if it works for your build or deployment. If not, you can use a self-hosted agent.
A Microsoft-hosted agent has job time limits.

Self-hosted agent
An agent that you set up and manage on your own to run build and deployment jobs is a self-hosted
agent.
You can use a self-hosted agent in Azure Pipelines. A self-hosted agent gives you more control to install
dependent software needed for your builds and deployments.
You can install the agent on:
●● Linux.
●● macOS.
●● Windows.
●● Linux Docker containers.
After you've installed the agent on a machine, you can install any other software on that machine as
required by your build or deployment jobs.
A self-hosted agent doesn't have job time limits.
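In a YAML pipeline, the choice between the two shows up in the pool definition. In the sketch below, MyTeamPool is a placeholder name for a self-hosted pool you would create and manage yourself:

jobs:
- job: HostedBuild
  pool:
    vmImage: 'ubuntu-latest'   # Microsoft-hosted: a fresh VM each run
  steps:
  - script: echo "Runs on a Microsoft-hosted agent"

- job: SelfHostedBuild
  pool:
    name: 'MyTeamPool'         # self-hosted: a pool you install and manage
  steps:
  - script: echo "Runs on a self-hosted agent"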

Explore job types


In Azure DevOps, there are four types of jobs available:
●● Agent pool jobs.
●● Container jobs.
●● Deployment group jobs.
●● Agentless jobs.

Agent pool jobs


The most common type of job. These jobs run on an agent that is part of an agent pool.

Container jobs
Similar to agent pool jobs, these run in a container on an agent that's part of an agent pool.

Deployment group jobs


Jobs that run on systems in a deployment group.

Agentless jobs
Jobs that run directly on Azure DevOps. They don't require an agent for execution. They're also often called server jobs.

Introduction to agent pools


Instead of managing each agent individually, you organize agents into agent pools. An agent pool
defines the sharing boundary for all agents in that pool.

In Azure Pipelines, pools are scoped to the entire organization so that you can share the agent machines
across projects.
If you create an Agent pool for a specific project, only that project can use the pool until you add the
project pool into another project.
When creating a build or release pipeline, you can specify which pool it uses, at either organization or project scope.
Pools scoped to a project can only be used by build and release pipelines within that project.
To share an agent pool with multiple projects, use an organization-scope agent pool: in each of those projects, add an existing agent pool and choose the organization agent pool. If you create a new agent pool, you can automatically grant access permission to all pipelines.

Explore predefined agent pool


Azure Pipelines provides a pre-defined agent pool named Azure Pipelines with Microsoft-hosted agents.
It will often be an easy way to run jobs without configuring build infrastructure.
The following virtual machine images are provided by default:
●● Windows Server 2022 with Visual Studio 2022.
●● Windows Server 2019 with Visual Studio 2019.
●● Ubuntu 20.04.
●● Ubuntu 18.04.
●● macOS 11 Big Sur.
●● macOS X Catalina 10.15.
By default, all contributors in a project are members of the User role on each hosted pool.
This allows every contributor to author and run build and release pipelines using a Microsoft-hosted pool.
Pools are used to run jobs. Learn about specifying pools for jobs3.
Note: See Microsoft-hosted agents for the most up-to-date list of Agent Pool Images. Also, the complete
list of software installed on these machines.
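Any of these images can be requested by its label in a YAML pipeline. The labels in the sketch below match the images listed above as commonly documented, but verify them against the current list:

pool:
  vmImage: 'windows-2022'    # Windows Server 2022 with Visual Studio 2022
# Other labels include 'windows-2019', 'ubuntu-20.04', 'ubuntu-18.04',
# 'macOS-11', and 'macOS-10.15'.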

Understand typical situations for agent pools


If you've got many agents intended for different teams or purposes, you might want to create more
pools, as explained below.

Create agent pools


Here are some typical situations when you might want to create agent pools:
●● You're a project member, and you want to use a set of machines your team owns for running build
and deployment jobs.

●● First, make sure you're a member of a group in All Pools with the Administrator role.

3 https://docs.microsoft.com/azure/devops/pipelines/process/phases

●● Next, create a New project agent pool in your project settings and select the option to Create a
new organization agent pool. As a result, both an organization and project-level agent pool will be
created.
●● Finally, install and configure agents to be part of that agent pool.
●● You're a member of the infrastructure team and would like to set up a pool of agents for use in all
projects.

●● First, make sure you're a member of a group in All Pools with the Administrator role.
●● Next, create a New organization agent pool in your admin settings and select Autoprovision
corresponding project agent pools in all projects while creating the pool. This setting ensures all
projects have a pool pointing to the organization agent pool. The system creates a pool for
existing projects, and in the future, it will do so whenever a new project is created.
●● Finally, install and configure agents to be part of that agent pool.
●● You want to share a set of agent machines with multiple projects, but not all of them.

●● First, create a project agent pool in one of the projects and select the option to Create a new
organization agent pool while creating that pool.
●● Next, go to each of the other projects, and create a pool in each of them while selecting the option
to Use an existing organization agent pool.
●● Finally, install and configure agents to be part of the shared agent pool.

Communicate with Azure Pipelines


The agent communicates with Azure Pipelines to determine which job it needs to run and report the logs
and job status.
The agent always starts this communication. All messages from the agent to Azure Pipelines are sent over HTTPS, depending on how you configure the agent.
This pull model allows the agent to be configured in different topologies, as shown below.

Here's a standard communication pattern between the agent and Azure Pipelines.
The user registers an agent with Azure Pipelines by adding it to an agent pool. You have to be an agent
pool administrator to register an agent in that pool. The identity of the agent pool administrator is

needed only at the time of registration. It isn't persisted on the agent, nor is it used to communicate
further between the agent and Azure Pipelines.
Once the registration is complete, the agent downloads a listener OAuth token and uses it to listen to the
job queue.
Periodically, the agent checks to see if a new job request has been posted in the job queue in Azure
Pipelines. The agent downloads the job and a job-specific OAuth token when a job is available. This token
is generated by Azure Pipelines for the scoped identity specified in the pipeline. That token is short-lived
and is used by the agent to access resources (for example, source code) or modify resources (for exam-
ple, upload test results) on Azure Pipelines within that job.
Once the job is completed, the agent discards the job-specific OAuth token and checks if there's a new
job request using the listener OAuth token.
The payload of the messages exchanged between the agent, and Azure Pipelines are secured using
asymmetric encryption. Each agent has a public-private key pair, and the public key is exchanged with the
server during registration.
The server uses the public key to encrypt the job's payload before sending it to the agent. The agent
decrypts the job content using its private key. Secrets stored in build pipelines, release pipelines, or
variable groups are secured when exchanged with the agent.

Communicate to deploy to target servers


When you use the agent to deploy artifacts to a set of servers, it must have “line of sight” connectivity to
those servers.
The Microsoft-hosted agent pools, by default, have connectivity to Azure websites and servers running in
Azure.
Suppose your on-premises environments don't have connectivity to a Microsoft-hosted agent pool
(because of intermediate firewalls). In that case, you'll need to manually configure a self-hosted agent on
the on-premises computer(s).
The agents must have connectivity to the target on-premises environments and access to the Internet to
connect to Azure Pipelines or Azure DevOps Server, as shown in the following diagram.

Examine other considerations


Authentication
To register an agent, you need to be a member of the administrator role in the agent pool.
The identity of the agent pool administrator is only required at the time of registration. It's not persisted
on the agent and isn't used in any following communication between the agent and Azure Pipelines.
Also, you must be a local administrator on the server to configure the agent.
Your agent can authenticate to Azure DevOps using one of the following methods:

Personal access token (PAT)


Generate and use a PAT to connect an agent with Azure Pipelines. PAT is the only scheme that works with
Azure Pipelines. Also, as explained above, this PAT is used only when registering the agent and not for
succeeding communication.

Interactive versus service


You can run your agent as either a service or an interactive process. Whether you run an agent as a
service or interactively, you can choose which account you use to run the agent.
It's different from your credentials when registering the agent with Azure Pipelines. The choice of agent
account depends solely on the needs of the tasks running in your build and deployment jobs.

For example, to run tasks that use Windows authentication to access an external service, you must run
the agent using an account with access to that service.
However, if you're running UI tests such as Selenium or Coded UI tests that require a browser, the
browser is launched in the context of the agent account.
After configuring the agent, we recommend you first try it in interactive mode to ensure it works. Then, we recommend running the agent in one of the following modes so that it reliably keeps running for production use. These modes also ensure that the agent starts automatically if the machine is restarted:
●● As a service. You can use the service manager of the operating system to manage the lifecycle of the agent. Also, the experience for auto-upgrading the agent is better when it's run as a service.
●● As an interactive process with autologon enabled. In some cases, you might need to run the agent interactively for production use, such as UI tests. When the agent is configured to run in this mode, the screen saver is also disabled.
Some domain policies may prevent you from enabling autologon or disabling the screen saver.
In such cases, you may need to seek an exemption from the domain policy or run the agent on a work-
group computer where the domain policies don't apply.
Note: There are security risks when you enable automatic login or disable the screen saver. You allow
other users to walk up to the computer and use the account that automatically logs on. If you configure
the agent to run in this way, you must ensure the computer is physically protected; for example, located
in a secure facility. If you use Remote Desktop to access the computer on which an agent is running with
autologon, simply closing the Remote Desktop causes the computer to be locked, and any UI tests that
run on this agent may fail. To avoid this, use the tscon command to disconnect from Remote Desktop.

Agent version and upgrades


Microsoft updates the agent software every few weeks in Azure Pipelines.
The agent version is indicated in the format {major}.{minor}. For instance, if the agent version is 2.1, the
major version is 2, and the minor version is 1.
When a newer version of the agent differs only in its minor version, it's automatically upgraded by Azure Pipelines.
This upgrade happens when one of the tasks requires a newer version of the agent.
If you run the agent interactively, or if a newer major version of the agent is available, you must manually upgrade the agents. You can do it from the agent pools tab under your project collection or organization.
You can view the version of an agent by navigating to Agent pools and selecting the Capabilities tab for
the wanted agent.
Azure Pipelines: https://dev.azure.com/{your_organization}/_admin/_AgentPool

Question and Answer

Do self-hosted agents have any performance advantages over Microsoft-hosted agents?

In many cases, yes. Specifically:
●● If you use a self-hosted agent, you can run incremental builds. For example, you define a CI build
pipeline that doesn't clean the repo or do a clean build. Your builds will typically run faster.

●● You don't get these benefits when using a Microsoft-hosted agent. The agent is destroyed after
the build or release pipeline is completed.
●● A Microsoft-hosted agent can take longer to start your build. While it often takes just a few seconds
for your job to be assigned to a Microsoft-hosted agent, it can sometimes take several minutes for an
agent to be allocated, depending on the load on our system.

Can I install multiple self-hosted agents on the same machine?

Yes. This approach can work well for agents that run jobs that don't consume many shared resources. For
example, you could try it for agents that run releases that mostly orchestrate deployments and don't do
much work on the agent itself.
In other cases, you might find that you don't gain much efficiency by running multiple agents on the
same machine.
For example, it might not be worthwhile for agents that run builds that consume a lot of disk and I/O resources.
You might also have problems if parallel build jobs use the same singleton tool deployment, such as npm
packages.
For example, one build might update a dependency while another build is in the middle of using it, which
could cause unreliable results and errors.
Further instructions on how to set up self-hosted agents can be found at:
●● Self-hosted Windows agents4.
●● Run a self-hosted agent behind a web proxy.5

Describe security of agent pools


Understanding how security works for agent pools helps you control sharing and use of agents.

Azure Pipelines
In Azure Pipelines, roles are defined on each agent pool. Membership in these roles governs what
operations you can do on an agent pool.

4 https://docs.microsoft.com/azure/devops/pipelines/agents/v2-windows
5 https://docs.microsoft.com/azure/devops/pipelines/agents/proxy

Roles on an organization agent pool and their purpose:
●● Reader: Members of this role can view the organization's agent pool and agents. You typically use it to add operators that are responsible for monitoring the agents and their health.
●● Service Account: Members of this role can use the organization agent pool to create a project agent pool in a project. If you follow the guidelines above for creating new project agent pools, you typically don't have to add any members here.
●● Administrator: In addition to all the above permissions, members of this role can register or unregister agents from the organization's agent pool. They can also refer to the organization agent pool when creating a project agent pool in a project. Finally, they can also manage membership for all roles of the organization agent pool. The user that created the organization agent pool is automatically added to the Administrator role for that pool.
The All agent pools node in the Agent Pools tab is used to control the security of all organization agent
pools.
Role memberships for individual organization agent pools are automatically inherited from the ‘All agent
pools’ node.
Roles are also defined on each project agent pool. Memberships in these roles govern what operations you can do on an agent pool at the project level.

Roles on a project agent pool and their purpose:
●● Reader: Members of this role can view the project agent pool. You typically use it to add operators responsible for monitoring the build and deployment jobs in that project agent pool.
●● User: Members of this role can use the project agent pool when authoring build or release pipelines.
●● Administrator: In addition to all the above operations, members of this role can manage membership for all roles of the project agent pool. The user that created the pool is automatically added to the Administrator role for that pool.
The All agent pools node in the Agent pools tab controls the security of all project agent pools in a
project.
Role memberships for individual project agent pools are automatically inherited from the ‘All agent pools’
node.
By default, the following groups are added to the Administrator role of ‘All agent pools’: Build Administrators, Release Administrators, and Project Administrators.

Summary
This module explored differences between Microsoft-hosted and self-hosted agents, detailed job types,
and introduced agent pools configuration.
You learned how to:
●● Choose between Microsoft-hosted and self-hosted agents.
●● Install and configure Azure Pipelines Agents.
●● Configure agent pools.
●● Make the agents and pools secure.

Learn more
●● Microsoft-hosted agents for Azure Pipelines - Azure Pipelines | Microsoft Docs6.
●● Deploy an Azure Pipelines agent on Linux - Azure Pipelines | Microsoft Docs7.
●● Deploy a build and release agent on macOS - Azure Pipelines | Microsoft Docs8.
●● Deploy an Azure Pipelines agent on Windows - Azure Pipelines | Microsoft Docs9.
●● Agents pools - Azure Pipelines | Microsoft Docs10.

6 https://docs.microsoft.com/azure/devops/pipelines/agents/hosted
7 https://docs.microsoft.com/azure/devops/pipelines/agents/v2-linux
8 https://docs.microsoft.com/azure/devops/pipelines/agents/v2-osx
9 https://docs.microsoft.com/azure/devops/pipelines/agents/v2-windows
10 https://docs.microsoft.com/azure/devops/pipelines/agents/pools-queues

Describe pipelines and concurrency


Introduction
This module describes parallel jobs and how to estimate their usage. Also, it presents Azure DevOps for
open-source projects, explores Visual Designer and YAML pipelines.

Learning objectives
After completing this module, students and professionals can:
●● Use and estimate parallel jobs.
●● Use Azure Pipelines for open-source or private projects.
●● Use Visual Designer.
●● Work with Azure Pipelines and YAML.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.

Understand parallel jobs


How a parallel job is consumed by a build or release
Consider an organization that has only one Microsoft-hosted parallel job.
This job allows users in that organization to collectively run only one build or release job at a time.
When more jobs are triggered, they're queued and will wait for the previous job to finish.

A release consumes a parallel job only when it's being actively deployed to a stage.
While the release is waiting for approval or manual intervention, it doesn't consume a parallel job.

A simple example of parallel jobs


●● FabrikamFiber CI Build 102 (main branch) starts first.
●● Deployment of FabrikamFiber Release 11 is triggered by the completion of FabrikamFiber CI Build 102.
●● FabrikamFiber CI Build 101 (feature branch) is triggered. The build can't start yet because Release 11's
deployment is active. So, the build stays queued.
●● Release 11 waits for approvals. Fabrikam CI Build 101 starts because a release waiting for approvals
doesn't consume a parallel job.
●● Release 11 is approved. It resumes only after Fabrikam CI Build 101 is completed.

Relationship between jobs and parallel jobs


The term job can refer to multiple concepts, and its meaning depends on the context:
●● When you define a build or release, you can define it as a collection of jobs. When a build or release
runs, you can run multiple jobs as part of that build or release.
●● Each job consumes a parallel job that runs on an agent. When there aren't enough parallel jobs
available for your organization, then the jobs are queued up and run one after the other.
You don't consume any parallel jobs when you run a server job or deploy to a deployment group.
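For example, in the sketch below (the job names are illustrative), each job consumes one parallel job while it runs; in an organization with a single parallel job, one of them waits in the queue until the other finishes:

jobs:
- job: BuildApp
  steps:
  - script: echo "Consumes one parallel job while running"

- job: BuildDocs         # with only one parallel job available, this waits for BuildApp
  steps:
  - script: echo "Runs when a parallel job becomes free"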

Estimate parallel jobs


Determine how many parallel jobs you need
You could begin by seeing if the free tier offered in your organization is enough for your teams.
When you've reached the 1,800 minutes per month limit for the free tier of Microsoft-hosted parallel
jobs, you can start by buying one parallel job to remove this monthly time limit before deciding to
purchase more.
As the number of queued builds and releases exceeds the number of parallel jobs you have, your build
and release queues will grow longer.
When you find the queue delays are too long, you can purchase extra parallel jobs as needed.

Simple estimate
A simple rule of thumb: Estimate that you'll need one parallel job for every four to five users in your
organization.

Detailed estimate
In the following scenarios, you might need multiple parallel jobs:
●● If you have multiple teams, and if each of them requires a CI build, you'll likely need a parallel job for
each team.
●● If your CI build trigger applies to multiple branches, you'll likely need a parallel job for each active
branch.
●● If you develop multiple applications by using one organization or server, you'll likely need more
parallel jobs: one to deploy each application simultaneously.

View available parallel jobs


Browse to Organization settings > Pipelines > Parallel jobs.

Location of parallel jobs in organization settings


URL example: https://{your_organization}/_settings/buildqueue?_a=concurrentJobs
View the maximum number of parallel jobs that are available in your organization.
Select View in-progress jobs to display all the builds and releases that are actively consuming an available
parallel job or queued waiting for a parallel job to be available.

Sharing of parallel jobs across projects in a collection


Parallel jobs are purchased at the organization level, and they're shared by all projects in an organization.
Currently, there isn't a way to partition or dedicate parallel job capacity to a specific project or agent
pool. For example:
●● You purchase two parallel jobs in your organization.
●● You queue two builds in the first project, and both the parallel jobs are consumed.
●● You queue a build in the second project. That build won't start until one of the builds in your first
project is completed.

Describe Azure Pipelines and open-source projects
Azure DevOps offers developers a suite of DevOps capabilities, including Source control, Agile planning,
Build, Release, Test, and more.
But using Azure DevOps features requires the user to first sign in with a Microsoft or GitHub account.
However, this blocks many engaging scenarios where you want to publicly share your code and artifacts
or provide a wiki library or build status page for unauthenticated users.
With public projects, users can mark an Azure DevOps team project as public.
Anonymous (unauthenticated) users can then view the contents of that project in a read-only state,
enabling collaboration that wasn't possible before.
Anonymous users will essentially see the same views as authenticated users, with non-public functionality
such as settings or actions (such as queue build) hidden or disabled.

Public versus private projects


Projects in Azure DevOps provide a repository for source code and a place for a group of developers and
teams to plan, track progress, and collaborate on building software solutions.
One or more projects can be defined within an organization in Azure DevOps.
Users that aren't signed into the service have read-only access to public projects on Azure DevOps.
Private projects require users to be granted access to the project and signed in to access the services.

Supported services
Non-members of a public project will have read-only access to a limited set of services, specifically:
●● Browse the code base, download code, view commits, branches, and pull requests.
●● View and filter work items.
●● View a project page or dashboard.
●● View the project Wiki.
●● Do a semantic search of the code or work items.
For more information, see Differences and limitations for non-members of a public project11.

A practical example: .NET Core CLI


Supporting open-source development is one of the most compelling scenarios for public projects. A
good example is the .NET Core CLI.
Its source is hosted on GitHub, and the team uses Azure DevOps for its CI builds.
However, if you click on the build badges in the readme, you won't see the build results unless you're
one of the project's maintainers.
Since this is an open-source project, everybody should be able to view the full results to see why a build failed and
maybe even send a pull request to help fix it.

11 https://docs.microsoft.com/azure/devops/organizations/public/feature-differences

Thanks to public project capabilities, the team can enable just that experience. Everyone in the community
has access to the same build results, whether or not they're a maintainer on the project.

How do I qualify for the free tier of Azure Pipelines for public projects?
Microsoft will automatically apply the free tier limits for public projects if you meet both conditions:
●● Your pipeline is part of an Azure Pipelines public project.
●● Your pipeline builds a public repository from GitHub or the same public project in your Azure DevOps
organization.

Are there limits on who can use Azure Pipelines?


You can have as many users as you want when you're using Azure Pipelines. There's no per-user charge
for using Azure Pipelines.
Users with both basic and stakeholder access can author as many builds and releases as they want.

Are there any limits on the number of builds and release pipelines that I can create?
No. You can create hundreds or even thousands of definitions for no charge. You can register any number
of self-hosted agents for no cost.

As a Visual Studio Enterprise subscriber, do I get more parallel jobs for Azure Pipelines?
Yes. Visual Studio Enterprise subscribers get one self-hosted parallel job in each Azure DevOps Services
organization where they're a member.

When you're using the per-minute plan, you can run only one job at a time.
If you run builds for more than 14 paid hours in a month, the per-minute plan might be less cost-effective
than the parallel jobs model.
See Azure DevOps Services Pricing | Microsoft Azure12 for current pricing.

Explore Azure Pipelines and Visual Designer


You can create and configure your build and release pipelines in the Azure DevOps web portal with the
visual designer. (Often referred to as “Classic Pipelines”).
1. Configure Azure Pipelines to use your Git repo.
2. Use the Azure Pipelines visual designer to create and configure your build and release pipelines.
3. Push your code to your version control repository. This action triggers your pipeline and runs tasks
such as building or testing code.
such as building or testing code.

12 https://azure.microsoft.com/pricing/details/devops/azure-devops-services/

4. The build creates an artifact used by the rest of your pipeline to run tasks such as deploying to
staging or production.
5. Your code is now updated, built, tested, and packaged. It can be deployed to any target.

Benefits of using the Visual Designer


The visual designer is great for new users in continuous integration (CI) and continuous delivery (CD).
●● The visual representation of the pipelines makes it easier to get started.
●● The visual designer is in the same hub as the build results. This location makes it easier to switch back
and forth and make changes.
If you think the designer workflow is best for you, create your first pipeline using the visual designer13.

Describe Azure Pipelines and YAML


Mirroring the rise of interest in infrastructure as code, there has been considerable interest in defining
pipelines as code. However, pipeline as code doesn't mean executing a script that's stored in source
control.
Codified pipelines use their programming model to simplify the setup and maximize reuse.
A typical microservice architecture will require many deployment pipelines that are identical. It's tedious
to craft these pipelines via a user interface or SDK.
The ability to define the pipeline in code helps apply all the principles of code sharing, reuse, templatization,
and code reviews. Azure DevOps offers you both experiences. You can either use YAML to define
your pipelines or use the visual designer to do the same. You will, however, find that more product-level
investments are being made to enhance the YAML pipeline experience.
When you use YAML, you define your pipeline mostly in code (a YAML file) alongside the rest of the code
for your app. When using the visual designer, you define a build pipeline to build and test your code and
publish artifacts.
You also specify a release pipeline to consume and deploy those artifacts to deployment targets.

13 https://docs.microsoft.com/azure/devops/pipelines/get-started-designer

Use Azure Pipelines with YAML


You can configure your pipelines in a YAML file that exists alongside your code.
1. Configure Azure Pipelines to use your Git repo.
2. Edit your azure-pipelines.yml file to define your build.
3. Push your code to your version control repository. This action kicks off the default trigger to build and
deploy and then monitor the results.
4. Your code is now updated, built, tested, and packaged. It can be deployed to any target.

Benefits of using YAML


●● The pipeline is versioned with your code and follows the same branching structure. You get validation
of your changes through code reviews in pull requests and branch build policies.
●● Every branch you use can modify the build policy by adjusting the azure-pipelines.yml file.
●● A change to the build process might cause a break or result in an unexpected outcome. Because the
change is in version control with the rest of your codebase, you can more easily identify the issue.
If you think the YAML workflow is best for you, create your first pipeline by using YAML14.
While there's a slightly higher learning curve and a higher degree of code orientation when defining
pipelines with YAML, it's now the preferred method.

Summary
This module described parallel jobs and how to estimate their usage. Also, it presented Azure DevOps for
open-source projects, explored Visual Designer and YAML pipelines.
You learned how to describe the benefits and usage of:
●● Use and estimate parallel jobs.
●● Use Azure Pipelines for open-source or private projects.
●● Use Visual Designer.
●● Work with Azure Pipelines and YAML.

Learn more
●● Configure and pay for parallel jobs - Azure DevOps | Microsoft Docs15.
●● Azure DevOps Services Pricing | Microsoft Azure16.

14 https://docs.microsoft.com/azure/devops/pipelines/get-started-yaml
15 https://docs.microsoft.com/azure/devops/pipelines/licensing/concurrent-jobs
16 https://azure.microsoft.com/pricing/details/devops/azure-devops-services

●● About public projects - Azure DevOps Services | Microsoft Docs17.


●● Create your first pipeline - Azure Pipelines | Microsoft Docs18.
●● Customize your pipeline - Azure Pipelines | Microsoft Docs19.

17 https://docs.microsoft.com/azure/devops/organizations/public/about-public-projects
18 https://docs.microsoft.com/azure/devops/pipelines/create-first-pipeline
19 https://docs.microsoft.com/azure/devops/pipelines/customize-pipeline

Explore Continuous integration


Introduction
Continuous Integration is one of the key pillars of DevOps.
Once you have your code in a version control system, you need an automated way of integrating the
code on an ongoing basis.
Azure Pipelines can be used to create a fully featured cross-platform CI and CD service.
It works with your preferred Git provider and can deploy to most major cloud services, including Azure.
This module details continuous integration practice and the pillars for implementing it in the development
lifecycle, its benefits, and properties.

Learning objectives
After completing this module, students and professionals can:
●● Explain why Continuous Integration matters.
●● Implement Continuous Integration using Azure Pipelines.
●● Explain the benefits of Continuous Integration.
●● Describe build properties.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Introduction to continuous integration


Continuous integration (CI) is the process of automating the build and testing of code every time a team
member commits changes to version control.
CI encourages developers to share their code and unit tests by merging their changes into a shared
version control repository after every small task completion.
Committing code triggers an automated build system to grab the latest code from the shared repository
and build, test, and validate the entire main branch (also known as the trunk or main).

The idea is to minimize the cost of integration by making it an early consideration.


Developers can discover conflicts at the boundaries between new and existing code early, while conflicts
are still relatively easy to reconcile.
Once the conflict is resolved, work can continue with confidence that the new code honors the
requirements of the existing codebase.
Integrating code frequently doesn't, by itself, offer any guarantees about the quality of the new code or
functionality.
In many organizations, integration is costly because manual processes are used to ensure that the code meets
standards, doesn't introduce bugs, and doesn't break existing functionality.
Frequent integration can create friction when the level of automation doesn't match the amount of
quality assurance measures in place.
In practice, continuous integration relies on robust test suites and an automated system to run those
tests to address this friction within the integration process.
When a developer merges code into the main repository, automated processes kick off a build of the new
code.
Afterward, test suites are run against the new build to check whether any integration problems were
introduced.
If either the build or the test phase fails, the team is alerted to work to fix the build.
The end goal of continuous integration is to make integration a simple, repeatable process that is part of the
everyday development workflow, reducing integration costs and surfacing defects early.
Working to make sure the system is robust, automated, and fast while cultivating a team culture that
encourages frequent iteration and responsiveness to build issues is fundamental to the strategy's success.

Learn the four pillars of continuous integration


Continuous integration relies on four key elements for successful implementation: a Version Control
System, a Package Management System, a Continuous Integration System, and an Automated Build Process.
A version control system manages changes to your source code over time.
●● Git20
●● Apache Subversion21
●● Team Foundation Version Control22
A package management system is used to install, uninstall, and manage software packages.
●● NuGet23
●● Node Package Manager (NPM)24
●● Chocolatey25
●● HomeBrew26
●● RPM27
A continuous integration system merges all developer working copies to shared mainline several times
a day.
●● Azure DevOps28
●● TeamCity29
●● Jenkins30
An automated build process creates a software build, including compiling, packaging, and running
automated tests.
●● Apache Ant31
●● NAnt32
●● Gradle33
Note: For each element, your team needs to select the specific platforms and tools they'll use. You need
to ensure that you've established each pillar before proceeding.

Explore benefits of continuous integration


Continuous integration (CI) provides many benefits to the development process, including:
●● Improving code quality based on rapid feedback

20 https://git-scm.com/
21 https://subversion.apache.org/
22 https://docs.microsoft.com/azure/devops/repos/tfvc/overview
23 https://www.nuget.org/
24 https://www.npmjs.com/
25 https://chocolatey.org/
26 https://brew.sh/
27 http://rpm.org/
28 https://azure.microsoft.com/services/devops
29 https://www.jetbrains.com/teamcity/
30 https://jenkins.io/
31 http://ant.apache.org/
32 http://nant.sourceforge.net/
33 https://gradle.org/

●● Triggering automated testing for every code change


●● Reducing build times for quick feedback and early detection of problems (risk reduction)
●● Better managing technical debt and conducting code analysis
●● Reducing long, complex, and bug-inducing merges
●● Increasing confidence in codebase health long before production deployment

Key Benefit: Rapid Feedback for Code Quality


Possibly the most essential benefit of continuous integration is rapid feedback to the developer.
If the developer commits something and breaks the code, they'll know that immediately from the build,
unit tests, and other metrics.
Suppose successful integration is happening across the team.
In that case, the developer will also know if their code change breaks something that another team
member did to a different part of the codebase.
This process removes long, complex, and drawn-out bug-inducing merges, allowing organizations to
deliver swiftly.
Continuous integration also enables tracking metrics to assess code quality over time. For example, unit
test passing rates, code that breaks frequently, code coverage trends, and code analysis.
It can provide information on what has been changed between builds for traceability benefits. Also,
introduce evidence of what teams do to have a global view of build results.
For more information, you can see: What is Continuous Integration?34

CI implementation challenges
●● Have you tried to implement continuous integration in your organization?
●● Were you successful?
●● If you were successful, what lessons did you learn?
●● If you weren't successful, what were the challenges?

Describe build properties


You may have noticed that in some demos, the build number was just an integer, yet in other demos,
there's a formatted value that was based upon the date.
It is one of the items that can be set in the Build Options.

Build number formatting


The example shown below is from the build options that were configured by the ASP.NET Web Application
build template:

34 https://docs.microsoft.com/devops/develop/what-is-continuous-integration

In this case, the date has been retrieved as a system variable, then formatted via yyyyMMdd, and the
revision is then appended.
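As a reference sketch, the build number format behind that behavior is typically along these lines (the exact string in your template may differ):

$(date:yyyyMMdd)$(rev:.r)

With this format, the third build on 15 January 2023 would be numbered 20230115.3.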

Build status
While we have been manually queuing each build, we'll soon see that builds can be automatically
triggered.
It's a key capability required for continuous integration.
But there are times that we might not want the build to run, even if it's triggered.
It can be controlled with these settings:

Note: You can use the Paused setting to allow new builds to queue but hold off on starting them.

Authorization and timeouts


You can configure properties for the build job as shown here:

The authorization scope determines whether the build job is limited to accessing resources in the current
project or can access resources in other projects in the project collection.
The build job timeout determines how long the job can execute before being automatically canceled.
A value of zero (or leaving the text box empty) specifies that there's no limit.
The build job cancel timeout determines how long the server will wait for a build job to respond to a
cancellation request.

Badges
Some development teams like to show the state of the build on an external monitor or website.
These settings provide a link to the image to use for it. Here's an example Azure Pipelines badge that has
Succeeded:

For more information, see Build Pipeline Options35.

Summary
This module detailed the Continuous Integration practice—also, the pillars for implementing it in the
development lifecycle, its benefits, and properties.
You learned how to describe the benefits and usage of:
●● Explain why Continuous Integration matters.
●● Implement Continuous Integration using Azure Pipelines.
●● Explain the benefits of Continuous Integration.
●● Describe build properties.

Learn more
●● Design a CI/CD pipeline using Azure DevOps - Azure Example Scenarios | Microsoft Docs36.
●● Build options - Azure Pipelines | Microsoft Docs37.
●● Create your first pipeline - Azure Pipelines | Microsoft Docs38.

35 https://docs.microsoft.com/azure/devops/pipelines/build/options
36 https://docs.microsoft.com/azure/architecture/example-scenario/apps/devops-dotnet-webapp
37 https://docs.microsoft.com/azure/devops/pipelines/build/options
38 https://docs.microsoft.com/azure/devops/pipelines/create-first-pipeline

Implement a pipeline strategy


Introduction
This module describes pipeline strategies, configuration, implementation of multi-agent builds, and what
source controls Azure Pipelines supports.

Learning objectives
After completing this module, students and professionals can:
●● Define a build strategy.
●● Explain and configure demands.
●● Implement multi-agent builds.
●● Use different source control types available in Azure Pipelines.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Configure agent demands


Not all agents are the same. We've seen that they can be based on different operating systems, but they
can also have different dependencies installed.
To describe these differences, every agent has a set of capabilities configured as name-value pairs. The capabilities,
such as machine name and operating system type, that are automatically discovered are referred to as System
capabilities. The ones that you define are called User-defined capabilities.
There's a tab for Capabilities on the Agent Pools page (at the Organization level) when you select an
agent.
You can use it to see the available capabilities for an agent and to configure user capabilities.
Opening a configured self-hosted agent, you can see the capabilities on that tab:

When you configure a build pipeline and the agent pool to use, you can specify specific demands that
the agent must meet on the Options tab.

In the build job image, the HasPaymentService capability is required, using an exists condition. Besides
exists, you can also demand that a capability equals a specific value.
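The same requirement can be expressed in a YAML pipeline. Here's a minimal sketch, assuming a self-hosted agent pool named MyAgentPool and the HasPaymentService capability from the example above:

pool:
  name: MyAgentPool # illustrative pool name
  demands:
  - HasPaymentService # exists check: the capability must be present
  - Agent.OS -equals Windows_NT # equals check: the capability must match this value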

For more information, see Capabilities39.

Explore multi-configuration and multi-agent


You might need to run the tasks of a pipeline multiple times but with a slightly different configuration or
split a large batch of work among various agents.
There are three different types of jobs you can run.
●● None: Tasks will run on a single agent.
●● Multi-configuration: Run the same set of tasks on multiple configurations as specified in the multipliers.
Configurations will run in parallel, and each configuration will use a single agent. For example:

●● Run the release once with configuration setting A on WebApp A and setting B for WebApp B.
●● Deploy to different geographic regions.

●● Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents. For
example, you can run a broad suite of 1000 tests on a single agent. Or you can use two agents and
run 500 tests on each one in parallel.
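As a sketch of the multi-agent option in YAML, the following job splits its work across two agents; the System.JobPositionInPhase and System.TotalJobsInPhase variables tell each running copy which slice it owns (the job name and step are illustrative):

jobs:
- job: ParallelTests
  pool:
    vmImage: ubuntu-latest
  strategy:
    parallel: 2 # multi-agent: run two copies of this job at the same time
  steps:
  - script: echo "Running slice $(System.JobPositionInPhase) of $(System.TotalJobsInPhase)"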
For more information, see Specify jobs in your pipeline40.

Do you see a purpose for Release Jobs in your pipeline?


How would you set it up?
Topics you might want to consider are:
●● Do you have artifacts from multiple sources?
●● Do you want to run deployments on different servers simultaneously?
●● Do you need multiple platforms?
●● How long does your release take?
●● Can you run your deployment in parallel, or does it need to run in sequence?

39 https://docs.microsoft.com/azure/devops/pipelines/agents/agents
40 https://docs.microsoft.com/azure/devops/pipelines/process/phases

Implement multi-job builds


You can use multiple build agents to support multiple build machines. Either distribute the load, run
builds in parallel, or use different agent capabilities.
As an example, components of an application might require different incompatible versions of a library or
dependency.

Multiple jobs in a pipeline


Adding multiple jobs to a pipeline lets you:
●● Break your pipeline into sections that need different agent pools or self-hosted agents.
●● Publish artifacts in one job and consume them in one or more subsequent jobs.
●● Build faster by running multiple jobs in parallel.
●● Enable conditional execution of tasks.
To add another agent job to an existing pipeline, click on the ellipsis and choose as shown in this image:
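For example, here's a minimal sketch of publishing an artifact in one job and consuming it in a later job (the WebApp artifact name and echoed content are illustrative):

jobs:
- job: Build
  steps:
  - script: echo built > $(Build.ArtifactStagingDirectory)/output.txt
  - publish: $(Build.ArtifactStagingDirectory) # publish a pipeline artifact
    artifact: WebApp
- job: Deploy
  dependsOn: Build
  steps:
  - download: current # download artifacts published earlier in this run
    artifact: WebApp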

Parallel jobs
At the organization level, you can configure the number of parallel jobs that are made available.

The free tier allows for one parallel job of up to 1,800 minutes per month. Self-hosted agents have
higher levels.
Note: You can define a build as a collection of jobs rather than as a single job. Each job consumes one of
these parallel jobs that run on an agent. If there aren't enough parallel jobs available for your organiza-
tion, the jobs will be queued and run sequentially.

Build Related Tooling


Azure DevOps can be integrated with a wide range of existing tooling used for builds or associated with
builds.
Which build-related tools do you currently use?
What do you like or don't like about the tools?

Explore source control types supported by Azure Pipelines
Azure Pipelines offers both YAML-based pipelines and the classic editor.
The table shows the repository types supported by both.

Repository type             Azure Pipelines (YAML)   Azure Pipelines (classic editor)
Azure Repos Git             Yes                      Yes
Azure Repos TFVC            No                       Yes
Bitbucket Cloud             Yes                      Yes
Other Git (generic)         No                       Yes
GitHub                      Yes                      Yes
GitHub Enterprise Server    Yes                      Yes
Subversion                  No                       Yes

Summary
This module described pipeline strategies, configuration, implementation of multi-agent builds, and what
source controls Azure Pipelines supports.
You learned how to describe the benefits and usage of:
●● Define a build strategy.
●● Explain and configure demands.
●● Implement multi-agent builds.
●● Use different source control types available in Azure Pipelines.

Learn more
●● Create a multi-platform pipeline - Azure Pipelines | Microsoft Docs41.
●● Demands - Azure Pipelines | Microsoft Docs42.
●● Build source repositories - Azure Pipelines | Microsoft Docs43.

41 https://docs.microsoft.com/azure/devops/pipelines/get-started-multiplatform
42 https://docs.microsoft.com/azure/devops/pipelines/process/demands
43 https://docs.microsoft.com/azure/devops/pipelines/repos

Integrate with Azure Pipelines


Introduction
This module details Azure Pipelines anatomy and structure, templates, YAML resources, and how to use
multiple repositories in your pipeline.
Also, it explores communication to deploy using Azure Pipelines to target servers.

Learning objectives
After completing this module, students and professionals can:
●● Describe advanced Azure Pipelines anatomy and structure.
●● Detail templates and YAML resources.
●● Implement and use multiple repositories.
●● Explore communication to deploy using Azure Pipelines.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Understanding of Azure Pipelines.
●● Beneficial to have experience in an organization that delivers software.

Describe the anatomy of a pipeline


Azure Pipelines can automatically build and validate every pull request and commit to your Azure Repos
Git repository.
Azure Pipelines can be used with Azure DevOps public projects and Azure DevOps private projects.
In future sections of this training, we'll also learn how to use Azure Repos with external code repositories
such as GitHub.
Let's start by creating a hello world YAML Pipeline.

Hello world
Start slowly and create a pipeline that echoes “Hello world!” to the console. No technical course is
complete without a hello world example.
name: 1.0$(Rev:.r)

# simplified trigger (implied branch)
trigger:
- main

# equivalent trigger
# trigger:
#   branches:
#     include:
#     - main

variables:
  name: martin

pool:
  vmImage: ubuntu-latest

jobs:
- job: helloworld
  steps:
  - script: echo "Hello, $(name)"

Most pipelines will have these components:


●● Name – though often it's skipped (if it's skipped, a date-based name is generated automatically).
●● Trigger – more on triggers later, but without an explicit trigger, there's an implicit “trigger on every
commit to any path from any branch in this repo.”
●● Variables – “Inline” variables (more on other types of variables later).
●● Job – every pipeline must have at least one job.
●● Pool – you configure which pool (queue) the job must run on.
●● Checkout – the “checkout: self” tells the job which repository (or repositories if there are multiple
checkouts) to check out for this job.
●● Steps – the actual tasks that need to be executed: in this case, a “script” task (the script is an alias) that
can run inline scripts.

Name
The name property is a bit misleading since it really specifies the build number format. You'll get an integer
number if you don't explicitly set a name format. It's a monotonically increasing number for runs triggered
off this pipeline, starting at 1. This number is stored in Azure DevOps. You can make use of this number
by referencing $(Rev).
To make a date-based number, you can use the format $(Date:yyyy-MM-dd-HH-mm) to get a build
number like 2020-01-16-19-22.
To get a semantic number like 1.0.x, you can use something like 1.0.$(Rev:.r).
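Putting that together, a minimal sketch of a name element at the top of a pipeline:

name: $(Date:yyyyMMdd)$(Rev:.r)

This produces build numbers such as 20200116.1, with the revision incrementing for each run on the same day.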

Triggers
If there's no explicit triggers section, then it's implied that any commit to any path in any branch will
trigger this pipeline to run.
However, you can be more precise using filters such as branches or paths.
Let's consider this trigger:

trigger:
  branches:
    include:
    - main

This trigger is configured to queue the pipeline only when there's a commit to the main branch. What
about triggering for any branch except main? You guessed it: use exclude instead of include:
trigger:
  branches:
    exclude:
    - main

Tip: You can get the name of the branch from the variables Build.SourceBranch (for the full name like
refs/heads/main) or Build.SourceBranchName (for the short name like main).
What about a trigger for any branch with a name that starts with feature/ and only if the change is in the
webapp folder?
trigger:
  branches:
    include:
    - feature/*
  paths:
    include:
    - webapp/**

You can mix includes and excludes if you need to. You can also filter on tags.
Tip: Don't forget one overlooked trigger: none. If you never want your pipeline to trigger automatically,
you can use none. It's helpful if you're going to create a pipeline that is only manually triggered.
There are other triggers for other events, such as:
●● Pull Requests (PRs) can also filter branches and paths.
●● Schedules allow you to specify cron expressions for scheduling pipeline runs.
●● Pipelines will enable you to trigger pipelines when other pipelines are complete, allowing pipeline
chaining.
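For example, here's a minimal sketch of the Schedules trigger mentioned above, running a nightly build of main at 03:00 UTC (the schedule and display name are illustrative):

schedules:
- cron: '0 3 * * *'
  displayName: Nightly build
  branches:
    include:
    - main
  always: true # run even if there were no code changes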
You can find all the documentation on triggers here44.

44 https://docs.microsoft.com/azure/devops/pipelines/build/triggers

Jobs
A job is a set of steps executed by an agent in a queue (or pool). Jobs are atomic – they're performed
wholly on a single agent. You can configure the same job to run on multiple agents simultaneously, but
even in this case, the entire set of steps in the job is run on every agent. If you need some steps to run on
one agent and some on another, you'll need two jobs.
A job has the following attributes besides its name:
●● displayName – a friendly name.
●● dependsOn - a way to specify dependencies and ordering of multiple jobs.
●● condition – a binary expression: if it evaluates to true, the job runs; if false, the job is skipped.
●● strategy - used to control how jobs are parallelized.
●● continueOnError - to specify if the rest of the pipeline should continue or not if this job fails.
●● pool – the name of the pool (queue) to run this job on.
●● workspace - managing the source workspace.
●● container - for specifying a container image to execute the job later.
●● variables – variables scoped to this job.
●● steps – the set of steps to execute.
●● timeoutInMinutes and cancelTimeoutInMinutes for controlling timeouts.
●● services - sidecar services that you can spin up.
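To make these attributes concrete, here's a minimal sketch of a job that uses several of them (all names and values are illustrative):

jobs:
- job: smoketests
  displayName: Run smoke tests
  condition: succeeded() # run only if earlier jobs succeeded
  continueOnError: true # a failure here won't fail the pipeline
  timeoutInMinutes: 10
  pool:
    vmImage: ubuntu-latest
  steps:
  - script: echo Running smoke tests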

Dependencies
You can define dependencies between jobs using the dependsOn property. It lets you specify sequences
and fan-out and fan-in scenarios.
A sequential dependency is implied if you don't explicitly define a dependency.
If you want jobs to run in parallel, you need to specify dependsOn: none.
Let's look at a few examples. Consider this pipeline:
jobs:
- job: A
  steps:
  # steps omitted for brevity
- job: B
  steps:
  # steps omitted for brevity

Because no dependsOn was specified, the jobs will run sequentially: first A and then B.
To have both jobs run in parallel, we add dependsOn: none to job B:

jobs:
- job: A
  steps:
  # steps omitted for brevity
- job: B
  dependsOn: none
  steps:
  # steps omitted for brevity

If we want to fan out and fan in, we can do that too:


jobs:
- job: A
  steps:
  - script: echo 'job A.'
- job: B
  dependsOn: A
  steps:
  - script: echo 'job B.'
- job: C
  dependsOn: A
  steps:
  - script: echo 'job C.'
- job: D
  dependsOn:
  - B
  - C
  steps:
  - script: echo 'job D.'
- job: E
  dependsOn:
  - B
  - D
  steps:
  - script: echo 'job E.'

Checkout
Classic builds implicitly check out any repository artifacts, but YAML pipelines require you to be more explicit
using the checkout keyword:
●● Jobs check out the repo they're contained in automatically unless you specify checkout: none.
●● Deployment jobs don't automatically check out the repo, so you'll need to specify checkout: self for
deployment jobs if you want access to files in the YAML file's repo.

Download
Downloading artifacts requires you to use the download keyword. Downloads also work the opposite way
for jobs and deployment jobs:
●● Jobs don't download anything unless you explicitly define a download.
●● Deployment jobs implicitly do a download: current, which downloads any pipeline artifacts that have
been created in the existing pipeline. To prevent it, you must specify download: none.
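As a sketch, an explicit download in a regular job, and disabling the implicit download in a deployment job (the WebApp artifact name is illustrative):

steps:
- download: current # fetch artifacts published by this run
  artifact: WebApp

# In a deployment job, to skip the implicit download:
# - download: none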

Resources
What if your job requires source code in another repository? You'll need to use resources. Resources let
you reference:
●● other repositories
●● pipelines
●● builds (classic builds)
●● containers (for container jobs)

●● packages
To reference code in another repo, specify that repo in the resources section and then reference it via its
alias in the checkout step:
resources:
  repositories:
  - repository: appcode
    type: git
    name: otherRepo

steps:
- checkout: appcode

Steps are Tasks


Steps are the actual “things” that execute in the order specified in the job.
Each step is a task: out-of-the-box (OOB) tasks come with Azure DevOps, many of which have aliases, and
other tasks can be installed to your Azure DevOps organization via the marketplace.
Creating custom tasks is beyond the scope of this chapter, but you can see how to make your custom
tasks here45.

Variables
It would be tough to achieve any sophistication in your pipelines without variables. Though this classifica-
tion is partly mine, several types of variables exist, and pipelines don't distinguish between these types.
However, I've found it helpful to categorize pipeline variables to help teams understand some nuances
when dealing with them.
Every variable is a key: value pair. The key is the variable's name, and it has a value.
To dereference a variable, wrap the key in $(). Let's consider this example:
variables:
  name: martin

steps:
- script: echo "Hello, $(name)!"

It will write Hello, martin! to the log.

Understand the pipeline structure


A pipeline is one or more stages that describe a CI/CD process.

45 https://docs.microsoft.com/azure/devops/extend/develop/add-build-task

Stages are the primary divisions in a pipeline. The stages “Build this app,” "Run these tests," and “Deploy
to preproduction” are good examples.
A stage is one or more jobs, units of work assignable to the same machine.
You can arrange both stages and jobs into dependency graphs. Examples include “Run this stage before
that one” and "This job depends on the output of that job."
A job is a linear series of steps. Steps can be tasks, scripts, or references to external templates.
This hierarchy is reflected in the structure of a YAML file like:
●● Pipeline
  ●● Stage A
    ●● Job 1
      ●● Step 1.1
      ●● Step 1.2
      ●● ...
    ●● Job 2
      ●● Step 2.1
      ●● Step 2.2
      ●● ...
  ●● Stage B
    ●● ...
Simple pipelines don't require all these levels. For example, in a single-job build, you can omit the containers
for stages and jobs because there are only steps.
Because many options shown in this article aren't required and have reasonable defaults, your YAML
definitions are unlikely to include all of them.

Pipeline
The schema for a pipeline:
name: string # build numbering format
resources:
  pipelines: [ pipelineResource ]
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: # several syntaxes
trigger: trigger
pr: pr
stages: [ stage | templateReference ]

If you have a single stage, you can omit the stages keyword and directly specify the jobs keyword:
# ... other pipeline-level keywords
jobs: [ job | templateReference ]

If you have a single stage and a single job, you can omit the stages and jobs keywords and directly specify
the steps keyword:
# ... other pipeline-level keywords
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]

Stage
A stage is a collection of related jobs. By default, stages run sequentially. Each stage starts only after the
preceding stage is complete.
Use approval checks to manually control when a stage should run. These checks are commonly used to
control deployments to production environments.
Checks are a mechanism available to the resource owner. They control when a stage in a pipeline
consumes a resource.
As an owner of a resource like an environment, you can define checks required before a stage that
consumes the resource can start.
This example runs three stages, one after another. The middle stage runs two jobs in parallel.
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    - script: echo Building!

- stage: Test
  jobs:
  - job: TestOnWindows
    steps:
    - script: echo Testing on Windows!
  - job: TestOnLinux
    steps:
    - script: echo Testing on Linux!

- stage: Deploy
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploying the code!

Job
A job is a collection of steps run by an agent or on a server. Jobs can run conditionally and might depend
on previous jobs.
jobs:
- job: MyJob
  displayName: My First Job
  continueOnError: true
  workspace:
    clean: outputs
  steps:
  - script: echo My first job

Deployment strategies
Deployment strategies allow you to use specific techniques to deliver updates when deploying your
application.
Techniques examples:
●● Enable initialization.
●● Deploy the update.
●● Route traffic to the updated version.
●● Test the updated version after routing traffic.
●● If there's a failure, run steps to restore to the last known good version.

RunOnce
runOnce is the simplest deployment strategy, wherein all the lifecycle hooks are executed once.
strategy:
  runOnce:
    preDeploy:
      pool: [ server | pool ] # See pool schema.
      steps:
      - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
    deploy:
      pool: [ server | pool ] # See pool schema.
      steps:
      ...
    routeTraffic:
      pool: [ server | pool ]
      steps:
      ...
    postRouteTraffic:
      pool: [ server | pool ]
      steps:
      ...
    on:
      failure:
        pool: [ server | pool ]
        steps:
        ...
      success:
        pool: [ server | pool ]
        steps:
        ...

For details and examples, see Deployment jobs46.

Rolling
A rolling deployment replaces instances of the previous version of an application with instances of the
new version. It can be configured by specifying the keyword rolling: under the strategy: node.
strategy:
  rolling:
    maxParallel: [ number or percentage as x% ]
    preDeploy:
      steps:
      - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
    deploy:
      steps:
      ...
    routeTraffic:
      steps:
      ...
    postRouteTraffic:
      steps:
      ...
    on:
      failure:
        steps:
        ...
      success:
        steps:
        ...

46 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs

For details and examples, see Deployment jobs47.

Canary
By using this strategy, you can roll out the changes to a small subset of servers first. The canary deployment
strategy is an advanced deployment strategy that helps mitigate the risk involved in rolling out new
versions of applications.
As you gain more confidence in the new version, you can release it to more servers in your infrastructure
and route more traffic to it.
strategy:
  canary:
    increments: [ number ]
    preDeploy:
      pool: [ server | pool ] # See pool schema.
      steps:
      - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
    deploy:
      pool: [ server | pool ] # See pool schema.
      steps:
      ...
    routeTraffic:
      pool: [ server | pool ]
      steps:
      ...
    postRouteTraffic:
      pool: [ server | pool ]
      steps:
      ...
    on:
      failure:
        pool: [ server | pool ]
        steps:
        ...
      success:
        pool: [ server | pool ]
        steps:
        ...

For details and examples, see Deployment jobs48.

47 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs
48 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs

Lifecycle hooks
You can achieve these deployment strategy techniques by using lifecycle hooks. Depending on the pool
attribute, each hook resolves into an agent or server job49.
Lifecycle hooks inherit the pool specified by the deployment job. Deployment jobs use the
$(Pipeline.Workspace) system variable.
Available lifecycle hooks:
●● preDeploy: Used to run steps that initialize resources before application deployment starts.
●● deploy: Used to run steps that deploy your application. Download artifact task will be auto injected
only in the deploy hook for deployment jobs. To stop downloading artifacts, use - download: none or
choose specific artifacts to download by specifying Download Pipeline Artifact task50.
●● routeTraffic: Used to run steps that serve the traffic to the updated version.
●● postRouteTraffic: Used to run the steps after the traffic is routed. Typically, these tasks monitor the
health of the updated version for a defined interval.
●● on: failure or on: success: Used to run steps for rollback actions or clean-up.

Steps
Steps are the linear sequence of operations that make up a job. Each step runs in its own process on an agent
and has access to the pipeline workspace on a local hard drive.
This behavior means environment variables aren't preserved between steps, but file system changes are.
steps:
- script: echo This runs in the default shell on any machine

- bash: |
    echo This multiline script always runs in Bash.
    echo Even on Windows machines!

- pwsh: |
    Write-Host "This multiline script always runs in PowerShell Core."
    Write-Host "Even on non-Windows machines!"

Tasks
Tasks are the building blocks of a pipeline. There's a catalog of tasks available to choose from.
steps:
- task: VSBuild@1
  displayName: Build
  timeoutInMinutes: 120
  inputs:
    solution: '**\*.sln'

49 https://docs.microsoft.com/azure/devops/pipelines/process/phases
50 https://docs.microsoft.com/azure/devops/pipelines/yaml-schema/steps-download

Detail templates
Template references
You can export reusable sections of your pipeline to a separate file. These individual files are known as
templates. Azure Pipelines supports four types of templates:
●● Stage
●● Job
●● Step
●● Variable
You can also use templates to control what is allowed in a pipeline and define how parameters can be
used.
●● Parameter
Templates themselves can include other templates. Azure Pipelines supports a maximum of 50 individual
template files in a single pipeline.

Stage templates
You can define a set of stages in one file and use it multiple times in other files.
In this example, a stage is repeated twice for two different testing regimes. The stage itself is specified
only once.
# File: stages/test.yml

parameters:
  name: ''
  testFile: ''

stages:
- stage: Test_${{ parameters.name }}
  jobs:
  - job: ${{ parameters.name }}_Windows
    pool:
      vmImage: vs2017-win2016
    steps:
    - script: npm install
    - script: npm test -- --file=${{ parameters.testFile }}
  - job: ${{ parameters.name }}_Mac
    pool:
      vmImage: macos-10.14
    steps:
    - script: npm install
    - script: npm test -- --file=${{ parameters.testFile }}

Templated pipeline
# File: azure-pipelines.yml

stages:
- template: stages/test.yml # Template reference
  parameters:
    name: Mini
    testFile: tests/miniSuite.js

- template: stages/test.yml # Template reference
  parameters:
    name: Full
    testFile: tests/fullSuite.js

Job templates
You can define a set of jobs in one file and use it multiple times in other files.
In this example, a single job is repeated on three platforms. The job itself is specified only once.
# File: jobs/build.yml

parameters:
  name: ''
  pool: ''
  sign: false

jobs:
- job: ${{ parameters.name }}
  pool: ${{ parameters.pool }}
  steps:
  - script: npm install
  - script: npm test
  - ${{ if eq(parameters.sign, 'true') }}:
    - script: sign

# File: azure-pipelines.yml

jobs:
- template: jobs/build.yml # Template reference
  parameters:
    name: macOS
    pool:
      vmImage: 'macOS-10.14'

- template: jobs/build.yml # Template reference
  parameters:
    name: Linux
    pool:
      vmImage: 'ubuntu-16.04'

- template: jobs/build.yml # Template reference
  parameters:
    name: Windows
    pool:
      vmImage: 'vs2017-win2016'
    sign: true # Extra step on Windows only

Step templates
You can define a set of steps in one file and use it multiple times in another file.
# File: steps/build.yml

steps:
- script: npm install
- script: npm test

# File: azure-pipelines.yml

jobs:
- job: macOS
  pool:
    vmImage: 'macOS-10.14'
  steps:
  - template: steps/build.yml # Template reference

- job: Linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - template: steps/build.yml # Template reference

- job: Windows
  pool:
    vmImage: 'vs2017-win2016'
  steps:
  - template: steps/build.yml # Template reference
  - script: sign # Extra step on Windows only

Variable templates
You can define a set of variables in one file and use it multiple times in other files.
In this example, a set of variables is repeated across multiple pipelines. The variables are specified only
once.
# File: variables/build.yml
variables:
- name: vmImage
  value: vs2017-win2016
- name: arch
  value: x64
- name: config
  value: debug

# File: component-x-pipeline.yml
variables:
- template: variables/build.yml # Template reference
pool:
  vmImage: ${{ variables.vmImage }}
steps:
- script: build x ${{ variables.arch }} ${{ variables.config }}

# File: component-y-pipeline.yml
variables:
- template: variables/build.yml # Template reference
pool:
  vmImage: ${{ variables.vmImage }}
steps:
- script: build y ${{ variables.arch }} ${{ variables.config }}

Explore YAML resources


Resources in YAML represent sources of pipelines, repositories, and containers. For more information on
Resources, see here51.

General schema
resources:
  pipelines: [ pipeline ]
  repositories: [ repository ]
  containers: [ container ]

Pipeline resource
If you have an Azure pipeline that produces artifacts, your pipeline can consume the artifacts by using the
pipeline keyword to define a pipeline resource.
resources:
  pipelines:
  - pipeline: MyAppA
    source: MyCIPipelineA
  - pipeline: MyAppB
    source: MyCIPipelineB
    trigger: true
  - pipeline: MyAppC
    project: DevOpsProject
    source: MyCIPipelineC
    branch: releases/M159
    version: 20190718.2
    trigger:
      branches:
        include:
        - master
        - releases/*
        exclude:
        - users/*

51 https://docs.microsoft.com/azure/devops/pipelines/process/resources

Container resource
Container jobs let you isolate your tools and dependencies inside a container. The agent launches an
instance of your specified container then runs steps inside it. The container keyword lets you specify your
container images.
Service containers run alongside a job to provide various dependencies like databases.
resources:
  containers:
  - container: linux
    image: ubuntu:16.04
  - container: windows
    image: myprivate.azurecr.io/windowsservercore:1803
    endpoint: my_acr_connection
  - container: my_service
    image: my_service:tag
    ports:
    - 8080:80 # bind container port 80 to 8080 on the host machine
    - 6379 # bind container port 6379 to a random available port on the host machine
    volumes:
    - /src/dir:/dst/dir # mount /src/dir on the host into /dst/dir in the container

Repository resource
Let the system know about the repository if:
●● Your pipeline has templates in another repository.
●● You want to use multi-repo checkout with a repository that requires a service connection.
The repository keyword lets you specify an external repository.
resources:
  repositories:
  - repository: common
    type: github
    name: Contoso/CommonTools
    endpoint: MyContosoServiceConnection

Use multiple repositories in your pipeline


You might have micro git repositories providing utilities used in multiple pipelines within your project.
Pipelines often rely on various repositories.
You can have different repositories with sources, tools, scripts, or other items that you need to build your
code. By using multiple checkout steps in your pipeline, you can fetch and check out other repositories in
addition to the one you use to store your YAML pipeline.
Previously, Azure Pipelines didn't offer support for using multiple code repositories in a single pipeline.
You could work around it by using artifacts or directly cloning other repositories via script within a pipeline,
but that left access management and security up to you.
Repositories are now first-class citizens within Azure Pipelines. This enables some exciting use cases, such as
checking out specific parts of a repository and checking out multiple repositories.
There's also a use case for not checking out any repository in the pipeline. It can be helpful in cases
where you're setting up a pipeline to do a job that has no dependency on any repository.

Specify multiple repositories


Repositories can be specified as a repository resource or in line with the checkout step. Supported
repositories are Azure Repos Git, GitHub, and BitBucket Cloud.
The following combinations of checkout steps are supported.
●● If there are no checkout steps, the default behavior is checkout: self is the first step.
●● If there's a single checkout: none step, no repositories are synced or checked out.
●● If there's a single checkout: self step, the current repository is checked out.
●● If there's a single checkout step that isn't self or none, that repository is checked out instead of self.
●● If there are multiple checkout steps, each named repository is checked out to a folder named after
the repository, unless a different path is specified in the checkout step. To check out the current repository
along with the others, include checkout: self as one of the checkout steps.

Repository resource - How to do it?


If your repository type requires a service connection or other extended resources field, you must use a
repository resource.
Even if your repository type doesn't require a service connection, you may use a repository resource.
For example, you have a repository resource defined already for templates in a different repository.
In the following example, three repositories are declared as repository resources. The repositories and the
current self-repository containing the pipeline YAML are checked out.
resources:
  repositories:
  - repository: MyGitHubRepo # The name used to reference this repository in the checkout step.
    type: github
    endpoint: MyGitHubServiceConnection
    name: MyGitHubOrgOrUser/MyGitHubRepo
  - repository: MyBitBucketRepo
    type: bitbucket
    endpoint: MyBitBucketServiceConnection
    name: MyBitBucketOrgOrUser/MyBitBucketRepo
  - repository: MyAzureReposGitRepository
    type: git
    name: MyProject/MyAzureReposGitRepo

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self
- checkout: MyGitHubRepo
- checkout: MyBitBucketRepo
- checkout: MyAzureReposGitRepository
- script: dir $(Build.SourcesDirectory)

If the self-repository is named CurrentRepo, the script command produces the following output:
CurrentRepo MyAzureReposGitRepo MyBitBucketRepo MyGitHubRepo.
In this example, the repositories' names are used for the folders because no path is specified in the
checkout step.

Inline - How to do it?


If your repository doesn't require a service connection, you can declare it inline in your checkout step.
steps:
- checkout: git://MyProject/MyRepo # Azure Repos Git repository in the same organization

The default branch is checked out unless you choose a specific ref.
If you're using inline syntax, choose the ref by appending @ref. For example:

- checkout: git://MyProject/MyRepo@features/tools # checks out the features/tools branch
- checkout: git://MyProject/MyRepo@refs/heads/features/tools # also checks out the features/tools branch
- checkout: git://MyProject/MyRepo@refs/tags/MyTag # checks out the commit referenced by MyTag

GitHub Repository
Azure Pipelines can automatically build and validate every pull request and commit to your GitHub
repository.
When creating your new pipeline, you can select a GitHub repository and then a YAML file in that reposi-
tory (self repository). By default, this is the repository that your pipeline builds.
Azure Pipelines must be granted access to your repositories to trigger their builds and fetch their code
during builds.
There are three authentication types for granting Azure Pipelines access to your GitHub repositories while
creating a pipeline.
●● GitHub App.
●● OAuth.
●● Personal access token (PAT).
You can create a continuous integration (CI) trigger to run a pipeline whenever you push an update to the
specified branches or push selected tags.
YAML pipelines are configured by default with a CI trigger on all branches.
trigger:
- main
- releases/*

You can configure complex triggers that use exclude or batch.


# specific branches build
trigger:
  branches:
    include:
    - master
    - releases/*
    exclude:
    - releases/old*

Also, it's possible to configure pull request (PR) triggers to run whenever a pull request is opened with
one of the specified target branches or when updates are made to such a pull request.
You can specify the target branches when validating your pull requests.

To validate pull requests that target main and releases/* and start a new run the first time a new pull
request is created, and after every update made to the pull request:
pr:
- main
- releases/*

You can specify the full name of the branch or a wildcard.

For more information and guidance about GitHub integration, see:

●● Build GitHub repositories52.

Summary
This module detailed Azure Pipelines anatomy and structure, templates, YAML resources, and how to use
multiple repositories in your pipeline. Also, it explored communication to deploy using Azure Pipelines to
target servers.
You learned how to describe the benefits and usage of:
●● Describe advanced Azure Pipelines anatomy and structure.
●● Detail templates and YAML resources.
●● Implement and use multiple repositories.
●● Explore communication to deploy using Azure Pipelines.

Learn more
●● Azure Pipelines New User Guide - Key concepts - Azure Pipelines | Microsoft Docs53.
●● Azure Pipelines YAML pipeline editor guide - Azure Pipelines | Microsoft Docs54.
●● Check out multiple repositories in your pipeline - Azure Pipelines | Microsoft Docs55.
●● Azure Pipelines Agents - Azure Pipelines | Microsoft Docs56.

52 https://docs.microsoft.com/azure/devops/pipelines/repos/github
53 https://docs.microsoft.com/azure/devops/pipelines/get-started/key-pipelines-concepts
54 https://docs.microsoft.com/azure/devops/pipelines/get-started/yaml-pipeline-editor
55 https://docs.microsoft.com/azure/devops/pipelines/repos/multi-repo-checkout
56 https://docs.microsoft.com/azure/devops/pipelines/agents/agents

Introduction to GitHub Actions


Introduction
GitHub Actions are the primary mechanism for automation within GitHub.
They can be used for a wide variety of purposes, but one of the most common is to implement Continuous
Integration.
In this module, you'll learn what GitHub Actions are, the Actions flow, and its elements. You'll understand
what events are, explore jobs and runners, and learn how to read console output from actions.

Learning objectives
After completing this module, students and professionals can:
●● Explain GitHub Actions and workflows.
●● Create and work with GitHub Actions and Workflows.
●● Describe Events, Jobs, and Runners.
●● Examine the output and release management for actions.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

What are actions?


Actions are the mechanism used to provide workflow automation within the GitHub environment.
They're often used to build continuous integration (CI) and continuous deployment (CD) solutions.
However, they can be used for a wide variety of tasks:
●● Automated testing.
●● Automatically responding to new issues and mentions.
●● Triggering code reviews.
●● Handling pull requests.
●● Branch management.
They're defined in YAML and stored within GitHub repositories.
Actions are executed on “runners,” either hosted by GitHub or self-hosted.
Contributed actions can be found in the GitHub Marketplace. See Marketplace Actions57.

57 https://github.com/marketplace

Explore Actions flow

GitHub tracks events that occur. Events can trigger the start of workflows.
Workflows can also start on cron-based schedules and can be triggered by events outside of GitHub.
They can be manually triggered.
Workflows are the unit of automation. They contain Jobs.
Jobs use Actions to get work done.

Understand Workflows
Workflows define the automation required. A workflow details the events that should trigger it and the
jobs that should run when it's triggered.
A job defines the location in which the actions will run, like which runner to use.
Workflows are written in YAML and live within a GitHub repository in the .github/workflows directory.
Example workflow:
# .github/workflows/build.yml
name: Node Build.

on: [push]

jobs:
  mainbuild:
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        node-version: [12.x]
        os: [windows-latest]
    steps:
    - uses: actions/checkout@v1
    - name: Run node.js on latest Windows.
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - name: Install NPM and build.
      run: |
        npm ci
        npm run build

You can find a set of starter workflows here: Starter Workflows58.


You can see the allowable syntax for workflows here: Workflow syntax for GitHub Actions59.

Describe standard workflow syntax elements


Workflows include several standard syntax elements.
●● Name: is the name of the workflow. It's optional but is highly recommended. It appears in several
places within the GitHub UI.
●● On: is the event or list of events that will trigger the workflow.
●● Jobs: is the list of jobs to be executed. Workflows can contain one or more jobs.
●● Runs-on: tells Actions which runner to use.
●● Steps: It's the list of steps for the job. Steps within a job execute on the same runner.
●● Uses: tells Actions which predefined action needs to be retrieved. For example, you might have an
action that installs node.js.
●● Run: tells the job to execute a command on the runner. For example, you might execute an NPM
command.
You can see the allowable syntax for workflows here: Workflow syntax for GitHub Actions60.

Explore Events
Events are implemented by the on clause in a workflow definition.
There are several types of events that can trigger workflows.

58 https://github.com/actions/starter-workflows
59 https://docs.github.com/actions/learn-github-actions/workflow-syntax-for-github-actions
60 https://docs.github.com/actions/learn-github-actions/workflow-syntax-for-github-actions

Scheduled events
With this type of trigger, a cron schedule needs to be provided.
on:
  schedule:
  - cron: '0 8-17 * * 1-5'

Cron schedules are based on five values:


●● Minute (0 - 59)
●● Hour (0 - 23)
●● Day of the month (1 - 31)
●● Month (1 - 12)
●● Day of the week (0 - 6)
Aliases for the months are JAN-DEC and for days of the week are SUN-SAT.
A wildcard (*) means any value. (Because * is a special character in YAML, the cron string must be quoted.)
So, in the example above, the schedule would be 8 AM - 5 PM Monday to Friday.

Code events
Code events trigger most workflows. They occur when an event of interest happens in the repository.
on:
  pull_request

The above event would fire when a pull request occurs.


on: [push, pull_request]

The above event would fire when either a push or a pull request occurs.
on:
  pull_request:
    branches:
    - develop

This event shows how to be specific about the part of the repository that is relevant.
In this case, the workflow will fire when a pull request targets the develop branch.

Manual events
There's a unique event used to trigger workflow runs manually: the workflow_dispatch event.
Your workflow must be in the default branch for the repository.
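For example, a minimal sketch of a manually triggered workflow; the input name and values are illustrative:

on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment for the run'
        required: true
        default: 'staging'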

Webhook events
Workflows can be executed when a GitHub webhook is called.
on:
  gollum

This event would fire when someone updates (or first creates) a Wiki page.

External events
Events can use repository_dispatch. That allows events to fire from external systems.
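As a minimal sketch, the workflow below would fire when an external system sends an event with the (illustrative) type deploy-request to the repository's dispatches endpoint of the GitHub REST API:

on:
  repository_dispatch:
    types: [deploy-request]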
For more information on events, see Events that trigger workflows61.

Explore Jobs
Workflows contain one or more jobs. A job is a set of steps that will be run in order on a runner.
Steps within a job execute on the same runner and share the same filesystem.
The logs produced by jobs are searchable, and artifacts produced can be saved.

Jobs with dependencies


By default, if a workflow contains multiple jobs, they run in parallel.
jobs:
  startup:
    runs-on: ubuntu-latest
    steps:
    - run: ./setup_server_configuration.sh
  build:
    steps:
    - run: ./build_new_server.sh

Sometimes you might need one job to wait for another job to complete.
You can do that by defining dependencies between the jobs.
jobs:
  startup:
    runs-on: ubuntu-latest
    steps:
    - run: ./setup_server_configuration.sh
  build:
    needs: startup
    steps:
    - run: ./build_new_server.sh

Note: If the startup job in the example above fails, the build job won't execute.
For more information on job dependencies, see the section Creating Dependent Jobs at Managing
complex workflows62.

61 https://docs.github.com/actions/learn-github-actions/events-that-trigger-workflows

Explore Runners
When you execute jobs, the steps execute on a Runner.
The steps can be the execution of a shell script or the execution of a predefined Action.
GitHub provides several hosted runners to avoid you needing to spin up your infrastructure to run
actions.
Currently, the maximum duration of a job is 6 hours, and of a workflow run is 72 hours.
For JavaScript code, you have implementations of node.js on:
●● Windows
●● macOS
●● Linux
If you need to use other languages, a Docker container can be used. Currently, Docker container support is Linux-based only.
These options allow you to write in whatever language you prefer.
JavaScript actions are faster (no container needs to be used) and offer a more versatile runtime.
The GitHub UI is also better for working with JavaScript actions.
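For example, a minimal sketch of a job that runs across the hosted runners for all three operating systems listed above (the npm test command assumes a Node.js project):

jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
    - uses: actions/checkout@v2
    - run: npm test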

Self-hosted runners
If you need different configurations to the ones provided, you can create a self-hosted runner.
GitHub has published the source code for self-hosted runners as open-source, and you can find it here:
https://github.com/actions/runner.
It allows you to customize the runner completely. However, you then need to maintain (patch, upgrade)
the runner system.
Self-hosted runners can be added at different levels within an enterprise:
●● Repository-level (single repository).

62 https://docs.github.com/actions/learn-github-actions/managing-complex-workflows

●● Organizational-level (multiple repositories in an organization).


●● Enterprise-level (multiple organizations across an enterprise).

GitHub strongly recommends that you don't use self-hosted runners in public repos.
Doing it would be a significant security risk, as you would allow someone (potentially) to run code on
your runner within your network.
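Once a self-hosted runner is registered, a job can target it through the self-hosted label, optionally combined with further labels. A minimal sketch (the build script is hypothetical):

jobs:
  build:
    runs-on: [self-hosted, linux, x64]
    steps:
    - uses: actions/checkout@v2
    - run: ./build.sh   # hypothetical build script present on the self-hosted runner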
For more information on self-hosted runners, see: About self-hosted runners63.

Examine release and test an action


Actions will often produce console output. You don't need to connect directly to the runners to retrieve
that output.
The console output from actions is available directly from within the GitHub UI.
To see the output, select Actions on the top repository menu to view the list of executed workflows.
Next, click the name of the job to see the output of the steps.

Console output can help debug. If it isn't sufficient, you can also enable more logging. See: Enabling
debug logging64.

Release Management for Actions


While you might be happy to retrieve the latest version of an action, there are many situations where
you might want a specific version.
You can request a specific release of an action in several ways:

Tags
Tags allow you to specify the precise versions that you want to work with.

63 https://docs.github.com/actions/hosting-your-own-runners/about-self-hosted-runners
64 https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging

steps:
- uses: actions/install-timer@v2.0.1

SHA-based hashes
You can specify a requested SHA-based hash for an action. It ensures that the action hasn't changed.
However, the downside is that you won't receive updates to the action automatically either.
steps:
- uses: actions/install-timer@327239021f7cc39fe7327647b213799853a9eb98

Branches
A common way to request actions is to refer to the branch you want to work with. You'll then get the
latest version from that branch. That means you'll benefit from updates, but it also increases the chance
of breaking changes.

steps:
- uses: actions/install-timer@develop

Test an Action
GitHub offers several learning tools for actions.
GitHub Actions: hello-world65
You'll see a basic example of how to:
●● Organize and identify workflow files.
●● Add executable scripts.
●● Create workflow and action blocks.
●● Trigger workflows.
●● Discover workflow logs.

Summary
In this module, you learned what GitHub Actions, action flow, and its elements are. Also, understood what
events are, explored jobs and runners, and how to read console output from actions.
You learned how to describe the benefits and usage of:
●● Explain GitHub Actions and workflows.
●● Create and work with GitHub Actions and Workflows.
●● Describe Events, Jobs, and Runners.
●● Examine the output and release management for actions.

65 https://lab.github.com/githubtraining/github-actions:-hello-world

Learn more
●● Quickstart for GitHub Actions66.
●● Workflow syntax for GitHub Actions - GitHub Docs67.
●● Events that trigger workflows - GitHub Docs68.

66 https://docs.github.com/actions/quickstart
67 https://docs.github.com/actions/learn-github-actions/workflow-syntax-for-github-actions
68 https://docs.github.com/actions/learn-github-actions/events-that-trigger-workflows

Learn continuous integration with GitHub Actions
Introduction
This module details continuous integration using GitHub Actions and describes environment variables,
artifacts, best practices, and how to secure your pipeline using encrypted variables and secrets.

Learning objectives
After completing this module, students and professionals can:
●● Implement Continuous Integration with GitHub Actions.
●● Use environment variables.
●● Share artifacts between jobs and use Git tags.
●● Create and manage secrets.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Describe continuous integration with actions


Here's an example of a basic continuous integration workflow created by using actions:
name: dotnet Build

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x]
    steps:
    - uses: actions/checkout@main
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: '3.1.x'
    - run: dotnet build awesomeproject

●● On: Specifies what will occur when code is pushed.
●● Jobs: There's a single job called build.
●● Strategy: It's being used to specify the Node.js version.
●● Steps: Are doing a checkout of the code and setting up dotnet.
●● Run: Is building the code.

Examine environment variables


When using Actions to create CI or CD workflows, you'll typically need to pass variable values to the
actions. It's done by using Environment Variables.

Built-in environment variables


GitHub provides a series of built-in environment variables. They all have a GITHUB_ prefix.
Note: Using that prefix for your own variables will result in an error.
Examples of built-in environment variables are:
GITHUB_WORKFLOW is the name of the workflow.
GITHUB_ACTION is the unique identifier for the action.
GITHUB_REPOSITORY is the name of the repository (but also includes the name of the owner in owner/
repo format)
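As a minimal sketch, a step can read these built-in variables like any other environment variables; on a Linux runner:

steps:
- name: Show built-in variables
  run: |
    echo "Workflow: $GITHUB_WORKFLOW"
    echo "Repository: $GITHUB_REPOSITORY"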

Using variables in workflows


Variables are set in the YAML workflow files. They're passed to the actions that are in the step.
jobs:
  verify-connection:
    steps:
    - name: Verify Connection to SQL Server
      run: node testconnection.js
      env:
        PROJECT_SERVER: PH202323V
        PROJECT_DATABASE: HAMaster

For more information on environment variables, including a list of built-in environment variables, see
Environment variables69.

Share artifacts between jobs


When using Actions to create CI or CD workflows, you'll often need to pass artifacts created by one job to
another.
The most common ways to do it are by using the upload-artifact and download-artifact actions.

69 https://docs.github.com/actions/learn-github-actions/environment-variables

Upload-artifact
This action can upload one or more files from your workflow to be shared between jobs.
You can upload a specific file:

- uses: actions/upload-artifact
  with:
    name: harness-build-log
    path: bin/output/logs/harness.log

You can upload an entire folder:

- uses: actions/upload-artifact
  with:
    name: harness-build-logs
    path: bin/output/logs/

You can use wildcards:

- uses: actions/upload-artifact
  with:
    name: harness-build-logs
    path: bin/output/logs/harness[ab]?/*

You can specify multiple paths:

- uses: actions/upload-artifact
  with:
    name: harness-build-logs
    path: |
      bin/output/logs/harness.log
      bin/output/logs/harnessbuild.txt

For more information on this action, see upload-artifact.70

70 https://github.com/actions/upload-artifact

Download-artifact
There's a corresponding action for downloading (or retrieving) artifacts.

- uses: actions/download-artifact
  with:
    name: harness-build-log

If no path is specified, it's downloaded to the current directory.


For more information on this action, see download-artifact.71
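A minimal sketch wiring the two actions together, where a build job uploads its output and a dependent job retrieves it (the script, artifact name, and paths are illustrative; note that in practice a version ref such as @v2 is required on the uses lines):

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v2
    - run: ./build.sh   # hypothetical script that writes to bin/output/
    - uses: actions/upload-artifact@v2
      with:
        name: app-binaries
        path: bin/output/
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
    - uses: actions/download-artifact@v2
      with:
        name: app-binaries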

Artifact retention
A default retention period can be set for the repository, organization, or enterprise.
You can set a custom retention period when uploading, but it can't exceed the defaults for the repository,
organization, or enterprise.

- uses: actions/upload-artifact
  with:
    name: harness-build-log
    path: bin/output/logs/harness.log
    retention-days: 12

Deleting artifacts
You can delete artifacts directly in the GitHub UI.
For details, you can see: Removing workflow artifacts72.

Examine Workflow badges


Badges can be used to show the status of a workflow within a repository.
They show if a workflow is currently passing or failing. While they can appear in several locations, they
typically get added to the README.md file for the repository.
Badges are added by using URLs. The URLs are formed as follows:
https://github.com/AAAAA/RRRRR/workflows/WWWWW/badge.svg
Where:
●● AAAAA is the account name.
●● RRRRR is the repository name.
●● WWWWW is the workflow name.

They usually indicate the status of the default branch but can be branch-specific. You do that by adding a URL
query parameter:

71 https://github.com/actions/download-artifact
72 https://docs.github.com/actions/managing-workflow-runs/removing-workflow-artifacts

?branch=BBBBB
where:
●● BBBBB is the branch name.
For more information, see: Adding a workflow status badge73.

Describe best practices for creating actions


It's important to follow best practices when creating actions:
●● Create chainable actions. Don't create large monolithic actions. Instead, create smaller functional
actions that can be chained together.

●● Version your actions like other code. Others might take dependencies on various versions of your
actions. Allow them to specify versions.
●● Provide a latest label. If others are happy to use the latest version of your action, make sure you
provide a latest label that they can specify to get it.
●● Add appropriate documentation. As with other code, documentation helps others use your actions
and can help avoid surprises about how they function.
●● Add detailed action.yml metadata. At the root of your action, you'll have an action.yml file. Make sure
it has been populated with author, icon, any expected inputs, and outputs (a minimal sketch follows this list).
●● Consider contributing to the marketplace. It's easier for us all to work with actions when we all
contribute to the marketplace. Help to avoid people needing to relearn the same issues endlessly.
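As a minimal sketch, an action.yml for a hypothetical action could look like this (all names and values are illustrative):

name: 'Install Timer'
author: 'Contoso'
description: 'Installs the timer tool on the runner'
branding:
  icon: 'clock'
  color: 'blue'
inputs:
  version:
    description: 'Version of the tool to install'
    required: false
    default: 'latest'
outputs:
  install-path:
    description: 'Directory the tool was installed to'
runs:
  using: 'node16'
  main: 'dist/index.js'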

Mark releases with Git tags


Releases are software iterations that can be packaged for release.
In Git, releases are based on Git tags. These tags mark a point in the history of the repository. Tags are
commonly assigned as releases are created.

Often these tags will contain version numbers, but they can have other values.
Tags can then be viewed in the history of a repository.

73 https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/adding-a-workflow-status-badge

For more information on tags and releases, see: About releases74.
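Workflows can also react to tags. A minimal sketch that runs only when a version-style tag is pushed (the tag pattern is illustrative):

on:
  push:
    tags:
    - 'v*'   # fires for tags such as v1.0.0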

Create encrypted secrets


Actions often need to use secrets within pipelines, such as passwords or keys.
In GitHub Actions, these are called Secrets.

Secrets
Secrets are similar to environment variables but encrypted. They can be created at two levels:
●● Repository
●● Organization
If secrets are created at the organization level, access policies can limit the repositories that can use them.

Creating secrets for a repository


To create secrets for a repository, you must be the owner of the repository. From the repository Settings,
choose Secrets, then New Secret.

74 https://docs.github.com/repositories/releasing-projects-on-github/about-releases

For more information on creating secrets, see Encrypted secrets75.

Use secrets in a workflow


Secrets aren't passed automatically to the runners when workflows are executed.
Instead, when you include an action that requires access to a secret, you use the secrets context to
provide it.
steps:
- name: Test Database Connectivity
  with:
    db_username: ${{ secrets.DBUserName }}
    db_password: ${{ secrets.DBPassword }}

Command-line secrets
Secrets shouldn't be passed directly as command-line arguments as they may be visible to others.
Instead, treat them like environment variables:
steps:
- shell: pwsh
  env:
    DB_PASSWORD: ${{ secrets.DBPassword }}
  run: |
    db_test "$env:DB_PASSWORD"

75 https://docs.github.com/actions/security-guides/encrypted-secrets

Limitations
Workflows can use up to 100 secrets, and they're limited to 64 KB in size.
For more information on creating secrets, see Encrypted secrets76.

Summary
This module detailed continuous integration using GitHub Actions. It described environment variables,
artifacts, best practices, and how to secure your pipeline using encrypted variables and secrets.
You learned how to describe the benefits and usage of:
●● Implement Continuous Integration with GitHub Actions.
●● Use environment variables.
●● Share artifacts between jobs and use Git tags.
●● Create and manage secrets.

Learn more
●● About continuous integration - GitHub Docs77.
●● Environment variables - GitHub Docs78.
●● Storing workflow data as artifacts - GitHub Docs79.
●● Encrypted secrets - GitHub Docs80.

76 https://docs.github.com/actions/security-guides/encrypted-secrets
77 https://docs.github.com/actions/automating-builds-and-tests/about-continuous-integration
78 https://docs.github.com/actions/learn-github-actions/environment-variables
79 https://docs.github.com/actions/advanced-guides/storing-workflow-data-as-artifacts
80 https://docs.github.com/actions/security-guides/encrypted-secrets

Design a container build strategy


Introduction
Container
Containers are the third computing model, after bare metal and virtual machines – and containers are
here to stay.
Unlike a VM, which provides hardware virtualization, a container provides operating-system-level virtualization by abstracting the “user space,” not the entire operating system. The operating-system-level architecture is shared across containers. It's what makes containers so lightweight.
Docker gives you a simple platform for running apps in containers, old and new apps on Windows and
Linux, and that simplicity is a powerful enabler for all aspects of modern IT.
Containers aren't only faster and easier to use than VMs; they also make far more efficient use of computing hardware.
As most modern software developers attest, containers have provided engineering teams with dramatically more flexibility for running cloud-native applications on physical and virtual infrastructure.
Containers package up the application services and make them portable across different computing
environments for dev/test and production use.
With containers, it's easy to ramp application instances to match spikes in demand quickly. And because
containers draw on resources of the host OS, they're much lighter weight than virtual machines. It means
containers make highly efficient use of the underlying server infrastructure.
Though the container runtime APIs are well suited to managing individual containers, they're woefully inadequate for managing applications that might comprise hundreds of containers spread across multiple hosts.
Containers need to be managed and connected to the outside world for scheduling, load balancing, and
distribution. It's where a container orchestration tool like Azure Kubernetes Services (AKS) comes into its
own.
An open-source system for deploying, scaling, and managing containerized applications, AKS handles the work of scheduling containers onto a compute cluster and manages the workloads to ensure they run as the user intended. Instead of bolting on operations as an afterthought, AKS brings software development and operations together by design. By using declarative, infrastructure-agnostic constructs to describe how applications are composed, how they interact, and how they're managed, AKS enables an order-of-magnitude increase in the operability of modern software systems.
This module helps you plan a container build strategy, explains containers and their structure, and introduces Docker and related services.

What other benefits do containers offer?


Containers are portable. A container will run wherever Docker is supported.
Containers allow you to have a consistent development environment. For example, a SQL Server 2019 CU2 container that one developer is working with will be identical to the one another developer uses.
Containers can be lightweight. A container may be only tens of megabytes in size, but a virtual machine with its entire operating system may be several gigabytes in size. Because of that, a single server can host far more containers than virtual machines.
Containers can be efficient: fast to deploy, fast to boot, fast to patch, quick to update.

Learning objectives
After completing this module, students and professionals can:
●● Design a container strategy.
●● Work with Docker Containers.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Examine structure of containers


If you're a programmer or techie, you've at least heard of Docker: a helpful tool for packing, shipping,
and running applications within “containers.”
With all the attention it's getting these days, it would be hard not to, from developers and system admins
alike.
There's a difference between containers and Docker. A container is a thing that runs a little program
package, while Docker is the container runtime and orchestrator.

What are containers, and why do you need them?


Containers are a solution to the problem of how to get the software to run reliably when moved from
one computing environment to another.
It could be from a developer's laptop to a test environment, from a staging environment into production,
and perhaps from a physical machine in a data center to a virtual machine in a private or public cloud.
Problems arise when the supporting software environment isn't identical.
For example, say you'll develop using Python 3, but when it gets deployed to production, it will run on
Python 2.7. It's likely to cause several issues.
It's not limited to the software environment; you're likely to come across issues in production if there are
differences in the networking stack between the two environments.

How do containers solve this problem?


A container consists of an entire runtime environment: an application, plus all its dependencies, libraries
and other binaries, and configuration files needed to run it, bundled into one package.
By containerizing the application platform and its dependencies, differences in OS distributions and
underlying infrastructure are abstracted.
260     

What's the difference between containers and virtualization?
Containers and VMs are similar in their goals: to isolate an application and its dependencies into a
self-contained unit that can run anywhere.
Also, containers and VMs remove the need for physical hardware, allowing for more efficient use of
computing resources, both in energy consumption and cost-effectiveness.
The main difference between containers and VMs is in their architectural approach. Let's take a closer
look.

Virtual Machines
A VM is essentially an emulation of a real computer that executes programs like a real computer. VMs run
on top of a physical machine using a “hypervisor.”
As you can see in the diagram, VMs package up the virtual hardware, a kernel (OS), and user space for
each new VM.

Container
Unlike a VM, which provides hardware virtualization, a container provides operating-system-level virtualization by abstracting the “user space.”
This diagram shows that containers package up just the user space, not the kernel or virtual hardware like a VM does. Each container gets its isolated user space to allow multiple containers to run on a single host machine. We can see that all the operating system-level architecture is being shared across containers. The only parts that are created from scratch are the bins and libs. It's what makes containers so lightweight.

Work with Docker containers


Container Lifecycle
The standard steps when working with containers are:
Docker build - You create an image by executing a Dockerfile.
Docker pull - You retrieve the image, likely from a container registry.
Docker run - You execute the container. An instance is created of the image.
You can often execute docker run without needing to do a docker pull first.
In that case, Docker will pull the image and then run it. Next time, it won't need to pull it again.

Understand Dockerfile core concepts


Dockerfiles are text files that contain the commands needed by docker build to assemble an image.
Here's an example of a basic Dockerfile:

FROM ubuntu
LABEL maintainer="johndoe@contoso.com"
ADD appsetup /
RUN /bin/bash -c 'source $HOME/.bashrc; \
echo $HOME'
CMD ["echo", "Hello World from within the container"]

The first line refers to the parent image based on which this new image will be based.
Generally, all images will be based on another existing image. In this case, the Ubuntu image would be
retrieved from either a local cache or from DockerHub.
An image that doesn't have a parent is called a base image. In that rare case, the FROM line can be
omitted, or FROM scratch can be used instead.
The second line indicates the email address of the person who maintains this file. Previously, there was a
MAINTAINER command, but that has been deprecated and replaced by a label.
The third line adds a file to the root folder of the image. It can also add an executable.
The fourth and fifth lines are part of a RUN command. Note the use of the backslash to continue the
fourth line onto the fifth line for readability. It's equivalent to having written:
RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME'

The RUN command is run when the docker build creates the image. It's used to configure items within
the image.
By comparison, the last line represents a command that will be executed when a new container is created
from the image; it's run after container creation.
For more information, you can see:
Dockerfile reference81

81 https://docs.docker.com/engine/reference/builder/

Examine multi-stage dockerfiles

What are multi-stage Dockerfiles? Multi-stage builds give the benefits of the builder pattern without the hassle of maintaining three separate files.
Let us look at a multi-stage Dockerfile.

FROM mcr.microsoft.com/dotnet/core/aspnet:3.1 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build
WORKDIR /src
COPY ["WebApplication1.csproj", ""]
RUN dotnet restore "./WebApplication1.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "WebApplication1.csproj" -c Release -o /app/build

FROM build AS publish
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app/publish

FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

At first, it simply looks like several dockerfiles stitched together. Multi-stage Dockerfiles can be layered or
inherited.
When you look closer, there are a couple of key things to realize.
Notice the third stage.
FROM build AS publish
build isn't an image pulled from a registry. It's the image we defined in stage 2, where we named the
result of our build (SDK) stage “build.” Docker build will create a named image we can later
reference.
We can also copy the output from one image to another. It's the real power to compile our code with one
base SDK image (mcr.microsoft.com/dotnet/core/sdk:3.1) while creating a production image
based on an optimized runtime image (mcr.microsoft.com/dotnet/core/aspnet:3.1). Notice
the line.
COPY --from=publish /app/publish .
It takes the /app/publish directory from the published image and copies it to the working directory of the
production image.

Breakdown of stages
The first stage provides the base of our optimized runtime image. Notice it derives from mcr.microsoft.com/dotnet/core/aspnet:3.1.
We would specify extra production configurations here, such as registry configurations or the installation (MSIExec) of other components. These are the environment configurations you would otherwise hand off to your ops folks to prepare the VM.
The second stage is our build environment: mcr.microsoft.com/dotnet/core/sdk:3.1. This
includes everything we need to compile our code. From here, we have compiled binaries we can publish
or test (more on testing in a moment).
The third stage derives from our build stage. It takes the compiled output and “publishes” them in .NET
terms.
Publish means taking all the output required to deploy your “app/publish/service/component” and
placing it in a single directory. It would include your compiled binaries, graphics (images), JavaScript, and
so on.
The fourth stage takes the published output and places it in the optimized image we defined in the first
stage.

Why is publish separate from the build?


You'll likely want to run unit tests to verify your compiled code, or to verify that the aggregate of the
compiled code from multiple developers being merged continues to function as expected.
You could place the following stage between build and publish to run unit tests.
FROM build AS test
WORKDIR /src/Web.test
RUN dotnet test

If your tests fail, the build stops and won't continue.

Why is base first?


You could argue it's simply the logical flow. We first define the base runtime image. Get the compiled
output ready, and place it in the base image.
However, it's more practical. While debugging your applications under Visual Studio Container Tools, VS
will debug your code directly in the base image.
When you hit F5, Visual Studio will compile the code on your dev machine. The first stage then volume
mounts the output to the built runtime image.
You can test any configurations you have made to your production image, such as registry configurations
or otherwise.
When the docker build --target base is executed, docker starts processing the dockerfile from
the beginning through the stage (target) defined.
Since the base is the first stage, we take the shortest path, making the F5 experience as fast as possible.
If the base were after compilation (build), you would have to wait for all the earlier stages to complete.
One of the perf optimizations we make with VS Container Tools is to take advantage of the Visual Studio
compilations on your dev machine.

Examine considerations for multiple stage builds


Adopt container modularity
Try to avoid creating overly complex container images that couple together several applications.
Instead, use multiple containers and try to keep each container to a single purpose.
The website and the database for a web application should likely be in separate containers.
There are always exceptions to any rule but breaking up application components into separate containers
increases the chances of reusing containers.
It also makes it more likely that you could scale the application.
For example, in the web application mentioned, you might want to add replicas of the website container
but not for the database container.
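As a sketch of this modularity, a Docker Compose file can define the website and the database as separate containers; the image names and settings here are placeholders:

# docker-compose.yml
services:
  web:
    image: contoso/webapp:latest   # placeholder image
    ports:
    - "8080:80"
    depends_on:
    - db
  db:
    image: postgres:14
    environment:
      POSTGRES_PASSWORD: example   # placeholder; use a secret in practice

With this split, a command such as docker compose up --scale web=3 can add website replicas without touching the database container.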

Avoid unnecessary packages


To help minimize image sizes, it's also essential to avoid including packages that you suspect might be
needed but aren't yet sure if they're required.
Only include them when they're required.

Choose an appropriate base


While optimizing the contents of your Dockerfiles is essential, it's also crucial to choose the appropriate
parent (base) image. Start with an image that only contains packages that are required.

Avoid including application data


While application data can be stored in the container, doing so will make your images larger.
It would be best to consider using docker volume support to maintain the isolation of your application
and its data. Volumes are persistent storage mechanisms that exist outside the lifespan of a container.
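A minimal Compose sketch of a named volume that keeps database files outside the container lifespan (the names are placeholders):

services:
  db:
    image: postgres:14
    volumes:
    - dbdata:/var/lib/postgresql/data   # data survives container removal
volumes:
  dbdata: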
For more information, see Use multiple-stage builds82.

Explore Azure container-related services


Azure provides a wide range of services that help you work with containers.
Here are the essential services that are involved:
Azure Container Instances (ACI)83
Running your workloads in Azure Container Instances (ACI) allows you to focus on creating your applications rather than provisioning and managing the infrastructure that will run them.
ACIs are simple and fast to deploy, and when you're using them, you gain the security of hypervisor
isolation for each container group. It ensures that your containers aren't sharing an operating system
kernel with other containers.
Azure Kubernetes Service (AKS)84
Kubernetes has quickly become the de-facto standard for container orchestration. This service lets you
quickly deploy and manage Kubernetes, to scale and run applications while maintaining overall solid
security.
This service started life as Azure Container Services (ACS) and supported Docker Swarm and Mesos/
Mesosphere DC/OS at release to manage orchestrations. These original ACS workloads are still supported
in Azure, but Kubernetes support was added.
It quickly became so popular that Microsoft changed the acronym for Azure Container Services to AKS
and later changed the name of the service to Azure Kubernetes Service (also AKS).
Azure Container Registry (ACR)85
This service lets you store and manage container images in a central registry. It provides you with a
Docker private registry as a first-class Azure resource.

82 https://docs.docker.com/develop/develop-images/multistage-build/
83 https://azure.microsoft.com/services/container-instances/
84 https://azure.microsoft.com/services/kubernetes-service/
85 https://azure.microsoft.com/services/container-registry/

All container deployments, including DC/OS, Docker Swarm, and Kubernetes, are supported. The registry
is integrated with other Azure services such as the App Service, Batch, Service Fabric, and others.
Importantly, it allows your DevOps team to manage the configuration of apps without being tied to the
configuration of the target-hosting environment.
Azure Container Apps86
Azure Container Apps allows you to build and deploy modern apps and microservices using serverless
containers. It deploys containerized apps without managing complex infrastructure.
You can write code using your preferred programming language or framework and build microservices
with full support for Distributed Application Runtime (Dapr)87. Scale dynamically based on HTTP traffic
or events powered by Kubernetes Event-Driven Autoscaling (KEDA)88.
Azure App Service89
Azure Web Apps provides a managed service for both Windows and Linux-based web applications, with the ability to deploy and run containerized applications for both platforms. It provides autoscaling and load balancing options and is easy to integrate with Azure DevOps.

Summary
This module helped you plan a container build strategy, explained containers and their structure, introduced Docker, microservices, Azure Container Registry, and related services.
You learned how to describe the benefits and usage of:
●● Design a container strategy.
●● Work with Docker Containers.
●● Create an Azure Container Registry.
●● Explain Docker microservices and containers.

Learn more
●● Container Jobs in Azure Pipelines and TFS - Azure Pipelines | Microsoft Docs90.
●● Build an image - Azure Pipelines | Microsoft Docs91.
●● Service Containers - Azure Pipelines & TFS | Microsoft Docs92.
●● Quickstart - Create registry in portal - Azure Container Registry | Microsoft Docs93.
●● What are Microservices? - Azure DevOps | Microsoft Docs94.
●● CI/CD for microservices - Azure Architecture Center | Microsoft Docs95.

86 https://azure.microsoft.com/services/container-apps/
87 https://dapr.io/
88 https://keda.sh/
89 https://azure.microsoft.com/services/app-service/
90 https://docs.microsoft.com/azure/devops/pipelines/process/container-phases
91 https://docs.microsoft.com/azure/devops/pipelines/ecosystems/containers/build-image
92 https://docs.microsoft.com/azure/devops/pipelines/process/service-containers
93 https://docs.microsoft.com/azure/container-registry/container-registry-get-started-portal
94 https://docs.microsoft.com/devops/deliver/what-are-microservices
95 https://docs.microsoft.com/azure/architecture/microservices/ci-cd

Labs
Lab 04: Configuring agent pools and under-
standing pipeline styles
Lab overview
YAML-based pipelines allow you to fully implement CI/CD as code, in which pipeline definitions reside in
the same repository as the code that is part of your Azure DevOps project. YAML-based pipelines support
a wide range of features that are part of the classic pipelines, such as pull requests, code reviews, history,
branching, and templates.
Regardless of the choice of the pipeline style, to build your code or deploy your solution by using Azure
Pipelines, you need an agent. An agent hosts compute resources that runs one job at a time. Jobs can be
run directly on the host machine of the agent or in a container. You have the option to run your jobs using
Microsoft-hosted agents, which are managed for you, or to implement a self-hosted agent that you set
up and manage on your own.
In this lab, you will step through the process of converting a classic pipeline into a YAML-based one and
running it first by using a Microsoft-hosted agent and then performing the equivalent task by using a
self-hosted agent.

Objectives
After you complete this lab, you will be able to:
●● implement YAML-based pipelines
●● implement self-hosted agents

Lab duration
●● Estimated time: 90 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions96

Lab 05: Deploying Docker Containers to Azure App Service web apps
Lab overview
In this lab, you will learn how to use an Azure DevOps CI/CD pipeline to build a custom Docker image,
push it to Azure Container Registry, and deploy it as a container to Azure App Service.

96 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Objectives
After you complete this lab, you will be able to:
●● Build a custom Docker image by using a Microsoft-hosted Linux agent
●● Push an image to Azure Container Registry
●● Deploy a Docker image as a container to Azure App Service by using Azure DevOps

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions97

Lab 06: Enabling continuous integration with Azure Pipelines
Lab overview
In this lab, you will learn how to configure continuous integration (CI) and continuous deployment (CD)
for your applications using Build and Release in Azure Pipelines. This scriptable CI/CD system is both
web-based and cross-platform, while also providing a modern interface for visualizing sophisticated
workflows. Although we won’t demonstrate all the cross-platform possibilities in this lab, it is important
to point out that you can also build for iOS, Android, Java (using Ant, Maven, or Gradle) and Linux.

Objectives
After you complete this lab, you will be able to:
●● Create a basic build pipeline from a template
●● Track and review a build
●● Invoke a continuous integration build

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions98

97 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
98 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
    269

Lab 07: Implementing GitHub Actions by using DevOps Starter
Lab overview
In this lab, you will learn how to implement a GitHub Action workflow that deploys an Azure web app by
using DevOps Starter.

Objectives
After you complete this lab, you will be able to:
●● Implement a GitHub Action workflow by using DevOps Starter
●● Explain the basic characteristics of GitHub Action workflows

Lab duration
●● Estimated time: 30 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions99

Lab 08: Integrating external source control with Azure Pipelines
Lab overview
With the introduction of Azure DevOps, Microsoft is offering developers a new continuous integration/continuous delivery (CI/CD) service called Azure Pipelines that enables you to continuously build, test, and deploy to any platform or cloud. It has cloud-hosted agents for Linux, macOS, and Windows; powerful workflows with native container support; and flexible deployments to Kubernetes, VMs, and serverless environments.
Azure Pipelines provides unlimited CI/CD minutes and 10 parallel jobs to every GitHub open source
project for free. All open source projects run on the same infrastructure that our paying customers use.
That means you’ll have the same fast performance and high quality of service. Many of the top open
source projects are already using Azure Pipelines for CI/CD, such as Atom, CPython, Pipenv, Tox, Visual
Studio Code, and TypeScript-and the list is growing every day.
In this lab, you’ll see how easy it is to set up Azure Pipelines with your GitHub projects and how you can
start seeing benefits immediately.

99 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Objectives
After you complete this lab, you will be able to:
●● Install Azure Pipelines from the GitHub Marketplace
●● Integrate a GitHub project with an Azure DevOps pipeline
●● Track pull requests through the pipeline

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions100

100 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which of the following choices isn't a characteristic of a pipeline?
†† Reliable.
†† Repeatable.
†† Immutable.

Multiple choice
Which of the following is a correct statement about Azure Pipelines?
†† Azure Pipelines works exclusively with Azure and Microsoft Technology. For open-source, GitHub is
recommended.
†† Azure Pipelines work with any language or platform.
†† Azure Pipelines only deploys in cloud environments.

Multiple choice
Which of the following choices is a Continuous Integration (CI) benefit?
†† Automatically ensure you don't ship broken code.
†† Automatically deploy code to production.
†† Ensure deployment targets have the latest code.

Multiple choice
Which of the following commands do you use to retrieve an image from a container registry?
†† Docker run.
†† Docker pull.
†† Docker build.

Multiple choice
Which of the following choices isn't container-related Azure services?
†† Azure App Service.
†† Azure Container Instances.
†† Azure virtual machine scale sets.

Multiple choice
Which of the following choices creates an instance of the image?
†† Docker run.
†† Docker build.
†† Docker pull.

Multiple choice
Which of the following choices are the two types of agents being used in pipelines?
†† Client-Hosted and Server-Hosted.
†† Self-Hosted and Microsoft-Hosted.
†† Windows-Hosted and Linux-Hosted.

Multiple choice
Which of the following is a correct statement about Agent Pools?
†† Agent pools are scoped to the entire project and can be shared across pipelines.
†† Agent pools are scoped to the entire repository and can be shared across pipelines.
†† Agent pools are scoped to the entire organization and can be shared across projects.

Multiple choice
Which of the following choices is the role that can manage membership for all roles of the project agent
pool?
†† Administrator.
†† User.
†† Contributor.

Multiple choice
Which of the following choices isn't used to create and configure an Azure Pipeline?
†† XML.
†† YAML.
†† Visual Designer.

Multiple choice
Which of the following choices is a benefit of using the Visual Designer?
†† Every branch you use can modify the build policy by modifying the azure-pipelines.yml file.
†† The visual designer is in the same hub as the build results.
†† A change to the build process might cause a break or result in an unexpected outcome. Because the
change is in version control with the rest of your codebase, you can more easily identify the issue.

Multiple choice
Which of the following choices is a benefit of using YAML?
†† The pipeline is versioned with your code and follows the same branching structure.
†† The YAML representation of the pipelines makes it easier to get started.
†† The YAML is in the same hub as the build results.

Multiple choice
Which of the following choices isn't a pillar of Continuous Integration?
†† Version Control System.
†† Automated Build Process.
†† Automated Deploy Process.

Multiple choice
Which of the following choices describes the status of a build pipeline that will queue new build requests and
not start them?
†† Disabled.
†† Paused.
†† On Hold.

Multiple choice
Which of the following choices is where you change the build number format?
†† Build properties.
†† Library.
†† Project Settings.

Multiple choice
Which of the following choices could be a user capability?
†† Agent.ComputerName.
†† Agent.OS.
†† ContosoApplication.Path.

Multiple choice
Which of the following choices is the scope of where parallel jobs are defined?
†† Project scope.
†† Organization scope.
†† Pipeline scope.

Multiple choice
Which of the following choices of repository types does a YAML pipeline support?
†† Azure Repos TFVC.
†† Azure Repos Git.
†† Other Git (generic).

Multiple choice
Which of the following choices is the YAML property responsible for creating a dependency between jobs?
†† steps.
†† dependsOn.
†† condition.

Multiple choice
Which of the following choices is responsible for always starting the communication between Azure Pipelines and its agent?
†† Service Hook.
†† Azure Pipelines.
†† Agent.

Multiple choice
Which of the following choices describes a reason for installing an agent using interactive mode?
†† To run UI Tests.
†† To run Java pipelines.
†† To communicate with cloud environments from on-premises agents (self-hosted).

Multiple choice
Which of the following choices is a keyword to make one job wait for another job to complete?
†† on.
†† needs.
†† uses.

Multiple choice
Which of the following choices isn't a level that self-hosted runners can be added within an enterprise?
†† Enterprise-level (multiple organizations across an enterprise).
†† Organizational-level (multiple repositories in an organization).
†† Project-level.

Multiple choice
Which of the following choices is the location where you can find workflows?
†† .github/workflows.
†† .github/build/workflows.
†† .github/actions/workflows.

Multiple choice
Which of the following choices is where the database passwords that are needed in a CI pipeline should be
stored?
†† Repo.
†† action.yml.
†† Encrypted Secrets.

Multiple choice
In which of the following files is the metadata for an action held?
†† workflow.yml.
†† action.yml.
†† meta.yml.

Multiple choice
Which of the following choices is how the status of a workflow can be shown in a repository?
†† Using Badges.
†† Status Files.
†† Conversation Tab.

Answers
Multiple choice
Which of the following choices isn't a characteristic of a pipeline?
†† Reliable.
†† Repeatable.
■■ Immutable.
Explanation
The core pipeline idea is to create a repeatable, reliable, and incrementally improving process.
Multiple choice
Which of the following is a correct statement about Azure Pipelines?
†† Azure Pipelines works exclusively with Azure and Microsoft Technology. For open-source, GitHub is
recommended.
■■ Azure Pipelines work with any language or platform.
†† Azure Pipelines only deploys in cloud environments.
Explanation
Azure Pipelines work with any language or platform.
Multiple choice
Which of the following choices is a Continuous Integration (CI) benefit?
■■ Automatically ensure you don't ship broken code.
†† Automatically deploy code to production.
†† Ensure deployment targets have the latest code.
Explanation
Automatically ensure you don't ship broken code.
Multiple choice
Which of the following commands do you use to retrieve an image from a container registry?
†† Docker run.
■■ Docker pull.
†† Docker build.
Explanation
It's docker pull. You retrieve the image, likely from a container registry.

Multiple choice
Which of the following choices isn't container-related Azure services?
†† Azure App Service.
†† Azure Container Instances.
■■ Azure virtual machine scale sets.
Explanation
Azure provides a wide range of services that help you to work with containers such as Azure App Service
and Azure Container Instances.
Multiple choice
Which of the following choices creates an instance of the image?
■■ Docker run.
†† Docker build.
†† Docker pull.
Explanation
It's docker run. You execute the container, and an instance of the image is created.
Multiple choice
Which of the following choices are the two types of agents being used in pipelines?
†† Client-Hosted and Server-Hosted.
■■ Self-Hosted and Microsoft-Hosted.
†† Windows-Hosted and Linux-Hosted.
Explanation
Self-hosted and Microsoft-hosted.
Multiple choice
Which of the following is a correct statement about Agent Pools?
†† Agent pools are scoped to the entire project and can be shared across pipelines.
†† Agent pools are scoped to the entire repository and can be shared across pipelines.
■■ Agent pools are scoped to the entire organization and can be shared across projects.
Explanation
Agent pools are scoped to the entire organization and can be shared across projects.
Multiple choice
Which of the following choices is the role that can manage membership for all roles of the project agent
pool?
■■ Administrator.
†† User.
†† Contributor.
Explanation
It's Administrator.

Multiple choice
Which of the following choices isn't used to create and configure an Azure Pipeline?
■■ XML.
†† YAML.
†† Visual Designer.
Explanation
Azure Pipelines support YAML and Visual Designer.
Multiple choice
Which of the following choices is a benefit of using the Visual Designer?
†† Every branch you use can modify the build policy by modifying the azure-pipelines.yml file.
■■ The visual designer is in the same hub as the build results.
†† A change to the build process might cause a break or result in an unexpected outcome. Because the
change is in version control with the rest of your codebase, you can more easily identify the issue.
Explanation
The visual designer is in the same hub as the build results. This location makes it easier to switch back and
forth and make changes.
Multiple choice
Which of the following choices is a benefit of using YAML?
■■ The pipeline is versioned with your code and follows the same branching structure.
†† The YAML representation of the pipelines makes it easier to get started.
†† The YAML is in the same hub as the build results.
Explanation
The pipeline is versioned with your code and follows the same branching structure. You get validation of
your changes through code reviews in pull requests and branch build policies.
Multiple choice
Which of the following choices isn't a pillar of Continuous Integration?
†† Version Control System.
†† Automated Build Process.
■■ Automated Deploy Process.
Explanation
The four pillars of CI are a Version Control System, a Package Management System, a Continuous Integration
System, and an Automated Build Process.

Multiple choice
Which of the following choices describes the status of a build pipeline that will queue new build requests
and not start them?
†† Disabled.
■■ Paused.
†† On Hold.
Explanation
A build pipeline that is Paused will queue new build requests and not start them.
Multiple choice
Which of the following choices is where you change the build number format?
■■ Build properties.
†† Library.
†† Project Settings.
Explanation
It's in the build properties.
Multiple choice
Which of the following choices could be a user capability?
†† Agent.ComputerName.
†† Agent.OS.
■■ ContosoApplication.Path.
Explanation
Capabilities such as Agent.ComputerName and Agent.OS that are automatically discovered are referred to
as system capabilities. The ones that you define, such as ContosoApplication.Path, are called user capabilities.
Multiple choice
Which of the following choices is the scope of where parallel jobs are defined?
†† Project scope.
■■ Organization scope.
†† Pipeline scope.
Explanation
At the organization level, you can configure the number of parallel jobs that are made available.
Multiple choice
Which of the following choices of repository types does a YAML pipeline support?
†† Azure Repos TFVC.
■■ Azure Repos Git.
†† Other Git (generic).
Explanation
The supported repository types for YAML pipelines are Azure Repos Git, Bitbucket Cloud, GitHub, and
GitHub Enterprise Server.

Multiple choice
Which of the following choices is the YAML property responsible for creating a dependency between
jobs?
†† steps.
■■ dependsOn.
†† condition.
Explanation
You can define dependencies between jobs using dependsOn.
Multiple choice
Which of the following choices is responsible for always starting the communication between Azure
Pipelines and its agent?
†† Service Hook.
†† Azure Pipelines.
■■ Agent.
Explanation
The Agent always initiates the communication with Azure Pipelines.
Multiple choice
Which of the following choices describes a reason for installing an agent using interactive mode?
■■ To run UI Tests.
†† To run Java pipelines.
†† To communicate with cloud environments from on-premises agents (self-hosted).
Explanation
In some cases, you might need to run the agent interactively for production use - such as to run UI tests.
Multiple choice
Which of the following choices is a keyword to make one job wait for another job to complete?
†† on.
■■ needs.
†† uses.
Explanation
Jobs run in parallel by default, but you can define dependencies between the jobs using needs.

Multiple choice
Which of the following choices isn't a level that self-hosted runners can be added within an enterprise?
†† Enterprise-level (multiple organizations across an enterprise).
†† Organizational-level (multiple repositories in an organization).
■■ Project-level.
Explanation
Self-hosted runners can be added at different levels within an enterprise: Organizational-level (multiple
repositories in an organization), Enterprise-level (multiple organizations across an enterprise), and
Repository-level (single repository).
Multiple choice
Which of the following choices is the location where you can find workflows?
■■ .github/workflows.
†† .github/build/workflows.
†† .github/actions/workflows.
Explanation
Workflows are written in YAML and live within a GitHub repository, at the location .github/workflows.
Multiple choice
Which of the following choices is where the database passwords that are needed in a CI pipeline should
be stored?
†† Repo.
†† action.yml.
■■ Encrypted Secrets.
Explanation
Secrets are similar to environment variables but encrypted.
Multiple choice
In which of the following files is the metadata for an action held?
†† workflow.yml.
■■ action.yml.
†† meta.yml.
Explanation
Add detailed action.yml metadata. At the root of your action, you will have an action.yml file. Make sure it
has been populated with author, icon, any expected inputs, and outputs.

Multiple choice
Which of the following choices describes how the status of a workflow can be shown in a repository?
■■ Using Badges.
†† Status Files.
†† Conversation Tab.
Explanation
Badges can be used to show the status of a workflow within a repository.
Module 4 Design and implement a release
strategy

Introduction to continuous delivery


Introduction
This module introduces continuous delivery concepts and their implementation in a traditional IT devel-
opment cycle.

Learning objectives
After completing this module, students and professionals can:
●● Explain continuous delivery (CD).
●● Implement continuous delivery in your development cycle.
●● Understand releases and deployment.
●● Identify project opportunities to apply CD.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore traditional IT development cycle


A few years ago, IT was a facilitating department. IT was there to support the business users, and because experience had shown that developed software was of poor quality by default, software changes were a risk.
The resolution for this “quality problem” was to keep changes under strict control.
The department responsible for controlling these changes became the IT(-Pro) department.

In the past and today, the IT(-Pro) department is responsible for the stability of the systems, while the
development department is responsible for creating new value.
This split brings many companies into a problematic situation. Development departments are motivated
to deliver value as soon as possible to keep their customers happy.
On the other hand, IT is motivated to change nothing because change is a risk, and they're responsible
for eliminating the risks and keeping everything stable. And what do we get out of it? Long release cycles.

Silo-based development
Long release cycles, numerous tests, code freezes, night and weekend work, and many people ensure that
everything works.
But the more we change at once, the more risk we introduce, and we're back at the beginning, on many occasions resulting in yet another document or process that should be followed.
It's what I call silo-based development.

If we look at this picture of a traditional, silo-based value stream, we see Bugs and Unplanned work,
necessary updates or support work, and planned (value-adding) work, all added to the teams' backlog.
Everything is planned, and the first “gate” can be opened. Everything drops to the next phase. All the
work, and so all the value, moves in piles to the next stage.
It moves from the Plan phase to a Realize phase where all the work is developed, tested, and document-
ed, and from here, it moves to the release phase.
All the value is released at the same time. As a result, the release takes a long time.

What is continuous delivery?


Continuous delivery (CD) is a set of processes, tools, and techniques for rapid, reliable, and continuous
software development and delivery.
It means that continuous delivery goes beyond the release of software through a pipeline. The pipeline is
a crucial component and the focus of this course, but continuous delivery is more.
To explain a bit more, look at the eight principles of continuous delivery:
●● The process for releasing/deploying software must be repeatable and reliable.
●● Automate everything!
●● If something is difficult or painful, do it more often.
●● Keep everything in source control.
●● Done means “released.”
●● Build quality in!
●● Everybody has responsibility for the release process.
●● Improve continuously.
If we want to fulfill these eight principles, we can see that an automated pipeline doesn't suffice.
To deploy more often, we need to reconsider our:
●● Software architecture (monoliths are hard to deploy).
●● Testing strategy (manual tests don't scale well).
●● Organization (separated business and IT departments don't work smoothly), and so forth.
This module will focus on the release management part of continuous delivery but be aware of the other
changes you might find.
Find the next bottleneck in your process, solve it, learn from it, and repeat it forever.
Continuous delivery is an enabler for DevOps. DevOps focuses on organizations and bringing people
together to build and run their software products.
Continuous delivery is a practice. It's being able to deliver software on-demand. Not necessarily 1000
times a day. Deploying every code change to production is what we call continuous deployment.
To do it, we need automation and a strategy.
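To make the idea concrete, the following minimal sketch (stage and step names are illustrative) shows what automated delivery can look like in Azure Pipelines YAML: every change to the main branch is built, and only a validated build flows on to a deployment stage.

```yaml
trigger:
- main                       # every change to main starts the pipeline

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    steps:
    - script: echo Build and run the automated tests
- stage: DeployToTest
  dependsOn: Build           # the feedback loop: deploy only a green build
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploy the validated artifact to the test environment
```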

Move to continuous delivery


But times have changed, and we need to deal with a new normal. Our customers demand working
software, and they wanted it yesterday.
If we can't deliver, they go to a competitor. And competition is fierce. With the Internet, we always have
global competition.
We have competitors on our stack that deliver a best-of-breed tool for one aspect of the software we
built.
We need to deliver fast, and the product we make must be good. And we should do this with our soft-
ware production being cheap and quality being high.
To achieve this, we need something like Continuous Delivery.

We need to move towards a situation where the value isn't piled up and released all at once but flows
through a pipeline.
Just like in the picture, a piece of work is a marble. And only one part of the work can flow through the
pipeline at once.
So, work must be prioritized in the right way. As you can see, the pipeline has green and red outlets.
These are the feedback loops or quality gates that we want to have in place. A feedback loop can be
different things:
●● A unit test to validate the code.
●● An automated build to validate the sources.
●● An automated test on a Test environment.
●● Some monitor on a server.
●● Usage instrumentation in the code.
If one of the feedback loops is red, the marble can't pass the outlet, and it will end up in the Monitor and
Learn tray.
It's where the learning happens. The problem is analyzed and solved so that the next time a marble
passes the outlet, it's green.
Every single piece of work flows through the pipeline until it ends up in the tray of value.
The more that is automated, the faster value flows through the pipeline.
Companies want to move toward Continuous Delivery:
●● They see the value.
●● They hear their customers.
●● Companies wish to deliver their products as fast as possible.
●● Quality should be higher.
●● The move to production should be faster.
●● Technical Debt should be lower.
A great way to improve your software development practices was the introduction of Agile and Scrum.
Last year around 80% of all companies claimed that they adopted Scrum as a software development
practice.
Using Scrum, many teams can produce a working piece of software after a sprint of maybe two or three
weeks.
But creating working software isn't the same as delivering working software.
The result is that all “done” increments are waiting to be delivered in the next release, which is coming in
a few months.
We see now that Agile teams within a non-agile company are stuck in a delivery funnel.
The bottleneck is no longer the production of working software, but the problem has become the
delivery of working software.
The finished product is waiting to be delivered to the customers to get business value, but it doesn't
happen.
Continuous Delivery needs to solve this problem.

Understand releases and deployments


One of the essential steps in moving software more quickly to production is changing how we deliver
software to production.
In our industry, it's common to have teams that need to work overtime on the weekend to install and release new software.
It's caused by the fact that we have two parts of the release process bolted together. As soon as we
deploy new software, we also release new features to the end users.
The best way to safely move your software to production while maintaining stability is by separating
these two concerns. So, we separate deployments from our release.
It can also be phrased as separating your functional release from your technical release (deployment).

What is a release, and what is a deployment?


When we talk about releases and deployments, we see that commonly used tools deal with the terminology differently than we did in the previous section.
To understand the concepts and the technical implementation in many tools, you need to know how tool
vendors define the difference between a release and a deployment.
A release is a package or container containing a versioned set of artifacts specified in a release pipeline in
your CI/CD process.

It includes a snapshot of all the information required to carry out all the tasks and actions in the release
pipeline, such as:
●● The stages or environments.
●● The tasks for each one.
●● The values of task parameters and variables.
●● The release policies such as triggers, approvers, and release queuing options.
There can be multiple releases from one release pipeline (or release process).
Deployment is the action of running the tasks for one stage, which results in a tested and deployed
application and other activities specified for that stage.
Starting a release starts each deployment based on the settings and policies defined in the original
release pipeline.
There can be multiple deployments of each release, even for one stage.
When a release deployment fails for a stage, you can redeploy the same release to that stage.
See also Releases in Azure Pipelines1.

Separating technical releases from functional releases


When we want to separate the technical and functional release, we need to start with our software itself.
The software needs to be built so that new functionality or features can be hidden from end users while
it's running.
A common way to do this is the use of Feature Toggles. The simplest form of a Feature Toggle is an if
statement that either executes or doesn't execute a certain piece of code.
By making the if-statement configurable, you can implement the Feature Toggle. We'll talk about Feature
Toggles in Module 3 in more detail.
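As a minimal sketch (file and flag names are illustrative), the toggle state can live in a configuration file that the application reads at runtime, so a feature can be deployed dark and released later by flipping a value:

```yaml
# feature-flags.yml (illustrative): shipped with the deployment and
# changed without redeploying, to release or hide functionality.
featureFlags:
  newCheckoutFlow: false    # deployed but hidden; set to true to release
  improvedSearch: true      # already released to end users
```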
See also: Explore how to progressively expose your features in production for some or all users2.
Once we've prepared our software, we must ensure that the installation won't expose any new or
changed functionality to the end user.
When the software has been deployed, we need to watch how the system behaves. Does it act the same
as it did in the past?
If the system is stable and operates the same as before, we can decide to flip a switch. It might reveal one or more features to the end user or change a set of routines that are part of the system.
The whole idea of separating deployment from release (exposing features with a switch) is compelling
and something we want to incorporate in our Continuous Delivery practice.
It helps us with more stable releases and better ways to roll back when we run into issues when we have
a new feature that produces problems.
We switch it off again and then create a hotfix. By separating deployment from the release of a feature, you create the opportunity to deploy at any time of the day since the new software won't affect the system that already works.

1 https://docs.microsoft.com/azure/devops/pipelines/release/releases
2 https://docs.microsoft.com/azure/devops/articles/phase-features-with-feature-flags

Topics you might want to discuss are:
●● Does your organization need Continuous Delivery?
●● Do you use Agile/Scrum?
●● Is everybody involved or only the Dev departments?
●● Can you deploy your application multiple times per day? Why or why not?
●● What is the main bottleneck for Continuous Delivery in your organization?
  ●● The Organization
  ●● Application Architecture
  ●● Skills
  ●● Tooling
  ●● Tests
  ●● Other things?

Understand release process versus release


Before diving into high-quality release pipelines, we need to consider the difference between release and
release processes. Or, when you talk about tooling, a release pipeline.
We start with defining a release process or release pipeline. The release pipeline contains all the steps you
walk through when you move your artifact from one of the artifact sources discussed earlier through the
stages or environments.
The stage can be a development stage, a test stage, a production stage, or just a stage where a specific
user can access the application.
Part of your pipeline is the people who approve the release or the deployment to a specific stage. The pipeline also includes the triggers or schedules on which the releases execute, and the release gates: the automatic approvals of the process.
The release itself is something different. The release is an instance of the release pipeline. You can
compare it with object instantiation.
In Object Orientation, a class contains the blueprint or definition of an object. But the object itself is an
instance of that blueprint.

Summary
This module introduces continuous delivery concepts and their implementation in a traditional IT devel-
opment cycle.
You learned how to:
●● Explain continuous delivery (CD).
●● Implement continuous delivery in your development cycle.
●● Understand releases and deployment.
●● Identify project opportunities to apply CD.

Learn more
●● What is Continuous Delivery? - Azure DevOps | Microsoft Docs3.
●● Continuous Delivery vs. Continuous Deployment | Microsoft Azure4.
●● Release pipelines - Azure Pipelines | Microsoft Docs5.
●● Define a Classic release pipeline - Azure Pipelines | Microsoft Docs6.

3 https://docs.microsoft.com/devops/deliver/what-is-continuous-delivery
4 https://azure.microsoft.com/overview/continuous-delivery-vs-continuous-deployment
5 https://docs.microsoft.com/azure/devops/pipelines/release/?view=azure-devops
6 https://docs.microsoft.com/azure/devops/pipelines/release/define-multistage-release-process

Create a release pipeline


Introduction
This module describes Azure Pipelines capabilities, build and release tasks, and multi-configuration and
multi-agent differences.

Learning objectives
After completing this module, students and professionals can:
●● Explain the terminology used in Azure DevOps and other Release Management Tooling.
●● Describe what a Build and Release task is, what it can do, and some available deployment tasks.
●● Implement release jobs.
●● Differentiate between multi-agent and multi-configuration release jobs.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Describe Azure DevOps release pipeline capabilities

Azure DevOps has extended support for pipelines as code (also called YAML pipelines) for continuous
deployment and started introducing various release management capabilities into pipelines as code.
The existing UI-based release management solution in Azure DevOps is referred to as classic release.
You'll find a list of capabilities and availability in YAML pipelines vs. classic build and release pipelines in
the following table.

| Feature | YAML | Classic Build | Classic Release | Notes |
|---------|------|---------------|-----------------|-------|
| Agents | Yes | Yes | Yes | Specifies a required resource on which the pipeline runs. |
| Approvals | Yes | No | Yes | Defines a set of validations required before completing a deployment stage. |
| Artifacts | Yes | Yes | Yes | Supports publishing or consuming different package types. |
| Caching | Yes | Yes | No | Reduces build time by allowing outputs or downloaded dependencies from one run to be reused in later runs. In Preview, available with Azure Pipelines only. |
| Conditions | Yes | Yes | Yes | Specifies conditions to be met before running a job. |
| Container jobs | Yes | No | No | Specifies jobs to run in a container. |
| Demands | Yes | Yes | Yes | Ensures pipeline requirements are met before running a pipeline stage. Requires self-hosted agents. |
| Dependencies | Yes | Yes | Yes | Specifies a requirement that must be met to run the next job or stage. |
| Deployment groups | Yes | No | Yes | Defines a logical set of deployment target machines. |
| Deployment group jobs | No | No | Yes | Specifies a job to release to a deployment group. |
| Deployment jobs | Yes | No | No | Defines the deployment steps. Requires Multi-stage pipelines experience. |
| Environment | Yes | No | No | Represents a collection of resources targeted for deployment. Available with Azure Pipelines only. |
| Gates | No | No | Yes | Supports automatic collection and evaluation of external health signals before completing a release stage. Available with Azure Pipelines only. |
| Jobs | Yes | Yes | Yes | Defines the execution sequence of a set of steps. |
| Service connections | Yes | Yes | Yes | Enables a connection to a remote service that is required to execute tasks in a job. |
| Service containers | Yes | No | No | Enables you to manage the lifecycle of a containerized service. |
| Stages | Yes | No | Yes | Organizes jobs within a pipeline. |
| Task groups | No | Yes | Yes | Encapsulates a sequence of tasks into a single reusable task. If using YAML, see templates. |
| Tasks | Yes | Yes | Yes | Defines the building blocks that make up a pipeline. |
| Templates | Yes | No | No | Defines reusable content, logic, and parameters. |
| Triggers | Yes | Yes | Yes | Defines the event that causes a pipeline to run. |
| Variables | Yes | Yes | Yes | Represents a value to be replaced by data to pass to the pipeline. |
| Variable groups | Yes | Yes | Yes | Use to store values that you want to control and make available across multiple pipelines. |

Explore release pipelines


A release pipeline takes artifacts and releases them through stages and finally into production.

Let us quickly walk through all the components step by step.


The first component in a release pipeline is an artifact:
●● Artifacts can come from different sources.
●● The most common source is a package from a build pipeline.
●● Another commonly seen artifact source is, for example, source control.
Furthermore, a release pipeline has a trigger: the mechanism that starts a new release.
A trigger can be:
●● A manual trigger, where people start to release by hand.
●● A scheduled trigger, where a release is triggered based on a specific time.
●● A continuous deployment trigger, where another event triggers a release. For example, a completed
build.
Another vital component of a release pipeline is stages, sometimes called environments. It's where the
artifact will be eventually installed. For example, the artifact contains the compiled website installed on
the webserver or somewhere in the cloud. You can have many stages (environments), and part of the
release strategy is finding out the appropriate combination of stages.
Another component of a release pipeline is approval.
In many cases, people want to sign off on a release before it's installed in the environment.

In more mature organizations, this manual approval process can be replaced by an automatic process
that checks the quality before the components move on to the next stage.
Finally, we have the tasks within the various stages. The tasks are the steps that need to be executed to
install, configure, and validate the installed artifact.
In this part of the module, we'll walk through all the components of the release pipeline in detail and talk
about what to consider for each element.
The components that make up the release pipeline or process are used to create a release. There's a
difference between a release and the release pipeline or process. The release pipeline is the blueprint
through which releases are done. We'll cover more of it when discussing the quality of releases and release processes.
See also Release pipelines7.

Explore artifact sources


What is an artifact? An artifact is a deployable component of your application. These components can
then be deployed to one or more environments.
In general, the idea about build and release pipelines and Continuous Delivery is to build once and
deploy many times.
It means that an artifact will be deployed to multiple environments. The artifact should be a stable
package if you want to achieve it.
The configuration is the only thing you want to change when deploying an artifact to a new environment.
The contents of the package should never change. It's what we call immutability8. We should be 100%
sure that the package that we build, the artifact, remains unchanged.
How do we get an artifact? There are different ways to create and retrieve artifacts, and not every method
is appropriate for every situation.

7 https://docs.microsoft.com/azure/devops/pipelines/release
8 https://docs.microsoft.com/azure/devops/artifacts/artifacts-key-concepts

The most common way to get an artifact within the release pipeline is to use a build artifact.
The build pipeline compiles, tests, and eventually produces an immutable package stored in a secure
place (storage account, database, and so on).
The release pipeline then uses a secure connection to this secured place to get the build artifact and do
extra actions to deploy it to an environment.
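As a minimal sketch (paths and names are illustrative), a build pipeline publishes its output as a versioned, immutable artifact that the release pipeline can later retrieve:

```yaml
steps:
- script: dotnet publish -c Release -o $(Build.ArtifactStagingDirectory)
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'drop'   # the immutable package the release pipeline consumes
```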
The significant advantage of using a build artifact is that the build produces a versioned artifact.
The artifact is linked to the build and gives us automatic traceability. We can always find the sources that
made this artifact. Another possible artifact source is version control.
We can directly link our version control to our release pipeline.
The release is related to a specific commit in our version control system. With that, we can also see which
version of a file or script is eventually installed. In this case, the version doesn't come from the build but
from version control.
A consideration for choosing a version control artifact instead of a build artifact can be that you only want
to deploy one specific file. If you don't need to run more actions before using this file in your release
pipeline, creating a versioned package (build artifact) containing only one file doesn't make sense.
Helper scripts that do actions to support the release process (clean up, rename, string actions) are
typically good candidates to get from version control.
Another possibility of an artifact source can be a network share containing a set of files. However, you
should be aware of the possible risk. The risk is that you aren't 100% sure that the package you're going
to deploy is the same package that was put on the network share. If other people can also access the
network share, the package might be compromised. Therefore, this option won't be sufficient to prove
integrity in a regulated environment (banks, insurance companies).
Finally, container registries are a rising star regarding artifact sources. Container registries are versioned repositories where container artifacts are stored. Pushing a versioned container to the container registry and consuming that same version within the release pipeline has more or less the same advantages as using a build artifact stored in a safe location.

Choose the appropriate artifact source


When you use a release pipeline to deploy your packages to production, you need traceability.
That means you want to know where the package that you're deploying originates from.
It's essential to understand that the sources that you built and checked into your version control are
precisely the same as the sources that you're going to deploy to the various environments that are going
to be used by your customers.
Primarily when you work in a regulated environment like a bank or an insurance company, auditors ask
you to provide traceability to sources that you deployed to prove the integrity of the package.
Another crucial aspect of your artifacts is auditability. You want to know who changed that line of code
and who triggered the build that produces the artifact deployed.
A proper mechanism to make sure you can provide the correct traceability and auditability is using
immutable packages.
It isn't something that you can buy, but something that you need to implement yourself.
By using a build pipeline that produces a package stored in a location that humans can't access, you ensure the sources are unchanged throughout the whole release process. It's an essential concept of release pipelines.
You identify an immutable package by giving it a version so that you can refer to it at a later stage.
Versioning strategy is a complex concept and isn't in the scope of this module.
Still, having a unique identification number or label attached to the package and ensuring that this
number or label cannot be changed or modified afterward ensures traceability and auditability from
source code to production.
Read more about Semantic Versioning9.
Choosing the right artifact source is tightly related to the requirements you have about traceability and
auditability.
If you need an immutable package (containing multiple files) that can never be changed and can always be traced, a
build artifact is the best choice.
If it's one file, you can directly link to source control.
You can also point at a disk or network share, but it implies some risk-concerning auditability and
immutability. Can you ensure the package never changed?
See also Release artifacts and artifact sources10.

Exercise - select an artifact source


In this exercise, you'll investigate Artifact Sources.

9 https://semver.org
10 https://docs.microsoft.com/azure/devops/pipelines/release/artifacts

Steps
Let's look at how to work with one or more artifact sources in the release pipeline.
1. In the Azure DevOps environment, open the Parts Unlimited project, then from the main menu,
select Pipelines, then select Releases.

2. In the main screen area, select New pipeline.

3. In the Select a template pane, see the available templates, but then select the Empty job option at
the top. It's because we're going to focus on selecting an artifact source.
4. In the Artifacts section, select +Add an artifact.
5. See the available options in the Add an artifact pane, and select the option to see more artifact
types, so that you can see all the available artifact types:

While we're in this section, let's briefly look at the available options.
6. Select Build and see the parameters required. This option is used to retrieve artifacts from an Azure
DevOps Build pipeline. Using it requires a project name and a build pipeline name. (Projects can have
multiple build pipelines). It's the option that we will use shortly.

7. Select Azure Repository and see the parameters required. It requires a project name and asks you to
select the source repository.

8. Select GitHub and see the parameters required. The Service is a connection to the GitHub repository.
It can be authorized by either OAuth or by using a GitHub personal access token. You also need to
select the source repository.

9. Select TFVC and see the parameters required. It also requires a project name and asks you to select
the source repository.

Note: A release pipeline can have more than one set of artifacts as input. A typical example is when you also
need to consume a package from a feed and your project source.
10. Select Azure Artifacts and see the parameters required. It requires you to identify the feed, package
type, and package.

11. Select GitHub Release and see the parameters required. It requires a service connection and the
source repository.

Note: We'll discuss service connections later in the module.


12. Select Azure Container Registry and see the parameters required. Again, it requires a secure service connection, and the Azure Resource Group details for where the container registry is located. It allows you to bring all your Docker containers directly into your release pipeline.

13. Select Docker Hub and see the parameters required. This option would be helpful if your containers
are stored in Docker Hub rather than in an Azure Container Registry. After choosing a secure service
connection, you need to select the namespace and the repository.

14. Finally, select Jenkins and see the parameters required. You don't need to get all your artifacts from
Azure. You can retrieve them from a Jenkins build. So, if you have a Jenkins Server in your infrastruc-
ture, you can use the build artifacts from there directly in your Azure Pipelines.

Configuring the build artifact


Let's return to adding our Build output as the artifact source.
15. Select the Build source type again. See that the Project should show the current project. From the
Source (build pipeline) drop-down list, select Parts Unlimited-ASP.NET-CI. Take note of the default values for the other options, and then select Add.

We've now added the artifacts that we'll need for later walkthroughs.

16. To save the work, select Save, then in the Save dialog box, select OK.

Examine considerations for deployment to stages

When you have a clear view of the different stages you'll deploy, you need to think about when you want
to deploy to these stages.
As we mentioned in the introduction, Continuous Delivery is about deploying multiple times a day and
can deploy on-demand.
When we define our cadence, questions that we should ask ourselves are:
●● Do we want to deploy our application?
●● Do we want to deploy multiple times a day?
●● Can we deploy to a stage? Is it used?
For example, a tester testing an application during the day might not want a new version of the app deployed during the test phase.
Another example: when a deployment incurs downtime, you don't want to deploy while users are using the application.
The frequency of deployment, or cadence, differs from stage to stage.
A typical scenario we often see is continuous deployment during the development stage.
Every new change ends up there once it's completed and built.
Deployment to the next phase doesn't always occur multiple times a day; it might happen only at night.
When designing your release strategy, choose your triggers carefully and consider the required release
cadence.
Some things we need to take into consideration are:
●● What is your target environment?
●● Does one team use it, or do multiple teams use it?

●● If a single team uses it, you can deploy it frequently. Otherwise, it would be best if you were a bit
more careful.
●● Who are the users? Do they want a new version multiple times a day?
●● How long does it take to deploy?
●● Is there downtime? What happens to performance? Are users affected?

Exercise - set up stages


In this demonstration, you'll investigate Stages.

Steps
Let's now look at the other section in the release pipeline that we've created: Stages.
1. Click on Stage 1, and in the Stage properties pane, set Stage name to Development and close the
pane.

Note: Stages can be based on templates. For example, you might be deploying a web application using
node.js or Python. For this walkthrough, that won't matter because we're focusing on defining a strategy.
2. To add a second stage, click +Add in the Stages section and note the available options. You have a
choice to create a new stage or to clone an existing stage. Cloning a stage can help minimize the
number of parameters that need to be configured. But for now, click New stage.

3. When the Select a template pane appears, scroll down to see the available templates. For now, we
don't need any of these, so click Empty job at the top, then in the Stage properties pane, set Stage
name to Test, then close the pane.

4. Hover over the Test stage and notice that two icons appear below. These are the same options that
were available in the menu drop-down that we used before. Click the Clone icon to clone the stage to
a new stage.

5. Click on the Copy of Test stage, and in the stage properties pane, set Stage name to Production and
close the pane.

We've now defined a traditional deployment strategy. Each stage contains a set of tasks, and we'll look at
those tasks later in the course.
Note: The same artifact sources move through each of the stages.
The lightning bolt icon on each stage shows that we can set a trigger as a pre-deployment condition. The
person icon on both ends of a stage shows that we can have pre and post-deployment approvers.

Concurrent stages
You'll notice that now, we have all the stages one after each other in a sequence. It's also possible to have
concurrent stages. Let's see an example.
6. Click the Test stage, and on the stage properties pane, set Stage name to Test Team A and close the
pane.
7. Hover over the Test Team A stage and click the Clone icon that appears to create a new cloned stage.
8. Click the Copy of Test Team A stage, and on the stage properties pane, set Stage name to Test
Team B and close the pane.
9. Click the Pre-deployment conditions icon (that is, the lightning bolt) on Test Team B to open the
pre-deployment settings.

10. In the Pre-deployment conditions pane, the stage can be triggered in three different ways:

The stage can immediately follow Release. (That is how the Development stage is currently configured). It
can require manual triggering. Or, more commonly, it can follow another stage. Now, it's following Test
Team A, but that's not what we want.
11. From the Stages drop-down list, choose Development and uncheck Test Team A, then close the
pane.
We now have two concurrent Test stages.
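For comparison, the same fan-out can be expressed in a YAML pipeline with stage-level dependencies; a minimal sketch using the stage names from this walkthrough:

```yaml
stages:
- stage: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploy to Development
- stage: TestTeamA
  dependsOn: Development
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploy to Test Team A
- stage: TestTeamB
  dependsOn: Development              # runs concurrently with TestTeamA
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploy to Test Team B
- stage: Production
  dependsOn: [TestTeamA, TestTeamB]   # waits for both test stages
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploy to Production
```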

Stage versus Environment


You may have wondered why these items are called Stages and not Environments.

In the current configuration, we're using them for different environments. But it's not always the case.
Here's a deployment strategy based upon regions instead:

Azure Pipelines is configurable and supports a wide variety of deployment strategies. The name Stages
is a better fit than Environment even though the stages can be used for environments.
For now, let's give the pipeline a better name and save the work.
12. At the top of the screen, hover over the New release pipeline name and when a pencil appears, click
it to edit the name. Type Release to all environments as the name and hit enter or click elsewhere on
the screen.

13. For now, save the environment-based release pipeline you've created by clicking Save. Then, click OK
in the Save dialog box.

Explore build and release tasks


A build and release platform requires executing any number of repeatable actions during the build
process. Tasks are units of executable code used to do selected actions in a specified order.


Add steps to specify what you want to build, the tests you want to run, and all the other steps needed to complete the build process.
There are steps for building, testing, running utilities, packaging, and deploying.
If a task isn't available, you can find numerous community tasks in the marketplace. Jenkins, Azure DevOps, and Atlassian each have an extensive marketplace where other tasks can be found.
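As a minimal sketch (task inputs are illustrative), tasks appear in a YAML pipeline as steps, each referenced by name and version:

```yaml
steps:
- task: DotNetCoreCLI@2     # build task
  inputs:
    command: build
- task: DotNetCoreCLI@2     # test task
  inputs:
    command: test
- task: CopyFiles@2         # utility task
  inputs:
    Contents: '**/*.zip'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
```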

Links
For more information, see also:
●● Task types & usage11
●● Tasks for Azure12
●● Atlassian marketplace13
●● Jenkins Plugins14

Explore custom build and release tasks


Instead of using out-of-the-box tasks, a command line, or a shell script, you can also use your own custom build and release tasks.
By creating your own tasks, you make them available publicly or privately to everyone you share them with.
Creating your own task has significant advantages.
●● You get access to variables that are otherwise not accessible.
●● You can use and reuse a secure endpoint to a target server.
●● You can safely and efficiently distribute across your whole organization.
●● Users don't see implementation details.
For more information, see Add a build or release task15.

Explore release jobs


You can organize your build or release pipeline into jobs. Every build or deployment pipeline has at least
one job.
A job is a series of tasks that run sequentially on the same target. It can be a Windows server, a Linux
server, a container, or a deployment group.
A release job is executed by a build/release agent. This agent can only run one job at a time.
You specify a series of tasks that you want to run on the same agent during your job design.
When the build or release pipeline is triggered at runtime, each job is dispatched to its target as one or
more jobs.
A scenario that illustrates where jobs play an essential role is the following.
Assume that you built an application with a backend in .NET, a front end in Angular, and a native iOS mobile app. It might be developed in three different source control repositories, triggering three different builds and delivering three different artifacts.
The release pipeline brings the artifacts together and wants to deploy the backend, frontend, and Mobile
App all together as part of one release.
The deployment needs to take place on different agents.

11 https://docs.microsoft.com/azure/devops/pipelines/process/tasks
12 https://github.com/microsoft/azure-pipelines-tasks
13 https://marketplace.atlassian.com/addons/app/bamboo/trending
14 https://plugins.jenkins.io/
15 https://docs.microsoft.com/azure/devops/extend/develop/add-build-task

The iOS app needs to be built and distributed from a Mac, and the Angular app is hosted on Linux, so it's best deployed from a Linux machine.
The backend might be deployed from a Windows machine.
Because you want all three deployments to be part of one pipeline, you can define multiple Release Jobs,
which target the different agents, servers, or deployment groups.
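A minimal sketch (pool images and job names are illustrative) of one pipeline fanning its jobs out to agents on three platforms:

```yaml
jobs:
- job: DeployBackend
  pool:
    vmImage: windows-latest   # .NET backend deployed from a Windows agent
  steps:
  - script: echo Deploying the backend
- job: DeployFrontend
  pool:
    vmImage: ubuntu-latest    # Angular front end deployed from a Linux agent
  steps:
  - script: echo Deploying the front end
- job: DeployMobileApp
  pool:
    vmImage: macOS-latest     # iOS app built and distributed from a Mac
  steps:
  - script: echo Distributing the iOS app
```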
By default, jobs run on the host machine where the agent is installed.
It's convenient and typically well suited for projects just beginning to adopt continuous integration (CI).
Over time, you may find that you want more control over the stage where your tasks run.

For more information, see Jobs in Azure Pipelines16.

Summary
This module described Azure Pipelines capabilities, build and release tasks, and multi-configuration and
multi-agent differences.
You learned how to:
●● Explain the terminology used in Azure DevOps and other Release Management Tooling.
●● Describe what a Build and Release task is, what it can do, and some available deployment tasks.
●● Implement release jobs.
●● Differentiate between multi-agent and multi-configuration release jobs.

16 https://docs.microsoft.com/azure/devops/pipelines/process/phases

Learn more
●● Release pipelines - Azure Pipelines | Microsoft Docs17.
●● Build and Release Tasks - Azure Pipelines | Microsoft Docs18.
●● Jobs in Azure Pipelines and TFS - Azure Pipelines | Microsoft Docs19.
●● Configure and pay for parallel jobs - Azure DevOps | Microsoft Docs20.

17 https://docs.microsoft.com/azure/devops/pipelines/release
18 https://docs.microsoft.com/azure/devops/pipelines/process/tasks
19 https://docs.microsoft.com/azure/devops/pipelines/process/phases
20 https://docs.microsoft.com/azure/devops/pipelines/licensing/concurrent-jobs

Explore release recommendations


Introduction
This module explores the critical release strategy recommendations organizations must consider when
designing automated deployments.
It explains how to define the components of a release pipeline and artifact sources, create approvals, and configure release gates.

Learning objectives
After completing this module, students and professionals can:
●● Explain things to consider when designing your release strategy.
●● Define the components of a release pipeline and use artifact sources.
●● Create a release approval plan.
●● Implement release gates.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● For some exercises, you need to create an Azure DevOps Organization and a Team Project. If you
don't have it, see Create an organization - Azure DevOps21.
●● If you already have your organization created, use the Azure DevOps Demo Generator22 and create a
new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel free to create a
blank project. See Create a project - Azure DevOps23
●● Complete the previous module's walkthroughs.

Understand the delivery cadence and three types of triggers

Both the release and the stages make use of triggers. There are three types of triggers we recognize.

Continuous deployment trigger


You can set up this trigger on your release pipeline. Once you do that, your release pipeline will trigger
every time a build completes, and a new release will be created.

21 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
22 https://azuredevopsdemogenerator.azurewebsites.net/
23 https://docs.microsoft.com/azure/devops/organizations/projects/create-project

Scheduled triggers
This speaks for itself: it allows you to start a new release in a time-based manner, for example, every night at 3:00 AM or 12:00 PM. You can have one or multiple schedules per day, but the release will always run at these specific times.
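For comparison, in YAML pipelines a scheduled trigger is declared with a cron expression; a minimal sketch (times and branch are illustrative) that runs at 3:00 AM UTC, Monday through Friday:

```yaml
schedules:
- cron: "0 3 * * 1-5"    # minute hour day-of-month month day-of-week
  displayName: Nightly release
  branches:
    include:
    - main
  always: false          # skip the run if nothing has changed
```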

Manual trigger
With a manual trigger, a person or system triggers the release based on a specific event. When it's a
person, they probably use some UI to start a new release. When it's an automated process, some event will
likely occur. You can trigger the release from another system using the automation engine, which is
usually part of the release management tool.
For more information, see also:
●● Release and Stage triggers24

Exercise - select your delivery and deployment cadence

In this exercise, you'll investigate Delivery Cadence.

Steps
Let's now look at when our release pipeline is used to create deployments. Mainly, it will involve the use
of triggers.
When we refer to deployment, we refer to each stage. Each stage can have its own set of triggers that
determine when the deployment occurs.
1. Click the lightning bolt on the _Parts Unlimited-ASP.NET-CI artifact.

2. In the Continuous deployment trigger pane, click the Disabled option to enable continuous deploy-
ment. It will then say Enabled.

24 https://docs.microsoft.com/azure/devops/pipelines/release/triggers

Once it's selected, every time a build completes, deployment of the release pipeline will start.
Note: You can filter which branches affect it, so for example, you could choose the main branch or a
particular feature branch.

Scheduled deployments
You might not want to have a deployment start every time a build completes.
It might be disruptive to testers downstream if it was happening too often.
Instead, it might make sense to set up a deployment schedule.
3. Click on the Scheduled release trigger icon to open its settings.

4. In the Scheduled release trigger pane, click the Disabled option to enable the scheduled release. It
will then say Enabled, and extra options will appear.

You can see in the screenshot that a deployment using the release pipeline would now occur each
weekday at 3 AM.
It might be convenient when you, for example, share a stage with testers who work during the day.
You don't want to constantly deploy new versions to that stage while they're working. This setting would
create a clean, fresh environment for them at 3 AM each weekday.
Note: The default timezone is UTC. You can change it to suit your local timezone, as it might be easier to
work with when creating schedules.
5. For now, we don't need a scheduled deployment. Click the Enabled button again to disable the
scheduled release trigger and close the pane.

Pre-deployment triggers
6. Click the lightning bolt on the Development stage to open the pre-deployment conditions.

Note: Both artifact filters and schedules can be set at the pre-deployment for each stage rather than just
at the artifact configuration level.
Deployment to any stage doesn't happen automatically unless you have chosen to allow that.

Explore release approvals


As we've described in the introduction, Continuous Delivery is all about delivering on-demand.
But, as we discussed when comparing releases and deployments, the deployment is only the technical part of the Continuous Delivery process.
It's all about how you can technically install the software on an environment, but it doesn't say anything
about the process that needs to be in place for a release.

Release approvals don't control how you deliver; they control whether you want to deliver multiple times a day.
Manual approvals also fill a significant need. Organizations that start with Continuous Delivery often lack a certain amount of trust.
They don't dare to release without manual approval. After a while, when they find that the approval doesn't add value and the release always succeeds, the manual approval is often replaced by an automatic check.
Things to consider when you're setting up a release approval are:
●● What do we want to achieve with the approval? Is it an approval that we need for compliance reasons? For example, we need to adhere to the four-eyes principle to meet SOX compliance. Or is it an approval that we need to manage our dependencies? Or is it an approval that needs to be in place purely because we need a sign-off from an authority like Security Officers or Product Owners.
●● Who needs to approve? We need to know who needs to approve the release. Is it a product owner,
Security officer, or just someone that isn't the one that wrote the code? It's essential because the
approver is part of the process. They're the ones that can delay the process if not available. So be
aware of it.
●● When do you want to approve? Another essential thing to consider is when to approve. It has a direct relationship with what happens after approval. Can you continue without approval? Or is everything on hold until approval is given? By using scheduled deployments, you can separate approval from deployment.
Although manual approval is a great mechanism to control the release, it isn't always helpful.
On many occasions, the check can be done at an earlier stage.
For example, it's approving a change that has been made in Source Control.
Scheduled deployments have already solved the dependency issue.
You don't have to wait for a person in the middle of the night. But there's still a manual action involved.
If you want to eliminate manual activities but still want control, you start talking about automatic approv-
als or release gates.
●● Release approvals and gates overview25.
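For comparison, YAML pipelines don't declare approvers in the pipeline file; approvals are configured as checks on an environment, and a deployment job that targets that environment waits for them. A minimal sketch (names are illustrative):

```yaml
jobs:
- deployment: DeployWeb
  environment: production   # approvals and checks configured on this
                            # environment gate the deployment
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying to the production environment
```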

Exercise - set up manual approvals


In this exercise, you'll investigate Manual Approval.

Steps
Let's now look at when our release pipeline needs manual approval before deployment of a stage starts
or manual approval that the deployment is completed as expected.
While DevOps is all about automation, manual approvals are still helpful. There are many scenarios where
they're needed. For example, a product owner might want to sign off on a release before it moves to production.
Or the scrum team wants to make sure that no new software is deployed to the test environment before
someone signs off on it because they might need to find an appropriate time if it's constantly in use.
This can help to gain trust in the DevOps processes within the business.

25 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals

Even if the process is later automated, people might still want manual control until they become comfort-
able with the processes. Explicit manual approvals can be a great way to achieve that.
Let's try one.
1. Click the pre-deployment conditions icon for the Development stage to open the settings.

2. Click the Disabled button in the Pre-deployment approvals section to enable it.

3. In the Approvers list, find your name and select it. Then set the Timeout to 1 Day.

Note: Approvers is a list, not just a single value. If you add more than one person to the list, you can also
choose if they need to approve in sequence or if either or both approvals are required.
4. Take note of the approver policy options that are available:

It's common not to allow the user who requests a release or deployment to also approve it.

In this case, we're the only approver, so we'll leave that unchecked.
5. Close the Pre-deployment conditions pane and notice that a checkmark has appeared beside the
person in the icon.

Test the approval


Now it's time to see what happens when approval is required.
6. Click Save to save the work, and OK on the popup window.
7. Click Create release to start the release process.

8. In the Create a new release pane, see the available options, then click Create.

9. In the upper left of the screen, you can see that a release has been created.

10. At this point, an email should have been received, indicating that approval is required.

At this point, you could click the link in the email, but instead, we'll navigate within Azure DevOps to see
what's needed.
11. Click on the Release 1 Created link (or whatever number it is for you) in the area we looked at in Step
9. We're then taken to a screen that shows the status of the release.

You can see that a release has been manually triggered and that the Development stage is waiting for
approval. As an approver, you can now do that approval.
12. Hover over the Development stage and click the Approve icon that appears.

Note: Options to cancel the deployment or to view the logs are also provided at this point.
13. In the Development approvals window, add a comment and click Approve.

The deployment stage will then continue. Watch as each stage proceeds and succeeds.

Explore release gates


Release gates give you more control over the start and completion of the deployment pipeline.
They're often set up as pre-deployment and post-deployment conditions.
In many organizations, there are so-called dependency meetings.
It's a planning session where the release schedule of dependent components is discussed.
Think of downtime of a database server or an update of an API.
It takes much time and effort, and the only thing needed is a signal indicating whether the release can continue.
Instead of having this meeting, you can create a mechanism where people press a button on a form when
the release can't advance.

When the release starts, it checks the state of the gate by calling an API. If the “gate” is open, we can
continue. Otherwise, we'll stop the release.
By using scripts and APIs, you can create your own release gates instead of manual approval. Or at least extend your manual approval.
Other scenarios for automatic approvals include:
●● Incident and issues management. Ensure the required status for work items, incidents, and issues. For
example, ensure that deployment only occurs if no bugs exist.
●● Notify users such as legal approval departments, auditors, or IT managers about a deployment by
integrating with approval collaboration systems such as Microsoft Teams or Slack and waiting for the
approval to complete.
●● Quality validation. Query metrics from tests on the build artifacts such as pass rate or code coverage
and only deploy within required thresholds.
●● Security scan on artifacts. Ensure security scans such as anti-virus checking, code signing, and policy
checking for build artifacts have been completed. A gate might start the scan and wait for it to finish
or check for completion.
●● User experience relative to baseline. Using product telemetry, ensure the user experience hasn't
regressed from the baseline state. The experience level before the deployment could be considered a
baseline.
●● Change management. Wait for change management procedures in a system such as ServiceNow com-
plete before the deployment occurs.
●● Infrastructure health. Execute monitoring and validate the infrastructure against compliance rules after
deployment or wait for proper resource use and a positive security report.
In short, approvals and gates give you more control over the start and completion of the deployment
pipeline.
They can usually be set up as pre-deployment and post-deployment conditions, including waiting for
users to approve or reject deployments manually and checking with other automated systems until
specific requirements are verified.
Also, you can configure a manual intervention to pause the deployment pipeline and prompt users to
carry out manual tasks, then resume or reject the deployment.
To find out more about Release Approvals and Gates, check these documents.
●● Release approvals and gates overview26.
●● Release Gates27.

Use release gates to protect quality


A quality gate is the best way to enforce a quality policy in your organization. It's there to answer one
question: can I deliver my application to production or not?
A quality gate is located before a stage that is dependent on the outcome of a previous stage. A quality
gate was typically something that a QA department monitored in the past.
They had several documents or guidelines, and they verified if the software was of a good enough quality
to move on to the next stage.

26 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals
27 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/gates

When we think about Continuous Delivery, all manual processes are a potential bottleneck.
We need to reconsider the notion of quality gates and see how we can automate these checks as part of
our release pipeline.
By using automatic approval with a release gate, you can automate the approval and validate your
company's policy before moving on.
Many quality gates can be considered.
●● No new blocker issues.
●● Code coverage on new code greater than 80%.
●● No license violations.
●● No vulnerabilities in dependencies.
●● No further technical debt was introduced.
●● Is the performance unaffected after a new release?
●● Compliance checks:
  ●● Are there work items linked to the release?
  ●● Is the release started by someone other than the one who committed the code?
Defining quality gates improves the release process, and you should always consider adding them.

Exercise - set up a release gate


In this exercise walkthrough, you'll investigate Release Gates.

Steps
Let's now look at when our release pipeline needs to do automated checks for issues like code quality
before continuing with the deployments. That automated approval phase is achieved by using Release
Gates.
First, we need to make sure that Release Gates can execute work item queries.
1. On the Boards > Queries page, click All to see all the queries (not just favorites).
2. Click the ellipsis for Shared Queries and choose Security.
3. Add a user ProjectName Build Service (CompanyName) if they aren't already present, and choose Allow for Read permissions.
Now let's look at configuring a release gate.
1. Click the lightning icon on the Development stage to open the pre-deployment conditions settings.

2. In the Pre-deployment conditions pane, click the Disabled button beside Gates to enable them.

3. Click +Add to see the available types of gates, then click Query work items.

We'll use the Query work items gate to check if any outstanding bugs need to be dealt with. It does this
by running a work item query. This is an example of what is commonly called a Quality Gate.
4. Set Display name to No critical bugs allowed, and from the Query drop-down list, choose Critical
Bugs. Leave the Upper threshold set to zero because we don't want to allow any bugs at all.

5. Click the drop-down beside Evaluation options to see what can be configured. While 15 minutes is a
reasonable value in production, change The time between re-evaluation of gates to 5 minutes for
our testing.

The release gate doesn't just fail or pass a single time. It can keep evaluating the status of the gate. It
might fail the first time, but it might then pass after re-evaluation if the underlying issue has been
corrected.
6. Close the pane and click Save and OK to save the work.
7. Click Create release to start a new release, and in the Create a new release pane, click Create.

8. Click on the release number to see how the release is proceeding.

9. If it's waiting for approval, click Approve to allow it to continue, and in the Development pane, click
Approve.

After a short while, you should see the release continuing and then enter the phase where the gates
will process.

10. In the Development pane, click Gates to see the status of the release gates.

You'll notice that the gate failed the first time it was checked. It will be stuck in the processing gates stage
as there's a critical bug. Let's look at that bug and resolve it.
11. Close the pane and click Save, then OK to save the work.
12. In the main menu, click Boards, then click Queries.

13. In the Queries window, click All to see all the available queries.

14. From the list of queries, click Critical bugs.

You'll see that there's one critical bug that needs to be resolved.

15. In the properties pane for the bug, change the State to Done, then click Save.

16. Click Run query to re-execute the work item query.

There are now no critical bugs that will stop the release.
17. Return to the release by clicking Pipelines, then Releases in the main menu, then clicking the name
of the latest release.

18. When the release gate is checked next time, the release should continue and complete successfully.

Gates are a potent automated approval mechanism.

Clean up
To avoid excessive wait time in later walkthroughs, we'll disable the release gates.
19. In the main menu, click Pipelines, click Releases, and click Edit to open the release pipeline editor.
20. Click the Pre-deployment conditions icon (that is, the lightning bolt) on the Development task, and
in the Pre-deployment conditions pane, click the switch beside Gates to disable release gates.
21. Click Save, then click OK.

Summary
This module explored the critical release strategy recommendations that organizations must consider
when designing automated deployments.
It explained how to define components of a release pipeline and artifact sources, create approvals, and configure release gates.
You learned how to:
●● Explain things to consider when designing your release strategy.
●● Define the components of a release pipeline and use artifact sources.
●● Create a release approval plan.
●● Implement release gates.

Learn more
●● How Microsoft plans efficient workloads with DevOps - Azure DevOps | Microsoft Docs28.
●● Release engineering app development - Azure Architecture Center | Microsoft Docs29.

28 https://docs.microsoft.com/devops/plan/how-microsoft-plans-devops
29 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-app-dev

●● How Microsoft develops modern software with DevOps - Azure DevOps | Microsoft Docs30.
●● Control deployments by using approvals - Azure Pipelines | Microsoft Docs31.
●● Control deployments by using gates - Azure Pipelines | Microsoft Docs32.

30 https://docs.microsoft.com/devops/develop/how-microsoft-develops-devops
31 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals
32 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/gates

Provision and test environments


Introduction
This module details target environment provisioning, the service connection creation process, and test infrastructure setup. You'll learn how to configure functional test automation and run availability tests.

Learning objectives
After completing this module, students and professionals can:
●● Provision and configure target environment.
●● Deploy to an environment securely using a service connection.
●● Configure functional test automation and run availability tests.
●● Set up test infrastructure.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● For some exercises, you need to create an Azure DevOps Organization and a Team Project. If you
don't have it yet, see: Create an organization - Azure DevOps33.

●● If you already have your organization created, use the Azure DevOps Demo Generator34 and
create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel
free to create a blank project. See Create a project - Azure DevOps35.

Provision and configure target environments


The release pipeline deploys software to a target environment. But it isn't only the software that will be
deployed with the release pipeline.
If you focus on Continuous Delivery, Infrastructure as Code and spinning up Infrastructure as part of your
release pipeline are essential.
When we focus on the deployment of the Infrastructure, we should first consider the differences between
the target environments that we can deploy to:
●● On-Premises servers.
●● Cloud servers or Infrastructure as a Service (IaaS). For example, Virtual machines or networks.
●● Platform as a Service (PaaS) and Functions as a Service (FaaS). For example, Azure SQL Database in
both PaaS and serverless options.
●● Clusters.

33 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
34 https://azuredevopsdemogenerator.azurewebsites.net/
35 https://docs.microsoft.com/azure/devops/organizations/projects/create-project

●● Service Connections.
Let us dive a bit further into these different target environments and connections.

On-premises servers
In most cases, when you deploy to an on-premises server, the hardware and the operating system are
already in place. The server is already there and ready.
In some cases, the server is empty, but most of the time it isn't. In that case, the release pipeline can focus purely on deploying the application.
You might want to start or stop a virtual machine (Hyper-V or VMware).
The scripts you use to start or stop the on-premises servers should be part of your source control and
delivered to your release pipeline as a build artifact.
Using a task in the release pipeline, you can run the script that starts or stops the servers.
To take it one step further and configure the server, you should look at technologies like PowerShell Desired State Configuration (DSC).
DSC maintains your server and keeps it in a particular state. When the server drifts from that state, DSC can restore the original configuration.
Integrating a tool like PowerShell DSC into the release pipeline is no different from any other task you
add.
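
As a minimal sketch, assuming a DSC configuration script named WebServerConfig.ps1 is delivered in the build artifact (the name and path are illustrative assumptions), such a task could look like this:
steps:
- task: PowerShell@2
  displayName: Apply DSC configuration
  inputs:
    targetType: inline
    script: |
      # Load the configuration delivered as a build artifact, compile it,
      # and apply it to bring the server into the desired state.
      . "$(Pipeline.Workspace)/drop/WebServerConfig.ps1"
      WebServerConfig -OutputPath "$(Agent.TempDirectory)/dsc"
      Start-DscConfiguration -Path "$(Agent.TempDirectory)/dsc" -Wait -Verbose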

Infrastructure as a service
When you use the cloud as your target environment, things change slightly. Some organizations lift and
shift from their on-premises server to cloud servers.
Then your deployment works the same as an on-premises server. But when you use the cloud to provide
you with Infrastructure as a Service (IaaS), you can use the power of the cloud to start and create servers
when needed.
It's where Infrastructure as Code (IaC) starts playing a significant role.
A script or template can create a server or other infrastructure components, like a SQL server, a network, or an IP address.
By defining a template or using a command line and saving it in a script file, you can use that file in your
release pipeline tasks to execute it on your target cloud.
The server (or another component) will be created as part of your pipeline. After that, you can run the
steps to deploy the software.
Technologies like Azure Resource Manager are great for creating Infrastructure on demand.
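
For example, a hedged sketch of a pipeline task that creates or updates a resource group from an ARM template stored alongside the code; the service connection name, variable, and template path are illustrative assumptions:
steps:
- task: AzureResourceManagerTemplateDeployment@3
  displayName: Provision infrastructure on demand
  inputs:
    deploymentScope: Resource Group
    azureResourceManagerConnection: Azure Resource Manager Service Connection
    subscriptionId: $(subscriptionId)
    action: Create Or Update Resource Group
    resourceGroupName: rg-partsunlimited-dev
    location: West Europe
    templateLocation: Linked artifact
    csmFile: $(Pipeline.Workspace)/drop/templates/azuredeploy.json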

Platform as a Service
When you move from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS), you'll get the
Infrastructure from the cloud you're running on.
For example: In Azure, you can choose to create a Web application. The cloud arranges the server, the
hardware, the network, the public IP address, the storage account, and even the webserver.
The user only needs to take care of the web application that will run on this Platform.

You only need to provide the templates that instruct the cloud to create a WebApp. The same goes for Functions as a Service (FaaS) or serverless technologies.
In Azure, it's called Azure Functions. You only deploy your application, and the cloud takes care of the
rest. However, you must instruct the Platform (the cloud) to create a placeholder where your application
can be hosted.
You can define this template in Azure Resource Manager. You can use the Azure CLI or command-line
tools.
In all cases, the Infrastructure is defined in a script file and lives alongside the application code in source control.

Clusters
Finally, you can deploy your software to a cluster. A cluster is a group of servers that work together to
host high-scale applications.
When you run a cluster as Infrastructure as a Service, you need to create and maintain the cluster. It
means that you need to provide the templates to create a cluster.
You also need to ensure that you roll out updates, bug fixes, and patches to your cluster. It's comparable
with Infrastructure as a Service.
When you use a hosted cluster, you should consider it a Platform as a Service. You instruct the cloud to
create the cluster, and you deploy your software to the cluster.
When you run a container cluster, you can use container cluster technologies like AKS.
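
For instance, a hedged sketch of creating an AKS cluster on demand from the pipeline with the Azure CLI task; the resource names and sizes are illustrative assumptions:
steps:
- task: AzureCLI@2
  displayName: Create AKS cluster on demand
  inputs:
    azureSubscription: Azure Resource Manager Service Connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Create a small two-node cluster for this environment.
      az aks create \
        --resource-group rg-partsunlimited-dev \
        --name aks-partsunlimited \
        --node-count 2 \
        --generate-ssh-keys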

Service connections
When a pipeline needs access to resources, you'll often need to create service connections.

Summary
Whatever the technology you choose to host your application, your Infrastructure's creation or configuration should be part of your release pipeline and source control repository.
Infrastructure as Code is a fundamental part of Continuous Delivery and gives you the freedom to create
servers and environments on-demand.

Links
●● Desired State Configuration Overview36.
●● Azure Functions37.
●● Azure Resource Manager38.

Exercise - set up service connections


In this exercise, you'll investigate Service Connections.

36 https://docs.microsoft.com/en-us/powershell/dsc/overview/dscforengineers?view=dsc-1.1
37 https://azure.microsoft.com/services/functions
38 https://docs.microsoft.com/azure/azure-resource-manager/resource-group-overview

Note: To follow along with this walkthrough, you'll need to have an existing Azure subscription that
contains an existing storage account.

Steps
You can set up a service connection to create a secure and safe connection to the environment that you want to deploy to.
Service connections are also used to get resources from other places in a secure manner.
For example, you might need to get your source code from GitHub. In this case, let's look at configuring a
service connection to Azure.
1. From the main menu in the Parts Unlimited project, click Project settings at the bottom of the
screen.

2. In the Project Settings pane, from the Pipelines section, click Service connections. Click the drop-down beside +New service connection.

As you can see, there are many types of service connections. You can create a connection to:
●● Apple App Store.
●● Docker Registry.
●● Bitbucket.
●● Azure Service bus.
In this case, we want to deploy a new Azure resource, so we'll use the Azure Resource Manager
option.
3. Click Azure Resource Manager to add a new service connection.

4. Set the Connection name to Azure Resource Manager Service Connection, click on an Azure
Subscription, then select an existing Resource Group.
Note: You might be prompted to log on to Azure at this point. If so, log on first.

Notice that what we are creating is a Service Principal. We'll be using the Service Principal for
authenticating to Azure. At the top of the window, there's also an option to set up Managed Identity
Authentication instead.
The Service Principal is a service account that only has permissions in the specific subscription and
resource group. It makes it a safe way to connect from the pipeline.
Important: When you create a service connection with Azure, the service principal gets a contributor role on the subscription or resource group. That isn't enough for the service principal to upload data to blob storage: you must explicitly add the service principal to the Storage Blob Data Contributor role on the storage account. Otherwise, the release fails with an authorization permission mismatch error. A command-line sketch for granting that role appears at the end of this exercise.
5. Click OK to create it. It will then be shown in the list.

6. In the main Parts Unlimited menu, click Pipelines, then Releases, Edit to see the release pipeline.
Click the link to View stage tasks.

The current list of tasks is then shown. Because we started with an empty template, there are no tasks
yet. Each stage can execute many tasks.

7. Click the + sign to the right of the Agent job to add a new task. See the available list of task types.

8. In the Search box, enter the word storage and see the list of storage-related tasks. These include
standard tasks and tasks available from the Marketplace.

We'll use the Azure file copy task to copy one of our source files to a storage account container.
9. Hover over the Azure file copy task type and click Add when it appears. The task will be added to the
stage but requires further configuration.

10. Click the File Copy task to see the required settings. Select the latest task version.

11. Set the Display Name to Backup website zip file, click the ellipsis beside Source, locate the file as
follows, and click OK to select it.

We then need to provide details of how to connect to the Azure subscription. The easiest and most secure way to do that is to use our new Service Connection.
12. From the Azure Subscription drop-down list, find and select the Azure Resource Manager Service
Connection that we created.

13. From the Destination Type drop-down list, select Azure Blob. For RM Storage Account and Container Name, select the storage account and enter the container's name, then click Save at the top of the screen, and OK.

14. To test the task, click Create release, and in the Create a new release pane, click Create.
15. Click the new release to view the details.

16. On the release page, approve the release so that it can continue.
17. Once the Development stage has been completed, you should see the file in the Azure storage
account.

A key advantage of using service connections is that this type of connection is managed in a single
place within the project settings. It doesn't involve connection details spread throughout the pipeline
tasks.
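
As mentioned in the Important note earlier, here's a hedged command-line sketch of granting the service principal the Storage Blob Data Contributor role. It would typically be run once by an administrator with sufficient rights; the identifiers are placeholders:
steps:
- task: AzureCLI@2
  displayName: Grant blob access to the service principal
  inputs:
    azureSubscription: Azure Resource Manager Service Connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Without this role, the Azure file copy task fails with an
      # authorization permission mismatch error.
      az role assignment create \
        --assignee "<service-principal-object-id>" \
        --role "Storage Blob Data Contributor" \
        --scope "<storage-account-resource-id>"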

Configure automated integration and functional test automation


The first thing that comes to mind when talking about Continuous Delivery is that everything needs to be
automated.
Otherwise, you can't deploy multiple times a day. But how to deal with testing then?
Many companies still have a broad suite of manual tests to be run before delivering to production.
Somehow these tests need to run every time a new release is created.
Instead of automating all your manual tests into automated UI tests, you need to rethink your testing
strategy.
As Lisa Crispin describes in her book Agile Testing, you can divide your tests into multiple categories.

Source: https://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants39
We can draw four quadrants, where each side of the square defines what we target with our tests.
●● Business facing - the tests are more functional and often executed by end users of the system or by
specialized testers that know the problem domain well.
●● Supporting the Team - these tests help a development team get constant feedback on the product to find bugs fast and deliver a product with quality built in.
●● Technology facing - the tests are rather technical and not meaningful to business people. They're typically tests written and executed by the developers in a development team.
●● Critique Product - tests that are there to validate the workings of a product on its functional and
non-functional requirements.

39 https://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/

Now we can place the different test types in the four quadrants. For example, we can put Unit tests, Component tests, and System or integration tests in the first quadrant.
In quadrant two, we can place functional tests, Story tests, prototypes, and simulations. These tests are
there to support the team in delivering the correct functionality and are business-facing since they're
more functional.
In quadrant three, we can place tests like exploratory, usability, acceptance, etc.
We place performance, load, security, and other non-functional requirements tests in quadrant four.
Looking at these quadrants, specific tests are easy to automate or automated by nature. These tests are
in quadrants 1 and 4. Tests that are automatable but most of the time not automated by nature are the
tests in quadrant 2. Tests that are the hardest to automate are in quadrant 3.
We also see that the tests that can't be automated, or are hard to automate, should be executed in an earlier phase rather than after release.
We call this shift-left: moving the testing process towards the development cycle.
We need to automate as many tests as possible and run them as early as possible.
A few of the principles we can use are:
●● Tests should be written at the lowest level possible.
●● Write once, run anywhere, including the production system.
●● The product is designed for testability.
●● Test code is product code; only reliable tests survive.
●● Test ownership follows product ownership.
By testing at the lowest level possible, you'll find many tests that don't require infrastructure or applications to be deployed.
We can use the pipeline to execute the tests that need an app or infrastructure. To perform tests within the pipeline, we can run scripts or use specific test tools.
On many occasions, these are external tools that you execute from the pipeline, like OWASP ZAP, SpecFlow, or Selenium.
You can use test functionality from a platform like Azure on other occasions. For example, Availability or
Load Tests executed from within the cloud platform.
When you want to write your automated tests, choose the language that resembles the language from
your code.
In most cases, the application developers should also write the test, so it makes sense to use the same
language. For example, write tests for your .NET application in .NET, and write tests for your Angular
application in Angular.
The build and release agent can execute Unit Tests or other low-level tests that don't need a deployed application or infrastructure.
When you need to do tests with a UI or other specialized functionality, you need a Test agent to run the
test and report the results. Installation of the test agent then needs to be done upfront or as part of the
execution of your pipeline.
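
For instance, a minimal sketch of running low-level unit tests on the build agent, assuming a .NET solution whose test projects follow a *Tests.csproj naming convention:
steps:
- task: DotNetCoreCLI@2
  displayName: Run unit tests
  inputs:
    command: test
    projects: '**/*Tests.csproj'
    arguments: --configuration Release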

Understand Shift-left
The goal of shifting left is to move quality upstream by performing tests early in the pipeline. It embodies the phrase "fail fast, fail often." Combining test and process improvements reduces the time it takes for tests to run and the impact of failures later on.
The idea is to ensure that most of the testing is complete before merging a change into the main branch.

Many teams find that their tests take too long to run during the development lifecycle.
As projects scale, the number and nature of tests grow substantially, and running the complete test suite takes hours or days.
Tests get pushed out further until they're run at the last possible moment, and the benefits intended to be gained from building those tests aren't realized until long after the code has been committed.
There are several essential principles that DevOps teams should adhere to in implementing any quality
vision.

Other important characteristics to take into consideration:


●● Unit tests: These tests need to be fast and reliable.

●● One team at Microsoft runs over 60,000 unit tests in parallel in less than 6 minutes, intending to
get down to less than a minute.
●● Functional tests: Must be independent.
●● Defining a test taxonomy is an essential aspect of DevOps. The developers should understand the
suitable types of tests in different scenarios.

●● L0 tests are a broad class of fast in-memory unit tests. It's a test that depends on code in the
assembly under test and nothing else.
●● L1 tests might require assembly plus SQL or the file system.
●● L2 tests are functional tests run against testable service deployments. It's a functional test category
that requires a service deployment but may have critical service dependencies stubbed out
somehow.
●● L3 tests are a restricted class of integration tests that run against production. They require a
complete product deployment.
Check the case study in shifting left at Microsoft: Shift left to make testing fast and reliable40.
For more information, see:
●● Shift right to test in production41.

Set up and run availability tests


After you've deployed your web app or website to any server, you can set up tests to monitor its availability and responsiveness.
It's helpful to check if your application is still running and gives a healthy response.
Some applications have specific Health endpoints that an automated process can check. The Health
endpoint can be an HTTP status or a complex computation that uses and consumes crucial parts of your
application.
For example, you can create a Health endpoint that queries the database. This way, you can check that
your application is still accessible, but also the database connection is verified.
You can create your own framework for availability tests (ping tests) or use a platform that can do it for you.
Azure has the functionality to develop Availability tests. You can use these tests in the pipeline and as
release gates.
In Azure, you can set up availability tests for any HTTP or HTTPS endpoint accessible from the public
internet.
You don't have to add anything to the website you're testing. It doesn't even have to be your site: you
could try a REST API service you depend on.

40 https://docs.microsoft.com/devops/develop/shift-left-make-testing-fast-reliable
41 https://docs.microsoft.com/devops/deliver/shift-right-test-production

There are two types of availability tests:


●● URL ping test: a simple test that you can create in the Azure portal. It checks whether the URL responds and validates the response's status code.
●● Multi-step web test: Several HTTP calls that are executed in sequence.
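
A simple availability probe can also be scripted directly in a pipeline as a post-deployment smoke test. A minimal sketch, assuming the deployed site's URL is available in a webAppUrl variable and exposes a /health endpoint:
steps:
- task: PowerShell@2
  displayName: Ping health endpoint
  inputs:
    targetType: inline
    script: |
      # Fails the task (and the stage) if the endpoint isn't healthy.
      $response = Invoke-WebRequest -Uri "$(webAppUrl)/health" -UseBasicParsing
      if ($response.StatusCode -ne 200) {
        throw "Health check failed with status $($response.StatusCode)"
      }
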
For more information, see also:
●● Creating an Application Insights Web Test and Alert Programmatically42.
●● Monitor the availability of any website43.

Explore Azure Load Testing


Azure Load Testing Preview is a fully managed load-testing service that enables you to generate high-scale load.
The service simulates traffic for your applications, helping you to optimize application performance, scalability, or capacity.
You can create a load test using existing test scripts based on Apache JMeter. Azure Load Testing abstracts the infrastructure to run your JMeter script and load test your application.
For Azure-based applications, Azure Load Testing collects detailed resource metrics to help you identify
performance bottlenecks44 across your Azure application components.
You can automate regression testing45 by running load tests as part of your continuous integration and
continuous deployment (CI/CD) workflow.

42 https://azure.microsoft.com/blog/creating-a-web-test-alert-programmatically-with-application-insights/
43 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
44 https://docs.microsoft.com/azure/load-testing/overview-what-is-azure-load-testing
45 https://docs.microsoft.com/azure/load-testing/overview-what-is-azure-load-testing

Note: The overview image shows how Azure Load Testing uses Azure Monitor to capture metrics for app
components. Learn more about the supported Azure resource types46.
You can automatically run a load test at the end of each sprint or in a staging environment to validate a
release candidate build.
You can trigger Azure Load Testing from Azure Pipelines or GitHub Actions workflows.
Get started with adding load testing to your Azure Pipelines CI/CD workflow47 or use our Azure Load
Testing GitHub action48.
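
A hedged sketch of that Azure Pipelines integration, using the Azure Load Testing task; the resource names and the JMeter-based configuration file are illustrative assumptions:
steps:
- task: AzureLoadTest@1
  displayName: Run load test
  inputs:
    azureSubscription: Azure Resource Manager Service Connection
    loadTestConfigFile: loadtest/config.yaml
    resourceGroup: rg-loadtest
    loadTestResource: partsunlimited-loadtest
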
For more information about the Azure Load Testing preview, see:
●● What is Azure Load Testing?49
●● Tutorial: Use a load test to identify performance bottlenecks50.
●● Tutorial: Set up automated load testing51.
●● Learn about the key concepts for Azure Load Testing52.
●● Quickstart: Create and run a load test with Azure Load Testing53.
●● Tutorial: Identify performance regressions with Azure Load Testing and GitHub Actions - Azure
Load Testing54.
●● Configure Azure Load Testing for high-scale load tests - Azure Load Testing55.

Summary
This module detailed target environment provisioning, the service connection creation process, and test infrastructure setup. You learned how to configure functional test automation and run availability tests.
You learned how to:
●● Provision and configure target environment.
●● Deploy to an environment securely using a service connection.
●● Configure functional test automation and run availability tests.
●● Set up test infrastructure.

Learn more
●● Create target environment - Azure Pipelines | Microsoft Docs56.
●● Integrate DevTest Labs environments into Azure Pipelines - Azure DevTest Labs | Microsoft
Docs57.

46 https://docs.microsoft.com/azure/load-testing/resource-supported-azure-resource-types
47 https://docs.microsoft.com/azure/load-testing/tutorial-cicd-azure-pipelines
48 https://docs.microsoft.com/azure/load-testing/tutorial-cicd-github-actions
49 https://docs.microsoft.com/azure/load-testing/overview-what-is-azure-load-testing
50 https://docs.microsoft.com/azure/load-testing/tutorial-identify-bottlenecks-azure-portal
51 https://docs.microsoft.com/azure/load-testing/tutorial-cicd-azure-pipelines
52 https://docs.microsoft.com/azure/load-testing/concept-load-testing-concepts
53 https://docs.microsoft.com/azure/load-testing/quickstart-create-and-run-load-test
54 https://docs.microsoft.com/azure/load-testing/tutorial-cicd-github-actions
55 https://docs.microsoft.com/azure/load-testing/how-to-high-scale-load
56 https://docs.microsoft.com/azure/devops/pipelines/process/environments
57 https://docs.microsoft.com/azure/devtest-labs/integrate-environments-devops-pipeline
    347

●● Connect to Microsoft Azure - Azure Pipelines | Microsoft Docs58.


●● Service connections in Azure Pipelines - Azure Pipelines | Microsoft Docs59.
●● Run Functional Tests task - Azure Pipelines | Microsoft Docs60.
●● Monitor availability with URL ping tests - Azure Monitor | Microsoft Docs61.
●● Testing your app and Azure environment - Azure Architecture Center | Microsoft Docs62.

58 https://docs.microsoft.com/azure/devops/pipelines/library/connect-to-azure
59 https://docs.microsoft.com/azure/devops/pipelines/library/service-endpoints
60 https://docs.microsoft.com/azure/devops/pipelines/tasks/test/run-functional-tests
61 https://docs.microsoft.com/azure/azure-monitor/app/monitor-web-app-availability
62 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-testing

Manage and modularize tasks and templates


Introduction
This module describes the creation of task and variable groups, creating custom build and release tasks,
and using release variables and stage variables in your pipeline.

Learning objectives
After completing this module, students and professionals can:
●● Use and manage task and variable groups.
●● Use release variables and stage variables in your release pipeline.
●● Use variables in release pipelines.
●● Create custom build and release tasks.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● For some exercises, you need to create an Azure DevOps Organization and a Team Project. If you
don't have it yet, see: Create an organization - Azure DevOps63.

●● If you already have your organization created, use the Azure DevOps Demo Generator64 and
create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel
free to create a blank project. See Create a project - Azure DevOps65.

Examine task groups


A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release
pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other
task.
You can choose to extract the parameters from the encapsulated tasks as configuration variables and
abstract the rest of the task information.
Task groups are a way to standardize and centrally manage deployment steps for all your applications.
When you include a task group in your definitions and then make a change centrally to the task group,
the change is automatically reflected in all the definitions that use the task group.
There's no need to change each one individually.
For more information, see Task groups for builds and releases66.

63 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
64 https://azuredevopsdemogenerator.azurewebsites.net/
65 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
66 https://docs.microsoft.com/azure/devops/pipelines/library/task-groups
    349

Note: Task Groups aren't currently supported in YAML. Use templates instead. See Template References67.
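
A minimal sketch of that template approach, the YAML counterpart of a task group; the file names and parameter are illustrative assumptions:
# steps/deploy-database.yml - a reusable steps template
parameters:
- name: databaseName
  type: string

steps:
- script: echo Deploying database ${{ parameters.databaseName }}

# azure-pipelines.yml - consuming the template like a task group
steps:
- template: steps/deploy-database.yml
  parameters:
    databaseName: devopslog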

Exercise - create and manage task groups


In this exercise, you'll investigate Task Groups.
Note: Before starting this walkthrough, ensure you've done the steps in the prerequisites section and the previous exercises.

Steps
Let's now look at how a release pipeline can reuse groups of tasks.
It's common to reuse a group of tasks in more than one stage within a pipeline or in different pipelines.
1. In the main menu for the Parts Unlimited project, click Pipelines, then click Task groups.

You'll notice that you don't currently have any task groups defined.

There's an option to import task groups, but the most common way to create a task group is directly
within the release pipeline, so let's do that.
2. Click Pipelines, click Releases and click Edit to open the pipeline we worked on in the main menu.

67 https://docs.microsoft.com/azure/devops/pipelines/yaml-schema

3. The Development stage currently has a single task. We'll add another task to that stage. Click the
View stage tasks link to open the stage editor.

You can see that there's currently one task.

4. Click the + sign to the right of the Agent job line to add a new task. In the Search box, type database.

We'll add a task to deploy an Azure SQL Database.



5. Hover over the Azure SQL Database Deployment option and click Add. Click the Azure SQL
DacpacTask when it appears in the list to open the settings pane.

6. Set the Display name to Deploy devopslog database, and from the Azure Subscriptions drop-
down list, click ARM Service Connection.
Note: We can reuse our service connection here.

7. In the SQL Database section, set a unique name for the SQL Server, set the Database to devopslog,
set the Login to devopsadmin, and set any suitable password.

8. In the Deployment Package section, set the Deploy type to Inline SQL Script, set the Inline SQL
Script to:
CREATE TABLE dbo.TrackingLog
(
TrackingLogID int IDENTITY(1,1) PRIMARY KEY,
TrackingDetails nvarchar(max)
);

9. Click Save then OK to save the work.


Now that we have two tasks, let's use them to create a task group.
10. Click to select the Backup website zip file task and select the Deploy devopslog database task, then
right-click either task.

11. Click Create task group, then in the Create task group window, set Name to Backup website zip
file and deploy devopslog. Click the Category drop-down list to see the available options. Ensure
that Deploy is selected, and click Create.

The individual tasks have now disappeared from the list of tasks, and the new task group appears
instead.

12. From the Task drop-down list, select the Test Team A stage.

There are currently no tasks on the stage.

13. Click the + sign to the right of the Agent job to add a new task. In the Search box, type backup and
notice that the new task group appears like any other task.

14. Hover on the task group and click Add when it appears.

Task groups allow for easy reuse of a set of tasks and limit the number of places where edits need to occur.

Walkthrough cleanup
15. Click Remove to remove the task group from the Test Team A stage.
16. From the Tasks drop-down list, select the Development stage. Again click Remove to remove the
task group from the Development stage.
17. Click Save, then OK.

Explore variables in release pipelines


Variables give you a convenient way to get critical bits of data into various parts of the pipeline.
As the name suggests, the contents of a variable may change between releases, stages, or jobs of your pipeline.
The system predefines some variables, and you're free to add your own as well.
The variable's scope is the most important thing you need to think about when using variables in the
release pipeline.
You can imagine that a variable containing the target server's name may vary between a Development
environment and a Test Environment.
Within the release pipeline, you can use variables in different scopes and different ways.
For more information, see Release variables and debugging68.

Predefined variables
When running your release pipeline, you always need variables that come from the agent or context of
the release pipeline.

68 https://docs.microsoft.com/azure/devops/pipelines/release/variables

For example, the agent directory where the sources are downloaded, the build number or build ID, the
agent's name, or any other information.
This information is accessible in predefined variables that you can use in your tasks.

Release pipeline variables


Choose a release pipeline variable when you need to use the same value across all the stages and tasks in
the release pipeline, and you want to change the value in a single place.

Stage variables
Share values across all the tasks within one specific stage by using stage variables.
Use a stage-level variable for values that vary from stage to stage (and are the same for all the tasks in a
stage).
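
In YAML pipelines, the same scoping ideas look like this; a minimal sketch where the names and values are illustrative, and $(Build.BuildNumber) is one of the predefined variables:
variables:            # pipeline (release) scope: shared by all stages
  appName: PartsUnlimited

stages:
- stage: Development
  variables:          # stage scope: only visible within this stage
    targetServer: dev-web-01
  jobs:
  - job: Deploy
    steps:
    - script: echo Deploying $(appName) to $(targetServer) from build $(Build.BuildNumber)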

Variable groups
Share values across all the definitions in a project by using variable groups. We'll cover variable groups
later in this module.

Normal and secret variables


Because the pipeline tasks are executed on an agent, variable values are passed to the various tasks using environment variables.
The task knows how to read them. Be aware that a variable contains clear text and can be exposed to the target system.
When you use the variable in the log output, you can also see the variable's value.
When the pipeline has finished, the values will be cleared.
You can mark a variable in the release pipeline as secret. This way, the secret is hidden from the log
output. It's beneficial when writing a password or other sensitive information.

Understand variable groups


A variable group stores values that you want to make available across multiple builds and release pipelines.
Examples
●● Store the username and password for a shared server.

●● Store a shared connection string.


●● Store the geolocation of an application.
●● Store all settings for a specific application.
For more information, see Variable Groups for Azure Pipelines69.
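
In YAML pipelines, a variable group is linked with the group keyword. A minimal sketch, assuming the Website Test Product Details group created in the next exercise:
variables:
- group: Website Test Product Details   # provides ProductCode, Quantity, SalesUnit
- name: environment
  value: development

steps:
- script: echo Testing product $(ProductCode), quantity $(Quantity) $(SalesUnit)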

Exercise - create and manage variable groups


In this exercise, you'll investigate Variable Groups.
Note: Before starting this walkthrough, ensure you've done the steps in the prerequisites section and the previous activities.

Steps
Let's now look at how a release pipeline can use predefined variables, called Variable Groups.
Like how we used task groups, variable groups provide a convenient way to avoid redefining many
variables when defining stages within pipelines and even when working across multiple pipelines.
Let's create a variable group and see how it can be used.
1. On the main menu for the Parts Unlimited project, click Pipelines, then click Library. There are
currently no variable groups in the project.

2. Click + Variable group to start creating a variable group. Set Variable group name to Website Test
Product Details.

3. In the Variables section, click +Add. In Name, enter ProductCode, and in Value, enter REDPOLOXL.

69 https://docs.microsoft.com/azure/devops/pipelines/library/variable-groups

You can see an extra column that shows a lock. It allows you to have variable values that are locked
and not displayed in the configuration screens.
While it's often used for values like passwords, notice an option to link secrets from an Azure key vault
as variables.
It would be a preferable option for variables that provide credentials that need to be secured outside
the project.
In this example, we're just providing details of a product used in testing the website.
4. Add another variable called Quantity with a value of 12.
5. Add another variable called SalesUnit with a value of Each.

6. Click Save to save the new variable group.

7. On the main menu, click Pipelines, click Releases and click Edit to return to editing the release
pipeline we have been working on. From the top menu, click Variables.

8. In the left-hand pane, click Variable Groups.



Variable groups are linked to pipelines rather than being directly added to them.
9. Click Link variable group, then in the Link variable group pane, click the Website Test Product
Details variable group (notice that it shows you how many variables are contained). In the Variable
group scope, select the Development, Test Team A, and Test Team B stages.

We need the test product for development and testing, but we don't need it in production. If required
in all stages, we would have chosen Release for the Variable group scope instead.
10. Click Link to complete the link.

The variables contained in the variable group are now available for use within all stages except production, just the same way as any other variable.

Summary
This module described the creation of task and variable groups, creating custom build and release tasks,
and using release variables and stage variables in your pipeline.
You learned how to:
●● Use and manage task and variable groups.
●● Use release variables and stage variables in your release pipeline.
●● Use variables in release pipelines.
●● Create custom build and release tasks.

Learn more
●● Variable groups for Azure Pipelines - Azure Pipelines | Microsoft Docs70.
●● Define variables - Azure Pipelines | Microsoft Docs71.
●● Add a build or release task in an extension - Azure DevOps | Microsoft Docs72.

70 https://docs.microsoft.com/azure/devops/pipelines/library/variable-groups
71 https://docs.microsoft.com/azure/devops/pipelines/process/variables
72 https://docs.microsoft.com/azure/devops/extend/develop/add-build-task

Automate inspection of health


Introduction
This module describes how to automate the inspection of health events, configure notifications, and set
up service hooks to monitor pipelines.

Learning objectives
After completing this module, students and professionals can:
●● Implement automated inspection of health.
●● Create and configure events.
●● Configure notifications.
●● Create service hooks to monitor the pipeline.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see: Create an organization - Azure DevOps73.

●● If you already have your organization created, use the Azure DevOps Demo Generator74 and
create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel
free to create a blank project. See Create a project - Azure DevOps75.

Automate inspection of health


Inspection of the release pipeline and release is something you should consider from the start.
When you run multiple deployments a day, you want to:
●● Stay informed.
●● Know whether a release passed or failed.
●● Know the quality of the release.
●● Know details about the release and how it has been done.
●● Stop releases when you detect something suspicious.
●● Visualize some of these things on a dashboard.

You can do a few different things to stay informed about your release pipeline in an automated way. In the following chapters, we'll dive a bit deeper into these.

73 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
74 https://azuredevopsdemogenerator.azurewebsites.net
75 https://docs.microsoft.com/azure/devops/organizations/projects/create-project

Release gates
Release gates allow automatic collection of health signals from external services and then promote the release when all the signals are successful at the same time, or stop the deployment on timeout.
Typically, gates are connected with incident management, problem management, change management,
monitoring, and external approval systems. Release gates are discussed in an upcoming module.

Events, subscriptions, and notifications


Events are raised when specific actions occur, like when a release is started, or a build is completed.
A notification subscription is associated with a supported event type. The subscription ensures you get
notified when a specific event occurs.
Notifications are usually emails that you receive when an event occurs to which you're subscribed.

Service hooks
Service hooks enable you to do tasks on other services when events happen in your Azure DevOps
Services projects.
For example, create a card in Trello when a work item is created or send a push notification to your
team's Slack when a build fails.
Service hooks can also be used in custom apps and services as a more efficient way to drive activities
when events happen in your projects.

Reporting
Reporting is the most static approach to inspection but also the most evident in many cases.
Creating a dashboard that shows the status of your build and releases combined with team-specific
information is, in many cases, a valuable asset to get insights.
Read more at About dashboards, charts, reports, & widgets76.

Explore events and notifications


One of the first requests many people have when working in a system that does asynchronous actions is to get notifications or alerts. Why?
Because they don't want to repeatedly open the application, log in, and check whether anything has changed.
The ability to receive Alerts and notifications is a powerful mechanism to get notified about certain events
in your system when they happen.
For example, when a build takes a while to complete, you probably don't want to stare at the screen until
it has finished. But you want to know when it does.
Getting an email or another kind of notification instead is powerful and convenient. Another example is a
system that needs to be monitored.
You want to get notified by the system in real time. By implementing a successful alert mechanism, you
can use alerts to react to situations proactively before anybody is bothered by them.

76 https://docs.microsoft.com/azure/devops/report/dashboards/overview

Alerts
However, when you define alerts, you need to be careful. When you get alerts for every single event that happens in the system, your mailbox will quickly be flooded with numerous alerts.
The more irrelevant alerts you get, the greater the chance that people stop looking at the alerts and notifications altogether and miss the important ones.

Target audience and delivery mechanism


When defining alerts or notifications, you need to think about the target audience. Who needs to react to
the alerts? Don't send messages to people for information only. They'll stop looking at it quickly.
Another thing to consider when defining alerts is the mechanism to deliver them. Do you want to send an
email, or do you like to send a message in Slack for your team? Or do you want to call another system to
do a particular action?
Within Azure DevOps, there are multiple ways to define your alerts. By using query and filter mechanisms,
you can filter out specific alerts. For example, you only want to get notified for failed releases and not for
successful ones.
Almost every action in the system raises an event to which you can subscribe. A subscription is personal or for your whole team. When you have made a subscription, you can select how you want the notification to be delivered.
For more information, see also:
●● About notifications77.
●● Events, subscriptions, and notifications78.

Explore service hooks


Service hooks enable you to do tasks on other services when events happen in your Azure DevOps Services projects.
For example, create a card in Trello when a work item is created.
Or send a push notification to your team's mobile devices when a build fails.
Service hooks can also be used in custom apps and services.
It's a more efficient way to drive activities when events happen in your projects.
Azure DevOps includes built-in support for the following Service Hooks:

●● Build and release: AppVeyor, Bamboo, Jenkins, MyGet.
●● Collaborate: Campfire, Flowdock, HipChat, Hubot, Slack.
●● Customer support: UserVoice, Zendesk.
●● Plan and track: Trello.
●● Integrate: Azure Service Bus, Azure Storage, Web Hooks, Zapier.
This list will change over time.

77 https://docs.microsoft.com/azure/devops/notifications/index
78 https://docs.microsoft.com/azure/devops/notifications/concepts-events-and-notifications

To learn more about service hooks and how to use and create them, read Service Hooks in Azure
DevOps79.

Exercise - set up service hooks to monitor the pipeline


In this exercise, you'll investigate Service Hooks.

Steps
Let's now look at how a release pipeline can communicate with other services by using service hooks.
Azure DevOps can be integrated with a wide variety of other applications. It has built-in support for many
applications and generic hooks for working with other applications. Let's look.
1. Below the main menu for the Parts Unlimited project, click Project Settings.

2. In the Project settings menu, click Service hooks.

3. Click +Create subscription.

79 https://docs.microsoft.com/azure/devops/service-hooks/overview

By using service hooks, we can notify other applications that an event has occurred within Azure DevOps. We could send a message to a team in Microsoft Teams or Slack, or trigger an action in Bamboo or Jenkins.
4. Scroll to the bottom of the list of applications and click on Web Hooks.

Suppose the application that you want to communicate with isn't in the list of available application
hooks.
In that case, you can almost always use the Web Hooks option as a generic way to communicate. It
allows you to make an HTTP POST when an event occurs.
So, if, for example, you wanted to call an Azure Function or an Azure Logic App, you could use this
option.
To demonstrate the basic process for calling web hooks, we'll write a message into a queue in the
Azure Storage account that we have been using.
5. From the list of available applications, click Azure Storage.

6. Click Next. On the Trigger page, we determine which event causes the service hook to be called. Click
the drop-down for Trigger on this type of event to see the available event types.

7. Ensure that Release deployment completed is selected, then in the Release pipeline name select
Release to all environments. For Stage, select Production. Drop down the list for Status and see the
available options.

8. Ensure that Succeeded is selected, then click Next.

9. On the Action page, enter the name of your Azure storage account.
10. Open the Azure portal, and from the settings for the storage account, copy the value for Key in the
Access keys section.

11. Back in the Action page in Azure DevOps, paste in the key.

12. For Queue name, enter deploymentmessages, then click Test.



13. Make sure that the test succeeded, then click Close, and on the Action page, click Finish.

Create a release to test the service hook


Now that you've successfully added the service hook, it's time to test it.
1. From the Parts Unlimited project's main menu, click Pipelines, click Releases, then click Create
release, and in the Create a new release pane, enter Test the queue service hook for Release
description, and click Create.

2. Click to view the release details.

3. If the release is waiting for approval, click to approve it and wait for the release to complete successfully.

Check the queue contents


1. In the Azure portal, click Queues from the Queue service section in the blade for the storage
account.

2. Click to open the deploymentmessages queue.

Note: If you have run multiple releases, you might have various messages.
3. Click the latest message (usually the bottom of the list) to open it and review the message properties,
then close the Message properties pane.

You've successfully integrated this message queue with your Azure DevOps release pipeline.

Configure Azure DevOps notifications


After defining your target audience, you need to configure your notifications. Using Azure DevOps, you
can help your team stay informed about your projects' activities.
You can configure notifications to be sent based on rules or subscriptions: default out-of-the-box (OOB) subscriptions, or ones created by you or by a team or group administrator.
You can get a notification when the following items change:
●● Work items.
●● Code reviews.
●● Pull requests.
●● Source control files (TFVC or Git).
●● Builds.
●● Release.
For example, you can get notified whenever your build completes, or your release fails.

There are four notification types that you can manage in Azure DevOps:
●● Personal notifications.
●● Team notifications.
●● Project notifications.
●● Global notifications.
For each notification, you have a set of specific steps to configure. The following steps show how to
manage global notifications:
1. Open your Azure DevOps organization settings: https://dev.azure.com/{organization}/_settings/organizationOverview.
2. Click on Organization settings at the bottom left side.
3. Click on Global notifications under the General tab.

The Default subscriptions tab lists all default global subscriptions available. The globe icon on a notification subscription indicates the subscription is a default subscription. You can view all default notification subscriptions80.
You can view and enable options available in the context menu (...) for each subscription.
Note: Only Project Collection Administrators can enable/disable any default subscription in this view.
Project Collection Valid Users group can only view the details of the default subscription.
In the Subscribers tab, you can see users subscribed to each notification item. The Settings section shows
the Default delivery option setting. All teams and groups inherit this setting.
You can learn how to manage your personal notifications by following manage your personal notifications81.
For more information, see:
●● Get started with notifications in Azure DevOps - Azure DevOps82.

80 https://docs.microsoft.com/en-us/azure/devops/notifications/oob-built-in-notifications?view=azure-devops
81 https://docs.microsoft.com/azure/devops/notifications/manage-your-personal-notifications
82 https://docs.microsoft.com/azure/devops/notifications/about-notifications

●● Manage notifications for a team, project, organization, or collection - Azure DevOps83.


●● Events, subscriptions, and notifications - Azure DevOps.84

Configure GitHub notifications


Github.com notifications provide updates about the activity that you've subscribed to. You can use the
notifications inbox to customize, triage, and manage your updates.
You can choose to subscribe to notifications for:
●● A conversation in a specific issue, pull request, or gist.
●● All activity in a repository or team discussion.
●● CI activity, such as the status of workflows in repositories set up with GitHub Actions.
●● Repository issues, pull requests, releases, security alerts, or discussions (if enabled).
By default, you automatically watch all repositories you create and own by your personal account and
subscribe to conversations when you have:
●● Not disabled automatic watching for repositories or teams you've joined in your notification settings.
●● Been assigned to an issue or pull request.
●● Opened a pull request, issue, or created a team discussion post.
●● Commented on a thread.
●● Subscribed to a thread manually by clicking Watch or Subscribe.
●● Had your username @mentioned.
●● Changed the thread's state by closing an issue or merging a pull request.
●● Had a team you're a member of @mentioned.
Tip: To unsubscribe from conversations, you can change your notification settings or directly unsubscribe
or unwatch activity on GitHub.com. For more information, see “Managing your subscriptions85.”

Notification settings
1. Click on the notification icon in the upper-right corner of any page.
2. Click on the notification settings under the list of repositories in the left sidebar.

83 https://docs.microsoft.com/azure/devops/notifications/manage-team-group-global-organization-notifications
84 https://docs.microsoft.com/azure/devops/notifications/concepts-events-and-notifications
85 https://docs.github.com/github/managing-subscriptions-and-notifications-on-github/managing-your-subscriptions

3. On the notifications settings page, choose how you receive notifications.



For more information, see:


●● About notifications - GitHub Docs86.
●● Configuring notifications - GitHub Docs87.
●● Viewing your subscriptions - GitHub Docs88.

Explore how to measure quality of your release process


How do you measure the quality of your release process? The quality of your release process can't be
measured directly because it's a process. What you can measure is how well your process works.
If your release process constantly changes, it might indicate something wrong with the process. If your
releases continuously fail, and you regularly must update your release process to make it work, it might
also suggest that something is wrong with your release process.
Maybe something is wrong with the schedule on which your release runs, and you notice that your release always fails on a particular day or at a specific time. Or your release always fails after the deployment to another environment. It might be an indication that some things are dependent or related.
You can keep track of your release process quality by creating visualizations about the quality of all the
releases following that same release process or release pipeline.
For example, we're adding a dashboard widget that shows you the status of every release.

The release also has a quality aspect, but it's tightly related to the quality of the deployment and package
deployed. When we want to measure the quality of a release itself, we can do all kinds of checks within
the pipeline.

86 https://docs.github.com/account-and-profile/managing-subscriptions-and-notifications-on-github/setting-up-notifications/about-
notifications
87 https://docs.github.com/account-and-profile/managing-subscriptions-and-notifications-on-github/setting-up-notifications/configuring-
notifications
88 https://docs.github.com/account-and-profile/managing-subscriptions-and-notifications-on-github/managing-subscriptions-for-activity-
on-github/viewing-your-subscriptions

You can execute all different types of tests like integration tests, load tests, or even UI tests while running
your pipeline and checking the release's quality.
Using a quality gate is also a perfect way to check the quality of your release. There are many different quality gates: for example, a gate that monitors your deployment targets to check that everything is healthy, or work item gates that verify the quality of your requirements process.
You can add extra security and compliance checks. For example, do we follow the four-eyes principle, or
do we have the proper traceability?

Examine release notes and documentation


When you deploy a new release to a customer or install new software on your server, and you want to
communicate what has been released to your customer, the usual way is to use release notes.
But where do the release notes come from? There are different ways to store your release notes.

Document store
An often-used way of storing release notes is by creating text files or documents in some document
store. This way, the release notes are stored together with other documents.
The downside of this approach is that there's no direct connection between the release in the release
management tool and the release notes that belong to this release.

Wiki
The most used way for customers is to store the release notes in a Wiki. For example:
●● Confluence from Atlassian
●● SharePoint Wiki
●● SlimWiki
●● Wiki in Azure DevOps
Release notes are created as a page in the wiki and by using hyperlinks. Relations can be associated with
the build, the release, and the artifacts.

In the codebase
Strictly speaking, release notes belong to the release of the features you implemented and, therefore, to
your code. In that case, the best option might be to store release notes as part of your code repository.
Once the team completes a feature, they or the product owner also write the release notes and save
them alongside the code. This way, it becomes living documentation because the notes change with the
rest of the code.

In a work item
Another option is to store your release notes as part of your work items. Work items can be Bugs, Tasks,
Product Backlog Items, or User Stories.
You can create or use a different field within the work item to save release notes in work items. In this
field, you type the publicly available release notes that will be communicated to the customer.

With a script or specific task in your build and release pipeline, you can generate the release notes and
store them as artifacts or publish them to an internal or external website.

●● Generate Release Notes Build Task89.
●● Wiki Updater Tasks90.
●● Atlassian Confluence91.
●● Azure DevOps Wiki92.
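Alternatively, instead of a marketplace task like those above, a small hand-rolled script can do the same job: query recently closed work items through the Azure DevOps work item tracking REST API and emit a Markdown file. This is a minimal, hedged sketch; the organization, project, PAT, and WIQL query are placeholders, and field names should be checked against the current API reference.

```python
# Minimal sketch: generate Markdown release notes from closed work items.
# ORG, PROJECT, and PAT are placeholders; the WIQL query is illustrative.
import base64
import requests

ORG = "my-organization"          # placeholder
PROJECT = "my-project"           # placeholder
PAT = "<personal-access-token>"  # placeholder

auth = base64.b64encode(f":{PAT}".encode()).decode()
headers = {"Authorization": f"Basic {auth}"}

# 1. Find work items closed in the last 14 days via a WIQL query.
wiql = {"query": "SELECT [System.Id] FROM WorkItems "
                 "WHERE [System.State] = 'Closed' "
                 "AND [Microsoft.VSTS.Common.ClosedDate] >= @Today - 14"}
wiql_url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/wiql?api-version=7.0"
ids = [str(item["id"]) for item in
       requests.post(wiql_url, json=wiql, headers=headers).json()["workItems"]]

# 2. Fetch the titles and write them out as Markdown bullet points.
if ids:
    items_url = (f"https://dev.azure.com/{ORG}/_apis/wit/workitems"
                 f"?ids={','.join(ids[:200])}&api-version=7.0")
    items = requests.get(items_url, headers=headers).json()["value"]
    with open("release-notes.md", "w") as f:
        f.write("# Release notes\n\n")
        for item in items:
            f.write(f"- #{item['id']}: {item['fields']['System.Title']}\n")
```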
There's a difference between functional and technical documentation. There's also a difference between
documentation designing the product, primarily written upfront, and documentation describing the
product afterward, like manuals or help files.
Storing technical documentation about your products in the design phase is often done on a
document-sharing portal, like SharePoint or Confluence.
Creating a wiki is a better and more modern way to store your documentation. Wikis don't contain
documents, presentations, or spreadsheets but text files called Markdown files.
These Markdown files can refer to pictures, hold code samples, and be part of your code repository. Code
repositories can deal well with text files. Changes and history can be easily tracked by using the native
code tools.
However, the most significant advantage of using a wiki instead of documents is that a wiki is accessible
to everyone in your team. By giving all the team members the correct permissions, people can work
together on the documentation instead of waiting for each other when working on the same document.
Manuals or documentation you release together with the product should be treated as source code.
When the product changes and new functionality is added, the documentation needs to change.
You can store the documentation as part of your code repository or create a new repository containing
your documentation. In any case, the documentation should be treated the same way as your source

89 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes
90 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks
91 https://www.atlassian.com/software/confluence
92 https://azure.microsoft.com/services/devops/wiki/

code. Create a documentation artifact in your build pipeline and deliver this artifact to the release
pipeline.
The release pipeline can then deploy the documentation to a site or include it in the boxed product.

Examine considerations for choosing release management tools
When choosing the right Release Management tool, you should look at the possibilities of all the different components and map them to your needs.
There are many tools available in the marketplace; we'll discuss some of them in the next chapter. The
most important thing to notice is that not every vendor or tool treats Release Management the same way.
The tools in the marketplace can be divided into two categories.
●● Tools that can do Build and Continuous Integration and Deployment.
●● Tools that can do Release Management.
In many cases, companies only require the deployment part of Release Management. Many build or
release tools can perform deployment or installation, primarily because the technical aspect of a release
is executing a script or running a program. Release Management that requires approvals, quality gates,
and different stages needs a different kind of tool: one that tightly integrates with the build and CI tools
but isn't the same thing.

Artifacts and artifact source


Artifacts can come from different sources. When you treat your artifact as a versioned package, it needs
to be stored somewhere before your release pipeline consumes it. Considerations for choosing your tool
can be:
●● Which Source Control systems are supported?
●● Can you have one or more artifact sources in your release pipeline? In other words, can you combine
artifacts from different sources into one release?
●● Does it integrate with your build server?
●● Does it support other build servers?
●● Does it support container registries?
●● How do you secure the connection to the artifact source?
●● Can you extend the artifact sources?

Triggers and schedules


Triggers are an essential component in the pipeline. Triggers are required to start a release and, if you
want multiple stages, to start each deployment. Considerations for choosing your trigger can be:
●● Does your system support Continuous Deployment triggers?
●● Can you trigger releases from an API (for integration with other tools)?
●● Can you schedule releases?
●● Can you schedule and trigger each stage separately?

Approvals and gates


Starting a release using scripts, executables, or deployable artifacts doesn't differentiate between a
Continuous Integration/Build tool and a Release Management tool. Adding a release approval workflow
to the pipeline is the critical component that does make the difference. Considerations when it comes to
approvals:
●● Do you need approvals for your release pipeline?
●● Are the approvers part of your company? Do they need a tool license?
●● Do you want to use manual or automatic approvals? Or both?
●● Can you approve with an API (for integration with other tools)?
●● Can you set up a workflow with approvers (optional and required)?
●● Can you have different approvers per stage?
●● Can you have more than one approver? Do you need them all to approve?
●● What are the possibilities for automatic approval?
●● Can you have a manual approval or step in your release pipeline?

Stages
Running a Continuous Integration pipeline that builds and deploys your product is a commonly used
scenario. But what if you want to deploy the same release to different environments? When choosing the
right release management tool, you should consider the following things when it comes to stages (or
environments):
●● Can you use the same artifact to deploy to different stages?
●● Can you vary the configuration between the stages?
●● Can you have different steps for each stage?
●● Can you follow the release between the stages?
●● Can you track the artifacts/work items and source code between the stages?

Build and release tasks


Finally, the work needs to be done within the pipeline. It isn't only about the workflow and orchestration;
the code must also be deployed or installed. Things to consider when it comes to the execution of
tasks:
●● How do you create your steps? Is it running a script (bat, shell, PowerShell, CLI), or are there
specialized tasks?
●● Can you create your own tasks?
●● How do tasks authenticate to secure sources?
●● Can tasks run on multiple platforms?
●● Can tasks be easily reused?
●● Can you integrate with multiple environments? (Linux, Windows, Container Clusters, PaaS, Cloud)
●● Can you control the tasks that are used in the pipeline?

Traceability, auditability, and security


Essential concerns in enterprises and companies that need to adhere to compliance frameworks are:
●● Traceability.
●● Auditability.
●● Security.
Although these concerns aren't exclusive to the release pipeline, they're an important part of it.

When it comes to compliance, three principles are fundamental:


●● Four-eyes principle

●● Does at least one other person review the deployed artifact?
●● Is the person who deploys a different person than the one who wrote the code?
●● Traceability

●● Can we see where the released software originates from (which code)?
●● Can we see the requirements that led to this change?
●● Can we follow the requirements through the code, build, and release?
●● Auditability

●● Can we see who, when, and why the release process changed?
●● Can we see who, when, and why a new release has been deployed?
Security is vital here. It isn't OK when people can do everything, including deleting evidence. Setting up the
right roles, permissions, and authorization is essential to protect your system and pipeline.
When looking at an appropriate Release Management tool, you can consider:
●● Does it integrate with your company's Active Directory?
●● Can you set up roles and permissions?
●● Is there a change history of the release pipeline itself?
●● Can you ensure the artifact didn't change during the release?
●● Can you link requirements to the release?
●● Can you link source code changes to the release pipeline?
●● Can you enforce approval or the four-eyes principle?
●● Can you see the release history and the people who triggered the release?

Explore common release management tools


The following tools are mainstream in the current ecosystem. You'll find links to the product websites to
explore the product and see if it fits the needs we described in the previous chapter.
●● What Artifacts and Artifact sources does the tool support?
●● What Triggers and Schedules?
●● Does the tool support Approvals and gates?
●● Can you define multiple stages?
●● How do the Build and Release Tasks work?
●● How does the tool deal with Traceability, Auditability, and Security?
●● What is the Extensibility model?

For each tool, it's indicated whether it's part of a more extensive suite. Integration with a bigger suite
gives you many advantages regarding traceability, security, and auditability. Numerous integrations are
already there out of the box.

GitHub Actions
GitHub Actions helps you build, test, and deploy your code. You can implement continuous integration
and continuous delivery (CI/CD) and make code reviews, branch management, and issue triaging work
the way you want.
●● Trigger workflows with various events.
●● Configure environments to set rules before a job can proceed and to limit access to secrets.
●● Use concurrency to control the number of deployments running at a time.

Links
●● GitHub Actions93.
●● Understanding GitHub Actions94.
●● Essential features of GitHub Actions95.
●● Deploying with GitHub Actions96.

Azure Pipelines
Azure Pipelines helps you implement a build, test, and deployment pipeline for any app.
Tutorials, references, and other documentation show you how to configure and manage the continuous
integration and Continuous Delivery (CI/CD) for the app and platform of your choice.
●● Hosted on Azure as a SaaS in multiple regions and available as an on-premises product.
●● Complete REST API for everything around Build and Release Management.
●● Integration with many build and source control systems:

●● GitHub.
●● Azure Repos.
●● Jenkins.
●● Bitbucket.
●● and so on.
●● Cross-Platform support, all languages, and platforms.
●● Rich marketplace with extra plugins, build tasks and release tasks, and dashboard widgets.
●● Part of the Azure DevOps suite. Tightly integrated.
●● Fully customizable.

93 https://docs.github.com/en/actions
94 https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions
95 https://docs.github.com/en/actions/learn-github-actions/essential-features-of-github-actions
96 https://docs.github.com/en/actions/deployment/about-deployments/deploying-with-github-actions

●● Manual approvals and Release Quality Gates supported.


●● Integrated with (Azure) Active Directory.
●● Extensive roles and permissions.

Links
●● Azure Pipelines97.
●● Building and Deploying your Code with Azure Pipelines98.

Jenkins
Jenkins, the leading open-source automation server, provides hundreds of plugins to support building,
deploying, and automating any project.
●● An on-premises system; also offered as SaaS by third parties.
●● Not part of a bigger suite.
●● Industry-standard, especially in the full-stack space.
●● Integrates with almost every source control system.
●● A rich ecosystem of plugins.
●● CI/Build tool with deployment possibilities.
●● No release management capabilities.

Links
●● Jenkins99.
●● Tutorial: Jenkins CI/CD to deploy an ASP.NET Core application to Azure Web App service100.
●● Azure Friday - Jenkins CI/CD with Service Fabric101.

Circle CI
CircleCI's continuous integration and delivery platform helps software teams rapidly release code with
confidence by automating the build, test, and deploy process.
CircleCI offers a modern software development platform that lets teams ramp quickly, scale easily, and
build confidently every day.
●● CircleCI is a cloud-based system or an on-premises system.
●● REST API: you have access to projects, builds, and artifacts.
●● The result of the build is going to be an artifact.
●● Integration with GitHub and BitBucket.
●● Integrates with various clouds.

97 https://azure.microsoft.com/services/devops/pipelines/
98 https://www.youtube.com/watch?v=NuYDAs3kNV8
99 https://jenkins.io/
100 https://cloudblogs.microsoft.com/opensource/2018/09/21/configure-jenkins-cicd-pipeline-deploy-asp-net-core-application/
101 https://www.youtube.com/watch?v=5RYmooIZqS4

●● Not part of a bigger suite.


●● Not fully customizable.

Links
●● CircleCI102.
●● How to get started on CircleCI 2.0: CircleCI 2.0 Demo103

GitLab Pipelines
GitLab helps teams automate the release and delivery of their applications to shorten the delivery
lifecycle, streamline manual processes, and accelerate team velocity.
With Continuous Delivery (CD) built into the pipeline, deployment can be automated to multiple
environments like staging and production, with support for advanced features such as canary deployments.
Because the configuration and definition of the application are version controlled and managed, it's easy
to configure and deploy your application on demand.

Link
●● GitLab104

Atlassian Bamboo
Bamboo is a continuous integration (CI) server that can automate the release management for a software
application, creating a Continuous Delivery pipeline.

Link
●● Atlassian Bamboo105

Summary
This module described how to automate the inspection of health events, configure notifications, and set
up service hooks to monitor pipelines.
You learned how to describe the benefits and usage of:
●● Implement automated inspection of health.
●● Create and configure events.
●● Configure notifications.
●● Create service hooks to monitor the pipeline.

102 https://circleci.com/
103 https://www.youtube.com/watch?v=KhjwnTD4oec
104 https://about.gitlab.com/stages-devops-lifecycle/release/
105 https://www.atlassian.com/software/bamboo/features

Learn more
●● DevOps checklist - Azure Design Review Framework | Microsoft Docs106.
●● Events, subscriptions, and notifications - Azure DevOps | Microsoft Docs107.
●● Integrate with service hooks - Azure DevOps | Microsoft Docs108.
●● Build Quality Indicators report - Azure DevOps Server | Microsoft Docs109.

106 https://docs.microsoft.com/azure/architecture/checklist/dev-ops
107 https://docs.microsoft.com/azure/devops/notifications/concepts-events-and-notifications
108 https://docs.microsoft.com/azure/devops/service-hooks/overview
109 https://docs.microsoft.com/azure/devops/report/sql-reports/build-quality-indicators-report

Labs
Lab 09: Controlling deployments using Release
Gates
Lab overview
This lab covers the configuration of the deployment gates and details how to use them to control
execution of Azure pipelines. To illustrate their implementation, you will configure a release definition
with two environments for an Azure Web App. You will deploy to the Canary environment only when
there are no blocking bugs for the app and mark the Canary environment complete only when there are
no active alerts in Application Insights of Azure Monitor.
A release pipeline specifies the end-to-end release process for an application to be deployed across a
range of environments. Deployments to each environment are fully automated by using jobs and tasks.
Ideally, you do not want new updates to the applications to be exposed to all the users at the same time.
It is a best practice to expose updates in a phased manner, i.e., expose them to a subset of users, monitor
their usage, and expose them to other users based on the experience of the initial set of users.
Approvals and gates enable you to take control over the start and completion of the deployments in a
release. With approvals, you can wait for users to manually approve or reject deployments. Using release
gates, you can specify application health criteria that must be met before release is promoted to the next
environment. Prior to or after any environment deployment, all the specified gates are automatically
evaluated until they all pass or until they reach your defined timeout period and fail.
Gates can be added to an environment in the release definition from the pre-deployment conditions or
the post-deployment conditions panel. Multiple gates can be added to the environment conditions to
ensure all the inputs are successful for the release.
As an example:
●● Pre-deployment gates ensure there are no active issues in the work item or problem management
system before deploying a build to an environment.
●● Post-deployment gates ensure there are no incidents from the monitoring or incident management
system for the app after it’s been deployed, before promoting the release to the next environment.
There are four types of gates included by default in every account.
●● Invoke Azure function: Triggers execution of an Azure function and ensures a successful completion.
●● Query Azure monitor alerts: Observes the configured Azure monitor alert rules for active alerts.
●● Invoke REST API: Makes a call to a REST API and continues if it returns a successful response.
●● Query Work items: Ensures the number of matching work items returned from a query is within a
threshold.
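To make the Invoke REST API gate concrete, the sketch below shows a tiny health endpoint such a gate could poll. This isn't part of the lab; the /gate-health path and the is_healthy check are hypothetical stand-ins for whatever health source you expose. Returning HTTP 200 lets the gate pass; anything else fails it (and the gate keeps retrying until the timeout).

```python
# Minimal sketch of an endpoint an "Invoke REST API" release gate could poll.
# HTTP 200 -> the gate passes; any other status -> the gate fails and retries.
# The path and the health check are illustrative only.
from http.server import BaseHTTPRequestHandler, HTTPServer

def is_healthy() -> bool:
    # Replace with a real check, e.g., querying your monitoring system.
    return True

class GateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        healthy = self.path == "/gate-health" and is_healthy()
        body = b'{"status": "healthy"}' if healthy else b'{"status": "unhealthy"}'
        self.send_response(200 if healthy else 503)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), GateHandler).serve_forever()
```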

Objectives
After you complete this lab, you will be able to:
●● Configure release pipelines
●● Configure release gates
●● Test release gates

Lab duration
●● Estimated time: 75 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions110

Lab 10: Creating a release dashboard


Lab overview
In this lab, you will step through the creation of a release dashboard and the use of the REST API to
retrieve Azure DevOps release data, which you can then make available to your custom applications or
dashboards.
The lab leverages the Azure DevOps Starter resource, which automatically creates an Azure DevOps
project that builds and deploys an application into Azure.

Objectives
After you complete this lab, you will be able to:
●● Create a release dashboard
●● Use the REST API to query release information

Lab duration
●● Estimated time: 45 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions111

110 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
111 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which of the following choices isn't considered a feedback loop or quality gate during Continuous
Delivery?
†† An automated test on a Test environment.
†† An automated build to validate the sources.
†† An automated Azure Boards and repository integration.

Multiple choice
Which of the following choices best describe Continuous Delivery?
†† Continuous delivery (CD) triggers automated testing for every code change.
†† Continuous delivery (CD) is a set of processes, tools, and techniques for rapid, reliable, and continuous
software development and delivery.
†† Continuous delivery (CD) is the process of automating the build and testing of code every time a team
member commits changes to version control.

Multiple choice
Which of the following choices is a definition for a technical release (deployment)?
†† Deployment is the action of running the tasks for one stage, which results in a tested and deployed
application and other activities specified for that stage.
†† Deployment is a package or container containing a versioned set of artifacts specified in a release
pipeline in your CI/CD process.
†† Deployment is a construct that holds a versioned set of artifacts specified in a CI/CD pipeline.

Multiple choice
Which of the following choices is a container with versioned artifacts, pipeline, approvals, stages, variables?
†† Deployment.
†† Build.
†† Release.

Multiple choice
Which of the following choices isn't a release trigger?
†† A manual trigger.
†† A continuous deployment trigger.
†† A feature trigger.

Multiple choice
Which of the following choices can you use to prevent deployment in Azure DevOps when a security testing
tool finds a compliance problem?
†† Work Item.
†† Release Gate.
†† Manual trigger.

Multiple choice
Which of the following choices is a logical boundary in your release pipeline at which you can pause the
pipeline and perform various checks?
†† Stage.
†† Trigger.
†† Quality Gate.

Multiple choice
Which of the following choices describe how you measure the quality of your release process?
†† Based on the success and failures of your pipeline.
†† Based on your pipeline's configuration and CI/CD implementation.
†† The quality of your release process cannot be measured directly because it is a process.

Multiple choice
Which of the following choices describes the importance of Release Notes and documentation?
†† To comply with the internal company process and audit.
†† It's essential when you want to communicate what has been released to your customer.
†† To control what was delivered during the sprint.

Multiple choice
Which of the following choices isn't a Release Management tool?
†† Azure Repos.
†† Azure Pipelines.
†† Jenkins.

Multiple choice
Which of the following choices describes a microservice?
†† A microservice is a multi-function software to deliver more than one thing very well.
†† A microservice is software built as one monolith.
†† A microservice is an autonomous, independently deployable, and scalable software component.

Multiple choice
Which of the following choices is a microservice characteristic?
†† Each microservice can be tested based on other microservice results.
†† If one microservice changes, it shouldn't impact any other microservices within your landscape.
†† If one microservice changes, it affects other services, and you need to wait for other services to go
online.

Multiple choice
Which of the following choices represents a classical deployment pattern?
†† Dev, Test, Staging, and Production deployment.
†† Blue-green deployments.
†† A/B testing.

Multiple choice
Which of the following choices is the easiest way to create a staging environment for an Azure WebApp?
†† Create a deployment slot.
†† Use Application Insights.
†† Create an app clone.

Multiple choice
Which of the following choices describes a Feature Flag functionality?
†† Feature Flags allow teams to create automatic release notes.
†† Feature Flags can be used to control exposure to new product functionality.
†† Feature Flags help teams control deployment release gates.

Multiple choice
Which of the following choices isn't a deployment pattern that allows you to plan to slowly increase the
traffic to a newer version of your site?
†† A/B Testing.
†† Blue-Green.
†† Canary Release.

Multiple choice
Which of the following choices is an Azure-based tool you can use to divert a percentage of your web traffic
to a newer version of an Azure website?
†† Load Balancer.
†† Application Gateway.
†† Traffic Manager.

Multiple choice
Which of the following choices is a characteristic that makes users suitable for working with Canary
deployments?
†† Uses the app irregularly.
†† High tolerance for issues.
†† It depends highly on the app working all the time.

Multiple choice
Which of the following choices describes how Dark Launching differs from Canary releases?
†† You're looking to assess users' responses to new features in your frontend rather than testing the
performance of the backend.
†† You're looking to assess users' responses to new features in your backend and frontend.
†† You're looking to assess users' responses to new features in your backend rather than testing the
performance of the frontend.

Multiple choice
Which of the following choices is a deployment pattern extension of Canary Release?
†† Blue-Green.
†† A/B Testing.
†† Progressive Exposure.

Multiple choice
Which of the following choices is another way A/B testing is called?
†† Smoke testing.
†† Split testing or Bucket testing.
†† Integration testing.

Multiple choice
Which of the following choices is a correct statement about A/B testing?
†† A/B testing isn't part of Continuous Delivery or a pre-requisite for Continuous Delivery.
†† A/B testing can be implemented using Azure Artifacts and multiple environments.
†† A/B testing is a pre-requisite and part of Continuous Delivery.

Answers
Multiple choice
Which of the following choices isn't considered a feedback loop or quality gate during Continuous
Delivery?
†† An automated test on a Test environment.
†† An automated build to validate the sources.
■■ An automated Azure Boards and repository integration.
Explanation
A feedback loop can be different things: A unit test to validate the code, An automated build to validate the
sources, An automated test on a Test environment, Some monitor on a server, and so on.
Multiple choice
Which of the following choices best describe Continuous Delivery?
†† Continuous delivery (CD) triggers automated testing for every code change.
■■ Continuous delivery (CD) is a set of processes, tools, and techniques for rapid, reliable, and continuous
software development and delivery.
†† Continuous delivery (CD) is the process of automating the build and testing of code every time a team
member commits changes to version control.
Explanation
Continuous delivery (CD) is a set of processes, tools, and techniques for rapid, reliable, and continuous
software development and delivery.
Multiple choice
Which of the following choices is a definition for a technical release (deployment)?
■■ Deployment is the action of running the tasks for one stage, which results in a tested and deployed
application and other activities specified for that stage.
†† Deployment is a package or container containing a versioned set of artifacts specified in a release
pipeline in your CI/CD process.
†† Deployment is a construct that holds a versioned set of artifacts specified in a CI/CD pipeline.
Explanation
Deployment is the action of running the tasks for one stage, which results in a tested and deployed
application and other activities specified for that stage.
Multiple choice
Which of the following choices is a container with versioned artifacts, pipeline, approvals, stages, and
variables?
†† Deployment.
†† Build.
■■ Release.
Explanation
A release is a package or container that holds a versioned set of artifacts specified in a release pipeline in
your CI/CD process.

Multiple choice
Which of the following choices isn't a release trigger?
†† A manual trigger.
†† A continuous deployment trigger.
■■ A feature trigger.
Explanation
A feature trigger.
Multiple choice
Which of the following choices can you use to prevent deployment in Azure DevOps when a security
testing tool finds a compliance problem?
†† Work Item.
■■ Release Gate.
†† Manual trigger.
Explanation
Release Gate. Release gates give you additional control over the start and completion of the deployment
pipeline. They are often set up as pre-deployment and post-deployment conditions.
Multiple choice
Which of the following choices is a logical boundary in your release pipeline at which you can pause the
pipeline and perform various checks?
■■ Stage.
†† Trigger.
†† Quality Gate.
Explanation
Using stages, you can pause and validate the pipeline with various checks.
Multiple choice
Which of the following choices describe how you measure the quality of your release process?
†† Based on the success and failures of your pipeline.
†† Based on your pipeline's configuration and CI/CD implementation.
■■ The quality of your release process cannot be measured directly because it is a process.
Explanation
The quality of your release process can't be measured directly because it's a process. What you can measure
is how well your process works.

Multiple choice
Which of the following choices describes the importance of Release Notes and documentation?
†† To comply with the internal company process and audit.
■■ It's essential when you want to communicate what has been released to your customer.
†† To control what was delivered during the sprint.
Explanation
When you deploy a new release to a customer or install new software on your server, you want to
communicate what has been released. The usual way to do this is the use of release notes.
Multiple choice
Which of the following choices isn't a Release Management tool?
■■ Azure Repos.
†† Azure Pipelines.
†† Jenkins.
Explanation
Azure Repos is a Source Control tool.
Multiple choice
Which of the following choices describes a microservice?
†† A microservice is a multi-function software to deliver more than one thing very well.
†† A microservice is software built as one monolith.
■■ A microservice is an autonomous, independently deployable, and scalable software component.
Explanation
A microservice is an autonomous, independently deployable, and scalable software component.
Multiple choice
Which of the following choices is a microservice characteristic?
†† Each microservice can be tested based on other microservice results.
■■ If one microservice changes, it shouldn't impact any other microservices within your landscape.
†† If one microservice changes, it affects other services, and you need to wait for other services to go
online.
Explanation
If one microservice changes, it should not affect any other microservices within your landscape.
Multiple choice
Which of the following choices represents a classical deployment pattern?
■■ Dev, Test, Staging, and Production deployment.
†† Blue-green deployments.
†† A/B testing.
Explanation
The traditional or classical deployment pattern was moving your software to a development stage, a testing
stage, maybe an acceptance or staging stage, and finally a production stage.

Multiple choice
Which of the following choices is the easiest way to create a staging environment for an Azure WebApp?
■■ Create a deployment slot.
†† Use Application Insights.
†† Create an app clone.
Explanation
With deployment slots, you can validate app changes in staging before swapping them with your production
slot.
Multiple choice
Which of the following choices describes a Feature Flag functionality?
†† Feature Flags allow teams to create automatic release notes.
■■ Feature Flags can be used to control exposure to new product functionality.
†† Feature Flags help teams control deployment release gates.
Explanation
Feature Flags can be used to control exposure to new product functionality.
Multiple choice
Which of the following choices isn't a deployment pattern that allows you to plan to slowly increase the
traffic to a newer version of your site?
†† A/B Testing.
■■ Blue-Green.
†† Canary Release.
Explanation
A/B Testing and Canary Release allow you to plan to slowly increase the traffic to a newer version of your
site.
Multiple choice
Which of the following choices is an Azure-based tool you can use to divert a percentage of your web
traffic to a newer version of an Azure website?
†† Load Balancer.
†† Application Gateway.
■■ Traffic Manager.
Explanation
You set the traffic to be distributed to a small percentage of the users, and you carefully watch the
application's behavior.

Multiple choice
Which of the following choices is a characteristic that makes users suitable for working with Canary
deployments?
†† Uses the app irregularly.
■■ High tolerance for issues.
†† It depends highly on the app working all the time.
Explanation
It's a high tolerance for issues.
Multiple choice
Which of the following choices describes how Dark Launching differs from Canary releases?
■■ You're looking to assess users' responses to new features in your frontend rather than testing the
performance of the backend.
†† You're looking to assess users' responses to new features in your backend and frontend.
†† You're looking to assess users' responses to new features in your backend rather than testing the
performance of the frontend.
Explanation
You are looking to assess users' responses to new features in your frontend rather than testing the
performance of the backend.
Multiple choice
Which of the following choices is a deployment pattern extension of Canary Release?
†† Blue-Green.
†† A/B Testing.
■■ Progressive Exposure.
Explanation
It's Progressive Exposure.
Multiple choice
Which of the following choices is another way A/B testing is called?
†† Smoke testing.
■■ Split testing or Bucket testing.
†† Integration testing.
Explanation
A/B testing is also known as split testing or bucket testing. It compares two versions of a web page or app
against each other to determine which one performs better.

Multiple choice
Which of the following choices is a correct statement about A/B testing?
■■ A/B testing isn't part of Continuous Delivery or a pre-requisite for Continuous Delivery.
†† A/B testing can be implemented using Azure Artifacts and multiple environments.
†† A/B testing is a pre-requisite and part of Continuous Delivery.
Explanation
A/B testing is not part of Continuous Delivery or a pre-requisite for Continuous Delivery. It's more the other
way around.
Module 5 Implement a secure continuous deployment using Azure Pipelines

Introduction to deployment patterns


Introduction
This module introduces deployment patterns and explains microservices architecture to help improve the
deployment cycle and examine classical and modern deployment patterns.
Continuous Delivery is an extension of Continuous Integration. It's all about getting changes to
customers quickly and using sustainable methods.
Continuous Delivery goes further, and changes that pass through production pipelines are released to
customers.
Continuous Delivery is more than release management.
Continuous Delivery is all about the process, the people, and the tools that you need to make sure that
you can deliver your software on demand.
Deployment is only one step within the Continuous Delivery process. To deploy on-demand or multiple
times a day, all the prerequisites need to be in place.
For example:

Testing strategy
Your testing strategy should be in place. If you need to run many manual tests to validate your software,
it is a bottleneck to delivering on-demand.

Coding practices
If your software isn't written in a safe and maintainable manner, the chances are that you can't maintain a
high release cadence.

When your software is complex because of a large amount of technical debt, it's hard to change the code
quickly and reliably.
Writing high-quality software and high-quality tests is an essential part of Continuous Delivery.

Architecture
The architecture of your application is always significant. But when implementing Continuous Delivery, it's
maybe even more so.
If your software is a monolith with much tight coupling between the various components, it's challenging
to deliver your software continuously.
Every part that is changed might impact other parts that didn't change. Automated tests can catch many
of these unexpected dependencies, but it's still hard.
There's also the time aspect when working with different teams. When Team A relies on the service of
Team B, Team A can't deliver until Team B is done. It introduces another constraint on delivery.
Continuous Delivery for large software products is complex.
For smaller parts, it's easier. So, breaking up your software into smaller, independent pieces is a good
solution in many cases.
One approach to solving these issues is to implement microservices.
Continuous Integration is one of the key pillars of DevOps.
Once you have your code in a version control system, you need an automated way of integrating the
code on an ongoing basis.
Azure Pipelines can be used to create a fully featured cross-platform CI and CD service.
It works with your preferred Git provider and can deploy to most major cloud services, including Azure.
This module details continuous integration practice and the pillars for implementing it in the
development lifecycle, its benefits, and properties.

Learning objectives
After completing this module, students and professionals can:
●● Describe deployment patterns.
●● Explain microservices architecture.
●● Understand classical and modern deployment patterns.
●● Plan and design your architecture.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore microservices architecture


Today, you'll frequently hear the term microservices. A microservice is an autonomous, independently
deployable, and scalable software component.
They're small, focused on doing one thing well, and they can run autonomously. If one microservice
changes, it shouldn't impact any other microservices within your landscape.
By choosing a microservices architecture, you'll create a landscape of services that can be developed,
tested, and deployed separately. It implies other risks and complexity.
You need to keep track of the interfaces you create and how they interact with each other. And you
need to maintain multiple application lifecycles instead of one.

In a traditional application, we can often see a multi-layer architecture.


One layer with the UI, a layer with the business logic and services, and a layer with the data services.
Sometimes there are dedicated teams for the UI and the backend. When something needs to change, it
needs to change in all the layers.
When moving towards a microservices architecture, all these layers are part of the same microservice.
Only the microservice contains one specific function.
The interaction between the microservices is done in an asynchronous manner.
They don't call each other directly but make use of asynchronous mechanisms like queues or events.
Each microservice has its lifecycle and Continuous Delivery pipeline. If you built them correctly, you could
deploy new microservice versions without impacting other parts of the system.
Microservice architecture is undoubtedly not a prerequisite for doing Continuous Delivery, but smaller
software components certainly help implement a fully automated pipeline.
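As a concrete illustration of that queue-based interaction, here's a minimal sketch using Python's standard library queue as a stand-in for a real message broker such as Azure Service Bus; the service names and event shape are illustrative only.

```python
# Minimal sketch: two microservices communicating through a queue instead of
# calling each other directly. queue.Queue stands in for a real broker.
import queue
import threading

order_events: "queue.Queue[dict]" = queue.Queue()

def order_service() -> None:
    # Publishes an event; it doesn't know or care who consumes it.
    order_events.put({"event": "OrderPlaced", "order_id": 42})

def billing_service() -> None:
    # Consumes events at its own pace, independently of the publisher.
    event = order_events.get()
    print(f"Billing order {event['order_id']}")
    order_events.task_done()

threading.Thread(target=billing_service, daemon=True).start()
order_service()
order_events.join()  # wait until the event has been processed
```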

Examine classical deployment patterns


When we have our prerequisites to deliver our software continuously, we need to start thinking about a
deployment pattern.
Traditionally a deployment pattern was straightforward.

The software was built, and when all features had been implemented, the software was deployed to an
environment where a group of people could start using it.
The traditional or classical deployment pattern was moving your software to a development stage, a
testing stage, maybe an acceptance or staging stage, and finally a production stage.
The software moved as one piece through the stages.
The production release was, in most cases, a Big Bang release, where users were confronted with many
changes at the same time.
Despite the different stages to test and validate, this approach still involves many risks.
By running all your tests and validation on non-production environments, it's hard to predict what
happens when your production users start using it.
You can run load tests and availability tests, but in the end, there's no place like production.

Understand modern deployment patterns


End-users always use your application differently. Unexpected events will happen in a data center, and
multiple events from multiple users will co-occur, triggering some code that hasn't been tested in that
way.
To overcome this, we need to embrace the fact that some features can only be tested in production.
Testing in production sounds a bit scary, but that shouldn't be the case.
When we talked about separating our functional and technical releases, we already saw that it's possible
to deploy features without exposing them to all users.
When we take this concept of feature toggling and use it with our deployment patterns, we can test our
software in production.
For example:
●● Blue-green deployments.
●● Canary releases.
●● Dark launching.
●● A/B testing.
●● Progressive exposure or ring-based deployment.
●● Feature toggles.

Take a critical look at your architecture


Are your architecture and the current state of your software ready for Continuous Delivery?

Topics you might want to consider are:


●● Is your software built as one giant monolith, or is it divided into multiple components?
●● Can you deliver parts of your application separately?
●● Can you guarantee the quality of your software when deploying multiple times a week?
●● How do you test your software?
●● Do you run one or multiple versions of your software?
●● Can you run multiple versions of your software side by side?
●● What do you need to improve to implement Continuous Delivery?

Summary
This module introduced deployment patterns and explained microservices architecture to help improve
the deployment cycle, and examined classical and modern deployment patterns.
You learned how to describe the benefits and usage of:
●● Describe deployment patterns.
●● Explain microservices architecture.
●● Understand classical and modern deployment patterns.
●● Plan and design your architecture.

Learn more
●● Deployment jobs - Azure Pipelines | Microsoft Docs1.
●● What are Microservices? - Azure DevOps | Microsoft Docs2.
●● Design a CI/CD pipeline-using Azure DevOps - Azure Example Scenarios | Microsoft Docs3.

1 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs
2 https://docs.microsoft.com/devops/deliver/what-are-microservices
3 https://docs.microsoft.com/azure/architecture/example-scenario/apps/devops-dotnet-webapp

Implement blue-green deployment and feature toggles
Introduction
This module describes the blue-green deployment process and introduces feature toggle techniques to
implement in the development process.

Learning objectives
After completing this module, students and professionals can:
●● Explain deployment strategies.
●● Implement blue-green deployment.
●● Understand deployment slots.
●● Implement and manage feature toggles.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see: Create an organization - Azure DevOps4.

●● If you already have your organization created, use the Azure DevOps Demo Generator5 and
create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel
free to create a blank project. See Create a project - Azure DevOps6.

What is blue-green deployment?


Blue-green deployment is a technique that reduces risk and downtime by running two identical
environments. These environments are called blue and green.
Only one of the environments is live, with the live environment serving all production traffic.

4 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
5 https://azuredevopsdemogenerator.azurewebsites.net/
6 https://docs.microsoft.com/azure/devops/organizations/projects/create-project

For this example, blue is currently live, and green is idle.


As you prepare a new version of your software, the deployment and final testing stage occur in an
environment that isn't live: in this example, green. Once you've deployed and thoroughly tested the
software in green, switch the router or load balancer so all incoming requests go to green instead of blue.
Green is now live, and blue is idle.
This technique can eliminate downtime caused by app deployment. Besides, blue-green deployment
reduces risk: if something unexpected happens with your new version on green, you can immediately
roll back to the last version by switching back to blue.
When it involves database schema changes, this process isn't straightforward. You probably can't swap
your application. In that case, your application and architecture should be built to handle both the old
and the new database schema.
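To picture the switch itself, the sketch below models the router as a single setting that points at one of two identical environments. It's a simplified illustration only; the URLs are placeholders, and in practice the switch happens in a load balancer or router, not an in-memory variable.

```python
# Minimal sketch of the routing idea behind blue-green deployment: one
# setting decides which identical environment receives live traffic.
environments = {
    "blue": "https://myapp-blue.example.com",    # placeholder URL
    "green": "https://myapp-green.example.com",  # placeholder URL
}
live = "blue"  # blue currently serves all production traffic

def route_request(path: str) -> str:
    """Return the backend URL the router would send this request to."""
    return environments[live] + path

def swap() -> None:
    """Switch live traffic to the other environment (or back, to roll back)."""
    global live
    live = "green" if live == "blue" else "blue"

print(route_request("/"))  # served by blue
swap()                     # green tested OK: switch traffic
print(route_request("/"))  # now served by green
```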

Explore deployment slots


When using a cloud platform like Azure, doing blue-green deployments is relatively easy. You don't need
to write your own code or set up infrastructure. You can use an out-of-the-box feature called deployment
slots when using web apps.
Deployment slots are a feature of Azure App Service. They're live apps with their own hostnames. You can
create different slots for your application (for example, Dev, Test, or Stage). The production slot is the slot
where your live app stays. You can validate app changes in staging with deployment slots before
swapping them with your production slot.
You can use a deployment slot to set up a new version of your application, and when ready, swap the
production environment with the new staging environment. It's done by an internal swapping of the IP
addresses of both slots.

Swap
The swap eliminates downtime when you deploy your app with seamless traffic redirection, and no
requests are dropped because of swap operations.

To learn more about Deployment slots and swap, see also:


●● Set up Staging Environments in Azure App Service7.
●● Considerations on using Deployment Slots in your DevOps Pipeline8.
●● What happens during a swap.9
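Before swapping a staging slot into production, teams often run a quick smoke test against the slot's hostname. Below is a minimal sketch; the app name is a placeholder, and the -staging suffix follows the slot URL convention you'll see later in this module's walkthrough.

```python
# Minimal sketch of a pre-swap smoke test against a staging deployment slot.
import sys
import requests

STAGING_URL = "https://mywebapp-staging.azurewebsites.net"  # placeholder

response = requests.get(STAGING_URL, timeout=30)
if response.ok:
    print("Staging slot is healthy; safe to swap.")
else:
    print(f"Staging slot returned {response.status_code}; aborting swap.")
    sys.exit(1)
```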

Exercise - set up a blue-green deployment


In this demonstration, you'll investigate Blue-Green Deployment.

Steps
Let's now look at how a release pipeline can be used to implement blue-green deployments.
We'll start by creating a new project with a release pipeline that can deploy the Parts Unlimited template
again.

An initial app deployment


1. Navigate to Azure DevOps Demo Generator in a browser: https://azuredevopsdemogenerator.azurewebsites.net and click Sign in.
You'll be prompted to sign in if necessary.
2. In the Create New Project window, select your existing Organization, set the Project Name to PU
Hosted, and click Choose template.

3. Click on the PartsUnlimited project (not the PartsUnlimited-YAML project), click Select Template,
and click Create Project. When the deployment completes, click Navigate to the project.
4. In the main menu for PU Hosted, click Pipelines, then click Builds, then Queue, and finally Run to
start a build.
The build should succeed.
Note: Warnings might appear but can be ignored for this walkthrough.

7 https://docs.microsoft.com/azure/app-service/deploy-staging-slots
8 https://blogs.msdn.microsoft.com/devops/2017/04/10/considerations-on-using-deployment-slots-in-your-devops-pipeline/
9 https://docs.microsoft.com/azure/app-service/deploy-staging-slots

5. In the main menu, click Releases. Because a continuous integration trigger was in place, a release was
attempted. However, we haven't yet configured the release so it will have failed. Click Edit to enter
edit mode for the release.

6. Select the Dev stage from the drop-down list beside Tasks, then click to select the Azure Deployment
task.
7. In the Azure resource group deployment pane, select your Azure subscription, then click Authorize
when prompted. When authorization completes, choose a Location for the web app.
Note: You might be prompted to sign in to Azure at this point.

8. Click Azure App Service Deploy in the task list to open its settings. Again, select your Azure
subscription. Set the Deployment slot to Staging.

Note: The template creates a production site and two deployment slots: Dev and Staging. We'll use
Staging for our Green site.
9. In the task list, click Dev, and in the Agent job pane, select Azure Pipelines for the Agent pool and
vs2017-win2016 for the Agent Specification.

10. From the top menu, click Pipelines. Click the Dev stage, and in the properties window, rename it to
Green Site. Click the QA stage and click Delete and Confirm. Click the Production stage and click
Delete and Confirm. Click Save, then OK.

11. Hover over the Green Site stage and click the Clone icon when it appears. Change the Stage name to
Production. From the Tasks drop-down list, select Production.

12. Click the Azure App Service Deploy task and uncheck the Deploy to slot option. Click Save and OK.

The production site isn't deployed to a deployment slot. It's deployed to the main site.
13. Click Create release, then Create to create the new release. When created, click the release link to
view its status.

After a while, the deployment should succeed.



Test the green site and the production site


14. Open the blade for the ASPDOTNET resource group created by the project deployment in the Azure
portal. Notice the names of the web apps that have been deployed. Click to open the Staging* web
app's blade. Copy the URL from the top left-hand side.

15. Open a new browser tab and navigate to the copied URL. It will take the application a short while to
compile, but then the Green website (on the Staging slot) should appear.

Note: You can tell that the staging slot is being used because of the -staging suffix in the website URL.
16. Open another new browser tab and navigate to the same URL but without the -staging slot. The
production site should also be working.

Note: Leave both browser windows open for later in the walkthrough.

Configure blue-green swap and approval


Now that both sites are working, let's configure the release pipeline for blue-green deployment.
17. In Azure DevOps, in the main menu for the PU Hosted project, click Pipelines, then click Releases,
then click Edit to return to edit mode.
18. Click the Production stage, click Delete, then Confirm to remove it. Click +Add to add an extra stage
and click Empty job for the template. Set Swap Blue-Green for the Stage name.

19. Click Variables and modify the Scope of WebsiteName to Release.

20. From the Tasks drop-down list, click to select the Swap Blue-Green stage. Click the + to the right-
hand side of Agent Job to add a new task. In the Search box, type CLI.

21. Hover over the Azure CLI template and when the Add button appears, click it, then click to select the
Azure CLI task to open its settings pane.

22. Configure the pane as follows, with your subscription, a Script Location of Inline script, and the
Inline Script:
az webapp deployment slot swap -g $(ResourceGroupName) -n $(WebsiteName) --slot Staging --target-slot production

23. From the menu above the task list, click Pipeline. Click the Pre-deployment conditions icon for the
Swap Blue-Green stage, then in the Triggers pane, enable Pre-deployment approvals.
24. Configure yourself as an approver, click Save, then OK.

Test the blue-green swap


25. In the PU Hosted main menu, click Repos, then click Files to open the project files. Navigate to the
following file.

We'll make a cosmetic change to see that the website has been updated. We'll change the word tires
in the main page rotation to tyres to target an international audience.
26. Click Edit to allow editing, then find the word tires and replace it with the word tyres. Click Commit
and Commit to save the changes and trigger a build and release.

27. From the main menu, click Pipelines, then Builds. Wait for the continuous integration build to
complete successfully.

28. From the main menu, click Releases. Click to open the latest release (at the top of the list).

You're now being asked to approve the deployment swap across to Production. We'll check the green
deployment first.
29. Refresh the Green site (that is, Staging slot) browser tab and see if your change has appeared. It now
shows the altered word.

30. Refresh the Production site browser tab and notice that it still isn't updated.

31. As you're happy with the change, in release details, click Approve, then Approve and wait for the
stage to complete.

32. Refresh the Production site browser tab and check that it now has the updated code.

Final notes
If you check the production site, you'll see it has the previous version of the code.

It's the critical difference of the swap approach compared to a typical deployment process from one
staged site to another: you have a rapid fallback option by swapping the sites back if needed.

Introduction to feature toggles


Feature Flags allow you to change how your system works without making significant changes to the code.
Only a small configuration change is required. In many cases, it will also only be for a few users.
Feature Flags offer a solution to the need to push new code into the trunk and deploy it, even though it
isn't functional yet.
They're commonly implemented as the value of variables used to control conditional logic.
Imagine that your team is all working in the main trunk branch of a banking application.
You've decided it's worth trying to have all the work done in the main branch to avoid messy merge
operations later.
Still, you need to ensure that significant changes to the interest calculations can happen, and people
depend on that code every day.
Worse, the changes will take you weeks to complete. You can't leave the main code broken for that
period.
A Feature Flag could help you get around it.
You can change the code so that other users who don't have the Feature Flag set will use the original
interest calculation code.
The members of your team who are working on the new interest calculations and set to see the Feature
Flag will have the new interest calculation code.
It's an example of a business feature flag used to determine business logic.
The other type of Feature Flag is a Release Flag. Now, imagine that after you complete the work on the
interest calculation code, you're nervous about publishing the new code out to all users at once.
You have a group of users who are better at dealing with new code and issues if they arise, and these
people are often called Canaries.
The name is based on the old use of the Canaries in coal mines.
You change the configuration so that the Canary users also have the Feature Flag set, and they'll start to
test the new code as well. If problems occur, you can quickly disable the flag for them again.
Another release flag might be used for A/B testing. Perhaps you want to find out if a new feature makes it
faster for users to complete a task.
You could have half the users working with the original version of the code and the other half working
with the new code version.
You can then directly compare the outcome and decide if the feature is worth keeping. Feature Flags are
sometimes called Feature Toggles instead.
Because new features can be exposed by just “flipping a switch” at runtime, we can deploy new software
without exposing any new or changed functionality to the end-user.
The question is what strategy you want to use in releasing a feature to an end-user:
●● Reveal the feature to a segment of users, so you can see how the new feature is received and used.
●● Reveal the feature to a randomly selected percentage of users.

●● Reveal the feature to all users at the same time.


The business owner plays a vital role in the process, and you need to work closely with them to choose
the right strategy.
Just as in all the other deployment patterns mentioned in the introduction, the most crucial part is always
looking at how the system behaves.
The idea of separating feature deployment from feature exposure is compelling and something we want
to incorporate in our Continuous Delivery practice.
It helps us achieve more stable releases and better ways to roll back when a new feature produces
problems: we switch it off again and then create a hotfix. By separating deployment from revealing a
feature, you create the opportunity to release any time of day, since the new software won't affect the
system that already works.

What are feature toggles?


Feature toggles are also known as feature flippers, feature flags, feature switches, conditional features,
and so on.
Besides the power they give you on the business side, they also provide an advantage on the development side.
Feature toggles are a great alternative to branching as well. Branching is what we do in our version
control system.
To keep features isolated, we maintain a separate branch.
When we want the software to be in production, we merge it with the release branch and deploy it.
With feature toggles, you build new features behind a toggle. Your feature is “off” when a release occurs
and shouldn't be exposed to or impact the production software.

How to implement a feature toggle


In the purest form, a feature toggle is an IF statement.
When the switch is off, it executes the code in the IF branch; otherwise, it executes the ELSE branch.
You can make it much more intelligent, controlling the feature toggles from a dashboard or building
capabilities for roles, users, and so on.
If you want to implement feature toggles, many different frameworks are available, both commercially and as open source.
For more information, see also Explore how to progressively expose your features in production for
some or all users10.

Describe feature toggle maintenance


A feature toggle is just code. And to be more specific, conditional code. It adds complexity to the code
and increases the technical debt.
Be aware of that when you write them, and clean up when you don't need them anymore.
While feature flags can be helpful, they can also introduce many issues of their own.
The idea of a toggle is that it's short-lived and only stays in the software when it's necessary to release it
to the customers.

10 https://docs.microsoft.com/azure/devops/articles/phase-features-with-feature-flags
You can classify the different types of toggles based on two dimensions, as described by Martin Fowler.
He states that you can look at how long a toggle should stay in your codebase on one dimension and, on the other, how dynamic the toggle needs to be.

Planning feature flag lifecycles

The most important thing is to remember that you need to remove the toggles from the software.
If you don't do that, they'll become a form of technical debt if you keep them around for too long.
As soon as you introduce a feature flag, you've added to your overall technical debt.
Like other technical debt, they're easy to add, but the longer they're part of your code, the bigger the
technical debt becomes because you've added scaffolding logic needed for the branching within the
code.
The cyclomatic complexity of your code keeps increasing as you add more feature flags, as the number of
possible paths through the code increases.
Using feature flags can make your code less solid and can also add these issues:
●● The code is harder to test effectively as the number of logical combinations increases.
●● The code is harder to maintain because it's more complex.
●● The code might even be less secure.
●● It can be harder to duplicate problems when they're found.
A plan for managing the lifecycle of feature flags is critical. As soon as you add a flag, you need to plan
for when it will be removed.
Feature flags shouldn't be repurposed. There have been high-profile failures because teams decided to
reuse an old flag that they thought was no longer part of the code for a new purpose.

Tooling for release flag management


The amount of effort required to manage feature flags shouldn't be underestimated. It's essential to
consider using tooling that tracks:
●● Which flags exist.
●● Which flags are enabled in which environments, situations, or target customer categories.
●● The plan for when the flags will be used in production.
●● The plan for when the flags will be removed.
Using a feature flag management system lets you get the benefits of feature flags while minimizing the risk of letting your technical debt grow too large.
Azure App Configuration offers a Feature Manager. See Azure App Configuration Feature Manager11.
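As a concrete illustration, the following Azure CLI sketch manages a flag in an App Configuration store. The store name MyAppConfig and flag name BetaCalculation are hypothetical examples, not part of this course's labs.

# A minimal sketch, assuming an existing App Configuration store; all names are hypothetical.
az appconfig feature set --name MyAppConfig --feature BetaCalculation --yes

# Flip the flag centrally, without redeploying the application.
az appconfig feature enable --name MyAppConfig --feature BetaCalculation --yes
az appconfig feature disable --name MyAppConfig --feature BetaCalculation --yes

Because the flag's state lives in the store rather than in the code, listing, enabling, and disabling flags become management tasks instead of code changes.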

Summary
This module described the blue-green deployment process and introduced feature toggle techniques to
implement in the development process.
You learned how to:
●● Explain deployment strategies.
●● Implement blue-green deployment.
●● Understand deployment slots.
●● Implement and manage feature toggles.

Learn more
●● Release Engineering Continuous deployment - Azure Architecture Center | Microsoft Docs12.
●● Deployment jobs - Azure Pipelines | Microsoft Docs13.
●● Configure canary deployments for Azure Linux virtual machines - Azure Virtual Machines |
Microsoft Docs14.
●● Progressive experimentation with feature flags - Azure DevOps | Microsoft Docs15.
●● Set up staging environments - Azure App Service | Microsoft Docs16.

11 https://docs.microsoft.com/azure/azure-app-configuration/manage-feature-flags
12 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-cd
13 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs
14 https://docs.microsoft.com/azure/virtual-machines/linux/tutorial-azure-devops-blue-green-strategy
15 https://docs.microsoft.com/devops/operate/progressive-experimentation-feature-flags
16 https://docs.microsoft.com/azure/app-service/deploy-staging-slots
Implement canary releases and dark launching


Introduction
This module describes deployment strategies around canary releases and dark launching and examines
traffic managers.

Learning objectives
After completing this module, students and professionals can:
●● Describe deployment strategies.
●● Implement canary release.
●● Explain traffic manager.
●● Understand dark launching.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore canary releases


The term canary release comes from the days that miners took a canary with them into the coal mines.
The purpose of the canary was to identify the existence of toxic gases.
The canary would die much sooner than the miner, giving them enough time to escape the potentially
lethal environment.
A canary release is a way to identify potential problems without exposing all your end users to the issue
at once.
The idea is that you expose a new feature only to a minimal subset of users first.
By closely monitoring what happens when you enable the feature, you can get relevant information from
this set of users and either continue or rollback (disable the feature).
If the canary release shows potential performance or scalability problems, you can build a fix for that and
apply that in the canary environment.
After the canary release has proven to be stable, you can move the canary release to the actual production environment.
Canary releases can be implemented using a combination of feature toggles, traffic routing, and deployment slots.
●● You can route a percentage of traffic to a deployment slot with the new feature enabled.
●● You can target a specific user segment by using feature toggles.

Examine traffic manager


In the previous module, we saw how Deployment slots in Azure Web Apps enable you to swap between
two different versions of your application quickly.
Suppose you want more control over how traffic flows to your different versions. Deployment slots alone aren't enough.
To control traffic in Azure, you can use a component called Azure Traffic Manager.

Azure Traffic Manager


Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally
to services across global Azure regions while providing high availability and responsiveness.
Traffic Manager uses DNS to direct client requests to the most appropriate service endpoint based on a
traffic-routing method and the health of the endpoints.
An endpoint is an Internet-facing service hosted inside or outside of Azure.
Traffic Manager provides a range of traffic-routing methods and endpoint monitoring options to suit
different application needs and automatic failover models.
Traffic Manager is resilient to failure, including the breakdown of an entire Azure region.
While the available options can change over time, the Traffic Manager currently provides six options to
distribute traffic:
●● Priority: Select Priority when you want to use a primary service endpoint for all traffic and provide
backups if the primary or the backup endpoints are unavailable.
●● Weighted: Select Weighted when you want to distribute traffic across a set of endpoints, either evenly
or according to weights, which you define.
●● Performance: Select Performance when you have endpoints in different geographic locations, and
you want end users to use the “closest” endpoint for the lowest network latency.
●● Geographic: Select Geographic so that users are directed to specific endpoints (Azure, External, or
Nested) based on which geographic location their DNS query originates from. It empowers Traffic
Manager customers to enable scenarios where knowing a user's geographic region and routing them based on that is necessary. Examples include following data sovereignty mandates, localization of content & user experience, and measuring traffic from different regions.
●● Multivalue: Select MultiValue for Traffic Manager profiles that can only have IPv4/IPv6 addresses as
endpoints. When a query is received for this profile, all healthy endpoints are returned.
●● Subnet: Select the Subnet traffic-routing method to map sets of end-user IP address ranges to a
specific endpoint within a Traffic Manager profile. The endpoint returned will be mapped for that
request's source IP address when a request is received.
When we look at the options the Traffic Manager offers, the most used option for Continuous Delivery is
routing traffic based on weights.
Note: Traffic is only routed to endpoints that are currently available.
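To make the weighted option concrete, the following Azure CLI sketch creates a weighted Traffic Manager profile and registers two endpoints. The profile name, resource group, DNS label, and the variables holding the web app resource IDs are hypothetical examples.

# A minimal sketch, assuming two existing web apps; all names here are hypothetical.
az network traffic-manager profile create --name canary-profile --resource-group MyRG --routing-method Weighted --unique-dns-name my-canary-demo

# Send roughly 10% of requests to the new version by weighting the endpoints 90/10.
az network traffic-manager endpoint create --name stable --profile-name canary-profile --resource-group MyRG --type azureEndpoints --target-resource-id $STABLE_APP_ID --weight 90
az network traffic-manager endpoint create --name canary --profile-name canary-profile --resource-group MyRG --type azureEndpoints --target-resource-id $CANARY_APP_ID --weight 10

Because Traffic Manager works at the DNS level, shifting more traffic to the new version is only a matter of updating the endpoint weights.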
For more information, see also:
●● What is Traffic Manager?17
●● How Traffic Manager works18
●● Traffic Manager Routing Methods19

Controlling your canary release


Using a combination of feature toggles, deployment slots, and Traffic Manager, you can achieve complete
control over the traffic flow and enable your canary release.
You deploy the new feature to the new deployment slot or a new instance of an application, and you
enable the feature after verifying the deployment was successful.
Next, you set the traffic to be distributed to a small percentage of the users.
You carefully watch the application's behavior, for example, by using application insights to monitor the
performance and stability of the application.
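As a hedged sketch of those building blocks, the following Azure CLI commands create a staging slot, route a small slice of production traffic to it, and back out if monitoring raises concerns. The web app and resource group names are hypothetical examples.

# A minimal sketch, assuming an existing App Service web app; names are hypothetical.
az webapp deployment slot create --name MyWebApp --resource-group MyRG --slot staging

# After deploying the new feature to the staging slot, route 5% of traffic to it.
az webapp traffic-routing set --resource-group MyRG --name MyWebApp --distribution staging=5

# If the telemetry looks bad, send all traffic back to production.
az webapp traffic-routing clear --resource-group MyRG --name MyWebApp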

Understand dark launching


Dark launching is in many ways like canary releases.
However, the difference here is that you're looking to assess users' responses to new features in your frontend rather than testing the performance of the backend.
The idea is that rather than launch a new feature for all users, you instead release it to a small set of users.
Usually, these users aren't aware they're being used as test users for the new feature, and often you don't even highlight the new feature to them; hence the term “dark” launching.
Another example of dark launching is launching a new feature and using it on the backend to get metrics.
Let me illustrate with a real-world “launch” example.
As Elon Musk describes in his biography, they apply all kinds of Agile development principles in SpaceX.
SpaceX builds and launches rockets to launch satellites. SpaceX also uses dark launching.
When they have a new version of a sensor, they install it alongside the old one.
All data is measured and gathered both by the old and the new sensor.

17 https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview
18 https://docs.microsoft.com/azure/traffic-manager/traffic-manager-how-it-works
19 https://docs.microsoft.com/azure/traffic-manager/traffic-manager-routing-methods
Afterward, they compare the outcomes of both sensors.


Only when the new one delivers the same or improved results is the old sensor replaced.
The same concept can be applied to software. You run all data and calculations through your new feature,
but it isn't “exposed” yet.

How to implement dark launching


In essence, dark launching doesn't differ from a canary release or the implementation and switching of a
feature toggle.
The feature is released and only exposed at a particular time.
As such, the techniques described in the previous chapters also apply to dark launching.

Summary
This module described deployment strategies around canary releases and dark launching and examined
traffic managers.
You learned how to:
●● Describe deployment strategies.
●● Implement canary release.
●● Explain traffic manager.
●● Understand dark launching.

Learn more
●● Release Engineering Continuous deployment - Azure Architecture Center | Microsoft Docs20.
●● Canary deployment strategy for Kubernetes deployments - Azure Pipelines | Microsoft Docs21.
●● Azure Traffic Manager | Microsoft Docs22.
●● Progressive experimentation with feature flags - Azure DevOps | Microsoft Docs23.

20 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-cd
21 https://docs.microsoft.com/azure/devops/pipelines/ecosystems/kubernetes/canary-demo
22 https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview
23 https://docs.microsoft.com/devops/operate/progressive-experimentation-feature-flags
Implement A-B testing and progressive exposure deployment


Introduction
This module introduces A/B testing and progressive exposure deployment concepts and explores CI/CD with deployment rings (ring-based deployment).

Learning objectives
After completing this module, students and professionals can:
●● Implement progressive exposure deployment.
●● Implement A/B testing.
●● Implement CI/CD with deployment rings.
●● Identify the best deployment strategy.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

What is A/B testing?


A/B testing (also known as split testing or bucket testing) compares two versions of a web page or app
against each other to determine which one does better.
A/B testing is mainly an experiment where two or more page variants are shown to users at random.
Also, statistical analysis is used to determine which variation works better for a given conversion goal.

A/B testing isn't part of continuous delivery or a prerequisite for continuous delivery. It's more the other way around.
Continuous delivery allows you to deliver MVPs to a production environment and your end-users quickly.
Common aims are to experiment with new features, often to see if they improve conversion rates.
Experiments are continuous, and the impact of change is measured.
A/B testing is out of scope for this course.


But because it's a powerful concept enabled by implementing continuous delivery, it's mentioned here for you to explore further.

Explore CI-CD with deployment rings


Progressive exposure deployment, also called ring-based deployment, was first discussed in Jez Humble's
Continuous Delivery book.
Ring-based deployments support the production-first DevOps mindset and limit the impact on end users while gradually deploying and validating changes in production.
Impact (also called blast radius) is evaluated through observation, testing, analysis of telemetry, and user
feedback.
In DevOps, rings are typically modeled as stages.
Rings are, in essence, an extension of the canary stage. The canary release releases to a stage to measure impact. Adding another ring is essentially the same thing.


With a ring-based deployment, you first deploy your changes to risk-tolerant customers and progressively roll out to a more extensive set of customers.
The Microsoft Windows team, for example, uses these rings.
When you have identified multiple groups of users and see value in investing in a ring-based deployment, you need to define your setup.
Some organizations that use canary releasing have multiple deployment slots set up as rings.
The first release of the feature to ring 0 targets a well-known set of users, mostly their internal organization.
After things have been proven stable in ring 0, they propagate the release to the next ring, which includes a limited set of users outside their organization.
And finally, the feature is released to everyone. It is often done by flipping the switch on the feature
toggles in the software.
As in the other deployment patterns, monitoring and health checks are essential.
By using post-deployment release gates that check a ring for health, you can define an automatic
propagation to the next ring after everything is stable.
When a ring isn't healthy, you can halt the deployment to the following rings to reduce the impact.
For more information, see also Explore how to progressively expose your Azure DevOps extension
releases in production to validate before impacting all users24.

Exercise - ring-based deployment


In this exercise, you'll investigate ring-based deployment.

Steps
Let's look at how a release pipeline can stage features using ring-based deployments.

24 https://docs.microsoft.com/azure/devops/articles/phase-rollout-with-rings
When I have a new feature, I might want to release it to a few users first, just in case something goes
wrong.
I could do it in authenticated systems by having those users as members of a security group and letting
members of that group use the new features.
However, on a public website, I might not have logged-in users. Instead, I might want to direct a small
percentage of the traffic to use the new features.
Let's see how that's configured.
We'll create a new release pipeline that isn't triggered by code changes but is run manually when we want to slowly release a new feature.
We start by assuming that a new feature has already been deployed to the Green site (the staging slot).
1. In the main menu for the PU Hosted project, click Pipelines, then click Release, click +New, then
click New release pipeline.
2. When prompted to select a template, click Empty job from the top of the pane.
3. Click on the Stage 1 stage and rename it to Ring 0 (Canary).

4. Hover over the New release pipeline name at the top of the page, and when a pencil appears, click it,
and change the pipeline name to Ring-based Deployment.

5. Select the Ring 0 (Canary) stage from the Tasks drop-down list. Click the + to add a new task, and from the list of tasks, hover over Azure CLI. When the Add button appears, click it, then click to select the Azure CLI task in the task list for the stage.
6. In the Azure CLI settings pane, select your Azure subscription, set Script Location to Inline script,
set the Inline Script to the following, then click Save and OK.
az webapp traffic-routing set --resource-group $(ResourceGroupName) --name $(WebsiteName) --distribution staging=10
This distribution will cause 10% of the web traffic to be sent to the new feature site (currently the staging slot).
7. From the menu above the task list, click Variables. Create two new variables, ResourceGroupName and WebsiteName. (Make sure to use your correct website name).

8. From the menu above the variables, click Pipeline to return to editing the pipeline. Hover over the
Ring 0 (Canary) stage and click the Clone icon when it appears. Select the new stage and rename it
to Ring 1 (Early Adopters).

9. Select the Ring 1 (Early Adopters) stage from the Tasks drop-down list and select the Azure CLI task.
Modify the script by changing the value from 10 to 30 to cause 30% of the traffic to go to the new
feature site.
az webapp traffic-routing set --resource-group $(ResourceGroupName) --name $(WebsiteName) --distribution staging=30
This allows us to move the new feature into broader distribution if it works well for the smaller group of users.
10. From the menu above the tasks, click Pipeline to return to editing the release pipeline. Hover over the
Ring 1 (Early Adopters) stage and when the Clone icon appears, click it. Click to select the new stage
and rename it to Public. Click Save and OK.

11. Click the Pre-deployment conditions icon for the Ring 1 (Early Adopters) stage and add yourself as
a pre-deployment approver. Do the same for the Public stage. Click Save and OK.

The first step in releasing the new code to the public is to swap the new feature site (that is, the staging site) with the production slot so that production is now running the new code.
12. From the Tasks drop-down list, select the Public stage. Select the Azure CLI task, change the Display
name to Swap sites and change the Inline Script to the following command:
az webapp deployment slot swap -g $(ResourceGroupName) -n $(WebsiteName) --slot staging --target-slot production
Next, we need to remove any traffic from the staging site.
13. Right-click the Swap sites task and click Clone task(s). Select the Swap sites copy task, change its
Display name to Stop staging traffic, and set the Inline Script to the following:
az webapp traffic-routing set --resource-group $(ResourceGroupName) --name $(WebsiteName) --distribution staging=0
14. Click Save, then OK to save your work.
15. Click Create release and Create to start a release. Click the release link to see the progress.

Wait until Ring 0 (Canary) has succeeded.

Currently, 10% of the traffic will go to the new feature site.


16. Click Approve on the Ring 1 (Early Adopters) stage, and then Approve.
When this stage completes, 30% of the traffic will go to the early adopters in ring 1.

17. Click Approve on the Public stage, and then Approve.


When this stage completes, all the traffic will go to the swapped production site, running the new
code.
The new feature has been fully deployed.

Summary
This module introduced A/B testing and progressive exposure deployment concepts and explored CI/CD with deployment rings (ring-based deployment).
You learned how to:
●● Implement progressive exposure deployment.
●● Implement A/B testing.
●● Implement CI/CD with deployment rings.
●● Identify the best deployment strategy.

Learn more
●● Progressively expose your releases using deployment rings - Azure DevOps | Microsoft Docs25.
●● Testing your app and Azure environment - Azure Architecture Center | Microsoft Docs26.
●● What is Continuous Delivery? - Azure DevOps | Microsoft Docs27.

25 https://docs.microsoft.com/azure/devops/migrate/phase-rollout-with-rings
26 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-testing
27 https://docs.microsoft.com/devops/deliver/what-is-continuous-delivery
Integrate with identity management systems


Introduction
This module describes integration with GitHub and single sign-on (SSO) for authentication, service principals, and managed service identities.

Learning objectives
After completing this module, students and professionals can:
●● Integrate Azure DevOps with identity management systems.
●● Integrate GitHub with single sign-on (SSO).
●● Understand and create a service principal.
●● Create managed service identities.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Integrate GitHub with single sign-on (SSO)


To use SSO, you need to connect your identity provider to GitHub at the organization level.
GitHub offers both SAML and SCIM support.

Provider Available Support


Active Directory Federation Services (ADFS) SAML
Azure Active Directory (Azure AD) SAML and SCIM
Okta SAML and SCIM
OneLogin SAML and SCIM
PingOne SAML
Shibboleth SAML
For more information, see:
●● Enforcing SAML single sign-on for your organization28.

Explore service principals


Azure AD offers different kinds of mechanisms for authentication. In DevOps Projects, though, one of the
most important is the use of Service Principals.

28 https://docs.github.com/organizations/managing-saml-single-sign-on-for-your-organization/enforcing-saml-single-sign-on-for-your-
organization
Azure AD applications
Applications are registered with an Azure AD tenant within Azure Active Directory. Registering an application creates an identity configuration. You also determine who can use it:
●● Accounts in the same organizational directory.
●● Accounts in any organizational directory.
●● Accounts in any organizational directory and Microsoft Accounts (personal).
●● Microsoft Accounts (Personal accounts only).

Client secret
Once the application is created, you should then create at least one client secret for the application.
Grant permissions
The application identity can then be granted permissions within services and resources that trust Azure
Active Directory.

Service principal
To access resources, an entity must be represented by a security principal. To connect, the entity must
know:
●● TenantID.
●● ApplicationID.
●● Client Secret.
For more information on Service Principals, see App Objects and Service Principals29.
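As a hedged illustration, the following Azure CLI sketch creates a service principal and grants it a role over a resource group. The display name, role, and scope are hypothetical examples.

# A minimal sketch; the name, role, and scope are hypothetical examples.
az ad sp create-for-rbac --name my-pipeline-sp --role Contributor --scopes /subscriptions/<subscription-id>/resourceGroups/MyRG

# The appId, password, and tenant values in the JSON output map to the
# ApplicationID, Client Secret, and TenantID the entity needs to connect.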

Explore Managed Identity


Another authentication mechanism offered by Azure AD is Managed identities.
Imagine that you need to connect from an Azure Data Factory (ADF) to an Azure SQL Database. What
identity should ADF present to the database?

The traditional answer would have been to use SQL Authentication with a username and password. That approach leaves yet another credential that needs to be managed on an ongoing basis.

Identity of the service


Many Azure services expose their own identity. It isn't an identity that you need to manage. For example,
you don't need to worry about password policies and so on.
You can assign permissions to that identity, as with any other Azure AD identity.
In the ADF example, you can add the ADF managed service identity (MSI) as an Azure SQL Database user and add it to roles within the database.

29 https://docs.microsoft.com/azure/active-directory/develop/app-objects-and-service-principals
Managed identity types


There are two types of managed identities:
●● System-assigned - It's the type of identity described above. Many, but not all, services expose these identities.
●● User-assigned - you can create a managed identity as an Azure resource. It can then be assigned to
one or more instances of a service.
For more information, see: What are managed identities for Azure resources?30
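The following Azure CLI sketch shows both flavors; the web app and identity names are hypothetical examples.

# A minimal sketch; the app and identity names are hypothetical.
# Enable a system-assigned managed identity on an existing web app.
az webapp identity assign --name MyWebApp --resource-group MyRG

# Create a user-assigned managed identity that several resources can share.
az identity create --name my-shared-identity --resource-group MyRG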

Summary
This module described integration with GitHub and single sign-on (SSO) for authentication, service principals, and managed service identities.
You learned how to:
●● Integrate Azure DevOps with identity management systems.
●● Integrate GitHub with single sign-on (SSO).
●● Understand and create a service principal.
●● Create managed service identities.

Learn more
●● About security, authentication, authorization, and security policies - Azure DevOps | Microsoft
Docs31.
●● Azure Identity and Access Management Solutions | Microsoft Azure32.
●● About authentication with SAML single sign-on - GitHub Docs33.
●● Connect to Microsoft Azure - Azure Pipelines | Microsoft Docs34.

30 https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview
31 https://docs.microsoft.com/azure/devops/organizations/security/about-security-identity
32 https://azure.microsoft.com/product-categories/identity/
33 https://docs.github.com/authentication/authenticating-with-saml-single-sign-on/about-authentication-with-saml-single-sign-on
34 https://docs.microsoft.com/azure/devops/pipelines/library/connect-to-azure
Manage application configuration data


Introduction
This module explores ways to rethink application configuration data and the separation of concerns
method. It helps you understand configuration patterns and how to integrate Azure Key Vault with Azure
Pipelines.

Learning objectives
After completing this module, students and professionals can:
●● Rethink application configuration data.
●● Understand separation of concerns.
●● Integrate Azure Key Vault with Azure Pipelines.
●● Manage secrets, tokens, and certificates.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Rethink application configuration data


Configuration information in files
Most application runtime environments include configuration information that is held in files deployed
with the application. In some cases, it's possible to edit these files to change the application behavior
after deploying.
However, changes to the configuration require the application to be redeployed, often resulting in
unacceptable downtime and other administrative overhead.
Local configuration files also limit the configuration to a single application, but sometimes it would be
helpful to share configuration settings across multiple applications.
Examples include database connection strings, UI theme information, or the URLs of queues and storage
used by a related set of applications.
It's challenging to manage changes to local configurations across multiple running instances of the
application, especially in a cloud-hosted scenario.
It can result in instances using different configuration settings while the update is being deployed. Also,
updates to applications and components might require changes to configuration schemas.
Many configuration systems don't support different versions of configuration information.

Example
It's 2:00 AM. Adam is done making all changes to his super fantastic code piece.
The tests are all running fine. They hit commit -> push -> all commits pushed successfully to git. Happily,
Adam drives back home. 10 mins later, they get a call from the SecurityOps engineer, “Adam, did you
push the Secret Key to our public repo?”
YIKES! That blah.config file, Adam thinks. How could I have forgotten to include that in .gitignore? The
nightmare has already begun.
We can surely blame Adam for checking in sensitive secrets and not following the recommended practices for managing configuration files.
Still, the bigger point is that if the underlying toolchain had abstracted the configuration management away from the developer, this fiasco would never have happened!

History
The virus was injected a long time ago. Since the early days of .NET, the app.config and web.config files have promoted the notion that developers can make their code flexible by moving typical configuration into these files.
When used effectively, these files have proven to be worthy of dynamic configuration changes. However, much of the time, we see the misuse of what goes into these files.
A common culprit is how samples and documentation have been written. Most samples on the web use these config files to store key elements such as ConnectionStrings and even passwords.
The values might be obfuscated, but what we are telling developers is, "hey, it's a great place to push your secrets!".
So, in a world where we're preaching the use of configuration files, we can't blame developers for not managing their governance.
We aren't challenging the use of configuration here. It's an absolute need for an exemplary implementation. Instead, we should debate the use of multiple JSON, XML, and YAML files to maintain configuration settings.
Config files are great for ensuring the flexibility of the application. However, they aren't straightforward to manage, especially across environments.

A ray of hope: The DevOps movement


In recent years, we've seen a shift toward following excellent practices around effective DevOps and some great tools (Chef, Puppet) for managing configuration for different languages.
While these have helped inject values during the CI/CD pipeline and greatly simplified configuration management, the blah.config concept hasn't gone away.
Frameworks like ASP.NET Core support the notion of appSettings.json across environments.
The framework has made it practical to use across environments through interfaces like IHostingEnvironment and IConfiguration, but we can do better.

Explore separation of concerns


One of the key reasons we would want to move the configuration away from source control is to outline
responsibilities.
Let's define some roles to elaborate on them. None of those are new concepts but rather a high-level
summary:
●● Configuration custodian: Responsible for generating and maintaining the life cycle of configuration
values. These include CRUD on keys, ensuring the security of secrets, regeneration of keys and tokens,
defining configuration settings such as Log levels for each environment. This role can be owned by
operation engineers and security engineering while injecting configuration files through proper
DevOps processes and CI/CD implementation. They do not define the actual configuration but are
custodians of their management.

●● Configuration consumer: Responsible for defining the schema (loose term) for the configuration that needs to be in place and then consuming the configuration values in the application or library code. This role belongs to the Dev and Test teams. They shouldn't be concerned about the values of keys but rather about each key's capability. For example, a developer may need a different ConnectionString in the application but not know the actual value across different environments.

●● Configuration store: The underlying store used to store the configuration, while it can be a simple
file, but in a distributed application, it needs to be a reliable store that can work across environments.
The store is responsible for persisting values that modify the application's behavior per environment
but aren't sensitive and don't require any encryption or HSM modules.

●● Secret store: While you can store configuration and secrets together, it violates our separation of
concern principle, so the recommendation is to use a different store for persisting secrets. It allows a
secure channel for sensitive configuration data such as ConnectionStrings, enables the operations
team to have Credentials, Certificates, and Tokens in one repository, and minimizes the security risk if the
Configuration Store gets compromised.

Understand external configuration store patterns


These patterns store the configuration information in an external location and provide an interface that
can be used to quickly and efficiently read and update configuration settings.
The type of external store depends on the hosting and runtime environment of the application.
A cloud-hosted scenario is typically a cloud-based storage service but could be a hosted database or other systems.
The backing store you choose for configuration information should have an interface that provides
consistent and easy-to-use access.
It should expose the information in a correctly typed and structured format.
The implementation might also need to authorize users' access to protect configuration data and be
flexible enough to allow storage of multiple configuration versions (such as development, staging, or
production, including many release versions of each one).
Many built-in configuration systems read the data when the application starts up and cache the data in
memory to provide fast access and minimize the impact on application performance.
Depending on the type of backing store used and its latency, it might be helpful to implement a caching
mechanism within the external configuration store.
For more information, see the Caching Guidance. The figure illustrates an overview of the External Configuration Store pattern with an optional local cache.

This pattern is helpful for:


●● Configuration settings are shared between multiple applications and application instances, or where a
standard configuration must be enforced across various applications and application instances.
●● A standard configuration system doesn't support all the required configuration settings, such as
storing images or complex data types.
●● As a complementary store for some application settings, they allow applications to override some or
all the centrally stored settings.
●● To simplify the administration of multiple applications and optionally monitor configuration settings
by logging some or all types of access to the configuration store.

Introduction to Azure App Configuration


Azure App Configuration is a service for central management of application settings and feature flags.
Modern programs include distributed components, each of which needs its own settings.
It's prevalent with microservice-based applications and with serverless applications.
Distributed configuration settings can lead to hard-to-troubleshoot deployment errors.
Azure App Configuration service stores all the settings for your application and secures their access in
one place.
Azure App Configuration service provides the following features:
●● A fully managed service that can be set up in minutes.
●● Flexible key representations and mappings.
●● Tagging with labels.
●● A point-in-time replay of settings.
●● Dedicated UI for feature flag management.
●● Comparison of two sets of configurations on custom-defined dimensions.


●● Enhanced security through Azure managed identities.
●● Complete data encryption, at rest or in transit.
●● Native integration with popular frameworks.
App Configuration complements Azure Key Vault, which is used to store application secrets. App Configuration makes it easier to implement the following scenarios:
●● Centralize management and distribution of hierarchical configuration data for different environments
and geographies.
●● Dynamically change application settings without the need to redeploy or restart an application.
●● Control feature availability in real time.
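Before an application can use any of these capabilities, a store must exist. The following Azure CLI sketch creates one; the store name, resource group, and location are hypothetical examples.

# A minimal sketch; the store name, resource group, and location are hypothetical.
az appconfig create --name MyAppConfig --resource-group MyRG --location westeurope --sku Standard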

Use App Configuration


The easiest way to add an App Configuration store to your application is through one of Microsoft's
client libraries.
Based on the programming language and framework, the following best methods are available to you.

Programming language and framework How to connect


.NET Core and ASP.NET Core App Configuration provider for .NET Core
.NET Framework and ASP.NET App Configuration builder for .NET
Java Spring App Configuration client for Spring Cloud
Others App Configuration REST API

Examine Key-value pairs


Azure App Configuration stores configuration data as key-value pairs.

Keys
Keys serve as the name for key-value pairs and are used to store and retrieve corresponding values.
It's common to organize keys into a hierarchical namespace by using a character delimiter, such as / or :.
Use a convention that's best suited for your application.
App Configuration treats keys as a whole. It doesn't parse keys to figure out how their names are structured or enforce any rule on them.
Keys stored in App Configuration are case-sensitive, Unicode-based strings.
The keys app1 and App1 are distinct in an App Configuration store.
When you use configuration settings within an application, keep this in mind, because some frameworks handle configuration keys case-insensitively.
You can use any Unicode character in key names entered into App Configuration except for the asterisk (*), comma (,), and backslash (\). These characters are reserved. If you need to include a reserved character, you must escape it by using \{Reserved Character}.
There's a combined size limit of 10,000 characters on a key-value pair.
This limit includes all characters in the key, its value, and all associated optional attributes.
Within this limit, you can have many hierarchical levels for keys.

Design key namespaces


There are two general approaches to naming keys used for configuration data: flat or hierarchical.
These methods are similar from an application usage standpoint, but hierarchical naming offers several
advantages:
●● Easier to read. Instead of one long sequence of characters, delimiters in a hierarchical key name
function as spaces in a sentence.
●● Easier to manage. A key name hierarchy represents logical groups of configuration data.
●● Easier to use. It's simpler to write a query that pattern-matches keys in a hierarchical structure and
retrieves only a portion of configuration data.
Below are some examples of how you can structure your key names into a hierarchy:
●● Based on component services
AppName:Service1:ApiEndpoint
AppName:Service2:ApiEndpoint

●● Based on deployment regions


AppName:Region1:DbEndpoint
AppName:Region2:DbEndpoint

Label keys
Key values in App Configuration can optionally have a label attribute.
Labels are used to differentiate key values with the same key.
A key app1 with labels A and B forms two separate keys in an App Configuration store.
By default, the label for a key value is empty or null.
Label provides a convenient way to create variants of a key. A common use of labels is to specify multiple
environments for the same key:
Key = AppName:DbEndpoint & Label = Test
Key = AppName:DbEndpoint & Label = Staging
Key = AppName:DbEndpoint & Label = Production
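A hedged Azure CLI sketch of the same idea, setting one key with a label per environment; the store name and endpoint values are hypothetical examples:

# A minimal sketch; the store name, key, and values are hypothetical.
az appconfig kv set --name MyAppConfig --key AppName:DbEndpoint --value test-db.example.com --label Test --yes
az appconfig kv set --name MyAppConfig --key AppName:DbEndpoint --value prod-db.example.com --label Production --yes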

Version key values


App Configuration doesn't version key values automatically as they're modified.
Use labels as a way to create multiple versions of a key value.
For example, you can input an application version number or a Git commit ID in labels to identify key
values associated with a particular software build.
Query key values


Each key value is uniquely identified by its key plus a label that can be null. You query an App Configuration store for key values by specifying a pattern.
The App Configuration store returns all key values that match the pattern and their corresponding values
and attributes.

Values
Values assigned to keys are also Unicode strings. You can use all Unicode characters for values.
There's an optional user-defined content type associated with each value.
Use this attribute to store information, for example, an encoding scheme, about a value that helps your
application process it properly.
Configuration data stored in an App Configuration store, which includes all keys and values, is encrypted
at rest and in transit.
App Configuration isn't a replacement solution for Azure Key Vault. Don't store application secrets in it.

Examine App configuration feature management


Feature management is a modern software development practice that decouples feature release from
code deployment and enables quick changes to feature availability on demand.
Feature Flags are discussed in another module, but at this point, it's worth noting that Azure App Configuration Service can be used to store and manage feature flags. (Feature flags are also known as feature toggles, feature switches, and other names.)

Basic concepts
Here are several new terms related to feature management:
●● Feature flag: A feature flag is a variable with a binary state of on or off. The feature flag also has an
associated code block. The state of the feature flag triggers whether the code block runs or not.
●● Feature manager: A feature manager is an application package that handles the lifecycle of all the
feature flags in an application. The feature manager typically provides more functionality, such as
caching feature flags and updating their states.
●● Filter: A filter is a rule for evaluating the state of a feature flag. A user group, a device or browser type,
a geographic location, and a time window are all examples of what a filter can represent.
Effective implementation of feature management consists of at least two components working in concert:
●● An application that makes use of feature flags.
●● A separate repository that stores the feature flags and their current states.
How these components interact is illustrated in the following examples.

Feature flag usage in code


The basic pattern for implementing feature flags in an application is simple. You can think of a feature
flag as a Boolean state variable used with an if conditional statement in your code:
if (featureFlag) {
    // Run the following code.
}

In this case, if featureFlag is set to True, the enclosed code block is executed; otherwise, it's skipped.
You can set the value of featureFlag statically, as in the following code example:
bool featureFlag = true;

You can also evaluate the flag's state based on certain rules:
bool featureFlag = isBetaUser();

A slightly more complicated feature flag pattern includes an else statement as well:
if (featureFlag) {
    // The following code runs if the featureFlag value is true.
} else {
    // The following code runs if the featureFlag value is false.
}

Feature flag declaration


Each feature flag has two parts: a name and a list of one or more filters used to evaluate if a feature's state is on (that is, when its value is True).
A filter defines a use case for when a feature should be turned on.
When a feature flag has multiple filters, the filter list is traversed until one of the filters determines the
feature should be enabled.
At that point, the feature flag is on, and any remaining filter results are skipped. If no filter indicates the
feature should be enabled, the feature flag is off.
The feature manager supports appsettings.json as a configuration source for feature flags.
The following example shows how to set up feature flags in a JSON file:
"FeatureManagement": {
"FeatureA": true, // Feature flag set to on
"FeatureB": false, // Feature flag set to off
"FeatureC": {
"EnabledFor": [
{
"Name": "Percentage",
"Parameters": {
"Value": 50
}
}
]
448     

}
}

Feature flag repository


To use feature flags effectively, you need to externalize all the feature flags used in an application.
This approach allows you to change feature flag states without modifying and redeploying the application itself.
Azure App Configuration is designed to be a centralized repository for feature flags.
You can use it to define different kinds of feature flags and manipulate their states quickly and confidently.
You can then use the App Configuration libraries for various programming language frameworks to easily
access these feature flags from your application.

Integrate Azure Key Vault with Azure Pipelines


Applications contain many secrets, such as:
●● Connection strings.
●● Passwords.
●● Certificates.
●● Tokens.
If leaked to unauthorized users, these secrets can lead to a severe security breach, resulting in severe damage to the organization's reputation and compliance issues with different governing bodies.
Azure Key Vault allows you to manage your organization's secrets and certificates in a centralized repository.
The secrets and keys are further protected by Hardware Security Modules (HSMs).
It also provides versioning of secrets, full traceability, and efficient permission management with access
policies.
For more information on Azure Key Vault, visit What is Azure Key Vault35.
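As a hedged sketch of the moving parts, the following Azure CLI commands create a vault, store a secret, and let a pipeline's service principal read it. The vault name, secret, and application ID placeholder are hypothetical examples, and the last command assumes the vault uses access policies rather than Azure RBAC for data-plane authorization.

# A minimal sketch; the vault, secret, and service principal are hypothetical.
az keyvault create --name MyPipelineVault --resource-group MyRG --location westeurope
az keyvault secret set --vault-name MyPipelineVault --name SqlPassword --value <secret-value>

# Allow the pipeline's service principal to read secrets, and nothing more.
az keyvault set-policy --name MyPipelineVault --spn <app-id> --secret-permissions get list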

Manage secrets, tokens and certificates


Azure Key Vault helps solve the following problems:
●● Secrets management - Azure Key Vault can be used to store securely and tightly control access to
tokens, passwords, certificates, API keys, and other secrets.
●● Key management - Azure Key Vault can also be used as a key management solution. Azure Key Vault
makes it easy to create and control the encryption keys used to encrypt your data.
●● Certificate management - Azure Key Vault is also a service that lets you easily provision, manage,
and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for
use with Azure. And your internal connected resources.

35 https://docs.microsoft.com/azure/key-vault/key-vault-overview
●● Store secrets backed by hardware security modules - The secrets and keys can be protected by software or by FIPS 140-2 Level 2 validated HSMs.

Why use Azure Key Vault?

Centralize application secrets


Centralizing the storage of application secrets in Azure Key Vault allows you to control their distribution.
Key Vault dramatically reduces the chances that secrets may be accidentally leaked.
When using Key Vault, application developers no longer need to store security information in their
applications. It eliminates the need to make this information part of the code.
For example, an application may need to connect to a database. Instead of storing the connection string in the app's code, you can store it securely in Key Vault.
Your applications can securely access the information they need by using URIs that allow them to retrieve
specific versions of a secret after the application's key or secret is stored in Azure Key Vault.
It happens without having to write custom code to protect any of the secret information.

Securely store secrets and keys


Secrets and keys are safeguarded by Azure, using industry-standard algorithms, key lengths, and hardware security modules (HSMs).
The HSMs used are Federal Information Processing Standards (FIPS) 140-2 Level 2 validated.
Access to a key vault requires proper authentication and authorization before a caller (user or application)
can get access.
Authentication establishes the identity of the caller, while authorization determines the operations that
they can do.
Authentication is done via Azure Active Directory. Authorization may be done via role-based access
control (RBAC) or Key Vault access policy.
RBAC is used when dealing with the management of the vaults, and a key vault access policy is used
when attempting to access data stored in a vault.
Azure Key Vaults may be either software- or hardware-HSM protected.
You can import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary when you require added assurance.
Microsoft uses Thales hardware security modules. You can use Thales tools to move a key from your HSM
to Azure Key Vault.
Finally, Azure Key Vault is designed so that Microsoft doesn't see or extract your data.

Monitor access and use


Once you've created a couple of Key Vaults, you'll want to monitor how and when your keys and secrets
are accessed.
You can do it by enabling logging for Key Vault. You can configure Azure Key Vault to:
●● Archive to a storage account.
●● Stream to an event hub.
●● Send the logs to Log Analytics.
You have control over your logs, and you may secure them by restricting access, and you may also delete
logs that you no longer need.
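A hedged sketch of wiring Key Vault audit logs to a Log Analytics workspace through a diagnostic setting; the setting name and the two resource IDs are hypothetical placeholders.

# A minimal sketch; the name and resource IDs are hypothetical placeholders.
az monitor diagnostic-settings create --name kv-audit \
  --resource <key-vault-resource-id> \
  --logs '[{"category": "AuditEvent", "enabled": true}]' \
  --workspace <log-analytics-workspace-id>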

Simplified administration of application secrets


When storing valuable data, you must take several steps. Security information must be secured. It must
follow a lifecycle. It must be highly available.
Azure Key Vault simplifies it by:
●● Removing the need for in-house knowledge of Hardware Security Modules.
●● Scaling up on short notice to meet your organization's usage spikes.
●● Replicating the contents of your Key Vault within a region and to a secondary region. It ensures high
availability and takes away the need for any action from the administrator to trigger the failover.
●● Providing standard Azure administration options via the portal, Azure CLI and PowerShell.
●● Automating specific tasks on certificates that you purchase from Public CAs, such as enrollment and
renewal.
Also, Azure Key Vaults allow you to segregate application secrets.
Applications may access only the vaults they're permitted to use, and they can only do specific operations.
You can create an Azure Key Vault per application and restrict the secrets stored in a Key Vault to a
particular application and team of developers.

Integrate with other Azure services


As a secure store in Azure, Key Vault has been used to simplify scenarios like Azure Disk Encryption, the Always Encrypted functionality in SQL Server and Azure SQL Database, and Azure web apps.
Key Vault itself can integrate with storage accounts, Event Hubs, and log analytics.
Examine DevOps inner and outer loop


While you can store configuration and secrets together, it violates our separation of concern principle, so
the recommendation is to use a different store for persisting secrets.
It allows a secure channel for sensitive configuration data, such as ConnectionStrings.
It enables the operations team to have credentials, certificates, and tokens in one repository and minimizes
the security risk if the Configuration Store gets compromised.
The following diagram shows how these roles play together in a DevOps inner and outer loop.
The inner loop is focused on the developer teams iterating over their solution development; they consume the configuration published by the Outer Loop.
The Ops Engineer governs the Configuration management.
They push changes into Azure KeyVault and Kubernetes that are further isolated per environment.

Summary
This module explored ways to rethink application configuration data and the separation of concerns
method. It helped you understand configuration patterns and how to integrate Azure Key Vault with
Azure Pipelines.
You learned how to:
●● Rethink application configuration data.
●● Understand separation of concerns.
●● Integrate Azure Key Vault with Azure Pipelines.
●● Manage secrets, tokens, and certificates.
Learn more
●● Use Azure Key Vault secrets in Azure Pipelines - Azure Pipelines | Microsoft Docs36.
●● Define variables - Azure Pipelines | Microsoft Docs37.
●● Integrate Azure App Configuration using a continuous integration and delivery pipeline |
Microsoft Docs38.

36 https://docs.microsoft.com/azure/devops/pipelines/release/azure-key-vault
37 https://docs.microsoft.com/azure/devops/pipelines/process/variables
38 https://docs.microsoft.com/azure/azure-app-configuration/integrate-ci-cd-pipeline
Labs
Lab 11: Configuring pipelines as code with YAML
Lab overview
Many teams prefer to define their build and release pipelines using YAML. This allows them to access the
same pipeline features as those using the visual designer, but with a markup file that can be managed
like any other source file. YAML build definitions can be added to a project by simply adding the corresponding files to the root of the repository. Azure DevOps also provides default templates for popular
project types, as well as a YAML designer to simplify the process of defining build and release tasks.

Objectives
After you complete this lab, you will be able to:
●● configure CI/CD pipelines as code with YAML in Azure DevOps

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions39

Lab 12: Integrating Azure Key Vault with Azure


DevOps
Lab overview
Azure Key Vault provides secure storage and management of sensitive data, such as keys, passwords, and
certificates. Azure Key Vault includes support for hardware security modules, as well as a range of
encryption algorithms and key lengths. By using Azure Key Vault, you can minimize the possibility of
disclosing sensitive data through source code, which is a common mistake made by developers. Access
to Azure Key Vault requires proper authentication and authorization, supporting fine-grained permissions
to its content.
In this lab, you will see how you can integrate Azure Key Vault with an Azure DevOps pipeline by using
the following steps:
●● create an Azure Key vault to store a MySQL server password as a secret.
●● create an Azure service principal to provide access to secrets in the Azure Key vault.
●● configure permissions to allow the service principal to read the secret.

39 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
●● configure pipeline to retrieve the password from the Azure Key vault and pass it on to subsequent
tasks.

Objectives
After you complete this lab, you will be able to:
●● Create an Azure Active Directory (Azure AD) service principal.
●● Create an Azure key vault.
●● Track pull requests through the Azure DevOps pipeline.

Lab duration
●● Estimated time: 40 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions40

Lab 13: Setting up and running functional tests


Lab overview
Selenium41 is a portable open source software-testing framework for web applications. It has the capability to operate on almost every operating system. It supports all modern browsers and multiple languages, including .NET (C#) and Java.
In this lab, you will learn how to execute Selenium test cases on a C# web application, as part of the
Azure DevOps Release pipeline.

Objectives
After you complete this lab, you will be able to:
●● Configure a self-hosted Azure DevOps agent
●● Configure release pipeline
●● Trigger build and release
●● Run tests in Chrome and Firefox

Lab duration
●● Estimated time: 60 minutes

40 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
41 http://www.seleniumhq.org/

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions42

42 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which of the following choices is a job type you can run?
†† Multi-configuration.
†† Multi-job.
†† Multi-pool.

Multiple choice
Which of the following choices is how many deployment jobs can be run concurrently by a single agent?
†† 3
†† 5
†† 1

Multiple choice
Which of the following choices describes the job type correctly?
†† None: Tasks will run on a single agent or more agents in parallel.
†† Multi-configuration: Run the same set of tasks with a single configuration per agent.
†† Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents.

Multiple choice
Which of the following choices isn't a technology or tool for configuring a target server during the deploy-
ment?
†† PowerShell Desired State Configuration (DSC).
†† Chef.
†† Azure App Configuration.

Multiple choice
Which of the following choices do you need to create when a pipeline needs access to resources?
†† Service Hook.
†† Service Connection.
†† Agent Pool.

Multiple choice
Which of the following choices is recommended to authenticate to Azure with Service Connections?
†† User and Password.
†† Personal Access Token.
†† Service Principal.

Multiple choice
Which of the following choices is correct about Azure Pipelines?
†† Azure Pipelines allow creating custom build/release tasks.
†† Azure Pipelines only allow native tasks or extensions from the marketplace.
†† Azure Pipelines automatically create tasks based on PowerShell.

Multiple choice
Which of the following choices should you create to store values that you want to make available across
multiple builds and release pipelines?
†† Task Group.
†† Variable Group.
†† Deployment Group.

Multiple choice
Which of the following choices isn't an advantage of creating your own task?
†† You can safely and efficiently distribute across your organization.
†† You can use and reuse a secure endpoint to a target server.
†† Users can see the implementation details.

Multiple choice
Which of the following choices allows automatic collection of health signals from external services and
promotes the release when all the signals are successful?
†† Release gates.
†† Events.
†† Service Hooks.

Multiple choice
Which of the following choices are raised when certain actions occur, like when a release is started or a build
completed?
†† Service Hooks.
†† Events.
†† Release gates.

Multiple choice
Which of the following choices are usually emails you receive when an action occurs to which you're
subscribed?
†† Notifications.
†† Service Hooks.
†† Events.

Multiple choice
Which of the following roles is responsible for generating and maintaining the life cycle of configuration
values?
†† Configuration Custodian.
†† Configuration Consumer.
†† Developer.

Multiple choice
Which of the following choices enables applications to have no direct access to the secrets, which helps
improve the security & control?
†† Library.
†† Azure Key Vault.
†† Azure Pipelines.

Multiple choice
Which of the following choices isn't a benefit of Azure Key Vault?
†† Code secure files.
†† Certificate management.
†† Secrets management.

Multiple choice
Which of the following level choices do you need to connect your identity provider to GitHub to use SSO?
†† At the organization level.
†† At the project level.
†† At the repository level.

Multiple choice
Applications are registered with an Azure AD tenant within Azure Active Directory. Registering an applica-
tion creates an identity configuration. Which of the following choices isn't correct about who can use it?
†† Accounts in the same organizational directory.
†† Azure DevOps groups.
†† Accounts in any organizational directory and Microsoft Accounts (personal).

Multiple choice
Which of the following choices isn't a type of managed identities?
†† Organization-assigned.
†† System-assigned.
†† User-assigned.

Multiple choice
Which of the following choices isn't a scenario that App Configuration makes it easier to implement?
†† Centralize management and distribution of hierarchical configuration data for different environments
and geographies.
†† Dynamically change application settings without the need to redeploy or restart an application.
†† Create, manage and centralize packages as a Package Management.

Multiple choice
Which of the following choices isn't a general approach to naming keys used for configuration data?
†† Cross-key.
†† Flat.
†† Hierarchical.

Multiple choice
Which of the following choices isn't one of two components working in concert for effective implementation
of feature management?
†† An application that makes use of feature flags.
†† A monolith application that needs a redesign to microservices.
†† A separate repository that stores the feature flags and their current states.

Answers
Multiple choice
Which of the following choices is a job type you can run?
■■ Multi-configuration.
†† Multi-job.
†† Multi-pool.
Explanation
There are three different types of jobs you can run. Multi-configuration, Multi-agent, and None (Tasks will
run on a single agent).
Multiple choice
Which of the following choices is how many deployment jobs can be run concurrently by a single agent?
†† 3
†† 5
■■ 1
Explanation
The agent can only execute one job at the same time.
Multiple choice
Which of the following choices describes the job type correctly?
†† None: Tasks will run on a single agent or more agents in parallel.
†† Multi-configuration: Run the same set of tasks with a single configuration per agent.
■■ Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents.
Explanation
It's Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents.
Multiple choice
Which of the following choices isn't a technology or tool for configuring a target server during the
deployment?
†† PowerShell Desired State Configuration (DSC).
†† Chef.
■■ Azure App Configuration.
Explanation
Azure App Configuration provides a service to manage application settings and feature flags centrally.
When you want to configure the server, you should look at technologies like PowerShell Desired State
Configuration(DSC), or use tools like Puppet and Chef.

Multiple choice
Which of the following choices do you need to create when a pipeline needs access to resources?
†† Service Hook.
■■ Service Connection.
†† Agent Pool.
Explanation
When a pipeline needs access to resources, you will often need to provision service connections.
Multiple choice
Which of the following choices is recommended to authenticate to Azure with Service Connections?
†† User and Password.
†† Personal Access Token.
■■ Service Principal.
Explanation
The Service Principal is a type of service account that only has permissions in the specific subscription and
resource group. This makes it a very safe way to connect from the pipeline.
Multiple choice
Which of the following choices is correct about Azure Pipelines?
■■ Azure Pipelines allow creating custom build/release tasks.
†† Azure Pipelines only allow native tasks or extensions from the marketplace.
†† Azure Pipelines automatically create tasks based on PowerShell.
Explanation
Azure Pipelines allows creating custom build/release tasks.
Multiple choice
Which of the following choices should you create to store values that you want to make available across
multiple builds and release pipelines?
†† Task Group.
■■ Variable Group.
†† Deployment Group.
Explanation
A variable group is used to store values that you want to make available across multiple builds and release
pipelines.
Multiple choice
Which of the following choices isn't an advantage of creating your own task?
†† You can safely and efficiently distribute across your organization.
†† You can use and reuse a secure endpoint to a target server.
■■ Users can see the implementation details.
Explanation
Users do not see implementation details.

Multiple choice
Which of the following choices allows automatic collection of health signals from external services and
promotes the release when all the signals are successful?
■■ Release gates.
†† Events.
†† Service Hooks.
Explanation
Release gates allow automatic collection of health signals from external services and then promote the
release when all the signals are successful at the same time or stop the deployment on timeout.
Multiple choice
Which of the following choices are raised when certain actions occur, like when a release is started or a
build completed?
†† Service Hooks.
■■ Events.
†† Release gates.
Explanation
Events are raised when certain actions occur, like when a release is started or a build completed.
Multiple choice
Which of the following choices are usually emails you receive when an action occurs to which you're
subscribed?
■■ Notifications.
†† Service Hooks.
†† Events.
Explanation
Notifications are usually emails that you receive when an event occurs to which you are subscribed.
Multiple choice
Which of the following roles is responsible for generating and maintaining the life cycle of configuration
values?
■■ Configuration Custodian.
†† Configuration Consumer.
†† Developer.
Explanation
It's the Configuration Custodian. It includes CRUD on keys, ensuring the security of secrets, regeneration of
keys and tokens, defining configuration settings such as Log levels for each environment.

Multiple choice
Which of the following choices enables applications to have no direct access to the secrets, which helps
improve the security & control?
†† Library.
■■ Azure Key Vault.
†† Azure Pipelines.
Explanation
Azure Key Vault allows you to manage your organization's secrets and certificates in a centralized reposito-
ry.
Multiple choice
Which of the following choices isn't a benefit of Azure Key Vault?
■■ Code secure files.
†† Certificate management.
†† Secrets management.
Explanation
Azure Key Vault helps with Secrets management, Key management, Certificate management, and store
secrets backed by hardware security modules.
Multiple choice
Which of the following level choices do you need to connect your identity provider to GitHub to use
SSO?
■■ At the organization level.
†† At the project level.
†† At the repository level.
Explanation
To use SSO, you need to connect your identity provider to GitHub at the organization level.
Multiple choice
Applications are registered with an Azure AD tenant within Azure Active Directory. Registering an applica-
tion creates an identity configuration. Which of the following choices isn't correct about who can use it?
†† Accounts in the same organizational directory.
■■ Azure DevOps groups.
†† Accounts in any organizational directory and Microsoft Accounts (personal).
Explanation
The accounts that can use it are: accounts in the same organizational directory only; accounts in any
organizational directory; accounts in any organizational directory and Microsoft Accounts (personal); and
Microsoft Accounts (personal accounts only).

Multiple choice
Which of the following choices isn't a type of managed identities?
■■ Organization-assigned.
†† System-assigned.
†† User-assigned.
Explanation
There are two types of managed identities. System-assigned and User-assigned.
Multiple choice
Which of the following choices isn't a scenario that App Configuration makes it easier to implement?
†† Centralize management and distribution of hierarchical configuration data for different environments
and geographies.
†† Dynamically change application settings without the need to redeploy or restart an application.
■■ Create, manage and centralize packages as a Package Management.
Explanation
App Configuration makes it easier to implement the following scenarios: centralize management and
distribution of hierarchical configuration data for different environments and geographies, dynamically
change application settings without the need to redeploy or restart an application, and control feature
availability in real time.
Multiple choice
Which of the following choices isn't a general approach to naming keys used for configuration data?
■■ Cross-key.
†† Flat.
†† Hierarchical.
Explanation
There are two general approaches, Flat and Hierarchical.
Multiple choice
Which of the following choices isn't one of two components working in concert for effective implementa-
tion of feature management?
†† An application that makes use of feature flags.
■■ A monolith application that needs a redesign to microservices.
†† A separate repository that stores the feature flags and their current states.
Explanation
Effective implementation of feature management consists of at least two components working in concert, an
application that uses feature flags and a separate repository that stores the feature flags and their current
states.
Module 6 Manage infrastructure as code using
Azure and DSC

Explore infrastructure as code and configuration management

Introduction
Infrastructure as code (IaC) doesn't quite trip off the tongue, and its meaning isn't always straightforward.
But IaC has been with us since the beginning of DevOps—and some experts say DevOps wouldn't be
possible without it.
As the name suggests, infrastructure as code is the concept of managing your operations environment
like you do applications or other code for general release.
Rather than manually making configuration changes or using one-off scripts to make infrastructure
changes, the operations infrastructure is managed instead using the same rules and strictures that govern
code development—particularly when new server instances are spun up.
That means that the core best practices of DevOps—like version control, virtualized tests, and continuous
monitoring—are applied to the underlying code that governs the creation and management of your
infrastructure.
In other words, your infrastructure is treated the same way that any other code would be.
The elasticity of the cloud paradigm and the disposability of cloud machines can only be fully exploited
by applying the principles of Infrastructure as Code to all your infrastructure.
This module describes key concepts of infrastructure as code and of environment deployment creation
and configuration. You'll also learn about imperative, declarative, and idempotent configuration and how
each applies to your company.

Learning objectives
After completing this module, students and professionals can:
●● Understand how to deploy your environment.
●● Plan your environment configuration.
●● Choose between imperative versus declarative configuration.
●● Explain idempotent configuration.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore environment deployment


If you've ever received a middle-of-the-night emergency support call because of a crashed server, you
know the pain of searching through multiple spreadsheets, or even your memory, to recall the manual
steps of setting up a new machine.
Then there's also the difficulty of keeping the development and production environments consistent.
An easier way to remove the possibility of human error when initializing machines is to use Infrastructure
as Code.

Manual deployment versus infrastructure as code


A common analogy for understanding the differences between manual deployment and infrastructure as
code is the distinction between owning pets and owning cattle.
When you have pets, you name each one and regard them as individuals; if something terrible happens
to one of your pets, you're inclined to care a lot.
If you have a herd of cattle, you might still name them, but you regard them as part of the herd.
In infrastructure terms, there might be severe implications with a manual deployment approach if a single
machine crashes and you need to replace it (pets).
If you adopt an infrastructure as code approach, you can more easily provision another machine without
adversely impacting your entire infrastructure (cattle) if a single machine goes down.

Implementing infrastructure as code


With infrastructure as code, you capture your environment (or environments) in a text file (script or
definition).
Your file might include any networks, servers, and other compute resources.
You can check the script or definition file into version control and then use it as the source for updating
existing environments or creating new ones.
For example, you can add a new server by editing the text file and running the release pipeline rather
than remoting into the environment and manually provisioning a new server.
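A minimal sketch of that workflow (the template file name and resource group are placeholder assumptions) looks like this:

# The environment definition is an ordinary file under version control...
git add azuredeploy.json
git commit -m "Add a second web server to the environment"

# ...and it's applied with a single command (typically from a release
# pipeline) instead of remoting into machines to provision them by hand.
az deployment group create \
  --resource-group <rg-name> \
  --template-file azuredeploy.json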

The following table lists the significant differences between manual deployment and infrastructure as
code.

Manual deployment | Infrastructure as code
Snowflake servers. | A consistent server between environments.
Deployment steps vary by environment. | Environments are created or scaled easily.
More verification steps and more elaborate manual processes. | Fully automated creation of environment updates.
Increased documentation to account for differences. | Transition to immutable infrastructure.
Deployment on weekends to allow time to recover from errors. | Use blue/green deployments.
Slower release cadence to minimize pain and long weekends. | Treat servers as cattle, not pets.

Benefits of infrastructure as code


The following list shows the benefits of infrastructure as code:
●● Promotes auditing by making it easier to trace what was deployed, when, and how. (In other words, it
improves traceability.)
●● Provides consistent environments from release to release.
●● Greater consistency across development, test, and production environments.
●● Automates scale-up and scale-out processes.
●● Allows configurations to be version-controlled.
●● Provides code review and unit-testing capabilities to help manage infrastructure changes.
●● Uses immutable service processes, meaning that if a change is needed to an environment, a new
service is deployed and the old one is taken down; it isn't updated.
●● Allows blue/green deployments. This release methodology minimizes downtime, where two identical
environments exist, one live and the other not. Updates are applied to the environment that isn't live.
When testing is verified and complete, it's swapped with the live environment: it becomes the new live
environment, and the previous live environment is retired. This methodology is also referred to as A/B
deployments.
●● Treats infrastructure as a flexible resource that can be provisioned, de-provisioned, and reprovisioned
as and when needed.

Examine environment configuration


Configuration management refers to automated configuration management, typically in version-con-
trolled scripts, for an application and all the environments needed to support it.
Configuration management means lighter-weight, executable configurations that allow us to have config-
uration and environments as code.
For example, adding a new port to a Firewall could be done by editing a text file and running the release
pipeline, not by remoting into the environment and manually adding the port.
Note: The term configuration as code can also be used to mean configuration management. However, it
isn't used as widely, and in some cases, infrastructure as code is used to describe both provisioning and
configuring machines. The term infrastructure as code is also sometimes used to include configuration as
code, but not vice versa.

Manual configuration versus configuration as code


Manually managing the configuration of a single application and environment can be challenging.
The challenges are even more significant for managing multiple applications and environments across
multiple servers.
Automated configuration, or treating configuration as code, can help with some of the manual configura-
tion difficulties.
The following table lists the significant differences between manual configuration and configuration as
code.

Manual configuration | Configuration as code
Configuration bugs are challenging to identify. | Bugs are easily reproducible.
Error-prone. | Consistent configuration.
More verification steps and more elaborate manual processes. | Increase deployment cadence to reduce the amount of incremental change.
Increased documentation. | Treat environment and configuration as executable documentation.
Deployment on weekends to allow time to recover from errors. |
Slower release cadence to minimize the requirement for long weekends. |

Benefits of configuration management


The following list shows the benefits of configuration management:
●● Bugs are more easily reproduced, which helps auditing and improves traceability.
●● Provides consistency across environments such as dev, test, and release.
●● Increased deployment cadence.
●● Less documentation needs to be written and maintained, as all configuration is available in scripts.
●● Enables automated scale-up and scale-out.
●● Allows version-controlled configuration.
●● Helps detect and correct configuration drift.
●● Provides code-review and unit-testing capabilities to help manage infrastructure changes.
●● Treats infrastructure as a flexible resource.
●● Promotes automation.

Understand imperative versus declarative configuration

There are a few different approaches that you can adopt to implement Infrastructure as Code and
Configuration as Code.

Two of the main methods of approach are:


●● Declarative (functional). The declarative approach states what the final state should be. When run, the
script or definition will initialize or configure the machine to have the declared final state, without
defining how that final state should be achieved.

●● Imperative (procedural). In the imperative approach, the script states the how for the final state of the
machine by executing the steps to get to the finished state. It defines what the final state needs to be,
but also includes how to achieve that final state. It can also include coding concepts such as for,
if-then, loops, and matrices. (A sketch contrasting the two approaches follows this list.)
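As a minimal sketch of the contrast (resource names are placeholders, and the template file is assumed to exist), the imperative script spells out each step, while the declarative command hands Resource Manager a description of the final state:

# Imperative: state *how* to reach the final state, one step at a time.
az group create --name app-rg --location westus2
az storage account create --name appstorage123 --resource-group app-rg
az network vnet create --name app-vnet --resource-group app-rg

# Declarative: state *what* the final state is. azuredeploy.json describes
# the storage account and network; Resource Manager works out the steps.
az deployment group create --resource-group app-rg --template-file azuredeploy.json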

Best practices
The declarative approach abstracts away the methodology of how a state is achieved. As such, it can be
easier to read and understand what is being done.
It also makes it easier to write and define. Declarative approaches also separate the final desired state
and the coding required to achieve that state.
So, it doesn't force you to use a particular approach, allowing for optimization.
A declarative approach would generally be the preferred option where ease of use is the primary goal.
Azure Resource Manager template files are an example of a declarative automation approach.
An imperative approach may have some advantages in complex scenarios where changes in the environ-
ment occur relatively frequently, which need to be accounted for in your code.
There's no absolute on which is the best approach to take, and individual tools may be used in either
declarative or imperative forms. The best approach for you to take will depend on your needs.

Understand idempotent configuration


Idempotence is a mathematical term that can be used in Infrastructure as Code and Configuration as
Code: you can apply one or more operations against a resource and always get the same outcome.
For example, running a script on a system should have the same outcome no matter how many times
you execute the script. It shouldn't error out, and it should produce the same result irrespective of the
environment's starting state.

In essence, if you apply a deployment to a set of resources 1,000 times, you should end up with the same
result after each application of the script or template.

You can achieve idempotency by:


●● Automatically configuring and reconfiguring an existing set of resources.

●● Discarding the existing resources and recreating a new environment.


When defining Infrastructure as Code and Configuration as Code, as a best practice, build the scripts and
templates in such a way as to embrace idempotency.
It's essential when working with cloud services because resources and applications can be scaled in and
out regularly. New instances of services can be started up to provide service elasticity.
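For example, a minimal sketch (names are placeholders) that relies on the idempotent behavior of common Azure CLI operations:

# Running this script once or many times leaves the environment in the
# same state: 'az group create' succeeds even when the group already
# exists, and redeploying an unchanged template changes nothing.
az group create --name app-rg --location westus2
az deployment group create \
  --resource-group app-rg \
  --template-file azuredeploy.json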
Note: You can read more about idempotence at Idempotency for Windows Azure Message Queues1.

Summary
This module described key concepts of infrastructure as code and of environment deployment creation
and configuration, and explored imperative, declarative, and idempotent configuration and how each
applies to your company.
You learned how to describe the benefits and usage of:
●● Understand how to deploy your environment.
●● Plan your environment configuration.
●● Choose between imperative versus declarative configuration.
●● Explain idempotent configuration.

Learn more
●● Create target environment - Azure Pipelines | Microsoft Docs2.

1 https://www.wintellect.com/idempotency-for-windows-azure-message-queues/
2 https://docs.microsoft.com/azure/devops/pipelines/process/environments

●● Integrate DevTest Labs environments into Azure Pipelines - Azure DevTest Labs | Microsoft
Docs3.
●● What is Infrastructure as Code? - Azure DevOps | Microsoft Docs4.
●● Repeatable Infrastructure - Azure Architecture Center | Microsoft Docs5.
●● Infrastructure as code | Microsoft Docs6.

3 https://docs.microsoft.com/azure/devtest-labs/integrate-environments-devops-pipeline
4 https://docs.microsoft.com/devops/deliver/what-is-infrastructure-as-code
5 https://docs.microsoft.com/azure/architecture/framework/devops/automation-infrastructure
6 https://docs.microsoft.com/dotnet/architecture/cloud-native/infrastructure-as-code

Create Azure resources using Azure Resource Manager templates

Introduction
This module explores Azure Resource Manager templates and their components, and details how to
manage dependencies, secrets, and modularized templates.

Learning objectives
After completing this module, students and professionals can:
●● Create Azure resources using Azure Resource Manager templates.
●● Understand Azure Resource Manager templates and template components.
●● Manage dependencies and secrets in templates.
●● Organize and modularize templates.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Why use Azure Resource Manager templates?


Using Resource Manager templates will make your deployments faster and more repeatable.
For example, you no longer have to create a VM in the portal, wait for it to finish, and then create the
next VM. Resource Manager takes care of the entire deployment for you.
Here are some other template benefits to consider:
●● Templates improve consistency. Resource Manager templates provide a common language for you
and others to describe your deployments. Regardless of the tool or SDK that you use to deploy the
template, the template's structure, format, and expressions remain the same.
●● Templates help express complex deployments. Templates enable you to deploy multiple resources in
the correct order. For example, you wouldn't want to deploy a VM before creating an operating
system (OS) disk or network interface. Resource Manager maps out each resource and its dependent
resources and creates dependent resources first. Dependency mapping helps ensure that the deploy-
ment is carried out in the correct order.
●● Templates reduce manual, error-prone tasks. Manually creating and connecting resources can be
time-consuming, and it's easy to make mistakes. The Resource Manager ensures that the deployment
happens the same way every time.
●● Templates are code. Templates express your requirements through code. Think of a template as a type
of Infrastructure as Code that can be shared, tested, and versioned like any other piece of software.
Also, because templates are code, you can create a record that you can follow. The template code
documents the deployment. Also, most users maintain their templates under revision control, such as
Git. The revision history also records how the template (and your deployment) has evolved as you change
the template.
●● Templates promote reuse. Your template can contain parameters that are filled in when the template
runs. A parameter can define a username or password, a domain name, and other necessary items.
Template parameters also enable you to create multiple versions of your infrastructure, such as
staging and production, while still using the same template.
●● Templates are linkable. You can link Resource Manager templates together to make the templates
themselves modular. You can write small templates that define a solution and then combine them to
create a complete system.
Azure provides many quickstart templates that you might use as a base for your work.
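For example, a hedged sketch of deploying a quickstart template directly by its public URI (the exact path below is illustrative; browse https://github.com/Azure/azure-quickstart-templates for real ones):

# Deploy a quickstart template from its raw URI into an existing group.
az deployment group create \
  --resource-group <rg-name> \
  --template-uri https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/quickstarts/microsoft.storage/storage-account-create/azuredeploy.json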

Explore template components


Azure Resource Manager templates are written in JSON, which allows you to express data stored as an
object (such as a virtual machine) in text.
A JSON document is essentially a collection of key-value pairs. Each key is a string, and its value can be:
●● A string.
●● A number.
●● A Boolean expression.
●● A list of values.
●● An object (which is a collection of other key-value pairs).
A Resource Manager template can contain sections that are expressed using JSON notation but aren't
related to the JSON language itself:
{
"$schema": "http://schema.management.azure.com/schemas/2015-01-01/
deploymentTemplate.json#",
"contentVersion": "",
"parameters": { },
"variables": { },
"functions": [ ],
"resources": [ ],
"outputs": { }
}

Let's review each of these sections in a little more detail.

Parameters
This section is where you specify which values are configurable when the template runs.
For example, you might allow template users to set a username, password, or domain name.
Here's an example that illustrates two parameters: one for a virtual machine's (VMs) username and one
for its password:

"parameters": {
"adminUsername": {
"type": "string",
"metadata": {
"description": "Username for the Virtual Machine."
}
},
"adminPassword": {
"type": "securestring",
"metadata": {
"description": "Password for the Virtual Machine."
}
}
}

Variables
This section is where you define values that are used throughout the template.
Variables can help make your templates easier to maintain.
For example, you might define a storage account name one time as a variable and then use that variable
throughout the template.
If the storage account name changes, you need only update the variable once.
Here's an example that illustrates a few variables that describe networking features for a VM:
"variables": {
"nicName": "myVMNic",
"addressPrefix": "10.0.0.0/16",
"subnetName": "Subnet",
"subnetPrefix": "10.0.0.0/24",
"publicIPAddressName": "myPublicIP",
"virtualNetworkName": "MyVNET"
}

Functions
This section is where you define procedures that you don't want to repeat throughout the template.
Like variables, functions can help make your templates easier to maintain.
Here's an example that creates a function for creating a unique name to use when creating resources that
have globally unique naming requirements:
"functions": [
{
"namespace": "contoso",
"members": {
"uniqueName": {
"parameters": [
{
"name": "namePrefix",
"type": "string"
}
],
"output": {
"type": "string",
"value": "[concat(toLower(parameters('namePrefix')), uniqueS-
tring(resourceGroup().id))]"
}
}
}
}
],

Resources
This section is where you define the Azure resources that make up your deployment.
Here's an example that creates a public IP address resource:
{
"type": "Microsoft.Network/publicIPAddresses",
"name": "[variables('publicIPAddressName')]",
"location": "[parameters('location')]",
"apiVersion": "2018-08-01",
"properties": {
"publicIPAllocationMethod": "Dynamic",
"dnsSettings": {
"domainNameLabel": "[parameters('dnsLabelPrefix')]"
}
}
}

Here, the type of resource is Microsoft.Network/publicIPAddresses.


The name is read from the variables section, and the location, or Azure region, is read from the parame-
ters section.
Because resource types can change over time, apiVersion refers to the version of the resource type
you want to use.
As resource types evolve, you can modify your templates to work with the latest features.

Outputs
This section is where you define any information you'd like to receive when the template runs.
For example, you might want to receive your VM's IP address or fully qualified domain name (FQDN), the
information you won't know until the deployment runs.

Here's an example that illustrates an output named hostname.


The FQDN value is read from the VM's public IP address settings:
"outputs": {
"hostname": {
"type": "string",
"value": "[reference(variables('publicIPAddressName')).dnsSettings.
fqdn]"
}
}

Manage dependencies
For any given resource, other resources might need to exist before you can deploy the resource.
For example, a Microsoft SQL Server must exist before attempting to deploy a SQL Database.
You can define this relationship by marking one resource as dependent on the other.
You define a dependency with the dependsOn element or by using the reference function.
Resource Manager evaluates the dependencies between resources and deploys them in their dependent
order.
When resources aren't dependent on each other, the Resource Manager deploys them in parallel.
You only need to define dependencies for resources that are deployed in the same template.

The dependsOn element


Within your template, the dependsOn element enables you to define one resource dependent on one or
more other resources.
Its value can be a comma-separated list of resource names.

Circular dependencies
A circular dependency is a problem with dependency sequencing, resulting in the deployment going
around in a loop and unable to continue.
As a result, the Resource Manager can't deploy the resources.
Resource Manager identifies circular dependencies during template validation.

If you receive an error stating that a circular dependency exists, evaluate your template to find whether
any dependencies are unnecessary and can be removed.
If removing dependencies doesn't resolve the issue, you can move some deployment operations into
child resources that are deployed after the resources with the circular dependency.

Modularize templates
When using Azure Resource Manager templates, it's best to modularize them by breaking them into
individual components.
The primary method for doing this is to use linked templates.
It allows you to break out the solution into targeted components and reuse those various elements
across different deployments.

Linked template
To link one template to another, add a deployment resource to your main template.
"resources": [
{
"apiVersion": "2017-05-10",
"name": "linkedTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
<link-to-external-template>
}
}
]

Nested template
You can also nest a template within the main template, use the template property, and specify the
template syntax.
It does somewhat aid modularization, but dividing up the various components can result in a sizeable
main file, as all the elements are within that single file.
"resources": [
{
"apiVersion": "2017-05-10",
"name": "nestedTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageName')]",
"apiVersion": "2015-06-15",
"location": "West US",
"properties": {
"accountType": "Standard_LRS"
}
}
]
}
}
}
]

Note: For nested templates, you can't use parameters or variables defined within the nested template
itself. You can only use parameters and variables from the main template.
The properties you provide for the deployment resource will vary based on linking to an external tem-
plate or nesting an inline template within the main template.

Deployment modes
When deploying your resources using templates, you have three options:
●● validate. This option compiles the template, validates the deployment, and ensures the template is
functional (for example, it has no circular dependencies) and syntactically correct.
●● incremental mode (default). This option only deploys whatever is defined in the template. It doesn't
remove or modify any resources that aren't defined in the template. For example, if you've deployed a
VM via a template and then renamed the VM in the template, the first VM deployed will remain after
the template is rerun.
●● complete mode. Resource Manager deletes resources that exist in the resource group but aren't
specified in the template. As a result, only resources defined in the template will be present in the
resource group after the template deploys. As a best practice, use this mode for production environments
where possible to try to achieve idempotency in your deployment templates. (See the CLI sketch after
the notes below.)
When deploying with PowerShell, to set the deployment mode use the Mode parameter, as per the
nested template example earlier in this topic.
Note: As a best practice, use one resource group per deployment.
Note: For both linked and nested templates, you can only use incremental deployment mode.
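A minimal Azure CLI sketch of the three options (the resource group and template file are placeholders):

# Validate only: compile and check the template without deploying it.
az deployment group validate --resource-group app-rg --template-file azuredeploy.json

# Incremental (default): add or update what the template defines; leave
# everything else in the resource group untouched.
az deployment group create --resource-group app-rg --template-file azuredeploy.json --mode Incremental

# Complete: additionally delete resources the template doesn't define.
az deployment group create --resource-group app-rg --template-file azuredeploy.json --mode Complete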

External template and external parameters


To link to an external template and parameter file, use templateLink and parametersLink.
When linking to a template, ensure that the Resource Manager service can access it.
For example, you can't specify a local file or a file only available on your local network.

As such, you can only provide a Uniform Resource Identifier (URI) value that includes either HTTP or
HTTPS.
One option is to place your linked template in a storage account and use the URI for that item.
You can also provide parameters inline. However, you can't use both inline parameters and a link to a
parameter file.
The following example uses the templateLink parameter:
"resources": [
{
"name": "linkedTemplate",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2018-05-01",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri":"https://linkedtemplateek1store.blob.core.windows.net/
linkedtemplates/linkedStorageAccount.json?sv=2018-03-28&sr=b&sig=dO9p7Xnbh-
Gq56BO%2BSW3o9tX7E2WUdIk%2BpF1MTK2eFfs%3D&se=2018-12-31T14%3A32%3A29Z&sp=r"
},
"parameters": {
"storageAccountName":{"value": "[variables('storageAccount-
Name')]"},
"location":{"value": "[parameters('location')]"}
}
}
},

Securing an external template


Although the linked template must be available externally, it doesn't need to be made available to the
public.
Instead, you can add your template to a private storage account accessible to only the storage account
owner, creating shared access signature (SAS) tokens to enable access during deployment.
You add that SAS token to the URI for the linked template.
Even though the token is passed in as a secure string, the linked template's URI, including the SAS token,
is logged in the deployment operations.
To limit exposure, you can also set an expiration date for the token.
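A minimal sketch of generating such a token with Azure CLI (the account, container, blob, and expiry values are placeholders, and the command assumes you're already authorized against the storage account):

# Generate a short-lived, read-only SAS token for the linked template
# blob, then append it to the blob URI used in templateLink.
az storage blob generate-sas \
  --account-name <storage-account> \
  --container-name linkedtemplates \
  --name linkedStorageAccount.json \
  --permissions r \
  --expiry 2024-12-31T23:59:00Z \
  --https-only \
  --output tsv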

Manage secrets in templates


When passing a secure value (such as a password) as a parameter during deployment, you can retrieve
the value from an Azure Key Vault.
Reference the Key Vault and secret in your parameter file.
The value is never exposed because you only reference its Key Vault ID.

The Key Vault can exist in a different subscription than the resource group you're deploying it to.

Deploy a Key Vault and secret


To create a Key Vault and secret, use either Azure CLI or PowerShell.
To access the secrets inside this Key Vault from a Resource Manager deployment, the Key Vault property
enabledForTemplateDeployment must be true.

Using Azure CLI


The following code snippet is an example of how you can deploy a Key Vault and secret by using Azure CLI:
keyVaultName='{your-unique-vault-name}'
resourceGroupName='{your-resource-group-name}'
location='centralus'
userPrincipalName='{your-email-address-associated-with-your-subscription}'

# Create a resource group
az group create --name $resourceGroupName --location $location

# Create a Key Vault
az keyvault create \
  --name $keyVaultName \
  --resource-group $resourceGroupName \
  --location $location \
  --enabled-for-template-deployment true
az keyvault set-policy --upn $userPrincipalName --name $keyVaultName --secret-permissions set delete get list

# Create a secret with the name, vmAdminPassword
password=$(openssl rand -base64 32)
echo $password
az keyvault secret set --vault-name $keyVaultName --name 'vmAdminPassword' --value $password

Enable access to the secret


Other than setting the Key Vault property enabledForTemplateDeployment to true, the user deploying
the template must have the Microsoft.KeyVault/vaults/deploy/action permission for the scope
that contains the Key Vault, including the resource group and the Key Vault itself. The Owner and
Contributor roles both grant this access.
If you create the Key Vault, you're the owner, so you inherently have permission.
However, if the Key Vault is under a different subscription, the owner of the Key Vault must grant the
access.
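For instance, a hedged sketch of granting a deploying identity Contributor access scoped to the Key Vault (all IDs are placeholders):

# Contributor includes Microsoft.KeyVault/vaults/deploy/action, so this
# lets the assignee reference the vault's secrets during deployment.
az role assignment create \
  --assignee <user-or-service-principal-id> \
  --role Contributor \
  --scope /subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<vault-name>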

Reference a secret with static ID


The Key Vault is referenced in the parameter file and not the template.
The parameter file references the secret and passes that value to the template.

The following template deploys an SQL database that includes an administrator password.
The password parameter is set to a secure string. However, the template doesn't specify where that value
comes from:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminLogin": {
"type": "string"
},
"adminPassword": {
"type": "securestring"
},
"sqlServerName": {
"type": "string"
}
},
"resources": [
{
"name": "[parameters('sqlServerName')]",
"type": "Microsoft.Sql/servers",
"apiVersion": "2015-05-01-preview",
"location": "[resourceGroup().location]",
"tags": {},
"properties": {
"administratorLogin": "[parameters('adminLogin')]",
"administratorLoginPassword": "[parameters('adminPassword')]",
"version": "12.0"
}
}
],
"outputs": {
}
}

Now you can create a parameter file for the preceding template. In the parameter file, specify a parame-
ter that matches the name of the parameter in the template.
For the parameter value, reference the secret from the Key Vault. You reference the secret by passing the
resource identifier of the Key Vault and the secret's name.
The Key Vault secret must already exist; in the following parameter file, you provide a static value for
its resource ID.
Copy this file locally, and set the subscription ID, vault name, and SQL server name:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminLogin": {
"value": "exampleadmin"
},
"adminPassword": {
"reference": {
"keyVault": {
"id": "/subscriptions/<subscription-id>/resourceGroups/
examplegroup/providers/Microsoft.KeyVault/vaults/<vault-name>"
},
"secretName": "examplesecret"
}
},
"sqlServerName": {
"value": "<your-server-name>"
}
}
}

You would need to deploy the template and pass the parameter file to the template.
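For example, a minimal sketch with Azure CLI (the file names are placeholders):

# Deploy the template, passing the parameter file that references the
# Key Vault secret; the secret value never appears on the command line.
az deployment group create \
  --resource-group examplegroup \
  --template-file sqlserver.json \
  --parameters @sqlserver.parameters.json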
For more information, see Use Azure Key Vault to pass secure parameter values during deployment7.
The same page also covers referencing a secret with a dynamic ID.

Summary
This module explored Azure Resource Manager templates and their components, and detailed how to
manage dependencies, secrets, and modularized templates.
You learned how to describe the benefits and usage of:
●● Create Azure resources using Azure Resource Manager templates.
●● Understand Azure Resource Manager templates and template components.
●● Manage dependencies and secrets in templates.

7 https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-keyvault-parameter

●● Organize and modularize templates.

Learn more
●● Connect to Microsoft Azure - Azure Pipelines | Microsoft Docs8.
●● CI/CD with Azure Pipelines and templates - Azure Resource Manager | Microsoft Docs9.
●● Security through templates - Azure Pipelines | Microsoft Docs10.

8 https://docs.microsoft.com/azure/devops/pipelines/library/connect-to-azure
9 https://docs.microsoft.com/azure/azure-resource-manager/templates/add-template-to-azure-pipelines
10 https://docs.microsoft.com/azure/devops/pipelines/security/templates

Create Azure resources by using Azure CLI


Introduction
This module explains how to use Azure CLI to create Azure resources, run templates, and work with Azure CLI commands.

Learning objectives
After completing this module, students and professionals can:
●● Create Azure resources using Azure CLI.
●● Understand and work with Azure CLI.
●● Run templates using Azure CLI.
●● Explain Azure CLI commands.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps11.
●● If you already have your organization created, use the Azure DevOps Demo Generator [https://
azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called “Parts Unlimit-
ed” using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project -
Azure DevOps12.

What is Azure CLI?


Azure CLI is a command-line program you use to connect to Azure and execute administrative com-
mands on Azure resources.
It runs on Linux, macOS, and Windows operating systems.
Instead of a web browser, it allows administrators and developers to execute their commands through a
terminal or a command-line prompt (or script).
For example, to restart a VM, you would use a command such as:
az vm restart -g MyResourceGroup -n MyVm

Azure CLI provides cross-platform command-line tools for managing Azure resources.
You can install it locally on computers running the Linux, macOS, or Windows operating systems.
You can also use Azure CLI from a browser through Azure Cloud Shell.

11 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
12 https://docs.microsoft.com/azure/devops/organizations/projects/create-project

In both cases, you can use Azure CLI interactively or through scripts:
●● Interactive. For Windows operating systems, launch a shell such as cmd.exe, or for Linux or macOS,
use Bash. Then issue the command at the shell prompt.
●● Scripted. Assemble the Azure CLI commands into a shell script using the script syntax of your chosen
shell, and then execute the script.

Work with Azure CLI


Azure CLI lets you control nearly every aspect of every Azure resource.
You can work with Azure resources such as resource groups, storage, VMs, Azure Active Directory,
containers, and machine learning.
Commands in the CLI are structured in groups and subgroups.
Each group represents a service provided by Azure, and the subgroups divide commands for these
services into logical groupings.
So, how do you find the commands you need? One way is to use the az find command.
For example, if you want to find commands that might help you manage a storage blob, you can use the
following find command:
az find blob

If you know the name of the command you want, the --help argument for that command will get you
more detailed information on the command, as well as a list of the available subcommands for a
command group.
For example, here's how you would get a list of the subgroups and commands for managing blob
storage:
az storage blob --help

Creating resources
When creating a new Azure resource, typically, there are three high-level steps:
1. Connect to your Azure subscription.
2. Create the resource.
3. Verify that creation was successful.

1. Connect
Because you're working with a local Azure CLI installation, you'll need to authenticate before you can
execute Azure commands.

You do it by using the Azure CLI login command:


az login

Azure CLI will typically launch your default browser to open the Azure sign in page.
If it doesn't work, follow the command-line instructions, and enter an authorization code in the Enter
Code13 dialog box.
After a successful sign in, you'll be connected to your Azure subscription.

2. Create
You'll often need to create a new resource group before you create a new Azure service.
So we'll use resource groups as an example to show how to create Azure resources from the Azure CLI.
The Azure CLI group create command creates a resource group.
You need to specify a name and location.
The name parameter must be unique within your subscription.
The location parameter determines where the metadata for your resource group will be stored.
You use strings like “West US,” “North Europe,” or “West India” to specify the location.
Alternatively, you can use single-word equivalents, such as “westus,” “northeurope,” or “westindia.”
The core syntax to create a resource group is:
az group create --name <name> --location <location>

3. Verify creation
For many Azure resources, Azure CLI provides a list subcommand to get resource details.
For example, the Azure CLI group list command lists your Azure resource groups.
It's helpful to verify whether resource group creation was successful:
az group list

To get more concise information, you can format the output as a simple table:
az group list --output table

If you have several items in the group list, you can filter the return values by adding a query option using,
for example, the following command:

13 https://aka.ms/devicelogin

az group list --query "[?name == '<rg name>']"

Note: You format the query using JMESPath, which is a standard query language for JSON requests.
You can learn more about this filter language at http://jmespath.org/.

Using Azure CLI in scripts


To use Azure CLI commands in scripts, you'll need to be aware of any issues around the shell or environ-
ment you use to run the script.
For example, in a bash shell, you can use the following syntax when setting variables:
variable="value"
variable=integer

If you use a PowerShell environment for running Azure CLI scripts, you'll need to use the following syntax
for variables:
$variable="value"
$variable=integer
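Putting it together, here's a minimal bash sketch (the names are placeholders) that combines shell variables with Azure CLI commands:

#!/bin/bash
# Create a resource group whose name and location are held in variables,
# then confirm it exists by querying the group list.
rgName="my-script-rg"
location="westus2"

az group create --name $rgName --location $location
az group list --query "[?name == '$rgName']" --output table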

Exercise - run templates using Azure CLI


Important: To do these steps, you need an Azure subscription. If you don't have one already, you can create
one by following the steps outlined on the Create your Azure free account today14 webpage.

Steps
In the following steps, we'll deploy the template and verify the result using Azure CLI:
1. Create a resource group to deploy your resources to by running the following command:
az group create --name <resource group name> --location <your nearest
datacenter>

Note: Check the regions available to you at Choose the Right Azure Region for You15. If you can't create
resources in the nearest region, feel free to choose another one.
2. From Cloud Shell, run the curl command to download the template you used previously from GitHub:
curl https://raw.githubusercontent.com/Microsoft/PartsUnlimited/master/Labfiles/AZ-400T05_Implementing_Application_Infrastructure/M01/azuredeploy.json > azuredeploy.json

14 https://azure.microsoft.com/free/
15 https://azure.microsoft.com/global-infrastructure/geographies

3. Validate the template by running the following command, replacing the values with your own:
az deployment group validate \
--resource-group <rgn>[sandbox resource group name]</rgn> \
--template-file azuredeploy.json \
--parameters adminUsername=$USERNAME \
--parameters adminPassword=$PASSWORD \
--parameters dnsLabelPrefix=$DNS_LABEL_PREFIX

4. Deploy the resource by running the following command, replacing the same values as earlier:
az deployment group create \
--name MyDeployment \
--resource-group <rgn>[sandbox resource group name]</rgn> \
--template-file azuredeploy.json \
--parameters adminUsername=$USERNAME \
--parameters adminPassword=$PASSWORD \
--parameters dnsLabelPrefix=$DNS_LABEL_PREFIX

5. Obtain the IP address by running the following command:


IPADDRESS=$(az vm show \
--name SimpleWinVM \
--resource-group <rgn>[sandbox resource group name]</rgn> \
--show-details \
--query [publicIps] \
--output tsv)

6. Run curl to access your web server and verify that the deployment and running of the custom script
extension were successful:
curl $IPADDRESS

You'll have the following output:

<html><body><h2>Welcome to Azure! My name is SimpleWinVM.</h2></body></html>

Note: Don't forget to delete any resources you deployed to avoid incurring extra costs from them.

Summary
This module explained how to use Azure CLI to create Azure resources, run templates, and work with
Azure CLI commands.
You learned how to describe the benefits and usage of:
●● Create Azure resources using Azure CLI.

●● Understand and work with Azure CLI.


●● Run templates using Azure CLI.
●● Explain Azure CLI commands.

Learn more
●● Azure CLI task - Azure Pipelines | Microsoft Docs16.
●● How to install the Azure CLI | Microsoft Docs17.
●● Get started with Azure Command-Line Interface (CLI) | Microsoft Docs18.

16 https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-cli
17 https://docs.microsoft.com/cli/azure/install-azure-cli
18 https://docs.microsoft.com/cli/azure/get-started-with-azure-cli

Explore Azure Automation with DevOps


Introduction
Manually executing environment provisioning and configuration management is both laborious and
error-prone.
Microsoft Azure DevOps advocates automation to reduce the probability of errors introduced through
manual execution.
Automation also delivers the added advantage of completing the work more quickly without relying on
subject matter experts.
Microsoft Azure is built to support automation from the ground up.
Azure Automation is an Azure service that provides a way for users to automate the manual, long-run-
ning, error-prone, and frequently repeated tasks commonly done in a cloud and enterprise environment.
Azure Automation saves time and increases the reliability of regular administrative tasks.
You can even schedule the tasks to be executed automatically at regular intervals.
You can automate processes using runbooks or automate configuration management by using Desired
State Configuration (DSC).
For more information about Azure Automation, review An introduction to Azure Automation19.

Azure Automation isn't the only way to automate within Azure.


You can also use open-source tools to do some of these operations.
However, the integration hooks available to Azure Automation remove much of the integration complexi-
ty you would have to manage if you did these operations manually.
Some Azure Automation capabilities are:
●● Process automation. Azure Automation provides you with the ability to automate frequent, time-con-
suming, and error-prone cloud management tasks.
●● Azure Automation State Configuration. It's an Azure service that allows you to write, manage, and
compile Windows PowerShell DSC configurations, import DSC Resources, and assign configurations to
target nodes, all in the cloud. For more information, visit Azure Automation State Configuration
Overview20.
●● Update management. Manage operating system updates for Windows and Linux computers in Azure,
on-premises environments, or other cloud providers. Get update compliance visibility across Azure,
on-premises, and for other cloud services. You can create scheduled deployments to orchestrate

19 https://azure.microsoft.com/documentation/articles/automation-intro/
20 https://docs.microsoft.com/azure/automation/automation-dsc-overview

update installations within a defined maintenance window. For more information, visit Azure Auto-
mation Update Management Deployment Plan21.
●● Start and stop virtual machines (VMs). Azure Automation provides an integrated Start/Stop VM–relat-
ed resource that enables you to start and stop VMs on user-defined schedules. It also provides
insights through Azure Log Analytics and can send emails by using action groups. For more informa-
tion, go to Start/Stop VMs during off-hours solution in Azure Automation22.
●● Integration with GitHub, Azure DevOps, Git, or Team Foundation Version Control repositories. For
more information, go to Source control integration in Azure Automation23.
●● Automate Amazon Web Services (AWS) Resources. Automate common tasks with resources in AWS
using Automation runbooks in Azure. For more information, go to Authenticate Runbooks with
Amazon Web Services24.
●● Manage Shared resources. Azure Automation consists of a set of shared resources (such as connec-
tions, credentials, modules, schedules, and variables) that make it easier to automate and configure
your environments at scale.
●● Run backups. Azure Automation allows you to run regular backups of non-database systems, such as
backing up Azure Blob Storage at certain intervals.
Azure Automation works across hybrid cloud environments in addition to Windows and Linux operating
systems.
This module describes Azure automation with Azure DevOps, using runbooks, webhooks, and PowerShell
workflows.
You'll learn how to create and manage automation for your environment.

Learning objectives
After completing this module, students and professionals can:
●● Implement automation with Azure DevOps.
●● Create and manage runbooks.
●● Create webhooks.
●● Create and run a workflow runbook and PowerShell workflows.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Create automation accounts


To start using the Microsoft Azure Automation service, you must first create an Automation account25
from within the Azure portal.

21 https://docs.microsoft.com/en-us/azure/automation/update-management/plan-deployment
22 https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management
23 https://docs.microsoft.com/azure/automation/source-control-integration
24 https://docs.microsoft.com/azure/automation/automation-config-aws-account
25 https://azure.microsoft.com/documentation/articles/automation-security-overview/

Steps to create an Azure Automation account are available on the Create an Azure Automation ac-
count26 page.
Automation accounts are like Azure Storage accounts in that they serve as a container to store automa-
tion artifacts.
These artifacts could be a container for all your runbooks, runbook executions (jobs), and the assets on
which your runbooks depend.
An Automation account gives you access to manage all Azure resources via an API. To safeguard it,
Automation account creation requires subscription-owner access.

You must be a subscription owner to create the Run As accounts that the service creates.
If you don't have the proper subscription privileges, you'll see a warning in the Azure portal.

26 https://docs.microsoft.com/azure/automation/automation-quickstart-create-account

To use Azure Automation, you'll need at least one Azure Automation account.
However, as a best practice, you should create multiple automation accounts to segregate and limit the
scope of access and minimize any risk to your organization.
For example, you might use one account for development, another for production, and another for your
on-premises environment. You can have up to 30 Automation accounts.
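
If you prefer scripting to the portal, the account can also be created with the Az PowerShell module. The following is a minimal sketch, assuming an authenticated session and placeholder resource group and account names:
# Requires the Az module and an authenticated session (Connect-AzAccount)
New-AzResourceGroup -Name 'automation-rg' -Location 'eastus'

# Create the Automation account in that resource group
New-AzAutomationAccount -ResourceGroupName 'automation-rg' `
    -Name 'contoso-automation' -Location 'eastus'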

What is a runbook?
Runbooks serve as repositories for your custom scripts and workflows.
They also typically reference Automation shared resources such as credentials, variables, connections, and
certificates.
Runbooks can also contain other runbooks, allowing you to build more complex workflows.
You can invoke and run runbooks either on-demand or according to a schedule by using Automation
Schedule assets.

Creating runbooks
When creating runbooks, you have two options. You can either:
●● Create your runbook and import it. For more information about creating or importing a runbook in
Azure Automation, go to Start a runbook in Azure Automation27.
●● Modify runbooks from the runbook gallery. It provides a rich ecosystem of runbooks that are available
for your requirements. Visit Runbook and module galleries for Azure Automation28 for more
information.

27 https://docs.microsoft.com/azure/automation/start-runbooks
28 https://docs.microsoft.com/azure/automation/automation-runbook-gallery

There's also a vibrant open-source community that creates runbooks you can apply directly to your use
cases.
You can choose from different runbook types based on your requirements and Windows PowerShell
experience.
If you prefer to work directly with Windows PowerShell code, you can use a PowerShell runbook or a
PowerShell Workflow runbook.
Using either of these, you can edit offline or with the textual editor in the Azure portal.
If you prefer to edit a runbook without being exposed to the underlying code, you can create a graphical
runbook using the Azure portal's graphic editor.
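
If you author runbooks offline, the create-and-import option above can also be scripted with Az PowerShell. The following is a hedged sketch, assuming a local script Hello.ps1 and placeholder account names:
# Import a local PowerShell script as a runbook (the runbook name is taken from the file name)
Import-AzAutomationRunbook -Path '.\Hello.ps1' -Type PowerShell `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'

# Publish the imported draft so it can be run in production
Publish-AzAutomationRunbook -Name 'Hello' `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'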

Graphical runbooks
Graphical runbooks and Graphical PowerShell Workflow runbooks are created and edited with the
graphic editor in the Azure portal.
You can export them to a file and import them into another automation account, but you can't create or
edit them with another tool.

PowerShell runbooks
PowerShell runbooks are based on Windows PowerShell. You edit the runbook code directly, using the
text editor in the Azure portal.
You can also use any offline text editor and then import the runbook into Azure Automation. PowerShell
runbooks don't use parallel processing.

PowerShell Workflow runbooks


PowerShell Workflow runbooks are text runbooks based on Windows PowerShell Workflow.
You directly edit the runbook code using the text editor in the Azure portal.
You can also use any offline text editor and import the runbook into Azure Automation.
PowerShell Workflow runbooks use parallel processing to allow for simultaneous completion of multiple
tasks.
Workflow runbooks take longer to start than PowerShell runbooks because they must be compiled before
running.

Python runbooks
Python runbooks compile under Python 2.
You can directly edit the code of the runbook using the text editor in the Azure portal, or you can use any
offline text editor and import the runbook into Azure Automation.
You can also use Python libraries. However, only Python 2 is supported at this time.
To use third-party libraries, you must first import the package into the Automation Account.
Note: You can't convert runbooks from graphical to textual type, and the other way around.

For more information on the different types of runbooks, visit Azure Automation runbook types29.

Understand automation shared resources


Azure Automation contains shared resources that are globally available to be associated with or used in a
runbook.
There are currently eight shared resources categories:
●● Schedules: It allows you to define a one-off or recurring schedule.
●● Modules: Contains Azure PowerShell modules.
●● Modules gallery: It allows you to identify and import PowerShell modules into your Azure automa-
tion account.
●● Python packages: It allows you to import Python packages by uploading .whl or .tar.gz files.
●● Credentials: It allows you to create username and password credentials.
●● Connections: It allows you to specify Azure, Azure classic certificate, or Azure Service principal
connections.
●● Certificates: It allows you to upload certificates in .cer or pfx format.
●● Variables: It allows you to define encrypted or unencrypted variables of types—for example, String,
Boolean, DateTime, Integer, or no specific type.

As a best practice, always try to create global assets to be used across your runbooks.
It will save time and reduce the number of manual edits within individual runbooks.
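
For example, a variable asset can be created once and then read from any runbook. A minimal sketch with placeholder names:
# Create a shared, unencrypted variable asset in the Automation account
New-AzAutomationVariable -Name 'AppEnvironment' -Value 'Production' -Encrypted $false `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'

# Inside a runbook, read the shared asset with the internal cmdlet
$environment = Get-AutomationVariable -Name 'AppEnvironment'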

Explore runbook gallery


Azure Automation runbooks are provided to help eliminate the time it takes to build custom solutions.
The runbooks have already been built by Microsoft and the Microsoft community.

29 https://azure.microsoft.com/documentation/articles/automation-runbook-types

You can use them with or without modification.


Also, you can import runbooks from the Azure Automation GitHub runbooks repository,
Azure Automation - Runbooks30.
Note: A new Azure PowerShell module was released in December 2018, called the Az PowerShell module.
It replaces the AzureRM PowerShell module and is now the intended PowerShell module for interacting
with Azure. This new Az module is currently supported in Azure Automation. For more general details on
the new Az PowerShell module, go to Introducing the new Azure PowerShell Az module31.

Choosing items from the runbook gallery


In the Azure portal, you can import directly from the runbook gallery using the following high-level steps:
1. Open your Automation account, and then select Process Automation > Runbooks.
2. In the runbooks pane, select Browse gallery.
3. From the runbook gallery, locate the runbook item you want, select it, and select Import.
When browsing through the runbooks in the repository, you can review the code or visualize the code.
You can also check information such as the source project and a detailed description, ratings, and
questions and answers.
For more information, see Azure Automation32.

30 https://github.com/azureautomation/runbooks
31 https://docs.microsoft.com/powershell/azure/new-azureps-module-az
32 https://github.com/azureautomation

Note: Python runbooks are also available from the Azure Automation GitHub runbooks repository.
To find them, filter by language and select Python.
Note: You can't use PowerShell to import directly from the runbook gallery.

Examine webhooks
You can automate starting a runbook either by scheduling it or by using a webhook.
A webhook allows you to start a particular runbook in Azure Automation through a single HTTPS
request.
It allows external services such as Azure DevOps, GitHub, or custom applications to start runbooks
without implementing more complex solutions using the Azure Automation API.
More information about webhooks is available at Starting an Azure Automation runbook with a
webhook33.

33 https://docs.microsoft.com/azure/automation/automation-webhooks

Create a webhook
You create a webhook linked to a runbook using the following steps:
1. In the Azure portal, open the runbook for which you want to create the webhook.
2. In the runbook pane, under Resources, select Webhooks, and then choose + Add webhook.
3. Select Create new webhook.
4. In the Create new webhook dialog, there are several values you need to configure. After you config-
ure them, select Create:

●● Name. Specify any name you want for a webhook because the name isn't exposed to the client.
It's only used for you to identify the runbook in Azure Automation.
●● Enabled. A webhook is enabled by default when it's created. If you set it to Disabled, then no
client can use it.
●● Expires. Each webhook has an expiration date, at which time it can no longer be used. You can
continue to modify the date after creating the webhook, provided the webhook hasn't expired.
●● URL. The webhook URL is the unique address that a client calls with an HTTP POST to start the
runbook linked to the webhook. It's automatically generated when you create the webhook, and
you can't specify a custom URL. The URL contains a security token that allows the runbook to be
invoked by a third-party system with no further authentication. For this reason, treat it like a
password. For security reasons, you can only view the URL in the Azure portal when the webhook is
created. Make a note of the URL in a secure location for future use.
Note: When creating it, make sure you copy the webhook URL and then store it in a safe place. After you
create the webhook, you can't retrieve the URL again.
5. Select the Parameters run settings (Default: Azure) option. This option has the following charac-
teristics:

●● If the runbook has mandatory parameters, you'll need to provide these required parameters
during creation. You aren't able to create the webhook unless values are provided.
●● If there are no mandatory parameters in the runbook, there's no configuration required here.
●● The webhook must include values for any mandatory parameters of the runbook and can include
values for optional parameters.
●● When a client starts a runbook using a webhook, it can't override the parameter values defined.
●● To receive data from the client, the runbook can accept a single parameter called $WebhookData
of type [object] that contains data that the client includes in the POST request (see the sketch after this list).

●● There's no required webhook configuration to support the $WebhookData parameter.


6. When finished, select Create.
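
To illustrate the $WebhookData parameter described in step 5, the following is a hedged sketch of a runbook body; the VMName property on the request body is a hypothetical value the client is assumed to send:
param
(
    [Parameter(Mandatory = $false)]
    [object]$WebhookData
)

if ($WebhookData)
{
    # RequestBody holds the raw payload the client sent in the POST request
    $payload = $WebhookData.RequestBody | ConvertFrom-Json

    # 'VMName' is a hypothetical property; replace it with whatever your client sends
    Write-Output "Requested VM: $($payload.VMName)"
}
else
{
    Write-Output 'Runbook was started without webhook data.'
}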

Using a webhook
To use a webhook after it has been created, your client application must issue an HTTP POST with the
URL for the webhook.
●● The syntax of the webhook is in the following format:
https://< Webhook Server >/token?=< Token Value >

●● The client receives one of the following return codes from the POST request.

Code   Text                    Description
202    Accepted                The request was accepted, and the runbook was successfully queued.
400    Bad request             The request wasn't accepted because the runbook has expired, been disabled, or the token in the URL is invalid.
404    Not found               The request wasn't accepted because the webhook, runbook, or account wasn't found.
500    Internal Server Error
●● If successful, the webhook response contains the job ID in JSON format as follows:
{"JobIds":["< JobId >"]}

The response will contain a single job ID, but the JSON format allows for potential future enhance-
ments.
●● You can't determine when the runbook job completes or determine its completion status from the
webhook. You can only retrieve this information using the job ID with another method such as Power-
Shell or the Azure Automation API.
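
A client call might look like the following PowerShell sketch; the webhook URL is a placeholder for the one you saved when the webhook was created, and the body is optional:
# Placeholder: paste the URL captured when the webhook was created
$webhookUrl = 'https://< Webhook Server >/token?=< Token Value >'

# Optional body; the runbook reads it through the $WebhookData parameter
$body = @{ VMName = 'SimpleWinVM' } | ConvertTo-Json

# A 202 response returns the queued job ID, for example {"JobIds":["< JobId >"]}
$response = Invoke-RestMethod -Method Post -Uri $webhookUrl -Body $body
$response.JobIds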
More details are available on the Starting an Azure Automation runbook with a webhook34 page.

Explore source control integration


Azure Automation supports source control integration that enables you to keep your runbooks in your
Automation account up to date with your scripts in your GitHub or Azure DevOps source control reposi-
tory.
Source control allows you to collaborate with your team more efficiently, track changes, and roll back to
earlier versions of your runbooks.
For example, source control will enable you to sync different branches in source control to your develop-
ment, test, or production Automation accounts.
It makes it easier to promote code you've tested in your development environment to your production
Automation account.
Azure Automation supports three types of source control:
●● GitHub.
●● Azure DevOps (Git).
●● Azure DevOps (TFVC).
Source control allows you to push code from Azure Automation to source control or pull your runbooks
from source control to Azure Automation.
Source control sync jobs run under the user's Automation Account and are billed at the same rate as
other Automation jobs.

34 https://docs.microsoft.com/azure/automation/automation-webhooks

Integrate source control with Azure Automation


You integrate source control with Azure Automation using the following steps:
1. In the Azure portal, access your Automation account.
2. Under Account Settings, select Source control, and then choose + Add.
3. In the Source Control Summary blade, select GitHub as source control type and then select Au-
thenticate.

Note: You'll require a GitHub account to complete the next step.
4. When the browser page opens, prompting you to authenticate to https://www.github.com35, select
Authorize azureautomation and enter your GitHub account password. If successful, you should
receive an email notification from GitHub stating that A third-party OAuth Application (Automation
Source Control) with repo scope was recently authorized to access your account.
5. After authentication completes, fill in the details based on the following table, and then select Save.

Property               Description
Name                   Friendly name.
Source control type    GitHub, Azure DevOps Git, or Azure DevOps TFVC.
Repository             The name of the repository or project.
Branch                 The branch from which to pull the source files. Branch targeting isn't available for the TFVC source control type.
Folder Path            The folder that contains the runbooks to sync.
Autosync               Turns on or off automatic sync when a commit is made in the source control repository.
Publish Runbook        If set to On, runbooks are automatically published after they're synced from source control.
Description            A text field to provide more details.
6. If you set Autosync to Yes, full sync will start. If you set Autosync to No, open the Source Control
Summary blade again by selecting your repository in Azure Automation and then selecting Start
Sync.

35 https://www.github.com/

7. Verify that your source control is listed on the Azure Automation Source control page for you to
use.

Explore PowerShell workflows


IT pros often automate management tasks for their multi-device environments by running sequences of
long-running tasks or workflows.
These tasks can affect multiple managed computers or devices at the same time.

PowerShell Workflow lets IT pros and developers apply the benefits of Windows Workflow Foundation
with the automation capabilities and ease of using Windows PowerShell.
Tip: Refer to A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 436 for
more information.
Windows PowerShell Workflow functionality was introduced in Windows Server 2012 and Windows 8 and
is part of Windows PowerShell 3.0 and later.
Windows PowerShell Workflow helps automate distribution, orchestration, and completion of multi-de-
vice tasks, freeing users and administrators to focus on higher-level tasks.

Activities
An activity is a specific task that you want a workflow to do. Just as a script is composed of one or more
commands, a workflow is composed of activities carried out in sequence.
You can also use a script as a single command in another script and use a workflow as an activity within
another workflow.

Workflow characteristics
A workflow can:
●● Be long-running.
●● Be repeated over and over.
●● Run tasks in parallel.
●● Be interrupted—can be stopped and restarted, suspended, and resumed.
●● Continue after an unexpected interruption, such as a network outage or computer/server restart.

Workflow benefits
A workflow offers many benefits, including:
●● Windows PowerShell scripting syntax. Workflows are built on familiar PowerShell scripting syntax.
●● Multidevice management. Simultaneously apply workflow tasks to hundreds of managed nodes.
●● Single task runs multiple scripts and commands. Combine related scripts and commands into a single
task. Then run the single task on multiple computers. The activity status and progress within the
workflow are visible at any time.
●● Automated failure recovery.

●● Workflows survive both planned and unplanned interruptions, such as computer restarts.
●● You can suspend a workflow operation, then restart or resume the workflow from the point it was
suspended.
●● You can author checkpoints as part of your workflow so that you can resume the workflow from
the last persisted task (or checkpoint) instead of restarting the workflow from the beginning.

36 https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/ee342461%28v=msdn.10%29

●● Connection and activity retries. You can retry connections to managed nodes if network-connection
failures occur. Workflow authors can also specify activities that must run again if the activity cannot be
completed on one or more managed nodes (for example, if a target computer was offline while the
activity was running).
●● Connect and disconnect from workflows. Users can connect and disconnect from the computer
running the workflow, but the workflow will remain running. For example, suppose you're running the
workflow and managing the workflow on two different computers. In that case, you can sign out of or
restart the computer from which you're managing the workflow and continue to monitor workflow
operations from another computer without interrupting the workflow.
●● Task scheduling. You can schedule a task to start when specific conditions are met, as with any other
Windows PowerShell cmdlet or script.

Create a workflow
Use a script editor such as the Windows PowerShell Integrated Scripting Environment (ISE) to write the
workflow.
It enforces workflow syntax and highlights syntax errors. For more information, review the tutorial
Tutorial - Create a PowerShell Workflow runbook in Azure Automation37.
A benefit of using PowerShell ISE is that it automatically compiles your code and allows you to save the
artifact.
Because the syntactic differences between scripts and workflows are significant, a tool that knows both
workflows and scripts will save you considerable coding and testing time.

Syntax
When you create your workflow, begin with the workflow keyword, which identifies a workflow com-
mand to PowerShell.
A script workflow requires the workflow keyword. Next, name the workflow, and have it follow the
workflow keyword.
The body of the workflow will be enclosed in braces.
1. A workflow is a Windows command type, so select a name with a verb-noun format:
workflow Test-Workflow
{
...
}

2. To add parameters to a workflow, use the Param keyword. It's the same technique that you use to
add parameters to a function.
3. Finally, add your standard PowerShell commands.
workflow MyFirstRunbook-Workflow
{
    Param(
        [string]$VMName,
        [string]$ResourceGroupName
    )
    ....
    Start-AzureRmVM -Name $VMName -ResourceGroupName $ResourceGroupName
}

37 https://docs.microsoft.com/azure/automation/learn/automation-tutorial-runbook-textual

Explore hybrid management


The Hybrid Runbook Worker feature of Azure Automation allows you to run runbooks that manage local
resources in your private data center on machines located in your data center.
Azure Automation stores and manages the runbooks and then delivers them to one or more on-premises
machines.
The Hybrid Runbook Worker functionality is presented in the following graphic:

Hybrid Runbook Worker workflow and characteristics


The following list is characteristics of the Hybrid Runbook Worker workflow:
●● You can select one or more computers in your data center to act as a Hybrid Runbook Worker and
then run runbooks from Azure Automation.
●● Each Hybrid Runbook Worker is a member of a Hybrid Runbook Worker group, which you specify
when you install the agent.
●● A group can include a single agent, but you can install multiple agents in a group for high availability.
●● There are no inbound firewall requirements to support Hybrid Runbook Workers; only Transmission
Control Protocol (TCP) port 443 is required for outbound internet access.

●● The agent on the local computer starts all communication with Azure Automation in the cloud.
●● When a runbook is started, Azure Automation creates an instruction that the agent retrieves. The
agent then pulls down the runbook and any parameters before running it.
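
When starting a runbook with Az PowerShell, you can target a Hybrid Runbook Worker group with the -RunOn parameter. A minimal sketch, where the group and account names are placeholders:
# Run the runbook on the on-premises worker group rather than in an Azure sandbox
Start-AzAutomationRunbook -Name 'MyFirstRunbook-Workflow' -RunOn 'OnPremWorkerGroup' `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'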
To configure your on-premises servers that support the Hybrid Runbook Worker role with DSC, you must
add them as DSC nodes.
For more information about onboarding them for management with DSC, see Onboarding machines for
management by Azure Automation State Configuration38.
For more information on installing and removing Hybrid Runbook Workers and groups, see:
●● Automate resources in your datacenter or cloud by using Hybrid Runbook Worker.39
●● Hybrid Management in Azure Automation40

Exercise - Create and run a workflow runbook


Note: You require an Azure subscription to do the following steps. If you don't have one, you can create
one by following the steps outlined on the Create your Azure free account today41 webpage.

Steps

Create a new runbook


1. In the Azure portal, open your Automation account.
2. Under Process Automation, select Runbooks to open the list of runbooks.
3. Create a new runbook by selecting the Create a runbook option.
4. Give the runbook the name MyFirstRunbook-Workflow.
5. You're going to create a PowerShell Workflow runbook, so for Runbook type, select PowerShell
Workflow.
6. Select Create to create the runbook and open the text editor.

Add code to a runbook


You have two options when adding code to a runbook.
You can type code directly into the runbook or select cmdlets, runbooks, and assets from the Library
control and have them added to the runbook, along with any related parameters.
For this walkthrough, you'll use the type directly into the runbook method, as detailed in the following
steps:
1. Type Write-Output "Hello World" between the braces, as shown below:
Workflow MyFirstRunbook-Workflow
{
    Write-Output "Hello World"
}

38 https://docs.microsoft.com/azure/automation/automation-dsc-onboarding
39 https://docs.microsoft.com/azure/automation/automation-hybrid-runbook-worker#installing-hybrid-runbook-worker
40 https://azure.microsoft.com/blog/hybrid-management-in-azure-automation/
41 https://azure.microsoft.com/free

2. Save the runbook by selecting Save.

Test the runbook


Before you publish the runbook to production, you want to test it to ensure it works properly.
When you test a runbook, you run the draft version and view its output interactively, as demonstrated in
the following steps:
1. Select the Test pane.

2. Select Start to start the test. This should be the only enabled option.

A runbook job is created, and its status is displayed. The job status will start as Queued, indicating
that it's waiting for a runbook worker in the cloud to become available. It moves to Starting when a
worker claims the job and then Running when the runbook starts running. When the runbook job
completes, its output displays. In this case, you should see Hello World.
3. When the runbook job finishes, close the Test pane.

Publish and run the runbook


The runbook that you created is still in draft mode. You must publish it before you can
run it in production. When you publish a runbook, you overwrite the existing published version with the
draft version. In your case, you don't have a published version yet because you just created the runbook.
Use the following steps to publish your runbook:
1. In the runbook editor, select Publish to publish the runbook.
2. When prompted, select Yes.
3. Scroll left to view the runbook in the Runbooks pane and ensure it shows an Authoring Status of
Published.
4. Scroll back to the right to view the pane for MyFirstRunbook-Workflow. Notice the options across
the top:
●● Start
●● View
●● Edit
●● Link to schedule to start at some time in the future.
●● Add a webhook.
●● Delete
●● Export

5. You want to start the runbook, so select Start, and then when prompted, select Yes.
6. When the job pane opens for the runbook job you created, leave it open to watch the job's progress.
7. Verify that when the job completes, the job status displayed in Job Summary matches the status you
saw when you tested the runbook.
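
The same publish-and-run flow can also be scripted. A hedged sketch with placeholder account names:
# Start the published runbook and capture the job object that is created
$job = Start-AzAutomationRunbook -Name 'MyFirstRunbook-Workflow' `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'

# Read the job's output stream; expect 'Hello World' once the job completes
Get-AzAutomationJobOutput -Id $job.JobId -Stream Output `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'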

Examine checkpoint and parallel processing


Workflows let you implement complex logic within your code. Two features available with workflows are
checkpoints and parallel processing.

Checkpoints
A checkpoint is a snapshot of the current state of the workflow.
Checkpoints include the current value for variables and any output generated up to that point. (For more
information on what a checkpoint is, read the checkpoint42 webpage.)
If a workflow ends in an error or is suspended, the next time it runs, it will start from its last checkpoint
instead of at the beginning of the workflow.
You can set a checkpoint in a workflow with the Checkpoint-Workflow activity.
For example, in the following sample code, if an exception occurs after Activity2, the workflow will end.
When the workflow is rerun, it starts with Activity2, which follows just after the last checkpoint set.
<Activity1>
Checkpoint-Workflow
<Activity2>
<Exception>
<Activity3>

42 https://docs.microsoft.com/azure/automation/automation-powershell-workflow
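
A concrete, runnable sketch of the same pattern; the activity outputs are placeholders:
workflow Invoke-CheckpointDemo
{
    Write-Output 'Activity1 complete.'

    # Persist the workflow state; a rerun resumes from this point
    Checkpoint-Workflow

    Write-Output 'Activity2 complete.'
    Write-Output 'Activity3 complete.'
}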

Parallel processing
In parallel processing, a script block's commands run concurrently (or in parallel) instead of sequentially,
as they would in a typical script.
(More information about parallel processing is available on the
Parallel processing43 webpage.)
In the following example, two VMs, vm0 and vm1, will be started concurrently, and vm2 will start only after
vm0 and vm1 have started.
Parallel
{
    Start-AzureRmVM -Name $vm0 -ResourceGroupName $rg
    Start-AzureRmVM -Name $vm1 -ResourceGroupName $rg
}

Start-AzureRmVM -Name $vm2 -ResourceGroupName $rg

Another parallel processing example would be the following constructs that introduce some extra
options:
●● ForEach -Parallel. You can use the ForEach -Parallel construct to concurrently process commands for
each item in a collection. The items in the collection are processed in parallel while the commands in
the script block run sequentially.
In the following example, Activity1 starts at the same time for all items in the collection.
For each item, Activity2 starts after Activity1 completes. Activity3 starts only after both Activity1 and
Activity2 have been completed for all items.
●● ThrottleLimit - We use the ThrottleLimit parameter to limit parallelism. Too high of a ThrottleLimit can
cause problems. The ideal value for the ThrottleLimit parameter depends on several environmental
factors. Try starting with a low ThrottleLimit value, and then increase the value until you find one that
works for your specific circumstances:
ForEach -Parallel -ThrottleLimit 10 ($<item> in $<collection>)
{
    <Activity1>
    <Activity2>
}
<Activity3>

43 https://docs.microsoft.com/azure/automation/automation-powershell-workflow

A real-world example of it could be similar to the following code: a message displays for each file after
it's copied. Only after all files are copied does the completion message display.
Workflow Copy-Files
{
    $files = @("C:\LocalPath\File1.txt","C:\LocalPath\File2.txt","C:\LocalPath\File3.txt")

    ForEach -Parallel -ThrottleLimit 10 ($File in $files)
    {
        Copy-Item -Path $File -Destination \\NetworkPath
        Write-Output "$File copied."
    }

    Write-Output "All files copied."
}

Summary
This module described Azure automation with Azure DevOps, using runbooks, webhooks, and PowerShell
workflows. You learned how to create and manage automation for your environment.
You learned how to describe the benefits and usage of:
●● Implement automation with Azure DevOps.
●● Create and manage runbooks.
●● Create webhooks.
●● Create and run a workflow runbook and PowerShell workflows.

Learn more
●● Manage runbooks in Azure Automation | Microsoft Docs44.
●● Use source control integration in Azure Automation | Microsoft Docs45.
●● Webhooks with Azure DevOps - Azure DevOps | Microsoft Docs46.
●● Learn PowerShell Workflow for Azure Automation | Microsoft Docs47.

44 https://docs.microsoft.com/azure/automation/manage-runbooks
45 https://docs.microsoft.com/azure/automation/source-control-integration
46 https://docs.microsoft.com/azure/devops/service-hooks/services/webhooks
47 https://docs.microsoft.com/azure/automation/automation-powershell-workflow

Implement Desired State Configuration (DSC)


Introduction
This module describes Desired State Configuration (DSC) and its components for implementation. You'll
practice how to import and compile configurations, automate your environment creation, and use DSC for
Linux automation on Azure.

Learning objectives
After completing this module, students and professionals can:
●● Implement Desired State Configuration (DSC).
●● Describe Azure Automation State Configuration.
●● Implement DSC and Linux Automation on Azure.
●● Plan for hybrid management.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Understand configuration drift


Configuration drift is the process of a set of resources changing over time from their original deploy-
ment state.
It can be because of changes made manually by people or automatically by processes or programs.
Eventually, an environment can become a snowflake. A snowflake is a unique configuration that cannot
be reproduced automatically and is typically a result of configuration drift.
With snowflakes, infrastructure administration and maintenance invariably involve manual processes,
which can be hard to track and prone to human error.
The more an environment drifts from its original state, the more likely it is for an application to find
issues.
The greater the degree of configuration drift, the longer it takes to troubleshoot and rectify issues.

Security considerations
Configuration drift can also introduce security vulnerabilities into your environment. For example:
●● Ports might be opened that should be kept closed.

●● Updates and security patches might not be applied across environments consistently.
●● The software might be installed that doesn't meet compliance requirements.

Solutions for managing configuration drift


While eliminating configuration drift can be difficult, there are many ways you can manage it in your
environments using configuration management tools and products such as:
●● Windows PowerShell Desired State Configuration. It's a management platform in PowerShell that
enables you to manage and enforce resource configurations. For more information about Windows
PowerShell Desired State Configuration, go to Windows PowerShell Desired State Configuration
Overview.
●● Azure Policy. Use Azure Policy to enforce policies and compliance standards for Azure resources. For
more information about Azure Policy, go to Azure Policy48.
There are also other third-party solutions that you can integrate with Azure.

Explore Desired State Configuration (DSC)


Desired State Configuration (DSC) is a configuration management approach that you can use for
configuration, deployment, and management of systems to ensure that an environment is maintained in
a state that you specify (defined state) and doesn't deviate from that state.
DSC helps eliminate configuration drift and ensures the state is maintained for compliance, security, and
performance.
Windows PowerShell DSC is a management platform in PowerShell that provides desired State.
PowerShell DSC lets you manage, deploy, and enforce configurations for physical or virtual machines,
including Windows and Linux.
For more information, visit Windows PowerShell Desired State Configuration Overview49.

DSC components
DSC consists of three primary parts:
●● Configurations. These are declarative PowerShell scripts that define and configure instances of
resources. Upon running the configuration, DSC (and the resources being called by the configuration)
will apply the configuration, ensuring that the system exists in the state laid out by the configuration.
DSC configurations are also idempotent: The Local Configuration Manager (LCM) will ensure that
machines are configured in whatever state the configuration declares.
●● Resources. They contain the code that puts and keeps the target of a configuration in the specified
state. Resources are in PowerShell modules and can be written to model something as generic as a file or a
Windows process, or as specific as a Microsoft Internet Information Services (IIS) server or a VM
running in Azure.
●● Local Configuration Manager (LCM). The LCM runs on the nodes or machines you wish to configure.
It's the engine by which DSC facilitates the interaction between resources and configurations. The
LCM regularly polls the system using the control flow implemented by resources to maintain the state

48 https://azure.microsoft.com/services/azure-policy/
49 https://docs.microsoft.com/powershell/dsc/overview
518     

defined by a configuration. If the system is out of state, the LCM calls the code in resources to apply
the configuration according to specified.
There are two methods of implementing DSC:
●● Push mode. A user actively applies a configuration to a target node and pushes out the configura-
tion (see the sketch after this list).
●● Pull mode is where pull clients are automatically configured to get their desired state configurations
from a remote pull service. This remote pull service is provided by a pull server that acts as central
control and manager for the configurations, ensures that nodes conform to the desired state, and
reports on their compliance status. The pull server can be set up as an SMB-based pull server or an
HTTPS-based server. HTTPS-based pull-server uses the Open Data Protocol (OData) with the OData
Web service to communicate using REST APIs. It's the model we're most interested in, as it can be
centrally managed and controlled. The following diagram provides an outline of the workflow of DSC
pull mode.
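
For push mode (the sketch referenced in the bullet above), a configuration can be compiled and applied directly on a target node. A minimal sketch with a hypothetical configuration name, run from an elevated Windows PowerShell session:
configuration EnsureIIS
{
    Node 'localhost'
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Running the configuration function produces .\EnsureIIS\localhost.mof
EnsureIIS

# Push the compiled configuration to the node and wait for the result
Start-DscConfiguration -Path .\EnsureIIS -Wait -Verbose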

Explore Azure Automation State configuration (DSC)


Azure Automation State configuration (DSC) is an Azure cloud-based implementation of PowerShell
DSC, available as part of Azure Automation.
Azure Automation State configuration allows you to write, manage, and compile PowerShell DSC configu-
rations, import DSC Resources, and assign configurations to target nodes, all in the cloud.

Why use Azure Automation DSC?


The following outlines some of the reasons why we would consider using Azure Automation DSC:
●● Built-in pull server. Provides a DSC pull server like the Windows Feature DSC service so that target
nodes automatically receive configurations, conform to the desired state, and report back on their
compliance. The built-in pull server in Azure Automation eliminates the need to set up and maintain
your pull server.
●● Management of all your DSC artifacts. You can manage all your DSC configurations, resources, and
target nodes from the Azure portal or PowerShell.
●● Import reporting data into Log Analytics. Nodes managed with Azure Automation state configuration
send detailed reporting status data to the built-in pull server. You can configure Azure Automation
state configuration to send this data to your Log Analytics workspace.

How Azure Automation state configuration works


The general process for how Azure Automation State configuration works is as follows:
1. Create a PowerShell script with the configuration element.
2. Upload the script to Azure Automation and compile the script into a MOF file. The file is transferred to
the DSC pull server, provided as part of the Azure Automation service.
3. Define the nodes that will use the configuration, and then apply the configuration.

For more information, visit Managed Object Format (MOF) file50.
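
The same three steps can be scripted with the Az PowerShell module. A hedged sketch, assuming LabConfig.ps1 contains a configuration named LabConfig with a WebServer node, and that the account names are placeholders:
# 1. Upload the configuration script to the Automation account
Import-AzAutomationDscConfiguration -SourcePath '.\LabConfig.ps1' -Published `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'

# 2. Compile it into a MOF file on the built-in pull server
Start-AzAutomationDscCompilationJob -ConfigurationName 'LabConfig' `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'

# 3. Register an Azure VM as a node and assign the compiled node configuration
Register-AzAutomationDscNode -AzureVMName 'WebServer' `
    -NodeConfigurationName 'LabConfig.WebServer' `
    -ResourceGroupName 'automation-rg' -AutomationAccountName 'contoso-automation'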

Examine DSC configuration file


DSC configurations are Windows PowerShell scripts that define a special type of function.
You can view some syntax examples and scenarios on the Configuration syntax51 page.

50 https://docs.microsoft.com/windows/win32/wmisdk/managed-object-format--mof-
51 https://docs.microsoft.com/powershell/scripting/dsc/configurations/configurations

DSC configuration elements


We'll provide the example configurations and then discuss the elements within them. Let's start with the
following example configuration:
configuration LabConfig
{
    Node WebServer
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name = 'Web-Server'
            IncludeAllSubFeature = $true
        }
    }
}

●● Configuration block. The Configuration block is the outermost script block. In this case, the name of
the configuration is LabConfig. Notice the curly brackets to define the block.
●● Node block. There can be one or more Node blocks. It defines the nodes (computers and VMs) that
you're configuring. In this example, the node targets a computer called WebServer. You could also
call it localhost and use it locally on any server.
●● Resource blocks. There can be one or more resource blocks. It's where the configuration sets the
properties for the resources. In this case, there's one resource block called WindowsFeature. Notice
the parameters that are defined. (You can read more about resource blocks at DSC resources52.)
Here's another example:
Configuration MyDscConfiguration
{
    param
    (
        [string[]]$ComputerName='localhost'
    )

    Node $ComputerName
    {
        WindowsFeature MyFeatureInstance
        {
            Ensure = 'Present'
            Name = 'RSAT'
        }

        WindowsFeature My2ndFeatureInstance
        {
            Ensure = 'Present'
            Name = 'Bitlocker'
        }
    }
}

MyDscConfiguration

52 https://docs.microsoft.com/powershell/scripting/dsc/resources/resources

In this example, you specify the node's name by passing it as the ComputerName parameter when you
compile the configuration. The name defaults to “localhost.”
Within a Configuration block, you can do almost anything that you normally could in a PowerShell
function.
You can also create the configuration in any editor, such as PowerShell ISE, and save the file as a Power-
Shell script with a .ps1 file type extension.

Exercise - Import and compile


After creating your DSC configuration file, you must import it and compile it to the DSC pull server.
Compiling will create the MOF file. Read more about it at Compiling a DSC Configuration with the
Azure portal53.

Import and compile configurations


To import and compile a configuration, complete the following high-level steps:
1. Create a configuration file by creating a file on your local machine. Then, copy and paste the following
PowerShell code into the file, and name it LabConfig.ps1. This script configuration will ensure the IIS
web-server role is installed on the servers:
configuration LabConfig
{
    Node WebServer
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name = 'Web-Server'
            IncludeAllSubFeature = $true
        }
    }
}

2. In your Azure Automation account, under Configuration Management > State configuration (DSC),
select the Configurations tab, and select +Add.

53 https://docs.microsoft.com/azure/automation/automation-dsc-compile#compiling-a-dsc-configuration-with-the-azure-portal

3. Point to the configuration file you want to import, and then select OK.

4. Once imported, double-click the file, select Compile, and then confirm by selecting Yes.

5. Once compiled, verify that the file has a status of Completed.



Note: If you prefer, you can also use the PowerShell Start-AzAutomationDscCompilationJob cmdlet.
More information about this method is available at Compiling a DSC Configuration with Windows
PowerShell54.

Exercise - Onboard machines for management


After your configuration is in place, you'll select the Azure VMs or on-premises VMs that you want to
onboard.
Note: For more information on onboarding on-premises VMs, review the Physical/virtual Windows
machines on-premises or in a cloud other than the Azure/AWS55 webpage.
You can onboard a VM and enable DSC in several different ways. Here we'll cover onboarding through an
Azure Automation account.

Onboard VMs to configure


When onboarding VMs using this method, you'll need to deploy your VMs to Azure before starting:
1. In the left pane of the Automation account, select State configuration (DSC).
2. Select the Nodes tab, and then select + Add to open the Virtual Machines pane.
3. Find the VM you would like to enable. (You can use the search field and filter options to find a specific
VM, if necessary.)
4. Select the VM, and then select Connect.

5. In the resultant Registration pane, configure the following settings, and then select OK.

54 https://docs.microsoft.com/azure/automation/automation-dsc-compile#compiling-a-dsc-configuration-with-windows-powershell
55 https://docs.microsoft.com/azure/automation/automation-dsc-onboarding#physicalvirtual-windows-machines-on-premises-or-in-a-cloud-other-than-azureaws

Property                       Description
Registration key               Primary or secondary, for registering the node with a pull service.
Node configuration name        The name of the node configuration that the VM should be configured to pull for Automation DSC.
Refresh Frequency              The time interval, in minutes, at which the LCM checks a pull service to get updated configurations. This value is ignored if the LCM isn't configured in pull mode. The default value is 30.
Configuration Mode Frequency   How often, in minutes, the current configuration is checked and applied. This property is ignored if the ConfigurationMode property is set to ApplyOnly. The default value is 15.
Configuration mode             Specifies how the LCM gets configurations. Possible values are ApplyOnly, ApplyAndMonitor, and ApplyAndAutoCorrect.
Allow Module Override          Controls whether new configurations downloaded from the Azure Automation DSC pull server can overwrite the old modules already on the target server.
Reboot Node if Needed          Set this to $true to automatically reboot the node after a configuration that requires a reboot is applied. Otherwise, you'll have to manually reboot the node for any configuration that needs it. The default value is $false.
Action after Reboot            Specifies what happens after a reboot during the application of a configuration. The possible values are ContinueConfiguration and StopConfiguration.
The service will then connect to the Azure VMs and apply the configuration.
6. Return to the State configuration (DSC) pane and verify that the status now displays as Compliant
after applying the configuration.

Each time Azure Automation DSC does a consistency check on a managed node, the node sends a
status report back to the pull server. You can review these reports on that node's blade. Access it by
double-clicking or pressing the spacebar and then Enter on the node.

Note: You can also unregister the node and assign a different configuration to nodes.
For more information about onboarding VMs, see also:
●● Enable Azure Automation State Configuration.56
●● Configuring the Local Configuration Manager57

Implement DSC and Linux Automation on Azure


More specific and up-to-date details are available at Get started with Desired State Configura-
tion (DSC) for Linux58.
The following Linux operating system versions are currently supported by both PowerShell DSC and
Azure Automation DSC:
●● CentOS 6, 7, and 8 (x64)
●● Debian GNU/Linux 8, 9, and 10 (x64)
●● Oracle Linux 6 and 7 (x64)
●● Red Hat Enterprise Linux Server 6, 7, and 8 (x64)
●● SUSE Linux Enterprise Server 12 and 15 (x64)

56 https://docs.microsoft.com/azure/automation/automation-dsc-onboarding
57 https://docs.microsoft.com/powershell/scripting/dsc/managing-nodes/metaconfig
58 https://docs.microsoft.com/powershell/scripting/dsc/getting-started/lnxgettingstarted

●● Ubuntu Server 14.04 LTS, 16.04 LTS, 18.04 LTS, and 20.04 LTS (x64)

Summary
This module described Desired State Configuration (DSC) and its components for implementation. You
practiced how to import and compile configurations, automate your environment creation, and use DSC
for Linux automation on Azure.
You learned how to describe the benefits and usage of:
●● Implement Desired State Configuration (DSC).
●● Describe Azure Automation State Configuration.
●● Implement DSC and Linux Automation on Azure.
●● Plan for hybrid management.

Learn more
●● Building a pipeline with DSC - Azure Pipelines | Microsoft Docs59.
●● Azure Automation State Configuration overview | Microsoft Docs60.
●● Desired State Configuration for Azure overview - Azure Virtual Machines | Microsoft Docs61.

59 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/dsc-cicd?view=azure-devops
60 https://docs.microsoft.com/en-us/azure/automation/automation-dsc-overview
61 https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview

Implement Bicep
Introduction
This module explains what Bicep is and how it's integrated into different tools such as Azure CLI and
Visual Studio Code for environment deployment configuration.

Learning Objectives
●● Learn what Bicep is.
●● Learn how to install it and create a smooth authoring experience.
●● Use Bicep to deploy resources to Azure.
●● Deploy Bicep files in Cloud Shell and Visual Studio Code.

Prerequisites
●● Basic understanding of DevOps and its concepts.
●● Familiarity with version control principles.
●● Beneficial if responsible for deploying resources to Azure via Infrastructure as Code (IaC).

What is Bicep?
Azure Bicep is the next revision of ARM templates designed to solve some of the issues developers
were facing when deploying their resources to Azure. It's an open-source tool and, in fact, a domain-spe-
cific language (DSL) that provides a means to declaratively codify infrastructure, which describes the
topology of cloud resources such as VMs, Web Apps, and networking interfaces. It also encourages code
reuse and modularity in designing the infrastructure as code files.

The new syntax is more straightforward and concise, allows you to write less code compared to ARM
templates, and automatically manages the dependencies between resources. Azure Bicep comes with
its own command-line interface (CLI), which can be used independently or with Azure CLI. Bicep CLI allows
you to transpile the Bicep files into ARM templates and deploy them and can be used to convert an
existing ARM template to Bicep.
Note: Beware that when converting ARM templates to Bicep, there might be issues since it's still a work
in progress.
There's also an excellent integration with Visual Studio Code that creates a superb authoring experience.
Azure Bicep supports types that are used to validate templates at development time rather than runtime.
The extension also supports linting, which can be used to unify the development experience between
team members or across different teams.
For more information about Azure Bicep, see Bicep language for deploying Azure resources62.

Next steps
In the next unit, you'll find out various ways to install Bicep and set up your development environment.

62 https://docs.microsoft.com/azure/azure-resource-manager/bicep/overview

Install Bicep
To start, install the Bicep CLI or the Visual Studio Code Extension63. Having both installed will give you a
great authoring experience.

Installing the VS Code extension


The extension provides language support, IntelliSense, and linting support.

To verify you have it installed, create a file with the .bicep extension and watch the language mode
change in the lower right corner of VS Code.

Installing Bicep CLI


You need to have Azure CLI version 2.20.0 or later installed to be able to install the Bicep CLI. When
you're ready, run the install command:
az bicep install

You can upgrade the Bicep CLI by running az bicep upgrade, and to validate the installation,
use az bicep version.
We deliberately avoided breaking down the installation for Windows, macOS, and Linux since Azure CLI is
cross-platform, and the steps would be the same.

63 https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep

Manual installation
You can manually install it if you don't have Azure CLI installed but still want to use Bicep CLI.

Windows
You can use Chocolatey or Winget to install the Bicep CLI:
choco install bicep
winget install -e --id Microsoft.Bicep
bicep --help

Linux
To install the Bicep CLI on Linux manually, use the below script:
curl -Lo bicep https://github.com/Azure/bicep/releases/latest/download/bicep-linux-x64
chmod +x ./bicep
sudo mv ./bicep /usr/local/bin/bicep
bicep --help

macOS
And for macOS, use homebrew or the previous script for Linux:
brew tap azure/bicep
brew install bicep
bicep --version

Troubleshooting the installation


If you face any errors or issues during the installation, visit the troubleshooting guide64
in the Azure Bicep documentation.

Next steps
In the next unit, you'll create your first Bicep template and deploy it to Azure.

64 https://docs.microsoft.com/azure/azure-resource-manager/bicep/installation-troubleshoot/

Exercise - Create Bicep templates


It's time to create your very first Bicep template. By following this unit, you'll learn how the Bicep
extension in VS Code simplifies development by providing type safety, syntax validation, and autocom-
pletion.

Prerequisites
To follow along, you will need to have access to an Azure subscription. You also need to have:
●● VS Code.
●● Azure CLI.
●● Bicep extension for VS Code.

Create your first template


Open VS Code and create a new file called main.bicep. When it's done, open the file and start
typing storage. You should see a menu appear, from which you select the first option res-storage by
pressing Enter or Tab. You should end up with a snippet that looks like the following:
resource storageaccount 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: 'name'
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Premium_LRS'
  }
}

This file will deploy an Azure Storage Account; however, we need to modify the file to make it ready for
deployment. First, let's add two parameters: one for the name, since it should be unique, and one for the
location.
param storageName string = 'stg${uniqueString(resourceGroup().id)}'
param location string = resourceGroup().location

The value you assign to a parameter is its default value, which makes the parameter optional.
Replace the name property with storageName; since location is already used, you're ready
to go ahead with the deployment.

Visualize resources
You can use VS Code to visualize the resources defined in your Bicep file by clicking on the visualizer
button at the top right-hand corner:

Compile the file


You do not need to compile the file to be able to deploy it, but it is good to know how you can do it.
Open the integrated terminal in VS Code by right-clicking on your Bicep file and selecting the Open in
Integrated Terminal menu. Then use the build command as follows:
az bicep build -f ./main.bicep

Feel free to take a look at the resulting ARM template and compare the two.

Deploying the Bicep file


Now is the time to deploy the Bicep file you've created. In the same terminal run the following com-
mands:
az group create --name Bicep --location eastus
az deployment group create --resource-group Bicep --template-file main.bicep --parameters storageName=uniqueName

Note: Replace uniqueName with a unique name; you can also omit the parameter since it has a default
value.
When the deployment finishes, you'll get a message indicating that the deployment succeeded.

Next steps
Now that you have learned how to create a basic template and deploy it, head over to the next unit to
learn more about the constructs in a Bicep file.

Understand Bicep file structure and syntax


Azure Bicep comes with its own syntax; however, it is easy to understand and follow. We will not go
deep into the syntax and structure, but let's review the main concepts using an example.

Sample Bicep (.bicep) file


@minLength(3)
@maxLength(11)
param storagePrefix string

param storageSKU string = 'Standard_LRS'

param location string = resourceGroup().location

var uniqueStorageName = '${storagePrefix}${uniqueString(resourceGroup().id)}'

resource stg 'Microsoft.Storage/storageAccounts@2019-04-01' = {
  name: uniqueStorageName
  location: location
  sku: {
    name: storageSKU
  }
  kind: 'StorageV2'
  properties: {
    supportsHttpsTrafficOnly: true
  }

  resource service 'fileServices' = {
    name: 'default'

    resource share 'shares' = {
      name: 'exampleshare'
    }
  }
}

module webModule './webApp.bicep' = {
  name: 'webDeploy'
  params: {
    skuName: 'S1'
    location: location
  }
}

output storageEndpoint object = stg.properties.primaryEndpoints

Scope
By default, the target scope of all templates is set to resourceGroup; however, you can customize it by
setting it explicitly. Additional allowed values are subscription, managementGroup, and tenant.

Parameters
You have already used the parameters in the previous unit. They allow you to customize your template
deployment at run time by providing potential values for names, location, prefixes, etc.
Parameters have types that editors can validate, and they can also have default values that make them
optional at deployment time. Additionally, they can have validation rules that make the
deployment more reliable by preventing any invalid value right from authoring. For more information,
see Parameters in Bicep65.

Variables
Similar to parameters, variables play a role in making a more robust and readable template. Any complex
expression can be stored in a variable and used throughout the template. When you define a variable, the
type is inferred from the value.
In the above example, the uniqueStorageName is used to simplify the resource definition. For more
information, see Variables in Bicep66.
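Because variables can hold any expression, they're handy for deriving values from parameters; a brief sketch, assuming a hypothetical environment parameter:

param environment string = 'dev'

var storageSku = environment == 'prod' ? 'Standard_GRS' : 'Standard_LRS'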

Resources
The resource keyword is used when you need to declare a resource in your templates. The resource
declaration has a symbolic name for the resource which can be used to reference that resource later
either for defining a sub-resource or for using its properties for an implicit dependency like a parent-child
relationship.
There are certain properties that are common to all resources, such as location, name, and properties. There are also resource-specific properties that can be used to customize the resource's pricing tier, SKU, and so on.
You can define sub-resources within a resource or outside by referencing the parent. In the above
example, a file share is defined within the storage account resource. If the intention was to define the
resource outside of it, you would need to change your template to the following:
resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: 'examplestorage'
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}

resource service 'Microsoft.Storage/storageAccounts/fileServices@2021-02-01' = {
  name: 'default'
  parent: storage
}

resource share 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-02-01' = {
  name: 'exampleshare'
  parent: service
}

65 https://docs.microsoft.com/azure/azure-resource-manager/bicep/parameters/
66 https://docs.microsoft.com/azure/azure-resource-manager/bicep/variables/

For more information, see Resource declaration in Bicep67.

Modules
If you want truly reusable templates, you can't avoid using a module. Modules enable you to reuse a Bicep file in other Bicep files. In a module, you define what you need to deploy and any parameters it needs; when you reuse it in another file, all you need to do is reference the file and provide the parameters. The rest is taken care of by Azure Bicep.
In the above example, you are using a module that presumably is deploying an App Service. For more
information, see Using Modules in Bicep68.

Outputs
You can use outputs to pass values from your deployment to the outside world, whether it is within a CI/
CD pipeline or in a local terminal or Cloud Shell. That would enable you to access a value such as storage
endpoint or application URL after the deployment is finished.
All you need is the output keyword, the output's type, and the property you would like to access:
output storageEndpoint object = stg.properties.primaryEndpoints

To find more information, see Outputs in Bicep69.
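As an example of reading that value back, the Azure CLI can query a finished deployment's outputs; this sketch again assumes the default deployment name main:

az deployment group show --resource-group Bicep --name main --query properties.outputs.storageEndpoint.value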

Other features
There are many other features available within a Bicep file such as loops, conditional deployment,
multiline strings, referencing an existing cloud resource, and many more. In fact, any valid function within
an ARM template is also valid within a Bicep file.
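As a brief illustration of loops and conditional deployment, the following sketch (the storageCount and deployStorage parameters are hypothetical) creates several storage accounts only when the condition holds:

param storageCount int = 2
param deployStorage bool = true

resource stgAccounts 'Microsoft.Storage/storageAccounts@2021-02-01' = [for i in range(0, storageCount): if (deployStorage) {
  name: 'stg${i}${uniqueString(resourceGroup().id)}'
  location: resourceGroup().location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}]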

Next steps
In the next unit, you will learn how to use Bicep in an Azure Pipeline.

67 https://docs.microsoft.com/azure/azure-resource-manager/bicep/resource-declaration/
68 https://docs.microsoft.com/azure/azure-resource-manager/bicep/modules/
69 https://docs.microsoft.com/azure/azure-resource-manager/bicep/outputs/

Exercise - Deploy a Bicep file from Azure Pipelines
Now that you know how to validate, compile and deploy your resources from your local environment, it's
time to extend that and see how to bring that into an Azure Pipeline to streamline your deployment
process even further.

Prerequisites
You'll need an Azure subscription; if you don't have one, create a free account70 before you begin.
You'll also need an Azure DevOps organization; similarly, if you don't have one, create one for free71.
You'll need to have a configured service connection72 in your project that is linked to your Azure
subscription. Don't worry if you haven't done this before, we'll show you an easy way to do it when you're
creating your pipeline.
You also need to have that Bicep file you created earlier pushed into the Azure Repository of your project.

Creating the pipeline


1. From within your Azure DevOps project, select Pipelines, and New pipeline.
2. Select Azure Repos Git (YAML) and specify your Azure Repo as a source.

70 https://azure.microsoft.com/free/
71 https://docs.microsoft.com/azure/devops/pipelines/get-started/pipelines-sign-up/
72 https://docs.microsoft.com/azure/devops/pipelines/library/connect-to-azure/

3. Select the starter pipeline from the list of templates.



4. Replace everything in the starter pipeline file with the following snippet.
trigger:
- main

name: Deploy Bicep files

variables:
  vmImageName: 'ubuntu-latest'
  azureServiceConnection: 'myServiceConnection'
  resourceGroupName: 'Bicep'
  location: 'eastus'
  templateFile: 'main.bicep'

pool:
  vmImage: $(vmImageName)

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: $(azureServiceConnection)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az --version
      az group create --name $(resourceGroupName) --location $(location)
      az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile)

Note: Don't forget to replace the service connection name with yours.
5. Select Save and run to create a new commit in your repository containing the pipeline YAML file and
then run the pipeline. Wait for the pipeline to finish running and check the status.

6. Once the pipeline runs successfully, you should be able to see the resource group and the storage
account.

Exercise - Deploy a Bicep file from GitHub workflows
GitHub Actions73 are similar to Azure Pipelines in nature. They provide a way to automate software
development and deployments. In this exercise, you'll learn how to deploy a Bicep file using a GitHub
Action.

Prerequisites
You'll need a GitHub account, which can be created for free here74. A GitHub repository is also required to store your Bicep file and workflows; once you've created it, push the Bicep file into it. For deployment to Azure, you'll also need access to an Azure subscription, which can be created for free here75.

Creating a service principal in Azure


To deploy your resources to Azure, you'll need to create a service principal that GitHub can use. Open a terminal or use Cloud Shell in the Azure portal and run the following commands:
az login
az ad sp create-for-rbac --name myApp --role contributor --scopes /subscriptions/{subscription-id}/resourceGroups/Bicep --sdk-auth

Note: Don't forget to replace the Subscription ID with yours.


When the operation succeeds, it should output a JSON object that contains your tenantId, subscriptionId, clientId, clientSecret, and a few more properties, such as the following.
{
  "clientId": "<GUID>",
  "clientSecret": "<GUID>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>",
  (...)
}

73 https://docs.github.com/actions
74 https://github.com/join
75 https://azure.microsoft.com/free/

Make a note of this object since you'll need to add it to your GitHub secrets.

Creating a GitHub secret


In your GitHub repository, navigate to Settings > Secrets > Actions. Create a new secret called AZURE_CREDENTIALS and paste in the entire JSON object you got from the previous step.
Create another secret for the name of the resource group (for example, AZURE_RG) and one for the subscription ID (for example, AZURE_SUBSCRIPTION).
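If you prefer the command line, the GitHub CLI can create the same secrets; a quick sketch, assuming the service principal JSON was saved locally as azure-credentials.json:

gh secret set AZURE_CREDENTIALS < azure-credentials.json
gh secret set AZURE_RG --body "Bicep"
gh secret set AZURE_SUBSCRIPTION --body "<subscription-id>"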

Creating a GitHub action


1. First, navigate to your repository and select the Actions menu. Then, set up a workflow to create an
empty workflow in your repository. You can rename the file to a different name if you prefer.

2. Replace the content of the file with the following snippet:

on: [push]
name: Azure ARM
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      # Checkout code
      - uses: actions/checkout@main

      # Log into Azure
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}

      # Deploy Bicep file
      - name: deploy
        uses: azure/arm-deploy@v1
        with:
          subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION }}
          resourceGroupName: ${{ secrets.AZURE_RG }}
          template: ./main.bicep
          parameters: storagePrefix=stg
          failOnStdErr: false

Feel free to replace the storage account prefix with your own.
Note: The first part of the workflow defines the trigger and a name. The rest defines a job and uses a
few tasks to check out the code, sign in to Azure, and deploy the Bicep file.
3. Select Start commit, and enter a title and a description in the pop-up dialog. Then select Commit
directly to the main branch, followed by Commit a new file.

4. Navigate back to the Actions tab and select the newly created action that should be running.

5. Monitor the status, and when the job is finished, check the Azure portal to confirm that the storage account has been created.

Summary
This module introduced Azure Bicep, the new revision of ARM templates, designed to give developers a first-class authoring experience through its integration with Visual Studio Code and Azure CLI. You learned how it simplifies deployment and encourages reusability with less code that is easy to write and deploy.
You also learned some of its benefits and how to:
●● Install Azure Bicep.
●● Understand its syntax and basics.
●● Write and deploy Azure Bicep templates locally.
●● Deploy Azure Bicep templates using Azure DevOps.
●● Deploy Azure Bicep templates using GitHub Actions.

Learn more
●● What is Bicep? | Azure Bicep documentation76.
●● Bicep on Microsoft Learn77.
●● Best practices for Bicep | Microsoft documentation78.
●● Migrate to Bicep | Microsoft documentation79.

76 https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/overview?tabs=bicep
77 https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/learn-bicep
78 https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/best-practices
79 https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/migrate

Labs
Lab 14: Deployments using Azure Resource Man-
ager templates
Lab overview
In this lab, you will create an Azure Resource Manager template and modularize it by using a linked template. You will then modify the main deployment template to call the linked template, update dependencies, and finally deploy the templates to Azure.

Objectives
After you complete this lab, you will be able to:
●● Create Resource Manager template
●● Create a Linked template for storage resources
●● Upload Linked Template to Azure Blob Storage and generate SAS token
●● Modify the main template to call Linked template
●● Modify main template to update dependencies
●● Deploy resources to Azure using linked templates

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions80

80 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which of the following approaches abstracts away the methodology of HOW a state is achieved?
†† Imperative.
†† Declarative.
†† Exclamatory.

Multiple choice
Which of the following choices defines the ability to apply one or more operations against a resource, resulting in the same outcome every time?
†† Increment.
†† Idempotence.
†† Configuration Drift.

Multiple choice
Which of the following choices is a benefit of infrastructure as code?
†† Treats infrastructure as a flexible resource that can be provisioned, de-provisioned, and reprovisioned
as and when needed.
†† Uses mutable service processes.
†† Manual scale-up and scale-out.

Multiple choice
Which of the following choices are the deployment modes available when using Azure Resource Manager
templates?
†† Transactional, Complete.
†† Incremental, Complete.
†† Differential, Validate.

Multiple choice
Which of the following choices isn't a component or section of an Azure Resource Manager template?
†† Outputs.
†† Functions.
†† Secrets.

Multiple choice
Which of the following choices is an element that enables you to define one resource as dependent on one
or more other resources?
†† template.
†† properties.
†† dependsOn.

Multiple choice
Which of the following choices can you use to get a list of VMs using Azure CLI?
†† List-VM.
†† az vm list.
†† Get-AzVm.

Multiple choice
Which of the following choices can you use to create a resource group using Azure CLI?
†† az group create.
†† az creates group.
†† az deployment group create.

Multiple choice
Which of the following choices is a valid syntax for string variables if you want to use Azure CLI commands
in PowerShell?
†† variable="value".
†† variable=value.
†† $variable="value".

Multiple choice
Which of the following choices isn't an option when creating runbooks?
†† Import runbooks from Azure Artifacts.
†† Create your runbook and import it.
†† Modify runbooks from the runbook gallery.

Multiple choice
Which of the following choices isn't a supported source control by Azure Automation?
†† Azure DevOps (Git or Team Foundation Version Control).
†† BitBucket.
†† GitHub.

Multiple choice
Which of the following choices is the required keyword at the beginning of a workflow that identifies it to PowerShell?
†† ResourceGroupName.
†† Param.
†† workflow.

Multiple choice
Which of the following choices is the process whereby a set of resources change their state over time from
the desired state in which they were deployed?
†† Configuration Drift.
†† Idempotence.
†† Increment.

Multiple choice
Which of the following choices isn't a primary component of DSC?
†† Configuration.
†† Resources.
†† PowerShell.

Multiple choice
Which of the following choices isn't a method of implementing DSC?
†† Pull mode.
†† Fetch mode.
†† Push mode.

Multiple choice
Which of the following describes how to define the dependencies in a bicep file?
†† Bicep uses implicit dependency using symbolic names and parent-child properties.
†† By adding the LinkTo property.
†† Bicep doesn't support resource dependencies.

Multiple choice
Which of the following choices best describes the behavior of the webAppName parameter for a team that created a template containing this line: param webAppName string = 'mySite${uniqueString(resourceGroup().id)}'?
†† Whoever is deploying the template must provide a value for the webAppName.
†† When you redeploy the template to the same resource group, the value of the webAppName remains
the same.
†† The webAppName parameter will have a different value every time the template gets deployed.

Multiple choice
Which of the following choices describe how you can reuse a Bicep template in other Bicep templates?
†† By adding a local reference to that file.
†† By defining a module and referencing it in other files.
†† By adding a remote reference to a template from an online repository.

Answers
Multiple choice
Which of the following approaches abstracts away the methodology of HOW a state is achieved?
†† Imperative.
■■ Declarative.
†† Exclamatory.
Explanation
The declarative approach abstracts away the methodology of how a state is achieved.
Multiple choice
Which of the following choices defines the ability to apply one or more operations against a resource, resulting in the same outcome every time?
†† Increment.
■■ Idempotence.
†† Configuration Drift.
Explanation
Idempotence is a mathematical term that can be used in Infrastructure as Code and Configuration as Code.
It is the ability to apply one or more operations against a resource, resulting in the same outcome.
Multiple choice
Which of the following choices is a benefit of infrastructure as code?
■■ Treats infrastructure as a flexible resource that can be provisioned, de-provisioned, and reprovisioned
as and when needed.
†† Uses mutable service processes.
†† Manual scale-up and scale-out.
Explanation
Treats infrastructure as a flexible resource that can be provisioned, de-provisioned, and reprovisioned as and
when required.
Multiple choice
Which of the following choices are the deployment modes available when using Azure Resource Manager
templates?
†† Transactional, Complete.
■■ Incremental, Complete.
†† Differential, Validate.
Explanation
When deploying your resources using templates, you have three options: validate, incremental mode
(default), and complete mode.

Multiple choice
Which of the following choices isn't a component or section of an Azure Resource Manager template?
†† Outputs.
†† Functions.
■■ Secrets.
Explanation
A Resource Manager template can contain sections like Parameters, Variables, Functions, Resources,
Outputs.
Multiple choice
Which of the following choices is an element that enables you to define one resource as dependent on
one or more other resources?
†† template.
†† properties.
■■ dependsOn.
Explanation
The dependsOn element enables you to define one resource as dependent on one or more other resources.
Multiple choice
Which of the following choices can you use to get a list of VMs using Azure CLI?
†† List-VM.
■■ az vm list.
†† Get-AzVm.
Explanation
It's az vm list. For many Azure resources, Azure CLI provides a list subcommand to get resource details.
Multiple choice
Which of the following choices can you use to create a resource group using Azure CLI?
■■ az group create.
†† az creates group.
†† az deployment group create.
Explanation
Create a resource group to deploy your resources by running az group create.

Multiple choice
Which of the following choices is a valid syntax for string variables if you want to use Azure CLI com-
mands in PowerShell?
†† variable="value".
†† variable=value.
■■ $variable="value".
Explanation
If you use PowerShell for running Azure CLI scripts, you'll need to use the syntax $variable="value" for string variables and $variable=integer for integer values.
Multiple choice
Which of the following choices isn't an option when creating runbooks?
■■ Import runbooks from Azure Artifacts.
†† Create your runbook and import it.
†† Modify runbooks from the runbook gallery.
Explanation
When creating runbooks, you have two options. You can either create your runbook and import it or modify
runbooks from the runbook gallery.
Multiple choice
Which of the following choices isn't a supported source control by Azure Automation?
†† Azure DevOps (Git or Team Foundation Version Control).
■■ BitBucket.
†† GitHub.
Explanation
Azure Automation supports three types of source control: GitHub, Azure DevOps (Git), and Azure DevOps
(Team Foundation Version Control).
Multiple choice
Which of the following choices is the required keyword at the beginning of a workflow that identifies it to PowerShell?
†† ResourceGroupName.
†† Param.
■■ workflow.
Explanation
When you create your workflow, begin with the workflow keyword, which identifies a workflow command to
PowerShell.

Multiple choice
Which of the following choices is the process whereby a set of resources change their state over time
from the desired state in which they were deployed?
■■ Configuration Drift.
†† Idempotence.
†† Increment.
Explanation
Configuration drift is the process of a set of resources changing over time from their original deployment
state.
Multiple choice
Which of the following choices isn't a primary component of DSC?
†† Configuration.
†† Resources.
■■ PowerShell.
Explanation
DSC consists of three primary components: Configurations, Resources, and Local Configuration Manager
(LCM).
Multiple choice
Which of the following choices isn't a method of implementing DSC?
†† Pull mode.
■■ Fetch mode.
†† Push mode.
Explanation
There are two methods of implementing DSC: Push mode and Pull mode.
Multiple choice
Which of the following describes how to define the dependencies in a bicep file?
■■ Bicep uses implicit dependency using symbolic names and parent-child properties.
†† By adding the LinkTo property.
†† Bicep doesn't support resource dependencies.
Explanation
Bicep uses implicit dependency using symbolic names and parent-child properties.

Multiple choice
Which of the following choices best describes the behavior of the webAppName parameter for a team that created a template containing this line: param webAppName string = 'mySite${uniqueString(resourceGroup().id)}'?
†† Whoever is deploying the template must provide a value for the webAppName.
■■ When you redeploy the template to the same resource group, the value of the webAppName remains
the same.
†† The webAppName parameter will have a different value every time the template gets deployed.
Explanation
When you redeploy the template to the same resource group, the value of the webAppName remains the
same.
Multiple choice
Which of the following choices describe how you can reuse a Bicep template in other Bicep templates?
†† By adding a local reference to that file.
■■ By defining a module and referencing it in other files.
†† By adding a remote reference to a template from an online repository.
Explanation
By defining a module and referencing it in other files.
Module 7 Implement security and validate code bases for compliance

Introduction to Secure DevOps


Introduction
While a DevOps way of working enables development teams to deploy applications faster, going faster
over a cliff doesn't help!

DevOps teams have access to unprecedented infrastructure and scale thanks to the cloud. But they can also be targeted by some of the most nefarious actors on the internet, and they risk the security of their business with every application deployment.
Perimeter-class security is no longer viable in such a distributed environment, so now companies need to
adopt more micro-level security across applications and infrastructure and have multiple lines of defense.
With continuous integration and continuous delivery, how do you ensure your applications are secure
and stay secure? How can you find and fix security issues early in the process? It begins with practices
commonly referred to as DevSecOps.
DevSecOps incorporates the security team and their capabilities into your DevOps practices making
security the responsibility of everyone on the team. Security needs to shift from an afterthought to being
evaluated at every step of the process.
Securing applications is a continuous process that encompasses secure infrastructure, designing architec-
ture with layered security, continuous security validation, and monitoring attacks.

Security is everyone's responsibility and needs to be looked at holistically across the application life cycle.
This module introduces DevSecOps concepts, SQL injection attacks, threat modeling, and security for
continuous integration.
We'll also see how continuous integration and deployment pipelines can accelerate the speed of security
teams and improve collaboration with software development teams.
You'll learn the key validation points and how to secure your pipeline.

Learning objectives
After completing this module, students and professionals can:
●● Identify SQL injection attack.
●● Understand DevSecOps.
●● Implement pipeline security.
●● Understand threat modeling.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Describe SQL injection attack


SQL Injection is an attack that makes it possible to execute malicious SQL statements. These statements
control a database server behind a web application.
Attackers can use SQL Injection vulnerabilities to bypass application security measures.
They can go around authentication and authorization of a web page or web application and retrieve the
content of the entire SQL database.
They can also use SQL Injection to add, modify, and delete records in the database.
An SQL Injection vulnerability may affect any website or web application that uses an SQL database such
as MySQL, Oracle, SQL Server, or others.
Criminals may use it to gain unauthorized access to, delete, or alter your sensitive data: customer infor-
mation, personal data, trade secrets, intellectual property, and more.
SQL Injection attacks are among the oldest, most prevalent, and most dangerous web application
vulnerabilities.
The OWASP organization (Open Web Application Security Project) lists injections in their OWASP Top 10
2017 document as the number one threat to web application security.
In this tutorial, we'll simulate a SQL injection attack.

Getting started
●● Use the SQL Injection Azure Resource Manager Template1 to provision a web app and an SQL
database with known SQL injection vulnerability.
●● Ensure you browse the ‘Contoso Clinic’ web app provisioned in your SQL injection resource group.

1 https://azure.microsoft.com/resources/templates/sql-injection-attack-prevention/

How it works
1. Navigate to the Patients view, and in the search box, type "'" and hit enter. You'll see an error page
with SQL exception indicating that the search box feeds the text into a SQL statement.

The helpful error message is enough to guess that the text in the search box is being appended into
the SQL statement.
2. Next, try passing a SQL statement 'AND FirstName = 'Kim'-- in the search box. You'll see that
the results in the list below are filtered down to only show the entry with firstname Kim.

3. You can try to order the list by SSN by using this statement in the search box 'order by SSN--.

4. Now for the finale, run this drop statement to drop the table that holds the information displayed on
this page… 'AND 1=1; Drop Table Patients --. Once the operation is complete, try and load
the page. You'll see that the view errors out with an exception indicating that the dbo.patients table
can't be found.
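To see why these injections work, consider a hypothetical query the search page might build by concatenating raw input (the application's actual code isn't shown in this exercise):

-- Hypothetical query template; <input> is replaced verbatim by the search text
SELECT * FROM Patients WHERE LastName LIKE '%<input>%'

-- After injecting: ' AND FirstName = 'Kim'--
SELECT * FROM Patients WHERE LastName LIKE '%' AND FirstName = 'Kim'--%'

The closing quote in the injected text terminates the string early, the attacker's condition is appended, and the trailing -- comments out the leftover fragment. Parameterized queries prevent this by treating the input as data rather than as part of the SQL statement.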

There's more
The Azure security center team has other playbooks2 you can look at to learn how vulnerabilities are
exploited to trigger a virus attack and a DDoS attack.

Understand DevSecOps
While the adoption of cloud computing is on the rise to support business productivity, a lack of security
infrastructure can inadvertently compromise data.
The 2018 Microsoft Security Intelligence Report finds that:
●● Data isn't encrypted both at rest and in transit by:
   ●● 7% of software as a service (SaaS) storage apps.
   ●● 86% of SaaS collaboration apps.
●● HTTP headers session protection is supported by only:
   ●● 4% of SaaS storage apps.
   ●● 3% of SaaS collaboration apps.

Secure DevOps (or DevSecOps)


DevOps is about working faster. Security is about emphasizing thoroughness. Security concerns are typically addressed at the end of the cycle, which can create unplanned work right at the end of the pipeline. Secure DevOps integrates DevOps with security into a set of practices designed to meet the goals of both DevOps and security more effectively.

2 https://azure.microsoft.com/blog/enhance-your-devsecops-practices-with-azure-security-center-s-newest-playbooks/

A Secure DevOps pipeline allows development teams to work fast without breaking their project by
introducing unwanted security vulnerabilities.
Note: Secure DevOps is also sometimes referred to as DevSecOps. You might encounter both terms, but
each term refers to the same concept.

Security in the context of Secure DevOps


Historically, security typically operated on a slower cycle and involved traditional security methodologies,
such as:
●● Access control.
●● Environment hardening.
●● Perimeter protection.
Secure DevOps includes these traditional security methodologies and more. With Secure DevOps,
security is about securing the pipeline.
Secure DevOps involves determining where to add protection to the elements that plug into your build
and release pipelines.
Secure DevOps can show you how and where you can add security to your automation practices, produc-
tion environments, and other pipeline elements while benefiting from the speed of DevOps.
Secure DevOps addresses broader questions, such as:
●● Is my pipeline consuming third-party components, and if so, are they secure?
●● Are there known vulnerabilities within any of the third-party software we use?
●● How quickly can I detect vulnerabilities (also referred to as time to detect)?
●● How quickly can I remediate identified vulnerabilities (also referred to as time to remediate)?
Security practices for detecting potential security anomalies need to be as robust and fast as your
DevOps pipeline's other parts. It also includes infrastructure automation and code development.

Explore Secure DevOps Pipeline


As previously stated, the goal of a Secure DevOps Pipeline is to enable development teams to work fast
without introducing unwanted vulnerabilities to their project.

Two essential features of Secure DevOps Pipelines that aren't found in standard DevOps Pipelines are:
●● Package management and the approval process associated with it. The previous workflow diagram
details other steps for adding software packages to the Pipeline and the approval processes that
packages must go through before they're used. These steps should be enacted early in the Pipeline to
identify issues sooner in the cycle.
●● Source Scanner, an extra step for scanning the source code. This step allows for security scanning and checking for security vulnerabilities that aren't present in the application code. The scanning occurs after the app is built but before release and pre-release testing. Source scanning can identify security vulnerabilities earlier in the cycle.
In the rest of this lesson, we address these two essential features of Secure DevOps Pipelines, the prob-
lems they present, and some of the solutions for them.

Explore key validation points


Continuous security validation should be added at each step from development through production to
help ensure the application is always secure.
This approach aims to switch the conversation with the security team from approving each release to
consenting to the CI/CD process and monitor and audit the process at any time.
When building green field applications, the diagram below highlights the critical validation points in the
CI/CD pipeline.
Depending on your platform and your application's lifecycle, you may implement the tools gradually, especially if your product is mature and you haven't previously run any security validation against your site or application.

IDE / pull request


Validation in the CI/CD begins before the developer commits their code.
Static code analysis tools in the IDE provide the first line of defense to help ensure that security vulnera-
bilities aren't introduced into the CI/CD process.
The process for committing code into a central repository should have controls to help prevent security
vulnerabilities from being introduced.
Using Git source control in Azure DevOps with branch policies provides a gated commit experience that
can provide this validation.
By enabling branch policies on the shared branch, a pull request is required to start the merge process
and ensure the execution of all defined controls.
The pull request should require a code review, a manual but important check for identifying new issues introduced into your code.
Along with this manual check, commits should be linked to work items for auditing why the code change
was made and require a continuous integration (CI) build process to succeed before the push can be
completed.

Explore continuous security validation


Today, developers don't hesitate to use components available in public package sources (such as npm or
NuGet).
With faster delivery and better productivity, open-source software (OSS) components are encouraged
across many organizations.
However, as the dependency on these third-party OSS components increases, so does the risk of security vulnerabilities or hidden license requirements, which creates compliance issues.
For a business, it's critical, as issues related to compliance, liabilities, and customer personal data can cause a great deal of privacy and security concerns.
Identifying such issues early in the release cycle gives you an advance warning and enough time to fix the problems. Notably, the cost of rectifying issues is lower the earlier the problem is discovered.
Many available tools can scan for these vulnerabilities within the build and release pipelines.
Once the merge is complete, the CI build should execute as part of the pull request (PR-CI) process.
Typically, the primary difference between the two runs is that the PR-CI process doesn't need any packag-
ing/staging in the CI build.
These CI builds should run static code analysis tests to ensure that the code follows all rules for both
maintenance and security.
Several tools can be used for it:
●● SonarQube.
●● Visual Studio Code Analysis and the Roslyn Security Analyzers.
●● Checkmarx - A Static Application Security Testing (SAST) tool.
●● BinSkim - A binary static analysis tool that provides security and correctness results for Windows
portable executables and many more.
Many of the tools seamlessly integrate into the Azure Pipelines build process. Visit the Visual Studio
Marketplace for more information on the integration capabilities of these tools.
Besides verifying code quality with the CI build, two other validations that are often tedious or ignored are scanning third-party packages for vulnerabilities and checking OSS license usage.
When we ask about third-party package vulnerabilities and licenses, the response is fear or uncertainty.
Organizations trying to manage third-party package vulnerabilities or OSS licenses explain that their process is tedious and manual.
Fortunately, WhiteSource Software's tools can make this identification process almost instantaneous.
In a later module, we'll discuss integrating several helpful and commonly used security and compliance
tools.

Understand threat modeling


Threat modeling is a core element of the Microsoft Security Development Lifecycle (SDL).
It's an engineering technique you can use to help you identify threats, attacks, vulnerabilities, and
countermeasures that could affect your application.

You can use threat modeling to shape your application's design, meet your company's security goals, and
reduce risk.
With non-security experts in mind, the tool makes threat modeling easier for all developers by providing
clear guidance on creating and analyzing threat models.

There are five major threat modeling steps:


●● Defining security requirements.
●● Creating an application diagram.
●● Identifying threats.
●● Mitigating threats.
●● Validating that threats have been mitigated.
Threat modeling should be part of your typical development lifecycle, enabling you to refine your threat
model and progressively reduce risk.

Microsoft Threat Modeling Tool


The Microsoft Threat Modeling Tool makes threat modeling easier for all developers through a standard
notation for visualizing system components, data flows, and security boundaries.
It also helps threat modelers identify classes of threats they should consider based on the structure of
their software design.
The tool has been designed with non-security experts in mind, making threat modeling easier for all
developers by providing clear guidance on creating and analyzing threat models.
The Threat Modeling Tool enables any developer or software architect to:
●● Communicate about the security design of their systems.
●● Analyze those designs for potential security issues using a proven methodology.
●● Suggest and manage mitigation for security issues.
For more information, you can see:
●● Threat Modeling Tool feature overview3.

3 https://docs.microsoft.com/azure/security/azure-security-threat-modeling-tool-feature-overview

●● Microsoft Threat Modeling Tool4.

Exercise threat modeling


Security can't be a separate department in a silo. It also can't be added at the end of a project.
Security must be part of DevOps, and together they're called DevSecOps.
The biggest weakness is not knowing the flaws in your solution. Microsoft has created a threat modeling tool to remediate that; it helps you understand potential security vulnerabilities in your solution.
The Threat Modeling Tool is a core element of the Microsoft Security Development Life cycle (SDL).
It allows software architects to identify and mitigate potential security issues early when they're relatively
easy and cost-effective to resolve.
As a result, it dramatically reduces the total cost of development.
The tool has been designed with non-security experts in mind, making threat modeling easier for all
developers by providing clear guidance on creating and analyzing threat models.
The tool enables anyone to:
●● Communicate about the security design of their systems.
●● Analyze those designs for potential security issues using a proven methodology.
●● Suggest and manage mitigations for security issues.
In this exercise, we'll see how easy it is to use the Threat Modeling tool to surface potential vulnerabilities in your infrastructure solution that you should consider when provisioning and deploying Azure resources and the application solution.

Getting started
●● Download and install the Threat Modeling tool5.

How to do it
1. Launch the Microsoft Threat Modeling Tool and choose the option to Create a Model.

4 https://blogs.msdn.microsoft.com/secdevblog/2018/09/12/microsoft-threat-modeling-tool-ga-release/
5 https://aka.ms/threatmodelingtool

2. From the right panel, search and add Azure App Service Web App, Azure SQL Database, link
them up to show a request and response flow as demonstrated in the following image.

3. From the toolbar menu, select View -> Analysis view. The analysis view will show you a complete list
of threats categorized by severity.

4. To generate a full report of the threats, from the toolbar menu, select Reports -> Create full report,
select a location to save the report.

A full report is generated with details of each threat, the SDLC phase it applies to, possible mitigations, and links to more information.

There's more
You can find a complete list of threats used in the threat modeling tool here6.

Summary
This module introduced DevSecOps concepts, SQL injection attacks, threat modeling, and security for
continuous integration.
You learned how to describe the benefits and usage of:
●● Identify SQL injection attack.
●● Understand DevSecOps.
●● Implement pipeline security.
●● Understand threat modeling.

Learn more
●● DevSecOps Tools and Services | Microsoft Azure7.
●● Enable DevSecOps with Azure and GitHub - DevSecOps | Microsoft Docs8.

6 https://docs.microsoft.com/azure/security/develop/threat-modeling-tool-threats
7 https://azure.microsoft.com/solutions/devsecops
8 https://docs.microsoft.com/azure/devops/devsecops

●● Advanced Threat Protection - Azure SQL Database, SQL Managed Instance, & Azure Synapse
Analytics | Microsoft Docs9.
●● Securing Azure Pipelines - Azure Pipelines | Microsoft Docs10.
●● SQL Injection attack on a web app (microsoft.com)11.

9 https://docs.microsoft.com/azure/azure-sql/database/threat-detection-overview
10 https://docs.microsoft.com/azure/devops/pipelines/security/overview
11 https://azure.microsoft.com/resources/templates/sql-injection-attack-prevention/

Implement open-source software


Introduction
This module explores open-source software and corporate concerns with software components. Also, it
explains common open-source licenses, license implications, and ratings.

Learning objectives
After completing this module, students and professionals can:
●● Implement open-source software.
●● Explain corporate concerns for open-source components.
●● Describe open-source licenses.
●● Understand the license implications and ratings.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore how software is built


Let us look at using open-source software in building software.

Using open-source software


Packages contain components that are built from source code. Open-source code is publicly available for
inspection, reuse, and contribution.
Most commonly, open-source projects indicate how the sources can be used and distributed afterward. A
license agreement comes with the source code and specifies what can and cannot be done.
Software today is built by using components. These components are created in part by the team that is
writing the entire software solution.
Some dependencies are on components created and made available by other teams, third-party compa-
nies, and the community. The packages that contain the components are a formalized way for distribu-
tion.
On average, the built software solution is around 80% based on existing components and maintained
outside of the project.
The rest of the solution consists of your code with business logic and specifics for the functional require-
ments. Plus, “glue” code that binds the components and your code. The components can be a commer-
cial offering or free of charge.
A considerable part of the publicly available and free components are community efforts to offer reusable components for everyone to use when building software. The people creating and maintaining these components often also make the source code available.

It's open-source code as opposed to closed source. Closed source means that the source code isn't available, even though the components are.

What is open-source software?


Wikipedia defines open-source software as follows:
"Open-source software is a type of computer software in which source code is released under a license in
which the copyright holder grants users the rights to study, change, and distribute the software to anyone
and for any purpose."
The related open-source software development is a collaborative form of software development involving
multiple contributors. Together they create and maintain software and source code using open sources.
The use of open-source software is widely adopted now.
Microsoft itself has also embraced open-source software in its products and the development platforms it offers.

The .NET platforms, such as the original .NET Framework and even more so .NET Core, use several components created by the open-source community and not Microsoft itself. In ASP.NET12 and ASP.NET13 Core, many of the frontend development libraries are open-source components, such as jQuery, Angular, and React.
Instead of creating new components themselves, the teams at Microsoft are using the open-source
components and taking a dependency on them.
The teams also contribute and invest in the open-source components and projects, joining in on the
collaborative effort. Besides adopting external open-source software, Microsoft has also made significant
parts of its software available as open-source.
.NET is a perfect example of how Microsoft has changed its posture towards open source. It has made the
codebase for the .NET Framework and .NET Core available and many other components.
The .NET Foundation aims to advocate the needs and evangelize the benefits of the .NET platform. And
promote the use of .NET open source for developers.
For more information, see the .NET Foundation website14.

12 http://asp.net/
13 http://asp.net/
14 http://www.dotnetfoundation.org

Explore corporate concerns with open-source software components


In summary, modern software development, including the Microsoft developer platform and ecosystem, implies the use of open-source components.
It has implications for companies that build software, either commercially or for internal use.
The inclusion of software components that are not built by the companies themselves means no complete control over the sources.
Being responsible for the source code used in components within a company means that you must accept its risks. The concerns are that source code components:
●● Are of low quality. It would impact the maintainability, reliability, and performance of the overall
solution.
●● Have no active maintenance. The code wouldn't evolve or be alterable without copying the source
code, effectively forking away from the origin.
●● Contain malicious code. The entire system that includes and uses the code will be compromised.
Potentially the whole company's IT and infrastructure is affected.
●● Have security vulnerabilities. The security of a software system is as good as its weakest part. Using
source code with vulnerabilities makes the entire system susceptible to attack by hackers and misuse.
●● Have unfavorable licensing restrictions. The effect of a license can affect the entire solution that
uses the open-source software.
The companies will have to make a trade-off: its developers want to use open-source software compo-
nents, allowing them to speed up development and use modern frameworks, libraries, and practices.
On the other hand, giving the developers and projects the freedom to include open-source software
should not put the company at risk.
The company's challenges are finding a way to keep the developers empowered and free to choose
technology to use while making sure the risks for the company are managed as well as possible.
Other challenges come from companies that offer open-source software to the public.
These challenges include having a business model around the open-source, when to publish open-source
code and how to deal with community contributions.
The fact that your source code is open doesn't imply that anyone can make changes to it.
There can be contributions from community collaboration, but a company doesn't necessarily have to accept them. It's referred to as closed open-source.
Suggestions for change are welcome, but the maintainers are the ones that carry out the actual changes.

Introduction to open-source licenses


A license agreement accompanies open-source software and the related source code.
The license describes how the source code and the components built from it can be used and how any
software created with it should deal with it.

According to the open-source definition of OpenSource.org15, a license shouldn't:


●● Discriminate against persons or groups.
●● Discriminate against fields of endeavor.
●● Be specific to a product.
●● Restrict other software.
●● And more - See the Open Source Definition16.
To cover the exact terms of a license, several types exist. Each type has its specifics and implications,
which we'll cover in the next part.
Even though open-source software is generally developed by multiple contributors from the community, that doesn't guarantee it's secure and without vulnerabilities.
With multiple reviewers, the chances of discovering vulnerabilities increase, but the discovery might not be immediate or happen before the code is consumed by others.
Since the source code is open, people with malicious intent can also inspect the code for vulnerabilities and exploit them when possible.
In that regard, it's both a blessing and a curse that open-source software has source code available for
others.

Explore common open-source licenses


In the current and previous units, we've talked about software components from the perspective of
packages.
Packages are the formalized ways to distribute software components.
The licensing types and concerns about vulnerabilities extend to the packages, as these contain the
components.

Types of licenses
There are multiple licenses used in open-source, and they're different.
The license spectrum is a chart showing licenses from the developer's perspective and the implications of
use for downstream requirements imposed on the overall solution and source code.

15 http://opensource.org/
16 http://opensource.org/osd

On the left side, there are the “attribution” licenses. They're permissive and allow practically every type of
use by the software that consumes it. An example is building commercially available software, including
the components or source code under this license.
The only restriction is that the original attribution to the authors remains included in the source code or
as part of the downstream use of the new software. The right side of the spectrum shows the “copyleft”
licenses.
These licenses are considered viral, as the use of the source code and its components, and distribution of
the complete software, implies that all source code using it should follow the same license form.
The viral nature is that the use of the software covered under this license type forces you to forward the
same license for all work with or on the original software.
The middle of the spectrum shows the “downstream” or "weak copyleft" licenses. These also require that when the covered code is distributed, it must be distributed under the same license terms.
Unlike the copyleft licenses, this requirement doesn't extend to improvements or additions to the covered code.

Examine license implications and ratings


Any form of software that uses code or components must consider the license type that is covered.
For companies, it becomes essential to know the types of licenses for components and packages that
developers choose to use.
When these include even one viral license, it requires that all software uses the same license.
Any intellectual property you might have must be made public and open source according to the terms
of the viral license type.
It has a tremendous impact on the project's source code and the company that produces it.

License rating
Licenses can be rated by the impact that they have. When a package has a specific type of license, the
use of the package implies keeping to the requirements of the package.

The license's impact on the downstream use of the code, components, and packages can be rated as
High, Medium, and Low, depending on the copy-left, downstream, or attribution nature of the license
type.
For compliance reasons, a high license rating can be considered a risk for compliance, intellectual
property, and exclusive rights.

Package security
The use of components creates a software supply chain.
The resultant product is a composition of all its parts and components.
It applies to the security level of the solution as well. So, like license types, it's essential to know how
secure the components being used are.
If one of the components used isn't secure, then the entire solution isn't either.

Summary
This module explored open-source software and corporate concerns with software components. Also, it
explained common open-source licenses, license implications, and ratings.
You learned how to describe the benefits and usage of:
●● Implement open-source software.
●● Explain corporate concerns for open-source components.
●● Describe open-source licenses.
●● Understand the license implications and ratings.

Learn more
●● Deploy Open Source Apps With Your Free Account | Microsoft Azure17.
●● Microsoft’s Open Source Program | Microsoft Open Source18.

17 https://azure.microsoft.com/free/open-source/search
18 https://opensource.microsoft.com/program/

Software Composition Analysis


Introduction
This module explains Composition Analysis, inspecting and validating code bases for compliance, integra-
tion with security tools, and integration with Azure Pipelines.

Learning objectives
After completing this module, students and professionals can:
●● Inspect and validate code bases for compliance.
●● Integrate security tools like WhiteSource with Azure DevOps.
●● Implement pipeline security validation.
●● Interpret alerts from scanning tools.
●● Configure GitHub Dependabot alerts and security.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Inspect and validate code bases for compliance


Security for applications is critical. Every day, news services worldwide seem to carry stories about some company's systems being breached. More importantly, private company and customer data have been disclosed.
It has been happening for a long time. In many cases, it wasn't visible to the public. Private information
was often disclosed, yet the people affected weren't even notified.
Governments worldwide frequently enact legislation to require information about breaches to become
public and notifications to the affected.
So, what are the issues?
We need to protect information from being disclosed to people who shouldn't have access. But more
importantly, we need to ensure that the data isn't altered or destroyed when it shouldn't be, and we need
to make sure it's destroyed when it's supposed to be.
We need to make sure we properly authenticate who is accessing the data and that they have the correct
permissions to do so. We need to find evidence when something has gone wrong through historical or
archival data or logs.
There are many aspects to building and deploying secure applications.
●● First, there's a general knowledge problem. Many developers and other staff assume they understand
security, but they don't. Cybersecurity is a constantly evolving discipline. A program of ongoing
education and training is essential.
●● Second, we need to ensure that the code is created correctly and securely implements the required
features, and we need to make sure that the features were designed with security in mind in the first
place.

●● Third, we need to ensure that the application follows the rules and regulations it's required to meet. We need to test it while building the code and retest it periodically, even after deployment.
It's commonly accepted that security isn't something you can add to an application or a system later.
Secure development must be part of the development life cycle. It's even more important for critical
applications and those who process sensitive or highly confidential information.
Application security concepts haven't been a focus for developers in the past. Apart from the education
and training issues, their organizations have emphasized the fast development of features.
However, with the introduction of DevOps practices, security testing is much easier to integrate. Rather
than being a task done by security specialists, security testing should be part of the day-to-day delivery
processes.
Overall, when the time for rework is taken into account, adding security to your DevOps practices can
reduce the overall time to develop quality software.

Explore software composition analysis (SCA)


Two crucial areas from the Secure DevOps pipeline are Package management and Open-Source Software
OSS components.

Package management
Just as teams use version control as a single source of truth for source code, Secure DevOps relies on a
package manager as the unique source of binary components.
Using binary package management, a development team can create a local cache of approved compo-
nents and a trusted feed for the Continuous Integration (CI) pipeline.
In Azure DevOps, Azure Artifacts is an integral part of the component workflow for organizing and
sharing access to your packages. Azure Artifacts allows you to:
●● Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating keeping binaries in Git.
●● Protect your packages. Keep every public source package you use (including packages from npmjs
and NuGet .org) safe in your feed where only you can delete it and where it's backed by the enter-
prise-grade Azure Service Level Agreement (SLA).
●● Integrate seamless package handling into your Continuous Integration (CI)/ Continuous Development
(CD) pipeline. Easily access all your artifacts in builds and releases. Azure Artifacts integrates natively
with the Azure Pipelines CI/CD tool.
For more information about Azure Artifacts, see What are Azure Artifacts?19
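As a sketch of how a project consumes such a feed, a NuGet client can be pointed at an Azure Artifacts feed through a nuget.config file; the organization and feed names below are placeholders:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <!-- Hypothetical Azure Artifacts feed; replace {organization} and {feed} with your own -->
    <add key="MyAzureArtifactsFeed"
         value="https://pkgs.dev.azure.com/{organization}/_packaging/{feed}/nuget/v3/index.json" />
  </packageSources>
</configuration>

Clearing the default sources and listing only the trusted feed is one way to make the feed the single source of binary components for the CI pipeline.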

Versions and compatibility


The following table lists the package types supported by Azure Artifacts, their availability in Azure DevOps Services, and their compatibility with specific versions of Azure DevOps Server, previously known as Team Foundation Server (TFS).

Feature      Azure DevOps Services    TFS
NuGet        Yes                      TFS 2017
npm          Yes                      TFS 2017 update 1 and later
Maven        Yes                      TFS 2017 update 1 and later
Gradle       Yes                      TFS 2018
Universal    Yes                      No
Python       Yes                      No

Maven, npm, and NuGet packages can be supported with teams of any size from public and private sources. Azure Artifacts comes with Azure DevOps, but the extension is also available from the Visual Studio Marketplace.

19 https://docs.microsoft.com/azure/devops/artifacts/overview

Note: After publishing a particular package version to a feed, that version number is permanently reserved. You can't upload a newer revision package with that same version number, or delete that version and upload a new package with the same version number. The published version is immutable.
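As a minimal sketch of how a build might publish to such a feed (assuming a feed named MyFeed in your organization and a pipeline that has already produced a .nupkg file), a push step could look like this:

steps:
# Push the built NuGet package to the organization-scoped Azure Artifacts
# feed. 'MyFeed' is a placeholder; replace it with your own feed name.
- task: NuGetCommand@2
  displayName: 'Push package to Azure Artifacts'
  inputs:
    command: push
    packagesToPush: '$(Build.ArtifactStagingDirectory)/**/*.nupkg'
    nuGetFeedType: internal
    publishVstsFeed: 'MyFeed'

Because the feed is the single trusted source, downstream builds restore from it rather than directly from public registries.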

The Role of OSS components


Development work is more productive because of the wide availability of reusable Open-source software
(OSS) components.

This practical approach to reuse includes runtimes available on both Windows and Linux operating systems, such as Microsoft .NET Core and Node.js.
However, OSS component reuse comes with the risk that reused dependencies can have security vulnera-
bilities. As a result, many users find security vulnerabilities in their applications because of the Node.js
package versions they consume.
To address these security concerns, OSS reuse has given rise to a concept called Software Composition Analysis (SCA).

When consuming an OSS component, whether you're creating or consuming dependencies, you'll
typically want to follow these high-level steps:
1. Start with the latest, correct version to avoid old vulnerabilities or license misuse.
2. Validate that the OSS components are the correct binaries for your version. In the release pipeline,
validate binaries to ensure accuracy and keep a traceable bill of materials.
3. Get notifications of component vulnerabilities immediately, correct them, and redeploy the compo-
nent automatically to resolve security vulnerabilities or license misuses from reused software.

Integrate WhiteSource with Azure DevOps pipeline
The Visual Studio Marketplace is an important site for addressing Secure DevOps issues. From here, you can integrate specialist security products into your Azure DevOps pipeline.
Having a full suite of extensions that allow a seamless integration into Azure Pipelines is invaluable.

WhiteSource
The WhiteSource extension is available on the Azure DevOps Marketplace. Using WhiteSource, you can
integrate extensions with your CI/CD pipeline to address Secure DevOps security-related issues.
The WhiteSource extension specifically addresses open-source security, quality, and license compliance
concerns for a team consuming external packages.
Because most breaches target known vulnerabilities in standard components, robust tools are essential to
securing complex open-source components.

Continuously detect all open-source components in your


software
WhiteSource will automatically detect all open-source components—including their transitive dependen-
cies—every time you run a build.
It means you can generate a comprehensive inventory report within minutes based on the last build you
ran.
It also gives complete visibility to your security, DevOps, and legal teams into your organization’s soft-
ware development process.

Receive alerts on open-source security vulnerabilities


WhiteSource automatically generates an alert and provides targeted remediation guidance when a new
security vulnerability is discovered.

It can include links to patches, fixes, relevant source files, and even recommendations to change the system configuration to prevent exploitation.

Automatically enforce open-source security and license compliance policies
According to a company’s policies, WhiteSource automatically approves, rejects, or triggers a manual
approval process every time a new open-source component is added to a build.
Developers can set up policies based on parameters such as security-vulnerability severity, license type,
or library age.
When a developer adds a problematic open-source component, the service will alert and fail the build.
For searching online repositories such as GitHub and Maven Central, WhiteSource also offers an innova-
tive browser extension.
Before choosing a new component, a developer can review its security vulnerabilities, quality, license
issues, and whether it fits their company’s policies.
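To wire WhiteSource into a build, you add its task to the pipeline after the step that restores or builds your dependencies. A minimal sketch, assuming the WhiteSource Bolt extension has been installed from the Marketplace (the exact task name can vary with the extension version):

steps:
# Scan restored open-source dependencies for known vulnerabilities and
# license issues. The task only becomes available after installing the
# WhiteSource Bolt extension from the Marketplace.
- task: WhiteSource Bolt@20
  displayName: 'WhiteSource Bolt dependency scan'

When the scan finds a component that violates the configured policies, the build fails and the results are published with the build summary.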

Implement GitHub Dependabot alerts and security updates
Alerts
GitHub Dependabot detects vulnerable dependencies and sends Dependabot alerts about them in
several situations:
●● A new vulnerability is added to the GitHub Advisory database.
●● New vulnerability data from WhiteSource is processed.
●● Dependency graph for a repository changes.
Vulnerable dependencies are detected in public repositories by default, but detection can be enabled for other repositories.
Notifications can be sent via standard GitHub notification mechanisms.

For more information on Dependabot Alerts, see About alerts for vulnerable dependencies20.
See Supported package ecosystems for details on the supported package ecosystems that alerts can be
generated from.
For notification details, see: Configuring notifications21.

Security updates
A key advantage of Dependabot security updates is that they can automatically create pull requests.
A developer can then review the suggested update and triage what is required to incorporate it.
For more information on automatic security updates, see About GitHub Dependabot security up-
dates22.
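Dependabot security updates work from the dependency graph alone, but Dependabot version updates are configured through a .github/dependabot.yml file in the repository. A minimal sketch for an npm project (the ecosystem and schedule are illustrative choices):

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"   # ecosystem to keep up to date
    directory: "/"             # location of the package manifest
    schedule:
      interval: "weekly"       # how often to check for new versions

With this file in place, Dependabot opens pull requests proposing dependency upgrades, which a developer can then review and merge.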

Integrate software composition analysis checks into pipelines
Security scanning used to be thought of as an activity that was completed once per release by a dedicated security team whose members had little involvement with other groups.
This practice creates a dangerous pattern in which security specialists find large batches of issues at the exact time when developers are under the most pressure to release a software product.
The pressure often results in software being deployed with security vulnerabilities that must be addressed after the product has been released.
By integrating scanning into a team's workflow at multiple points along the development path, Secure DevOps can help make all quality-assurance activities, including security, continuous and automated.

Pull request code scan analysis integration.


DevOps teams can submit proposed changes to an application's (main) codebase using pull requests
(PRs). To avoid introducing new issues, developers need to verify the effects of the code changes before
creating a PR. A PR is typically made for each small change in a DevOps process. Changes are continu-
ously merged with the main codebase to keep the main codebase up to date. Ideally, a developer should
check for security issues before creating a PR.
Azure Marketplace extensions that help integrate scans during PRs include:
●● WhiteSource23. Helps validate dependencies with its binary fingerprinting.
●● Checkmarx24. Provides an incremental scan of changes.
●● Veracode25. Implements the concept of a developer sandbox.
●● Black Duck by Synopsys26. An auditing tool for open-source code to help identify, fix, and manage compliance.
These extensions allow developers to experiment with changes before submitting them as part of a PR.

20 https://docs.github.com/free-pro-team@latest/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies
21 https://docs.github.com/free-pro-team@latest/github/managing-subscriptions-and-notifications-on-github/configuring-notifications
22 https://docs.github.com/free-pro-team@latest/github/managing-security-vulnerabilities/about-github-dependabot-security-updates
23 https://www.whitesourcesoftware.com/
24 https://www.checkmarx.com/
25 https://www.veracode.com/
26 https://www.blackducksoftware.com/

Build and release definition code scan, analysis, and integration
Developers need to optimize CI for speed to get immediate feedback about build issues. Code scanning can be performed quickly enough to integrate into the CI build definition, preventing a broken build. It enables developers to restore a build's status to ready/green by fixing potential issues immediately.
At the same time, CD needs to be thorough. In Azure DevOps, CD is typically managed through release definitions (which progress the build output across environments) or other build definitions.
Build definitions can be scheduled (for example, daily) or triggered with each commit. In either case, the build definition can do a longer static analysis scan.
You can scan the complete code project and review any errors or warnings offline without blocking the CI
flow.
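A sketch of how the thorough scan can be separated from the fast CI build, using a scheduled trigger in an Azure Pipelines YAML definition (the cron time and branch name are illustrative):

# Nightly pipeline trigger for the longer static analysis scan, keeping
# the per-commit CI build fast. Cron times are in UTC.
schedules:
- cron: "0 2 * * *"            # every day at 02:00 UTC
  displayName: Nightly full static analysis
  branches:
    include:
    - main
  always: true                 # run even if no code changed since last run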

Examine tools to assess package security and license rating
Several tools are available from third parties to help assess the security and license rating of the software packages in use.
As discussed in the previous section, one approach these tools take is to provide a centralized artifact repository. Scanning can then be done at any time, inspecting the packages that are part of the repository.
The second approach uses tooling that scans the packages used in a build pipeline. During the build process, the tool can scan the packages used by that build, giving immediate feedback on the packages in use.

Inspect packages in the delivery pipeline


There's tooling available to do security scans on packages, components, and source code while running a
delivery pipeline. Often such tooling will use the build artifacts during the build process and do scans.
The tooling can either work on a local artifact repository or the intermediary build output. Some exam-
ples for each are products like:

Tool               | Type
Artifactory        | Artifact repository
SonarQube          | Static code analysis tool
WhiteSource (Bolt) | Build scanning

Configure pipeline
The configuration of the scanning for license types and security vulnerability in the pipeline is done by
using appropriate build tasks in your DevOps tooling. For Azure DevOps, these are build pipeline tasks.

Interpret alerts from scanner tools


To correctly interpret the results of scanning tools, you need to be aware of some aspects:
●● False positives. It's essential to verify that the findings in the scan results are real positives. The tooling is an automated way to scan and might misinterpret specific vulnerabilities. When triaging the findings in the scan results, you should be aware that some findings might not be correct. Such results are called false positives, established by human interpretation and expertise. One must not declare a result a false positive too quickly. On the other hand, scan results aren't guaranteed to be 100% accurate.
●● Security bug bar. Most likely, many security vulnerabilities will be detected—some of them false positives, but still many findings. Most findings can be handled or mitigated, given a certain amount of time and money. In such cases, there must be a security bug bar indicating the level of vulnerabilities that must be fixed before the security risks are acceptable enough to take the software into production. The bug bar makes sure that it's clear what must be taken care of and what might be done if time and resources are left.
The results of the tooling scan will be the basis for selecting what work remains to be done before the
software is considered stable and done.

By setting a security bug bar in the Definition of Done and specifying the allowed license ratings, one can
use the reports from the scans to find the work for the development team.

Summary
This module explained Composition Analysis, inspecting and validating code bases for compliance,
integration with security tools, and integration with Azure Pipelines.
You learned how to describe the benefits and usage of:
●● Inspect and validate code bases for compliance.
●● Integrate security tools like WhiteSource with Azure DevOps.
●● Implement pipeline security validation.
●● Interpret alerts from scanning tools.
●● Configure GitHub Dependabot alerts and security.

Learn more
●● Develop secure applications on Microsoft Azure | Microsoft Docs27.
●● Azure DevOps Code Quality & Code Security Improvement | SonarCloud28.
●● Configuring Dependabot security updates - GitHub Docs29.

27 https://docs.microsoft.com/azure/security/develop/secure-develop
28 https://sonarcloud.io/azure-devops
29 https://docs.github.com/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/configuring-
dependabot-security-updates

Static analyzers
Introduction
This module introduces the static analyzers SonarCloud and CodeQL in GitHub.

Learning objectives
After completing this module, students and professionals can:
●● Understand Static Analyzers.
●● Work with SonarCloud.
●● Work with CodeQL in GitHub.
●● Interpret alerts from scanning tools.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore SonarCloud
Technical debt can be classified as the measure between the codebase's current state and an optimal
state.
Technical debt saps productivity by making code hard to understand, easy to break, and difficult to
validate, creating unplanned work and ultimately blocking progress.
Technical debt is inevitable! It starts small and grows over time through rushed changes, lack of context,
and discipline.
Organizations often find that more than 50% of their capacity is sapped by technical debt.
The hardest part of fixing technical debt is knowing where to start.
SonarQube is an open-source platform that is the de facto solution for understanding and managing technical debt. We'll learn how to use SonarQube in a build pipeline to identify technical debt.

Getting ready
SonarQube is an open platform to manage code quality.
Originally famous in the Java community, SonarQube now supports over 20 programming languages.
The joint investments made by Microsoft and SonarSource make SonarQube easier to integrate with
Pipelines and better at analyzing .NET-based applications.
You can read more about the capabilities offered by SonarQube here: https://www.sonarqube.org/.
SonarSource, the company behind SonarQube, offers a hosted SonarQube environment called SonarCloud.
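A sketch of what SonarCloud analysis looks like in an Azure Pipelines YAML build, assuming the SonarCloud extension is installed and a service connection named SonarCloudConn exists (the organization and project key are placeholders):

steps:
# Prepare the analysis before the build compiles the code.
- task: SonarCloudPrepare@1
  inputs:
    SonarCloud: 'SonarCloudConn'     # service connection (placeholder)
    organization: 'my-org'
    scannerMode: 'MSBuild'
    projectKey: 'my-project'
# ... build and test steps run here ...
# Run the analysis and publish the quality gate result on the build.
- task: SonarCloudAnalyze@1
- task: SonarCloudPublish@1
  inputs:
    pollingTimeoutSec: '300'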

Explore CodeQL in GitHub


Developers use CodeQL to automate security checks.
CodeQL treats code like data that can be queried.
GitHub researchers and community researchers have contributed standard CodeQL queries, and you can
write your own.
A CodeQL analysis consists of three phases:
●● Create a CodeQL database (based upon the code).
●● Run CodeQL queries against the database.
●● Interpret the results.
CodeQL is available as a command-line interpreter and an extension for Visual Studio Code.
For an overview of CodeQL, see CodeQL Overview30.
For the available tools, see CodeQL Tools31.
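The three phases map directly onto CodeQL CLI commands. A sketch for a JavaScript project, assuming the CodeQL CLI and its standard query packs are installed on the agent (paths and the language are illustrative):

steps:
# Phase 1: build a CodeQL database from the source tree.
# Phase 2: run the standard security-and-quality query suite against it.
# Phase 3: the SARIF output is then interpreted by a viewer or uploaded.
- script: |
    codeql database create codeql-db --language=javascript --source-root=.
    codeql database analyze codeql-db javascript-security-and-quality.qls \
      --format=sarif-latest --output=results.sarif
  displayName: 'CodeQL: create database and analyze'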

Summary
This module introduced the static analyzers SonarCloud and CodeQL in GitHub.
You learned how to describe the benefits and usage of:
●● Understand Static Analyzers.
●● Work with SonarCloud.
●● Work with CodeQL in GitHub.
●● Interpret alerts from scanning tools.

Learn more
●● Automatic Code Review, Testing, Inspection & Auditing32
●● CodeQL33

30 https://codeql.github.com/docs/codeql-overview/about-codeql/
31 https://codeql.github.com/docs/codeql-overview/codeql-tools/
32 https://sonarcloud.io
33 https://codeql.github.com

OWASP and Dynamic Analyzers


Introduction
This module explores OWASP and Dynamic Analyzers for penetration testing, results, and bugs.

Learning objectives
After completing this module, students and professionals can:
●● Understand OWASP and Dynamic Analyzers.
●● Implement OWASP Security Coding Practices.
●● Understand compliance for code bases.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Plan to Implement OWASP Secure Coding Practices
The starting point for secure development is to use secure-coding practices.
The Open Web Application Security Project (OWASP)34 is a global charitable organization focused on
improving software security.
OWASP's stated mission is to make software security visible so that individuals and organizations can
make informed decisions.
They offer impartial and practical advice.
OWASP regularly publishes a set of Secure Coding Practices. Their guidelines currently cover advice in the
following areas:
●● Input Validation
●● Output Encoding
●● Authentication and Password Management
●● Session Management
●● Access Control
●● Cryptographic Practices
●● Error Handling and Logging
●● Data Protection
●● Communication Security

34 http://owasp.org

●● System Configuration
●● Database Security
●● File Management
●● Memory Management
●● General Coding Practices
OWASP also publishes an intentionally vulnerable web application called The Juice Shop Tool Project35
to learn about common vulnerabilities and see how they appear in applications.
It includes vulnerabilities from all the OWASP Top 1036.
In 2002, Microsoft underwent a company-wide re-education and review phase to produce secure applica-
tion code.
The book Writing Secure Code, by David LeBlanc and Michael Howard37, was written by two people involved and provides detailed advice on writing secure code.
For more information, you can see:
●● The OWASP Foundation38.
●● OWASP Secure Coding Practices Quick Reference Guide39.
●● OWASP Code Review Guide40.
●● OWASP Top 1041.

Explore OWASP ZAP penetration test


ZAP is a free penetration testing tool for beginners to professionals. ZAP includes an API and a weekly
docker container image to integrate into your deployment process.
Refer to the OWASP ZAP VSTS Extension42 repo for details on setting up the integration. Here we're going to explain the benefits of including it in your process.
The application CI/CD pipeline should run within a few minutes, so you don't want to include any long-running processes.
The baseline scan is designed to identify vulnerabilities within a couple of minutes, making it a good option for the application CI/CD pipeline.
The nightly OWASP ZAP pipeline can spider the website and run the full Active Scan to evaluate the widest range of possible vulnerabilities.
OWASP ZAP can be installed on any machine in your network, but we like to use the OWASP ZAP/Weekly docker container within Azure Container Services.
It allows for the latest updates to the image and enables spinning up multiple image instances so several applications within an enterprise can be scanned simultaneously.

35 https://owasp.org/www-project-juice-shop/
36 https://owasp.org/www-project-top-ten/
37 https://www.booktopia.com.au/ebooks/writing-secure-code-david-leblanc/prod2370006179962.html
38 http://owasp.org
39 https://owasp.org/www-pdf-archive/OWASP_SCP_Quick_Reference_Guide_v2.pdf
40 https://owasp.org/www-pdf-archive/OWASP_Code_Review_Guide-V1_1.pdf
41 https://owasp.org/www-project-top-ten/
42 https://github.com/deliveron/owasp-zap-vsts-extension

The following figure outlines the steps for the Application CI/CD pipeline and the longer-running Nightly
OWASP ZAP pipeline.
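As an illustration of the quick baseline scan, a CI stage can run the weekly ZAP container against a deployed test instance. A sketch, where the image tag and target URL are placeholders:

steps:
# Pull and run the weekly ZAP image, executing the time-boxed baseline
# scan against a test deployment and writing an HTML report.
- script: |
    docker run -t owasp/zap2docker-weekly zap-baseline.py \
      -t https://my-test-app.example.com -r zap-report.html
  displayName: 'OWASP ZAP baseline scan'

The nightly pipeline would instead run the spider and full Active Scan, which take considerably longer.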

Explore OWASP ZAP results and bugs


Once the scans have completed, the Azure Pipelines release is updated with a report that includes the results, and bugs are created in the team's backlog.
Resolved bugs will close if the vulnerability has been fixed and move back into in-progress if the vulnera-
bility still exists.
The benefit of using this is that the vulnerabilities are created as bugs that provide actionable work that
can be tracked and measured.
False positives can be suppressed using OWASP ZAP's context file, so only valid vulnerabilities are
surfaced.

Even with continuous security validation running against every change to help ensure new vulnerabilities
aren't introduced, hackers continuously change their approaches, and new vulnerabilities are being
discovered.
Good monitoring tools allow you to help detect, prevent, and remediate issues discovered while your
application runs in production.
Azure provides several tools that provide detection, prevention, and alerting using rules, such as OWASP
Top 1043 and machine learning to detect anomalies and unusual behavior to help identify attackers.
Minimize security vulnerabilities by taking a holistic and layered approach to security, including secure
infrastructure, application architecture, continuous validation, and monitoring.
DevSecOps practices enable your entire team to incorporate these security capabilities in the whole
lifecycle of your application.
Establishing continuous security validation into your CI/CD pipeline can allow your application to stay
secure while improving the deployment frequency to meet the needs of your business to stay ahead of
the competition.

Summary
This module explored OWASP and Dynamic Analyzers for penetration testing, results, and bugs.
You learned how to describe the benefits and usage of:
●● Understand OWASP and Dynamic Analyzers.
●● Implement OWASP Security Coding Practices.
●● Understand compliance for codebases.

43 https://owasp.org/www-project-top-ten/

Learn more
●● Vulnerability Scanning Tools | OWASP Foundation44.
●● OWASP Secure Coding Practices Quick Reference Guide45.
●● OWASP Code Review Guide46.
●● OWASP Top 1047.

44 https://owasp.org/www-community/Vulnerability_Scanning_Tools
45 https://owasp.org/www-pdf-archive/OWASP_SCP_Quick_Reference_Guide_v2.pdf
46 https://owasp.org/www-pdf-archive/OWASP_Code_Review_Guide-V1_1.pdf
47 https://owasp.org/www-project-top-ten/

Security Monitoring and Governance


Introduction
This module describes security monitoring and governance with Microsoft Defender for Cloud and its
usage scenarios, Azure Policies, Microsoft Defender for Identity, and security practices related to the
tools.

Learning objectives
After completing this module, students and professionals can:
●● Configure Microsoft Defender for Cloud.
●● Understand Azure policies.
●● Describe initiatives, resource locks, and Azure Blueprints.
●● Work with Microsoft Defender for Identity.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Implement pipeline security


It's fundamental to protect your code, credentials, and secrets. Phishing is becoming ever more sophisticated. The following are several operational practices that a team ought to apply to protect itself:
●● Authentication and authorization. Use multifactor authentication (MFA), even across internal domains,
and just-in-time administration tools such as Azure PowerShell Just Enough Administration (JEA)48,
to protect against privilege escalations. Using different passwords for different user accounts will limit
the damage if a set of access credentials is stolen.
●● The CI/CD Release Pipeline. If the release pipeline and cadence are damaged, use this pipeline to
rebuild infrastructure. Manage Infrastructure as Code (IaC) with Azure Resource Manager or use the
Azure platform as a service (PaaS) or a similar service. Your pipeline will automatically create new
instances and then destroy them. It limits the places where attackers can hide malicious code inside
your infrastructure. Azure DevOps will encrypt the secrets in your pipeline. As a best practice, rotate
the passwords just as you would with other credentials.
●● Permissions management. You can manage permissions to secure the pipeline with role-based access
control (RBAC), just as you would for your source code. It keeps you in control of editing the build and
releases definitions that you use for production.
●● Dynamic scanning. It's the process of testing the running application with known attack patterns. You
could implement penetration testing as part of your release. You also could keep up to date on

48 https://aka.ms/jea

security projects such as the Open Web Application Security Project (OWASP49) Foundation, then
adopt these projects into your processes.
●● Production monitoring. It's a critical DevOps practice. The specialized services for detecting anomalies
related to intrusion are known as Security Information and Event Management. Microsoft Defender
for Cloud50 focuses on the security incidents related to the Azure cloud.
Note: In all cases, use Azure Resource Manager Templates or other code-based configurations. Imple-
ment IaC best practices, such as making changes in templates to make changes traceable and repeatable.
Also, you can use provisioning and configuration technologies such as Desired State Configuration (DSC),
Azure Automation, and other third-party tools and products that can integrate seamlessly with Azure.
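One small, concrete piece of this: Azure Pipelines keeps secret variables encrypted and doesn't expose them to scripts automatically. A sketch of explicitly mapping a secret into a deployment step, assuming a variable group named prod-secrets that contains a secret named apiToken (both placeholders):

variables:
- group: prod-secrets          # variable group, e.g., linked to Key Vault

steps:
- script: ./deploy.sh
  displayName: 'Deploy using a secret'
  env:
    API_TOKEN: $(apiToken)     # secrets must be mapped explicitly into env

Because secrets are only injected where mapped, a compromised or careless step elsewhere in the pipeline can't read them from the environment.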

Explore Microsoft Defender for Cloud


Microsoft Defender for Cloud is a monitoring service that provides threat protection across all your
services both in Azure and on-premises. Microsoft Defender can:
●● Provide security recommendations based on your configurations, resources, and networks.
●● Monitor security settings across on-premises and cloud workloads and automatically apply required
security to new services as they come online.
●● Continuously monitor all your services and do automatic security assessments to identify potential
vulnerabilities before they can be exploited.
●● Use Azure Machine Learning to detect and block malicious software from being installed on your
virtual machines (VMs) and services. You can also define a list of allowed applications to ensure that
only the validated apps can execute.
●● Analyze and identify potential inbound attacks and help investigate threats and any post-breach
activity that might have occurred.
●● Provide just-in-time (JIT) access control for ports by reducing your attack surface by ensuring the
network only allows traffic that you require.

Microsoft Defender for Cloud is part of the Center for Internet Security (CIS) Benchmarks51 recom-
mendations.

Microsoft Defender for Cloud versions


Microsoft Defender for Cloud supports both Windows and Linux operating systems. It can also provide
security to features in IaaS and platform as a service (PaaS) scenarios.
Microsoft Defender for Cloud is available in two versions:
●● Free. Available as part of your Azure subscription, this tier is limited to assessments and Azure re-
sources' recommendations only.

49 https://www.owasp.org
50 https://azure.microsoft.com/services/defender-for-cloud
51 https://www.cisecurity.org/cis-benchmarks/

●● Standard. This tier provides a full suite of security-related services, including continuous monitoring,
threat detection, JIT access control for ports, and more.
To access the full suite of Microsoft Defender for Cloud services, you'll need to upgrade to a Standard
version subscription.
You can access the 60-day free trial from the Microsoft Defender for Cloud dashboard in the Azure portal.

You can read more about Microsoft Defender for Cloud at Microsoft Defender for Cloud52.

Examine Microsoft Defender for Cloud usage scenarios
You can integrate Microsoft Defender for Cloud into your workflows and use it in many ways. Here are
two example usage scenarios:
●● Use Microsoft Defender for Cloud as part of your incident response plan.
Many organizations only respond to security incidents after an attack has occurred. To reduce costs and
damage, it's necessary to have an incident response plan before an attack occurs.

52 https://azure.microsoft.com/services/defender-for-cloud

The following examples show how you can use Microsoft Defender for Cloud to detect, assess, and diagnose the stages of your incident response plan.
●● Detect. Review the first indication of an event investigation. For example, use the Microsoft Defender
for Cloud dashboard to review a high-priority security alert's initial verification.
●● Assess. Do the initial assessment to obtain more information about suspicious activity. For example,
you can get more information from Microsoft Defender for Cloud about a security alert.
●● Diagnose. Conduct a technical investigation and identify containment, mitigation, and workaround
strategies. For example, you can follow the remediation steps described by Microsoft Defender for
Cloud for a particular security alert.
●● Use Microsoft Defender for Cloud recommendations to enhance security.
You can reduce the chances of a significant security event by configuring a security policy and then
implementing the recommendations provided by Microsoft Defender for Cloud. A security policy defines
the set of controls that are recommended for resources within a specified subscription or resource group.
In Microsoft Defender for Cloud, you can define policies according to your company's security require-
ments.
Microsoft Defender for Cloud analyzes the security state of your Azure resources. When it identifies
potential security vulnerabilities, it creates recommendations based on the controls set in the security
policy.
The suggestions guide you through the process of configuring the corresponding security controls.
For example, if you have workloads that don't require the Azure SQL Database Transparent Data Encryp-
tion (TDE) policy, turn off the policy at the subscription level and enable it only on the resource groups
where SQL Database TDE is required.
You can read more about the Microsoft Defender for Cloud at the Microsoft Defender for Cloud53.
More implementation and scenario details are also available in the Microsoft Defender for Cloud
planning and operations guide54.

Explore Azure Policy


Azure Policy is an Azure service with which you can create, assign, and manage policies.
Policies enforce different rules and effects over your Azure resources, ensuring that your resources stay
compliant with your standards and SLAs.

Azure Policy uses policies and initiatives to provide policy enforcement capabilities.
Azure Policy evaluates your resources by scanning for resources that don't follow the policies you create.
For example, you might have a policy that specifies a maximum size limit for VMs in your environment.

53 https://azure.microsoft.com/services/security-center/
54 https://docs.microsoft.com/azure/defender-for-cloud/security-center-planning-and-operations-guide

After you implement your maximum VM size policy, Azure Policy will evaluate the VM resource whenever a VM is created or updated, to ensure that the VM follows the size limit you set in your policy.
Azure Policy can help maintain the state of your resources by evaluating your existing resources and
configurations and automatically remediating non-compliant resources.
It has built-in policy and initiative definitions for you to use. The definitions are arranged into categories:
Storage, Networking, Compute, Security Center, and Monitoring.
Azure Policy can also integrate with Azure DevOps by applying any continuous integration (CI) and
continuous delivery (CD) pipeline policies that apply to the pre-deployment and post-deployment of your
applications.

CI/CD pipeline integration


The Check Gate task is an example of an Azure Policy integration with your DevOps CI/CD pipeline.
Using Azure policies, Check Gate provides a security and compliance assessment of the resources within an Azure resource group or subscription that you specify.
Check Gate is available as a release pipeline deployment task.
For more information, go to:
●● Azure Policy Check Gate task55.
●● Azure Policy56.

Understand policies
Applying a policy to your resources with Azure Policy involves the following high-level steps:
●● Policy definition. Create a policy definition.
●● Policy assignment. Assign the definition to a scope of resources.
●● Remediation. Review the policy evaluation results and address any non-compliances.

Policy definition
A policy definition specifies the resources to be evaluated and the actions to take on them. For example, you could prevent VMs from deploying if they're exposed to a public IP address. You could also prevent a specific hard disk type from being used when deploying VMs, to control costs. Policies are defined in the JavaScript Object Notation (JSON) format.
The following example defines a policy that limits where you can deploy resources:
{
  "properties": {
    "mode": "all",
    "parameters": {
      "allowedLocations": {
        "type": "array",
        "metadata": {
          "description": "The list of locations that can be specified when deploying resources",
          "strongType": "location",
          "displayName": "Allowed locations"
        }
      }
    },
    "displayName": "Allowed locations",
    "description": "This policy enables you to restrict the locations your organization can specify when deploying resources.",
    "policyRule": {
      "if": {
        "not": {
          "field": "location",
          "in": "[parameters('allowedLocations')]"
        }
      },
      "then": {
        "effect": "deny"
      }
    }
  }
}

55 https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-policy-check-gate
56 https://azure.microsoft.com/services/azure-policy/

The following list shows example policy definitions:
●● Allowed Storage Account SKUs. This policy defines conditions (or rules) that limit storage accounts to
a set of specified sizes or Stock Keeping Units (SKUs). Its effect is to deny all storage accounts that
don't adhere to defined SKU sizes.
●● Allowed Resource Type. This policy definition has a set of conditions to specify the resource types that can be deployed. Its effect is to deny all resources that aren't on the list.
●● Allowed Locations. This policy restricts the locations where new resources can be deployed. It's used
to enforce geographic compliance requirements.
●● Allowed Virtual Machine SKUs. This policy specifies a set of VM SKUs that can be deployed. The policy
effect is that VMs can't be deployed from unspecified SKUs.

Policy assignment
Policy definitions, whether custom or built-in, need to be assigned.
A policy assignment is a policy definition that has been assigned to a specific scope. Scopes can range
from a management group to a resource group.
Child resources will inherit any policy assignments applied to their parents.
It means that if a policy is applied to a resource group, it applies to all the resources within that resource group.
However, you can define subscopes for excluding resources from policy assignments.
You can assign policies via:
●● Azure portal.

●● Azure CLI.
●● PowerShell.
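A sketch of assigning a policy definition from the command line, wrapped in a pipeline step. The service connection, definition name, resource group, and parameter values are all placeholders:

steps:
# Assign an 'allowed locations'-style definition at resource-group scope.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az policy assignment create \
        --name 'allowed-locations' \
        --policy 'my-allowed-locations-definition' \
        --resource-group 'rg-demo' \
        --params '{ "allowedLocations": { "value": [ "westeurope" ] } }'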

Remediation
Resources found not to follow a deployIfNotExists or modify policy condition can be put into a compliant
state through Remediation.
Remediation instructs Azure Policy to run the deployIfNotExists effect or the tag operations of the policy
on existing resources.
To minimize configuration drift, you can bring resources into compliance using automated bulk Remedia-
tion instead of going through them one at a time.
You can read more about Azure Policy on the Azure Policy57 webpage.

Explore initiatives
Initiatives work alongside policies in Azure Policy. An initiative definition is a set of policy definitions to
help track your compliance state for meeting large-scale compliance goals.
Even if you have a single policy, we recommend using initiatives if you anticipate increasing your number
of policies over time.
Applying an initiative definition to a specific scope is called an initiative assignment.

Initiative definitions
Initiative definitions simplify the process of managing and assigning policy definitions by grouping sets of
policies into a single item.
For example, you can create an initiative named Enable Monitoring in Azure Security Center to monitor
security recommendations from Azure Security Center.
Under this example initiative, you would have the following policy definitions:
●● Monitor unencrypted SQL Database in Security Center. This policy definition monitors unencrypted
SQL databases and servers.
●● Monitor OS vulnerabilities in Security Center. This policy definition monitors servers that don't satisfy
a specified OS baseline configuration.
●● Monitor missing Endpoint Protection in Security Center. This policy definition monitors servers
without an endpoint protection agent installed.

Initiative assignments
Like a policy assignment, an initiative assignment is an initiative definition assigned to a specific scope.
Initiative assignments reduce the need to make several initiative definitions for each scope.
Scopes can range from a management group to a resource group. You can assign initiatives in the same
way that you assign policies.
You can read more about policy definition and structure at Azure Policy definition structure58.

57 https://azure.microsoft.com/services/azure-policy/
58 https://docs.microsoft.com/azure/governance/policy/concepts/definition-structure

Explore resource locks


Locks help you prevent accidental deletion or modification of your Azure resources. You can manage locks from within the Azure portal, where the two lock levels are called Delete and Read-only.
Go to the Settings section on the resource's settings blade to review, add, or delete locks for a resource
in the Azure portal.
You might need to lock a subscription, resource group, or resource to prevent users from accidentally
deleting or modifying critical resources.
You can set a lock level to CanNotDelete or ReadOnly:
●● CanNotDelete means that authorized users can read and modify a resource, but they can't delete it.
●● ReadOnly means that authorized users can read a resource, but they can't modify or delete it.
You can read more about Locks on the Lock resources to prevent unexpected changes59 webpage.
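A sketch of creating a lock from a pipeline, so that critical production resources are protected as part of provisioning. Names and the service connection are placeholders:

steps:
# Apply a CanNotDelete lock to the production resource group.
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'   # placeholder
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az lock create \
        --name 'protect-prod-rg' \
        --lock-type CanNotDelete \
        --resource-group 'rg-prod'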

Explore Azure Blueprints


Azure Blueprints enables cloud architects to define a repeatable set of Azure resources that implement
and adhere to an organization's standards, patterns, and requirements.
Azure Blueprints helps development teams build and deploy new environments rapidly with a set of
built-in components that speed up development and delivery.
Furthermore, it's done while staying within organizational compliance requirements.

Azure Blueprints provides a declarative way to orchestrate deployment for various resource templates
and artifacts, including:
●● Role assignments
●● Policy assignments
●● Azure Resource Manager templates
●● Resource groups
To implement Azure Blueprints, complete the following high-level steps:
1. Create a blueprint.
2. Assign the blueprint.
3. Track the blueprint assignments.
With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and
the blueprint assignment (what is deployed) is preserved.

59 https://docs.microsoft.com/azure/azure-resource-manager/resource-group-lock-resources

The blueprints in Azure Blueprints are different from Azure Resource Manager templates.
When Azure Resource Manager templates deploy resources, they have no active relationship with the
deployed resources. They exist in a local environment or source control.
By contrast, with Azure Blueprints, each deployment is tied to an Azure Blueprints package. It means that
the relationship with resources will be maintained, even after deployment. This way, keeping relationships
improves deployment tracking and auditing capabilities.

Usage scenario
Whether governmental, industry, or organizational, adhering to security and compliance requirements can be difficult and time-consuming.
To help you to trace your deployments and audit them for compliance, Azure Blueprints uses artifacts and
tools that speed up your path to certification.
Azure Blueprints is also helpful in Azure DevOps scenarios. Mainly where blueprints are associated with
specific build artifacts and release pipelines, and blueprints can be tracked rigorously.
You can learn more about Azure Blueprints at Azure Blueprints60.

Understand Microsoft Defender for Identity


Microsoft Defender for Identity (formerly Azure Advanced Threat Protection, also known as Azure ATP)
is a cloud-based security solution that identifies and detects:
●● Advanced threats.
●● Compromised identities.
●● Malicious insider actions directed at your organization.
Microsoft Defender can detect known malicious attacks and techniques and help you investigate security
issues and network vulnerabilities.

Microsoft Defender components


Microsoft Defender consists of the following components:
●● Microsoft Defender portal. Microsoft Defender has its own portal. You can monitor and respond to suspicious activity through the Microsoft Defender portal, which allows you to manage your Microsoft Defender instance and review data received from Microsoft Defender sensors.
You can also use the Microsoft Defender portal to monitor, manage, and investigate threats to your
network environment.
You can sign into the Microsoft Defender portal at https://securitycenter.windows.com61.
Sign in with a user account assigned to an Azure AD security group that has access to the Microsoft
Defender portal.
●● Microsoft Defender sensor. Microsoft Defender sensors are installed directly on your domain control-
lers. The sensors monitor domain controller traffic without requiring a dedicated server or port
mirroring configurations.

60 https://azure.microsoft.com/services/blueprints/
61 https://securitycenter.windows.com/

●● Microsoft Defender cloud service. The Microsoft Defender cloud service runs on the Azure infrastruc-
ture and is deployed in the United States, Europe, and Asia. The Microsoft Defender cloud service is
connected to the Microsoft Intelligent Security Graph.

Cryptocurrency mining and other advanced attacks


Azure Defender for container registries can be enabled at the subscription level.
Once it's enabled, Microsoft Defender will scan:
●● Images pushed to the registry.
●● Images imported into the registry.
●● Any images pulled within the last 30 days.
There's a per-image charge for this feature.
When issues are found, a notification appears in the Microsoft Defender dashboard.
There are three triggers for image scans:
●● On push (a new image is pushed to the registry)
●● Recently pulled (any image pulled in the last 30 days)
●● On import (when an image is imported from other locations like Docker Hub)

Purchasing Microsoft Defender


Microsoft Defender is available as part of the Microsoft Enterprise Mobility + Security E5 offering and a
standalone license.

You can acquire a license directly from the Enterprise Mobility + Security pricing options62 page or the
Cloud Solution Provider (CSP) licensing model.
Note: Microsoft Defender isn't available for purchase via the Azure portal. For more information about
Microsoft Defender, review the Azure Defender | Microsoft Azure63 webpage.

Summary
This module described security monitoring and governance with Microsoft Defender for Cloud and its
usage scenarios, Azure Policies, Microsoft Defender for Identity, and security practices related to the
tools.
You learned how to describe the benefits and usage of:
●● Configure Microsoft Defender for Cloud.
●● Understand Azure policies.
●● Describe initiatives, resource locks, and Azure Blueprints.
●● Work with Microsoft Defender for Identity.

Learn more
●● What is Microsoft Defender for Identity? | Microsoft Docs64.
●● Overview of Azure Policy - Azure Policy | Microsoft Docs65.
●● Overview of Azure Blueprints - Azure Blueprints | Microsoft Docs66.

62 https://www.microsoft.com/cloud-platform/enterprise-mobility-security-pricing
63 https://azure.microsoft.com/services/azure-defender/
64 https://docs.microsoft.com/defender-for-identity/what-is
65 https://docs.microsoft.com/azure/governance/policy/overview
66 https://docs.microsoft.com/azure/governance/blueprints/overview

Labs
Lab 15: Implement security and compliance in
Azure DevOps Pipelines
Lab overview
In this lab, we will create a new Azure DevOps project, populate the project repository with sample
application code, and create a build pipeline. Next, we will install WhiteSource Bolt from the Azure DevOps
Marketplace to make it available as a build task, activate it, add it to the build pipeline, use it to scan the
project code for security vulnerabilities and licensing compliance issues, and finally view the resulting
report.

Objectives
After you complete this lab, you will be able to:
●● Create a Build pipeline
●● Install WhiteSource Bolt from the Azure DevOps marketplace and activate it
●● Add WhiteSource Bolt as a build task in a build pipeline
●● Run build pipeline and view WhiteSource security and compliance report

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions67

Lab 16: Managing technical debt with SonarQube and Azure DevOps
Lab overview
In the context of Azure DevOps, the term technical debt represents suboptimal means of reaching tactical goals, which negatively affect the ability to reach strategic objectives in the area of software development and deployment.
failures, time-consuming to change, and difficult to validate. Without proper oversight and management,
technical debt can accumulate over time and significantly impact the overall quality of the software and
the productivity of development teams in the longer term.

67 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

SonarQube68 is an open source platform for continuous inspection of code quality that facilitates
automatic reviews with static analysis of code to improve its quality by detecting bugs, code smells, and
security vulnerabilities.
In this lab, you will learn how to setup SonarQube on Azure and integrate it with Azure DevOps.

Objectives
After you complete this lab, you will be able to:
●● Provision SonarQube server as an Azure Container Instance69 from the SonarQube Docker image
●● Setup a SonarQube project
●● Provision an Azure DevOps Project and configure CI pipeline to integrate with SonarQube
●● Analyze SonarQube reports

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions70

68 https://www.sonarqube.org/
69 https://docs.microsoft.com/en-in/azure/container-instances/
70 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which two of the following elements does Secure DevOps combine?
†† DevOps, Security.
†† SCA, OSS.
†† Development, Operations.

Multiple choice
Which of the following choices is the term that broadly defines what security means in Secure DevOps?
†† Access Control.
†† Securing the pipeline.
†† Perimeter protection.

Multiple choice
Which of the following descriptions best describes the term software composition analysis?
†† Assessment of production hosting infrastructure just before deployment.
†† Analyzing open-source software after it has been deployed to production to identify security vulnera-
bilities.
†† Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide
validation that the software meets a defined criterion to use in your pipeline.

Multiple choice
Which of the following choices is a monitoring service that can provide threat protection and security
recommendations across all your services in Azure and on-premises?
†† Azure Policy.
†† Microsoft Defender.
†† Azure Key Vault.

Multiple choice
Which of the following choices can you use to create, assign and manage policies?
†† Azure Machine Learning.
†† Microsoft Defender.
†† Azure Policy.

Multiple choice
Which of the following choices is a tool to prevent accidental deletion of Azure resources?
†† Locks.
†† Policy.
†† Key Vault.

Multiple choice
Which of the following license types is considered viral in nature?
†† Downstream.
†† Attribution.
†† Copyleft.

Multiple choice
Which of the following choices best describes open-source software?
†† A type of software where code users can review, modify and distribute the software.
†† It's a type of software where code users can use anywhere without license restrictions or pay for it.
†† A type of software where the license describes usage only for non-profit organizations.

Multiple choice
Which of the following choices isn't an issue often associated with the use of open-source libraries?
†† Bugs.
†† Code property.
†† Security Vulnerabilities.

Multiple choice
Which of the following choices describe OWASP ZAP?
†† Security Testing Tool.
†† Code Quality Tool.
†† A non-profit foundation.

Multiple choice
Which of the following choices isn't a Secure Coding Practice guideline that OWASP regularly publishes?
†† Authentication and Password Management.
†† Code Smells.
†† Error Handling and Logging.

Multiple choice
Which of the following choices is a static analysis tool that scans binary files?
†† Azure Artifacts.
†† SonarCloud.
†† BinSkim.

Multiple choice
Which of the following steps represents the correct sequence of OWASP ZAP execution in a pipeline?
†† Pull OWASP Zap Weekly, Start Container, Run Baseline, Report Results and Create Bugs.
†† Start Container, Report Results, Run Baseline, Pull OWASP ZAP.
†† Start Container, Pull OWASP ZAP Weekly, Run Baseline, Spider Site, Report Results, Create Bugs.

Multiple choice
Which of the following tools helps discover vulnerabilities by letting you query code as though it were data?
†† SonarCloud.
†† OWASP ZAP.
†† CodeQL.

Multiple choice
Which of the following tools can be used to assess open-source security and licensing compliance?
†† SonarCloud.
†† Mend Bolt.
†† OWASP.

Multiple choice
In which of the following situations does GitHub Dependabot detect vulnerable dependencies and send Dependabot alerts about them?
†† A new vulnerability is added to the GitHub Advisory database.
†† A new code is committed to the repository.
†† A deployment succeeds.

Multiple choice
Which of the following choices is a characteristic in source code possibly indicating a deeper problem?
†† Memory Leak.
†† Bug.
†† Code smell.

Multiple choice
Which of the following choices represents what Mend Bolt is used for?
†† Penetration Testing.
†† Finding and fixing open-source vulnerabilities.
†† Scanning for code quality issues.

Multiple choice
Which of the following tools can you use to do code quality checks?
†† Veracode.
†† SonarCloud.
†† Microsoft Defender for Cloud.

Multiple choice
Which of the following choices is a type of attack that makes it possible to execute malicious SQL state-
ments?
†† Man-in-the-Middle (MitM).
†† Denial-of-Service (DOS).
†† SQL Injection.

Multiple choice
Which of the following choices is a principle or process of which Threat Modeling is a core element?
†† Microsoft Solutions Framework (MSF).
†† Microsoft Security Development Lifecycle (SDL).
†† Application Lifecycle Management (ALM).

Multiple choice
Which of the following choices isn't one of the five major threat modeling steps?
†† Don't deploy with less than 90% of code quality.
†† Defining security requirements.
†† Mitigating threats.

Answers
Multiple choice
Which two of the following elements does Secure DevOps combine?
■■ DevOps, Security.
†† SCA, OSS.
†† Development, Operations.
Explanation
Secure DevOps brings together the notions of DevOps and Security. DevOps is about working faster.
Security is about emphasizing thoroughness, typically done at the end of the cycle, potentially generating
unplanned work right at the end of the pipeline.
Multiple choice
Which of the following choices is the term that broadly defines what security means in Secure DevOps?
†† Access Control.
■■ Securing the pipeline.
†† Perimeter protection.
Explanation
With Secure DevOps, security is more about securing the pipeline, determining where you can add protec-
tion to the elements that plug into your build and release pipeline.
Multiple choice
Which of the following descriptions best describes the term software composition analysis?
†† Assessment of production hosting infrastructure just before deployment.
†† Analyzing open-source software after it has been deployed to production to identify security vulnera-
bilities.
■■ Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide
validation that the software meets a defined criterion to use in your pipeline.
Explanation
Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide validation
that the software meets a defined criterion to use in your pipeline.
Multiple choice
Which of the following choices is a monitoring service that can provide threat protection and security
recommendations across all your services in Azure and on-premises?
†† Azure Policy.
■■ Microsoft Defender.
†† Azure Key Vault.
Explanation
Microsoft Defender is a monitoring service that provides threat protection across all your services in Azure
and on-premises.

Multiple choice
Which of the following choices can you use to create, assign and manage policies?
†† Azure Machine Learning.
†† Microsoft Defender.
■■ Azure Policy.
Explanation
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce
different rules and effects on your resources, ensuring they stay compliant with your corporate standards
and service-level agreements (SLAs).
Multiple choice
Which of the following choices is a tool to prevent accidental deletion of Azure resources?
■■ Locks.
†† Policy.
†† Key Vault.
Explanation
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage these
locks from within the Azure portal.
Multiple choice
Which of the following license types is considered viral in nature?
†† Downstream.
†† Attribution.
■■ Copyleft.
Explanation
The Copyleft license is considered viral in nature, as the use of the source code and its components and
distribution of the complete software implies that all source code using it should follow the same license
form.
Multiple choice
Which of the following choices best describes open-source software?
■■ A type of software where code users can review, modify and distribute the software.
†† It's a type of software where code users can use anywhere without license restrictions or pay for it.
†† A type of software where the license describes usage only for non-profit organizations.
Explanation
A type of software where code users can review, modify and distribute the software. The open-source license
type can limit the actions such as sale provisions that can be taken.

Multiple choice
Which of the following choices isn't an issue often associated with the use of open-source libraries?
†† Bugs.
■■ Code property.
†† Security Vulnerabilities.
Explanation
Bugs, security vulnerabilities, and licensing issues are often associated with the use of open-source libraries.
Multiple choice
Which of the following choices describe OWASP ZAP?
■■ Security Testing Tool.
†† Code Quality Tool.
†† A non-profit foundation.
Explanation
OWASP Zed Attack Proxy Scan. Also known as OWASP ZAP Scan is an open-source web application security
scanner that is intended for users with all levels of security knowledge.
Multiple choice
Which of the following choices isn't a Secure Coding Practice guideline that OWASP regularly publishes?
†† Authentication and Password Management.
■■ Code Smells.
†† Error Handling and Logging.
Explanation
OWASP regularly publishes a set of Secure Coding Practices. Their guidelines currently cover advice in the
following areas like Authentication and Password Management, Error Handling and Logging, System
Configuration and others.
Multiple choice
Which of the following choices is a static analysis tool that scans binary files?
†† Azure Artifacts.
†† SonarCloud.
■■ BinSkim.
Explanation
BinSkim is a static analysis tool that scans binary files. BinSkim replaces an earlier Microsoft tool called
BinScope.

Multiple choice
Which of the following steps represents the correct sequence of OWASP ZAP execution in a pipeline?
■■ Pull OWASP Zap Weekly, Start Container, Run Baseline, Report Results and Create Bugs.
†† Start Container, Report Results, Run Baseline, Pull OWASP ZAP.
†† Start Container, Pull OWASP ZAP Weekly, Run Baseline, Spider Site, Report Results, Create Bugs.
Explanation
The correct sequence is: Pull OWASP Zap Weekly, Start Container, Run Baseline, Report Results and Create
Bugs.
Multiple choice
Which of the following tools helps discover vulnerabilities by letting you query code as though it were
data?
†† SonarCloud.
†† OWASP ZAP.
■■ CodeQL.
Explanation
Developers use CodeQL to automate security checks. CodeQL treats code like data that can be queried.
Multiple choice
Which of the following tools can be used to assess open-source security and licensing compliance?
†† SonarCloud.
■■ Mend Bolt.
†† OWASP.
Explanation
Mend Bolt can be used to assess open-source security and licensing compliance.
Multiple choice
In which of the following situations does GitHub Dependabot detect vulnerable dependencies and send Dependabot alerts about them?
■■ A new vulnerability is added to the GitHub Advisory database.
†† A new code is committed to the repository.
†† A deployment succeeds.
Explanation
GitHub Dependabot detects vulnerable dependencies and sends Dependabot alerts about them in several
situations: A new vulnerability is added to the GitHub Advisory database, New vulnerability data from Mend
is processed, and the Dependency graph for a repository changes.
Multiple choice
Which of the following choices is a characteristic in source code possibly indicating a deeper problem?
†† Memory Leak.
†† Bug.
■■ Code smell.
Explanation
Code smells are characteristics in your code that could be a problem. Code smells hint at more profound
problems in the design or implementation of the code.
Multiple choice
Which of the following choices represents what Mend Bolt is used for?
†† Penetration Testing.
■■ Finding and fixing open-source vulnerabilities.
†† Scanning for code quality issues.
Explanation
Mend Bolt is used for finding and fixing open-source vulnerabilities.
Multiple choice
Which of the following tools can you use to do code quality checks?
†† Veracode.
■■ SonarCloud.
†† Microsoft Defender for Cloud.
Explanation
SonarCloud is the cloud-based version of the original SonarQube and would be best for working with code
in Azure Repos.
Multiple choice
Which of the following choices is a type of attack that makes it possible to execute malicious SQL
statements?
†† Man-in-the-Middle (MitM).
†† Denial-of-Service (DOS).
■■ SQL Injection.
Explanation
SQL Injection is a type of attack that makes it possible to execute malicious SQL statements.
Multiple choice
Which of the following choices is a principle or process of which Threat Modeling is a core element?
†† Microsoft Solutions Framework (MSF).
■■ Microsoft Security Development Lifecycle (SDL).
†† Application Lifecycle Management (ALM).
Explanation
Threat Modeling is a core element of the Microsoft Security Development Lifecycle (SDL).
Multiple choice
Which of the following choices isn't one of the five major threat modeling steps?
■■ Don't deploy with less than 90% of code quality.
†† Defining security requirements.
†† Mitigating threats.
Explanation
There are five major threat modeling steps: defining security requirements, creating an application diagram, identifying threats, mitigating threats, and validating that threats have been mitigated.
Module 8 Design and implement a dependency management strategy

Explore package dependencies


Introduction
In this module, we'll talk about managing dependencies in software development. We're going to cover
what dependencies are and how to identify them in your codebase.
Then you'll learn how to package these dependencies and manage the packages in package feeds.
Finally, you're going to learn about versioning strategies.
We'll look at dependency management as a concept in software and why it's needed.
We're going to look at dependency management strategies and how you can identify components in your source code and turn them into dependencies.
This module explores dependency management concepts and helps to identify project dependencies.
You'll learn how to decompose your system, identify dependencies, and package componentization.

Learning objectives
After completing this module, students and professionals can:
●● Define dependency management strategy.
●● Identify dependencies.
●● Describe elements and componentization of dependency management.
●● Scan your codebase for dependencies.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

What is dependency management?


Before we can understand dependency management, we'll need first to get introduced to the concepts of
dependencies.

Dependencies in software
Modern software development involves complex projects and solutions.
Projects have dependencies on other projects, and solutions aren't single pieces of software.
The solutions and software built consist of multiple parts and components and are often reused.
As a codebase expands and evolves, it needs to be componentized to stay maintainable.
A team that is writing software won't write every piece of code by itself but will use existing code written by other teams or companies, and open-source code that is readily available.
Each component can have its maintainers, speed of change, and distribution, giving both the creators and
consumers of the components autonomy.
A software engineer will need to identify the components that make up parts of the solution and decide
whether to write the implementation or include an existing component.
The latter approach introduces a dependency on other components.

Why is dependency management needed?


Software dependencies that are introduced in a project and solution must be appropriately declared and
resolved.
You need to manage the overall composition of the project code and the included dependencies.
Without proper dependency management, it will be hard to keep the components in the solution
controlled.
Management of dependencies allows a software engineer and team to be more efficient working with
dependencies.
With all dependencies being managed, it's also possible to control the consumed dependencies, enabling governance and security scanning for packages with known vulnerabilities or exploits.

Describe elements of a dependency management strategy

There are many aspects of a dependency management strategy.
●● Standardization Managing dependencies benefits from a standardized way of declaring and resolving them in your codebase.
Standardization allows a repeatable, predictable process and usage that can be automated as well.

●● Package formats and sources The distribution of dependencies can be performed by a packaging
method suited for your solution's dependency type.
Each dependency is packaged using its usable format and stored in a centralized source.
Your dependency management strategy should include the selection of package formats and corresponding sources where to store and retrieve packages.

●● Versioning Just like your own code and components, the dependencies in your solution usually
evolve.
While your codebase grows and changes, you need to consider the changes in your dependencies as
well.
It requires a versioning mechanism for the dependencies to be selective of the version of a dependency you want to use.

Identify dependencies
It starts with identifying the dependencies in your codebase and deciding which dependencies will be
formalized.
Your software project and its solution probably already use dependencies.
It's common to use libraries and frameworks that are not written by yourself.
Additionally, your existing codebase might have internal dependencies that aren't treated as such.
For example, take a piece of code that implements a particular business domain model.
It might be included as source code in your project and consumed in other projects and teams.
It would help if you investigated your codebase to identify pieces of code that can be considered dependencies and treat them as such.
It requires changes to how you organize your code and build the solution. Doing so will give you clearly identified components.

Understand source and package componentization

Current development practices already have the notion of componentization.
There are two ways of componentization commonly used.
1. Source componentization The first way of componentization is focused on source code. It refers to
splitting the source code in the codebase into separate parts and organizing it around the identified
components.
It works if the source code isn't shared outside of the project. Once the components need to be
shared, it requires distributing the source code or the produced binary artifacts created from it.

2. Package componentization The second way uses packages. Distributing software components is
performed utilizing packages as a formal way of wrapping and handling the components.
A shift to packages adds characteristics needed for proper dependency management, like tracking
and versioning packages in your solution.
See also Collaborate more and build faster with packages1.

1 https://docs.microsoft.com/azure/devops/artifacts/collaborate-with-packages
Decompose your system


You'll need to get better insights into your code and solution before you can change your codebase into
separate components to prepare for finding dependencies that can be taken out of your system.
It allows you to decompose your system into individual components and dependencies. The goal is to
reduce the size of your codebase and system, making it more efficient to build and manageable in the
end.
You achieve it by removing specific components of your solution. These are going to be centralized,
reused, and maintained independently.
You'll remove those components and externalize them from your solution at the expense of introducing
dependencies on other components.
This process of finding and externalizing components is effectively creating dependencies.
It may require some refactoring, such as creating new solution artifacts for code organization, or code changes that allow the unchanged code to take a dependency on an (external) component.
You might need to introduce some code design patterns to isolate and include the componentized code.
Examples of patterns are abstraction by interfaces, dependency injection, and inversion of control.
Decomposing could also mean replacing your implementation of reusable code with an available open-
source or commercial component.

Scan your codebase for dependencies


There are several ways to identify the dependencies in your codebase.
These include scanning your code for patterns and reuse and analyzing how the solution is composed of
individual modules and components.
●● Duplicate code When certain pieces of code appear in several places, it's a good indication that this
code can be reused. Keep in mind that code duplication isn't necessarily a bad practice. However, if
the code can be made available properly, it does have benefits over copying code and must manage
that. The first step to isolate these pieces of duplicate code is to centralize them in the codebase and
componentize them in the appropriate way for the type of code.
●● High cohesion and low coupling A second approach is to find code that might define components
in your solution. You'll look for code elements that have high cohesion and low coupling with other
parts of code. It could be a specific object model with business logic or code related to its responsibility, such as a set of helper or utility code, or perhaps a basis for other code to be built upon.
●● Individual lifecycle Related to high cohesion, you can look for parts of the code that have a similar
lifecycle and can be deployed and released individually. If such code can be maintained by a team
separate from the codebase that it's currently in, it's an indication that it could be a component
outside of the solution.
●● Stable parts Some parts of your codebase might have a slow rate of change. That code is stable and
isn't altered often. You can check your code repository to find the code with a low change frequency.
●● Independent code and components Whenever code and components are independent and unrelat-
ed to other parts of the system, they can be isolated to a separate component and dependency.
You can use different kinds of tools to assist you in scanning and examining your codebase.
These range from tools that scan for duplicate code and draw solution dependency graphs to tools that
compute metrics for coupling and cohesion.
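As an illustration, a copy/paste detector can surface duplicated code that's a candidate for componentization. Here's a minimal sketch using the open-source jscpd tool; the src path is a placeholder, and Node.js is assumed to be installed:
# Scan the src folder for duplicated code fragments and print a console report.
npx jscpd ./src --min-tokens 50 --reporters console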
Summary
This module explored dependency management concepts and helped to identify project dependencies.
You learned how to decompose your system, identify dependencies, and package componentization.
You learned how to describe the benefits and usage of:
●● Define dependency management strategy.
●● Identify dependencies.
●● Describe elements and componentization of dependency management.
●● Scan your codebase for dependencies.

Learn more
●● Azure Artifacts overview - Azure Artifacts | Microsoft Docs2.
●● NuGet documentation | Microsoft Docs3.
●● npm Docs (npmjs.com)4.
●● Maven – Welcome to Apache Maven5.

2 https://docs.microsoft.com/azure/devops/artifacts/start-using-azure-artifacts
3 https://docs.microsoft.com/nuget/
4 https://docs.npmjs.com/
5 https://maven.apache.org/
Understand package management


Introduction
This module describes package feeds, common public package sources, and how to create and publish
packages.

Learning objectives
After completing this module, students and professionals can:
●● Implement package management.
●● Manage package feed.
●● Consume and create packages.
●● Publish packages.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps6.
●● If you already have your organization created, use the Azure DevOps Demo Generator [https://
azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called “Parts Unlimit-
ed” using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project -
Azure DevOps7.

Explore packages
Packages are used to define the components you rely on and depend upon in your software solution.
They provide a way to store those components in a well-defined format with metadata to describe them.

What is a package?
A package is a formalized way of creating a distributable unit of software artifacts that can be consumed
from another software solution.
The package describes the content it contains and usually provides extra metadata.
This additional information uniquely identifies the individual packages and makes them self-descriptive.
It helps to better store packages in centralized locations and consume the contents of the package
predictably.

6 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
7 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
Also, it enables tooling to manage the packages in the software solution.

Types of packages
Packages can be used for different kinds of components.
The type of components you want to use in your codebase differs for the different parts and layers of the solution you're creating.
They range from frontend components, such as JavaScript code files, to backend components like .NET assemblies or Java components, complete self-contained solutions, or reusable files in general.
Over the past years, the packaging formats have changed and evolved. Now there are a couple of de
facto standard formats for packages.
●● NuGet packages (pronounced “new get”) are a standard used for .NET code artifacts. It includes .NET
assemblies and related files, tooling, and sometimes only metadata. NuGet defines the way packages
are created, stored, and consumed. A NuGet package is essentially a compressed folder structure with
files in ZIP format and has the .nupkg extension. See also An introduction to NuGet8.
●● NPM An NPM package is used for JavaScript development. It originates from node.js development,
where it's the default packaging format. An NPM package is a file or folder containing JavaScript files
and a package.json file describing the package's metadata. For node.js, the package usually
includes one or more modules that can be loaded once the package is consumed. See also About
packages and modules9.
●● Maven is used for Java-based projects. Each package has a Project Object Model file describing the
project's metadata and is the basic unit for defining a package and working with it.
●● PyPi The Python Package Index, abbreviated as PyPI and known as the Cheese Shop, is the official
third-party software repository for Python.
●● Docker packages are called images and contain complete and self-contained deployments of components. A Docker image commonly represents a software component that can be hosted and executed by itself without any dependencies on other images. Docker images are layered and might be dependent on other images as their basis. Such images are referred to as base images.
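To make this concrete, here's a minimal sketch of producing a NuGet package from a .NET project with the dotnet CLI; the project name MyLibrary.csproj and the output folder are assumptions for illustration:
# Create a .nupkg package file from the project in Release configuration.
dotnet pack MyLibrary.csproj -c Release -o ./packages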

Understand package feeds


Packages should be stored in a centralized place for distribution and consumption to take dependencies
on the components it contains.
The centralized storage for packages is commonly called a package feed. There are other names in
use, such as repository or registry.
We'll refer to all of these as package feeds unless it's necessary to use the specific name for clarity. Each
package type has its type of feed.
Put another way, one feed typically contains one type of package. There are NuGet feeds, NPM feeds, Maven repositories, PyPI feeds, and Docker registries.
Package feeds offer versioned storage of packages. A particular package can exist in multiple versions in
the feed, catering for consumption of a specific version.

8 https://docs.microsoft.com/nuget/what-is-nuget
9 https://docs.npmjs.com/about-packages-and-modules
Private and public feeds


The package feeds are centralized and available for many different consumers.
Depending on the package, purpose, and origin, it might be generally available or restricted to a select audience.
Typically, open-source projects for applications, libraries, and frameworks are shared with everyone and
publically available.
The feeds can be exposed in public or private to distinguish in visibility. Anyone can consume public
feeds.
There might be reasons why you don't want your packages to be available publicly.
It could be because it contains intellectual property or doesn't make sense to share with other software
developers.
Components developed for internal use might be available only to the project, team, or company that
developed it.
In such cases, you can still use packages for dependency management and choose to store the package
in a private package feed.
Private feeds can only be consumed by those who are allowed access.

Explore package feed managers


Each of the package types has a corresponding manager that takes care of one or more of the following
aspects of package management:
●● Installation and removal of local packages
●● Pushing packages to a remote package feed
●● Consuming packages from a remote package feed
●● Searching feeds for packages
The package manager has cross-platform command-line interface (CLI) tools to manage the local packages and feeds that host the packages. This CLI tooling is part of a local install on a development machine.

Choosing tools
The command-line nature of the tooling offers the ability to include it in scripts to automate package
management. Ideally, one should use the tooling in build and release pipelines for creating components and for publishing and consuming packages from feeds.
Additionally, developer tooling can have integrated support for working with package managers, providing a user interface for the raw tooling. Examples of such tooling are Visual Studio 2017, Visual Studio Code, and Eclipse.
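For example, installing a dependency looks similar across the CLI tools of the different package managers; the package names and versions below are only illustrative:
# NuGet: add a package reference to a .NET project.
dotnet add package Newtonsoft.Json --version 13.0.3
# npm: install a JavaScript package into the local node_modules folder.
npm install express@4.18.2
# pip: install a Python package from PyPI.
pip install requests==2.31.0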

Explore common public package sources


Each of the various package types has a standard source that is commonly used for public packages.
It's a go-to place for developers to find and consume publically available components as software
dependencies. These sources are package feeds.
Public
In general, you'll find that publically available package sources are free to use.
Sometimes they have a licensing or payment model for consuming individual packages or the feed itself.
These public sources can also be used to store packages you've created as part of your project.
It doesn't have to be open-source, although it is in most cases.
Public and free package sources that offer feeds at no expense will usually require that you make the
packages you store publically available as well.

Package type    Package source           URL

NuGet           NuGet Gallery            https://nuget.org
NPM             NPMjs                    https://npmjs.org
Maven           Maven                    https://search.maven.org
Docker          Docker Hub               https://hub.docker.com
Python          Python Package Index     https://pypi.org
The table above doesn't contain an extensive list of all public sources available.
There are other public package sources for each of the types.

Explore self-hosted and SaaS-based package sources

The following private package sources will give you a starting point for finding the most relevant feeds.

Private
Private feeds can be used in cases where packages should be available to a select audience.
The main difference between public and private feeds is the need for authentication.
Public feeds can be anonymously accessible and optionally authenticated.
Private feeds can be accessed only when authenticated.
There are two options for private feeds:
●● Self-hosting Some of the package managers are also able to host a feed. Using on-premises or
private cloud resources, one can host the required solution to offer a private feed.
●● SaaS services A variety of third-party vendors and cloud providers offer software-as-a-service feeds
that can be kept private. It typically requires a consumption fee or a cloud subscription.
The following table contains a non-exhaustive list of self-hosting options and SaaS offerings to host package feeds privately for each type covered.

Package type    Self-hosted private feed       SaaS private feed

NuGet           NuGet server                   Azure Artifacts, MyGet
NPM             Sinopia, CNPMjs, Verdaccio     NPMjs (https://www.npmjs.com/), MyGet, Azure Artifacts
Maven           Nexus, Artifactory, Archiva    Azure Artifacts, Bintray, JitPack
Docker          Portus, Quay, Harbor           Docker Hub, Azure Container Registry, Amazon Elastic Container Registry
Python          PyPI Server                    Gemfury

Consume packages
Each software project that consumes packages to include the required dependencies will use a package manager and one or more package sources.
The package manager will download the individual packages from the sources and install them locally on
the development machine or build server.
The developer flow will follow this general pattern:
1. Identify a required dependency in your codebase.
2. Find a component that satisfies the requirements for the project.
3. Search the package sources for a package offering a correct version of the component.
4. Install the package into the codebase and development machine.
5. Create the software implementation that uses the new components from the package.
The package manager tooling will help search and install the components in the packages.
How it's achieved varies for the different package types. Refer to the documentation of the package
manager for instructions on consuming packages from feeds.
To get started, you'll need to specify the package source to be used. Package managers will have a
default source defined that refers to the standard package feed for its type.
Alternative feeds will need to be configured to allow consuming the packages they offer.
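As a sketch, registering an extra NuGet source with the nuget CLI could look like the following; the source name and the organization and feed placeholders in the URL are assumptions:
# Register an Azure Artifacts feed as an additional NuGet package source.
nuget sources add -Name "MyPrivateFeed" -Source "https://pkgs.dev.azure.com/{organization}/_packaging/{feedname}/nuget/v3/index.json"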

Upstream sources
Part of the package management involves keeping track of the various sources.
It's possible to refer to multiple sources from a single software solution. However, when combining
private and public sources, the order of resolution of the sources becomes essential.
One way to specify multiple packages sources is by choosing a primary source and selecting an upstream
source.
The package manager will evaluate the primary source first and switch to the upstream source when the
package isn't found there.
The upstream source might be one of the official public sources or a private source. The upstream source
could refer to another upstream source itself, creating a chain of sources.
A typical scenario is to use a private package source referring to a public upstream source for one of the
official feeds. It effectively enhances the packages in the upstream source with packages from the private
feed, avoiding publishing private packages in a public feed.
A source that has an upstream source defined may download and cache the packages that were requested if it doesn't contain those packages itself.
The source will include these downloaded packages and start to act as a cache for the upstream source.
It also offers the ability to keep track of any packages from the external upstream source.
An upstream source can be a way to avoid giving developers and build machines direct access to external sources.
The private feed uses the upstream source as a proxy to the otherwise external source. It will be your feed
manager and private source that have the communication to the outside. Only privileged roles can add
upstream sources to a private feed.
See also Upstream sources10.

Packages graph
A feed can have one or more upstream sources, which might be internal or external. Each of these can
have additional upstream sources, creating a package graph of the source.
Such a graph can offer many possibilities for layering and indirection of origins of packages. It might fit
well with multiple teams taking care of packages for frameworks and other base libraries.
The downside is that package graphs can become complex when not correctly understood or designed.
It's essential to know how you can create a proper package graph.
See also Constructing a complete package graph11.

Introduction to Azure Artifacts


Previously you learned about packaging dependencies and the various packaging formats, feeds, sources,
and package managers.
Now, you'll know more about package management, creating a feed, and publishing packages.
During this module, NuGet and Azure Artifacts are used as an example of a package format and a
particular package feed and source type.
Microsoft Azure DevOps provides various features for application lifecycle management, including:
●● Work item tracking.
●● Source code repositories.
●● Build and release pipelines.
●● Artifact management.
The artifact management is called Azure Artifacts and was previously known as Package manage-
ment. It offers public and private feeds for software packages of various types.

Types of packages supported


Azure Artifacts currently supports feeds that can store five different package types:
●● NuGet packages
●● NPM packages
●● Maven
●● Universal packages
●● Python

10 https://docs.microsoft.com/azure/devops/artifacts/concepts/upstream-sources
11 https://docs.microsoft.com/azure/devops/artifacts/concepts/package-graph
Previously, we discussed the package types for NuGet, NPM, Maven, and Python. Universal packages are
an Azure Artifacts-specific package type. In essence, it's a versioned package containing multiple files and
folders.
A single Azure Artifacts feed can contain any combination of such packages. You can connect to the feed
using the package managers and the corresponding tooling for the package types. For Maven packages,
this can also be the Gradle build tool.

Selecting package sources


When creating your solution, you'll decide which packages you want to consume to offer the dependent
components.
The next step is to determine the sources for these packages. The main choice is selecting public and
private feeds or a combination of it.
Publically available packages can usually be found in the public package sources, such as nuget.org, npmjs.com, and pypi.org.
Your solution can select these sources if it only consumes packages available there.
Whenever your solution also has private packages that can't be available on public sources, you'll need to
use a private feed.
In the previous module, you learned that public package sources could be upstream sources to private
feeds.
Azure Artifacts allows its feeds to specify one or more upstream sources, public, or other private feeds.

Publish packages
As software is developed and components are written, you'll most likely produce components as dependencies packaged for reuse.
Discussed previously was guidance to find components that can be isolated into dependencies.
These components need to be managed and packaged. After that, they can be published to a feed,
allowing others to consume the packages and use the components they contain.

Creating a feed
The first step is to create a feed where the packages can be stored. In Azure Artifacts, you can create
multiple feeds, which are always private.
During creation, you can specify the name, visibility and prepopulate the default public upstream sources
for NuGet, NPM, and Python packages.

What are feeds in Azure Artifacts?


Most package management systems provide endpoints where you can request packages to install in your
applications. Those endpoints are called feeds.
In Azure Artifacts, you can have multiple feeds in your projects, and you can make them available to only
users authorized in your project or for your entire organization.
Each feed can contain any packages, even mixed types. However, it's recommended that you create one feed per type you want to support in your organization; this way, it's clear what the feed contains.
Each feed can have one or more upstream sources and can manage its own security.

Controlling access
The Azure Artifacts feed you created is always private and not available publically.
You get access to it by authenticating to Azure Artifacts with an account that has access to Azure DevOps and a team project.
By default, a feed will be available to all registered users in Azure DevOps.
You can select it to be visible only to the team project where the feed is created.
Whichever option is chosen, you can change the permissions for a feed from the settings dialog.

Push packages to a feed


Once you've authenticated to Azure DevOps, you can pull and push packages to the package feed,
provided you have permission to do so.
Pushing packages is done with the tooling for the package manager. Each of the package managers and
tooling has a different syntax for pushing.
To manually push a NuGet package, you would use the NuGet.exe command-line tool. For a package
called MyDemoPackage, the command would resemble:
nuget.exe push -Source {NuGet package source URL} -ApiKey YourKey YourPackage\YourPackage.nupkg
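An equivalent push with the cross-platform dotnet CLI might look like the following sketch. When pushing to Azure Artifacts with a credential provider handling authentication, the API key value itself isn't validated, so a placeholder string is commonly used:
# Push the package using the dotnet CLI instead of nuget.exe.
dotnet nuget push YourPackage\YourPackage.nupkg --source {NuGet package source URL} --api-key AzureDevOps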

Updating packages
Packages might need to be updated during their lifetime. Technically, updating a package is done by pushing a new version of the package to the feed.
The package feed manager makes sure the updated package is properly stored alongside the existing packages in the feed.
Note: Updating packages requires a versioning strategy.

Create a package feed


Important: Open an existing Azure DevOps project.

Steps to create a package feed


1. Go to dev.azure.com and open your team project.
   1. Open Azure Artifacts.
   2. Click on the button New feed.
   3. Enter a name: PartsUnlimited.
   4. Click Create. The feed is now created.
2. Go to Settings and click Feed Settings. You are in the tab Feed Details.
3. Set a description: Packages for the PartsUnlimited Web application.
   1. Click Save.
   2. Open the Permissions tab.
4. Open the Views tab.
5. Open the Upstream sources tab. You can manage and add your upstream sources here.
6. Go back to the root of the feed by clicking on PartsUnlimited.
   1. Click Connect to Feed. This dialog shows info about how packages stored in this feed can be consumed from various tools.
You'll learn more about consuming packages in the following demonstration.

Push a package
For this exercise, use the full version of PartsUnlimited12.
Important: Make source code for PartsUnlimited available in your Azure DevOps repo.

Steps to create a NuGet package


1. Go to Visual Studio and open your PartsUnlimited project.
It's an ASP.NET MVC application, where security-related parts have been isolated into a separate .NET Standard project. Isolating gives you the option to create a package for this code and publish it to a feed. .NET Standard has support for creating NuGet packages.
2. Build the project PartsUnlimited.Security.
3. Go to the command prompt and use the command dir bin\debug. Here you see the .nupkg file for PartsUnlimited.Security.
4. Open your team project in dev.azure.com and go to Artifacts.
   1. Click Connect to feed.
   2. Follow the dialog instructions.
      ●● Copy the command for “Add this feed” by clicking the Copy icon.
   3. Switch back to your command line.
      ●● Look at the existing NuGet sources with the command: nuget sources. You see two NuGet sources available now.
      ●● Paste and run the copied instructions.
      ●● Look at the existing NuGet sources again with the command: nuget sources. You see a third NuGet source available.

12 http://microsoft.github.io/PartsUnlimited/
Steps to publish the package


1. Go back to the dialog instructions.
2. Copy the command for “Push a Package” by clicking the Copy icon.
3. Switch back to your command line and paste the copied instructions.
   ●● Change the folder and name my_package.nupkg to bin\debug\PartsUnlimited.Security.1.0.0.nupkg.
   ●● Run the command.
   We have published the package to the feed, and it is pushed successfully.
4. Check if the package is available in Azure Artifacts.
   ●● Close the dialog instructions.
   ●● Refresh the Artifacts page.
   You see the successfully published package.

Summary
This module describes package feeds, common public package sources, and how to create and publish
packages.
You learned how to describe the benefits and usage of:
●● Implement package management.
●● Manage package feed.
●● Consume and create packages.
●● Publish packages.

Learn more
●● Azure Artifacts overview - Azure Artifacts | Microsoft Docs13.
●● What are feeds? - Azure Artifacts | Microsoft Docs14.
●● Get started with NuGet packages - Azure Artifacts | Microsoft Docs15.
●● Use feed views to share your packages - Azure Artifacts | Microsoft Docs16.

13 https://docs.microsoft.com/azure/devops/artifacts/start-using-azure-artifacts
14 https://docs.microsoft.com/azure/devops/artifacts/concepts/feeds
15 https://docs.microsoft.com/azure/devops/artifacts/get-started-nuget
16 https://docs.microsoft.com/azure/devops/artifacts/feeds/views
Migrate, consolidate, and secure artifacts


Introduction
This module details package migration, consolidation, and configuration to secure access to package
feeds and artifact repositories.

Learning objectives
After completing this module, students and professionals can:
●● Identify artifact repositories.
●● Migrate and integrate artifact repositories.
●● Secure package feeds.
●● Understand roles, permissions, and authentication.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Identify existing artifact repositories


An artifact is a deployable component of your application. Azure Pipelines can work with a wide variety
of artifact sources and repositories.
When you're creating a release pipeline, you need to link the required artifact sources. Often, it will
represent the output of a build pipeline from a continuous integration system like Azure Pipelines,
Jenkins, or TeamCity.
The artifacts that you produce might be stored in source control, like Git or Team Foundation version control. But you might also be getting them through package management tools and their repositories.
When you need to create a release, you must specify which version of the artifacts is required. By default,
the release pipeline will choose the latest version of the artifacts. But you might not want that.
For example, you might need to choose a specific branch, a specific build version, or perhaps you need to
specify tags.
Azure Artifacts is one of the services that's part of Azure DevOps. Using it can eliminate the need to
manage file shares or host private package services.
It lets you share code easily by allowing you to store Maven, npm, or NuGet packages together,
cloud-hosted, indexed and matched.
Now, while you can do so, there's no need to store your binaries in Git. You can keep them directly using universal packages. It's also a great way to protect your packages. Azure Artifacts provides universal artifact management for Maven, npm, and NuGet.
Beyond sharing packages, you can easily access all of your artifacts in builds and releases because it integrates naturally with Azure Pipelines and its CI/CD tooling, along with versioning and testing.
Migrate and integrate artifact repositories


While you can continue to work with your existing artifact repositories in their current locations when
using Azure Artifacts, there are advantages to migrating them.

NuGet and other packages


Azure Artifacts provides hosted NuGet feeds as a service.
Using this service, you can often eliminate the dependencies on on-premises resources such as file shares
and locally hosted instances of NuGet.Server.
The feeds can also be consumed by any Continuous Integration system that supports authenticated
NuGet feeds.

Walkthroughs
For details on how to integrate NuGet, npm, Maven, Python, and Universal Feeds, see the following
walkthroughs:
●● Get started with NuGet packages in Azure DevOps Services and TFS17.
●● Use npm to store JavaScript packages in Azure DevOps Services or TFS18.
●● Get started with Maven packages in Azure DevOps Services and TFS19.
●● Get started with Python packages in Azure Artifacts20.
●● Publish and then download a Universal Package21.

Secure access to package feeds


Trusted sources
Package feeds are a trusted source of packages. The offered packages will be consumed by other codebases and used to build software that needs to be secure.
Imagine what would happen if a package feed offered malicious components in its packages.
Each consumer would be affected when installing the packages onto its development machine or build server.
It also happens on any other device that runs the end product, as the malicious components will be executed as part of the code.
Usually, the code runs with high privileges, posing a large security risk if any package can't be trusted and might contain unsafe code.

17 https://docs.microsoft.com/azure/devops/artifacts/get-started-nuget
18 https://docs.microsoft.com/azure/devops/artifacts/get-started-npm
19 https://docs.microsoft.com/azure/devops/artifacts/get-started-maven
20 https://docs.microsoft.com/azure/devops/artifacts/quickstarts/python-packages
21 https://docs.microsoft.com/azure/devops/artifacts/quickstarts/universal-packages
Securing access
Package feeds must be secured for access by authorized accounts, so only verified and trusted packages are stored there.
No one should be able to push packages to a feed without the proper role and permissions.
It prevents others from pushing malicious packages, though it still assumes that the persons who can push packages will only add safe and secure packages.
Especially in the open-source world, this vetting is done by the community. A package source can further guard its feed with the use of security and vulnerability scan tooling.
Additionally, consumers of packages can use similar tooling to do the scans themselves.

Securing availability
Another aspect of security for package feeds is about the public or private availability of the packages.
The feeds of public sources are available for anonymous consumption.
Private feeds have restricted access most of the time.
It applies to the consumption and publishing of packages. Private feeds will allow only users in specific roles or teams access to their packages.
Package feeds need to have secure access for different kinds of reasons.
The access should involve allowing:
●● Restricted access for consumption Whenever a particular audience should only consume a package
feed and its packages, it's required to restrict its access. Only those allowed access will consume the
packages from the feed.
●● Restricted access for publishing Secure access is required to restrict who can publish so feeds and
unauthorized or untrusted persons and accounts can't modify their packages.

Examine roles
Azure Artifacts has four different roles for package feeds. These are incremental in the permissions they
give.
The roles are in incremental order:
●● Reader: Can list and restore (or install) packages from the feed.
●● Collaborator: Can save packages from upstream sources.
●● Contributor: Can push and unlist packages in the feed.
●● Owner: has all available permissions for a package feed.
When creating an Azure Artifacts feed, the Project Collection Build Service is given contributor rights by default.
This organization-wide build identity in Azure Pipelines can access the feeds it needs when running tasks.
If you changed the build identity to be at the project level, you need to give that identity permissions to
access the feed.
Any contributors to the team project are also contributors to the feed.
Project Collection Administrators and administrators of the team project, plus the feed's creator, are
automatically made owners of the feed.
The roles for these users and groups can be changed or removed.

Examine permissions
The feeds in Azure Artifacts require permissions for the various features they offer. The list of permissions consists of increasingly privileged operations.
The list of privileges is as follows:

You can assign users, teams, and groups to a specific role for each permission, giving the permissions
corresponding to that role.
You need to have the Owner role to do so. Once an account has access to the feed from the permission
to list and restore packages, it's considered a Feed user.

Like permissions and roles for the feed itself, there are extra permissions for access to the individual
views.
Any feed user has access to all the views, whether the default views of @Local, @Release, @Prerelease or
newly created ones.
When creating a feed, you can choose whether the feed is visible to people in your Azure DevOps
organization or only specific people.

See also: Secure and share packages using feed permissions22.

Examine authentication
Azure DevOps users will authenticate against Azure Active Directory when accessing the Azure DevOps
portal.
After being successfully authenticated, they won't have to provide any credentials to Azure Artifacts itself.
The roles for the user, based on their identity or team and group membership, are used for authorization.
When access is allowed, the user can navigate to the Azure Artifacts section of the team project.
The authentication from Azure Pipelines to Azure Artifacts feeds is taken care of transparently. It will be
based upon the roles and their permissions for the build identity.
The previous section on Roles covered some details on the required roles for the build identity.
The authentication from inside Azure DevOps doesn't need any credentials for accessing feeds by itself.
However, when accessing secured feeds outside Azure Artifacts, such as other package sources, you most
likely must provide credentials to authenticate to the feed manager.
Each package type has its way of handling the credentials and providing access upon authentication. The
command-line tooling will provide support in the authentication process.
For the build tasks in Azure Pipelines, you'll provide the credentials via a Service connection.

Summary
This module detailed package migration, consolidation, and configuration to secure access to package
feeds and artifact repositories.
You learned how to describe the benefits and usage of:
●● Identify artifact repositories.
●● Migrate and integrate artifact repositories.
●● Secure package feeds.
●● Understand roles, permissions, and authentication.

Learn more
●● Azure Artifacts overview - Azure Artifacts | Microsoft Docs23.

22 https://docs.microsoft.com/azure/devops/artifacts/feeds/feed-permissions
23 https://docs.microsoft.com/azure/devops/artifacts/start-using-azure-artifacts
●● Best practices when working with Azure Artifacts - Azure Artifacts | Microsoft Docs24.
●● Set up permissions - Azure Artifacts | Microsoft Docs25.

24 https://docs.microsoft.com/azure/devops/artifacts/concepts/best-practices
25 https://docs.microsoft.com/azure/devops/artifacts/feeds/feed-permissions
Implement a versioning strategy


Introduction
Software changes over time. The requirements for the software do not stay the same.
The functionality it offers and its use will grow, change, and adapt based on feedback.
The hosting of an application might evolve as well, with new operating systems, new frameworks, and
versions thereof.
The original implementation might contain flaws and bugs. Whatever the reason for the change, it's unlikely that software is stable and doesn't need to change.
Since the software you build takes dependencies on other components, the same holds for the components and packages you build or use while building your software.
To keep track of which piece of software is currently being used, correct versioning becomes essential to
maintaining a codebase.
Versioning is also relevant for dependency management, as it relates to the versions of the packages and
the components within.
Each dependency is identified by its name and version. It allows keeping track of the exact packages
being used. Each of the packages has its lifecycle and rate of change.

Immutable packages
As packages get new versions, your codebase can choose when to use a new version of the packages it
consumes.
It does so by specifying the specific version of the package it requires. This implies that packages themselves should always have a new version when they change.
Whenever a package is published to a feed, it should not be allowed to change anymore.
If it were, it would be at the risk of introducing potential breaking changes to the code. In essence, a
published package is immutable.
Replacing or updating an existing version of a package is not allowed. Most of the package feeds do not
allow operations that would change a current version.
Regardless of the size of the change, a package can only be updated by introducing a new version.
The new version should indicate the type of change and impact it might have.
This module explains versioning strategies for packaging, best practices for versioning, and package
promotion.

Learning objectives
After completing this module, students and professionals can:
●● Implement a versioning strategy.
●● Promote packages.
●● Push packages from pipeline.
●● Describe semantic and explore best practices for versioning.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps26.
●● If you already have your organization created, use the Azure DevOps Demo Generator [https://
azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called “Parts Unlimit-
ed” using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project -
Azure DevOps27.

Understand versioning of artifacts


It's proper software development practice to indicate changes to code with the introduction of an increased version number.
However small or large a change, it requires a new version. A component and its package can have
independent versions and versioning schemes.
The versioning scheme can differ per package type. Typically, it uses a scheme that can indicate the kind
of change that is made.
Most commonly, it involves three types of changes:
●● Major change Major indicates that the package and its contents have changed significantly. It often
occurs at the introduction of a new version of the package. It can be a redesign of the component.
Major changes aren't guaranteed to be compatible and usually have breaking changes from older
versions. Major changes might require a large amount of work to adapt the consuming codebase to the new version.
●● Minor change Minor indicates that the package and its contents have extensive modifications but are
smaller than a major change. These changes can be backward compatible with the previous version,
although they aren't guaranteed to be.
●● Patch A patch or revision is used to indicate that a flaw, bug, or malfunctioning part of the component has been fixed. Usually, it's a backward-compatible version compared to the previous version.
How artifacts are versioned technically varies per package type. Each type has its way of indicating the
version in the metadata.
The corresponding package manager can inspect the version information. The tooling can query the
package feed for packages and the available versions.
Additionally, a package type might have its conventions for versioning and a particular versioning
scheme.
See also Publish to NuGet feeds28.

26 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
27 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
28 https://docs.microsoft.com/azure/devops/pipelines/artifacts/nuget#package-versioning
Explore semantic versioning


One of the predominant ways of versioning is the use of semantic versioning.
It isn't a standard but does offer a consistent way of expressing the intent and semantics of a particular
version.
It describes a version for its backward compatibility with previous versions.
Semantic versioning uses a three-part version number and an extra label.
The version has the form of Major.Minor.Patch corresponds to the three types of changes covered in
the previous section.
Examples of versions using the semantic versioning scheme are 1.0.0 and 3.7.129. These versions
don't have any labels.
For prerelease versions, it's customary to use a label after the regular version number.
A label is a textual suffix separated by a hyphen from the rest of the version number.
The label itself can be any text describing the nature of the prerelease.
Examples of these are rc1, beta27, and alpha, forming version numbers like 1.0.0-rc1, a prerelease for the upcoming 1.0.0 version.
Prereleases are a common way to prepare for the release of the label-less version of the package.
Early adopters can take a dependency on a prerelease version to build using the new package.
In general, it isn't a good idea to use a prerelease version of packages and their components for released
software.
It's good to assess the impact of the new components by creating a separate branch in the codebase and using the prerelease version of the package there.
Chances are that there will be incompatible changes from a prerelease to the final version.
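As a sketch of how a prerelease label ends up in a package version, assuming a .NET project named MyLibrary.csproj and the dotnet CLI:
# Produce a package with a semantic prerelease version (1.1.0-rc1).
dotnet pack MyLibrary.csproj -c Release -p:PackageVersion=1.1.0-rc1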
See also Semantic Versioning 2.0.029.

Examine release views


When building packages from a pipeline, the package needs to have a version before being consumed
and tested.
Only after testing is the quality of the package known.
Since package versions can't and shouldn't be changed, it becomes challenging to choose a specific
version beforehand.
Azure Artifacts recognizes the quality level of packages in its feeds and the difference between prerelease
and release versions.
It offers different views on the list of packages and their versions, separating these based on their quality
level.
It fits well with the use of semantic versioning of the packages for predictability of the intent of a particular version.
Still, it's extra metadata from the Azure Artifacts feed, called a descriptor.

29 https://semver.org/
Feeds in Azure Artifacts have three different views by default. These views are added when a new feed is
created. The three views are:
●● Local. The @Local view contains all release and prerelease packages and the packages downloaded
from upstream sources.
●● Prerelease. The @Prerelease view contains all packages that have a label in their version number.
●● Release. The @Release view contains all packages that are considered official releases.

Using views
You can use views to offer help consumers of a package feed filter between released and unreleased
versions of packages.
Essentially, it allows a consumer to make a conscious decision to choose from released packages or
opt-in to prereleases of a certain quality level.
By default, the @Local view is used to offer the list of available packages. The format for this URI is:
https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}/nuget/v3/index.json

When consuming a package feed by its URI endpoint, the address can have the requested view included.
For a specific view, the URI includes the name of the view, which changes to be:
https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}@{Viewname}/nuget/v3/index.json
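For example, for the PartsUnlimited feed used later in this module, in a hypothetical organization called contoso, the @Release view would be addressed as:
https://pkgs.dev.azure.com/contoso/_packaging/PartsUnlimited@Release/nuget/v3/index.json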

The tooling will show and use the packages from the specified view automatically.
Tooling may offer an option to select prerelease versions, such as shown in this Visual Studio 2017 NuGet
dialog. It doesn't relate or refer to the @Prerelease view of a feed. Instead, it relies on the presence of
prerelease labels of semantic versioning to include or exclude packages in the search results.
See also:
●● Views on Azure DevOps Services feeds30.
●● Communicate package quality with prerelease and release views31.

Promote packages
Azure Artifacts has the notion of promoting packages to views to indicate that a version is of a certain
quality level.
By selectively promoting packages, you can plan when packages have a certain quality and are ready to
be released and supported by the consumers.
You can promote packages to one of the available views as the quality indicator.
Release and Prerelease's two views might be sufficient, but you can create more views when you want
finer-grained quality levels if necessary, such as alpha and beta.

30 https://docs.microsoft.com/azure/devops/artifacts/concepts/views
31 https://docs.microsoft.com/azure/devops/artifacts/feeds/views
Packages will always show in the Local view, but they'll only show in a particular other view after being promoted to it.
Depending on the URL used to connect to the feed, the available packages will be listed.
Upstream sources will only be evaluated when using the @Local view of the feed.
After packages have been downloaded and cached in the @Local view, you can see and resolve them in other views once they've been promoted.
It's up to you to decide how and when to promote packages to a specific view.
This process can be automated by using an Azure Pipelines task as part of the build pipeline.
Packages that have been promoted to a view won't be deleted based on the retention policies.

Exercise - promote a package


Important: Have access to an existing Azure DevOps project and the connected package feed from the
previous demo Create a package feed32.

Steps to demonstrate the views on package feeds in Azure Artifacts

1. Go to dev.azure.com and open your team project.

2. Open Artifacts and select the feed PartsUnlimited.

3. Go to Artifacts and click Feed Settings.

32 https://docs.microsoft.com/learn/modules/understand-package-management/10-create-package-feed
4. Open the Views tab. By default, there will be three views: Local, which includes all packages in the feed and all packages cached from upstream sources; Prerelease; and Release. In the Default view column, there's a check mark behind Local; it's the default view that will always be used.

Steps to use the release view instead


1. Open Visual Studio and open NuGet Package Manager.
2. Click the settings wheel and check the source address for the PartsUnlimited feed. If you want to use a
different view than the local view, you need to include that in the Source URL of your feed by adding
@Release.
3. Add @Release to the source URL ../PartsUnlimited@Release/nuget/.. and click Update.
4. Refresh the Browse tab. You'll see there are No packages found in the Release feed. Before any
packages appear, you need to promote them.

Steps to promote packages to views


1. Go back to your feed in Azure Artifacts.
2. Click on the created NuGet Package PartsUnlimited.Security.
3. Click Promote.
4. Select the feed you want to use, in this case, PartsUnlimited@Release, and Promote.
5. Go back to the Overview. If we look again at our package in the feed, you'll notice a Release tag is
now associated with this package.
6. Go back to Visual Studio, NuGet Package Manager.
7. Refresh the Browse tab. You'll see that your version is promoted to this view.
8. Select PartsUnlimited.Security and click Update and OK. The latest version of this package is now
used.

Explore best practices for versioning


There are several best practices for using versions, views, and promotion flow effectively in your versioning
strategy.
Here are a couple of suggestions:
●● Have a documented versioning strategy.
●● Adopt SemVer 2.0 for your versioning scheme.
●● Each repository should only reference one feed.
●● On package creation, automatically publish packages back to the feed.
It's good to adopt these best practices yourself and share them with your development teams.
They can be made part of the Definition of Done for work items related to publishing and consuming
packages from feeds.
See also Best practices for using Azure Artifacts33.

33 https://docs.microsoft.com/azure/devops/artifacts/concepts/best-practices

Exercise - push from the pipeline


Important: In your Azure DevOps PartsUnlimited project, prepare two build pipelines (used in previous
demos):
●● Build pipeline PartsUnlimited Security Package.

●● .NET Core build task (Restore, build, push)


●● Enable CI trigger
●● Build pipeline PartsUnlimited E2E.

●● ASP.NET web application type


●● NuGet restore task

Steps to build and push the NuGet package


1. Edit the build pipeline PartsUnlimited Security Package.
●● dotnet restore
●● dotnet build
●● dotnet push

●● Use the command nuget push and specify the path **/*.nupkg.

●● Select the target feed PartsUnlimited.
A hedged YAML sketch of these three steps appears at the end of this exercise.
2. Start a new build and select an agent pool.
The build runs, but you'll see the final step, dotnet push, fail. The reason for this failure can
be found in the log information.
3. Open the log information.
It shows that the feed already contains PartsUnlimited.Security 1.0.0. We go back to the Visual
Studio project to see what is happening.
4. Open the source code for the PartsUnlimited package in Visual Studio in a different solution.
●● Open the Project Properties.
●● Go to the package tab.
Look at the Package version. We see that the version is still 1.0.0. Packages are immutable:
as soon as a package version is published to a feed, no other package with that same
version can ever be pushed.
We need to bump the version to a new one, keeping the major and minor versions and incrementing the
patch version.
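As a hedged aside, you could also set the new version at pack time instead of editing the project file by hand. The sketch below assumes the PartsUnlimited feed is registered as a NuGet source and that the Azure Artifacts Credential Provider handles authentication; the --api-key value is a required placeholder that Azure Artifacts doesn't actually use.
# Pack with an explicit version instead of editing the .csproj.
dotnet pack PartsUnlimited.Security.csproj --configuration Release -p:PackageVersion=1.0.1

# Push the new, non-conflicting version to the feed ("az" is a placeholder API key).
dotnet nuget push bin/Release/PartsUnlimited.Security.1.0.1.nupkg --source "PartsUnlimited" --api-key az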
5. Change the patch version: 1.0.1. Make a small edit to the code for illustrative purposes, and check in
the new code.
6. Change the exception type, then commit and push the new code. We go back to the Azure
DevOps pipeline. Since there's a trigger on the code we just changed, the build will start
automatically.

7. Go back to build pipeline for PartsUnlimited Security Package.


As you see, the build is triggered and completed successfully, including the push to the NuGet feed.
Since there was no version conflict, we successfully pushed the new version to the feed.
8. Open Artifacts and show the feed.
Since there was a successful build for the entire web application, you can see that the PartsUnlimited
feed now also includes all the downloaded upstream packages from the NuGet.org source.
9. Scroll down and click on the PartsUnlimited.Security 1.0.1 package. By clicking on it, we can inspect
the details and the versions.
10. Edit the build pipeline PartsUnlimited E2E.
11. Click NuGet restore.
There is a second pipeline that builds the complete web application. It is an ASP.NET web application
build.
The NuGet restore task is also configured to consume packages from the PartsUnlimited feed.
Because PartsUnlimited has an upstream source for NuGet.org, we don't have to check Use packages from
NuGet.org explicitly. You can uncheck this box.
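As promised in step 1, here's a hedged YAML sketch approximating the three .NET Core task steps of the PartsUnlimited Security Package pipeline. It assumes an organization-scoped feed named PartsUnlimited; your project structure, task versions, and feed name may differ.
steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'
    feedsToUse: 'select'
    vstsFeed: 'PartsUnlimited'

- task: DotNetCoreCLI@2
  displayName: 'dotnet build'
  inputs:
    command: 'build'
    projects: '**/*.csproj'

- task: DotNetCoreCLI@2
  displayName: 'dotnet push'
  inputs:
    command: 'push'
    packagesToPush: '**/*.nupkg'
    nuGetFeedType: 'internal'
    publishVstsFeed: 'PartsUnlimited'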

Summary
This module explained versioning strategies for packaging, best practices for versioning, and package
promotion.
You learned how to describe the benefits and usage of:
●● Implement a versioning strategy.
●● Promote packages.
●● Push packages from pipeline.
●● Describe semantic versioning and explore best practices for versioning.

Learn more
●● Key concepts for Azure Artifacts34.
●● Publish and download universal packages - Azure Artifacts | Microsoft Docs35.
●● Get started with NuGet packages - Azure Artifacts | Microsoft Docs36.

34 https://docs.microsoft.com/azure/devops/artifacts/artifacts-key-concepts#immutability
35 https://docs.microsoft.com/azure/devops/artifacts/quickstarts/universal-packages
36 https://docs.microsoft.com/azure/devops/artifacts/get-started-nuget

Introduction to GitHub Packages


Introduction
This module introduces you to GitHub Packages. It explores ways to control permissions and visibility,
and to publish, install, delete, and restore packages using GitHub.
GitHub Packages is a software package hosting service that allows you to host your packages, containers,
and other dependencies. It's a central place to provide integrated permissions management and billing
for software development on GitHub.
GitHub Packages can host:
●● npm.
●● RubyGems.
●● Apache Maven.
●● Gradle.
●● Docker.
●● NuGet.
●● GitHub's Container registry is optimized for containers and supports Docker and OCI images.

GitHub Packages give you the flexibility to control permissions and visibility for your packages. You can
publish packages in a public or private repository. The permission can be inherited from the repository
where the package is hosted or defined for specific user or organization accounts for packages in the
container registry.
You can integrate GitHub Packages with GitHub APIs, GitHub Actions, and webhooks.

Learning objectives
After completing this module, students and professionals can:
●● Publish packages.
●● Install packages.
●● Delete and restore packages.

●● Configure access control and visibility.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create a GitHub account at GitHub.com and a project for some exercises. If you don't
have it yet, see: Join GitHub · GitHub37. If you already have your GitHub account, create a new
repository Creating a new repository - GitHub Docs38.

Publish packages
GitHub Packages use native package tooling commands to publish and install package versions.

Support for package registries


Language Package format Package client
JavaScript package.json npm
Ruby Gemfile gem
Java pom.xml mvn
Java build.gradle or build.gradle.kts gradle
.NET nupkg dotnet CLI
N/A Dockerfile Docker

When creating a package, you can provide a description, installation and usage instructions, and other
details on the package page. It helps people consuming the package understand how to use it and its
purposes.
If a new package version fixes a security vulnerability, you can publish a security advisory to your repository.
Tip: You can connect a repository to more than one package. Ensure the README and description
provide information about each package.

Publishing a package
Using any supported package client, to publish your package, you need to:
1. Create or use an existing access token with the appropriate scopes for the task you want to accomplish:
Creating a personal access token39. When you create a personal access token (PAT), you can
assign the token to different scopes depending on your needs. See "About permissions for GitHub
Packages40".
2. Authenticate to GitHub Packages using your access token and the instructions for your package client.

37 https://github.com/signup
38 https://docs.github.com/repositories/creating-and-managing-repositories/creating-a-new-repository
39 https://docs.github.com/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token
40 https://docs.github.com/packages/learn-github-packages/about-permissions-for-github-packages#about-scopes-and-permissions-for-package-registries

3. Publish the package using the instructions for your package client.
Choose your package, and check how to authenticate and publish: Working with a GitHub Packages
registry41. You'll see below examples for NuGet and npm.

NuGet registry
You can authenticate to GitHub Packages with the dotnet command-line interface (CLI).
Create a nuget.config file in your project directory and specify GitHub Packages as a source under
packageSources.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
    <packageSources>
        <clear />
        <add key="github" value="https://nuget.pkg.github.com/OWNER/index.json" />
    </packageSources>
    <packageSourceCredentials>
        <github>
            <add key="Username" value="USERNAME" />
            <add key="ClearTextPassword" value="TOKEN" />
        </github>
    </packageSourceCredentials>
</configuration>

Note: Replace USERNAME with the name of your personal account on GitHub, TOKEN with your PAT, and
OWNER with the name of the user or organization account that owns your project's repository.
You can publish a package by authenticating with a nuget.config file, or by using the --api-key command-line
option with your GitHub PAT.
dotnet nuget push "bin/Release/OctocatApp.1.0.0.nupkg" --api-key YOUR_GITHUB_PAT --source "github"

npm registry
You can authenticate using npm by either editing your per-user ~/.npmrc file to include your PAT or by
logging in to npm on the command line using your username and personal access token.
Edit your ~/.npmrc file for your project to include the following line:
//npm.pkg.github.com/:_authToken=TOKEN
Create a new ~/.npmrc file if one doesn't exist.
If you prefer to authenticate by logging in to npm, use the npm login command.
$ npm login --scope=@OWNER --registry=https://npm.pkg.github.com
Username: USERNAME
Password: TOKEN
Email: PUBLIC-EMAIL-ADDRESS

41 https://docs.github.com/packages/working-with-a-github-packages-registry

Note: Replace USERNAME with your GitHub username, TOKEN with your PAT, and PUBLIC-EMAIL-ADDRESS
with your email address.
To publish your npm package, see Working with the npm registry - GitHub Docs42.
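As a minimal sketch (the @OWNER scope and package name are assumptions), you can point npm at GitHub Packages with a publishConfig entry in your package.json and then run npm publish:
{
  "name": "@OWNER/hello-world-npm",
  "version": "1.0.0",
  "publishConfig": {
    "registry": "https://npm.pkg.github.com"
  }
}

npm publish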
After you publish a package, you can view the package on GitHub. See "Viewing packages43".

For an example package page, see Codertocat/hello-world-npm (github.com)44.


For more information, see:
●● Working with the Container registry45.
●● Working with a GitHub Packages registry46.
●● About GitHub Security Advisories47.
●● Working with the NuGet registry48.
●● Working with the npm registry49.

Install a package
You can install any package you have permission to view from GitHub Packages and use the package as a
dependency in your project.
You can search for packages globally across all of GitHub or within a particular organization. For details,
see Searching for packages50.
After you find a package, read the package's installation and description instructions on the package
page.
You can install a package using any supported package client following the same general guidelines.
1. Authenticate to GitHub Packages using the instructions for your package client.
2. Install the package using the instructions for your package client.

NuGet
To use NuGet packages from GitHub Packages, you must add dependencies to your .csproj file. For more
information on using a .csproj file in your project, see "Working with NuGet packages51".
If you're using Visual Studio, expand your Solution -> Project -> Right-click on Dependencies -> Manage
NuGet Packages…

42 https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-npm-registry#publishing-a-package-using-a-local-npmrc-file
43 https://docs.github.com/packages/learn-github-packages/viewing-packages
44 https://github.com/Codertocat/hello-world-npm/packages/10696?version=1.0.1
45 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-container-registry
46 https://docs.github.com/packages/working-with-a-github-packages-registry
47 https://docs.github.com/github/managing-security-vulnerabilities/about-github-security-advisories
48 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry
49 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-npm-registry
50 https://docs.github.com/search-github/searching-on-github/searching-for-packages
51 https://docs.microsoft.com/nuget/consume-packages/overview-and-workflow

You can browse, install and update dependencies from multiple registries. For more information, see
Create and remove project dependencies52.

Or, you can update your .csproj file directly.


1. Authenticate to GitHub Packages.
2. Add ItemGroup and configure the PackageReference field in the .csproj project file.

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PackageId>OctocatApp</PackageId>
    <Version>1.0.0</Version>
    <Authors>Octocat</Authors>
    <Company>GitHub</Company>
    <PackageDescription>This package adds an Octocat!</PackageDescription>
    <RepositoryUrl>https://github.com/OWNER/REPOSITORY</RepositoryUrl>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="OctokittenApp" Version="12.0.2" />
  </ItemGroup>
</Project>

52 https://docs.microsoft.com/visualstudio/ide/how-to-create-and-remove-project-dependencies

Note: Replace the OctokittenApp package with your package dependency and 12.0.2 with the version
you want to use.
3. Install the packages with the restore command.
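For example, run the restore from the project directory (assuming the nuget.config shown earlier is in place):
dotnet restore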

npm
You need to add the .npmrc file to your project to install packages from GitHub Packages.
1. Authenticate to GitHub Packages.
2. In the same directory as your package.json file, create or edit a .npmrc file.
3. Include a line specifying the GitHub Packages URL and the account owner.

@OWNER:registry=https://npm.pkg.github.com
Note: Replace OWNER with the name of the user or organization account.
4. Add the .npmrc file to the repository. See "Adding a file to a repository53".
5. Configure package.json in your project to use the package you're installing.
{
  "name": "@my-org/server",
  "version": "1.0.0",
  "description": "Server app that uses the @octo-org/octo-app package",
  "main": "index.js",
  "author": "",
  "license": "MIT",
  "dependencies": {
    "@octo-org/octo-app": "1.0.0"
  }
}

6. Install the package.
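For example, from the directory containing your package.json:
npm install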


7. (Optional) If you need to install packages from other organizations, you can add extra lines to your
.npmrc file.

For more information, see:


●● Working with a GitHub Packages registry54.
●● Working with the NuGet registry55.
●● Working with the npm registry56.

53 https://docs.github.com/repositories/working-with-files/managing-files/adding-a-file-to-a-repository
54 https://docs.github.com/packages/working-with-a-github-packages-registry
55 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry
56 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-npm-registry

Delete and restore a package


If you have the required access, you can delete the following on GitHub:

●● An entire private package.

●● An entire public package, if there aren't more than 5000 downloads of any version of the package.
●● A specific version of a private package.
●● A specific version of a public package, if the package version doesn't have more than 5000 downloads.
For packages that inherit their access permissions from repositories, you can delete a package if you have
admin permissions to the repository.
You can also restore an entire package or package version, if:
●● You restore the package within 30 days of its deletion.
●● The same package namespace is still available and not used for a new package.
You can use the REST API to manage your packages. For more information, see the "GitHub Packages
API57".
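As a hedged sketch of what that looks like, the calls below delete and then restore one version of a user-scoped npm package. The package name and numeric version ID are placeholders, and your PAT needs the appropriate packages scopes.
# Delete a specific version of a user-scoped npm package (IDs are hypothetical).
curl -X DELETE \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer YOUR_GITHUB_PAT" \
  https://api.github.com/user/packages/npm/PACKAGE_NAME/versions/123456

# Restore the same version (possible within 30 days of deletion).
curl -X POST \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer YOUR_GITHUB_PAT" \
  https://api.github.com/user/packages/npm/PACKAGE_NAME/versions/123456/restore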

Deleting a package version


You can delete a package version in any of the following ways:
●● Deleting a version of a repository-scoped package on GitHub.
●● Deleting a version of a repository-scoped package with GraphQL.
●● Deleting a version of a user-scoped package on GitHub.
●● Deleting a version of an organization-scoped package on GitHub.
To delete a version of a repository-scoped package, you must have admin permissions to the repository
that owns the package.
1. On GitHub.com, navigate to the main page of the repository.
2. To the right of the list of files, click Packages.
3. Search for and select your package.
4. In the top right of your package's landing page, click Package settings.
5. On the left, click Manage versions.
6. To the right of the version you want to delete, click the menu icon and select Delete version.
7. To confirm the deletion, type the package name and click I understand the consequences, delete this
version.

Deleting an entire package


You can delete an entire package in any of the following ways:
●● Deleting an entire repository-scoped package on GitHub.
●● Deleting an entire user-scoped package on GitHub.

57 https://docs.github.com/rest/reference/packages

●● Deleting an entire organization-scoped package on GitHub.


To delete an entire repository-scoped package, you must have admin permissions to the repository that
owns the package.
1. On GitHub.com, navigate to the main page of the repository.
2. To the right of the list of files, click Packages.
3. Search for and select your package.
4. In the top right of your package's landing page, click Package settings.

5. Under “Danger Zone,” click Delete this package.


6. Review the confirmation message, enter your package name, and click I understand the consequences,
delete this package.

Restoring a package version


You can restore a package version from your package's landing page.
1. Navigate to your package's landing page.
2. On the right, click Package settings.
3. On the left, click Manage versions.
4. Use the “Versions” drop-down menu at the top right and select Deleted.
5. Next to the deleted package version you want to restore, click Restore.
6. To confirm, click I understand the consequences, restore this version.
For more information, see:
●● Deleting and restoring a package - GitHub Docs58.
●● Working with a GitHub Packages registry59.
●● Working with the NuGet registry60.
●● Working with the npm registry61.
●● Required permissions62.

Explore package access control and visibility


You can only use granular permissions with the Container registry (scoped to a personal user or organization
account). GitHub.com doesn't support granular permissions in other package registries, like the npm
registry.
You can change the package access control and visibility separately from the repository.
For more information about permissions, see "About permissions for GitHub Packages63".

58 https://docs.github.com/packages/learn-github-packages/deleting-and-restoring-a-package
59 https://docs.github.com/packages/working-with-a-github-packages-registry
60 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry
61 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-npm-registry
62 https://docs.github.com/packages/learn-github-packages/deleting-and-restoring-a-package
63 https://docs.github.com/packages/learn-github-packages/about-permissions-for-github-packages

Container images visibility and access permissions


If you have admin permissions to a container image, you can set the access permissions for the container
image to private or public.
As an admin, you can also grant access permissions for a container image that are separate from the
permissions you've set at the organization and repository levels.

You can give any person an access role for container images published and owned by a personal account.
For container images published and owned by an organization, you can provide any person or team in
the organization an access role.

Permission Access description


read Can download the package.
Can read package metadata.
write Can upload and download this package.
Can read and write package metadata.
admin Can upload, download, delete, and manage this
package.
Can read and write package metadata.
Can grant package permissions.
For more information about the package's access control and visibility configuration, see Configuring a
package's access control and visibility - GitHub Docs64.

Summary
This module introduced you to GitHub Packages. It explored ways to control permissions and visibility,
and to publish, install, delete, and restore packages using GitHub.
You learned how to describe the benefits and usage of:
●● Publish packages.
●● Install packages.
●● Delete and restore packages.
●● Configure access control and visibility.

Learn more
●● Quickstart for GitHub Packages - GitHub Docs65.
●● Learn GitHub Packages - GitHub Docs66.
●● Working with a GitHub Packages registry - GitHub Docs67.

64 https://docs.github.com/packages/learn-github-packages/configuring-a-packages-access-control-and-visibility
65 https://docs.github.com/packages/quickstart
66 https://docs.github.com/packages/learn-github-packages
67 https://docs.github.com/packages/working-with-a-github-packages-registry

Lab
Lab 17: Package management with Azure Arti-
facts
Lab overview
Azure Artifacts facilitates the discovery, installation, and publishing of NuGet, npm, and Maven packages in Azure
DevOps. It's deeply integrated with other Azure DevOps features such as Build, making package management
a seamless part of your existing workflows.
In this lab, you will learn how to work with Azure Artifacts by using the following steps:
●● create and connect to a feed.
●● create and publish a NuGet package.
●● import a NuGet package.
●● update a NuGet package.

Objectives
After you complete this lab, you will be able to:
●● Create and connect to an Azure Artifacts feed.
●● Create and publish a NuGet package.
●● Import and update a NuGet package.

Lab duration
●● Estimated time: 40 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions68

68 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which of the following choices isn't a correct aspect of a dependency management strategy?
†† Standardization.
†† Versioning.
†† Binaries sharing.

Multiple choice
Which of the following choices isn't a way of componentization commonly used?
†† Symbol componentization.
†† Source componentization.
†† Package componentization.

Multiple choice
Which of the following choices is the aspect of dependency management that allows a repeatable, predictable
process and usage that can be automated?
†† Versioning.
†† Package formats and sources.
†† Standardization.

Multiple choice
Which of the following visibility options should you choose if you create a feed that will allow you and the users
you invite to publish?
†† Private.
†† Public.
†† Local.

Multiple choice
Which of the following choices isn't an option for private feeds?
†† Self-hosting.
†† Database hosting.
†† SaaS services.

Multiple choice
Which of the following choices isn't a package type supported by Azure DevOps?
†† PHP.
†† NPM.
†† Python.

Multiple choice
Which of the following choices is the minimum feed permission that will allow you to list available packages
and install them?
†† Collaborator.
†† Reader.
†† Contributor.

Multiple choice
Which of the following roles is given to the Project Collection Build Service when creating an Azure Artifacts
feed by default?
†† Owner.
†† Reader.
†† Contributor.

Multiple choice
Which of the following choices is a recommended place to store binaries?
†† Universal Packages.
†† Git.
†† Team Foundation Version Control.

Multiple choice
Which of the following choices is a correct fix if an existing package is broken or buggy?
†† Use an older version.
†† Publish a new version.
†† Repair and save.

Multiple choice
Which of the following actions is required to make a package visible in release views other than @local?
†† Promote.
†† Move.
†† Push.

Multiple choice
Which of the following choices isn't a default feed view in Azure Artifacts?
†† Remote.
†† Prerelease.
†† Local.

Multiple choice
Which of the following choices contains valid packages that GitHub Packages can host?
†† Apache Maven, Gradle and Go modules.
†† RubyGems, Docker and NuGet.
†† NuGet, Conan (for C/C++), and Cargo (for Rust).

Multiple choice
Which of the following choices is the most secure and recommended authentication method for GitHub
Packages?
†† Personal Access Token (PAT).
†† User and Password.
†† OAuth token.

Multiple choice
Which of the following choices is the minimum user or team permission within the GitHub organization that
allows a package delete action?
†† Write.
†† Read.
†† Admin.

Answers
Multiple choice
Which of the following choices isn't a correct aspect of a dependency management strategy?
†† Standardization.
†† Versioning.
■■ Binaries sharing.
Explanation
There are many aspects to a dependency management strategy, such as standardization, package formats and
sources, and versioning.
Multiple choice
Which of the following choices isn't a way of componentization commonly used?
■■ Symbol componentization.
†† Source componentization.
†† Package componentization.
Explanation
There are two ways of componentization commonly used: source componentization and package componentization.
Multiple choice
Which of the following choices is the aspect of dependency management that allows a repeatable,
predictable process and usage that can be automated?
†† Versioning.
†† Package formats and sources.
■■ Standardization.
Explanation
Standardization allows a repeatable, predictable process and usage that can be automated as well.
Multiple choice
Which of the following visibility options should you choose if you create a feed that will allow you and the
users you invite to publish?
■■ Private.
†† Public.
†† Local.
Explanation
It's private. Private feeds can only be consumed by users who are allowed access.

Multiple choice
Which of the following choices isn't an option for private feeds?
†† Self-hosting.
■■ Database hosting.
†† SaaS services.
Explanation
There are two options for private feeds, which are Self-hosting (NuGet Server, Nexus) and SaaS Services
(Azure Artifacts, MyGet).
Multiple choice
Which of the following choices isn't a package type supported by Azure DevOps?
■■ PHP.
†† NPM.
†† Python.
Explanation
Azure Artifacts currently supports feeds that can store five different package types: NuGet packages,
npm packages, Maven, Universal packages, and Python.
Multiple choice
Which of the following choices is the minimum feed permission that will allow you to list available
packages and install them?
†† Collaborator.
■■ Reader.
†† Contributor.
Explanation
A reader can list and restore or install packages from the feed.
Multiple choice
Which of the following roles is given to the Project Collection Build Service when creating an Azure
Artifacts feed by default?
†† Owner.
†† Reader.
■■ Contributor.
Explanation
When creating an Azure Artifacts feed, the Project Collection Build Service is given contributor rights by
default. This organization-wide build identity in Azure Pipelines can access the feeds it needs when running
tasks.

Multiple choice
Which of the following choices is a recommended place to store binaries?
■■ Universal Packages.
†† Git.
†† Team Foundation Version Control.
Explanation
You can store them directly using universal packages. This is also a great way to protect your packages.
Multiple choice
Which of the following choices is a correct fix if an existing package is broken or buggy?
†† Use an older version.
■■ Publish a new version.
†† Repair and save.
Explanation
You need to publish a new version. Replacing or updating an existing version of a package is not allowed.
Most of the package feeds do not allow operations that would change an existing version.
Multiple choice
Which of the following actions is required to make a package visible in release views other than @local?
■■ Promote.
†† Move.
†† Push.
Explanation
You can promote packages to Release and Prerelease views as the quality indicator.
Multiple choice
Which of the following choices isn't a default feed view in Azure Artifacts?
■■ Remote.
†† Prerelease.
†† Local.
Explanation
Feeds in Azure Artifacts have three different views by default. These views are Release, Prerelease, and Local.
Multiple choice
Which of the following choices contains valid packages that GitHub Packages can host?
†† Apache Maven, Gradle and Go modules.
■■ RubyGems, Docker and NuGet.
†† NuGet, Conan (for C/C++), and Cargo (for Rust).
Explanation
GitHub Packages can host npm, RubyGems, Apache Maven, Gradle, Docker, NuGet, and GitHub's Container
registry.

Multiple choice
Which of the following choices is the most secure and recommended authentication method for GitHub
Packages?
■■ Personal Access Token (PAT).
†† User and Password.
†† OAuth token.
Explanation
It's recommended to use a PAT for secure GitHub Packages authentication.
Multiple choice
Which of the following choices is the minimum user or team permission within the GitHub organization
that allows a package delete action?
†† Write.
†† Read.
■■ Admin.
Explanation
Admin permission can upload, download, delete, and manage packages. Also, the admin can read and write
package metadata and grant package permissions.
Module 9 Implement continuous feedback

Implement tools to track usage and flow


Introduction
This module introduces you to continuous feedback practices and tools to track usage and flow, such as
Azure Logs Analytics, Kusto Query Language (KQL), and Application Insights.

Learning objectives
After completing this module, students and professionals can:
●● Implement tools to track feedback.
●● Plan for continuous monitoring.
●● Implement Application Insights.
●● Use Kusto Query Language (KQL).

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Understand the inner loop


It isn't clear who coined the term “inner loop” in the context of software engineering, but within Microsoft,
at least, the term seems to have stuck.
Many of the internal teams I work with see it as something they want to keep as short as possible - but
what is the inner loop?

Definitions
The easiest way to define the inner loop is as the iterative process a developer goes through when writing,
building, and debugging code.
A developer does other things too, but the inner loop is the set of steps repeated most often before sharing
work with the team or the rest of the world.

Exactly what goes into an individual developer's inner loop will depend significantly on the technologies
they're working with, the tools being used, and their preferences.
If I were working on a library, my inner loop would include coding, building, test execution & debugging,
with regular commits to my local Git repository.
On the other hand, if I were doing some web front-end work, I would probably be optimized around
hacking on HTML & JavaScript, bundling, and refreshing the browser (followed by regular commits).

Most codebases comprise multiple moving parts, so the definition of a developer's inner loop on any
single codebase might vary depending on what is being worked on.

Understanding the loop


The steps within the inner loop can be grouped into three broad buckets of activity - experimentation,
feedback collection, and tax.
If we flick back to the library development scenario I mentioned earlier, I listed four steps; here's how I
would bucket them:
●● Coding (Experimentation)
●● Building (Feedback Collection)
●● Testing / Debugging (Feedback Collection)
●● Committing (Tax)
Of all the steps in the inner loop, coding is the only one that adds customer value.
Building and testing code are essential, but ultimately, we use them to give the developer feedback about
what they've written to see if it delivers sufficient value.
Putting committing code in the tax bucket is perhaps a bit harsh, but the purpose of the bucket is to call
out those activities that neither add value nor provide feedback.
Tax is necessary work. If it's unnecessary work, then it's waste and should be eliminated.

Loop optimization
Having categorized the steps within the loop, it's now possible to make some general statements:
●● You want to execute the loop as fast as possible and for the total loop execution time to be propor-
tional to the changes made.
●● You want to minimize the time feedback collection takes but maximize the quality of the feedback
that you get.
●● You want to minimize the tax you pay by eliminating work that's unnecessary on each run through the loop
(can you defer some operations until you commit, for example?).
●● As new code and more complexity are added to any codebase, the amount of outward pressure to
increase the size of the inner loop also increases. More code means more tests, which means more
execution time and slower execution of the inner loop.
If you have ever worked on a large monolithic codebase, you know it's possible to get into a
situation where even small changes require a disproportionate amount of time to execute the feedback
collection steps of the inner loop. It's a problem, and you should fix it.
There are several things that a team can do to optimize the inner loop for larger codebases:
●● Only build and test what was changed.
●● Cache intermediate build results to speed up to complete builds.
●● Break up the codebase into small units and share binaries.
How you tackle each one of those is probably a blog post in its own right.
At Microsoft, for some of our genuinely massive monolithic codebases, we're investing heavily in #1 and #2,
but #3 requires a special mention because it can be a double-edged sword and can have the opposite of
the desired impact if done incorrectly.

Tangled loops
To understand the problem, we need to look beyond the inner loop. Let's say that our monolithic
codebase has an application-specific framework that does much of the heavy lifting.
It would be tempting to extract that framework into a set of packages.
To do this, you would pull that code into a separate repository (optional, but this is generally the way it's
done), then set up a different CI/CD pipeline that builds and publishes the package.
A separate pull-request process would also front this build and release pipeline so that changes are
inspected before the code is published.
When someone needs to change this framework code, they clone down the repository, make their
changes (a separate inner loop), and submit a PR that transitions the workflow from the inner loop to the
outer loop.
The framework package would then be available to be pulled into dependent applications (in this case,
the monolith).

Initially, things might work out well. However, at some point in the future, you'll likely want to develop a
new feature in the application that requires extensive new capabilities to be added to the framework.
This is where teams that have broken up their codebases in suboptimal ways will start to feel pain.
If you have to coevolve code in two separate repositories where a binary/library dependency is present,
you'll experience some friction.
In loop terms, the original codebase's inner loop now (temporarily at least) includes the outer loop of the
previously broken out framework code.
Outer loops include tax such as code reviews, scanning passes, binary signing, release pipelines, and
approvals.
You don't want to pay that every time you've added a method to a class in the framework and now want
to use it in your application.

What generally ends up happening next is a series of local hacks by the developer to try to stitch the
inner loops together so that they can move forward efficiently - but it gets messy quickly, and you must
pay that outer-loop tax at some point.
This isn't to say that breaking up code into separate packages is an inherently bad thing - it can work
brilliantly, but you need to make those incisions carefully.

Closing thoughts
There's no silver bullet solution to ensure that your inner loop doesn't start slowing down, but it's
essential to understand when it starts happening, what the cause is, and work to address it.
Decisions such as how you build, test, and debug, and the architecture itself, will all impact how productive
developers are. Improving one aspect will often cause issues in another.

Introduction to continuous monitoring

Continuous monitoring refers to the process and technology required to incorporate monitoring across
each phase of your DevOps and IT operations lifecycles.
It helps to continuously ensure the health, performance, and reliability of your application and its
infrastructure as it moves from development to production.
Continuous monitoring builds on the concepts of Continuous Integration and Continuous Deployment
(CI/CD), which help you develop and deliver software faster and more reliably to provide continuous
value to your users.
Azure Monitor1 is the unified monitoring solution in Azure that provides full-stack observability across
applications and infrastructure in the cloud and on-premises.
It works seamlessly with Visual Studio and Visual Studio Code2 during development and test and
integrates with Azure DevOps3 for release management and work item management during deployment
and operations.
It even integrates across the ITSM and SIEM tools of your choice to help track issues and incidents within
your existing IT processes.
This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout
your workflows.
It includes links to other documentation that provides details on implementing different features.

Enable monitoring for all your applications


To gain observability across your entire environment, you need to enable monitoring on all your web
applications and services.

1 https://docs.microsoft.com/azure/azure-monitor/overview
2 https://visualstudio.microsoft.com/
3 https://docs.microsoft.com/azure/devops/user-guide/index

It will allow you to visualize end-to-end transactions and connections across all the components easily.
●● Azure DevOps Projects gives you a simplified experience with your existing code and Git
repository, or choose4 from one of the sample applications to create a Continuous Integration (CI)
and Continuous Delivery (CD) pipeline to Azure.
●● Continuous monitoring in your DevOps release pipeline5 allows you to gate or roll back your
deployment based on monitoring data.
●● Status Monitor6 allows you to instrument a live .NET app on Windows with Azure Application Insights
without having to modify or redeploy your code.
●● If you have access to the code for your application, then enable complete monitoring with Application
Insights7 by installing the Azure Monitor Application Insights SDK for .NET8, Java9, Node.js10, or
any other programming language11. It allows you to specify custom events, metrics, or page views
relevant to your application and business.

Enable monitoring for your entire infrastructure


Applications are only as reliable as their underlying infrastructure.
Monitoring enabled across your entire infrastructure will help you achieve full observability and make it
easier to discover a potential root cause when something fails.
Azure Monitor helps you track the health and performance of your entire hybrid infrastructure, including
resources such as VMs, containers, storage, and network.
●● You automatically get platform metrics, activity logs, and diagnostics logs12 from most of your
Azure resources with no configuration.
●● Enable deeper monitoring for VMs with Azure Monitor13.
●● Enable deeper monitoring for AKS clusters with Azure Monitor for containers14.
●● Add monitoring solutions15 for different applications and services in your environment.
Infrastructure as code16 manages infrastructure in a descriptive model, using the same versioning as
DevOps teams use for source code.
It adds reliability and scalability to your environment and allows you to apply similar processes to
manage your applications.
●● Use Resource Manager templates17 to enable monitoring and configure alerts over a large set of
resources.
●● Use Azure Policy18 to enforce different rules over your resources. It ensures that those resources stay
compliant with your corporate standards and service level agreements.

4 https://docs.microsoft.com/azure/devops-project/overview
5 https://docs.microsoft.com/azure/application-insights/app-insights-vsts-continuous-monitoring
6 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-performance-live-website-now
7 https://docs.microsoft.com/azure/application-insights/app-insights-overview
8 https://docs.microsoft.com/azure/application-insights/quick-monitor-portal
9 https://docs.microsoft.com/azure/application-insights/app-insights-java-quick-start
10 https://docs.microsoft.com/azure/application-insights/app-insights-nodejs-quick-start
11 https://docs.microsoft.com/azure/application-insights/app-insights-platforms
12 https://docs.microsoft.com/azure/azure-monitor/platform/data-sources
13 https://docs.microsoft.com/azure/azure-monitor/insights/vminsights-overview
14 https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-overview
15 https://docs.microsoft.com/azure/azure-monitor/insights/solutions-inventory
16 https://docs.microsoft.com/azure/devops/learn/what-is-infrastructure-as-code
17 https://docs.microsoft.com/azure/azure-monitor/platform/template-workspace-configuration
18 https://docs.microsoft.com/azure/governance/policy/overview

Combine resources in Azure Resource Groups


Today, a typical application on Azure includes multiple resources such as VMs and App Services or
microservices hosted on Cloud Services, AKS clusters, or Service Fabric.
These applications frequently use dependencies like Event Hubs, Storage, SQL, and Service Bus.
●● Combine resources in Azure Resource Groups to get complete visibility of all your resources that
make up your different applications. Azure Monitor for Resource Groups19 provides a simple way to
keep track of the health and performance of your entire full-stack application and enables drilling
down into respective components for any investigations or debugging.

Ensure quality through continuous deployment


Continuous Integration / Continuous Deployment allows you to automatically integrate and deploy code
changes to your application based on automated testing results.
It streamlines the deployment process and ensures the quality of any changes before they move into
production.
●● Use Azure Pipelines20 to implement Continuous Deployment and automate your entire process from
code commit to production based on your CI/CD tests.
●● Use Quality Gates to integrate monitoring into your pre-deployment or post-deployment. It ensures
that you meet the key health/performance metrics (KPIs) as your applications move from dev to
production, and that any differences in the infrastructure environment or scale don't negatively impact
your KPIs.
●● Maintain separate monitoring instances21 between your different deployment environments such
as Dev, Test, Canary, and Prod. It ensures that collected data is relevant across the associated applications
and infrastructure. If you need to correlate data across environments, you can use multi-resource
charts in Metrics Explorer22 or create cross-resource queries in Log Analytics23.

Create actionable alerts with actions


A critical aspect of monitoring is proactively notifying administrators of any current and predicted issues.
●● Create alerts in Azure Monitor24 based on logs and metrics to identify predictable failure states. You
should aim to make all alerts actionable, meaning that they represent actual critical conditions, and
seek to reduce false positives. Use dynamic thresholds25 to automatically calculate baselines on metric
data rather than defining your own static thresholds.
●● Define actions for alerts to use the most effective means of notifying your administrators. Available
actions for notification26 are SMS, e-mails, push notifications, or voice calls.
●● Use more advanced actions to connect to your ITSM tool27 or other alert management systems
through webhooks28.

19 https://docs.microsoft.com/azure/azure-monitor/insights/resource-group-insights
20 https://docs.microsoft.com/azure/devops/pipelines
21 https://docs.microsoft.com/azure/application-insights/app-insights-separate-resources
22 https://docs.microsoft.com/azure/azure-monitor/platform/metrics-charts
23 https://docs.microsoft.com/azure/azure-monitor/log-query/cross-workspace-query
24 https://docs.microsoft.com/azure/azure-monitor/platform/alerts-overview
25 https://docs.microsoft.com/azure/azure-monitor/platform/alerts-dynamic-thresholds
26 https://docs.microsoft.com/azure/azure-monitor/platform/action-groups#create-an-action-group-by-using-the-azure-portal
27 https://docs.microsoft.com/azure/azure-monitor/platform/itsmc-overview
28 https://docs.microsoft.com/azure/azure-monitor/platform/activity-log-alerts-webhook

●● Remediate situations identified in alerts with Azure Automation runbooks29 or Logic Apps30 that can
be launched from an alert using webhooks.
●● Use autoscaling31 to dynamically increase and decrease your compute resources based on collected
metrics.
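For example, here's a hedged Azure PowerShell sketch of a metric alert that fires when average CPU on a VM exceeds 80 percent over a five-minute window; the resource IDs, names, and action group are placeholders.
# Hypothetical resource IDs - substitute your own.
$vmId = "/subscriptions/<subId>/resourceGroups/azwe-rg-devtest-logs-001/providers/Microsoft.Compute/virtualMachines/azsu-d-sql01-01"
$actionGroupId = "/subscriptions/<subId>/resourceGroups/azwe-rg-devtest-logs-001/providers/microsoft.insights/actionGroups/opsTeam"

# Fire when average CPU is above 80% over the evaluation window.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80

Add-AzMetricAlertRuleV2 -Name "HighCpuAlert" `
    -ResourceGroupName "azwe-rg-devtest-logs-001" `
    -WindowSize 00:05:00 -Frequency 00:05:00 `
    -TargetResourceId $vmId `
    -Condition $criteria `
    -ActionGroupId $actionGroupId `
    -Severity 3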

Prepare dashboards and workbooks


Ensuring that your development and operations have access to the same telemetry and tools allows them
to view patterns across your entire environment and minimize your Mean Time To Detect (MTTD) and
Mean Time To Restore (MTTR).
●● Prepare custom dashboards32 based on standard metrics and logs for the different roles in your
organization. Dashboards can combine data from all Azure resources.
●● Prepare Workbooks33 to ensure knowledge sharing between development and operations. It could
be prepared as dynamic reports with metric charts and log queries or even as troubleshooting guides
designed by developers helping customer support or operations to handle fundamental problems.

Continuously optimize
Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which
recommends continuously tracking your KPIs and user behavior metrics and optimizing them through
planning iterations.
Azure Monitor helps you collect metrics and logs relevant to your business and to add new data points in
the following deployment as required.
●● Use tools in Application Insights to track end-user behavior and engagement34.
●● Use Impact Analysis35 to help you prioritize which areas to focus on to drive to important KPIs.

Explore Azure Monitor and Log Analytics


When you run at cloud scale, you need intelligent logging and monitoring tools that scale to your
needs and provide insight into your data in real time.
Azure Monitor is Microsoft's native cloud monitoring solution. Azure Monitor collects monitoring telemetry
from different kinds of on-premises and Azure sources.
Azure Monitor provides management tools, such as those in Azure Security Center and Azure Automation,
and enables ingestion of custom log data to Azure.
The service aggregates and stores this telemetry in a log data store optimized for cost and performance.
With Azure Monitor, you can analyze data, set up alerts, get end-to-end views of your applications,
and use machine-learning-driven insights to identify and resolve problems quickly.

29 https://docs.microsoft.com/azure/automation/automation-webhooks
30 https://docs.microsoft.com/connectors/custom-connectors/create-webhook-trigger
31 https://docs.microsoft.com/azure/azure-monitor/learn/tutorial-autoscale-performance-schedule
32 https://docs.microsoft.com/azure/application-insights/app-insights-tutorial-dashboards
33 https://docs.microsoft.com/azure/application-insights/app-insights-usage-workbooks
34 https://docs.microsoft.com/azure/application-insights/app-insights-tutorial-users
35 https://docs.microsoft.com/azure/application-insights/app-insights-usage-impact

In this tutorial, we'll focus on the Log Analytics part of Azure Monitor. We'll learn how to:
●● Set up a Log Analytics workspace.
●● Connect virtual machines to a Log Analytics workspace.
●● Configure the Log Analytics workspace to collect custom performance counters.
●● Analyze the telemetry using Kusto Query Language.

Getting started
1. To follow along, you'll need a resource group with one or more virtual machines that you can access
over RDP.
2. Log into Azure Shell36. Execute the script below. It will create a new resource group and a
new Log Analytics workspace. Take a record of the workspaceId of the Log Analytics workspace, as we'll
be using it again.
$ResourceGroup = "azwe-rg-devtest-logs-001"
$WorkspaceName = "azwe-devtest-logs-01"
$Location = "westeurope"

# List of solutions to enable
$Solutions = "CapacityPerformance", "LogManagement", "ChangeTracking", "ProcessInvestigator"

# Create the resource group if needed
try {
    Get-AzResourceGroup -Name $ResourceGroup -ErrorAction Stop
} catch {
    New-AzResourceGroup -Name $ResourceGroup -Location $Location
}

# Create the workspace
New-AzOperationalInsightsWorkspace -Location $Location -Name $WorkspaceName -ResourceGroupName $ResourceGroup

# List all solutions and their installation status
Get-AzOperationalInsightsIntelligencePacks -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName

# Add solutions
foreach ($solution in $Solutions) {
    Set-AzOperationalInsightsIntelligencePack -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -IntelligencePackName $solution -Enabled $true
}

# List enabled solutions
(Get-AzOperationalInsightsIntelligencePacks -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName).Where({($_.enabled -eq $true)})

# Enable IIS log collection using the agent
Enable-AzOperationalInsightsIISLogCollection -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName

# Windows Event log collection
New-AzOperationalInsightsWindowsEventDataSource -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName -EventLogName "Application" -CollectErrors -CollectWarnings -Name "Example Application Event Log"

36 https://shell.azure.com/powershell

3. Retrieve the Log Analytics workspace secure key.


Get-AzOperationalInsightsWorkspaceSharedKey `
-ResourceGroupName azwe-rg-devtest-logs-001 `
-Name azwe-devtest-logs-01

4. Map existing virtual machines to the Log Analytics workspace. The following script uses the
workspaceId and workspace secret key of the Log Analytics workspace to install the Microsoft Enterprise
Cloud Monitoring extension onto an existing VM.

$PublicSettings = @{"workspaceId" = "<myWorkspaceId>"}
$ProtectedSettings = @{"workspaceKey" = "<myWorkspaceKey>"}

Set-AzVMExtension -ExtensionName "Microsoft.EnterpriseCloud.Monitoring" `
    -ResourceGroupName "azwe-rg-devtest-logs-001" `
    -VMName "azsu-d-sql01-01" `
    -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" `
    -TypeHandlerVersion 1.0 `
    -Settings $PublicSettings `
    -ProtectedSettings $ProtectedSettings `
    -Location westeurope

5. Run the following script to configure the performance counters listed below to be collected from the
virtual machine.
#Login-AzureRmAccount

#Instance
##################################
$InstanceNameAll = "*"
$InstanceNameTotal = '_Total'

#Objects
##################################
$ObjectCache = "Cache"
$ObjectLogicalDisk = "LogicalDisk"
$ObjectMemory = "Memory"
$ObjectNetworkAdapter = "Network Adapter"
$ObjectNetworkInterface = "Network Interface"
$ObjectPagingFile = "Paging File"
$ObjectProcess = "Process"
$ObjectProcessorInformation = "Processor Information"
$ObjectProcessor = "Processor"

$ObjectSQLAgentAlerts = "SQLAgent:Alerts"
$ObjectSQLAgentJobs = "SQLAgent:Jobs"
$ObjectSQLAgentStatistics = "SQLAgent:Statistics"

$ObjectSQLServerAccessMethods = "SQLServer:Access Methods"
$ObjectSQLServerExecStatistics = "SQLServer:Exec Statistics"
$ObjectSQLServerLocks = "SQLServer:Locks"
$ObjectSQLServerSQLErrors = "SQLServer:SQL Errors"

$ObjectSystem = "System"

#Counters
#########################################################
$CounterCache = "Copy Read Hits %"

$CounterLogicalDisk =
    "% Free Space" `
    ,"Avg. Disk sec/Read" `
    ,"Avg. Disk sec/Transfer" `
    ,"Avg. Disk sec/Write" `
    ,"Current Disk Queue Length" `
    ,"Disk Read Bytes/sec" `
    ,"Disk Reads/sec" `
    ,"Disk Transfers/sec" `
    ,"Disk Writes/sec"

$CounterMemory =
    "% Committed Bytes In Use" `
    ,"Available MBytes" `
    ,"Page Faults/sec" `
    ,"Pages Input/sec" `
    ,"Pages Output/sec" `
    ,"Pool Nonpaged Bytes"

$CounterNetworkAdapter =
    "Bytes Received/sec" `
    ,"Bytes Sent/sec"

$CounterNetworkInterface = "Bytes Total/sec"

$CounterPagingFile =
    "% Usage" `
    ,"% Usage Peak"

$CounterProcess = "% Processor Time"

$CounterProcessorInformation =
    "% Interrupt Time" `
    ,"Interrupts/sec"

$CounterProcessor = "% Processor Time"
$CounterProcessorTotal = "% Processor Time"

$CounterSQLAgentAlerts = "Activated alerts"
$CounterSQLAgentJobs = "Failed jobs"
$CounterSQLAgentStatistics = "SQL Server restarted"
$CounterSQLServerAccessMethods = "Table Lock Escalations/sec"
$CounterSQLServerExecStatistics = "Distributed Query"
$CounterSQLServerLocks = "Number of Deadlocks/sec"
$CounterSQLServerSQLErrors = "Errors/sec"

$CounterSystem = "Processor Queue Length"

#########################################################
$global:number = 1 # The Name parameter needs to be unique, which is why the function increments this number
#########################################################

function AddPerfCounters ($PerfObject, $PerfCounters, $Instance)
{
    ForEach ($Counter in $PerfCounters)
    {
        New-AzOperationalInsightsWindowsPerformanceCounterDataSource `
            -ResourceGroupName 'azwe-rg-devtest-logs-001' `
            -WorkspaceName 'azwe-devtest-logs-01' `
            -ObjectName $PerfObject `
            -InstanceName $Instance `
            -CounterName $Counter `
            -IntervalSeconds 10 `
            -Name "Windows Performance Counter $global:number"
        $global:number++
    }
}

AddPerfCounters -PerfObject $ObjectLogicalDisk -PerfCounter $CounterLogicalDisk -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectNetworkAdapter -PerfCounter $CounterNetworkAdapter -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectNetworkInterface -PerfCounter $CounterNetworkInterface -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectPagingFile -PerfCounter $CounterPagingFile -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectProcess -PerfCounter $CounterProcess -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectProcessorInformation -PerfCounter $CounterProcessorInformation -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectProcessor -PerfCounter $CounterProcessor -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectProcessor -PerfCounter $CounterProcessorTotal -Instance $InstanceNameTotal
AddPerfCounters -PerfObject $ObjectSQLAgentAlerts -PerfCounter $CounterSQLAgentAlerts -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectSQLAgentJobs -PerfCounter $CounterSQLAgentJobs -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectSQLAgentStatistics -PerfCounter $CounterSQLAgentStatistics -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectSQLServerAccessMethods -PerfCounter $CounterSQLServerAccessMethods -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectSQLServerExecStatistics -PerfCounter $CounterSQLServerExecStatistics -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectSQLServerLocks -PerfCounter $CounterSQLServerLocks -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectSQLServerSQLErrors -PerfCounter $CounterSQLServerSQLErrors -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectSystem -PerfCounter $CounterSystem -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectMemory -PerfCounter $CounterMemory -Instance $InstanceNameAll
AddPerfCounters -PerfObject $ObjectCache -PerfCounter $CounterCache -Instance $InstanceNameAll

6. To generate some interesting performance statistics, download the HeavyLoad utility37 (a free
load-testing utility) and run it on the virtual machine to simulate high CPU, memory, and IOPS
consumption.

37 https://www.jam-software.com/heavyload/

How it works
1. Log Analytics works by running the Microsoft Monitoring Agent service on the machine. The service
locally captures and buffers the events and pushes them securely out to the Log Analytics workspace
in Azure.
2. Log in to the virtual machine, navigate to C:\Program Files\Microsoft Monitoring Agent\MMA, and open the control panel. It shows the details of the connected Log Analytics workspace. You can also add multiple Log Analytics workspaces to publish the log data into several workspaces.

Summary
So far, we've created a log analytics workspace in a resource group.
The log analytics workspace has been configured to collect performance counters, event logs, and IIS
Logs.
A virtual machine has been mapped to the log analytics workspace using the Microsoft Enterprise cloud
monitoring extension.
HeavyLoad has been used to simulate high CPU, memory, and IOPS on the virtual machine.

Examine Kusto Query Language (KQL)


Kusto is the primary way to query Log Analytics. It provides both a query language and a set of control
commands.
Kusto can be used directly within Azure Data Explorer.
Azure Data Studio also offers a Kusto query experience and supports the creation of Jupyter-style notebooks for Kusto queries.
See Getting Started with Kusto Queries.38
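Kusto queries can also be run from scripts. Here's a minimal PowerShell sketch, assuming the Az.OperationalInsights module is installed, you're signed in with Connect-AzAccount, and the workspace ID below is a placeholder for your own:

# Replace the placeholder with your Log Analytics workspace ID (a GUID).
$workspaceId = '00000000-0000-0000-0000-000000000000'

# The Kusto query to run, as a plain string.
$query = @'
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer
'@

# Run the query against the workspace and display the result rows.
$result = Invoke-AzOperationalInsightsQuery -WorkspaceId $workspaceId -Query $query
$result.Results | Format-Table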

Walkthrough
Note: This walkthrough continues the previous lesson on Azure Log Analytics, and the walkthrough
started within it.
1. Log in to Azure portal39 and navigate to the log analytics workspace. From the left blade in the log
analytics workspace, click Logs. It will open the Logs window, ready for you to start exploring all the
data points captured into the workspace.
2. To query the logs, we'll need to use the Kusto Query Language. Run the following query to list the last
heartbeat of each machine connected to the log analytics workspace.

38 https://docs.microsoft.com/azure/data-explorer/kusto/concepts/
39 https://portal.azure.com

// Show the last heartbeat sent by each computer
Heartbeat
| summarize arg_max(TimeGenerated, *) by Computer

3. Show a list of all distinct counters being captured.


// What data is being collected?
// List the collected performance counters and object types (Process, Memory, Processor, and so on)
Perf
| summarize by ObjectName, CounterName

4. Show a count of the data points collected in the last 24 hours. The result shows that we have 66M
data points. We can query against them in near real time to analyze and correlate insights.
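For example, a count over the Perf table (other tables could be added with a union) can be produced with a query along these lines:

// Count the Perf data points collected in the last 24 hours
Perf
| where TimeGenerated > ago(24h)
| count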

5. Run the following query to generate the max CPU Utilization trend over the last 24 hours, aggregated
at a granularity of 1 min. Render the data as a time chart.
Perf
| where ObjectName == "Processor" and InstanceName == "_Total"
| summarize max(CounterValue) by Computer, bin(TimeGenerated, 1m)
| render timechart

6. Run the following query to see all the processes running on that machine contributing to the CPU
Utilization. Render the data in a pie chart.

Perf
| where ObjectName contains "process"
and InstanceName !in ("_Total", "Idle")
and CounterName == "% Processor Time"
| summarize avg(CounterValue) by InstanceName, CounterName, bin(TimeGenerated, 1m)
| render piechart

There's more
This unit has introduced the basic concepts of Log Analytics and how to get started with the basics.
We've only scratched the surface of what is possible with Log Analytics.
We would encourage you to try out the advanced tutorials available for Log Analytics on Microsoft
Docs40.

Explore Application Insights


You install a small instrumentation package in your application and set up an Application Insights
resource in the Microsoft Azure portal.
The instrumentation monitors your app and sends telemetry data to the portal. (The application can run
anywhere - it doesn't have to be hosted in Azure.)
You can instrument the web service application, background components, and JavaScript in the web
pages themselves.

40 https://docs.microsoft.com/azure/azure-monitor/

Also, you can pull in telemetry from the host environments, such as performance counters, Azure diagnostics, or Docker logs.
You can also set up web tests that periodically send synthetic requests to your web service.
All these telemetry streams are integrated into the Azure portal, where you can apply powerful analytic
and search tools to the raw data.

What's the overhead?


The impact on your app's performance is minimal. Tracking calls are non-blocking and are batched and
sent in a separate thread.

What does Application Insights monitor?


Application Insights is aimed at the development team to help you understand how your app is doing
and how it's being used. It monitors:
●● Request rates, response times, and failure rates - Find out which pages are most popular, at what times of day, and where your users are. See which pages perform best. If your response times and failure rates rise when there are more requests, perhaps you have a resourcing problem.
●● Dependency rates, response times, and failure rates - Find out whether external services are slowing
you down.
●● Exceptions - Analyze the aggregated statistics or pick specific instances and drill into the stack trace
and related requests. Both server and browser exceptions are reported.
●● Pageviews and load performance - reported by your users' browsers.
●● AJAX calls from web pages - rates, response times, and failure rates.
●● User and session count.
●● Performance counters from your Windows or Linux server machines include CPU, memory, and
network usage.
●● Host diagnostics from Docker or Azure.
●● Diagnostic trace logs from your app - so that you can correlate trace events with requests.
●● Custom events and metrics that you write yourself in the client or server code to track business events such as items sold or games won (a sketch follows this list).
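For a sense of what such a custom event looks like on the wire, here's a minimal PowerShell sketch that posts one to the public Application Insights ingestion endpoint; the instrumentation key, event name, and property are hypothetical placeholders (in application code, you'd normally use the Application Insights SDK instead):

# Hypothetical placeholder; use your resource's instrumentation key.
$iKey = '00000000-0000-0000-0000-000000000000'

# Build a custom event envelope; 'ItemSold' and 'sku' are example names.
$body = @{
    name = 'Microsoft.ApplicationInsights.Event'
    time = (Get-Date).ToUniversalTime().ToString('o')
    iKey = $iKey
    data = @{
        baseType = 'EventData'
        baseData = @{ name = 'ItemSold'; properties = @{ sku = 'ABC-123' } }
    }
} | ConvertTo-Json -Depth 5

# Post the event to the Application Insights track endpoint.
Invoke-RestMethod -Method Post -Uri 'https://dc.services.visualstudio.com/v2/track' `
    -ContentType 'application/json' -Body $body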

Where do I see my telemetry?


There are plenty of ways to explore your data. Check out this article for more information - Smart
detection and manual alerts41.
Automatic alerts adapt to your app's usual patterns of telemetry and trigger when there's something
outside the usual pattern. You can also set alerts42 on levels of custom or standard metrics.

Application map
The components of your app, with key metrics and alerts.

Profiler
Inspect the execution profiles of sampled requests.

Usage analysis
Analyze user segmentation and retention.

41 https://docs.microsoft.com/azure/application-insights/app-insights-proactive-diagnostics
42 https://docs.microsoft.com/azure/azure-monitor/app/alerts

Diagnostic search for instance data


Search and filter events such as requests, exceptions, dependency calls, log traces, and page views.

Metrics Explorer for aggregated data


Explore, filter, and segment aggregated data such as rates of requests, failures, and exceptions; response times; and page load times.

Dashboards
Mash up data from multiple resources and share it with others. Great for multi-component applications and continuous display in the team room.

Live Metrics Stream


When you deploy a new build, watch these near-real-time performance indicators to ensure everything
works as expected.

Analytics
Answer challenging questions about your app's performance and usage by using this powerful query
language.

Visual Studio
See performance data in the code. Go to code from stack traces.

Snapshot debugger
Debug snapshots sampled from live operations, with parameter values.

Power BI
Integrate usage metrics with other business intelligence.

REST API
Write code to run queries over your metrics and raw data.

Continuous export
Bulk export of raw data to storage as soon as it arrives.

Implement Application Insights


Monitor
Install Application Insights in your app, set up availability web tests43, and:
●● Set up a dashboard44 for your team room to keep an eye on load, responsiveness, and the performance of your dependencies, page loads, and AJAX calls.
●● Discover the slowest and most frequently failing requests.
●● Watch Live Stream45 when you deploy a new release to know immediately about any degradation.

Detect, Diagnose
If you receive an alert or discover a problem:
●● Assess how many users are affected.
●● Correlate failures with exceptions, dependency calls, and traces.

43 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
44 https://docs.microsoft.com/azure/azure-monitor/app/app-insights-dashboards
45 https://docs.microsoft.com/azure/application-insights/app-insights-live-stream

●● Examine profiler, snapshots, stack dumps, and trace logs.

Build, Measure, Learn


Measure the effectiveness46 of each new feature that you deploy.
●● Plan to measure how customers use new UX or business features.
●● Write custom telemetry into your code.
●● Base the next development cycle on hard evidence from your telemetry.

Get started
Application Insights is one of the many services hosted within Microsoft Azure, and telemetry is sent
there for analysis and presentation.
So, before you do anything else, you'll need a subscription to Microsoft Azure47.
It's free to sign up, and if you choose the basic pricing plan48 of Application Insights, there's no charge
until your application has grown to have large usage.
If your organization already has a subscription, they can add your Microsoft account to it.
There are several ways to get started. Begin with whichever works best for you. You can add the others
later.
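If you prefer scripting to the portal, the Application Insights resource itself can also be created from PowerShell. A minimal sketch, assuming the Az.ApplicationInsights module; the resource group, name, and location are hypothetical placeholders:

# Create a resource group and an Application Insights resource in it.
New-AzResourceGroup -Name 'my-appinsights-rg' -Location 'West Europe'
New-AzApplicationInsights -ResourceGroupName 'my-appinsights-rg' `
    -Name 'my-web-app-insights' -Location 'West Europe'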

At run time
Instrument your web app on the server. It avoids any updates to the code. You need admin access to your server.
●● IIS on-premises or on a VM49
●● Azure web app or VM50
●● J2EE51

At development time
Add Application Insights to your code. Allows you to write custom telemetry and to instrument back-end
and desktop apps.
●● Visual Studio52 2013 Update 2 or later.
●● Java53
●● Node.js
●● Other platforms54

46 https://docs.microsoft.com/azure/application-insights/app-insights-usage-overview
47 https://azure.com/
48 https://azure.microsoft.com/pricing/details/application-insights/
49 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-performance-live-website-now
50 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-performance-live-website-now
51 https://docs.microsoft.com/azure/application-insights/app-insights-java-live
52 https://docs.microsoft.com/azure/azure-monitor/app/asp-net
53 https://docs.microsoft.com/azure/application-insights/app-insights-java-get-started
54 https://docs.microsoft.com/azure/application-insights/app-insights-platforms

●● Instrument your web pages55 for page view, AJAX, and other client-side telemetry.
●● Analyze mobile app usage56 by integrating with Visual Studio App Center.
●● Availability tests57 - ping your website regularly from our servers.

Exercise - Add Application Insights to an ASP.NET Core application


Application performance management (APM) is a discipline that includes all the tools and activities
involved in observing how software and hardware are doing.
These tools present performance information in the form product owners and software development
teams can use to make decisions.
Application Insights is a Microsoft Azure native APM tool that is cross-platform. It's specialized in providing a rich and intelligent performance management toolset for Azure-hosted web apps.
In this tutorial, we'll learn how to get started with App Insights. We'll cover:
●● Adding App Insights to your dotnet core app
●● Accessing App Insights from within the Azure portal

Getting started
1. To add Application Insights to your ASP.NET58 website, you need to:
●● Install Visual Studio 2019 for Windows with the following workloads:
●● ASP.NET59 and web development (Don't uncheck the optional components)
2. In Visual Studio, create a new dotnet core project. Right-click the project, and from the context menu,
select Add, Application Insights Telemetry.

55 https://docs.microsoft.com/azure/application-insights/app-insights-javascript
56 https://docs.microsoft.com/azure/application-insights/app-insights-mobile-center-quickstart
57 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
58 http://asp.net/
59 http://asp.net/

(Depending on your Application Insights SDK version, you may be prompted to upgrade to the latest
SDK release. If prompted, select Update SDK.)
3. From the Application Insights configuration screen, click Get started to start setting up App Insights.

4. Choose to set up a new resource group and select the location where you want the telemetry data to
be persisted.

Summary
So far, we have added App Insights in a dotnet core application.
The Application Insights getting started experience gives you the ability to create a new resource group in the desired location, where the App Insights instance gets created.
The instrumentation key for the app insights instance is injected into the application configuration
automatically.
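Outside Visual Studio, the same onboarding can be approximated from the command line. A sketch, assuming the .NET CLI; the connection string is a placeholder for the one shown on your Application Insights resource, and your app still needs the AddApplicationInsightsTelemetry() registration that the Visual Studio experience adds for you:

# Add the Application Insights SDK package to the project.
dotnet add package Microsoft.ApplicationInsights.AspNetCore

# Supply the connection string through the environment (one of the
# configuration sources the SDK reads), then run the app.
$env:APPLICATIONINSIGHTS_CONNECTION_STRING = 'InstrumentationKey=00000000-0000-0000-0000-000000000000'
dotnet run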

How to do it
1. Run your app with F5. Open different pages to generate some telemetry. In Visual Studio, you'll see a
count of the events that have been logged.

2. You can see your telemetry either in Visual Studio or in the Application Insights web portal. Search telemetry in Visual Studio to help you debug your app. Monitor performance and usage in the web portal when your system is live. To view Application Insights data in Visual Studio, select Solution Explorer > Connected Services > right-click Application Insights, and then click Search Live Telemetry. In the Visual Studio Application Insights Search window, you'll see the data from your application for telemetry generated on the server side of your app. Experiment with the filters, and click any event to see more detail.

3. You can also see telemetry in the Application Insights web portal (unless you choose to install only the
SDK). The portal has more charts, analytic tools, and cross-component views than Visual Studio. The
portal also provides alerts.
Open your Application Insights resource. Either sign into the Azure portal and find it there, or select
Solution Explorer > Connected Services > right-click Application Insights > Open Application Insights
Portal and let it take you there.
The portal opens on a view of the telemetry from your app.

How it works
Application Insights configures a unique key (called an AppInsights Key) in your application. The Application Insights SDK uses this key to identify the Azure App Insights workspace the telemetry data needs to be uploaded to. The SDK and the key are merely used to pump the telemetry data points out of your application. The heavy lifting of data correlation, analysis, and insights is done within Azure.

There's more
This tutorial taught us how to get started by adding Application Insights into your dotnet core application.
App Insights offers a wide range of features.
You can learn more about these at Start Monitoring Your ASP.NET Core Web Application60.

Summary
This module introduced you to continuous feedback practices and tools to track usage and flow, such as
Azure Logs Analytics, Kusto Query Language (KQL), and Application Insights.
You learned how to describe the benefits and usage of:
●● Implement tools to track feedback.
●● Plan for continuous monitoring.
●● Implement Application Insights.
●● Use Kusto Query Language (KQL).

60 https://docs.microsoft.com/azure/azure-monitor/learn/dotnetcore-quick-start

Learn more
●● Give feedback with Test & Feedback extension - Azure Test Plans | Microsoft Docs61.
●● Request stakeholder feedback - Azure Test Plans | Microsoft Docs62.
●● Continuous monitoring with Azure Monitor - Azure Monitor | Microsoft Docs63.
●● Continuous monitoring of your DevOps release pipeline with Azure Pipelines and Azure Application Insights - Azure Monitor | Microsoft Docs64.
●● What is Azure Application Insights? - Azure Monitor | Microsoft Docs65.
●● KQL quick reference | Microsoft Docs66.

61 https://docs.microsoft.com/azure/devops/test/provide-stakeholder-feedback
62 https://docs.microsoft.com/azure/devops/test/request-stakeholder-feedback
63 https://docs.microsoft.com/azure/azure-monitor/continuous-monitoring
64 https://docs.microsoft.com/azure/azure-monitor/app/continuous-monitoring
65 https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview
66 https://docs.microsoft.com/azure/data-explorer/kql-quick-reference

Develop monitor and status dashboards


Introduction
This module explains steps to develop monitoring with Azure Dashboards, work with View Designer in Azure Monitor, and create Azure Monitor Workbooks. It also explores tools to support monitoring with Power BI.

Learning objectives
After completing this module, students and professionals can:
●● Configure Azure Dashboards.
●● Work with View Designer in Azure Monitor.
●● Create Azure Monitor Workbooks.
●● Monitor with Power BI.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore Azure Dashboards


Visualizations such as charts and graphs can help you analyze your monitoring data to drill down on
issues and identify patterns.
Depending on the tool you use, you may also share visualizations with other users inside and outside of
your organization.
Azure dashboards67 are the primary dashboarding technology for Azure.
They're handy in providing a single pane of glass over your Azure infrastructure and services, allowing you to identify critical issues quickly.

67 https://docs.microsoft.com/azure/azure-portal/azure-portal-dashboards

Advantages
●● Deep integration into Azure. Visualizations can be pinned to dashboards from multiple Azure pages,
including metrics analytics, log analytics, and Application Insights.
●● Supports both metrics and logs.
●● Combine data from multiple sources, including output from Metrics explorer68, Log Analytics
queries69, and maps70 and availability71 in Application Insights.
●● Option for personal or shared dashboards. It's integrated with Azure role-based access control (RBAC)72.
●● Automatic refresh. Metrics refresh depends on the time range with a minimum of five minutes. Logs
refresh at one minute.
●● Parametrized metrics dashboards with timestamp and custom parameters.
●● Flexible layout options.
●● Full-screen mode.

Limitations
●● Limited control over log visualizations with no support for data tables. The total number of data series is limited to 10, with additional data series grouped under an 'other' bucket.
●● No custom parameters support for log charts.

68 https://docs.microsoft.com/azure/azure-monitor/platform/metrics-charts
69 https://docs.microsoft.com/azure/azure-monitor/log-query/log-query-overview
70 https://docs.microsoft.com/azure/azure-monitor/app/app-map
71 https://docs.microsoft.com/azure/azure-monitor/visualizations
72 https://docs.microsoft.com/azure/role-based-access-control/overview

●● Log charts are limited to the last 30 days.


●● Log charts can only be pinned to shared dashboards.
●● No interactivity with dashboard data.
●● Limited contextual drill-down.
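Because shared dashboards are ordinary Azure resources, they can also be deployed from a script. A minimal PowerShell sketch, assuming the Az.Portal module and a dashboard JSON file exported from the portal; the file path, resource group, and dashboard name are hypothetical placeholders:

# Publish a dashboard definition (exported from the portal) as a shared dashboard.
New-AzPortalDashboard -ResourceGroupName 'my-dashboards-rg' `
    -Name 'ops-overview' `
    -DashboardPath '.\ops-overview-dashboard.json'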

Examine view designer in Azure Monitor


View Designer in Azure Monitor73 allows you to create custom visualizations with log data. They're used
by monitoring solutions74 to present the data they collect.

Advantages
●● Rich visualizations for log data.
●● Export and import views to transfer them to other resource groups and subscriptions.
●● Integrates into Log Analytic management model with workspaces and monitoring solutions.
●● Filters75 for custom parameters.
●● Interactive, supporting multi-level drill-in (a view that drills into another view).

Limitations
●● Supports logs but not metrics.
●● No personal views. Available to all users with access to the workspace.
●● No automatic refresh.
●● Limited layout options.
●● No support for querying across multiple workspaces or Application Insights applications.
●● Queries are limited in response size to 8 MB and query execution time of 110 seconds.

73 https://docs.microsoft.com/azure/log-analytics/log-analytics-view-designer
74 https://docs.microsoft.com/azure/azure-monitor/insights/solutions
75 https://docs.microsoft.com/azure/azure-monitor/platform/view-designer-filters

Explore Azure Monitor workbooks


Workbooks76 are interactive documents that provide deep insights into your data, investigation, and collaboration inside the team. Specific examples where workbooks are helpful are troubleshooting guides and incident postmortems.

Advantages
●● Supports both metrics and logs.
●● Supports parameters, enabling interactive reports; selecting an element in a table will dynamically update associated charts and visualizations.
●● Document-like flow.
●● Option for personal or shared workbooks.
●● Easy, collaborative-friendly authoring experience.
●● Templates support the public GitHub-based template gallery.

Limitations
●● No automatic refresh.
●● No dense layout like dashboards, which makes workbooks less useful as a single pane of glass. They're intended more for providing deeper insights.

Explore Power BI
Power BI77 is beneficial for creating business-centric dashboards and reports analyzing long-term KPI
trends.

76 https://docs.microsoft.com/azure/application-insights/app-insights-usage-workbooks
77 https://powerbi.microsoft.com/documentation/powerbi-service-get-started/

You can import the results of a log query78 into a Power BI dataset so you can take advantage of its
features, such as combining data from different sources and sharing reports on the web and mobile
devices.

Advantages
●● Rich visualizations.
●● Extensive interactivity, including zoom-in and cross-filtering.
●● Easy to share throughout your organization.
●● Integration with other data from multiple data sources.
●● Better performance with results cached in a cube.

Limitations
●● Supports logs but not metrics.
●● No Azure RM integration. Can't manage dashboards and models through Azure Resource Manager.
●● Query results need to be imported into the Power BI model to configure. Limitation on result size and refresh.
●● Limited data refresh of eight times per day for Pro licenses (currently 48 for Premium).

Build your own custom application


You can access log and metric data in Azure Monitor through its API using any REST client. It allows you to build your own custom websites and applications (a sketch follows the lists below).

78 https://docs.microsoft.com/azure/log-analytics/log-analytics-powerbi

Advantages
●● Complete flexibility in UI, visualization, interactivity, and features.
●● Combine metrics and log data with other data sources.

Disadvantages
●● Significant engineering effort is required.
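As a sketch of what such a custom client does, the Log Analytics query REST API can be called directly from PowerShell; the workspace ID is a hypothetical placeholder, and the token is acquired with the Az module:

# Acquire a token for the Log Analytics API and run a query against a workspace.
$workspaceId = '00000000-0000-0000-0000-000000000000'
$token = (Get-AzAccessToken -ResourceUrl 'https://api.loganalytics.io').Token

$response = Invoke-RestMethod `
    -Uri "https://api.loganalytics.io/v1/workspaces/$workspaceId/query" `
    -Headers @{ Authorization = "Bearer $token" } `
    -Method Post -ContentType 'application/json' `
    -Body (@{ query = 'Heartbeat | summarize count() by Computer' } | ConvertTo-Json)

# The response contains tables of columns and rows for your own UI to render.
$response.tables[0].rows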

Summary
This module explained steps to develop monitoring with Azure Dashboards, work with View Designer in Azure Monitor, and create Azure Monitor Workbooks. It also explored tools to support monitoring with Power BI.
You learned how to describe the benefits and usage of:
●● Configure Azure Dashboards.
●● Work with View Designer in Azure Monitor.
●● Create Azure Monitor Workbooks.
●● Monitor with Power BI.

Learn more
●● Create a dashboard in the Azure portal - Azure portal | Microsoft Docs79.
●● Create views to analyze log data in Azure Monitor - Azure Monitor | Microsoft Docs80.
●● Azure Monitor view designer to workbooks transition guide - Azure Monitor | Microsoft Docs81.

●● Log Analytics integration with Power BI and Excel - Azure Monitor | Microsoft Docs82.

79 https://docs.microsoft.com/azure/azure-portal/azure-portal-dashboards
80 https://docs.microsoft.com/azure/azure-monitor/visualize/view-designer
81 https://docs.microsoft.com/azure/azure-monitor/visualize/view-designer-conversion-overview
82 https://docs.microsoft.com/azure/azure-monitor/logs/log-powerbi

Share knowledge within teams


Introduction
This module describes how to share knowledge within teams, Azure DevOps Wikis, and IT Service Management Connector (ITSMC). It also covers integration with Azure Boards.

Learning objectives
After completing this module, students and professionals can:
●● Share knowledge with development teams.
●● Work with Azure DevOps Wikis.
●● Configure IT Service Management Connector (ITSMC).
●● Integrate with Azure Boards.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Share acquired knowledge within development teams


Organizational knowledge builds up within development teams over time.
It's essential to avoid endlessly relearning the same lessons that the team (and the organization) has
learned before.
As staff turnover occurs in teams, it's easy for organizational knowledge to be lost. It's important to
record this knowledge to avoid this loss.
A simple example is coding standards the team has developed. They should be captured, not just exist as word of mouth.
Relearning old lessons is both wasteful and expensive.

To reflect
Azure DevOps can be used with a wide range of existing tooling used to share knowledge.
●● Which knowledge-sharing tools do you currently use (if any)?
●● What do you or don't you like about the tools?

Introduction to Azure DevOps project wikis


Azure DevOps Projects includes an option to create a project wiki.

Use the wiki to share information with your team, helping them understand and contribute to your project.
Wikis are stored in a repository. No wiki is automatically provisioned.

Prerequisites
You must have permission to Create a Repository to publish code as a wiki. While the Project Administrators group has this permission by default, it can be assigned to others.
To add or edit wiki pages, you should be a member of the Contributors group.
All members of the team project (including stakeholders) can view the wiki.

Creation
The following article includes details on creating a wiki: Create a Wiki for your project83.
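If you script your project setup, a wiki can also be provisioned from the command line. A sketch, assuming the Azure DevOps CLI extension for the Azure CLI is installed; the organization and project names are hypothetical placeholders:

# Point the Azure DevOps CLI at your organization and project.
az devops configure --defaults organization=https://dev.azure.com/fabrikam project=FabrikamFiber

# Create the project wiki (it's backed by its own repository).
az devops wiki create --name FabrikamFiber.wiki --type projectwiki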

Editing the wiki


The following article includes details on publishing a Git repository to a wiki: Publish a Git repository to
a wiki84.

Markdown
Azure DevOps Wikis are written in Markdown and can also include file attachments and videos.
Markdown is a markup language: plain text that includes formatting syntax. It has become the de facto standard for how project and software documentation is now written.
One key reason is that, because it's made up of plain text, it's much easier to merge in the same way that program code is merged.
It allows documents to be managed with the same tools used to create other code in a project.

GitHub Flavored Markdown (GFM)


GFM is a formal specification released by GitHub that added extensions to a base format called CommonMark. GFM is widely used both within GitHub and externally. GFM is rendered in Azure DevOps Wikis.

83 https://docs.microsoft.com/azure/devops/project/wiki/wiki-create-repo
84 https://docs.microsoft.com/azure/devops/project/wiki/publish-repo-to-wiki

Mermaid
Mermaid has become an essential extension to Markdown because it allows diagrams to be included in
the documentation.
It overcomes the previous difficulties in merging documentation that includes diagrams represented as
binary files.

Details on Mermaid syntax can be found here: Mermaid Introduction85

Integrate with Azure Boards


Work item integration functionality allows you to easily create work items in Azure DevOps with relevant
Application Insights data embedded in them.
It's straightforward to configure this association and begin creating work items (this process should only
take a minute or two).

Configuring work item integration


To configure work item integration for an Application Insights resource, navigate to your settings blade
for that resource.
You'll note a new item named "Work Items" in the "Configure" section of the settings blade.

Click on it, and the configuration blade for work items will open.

85 https://mermaid-js.github.io/mermaid/

All you need to do is fill out the information about the Azure DevOps system to which you want to
connect, along with the project where you want to write your work items:

Once that information is in place, you can click on the Authorization button, where you'll be redirected to
authorize access in your selected Azure DevOps system so that work items can be written there:

Once you've completed the authorization process, you can set defaults for “area path” and "assigned to."

Only area path is required. (If you have not set up specific area paths in your project, that's OK; just use the name of the project, as it's the top-level area path.)
Click OK, and assuming you have entered everything correctly, you'll see a message stating “Validation
Successful,” and the blade will close. You're now ready to start creating work items!

Creating work items


Creating work items from Application Insights is easy.
There are currently two locations where you can create work items: Proactive detection and individual instances of activity (that is, exceptions, failures, requests, and so on). We'll show a simple example of the latter, but the functionality is identical in either case.
In this example, we're looking at a test web app that we published to Azure. We started to drill into the
activity for this app by looking at the Failures blade (but we could also get to this same information
through the Search button or the Metrics Explorer):

We can see several exceptions that fired when the user clicked on the Home/About tab on this web app. If we drill into this group of exceptions, we can see the list and then choose an individual exception:

Looking at the detailed blade for this exception, we see that there are now two buttons available at the
top of the blade that read “New Work Item” and "View Work Items."
To create a work item, click on the first of these buttons, and it opens the new work item blade:

As you can see, just about everything you need in your average scenario has been filled out for you.
The default values for “area path” and "assigned to" that you chose in the initial configuration are set, and
all the detailed information we have available for this exception has been added to the details field.
You can override the title, area path, and assigned to fields in this blade if you wish, or you can add to the
captured details.
When you're ready to create your work item, click on the “OK” button, and your work item will be written
to Azure DevOps.

Viewing work items


Once you have created one or more work items in Application Insights, you can quickly view them in Azure DevOps.
The detailed blade for the event associated with the work item(s) will have the “View Work Items” button
enabled if you are in the Azure portal. To see the list, click the button:

If you click the link for the work item that you want to view, it will open in Azure DevOps:

Advanced Configuration
Some of you may have noticed that there's a switch on the configuration blade that is labeled “Advanced
Configuration.”
We have provided additional functionality to help you configure your ability to write to Azure DevOps in scenarios where you've changed or extended some of the out-of-the-box settings.
An excellent example is choosing more required fields. There's no way to handle this extra required-field mapping in the standard config, but you can handle it in advanced mode.
If you click on the switch, the controls at the bottom of the blade will change to look like this:

You can see that you're now given a JSON-based editing box where you can specify all the settings/
mappings that you might need to handle modifications to your Azure DevOps project.

Next steps
We think that it's an excellent start to integrating work item functionality with Application Insights.
But please keep in mind that it's essentially the 1.0 version of this feature set.
We have much work planned, and you'll see a significant evolution in this space over the upcoming months.
Just for starters, let me outline a few of the things that we already have planned or are investigating:
●● Support for all work item types – You probably noticed that the current feature set locks the work item
type to just “bug.” Logging bugs was our primary ask for this space, so that is where we started, but
we certainly don't think that is where things should end. One of the more near-term changes you'll
see is handling all work item types for all supported processes in Azure DevOps.
●● Links back to Application Insights – It's great to create a work item with App Insights data in it, but
what happens when you are in your ALM/DevOps system and looking at that item and want to quickly
navigate back to the source of the work item in App Insights? We plan to rapidly add links to the work
items to make this as fast and easy as possible.
●● More flexible configuration – Currently, our standard configuration only handles scenarios where users
haven't modified/extended their project in Azure DevOps. Today, if you have made these kinds of
changes, you'll need to switch to advanced configuration mode. In the future, we want to handle
everyday things that people might change (for example, making more fields required, adding new fields) in the standard configuration wherever possible. It requires some updates from our friends on
the Azure DevOps team, but they're already working on some of these for us. Once they're available,
we'll begin to make the standard configuration more flexible. In the meantime (and in the future), you
can always use the advanced configuration to overcome limitations.
●● Multiple profiles – Setting up a single configuration means that in shops where there are several ways
users commonly create work items, the people creating work items from Application Insights would
have to override values frequently. We plan to give users the capability to set up 1:n profiles, with
common values specified for each so that when you want to create a work item with that profile, you
can choose it from a drop-down list.
●● More sources of creation for work items – We'll continue to investigate (and take feedback on) other
places in Application Insights where it makes sense to create work items.

●● Automatic creation of work items – There are certainly scenarios we can imagine where we might want
a work item to be created for us based upon criteria. It is on the radar, but we're spending some
design time to limit the possibilities of super-noisy or runaway work item creation. We believe that
this is a powerful and convenient feature, but we want to reduce the potential for spamming the ALM/
DevOps system as much as possible.
●● Support for other ALM/DevOps systems – Hey, we think that Azure DevOps is an excellent product, but
we recognize that many of our users may use some other product for their ALM/DevOps, and we want
to meet people where they are. So, we're working on other first-tier integrations of popular ALM/
DevOps products. We also plan to provide a pure custom configuration choice (like advanced config for Azure DevOps) so that end users can hook up Application Insights to virtually any ALM/DevOps system.

Summary
This module described how to share knowledge within teams, Azure DevOps Wikis, and IT Service Management Connector (ITSMC). It also covered integration with Azure Boards.
You learned how to describe the benefits and usage of:
●● Share knowledge with development teams.
●● Work with Azure DevOps Wikis.
●● Configure IT Service Management Connector (ITSMC).
●● Integrate with Azure Boards.

Learn more
●● Create a project wiki to share information - Azure DevOps | Microsoft Docs86.
●● IT Service Management Connector overview - Azure Monitor | Microsoft Docs87.

86 https://docs.microsoft.com/azure/devops/project/wiki/wiki-create-repo
87 https://docs.microsoft.com/azure/azure-monitor/alerts/itsmc-overview

Design processes to automate application analytics


Introduction
This module helps design processes for Application Insights and explores telemetry and monitoring tools and technologies.

Learning objectives
After completing this module, students and professionals can:
●● Automate application analytics.
●● Assist DevOps with rapid responses and augmented search.
●● Integrate telemetry.
●● Implement monitoring tools and technologies.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Explore rapid responses and augmented search


In an Agile environment, you may typically find multiple development teams that work simultaneously.
Introducing new code or code changes daily and sometimes several times a day.
In such an immediate environment, it's common to find problems that have “slipped through the cracks” and made their way into the production environment. When these issues arise, they have probably already impacted end users, requiring a speedy resolution. It means that teams must conduct a rapid investigation to identify the root cause of the problem.
Identifying where these symptoms are coming from and then isolating the root cause is a challenging task. Symptoms can be found across various layers of a large hybrid IT environment, ranging from different servers/VMs, storage devices, and databases to the front-end and server-side code. Investigations that traditionally would take hours or days to complete must be completed within minutes.
Teams must examine the infrastructure and application logs as part of this investigation process. However, the massive amount of log records produced in these environments makes it impossible to do this manually.
It's much like trying to find a needle in a haystack. In most cases, these investigations are conducted using log management and analysis systems that collect and aggregate these logs (from infrastructure elements and applications), centralizing them in a single place and then providing search capabilities to explore the data.
These solutions make it possible to conduct the investigation, but they still rely entirely on investigation
skills and user knowledge. The user must know exactly what to search for and have a deep understanding
of the environment to use them effectively.

It's crucial to understand that the log files of applications are far less predictable than the log files of infrastructure elements. The errors are essentially messages and error numbers that have been introduced to the code by developers in an inconsistent manner.
So, search queries yield thousands of results in most cases and do not include important ones, even when the user is skilled. That leaves the user with the same “needle in the haystack” situation.

Assisting DevOps with Augmented Search


A new breed of log management and analysis technologies has evolved to solve this challenge. These
technologies facilitate the identification and investigation processes using Augmented Search.
Explicitly designed to deal with application logs' chaotic and unpredictable nature, Augmented Search
considers that users don't necessarily know what to search for, especially in the chaotic application layer
environment.
The analysis algorithm automatically identifies errors, risk factors, and problem indicators while analyzing
their severity by combining semantic processing, statistical models, and machine learning to analyze and
“understand” the events in the logs. These insights are displayed as intelligence layers on top of the
search results, helping the user quickly discover the most relevant and essential information.
Although DevOps engineers may be familiar with the infrastructure and system architecture, the data is
constantly changing with continuous fast-paced deployment cycles and constant code changes. It means
that DevOps teams can use their intuition and knowledge to start investigating each problem, but they
have blind spots that consume time because of the dynamic nature of the log data.
Combining the decisions that DevOps engineers make during their investigation with the Augmented
Search engine information layers on the critical problems that occurred during the period of interest can
help guide them through these blind spots quickly.
Combining the user's intellect, acquaintance with the system's architecture, and Augmented Search
machine-learning capabilities on the dynamic data makes it faster and easier to focus on the most
relevant data. Here's how that works in practice: One of the servers went down, and any attempt to
reinitiate the server has failed. However, since the process is running, the server seems to be up. In this
case, end users are complaining that an application isn't responding.
This symptom could be related to many problems in a complex environment with many servers. Focusing
on the server behind this problem can be difficult, as it seems to be up. But finding the root cause of the
problem requires a lengthy investigation, even when you know which server is behind this problem.
Augmented Search will display a layer that highlights critical events during the specified period instead of
going over thousands of search results. These highlights provide information about the sources of the
events, assisting in the triage process.
At this point, DevOps engineers can understand the impact of the problem (for example, which servers
are affected by it) and then continue the investigation to find the root cause of these problems. Using
Augmented Search, DevOps engineers can identify a problem and the root cause in a matter of seconds
instead of examining thousands of log events or running multiple checks on the various servers.
Adding this type of visibility to log analysis and the ability to surface critical events out of tens of thousands - and often millions - of events is essential in a fast-paced environment that constantly introduces changes.

Integrate telemetry
A key factor in automating feedback is telemetry. By inserting telemetry into your production application and environment, the DevOps team can automate feedback mechanisms while monitoring applications in real time.
DevOps teams use telemetry to see and solve problems as they occur, but this data can be helpful to
both technical and business users.
When properly instrumented, telemetry can also be used to see and understand how customers are
engaging with the application in real time.
It could be critical information for product managers, marketing teams, and customer support. So,
feedback mechanisms must share continuous intelligence with all stakeholders.

What is telemetry, and why should I care?


In the software development world, telemetry can offer insights into which features end users use most, detect bugs and issues, and provide better visibility into performance without asking for feedback directly from users.
In DevOps and the world of modern cloud apps, we're tracking the health and performance of an
application.
That telemetry data comes from application logs, infrastructure logs, metrics, and events.
The measurements are things like memory consumption, CPU performance, and database response time.
Events can be used to measure everything else, such as when a user logged in, when an item is added to
a basket, when a sale is made, and so on.
The concept of telemetry is often confused with just logging. But logging is a tool used in the development process to diagnose errors and code flows. It's focused on the internal structure of a website, app, or other development project. Logging only gives you a single-dimension view. With insights into infrastructure logs, metrics, and events, you have a 360-degree view for understanding user intent and behavior.
Once a project is released, telemetry is what you are looking for to enable data collection from real-world
use.
Telemetry is what makes it possible to collect all that raw data that becomes valuable, actionable analytics.

Benefits of telemetry
The primary benefit of telemetry is the ability of an end user to monitor the state of an object or environment while physically far removed from it.
Once you've shipped a product, you can't be physically present, peering over the shoulders of thousands
(or millions) of users as they engage with your product to find out what works, what is easy, and what is
cumbersome.
Thanks to telemetry, those insights can be delivered directly into a dashboard for you to analyze and act.
Because telemetry provides insights into how well your product is working for your end users – as they
use it – it's a unique tool for ongoing performance monitoring and management.
Plus, you can use the data you've gathered from version 1.0 to drive improvements and prioritize updates for your release of version 2.0.

Telemetry enables you to answer questions such as:


●● Are your customers using the features you expect? How are they engaging with your product?
●● How frequently are users engaging with your app, and for what duration?
●● What settings options do users select most? Do they prefer certain display types, input modalities,
screen orientation, or other device configurations?
●● What happens when crashes occur? Are crashes happening more frequently when certain features or
functions are used? What is the context surrounding a crash?
The answers to these and the many other questions that can be answered with telemetry are invaluable
to the development process.
It will enable you to make continuous improvements and introduce new features that, to your end users, may seem as though you have been reading their minds – which you have been, thanks to telemetry.

Challenges of telemetry
Telemetry is a fantastic technology, but it isn't without its challenges.
The most prominent challenge – and a commonly occurring issue – isn't with telemetry itself but with your end users and their willingness to allow what some see as Big Brother-esque spying.
In short, some users immediately turn it off when they notice it, meaning any data generated from their
use of your product won't be gathered or reported.
That means the experience of those users won't be accounted for when it comes to planning your future
roadmap, fixing bugs, or addressing other issues in your app.
Although it isn't necessarily a problem by itself, the issue is that users who disallow these types of technologies tend to fall into the more tech-savvy portion of your user base. It can result in the dumbing-down of software. On the other hand, other users take no notice of telemetry happening behind the scenes or ignore it if they do.
It's a problem without a clear solution—and it doesn't negate the overall power of telemetry for driving
development—but one to keep in mind as you analyze your data.
So, when designing a strategy for how you consider the feedback from application telemetry, it's necessary to account for users who don't participate in providing the telemetry.

Examine monitoring tools and technologies


Continuous monitoring of applications in production environments is typically implemented with application performance management (APM) solutions that intelligently monitor, analyze, and manage cloud, on-premises, and hybrid applications and IT infrastructure.
These APM solutions enable you to monitor your users’ experience and improve the stability of your application infrastructure. They help identify the root cause of issues quickly to proactively prevent outages and keep users satisfied.
With a DevOps approach, we also see more customers broaden the scope of continuous monitoring into
the staging, testing, and even development environments. It's possible because development and test
teams following a DevOps approach are striving to use production-like environments for testing as much
as possible.
By running APM solutions earlier in the life cycle, development teams get feedback about how applications will eventually do in production and can take corrective action much earlier. Also, operations teams advising the development teams get advanced knowledge and experience to better prepare and tune the production environment, resulting in far more stable releases into production.
Applications are more business-critical than ever. They must always be up, always fast, and constantly
improving. Embracing a DevOps approach will allow you to reduce your cycle times to hours instead of
months, but you must keep ensuring a great user experience!
Continuous monitoring of your entire DevOps life cycle will ensure development and operations teams
collaborate to optimize the user experience every step of the way, leaving more time for your next
significant innovation.
When shortlisting a monitoring tool, you should seek the following advanced features:
●● Synthetic Monitoring: Developers, testers, and operations staff all need to ensure that their internet and intranet mobile applications and web applications are tested and operate successfully from different points of presence worldwide.
●● Alert Management: Developers, testers, and operations staff all need to send notifications via email, voice mail, text, mobile push notifications, and Slack messages when specific situations or events occur in development, testing, or production environments, to get the right people’s attention and to manage their response (a sketch of scripting such an alert follows this list).
●● Deployment Automation: Developers, testers, and operations staff use different tools to schedule
and deploy complex applications and configure them in development, testing, and production
environments. We'll discuss the best practices for these teams to collaborate effectively and efficiently
and avoid potential duplication and erroneous information.
●● Analytics: Developers need to look for patterns in log messages to identify if there's a problem in the
code. Operations need to do root cause analysis across multiple log files to identify the source of the
problem in complex applications and systems.
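As referenced in the Alert Management item above, here's a minimal PowerShell sketch of scripting such a notification with the Az.Monitor module; the resource group, names, target resource ID, and email address are hypothetical placeholders:

# An action group that emails the on-call address.
$email = New-AzActionGroupReceiver -Name 'oncall-email' -EmailReceiver `
    -EmailAddress 'oncall@example.com'
$actionGroup = Set-AzActionGroup -ResourceGroupName 'my-monitor-rg' `
    -Name 'oncall-group' -ShortName 'oncall' -Receiver $email

# Alert when average CPU on a VM exceeds 80% over a 5-minute window.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName 'Percentage CPU' `
    -TimeAggregation Average -Operator GreaterThan -Threshold 80
Add-AzMetricAlertRuleV2 -Name 'high-cpu' -ResourceGroupName 'my-monitor-rg' `
    -TargetResourceId '/subscriptions/<subscription-id>/resourceGroups/my-monitor-rg/providers/Microsoft.Compute/virtualMachines/my-vm' `
    -WindowSize 00:05:00 -Frequency 00:05:00 -Severity 2 `
    -Condition $criteria -ActionGroupId $actionGroup.Id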

Explore IT Service Management Connector


Customers use Azure monitoring tools to identify, analyze and troubleshoot issues.
However, the work items related to an issue are typically stored in an ITSM tool.
Instead of having to go back and forth between your ITSM tool and Azure monitoring tools, customers
can now get all the information they need in one place.
ITSMC will improve the troubleshooting experience and reduce the time it takes to resolve issues.
IT Service Management Connector (ITSMC) for Azure provides bi-directional integration between Azure
monitoring tools and your ITSM tools – ServiceNow, Provance, Cherwell, and System Center Service
Manager.
Specifically, you can use ITSMC to:
●● Create or update work-items (Event, Alert, Incident) in the ITSM tools based on Azure alerts (Activity
Log Alerts, Near Real-Time metric alerts, and Log Analytics alerts)
●● Pull the Incident and Change Request data from ITSM tools into Azure Log Analytics.
You can set up ITSMC by following this overview:
ITSMC Overview88.

88 https://docs.microsoft.com/azure/azure-monitor/platform/itsmc-overview

Summary
This module helped design processes for Application Insights and explored telemetry and monitoring tools and technologies.
You learned how to describe the benefits and usage of:
●● Automate application analytics.
●● Assist DevOps with rapid responses and augmented search.
●● Integrate telemetry.
●● Implement monitoring tools and technologies.

Learn more
●● What is Azure Application Insights? - Azure Monitor | Microsoft Docs89.
●● Continuous monitoring of your DevOps release pipeline with Azure Pipelines and Azure Application Insights - Azure Monitor | Microsoft Docs90.
●● Azure Monitor Application Insights Documentation - Azure Monitor | Microsoft Docs91.

89 https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview
90 https://docs.microsoft.com/azure/azure-monitor/app/continuous-monitoring
91 https://docs.microsoft.com/azure/azure-monitor/azure-monitor-app-hub

Manage alerts, blameless retrospectives and a just culture


Introduction
This module examines alerts, blameless retrospectives, and creating a just culture. It helps improve application performance, reduce meaningless and non-actionable alerts, and explain server response-time degradation.

Learning objectives
After completing this module, students and professionals can:
●● Carry out blameless retrospectives and create a just culture.
●● Improve application performance.
●● Explain server response-time degradation.
●● Reduce meaningless and non-actionable alerts.

Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.

Examine when you get a notification


Application Insights92 automatically analyzes the performance of your web application and can warn
you about potential problems. You might be reading it because you received one of our smart detection
notifications.
This feature requires no particular setup other than configuring your app for Application Insights (on ASP.NET93, Java94, or Node.js, and web page code95). It's active when your app generates enough telemetry.

When would I get a smart detection notification?


Application Insights has detected that the performance of your application has degraded in one of these
ways:
●● Response time degradation - Your app has started responding to requests more slowly than it used
to. The change might have been rapid, for example, because there was a regression in your latest
deployment. Or it might have been gradual, maybe caused by a memory leak.
●● Dependency duration degradation - Your app makes calls to a REST API, database, or other dependencies. The dependency is responding more slowly than it used to.

92 https://docs.microsoft.com/azure/application-insights/app-insights-overview
93 https://docs.microsoft.com/azure/azure-monitor/app/asp-net
94 https://docs.microsoft.com/azure/application-insights/app-insights-java-get-started
95 https://docs.microsoft.com/azure/application-insights/app-insights-javascript

●● Slow performance pattern - Your app appears to have a performance issue that is affecting only some
requests. For example, pages are loading more slowly on one type of browser than others; or requests
are being served more slowly from one server. Currently, our algorithms look at page load times,
request response times, and dependency response times.
Smart Detection requires at least eight days of telemetry at a workable volume to establish a normal
performance baseline. So, after your application has been running for that period, any significant issue
will result in a notification.

Does my app have a problem?


No, a notification doesn't mean that your app has a problem. It's simply a suggestion about something
you might want to look at more closely.

Explore how to fix it


The notifications include diagnostic information. Here's an example:

●● Triage. The notification shows you how many users or how many operations are affected. It can help
you assign a priority to the problem.
●● Scope. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or
locations? This information can be obtained from the notification.
●● Diagnose. Often, the diagnostic information in the notification will suggest the nature of the problem. For example, if response time slows down when the request rate is high, your server or dependencies are probably overloaded. Otherwise, open the Performance blade in Application Insights. There, you'll find Profiler96 data. If exceptions are thrown, you can also try the snapshot debugger97.

Explore smart detection notifications


Smart detection notifications are enabled by default and sent to owners, contributors, and readers with access to the Application Insights resource98.

96 https://docs.microsoft.com/azure/application-insights/app-insights-profiler
97 https://docs.microsoft.com/azure/application-insights/app-insights-snapshot-debugger
98 https://docs.microsoft.com/azure/application-insights/app-insights-resources-roles-access-control

To change it, click Configure in the email notification, or open Smart Detection settings in Application
Insights.

●● You can use the unsubscribe link in the smart detection email to stop receiving the email notifications.
Emails about smart detection performance anomalies are limited to one email per day per Application Insights resource.
The email would be sent only if at least one new issue was detected on that day. You won't get repeats of
any message.

Improve performance
Slow and failed responses are one of the biggest frustrations for website users, as you know from your
own experience. So, it's essential to address the issues.

Triage
●● First, does it matter? If a page is always slow to load, but only 1% of your site's users ever have to look
at it, maybe you have more important things to think about. On the other hand, if only 1% of users
open it, but it throws exceptions every time, that might be worth investigating. Use the impact
statement (affected users or % of traffic) as a general guide but be aware that it isn't the whole story.
Gather other evidence to confirm. Consider the parameters of the issue. If it's geography-dependent,
set up availability tests99 including that region: there might be network issues in that area.

Diagnose slow page loads


●● Where is the problem? Is the server slow to respond, is the page large, or does the browser have to do a lot of work to display it? Open the Browsers metric blade. The segmented display of browser page load time shows where the time is going.

●● If Send Request Time is high, either the server responds slowly, or the request is a POST with a lot of data. Look at the performance metrics100 to investigate response times.
●● Set up dependency tracking101 to see whether the slowness is because of external services or
your database.

99 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
100 https://docs.microsoft.com/azure/azure-monitor/azure-monitor-app-hub
101 https://docs.microsoft.com/azure/azure-monitor/app/asp-net-dependencies

●● If Receiving Response is predominant, your page and its dependent parts - JavaScript, CSS, images, and so on (but not asynchronously loaded data) - are large. Set up an availability test102 and be sure to set the option to load dependent parts. When you get some results, open the detail of a result, and expand it to see the load times of different files.
●● High Client Processing time suggests scripts are running slowly. If the reason isn't clear, consider
adding some timing code and sending the times in track metrics calls.
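
As a minimal sketch of that timing idea, assuming the Application Insights JavaScript SDK, with a hypothetical renderProductTable function standing in for your own suspect script:

```typescript
import { ApplicationInsights } from "@microsoft/applicationinsights-web";

// Placeholder connection string; use your Application Insights resource's value.
const appInsights = new ApplicationInsights({
  config: { connectionString: "<your-connection-string>" },
});
appInsights.loadAppInsights();

// Hypothetical stand-in for the client-side work you suspect is slow.
function renderProductTable(): void {
  /* ... your page logic ... */
}

// Time the suspect work and report it as a custom metric.
const start = performance.now();
renderProductTable();
appInsights.trackMetric({
  name: "client/renderProductTableMs",
  average: performance.now() - start,
});
```

The custom metric then shows up alongside the standard browser timings, so you can segment and chart it in the same views.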

Improve slow pages


●● There's a web full of advice on improving your server responses and page load times, so we won't try
to repeat it all here. Here are a few tips that you probably already know about to get you thinking:

●● Slow loading because of large files: Load the scripts and other parts asynchronously. Use script bundling. Break the main page into widgets that load their data separately. Don't send plain old HTML for long tables: use a script to request the data as JSON or another compact format, then fill the table in place, as shown in the sketch after this list. There are great frameworks to help with all of it, although they also involve large scripts.
●● Slow server dependencies: Consider the geographical locations of your components. For example,
if you use Azure, ensure the web server and the database are in the same region. Do queries
retrieve more information than they need? Would caching or batching help?
●● Capacity issues: Look at the server metrics of response times and request counts. If response times
peak disproportionately with peaks in request counts, your servers are likely stretched.
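
For instance, the JSON-instead-of-HTML tip above might look like this sketch, where the /api/orders endpoint and row shape are assumptions for illustration:

```typescript
// Sketch of the "don't send plain old HTML for long tables" tip: fetch the
// rows as compact JSON and render them client-side, in place.
interface OrderRow {
  id: number;
  customer: string;
  total: number;
}

async function fillOrdersTable(): Promise<void> {
  const res = await fetch("/api/orders?limit=100"); // assumed endpoint
  const rows: OrderRow[] = await res.json();
  const tbody = document.querySelector("#orders tbody")!;
  // Quick-and-dirty rendering for the sketch; a framework would do this for you.
  tbody.innerHTML = rows
    .map(r => `<tr><td>${r.id}</td><td>${r.customer}</td><td>${r.total}</td></tr>`)
    .join("");
}

fillOrdersTable().catch(console.error);
```

Because the page shell loads first and the data arrives asynchronously, the user sees something useful sooner, and the payload for the table itself stays small.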

Understand server response time degradation


The response time degradation notification tells you (a query sketch follows this list):
●● The response time compared to normal response time for this operation.
●● How many users are affected.
●● Average response time and 90th percentile response time for this operation on the day of the detection and seven days before.
●● Count of this operation's requests on the day of the detection and seven days before.
●● Correlation between degradation in this operation and degradations in related dependencies.

●● Links to help you diagnose the problem.

●● Profiler traces to help you view where operation time is spent (the link is available if Profiler
trace examples were collected for this operation during the detection period).
●● Performance reports in Metric Explorer, where you can slice and dice time range/filters for this
operation.
●● Search for this call to view specific call properties.
●● Failure reports - If count > 1 it means that there were failures in this operation that might have
contributed to performance degradation.
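
These numbers come from your request telemetry, so you can reproduce the comparison yourself. A query along these lines (run through the REST endpoint shown earlier, or directly in Logs) contrasts the day of detection with the prior seven days; the window sizes here are assumptions:

```typescript
// Reproduce the notification's comparison per operation: average and 90th
// percentile response time, plus request count, on the "detection day"
// versus the seven days before it.
const degradationQuery = `
requests
| where timestamp > ago(8d)
| extend period = iff(timestamp > ago(1d), "detection day", "prior 7 days")
| summarize avgMs = avg(duration), p90Ms = percentile(duration, 90),
            requestCount = count()
    by operation_Name, period`;

console.log(degradationQuery);
```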

102 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability

Reduce meaningless and non-actionable alerts


Monitoring and alerting enable a system to tell us when it's broken, or perhaps to tell us what is about to break.
When the system can't automatically fix itself, we want a human to investigate the alert, determine if there's a real problem at hand, mitigate the problem, and determine the root cause.
Unless you're doing security auditing on narrowly scoped components of a system, you should never
trigger an alert simply because “something seems a bit weird.”
When you're reviewing existing alerts or writing new alerting rules, consider these things to keep your alerts relevant and your on-call rotation happier:
●● Alerts that trigger call-out should be urgent, important, actionable, and real.
●● They should represent either ongoing or imminent problems with your service.
●● Err on the side of removing noisy alerts – over-monitoring is a harder problem to solve than under-monitoring.
●● You should almost always classify the problem into availability & basic functionality; latency; correctness (completeness, freshness, and durability of data); and feature-specific problems.
●● Symptoms are a better way to capture more problems more comprehensively and robustly with less
effort.
●● Include cause-based information in symptom-based pages or on dashboards, but avoid alerting directly on causes (see the sketch after this list).
●● The further up your serving stack you go, the more distinct problems you catch in a single rule. But don't go so far that you can't sufficiently distinguish what is going on.
●● If you want an on-call rotation, it's imperative to have a system for dealing with things that need a
timely response but aren't imminently critical.
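
To make the symptom-versus-cause advice concrete, here's a minimal sketch of a symptom-based check: it watches what users actually experience (the request failure rate) rather than a cause such as CPU. The threshold, time window, and notifyOnCall hook are hypothetical.

```typescript
// A symptom-based condition: alert on the user-visible failure rate, not on
// underlying causes such as CPU or queue depth.
const symptomQuery = `
requests
| where timestamp > ago(5m)
| summarize failureRate = 100.0 * countif(success == false) / count()`;

// Hypothetical pager hook; swap in your incident-management integration.
async function notifyOnCall(message: string): Promise<void> {
  console.log(`PAGE: ${message}`);
}

// runQuery is a stand-in for executing KQL, for example via the REST
// endpoint shown earlier in this module.
async function checkSymptom(
  runQuery: (kql: string) => Promise<number>
): Promise<void> {
  const failureRate = await runQuery(symptomQuery);
  // Urgent, important, actionable, and real - only then wake someone up.
  if (failureRate > 5) {
    await notifyOnCall(
      `Failure rate ${failureRate.toFixed(1)}% over the last 5 minutes`
    );
  }
}
```

In practice you'd express the same condition as an Azure Monitor log alert rule rather than running your own poller; the point is the shape of the condition: page on the symptom, keep the causes on the dashboard.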

Examine blameless retrospective


What does it mean to have a blameless retrospective?
Anyone who has worked with technology at any scale is familiar with failure.
Failure cares not about the architecture designs you labor over, the code you write and review, or the alerts and metrics you meticulously pore over. So: failure happens.
It's a foregone conclusion when working with complex systems. But what about those failures that have
resulted from individuals' actions (or lack of action, in some cases)? What do you do with those careless
humans who caused everyone to have a bad day?
Maybe they should be fired. Or perhaps they need to be prevented from touching the dangerous bits
again. Or they need more training. It's the traditional view of “human error,” which focuses on the
individuals' characteristics. Also called the “Bad Apple Theory” – get rid of the bad apples and get rid of
the human error.
It seems simple, right? Organizations that have pioneered DevOps are shying away from this traditional
view. Instead, these DevOps practicing organizations want to view mistakes, slips, lapses, etc., from a
learning perspective. Having a blameless Post-mortem on outages and accidents is part of it.

What does it mean to have a ‘blameless’ retrospective? Does it mean everyone gets off the hook for
making mistakes? No.
Well, maybe. It depends on what “gets off the hook” means. Let me explain.
Having a Just Culture means that you're making an effort to balance safety and accountability. It means that by investigating mistakes in a way that focuses on the situational aspects of a failure's mechanism and on the decision-making process of the individuals proximate to the failure, an organization can come out safer than it would usually be if it had simply punished the actors involved as remediation.
Having a “blameless” retrospective process means that engineers whose actions have contributed to an
accident can give a detailed account of:
●● What actions they took, and at what time.
●● What effects they observed.
●● The expectations they had.
●● The assumptions they had made.
●● Their understanding of the timeline of events as they occurred.
AND that they can give this detailed account without fear of punishment or retribution.
Why should they not be punished or reprimanded? Because an engineer who thinks they'll be blamed is disincentivized to give the details necessary to understand the failure's mechanism, pathology, and operation.
This lack of understanding of how the accident occurred guarantees that it will repeat. If not with the
original engineer, another one in the future.
If we go with “blame” as the predominant approach, we implicitly accept that deterrence is how organizations become safer.
This is founded on the belief that individuals, not situations, cause errors.
It's also aligned with the idea that there must be some fear that not doing one's job correctly could lead to punishment, because the fear of punishment will motivate people to act correctly in the future. Right?
This cycle of name/blame/shame can be looked at like this:
●● Engineer acts and contributes to a failure or incident.
●● Engineer is punished, shamed, blamed, or retrained.
●● Reduced trust between engineers on the ground (the “sharp-end”) and management (the "blunt end")
looking for someone to scapegoat.
●● Engineers become silent on details about actions/situations/observations, resulting in “Cover-Your-Mistake” engineering (from fear of punishment).
●● Management becomes less aware of and informed about how work is being performed daily, and engineers become less educated on lurking or latent conditions for failure, because of the silence described in the previous steps.
●● Errors become more likely, and latent conditions can't be identified, because of all of the above.
●● Repeat the first step.
We need to avoid this cycle. We want the engineer who has made an error to give details about why
(either explicitly or implicitly) they did what they did; why the action made sense to them at the time.

It's paramount to understand the pathology of the failure. The action made sense to the person when they took it, because if it hadn't made sense, they wouldn't have taken the action in the first place.
The fundamental point here is something Erik Hollnagel103 has said:
We must strive to understand that accidents don't happen because people gamble and lose.
Accidents happen because the person believes that:
…what is about to happen isn't possible,
…or what is about to happen has no connection to what they're doing,
…or that the possibility of getting the intended outcome is worth whatever risk there is.

Develop a just culture


A funny thing happens when engineers make mistakes and feel safe when giving details about it: they
aren't only willing to be held accountable, but they're also enthusiastic about helping the rest of the
company avoid the same error in the future.
They are, after all, the most expert in their error.
They ought to be heavily involved in coming up with remediation items.
So technically, engineers aren't at all “off the hook” with a blameless postmortem process. They're very much on the hook for helping the organization become safer and more resilient in the end. And lo and behold: most engineers I know find this idea of making things better for others a worthwhile exercise.
So, what do we do to enable a “Just Culture”?
●● Encourage learning by having these blameless Postmortems on outages and accidents.
●● The goal is to understand how an accident could have happened, to better equip ourselves to prevent it from happening in the future.
●● Gather details from multiple perspectives on failures, and don't punish people for making mistakes.
●● Instead of punishing engineers, we give them the requisite authority to improve safety by providing
detailed accounts of their contributions to failures.
●● Enable and encourage people who make mistakes to educate the rest of the organization on how not
to make them in the future.
●● Accept that there's always a discretionary space where humans can decide to take actions or not, and that the judgment of those decisions lies in hindsight.
●● Accept that the Hindsight Bias104 will continue to cloud our assessment of past events and work hard
to eliminate it.
●● Accept that the Fundamental Attribution Error105 is also difficult to escape, so we focus on the
environment and circumstances people are working in when investigating accidents.
●● Strive to make sure that the blunt end of the organization understands how work is getting done (as
opposed to how they imagine it's getting done via Gantt charts and procedures) on the sharp end.
●● The sharp end is relied upon to inform the organization of the line between appropriate and inappropriate behavior. It isn't something that the blunt end can come up with on its own.
Failure happens. To understand how failures occur, we first must understand our reactions to failure.

103 http://www.erikhollnagel.com/
104 http://en.wikipedia.org/wiki/Hindsight
105 http://en.wikipedia.org/wiki/Fundamental_attribution_error

One option is to assume the single cause is incompetence and scream at engineers to make them “pay
attention!” or "be more careful!"
Another option is to take a hard look at how the accident happened, treat the engineers involved with
respect, and learn from the event.
For more information, see also:
●● Brian Harry's Blog - A good incident postmortem106

Summary
This module examined alerts, blameless retrospectives, and just culture. It covered improving application performance, reducing meaningless and non-actionable alerts, and explaining server response-time degradation.
You learned how to:
●● Carry out blameless retrospectives and create a just culture.
●● Improve application performance.
●● Explain server response-time degradation.
●● Reduce meaningless and non-actionable alerts.

Learn more
●● Smart detection - performance anomalies - Azure Monitor | Microsoft Docs107.
●● Overview of alerting and notification monitoring in Azure - Azure Monitor | Microsoft Docs108.
●● Create, view, and manage Metric Alerts Using Azure Monitor - Azure Monitor | Microsoft
Docs109.

106 https://blogs.msdn.microsoft.com/bharry/2018/03/02/a-good-incident-postmortem/
107 https://docs.microsoft.com/azure/azure-monitor/app/proactive-performance-diagnostics
108 https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-overview
109 https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-metric

Labs
Lab 18: Integration between Azure DevOps and
Microsoft Teams
Lab overview
Microsoft Teams110 is a hub for teamwork in Office 365. It allows you to manage and use all your team's
chats, meetings, files, and apps together in one place. It provides software development teams with a hub
for teams, conversations, content and tools from across Office 365 and Azure DevOps.
In this lab, you will implement integration scenarios between Azure DevOps services and Microsoft
Teams.
Note: Azure DevOps Services integration with Microsoft Teams provides a comprehensive chat and
collaborative experience across the development cycle. Teams can easily stay informed of important
activities in your Azure DevOps team projects with notifications and alerts on work items, pull requests,
code commits, as well as build and release events.

Objectives
After you complete this lab, you will be able to:
●● Integrate Microsoft Teams with Azure DevOps
●● Integrate Azure DevOps Kanban boards and Dashboards in Teams
●● Integrate Azure Pipelines with Microsoft Teams
●● Install the Azure Pipelines app in Microsoft Teams
●● Subscribe for Azure Pipelines notifications

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions111

110 https://teams.microsoft.com/start
111 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Lab 19: Monitoring application performance with Application Insights

Lab overview
Application Insights is an extensible Application Performance Management (APM) service for web
developers on multiple platforms. You can use it to monitor your live web applications. It automatically
detects performance anomalies, includes powerful analytics tools to help you diagnose issues, and helps
you continuously improve performance and usability. It works for apps on a wide variety of platforms
including .NET, Node.js and Java EE, hosted on-premises, hybrid, or any public cloud. It integrates with
your DevOps process with connection points available in a variety of development tools. It also allows
you to monitor and analyze telemetry from mobile apps through integration with Visual Studio App
Center.
In this lab, you'll learn about how you can add Application Insights to an existing web application, as well
as how to monitor the application via the Azure portal.

Objectives
After you complete this lab, you will be able to:
●● Deploy Azure App Service web apps
●● Generate and monitor Azure web app application traffic by using Application Insights
●● Investigate Azure web app performance by using Application Insights
●● Track Azure web app usage by using Application Insights
●● Create Azure web app alerts by using Application Insights

Lab duration
●● Estimated time: 60 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions112

Lab 20: Sharing team knowledge using Azure project wikis

Lab overview
In this lab, you will create and configure a wiki in Azure DevOps, including managing markdown content and creating a Mermaid diagram.

112 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Objectives
After you complete this lab, you will be able to:
●● Create a wiki in an Azure DevOps project
●● Add and edit markdown
●● Create a Mermaid diagram

Lab duration
●● Estimated time: 45 minutes

Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions113

113 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/

Module review and takeaways


Module review questions

Choose the best response for each question. Then select Check your answers.

Multiple choice
Which of the following choices does Azure Monitor let you create custom dashboards based on?
†† Alerts, Action Groups.
†† Metrics, Logs.
†† Action Groups, Workflows.

Multiple choice
Which of the following query languages can you use to query Azure Log Analytics?
†† Kusto.
†† T-SQL.
†† Xpath.

Multiple choice
Which of the following choices is the unique key configured by Application Insights that the Application Insights SDK uses?
†† ApplicationInsights Key.
†† AppInsightsSDK Key.
†† AppInsights Key.

Multiple choice
Which of the following choices is a platform integration Azure Monitor provides to visualize your logs in
real-time?
†† Azure Dashboards.
†† Power BI.
†† Excel.

Multiple choice
Which of the following choices is an Azure Dashboards limitation?
†† Log charts can only be pinned to shared dashboards.
†† Full-screen mode.
†† Parametrized metrics dashboards with timestamp and custom parameters.

Multiple choice
Which of the following choices is a feature in Azure Monitor that allows you to create custom visualizations
with log data?
†† Workbooks.
†† View Designer.
†† Data Export.

Multiple choice
Which of the following choices is the minimum permission or group to add or edit Wiki pages?
†† Project Administrator.
†† Contributors.
†† Stakeholders.

Multiple choice
Which of the following choices is a functionality that allows you to easily create work items in Azure
DevOps that have relevant Application Insights data embedded?
†† Work item integration.
†† Azure DevOps Wiki.
†† App Stream.

Multiple choice
Which of the following commands can you perform with the Azure Pipelines integration with Microsoft Teams?
†† Subscribe.
†† Create.
†† Unlink.

Multiple choice
Which of the following choices isn't automatically identified by the Augmented Search algorithm analysis?
†† Waiver factors.
†† Errors.
†† Risk factors.

Multiple choice
Which of the following choices is a key factor to automating feedback?
†† Telemetry.
†† Work Item creation.
†† Alerts.

Multiple choice
Which of the following choices isn't an advanced feature you should seek when shortlisting a monitoring
tool?
†† Alert Management.
†† Test Management.
†† Analytics.

Multiple choice
Which of the following choices is the feature that Server Response Time Degradation notification is part of?
†† Azure Analytics.
†† Smart Alerts.
†† Smart Detection.

Multiple choice
Which of the following choices isn't a way Application Insights detects that the performance of your application has degraded?
†† Slow performance pattern.
†† Response time degradation.
†† Application dependency management pattern.

Multiple choice
Which of the following choices does the response time degradation notification not tell you?
†† How many users are affected.
†† Count of this operation requests 60 days before.
†† Correlation between degradation in this operation and degradations in related dependencies.

Answers
Multiple choice
Which of the following choices does Azure Monitor let you create custom dashboards based on?
†† Alerts, Action Groups.
■■ Metrics, Logs.
†† Action Groups, Workflows.
Explanation
Azure Monitor lets you create custom dashboards based on Metrics and Logs.
Multiple choice
Which of the following query languages can you use to query Azure Log Analytics?
■■ Kusto.
†† T-SQL.
†† Xpath.
Explanation
Kusto is the primary way to query Log Analytics. It provides both a query language and a set of control
commands.
Multiple choice
Which of the following choices is the unique key configured by Application Insights that the Application Insights SDK uses?
†† ApplicationInsights Key.
†† AppInsightsSDK Key.
■■ AppInsights Key.
Explanation
Application Insights configures a unique key called AppInsights Key in your application. The Application Insights SDK uses this key to identify the Azure App Insights workspace that the telemetry data needs to be uploaded to.
Multiple choice
Which of the following choices is a platform integration Azure Monitor provides to visualize your logs in
real-time?
†† Azure Dashboards.
■■ Power BI.
†† Excel.
Explanation
Azure Monitor integrates with Power BI to provide logs visualization in real-time.

Multiple choice
Which of the following choices is an Azure Dashboards limitation?
■■ Log charts can only be pinned to shared dashboards.
†† Full-screen mode.
†† Parametrized metrics dashboards with timestamp and custom parameters.
Explanation
The limitation is Log charts can only be pinned to shared dashboards. Full-screen mode and Parametrized
metrics dashboards with the timestamp and custom parameters are Advantages.
Multiple choice
Which of the following choices is a feature in Azure Monitor that allows you to create custom visualizations with log data?
†† Workbooks.
■■ View Designer.
†† Data Export.
Explanation
View Designer in Azure Monitor allows you to create custom visualizations with log data. They are used by
monitoring solutions to present the data they collect.
Multiple choice
Which of the following choices is the minimum permission or group to add or edit Wiki pages?
†† Project Administrator.
■■ Contributors.
†† Stakeholders.
Explanation
To add or edit wiki pages, you should be a member of the Contributors group.
Multiple choice
Which of the following choices is a functionality that allows you to easily create work items in Azure
DevOps that have relevant Application Insights data embedded?
■■ Work item integration.
†† Azure DevOps Wiki.
†† App Stream.
Explanation
Work item integration functionality allows you to easily create work items in Azure DevOps that have
relevant Application Insights data embedded in them.

Multiple choice
Which of the following commands can you perform with the Azure Pipelines integration with Microsoft Teams?
■■ Subscribe.
†† Create.
†† Unlink.
Explanation
You can run the @azure pipelines subscribe [pipeline url] command to subscribe to an Azure Pipeline.
Create and Unlink only works with the Azure Boards integration.
Multiple choice
Which of the following choices isn't automatically identified by the Augmented Search algorithm analysis?
■■ Waiver factors.
†† Errors.
†† Risk factors.
Explanation
The analysis algorithm automatically identifies errors, risk factors, and problem indicators.
Multiple choice
Which of the following choices is a key factor to automating feedback?
■■ Telemetry.
†† Work Item creation.
†† Alerts.
Explanation
A key factor to automating feedback is telemetry. By inserting telemetric data into your production application and environment, the DevOps team can automate feedback mechanisms while monitoring applications in real-time.
Multiple choice
Which of the following choices isn't an advanced feature you should seek when shortlisting a monitoring
tool?
†† Alert Management.
■■ Test Management.
†† Analytics.
Explanation
When shortlisting a monitoring tool, you should seek the following advanced features: Synthetic Monitoring,
Alert Management, Deployment Automation, Analytics.

Multiple choice
Which of the following choices is the feature that Server Response Time Degradation notification is part
of?
†† Azure Analytics.
†† Smart Alerts.
■■ Smart Detection.
Explanation
Server Response Time Degradation notification is part of Smart Detection Notification.
Multiple choice
Which of the following choices isn't a way Application Insights detects that the performance of your application has degraded?
†† Slow performance pattern.
†† Response time degradation.
■■ Application dependency management pattern.
Explanation
Application Insights has detected that the performance of your application has degraded in one of the ways:
Response time degradation, Dependency duration degradation, Slow performance pattern.
Multiple choice
Which of the following choices does the response time degradation notification not tell you?
†† How many users are affected.
■■ Count of this operation requests 60 days before.
†† Correlation between degradation in this operation and degradations in related dependencies.
Explanation
The response time degradation notification tells you: How many users are affected, the Correlation between
degradation in this operation and degradations in related dependencies, the Count of operation requests on
the day of the detection and seven days before, and some others.
