Official Course
AZ-400T00
Designing and Implementing Microsoft DevOps Solutions
Disclaimer
Information in this document, including URL and other Internet Web site references, is subject to change
without notice. Unless otherwise noted, the example companies, organizations, products, domain names,
e-mail addresses, logos, people, places, and events depicted herein are fictitious, and no association with
any real company, organization, product, domain name, e-mail address, logo, person, place or event is
intended or should be inferred. Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in
or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the express written permission of
Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property
rights covering subject matter in this document. Except as expressly provided in any written license
agreement from Microsoft, the furnishing of this document does not give you any license to these
patents, trademarks, copyrights, or other intellectual property.
The names of manufacturers, products, or URLs are provided for informational purposes only and
Microsoft makes no representations or warranties, either express, implied, or statutory, regarding
these manufacturers or the use of the products with any Microsoft technologies. The inclusion of a
manufacturer or product does not imply endorsement by Microsoft of the manufacturer or product. Links
may be provided to third party sites. Such sites are not under the control of Microsoft and Microsoft is
not responsible for the contents of any linked site or any link contained in a linked site, or any changes or
updates to such sites. Microsoft is not responsible for webcasting or any other form of transmission
received from any linked site. Microsoft is providing these links to you only as a convenience, and the
inclusion of any link does not imply endorsement by Microsoft of the site or the products contained
therein.
© 2019 Microsoft Corporation. All rights reserved.
Microsoft and the trademarks listed at http://www.microsoft.com/trademarks1 are trademarks of the
Microsoft group of companies. All other trademarks are property of their respective owners.
1 http://www.microsoft.com/trademarks
EULA
13. “Personal Device” means one (1) personal computer, device, workstation or other digital electronic
device that you personally own or control that meets or exceeds the hardware level specified for
the particular Microsoft Instructor-Led Courseware.
14. “Private Training Session” means the instructor-led training classes provided by MPN Members for
corporate customers to teach a predefined learning objective using Microsoft Instructor-Led
Courseware. These classes are not advertised or promoted to the general public and class attend-
ance is restricted to individuals employed by or contracted by the corporate customer.
15. “Trainer” means (i) an academically accredited educator engaged by a Microsoft Imagine Academy
Program Member to teach an Authorized Training Session, (ii) an academically accredited educator
validated as a Microsoft Learn for Educators – Validated Educator, and/or (iii) a MCT.
16. “Trainer Content” means the trainer version of the Microsoft Instructor-Led Courseware and
additional supplemental content designated solely for Trainers’ use to teach a training session
using the Microsoft Instructor-Led Courseware. Trainer Content may include Microsoft PowerPoint
presentations, trainer preparation guide, train the trainer materials, Microsoft One Note packs,
classroom setup guide and Pre-release course feedback form. To clarify, Trainer Content does not
include any software, virtual hard disks or virtual machines.
2. USE RIGHTS. The Licensed Content is licensed, not sold. The Licensed Content is licensed on a one
copy per user basis, such that you must acquire a license for each individual that accesses or uses the
Licensed Content.
●● 2.1 Below are five separate sets of use rights. Only one set of rights applies to you.
1. If you are a Microsoft Imagine Academy (MSIA) Program Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instruc-
tor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User who is enrolled in the Authorized Training Session, and only immediately
prior to the commencement of the Authorized Training Session that is the subject matter
of the Microsoft Instructor-Led Courseware being provided, or
2. provide one (1) End User with the unique redemption code and instructions on how they
can access one (1) digital version of the Microsoft Instructor-Led Courseware, or
3. provide one (1) Trainer with the unique redemption code and instructions on how they
can access one (1) Trainer Content.
3. For each license you acquire, you must comply with the following:
1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure each End User attending an Authorized Training Session has their own
valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the
Authorized Training Session,
3. you will ensure that each End User provided with the hard-copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject
to the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agree-
ment in a manner that is enforceable under local law prior to their accessing the Micro-
soft Instructor-Led Courseware,
4. you will ensure that each Trainer teaching an Authorized Training Session has their own
valid licensed copy of the Trainer Content that is the subject of the Authorized Training
Session,
5. you will only use qualified Trainers who have in-depth knowledge of and experience with
the Microsoft technology that is the subject of the Microsoft Instructor-Led Courseware
being taught for all your Authorized Training Sessions,
6. you will only deliver a maximum of 15 hours of training per week for each Authorized
Training Session that uses a MOC title, and
7. you acknowledge that Trainers that are not MCTs will not have access to all of the trainer
resources for the Microsoft Instructor-Led Courseware.
2. If you are a Microsoft Learning Competency Member:
1. Each license acquired may only be used to review one (1) copy of the Microsoft Instruc-
tor-Led Courseware in the form provided to you. If the Microsoft Instructor-Led Course-
ware is in digital format, you may install one (1) copy on up to three (3) Personal Devices.
You may not install the Microsoft Instructor-Led Courseware on a device you do not own or
control.
2. For each license you acquire on behalf of an End User or MCT, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Authorized Training Session and only immediately prior to
the commencement of the Authorized Training Session that is the subject matter of the
Microsoft Instructor-Led Courseware provided, or
2. provide one (1) End User attending the Authorized Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) MCT with the unique redemption code and instructions on how
they can access one (1) Trainer Content.
3. For each license you acquire, you must comply with the following:
1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure that each End User attending an Authorized Training Session has their
own valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of
the Authorized Training Session,
3. you will ensure that each End User provided with a hard-copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject
to the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agree-
ment in a manner that is enforceable under local law prior to their accessing the Micro-
soft Instructor-Led Courseware,
4. you will ensure that each MCT teaching an Authorized Training Session has their own
valid licensed copy of the Trainer Content that is the subject of the Authorized Training
Session,
5. you will only use qualified MCTs who also hold the applicable Microsoft Certification
credential that is the subject of the MOC title being taught for all your Authorized
Training Sessions using MOC,
6. you will only provide access to the Microsoft Instructor-Led Courseware to End Users,
and
7. you will only provide access to the Trainer Content to MCTs.
3. If you are a MPN Member:
1. Each license acquired on behalf of yourself may only be used to review one (1) copy of the
Microsoft Instructor-Led Courseware in the form provided to you. If the Microsoft Instruc-
tor-Led Courseware is in digital format, you may install one (1) copy on up to three (3)
Personal Devices. You may not install the Microsoft Instructor-Led Courseware on a device
you do not own or control.
2. For each license you acquire on behalf of an End User or Trainer, you may either:
1. distribute one (1) hard copy version of the Microsoft Instructor-Led Courseware to one
(1) End User attending the Private Training Session, and only immediately prior to the
commencement of the Private Training Session that is the subject matter of the Micro-
soft Instructor-Led Courseware being provided, or
2. provide one (1) End User who is attending the Private Training Session with the unique
redemption code and instructions on how they can access one (1) digital version of the
Microsoft Instructor-Led Courseware, or
3. you will provide one (1) Trainer who is teaching the Private Training Session with the
unique redemption code and instructions on how they can access one (1) Trainer
Content.
3. For each license you acquire, you must comply with the following:
1. you will only provide access to the Licensed Content to those individuals who have
acquired a valid license to the Licensed Content,
2. you will ensure that each End User attending a Private Training Session has their own
valid licensed copy of the Microsoft Instructor-Led Courseware that is the subject of the
Private Training Session,
3. you will ensure that each End User provided with a hard copy version of the Microsoft
Instructor-Led Courseware will be presented with a copy of this agreement and each End
User will agree that their use of the Microsoft Instructor-Led Courseware will be subject
to the terms in this agreement prior to providing them with the Microsoft Instructor-Led
Courseware. Each individual will be required to denote their acceptance of this agree-
ment in a manner that is enforceable under local law prior to their accessing the Micro-
soft Instructor-Led Courseware,
4. you will ensure that each Trainer teaching a Private Training Session has their own valid
licensed copy of the Trainer Content that is the subject of the Private Training Session,
5. you will only use qualified Trainers who hold the applicable Microsoft Certification
credential that is the subject of the Microsoft Instructor-Led Courseware being taught
for all your Private Training Sessions,
6. you will only use qualified MCTs who hold the applicable Microsoft Certification creden-
tial that is the subject of the MOC title being taught for all your Private Training Sessions
using MOC,
7. you will only provide access to the Microsoft Instructor-Led Courseware to End Users,
and
8. you will only provide access to the Trainer Content to Trainers.
4. If you are an End User:
For each license you acquire, you may use the Microsoft Instructor-Led Courseware solely for
your personal training use. If the Microsoft Instructor-Led Courseware is in digital format, you
may access the Microsoft Instructor-Led Courseware online using the unique redemption code
provided to you by the training provider and install and use one (1) copy of the Microsoft
Instructor-Led Courseware on up to three (3) Personal Devices. You may also print one (1) copy
of the Microsoft Instructor-Led Courseware. You may not install the Microsoft Instructor-Led
Courseware on a device you do not own or control.
5. If you are a Trainer:
1. For each license you acquire, you may install and use one (1) copy of the Trainer Content in
the form provided to you on one (1) Personal Device solely to prepare and deliver an
Authorized Training Session or Private Training Session, and install one (1) additional copy
on another Personal Device as a backup copy, which may be used only to reinstall the
Trainer Content. You may not install or use a copy of the Trainer Content on a device you do
not own or control. You may also print one (1) copy of the Trainer Content solely to prepare
for and deliver an Authorized Training Session or Private Training Session.
2. If you are an MCT, you may customize the written portions of the Trainer Content that are
logically associated with instruction of a training session in accordance with the most recent
version of the MCT agreement.
3. If you elect to exercise the foregoing rights, you agree to comply with the following: (i)
customizations may only be used for teaching Authorized Training Sessions and Private
Training Sessions, and (ii) all customizations will comply with this agreement. For clarity, any
use of “customize” refers only to changing the order of slides and content, and/or not using
all the slides or content, it does not mean changing or modifying any slide or content.
●● 2.2 Separation of Components. The Licensed Content is licensed as a single unit and you
may not separate its components and install them on different devices.
●● 2.3 Redistribution of Licensed Content. Except as expressly provided in the use rights
above, you may not distribute any Licensed Content or any portion thereof (including any permit-
ted modifications) to any third parties without the express written permission of Microsoft.
●● 2.4 Third Party Notices. The Licensed Content may include third party code that Micro-
soft, not the third party, licenses to you under this agreement. Notices, if any, for the third party
code are included for your information only.
●● 2.5 Additional Terms. Some Licensed Content may contain components with additional
terms, conditions, and licenses regarding its use. Any non-conflicting terms in those conditions
and licenses also apply to your use of that respective component and supplement the terms
described in this agreement.
laws and treaties. Microsoft or its suppliers own the title, copyright, and other intellectual property
rights in the Licensed Content.
6. EXPORT RESTRICTIONS. The Licensed Content is subject to United States export laws and regula-
tions. You must comply with all domestic and international export laws and regulations that apply to
the Licensed Content. These laws include restrictions on destinations, end users and end use. For
additional information, see www.microsoft.com/exporting.
7. SUPPORT SERVICES. Because the Licensed Content is provided “as is”, we are not obligated to
provide support services for it.
8. TERMINATION. Without prejudice to any other rights, Microsoft may terminate this agreement if you
fail to comply with the terms and conditions of this agreement. Upon termination of this agreement
for any reason, you will immediately stop all use of and delete and destroy all copies of the Licensed
Content in your possession or under your control.
9. LINKS TO THIRD PARTY SITES. You may link to third party sites through the use of the Licensed
Content. The third party sites are not under the control of Microsoft, and Microsoft is not responsible
for the contents of any third party sites, any links contained in third party sites, or any changes or
updates to third party sites. Microsoft is not responsible for webcasting or any other form of trans-
mission received from any third party sites. Microsoft is providing these links to third party sites to
you only as a convenience, and the inclusion of any link does not imply an endorsement by Microsoft
of the third party site.
10. ENTIRE AGREEMENT. This agreement, and any additional terms for the Trainer Content, updates and
supplements are the entire agreement for the Licensed Content, updates and supplements.
11. APPLICABLE LAW.
1. United States. If you acquired the Licensed Content in the United States, Washington state law
governs the interpretation of this agreement and applies to claims for breach of it, regardless of
conflict of laws principles. The laws of the state where you live govern all other claims, including
claims under state consumer protection laws, unfair competition laws, and in tort.
2. Outside the United States. If you acquired the Licensed Content in any other country, the laws of
that country apply.
12. LEGAL EFFECT. This agreement describes certain legal rights. You may have other rights under the
laws of your country. You may also have rights with respect to the party from whom you acquired the
Licensed Content. This agreement does not change your rights under the laws of your country if the
laws of your country do not permit it to do so.
13. DISCLAIMER OF WARRANTY. THE LICENSED CONTENT IS LICENSED "AS-IS" AND "AS AVAILA-
BLE." YOU BEAR THE RISK OF USING IT. MICROSOFT AND ITS RESPECTIVE AFFILIATES GIVES NO
EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. YOU MAY HAVE ADDITIONAL CON-
SUMER RIGHTS UNDER YOUR LOCAL LAWS WHICH THIS AGREEMENT CANNOT CHANGE. TO
THE EXTENT PERMITTED UNDER YOUR LOCAL LAWS, MICROSOFT AND ITS RESPECTIVE AFFILI-
ATES EXCLUDES ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICU-
LAR PURPOSE AND NON-INFRINGEMENT.
14. LIMITATION ON AND EXCLUSION OF REMEDIES AND DAMAGES. YOU CAN RECOVER FROM
MICROSOFT, ITS RESPECTIVE AFFILIATES AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO
US$5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST
PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
Contents
■■ Module 0 Welcome . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Start here . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
■■ Module 1 Get started on a DevOps transformation journey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Introduction to DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Choose the right project . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Describe team structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Choose the DevOps tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Plan Agile with GitHub Projects and Azure Boards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Introduction to source control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Describe types of source control systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Work with Azure Repos and GitHub . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
Labs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
■■ Module 2 Development for enterprise DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Structure your Git Repo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
Manage Git branches and workflows . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Collaborate with pull requests in Azure Repos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
Explore Git hooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
Plan fostering inner source . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
Manage Git repositories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
Identify technical debt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
■■ Module 3 Implement CI with Azure Pipelines and GitHub Actions . . . . . . . . . . . . . . . . . . . . . . . 181
Explore Azure Pipelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
Manage Azure Pipeline agents and pools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
Describe pipelines and concurrency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
Explore Continuous integration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206
Implement a pipeline strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212
Integrate with Azure Pipelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
Introduction to GitHub Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241
Learn continuous integration with GitHub Actions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
Design a container build strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Labs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271
■■ Module 4 Design and implement a release strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Introduction to continuous delivery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 283
Create a release pipeline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291
Explore release recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
Provision and test environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
Manage and modularize tasks and templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 348
Automate inspection of health . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Labs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
■■ Module 5 Implement a secure continuous deployment using Azure Pipelines . . . . . . . . . . . . . 401
Introduction to deployment patterns . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Implement blue-green deployment and feature toggles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
Implement canary releases and dark launching . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 424
Implement A-B testing and progressive exposure deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 428
Integrate with identity management systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 436
Manage application configuration data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
Labs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 456
■■ Module 6 Manage infrastructure as code using Azure and DSC . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Explore infrastructure as code and configuration management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 467
Create Azure resources using Azure Resource Manager templates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 474
Create Azure resources by using Azure CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 486
Explore Azure Automation with DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 492
Implement Desired State Configuration (DSC) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 516
Implement Bicep . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 529
Labs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 546
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 547
■■ Module 7 Implement security and validate code bases for compliance . . . . . . . . . . . . . . . . . . . 557
Introduction to Secure DevOps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 557
Implement open-source software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 571
Software Composition Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 577
Static analyzers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 587
OWASP and Dynamic Analyzers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
Security Monitoring and Governance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 594
Labs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 605
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 607
■■ Module 8 Design and implement a dependency management strategy . . . . . . . . . . . . . . . . . . . 617
Explore package dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 617
Understand package management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 622
Migrate, consolidate, and secure artifacts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 632
Implement a versioning strategy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 638
Introduction to GitHub Packages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 646
Lab . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 655
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 656
■■ Module 9 Implement continuous feedback . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
Implement tools to track usage and flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663
Develop monitor and status dashboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 694
Share knowledge within teams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 700
Design processes to automate application analytics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 710
Manage alerts, Blameless retrospectives and a just culture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 716
Labs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 725
Module review and takeaways . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 728
Module 0 Welcome
Start here
Microsoft DevOps curriculum
Welcome to the Designing and Implementing Microsoft DevOps Solutions course. This course will
help you prepare for the AZ-400, Designing and Implementing Microsoft DevOps Solutions1 certifica-
tion exam.
The DevOps certification exam is for DevOps professionals who combine people, processes, and technol-
ogies to continuously deliver valuable products and services that meet end-user needs and business
objectives. DevOps professionals streamline delivery by optimizing practices, improving communications
and collaboration, and creating automation. They design and implement strategies for application code
and infrastructure that allow for continuous integration, continuous testing, continuous delivery, and
continuous monitoring and feedback.
Exam candidates must be proficient with Agile practices. They must be familiar with Azure administration
and Azure development and experts in at least one of these areas. DevOps professionals must be able to
design and implement DevOps practices for version control, compliance, infrastructure as code, configu-
ration management, build, release, and testing by using Azure technologies.
There are seven exam study areas.
1 https://docs.microsoft.com/en-us/learn/certifications/exams/AZ-400
You need to create an Azure DevOps Organization and a Team Project for some exercises. If you already have your organization created, use the Azure DevOps Demo Generator [https://azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project - Azure DevOps11.
You need to create a GitHub account at GitHub.com and a project for some exercises. If you don't have it
yet, see:
●● Join GitHub · GitHub12
●● If you already have your GitHub account, create a new repository Creating a new repository -
GitHub Docs13.
Expected learning
After completing this course, students will be able to:
●● Plan for the transformation with shared goals and timelines
●● Select a project and identify project metrics and Key Performance Indicators (KPIs)
●● Create a team and agile organizational structure
●● Design a tool integration strategy
●● Design a license management strategy (e.g., Azure DevOps and GitHub users)
●● Design a strategy for end-to-end traceability from work items to working software
●● Design an authentication and access strategy
●● Design a strategy for integrating on-premises and cloud resources
●● Describe the benefits of using Source Control
●● Describe Azure Repos and GitHub
●● Migrate from TFVC to Git
●● Manage code quality, including technical debt, SonarCloud, and other tooling solutions
●● Build organizational knowledge on code quality
●● Explain how to structure Git repos
●● Describe Git branching workflows
●● Leverage pull requests for collaboration and code reviews
●● Leverage Git hooks for automation
●● Use Git to foster inner source across the organization
●● Explain the role of Azure Pipelines and its components
●● Configure Agents for use in Azure Pipelines
●● Explain why continuous integration matters
●● Implement continuous integration using Azure Pipelines
●● Design processes to measure end-user satisfaction and analyze user feedback
●● Design processes to automate application analytics
●● Manage alerts and reduce meaningless and non-actionable alerts
11 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
12 https://github.com/signup
13 https://docs.github.com/repositories/creating-and-managing-repositories/creating-a-new-repository
Course syllabus
This course includes content that will help you prepare for the Microsoft DevOps Solutions certification
exam. Other content is included to ensure you have a complete picture of DevOps. The course content
consists of graphics, reference links, module review questions, and optional hands-on labs.
Module 1 – Get started on a DevOps transformation journey
●● Lesson 1: Introduction to DevOps
●● Lesson 2: Choose the right project
●● Lesson 3: Describe team structures
●● Lesson 4: Choose the DevOps tools
●● Lesson 5: Plan Agile with GitHub Projects and Azure Boards
●● Lesson 6: Introduction to source control
●● Lesson 7: Describe types of source control systems
●● Lesson 8: Work with Azure Repos and GitHub
●● Labs
●● Lab 01: Agile planning and portfolio management with Azure Boards
●● Lab 02: Version controlling with Git in Azure Repos
Module 2 – Development for enterprise DevOps
●● Lesson 1: Structure your Git Repo
●● Lesson 2: Manage Git branches and workflows
●● Lesson 3: Collaborate with pull requests in Azure Repos
●● Lesson 4: Explore Git hooks
●● Lesson 5: Plan fostering inner source
●● Lesson 6: Manage Git repositories
●● Lesson 7: Identify technical debt
●● Lab
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions14
14 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Module 1 Get started on a DevOps transformation journey
Introduction to DevOps
Introduction
"DevOps is the union of people, process, and products to enable continuous delivery of value to
our end users." - According to Donovan Brown in What is DevOps?1
The DevOps learning paths will help you prepare for a DevOps journey. You'll learn the main characteristics of the DevOps process and the tools and people involved during the lifecycle. The paths also prepare you for the Microsoft DevOps Solutions certification exam2. Other content is included to ensure you have a complete picture of DevOps. The module's content consists of graphics, reference links, module review questions, and optional hands-on labs.
You'll learn how to:
●● Plan for DevOps.
●● Use source control.
●● Scale Git for an enterprise.
●● Combine artifacts.
●● Design a dependency management strategy.
●● Manage secrets.
●● Implement continuous integration.
●● Implement a container build strategy.
●● Design a release strategy.
●● Set up a release management workflow.
●● Implement a deployment pattern.
1 https://www.donovanbrown.com/post/what-is-devops
Learning objectives
After completing this module, students and professionals can:
●● Plan for the transformation with shared goals and timelines.
●● Select a project and identify project metrics and Key Performance Indicators (KPIs).
●● Create a team and agile organizational structure.
●● Design a tool integration strategy.
●● Design a license management strategy (for example, Azure DevOps and GitHub users).
●● Design a plan for end-to-end traceability from work items to working software.
2 https://docs.microsoft.com/en-us/learn/certifications/exams/az-400
Prerequisites
Successful learners will have prior knowledge and understanding of:
●● Cloud computing concepts, including an understanding of PaaS, SaaS, and IaaS implementations.
●● Azure administration and Azure development with proven expertise in at least one of these areas.
●● Version control, Agile software development, and core software development principles. It would be
helpful to have experience in an organization that delivers software.
If you're new to Azure and cloud computing, consider one of the following resources:
●● Free online: Azure Fundamentals3.
●● Instructor-led course: AZ-900: Azure Fundamentals4.
If you're new to Azure Administration, consider taking:
●● Free online: Prerequisites for Azure Administrators5.
●● Instructor-led courses: AZ-104: Microsoft Azure Administrator6 and AZ-010: Azure Administra-
tion for AWS SysOps7.
If you're new to Azure Developer, consider taking:
●● Free online: Create serverless applications8.
●● Instructor-led courses: AZ-204: Developing Solutions for Microsoft Azure9 and AZ-020: Microsoft
Azure Solutions for AWS Developers10.
You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps11.
●● If you already have your organization created, use the Azure DevOps Demo Generator [https://
azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called “Parts Unlimit-
ed” using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project -
Azure DevOps12.
You need to create a GitHub account at GitHub.com and a project for some exercises. If you don't have it
yet, see:
●● Join GitHub · GitHub13
●● If you already have your GitHub account, create a new repository Creating a new repository -
GitHub Docs14.
3 https://docs.microsoft.com/en-us/learn/paths/az-900-describe-cloud-concepts/
4 https://docs.microsoft.com/en-us/learn/certifications/courses/az-900t01
5 https://docs.microsoft.com/en-us/learn/paths/az-104-administrator-prerequisites/
6 https://docs.microsoft.com/en-us/learn/certifications/courses/az-104t00
7 https://docs.microsoft.com/en-us/learn/certifications/courses/az-010t00
8 https://docs.microsoft.com/en-us/learn/paths/create-serverless-applications/
9 https://docs.microsoft.com/en-us/learn/certifications/courses/az-204t00
10 https://docs.microsoft.com/en-us/learn/certifications/courses/az-020t00
11 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
12 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
13 https://github.com/signup
14 https://docs.github.com/repositories/creating-and-managing-repositories/creating-a-new-repository
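If you prefer to script this setup, the sketch below is a minimal, hypothetical example (not part of the official lab instructions) that creates a blank, Git-backed Team Project through the Azure DevOps REST API using Python and the requests package. The organization name and the AZDO_PAT environment variable are placeholders for your own values, and the token needs permission to create projects. Running it only queues the project creation; you can check progress in your organization settings, and you can still use the Demo Generator afterward if you want the PartsUnlimited template content.

    import os

    import requests

    ORG = "your-organization"        # placeholder: your Azure DevOps organization name
    PAT = os.environ["AZDO_PAT"]     # placeholder: a personal access token that can create projects
    BASE = f"https://dev.azure.com/{ORG}/_apis"
    AUTH = ("", PAT)                 # Azure DevOps accepts a PAT as the basic-auth password

    # Look up the process template ID for the "Agile" process rather than hard-coding a GUID.
    processes = requests.get(f"{BASE}/process/processes?api-version=7.0", auth=AUTH).json()["value"]
    agile_id = next(p["id"] for p in processes if p["name"] == "Agile")

    # Queue creation of a blank, Git-backed team project (project creation is asynchronous).
    payload = {
        "name": "Parts Unlimited",
        "description": "Project for the AZ-400 exercises",
        "capabilities": {
            "versioncontrol": {"sourceControlType": "Git"},
            "processTemplate": {"templateTypeId": agile_id},
        },
    }
    response = requests.post(f"{BASE}/projects?api-version=7.0", json=payload, auth=AUTH)
    response.raise_for_status()
    print(response.json().get("status"))   # for example "queued" while the project is being created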
What is DevOps?
DevOps, a contraction of "Dev" and "Ops," refers to replacing siloed Development and Operations. The idea is to create multidisciplinary teams that now work together with shared and efficient practices and tools.
Essential DevOps practices include agile planning, continuous integration, continuous delivery, and
monitoring of applications. DevOps is a constant journey.
Become data-informed
We recommend you use data to inform what to do in your next cycle. Many experience reports tell us
that roughly one-third of the deployments will have negative business results. Approximately one-third
will have positive results, and one-third will make no difference. Fail fast on effects that do not advance
the business and double down on outcomes that support the business. Sometimes the approach is called
pivot or persevere.
2. Continuous Delivery of software solutions to production and testing environments helps organiza-
tions quickly fix bugs and respond to ever-changing business requirements.
3. Version Control, usually with a Git-based repository, enables teams located anywhere in the world to communicate effectively during daily development activities. It also integrates with software development tools for monitoring activities such as deployments.
5. Monitoring and Logging of running applications, including production environments, for application health and customer usage. It helps organizations create a hypothesis and quickly validate or disprove strategies. Rich data is captured and stored in various logging formats.
6. Public and Hybrid Clouds have made the impossible easy. The cloud has removed traditional bottle-
necks and helped commoditize Infrastructure. You can use Infrastructure as a Service (IaaS) to lift and
shift your existing apps or Platform as a Service (PaaS) to gain unprecedented productivity. The cloud
gives you a data center without limits.
7. Infrastructure as Code (IaC): Enables the automation and validation of the creation and teardown of
environments to help with delivering secure and stable application hosting platforms.
8. Use Microservices architecture to isolate business use cases into small reusable services that commu-
nicate via interface contracts. This architecture enables scalability and efficiency.
9. Containers are the next evolution in virtualization. They are much more lightweight than virtual machines, allow much faster hydration, and can be configured easily through files.
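Item 7 above, Infrastructure as Code, can be illustrated with a small script. The following sketch is a minimal, hypothetical example using the Azure SDK for Python (the azure-identity and azure-mgmt-resource packages) to create and then tear down a test environment from code. Declarative templates such as ARM and Bicep, covered later in this course, are the more common IaC approach, but the automation idea is the same; the subscription ID variable and resource group name below are placeholders.

    import os

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.resource import ResourceManagementClient

    # Placeholders: the subscription comes from an environment variable; the group name is arbitrary.
    subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]
    client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

    # Create (or update) a short-lived environment for a test run ...
    client.resource_groups.create_or_update("rg-az400-test", {"location": "eastus"})

    # ... deploy and exercise the application here, then tear the environment down again.
    client.resource_groups.begin_delete("rg-az400-test").result()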
Such change has often happened despite the existing organizational processes, leading to the conclusion that it only works when a separate team is created to pursue the transformation.
For DevOps transformations, the separate team should be composed of staff members who are focused on and measured on the transformation outcomes, and who are not involved in the day-to-day operational work. The team might also include some external experts who can fill knowledge gaps and advise on processes that are new to the existing staff members.
Ideally, the staff members recruited should already be well regarded throughout the organization. As a
group, they should offer a broad knowledge base to think outside the box.
Summary
This module explored the key areas that organizations must apply to start their DevOps transformation
Journey, changing the team's mindset, and defining timelines and goals.
Learn more
●● Donovan Brown | What is DevOps?16
●● What is DevOps? - Azure DevOps | Microsoft Docs17
●● Getting started with GitHub - GitHub Docs18
●● View of features and epics on the Feature Timeline - Azure DevOps | Microsoft Docs19
●● Plan and track work in Azure Boards with Basic or Agile processes - Azure Boards | Microsoft
Docs20
●● Agile Manifesto for Software Development | Agile Alliance21
●● 12 Principles Behind the Agile Manifesto | Agile Alliance22
16 https://www.donovanbrown.com/post/what-is-devops
17 https://docs.microsoft.com/devops/what-is-devops
18 https://docs.github.com/get-started
19 https://docs.microsoft.com/azure/devops/boards/extensions/feature-timeline
20 https://docs.microsoft.com/azure/devops/boards/get-started/plan-track-work
21 https://www.agilealliance.org/agile101/the-agile-manifesto
22 https://www.agilealliance.org/agile101/12-principles-behind-the-agile-manifesto
Choose the right project
Learning objectives
After completing this module, students and professionals can:
●● Understand different projects and systems to guide the journey.
●● Select a project to start the DevOps transformation.
●● Identify groups to minimize initial resistance.
●● Identify project metrics and Key Performance Indicators (KPIs).
Prerequisites
●● Understanding of what DevOps is and its concepts.
Greenfield projects
A greenfield project will always appear to be a more accessible starting point. A blank slate offers the
chance to implement everything the way that you want.
You might also have a better chance of avoiding existing business processes that do not align with your
project plans.
Suppose current IT policies do not allow the use of cloud-based infrastructure. In that case, a greenfield project might qualify for an exception because its applications are designed for that environment from scratch.
You can also sidestep internal political issues that are well entrenched.
Brownfield projects
Usually, brownfield projects come with:
●● The baggage of existing codebases.
●● Existing teams.
●● A significant amount of technical debt.
But, they can still be ideal projects for DevOps transformations.
When your teams spend large percentages of their time just maintaining existing brownfield applications,
you have limited ability to work on new code.
It is essential to find a way to reduce that time and to make software releases less risky. A DevOps transformation can provide that.
The limitations of the existing system will often have worn down the existing team members; they may feel that they are working in the past and be keen to experiment with new ideas.
The system is often crucial for organizations. It might also be easier to gain more robust management
buy-in for these projects because of the potential benefits delivered.
Management might also have a stronger sense of urgency to point brownfield projects in an appropriate
direction when compared to greenfield projects that do not currently exist.
Systems of record
Systems that provide the truth about data elements are often-called systems of record. These systems
have historically evolved slowly and carefully. For example, it is crucial that a banking system accurately
reflects your bank balance. Systems of record emphasize accuracy and security.
Systems of engagement
Many organizations have other systems that are more exploratory. These often use experimentation to
solve new problems. Systems of engagement are modified regularly. Usually, it is a priority to make quick
changes over ensuring that the changes are correct.
There is a perception that DevOps suits systems of engagement more than systems of record. The lessons
from high-performing companies show that is not the case.
Sometimes, the criticality of doing things right with a system of record is an excuse for not implementing
DevOps practices.
Worse, given the way that applications are interconnected, an issue in a system of engagement might
end up causing a problem in a system of record anyway.
Both types of systems are great. At the same time, it might be easier to start with a system of engage-
ment when first beginning a DevOps Transformation.
DevOps practices apply to both types of systems. The most significant outcomes often come from
transforming systems of record.
The staff will also range from traditional users to early adopters, with others happy to work at the innovative edge.
Faster outcomes
●● Deployment Frequency. Increasing the frequency of deployments is often a critical driver in DevOps
Projects.
●● Deployment Speed. It is necessary to reduce the time that deployments take.
●● Deployment Size. How many features, stories, and bug fixes are being deployed each time?
●● Lead Time. How long does it take from the creation of a work item until it is completed?
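Once deployment times and work item timestamps are being captured, outcome metrics such as Deployment Frequency and Lead Time are straightforward to calculate. The sketch below uses made-up data and plain Python, with no particular tool assumed, to derive a deployment frequency and an average lead time:

    from datetime import datetime, timedelta

    # Hypothetical data: when each production deployment finished, and when each completed
    # work item was created and closed.
    deployments = [datetime(2023, 5, day) for day in (2, 4, 9, 11, 16, 18, 23, 25, 30)]
    work_items = [
        (datetime(2023, 5, 1), datetime(2023, 5, 9)),
        (datetime(2023, 5, 3), datetime(2023, 5, 11)),
        (datetime(2023, 5, 10), datetime(2023, 5, 18)),
    ]

    # Deployment frequency: deployments per week over the observed window.
    window_weeks = (max(deployments) - min(deployments)).days / 7
    print(f"Deployment frequency: {len(deployments) / window_weeks:.1f} per week")

    # Lead time: average elapsed time from work item creation to completion.
    lead_times = [closed - created for created, closed in work_items]
    average = sum(lead_times, timedelta()) / len(lead_times)
    print(f"Average lead time: {average.days} days")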
Efficiency
●● Server to Admin Ratio. Are the projects reducing the number of administrators required for a given
number of servers?
●● Staff Member to Customers Ratio. Is it possible for fewer staff members to serve a given number of
customers?
●● Application Usage. How busy is the application?
Culture
●● Employee morale. Are employees happy with the transformation and where the organization is head-
ing? Are they still willing to respond to further changes? This metric can be challenging to measure
but is often done by periodic, anonymous employee surveys.
●● Retention rates. Is the organization losing staff?
Note: It is crucial to choose metrics that focus on specific business outcomes and achieve a return on
investment and increased business value.
Summary
In this module, you learned how to decide where to apply the DevOps process and tools to minimize initial resistance.
Also, how to identify teams, plan for a DevOps transformation culture, and define timelines for your
goals.
You learned how to:
●● Understand what DevOps is and the steps to accomplish it.
Learn more
●● Greenfield project - Wikipedia23
23 https://en.wikipedia.org/wiki/Greenfield_project
●● Brownfield (software development) - Wikipedia24
●● About teams and Azure Boards settings - Azure DevOps | Microsoft Docs25
●● Best practices for Agile project management - Azure Boards | Microsoft Docs26
24 https://en.wikipedia.org/wiki/Brownfield_%28software_development%29
25 https://docs.microsoft.com/azure/devops/organizations/settings/about-teams-and-settings
26 https://docs.microsoft.com/azure/devops/boards/best-practices-agile-project-management
Describe team structures
Learning objectives
After completing this module, students and professionals can:
●● Understand agile practices and principles of agile development.
●● Create a team and agile organizational structure.
●● Identify ideal DevOps team members.
●● Select and configure tools for collaboration.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with Agile software development and core software development principles is helpful but is
not necessary.
●● Beneficial to have experience in an organization that delivers software.
There's a real challenge with gathering customer requirements in the first place.
When delivery takes a long time, what is delivered is often different from what the customer needs, even if you built exactly what the customer asked for.
Customers often don't know what they want until they see it, or they can't explain what they need.
Agile
By comparison, Agile methodology constantly emphasizes adaptive planning and early delivery with
continual improvement.
Rather than restricting development to rigid specifications, it encourages rapid and flexible responses to
changes as they occur.
In 2001, highly regarded developers published a manifesto for Agile software development.
They said that:
●● Development needs to favor individuals and interactions over processes and tools.
●● Working software over comprehensive documentation.
●● Customer collaboration over contract negotiation.
●● Respond to changes over following a plan.
Agile software development methods are based on releases and iterations:
●● One release might consist of several iterations.
●● Each iteration is like a small independent project.
●● After being estimated and prioritized:
●● Features, bug fixes, enhancements, and refactoring work are assigned to a release.
●● And then assigned again to a specific iteration within the release, generally on a priority basis.
●● At the end of each iteration, there should be tested working code.
●● In each iteration, the team must focus on the outcomes of the previous iteration and learn from them.
An advantage of having teams focused on shorter-term outcomes is that they are less likely to waste time over-engineering features or allowing unnecessary scope creep to occur.
Agile software development helps teams keep focused on business outcomes.
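As a toy illustration of the assignment step described above (hypothetical backlog data, no particular tool assumed), prioritized items can be filled into fixed-capacity iterations in priority order:

    # Toy example: assign prioritized backlog items to iterations, highest priority first,
    # respecting a simple per-iteration capacity in story points.
    backlog = [
        ("Checkout bug fix", 3, 1),     # (title, story points, priority; lower number = higher priority)
        ("Wishlist feature", 8, 3),
        ("Payment refactoring", 5, 2),
        ("Search enhancement", 5, 4),
    ]
    capacity = 8
    iterations = [[]]

    for title, points, _ in sorted(backlog, key=lambda item: item[2]):
        used = sum(p for _, p in iterations[-1])
        if used + points > capacity:
            iterations.append([])       # the current iteration is full; start the next one
        iterations[-1].append((title, points))

    for number, items in enumerate(iterations, start=1):
        print(f"Iteration {number}: {[title for title, _ in items]}")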
Waterfall | Agile
Divided into distinct phases. | Separates the project development lifecycle into sprints.
It can be rigid. | Known for flexibility.
All project development phases, such as design, development, and test, are completed once. | It follows an iterative development approach so that each phase may appear more than once.
Define requirements at the start of the project with little change expected. | Requirements are expected to change and evolve.
By comparison, vertical team structures span the architecture and are aligned with skillsets or disciplines:
Vertical teams have been shown to provide better outcomes in Agile projects. Each product must have an identified owner.
Another key benefit of the vertical team structure is that scaling can occur by adding teams. In this
example, feature teams have been created rather than just project teams:
When they first start an agile transformation, many teams hire external coaches or mentors.
Agile coaches help teams or individuals to adopt agile methods or to improve the current techniques and
practices.
They must be agents of change by helping people understand how they work and encouraging them to
adopt new approaches.
Agile coaches typically work with more than one team and remove any roadblocks from inside or outside
the organization.
This work requires various skills, including coaching, mentoring, teaching, and facilitating. Agile coaches must be both trainers and consultants.
There is more than one type of agile coach.
●● Some coaches are technical experts who aim to show staff members how to apply specific concepts—
for example, test-driven development and continuous integration or deployment.
●● They might help how to run effective stand-up and review meetings.
●● Some coaches may themselves act as scrum masters.
Cultural changes
Over recent decades, offices have often become open spaces with few walls. At the time of writing, a
significant shift to working from home has occurred, initially as a response to the pandemic. Both situations
can limit collaboration, and ambient noise and distractions often also reduce productivity. Staff tends to
work better when they have comfortable working environments. Defined meeting times and locations let
teams choose when they want to interact with others.
Asynchronous communication should be encouraged, but there should not be an expectation that all
communications will be responded to urgently. Staff should focus on their primary tasks without feeling
like they are being left out of important decisions.
All meetings should have strict timeframes, and more importantly, have a plan. If there is no plan, there
should be no meeting.
36
As it is becoming harder to find the required staff, great teams will be as comfortable with remote or work-from-home workers as they are with workers in the office.
To be successful, though, collaboration via communication should become part of the organization's
DNA.
Staff should be encouraged to communicate openly and frankly. Learning to deal with conflict is essential
for any team, as there will be disagreements at some point. Mediation skills training would be helpful.
Cross-functional teams
Team members need good collaboration. It is also essential to have a great partnership with wider teams
to bring people with different functional expertise together to work toward a common goal.
Often, there will be people from other departments within an organization.
Faster and better innovation can occur in these cross-functional teams.
People from different areas of the organization will have different views of the same problem, and they
are more likely to come up with alternate solutions to problems or challenges. Existing entrenched ideas
are more likely to be challenged.
Cross-functional teams can also minimize turf-wars within organizations. The more widely that a project
appears to have ownership, the easier it will be to be widely accepted. Bringing cross-functional teams
together also helps to spread knowledge across an organization.
Recognizing and rewarding collective behavior across cross-functional teams can also help to increase
team cohesion.
Collaboration tooling
Agile teams commonly use the following collaboration tools:
Teams (Microsoft)30 is a group chat application from Microsoft. It provides a combined location with workplace chat, meetings, notes, and storage of file attachments. A user can be a member of many teams.
Slack31 is a commonly used tool for collaboration in Agile and DevOps teams. From a single interface, it
provides a series of separate communication channels that can be organized by project, team, or topic.
Conversations are kept and are searchable. It is straightforward to add both internal and external team
members. Slack integrates with many third-party tools like GitHub32 for source code and DropBox33 for
document and file storage.
Jira34 is a commonly used tool that allows for planning, tracking, releasing, and reporting.
Asana35 is a popular tool designed to keep team plans, progress, and discussions in a single place. It has strong capabilities around timelines and boards.
Glip36 is an offering from RingCentral that provides chat, video, and task management.
Other popular tools with collaboration offerings include ProofHub, RedBooth, Trello, DaPulse, and many others.
30 https://products.office.com/microsoft-teams/group-chat-software
31 https://slack.com/
32 https://github.com/
33 https://dropbox.com/
34 https://www.atlassian.com/software/jira
35 https://asana.com/
36 https://glip.com/
Physical tools
Not all tools need to be digital tools. Many teams use whiteboards to collaborate on ideas, index cards
for recording stories, and sticky notes for moving around tasks.
Even when digital tools are available, it might be more convenient to use these physical tools during
stand-up and other meetings.
Collaboration tools
We described collaboration tools in the previous topic.
As complete CI/CD systems, Azure DevOps and GitHub include:
●● Flexibility in Kanban boards.
●● Traceability through Backlogs.
●● Customizability in dashboards.
●● Built-in scrum boards.
●● Integrability directly with code repositories.
●● Code changes can be linked directly to tasks or bugs.
Apart from Azure DevOps and GitHub, other standard tools include:
●● Jira.
●● Trello.
●● Active Collab.
●● Agilo for Scrum.
●● SpiraTeam.
●● Icescrum.
●● SprintGround.
●● Gravity.
●● Taiga.
●● VersionOne.
●● Agilean.
●● Wrike.
●● Axosoft.
●● Assembla.
●● PlanBox.
●● Asana.
●● Binfire.
●● Proggio.
●● VivifyScrum, and many others.
Summary
This module explored agile development practices and helped define and configure teams and tools for
collaboration.
Learn more
●● DevOps vs. Agile | Microsoft Azure37.
●● Best practices for Agile project management - Azure Boards | Microsoft Docs38.
●● Agile Manifesto for Software Development | Agile Alliance39.
●● Agile Board | Trello40.
●● Agile Alliance41.
●● 12 Principles Behind the Agile Manifesto | Agile Alliance42.
●● Jira | Issue & Project Tracking Software | Atlassian43.
37 https://azure.microsoft.com/overview/devops-vs-agile/
38 https://docs.microsoft.com/azure/devops/boards/best-practices-agile-project-management
39 https://www.agilealliance.org/agile101/the-agile-manifesto
40 https://trello.com/b/DnZvFigA/agile-board
41 https://www.agilealliance.org/
42 https://www.agilealliance.org/agile101/12-principles-behind-the-agile-manifesto
43 https://www.atlassian.com/software/jira
Learning objectives
After completing this module, students and professionals can:
●● Design a tool integration strategy.
●● Design a license management strategy (for example, Azure DevOps and GitHub users).
●● Design a strategy for end-to-end traceability from work items to working software.
●● Design an authentication and access strategy.
●● Design a strategy for integrating on-premises and cloud resources.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with DevOps tools is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
What is GitHub?
GitHub is a Software as a Service (SaaS) platform from Microsoft that provides Git-based repositories and
DevOps tooling for developing and deploying software.
It has a wide range of integrations with other leading tools.
Personal access tokens (PATs) can be set up using Git credential managers, or you can create them manually.
Personal access tokens are also helpful when establishing access to command-line tools, external tools,
and tasks in build pipelines.
Also, when calling REST-based APIs, there is no UI prompt to handle authentication, so a personal access token can be used instead. When access is no longer required, you can then revoke the personal access token.
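For illustration, a personal access token can be supplied as the password in basic authentication when calling the Azure DevOps REST API. The following is a minimal sketch; the organization name and token are placeholders:
curl -u :{PAT} "https://dev.azure.com/{organization}/_apis/projects?api-version=7.0"
The command lists the projects the token is authorized to see, and revoking the token immediately invalidates such calls.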
Security groups
Azure DevOps is pre-configured with default security groups. Default permissions are assigned to the
default security groups. But you can also configure access at the organization level, the collection level,
and the project or object level.
In the organization settings in Azure DevOps, you can configure app access policies. Based on your
security policies, you might allow alternate authentication methods, enable third-party applications to
access via OAuth, or even allow anonymous access to some projects.
For even tighter control, you can set conditional access to Azure DevOps. It offers simple ways to help
secure resources when using Azure Active Directory for authentication.
Multifactor authentication
Conditional access policies such as multifactor authentication can help to minimize the risk of compro-
mised credentials.
As part of a conditional access policy, you might require:
●● Security group membership.
●● A location or network identity.
●● A specific operating system.
●● A managed device, or other criteria.
Jira
Jira is a commonly used work management tool.
In the Visual Studio Marketplace, Solidify offers a tool for Jira to Azure DevOps migration. It migrates in
two phases. Jira issues are exported to files, and then the files are imported to Azure DevOps.
If you decide to write the migration code yourself, the following blog post provides a sample code that
might help you get started: Migrate your project from Jira to Azure DevOps44.
Other applications
Third-party organizations offer commercial tooling to assist with migrating to Azure DevOps from other work management tools, such as:
●● Aha.
●● BugZilla.
●● ClearQuest.
●● And others.
44 http://www.azurefieldnotes.com/2018/10/01/migrate-your-project-from-jira-to-azure-devops/
45 https://docs.microsoft.com/en-us/azure/load-testing/overview-what-is-azure-load-testing
Summary
This module explored the basics of Azure DevOps and GitHub and how to start using them to implement
DevOps.
You learned how to describe the benefits and usage of:
●● Design a tool integration strategy.
●● Design a license management strategy (for example, Azure DevOps and GitHub users).
●● Design a strategy for end-to-end traceability from work items to working software.
●● Design an authentication and access strategy.
●● Design a strategy for integrating on-premises and cloud resources.
46 https://azure.microsoft.com/pricing/details/devops/azure-devops-services/
47 https://github.com/pricing/
Learn more
●● Azure DevOps Services | Microsoft Azure48.
●● Features | GitHub49.
●● Pricing Calculator | Microsoft Azure50.
●● Azure DevOps Server to Services Migration overview - Azure DevOps | Microsoft Docs51.
●● Azure DevOps Services Pricing | Microsoft Azure52.
●● GitHub Pricing53.
48 https://azure.microsoft.com/services/devops/
49 https://github.com/features
50 https://azure.microsoft.com/pricing/calculator/
51 https://docs.microsoft.com/azure/devops/migrate/migration-overview?view=azure-devops
52 https://azure.microsoft.com/pricing/details/devops/azure-devops-services/
53 https://github.com/pricing/
Learning objectives
After completing this module, students and professionals can:
●● Describe GitHub Projects and Azure Boards.
●● Link Azure Boards and GitHub.
●● Configure and Manage GitHub Projects and boards.
●● Customize Project views.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create a GitHub account at GitHub.com and a project for some exercises. If you don't
have it yet, see: Join GitHub · GitHub54. If you already have your GitHub account, create a new
repository Creating a new repository - GitHub Docs55.
54 https://github.com/signup
55 https://docs.github.com/repositories/creating-and-managing-repositories/creating-a-new-repository
The default templates are:
●● Basic kanban: Track your tasks with To do, In progress, and Done columns.
●● Automated kanban: Cards automatically move between the To do, In progress, and Done columns.
●● Automated kanban with review: Cards automatically move between the To do, In progress, and Done columns, with extra triggers for pull request review status.
●● Bug triage: Triage and prioritize bugs with To do, High priority, Low priority, and Closed columns.
Projects (beta)
Projects (beta) are a new, customizable, and flexible version of projects for planning and tracking work on GitHub.
Note: Projects (beta) are currently in public beta and subject to change.
A project is a customizable spreadsheet in which you can configure the layout by filtering, sorting, and grouping your issues and PRs, and by adding custom fields to track metadata.
You can use different views such as Board or spreadsheet/table.
If you make changes in your pull request or issue, your project reflects that change.
You can use custom fields in your tasks. For example:
●● A date field to track target ship dates.
●● A number field to track the complexity of a task.
●● A single select field to track whether a task is Low, Medium, or High priority.
●● A text field to add a quick note.
●● An iteration field to plan work week-by-week, including support for breaks.
For more information about Projects (beta), see:
●● Creating a project (beta)61.
56 https://docs.github.com/articles/creating-a-project-board
57 https://docs.github.com/articles/editing-a-project-board
58 https://docs.github.com/articles/copying-a-project-board
59 https://docs.github.com/articles/adding-issues-and-pull-requests-to-a-project-board
60 https://docs.github.com/articles/project-board-permissions-for-an-organization
61 https://docs.github.com/issues/trying-out-the-new-projects-experience/creating-a-project
You can track your work using the default work item types such as user stories, bugs, features, and epics.
It's possible to customize these types or create your own. Each work item provides a standard set of
system fields and controls, including Discussion for adding and tracking comments, History, Links, and
Attachments.
62 https://docs.github.com/issues/trying-out-the-new-projects-experience/managing-iterations
63 https://docs.github.com/issues/trying-out-the-new-projects-experience/customizing-your-project-views
64 https://docs.github.com/issues/trying-out-the-new-projects-experience/automating-projects
If you need to create reports or a list of work with specific filters, you can use the queries hub to generate
custom lists of work items.
Queries support the following tasks:
●● Find groups of work items with something in common.
●● Triage work to assign to a team member or sprint and set priorities.
●● Perform bulk updates.
●● View dependencies or relationships between work items.
●● Create status and trend charts that you can optionally add to dashboards.
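As an illustration of the query tasks above, similar filtered lists can also be generated from the command line using the Azure DevOps CLI extension and a WIQL query. This is only a sketch; the organization and project names are placeholders:
az extension add --name azure-devops
az boards query --organization https://dev.azure.com/{organization} --project {project} --wiql "SELECT [System.Id], [System.Title], [System.State] FROM WorkItems WHERE [System.WorkItemType] = 'Bug' AND [System.State] = 'Active'"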
Delivery plans
It's possible to create another view with deliverables and track dependencies across several teams in a
calendar view using Delivery Plans.
65 https://docs.microsoft.com/azure/devops/boards
66 https://docs.microsoft.com/azure/devops/boards/get-started/why-use-azure-boards
67 https://docs.microsoft.com/azure/devops/boards/github
68 https://github.com/marketplace/azure-boards
Authenticating to GitHub
Azure Boards can connect to GitHub. For GitHub in the cloud, when adding a GitHub connection, the
authentication options are:
●● Username/Password
●● Personal Access Token (PAT)
For a walkthrough on making the connection, see: Connect Azure Boards to GitHub69.
You can configure other Azure Boards/Azure DevOps Projects, GitHub.com repositories, or change the
current configuration from the Azure Boards app page.
Once you've integrated Azure Boards with GitHub using the Azure Boards app, you can add or remove
repositories from the web portal for Azure Boards.
69 https://docs.microsoft.com/azure/devops/boards/github/connect-to-github
70 https://docs.microsoft.com/azure/devops/boards/github
71 https://docs.microsoft.com/azure/devops/boards/github/change-azure-boards-app-github-repository-access
72 https://docs.microsoft.com/azure/devops/boards/github/add-remove-repositories
73 https://docs.microsoft.com/azure/devops/boards/github/link-to-from-github
Adding issues
When your new project initializes, it prompts you to add items.
Click on the plus (+) sign to add more issues.
74 https://docs.github.com/issues/trying-out-the-new-projects-experience/quickstart
75 https://docs.github.com/issues/tracking-your-work-with-issues/creating-an-issue
When you first create an iteration field, three iterations are automatically created. You can add other
iterations if needed.
Iteration field
You can use the command palette or the project's interface to create an iteration field.
Tip: To open the project command palette, press Ctrl+K (Windows/Linux) or Command+K (Mac).
Start typing any part of “Create new field”. When "Create new field" displays in the command palette,
select it.
Or follow the steps using the interface:
1. Navigate to your project.
2. Click on the plus (+) sign in the rightmost field header. A drop-down menu with the project fields will appear.
3. Click New field.
4. Enter a name for the new iteration field.
5. Select the dropdown menu below and click Iteration.
6. (Optional) To change the starting date from the current day, select the calendar dropdown next to Starts on, and click a new starting date.
7. To change the duration of each iteration, type a new number, then select the dropdown and click
either days or weeks.
Also, you can insert breaks into your iterations to communicate when you're taking time away from
scheduled work.
For more information about iterations, see:
●● Managing iterations in projects (beta) - GitHub Docs76.
●● Best practices for managing projects (beta) - GitHub Docs77.
76 https://docs.github.com/issues/trying-out-the-new-projects-experience/managing-iterations
77 https://docs.github.com/issues/trying-out-the-new-projects-experience/best-practices-for-managing-projects
78 https://docs.github.com/get-started/using-github/github-command-palette
79 https://docs.github.com/issues/trying-out-the-new-projects-experience/about-projects
80 https://docs.github.com/issues/trying-out-the-new-projects-experience/creating-a-project
81 https://docs.github.com/organizations/managing-organization-settings/enabling-or-disabling-github-discussions-for-an-organization
82 https://docs.github.com/github/collaborating-with-issues-and-pull-requests/quickstart-for-communicating-on-github
83 https://docs.github.com/articles/about-teams
84 https://docs.github.com/organizations/collaborating-with-your-team/creating-a-team-discussion
85 https://docs.github.com/organizations/collaborating-with-your-team/editing-or-deleting-a-team-discussion
Summary
This module introduced you to GitHub Projects, GitHub Project Boards, and Azure Boards. It explored
ways to link Azure Boards and GitHub, configure GitHub Projects and Project views, and manage work
with GitHub Projects.
You learned how to describe the benefits and usage of:
●● Describe GitHub Projects and Azure Boards.
●● Link Azure Boards and GitHub.
●● Configure and Manage GitHub Projects and boards.
●● Customize Project views.
Learn more
●● Quickstart for projects (beta) - GitHub Docs86.
●● About project boards - GitHub Docs87.
●● What is Azure Boards? Tools to manage software development projects. - Azure Boards |
Microsoft Docs88.
●● Azure Boards-GitHub integration - Azure Boards | Microsoft Docs89.
●● Managing iterations in projects (beta) - GitHub Docs90.
●● Quickstart for projects (beta) - GitHub Docs91.
●● Best practices for managing projects (beta) - GitHub Docs92.
●● Customizing your project (beta) views - GitHub Docs93.
●● About team discussions - GitHub Docs94.
86 https://docs.github.com/issues/trying-out-the-new-projects-experience/quickstart
87 https://docs.github.com/issues/organizing-your-work-with-project-boards/managing-project-boards/about-project-boards
88 https://docs.microsoft.com/azure/devops/boards/get-started/what-is-azure-boards
89 https://docs.microsoft.com/azure/devops/boards/github
90 https://docs.github.com/issues/trying-out-the-new-projects-experience/managing-iterations
91 https://docs.github.com/issues/trying-out-the-new-projects-experience/quickstart
92 https://docs.github.com/issues/trying-out-the-new-projects-experience/best-practices-for-managing-projects
93 https://docs.github.com/issues/trying-out-the-new-projects-experience/customizing-your-project-views
94 https://docs.github.com/organizations/collaborating-with-your-team/about-team-discussions
Learning objectives
After completing this module, students and professionals can:
●● Understand source control.
●● Apply best practices for source control.
●● Describe the benefits of using source control.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
95 https://puppet.com/resources/report/2021-state-of-devops-report
Version control is essential for all software development projects and is vital at large businesses and
enterprises.
Enterprises have many stakeholders and organizational characteristics. For example:
●● Distributed teams.
●● Strict processes and workflows.
●● Siloed organizations.
●● Hierarchical organizations.
All those characteristics represent coordination and integration challenges when it comes to merging and
deploying code.
Companies within highly regulated industries need a practical way to ensure that all standards are met
appropriately and mitigate risk—for example, banking and healthcare.
Without version control, you're tempted to keep multiple copies of code on your computer. It could be dangerous: it's easy to change or delete a file in the wrong copy of the code, potentially losing work.
Version control systems solve this problem by managing all versions of your code but presenting you
with a single version at a time.
Tools and processes alone aren't enough to accomplish this. Practices such as Agile, Continuous Integration, and DevOps, believe it or not, all rely on a solid version control practice.
Version control is about keeping track of every change to software assets—tracking and managing the
who, what, and when. Version control is the first step needed to assure quality at the source, ensure flow
and pull value, and focus on the process. All of these create value not just for the software teams but
ultimately for the customer.
Version control is a solution for managing and saving changes made to any manually created assets. If
changes are made to the source code, you can go back in time and easily roll back to previous-working
versions.
Version control tools will enable you to see who made changes, when, and what exactly was changed.
Version control also makes experimenting easy and, most importantly, makes collaboration possible.
Without version control, collaborating over source code would be a painful operation.
There are several perspectives on version control.
●● For developers, it's a daily enabler for work and collaboration to happen. It's part of the daily job, one
of the most-used tools.
●● For management, the critical value of version control is in:
●● IP security.
●● Risk management.
●● Time-to-market speed through Continuous Delivery, where version control is a fundamental
enabler.
Whether writing code professionally or personally, you should always version your code using a source
control management system. Some of the advantages of using source control are:
●● Create workflows. Version control workflows prevent the chaos of everyone using their development
process with different and incompatible tools. Version control systems provide process enforcement
and permissions, so everyone stays on the same page.
●● Work with versions. Every version has a description in the form of a comment. These descriptions
help you follow changes in your code by version instead of by individual file changes. Code stored in
versions can be viewed and restored from version control at any time as needed. It makes it easy to
base new work on any version of code.
●● Collaboration. Version control synchronizes versions and makes sure that your changes do not
conflict with other changes from your team. Your team relies on version control to help resolve and
prevent conflicts, even when people make changes simultaneously.
●● Maintains history of changes. Version control keeps a record of changes as your team saves new
versions of your code. This history can be reviewed to find out who, why, and when changes were
made. The history gives you the confidence to experiment since you can roll back to a previous good
version at any time. The history lets you base new work on any code version, such as fixing a bug in an earlier release.
●● Automate tasks. Version control automation features save your team time and generate consistent
results. Automate testing, code analysis, and deployment when new versions are saved to version
control.
●● Collaboration – When teams work together, quality tends to improve. We catch one another's
mistakes and can build on each other's strengths.
●● Learning – Organizations benefit when they invest in employees learning and growing. It is important
for onboarding new team members, the lifelong learning of seasoned members, and the opportunity
for workers to contribute to the bottom line and the industry.
Summary
This module explored the basics of source control and how to work with it daily to benefit team collabo-
ration and code maintenance.
You learned how to describe the benefits and usage of:
●● Understand source control.
●● Apply best practices for source control.
●● Describe the benefits of using source control.
Learn more
●● Understand source control - Azure DevOps96.
●● Using source control in your codespace - GitHub Docs97.
96 https://docs.microsoft.com/azure/devops/user-guide/source-control
97 https://docs.github.com/codespaces/developing-in-codespaces/using-source-control-in-your-codespace
Learning objectives
After completing this module, students and professionals can:
●● Apply source control practices in your development process.
●● Explain the differences between centralized and distributed version control.
●● Understand Git and Team Foundation Version Control (TFVC).
●● Develop using Git.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
Centralized source control systems are based on the idea that there's a single “central” copy of your
project somewhere (probably on a server). Programmers will check in (or commit) their changes to this
central copy.
“Committing” a change means to record the difference in the central system. Other programmers can
then see this change.
Also, it's possible to pull down the change. The version control tool will automatically update the con-
tents of any files that were changed.
Most modern version control systems deal with “changesets,” which are a group of changes (possibly to many files) that should be treated as a cohesive whole.
Programmers no longer must keep many copies of files on their hard drives manually. The version control
tool can talk to the central copy and retrieve any version they need on the fly.
Some of the most common-centralized version control systems you may have heard of or used are Team
Foundation Version Control (TFVC), CVS, Subversion (or SVN), and Perforce.
Over time, so-called “distributed” source control or version control systems (DVCS for short) have become
the most important. The three most popular are Git, Mercurial, and Bazaar.
These systems do not necessarily rely on a central server to store all the versions of a project's files.
Instead, every developer “clones” a copy of a repository and has the project's complete history on their
hard drive.
This copy (or “clone”) has all the metadata of the original.
This method may sound wasteful, but in practice, it is not a problem. Most programming projects consist
primarily of plain text files (and maybe a few images).
The disk space is so cheap that storing many copies of a file does not create a noticeable dent in a hard
drive's free space. Modern systems also compress the files to use even less space.
The act of getting new changes from a repository is called “pulling.”
The act of moving your changes to a repository is called “pushing.”
In both cases, you move changesets (changes to file groups as coherent wholes), not single-file diffs.
One common misconception about distributed version control systems is that there cannot be a central
project repository. It is not true. Nothing is stopping you from saying, “this copy of the project is the
authoritative one.”
It means that a central repository is now optional rather than required by the tools you use.
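For illustration, the everyday flow in a distributed system such as Git looks like the following sketch; the repository URL is a placeholder:
git clone https://example.com/team/project.git
git pull
git push
The clone copies the complete repository and its history, pull fetches and merges new changesets from the remote, and push publishes your local changesets.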
software that works this way includes Visual Source Safe, Perforce, and CVS. With server workspaces, you can scale up to huge codebases with millions of files per branch and also handle large binary files.
●● Local workspaces - Each team member copies the latest codebase version with them and works
offline as needed. Developers check in their changes and resolve conflicts as necessary. Another
system that works this way is Subversion.
Community
In many circles, Git has come to be the expected version control system for new projects.
If your team is using Git, odds are you will not have to train new hires on your workflow because they will
already be familiar with distributed development.
Also, Git is popular among open-source projects. It is easy to use 3rd-party libraries and encourage
others to fork your open-source code.
Distributed development
In TFVC, each developer gets a working copy that points back to a single central repository. Git, however,
is a distributed version control system. Instead of a working copy, each developer gets their local reposi-
tory, complete with an entire history of commits.
Having a complete local history makes Git fast since it means you do not need a network connection to
create commits, inspect previous versions of a file, or do diffs between commits.
Distributed development also makes it easier to scale your engineering team. If someone breaks the
production branch in SVN, other developers cannot check in their changes until it is fixed. With Git, this
kind of blocking does not exist. Everybody can continue going about their business in their local reposi-
tories.
And, like feature branches, distributed development creates a more reliable environment. Even if devel-
opers obliterate their repository, they can clone from someone else and start afresh.
Trunk-based development
One of the most significant advantages of Git is its branching capabilities. Unlike centralized version
control systems, Git branches are cheap and easy to merge.
Trunk-based development provides an isolated environment for every change to your codebase. When
developers want to start working on something—no matter how large or small—they create a new
branch. It ensures that the master branch always contains production-quality code.
Using trunk-based development is more reliable than directly editing production code, but it also provides organizational benefits.
Branches let you represent development work at the same granularity as your agile backlog.
For example, you might implement a policy where each work item is addressed in its feature branch.
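For instance, a team following that policy might name each branch after the work item it implements. This is only a sketch; the work item number is hypothetical:
git checkout -b feature/1234-devops-home-page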
Pull requests
Many source code management tools such as Azure Repos enhance core Git functionality with pull
requests.
A pull request is a way to ask another developer to merge one of your branches into their repository.
It makes it easier for project leads to keep track of changes and lets developers start discussions around
their work before integrating it with the rest of the codebase.
Since they are essentially a comment thread attached to a feature branch, pull requests are incredibly
versatile.
When a developer gets stuck with a complex problem, they can open a pull request to ask for help from
the rest of the team.
And by treating pull requests as a formal code review, junior developers can be confident that they are not destroying the entire project.
As you might expect, Git works well with continuous integration and continuous delivery environments.
Git hooks allow you to run scripts when certain events occur inside a repository, which lets you automate
deployment to your heart’s content.
You can even build or deploy code from specific branches to different servers.
For example, you might want to configure Git to deploy the most recent commit from the develop branch
to a test server whenever anyone merges a pull request into it.
Combining this kind of build automation with peer review means you have the highest possible confi-
dence in your code as it moves from development to staging to production.
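As a minimal illustration of this idea, a client-side Git hook can run checks before code leaves a developer's machine. The sketch below assumes a .NET project like the one used later in this course; the hook lives at the standard path .git/hooks/pre-push and must be executable:
#!/bin/sh
# .git/hooks/pre-push - run the test suite before every push
dotnet test || exit 1
If the tests fail, the non-zero exit code aborts the push. Server-side deployment hooks follow the same principle but run in the hosting environment.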
Overwriting history
Git technically does allow you to overwrite history - but like any helpful feature, it can cause conflicts if misused.
If your teams are careful, they should never have to overwrite history.
If you are synchronizing to Azure Repos, you can also add a security rule that prevents developers from
overwriting history by using the explicit “Force Push” permissions.
Every source control system works best when developers understand how it works and which conventions
work.
While you cannot overwrite history with Team Foundation Version Control (TFVC), you can still overwrite
code and do other painful things.
Large files
Git works best with repos that are small and do not contain large files (or binaries).
Every time you (or your build machines) clone the repo, they get the entire repo with its history from the
first commit.
It is great for most situations but can be frustrating if you have large files.
Binary files are even worse because Git cannot optimize how they are stored.
That is why Git LFS98 was created.
It lets you separate large files of your repos and still have all the benefits of versioning and comparing.
Also, if you are used to storing compiled binaries in your source repos, stop!
Use Azure Artifacts99 or some other package management tool to store binaries for which you have
source code.
However, teams with large files (like 3D models or other assets) can use Git LFS to keep the code repo
slim and trimmed.
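For example, tracking a large asset type with Git LFS takes only a few commands. This is a sketch; the *.psd pattern is just an illustration of a large binary format:
git lfs install
git lfs track "*.psd"
git add .gitattributes
git commit -m "Track Photoshop files with Git LFS"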
Learning curve
There is a learning curve. If you have never used source control before, you are probably better off learning Git first. I have found that users of centralized source control (TFVC or Subversion) battle initially to make the mental shift, especially around branches and synchronizing.
Once developers understand how Git branches work and get over the fact that they must commit and
then push, they have all the basics they need to succeed in Git.
98 https://git-lfs.github.com/
99 https://azure.microsoft.com/services/devops/artifacts/
To fully appreciate the effectiveness of Git, you must first understand how to carry out basic operations
on Git. For example, clone, commit, push, and pull.
The natural question is, how do we get started with Git?
One option is to go native with the command line or look for a code editor that supports Git natively.
Visual Studio Code is a cross-platform, open-source code editor that provides powerful developer tooling
for hundreds of languages.
To work in open-source, you need to embrace open-source tools.
This recipe will start by:
●● Setting up the development environment with Visual Studio Code.
●● Creating a new Git repository.
●● Committing code changes locally.
●● Pushing changes to a remote repository on Azure DevOps.
Getting ready
In this tutorial, we'll learn how to initialize a Git repository locally.
Then we will use the ASP.NET Core MVC project template to create a new project and version it in the
local Git repository.
We will then use Visual Studio Code to interact with the Git repository to do basic commit, pull, and push
operations.
You will need to set up your working environment with:
●● .NET Core 3.1 SDK or later: Download .NET100.
●● Visual Studio Code: Download Visual Studio Code101.
●● C# Visual Studio Code extension: C# programming with Visual Studio Code102.
●● Git: Git - Downloads103
●● Git for Windows (if you are using Windows): Git for Windows104
The Visual Studio Marketplace features several extensions for Visual Studio Code that you can install to
enhance your experience of using Git:
●● Git Lens105: This extension brings visualization for code history by using Git blame annotations and
code lens. The extension enables you to seamlessly navigate and explore the history of a file or
branch. Also, the extension allows you to gain valuable insights via powerful comparison commands
and so much more.
●● Git History106: Brings visualization and interaction capabilities to view the Git log, file history and
compare branches or commits.
100 https://dotnet.microsoft.com/download
101 https://code.visualstudio.com/Download
102 https://code.visualstudio.com/docs/languages/csharp
103 https://git-scm.com/downloads
104 https://gitforwindows.org/
105 https://gitlens.amod.io/
106 https://github.com/DonJayamanne/gitHistoryVSCode/blob/master/README.md
How to do it
1. Open the Command Prompt and create a new working folder:
mkdir myWebApp
cd myWebApp
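2. Initialize a new Git repository in the working folder. This step is implied by the walkthrough's goal of versioning the project in a local Git repository:
git init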
3. Configure global settings for the name and email address to be used when committing in this Git
repository:
git config --global user.name "John Doe"
git config --global user.email "john.doe@contoso.com"
If you are working behind an enterprise proxy, you can make your Git repository proxy-aware by
adding the proxy details in the Git global configuration file.
Different variations of this command will allow you to set up an HTTP/HTTPS proxy (with username/
password) and optionally bypass SSL verification.
Run the below command to configure a proxy in your global git config.
git config --global http.proxy
http://proxyUsername:proxyPassword@proxy.server.com:port
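For example, the HTTPS proxy and SSL verification variations mentioned above look like the following sketch; only bypass SSL verification if your corporate proxy requires it:
git config --global https.proxy https://proxyUsername:proxyPassword@proxy.server.com:port
git config --global http.sslVerify false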
4. Create a new ASP.NET core application. The new command offers a collection of switches that can be
used for language, authentication, and framework selection. More details can be found on Microsoft
docs107.
dotnet new mvc
5. When the project opens in Visual Studio Code, select Yes for the Required assets to build and
debug are missing from ‘myWebApp.’ Add them? Warning message. Select Restore for the There
are unresolved dependencies info message. Hit F5 to debug the application, then myWebApp will
load in the browser, as shown in the following screenshot:
107 https://docs.microsoft.com/dotnet/core/tools/dotnet-new
If you prefer to use the command line, you can run the following commands in the context of the git
repository to run the web application.
dotnet build
dotnet run
You will notice the “.vscode” folder is added to your working folder. To avoid committing this folder
into your Git repository, you can include it in the .gitignore file. With the ".vscode" folder selected, hit
F1 to launch the command window in Visual Studio Code, type gitIgnore, and accept the option to
include the selected folder in the .gitIgnore file:
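If you prefer the command line, you can append the entry yourself. This minimal sketch assumes the default folder name:
echo .vscode/ >> .gitignore
git add .gitignore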
6. To stage and commit the newly created myWebApp project to your Git repository from Visual Studio Code, navigate to the Git icon in the left panel. Add a commit comment and commit the changes by clicking the checkmark icon. It will stage and commit the changes in one operation:
Open Program.cs; you will notice GitLens decorates the classes and functions with the commit history and brings this information inline on every line of code:
7. Now launch cmd in the context of the git repository and run git branch --list. It will show you
that currently, only the main branch exists in this repository. Now run the following command to
create a new branch called feature-devops-home-page.
git branch feature-devops-home-page
git checkout feature-devops-home-page
git branch --list
With these commands, you have created a new branch and checked it out. The --list keyword shows
you a list of all branches in your repository. The green color represents the branch that is currently
checked out.
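Tip: The first two commands can be combined; git checkout -b feature-devops-home-page creates the new branch and checks it out in a single step.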
8. Now navigate to the file ~\Views\Home\Index.cshtml and replace the contents with the text
below.
@{
ViewData["Title"] = "Home Page";
}
<div class="text-center">
<h1 class="display-4">Welcome</h1>
<p>Learn about <a href="https://azure.microsoft.com/services/devops/">Azure DevOps</a>.</p>
</div>
10. In the context of the git repository, execute the following commands. These commands will stage the changes in the branch and then commit them (the commit message can be any short description):
git status
git add .
git commit -m "updated welcome page"
git status
11. To merge the changes from the feature-devops-home-page into main, run the following commands
in the context of the git repository.
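For example, switching back to main and merging the feature branch would look like this (assuming the default branch is named main, as shown earlier):
git checkout main
git merge feature-devops-home-page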
How it works
The easiest way to understand the outcome of the steps done earlier is to check the history of the
operation. Let us have a look at how to do it.
1. In Git, committing changes to a repository is a two-step process. Running git add . stages the changes but does not commit them. Then, running git commit promotes the staged changes into the repository.
2. To see the history of changes in the main branch, run the command git log -v
3. To investigate the actual changes in the commit, you can run the command git log -p
There is more
Git makes it easy to back out changes. Following our example, suppose you want to take out the changes made to the welcome page.
You can do it by hard resetting the main branch to a previous commit using the following command.
git reset --hard 5d2441f0be4f1e4ca1f8f83b56dee31251367adc
Running the above command would reset the branch to the project init change.
If you run git log -v, you will see that the changes done to the welcome page are removed from the
repository.
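To find the commit ID to reset to, you can first list the history in a compact form, for example:
git log --oneline
Each line shows the abbreviated commit ID followed by the commit message; copy the ID of the commit you want to return to.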
Summary
This module described different source control systems, such as Git and Team Foundation Version Control
(TFVC) and helped with the initial steps for Git utilization.
You learned how to describe the benefits and usage of:
●● Apply source control practices in your development process.
●● Explain the differences between centralized and distributed version control.
●● Understand Git and Team Foundation Version Control (TFVC).
●● Develop using Git.
Learn more
●● What is Team Foundation Version Control - Azure Repos | Microsoft Docs108.
●● Migrate from TFVC to Git - Azure DevOps | Microsoft Docs109.
●● Git and TFVC version control - Azure Repos | Microsoft Docs110.
108 https://docs.microsoft.com/azure/devops/repos/tfvc/what-is-tfvc
109 https://docs.microsoft.com/devops/develop/git/migrate-from-tfvc-to-git
110 https://docs.microsoft.com/azure/devops/repos/tfvc/comparison-git-tfvc
●● Get started with Git and Visual Studio - Azure Repos | Microsoft Docs111.
111 https://docs.microsoft.com/azure/devops/repos/git/gitquickstart
Learning objectives
After completing this module, students and professionals can:
●● Work with Azure Repos and GitHub.
●● Link Azure Boards and GitHub.
●● Plan and migrate from TFVC to Git.
●● Work with GitHub Codespaces.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with Git and version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
●● Automation with built-in CI/CD: Set up continuous integration/continuous delivery (CI/CD) to trigger builds, tests, and deployments automatically with every completed pull request, using Azure Pipelines or your tools.
●● Protection of your code quality with branch policies: Keep code quality high by requiring code reviewer sign-off, successful builds, and passing tests before pull requests are merged. Customize your branch policies to maintain your team's high standards.
●● Usage of your favorite tools: Use Git and TFVC repositories on Azure Repos with your favorite editor
and IDE.
For further reference on using git in Azure Repos, refer to Microsoft Docs.112
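For example, cloning a Git repository hosted in Azure Repos uses standard Git tooling. This is a sketch; the organization, project, and repository names are placeholders:
git clone https://dev.azure.com/{organization}/{project}/_git/{repository}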
Introduction to GitHub
GitHub is the largest open-source community in the world. Microsoft owns GitHub. GitHub is a develop-
ment platform inspired by the way you work.
You can host and review code, manage projects, and build software alongside 40 million developers from
open source to business.
GitHub is a Git repository hosting service, but it adds many of its own features.
While Git is a command-line tool, GitHub provides a Web-based graphical interface.
It also provides access control and several collaboration features, such as wikis and essential task man-
agement tools for every project.
So what are the main benefits of using GitHub? Nearly every open-source project uses GitHub to manage
its project.
Using GitHub is free if your project is open source and includes a wiki and issue tracker that makes it easy
to have more in-depth documentation and get feedback about your project.
●● Automate your workflows: Build, test, deploy, and run CI/CD the way you want in the same place
you manage code. Trigger Actions from any GitHub event to any available API. Build your Actions
in the language of your choice or choose from thousands of workflows and Actions created by the
community.
●● Packages at home with their code: Use Actions to publish new package versions to GitHub Packag-
es automatically. Install packages and images hosted on GitHub Packages or your preferred
packages registry in your CI/CD workflows. It is always free for open source, and data transfer
within Actions is unlimited for everyone.
●● Securing software together: GitHub plays a role in securing the world's code—developers, maintainers, researchers, and security teams. On GitHub, development teams everywhere can work together to secure the world's software supply chain.
112 https://docs.microsoft.com/azure/devops/repos
●● Get alerts about vulnerabilities in your code: GitHub continuously scans security advisories for
popular languages. Also, sends security alerts to maintainers of affected repositories with details
so they can remediate risks.
●● Automatically update vulnerabilities: GitHub monitors your project dependencies and automatical-
ly opens pull requests to update dependencies to the minimum version that resolves known
vulnerabilities.
●● Stay on top of CVEs: Stay up to date with the latest Common Vulnerabilities and Exposures (CVEs),
and learn how they affect you with the GitHub Advisory Database.
●● Find vulnerabilities that other tools miss: CodeQL is the industry's leading semantic code analysis
engine. GitHub's revolutionary approach treats code as data to identify security vulnerabilities
faster.
●● Eliminate variants: Never make the same mistake twice. Proactive vulnerability scanning prevents
vulnerabilities from ever reaching production.
●● Keep your tokens safe: Accidentally committed a token to a public repository? GitHub got you.
With support from 20 service providers, GitHub takes steps to keep you safe.
●● Seamless code review: Code review is the surest path to better code, and it is fundamental to how
GitHub works. Built-in review tools make code review an essential part of your team's process.
●● Propose changes: Better code starts with a Pull Request, a living conversation about changes
where you can talk through ideas, assign tasks, discuss details, and conduct reviews.
●● Request reviews: If you are on the other side of a review, you can request reviews from your peers
to get the detailed feedback you need.
●● See the difference: Reviews happen faster when you know exactly what is changed. Diffs compare
versions of your source code side by side, highlighting the new, edited, or deleted parts.
●● Comment in context: Discussions happen in comment threads, right within your code. Bundle comments into one review, or reply to someone else's inline comments to start a conversation.
●● Give clear feedback: Your teammates should not have to think too hard about what a thumbs-up
emoji means. Specify whether your comments are required changes or just a few suggestions.
●● Protect branches: Only merge the highest-quality code. You can configure repositories to require
status checks, reducing both human error and administrative overhead.
●● All your code and documentation in one place: There are hundreds of millions of private, public, and
open-source repositories hosted on GitHub. Every repository is equipped with tools to help your host,
version, and release code and documentation.
●● Code where you collaborate: Repositories keep code in one place and help your teams collaborate
with the tools they love, even if you work with large files using Git LFS. With unlimited private
repositories for individuals and teams, you can create or import as many projects as you would
like.
●● Documentation alongside your code: Host your documentation directly from your repositories
with GitHub Pages. Use Jekyll as a static site generator and publish your Pages from the /docs
folder on your master branch.
●● Manage your ideas: Coordinate early, stay aligned, and get more done with GitHub's project manage-
ment tools.
●● See your project's large picture: See everything happening in your project and choose where to
focus your team's efforts with Projects, task boards that live right where they belong: close to your
code.
●● Track and assign tasks: Issues help you identify, assign, and keep track of tasks within your team.
You can open an Issue to track a bug, discuss an idea with an @mention, or start distributing work.
●● The human side of software: Building software is as much about managing teams and communities as it is about code. Whether you are on a team of two or 2,000, GitHub has the support your people need.
●● Manage and grow teams: Help people get organized with GitHub teams, level up to access
administrative roles, and fine-tune your permissions with nested teams.
●● Keep conversations on track: Moderation tools, like issue and pull request locking, help your team stay focused on code. And if you maintain an open-source project, user blocking reduces noise and ensures conversations stay productive.
●● Set community guidelines: Set roles and expectations without starting from scratch. Customize
standard codes of conduct to create the perfect one for your project. Then choose a pre-written
license right from your repository.
GitHub offers excellent learning resources for its platform. You can find everything from git introduction
training to deep dive on publishing static pages to GitHub and how to do DevOps on GitHub right
here113.
113 https://lab.github.com/
Import repository
The Import repository feature also allows you to import a Git repository. It is beneficial for moving your Git repositories from GitHub or any other public or private hosting space into Azure Repos.
There are some limitations here (that apply only when migrating source type TFVC): a single branch and
only 180 days of history.
However, if you only care about one branch and are already in Azure DevOps, it is an effortless but
effective way to migrate.
Use GIT-TFS
What if you need to migrate more than a single branch and keep branch relationships? Or do you want to bring all the history with you?
In that case, you are going to have to use GIT-TFS. It is an open-source project that is built to synchro-
nize Git and TFVC repos.
But you can use it to do a one-off migration using git tfs clone.
GIT-TFS has the advantage that it can migrate multiple branches and preserve the relationships to merge
branches in Git after you migrate.
Be warned that it can take a while to do this conversion - especially for large repos or repos with a long
history.
You can quickly dry run the migration locally, iron out any issues, and then do it for real.
92
There is a lot of flexibility with this tool, so I highly recommend it.
If you are on Subversion, then you can use GIT-SVN to import your Subversion repo similarly to using
GIT-TFS.
114 https://github.com/git-tfs/git-tfs/blob/master/doc/commands/clone.md
Summary
This module introduced Azure Repos and GitHub tools and ways to link Azure Boards and GitHub. Also,
how to migrate from Team Foundation Version Control (TFVC) to Git and work with GitHub Codespaces
for development.
You learned how to describe the benefits and usage of:
●● Work with Azure Repos and GitHub.
●● Link Azure Boards and GitHub.
●● Plan and migrate from TFVC to Git.
●● Work with GitHub Codespaces.
Learn more
●● Integration of Azure Repos and Git Repositories115.
●● Integration of Azure Boards and GitHub116.
●● Import repositories from TFVC to Git - Azure Repos | Microsoft Docs117.
●● GitHub Codespaces118.
115 https://azure.microsoft.com/services/devops/repos/
116 https://docs.microsoft.com/azure/devops/boards/github/
117 https://docs.microsoft.com/azure/devops/repos/git/import-from-tfvc
118 https://github.com/features/codespaces
Labs
Lab 01: Agile planning and portfolio manage-
ment with Azure Boards
Lab overview
In this lab, you will learn about the agile planning and portfolio management tools and processes
provided by Azure Boards and how they can help you quickly plan, manage, and track work across your
entire team. You will explore the product backlog, sprint backlog, and task boards which can be used to
track the flow of work during the course of an iteration. We will also take a look at how the tools have
been enhanced in this release to scale for larger teams and organizations.
Objectives
After you complete this lab, you will be able to:
●● Manage teams, areas, and iterations
●● Manage work items
●● Manage sprints and capacity
●● Customize Kanban boards
●● Define dashboards
●● Customize team process
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions119
119 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Objectives
After you complete this lab, you will be able to:
●● Clone an existing repository
●● Save work with commits
●● Review history of changes
●● Work with branches by using Visual Studio Code
Lab duration
●● Estimated time: 50 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions120
120 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which of the following choices best describes DevOps?
DevOps is the role of the person who manages source control, pipelines, and monitoring environments to continue delivering value to the software project.
DevOps is the union of people, process, and products to enable continuous delivery of value to our
end users.
DevOps is the new process of creating continuous delivery and continuous integration for software
projects.
Multiple choice
Which of the following choices drives the ongoing merging and testing of code that leads to finding defects
early?
Continuous Integration.
Continuous Delivery.
Continuous Feedback.
Multiple choice
Which of the following choices is a practice that enables the automation of environment creation?
Infrastructure as a Service (IaaS).
Infrastructure as Code (IaC).
Software as a Service (SaaS).
Multiple choice
In which of the following choices would you find large amounts of technical debt?
Greenfield project.
Brownfield project.
Bluefield project.
Multiple choice
Which of the following choices would a system that manages inventory in a warehouse be considered?
System of Record.
System of Engagement.
System of History.
Multiple choice
Which of the following choices are the categorized user groups most adopted in Continuous Delivery?
Canaries, Early adopters, and Users.
Alpha and Beta Users.
Blue and Green Users.
Multiple choice
An Agile tool manages and visualizes work by showing tasks moving from left to right across columns
representing stages. What is this tool commonly called?
Backlog.
Kanban Board.
Delivery Plans.
Multiple choice
Which of the following choices isn't describing processes and methodologies correctly?
It is required to implement Waterfall to evolve and implement Agile.
It is possible to start with Waterfall and evolve to Agile.
Agile is the best and most used methodology and the best to start your project.
Multiple choice
Which of the following choices isn't an Agile/Project management tool?
Azure DevOps.
GitHub.
Camtasia.
Multiple choice
Which of the following choices isn't an Azure DevOps service/feature?
Azure Boards.
Azure Monitor.
Azure Repos.
Multiple choice
Which of the following is a correct statement about Azure DevOps?
Azure DevOps only works in on-premises environments. If you want to use a cloud service, you need
to choose GitHub.
Azure DevOps and GitHub are only available in the cloud.
Azure DevOps and GitHub provide options to work on-premises and in the cloud.
Multiple choice
Which of the following choices does Azure DevOps Services enterprise-grade authentication supports?
Microsoft Account, Azure Active Directory (Azure AD), GitHub Account.
Microsoft Account, Azure Active Directory (Azure AD).
GitHub Account.
Multiple choice
Which of the following choices isn't a benefit of source control?
Manageability.
Efficiency.
Accountability.
Multiple choice
Which of the following choices isn't a source control best practice?
Make small changes.
Commit personal and secure files.
Link code changes to work items.
Multiple choice
Which of the following choices correctly describes one of the most valuable version control features?
Version control is a solution for managing and saving changes made to manually created assets. If you
make changes to the source code, you can go back in time and easily roll back to previous-working
versions.
Version control is a solution for automatically incrementing the version number for deployments.
Version control is a solution for managing and saving changes made to manually created assets. If you
make changes to the source code, be careful, you can't go back in time and easily roll back to previ-
ous-working versions.
Multiple choice
Which of the following choices correctly describes what is source control?
Source control is the practice of controlling what is deployed to test and production environments.
Source control is the practice of controlling security through code files.
Source control is the practice of tracking and managing changes to code.
Multiple choice
Which of the following choices isn't a benefit of using distributed version control?
Complete offline support
Allows exclusive file locking.
Portable history.
Multiple choice
Which of the following choices isn't a benefit of using centralized version control?
Easily scales for large codebases.
An open-source friendly code review model via pull requests.
Allows exclusive file locking.
Multiple choice
Which of the following choices isn't a supported Azure Repos version control system?
Git.
Team Foundation Version Control (TFVC).
Source Safe.
Multiple choice
Which of the following choices is the exact number of days of history you can import Team Foundation
Version Control (TFVC) to Git using the "Import repository" feature in Azure DevOps?
90 days.
180 days.
365 days.
Multiple choice
Which of the following choices describe GitHub Codespaces?
It's a platform for hosting and managing packages, including containers and other dependencies.
It's an AI pair programmer that helps you write code faster and with less work.
It's an online implementation of Visual Studio Code.
Multiple choice
Which of the following choices best describes the Azure DevOps and GitHub integration?
Azure Boards has direct integration with Azure Repos, but it can also be integrated with GitHub to plan and track work, linking commits, PRs, and issues.
Azure Boards has direct integration with Azure Repos. Azure Repos only integrates with GitHub using extensions from Marketplace for the read-only track.
Azure Boards has direct integration with GitHub for tracking activities and moving tasks on both sides: Azure Boards to GitHub and GitHub to Azure Repos.
Multiple choice
Which of the following Project board types can contain issues and pull requests from any personal repository?
User-owned project boards.
Organization-wide project boards.
Repository project boards
Multiple choice
Which of the following choices isn't a Project Boards supported template by default?
Basic kanban.
Automated kanban with review.
Automated CMMI.
Answers
Multiple choice
Which of the following choices best describes DevOps?
DevOps is the role of the person who manages source control, pipelines, and monitors environments to continue delivering value to the software project.
■■ DevOps is the union of people, process, and products to enable continuous delivery of value to our
end users.
DevOps is the new process of creating continuous delivery and continuous integration for software
projects.
Explanation
According to Donovan Brown, "DevOps is the union of people, process, and products to enable continuous
delivery of value to our end users."
Multiple choice
Which of the following choices drives the ongoing merging and testing of code that leads to finding
defects early?
■■ Continuous Integration.
Continuous Delivery.
Continuous Feedback.
Explanation
Continuous Integration drives the ongoing merging and testing of code, which leads to finding defects early.
Multiple choice
Which of the following choices is a practice that enables the automated creation of environments?
Infrastructure as a Service (IaaS).
■■ Infrastructure as Code (IaC).
Software as a Service (SaaS).
Explanation
Infrastructure as Code (IaC) is a practice that enables the automation and validation of the creation and
teardown of environments to help with delivering secure and stable application hosting platforms.
Multiple choice
In which of the following choices would you find large amounts of technical debt?
Greenfield project.
■■ Brownfield project.
Bluefield project.
Explanation
A Brownfield Project comes with the baggage of existing codebases, existing teams, and often a significant
technical debt. They can still be ideal projects for DevOps transformations.
Multiple choice
Which of the following choices would a system that manages inventory in a warehouse be considered?
■■ System of Record.
System of Engagement.
System of History.
Explanation
Systems that provide the truth about data elements are often called Systems of Record.
Multiple choice
Which of the following choices are the categorized user groups most adopted in Continuous Delivery?
■■ Canaries, Early adopters, and Users.
Alpha and Beta Users.
Blue and Green Users.
Explanation
In discussions around continuous delivery, users are often categorized into three general buckets: Canaries,
Early adopters, and Users.
Multiple choice
An Agile tool manages and visualizes work by showing tasks moving from left to right across columns
representing stages. What is this tool commonly called?
Backlog.
■■ Kanban Board.
Delivery Plans.
Explanation
A Kanban Board lets you visualize the flow of work and constrain the amount of work in progress. Your
Kanban board turns your backlog into an interactive signboard, providing a visual flow of work.
Multiple choice
Which of the following choices doesn't describe processes and methodologies correctly?
■■ It is required to implement Waterfall to evolve and implement Agile.
It is possible to start with Waterfall and evolve to Agile.
Agile is the best and most used methodology and the best to start your project.
Explanation
The waterfall methodology is a traditional software development practice not related to Agile. You can
implement Agile without any dependencies. Also, you can evolve your current process to Agile.
Multiple choice
Which of the following choices isn't an Agile/Project management tool?
Azure DevOps.
GitHub.
■■ Camtasia.
Explanation
Camtasia is a screen recording tool, not an Agile or project management tool. The other choices are Agile/project management tools.
Multiple choice
Which of the following choices isn't an Azure DevOps service/feature?
Azure Boards.
■■ Azure Monitor.
Azure Repos.
Explanation
Azure DevOps includes a range of services covering the complete development life cycle, such as Azure Boards, Azure Pipelines, Azure Repos, Azure Artifacts, and Azure Test Plans. Azure Monitor is a separate Azure service, not part of Azure DevOps.
Multiple choice
Which of the following is a correct statement about Azure DevOps?
Azure DevOps only works in on-premises environments. If you want to use a cloud service, you need
to choose GitHub.
Azure DevOps and GitHub are only available in the cloud.
■■ Azure DevOps and GitHub provide options to work on-premises and in the cloud.
Explanation
Azure DevOps provides both on-premises and cloud options, named Azure DevOps Server (on-premises)
and Azure DevOps Services (SaaS). Also, the same applies to GitHub with GitHub (SaaS) and GitHub
Enterprise (On-premises and Cloud).
Multiple choice
Which of the following choices does Azure DevOps Services enterprise-grade authentication support?
■■ Microsoft Account, Azure Active Directory (Azure AD), GitHub Account.
Microsoft Account, Azure Active Directory (Azure AD).
GitHub Account.
Explanation
To protect and secure your data, Azure DevOps supports Microsoft Account, GitHub Account, and Azure Active Directory (Azure AD).
Multiple choice
Which of the following choices isn't a benefit of source control?
Manageability.
Efficiency.
■■ Accountability.
Explanation
Source control is the practice of tracking and managing changes to code. Benefits include but are not
limited to manageability and efficiency.
Multiple choice
Which of the following choices isn't a source control best practice?
Make small changes.
■■ Commit personal and secure files.
Link code changes to work items.
Explanation
Committing personal or secure files is insecure. Do not commit personal or secure files; these could include application settings or SSH keys. Often they're committed accidentally but cause problems later when other team members are working on the same code.
Multiple choice
Which of the following choices correctly describes one of the most valuable version control features?
■■ Version control is a solution for managing and saving changes made to manually created assets. If you make changes to the source code, you can go back in time and easily roll back to previous working versions.
Version control is a solution for automatically incrementing the version number for deployments.
Version control is a solution for managing and saving changes made to manually created assets. If you make changes to the source code, be careful; you can't go back in time and easily roll back to previous working versions.
Explanation
Version control tools allow you to see who made changes, when, and what exactly was changed, allowing
you to revert it when needed.
Multiple choice
Which of the following choices correctly describes what is source control?
Source control is the practice of controlling what is deployed to test and production environments.
Source control is the practice of controlling security through code files.
■■ Source control is the practice of tracking and managing changes to code.
Explanation
Source control is the practice of tracking and managing changes to code.
Multiple choice
Which of the following choices isn't a benefit of using distributed version control?
Complete offline support
■■ Allows exclusive file locking.
Portable history.
Explanation
Distributed version control supports full offline versioning; it is cross-platform and has a portable history.
Multiple choice
Which of the following choices isn't a benefit of using centralized version control?
Easily scales for large codebases.
■■ An open-source friendly code review model via pull requests.
Allows exclusive file locking.
Explanation
An open-source-friendly code review model via pull requests is a characteristic of distributed version control systems such as Git, not of centralized version control.
Multiple choice
Which of the following choices isn't a supported Azure Repos version control system?
Git.
Team Foundation Version Control (TFVC).
■■ Source Safe.
Explanation
Azure Repos provides two types of version control systems. Git: distributed version control and Team
Foundation Version Control (TFVC): centralized version control.
Multiple choice
Which of the following choices is the exact number of days of history you can import Team Foundation
Version Control (TFVC) to Git using the "Import repository" feature in Azure DevOps?
90 days.
■■ 180 days.
365 days.
Explanation
There are limitations that apply only when migrating from TFVC: the import is limited to a single branch and only 180 days of history.
Multiple choice
Which of the following choices describe GitHub Codespaces?
It's a platform for hosting and managing packages, including containers and other dependencies.
It's an AI pair programmer that helps you write code faster and with less work.
■■ It's an online implementation of Visual Studio Code.
Explanation
Codespaces is a cloud-based development environment that GitHub hosts. It's essentially an online implementation of Visual Studio Code.
Multiple choice
Which of the following choices best describes the Azure DevOps and GitHub integration?
■■ Azure Boards has direct integration with Azure Repos, but it can also be integrated with GitHub to plan and track work, linking commits, PRs, and issues.
Azure Boards has direct integration with Azure Repos. Azure Repos only integrates with GitHub using extensions from Marketplace for the read-only track.
Azure Boards has direct integration with GitHub for tracking activities and moving tasks on both sides: Azure Boards to GitHub and GitHub to Azure Repos.
Explanation
Integrating GitHub with Azure Boards lets you plan and track your work by linking GitHub commits, pull
requests, and issues, directly to work items in Boards.
Multiple choice
Which of the following Project board types can contain issues and pull requests from any personal repository?
■■ User-owned project boards.
Organization-wide project boards.
Repository project boards
Explanation
User-owned project boards can contain issues and pull requests from any personal repository.
Multiple choice
Which of the following choices isn't a Project Boards supported template by default?
Basic kanban.
Automated kanban with review.
■■ Automated CMMI.
Explanation
CMMI isn't a valid Project Boards template. The templates that can be automated and already configured
are: Basic kanban, Automated kanban, Automated kanban with review, and Bug triage.
Module 2 Development for enterprise DevOps
Learning objectives
After completing this module, students and professionals can:
●● Understand Git repositories.
●● Implement mono repo or multiple repos.
●● Explain how to structure Git Repos.
●● Implement a changelog.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Git changelog
One standard tool is gitchangelog1. This tool is based on Python.
1 https://pypi.org/project/gitchangelog/
2 https://github.com/github-changelog-generator/github-changelog-generator
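As a minimal sketch (assuming Python and pip are installed; the output file name CHANGELOG.md is only an example), the tool can be installed from PyPI and run from the root of a Git repository to generate a changelog from the commit history:
pip install gitchangelog
cd myWebApp
gitchangelog > CHANGELOG.md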
Summary
This module examined Git repository structure, explained the differences between mono and multiple repos, and helped you create a changelog.
You learned how to describe the benefits and usage of:
●● Understand Git repositories.
●● Implement mono repo or multiple repos.
●● Explain how to structure Git Repos.
●● Implement a changelog.
Learn more
●● Understand source control - Azure DevOps3.
●● Build Azure Repos Git repositories - Azure Pipelines | Microsoft Docs4.
●● Check out multiple repositories in your pipeline - Azure Pipelines | Microsoft Docs5.
3 https://docs.microsoft.com/azure/devops/user-guide/source-control
4 https://docs.microsoft.com/azure/devops/pipelines/repos/azure-repos-git
5 https://docs.microsoft.com/azure/devops/pipelines/repos/multi-repo-checkout
Learning objectives
After completing this module, students and professionals can:
●● Describe Git branching workflows.
●● Implement feature branches.
●● Fork a repo.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
Trunk-based development
Trunk-based development is a logical extension of Centralized Workflow.
The core idea behind the Feature Branch Workflow is that all feature development should take place in a
dedicated branch instead of the main branch.
This encapsulation makes it easy for multiple developers to work on a particular feature without disturbing the main codebase.
It also means the main branch should never contain broken code, which is a huge advantage for continuous integration environments.
Forking workflow
The Forking Workflow is fundamentally different than the other workflows discussed in this tutorial.
Instead of using a single server-side repository to act as the “central” codebase, it gives every developer a
server-side repository.
It means that each contributor has two Git repositories:
●● A private local one.
●● A public server-side one.
Feature branches should have descriptive names, like new-banner-images or bug-91. The idea is to give a
clear, highly focused purpose to each branch.
Git makes no technical distinction between the main branch and feature branches, so developers can
edit, stage, and commit changes to a feature branch.
Create a branch
When you're working on a project, you're going to have many different features or ideas in progress at
any given time – some of which are ready to go and others that aren't.
Branching exists to help you manage this workflow.
When you create a branch in your project, you're creating an environment where you can try out new
ideas.
Changes you make on a branch don't affect the main branch, so you're free to experiment and commit
changes, safe in the knowledge that your branch won't be merged until it's ready to be reviewed by
someone you're collaborating with.
Branching is a core concept in Git, and the entire branch flow is based upon it. There's only one rule:
anything in the main branch is always deployable.
Because of this, your new branch must be created off main when working on a feature or a fix.
Your branch name should be descriptive (for example, refactor-authentication, user-content-cache-key,
make-retina-avatars) so that others can see what is being worked on.
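For example, a descriptively named feature branch can be created off the latest main (the branch name here is only illustrative):
git checkout main
git pull
git checkout -b refactor-authentication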
Add commits
Once your branch has been created, it's time to start making changes. Whenever you add, edit, or delete a file, you make a commit and add it to your branch.
This process of adding commits keeps track of your progress as you work on a feature branch.
Commits also create a transparent history of your work that others can follow to understand what you've
done and why.
Each commit has an associated commit message, which explains why a particular change was made.
Furthermore, each commit is considered a separate unit of change. It lets you roll back changes if a bug is
found or you decide to head in a different direction.
Commit messages are essential, especially since Git tracks your changes and then displays them as
commits once they're pushed to the server.
By writing clear commit messages, you can make it easier for other people to follow along and provide
feedback.
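For example, a small, focused commit with a descriptive message might look like this (the file name and message are illustrative):
git add Services/LoginService.cs
git commit -m "Fix null reference when the login email is empty"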
Pull Requests start discussion about your commits. Because they're tightly integrated with the underlying
Git repository, anyone can see exactly what changes would be merged if they accept your request.
You can open a Pull Request at any point during the development process when:
●● You've little or no code but want to share some screenshots or general ideas.
●● You're stuck and need help or advice.
●● You're ready for someone to review your work.
Using the @mention system in your Pull Request message, you can ask for feedback from specific people
or teams, whether they're down the hall or 10 time zones away.
Pull Requests help you contribute to projects and manage changes to shared repositories.
If you're using a Fork & Pull Model, Pull Requests provide a way to notify project maintainers about the
changes you'd like them to consider.
If you're using a Shared Repository Model, Pull Requests help start code review and conversation about
proposed changes before they're merged into the main branch.
Once a Pull Request has been opened, the person or team reviewing your changes may have questions or
comments.
Perhaps the coding style doesn't match project guidelines, the change is missing unit tests, or maybe
everything looks excellent, and props are in order.
Pull Requests are designed to encourage and capture this type of conversation.
You can also continue to push to your branch, considering discussion and feedback about your commits.
Suppose someone comments that you forgot to do something, or if there's a bug in the code, you can fix
it in your branch and push up the change.
Git will show your new commits and any other feedback you may receive in the unified Pull Request view.
Pull Request comments are written in Markdown, so you can embed images and emoji, use pre-formatted text blocks, and apply other lightweight formatting.
Deploy
With Git, you can deploy from a branch for final testing in an environment before merging to main.
Once your pull request has been reviewed and the branch passes your tests, you can deploy your changes to verify them. If your branch causes issues, you can roll it back by deploying the existing main.
Merge
Now that your changes have been verified, it's time to merge your code into the main branch.
Once merged, Pull Requests preserve a record of the historical changes to your code. Because they're
searchable, they let anyone go back in time to understand why and how a decision was made.
By incorporating specific keywords into the text of your Pull Request, you can associate issues with code.
When your Pull Request is merged, the related issues can also close.
This workflow helps organize and track branches that are focused on business domain feature sets.
Other Git workflows like the Git Forking Workflow and the Gitflow Workflow are repo-focused and can
use the Git Feature Branch Workflow to manage their branching models.
Getting ready
Let's cover the principles of what we suggest:
●● The main branch:
●● The main branch is the only way to release anything to production.
●● The main branch should always be in a ready-to-release state.
●● Protect the main branch with branch policies.
●● Any changes to the main branch flow through pull requests only.
●● Tag all releases in the main branch with Git tags.
●● Pull requests:
●● Review and merge code with pull requests.
●● Automate what you inspect and validate as part of pull requests.
●● Track pull request completion duration and set goals to reduce the time it takes.
We'll be using the myWebApp created in the previous exercises. In this recipe, we'll be using three trendy
extensions from the marketplace:
●● Azure CLI6: is a command-line interface for Azure.
●● Azure DevOps CLI7: It's an extension for the Azure CLI for working with Azure DevOps and Azure
DevOps Server. It's designed to seamlessly integrate with Git, CI pipelines, and Agile tools. With the
Azure DevOps CLI, you can contribute to your projects without leaving the command line. CLI runs on
Windows, Linux, and Mac.
●● Git Pull Request Merge Conflict: This open-source extension created by Microsoft DevLabs allows you
to review and resolve pull request merge conflicts on the web. Before a Git pull request can complete,
any conflicts with the target branch must be resolved. With this extension, you can resolve these
conflicts on the web as part of the pull request merge instead of doing the merge and resolving
conflicts in a local clone.
The Azure DevOps CLI supports returning the query results in JSON, JSONC, YAML, YAMLC, table, TSV,
and none. You can configure your preference by using the configure command.
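For example, you can request table output for a single command with the global --output parameter, or set a default output format interactively with az configure (a sketch; it assumes the organization and project defaults have already been configured):
az repos list --output table
az configure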
How to do it
1. After you've cloned the main branch into a local repository, create a new feature branch, myFeature-1:
myWebApp >
git checkout -b feature/myFeature-1
Output:
Switched to a new branch ‘feature/myFeature-1’.
6 https://docs.microsoft.com/cli/azure/install-azure-cli
7 https://docs.microsoft.com/azure/devops/cli
2. Run the Git branch command to see all the branches. The branch showing up with an asterisk is the
“currently-checked-out” branch:
myWebApp >
git branch
Output:
* feature/myFeature-1
  main
3. Make a change to the Program.cs file in the feature/myFeature-1 branch:
myWebApp >
notepad Program.cs
4. Stage your changes and commit locally, then publish your branch to remote:
myWebApp >
git status
Output:
On branch feature/myFeature-1
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
        modified: Program.cs
myWebApp >
git add .
git commit -m "Feature 1 added to Program.cs"
Output:
[feature/myFeature-1 70f67b2] feature 1 added to program.cs 1 file changed, 1 insertion(+).
myWebApp >
git push -u origin feature/myFeature-1
Output:
Delta compression using up to 8 threads.
Compressing objects: 100% (3/3), done.
Writing objects: 100% (3/3), 348 bytes | 348.00 KiB/s, done.
Total 3 (delta 2), reused 0 (delta 0)
remote: Analyzing objects... (3/3) (10 ms)
remote: Storing packfile... done (44 ms)
remote: Storing index... done (62 ms)
To https://dev.azure.com/Geeks/PartsUnlimited/_git/MyWebApp
 * [new branch] feature/myFeature-1 -> feature/myFeature-1
Branch feature/myFeature-1 set up to track remote branch feature/myFeature-1 from origin.
The remote shows the history of the changes:
5. Configure Azure DevOps CLI for your organization and project. Replace organization and project
name:
az devops configure --defaults organization=https://dev.azure.com/organization project="project
name"
6. Create a new pull request (using the Azure DevOps CLI) to review the changes in the feature-1 branch:
az repos pr create --title "Review Feature-1 before merging to main" --work-items 38 39 `
--description "#Merge feature-1 to main" `
--source-branch feature/myFeature-1 --target-branch main `
--repository myWebApp --open
Use the --open switch when raising the pull request to open the pull request in a web browser after it has been created. The --delete-source-branch switch can be used to delete the branch after the pull request is complete. Also, consider using --auto-complete to complete the pull request automatically when all policies have passed and the source branch can be merged into the target branch.
Note: For more information about the az repos pr create parameters, see Create a pull request to review and merge code8.
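As a sketch, the same pull request could be created so that it completes automatically once all policies pass and the source branch is deleted afterward (the switch values shown are illustrative):
az repos pr create --title "Review Feature-1 before merging to main" `
--source-branch feature/myFeature-1 --target-branch main `
--repository myWebApp --open --auto-complete true --delete-source-branch true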
The team jointly reviews the code changes and approves the pull request:
8 https://docs.microsoft.com/azure/devops/repos/git/pull-requests
The main branch is ready to release. The team tags the main branch with the release number:
7. Start work on Feature 2. Create a branch on remote from the main branch and do the checkout
locally:
myWebApp >
git push origin origin:refs/heads/feature/myFeature-2
Output:
Total 0 (delta 0), reused 0 (delta 0)
To https://dev.azure.com/Geeks/PartsUnlimited/_git/MyWebApp
 * [new branch] origin/HEAD -> refs/heads/feature/myFeature-2
myWebApp >
git checkout feature/myFeature-2
Output:
Switched to a new branch 'feature/myFeature-2'
Branch feature/myFeature-2 set up to track remote branch feature/myFeature-2 from origin.
8. Modify Program.cs by changing the same comment line in the code changed in feature-1.
public class Program
{
    // Editing the same line (file from feature-2 branch)
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }
}
9. Commit the changes locally, push them to the remote repository, and then raise a pull request to
merge the changes from feature/myFeature-2 to the main branch:
az repos pr create --title "Review Feature-2 before merging to main" --work-items 40 42 `
--description "#Merge feature-2 to main" `
--source-branch feature/myFeature-2 --target-branch main `
--repository myWebApp --open
A critical bug is reported in production against the feature-1 release while the feature-2 pull request is in flight. To investigate the issue, you need to debug against the version of code currently deployed in production. Create a new fof branch using the release_feature1 tag:
myWebApp >
git checkout -b fof/bug-1 release_feature1
Output:
Switched to a new branch ‘fof/bug-1’.
10. Modify Program.cs by changing the same line of code that was changed in the feature-1 release:
public class Program
{
    // Editing the same line (file from feature-FOF branch)
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }
}
11. Stage and commit the changes locally, then push changes to the remote repository:
myWebApp >
git add .
git commit -m "Adding FOF changes."
git push -u origin fof/bug-1
Output:
To https://dev.azure.com/Geeks/PartsUnlimited/_git/MyWebApp * [new branch] fof/bug-1 -> fof/bug-1
Branch fof/bug-1 set up to track remote branch fof/bug-1 from origin.
12. Immediately after the changes have been rolled out to production, tag the fof/bug-1 branch with the release_bug-1 tag, then raise a pull request to merge the changes from fof/bug-1 back into the main:
az repos pr create --title "Review Bug-1 before merging to main" --work-items 100 `
--description "#Merge Bug-1 to main" `
--source-branch fof/bug-1 --target-branch main `
--repository myWebApp --open
As part of the pull request, the branch is deleted. However, you can still reference the entire history
using the tag.
With the critical bug fix out of the way, let's go back to the review of the feature-2 pull request.
The branches page makes it clear that the feature/myFeature-2 branch is one change ahead of the
main and two changes behind the main:
If you tried to approve the pull request, you'd see an error message informing you of a merge conflict:
13. The Git Pull Request Merge Conflict resolution extension makes it possible to resolve merge conflicts
right in the browser. Navigate to the conflicts tab and click on Program.cs to resolve the merge
conflicts:
The user interface allows you to take the source, target, add custom changes, review, and submit the
merge. With the changes merged, the pull request is completed.
How it works
We learned how the Git branching model gives you the flexibility to work on features in parallel by
creating a branch for each feature.
The pull request workflow allows you to review code changes using the branch policies.
Git tags are a great way to record milestones, such as the version of code released; tags give you a way to
create branches from tags.
We created a branch from a previous release tag to fix a critical bug in production.
The branches view in the web portal makes it easy to identify branches ahead of the main. Also, it forces
a merge conflict if any ongoing pull requests try to merge to the main without resolving the merge
conflicts.
A lean branching model allows you to create short-lived branches and push quality changes to production faster.
Note: To implement GitHub flow, you'll need a GitHub account and a repository. See “Signing up for
GitHub9” and "Create a repo10."
Tip: You can complete all steps of GitHub flow through the GitHub web interface, command line, GitHub
CLI11, or GitHub Desktop12.
The first step is to create a branch in your repository so you can work without affecting the default branch and give collaborators a chance to review your work.
For more information, see “Creating and deleting branches within your repository13.”
Make any desired changes to the repository. If you make a mistake, you can revert or push extra changes
to fix it.
Commit and push your changes to your branch to back up your work to remote storage, giving each
commit a descriptive message. Each commit should contain an isolated, complete change making it easy
to revert if you take a different approach.
Anyone collaborating with your project can see your work, answer questions, and make suggestions or
contributions. Continue to create, commit, and push changes to your branch until you're ready to ask for
feedback.
Tip: You can make a separate branch for each change to make it easy for reviewers to give feedback or
for you to understand the differences.
Once you're ready, you can create a pull request to ask collaborators for feedback on your changes. See
“Creating a pull request14.”
Pull request review is one of the most valuable features of collaboration. You can require approval from
your peers and team before merging changes. Also, you can mark it as a draft in case you want early
feedback or advice before you complete your changes.
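As a sketch of this flow from the command line using the GitHub CLI (the branch name, title, and issue number are illustrative, and edits are assumed before the commit):
git checkout -b improve-error-messages
git commit -am "Improve error messages on login failure"
git push -u origin improve-error-messages
gh pr create --draft --title "Improve error messages" --body "Closes #42"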
9 https://docs.github.com/en/github/getting-started-with-github/signing-up-for-github
10 https://docs.github.com/en/github/getting-started-with-github/create-a-repo
11 https://cli.github.com/
12 https://docs.github.com/en/free-pro-team@latest/desktop
13 https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository
14 https://docs.github.com/en/articles/creating-a-pull-request
Describe the pull request as much as possible with the suggested changes and what problem you're
resolving. You can add images, links, related issues, or any information to document your change and
help reviewers understand the PR without opening each file. See "Basic writing and formatting syntax15" and "Linking a pull request to an issue16."
Another way to improve PR quality and documentation, and to explicitly point something out to the reviewers, is to use the comment area. Also, you can @mention or request a review from specific people or teams.
15 https://docs.github.com/en/github/writing-on-github/basic-writing-and-formatting-syntax
16 https://docs.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue
There are other Pull Request configurations, such as automatically requesting a review from specific teams or users when a pull request is created, or checks to run on pull requests. For more information, see "About status checks17" and "About protected branches18."
After the reviewers' comments and the checks have been addressed, the changes should be ready to be merged, and the reviewers can approve the Pull Request. See "Merging a pull request19."
If you have any conflicts, GitHub will inform you so you can resolve them. See "Addressing merge conflicts20."
After a successful pull request merges, there's no need for the remote branch to stay there. You can
delete your branch to prevent others from accidentally using old branches. For more information, see
“Deleting and restoring branches in a pull request21.”
Note: GitHub keeps the commit and merges history if you need to restore or revert your pull request.
17 https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/about-status-checks
18 https://docs.github.com/en/github/administering-a-repository/about-protected-branches
19 https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/incorporating-changes-from-a-pull-request/merging-a-pull-
request
20 https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/addressing-merge-conflicts
21 https://docs.github.com/en/github/administering-a-repository/deleting-and-restoring-branches-in-a-pull-request
The forking workflow is most often seen in public open-source projects.
The main advantage of the forking workflow is that contributions can be integrated without the need for
everybody to push to a single central repository.
Developers push to their server-side repositories, and only the project maintainer can push to the official
repository.
It allows the maintainer to accept commits from any developer without giving them write access to the official codebase.
The forking workflow is typically intended for merging into the original project maintainer's repository.
The result is a distributed workflow that provides a flexible way for large, organic teams (including untrusted third parties) to collaborate securely.
This also makes it an ideal workflow for open-source projects.
How it works
As in the other Git workflows, the forking workflow begins with an official public repository stored on a
server.
But when a new developer wants to start working on the project, they don't directly clone the official
repository.
Instead, they fork the official repository to create a copy of it on the server.
This new copy serves as their personal public repository—no other developers can push to it, but they
can pull changes from it (we'll see why this is necessary in a moment).
After they've created their server-side copy, the developer does a git clone to get a copy of it onto their
local machine.
It serves as their private development environment, just like in the other workflows.
When they're ready to publish a local commit, they push the commit to their public repository—not the
official one.
Then, they file a pull request with the main repository, which lets the project maintainer know that an
update is ready to be integrated.
The pull request also serves as a convenient discussion thread if there are issues with the contributed
code.
The following is a step-by-step example of this workflow (a command-line sketch follows the list):
●● A developer ‘forks’ an 'official' server-side repository. It creates their server-side copy.
●● The new server-side copy is cloned to their local system.
●● A Git remote path for the ‘official’ repository is added to the local clone.
●● A new local feature branch is created.
●● The developer makes changes to the new branch.
●● New commits are created for the changes.
●● The branch gets pushed to the developer's server-side copy.
●● The developer opens a pull request from the new branch to the ‘official’ repository.
●● The pull request gets approved for merge and is merged into the original server-side repository.
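A minimal command-line sketch of the developer-side steps, using placeholder URLs and branch names, and assuming you edit files before committing:
git clone https://dev.azure.com/fabrikam/_git/project-fork
cd project-fork
git remote add upstream https://dev.azure.com/fabrikam/_git/project
git checkout -b my-feature
git commit -am "Implement my feature"
git push -u origin my-feature
The pull request from my-feature to the official repository is then opened in the web portal.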
To integrate the feature into the official codebase:
●● The maintainer pulls the contributor's changes into their local repository.
●● Checks to make sure it doesn't break the project.
●● Merges it into their local main branch.
●● Pushes the main branch to the official repository on the server.
The contribution is now part of the project, and other developers should pull from the official repository
to synchronize their local repositories.
It's essential to understand that the notion of an “official” repository in the forking workflow is merely a
convention.
The only thing that makes the official repository so official is that it's the repository of the project maintainer.
Summary
This module explored Git branching types, concepts, and models for the continuous delivery process. It can help companies define their branching strategy and organization.
You learned how to describe the benefits and usage of:
●● Describe Git branching workflows.
●● Implement feature branches.
●● Fork a repo.
Learn more
●● Git branching guidance - Azure Repos | Microsoft Docs22.
●● Create a new Git branch from the web - Azure Repos | Microsoft Docs23.
●● How Microsoft develops modern software with DevOps - Azure DevOps | Microsoft Docs24.
●● Fork your repository - Azure Repos | Microsoft Docs25.
22 https://docs.microsoft.com/azure/devops/repos/git/git-branching-guidance
23 https://docs.microsoft.com/azure/devops/repos/git/create-branch
24 https://docs.microsoft.com/devops/develop/how-microsoft-develops-devops
25 https://docs.microsoft.com/azure/devops/repos/git/forks
Learning objectives
After completing this module, students and professionals can:
●● Use pull requests for collaboration and code reviews.
●● Give feedback using pull requests.
●● Configure branch policies.
●● Use GitHub mobile for pull requests approvals.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Take these suggestions and create new work items and feature branches separate from the pull request
to make those changes.
Getting ready
The out-of-the-box branch policies include several policies, such as build validation and enforcing a
merge strategy. We'll only focus on the branch policies needed to set up a code-review workflow in this
recipe.
How to do it
1. Open the branches view for the myWebApp Git repository in the parts-unlimited team portal. Select the main branch, and from the pull-down context menu, choose Branch policies:
2. The policies view presents the out-of-the-box policies. Set the minimum number of reviewers to 1:
The Allow requestors to approve their own changes option allows the submitter to self-approve their
changes.
It's OK for mature teams, where branch policies are used as a reminder for the checks that need to be
performed by the individual.
3. Use the review policy with the comment-resolution policy. It allows you to enforce that the code
review comments are resolved before the changes are accepted. The requester can take the feedback
from the comment and create a new work item and resolve the changes. It at least guarantees that
code review comments aren't lost with the acceptance of the code into the main branch:
4. A requirement instigates a code change in the team project. If the work item that triggered the work isn't linked to the change, it becomes hard to understand over time why the change was made. Linking is especially useful when reviewing the history of changes. Configure the Check for linked work items policy to block changes that don't have a work item linked to them:
5. Select the option to automatically include reviewers when a pull request is raised. You can map which reviewers are added based on the area of the code being changed:
How it works
With the branch policies in place, the main branch is now fully protected.
The only way to push changes to the main branch is by first making the changes in another branch and
then raising a pull request to trigger the change-acceptance workflow.
Choose to create a new branch from one of the existing user stories in the work item hub.
By creating a new branch from a work item, that work item automatically gets linked to the branch.
You can optionally include more than one work item with a branch as part of the create workflow:
Use a prefix in the branch name to create a folder for the branch to go in. With a folder prefix, the branch is grouped under that folder, which is a great way to organize branches in busy environments. A folder-prefixed branch can also be created from the command line, as shown below.
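For example (the branch name is illustrative):
git checkout -b features/feature-123-update-homepage
The branch then appears in the branches view grouped under the features folder.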
With the newly created branch selected in the web portal, edit the HomeController.cs file and commit the changes to the branch.
You can commit the changes directly after editing the file by choosing the commit button.
The file path control in the team portal supports search.
Start typing a file path to see all files in your Git repository under that directory whose names start with those letters in the file path search results dropdown.
The code editor in the web portal has several new features in Azure DevOps Server, such as support for bracket matching and toggling white space.
You can also open the command palette in the editor. Among many other options, you can toggle the file mini-map, collapse and expand sections, and perform other standard operations.
To push these changes from the new branch into the main branch, create a pull request from the pull
request view.
Select the new branch as the source and the main as the target branch.
The new pull request form supports markdown, so you can add the description using the markdown
syntax.
The description window also supports @ mentions and # to link work items:
The pull request is created; the overview page summarizes the changes and the status of the policies.
The Files tab shows you a list of changes and the difference between the previous and the current
versions.
Any updates pushed to the code files will show up in the Updates tab, and a list of all the commits is
shown under the Commits tab:
Open the Files tab: this view supports code comments at the line level, file level, and overall.
The comments support both @ for mentions and # to link work items, and the text supports markdown
syntax:
The code comments are persisted in the pull request workflow; the code comments support multiple
iterations of reviews and work well with nested responses.
The reviewer policy allows for a code review workflow as part of the change acceptance.
It's an excellent way for the team to collaborate on any code changes pushed into the main branch.
When the required number of reviewers approves the pull request, it can be completed.
You can also mark the pull request to autocomplete after your review. It autocompletes the pull request once all the policies have been successfully satisfied.
There's more
Have you ever been in a state where a branch has been accidentally deleted? It can be hard to figure out what happened.
Azure DevOps Server now supports searching for deleted branches. It helps you understand who deleted
it and when. The interface also allows you to recreate the branch.
Deleted branches are only shown if you search for them by their exact name to cut out the noise from the
search results.
To search for a deleted branch, enter the full branch name into the branch search box. It will return any
existing branches that match that text.
You'll also see an option to search for an exact match in the list of deleted branches.
If a match is found, you'll see who deleted it and when. You can also restore the branch. Restoring the branch will re-create it at the commit to which it last pointed.
However, it won't restore policies and permissions.
Using a mobile app in combination with Git is a convenient option, particularly when urgent pull request
approvals are required.
●● The app can render markdown, images, and PDF files directly on the mobile device.
●● Pull requests can be managed within the app, along with marking files as viewed, collapsing files.
●● Comments can be added.
●● Emoji short codes are rendered.
Summary
This module presented pull requests for collaboration and code reviews using Azure DevOps and GitHub mobile for pull request approvals.
It helped you understand how pull requests work and how to configure them.
You learned how to describe the benefits and usage of:
●● Use pull requests for collaboration and code reviews.
●● Give feedback using pull requests.
●● Configure branch policies.
●● Use GitHub mobile for pull requests approvals.
Learn more
●● About pull requests and permissions - Azure Repos | Microsoft Docs26.
26 https://docs.microsoft.com/azure/devops/repos/git/about-pull-requests
●● Create a pull request to review and merge code - Azure Repos | Microsoft Docs27.
●● Review and comment on pull requests - Azure Repos | Microsoft Docs28.
●● Protect your Git branches with policies - Azure Repos | Microsoft Docs29.
●● Creating an issue or pull request - GitHub Docs30.
27 https://docs.microsoft.com/azure/devops/repos/git/pull-requests
28 https://docs.microsoft.com/azure/devops/repos/git/review-pull-requests
29 https://docs.microsoft.com/azure/devops/repos/git/branch-policies
30 https://docs.github.com/desktop/contributing-and-collaborating-using-github-desktop/working-with-your-remote-repository-on-github-
or-github-enterprise/creating-an-issue-or-pull-request
Learning objectives
After completing this module, students and professionals can:
●● Understand Git hooks.
●● Identify when to use Git hooks.
●● Implement Git hooks for automation.
●● Explain Git hooks' behavior.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
Git hooks
Git hooks are a mechanism that allows code to be run before or after certain Git lifecycle events.
For example, one could hook into the commit-msg event to validate that the commit message structure
follows the recommended format.
The hooks can be any executable code, including shell, PowerShell, Python, or other scripts. Or they may
be a binary executable. Anything goes!
The only criteria are that hooks must be stored in the .git/hooks folder in the repo root. Also, they must
be named to match the related events (Git 2.x):
●● applypatch-msg
●● pre-applypatch
●● post-applypatch
●● pre-commit
●● prepare-commit-msg
●● commit-msg
●● post-commit
●● pre-rebase
●● post-checkout
●● post-merge
●● pre-receive
●● update
●● post-receive
●● post-update
●● pre-auto-gc
●● post-rewrite
●● pre-push
Getting ready
Let's start by exploring client-side Git hooks. Navigate to the repo .git\hooks directory – you'll find that
there are a bunch of samples, but they're disabled by default.
Note: If you open that folder, you'll find a file called pre-commit.sample. To enable it, rename it to pre-commit by removing the .sample extension and make the script executable.
The script is found and executed when you attempt to commit using git commit. You commit successfully
if your pre-commit script exits with a 0 (zero). Otherwise, the commit fails. If you're using Windows,
simply renaming the file won't work.
Git will fail to find the shell in the chosen path specified in the script.
The problem is lurking in the first line of the script, the shebang declaration:
#!/bin/sh
On Unix-like OSs, the #! tells the program loader that it's a script to be interpreted, and /bin/sh is the path to the interpreter you want to use, sh in this case.
Windows isn't a Unix-like OS. Git for Windows supports Bash commands and shell scripts via Cygwin.
By default, what does it find when it looks for sh.exe at /bin/sh?
Nothing, nothing at all. Fix it by providing the path to the sh executable on your system. If you're using the 64-bit version of Git for Windows, the shebang looks like this:
#!C:/Program\ Files/Git/usr/bin/sh.exe
How to do it
How could Git hooks stop you from accidentally leaking Amazon AWS access keys to GitHub?
You can invoke a script at pre-commit, using Git hooks to scan the increment of code being committed into your local repository for specific keywords.
Replace the code in the pre-commit shell file with the following code.
#!C:/Program\ Files/Git/usr/bin/sh.exe
matches=$(git diff-index --patch HEAD | grep '^+' | grep -Pi 'password|keyword2|keyword3')
if [ ! -z "$matches" ]
then
    cat <<\EOT
Error: Words from the blocked list were present in the diff:
EOT
    echo $matches
    exit 1
fi
You don't have to build the complete keyword scan list in this script. You can keep the list in a separate file and refer to it here, and encrypt or scramble that file if you want to.
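For example, a variation of the hook that reads the blocked patterns from a separate file, one per line (the file name .git-blocked-words is illustrative):
#!C:/Program\ Files/Git/usr/bin/sh.exe
# Read blocked patterns from a separate file instead of hard-coding them
matches=$(git diff-index --patch HEAD | grep '^+' | grep -i -f .git-blocked-words)
if [ ! -z "$matches" ]
then
    echo "Error: Words from the blocked list were present in the diff:"
    echo $matches
    exit 1
fi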
How it works
In the script, git diff-index identifies the code increment being committed. This increment is then compared against the list of specified keywords. If any matches are found, an error is raised to block the commit, and the script returns an error message with the list of matches. Because the pre-commit script doesn't return 0 (zero), the commit fails.
There's more
The repo .git\hooks folder isn't committed into source control. You may wonder how you share the
goodness of the automated scripts you create with the team.
The good news is that, from Git version 2.9, you can now map Git hooks to a folder that can be committed into source control.
You could do that by updating the global settings configuration for your Git repository:
git config --global core.hooksPath '~/.githooks'
If you ever need to bypass the Git hooks you have set up on the client side, you can do so by using the no-verify switch:
git commit --no-verify
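A minimal sketch of sharing hooks through the repository itself, assuming a repo-local folder named .githooks (the folder name is illustrative):
mkdir .githooks
cp .git/hooks/pre-commit .githooks/pre-commit
git config core.hooksPath .githooks
git add .githooks
git commit -m "Share the pre-commit hook with the team"
Each team member still needs to run the git config command once after cloning, because Git configuration isn't versioned.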
Summary
This module described Git hooks and their usage during the development process, implementation, and
behavior.
Learn more
●● Creating a pre-receive hook script - GitHub Docs32.
●● Service hooks event reference - Azure DevOps | Microsoft Docs33.
32 https://docs.github.com/enterprise-server@3.1/admin/policies/enforcing-policy-with-pre-receive-hooks/creating-a-pre-receive-hook-
script
33 https://docs.microsoft.com/azure/devops/service-hooks/events
Learning objectives
After completing this module, students and professionals can:
●● Use Git to foster inner source across the organization.
●● Implement fork workflow.
●● Choose between branches and forks.
●● Share code between forks.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
Inner source
Inner source – sometimes called “internal open source” – brings all the benefits of open-source software
development inside your firewall.
It opens your software development processes so that your developers can easily collaborate on projects
across your company.
It uses the same processes that are popular throughout the open-source software communities.
But it keeps your code safe and secure within your organization.
Microsoft uses the inner source approach heavily.
As part of the effort to standardize on one engineering system throughout the company, backed by Azure Repos, Microsoft has also opened the source code of all its projects to everyone within the company.
Before the move to the inner source, Microsoft was “siloed”: only engineers working on Windows could
read the Windows source code.
Only developers working on Office could look at the Office source code.
So, if you're an engineer working on Visual Studio and you thought that you found a bug in Windows or
Office – or wanted to add a new feature – you're out of luck.
But by moving to inner source throughout the company, powered by Azure Repos, it's easy to fork a repository and contribute back.
As an individual making the change, you don't need write access to the original repository, just the ability to read it and create a fork.
What's in a fork?
A fork starts with all the contents of its upstream (original) repository.
You can include all branches or limit them to only the default branch when you create a fork.
None of the permissions, policies, or build pipelines are applied.
The new fork acts as if someone cloned the original repository, then pushed it to a new, empty repository.
After a fork has been created, new files, folders, and branches aren't shared between the repositories
unless a Pull Request (PR) carries them along.
Note: You must have the Create Repository permission in your chosen project to create a fork. We
recommend you create a dedicated project for forks where all contributors have the Create Repository
permission. For an example of granting this permission, see Set Git repository permissions.
Important: Anyone with the Read permission can open a PR to upstream. If a PR build pipeline is configured, the build will run against the code introduced in the fork.
The forking workflow lets you isolate changes from the main repository until you're ready to integrate
them. When you're ready, integrating code is as easy as completing a pull request.
Getting ready
A fork starts with all the contents of its upstream (original) repository.
When you create a fork in Azure DevOps, you can include all branches or limit them to only the default branch.
A fork doesn't copy the permissions, policies, or build definitions of the repository being forked.
After a fork has been created, the newly created files, folders, and branches aren't shared between the
repositories unless you start a pull request.
Pull requests are supported in either direction: from fork to upstream or upstream to fork.
The most common approach for a pull request will be from fork to upstream.
How to do it
1. Choose the Fork button (1), and then select the project where you want the fork to be created (2).
Give your fork a name and choose the Fork button (3).
2. Once your fork is ready, clone it using the command line or an IDE, such as Visual Studio. The fork will
be your origin remote. For convenience, you'll want to add the upstream repository (where you forked
from) as a remote named upstream. On the command line, type:
git remote add upstream {upstream_url}
3. It's possible to work directly in the main – after all, this fork is your copy of the repo. We recommend
you still work in a topic branch, though. It allows you to maintain multiple independent workstreams
simultaneously. Also, it reduces confusion later when you want to sync changes into your fork. Make
and commit your changes as you normally would. When you're done with the changes, push them to
origin (your fork).
4. Open a pull request from your fork to the upstream. All the policies, required reviewers, and builds will
be applied in the upstream repo. Once all the policies are satisfied, the PR can be completed, and the
changes become a permanent part of the upstream repo:
5. When your PR is accepted into upstream, you'll want to make sure your fork reflects the latest state of
the repo. We recommend rebasing on the upstream's main branch (assuming the main is the main
development branch). On the command line, run:
git fetch upstream main
git rebase upstream/main
git push origin
How it works
The forking workflow lets you isolate changes from the main repository until you're ready to integrate
them.
When you're ready, integrating code is as easy as completing a pull request.
For more information, see:
●● Clone an Existing Git repo34.
●● Azure Repos Git Tutorial35.
Summary
This module explained how to use Git to foster inner source and how to implement forks and their workflows.
You learned how to describe the benefits and usage of:
●● Use Git to foster inner source across the organization.
●● Implement fork workflow.
●● Choose between branches and forks.
●● Share code between forks.
34 https://docs.microsoft.com/azure/devops/repos/git/clone
35 https://docs.microsoft.com/azure/devops/repos/git/gitworkflow
Learn more
●● Fork your repository - Azure Repos | Microsoft Docs36.
●● Clone an existing Git repo - Azure Repos | Microsoft Docs37.
36 https://docs.microsoft.com/azure/devops/repos/git/forks
37 https://docs.microsoft.com/azure/devops/repos/git/clone
Learning objectives
After completing this module, students and professionals can:
●● Understand large Git repositories.
●● Explain Git Virtual File System (GVFS).
●● Use Git Large File Storage (LFS).
●● Purge repository data.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Shallow clone
If developers don't need all the available history in their local repositories, a good option is to implement
a shallow clone.
It saves both space on local development systems and the time it takes to sync.
You can specify the depth of the clone that you want to execute:
git clone --depth [depth] [clone-url]
You can also reduce clones by filtering branches or cloning only a single branch.
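For example, a shallow, single-branch clone of only the main branch might look like this (the URL is illustrative):
git clone --depth 1 --single-branch --branch main https://dev.azure.com/fabrikam/_git/myWebApp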
Scalar
Scalar is a .NET Core application, available for Windows and macOS, that provides tools and extensions for Git to maximize Git command performance on very large repositories. Microsoft uses it for the Windows and Office repositories.
If Azure Repos hosts your repository, you can clone a repository using the GVFS protocol40.
It achieves this by enabling some advanced Git features, such as:
●● Partial clone: reduces time to get a working repository by not downloading all Git objects right away.
●● Background prefetch: downloads Git object data from all remotes every hour, reducing the time for
foreground git fetch calls.
●● Sparse-checkout: limits the size of your working directory.
●● File system monitor: tracks the recently modified files and eliminates the need for Git to scan the
entire work tree.
●● Commit-graph: accelerates commit walks and reachability calculations, speeding up commands like
git log.
●● Multi-pack-index: enables fast object lookups across many pack files.
38 https://docs.github.com/repositories/working-with-files/managing-large-files
39 https://github.com/microsoft/VFSForGit
40 https://github.com/microsoft/VFSForGit/blob/master/Protocol.md#the-gvfs-protocol-v1
●● Incremental repack: Repacks the packed Git data into fewer pack files without disrupting concurrent
commands using the multi-pack-index.
Note: We update the list of features that Scalar automatically configures as a new Git version is released.
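If you want to try Scalar with an Azure Repos repository, a minimal sketch looks like the following; the URL is a placeholder, and scalar register can be used to onboard an existing clone instead:
scalar clone https://dev.azure.com/fabrikam/Contoso/_git/large-repo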
For more information, see:
●● microsoft/scalar: Scalar41.
●● Introducing Scalar: Git at scale for everyone42.
BFG Repo-Cleaner
BFG Repo-Cleaner is a commonly used open-source tool for deleting or “fixing” content in repositories.
It's easier to use than the git filter-branch command. For a single file or set of files, use the --delete-files
option:
$ bfg --delete-files file_I_should_not_have_committed
The following bash command uses the --replace-text option to replace all the text listed in a file called
passwords.txt wherever it occurs in the repository's history:
$ bfg --replace-text passwords.txt
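As a broader sketch (following the tool's documented workflow), BFG is typically run against a mirror clone, after which the old history is expired; the repository URL and file name are placeholders:
$ git clone --mirror https://example.com/my-repo.git
$ bfg --delete-files passwords.txt my-repo.git
$ cd my-repo.git
$ git reflog expire --expire=now --all && git gc --prune=now --aggressive
$ git push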
41 https://github.com/microsoft/scalar/
42 https://devblogs.microsoft.com/devops/introducing-scalar/
43 https://github.com/newren/git-filter-repo
44 https://github.com/newren/git-filter-repo/
45 https://docs.github.com/repositories/working-with-files/managing-large-files/removing-files-from-git-large-file-storage
46 https://docs.github.com/authentication/keeping-your-account-and-data-secure/removing-sensitive-data-from-a-repository
47 https://rtyley.github.io/bfg-repo-cleaner
48 https://git-scm.com/book/en/Git-Basics-Tagging
49 https://docs.github.com/repositories/releasing-projects-on-github/viewing-your-repositorys-releases-and-tags
Creating a release
To create a release, use the gh release create command. Replace the tag with the desired tag name for
the release and follow the interactive prompts.
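For example, creating a release non-interactively for an existing tag might look like this; the tag, title, and notes are placeholders:
gh release create v1.2.3 --title "v1.2.3" --notes "Bug fixes and improvements"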
50 https://github.com/Microsoft/azure-pipelines-agent/releases
If you @mention any GitHub users in the notes, the published release on GitHub.com will include a
Contributors section with an avatar list of all the mentioned users.
You can check other commands and arguments from the GitHub CLI manual51.
Editing a release
You can't edit Releases with GitHub CLI.
To edit, use the Web Browser:
1. Navigate to the main repository page on GitHub.com.
2. Click Releases to the right of the list of files.
3. Click on the edit icon on the right side of the page, next to the release you want to edit.
4. Edit the details for the release, then click Update release.
Deleting a release
To delete a release, use the following command, replace the tag with the release tag to delete, and use
the -y flag to skip confirmation.
gh release delete tag -y
51 https://cli.github.com/manual/gh_release_create
52 https://docs.github.com/repositories/releasing-projects-on-github/managing-releases-in-a-repository
53 https://docs.github.com/actions/creating-actions/publishing-actions-in-github-marketplace
54 https://docs.github.com/github/administering-a-repository/managing-git-lfs-objects-in-archives-of-your-repository
55 https://docs.github.com/github/managing-subscriptions-and-notifications-on-github/viewing-your-subscriptions
You can generate an overview of the contents of a release, and you can also customize your automated
release notes.
It's possible to use labels to create custom categories to organize the pull requests you want to include,
or to exclude specific labels and users from appearing in the output.
2. Create a file named release.yml in the .github directory.
3. Specify in YAML the pull request labels and authors you want to exclude from this release. You can
also create new categories and list the pull request labels in each. For more information about
configuration options, see Automatically generated release notes - GitHub Docs.56
Example configuration:
# .github/release.yml
changelog:
  exclude:
    labels:
      - ignore-for-release
    authors:
      - octocat
  categories:
    - title: Breaking Changes 🛠
      labels:
        - Semver-Major
        - breaking-change
    - title: Exciting New Features 🎉
      labels:
        - Semver-Minor
        - enhancement
    - title: Other Changes
      labels:
        - "*"
56 https://docs.github.com/en/repositories/releasing-projects-on-github/automatically-generated-release-notes#configuration-options
5. Try to create a new release and click + Auto-generate release notes to see the template structure.
For more information, see:
●● About releases - GitHub Docs57
●● Linking to releases - GitHub Docs58
●● Automation for release forms with query parameters - GitHub Docs59
Summary
This module explored how to work with large repositories and purge repository data.
You learned how to describe the benefits and usage of:
●● Understand large Git repositories.
●● Explain Git Virtual File System (GVFS).
●● Use Git Large File Storage (LFS).
●● Purge repository data.
Learn more
●● Get started with Git and Visual Studio - Azure Repos | Microsoft Docs60.
●● Using Git LFS and VFS for Git introduction - Code With Engineering Playbook (microsoft.github.
io)61.
●● Work with large files in your Git repo - Azure Repos | Microsoft Docs62.
●● Delete a Git repo from your project - Azure Repos | Microsoft Docs63.
57 https://docs.github.com/repositories/releasing-projects-on-github/about-releases
58 https://docs.github.com/repositories/releasing-projects-on-github/linking-to-releases
59 https://docs.github.com/repositories/releasing-projects-on-github/automation-for-release-forms-with-query-parameters
60 https://docs.microsoft.com/azure/devops/repos/git/gitquickstart
61 https://microsoft.github.io/code-with-engineering-playbook/source-control/git-guidance/git-lfs-and-vfs/
62 https://docs.microsoft.com/azure/devops/repos/git/manage-large-files
63 https://docs.microsoft.com/azure/devops/repos/git/delete-existing-repo
Learning objectives
After completing this module, students and professionals can:
●● Identify and manage technical debt.
●● Integrate code quality tools.
●● Plan code reviews.
●● Describe complexity and quality metrics.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Reliability
Reliability measures the probability that a system will run without failure over a specific period of opera-
tion.
It relates to the number of defects and availability of the software. The number of defects can be measured by
running a static analysis tool.
Software availability can be measured using the mean time between failures (MTBF).
Low defect counts are crucial for developing a reliable codebase.
Maintainability
Maintainability measures how easily software can be maintained. It relates to the size, consistency,
structure, and complexity of the codebase.
Ensuring maintainable source code relies on several factors, such as testability and understandability.
You can't use a single metric to ensure maintainability.
Some metrics you may consider to improve maintainability are the number of stylistic warnings and
Halstead complexity measures.
Both automation and human reviewers are essential for developing maintainable codebases.
Testability
Testability measures how well the software supports testing efforts.
It relies on how well you can control, observe, isolate, and automate testing, among other factors.
Testability can be measured based on how many test cases you need to find potential faults in the
system.
The size and complexity of the software can impact testability.
So, applying methods at the code level—such as cyclomatic complexity—can help you improve the
testability of the component.
Portability
Portability measures how usable the same software is in different environments. It relates to platform
independence.
There isn't a specific measure of portability. But there are several ways you can ensure portable code.
It's essential to regularly test code on different platforms rather than waiting until the end of develop-
ment.
It's also a good idea to set your compiler warning levels as high as possible—and use at least two
compilers.
Enforcing a coding standard also helps with portability.
Reusability
Reusability measures whether existing assets—such as code—can be used again.
Assets are more easily reused if they have characteristics such as modularity or loose coupling.
Reusability can be measured by the number of interdependencies.
Running a static analyzer can help you identify these interdependencies.
The Halstead complexity measures include:
●● Program vocabulary.
●● Program length.
●● Calculated program length.
●● Volume.
●● Difficulty.
●● Effort.
Code analysis tools can be used to check for considerations such as security, performance, interoperability,
language usage, and globalization. They should be part of every developer's toolbox and software build
process.
Regularly running a static code analysis tool and reading its output is a great way to improve as a
developer because the things caught by the software rules can often teach you something.
Common quality-related metrics
One of the promises of DevOps is to deliver software both faster and with higher quality. Previously,
these two metrics have been almost opposites. The more quickly you went, the lower the quality. The
higher the quality, the longer it took. But DevOps processes can help you find problems earlier, which
usually means that they take less time to fix.
We've previously talked about some general project metrics and KPIs. The following is a list of metrics
that directly relate to the quality of the code being produced and the build and deployment processes.
●● Failed builds percentage - Overall, what percentage of builds are failing?
●● Failed deployments percentage - Overall, what percentage of deployments are failing?
●● Ticket volume - What is the overall volume of customer or bug tickets?
●● Bug bounce percentage - What percentage of customer or bug tickets are being reopened?
●● Unplanned work percentage - What percentage of the overall work being performed is unplanned?
Now I have two copies of the same code that I need to modify in the future instead of one, and I run the
risk of the logic diverging. There are many causes. For example, there might be a lack of technical skills
and maturity among the developers or no clear product ownership or direction.
The organization might not have coding standards at all, so the developers didn't even know what they
should be producing. The developers might not have precise requirements to target, or they might be
subject to last-minute requirement changes.
Necessary refactoring work might be delayed. There might not be any code quality testing, manual or
automated. In the end, it just makes it harder and harder to deliver value to customers in a reasonable
time frame and at a reasonable cost.
Technical debt is one of the main reasons that projects fail to meet their deadlines.
Over time, it increases in much the same way that monetary debt does. Common sources of technical
debt are:
●● Lack of coding style and standards.
●● Lack of or poor design of unit test cases.
●● Ignoring or not understanding object-oriented design principles.
●● Monolithic classes and code libraries.
●● Poorly envisioned use of technology, architecture, and approach (forgetting that all system
attributes, such as maintenance, user experience, and scalability, need to be considered).
●● Over-engineering code (adding or creating code that isn't required, adding custom code when
existing libraries are sufficient, or creating layers or components that aren't needed).
●● Insufficient comments and documentation.
●● Not writing self-documenting code (including class, method, and variable names that are descriptive
or indicate intent).
●● Taking shortcuts to meet deadlines.
●● Leaving dead code in place.
Note: Over time, the technical debt must be paid back. Otherwise, the team's ability to fix issues and
implement new features and enhancements will take longer and eventually become cost-prohibitive.
We have seen that technical debt adds a set of problems during development and makes it much more
difficult to add extra customer value.
Having technical debt in a project saps productivity, frustrates development teams, makes code both
hard to understand and fragile, and increases the time to make changes and validate those changes.
Unplanned work frequently gets in the way of planned work.
Longer-term, it also saps the organization's strength. Technical debt tends to creep up on an organiza-
tion. It starts small and grows over time. Every time a quick hack is made or testing is circumvented
because changes need to be rushed through, the problem grows worse and worse. Support costs get
higher and higher, and invariably, a serious issue arises.
Eventually, the organization can't respond to its customers' needs in a timely and cost-efficient way.
To review:
Azure DevOps can be integrated with a wide range of existing tooling used to check code quality during
builds.
Which code quality tools do you currently use (if any)?
What do you like or dislike about the tools?
If you drill into the issues, you can see what the issues are, along with suggested remedies and estimates
of the time required to apply a remedy.
64 https://sonarcloud.io/about
NDepend
For .NET developers, a common tool is NDepend.
NDepend is a Visual Studio extension that assesses the amount of technical debt a developer has added
during a recent development period, typically in the last hour.
With this information, the developer might make the required corrections before ever committing the
code.
NDepend lets you create code rules expressed as C# LINQ queries, but it has many built-in rules that
detect a wide range of code smells.
65 https://www.ndepend.com
Code reviews work best as mentoring sessions where ideas about improving code are shared, rather than
interrogation sessions where the aim is to identify problems and blame the author.
The knowledge-sharing that can occur in mentoring-style sessions can be one of the most important
outcomes of the code review process. It often happens best in small groups (even two people) rather
than in large team meetings. And it's important to highlight what has been done well, not just what
needs improvement.
Developers will often learn more in effective code review sessions than they will in any formal training.
Reviewing code should be an opportunity for all involved to learn, not just as a chore that must be
completed as part of a formal process.
It's easy to see two or more people working on a problem and think that one person could have com-
pleted the task by themselves. That is a superficial view of the longer-term outcomes.
Team management needs to understand that improving the code quality reduces the cost of code, not
increases it. Team leaders need to establish and foster an appropriate culture across their teams.
Summary
This module examined technical debt, complexity, quality metrics, and plans for effective code reviews
and code quality validation.
You learned how to describe the benefits and usage of:
●● Identify and manage technical debt.
●● Integrate code quality tools.
●● Plan code reviews.
●● Describe complexity and quality metrics.
Learn more
●● Technical Debt – The Anti-DevOps Culture - Developer Support (microsoft.com)66.
●● Microsoft Security Code Analysis documentation overview | Microsoft Docs67.
●● Build Quality Indicators report - Azure DevOps Server | Microsoft Docs68.
66 https://devblogs.microsoft.com/premier-developer/technical-debt-the-anti-devops-culture/
67 https://docs.microsoft.com/azure/security/develop/security-code-analysis-overview
68 https://docs.microsoft.com/azure/devops/report/sql-reports/build-quality-indicators-report
Lab
Lab 03: Version controlling with Git in Azure Repos
Lab overview
Azure DevOps supports two types of version control, Git and Team Foundation Version Control (TFVC).
Here is a quick overview of the two version control systems:
●● Team Foundation Version Control (TFVC): TFVC is a centralized version control system. Typically,
team members have only one version of each file on their dev machines. Historical data is maintained
only on the server. Branches are path-based and created on the server.
●● Git: Git is a distributed version control system. Git repositories can live locally (such as on a develop-
er's machine). Each developer has a copy of the source repository on their dev machine. Developers
can commit each set of changes on their dev machine and perform version control operations such as
history and compare without a network connection.
Git is the default version control provider for new projects. You should use Git for version control in your
projects unless you have a specific need for centralized version control features in TFVC.
In this lab, you will learn how to work with branches and repositories in Azure DevOps.
Objectives
After you complete this lab, you will be able to:
●● Work with branches in Azure Repos
●● Work with repositories in Azure Repos
Lab duration
●● Estimated time: 30 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions69
69 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which of the following choices isn't a repository strategy?
Monorepo.
Multiple Repo.
Feature Repo.
Multiple choice
Which of the following choices is the fundamental difference between the monorepo and multiple repos
philosophies?
Allow teams to have their own repositories to develop.
Allow teams working together on a system to go fastest.
Avoid teams losing their changes.
Multiple choice
Which of the following choices is a common tool to create a changelog?
gitchangelog.
gitlogchange.
gitloggenerator.
Multiple choice
Which of the following choices isn't a branch workflow type?
Forking workflow.
Trunk-based development.
SmartFlow workflow.
Multiple choice
Which of the following choices gives every developer a server-side repository?
Feature branching.
Trunk-based.
Forking.
Multiple choice
Which of the following choices creates a feature branch using git commands?
git checkout develop, and git checkout -b feature_branch.
git checkout -n feature_branch.
git flow feature start feature_branch.
Multiple choice
Which of the following statements best describe a pull request?
Pull requests are commonly used by teams and organizations collaborating, where everyone shares a
single repository, and topic branches are used to develop features and isolate changes.
Pull requests are used to publish new changes from feature branches into production branches and
update the environments with the latest code version.
Pull request creates reports from the latest branch updates and generates remediation summary for
resolving issues during development.
Multiple choice
Which of the following choices isn't a benefit of using pull requests?
Protect branches with policies.
Give feedback.
Create reports about code security and quality.
Multiple choice
Which of the following choices is the most effective way to avoid direct updates from feature branches into
main?
Branch Policies.
Branch Security.
Branch Lock.
Multiple choice
Which of the following choices isn't a Git hook event?
Post-checkout.
Pre-commit.
After-commit.
Multiple choice
Which of the following choices is a practical use case for using Git hooks?
Enforce preconditions for merging by applying custom code with the post-merge event.
Send notifications to your team’s chat room.
Validate code and merge branches automatically using the commit-msg event.
Multiple choice
Which of the following choices is a valid event to verify Work Item ID association in your commit message
before commit?
Commit-msg.
Commit-msg-id.
Msg-commit.
Multiple choice
Which of the following is a correct statement about a fork?
A fork is a copy of a repository.
A fork is a branch for future development.
A fork is a branching strategy when working with centralized version control.
Multiple choice
Which of the following choices are the correct forking workflow steps?
Create a fork, make your changes locally and push them to a branch, create and complete a PR to the
upstream.
Create a fork, clone it locally, make your changes locally, push them to a branch, create and complete
a PR to the upstream, and sync your fork to the latest from upstream.
Create a fork, clone it locally, make your changes locally, create and complete a PR to the upstream,
push the changes to a branch.
Multiple choice
Which of the following choices is the recommended team size to use a single repo instead of fork workflow?
Large teams.
10-20 developers.
2-5 developers.
Multiple choice
Which of the following choices is for working with large files in repositories?
Package Management.
Git LFS.
Git.
Multiple choice
Which of the following choices isn't a common situation when you need to purge data from repositories?
When you need to reduce the size of a repository significantly by removing history.
When you need to remove a sensitive file that should not have been uploaded.
When you need to hide code and history without removing permissions.
Multiple choice
Which of the following choices is the built-in Git command for removing files from the repository?
git remove-branch command.
git filter-branch command.
git purge-branch command.
Answers
Multiple choice
Which of the following choices isn't a repository strategy?
Monorepo.
Multiple Repo.
■■ Feature Repo.
Explanation
There are two philosophies on how to organize your repos: Monorepo or Multiple repos.
Multiple choice
Which of the following choices is the fundamental difference between the monorepo and multiple repos
philosophies?
Allow teams to have their own repositories to develop.
■■ Allow teams working together on a system to go fastest.
Avoid teams losing their changes.
Explanation
The fundamental difference between the monorepo and multiple repos philosophies boils down to a
difference about what will allow teams working together on a system to go fastest.
Multiple choice
Which of the following choices is a common tool to create a changelog?
■■ gitchangelog.
gitlogchange.
gitloggenerator.
Explanation
One common tool is gitchangelog. This tool is based on Python. Another common tool is called
github-changelog-generator.
Multiple choice
Which of the following choices isn't a branch workflow type?
Forking workflow.
Trunk-based development.
■■ SmartFlow workflow.
Explanation
There are a few branching workflow types to consider, such as Trunk-based development, and Forking
workflow.
Multiple choice
Which of the following choices gives every developer a server-side repository?
Feature branching.
Trunk-based.
■■ Forking.
Explanation
Forking Workflow gives every developer a server-side repository.
Multiple choice
Which of the following choices creates a feature branch using git commands?
■■ git checkout develop, and git checkout -b feature_branch.
git checkout -n feature_branch.
git flow feature start feature_branch.
Explanation
Create a new feature branch by using git checkout -b feature_branch.
Multiple choice
Which of the following statements best describe a pull request?
■■ Pull requests are commonly used by teams and organizations collaborating, where everyone shares a
single repository, and topic branches are used to develop features and isolate changes.
Pull requests are used to publish new changes from feature branches into production branches and
update the environments with the latest code version.
Pull request creates reports from the latest branch updates and generates remediation summary for
resolving issues during development.
Explanation
Pull requests are commonly used by teams and organizations collaborating, where everyone shares a single
repository, and topic branches are used to develop features and isolate changes.
Multiple choice
Which of the following choices isn't a benefit of using pull requests?
Protect branches with policies.
Give feedback.
■■ Create reports about code security and quality.
Explanation
The common benefits of collaborating with pull requests are: Get your code reviewed, Give great feedback,
and Protect branches with policies.
Multiple choice
Which of the following choices is the most effective way to avoid direct updates from feature branches
into main?
■■ Branch Policies.
Branch Security.
Branch Lock.
Explanation
Policies are a great way to enforce your team's code quality and change-management standards.
Multiple choice
Which of the following choices isn't a Git hook event?
Post-checkout.
Pre-commit.
■■ After-commit.
Explanation
Hooks must be stored in the .git/hooks folder in the repo root, and they must be named to match the
corresponding events like post-checkout, pre-commit, post-commit, update, and so on.
Multiple choice
Which of the following choices is a practical use case for using Git hooks?
Enforce preconditions for merging by applying custom code with the post-merge event.
■■ Send notifications to your team’s chat room.
Validate code and merge branches automatically using the commit-msg event.
Explanation
There are some examples of where you can use hooks to enforce policies and one is sending notifications to
your team’s chat room (Teams, Slack, HipChat, and so on).
Multiple choice
Which of the following choices is a valid event to verify Work Item ID association in your commit message
before commit?
■■ Commit-msg.
Commit-msg-id.
Msg-commit.
Explanation
For example, you could have a hook into the commit-msg event to validate that the commit message
structure follows the recommended format, or it's associated with a work item.
Multiple choice
Which of the following is a correct statement about a fork?
■■ A fork is a copy of a repository.
A fork is a branch for future development.
A fork is a branching strategy when working with centralized version control.
Explanation
A fork is a copy of a repository.
Multiple choice
Which of the following choices are the correct forking workflow steps?
Create a fork, make your changes locally and push them to a branch, create and complete a PR to the
upstream.
■■ Create a fork, clone it locally, make your changes locally, push them to a branch, create and complete
a PR to the upstream, and sync your fork to the latest from upstream.
Create a fork, clone it locally, make your changes locally, create and complete a PR to the upstream,
push the changes to a branch.
Explanation
The forking workflow is to create a fork, clone it locally, make your changes locally, push them to a branch,
create and complete a PR to the upstream, and sync your fork to the latest from upstream.
Multiple choice
Which of the following choices is the recommended team size to use a single repo instead of fork
workflow?
Large teams.
10-20 developers.
■■ 2-5 developers.
Explanation
For a small team (2-5 developers), we recommend working in a single repo.
Multiple choice
Which of the following choices is for working with large files in repositories?
Package Management.
■■ Git LFS.
Git.
Explanation
When you have source files with large differences between versions and frequent updates, you can use Git
LFS.
Multiple choice
Which of the following choices isn't a common situation when you need to purge data from repositories?
When you need to reduce the size of a repository significantly by removing history.
When you need to remove a sensitive file that should not have been uploaded.
■■ When you need to hide code and history without removing permissions.
Explanation
Hiding code and history without removing permissions isn't a reason to purge repository data. Common situations are significantly reducing a repository's size by removing history, or removing a sensitive file that shouldn't have been uploaded.
Multiple choice
Which of the following choices is the built-in Git command for removing files from the repository?
git remove-branch command.
■■ git filter-branch command.
git purge-branch command.
Explanation
The git filter-branch command is the built-in Git command for rewriting history and removing files from the repository.
Module 3 Implement CI with Azure Pipelines
and GitHub Actions
Learning objectives
After completing this module, students and professionals can:
●● Describe Azure Pipelines.
●● Explain the role of Azure Pipelines and its components.
●● Decide Pipeline automation responsibility.
●● Understand Azure Pipeline key terms.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
Test automation
The new version of an application is rigorously tested throughout this stage to ensure that it meets all
desired system qualities. It's crucial that all relevant aspects—whether functionality, security, performance,
or compliance—are verified by the pipeline. The stage may involve different types of automated or
(initially, at least) manual activities.
Deployment automation
A deployment is required every time the application is installed in an environment for testing, but the
most critical moment for deployment automation is rollout time.
Since the preceding stages have verified the overall quality of the system, it's a low-risk step.
The deployment can be staged, with the new version being initially released to a subset of the production
environment and monitored before being rolled out.
The deployment is automated, allowing for the reliable delivery of new functionality to users within
minutes if needed.
Languages
You can use many languages with Azure Pipelines, such as Python, Java, PHP, Ruby, C#, and Go.
Application types
You can use Azure Pipelines with most application types, such as Java, JavaScript, Python, .NET, PHP, Go,
XCode, and C++.
Deployment targets
Use Azure Pipelines to deploy your code to multiple targets. Targets include:
●● Container registries.
●● Virtual machines.
●● Azure services, or any on-premises or cloud target, such as:
●● Microsoft Azure.
●● Google Cloud.
●● Amazon Web Services (AWS).
Package formats
To produce packages that others can consume, you can publish NuGet, npm, or Maven packages to the
built-in package management repository in Azure Pipelines.
You also can use any other package management repository of your choice.
Agent
When your build or deployment runs, the system begins one or more jobs. An agent is an installable
software that runs a build or deployment job.
Artifact
An artifact is a collection of files or packages published by a build. Artifacts are made available to
subsequent tasks, such as distribution or deployment.
Build
A build represents one execution of a pipeline. It collects the logs associated with running the steps and
the results of running tests.
Continuous delivery
Continuous delivery (CD) (also known as Continuous Deployment) is a process by which code is built,
tested, and deployed to one or more test and production stages. Deploying and testing in multiple
stages helps drive quality. Continuous integration systems produce deployable artifacts, which include
infrastructure and apps. Automated release pipelines consume these artifacts to release new versions and
fixes to existing systems. Monitoring and alerting systems constantly run to drive visibility into the entire
CD process. This process ensures that errors are caught often and early.
Continuous integration
Continuous integration (CI) is the practice used by development teams to simplify the testing and
building of code. CI helps to catch bugs or problems early in the development cycle, making them more
accessible and faster to fix. Automated tests and builds are run as part of the CI process. The process can
run on a set schedule, whenever code is pushed, or both. Items known as artifacts are produced from CI
systems. The continuous delivery release pipelines use them to drive automatic deployments.
Deployment target
A deployment target is a virtual machine, container, web app, or any service used to host the application
being developed. A pipeline might deploy the app to one or more deployment targets after the build is
completed and tests are run.
Job
A build contains one or more jobs. Most jobs run on an agent. A job represents an execution boundary of
a set of steps. All the steps run together on the same agent.
For example, you might build two configurations - x86 and x64. In this case, you have one build and two
jobs.
Pipeline
A pipeline defines the continuous integration and deployment process for your app. It's made up of steps
called tasks.
It can be thought of as a script that describes how your test, build, and deployment steps are run.
Release
When you use the visual designer, you create a release pipeline in addition to a build pipeline. A release is a
term used to describe one execution of a release pipeline. It's made up of deployments to multiple
stages.
Stage
Stages are the primary divisions in a pipeline: “build the app,” "run integration tests," and “deploy to user
acceptance testing” are good examples of stages.
Task
A task is the building block of a pipeline. For example, a build pipeline might consist of build tasks and
test tasks. A release pipeline consists of deployment tasks. Each task runs a specific job in the pipeline.
Trigger
A trigger is something that's set up to tell the pipeline when to run. You can configure a pipeline to run
upon a push to a repository at scheduled times or upon completing another build. All these actions are
known as triggers.
Summary
This module introduced Azure Pipelines concepts and explained key terms and components of the tool. It
helped you decide your pipeline strategy and responsibilities.
You learned how to describe the benefits and usage of:
●● Describe Azure Pipelines.
●● Explain the role of Azure Pipelines and its components.
●● Decide Pipeline automation responsibility.
●● Understand Azure Pipeline key terms.
Learn more
●● What is Azure Pipelines? - Azure Pipelines | Microsoft Docs1.
●● Use Azure Pipelines - Azure Pipelines | Microsoft Docs2.
1 https://docs.microsoft.com/azure/devops/pipelines/get-started/what-is-azure-pipelines
2 https://docs.microsoft.com/azure/devops/pipelines/get-started/pipelines-get-started
Learning objectives
After completing this module, students and professionals can:
●● Choose between Microsoft-hosted and self-hosted agents.
●● Install and configure Azure Pipelines Agents.
●● Configure agent pools.
●● Make the agents and pools secure.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Microsoft-hosted agent
If your pipelines are in Azure Pipelines, then you've got a convenient option to build and deploy using a
Microsoft-hosted agent.
With a Microsoft-hosted agent, maintenance and upgrades are automatically done.
Each time a pipeline is run, a new virtual machine (instance) is provided. The virtual machine is discarded
after one use.
For many teams, this is the simplest way to build and deploy.
You can try it first and see if it works for your build or deployment. If not, you can use a self-hosted agent.
A Microsoft-hosted agent has job time limits.
Self-hosted agent
An agent that you set up and manage on your own to run build and deployment jobs is a self-hosted
agent.
You can use a self-hosted agent in Azure Pipelines. A self-hosted agent gives you more control to install
dependent software needed for your builds and deployments.
You can install the agent on:
●● Linux.
●● macOS.
●● Windows.
●● Linux Docker containers.
After you've installed the agent on a machine, you can install any other software on that machine as
required by your build or deployment jobs.
A self-hosted agent doesn't have job time limits.
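As a sketch, configuring and running a self-hosted agent on Linux after downloading and extracting the agent package might look like the following; the organization URL, personal access token, pool, and agent name are placeholders:
./config.sh --unattended \
  --url https://dev.azure.com/fabrikam \
  --auth pat --token <personal-access-token> \
  --pool Default --agent my-build-agent
./run.sh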
Container jobs
Similar to agent pool jobs, container jobs run in a container on an agent that's part of an agent pool.
Agentless jobs
Jobs that run directly on Azure DevOps don't require an agent for execution. They're also often called
server jobs.
In Azure Pipelines, pools are scoped to the entire organization so that you can share the agent machines
across projects.
If you create an Agent pool for a specific project, only that project can use the pool until you add the
project pool into another project.
When creating a build or release pipeline, you can specify which pool it uses, at the organization or project
scope.
Pools scoped to a project can only be used by build and release pipelines within that project.
To share an agent pool with multiple projects, use an organization-scoped agent pool and add it to each
of those projects: in each project, add an existing agent pool and choose the organization agent pool. If you create
a new agent pool, you can automatically grant access permission to all pipelines.
●● First, make sure you're a member of a group in All Pools with the Administrator role.
3 https://docs.microsoft.com/azure/devops/pipelines/process/phases
●● Next, create a New project agent pool in your project settings and select the option to Create a
new organization agent pool. As a result, both an organization and project-level agent pool will be
created.
●● Finally, install and configure agents to be part of that agent pool.
●● You're a member of the infrastructure team and would like to set up a pool of agents for use in all
projects.
●● First, make sure you're a member of a group in All Pools with the Administrator role.
●● Next, create a New organization agent pool in your admin settings and select Autoprovision
corresponding project agent pools in all projects while creating the pool. This setting ensures all
projects have a pool pointing to the organization agent pool. The system creates a pool for
existing projects, and in the future, it will do so whenever a new project is created.
●● Finally, install and configure agents to be part of that agent pool.
●● You want to share a set of agent machines with multiple projects, but not all of them.
●● First, create a project agent pool in one of the projects and select the option to Create a new
organization agent pool while creating that pool.
●● Next, go to each of the other projects, and create a pool in each of them while selecting the option
to Use an existing organization agent pool.
●● Finally, install and configure agents to be part of the shared agent pool.
Here's a standard communication pattern between the agent and Azure Pipelines.
The user registers an agent with Azure Pipelines by adding it to an agent pool. You have to be an agent
pool administrator to register an agent in that pool. The identity of the agent pool administrator is
needed only at the time of registration. It isn't persisted on the agent, nor is it used to communicate
further between the agent and Azure Pipelines.
Once the registration is complete, the agent downloads a listener OAuth token and uses it to listen to the
job queue.
Periodically, the agent checks to see if a new job request has been posted in the job queue in Azure
Pipelines. The agent downloads the job and a job-specific OAuth token when a job is available. This token
is generated by Azure Pipelines for the scoped identity specified in the pipeline. That token is short-lived
and is used by the agent to access resources (for example, source code) or modify resources (for exam-
ple, upload test results) on Azure Pipelines within that job.
Once the job is completed, the agent discards the job-specific OAuth token and checks if there's a new
job request using the listener OAuth token.
The payload of the messages exchanged between the agent and Azure Pipelines is secured using
asymmetric encryption. Each agent has a public-private key pair, and the public key is exchanged with the
server during registration.
The server uses the public key to encrypt the job's payload before sending it to the agent. The agent
decrypts the job content using its private key. Secrets stored in build pipelines, release pipelines, or
variable groups are secured when exchanged with the agent.
For example, to run tasks that use Windows authentication to access an external service, you must run
the agent using an account with access to that service.
However, if you're running UI tests such as Selenium or Coded UI tests that require a browser, the
browser is launched in the context of the agent account.
After configuring the agent, we recommend you first try it in interactive mode to ensure it works. Then,
for production use, we recommend running the agent in one of the following modes so that it reliably
keeps running. These modes also ensure that the agent starts automatically if the machine is restarted.
You can use the service manager of the operating system to manage the lifecycle of the agent. Also, the
experience for auto-upgrading the agent is better when it's run as a service.
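For example, on Linux the agent ships with helper scripts to install and start it as a systemd service (a sketch; run from the agent's installation folder):
sudo ./svc.sh install
sudo ./svc.sh start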
Another option is to run the agent as an interactive process with autologon enabled. In some cases, you might need to run the agent
interactively for production use, such as for UI tests.
When the agent is configured to run in this mode, the screen saver is also disabled.
Some domain policies may prevent you from enabling autologon or disabling the screen saver.
In such cases, you may need to seek an exemption from the domain policy or run the agent on a work-
group computer where the domain policies don't apply.
Note: There are security risks when you enable automatic login or disable the screen saver. You allow
other users to walk up to the computer and use the account that automatically logs on. If you configure
the agent to run in this way, you must ensure the computer is physically protected; for example, located
in a secure facility. If you use Remote Desktop to access the computer on which an agent is running with
autologon, simply closing the Remote Desktop causes the computer to be locked, and any UI tests that
run on this agent may fail. To avoid this, use the tscon command to disconnect from Remote Desktop.
●● You don't get these benefits when using a Microsoft-hosted agent. The agent is destroyed after
the build or release pipeline is completed.
●● A Microsoft-hosted agent can take longer to start your build. While it often takes just a few seconds
for your job to be assigned to a Microsoft-hosted agent, it can sometimes take several minutes for an
agent to be allocated, depending on the load on our system.
Azure Pipelines
In Azure Pipelines, roles are defined on each agent pool. Membership in these roles governs what
operations you can do on an agent pool.
4 https://docs.microsoft.com/azure/devops/pipelines/agents/v2-windows
5 https://docs.microsoft.com/azure/devops/pipelines/agents/proxy
Summary
This module explored differences between Microsoft-hosted and self-hosted agents, detailed job types,
and introduced agent pools configuration.
You learned how to describe the benefits and usage of:
●● Choose between Microsoft-hosted and self-hosted agents.
●● Install and configure Azure Pipelines Agents.
●● Configure agent pools.
●● Make the agents and pools secure.
Learn more
●● Microsoft-hosted agents for Azure Pipelines - Azure Pipelines | Microsoft Docs6.
●● Deploy an Azure Pipelines agent on Linux - Azure Pipelines | Microsoft Docs7.
●● Deploy a build and release agent on macOS - Azure Pipelines | Microsoft Docs8.
●● Deploy an Azure Pipelines agent on Windows - Azure Pipelines | Microsoft Docs9.
●● Agents pools - Azure Pipelines | Microsoft Docs10.
6 https://docs.microsoft.com/azure/devops/pipelines/agents/hosted
7 https://docs.microsoft.com/azure/devops/pipelines/agents/v2-linux
8 https://docs.microsoft.com/azure/devops/pipelines/agents/v2-osx
9 https://docs.microsoft.com/azure/devops/pipelines/agents/v2-windows
10 https://docs.microsoft.com/azure/devops/pipelines/agents/pools-queues
Learning objectives
After completing this module, students and professionals can:
●● Use and estimate parallel jobs.
●● Use Azure Pipelines for open-source or private projects.
●● Use Visual Designer.
●● Work with Azure Pipelines and YAML.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but is not necessary.
●● Beneficial to have experience in an organization that delivers software.
A release consumes a parallel job only when it's being actively deployed to a stage.
While the release is waiting for approval or manual intervention, it doesn't consume a parallel job.
Simple estimate
A simple rule of thumb: Estimate that you'll need one parallel job for every four to five users in your
organization.
Detailed estimate
In the following scenarios, you might need multiple parallel jobs:
●● If you have multiple teams, and if each of them requires a CI build, you'll likely need a parallel job for
each team.
●● If your CI build trigger applies to multiple branches, you'll likely need a parallel job for each active
branch.
●● If you develop multiple applications by using one organization or server, you'll likely need more
parallel jobs: one to deploy each application simultaneously.
Supported services
Non-members of a public project will have read-only access to a limited set of services, specifically:
●● Browse the code base, download code, view commits, branches, and pull requests.
●● View and filter work items.
●● View a project page or dashboard.
●● View the project Wiki.
●● Do a semantic search of the code or work items.
For more information, see Differences and limitations for non-members of a public project11.
11 https://docs.microsoft.com/azure/devops/organizations/public/feature-differences
Thanks to public project capabilities, the team can enable just that experience. Everyone in the community
will have access to the same build results, whether or not they're a maintainer on the project.
When you're using the per-minute plan, you can run only
one job at a time.
If you run builds for more than 14 paid hours in a month, the per-minute plan might be less cost-effec-
tive than the parallel jobs model.
See Azure DevOps Services Pricing | Microsoft Azure12 for current pricing.
12 https://azure.microsoft.com/pricing/details/devops/azure-devops-services/
3. The build creates an artifact used by the rest of your pipeline to run tasks such as deploying to
staging or production.
4. Your code is now updated, built, tested, and packaged. It can be deployed to any target.
13 https://docs.microsoft.com/azure/devops/pipelines/get-started-designer
Summary
This module described parallel jobs and how to estimate their usage. Also, it presented Azure DevOps for
open-source projects and explored Visual Designer and YAML pipelines.
You learned how to describe the benefits and usage of:
●● Use and estimate parallel jobs.
●● Use Azure Pipelines for open-source or private projects.
●● Use Visual Designer.
●● Work with Azure Pipelines and YAML.
Learn more
●● Configure and pay for parallel jobs - Azure DevOps | Microsoft Docs15.
●● Azure DevOps Services Pricing | Microsoft Azure16.
14 https://docs.microsoft.com/azure/devops/pipelines/get-started-yaml
15 https://docs.microsoft.com/azure/devops/pipelines/licensing/concurrent-jobs
16 https://azure.microsoft.com/pricing/details/devops/azure-devops-services
17 https://docs.microsoft.com/azure/devops/organizations/public/about-public-projects
18 https://docs.microsoft.com/azure/devops/pipelines/create-first-pipeline
19 https://docs.microsoft.com/azure/devops/pipelines/customize-pipeline
Learning objectives
After completing this module, students and professionals can:
●● Explain why Continuous Integration matters.
●● Implement Continuous Integration using Azure Pipelines.
●● Explain the benefits of Continuous Integration.
●● Describe build properties.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
20 https://git-scm.com/
21 https://subversion.apache.org/
22 https://docs.microsoft.com/azure/devops/repos/tfvc/overview
23 https://www.nuget.org/
24 https://www.npmjs.com/
25 https://chocolatey.org/
26 https://brew.sh/
27 http://rpm.org/
28 https://azure.microsoft.com/services/devops
29 https://www.jetbrains.com/teamcity/
30 https://jenkins.io/
31 http://ant.apache.org/
32 http://nant.sourceforge.net/
33 https://gradle.org/
CI implementation challenges
●● Have you tried to implement continuous integration in your organization?
●● Were you successful?
●● If you were successful, what lessons did you learn?
●● If you weren't successful, what were the challenges?
34 https://docs.microsoft.com/devops/develop/what-is-continuous-integration
In this case, the date has been retrieved as a system variable, then formatted via yyyyMMdd, and the
revision is then appended.
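For example, a build number format along these lines (a common default for classic build pipelines) produces numbers such as 20240115.1:
$(date:yyyyMMdd)$(rev:.r)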
Build status
While we have been manually queuing each build, we'll soon see that builds can be automatically
triggered.
It's a key capability required for continuous integration.
But there are times that we might not want the build to run, even if it's triggered.
It can be controlled with these settings:
Note: You can use the Paused setting to allow new builds to queue but to hold off on starting them.
The authorization scope determines whether the build job is limited to accessing resources in the current
project or can also access resources in other projects in the project collection.
The build job timeout determines how long the job can execute before being automatically canceled.
A value of zero (or leaving the text box empty) specifies that there's no limit.
The build job cancel timeout determines how long the server will wait for a build job to respond to a
cancellation request.
Badges
Some development teams like to show the state of the build on an external monitor or website.
These settings provide a link to the image to use for it. Here's an example of an Azure Pipelines badge
that shows Succeeded.
Summary
This module detailed the Continuous Integration practice—also, the pillars for implementing it in the
development lifecycle, its benefits, and properties.
You learned how to describe the benefits and usage of:
●● Explain why Continuous Integration matters.
●● Implement Continuous Integration using Azure Pipelines.
●● Explain the benefits of Continuous Integration.
●● Describe build properties.
Learn more
●● Design a CI/CD pipeline using Azure DevOps - Azure Example Scenarios | Microsoft Docs36.
●● Build options - Azure Pipelines | Microsoft Docs37.
●● Create your first pipeline - Azure Pipelines | Microsoft Docs38.
35 https://docs.microsoft.com/azure/devops/pipelines/build/options
36 https://docs.microsoft.com/azure/architecture/example-scenario/apps/devops-dotnet-webapp
37 https://docs.microsoft.com/azure/devops/pipelines/build/options
38 https://docs.microsoft.com/azure/devops/pipelines/create-first-pipeline
Learning objectives
After completing this module, students and professionals can:
●● Define a build strategy.
●● Explain and configure demands.
●● Implement multi-agent builds.
●● Use different source control types available in Azure Pipelines.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
When you configure a build pipeline and the agent pool to use, you can specify specific demands that
the agent must meet on the Options tab.
In the build job image, the HasPaymentService capability is required to exist in the agent's collection of
capabilities. Besides an exists condition, you can also require that a capability equals a specific value.
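In YAML pipelines, demands can also be expressed on the pool; the following sketch assumes a self-hosted pool named Default and the illustrative HasPaymentService capability:
pool:
  name: Default
  demands:
  - HasPaymentService           # exists condition
  - Agent.OS -equals Linux      # equals condition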
●● Run the release once with configuration setting A on WebApp A and setting B for WebApp B.
●● Deploy to different geographic regions.
●● Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents. For
example, you can run a broad suite of 1000 tests on a single agent. Or you can use two agents and
run 500 tests on each one in parallel.
For more information, see Specify jobs in your pipeline40.
39 https://docs.microsoft.com/azure/devops/pipelines/agents/agents
40 https://docs.microsoft.com/azure/devops/pipelines/process/phases
Parallel jobs
At the organization level, you can configure the number of parallel jobs that are made available.
The free tier allows for one parallel job of up to 1,800 minutes per month. Self-hosted agents have
higher limits.
Note: You can define a build as a collection of jobs rather than as a single job. Each job consumes one of
these parallel jobs that run on an agent. If there aren't enough parallel jobs available for your organiza-
tion, the jobs will be queued and run sequentially.
Summary
This module described pipeline strategies, configuration, implementation of multi-agent builds, and what
source controls Azure Pipelines supports.
You learned how to describe the benefits and usage of:
●● Define a build strategy.
●● Explain and configure demands.
●● Implement multi-agent builds.
●● Use different source control types available in Azure Pipelines.
Learn more
●● Create a multi-platform pipeline - Azure Pipelines | Microsoft Docs41.
●● Demands - Azure Pipelines | Microsoft Docs42.
●● Build source repositories - Azure Pipelines | Microsoft Docs43.
41 https://docs.microsoft.com/azure/devops/pipelines/get-started-multiplatform
42 https://docs.microsoft.com/azure/devops/pipelines/process/demands
43 https://docs.microsoft.com/azure/devops/pipelines/repos
Learning objectives
After completing this module, students and professionals can:
●● Describe advanced Azure Pipelines anatomy and structure.
●● Detail templates and YAML resources.
●● Implement and use multiple repositories.
●● Explore communication to deploy using Azure Pipelines.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Understanding of Azure Pipelines.
●● Beneficial to have experience in an organization that delivers software.
Hello world
Start slowly and create a pipeline that echoes “Hello world!” to the console. No technical course is
complete without a hello world example.
name: 1.0$(Rev:.r)

trigger:
- main

# equivalent trigger
# trigger:
#   branches:
#     include:
#     - main

variables:
  name: martin

pool:
  vmImage: ubuntu-latest

jobs:
- job: helloworld
  steps:
  - script: echo "Hello world!"
Name
The variable name is a bit misleading since the name is in the build number format. You'll get an integer
number if you don't explicitly set a name format. It's a monotonically increasing number for each run triggered
off this pipeline, starting at 1. This number is stored in Azure DevOps. You can make use of this number
by referencing $(Rev).
To make a date-based number, you can use the format $(Date:yyyy-MM-dd-HH-mm) to get a build
number like 2020-01-16-19-22.
To get a semantic number like 1.0.x, you can use something like 1.0$(Rev:.r).
Triggers
If there's no explicit triggers section, then it's implied that any commit to any path in any branch will
trigger this pipeline to run.
However, you can be more precise using filters such as branches or paths.
Let's consider this trigger:
trigger:
  branches:
    include:
    - main
This trigger is configured to queue the pipeline only when there's a commit to the main branch. What
about triggering for any branch except the main? You guessed it: use exclude instead of include:
trigger:
  branches:
    exclude:
    - main
Tip: You can get the name of the branch from the variables Build.SourceBranch (for the full name like
refs/heads/main) or Build.SourceBranchName (for the short name like main).
What about a trigger for any branch with a name that starts with feature/ and only if the change is in the
webapp folder?
trigger:
  branches:
    include:
    - feature/*
  paths:
    include:
    - webapp/**
You can mix includes and excludes if you need to. You can also filter on tags.
Tip: Don't forget one overlooked trigger: none. If you never want your pipeline to trigger automatically,
you can use none. It's helpful if you're going to create a pipeline that is only manually triggered.
There are other triggers for other events, such as:
●● Pull Requests (PRs) can also filter branches and paths.
●● Schedules allow you to specify cron expressions for scheduling pipeline runs.
●● Pipeline completion triggers enable you to run a pipeline when another pipeline completes, allowing pipeline chaining.
You can find all the documentation on triggers here44.
44 https://docs.microsoft.com/azure/devops/pipelines/build/triggers
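For illustration, a scheduled trigger might look like the following sketch (the cron expression, display name, and branch filter are assumptions, not part of the original example):
schedules:
- cron: "0 3 * * 1-5"
  displayName: Nightly build (illustrative)
  branches:
    include:
    - main
  always: false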
Jobs
A job is a set of steps executed by an agent in a queue (or pool). Jobs are atomic – they're performed
wholly on a single agent. You can configure the same job to run on multiple agents simultaneously, but
even in this case, the entire set of steps in the job is run on every agent. If you need some steps to run on
one agent and some on another, you'll need two jobs.
A job has the following attributes besides its name:
●● displayName – a friendly name.
●● dependsOn - a way to specify dependencies and ordering of multiple jobs.
●● condition – a binary expression: if it evaluates to true, the job runs; if false, the job is skipped.
●● strategy - used to control how jobs are parallelized.
●● continueOnError - to specify if the rest of the pipeline should continue or not if this job fails.
●● pool – the name of the pool (queue) to run this job on.
●● workspace - managing the source workspace.
●● container - for specifying a container image in which to execute the job; more on this later.
●● variables – variables scoped to this job.
●● steps – the set of steps to execute.
●● timeoutInMinutes and cancelTimeoutInMinutes for controlling timeouts.
●● services - sidecar services that you can spin up.
Dependencies
You can define dependencies between jobs using the dependsOn property. It lets you specify sequences and fan-out and fan-in scenarios.
A sequential dependency is implied if you don't explicitly define a dependency.
If you want jobs to run in parallel, you need to specify dependsOn: none.
Let's look at a few examples. Consider this pipeline:
jobs:
- job: A
  steps:
  # steps omitted for brevity
- job: B
  steps:
  # steps omitted for brevity
Because no dependsOn was specified, the jobs will run sequentially: first A and then B.
To have both jobs run in parallel, we add dependsOn: none to job B:
jobs:
- job: A
  steps:
  # steps omitted for brevity
- job: B
  dependsOn: none
  steps:
  # steps omitted for brevity
This example shows fan-out and fan-in: jobs B and C both depend on A, job D waits for B and C, and job E waits for B and D:
jobs:
- job: A
  steps:
  # steps omitted for brevity
- job: B
  dependsOn: A
  steps:
  # steps omitted for brevity
- job: C
  dependsOn: A
  steps:
  # steps omitted for brevity
- job: D
  dependsOn:
  - B
  - C
  steps:
  # steps omitted for brevity
- job: E
  dependsOn:
  - B
  - D
  steps:
  # steps omitted for brevity
Checkout
Classic builds implicitly checkout any repository artifacts, but pipelines require you to be more explicit
using the checkout keyword:
●● Jobs check out the repo they're contained in automatically unless you specify checkout: none.
●● Deployment jobs don't automatically check out the repo, so you'll need to specify checkout: self for
deployment jobs if you want access to files in the YAML file's repo.
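As a minimal sketch, suppressing the implicit checkout in a regular job looks like this (the job name is illustrative):
jobs:
- job: NoSources
  steps:
  - checkout: none
  - script: echo "This job runs without checking out the repository"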
Download
Downloading artifacts requires you to use the download keyword. Downloads also work the opposite way
for jobs and deployment jobs:
●● Jobs don't download anything unless you explicitly define a download.
●● Deployment jobs implicitly do a download: current, which downloads any pipeline artifacts that have
been created in the existing pipeline. To prevent it, you must specify download: none.
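As a minimal sketch, suppressing the implicit download in a deployment job might look like this (the environment name is illustrative):
jobs:
- deployment: DeployWeb
  environment: staging
  strategy:
    runOnce:
      deploy:
        steps:
        - download: none
        - script: echo "Deploying without downloading pipeline artifacts"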
Resources
What if your job requires source code in another repository? You'll need to use resources. Resources let
you reference:
●● other repositories
●● pipelines
●● builds (classic builds)
●● containers (for container jobs)
●● packages
To reference code in another repo, specify that repo in the resources section and then reference it via its
alias in the checkout step:
resources:
  repositories:
  - repository: appcode
    type: git
    name: otherRepo

steps:
- checkout: appcode
Variables
It would be tough to achieve any sophistication in your pipelines without variables. Though this classifica-
tion is partly mine, several types of variables exist, and pipelines don't distinguish between these types.
However, I've found it helpful to categorize pipeline variables to help teams understand some nuances
when dealing with them.
Every variable is a key: value pair. The key is the variable's name, and it has a value.
To dereference a variable, wrap the key in $(). Let's consider this example:
variables:
  name: martin

steps:
- script: echo "Hello, $(name)!"
45 https://docs.microsoft.com/azure/devops/extend/develop/add-build-task
Stages are the primary divisions in a pipeline. The stages “Build this app,” "Run these tests," and “Deploy
to preproduction” are good examples.
A stage is one or more jobs, units of work assignable to the same machine.
You can arrange both stages and jobs into dependency graphs. Examples include “Run this stage before
that one” and "This job depends on the output of that job."
A job is a linear series of steps. Steps can be tasks, scripts, or references to external templates.
This hierarchy is reflected in the structure of a YAML file like:
●● Pipeline
●● Stage A
●● Job 1
●● Step 1.1
●● Step 1.2
●● ...
●● Job 2
●● Step 2.1
●● Step 2.2
●● ...
●● Stage B
●● ...
Simple pipelines don't require all these levels. For example, you can omit the containers for stages and
jobs in a single job build because there are only steps.
Because many options shown in this article aren't required and have reasonable defaults, your YAML
definitions are unlikely to include all of them.
Pipeline
The schema for a pipeline:
name: string # build numbering format
resources:
  pipelines: [ pipelineResource ]
  containers: [ containerResource ]
  repositories: [ repositoryResource ]
variables: # several syntaxes
trigger: trigger
pr: pr
stages: [ stage | templateReference ]
If you have a single stage, you can omit the stages keyword and directly specify the jobs keyword:
# ... other pipeline-level keywords
jobs: [ job | templateReference ]
If you have a single stage and a single job, you can omit the stages and jobs keywords and directly specify the steps keyword:
# ... other pipeline-level keywords
steps: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
Stage
A stage is a collection of related jobs. By default, stages run sequentially. Each stage starts only after the
preceding stage is complete.
Use approval checks to manually control when a stage should run. These checks are commonly used to
control deployments to production environments.
Checks are a mechanism available to the resource owner. They control when a stage in a pipeline con-
sumes a resource.
As an owner of a resource like an environment, you can define checks required before a stage that
consumes the resource can start.
This example runs three stages, one after another. The middle stage runs two jobs in parallel.
stages:
- stage: Build
  jobs:
  - job: BuildJob
    steps:
    # steps omitted for brevity
- stage: Test            # illustrative stage name; runs two jobs in parallel
  jobs:
  - job: TestOnWindows
    steps:
    # steps omitted for brevity
  - job: TestOnLinux     # illustrative name for the second parallel test job
    steps:
    # steps omitted for brevity
- stage: Deploy          # illustrative stage name
  jobs:
  - job: Deploy
    steps:
    # steps omitted for brevity
Job
A job is a collection of steps run by an agent or on a server. Jobs can run conditionally and might depend
on previous jobs.
jobs:
- job: MyJob
  displayName: My First Job
  continueOnError: true
  workspace:
    clean: outputs
  steps:
Deployment strategies
Deployment strategies allow you to use specific techniques to deliver updates when deploying your
application.
Examples of these techniques include:
●● Enable initialization.
●● Deploy the update.
●● Route traffic to the updated version.
●● Test the updated version after routing traffic.
●● If there's a failure, run steps to restore to the last known good version.
RunOnce
runOnce is the simplest deployment strategy, wherein all the lifecycle hooks are executed once.
strategy:
  runOnce:
    preDeploy:
      pool: [ server | pool ] # See pool schema.
      steps:
      - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
    deploy:
      pool: [ server | pool ] # See pool schema.
      steps: ...
    routeTraffic:
      pool: [ server | pool ]
      steps:
      ...
    postRouteTraffic:
      pool: [ server | pool ]
      steps:
      ...
    on:
      failure:
        pool: [ server | pool ]
        steps:
        ...
      success:
        pool: [ server | pool ]
        steps:
        ...
Rolling
A rolling deployment replaces instances of the previous version of an application with instances of the
new version. It can be configured by specifying the keyword rolling: under the strategy: node.
strategy:
  rolling:
    maxParallel: [ number or percentage as x% ]
    preDeploy:
      steps:
      - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
    deploy:
      steps:
      ...
    routeTraffic:
      steps:
      ...
    postRouteTraffic:
      steps:
      ...
    on:
      failure:
        steps:
        ...
      success:
        steps:
        ...
46 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs
Canary
By using this strategy, you can roll out the changes to a small subset of servers first. The canary deploy-
ment strategy is an advanced deployment strategy that helps mitigate the risk involved in rolling out new
versions of applications.
As you gain more confidence in the new version, you can release it to more servers in your infrastructure
and route more traffic to it.
strategy:
  canary:
    increments: [ number ]
    preDeploy:
      pool: [ server | pool ] # See pool schema.
      steps:
      - script: [ script | bash | pwsh | powershell | checkout | task | templateReference ]
    deploy:
      pool: [ server | pool ] # See pool schema.
      steps:
      ...
    routeTraffic:
      pool: [ server | pool ]
      steps:
      ...
    postRouteTraffic:
      pool: [ server | pool ]
      steps:
      ...
    on:
      failure:
        pool: [ server | pool ]
        steps:
        ...
      success:
        pool: [ server | pool ]
        steps:
        ...
47 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs
48 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs
Lifecycle hooks
Deployment strategies are implemented by using lifecycle hooks. Depending on the pool attribute, each hook resolves into an agent or server job49.
Lifecycle hooks inherit the pool specified by the deployment job. Deployment jobs use the $(Pipeline.Workspace) system variable.
Available lifecycle hooks:
●● preDeploy: Used to run steps that initialize resources before application deployment starts.
●● deploy: Used to run steps that deploy your application. The Download artifact task is auto-injected only into the deploy hook for deployment jobs. To stop downloading artifacts, use - download: none, or choose specific artifacts to download by specifying the Download Pipeline Artifact task50.
●● routeTraffic: Used to run steps that serve the traffic to the updated version.
●● postRouteTraffic: Used to run the steps after the traffic is routed. Typically, these tasks monitor the
health of the updated version for a defined interval.
●● on: failure or on: success: Used to run steps for rollback actions or clean-up.
Steps
Steps are the linear sequence of operations that make up a job. Each step runs in its own process on an agent and has access to the pipeline workspace on a local hard drive.
This behavior means environment variables aren't preserved between steps, but file system changes are.
steps:
- pwsh: |
    Write-Host "This multiline script always runs in PowerShell Core."
    Write-Host "Even on non-Windows machines!"
Tasks
Tasks are the building blocks of a pipeline. There's a catalog of tasks available to choose from.
steps:
- task: VSBuild@1
  displayName: Build
  timeoutInMinutes: 120
  inputs:
    solution: '**\*.sln'
49 https://docs.microsoft.com/azure/devops/pipelines/process/phases
50 https://docs.microsoft.com/azure/devops/pipelines/yaml-schema/steps-download
Detail templates
Template references
You can export reusable sections of your pipeline to separate files. These individual files are known as templates. Azure Pipelines supports four kinds of templates:
●● Stage
●● Job
●● Step
●● Variable
You can also use templates to control what is allowed in a pipeline and to define how parameters can be used.
Templates themselves can include other templates. Azure Pipelines supports a maximum of 50 individual template files in a single pipeline.
Stage templates
You can define a set of stages in one file and use it multiple times in other files.
In this example, a stage is repeated twice for two different testing regimes. The stage itself is specified
only once.
# File: stages/test.yml
parameters:
  name: ''
  testFile: ''

stages:
# ... (stage and job definition follow; the job's pool uses vmImage: macos-10.14)
Templated pipeline
# File: azure-pipelines.yml
stages:
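# (continuation sketch: consume the stage template twice; parameter values are illustrative)
- template: stages/test.yml
  parameters:
    name: regression
    testFile: tests/regression.js
- template: stages/test.yml
  parameters:
    name: smoke
    testFile: tests/smoke.js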
Job templates
You can define a set of jobs in one file and use it multiple times in other files.
In this example, a single job is repeated on three platforms. The job itself is specified only once.
# File: jobs/build.yml
parameters:
  name: ''
  pool: ''
  sign: false

jobs:
# File: azure-pipelines.yml
jobs:
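# (continuation sketch: consume the job template once per platform; the pool value is illustrative)
- template: jobs/build.yml
  parameters:
    name: Linux
    pool:
      vmImage: ubuntu-latest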
Step templates
You can define a set of steps in one file and use it multiple times in another file.
# File: steps/build.yml
steps:
# File: azure-pipelines.yml
jobs:
- job: macOS
  pool:
    vmImage: 'macOS-10.14'
  steps:
- job: Linux
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
- job: Windows
  pool:
    vmImage: 'vs2017-win2016'
  steps:
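  # (sketch: in this pattern, each job's steps reference the shared step template)
  - template: steps/build.yml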
Variable templates
You can define a set of variables in one file and use it multiple times in other files.
In this example, a set of variables is repeated across multiple pipelines. The variables are specified only
once.
# File: variables/build.yml
variables:
- name: vmImage
  value: vs2017-win2016
- name: arch
  value: x64
- name: config
  value: debug
# File: component-x-pipeline.yml
variables:
# File: component-y-pipeline.yml
variables:
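# (continuation sketch: include the shared variable template and use its values; the step is illustrative)
- template: variables/build.yml
pool:
  vmImage: ${{ variables.vmImage }}
steps:
- script: echo Building ${{ variables.config }} for ${{ variables.arch }}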
General schema
resources:
  pipelines: [ pipeline ]
  repositories: [ repository ]
  containers: [ container ]
Pipeline resource
If you have an Azure pipeline that produces artifacts, your pipeline can consume the artifacts by using the
pipeline keyword to define a pipeline resource.
resources:
  pipelines:
  - pipeline: MyAppA
    source: MyCIPipelineA
  - pipeline: MyAppB
    source: MyCIPipelineB
    trigger: true
  - pipeline: MyAppC
    project: DevOpsProject
    source: MyCIPipelineC
    branch: releases/M159
    version: 20190718.2
    trigger:
      branches:
        include:
        - master
        - releases/*
        exclude:
        - users/*
51 https://docs.microsoft.com/azure/devops/pipelines/process/resources
Container resource
Container jobs let you isolate your tools and dependencies inside a container. The agent launches an
instance of your specified container then runs steps inside it. The container keyword lets you specify your
container images.
Service containers run alongside a job to provide various dependencies like databases.
resources:
  containers:
  - container: linux
    image: ubuntu:16.04
  - container: windows
    image: myprivate.azurecr.io/windowsservercore:1803
    endpoint: my_acr_connection
  - container: my_service
    image: my_service:tag
    ports:
Repository resource
Let the system know about the repository if:
●● Your pipeline has templates in another repository.
●● You want to use multi-repo checkout with a repository that requires a service connection.
The repository keyword lets you specify an external repository.
resources:
  repositories:
  - repository: common
    type: github
    name: Contoso/CommonTools
    endpoint: MyContosoServiceConnection
  - repository: MyGitHubRepo
    type: github
    endpoint: MyGitHubServiceConnection
    name: MyGitHubOrgOrUser/MyGitHubRepo
  - repository: MyBitBucketRepo
    type: bitbucket
    endpoint: MyBitBucketServiceConnection
    name: MyBitBucketOrgOrUser/MyBitBucketRepo
  - repository: MyAzureReposGitRepository
    type: git
    name: MyProject/MyAzureReposGitRepo

trigger:
- main

pool:
  vmImage: 'ubuntu-latest'

steps:
- checkout: self
- checkout: MyGitHubRepo
- checkout: MyBitBucketRepo
- checkout: MyAzureReposGitRepository
- script: dir $(Build.SourcesDirectory)   # lists the checked-out repository folders
If the self-repository is named CurrentRepo, the script command produces the following output: Curren-
tRepo MyAzureReposGitRepo MyBitBucketRepo MyGitHubRepo.
In this example, the repositories' names are used for the folders because no path is specified in the
checkout step.
The default branch is checked out unless you choose a specific ref.
If you're using inline syntax, choose the ref by appending @ref. For example:
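As a hedged sketch (project, repository, and branch names are illustrative), an inline checkout of a specific ref might look like:
steps:
- checkout: git://MyProject/MyRepo@features/tools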
GitHub Repository
Azure Pipelines can automatically build and validate every pull request and commit to your GitHub repos-
itory.
When creating your new pipeline, you can select a GitHub repository and then a YAML file in that reposi-
tory (self repository). By default, this is the repository that your pipeline builds.
Azure Pipelines must be granted access to your repositories to trigger their builds and fetch their code
during builds.
There are three authentication types for granting Azure Pipelines access to your GitHub repositories while
creating a pipeline.
●● GitHub App.
●● OAuth.
●● Personal access token (PAT).
You can create a continuous integration (CI) trigger to run a pipeline whenever you push an update to the
specified branches or push selected tags.
YAML pipelines are configured by default with a CI trigger on all branches.
trigger:
- main
- releases/*
Also, it's possible to configure pull request (PR) triggers to run whenever a pull request is opened with
one of the specified target branches or when updates are made to such a pull request.
You can specify the target branches when validating your pull requests.
To validate pull requests that target main and releases/*, and to start a new run the first time a pull request is created and after every update made to it, use:
pr:
- main
- releases/*
Summary
This module detailed Azure Pipelines anatomy and structure, templates, YAML resources, and how to use multiple repositories in your pipeline. It also explored how Azure Pipelines communicates with target servers during deployments.
You learned how to describe the benefits and usage of:
●● Describe advanced Azure Pipelines anatomy and structure.
●● Detail templates and YAML resources.
●● Implement and use multiple repositories.
●● Explore communication to deploy using Azure Pipelines.
Learn more
●● Azure Pipelines New User Guide - Key concepts - Azure Pipelines | Microsoft Docs53.
●● Azure Pipelines YAML pipeline editor guide - Azure Pipelines | Microsoft Docs54.
●● Check out multiple repositories in your pipeline - Azure Pipelines | Microsoft Docs55.
●● Azure Pipelines Agents - Azure Pipelines | Microsoft Docs56.
52 https://docs.microsoft.com/azure/devops/pipelines/repos/github
53 https://docs.microsoft.com/azure/devops/pipelines/get-started/key-pipelines-concepts
54 https://docs.microsoft.com/azure/devops/pipelines/get-started/yaml-pipeline-editor
55 https://docs.microsoft.com/azure/devops/pipelines/repos/multi-repo-checkout
56 https://docs.microsoft.com/azure/devops/pipelines/agents/agents
Learning objectives
After completing this module, students and professionals can:
●● Explain GitHub Actions and workflows.
●● Create and work with GitHub Actions and Workflows.
●● Describe Events, Jobs, and Runners.
●● Examine the output and release management for actions.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
57 https://github.com/marketplace
GitHub tracks events that occur. Events can trigger the start of workflows.
Workflows can also start on cron-based schedules and can be triggered by events outside of GitHub.
They can be manually triggered.
Workflows are the unit of automation. They contain Jobs.
Jobs use Actions to get work done.
Understand Workflows
A workflow defines the automation required. It details the events that should trigger the workflow and defines the jobs that should run when the workflow is triggered.
The job defines the location in which the actions will run, such as which runner to use.
Workflows are written in YAML and live within a GitHub repository at the location .github/workflows.
Example workflow:
# .github/workflows/build.yml
name: Node Build

on: [push]

jobs:
  mainbuild:
    runs-on: ${{ matrix.os }}   # run on the OS supplied by the matrix
    strategy:
      matrix:
        node-version: [12.x]
        os: [windows-latest]
    steps:
    - uses: actions/checkout@v1
    - name: Run node.js on latest Windows
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
Explore Events
Events are implemented by the on clause in a workflow definition.
There are several types of events that can trigger workflows.
58 https://github.com/actions/starter-workflows
59 https://docs.github.com/actions/learn-github-actions/workflow-syntax-for-github-actions
60 https://docs.github.com/actions/learn-github-actions/workflow-syntax-for-github-actions
Scheduled events
With this type of trigger, a cron schedule needs to be provided.
on:
  schedule:
  - cron: '0 2 * * 1-5'   # illustrative schedule: 02:00 UTC, Monday to Friday
Code events
Code events will trigger most actions. It occurs when an event of interest occurs in the repository.
on:
  [push, pull_request]
The above event would fire when either a push or a pull request occurs.
on:
  pull_request:
    branches:
    - develop
The event shows how to be specific about the section of the code that is relevant.
In this case, it will fire when a pull request is made that targets the develop branch.
Manual events
There's a unique event that is used to trigger workflow runs manually: the workflow_dispatch event.
Your workflow must be in the default branch for the repository.
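As a minimal sketch, enabling manual runs looks like this (the input shown is an illustrative optional parameter):
on:
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        required: false
        default: 'staging'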
Webhook events
Workflows can be executed when a GitHub webhook is called.
on:
  gollum
This event would fire when someone updates (or first creates) a Wiki page.
External events
Workflows can also use the repository_dispatch event, which allows events to be fired from external systems.
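As a sketch, a workflow triggered from an external system might declare the following (the event type name is illustrative):
on:
  repository_dispatch:
    types: [deploy-request]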
For more information on events, see Events that trigger workflows61.
Explore Jobs
Workflows contain one or more jobs. A job is a set of steps that will be run in order on a runner.
Steps within a job execute on the same runner and share the same filesystem.
The logs produced by jobs are searchable, and artifacts produced can be saved.
For example, a workflow with two independent jobs looks like this:
jobs:
  startup:
    runs-on: ubuntu-latest
    steps:
    - run: ./setup_server_configuration.sh
  build:
    runs-on: ubuntu-latest
    steps:
    - run: ./build_new_server.sh
Sometimes you might need one job to wait for another job to complete.
You can do that by defining dependencies between the jobs.
jobs:
  startup:
    runs-on: ubuntu-latest
    steps:
    - run: ./setup_server_configuration.sh
  build:
    needs: startup
    runs-on: ubuntu-latest
    steps:
    - run: ./build_new_server.sh
61 https://docs.github.com/actions/learn-github-actions/events-that-trigger-workflows
Note: If the startup job in the example above fails, the build job won't execute.
For more information on job dependencies, see the section Creating Dependent Jobs at Managing
complex workflows62.
Explore Runners
When you execute jobs, the steps execute on a Runner.
The steps can be the execution of a shell script or the execution of a predefined Action.
GitHub provides several hosted runners to avoid you needing to spin up your infrastructure to run
actions.
Currently, the maximum duration of a job is 6 hours, and of a workflow is 72 hours.
For JavaScript code, you have implementations of node.js on:
●● Windows
●● macOS
●● Linux
If you need to use other languages, a Docker container can be used. Currently, Docker container actions are supported only on Linux runners.
These options allow you to write in whatever language you prefer.
JavaScript actions are faster (no container needs to be spun up) and run in a more versatile runtime.
The GitHub UI is also better for working with JavaScript actions.
Self-hosted runners
If you need different configurations to the ones provided, you can create a self-hosted runner.
GitHub has published the source code for self-hosted runners as open-source, and you can find it here:
https://github.com/actions/runner.
It allows you to customize the runner completely. However, you then need to maintain (patch, upgrade)
the runner system.
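As a sketch, a workflow job targets a self-hosted runner like this (the extra label is illustrative):
jobs:
  build:
    runs-on: [self-hosted, linux]
    steps:
    - run: echo "Running on our own infrastructure"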
Self-hosted runners can be added at different levels within an enterprise:
●● Repository-level (single repository).
62 https://docs.github.com/actions/learn-github-actions/managing-complex-workflows
Console output can help debug. If it isn't sufficient, you can also enable more logging. See: Enabling
debug logging64.
Tags
Tags allow you to specify the precise versions that you want to work with.
63 https://docs.github.com/actions/hosting-your-own-runners/about-self-hosted-runners
64 https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/enabling-debug-logging
steps:
- uses: actions/install-timer@v2.0.1
SHA-based hashes
You can specify a requested SHA-based hash for an action. It ensures that the action hasn't changed.
However, the downside to this is that you also won't receive updates to the action automatically either.
steps:
- uses: actions/install-timer@327239021f7cc39fe7327647b213799853a9eb98
Branches
A common way to request actions is to refer to the branch you want to work with. You'll then get the
latest version from that branch. That means you'll benefit from updates, but it also increases the chance
of code-breaking.
steps:
- uses: actions/install-timer@develop
Test an Action
GitHub offers several learning tools for actions.
GitHub Actions: hello-world65
You'll see a basic example of how to:
●● Organize and identify workflow files.
●● Add executable scripts.
●● Create workflow and action blocks.
●● Trigger workflows.
●● Discover workflow logs.
Summary
In this module, you learned what GitHub Actions and workflows are and what their elements are. You also learned what events are, explored jobs and runners, and saw how to read console output from actions.
You learned how to describe the benefits and usage of:
●● Explain GitHub Actions and workflows.
●● Create and work with GitHub Actions and Workflows.
●● Describe Events, Jobs, and Runners.
●● Examine the output and release management for actions.
65 https://lab.github.com/githubtraining/github-actions:-hello-world
Learn more
●● Quickstart for GitHub Actions66.
●● Workflow syntax for GitHub Actions - GitHub Docs67.
●● Events that trigger workflows - GitHub Docs68.
66 https://docs.github.com/actions/quickstart
67 https://docs.github.com/actions/learn-github-actions/workflow-syntax-for-github-actions
68 https://docs.github.com/actions/learn-github-actions/events-that-trigger-workflows
Learning objectives
After completing this module, students and professionals can:
●● Implement Continuous Integration with GitHub Actions.
●● Use environment variables.
●● Share artifacts between jobs and use Git tags.
●● Create and manage secrets.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x]
    steps:
    - uses: actions/checkout@main
    - uses: actions/setup-dotnet@v1
      with:
        dotnet-version: '3.1.x'
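As a sketch, defining a custom environment variable and reading it alongside a built-in one might look like this (the variable name and value are illustrative):
env:
  BUILD_CONFIGURATION: Release
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - run: echo "Building $BUILD_CONFIGURATION for $GITHUB_REF"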
For more information on environment variables, including a list of built-in environment variables, see
Environment variables69.
69 https://docs.github.com/actions/learn-github-actions/environment-variables
Upload-artifact
This action can upload one or more files from your workflow to be shared between jobs.
You can upload a specific file:
- uses: actions/upload-artifact
  with:
    name: harness-build-log
    path: bin/output/logs/harness.log
Or an entire directory:
- uses: actions/upload-artifact
  with:
    name: harness-build-logs
    path: bin/output/logs/
Wildcards can be used in paths:
- uses: actions/upload-artifact
  with:
    name: harness-build-logs
    path: bin/output/logs/harness[ab]?/*
And multiple paths can be combined:
- uses: actions/upload-artifact
  with:
    name: harness-build-logs
    path: |
      bin/output/logs/harness.log
      bin/output/logs/harnessbuild.txt
Download-artifact
There's a corresponding action for downloading (or retrieving) artifacts.
- uses: actions/download-artifact
  with:
    name: harness-build-log
70 https://github.com/actions/upload-artifact
Artifact retention
A default retention period can be set for the repository, organization, or enterprise.
You can set a custom retention period when uploading, but it can't exceed the defaults for the repository,
organization, or enterprise.
- uses: actions/upload-artifact
  with:
    name: harness-build-log
    path: bin/output/logs/harness.log
    retention-days: 12
Deleting artifacts
You can delete artifacts directly in the GitHub UI.
For details, you can see: Removing workflow artifacts72.
Workflow status badges usually indicate the status of the default branch, but they can be branch-specific. You do so by adding a URL query parameter:
71 https://github.com/actions/download-artifact
72 https://docs.github.com/actions/managing-workflow-runs/removing-workflow-artifacts
?branch=BBBBB
where:
●● BBBBB is the branch name.
For more information, see: Adding a workflow status badge73.
●● Version your actions like other code. Others might take dependencies on various versions of your
actions. Allow them to specify versions.
●● Provide a latest label. If others are happy to use the latest version of your action, make sure you
provide a latest label that they can specify to get it.
●● Add appropriate documentation. As with other code, documentation helps others use your actions
and can help avoid surprises about how they function.
●● Add detailed action.yml metadata. At the root of your action, you'll have an action.yml file. Make sure it has been populated with the author, icon, any expected inputs, and outputs.
●● Consider contributing to the marketplace. It's easier for us all to work with actions when we all
contribute to the marketplace. Help to avoid people needing to relearn the same issues endlessly.
Often these tags will contain version numbers, but they can have other values.
Tags can then be viewed in the history of a repository.
73 https://docs.github.com/actions/monitoring-and-troubleshooting-workflows/adding-a-workflow-status-badge
Secrets
Secrets are similar to environment variables but encrypted. They can be created at two levels:
●● Repository
●● Organization
If secrets are created at the organization level, access policies can limit the repositories that can use them.
74 https://docs.github.com/repositories/releasing-projects-on-github/about-releases
Command-line secrets
Secrets shouldn't be passed directly as command-line arguments as they may be visible to others.
Instead, treat them like environment variables:
steps:
- shell: pwsh
  env:
    DB_PASSWORD: ${{ secrets.DBPassword }}
  run: |
    db_test "$env:DB_PASSWORD"
75 https://docs.github.com/actions/security-guides/encrypted-secrets
Limitations
Workflows can use up to 100 secrets, and they're limited to 64 KB in size.
For more information on creating secrets, see Encrypted secrets76.
Summary
This module detailed continuous integration using GitHub Actions. It described environment variables,
artifacts, best practices, and how to secure your pipeline using encrypted variables and secrets.
You learned how to describe the benefits and usage of:
●● Implement Continuous Integration with GitHub Actions.
●● Use environment variables.
●● Share artifacts between jobs and use Git tags.
●● Create and manage secrets.
Learn more
●● About continuous integration - GitHub Docs77.
●● Environment variables - GitHub Docs78.
●● Storing workflow data as artifacts - GitHub Docs79.
●● Encrypted secrets - GitHub Docs80.
76 https://docs.github.com/actions/security-guides/encrypted-secrets
77 https://docs.github.com/actions/automating-builds-and-tests/about-continuous-integration
78 https://docs.github.com/actions/learn-github-actions/environment-variables
79 https://docs.github.com/actions/advanced-guides/storing-workflow-data-as-artifacts
80 https://docs.github.com/actions/security-guides/encrypted-secrets
Containers can be lightweight. A container may be only tens of megabytes in size, but a virtual machine with its entire operating system may be several gigabytes in size. Because of this, a single server can host far more containers than virtual machines.
Containers can be efficient: fast to deploy, fast to boot, fast to patch, quick to update.
Learning objectives
After completing this module, students and professionals can:
●● Design a container strategy.
●● Work with Docker Containers.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Virtual Machines
A VM is essentially an emulation of a real computer that executes programs like a real computer. VMs run
on top of a physical machine using a “hypervisor.”
As you can see in the diagram, VMs package up the virtual hardware, a kernel (OS), and user space for
each new VM.
Container
Unlike a VM, which provides hardware virtualization, a container provides operating-system-level virtual-
ization by abstracting the “user space.”
This diagram shows that containers package up just the user space, not the kernel or virtual hardware like
a VM does. Each container gets its isolated user space to allow multiple containers to run on a single host
machine. We can see that all the operating system-level architecture is being shared across containers.
The only parts that are created from scratch are the bins and libs. It's what makes containers so light-
weight.
FROM ubuntu
LABEL maintainer="johndoe@contoso.com"
ADD appsetup /
RUN /bin/bash -c 'source $HOME/.bashrc; \
echo $HOME'
CMD ["echo", "Hello World from within the container"]
The first line refers to the parent image based on which this new image will be based.
Generally, all images will be based on another existing image. In this case, the Ubuntu image would be
retrieved from either a local cache or from DockerHub.
An image that doesn't have a parent is called a base image. In that rare case, the FROM line can be
omitted, or FROM scratch can be used instead.
The second line indicates the email address of the person who maintains this file. Previously, there was a
MAINTAINER command, but that has been deprecated and replaced by a label.
The third line adds a file to the root folder of the image. It can also add an executable.
The fourth and fifth lines are part of a RUN command. Note the use of the backslash to continue the
fourth line onto the fifth line for readability. It's equivalent to having written it instead:
RUN /bin/bash -c 'source $HOME/.bashrc; echo $HOME'
The RUN command is run when the docker build creates the image. It's used to configure items within
the image.
By comparison, the last line represents a command that will be executed when a new container is created
from the image; it's run after container creation.
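To make the distinction concrete, here is a hedged sketch of an Azure Pipelines step that builds the image and then starts a container from it (the image tag is illustrative). The RUN instructions execute during docker build; CMD executes when docker run creates the container:
steps:
- script: |
    # RUN instructions in the Dockerfile execute here, while the image is built
    docker build -t sample/hello:latest .
    # CMD executes here, when a container is created from the image
    docker run --rm sample/hello:latest
  displayName: Build the image and run a container (illustrative)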
For more information, you can see:
Dockerfile reference81
81 https://docs.docker.com/engine/reference/builder/
At first, it simply looks like several dockerfiles stitched together. Multi-stage Dockerfiles can be layered or
inherited.
When you look closer, there are a couple of key things to realize.
Notice the third stage.
FROM build AS publish
build isn't an image pulled from a registry. It's the image we defined in stage 2, where we named the result of our build (SDK) stage “build.” Docker build will create a named image we can later reference.
We can also copy the output from one image to another. That's the real power here: compiling our code with one base SDK image (mcr.microsoft.com/dotnet/core/sdk:3.1) while creating a production image based on an optimized runtime image (mcr.microsoft.com/dotnet/core/aspnet:3.1). Notice the line.
COPY --from=publish /app/publish .
It takes the /app/publish directory from the published image and copies it to the working directory of the
production image.
Breakdown of stages
The first stage provides the base of our optimized runtime image. Notice it derives from mcr.micro-
soft.com/dotnet/core/aspnet:3.1.
Here we would specify extra production configurations, such as registry settings or the installation (via MSIExec) of other components. These are the environment configurations you would otherwise hand off to your ops folks to prepare a VM.
The second stage is our build environment: mcr.microsoft.com/dotnet/core/sdk:3.1. It includes everything we need to compile our code. From here, we have compiled binaries we can publish or test; more on testing in a moment.
The third stage derives from our build stage. It takes the compiled output and “publishes” it, in .NET terms.
Publish means taking all the output required to deploy your “app/publish/service/component” and
placing it in a single directory. It would include your compiled binaries, graphics (images), JavaScript, and
so on.
The fourth stage takes the published output and places it in the optimized image we defined in the first
stage.
82 https://docs.docker.com/develop/develop-images/multistage-build/
83 https://azure.microsoft.com/services/container-instances/
84 https://azure.microsoft.com/services/kubernetes-service/
85 https://azure.microsoft.com/services/container-registry/
All container deployments, including DC/OS, Docker Swarm, and Kubernetes, are supported. The registry
is integrated with other Azure services such as the App Service, Batch, Service Fabric, and others.
Importantly, it allows your DevOps team to manage the configuration of apps without being tied to the
configuration of the target-hosting environment.
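As a hedged sketch, a pipeline step that builds an image and pushes it to Azure Container Registry might use the Docker@2 task (the service connection and repository names are illustrative):
steps:
- task: Docker@2
  displayName: Build and push an image to Azure Container Registry
  inputs:
    containerRegistry: 'MyAcrServiceConnection'   # illustrative service connection name
    repository: 'sample/webapp'                   # illustrative repository name
    command: buildAndPush
    Dockerfile: '**/Dockerfile'
    tags: |
      $(Build.BuildId)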
Azure Container Apps86
Azure Container Apps allows you to build and deploy modern apps and microservices using serverless
containers. It deploys containerized apps without managing complex infrastructure.
You can write code using your preferred programming language or framework and build microservices
with full support for Distributed Application Runtime (Dapr)87. Scale dynamically based on HTTP traffic
or events powered by Kubernetes Event-Driven Autoscaling (KEDA)88.
Azure App Service89
Azure Web Apps provides a managed service for both Windows and Linux-based web applications and
provides the ability to deploy and run containerized applications for both platforms. It provides autoscal-
ing and load balancing options and is easy to integrate with Azure DevOps.
Summary
This module helped you plan a container build strategy, explained containers and their structure, intro-
duced Docker, microservices, Azure Container Registry, and related services.
You learned how to describe the benefits and usage of:
●● Design a container strategy.
●● Work with Docker Containers.
●● Create an Azure Container Registry.
●● Explain Docker microservices and containers.
Learn more
●● Container Jobs in Azure Pipelines and TFS - Azure Pipelines | Microsoft Docs90.
●● Build an image - Azure Pipelines | Microsoft Docs91.
●● Service Containers - Azure Pipelines & TFS | Microsoft Docs92.
●● Quickstart - Create registry in portal - Azure Container Registry | Microsoft Docs93.
●● What are Microservices? - Azure DevOps | Microsoft Docs94.
●● CI/CD for microservices - Azure Architecture Center | Microsoft Docs95.
86 https://azure.microsoft.com/services/container-apps/
87 https://dapr.io/
88 https://keda.sh/
89 https://azure.microsoft.com/services/app-service/
90 https://docs.microsoft.com/azure/devops/pipelines/process/container-phases
91 https://docs.microsoft.com/azure/devops/pipelines/ecosystems/containers/build-image
92 https://docs.microsoft.com/azure/devops/pipelines/process/service-containers
93 https://docs.microsoft.com/azure/container-registry/container-registry-get-started-portal
94 https://docs.microsoft.com/devops/deliver/what-are-microservices
95 https://docs.microsoft.com/azure/architecture/microservices/ci-cd
Labs
Lab 04: Configuring agent pools and under-
standing pipeline styles
Lab overview
YAML-based pipelines allow you to fully implement CI/CD as code, in which pipeline definitions reside in
the same repository as the code that is part of your Azure DevOps project. YAML-based pipelines support
a wide range of features that are part of the classic pipelines, such as pull requests, code reviews, history,
branching, and templates.
Regardless of the choice of the pipeline style, to build your code or deploy your solution by using Azure
Pipelines, you need an agent. An agent provides the compute resources that run one job at a time. Jobs can be
run directly on the host machine of the agent or in a container. You have an option to run your jobs using
Microsoft-hosted agents, which are managed for you, or implementing a self-hosted agent that you set
up and manage on your own.
In this lab, you will step through the process of converting a classic pipeline into a YAML-based one and
running it first by using a Microsoft-hosted agent and then performing the equivalent task by using a
self-hosted agent.
Objectives
After you complete this lab, you will be able to:
●● implement YAML-based pipelines
●● implement self-hosted agents
Lab duration
●● Estimated time: 90 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions96
96 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Objectives
After you complete this lab, you will be able to:
●● Build a custom Docker image by using a Microsoft-hosted Linux agent
●● Push an image to Azure Container Registry
●● Deploy a Docker image as a container to Azure App Service by using Azure DevOps
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions97
Objectives
After you complete this lab, you will be able to:
●● Create a basic build pipeline from a template
●● Track and review a build
●● Invoke a continuous integration build
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions98
97 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
98 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Objectives
After you complete this lab, you will be able to:
●● Implement a GitHub Action workflow by using DevOps Starter
●● Explain the basic characteristics of GitHub Action workflows
Lab duration
●● Estimated time: 30 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions99
99 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Objectives
After you complete this lab, you will be able to:
●● Install Azure Pipelines from the GitHub Marketplace
●● Integrate a GitHub project with an Azure DevOps pipeline
●● Track pull requests through the pipeline
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions100
100 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which of the following choices isn't a characteristic of a pipeline?
Reliable.
Repeatable.
Immutable.
Multiple choice
Which of the following is a correct statement about Azure Pipelines?
Azure Pipelines works exclusively with Azure and Microsoft Technology. For open-source, GitHub is
recommended.
Azure Pipelines work with any language or platform.
Azure Pipelines only deploys in cloud environments.
Multiple choice
Which of the following choices is a Continuous Integration (CI) benefit?
Automatically ensure you don't ship broken code.
Automatically deploy code to production.
Ensure deployment targets have the latest code.
Multiple choice
Which of the following commands do you use to retrieve an image from a container registry?
Docker run.
Docker pull.
Docker build.
Multiple choice
Which of the following choices isn't a container-related Azure service?
Azure App Service.
Azure Container Instances.
Azure virtual machine scale sets.
Multiple choice
Which of the following choices creates an instance of the image?
Docker run.
Docker build.
Docker pull.
Multiple choice
Which of the following choices are the two types of agents being used in pipelines?
Client-Hosted and Server-Hosted.
Self-Hosted and Microsoft-Hosted.
Windows-Hosted and Linux-Hosted.
Multiple choice
Which of the following is a correct statement about Agent Pools?
Agent pools are scoped to the entire project and can be shared across pipelines.
Agent pools are scoped to the entire repository and can be shared across pipelines.
Agent pools are scoped to the entire organization and can be shared across projects.
Multiple choice
Which of the following choices is the role that can manage membership for all roles of the project agent
pool?
Administrator.
User.
Contributor.
Multiple choice
Which of the following choices isn't used to create and configure an Azure Pipeline?
XML.
YAML.
Visual Designer.
Multiple choice
Which of the following choices is a benefit of using the Visual Designer?
Every branch you use can modify the build policy by modifying the azure-pipelines.yml file.
The visual designer is in the same hub as the build results.
A change to the build process might cause a break or result in an unexpected outcome. Because the
change is in version control with the rest of your codebase, you can more easily identify the issue.
Multiple choice
Which of the following choices is a benefit of using YAML?
The pipeline is versioned with your code and follows the same branching structure.
The YAML representation of the pipelines makes it easier to get started.
The YAML is in the same hub as the build results.
Multiple choice
Which of the following choices isn't a pillar of Continuous Integration?
Version Control System.
Automated Build Process.
Automated Deploy Process.
Multiple choice
Which of the following choices describes the status of a build pipeline that will queue new build requests and not start them?
Disabled.
Paused.
On Hold.
Multiple choice
Which of the following choices is where you change the build number format?
Build properties.
Library.
Project Settings.
Multiple choice
Which of the following choices could be a user capability?
Agent.ComputerName.
Agent.OS.
ContosoApplication.Path.
Multiple choice
Which of the following choices is the scope of where parallel jobs are defined?
Project scope.
Organization scope.
Pipeline scope.
Multiple choice
Which of the following repository types does a YAML pipeline support?
Azure Repos TFVC.
Azure Repos Git.
Other Git (generic).
Multiple choice
Which of the following choices is the YAML property responsible for creating a dependency between jobs?
steps.
dependsOn.
condition.
Multiple choice
Which of the following choices is responsible for always starting the communication between Azure Pipe-
lines and its agent?
Service Hook.
Azure Pipelines.
Agent.
Multiple choice
Which of the following choices describes a reason for installing an agent using interactive mode?
To run UI Tests.
To run Java pipelines.
To communicate with cloud environments from on-premises agents (self-hosted).
Multiple choice
Which of the following choices is a keyword to make one job wait for another job to complete?
on.
needs.
uses.
Multiple choice
Which of the following choices isn't a level at which self-hosted runners can be added within an enterprise?
Enterprise-level (multiple organizations across an enterprise).
Organizational-level (multiple repositories in an organization).
Project-level.
Multiple choice
Which of the following choices is the location where you can find workflows?
.github/workflows.
.github/build/workflows.
.github/actions/workflows.
Multiple choice
Which of the following choices is where the database passwords that are needed in a CI pipeline should be
stored?
Repo.
action.yml.
Encrypted Secrets.
Multiple choice
Which of the following files is the metadata for an action held?
workflow.yml.
action.yml.
meta.yml.
Multiple choice
Which of the following choices is how the status of a workflow can be shown in a repository?
Using Badges.
Status Files.
Conversation Tab.
Answers
Multiple choice
Which of the following choices isn't a characteristic of a pipeline?
Reliable.
Repeatable.
■■ Immutable.
Explanation
The core pipeline idea is to create a repeatable, reliable, and incrementally improving process.
Multiple choice
Which of the following is a correct statement about Azure Pipelines?
Azure Pipelines works exclusively with Azure and Microsoft Technology. For open-source, GitHub is
recommended.
■■ Azure Pipelines work with any language or platform.
Azure Pipelines only deploys in cloud environments.
Explanation
Azure Pipelines work with any language or platform.
Multiple choice
Which of the following choices is a Continuous Integration (CI) benefit?
■■ Automatically ensure you don't ship broken code.
Automatically deploy code to production.
Ensure deployment targets have the latest code.
Explanation
Automatically ensure you don't ship broken code.
Multiple choice
Which of the following commands do you use to retrieve an image from a container registry?
Docker run.
■■ Docker pull.
Docker build.
Explanation
It's docker pull. You retrieve the image, likely from a container registry.
Multiple choice
Which of the following choices isn't a container-related Azure service?
Azure App Service.
Azure Container Instances.
■■ Azure virtual machine scale sets.
Explanation
Azure provides a wide range of services that help you to work with containers such as Azure App Service
and Azure Container Instances.
Multiple choice
Which of the following choices creates an instance of the image?
■■ Docker run.
Docker build.
Docker pull.
Explanation
It's docker run. You execute the container. An instance is created of the image.
Multiple choice
Which of the following choices are the two types of agents being used in pipelines?
Client-Hosted and Server-Hosted.
■■ Self-Hosted and Microsoft-Hosted.
Windows-Hosted and Linux-Hosted.
Explanation
Self-hosted and Microsoft-hosted.
Multiple choice
Which of the following is a correct statement about Agent Pools?
Agent pools are scoped to the entire project and can be shared across pipelines.
Agent pools are scoped to the entire repository and can be shared across pipelines.
■■ Agent pools are scoped to the entire organization and can be shared across projects.
Explanation
Agent pools are scoped to the entire organization and can be shared across projects.
Multiple choice
Which of the following choices is the role that can manage membership for all roles of the project agent
pool?
■■ Administrator.
User.
Contributor.
Explanation
It's Administrator.
Multiple choice
Which of the following choices isn't used to create and configure an Azure Pipeline?
■■ XML.
YAML.
Visual Designer.
Explanation
Azure Pipelines support YAML and Visual Designer.
Multiple choice
Which of the following choices is a benefit of using the Visual Designer?
Every branch you use can modify the build policy by modifying the azure-pipelines.yml file.
■■ The visual designer is in the same hub as the build results.
A change to the build process might cause a break or result in an unexpected outcome. Because the
change is in version control with the rest of your codebase, you can more easily identify the issue.
Explanation
The visual designer is in the same hub as the build results. This location makes it easier to switch back and
forth and make changes.
Multiple choice
Which of the following choices is a benefit of using YAML?
■■ The pipeline is versioned with your code and follows the same branching structure.
The YAML representation of the pipelines makes it easier to get started.
The YAML is in the same hub as the build results.
Explanation
The pipeline is versioned with your code and follows the same branching structure. You get validation of
your changes through code reviews in pull requests and branch build policies.
Multiple choice
Which of the following choices isn't a pillar of Continuous Integration?
Version Control System.
Automated Build Process.
■■ Automated Deploy Process.
Explanation
The four pillars of CI are a Version Control System, a Package Management System, a Continuous Integration System, and an Automated Build Process.
Multiple choice
Which of the following choices describes the status of a build pipeline that will queue new build requests and not start them?
Disabled.
■■ Paused.
On Hold.
Explanation
A build pipeline that is Paused will queue new build requests and not start them.
Multiple choice
Which of the following choices is where you change the build number format?
■■ Build properties.
Library.
Project Settings.
Explanation
It's in the build properties.
Multiple choice
Which of the following choices could be a user capability?
Agent.ComputerName.
Agent.OS.
■■ ContosoApplication.Path.
Explanation
Capabilities such as Agent.ComputerName and Agent.OS that are automatically discovered are referred to as system capabilities. The ones that you define, such as ContosoApplication.Path, are called user capabilities.
Multiple choice
Which of the following choices is the scope of where parallel jobs are defined?
Project scope.
■■ Organization scope.
Pipeline scope.
Explanation
At the organization level, you can configure the number of parallel jobs that are made available.
Multiple choice
Which of the following repository types does a YAML pipeline support?
Azure Repos TFVC.
■■ Azure Repos Git.
Other Git (generic).
Explanation
The supported source control types in YAML are Azure Repos Git, BitBucket Cloud, GitHub, GitHub Enter-
prise Server.
Multiple choice
Which of the following choices is the YAML property responsible for creating a dependency between
jobs?
steps.
■■ dependsOn.
condition.
Explanation
You can define dependencies between jobs using dependsOn.
Multiple choice
Which of the following choices is responsible for always starting the communication between Azure
Pipelines and its agent?
Service Hook.
Azure Pipelines.
■■ Agent.
Explanation
The Agent always initiates the communication with Azure Pipelines.
Multiple choice
Which of the following choices describes a reason for installing an agent using interactive mode?
■■ To run UI Tests.
To run Java pipelines.
To communicate with cloud environments from on-premises agents (self-hosted).
Explanation
In some cases, you might need to run the agent interactively for production use - such as to run UI tests.
Multiple choice
Which of the following choices is a keyword to make one job wait for another job to complete?
on.
■■ needs.
uses.
Explanation
Jobs run in parallel by default, but you can define dependencies between the jobs using needs.
Multiple choice
Which of the following choices isn't a level at which self-hosted runners can be added within an enterprise?
Enterprise-level (multiple organizations across an enterprise).
Organizational-level (multiple repositories in an organization).
■■ Project-level.
Explanation
Self-hosted runners can be added at different levels within an enterprise: Organizational-level (multiple
repositories in an organization), Enterprise-level (multiple organizations across an enterprise), Reposito-
ry-level (single repository).
Multiple choice
Which of the following choices is the location where you can find workflows?
■■ .github/workflows.
.github/build/workflows.
.github/actions/workflows.
Explanation
Workflows are written in YAML and live within a GitHub repository, at the location .github/workflows.
Multiple choice
Which of the following choices is where the database passwords that are needed in a CI pipeline should
be stored?
Repo.
action.yml.
■■ Encrypted Secrets.
Explanation
Secrets are similar to environment variables but encrypted.
Multiple choice
Which of the following files is the metadata for an action held?
workflow.yml.
■■ action.yml.
meta.yml.
Explanation
Add detailed action.yml metadata. At the root of your action, you will have an action.yml file. Make sure it has been populated with the author, icon, any expected inputs, and outputs.
Multiple choice
Which of the following choices is how the status of a workflow can be shown in a repository?
■■ Using Badges.
Status Files.
Conversation Tab.
Explanation
Badges can be used to show the status of a workflow within a repository.
Module 4 Design and implement a release
strategy
Learning objectives
After completing this module, students and professionals can:
●● Explain continuous delivery (CD).
●● Implement continuous delivery in your development cycle.
●● Understand releases and deployment.
●● Identify project opportunities to apply CD.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
In the past and today, the IT(-Pro) department is responsible for the stability of the systems, while the
development department is responsible for creating new value.
This split brings many companies into a problematic situation. Development departments are motivated
to deliver value as soon as possible to keep their customers happy.
On the other hand, IT is motivated to change nothing because change is a risk, and they're responsible
for eliminating the risks and keeping everything stable. And what do we get out of it? Long release cycles.
Silo-based development
Long release cycles, numerous tests, code freezes, night and weekend work, and many people ensure that
everything works.
But the more we change, the more risk we introduce, and we're back at the beginning, often resulting in yet another document or process that must be followed.
It's what I call silo-based development.
If we look at this picture of a traditional, silo-based value stream, we see Bugs and Unplanned work,
necessary updates or support work, and planned (value-adding) work, all added to the teams' backlog.
Everything is planned, and the first “gate” can be opened. Everything drops to the next phase. All the
work, and so all the value, moves in piles to the next stage.
It moves from the Plan phase to a Realize phase where all the work is developed, tested, and documented, and from here, it moves to the release phase.
All the value is released at the same time. As a result, the release takes a long time.
We need to move towards a situation where the value isn't piled up and released all at once but flows
through a pipeline.
Just like in the picture, a piece of work is a marble. And only one part of the work can flow through the
pipeline at once.
So, work must be prioritized in the right way. As you can see, the pipeline has green and red outlets.
These are the feedback loops or quality gates that we want to have in place. A feedback loop can be
different things:
●● A unit test to validate the code.
●● An automated build to validate the sources.
●● An automated test on a Test environment.
●● Some monitor on a server.
●● Usage instrumentation in the code.
If one of the feedback loops is red, the marble can't pass the outlet, and it will end up in the Monitor and
Learn tray.
It's where the learning happens. The problem is analyzed and solved so that the next time a marble
passes the outlet, it's green.
Every single piece of work flows through the pipeline until it ends up in the tray of value.
The more that is automated, the faster value flows through the pipeline.
Companies want to move toward Continuous Delivery.
●● They see the value.
It includes a snapshot of all the information required to carry out all the tasks and actions in the release
pipeline, such as:
●● The stages or environments.
●● The tasks for each one.
●● The values of task parameters and variables.
●● The release policies such as triggers, approvers, and release queuing options.
There can be multiple releases from one release pipeline (or release process).
Deployment is the action of running the tasks for one stage, which results in a tested and deployed
application and other activities specified for that stage.
Starting a release starts each deployment based on the settings and policies defined in the original
release pipeline.
There can be multiple deployments of each release, even for one stage.
When a release deployment fails for a stage, you can redeploy the same release to that stage.
See also Releases in Azure Pipelines1.
1 https://docs.microsoft.com/azure/devops/pipelines/release/releases
2 https://docs.microsoft.com/azure/devops/articles/phase-features-with-feature-flags
●● The Organization
●● Application Architecture
●● Skills
●● Tooling
●● Tests
●● Other things?
Summary
This module introduces continuous delivery concepts and their implementation in a traditional IT development cycle.
You learned how to describe the benefits and usage of:
●● Explain continuous delivery (CD).
●● Implement continuous delivery in your development cycle.
●● Understand releases and deployment.
●● Identify project opportunities to apply CD.
Learn more
●● What is Continuous Delivery? - Azure DevOps | Microsoft Docs3.
●● Continuous Delivery vs. Continuous Deployment | Microsoft Azure4.
●● Release pipelines - Azure Pipelines | Microsoft Docs5.
●● Define a Classic release pipeline - Azure Pipelines | Microsoft Docs6.
3 https://docs.microsoft.com/devops/deliver/what-is-continuous-delivery
4 https://azure.microsoft.com/overview/continuous-delivery-vs-continuous-deployment
5 https://docs.microsoft.com/azure/devops/pipelines/release/?view=azure-devops
6 https://docs.microsoft.com/azure/devops/pipelines/release/define-multistage-release-process
Learning objectives
After completing this module, students and professionals can:
●● Explain the terminology used in Azure DevOps and other Release Management Tooling.
●● Describe what a Build and Release task is, what it can do, and some available deployment tasks.
●● Implement release jobs.
●● Differentiate between multi-agent and multi-configuration release jobs.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
In more mature organizations, this manual approval process can be replaced by an automatic process
that checks the quality before the components move on to the next stage.
Finally, we have the tasks within the various stages. The tasks are the steps that need to be executed to
install, configure, and validate the installed artifact.
In this part of the module, we'll walk through all the components of the release pipeline in detail and talk
about what to consider for each element.
The components that make up the release pipeline or process are used to create a release. There's a
difference between a release and the release pipeline or process. The release pipeline is the blueprint
through which releases are done. We'll cover more of it when discussing the quality of releases and
release processes.
See also Release pipelines7.
7 https://docs.microsoft.com/azure/devops/pipelines/release
8 https://docs.microsoft.com/azure/devops/artifacts/artifacts-key-concepts
The most common way to get an artifact within the release pipeline is to use a build artifact.
The build pipeline compiles, tests, and eventually produces an immutable package stored in a secure
place (storage account, database, and so on).
The release pipeline then uses a secure connection to this secured place to get the build artifact and do
extra actions to deploy it to an environment.
The significant advantage of using a build artifact is that the build produces a versioned artifact.
The artifact is linked to the build and gives us automatic traceability. We can always find the sources that
made this artifact. Another possible artifact source is version control.
We can directly link our version control to our release pipeline.
The release is related to a specific commit in our version control system. With that, we can also see which
version of a file or script is eventually installed. In this case, the version doesn't come from the build but
from version control.
Consideration for choosing a version control artifact instead of a build artifact can be that you only want
to deploy one specific file. If you don't need to run more actions before using this file in your release
pipeline, creating a versioned package (build artifact) containing only one file doesn't make sense.
Helper scripts that do actions to support the release process (clean up, rename, string actions) are
typically good candidates to get from version control.
Another possibility of an artifact source can be a network share containing a set of files. However, you
should be aware of the possible risk. The risk is that you aren't 100% sure that the package you're going
to deploy is the same package that was put on the network share. If other people can also access the
network share, the package might be compromised. Therefore, this option won't be sufficient to prove
integrity in a regulated environment (banks, insurance companies).
Finally, container registries are a rising star regarding artifact sources. Container registries are versioned repositories where container artifacts are stored. Pushing a versioned container to the container registry and consuming that same version within the release pipeline has more or less the same advantages as using a build artifact stored in a safe location.
9 https://semver.org
10 https://docs.microsoft.com/azure/devops/pipelines/release/artifacts
Steps
Let's look at how to work with one or more artifact sources in the release pipeline.
1. In the Azure DevOps environment, open the Parts Unlimited project, then from the main menu,
select Pipelines, then select Releases.
3. In the Select a template pane, see the available templates, but then select the Empty job option at
the top. It's because we're going to focus on selecting an artifact source.
4. In the Artifacts section, select +Add an artifact.
5. See the available options in the Add an artifact pane, and select the option to see more artifact
types, so that you can see all the available artifact types:
While we're in this section, let's briefly look at the available options.
6. Select Build and see the parameters required. This option is used to retrieve artifacts from an Azure
DevOps Build pipeline. Using it requires a project name and a build pipeline name. (Projects can have
multiple build pipelines). It's the option that we will use shortly.
7. Select Azure Repository and see the parameters required. It requires a project name and asks you to
select the source repository.
8. Select GitHub and see the parameters required. The Service is a connection to the GitHub repository.
It can be authorized by either OAuth or by using a GitHub personal access token. You also need to
select the source repository.
9. Select TFVC and see the parameters required. It also requires a project name and asks you to select
the source repository.
Note: A release pipeline can have more than one set of artifacts as input. A typical example is when you also need to consume a package from a feed and your project source.
10. Select Azure Artifacts and see the parameters required. It requires you to identify the feed, package
type, and package.
11. Select GitHub Release and see the parameters required. It requires a service connection and the
source repository.
13. Select Docker Hub and see the parameters required. This option would be helpful if your containers
are stored in Docker Hub rather than in an Azure Container Registry. After choosing a secure service
connection, you need to select the namespace and the repository.
14. Finally, select Jenkins and see the parameters required. You don't need to get all your artifacts from Azure. You can retrieve them from a Jenkins build. So, if you have a Jenkins Server in your infrastructure, you can use the build artifacts from there directly in your Azure Pipelines.
We've now added the artifacts that we'll need for later walkthroughs.
16. To save the work, select Save, then in the Save dialog box, select OK.
●● If a single team uses it, you can deploy it frequently. Otherwise, it would be best if you were a bit
more careful.
●● Who are the users? Do they want a new version multiple times a day?
●● How long does it take to deploy?
●● Is there downtime? What happens to performance? Are users affected?
Steps
Let's now look at the other section in the release pipeline that we've created: Stages.
1. Click on Stage 1, and in the Stage properties pane, set Stage name to Development and close the
pane.
Note: Stages can be based on templates. For example, you might be deploying a web application using Node.js or Python. For this walkthrough, that won't matter because we're focusing on defining a strategy.
2. To add a second stage, click +Add in the Stages section and note the available options. You have a
choice to create a new stage or to clone an existing stage. Cloning a stage can help minimize the
number of parameters that need to be configured. But for now, click New stage.
3. When the Select a template pane appears, scroll down to see the available templates. For now, we
don't need any of these, so click Empty job at the top, then in the Stage properties pane, set Stage
name to Test, then close the pane.
4. Hover over the Test stage and notice that two icons appear below. These are the same options that
were available in the menu drop-down that we used before. Click the Clone icon to clone the stage to
a new stage.
5. Click on the Copy of Test stage, and in the stage properties pane, set Stage name to Production and
close the pane.
We've now defined a traditional deployment strategy. Each stage contains a set of tasks, and we'll look at
those tasks later in the course.
Note: The same artifact sources move through each of the stages.
The lightning bolt icon on each stage shows that we can set a trigger as a pre-deployment condition. The
person icon on both ends of a stage shows that we can have pre and post-deployment approvers.
Concurrent stages
You'll notice that now, we have all the stages one after each other in a sequence. It's also possible to have
concurrent stages. Let's see an example.
6. Click the Test stage, and on the stage properties pane, set Stage name to Test Team A and close the
pane.
7. Hover over the Test Team A stage and click the Clone icon that appears to create a new cloned stage.
8. Click the Copy of Test Team A stage, and on the stage properties pane, set Stage name to Test
Team B and close the pane.
9. Click the Pre-deployment conditions icon (that is, the lightning bolt) on Test Team B to open the
pre-deployment settings.
10. In the Pre-deployment conditions pane, the stage can be triggered in three different ways:
The stage can immediately follow Release. (That is how the Development stage is currently configured.) It can require manual triggering. Or, more commonly, it can follow another stage. Currently, it's following Test Team A, but that's not what we want.
11. From the Stages drop-down list, choose Development and uncheck Test Team A, then close the
pane.
We now have two concurrent Test stages.
In the current configuration, we're using them for different environments. But it's not always the case.
Here's a deployment strategy based upon regions instead:
Azure Pipelines are configurable and support a wide variety of deployment strategies. The name Stages
is a better fit than Environment even though the stages can be used for environments.
For now, let's give the pipeline a better name and save the work.
12. At the top of the screen, hover over the New release pipeline name and when a pencil appears, click
it to edit the name. Type Release to all environments as the name and hit enter or click elsewhere on
the screen.
13. For now, save the environment-based release pipeline you've created by clicking Save. Then, click OK
in the Save dialog box.
Add steps to specify what you want to build, the tests you want to run, and all the other steps needed to complete the build process.
There are steps for building, testing, running utilities, packaging, and deploying.
If a task isn't available, you can find numerous community tasks in the marketplace.
Jenkins, Azure DevOps, and Atlassian have an extensive marketplace where other tasks can be found.
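For example, a YAML build pipeline might combine such steps as sketched below (the task versions, project paths, and artifact name are assumptions, not specific to this course's project):

    steps:
    - task: DotNetCoreCLI@2              # build step
      inputs:
        command: 'build'
        projects: '**/*.csproj'
    - task: DotNetCoreCLI@2              # test step
      inputs:
        command: 'test'
        projects: '**/*Tests.csproj'
    - task: PublishBuildArtifacts@1      # packaging step: publish the output as a build artifact
      inputs:
        pathToPublish: '$(Build.ArtifactStagingDirectory)'
        artifactName: 'drop'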
Links
For more information, see also:
●● Task types & usage11
●● Tasks for Azure12
●● Atlassian marketplace13
●● Jenkins Plugins14
11 https://docs.microsoft.com/azure/devops/pipelines/process/tasks
12 https://github.com/microsoft/azure-pipelines-tasks
13 https://marketplace.atlassian.com/addons/app/bamboo/trending
14 https://plugins.jenkins.io/
15 https://docs.microsoft.com/azure/devops/extend/develop/add-build-task
For example, an iOS app needs to be built and distributed from a Mac, while an Angular app hosted on Linux is best deployed from a Linux machine.
The backend might be deployed from a Windows machine.
Because you want all three deployments to be part of one pipeline, you can define multiple Release Jobs,
which target the different agents, servers, or deployment groups.
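In YAML pipelines, that idea can be sketched with one job per platform (the pool images and scripts below are examples; the classic release editor configures the same thing through agent jobs in the UI):

    jobs:
    - job: BuildIOSApp
      pool:
        vmImage: 'macOS-latest'       # iOS app is built and distributed from a Mac
      steps:
      - script: echo "Build and sign the iOS app"
    - job: DeployAngularApp
      pool:
        vmImage: 'ubuntu-latest'      # Angular app hosted on Linux is deployed from a Linux agent
      steps:
      - script: echo "Deploy the Angular app"
    - job: DeployBackend
      pool:
        vmImage: 'windows-latest'     # backend is deployed from a Windows agent
      steps:
      - script: echo "Deploy the backend"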
By default, jobs run on the host machine where the agent is installed.
It's convenient and typically well suited for projects just beginning to adopt continuous integration (CI).
Over time, you may find that you want more control over the stage where your tasks run.
Summary
This module described Azure Pipelines capabilities, build and release tasks, and multi-configuration and
multi-agent differences.
You learned how to describe the benefits and usage of:
●● Explain the terminology used in Azure DevOps and other Release Management Tooling.
●● Describe what a Build and Release task is, what it can do, and some available deployment tasks.
●● Implement release jobs.
●● Differentiate between multi-agent and multi-configuration release jobs.
16 https://docs.microsoft.com/azure/devops/pipelines/process/phases
Learn more
●● Release pipelines - Azure Pipelines | Microsoft Docs17.
●● Build and Release Tasks - Azure Pipelines | Microsoft Docs18.
●● Jobs in Azure Pipelines and TFS - Azure Pipelines | Microsoft Docs19.
●● Configure and pay for parallel jobs - Azure DevOps | Microsoft Docs20.
17 https://docs.microsoft.com/azure/devops/pipelines/release
18 https://docs.microsoft.com/azure/devops/pipelines/process/tasks
19 https://docs.microsoft.com/azure/devops/pipelines/process/phases
20 https://docs.microsoft.com/azure/devops/pipelines/licensing/concurrent-jobs
Learning objectives
After completing this module, students and professionals can:
●● Explain things to consider when designing your release strategy.
●● Define the components of a release pipeline and use artifact sources.
●● Create a release approval plan.
●● Implement release gates.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● For some exercises, you need to create an Azure DevOps Organization and a Team Project. If you
don't have it, see Create an organization - Azure DevOps21.
●● If you already have your organization created, use the Azure DevOps Demo Generator22 and create a
new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel free to create a
blank project. See Create a project - Azure DevOps23
●● Complete the previous module's walkthroughs.
21 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
22 https://azuredevopsdemogenerator.azurewebsites.net/
23 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
Scheduled triggers
This speaks for itself: it allows you to start a new release on a time-based schedule—for example, every night at 3:00 AM or at 12:00 PM. You can have one or multiple schedules per day, but the release will always run at these specific times.
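In a YAML pipeline, a comparable schedule could look like the sketch below (the classic release editor configures this through the Scheduled release trigger pane instead; the branch name is an assumption):

    schedules:
    - cron: "0 3 * * *"            # every day at 3:00 AM UTC
      displayName: Nightly release
      branches:
        include:
        - main
      always: false                # only run when there are changes since the last scheduled run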
Manual trigger
With a manual trigger, a person or system triggers the release based on a specific event. When it's a
person, it probably uses some UI to start a new release. When it's an automated process, some events will
likely occur. You can trigger the release from another system using the automation engine, which is
usually part of the release management tool.
For more information, see also:
●● Release and Stage triggers24
Steps
Let's now look at when our release pipeline is used to create deployments. Mainly, it will involve the use
of triggers.
When we refer to deployment, we refer to each stage. Each stage can have its own set of triggers that
determine when the deployment occurs.
1. Click the lightning bolt on the _Parts Unlimited-ASP.NET-CI artifact.
2. In the Continuous deployment trigger pane, click the Disabled option to enable continuous deployment. It will then say Enabled.
24 https://docs.microsoft.com/azure/devops/pipelines/release/triggers
Once it's selected, every time a build completes, deployment of the release pipeline will start.
Note: You can filter which branches affect it, so for example, you could choose the main branch or a
particular feature branch.
Scheduled deployments
You might not want to have a deployment start every time a build completes.
It might be disruptive to testers downstream if it was happening too often.
Instead, it might make sense to set up a deployment schedule.
3. Click on the Scheduled release trigger icon to open its settings.
4. In the Scheduled release trigger pane, click the Disabled option to enable the scheduled release. It
will then say Enabled, and extra options will appear.
You can see in the screenshot that a deployment using the release pipeline would now occur each
weekday at 3 AM.
It might be convenient when you, for example, share a stage with testers who work during the day.
You don't want to constantly deploy new versions to that stage while they're working. This setting would
create a clean, fresh environment for them at 3 AM each weekday.
Note: The default timezone is UTC. You can change it to suit your local timezone, as it might be easier to
work with when creating schedules.
5. For now, we don't need a scheduled deployment. Click the Enabled button again to disable the
scheduled release trigger and close the pane.
Pre-deployment triggers
6. Click the lightning bolt on the Development stage to open the pre-deployment conditions.
Note: Both artifact filters and schedules can be set at the pre-deployment for each stage rather than just
at the artifact configuration level.
Deployment to any stage doesn't happen automatically unless you have chosen to allow that.
Release approvals don't control how, but whether, you want to deliver multiple times a day.
Manual approvals also suit a significant need. Organizations that start with Continuous Delivery often lack
a certain amount of trust.
They don't dare to release without manual approval. After a while, when they find that the approval doesn't add value and the release always succeeds, the manual approval is often replaced by an automatic check.
Things to consider when you're setting up a release approval are:
●● What do we want to achieve with the approval? Is it an approval that we need for compliance reasons? For example, we need to adhere to the four-eyes principle for our SOX compliance. Or is it an approval that we need to manage our dependencies? Or is it an approval that needs to be in place purely because we need a sign-off from an authority like Security Officers or Product Owners?
●● Who needs to approve? We need to know who needs to approve the release. Is it a product owner,
Security officer, or just someone that isn't the one that wrote the code? It's essential because the
approver is part of the process. They're the ones that can delay the process if not available. So be
aware of it.
●● When do you want to approve? Another essential thing to consider is when to approve. It has a direct relationship with what happens after approval. Can you continue without approval? Or is everything on hold until approval is given? By using scheduled deployments, you can separate approval from deployment.
Although manual approval is a great mechanism to control the release, it isn't always helpful.
On many occasions, the check can be done at an earlier stage.
For example, approving a change that has been made in source control.
Scheduled deployments have already solved the dependency issue.
You don't have to wait for a person in the middle of the night. But there's still a manual action involved.
If you want to eliminate manual activities but still want control, you start talking about automatic approvals or release gates.
●● Release approvals and gates overview25.
Steps
Let's now look at when our release pipeline needs manual approval before deployment of a stage starts
or manual approval that the deployment is completed as expected.
While DevOps is all about automation, manual approvals are still helpful. There are many scenarios where
they're needed. For example, a product owner might want to sign off on a release before it moves to production.
Or the scrum team wants to make sure that no new software is deployed to the test environment before
someone signs off on it because they might need to find an appropriate time if it's constantly in use.
This can help to gain trust in the DevOps processes within the business.
25 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals
Even if the process is later automated, people might still want manual control until they become comfortable with the processes. Explicit manual approvals can be a great way to achieve that.
Let's try one.
1. Click the pre-deployment conditions icon for the Development stage to open the settings.
2. Click the Disabled button in the Pre-deployment approvals section to enable it.
3. In the Approvers list, find your name and select it. Then set the Timeout to 1 Day.
Note: Approvers is a list, not just a single value. If you add more than one person to the list, you can also choose if they need to approve in sequence or if either or both approvals are required.
4. Take note of the approver policy options that are available. It's common not to allow a user who requests a release or deployment to also approve it.
In this case, we're the only approver, so we'll leave that unchecked.
5. Close the Pre-deployment conditions pane and notice that a checkmark has appeared beside the
person in the icon.
8. In the Create a new release pane, see the available options, then click Create.
9. In the upper left of the screen, you can see that a release has been created.
10. At this point, an email should have been received, indicating that approval is required.
At this point, you could click the link in the email, but instead, we'll navigate within Azure DevOps to see
what's needed.
11. Click on the Release 1 Created link (or whatever number it is for you) in the area we looked at in Step
9. We're then taken to a screen that shows the status of the release.
You can see that a release has been manually triggered and that the Development stage is waiting for
approval. As an approver, you can now do that approval.
12. Hover over the Development stage and click the Approve icon that appears.
Note: Options to cancel the deployment or to view the logs are also provided at this point.
13. In the Development approvals window, add a comment and click Approve.
The deployment stage will then continue. Watch as each stage proceeds and succeeds.
When the release starts, it checks the state of the gate by calling an API. If the “gate” is open, we can
continue. Otherwise, we'll stop the release.
By using scripts and APIs, you can create your own release gates instead of manual approval, or at least extend your manual approval.
Other scenarios for automatic approvals include:
●● Incident and issues management. Ensure the required status for work items, incidents, and issues. For
example, ensure that deployment only occurs if no bugs exist.
●● Notify users such as legal approval departments, auditors, or IT managers about a deployment by
integrating with approval collaboration systems such as Microsoft Teams or Slack and waiting for the
approval to complete.
●● Quality validation. Query metrics from tests on the build artifacts such as pass rate or code coverage
and only deploy within required thresholds.
●● Security scan on artifacts. Ensure security scans such as anti-virus checking, code signing, and policy
checking for build artifacts have been completed. A gate might start the scan and wait for it to finish
or check for completion.
●● User experience relative to baseline. Using product telemetry, ensure the user experience hasn't
regressed from the baseline state. The experience level before the deployment could be considered a
baseline.
●● Change management. Wait for change management procedures in a system such as ServiceNow to complete before the deployment occurs.
●● Infrastructure health. Execute monitoring and validate the infrastructure against compliance rules after
deployment or wait for proper resource use and a positive security report.
In short, approvals and gates give you more control over the start and completion of the deployment
pipeline.
They can usually be set up as pre-deployment and post-deployment conditions, including waiting for
users to approve or reject deployments manually and checking with other automated systems until
specific requirements are verified.
Also, you can configure a manual intervention to pause the deployment pipeline and prompt users to
carry out manual tasks, then resume or reject the deployment.
To find out more about Release Approvals and Gates, check these documents.
●● Release approvals and gates overview26.
●● Release Gates27.
26 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals
27 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/gates
When we think about Continuous Delivery, all manual processes are a potential bottleneck.
We need to reconsider the notion of quality gates and see how we can automate these checks as part of
our release pipeline.
By using automatic approval with a release gate, you can automate the approval and validate your
company's policy before moving on.
Many quality gates can be considered.
●● No new blocker issues.
●● Code coverage on new code greater than 80%.
●● No license violations.
●● No vulnerabilities in dependencies.
●● No further technical debt is introduced.
●● Performance isn't affected by the new release.
●● Compliance checks
Steps
Let's now look at when our release pipeline needs to do automated checks for issues like code quality
before continuing with the deployments. That automated approval phase is achieved by using Release
Gates.
First, we need to make sure that Release Gates can execute work item queries.
1. On the Boards > Queries page, click All to see all the queries (not just favorites).
2. Click the ellipsis for Shared Queries and choose Security.
3. Add a user ProjectName Build Service (CompanyName) if they aren't already present, and choose
to Allow for Read permissions.
Now let's look at configuring a release gate.
1. Click the lightning icon on the Development stage to open the pre-deployment conditions settings.
2. In the Pre-deployment conditions pane, click the Disabled button beside Gates to enable them.
3. Click +Add to see the available types of gates, then click Query work items.
We'll use the Query work items gate to check if any outstanding bugs need to be dealt with. It does this
by running a work item query. This is an example of what is commonly called a Quality Gate.
4. Set Display name to No critical bugs allowed, and from the Query drop-down list, choose Critical
Bugs. Leave the Upper threshold set to zero because we don't want to allow any bugs at all.
5. Click the drop-down beside Evaluation options to see what can be configured. While 15 minutes is a
reasonable value in production, change The time between re-evaluation of gates to 5 minutes for
our testing.
The release gate doesn't just fail or pass a single time. It can keep evaluating the status of the gate. It
might fail the first time, but it might then pass after re-evaluation if the underlying issue has been
corrected.
6. Close the pane and click Save and OK to save the work.
7. Click Create release to start a new release, and in the Create a new release pane, click Create.
9. If it's waiting for approval, click Approve to allow it to continue, and in the Development pane, click
Approve.
After a short while, you should see the release continue and then enter the phase where the gates are processed.
10. In the Development pane, click Gates to see the status of the release gates.
You'll notice that the gate failed the first time it was checked. It will be stuck in the processing gates stage
as there's a critical bug. Let's look at that bug and resolve it.
11. Close the pane and click Save, then OK to save the work.
12. In the main menu, click Boards, then click Queries.
13. In the Queries window, click All to see all the available queries.
You'll see that there's one critical bug that needs to be resolved.
15. In the properties pane for the bug, change the State to Done, then click Save.
There are now no critical bugs that will stop the release.
17. Return to the release by clicking Pipelines, then Releases in the main menu, then clicking the name
of the latest release.
18. When the release gate is checked next time, the release should continue and complete successfully.
Clean up
To avoid excessive wait time in later walkthroughs, we'll disable the release gates.
19. In the main menu, click Pipelines, click Releases, and click Edit to open the release pipeline editor.
20. Click the Pre-deployment conditions icon (that is, the lightning bolt) on the Development stage, and in the Pre-deployment conditions pane, click the switch beside Gates to disable release gates.
21. Click Save, then click OK.
Summary
This module explored the critical release strategy recommendations that organizations must consider
when designing automated deployments.
It explained how to define components of a release pipeline and artifact sources, create approvals, and configure release gates.
You learned how to describe the benefits and usage of:
●● Explain things to consider when designing your release strategy.
●● Define the components of a release pipeline and use artifact sources.
●● Create a release approval plan.
●● Implement release gates.
Learn more
●● How Microsoft plans efficient workloads with DevOps - Azure DevOps | Microsoft Docs28.
●● Release engineering app development - Azure Architecture Center | Microsoft Docs29.
28 https://docs.microsoft.com/devops/plan/how-microsoft-plans-devops
29 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-app-dev
●● How Microsoft develops modern software with DevOps - Azure DevOps | Microsoft Docs30.
●● Control deployments by using approvals - Azure Pipelines | Microsoft Docs31.
●● Control deployments by using gates - Azure Pipelines | Microsoft Docs32.
30 https://docs.microsoft.com/devops/develop/how-microsoft-develops-devops
31 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/approvals
32 https://docs.microsoft.com/azure/devops/pipelines/release/approvals/gates
Learning objectives
After completing this module, students and professionals can:
●● Provision and configure target environment.
●● Deploy to an environment securely using a service connection.
●● Configure functional test automation and run availability tests.
●● Set up test infrastructure.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● For some exercises, you need to create an Azure DevOps Organization and a Team Project. If you
don't have it yet, see: Create an organization - Azure DevOps33.
●● If you already have your organization created, use the Azure DevOps Demo Generator34 and
create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel
free to create a blank project. See Create a project - Azure DevOps35.
33 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
34 https://azuredevopsdemogenerator.azurewebsites.net/
35 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
●● Service Connections.
Let us dive a bit further into these different target environments and connections.
On-premises servers
In most cases, when you deploy to an on-premises server, the hardware and the operating system are
already in place. The server is already there and ready.
In some cases the server is empty, but most of the time it isn't. In this case, the release pipeline can focus only on deploying the application.
You might want to start or stop a virtual machine (Hyper-V or VMware).
The scripts you use to start or stop the on-premises servers should be part of your source control and
delivered to your release pipeline as a build artifact.
Using a task in the release pipeline, you can run the script that starts or stops the servers.
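As a hedged sketch, such a script could be run from a pipeline task like the one below (the artifact alias and script path are made-up examples):

    steps:
    - task: PowerShell@2
      displayName: 'Start on-premises servers'
      inputs:
        # example path to a script delivered as a build artifact
        filePath: '$(System.DefaultWorkingDirectory)/_Drop/scripts/Start-Servers.ps1'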
To take it one step further and configure the server, you should look at technologies like PowerShell Desired State Configuration (DSC).
The product will maintain your server and keep it in a particular state. When the server changes its state,
you can recover the changed configuration to the original configuration.
Integrating a tool like PowerShell DSC into the release pipeline is no different from any other task you
add.
Infrastructure as a service
When you use the cloud as your target environment, things change slightly. Some organizations lift and
shift from their on-premises server to cloud servers.
Then your deployment works the same as an on-premises server. But when you use the cloud to provide
you with Infrastructure as a Service (IaaS), you can use the power of the cloud to start and create servers
when needed.
It's where Infrastructure as Code (IaC) starts playing a significant role.
By creating a script or template, you can make a server or other infrastructure components like a SQL server, a network, or an IP address.
By defining a template or using a command line and saving it in a script file, you can use that file in your
release pipeline tasks to execute it on your target cloud.
The server (or another component) will be created as part of your pipeline. After that, you can run the
steps to deploy the software.
Technologies like Azure Resource Manager are great for creating Infrastructure on demand.
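For instance, a release task can deploy an ARM template that was delivered as an artifact. The sketch below uses the AzureResourceManagerTemplateDeployment task with example values (the service connection, resource group, and file path are assumptions):

    steps:
    - task: AzureResourceManagerTemplateDeployment@3
      inputs:
        deploymentScope: 'Resource Group'
        azureResourceManagerConnection: 'Azure Resource Manager Service Connection'  # example service connection
        subscriptionId: '$(subscriptionId)'
        resourceGroupName: 'rg-release-demo'            # example resource group
        location: 'West Europe'
        templateLocation: 'Linked artifact'
        csmFile: '$(System.DefaultWorkingDirectory)/_Drop/templates/azuredeploy.json'  # example template path
        deploymentMode: 'Incremental'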
Platform as a Service
When you move from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS), you'll get the
Infrastructure from the cloud you're running on.
For example: In Azure, you can choose to create a Web application. The cloud arranges the server, the
hardware, the network, the public IP address, the storage account, and even the webserver.
The user only needs to take care of the web application that will run on this Platform.
You only need to provide the templates that instruct the cloud to create a WebApp. The same goes for Functions as a Service (FaaS) or Serverless technologies.
In Azure, it's called Azure Functions. You only deploy your application, and the cloud takes care of the
rest. However, you must instruct the Platform (the cloud) to create a placeholder where your application
can be hosted.
You can define this template in Azure Resource Manager. You can use the Azure CLI or command-line
tools.
In all cases, the Infrastructure is defined in a script file and lives alongside the application code in source control.
Clusters
Finally, you can deploy your software to a cluster. A cluster is a group of servers that work together to
host high-scale applications.
When you run a cluster as Infrastructure as a Service, you need to create and maintain the cluster. It
means that you need to provide the templates to create a cluster.
You also need to ensure that you roll out updates, bug fixes, and patches to your cluster. It's comparable
with Infrastructure as a Service.
When you use a hosted cluster, you should consider it a Platform as a Service. You instruct the cloud to
create the cluster, and you deploy your software to the cluster.
When you run a container cluster, you can use the container cluster technologies like AKS.
Service connections
When a pipeline needs access to resources, you'll often need to create service connections.
Summary
Whatever the technology you choose to host your application, your Infrastructure's creation or configuration should be part of your release pipeline and source control repository.
Infrastructure as Code is a fundamental part of Continuous Delivery and gives you the freedom to create
servers and environments on-demand.
Links
●● Desired State Configuration Overview36.
●● Azure Functions37.
●● Azure Resource Manager38.
36 https://docs.microsoft.com/en-us/powershell/dsc/overview/dscforengineers?view=dsc-1.1
37 https://azure.microsoft.com/services/functions
38 https://docs.microsoft.com/azure/azure-resource-manager/resource-group-overview
Note: To follow along with this walkthrough, you'll need to have an existing Azure subscription that
contains an existing storage account.
Steps
You can set up a service connection to environments to create a secure and safe connection to the
environment that you want to deploy.
Service connections are also used to get resources from other places in a secure manner.
For example, you might need to get your source code from GitHub. In this case, let's look at configuring a
service connection to Azure.
1. From the main menu in the Parts Unlimited project, click Project settings at the bottom of the
screen.
2. In the Project Settings pane, from the Pipelines section, click Service connections. Click the drop-
down beside +New service connection.
As you can see, there are many types of service connections. You can create a connection to:
●● Apple App Store.
●● Docker Registry.
●● Bitbucket.
●● Azure Service Bus.
In this case, we want to deploy a new Azure resource, so we'll use the Azure Resource Manager
option.
3. Click Azure Resource Manager to add a new service connection.
4. Set the Connection name to Azure Resource Manager Service Connection, click on an Azure
Subscription, then select an existing Resource Group.
Note: You might be prompted to log on to Azure at this point. If so, log on first.
Notice that what we are creating is a Service Principal. We'll be using the Service Principal for
authenticating to Azure. At the top of the window, there's also an option to set up Managed Identity
Authentication instead.
The Service Principal is a service account that only has permissions in the specific subscription and
resource group. It makes it a safe way to connect from the pipeline.
Important: When you create a service connection with Azure, the service principal gets a Contributor role on the subscription or resource group. That role isn't enough for the service principal to upload data to blob storage. You must explicitly add the service principal to the Storage Blob Data Contributor role on the storage account. Otherwise, the release fails with an authorization permission mismatch error.
5. Click OK to create it. It will then be shown in the list.
6. In the main Parts Unlimited menu, click Pipelines, then Releases, Edit to see the release pipeline.
Click the link to View stage tasks.
The current list of tasks is then shown. Because we started with an empty template, there are no tasks
yet. Each stage can execute many tasks.
7. Click the + sign to the right of the Agent job to add a new task. See the available list of task types.
8. In the Search box, enter the word storage and see the list of storage-related tasks. These include
standard tasks and tasks available from the Marketplace.
We'll use the Azure file copy task to copy one of our source files to a storage account container.
9. Hover over the Azure file copy task type and click Add when it appears. The task will be added to the
stage but requires further configuration.
10. Click the File Copy task to see the required settings. Select the latest task version.
11. Set the Display Name to Backup website zip file, click the ellipsis beside Source, locate the file as
follows, and click OK to select it.
We then need to provide details of how to connect to the Azure subscription. The easiest and most secure way to do that is to use our new Service Connection.
12. From the Azure Subscription drop-down list, find and select the Azure Resource Manager Service
Connection that we created.
13. From the Destination Type drop-down list, select Azure Blob. For RM Storage Account, select the storage account, and for Container Name, enter the container's name. Then click Save at the top of the screen, and click OK.
14. To test the task, click Create release, and in the Create a new release pane, click Create.
15. Click the new release to view the details.
16. On the release page, approve the release so that it can continue.
17. Once the Development stage has been completed, you should see the file in the Azure storage
account.
A key advantage of using service connections is that this type of connection is managed in a single
place within the project settings. It doesn't involve connection details spread throughout the pipeline
tasks.
Source: https://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants39
We can draw four quadrants, where each side of the square defines the target of our tests.
●● Business facing - the tests are more functional and often executed by end users of the system or by
specialized testers that know the problem domain well.
●● Supporting the Team - it helps a development team get constant feedback on the product to find bugs fast and deliver a product with quality built in.
●● Technology facing - the tests are rather technical and non-meaningful to business people. They're typically tests written and executed by the developers in a development team.
●● Critique Product - tests that are there to validate the workings of a product on its functional and
non-functional requirements.
39 https://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/
Now we can place different test types we see in the other quadrants. For example, we can put Unit tests,
Component tests, and System or integration tests in the first quadrant.
In quadrant two, we can place functional tests, Story tests, prototypes, and simulations. These tests are
there to support the team in delivering the correct functionality and are business-facing since they're
more functional.
In quadrant three, we can place tests like exploratory, usability, acceptance, etc.
We place performance, load, security, and other non-functional requirements tests in quadrant four.
Looking at these quadrants, specific tests are easy to automate or automated by nature. These tests are
in quadrants 1 and 4. Tests that are automatable but most of the time not automated by nature are the
tests in quadrant 2. Tests that are the hardest to automate are in quadrant 3.
We also see that the tests that can't be automated or are hard to automate are tests that can be executed in an earlier phase rather than after release.
We call this shift-left: moving the testing process towards the development cycle.
We need to automate as many tests as possible and run them early.
A few of the principles we can use are:
●● Tests should be written at the lowest level possible.
●● Write once, run anywhere, including the production system.
●● The product is designed for testability.
●● Test code is product code; only reliable tests survive.
●● Test ownership follows product ownership.
By testing at the lowest level possible, you'll find many tests that don't require infrastructure or applications to be deployed.
We can use the pipeline to execute the tests that need an app or infrastructure. To perform tests within the pipeline, we can run scripts or use specific test tools. On many occasions, these are external tools that you execute from the pipeline, like OWASP ZAP, SpecFlow, or Selenium.
You can use test functionality from a platform like Azure on other occasions. For example, Availability or
Load Tests executed from within the cloud platform.
When you want to write your automated tests, choose the language that resembles the language from
your code.
In most cases, the application developers should also write the test, so it makes sense to use the same
language. For example, write tests for your .NET application in .NET, and write tests for your Angular
application in Angular.
The build and release agent can execute Unit Tests or other low-level tests that don't need a deployed application or infrastructure.
When you need to do tests with a UI or other specialized functionality, you need a Test agent to run the
test and report the results. Installation of the test agent then needs to be done upfront or as part of the
execution of your pipeline.
Understand Shift-left
The goal of shifting left is to move quality upstream by performing tests early in the pipeline. It represents the phrase “fail fast, fail often.” Combining test and process improvements reduces the time it takes for tests to run and the impact of failures later on.
The idea is to ensure that most of the testing is complete before merging a change into the main branch.
Many teams find that their tests take too long to run during the development lifecycle.
As projects scale, the number and nature of tests will grow substantially, taking hours or days to run the complete test suite.
They get pushed further until they're run at the last possible moment, and the benefits intended to be
gained from building those tests aren't realized until long after the code has been committed.
There are several essential principles that DevOps teams should adhere to in implementing any quality
vision.
●● One team at Microsoft runs over 60,000 unit tests in parallel in less than 6 minutes, intending to
get down to less than a minute.
●● Functional tests: Must be independent.
●● Defining a test taxonomy is an essential aspect of DevOps. The developers should understand the
suitable types of tests in different scenarios.
●● L0 tests are a broad class of fast in-memory unit tests. It's a test that depends on code in the
assembly under test and nothing else.
●● L1 tests might require assembly plus SQL or the file system.
●● L2 tests are functional tests run against testable service deployments. It's a functional test category
that requires a service deployment but may have critical service dependencies stubbed out
somehow.
●● L3 tests are a restricted class of integration tests that run against production. They require a
complete product deployment.
Check the case study in shifting left at Microsoft: Shift left to make testing fast and reliable40.
For more information, see:
●● Shift right to test in production41.
40 https://docs.microsoft.com/devops/develop/shift-left-make-testing-fast-reliable
41 https://docs.microsoft.com/devops/deliver/shift-right-test-production
42 https://azure.microsoft.com/blog/creating-a-web-test-alert-programmatically-with-application-insights/
43 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
44 https://docs.microsoft.com/azure/load-testing/overview-what-is-azure-load-testing
45 https://docs.microsoft.com/azure/load-testing/overview-what-is-azure-load-testing
Note: The overview image shows how Azure Load Testing uses Azure Monitor to capture metrics for app
components. Learn more about the supported Azure resource types46.
You can automatically run a load test at the end of each sprint or in a staging environment to validate a
release candidate build.
You can trigger Azure Load Testing from Azure Pipelines or GitHub Actions workflows.
Get started with adding load testing to your Azure Pipelines CI/CD workflow47 or use our Azure Load
Testing GitHub action48.
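A rough sketch of such a pipeline step is shown below; the AzureLoadTest task and its parameter names should be checked against the tutorials linked here, and the service connection and resource names are assumptions:

    steps:
    - task: AzureLoadTest@1
      inputs:
        azureSubscription: 'Azure Resource Manager Service Connection'   # example service connection
        loadTestConfigFile: 'loadtest/loadtest-config.yaml'              # example load test configuration
        resourceGroup: 'rg-loadtest-demo'                                # example resource group
        loadTestResource: 'partsunlimited-loadtest'                      # example Azure Load Testing resource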
For more information about the Azure Load Testing preview, see:
●● What is Azure Load Testing?49
●● Tutorial: Use a load test to identify performance bottlenecks50.
●● Tutorial: Set up automated load testing51.
●● Learn about the key concepts for Azure Load Testing52.
●● Quickstart: Create and run a load test with Azure Load Testing53.
●● Tutorial: Identify performance regressions with Azure Load Testing and GitHub Actions - Azure
Load Testing54.
●● Configure Azure Load Testing for high-scale load tests - Azure Load Testing55.
Summary
This module detailed target environment provisioning, service connections creation process, and test
infrastructure setup. You learned how to configure functional test automation and run availability tests.
You learned how to describe the benefits and usage of:
●● Provision and configure target environment.
●● Deploy to an environment securely using a service connection.
●● Configure functional test automation and run availability tests.
●● Set up test infrastructure.
Learn more
●● Create target environment - Azure Pipelines | Microsoft Docs56.
●● Integrate DevTest Labs environments into Azure Pipelines - Azure DevTest Labs | Microsoft
Docs57.
46 https://docs.microsoft.com/azure/load-testing/resource-supported-azure-resource-types
47 https://docs.microsoft.com/azure/load-testing/tutorial-cicd-azure-pipelines
48 https://docs.microsoft.com/azure/load-testing/tutorial-cicd-github-actions
49 https://docs.microsoft.com/azure/load-testing/overview-what-is-azure-load-testing
50 https://docs.microsoft.com/azure/load-testing/tutorial-identify-bottlenecks-azure-portal
51 https://docs.microsoft.com/azure/load-testing/tutorial-cicd-azure-pipelines
52 https://docs.microsoft.com/azure/load-testing/concept-load-testing-concepts
53 https://docs.microsoft.com/azure/load-testing/quickstart-create-and-run-load-test
54 https://docs.microsoft.com/azure/load-testing/tutorial-cicd-github-actions
55 https://docs.microsoft.com/azure/load-testing/how-to-high-scale-load
56 https://docs.microsoft.com/azure/devops/pipelines/process/environments
57 https://docs.microsoft.com/azure/devtest-labs/integrate-environments-devops-pipeline
58 https://docs.microsoft.com/azure/devops/pipelines/library/connect-to-azure
59 https://docs.microsoft.com/azure/devops/pipelines/library/service-endpoints
60 https://docs.microsoft.com/azure/devops/pipelines/tasks/test/run-functional-tests
61 https://docs.microsoft.com/azure/azure-monitor/app/monitor-web-app-availability
62 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-testing
Learning objectives
After completing this module, students and professionals can:
●● Use and manage task and variable groups.
●● Use release variables and stage variables in your release pipeline.
●● Use variables in release pipelines.
●● Create custom build and release tasks.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● For some exercises, you need to create an Azure DevOps Organization and a Team Project. If you
don't have it yet, see: Create an organization - Azure DevOps63.
●● If you already have your organization created, use the Azure DevOps Demo Generator64 and
create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel
free to create a blank project. See Create a project - Azure DevOps65.
63 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
64 https://azuredevopsdemogenerator.azurewebsites.net/
65 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
66 https://docs.microsoft.com/azure/devops/pipelines/library/task-groups
Note: Task Groups aren't currently supported in YAML. Use templates instead. See Template References67.
Steps
Let's now look at how a release pipeline can reuse groups of tasks.
It's common to reuse a group of tasks in more than one stage within a pipeline or in different pipelines.
1. In the main menu for the Parts Unlimited project, click Pipelines, then click Task groups.
You'll notice that you don't currently have any task groups defined.
There's an option to import task groups, but the most common way to create a task group is directly
within the release pipeline, so let's do that.
2. Click Pipelines, click Releases and click Edit to open the pipeline we worked on in the main menu.
67 https://docs.microsoft.com/azure/devops/pipelines/yaml-schema
3. The Development stage currently has a single task. We'll add another task to that stage. Click the
View stage tasks link to open the stage editor.
4. Click the + sign to the right of the Agent job line to add a new task, and in the Search box, type database.
5. Hover over the Azure SQL Database Deployment option and click Add. Click the Azure SQL
DacpacTask when it appears in the list to open the settings pane.
6. Set the Display name to Deploy devopslog database, and from the Azure Subscriptions drop-
down list, click ARM Service Connection.
Note: We can reuse our service connection here.
7. In the SQL Database section, set a unique name for the SQL Server, set the Database to devopslog,
set the Login to devopsadmin, and set any suitable password.
8. In the Deployment Package section, set the Deploy type to Inline SQL Script, set the Inline SQL
Script to:
CREATE TABLE dbo.TrackingLog
(
TrackingLogID int IDENTITY(1,1) PRIMARY KEY,
TrackingDetails nvarchar(max)
);
11. Click Create task group, then in the Create task group window, set Name to Backup website zip
file and deploy devopslog. Click the Category drop-down list to see the available options. Ensure
that Deploy is selected, and click Create.
The individual tasks have now disappeared from the list of tasks, and the new task group appears
instead.
12. From the Task drop-down list, select the Test Team A stage.
13. Click the + sign to the right of the Agent job to add a new task. In the Search box, type backup and
notice that the new task group appears like any other task.
14. Hover on the task group and click Add when it appears.
Task groups allow for easy reuse of a set of tasks and limit the number of places where edits need to occur.
Walkthrough cleanup
15. Click Remove to remove the task group from the Test Team A stage.
16. From the Tasks drop-down list, select the Development stage. Again click Remove to remove the
task group from the Development stage.
17. Click Save, then OK.
Predefined variables
When running your release pipeline, you always need variables that come from the agent or context of
the release pipeline.
68 https://docs.microsoft.com/azure/devops/pipelines/release/variables
For example, the agent directory where the sources are downloaded, the build number or build ID, the
agent's name, or any other information.
This information is accessible in predefined variables that you can use in your tasks.
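For example, a script step can echo some of these predefined variables. A minimal sketch in YAML pipeline syntax (the same $(...) macro syntax works in classic build and release tasks):

steps:
- script: |
    echo "Build ID:     $(Build.BuildId)"
    echo "Agent name:   $(Agent.Name)"
    echo "Sources path: $(Build.SourcesDirectory)"
  displayName: Show a few predefined variables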
Stage variables
Share values across all the tasks within one specific stage by using stage variables.
Use a stage-level variable for values that vary from stage to stage (and are the same for all the tasks in a
stage).
Variable groups
Share values across all the definitions in a project by using variable groups. We'll cover variable groups
later in this module.
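In YAML pipelines, the same ideas appear as a variables section: a group entry links a variable group from the Library, and variables declared on a stage apply only to that stage. A minimal sketch with hypothetical stage and variable names:

variables:
- group: Website Test Product Details   # shared across pipelines via the Library

stages:
- stage: Development
  variables:
    WebsiteName: partsunlimited-dev     # stage-level value that differs per stage
  jobs:
  - job: Deploy
    steps:
    - script: echo "Deploying $(WebsiteName) with test product $(ProductCode)"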
Steps
Let's now look at how a release pipeline can use shared sets of variables, called variable groups.
Much like task groups, variable groups provide a convenient way to avoid redefining many variables when defining stages within pipelines, and even when working across multiple pipelines.
Let's create a variable group and see how it can be used.
1. On the main menu for the Parts Unlimited project, click Pipelines, then click Library. There are
currently no variable groups in the project.
2. Click + Variable group to start creating a variable group. Set Variable group name to Website Test
Product Details.
3. In the Variables section, click +Add. In Name, enter ProductCode, and in Value, enter REDPOLOXL.
69 https://docs.microsoft.com/azure/devops/pipelines/library/variable-groups
You can see an extra column that shows a lock. It allows you to have variable values that are locked
and not displayed in the configuration screens.
While locking is often used for values like passwords, notice that there's also an option to link secrets from an Azure Key Vault as variables.
That would be the preferable option for credentials that need to be secured outside the project.
In this example, we're just providing details of a product used in testing the website.
4. Add another variable called Quantity with a value of 12.
5. Add another variable called SalesUnit with a value of Each.
7. On the main menu, click Pipelines, click Releases and click Edit to return to editing the release
pipeline we have been working on. From the top menu, click Variables.
Variable groups are linked to pipelines rather than being directly added to them.
9. Click Link variable group, then in the Link variable group pane, click the Website Test Product
Details variable group (notice that it shows you how many variables are contained). In the Variable
group scope, select the Development, Test Team A, and Test Team B stages.
We need the test product for development and testing, but we don't need it in production. If required
in all stages, we would have chosen Release for the Variable group scope instead.
10. Click Link to complete the link.
The variables contained in the variable group are now available for use within all stages except production, in the same way as any other variable.
Summary
This module described the creation of task and variable groups, creating custom build and release tasks,
and using release variables and stage variables in your pipeline.
You learned how to describe the benefits and usage of:
●● Use and manage task and variable groups.
●● Use release variables and stage variables in your release pipeline.
●● Use variables in release pipelines.
●● Create custom build and release tasks.
Learn more
●● Variable groups for Azure Pipelines - Azure Pipelines | Microsoft Docs70.
●● Define variables - Azure Pipelines | Microsoft Docs71.
●● Add a build or release task in an extension - Azure DevOps | Microsoft Docs72.
70 https://docs.microsoft.com/azure/devops/pipelines/library/variable-groups
71 https://docs.microsoft.com/azure/devops/pipelines/process/variables
72 https://docs.microsoft.com/azure/devops/extend/develop/add-build-task
Learning objectives
After completing this module, students and professionals can:
●● Implement automated inspection of health.
●● Create and configure events.
●● Configure notifications.
●● Create service hooks to monitor the pipeline.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see: Create an organization - Azure DevOps73.
●● If you already have your organization created, use the Azure DevOps Demo Generator74 and
create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel
free to create a blank project. See Create a project - Azure DevOps75.
You can do a few different things to stay informed about your release pipeline automatically. In the following chapters, we'll dive a bit deeper into these.
73 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
74 https://azuredevopsdemogenerator.azurewebsites.net
75 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
Release gates
Release gates allow automatic collection of health signals from external services, and then promote the release when all the signals are successful at the same time or stop the deployment on timeout.
Typically, gates are connected with incident management, problem management, change management,
monitoring, and external approval systems. Release gates are discussed in an upcoming module.
Service hooks
Service hooks enable you to do tasks on other services when events happen in your Azure DevOps
Services projects.
For example, create a card in Trello when a work item is created or send a push notification to your
team's Slack when a build fails.
Service hooks can also be used in custom apps and services as a more efficient way to drive activities
when events happen in your projects.
Reporting
Reporting is the most static approach to inspection but also the most evident in many cases.
Creating a dashboard that shows the status of your build and releases combined with team-specific
information is, in many cases, a valuable asset to get insights.
Read more at About dashboards, charts, reports, & widgets76.
76 https://docs.microsoft.com/azure/devops/report/dashboards/overview
Alerts
However, when you define alerts, you need to be careful. When you get alerts for every single event that
happens in the system, your mailbox will quickly be flooded with numerous alerts.
The more alerts you get that aren't relevant, the bigger the chance that people will never look at the alerts and notifications and will miss the important ones.
●● Build and release: AppVeyor, Bamboo, Jenkins, MyGet.
●● Collaborate: Campfire, Flowdock, HipChat, Hubot, Slack.
●● Customer support: UserVoice, Zendesk.
●● Plan and track: Trello.
●● Integrate: Azure Service Bus, Azure Storage, Web Hooks, Zapier.
This list will change over time.
77 https://docs.microsoft.com/azure/devops/notifications/index
78 https://docs.microsoft.com/azure/devops/notifications/concepts-events-and-notifications
To learn more about service hooks and how to use and create them, read Service Hooks in Azure
DevOps79.
Steps
Let's now look at how a release pipeline can communicate with other services by using service hooks.
Azure DevOps can be integrated with a wide variety of other applications. It has built-in support for many
applications and generic hooks for working with other applications. Let's look.
1. Below the main menu for the Parts Unlimited project, click Project Settings.
79 https://docs.microsoft.com/azure/devops/service-hooks/overview
By using service hooks, we can notify other applications that an event has occurred within Azure
DevOps. We could also send a message to a team in Microsoft Teams or Slack. We could also trigger
an action in Bamboo or Jenkins.
4. Scroll to the bottom of the list of applications and click on Web Hooks.
Suppose the application that you want to communicate with isn't in the list of available application
hooks.
In that case, you can almost always use the Web Hooks option as a generic way to communicate. It
allows you to make an HTTP POST when an event occurs.
So, if, for example, you wanted to call an Azure Function or an Azure Logic App, you could use this
option.
To demonstrate the basic process for calling web hooks, we'll write a message into a queue in the
Azure Storage account that we have been using.
5. From the list of available applications, click Azure Storage.
6. Click Next. On the Trigger page, we determine which event causes the service hook to be called. Click
the drop-down for Trigger on this type of event to see the available event types.
7. Ensure that Release deployment completed is selected, then in the Release pipeline name select
Release to all environments. For Stage, select Production. Drop down the list for Status and see the
available options.
9. On the Action page, enter the name of your Azure storage account.
10. Open the Azure portal, and from the settings for the storage account, copy the value for Key in the
Access keys section.
11. Back in the Action page in Azure DevOps, paste in the key.
13. Make sure that the test succeeded, then click Close, and on the Action page, click Finish.
3. If the release is waiting for approval, click to approve it and wait for the release to complete success-
fully.
Note: If you have run multiple releases, you might have various messages.
3. Click the latest message (usually the bottom of the list) to open it and review the message properties,
then close the Message properties pane.
You've successfully integrated this message queue with your Azure DevOps release pipeline.
There are four notification types that you can manage in Azure DevOps:
●● Personal notifications.
●● Team notifications.
●● Project notifications.
●● Global notifications.
For each notification, you have a set of specific steps to configure. The following steps show how to
manage global notifications:
1. Open your Azure DevOps organization: https://dev.azure.com/{organization}/_settings/organizationOverview.
2. Click on Organization settings at the bottom left side.
3. Click on Global notifications under the General tab.
The Default subscriptions tab lists all default global subscriptions available. The globe icon on a notifica-
tion subscription indicates the subscription is a default subscription. You can view all default notification
subscriptions80.
You can view and enable options available in the context menu (...) for each subscription.
Note: Only Project Collection Administrators can enable/disable any default subscription in this view.
Project Collection Valid Users group can only view the details of the default subscription.
In the Subscribers tab, you can see users subscribed to each notification item. The Settings section shows
the Default delivery option setting. All teams and groups inherit this setting.
You can see how to manage your personal notifications by following Manage your personal notifications81.
For more information, see:
●● Get started with notifications in Azure DevOps - Azure DevOps82.
80 https://docs.microsoft.com/en-us/azure/devops/notifications/oob-built-in-notifications?view=azure-devops
81 https://docs.microsoft.com/azure/devops/notifications/manage-your-personal-notifications
82 https://docs.microsoft.com/azure/devops/notifications/about-notifications
Notification settings
1. Click on the notification icon in the upper-right corner of any page.
2. Click on the notification settings under the list of repositories in the left sidebar.
83 https://docs.microsoft.com/azure/devops/notifications/manage-team-group-global-organization-notifications
84 https://docs.microsoft.com/azure/devops/notifications/concepts-events-and-notifications
85 https://docs.github.com/github/managing-subscriptions-and-notifications-on-github/managing-your-subscriptions
The release also has a quality aspect, but it's tightly related to the quality of the deployment and package
deployed. When we want to measure the quality of a release itself, we can do all kinds of checks within
the pipeline.
86 https://docs.github.com/account-and-profile/managing-subscriptions-and-notifications-on-github/setting-up-notifications/about-
notifications
87 https://docs.github.com/account-and-profile/managing-subscriptions-and-notifications-on-github/setting-up-notifications/configuring-
notifications
88 https://docs.github.com/account-and-profile/managing-subscriptions-and-notifications-on-github/managing-subscriptions-for-activity-
on-github/viewing-your-subscriptions
You can execute all different types of tests like integration tests, load tests, or even UI tests while running
your pipeline and checking the release's quality.
Using a quality gate is also a perfect way to check the quality of your release. There are many different quality gates. For example, a gate that monitors your deployment targets to check that everything is healthy, or work item gates that verify the quality of your requirements process.
You can add extra security and compliance checks. For example, do we follow the four-eyes principle, or
do we have the proper traceability?
Document store
An often-used way of storing release notes is by creating text files or documents in some document
store. This way, the release notes are stored together with other documents.
The downside of this approach is that there's no direct connection between the release in the release
management tool and the release notes that belong to this release.
Wiki
The most used way for customers is to store the release notes in a Wiki. For example:
●● Confluence from Atlassian
●● SharePoint Wiki
●● SlimWiki
●● Wiki in Azure DevOps
Release notes are created as a page in the wiki, and by using hyperlinks, relations can be made with the build, the release, and the artifacts.
In the codebase
When you look at it, release notes belong strictly to the release of the features you implemented and
your code. In that case, the best option might be to store release notes as part of your code repository.
Once the team completes a feature, they or the product owner also write the release notes and save
them alongside the code. This way, it becomes living documentation because the notes change with the
rest of the code.
In a work item
Another option is to store your release notes as part of your work items. Work items can be Bugs, Tasks,
Product Backlog Items, or User Stories.
You can create a custom field, or reuse an existing one, within the work item to save release notes. In this field, you type the publicly available release notes that will be communicated to the customer.
With a script or specific task in your build and release pipeline, you can generate the release notes and
store them as artifacts or publish them to an internal or external website.
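As a minimal sketch of that idea in a YAML build pipeline (the file name and git log format are only examples), you could generate a simple notes file and publish it as a pipeline artifact:

steps:
- script: |
    echo "# Release $(Build.BuildNumber)" > $(Build.ArtifactStagingDirectory)/release-notes.md
    git log -10 --pretty=format:"* %s" >> $(Build.ArtifactStagingDirectory)/release-notes.md
  displayName: Generate draft release notes from recent commits
- publish: $(Build.ArtifactStagingDirectory)/release-notes.md
  artifact: release-notes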
89 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-XplatGenerateReleaseNotes
90 https://marketplace.visualstudio.com/items?itemName=richardfennellBM.BM-VSTS-WIKIUpdater-Tasks
91 https://www.atlassian.com/software/confluence
92 https://azure.microsoft.com/services/devops/wiki/
code. Create a documentation artifact in your build pipeline and deliver this artifact to the release
pipeline.
The release pipeline can then deploy the documentation to a site or include it in the boxed product.
Stages
Running a Continuous Integration pipeline that builds and deploys your product is a commonly used
scenario. But what if you want to deploy the same release to different environments? When choosing the
right release management tool, you should consider the following things when it comes to stages (or
environments):
●● Can you use the same artifact to deploy to different stages?
●● Can you vary the configuration between the stages?
●● Can you have different steps for each stage?
●● Can you follow the release between the stages?
●● Can you track the artifacts/work items and source code between the stages?
●● Can we see where the released software originates from (which code)?
●● Can we see the requirements that led to this change?
●● Can we follow the requirements through the code, build, and release?
●● Auditability
●● Can we see who, when, and why the release process changed?
●● Can we see who, when, and why a new release has been deployed?
Security is vital as well. It isn't OK if people can do everything, including deleting evidence. Setting up the right roles, permissions, and authorization is essential to protect your system and pipeline.
When looking at an appropriate Release Management tool, you can consider:
●● Does it integrate with your company's Active Directory?
●● Can you set up roles and permissions?
●● Is there a change history of the release pipeline itself?
●● Can you ensure the artifact didn't change during the release?
●● Can you link requirements to the release?
●● Can you link source code changes to the release pipeline?
●● Can you enforce approval or the four-eyes principle?
●● Can you see the release history and the people who triggered the release?
For each tool, it's indicated whether it's part of a more extensive suite. Integration with a bigger suite gives you many advantages regarding traceability, security, and auditability, and numerous integrations are already available out of the box.
GitHub Actions
GitHub Actions help you build, test, and deploy your code. You can implement continuous integration
and continuous delivery (CI/CD) that allows you to make code reviews, branch management, and issue
triaging work the way you want.
●● Trigger workflows with various events.
●● Configure environments to set rules before a job can proceed and to limit access to secrets.
●● Use concurrency to control the number of deployments running at a time.
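For example, the following is a minimal workflow sketch that triggers on a push, limits concurrent deployments, and targets an environment; the build script is a hypothetical placeholder:

# .github/workflows/ci.yml
name: CI
on:
  push:
    branches: [ main ]
concurrency: production-deploys    # at most one run of this group at a time
jobs:
  build:
    runs-on: ubuntu-latest
    environment: production        # environment protection rules and secrets apply here
    steps:
    - uses: actions/checkout@v3
    - run: ./build.sh              # hypothetical build script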
Links
●● GitHub Actions93.
●● Understanding GitHub Actions94.
●● Essential features of GitHub Actions95.
●● Deploying with GitHub Actions96.
Azure Pipelines
Azure Pipelines helps you implement a build, test, and deployment pipeline for any app.
Tutorials, references, and other documentation show you how to configure and manage the continuous
integration and Continuous Delivery (CI/CD) for the app and platform of your choice.
●● Hosted on Azure as a SaaS in multiple regions and available as an on-premises product.
●● Complete REST API for everything around Build and Release Management.
●● Integration with many build and source control systems
●● GitHub.
●● Azure Repos.
●● Jenkins.
●● Bitbucket.
●● and so on.
●● Cross-Platform support, all languages, and platforms.
●● Rich marketplace with extra plugins, build tasks and release tasks, and dashboard widgets.
●● Part of the Azure DevOps suite. Tightly integrated.
●● Fully customizable.
93 https://docs.github.com/en/actions
94 https://docs.github.com/en/actions/learn-github-actions/understanding-github-actions
95 https://docs.github.com/en/actions/learn-github-actions/essential-features-of-github-actions
96 https://docs.github.com/en/actions/deployment/about-deployments/deploying-with-github-actions
Links
●● Azure Pipelines97.
●● Building and Deploying your Code with Azure Pipelines98.
Jenkins
Jenkins, the leading open-source automation server, provides hundreds of plugins to support building, deploying, and automating any project.
●● An on-premises system; it's also offered as SaaS by third parties.
●● Not part of a bigger suite.
●● Industry-standard, especially in the full-stack space.
●● Integrates with almost every source control system.
●● A rich ecosystem of plugins.
●● CI/Build tool with deployment possibilities.
●● No release management capabilities.
Links
●● Jenkins99.
●● Tutorial: Jenkins CI/CD to deploy an ASP.NET Core application to Azure Web App service100.
●● Azure Friday - Jenkins CI/CD with Service Fabric101.
Circle CI
CircleCI's continuous integration and delivery platform helps software teams rapidly release code with confidence by automating the build, test, and deploy process.
CircleCI offers a modern software development platform that lets teams ramp quickly, scale easily, and
build confidently every day.
●● CircleCI is available as a cloud-based or an on-premises system.
●● REST API: you have access to projects, builds, and artifacts.
●● The result of the build is going to be an artifact.
●● Integration with GitHub and BitBucket.
●● Integrates with various clouds.
97 https://azure.microsoft.com/services/devops/pipelines/
98 https://www.youtube.com/watch?v=NuYDAs3kNV8
99 https://jenkins.io/
100 https://cloudblogs.microsoft.com/opensource/2018/09/21/configure-jenkins-cicd-pipeline-deploy-asp-net-core-application/
101 https://www.youtube.com/watch?v=5RYmooIZqS4
Links
●● CircleCI102.
●● How to get started on CircleCI 2.0: CircleCI 2.0 Demo103
GitLab Pipelines
GitLab helps teams automate the release and delivery of their applications to shorten the delivery
lifecycle, streamline manual processes and accelerate team velocity.
With Continuous Delivery (CD) built into the pipeline, deployment can be automated to multiple environ-
ments like staging and production and support advanced features such as canary deployments.
Because the configuration and definition of the application are version controlled and managed, it's easy
to configure and deploy your application on demand.
Link
●● GitLab104
Atlassian Bamboo
Bamboo is a continuous integration (CI) server that can automate the release management for a software
application, creating a Continuous Delivery pipeline.
Link
●● Atlassian Bamboo105
Summary
This module described how to automate the inspection of health events, configure notifications, and set
up service hooks to monitor pipelines.
You learned how to describe the benefits and usage of:
●● Implement automated inspection of health.
●● Create and configure events.
●● Configure notifications.
●● Create service hooks to monitor the pipeline.
102 https://circleci.com/
103 https://www.youtube.com/watch?v=KhjwnTD4oec
104 https://about.gitlab.com/stages-devops-lifecycle/release/
105 https://www.atlassian.com/software/bamboo/features
Learn more
●● DevOps checklist - Azure Design Review Framework | Microsoft Docs106.
●● Events, subscriptions, and notifications - Azure DevOps | Microsoft Docs107.
●● Integrate with service hooks - Azure DevOps | Microsoft Docs108.
●● Build Quality Indicators report - Azure DevOps Server | Microsoft Docs109.
106 https://docs.microsoft.com/azure/architecture/checklist/dev-ops
107 https://docs.microsoft.com/azure/devops/notifications/concepts-events-and-notifications
108 https://docs.microsoft.com/azure/devops/service-hooks/overview
109 https://docs.microsoft.com/azure/devops/report/sql-reports/build-quality-indicators-report
Labs
Lab 09: Controlling deployments using Release
Gates
Lab overview
This lab covers the configuration of the deployment gates and details how to use them to control
execution of Azure pipelines. To illustrate their implementation, you will configure a release definition
with two environments for an Azure Web App. You will deploy to the Canary environment only when
there are no blocking bugs for the app and mark the Canary environment complete only when there are
no active alerts in Application Insights of Azure Monitor.
A release pipeline specifies the end-to-end release process for an application to be deployed across a
range of environments. Deployments to each environment are fully automated by using jobs and tasks.
Ideally, you do not want new updates to the applications to be exposed to all the users at the same time.
It is a best practice to expose updates in a phased manner, i.e., expose them to a subset of users, monitor their usage, and expose them to other users based on the experience of the initial set of users.
Approvals and gates enable you to take control over the start and completion of the deployments in a
release. With approvals, you can wait for users to manually approve or reject deployments. Using release
gates, you can specify application health criteria that must be met before release is promoted to the next
environment. Prior to or after any environment deployment, all the specified gates are automatically
evaluated until they all pass or until they reach your defined timeout period and fail.
Gates can be added to an environment in the release definition from the pre-deployment conditions or
the post-deployment conditions panel. Multiple gates can be added to the environment conditions to
ensure all the inputs are successful for the release.
As an example:
●● Pre-deployment gates ensure there are no active issues in the work item or problem management
system before deploying a build to an environment.
●● Post-deployment gates ensure there are no incidents from the monitoring or incident management
system for the app after it’s been deployed, before promoting the release to the next environment.
There are 4 types of gates included by default in every account.
●● Invoke Azure function: Triggers execution of an Azure function and ensures a successful completion.
●● Query Azure monitor alerts: Observes the configured Azure monitor alert rules for active alerts.
●● Invoke REST API: Makes a call to a REST API and continues if it returns a successful response.
●● Query Workitems: Ensures the number of matching work items returned from a query is within a
threshold.
Objectives
After you complete this lab, you will be able to:
●● Configure release pipelines
●● Configure release gates
●● Test release gates
Lab duration
●● Estimated time: 75 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions110
Objectives
After you complete this lab, you will be able to:
●● create a release dashboard
●● use REST API to query release information
Lab duration
●● Estimated time: 45 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions111
110 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
111 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which of the following choices isn't considered a feedback loop or quality gate during Continuous Delivery?
An automated test on a Test environment.
An automated build to validate the sources.
An automated Azure Boards and repository integration.
Multiple choice
Which of the following choices best describes Continuous Delivery?
Continuous delivery (CD) triggers automated testing for every code change.
Continuous delivery (CD) is a set of processes, tools, and techniques for rapid, reliable, and continuous
software development and delivery.
Continuous delivery (CD) is the process of automating the build and testing of code every time a team
member commits changes to version control.
Multiple choice
Which of the following choices is a definition for a technical release (deployment)?
Deployment is the action of running the tasks for one stage, which results in a tested and deployed
application and other activities specified for that stage.
Deployment is a package or container containing a versioned set of artifacts specified in a release
pipeline in your CI/CD process.
Deployment is a construct that holds a versioned set of artifacts specified in a CI/CD pipeline.
Multiple choice
Which of the following choices is a container with versioned artifacts, pipeline, approvals, stages, variables?
Deployment.
Build.
Release.
Multiple choice
Which of the following choices isn't a release trigger?
A manual trigger.
A continuous deployment trigger.
A feature trigger.
Multiple choice
Which of the following choices can you use to prevent deployment in Azure DevOps when a security testing
tool finds a compliance problem?
Work Item.
Release Gate.
Manual trigger.
Multiple choice
Which of the following choices is a logical boundary in your release pipeline at which you can pause the
pipeline and perform various checks?
Stage.
Trigger.
Quality Gate.
Multiple choice
Which of the following choices describes how you measure the quality of your release process?
Based on the success and failures of your pipeline.
Based on your pipeline's configuration and CI/CD implementation.
The quality of your release process cannot be measured directly because it is a process.
Multiple choice
Which of the following choices describes the Release Notes and documentation importance?
To comply with the internal company process and audit.
It's essential when you want to communicate what has been released to your customer.
To control what was delivered during the sprint.
Multiple choice
Which of the following choices isn't a Release Management tool?
Azure Repos.
Azure Pipelines.
Jenkins.
Multiple choice
Which of the following choices describes a microservice?
A microservice is a multi-function software to deliver more than one thing very well.
A microservice is software built as one monolith.
A microservice is an autonomous, independently deployable, and scalable software component.
Multiple choice
Which of the following choices is a microservice characteristic?
Each microservice can be tested based on other microservice results.
If one microservice changes, it shouldn't impact any other microservices within your landscape.
If one microservice changes, it affects other services, and you need to wait for other services to go
online.
Multiple choice
Which of the following choices represents a classical deployment pattern?
Dev, Test, Staging, and Production deployment.
Blue-green deployments.
A/B testing.
Multiple choice
Which of the following choices is the easiest way to create a staging environment for an Azure WebApp?
Create a deployment slot.
Use Application Insights.
Create an app clone.
Multiple choice
Which of the following choices describes a Feature Flag functionality?
Feature Flags allow teams to create automatic release notes.
Feature Flags can be used to control exposure to new product functionality.
Feature Flags help teams control deployment release gates.
Multiple choice
Which of the following choices isn't a deployment pattern that allows you to plan to slowly increase the
traffic to a newer version of your site?
A/B Testing.
Blue-Green.
Canary Release.
Multiple choice
Which of the following choices is an Azure-based tool you can use to divert a percentage of your web traffic to a newer version of an Azure website?
Load Balancer.
Application Gateway.
Traffic Manager.
Multiple choice
Which of the following choices is a characteristic that makes users suitable for working with Canary deploy-
ments?
Uses the app irregularly.
High tolerance for issues.
It depends highly on the app working all the time.
Multiple choice
Which of the following choices describes how Dark Launching differs from Canary releases?
You're looking to assess users' responses to new features in your frontend rather than testing the
performance of the backend.
You're looking to assess users' responses to new features in your backend and frontend.
You're looking to assess users' responses to new features in your backend rather than testing the
performance of the frontend.
Multiple choice
Which of the following choices is a deployment pattern extension of Canary Release?
Blue-Green.
A/B Testing.
Progressive Exposure.
Multiple choice
Which of the following choices is another name for A/B testing?
Smoke testing.
Split testing or Bucket testing.
Integration testing.
Multiple choice
Which of the following choices is a correct statement about A/B testing?
A/B testing isn't part of Continuous Delivery or a pre-requisite for Continuous Delivery.
A/B testing can be implemented using Azure Artifacts and multiple environments.
A/B testing is a pre-requisite and part of Continuous Delivery.
Answers
Multiple choice
Which of the following choices isn't considered a feedback loop or quality gate during Continuous Delivery?
An automated test on a Test environment.
An automated build to validate the sources.
■■ An automated Azure Boards and repository integration.
Explanation
A feedback loop can be different things: A unit test to validate the code, An automated build to validate the
sources, An automated test on a Test environment, Some monitor on a server, and so on.
Multiple choice
Which of the following choices best describes Continuous Delivery?
Continuous delivery (CD) triggers automated testing for every code change.
■■ Continuous delivery (CD) is a set of processes, tools, and techniques for rapid, reliable, and continuous
software development and delivery.
Continuous delivery (CD) is the process of automating the build and testing of code every time a team
member commits changes to version control.
Explanation
Continuous delivery (CD) is a set of processes, tools, and techniques for rapid, reliable, and continuous
software development and delivery.
Multiple choice
Which of the following choices is a definition for a technical release (deployment)?
■■ Deployment is the action of running the tasks for one stage, which results in a tested and deployed
application and other activities specified for that stage.
Deployment is a package or container containing a versioned set of artifacts specified in a release
pipeline in your CI/CD process.
Deployment is a construct that holds a versioned set of artifacts specified in a CI/CD pipeline.
Explanation
Deployment is the action of running the tasks for one stage, which results in a tested and deployed applica-
tion and other activities specified for that stage.
Multiple choice
Which of the following choices is a container with versioned artifacts, pipeline, approvals, stages, varia-
bles?
Deployment.
Build.
■■ Release.
Explanation
A release is a package or container that holds a versioned set of artifacts specified in a release pipeline in
your CI/CD process.
Multiple choice
Which of the following choices isn't a release trigger?
A manual trigger.
A continuous deployment trigger.
■■ A feature trigger.
Explanation
A feature trigger.
Multiple choice
Which of the following choices can you use to prevent deployment in Azure DevOps when a security
testing tool finds a compliance problem?
Work Item.
■■ Release Gate.
Manual trigger.
Explanation
Release Gate. Release gates give you additional control over the start and completion of the deployment pipeline. They are often set up as pre-deployment and post-deployment conditions.
Multiple choice
Which of the following choices is a logical boundary in your release pipeline at which you can pause the
pipeline and perform various checks?
■■ Stage.
Trigger.
Quality Gate.
Explanation
Using stages, you can pause and validate the pipeline with various checks.
Multiple choice
Which of the following choices describes how you measure the quality of your release process?
Based on the success and failures of your pipeline.
Based on your pipeline's configuration and CI/CD implementation.
■■ The quality of your release process cannot be measured directly because it is a process.
Explanation
The quality of your release process can't be measured directly because it's a process. What you can measure
is how well your process works.
Multiple choice
Which of the following choices describes the Release Notes and documentation importance?
To comply with the internal company process and audit.
■■ It's essential when you want to communicate what has been released to your customer.
To control what was delivered during the sprint.
Explanation
When you deploy a new release to a customer or install new software on your server, you want to commu-
nicate what has been released. The usual way to do this is the use of release notes.
Multiple choice
Which of the following choices isn't a Release Management tool?
■■ Azure Repos.
Azure Pipelines.
Jenkins.
Explanation
Azure Repos is a Source Control tool.
Multiple choice
Which of the following choices describes a microservice?
A microservice is a multi-function software to deliver more than one thing very well.
A microservice is software built as one monolith.
■■ A microservice is an autonomous, independently deployable, and scalable software component.
Explanation
A microservice is an autonomous, independently deployable, and scalable software component.
Multiple choice
Which of the following choices is a microservice characteristic?
Each microservice can be tested based on other microservice results.
■■ If one microservice changes, it shouldn't impact any other microservices within your landscape.
If one microservice changes, it affects other services, and you need to wait for other services to go
online.
Explanation
If one microservice changes, it should not affect any other microservices within your landscape.
Multiple choice
Which of the following choices represents a classical deployment pattern?
■■ Dev, Test, Staging, and Production deployment.
Blue-green deployments.
A/B testing.
Explanation
The traditional or classical deployment pattern was moving your software to a development stage, a testing
stage, maybe an acceptance or staging stage, and finally a production stage.
Multiple choice
Which of the following choices is the easiest way to create a staging environment for an Azure WebApp?
■■ Create a deployment slot.
Use Application Insights.
Create an app clone.
Explanation
With deployment slots, you can validate app changes in staging before swapping them with your production
slot.
Multiple choice
Which of the following choices describes a Feature Flag functionality?
Feature Flags allow teams to create automatic release notes.
■■ Feature Flags can be used to control exposure to new product functionality.
Feature Flags help teams control deployment release gates.
Explanation
Feature Flags can be used to control exposure to new product functionality.
Multiple choice
Which of the following choices isn't a deployment pattern that allows you to plan to slowly increase the
traffic to a newer version of your site?
A/B Testing.
■■ Blue-Green.
Canary Release.
Explanation
A/B Testing and Canary Release allow you to plan to slowly increase the traffic to a newer version of your site.
Multiple choice
Which of the following choices is an Azure-based tool you can use to divert a percentage of your web traffic to a newer version of an Azure website?
Load Balancer.
Application Gateway.
■■ Traffic Manager.
Explanation
You set the traffic to be distributed to a small percentage of the users, and you carefully watch the applica-
tion's behavior.
Multiple choice
Which of the following choices is a characteristic that makes users suitable for working with Canary
deployments?
Uses the app irregularly.
■■ High tolerance for issues.
It depends highly on the app working all the time.
Explanation
It's a high tolerance for issues.
Multiple choice
Which of the following choices describes how Dark Launching differs from Canary releases?
■■ You're looking to assess users' responses to new features in your frontend rather than testing the
performance of the backend.
You're looking to assess users' responses to new features in your backend and frontend.
You're looking to assess users' responses to new features in your backend rather than testing the
performance of the frontend.
Explanation
You are looking to assess users' responses to new features in your frontend rather than testing the perfor-
mance of the backend.
Multiple choice
Which of the following choices is a deployment pattern extension of Canary Release?
Blue-Green.
A/B Testing.
■■ Progressive Exposure.
Explanation
It's Progressive Exposure.
Multiple choice
Which of the following choices is another name for A/B testing?
Smoke testing.
■■ Split testing or Bucket testing.
Integration testing.
Explanation
A/B testing is also known as split testing or bucket testing. It compares two versions of a web page or app
against each other to determine which one performs better.
Multiple choice
Which of the following choices is a correct statement about A/B testing?
■■ A/B testing isn't part of Continuous Delivery or a pre-requisite for Continuous Delivery.
A/B testing can be implemented using Azure Artifacts and multiple environments.
A/B testing is a pre-requisite and part of Continuous Delivery.
Explanation
A/B testing is not part of Continuous Delivery or a pre-requisite for Continuous Delivery. It's more the other
way around.
Module 5 Implement a secure continuous de-
ployment using Azure Pipelines
Testing strategy
Your testing strategy should be in place. If you need to run many manual tests to validate your software,
it is a bottleneck to delivering on-demand.
Coding practices
If your software isn't written in a safe and maintainable manner, the chances are that you can't maintain a
high release cadence.
When your software is complex because of a large amount of technical debt, it's hard to change the code quickly and reliably.
Writing high-quality software and high-quality tests is an essential part of Continuous Delivery.
Architecture
The architecture of your application is always significant. But when implementing Continuous Delivery, it's
maybe even more so.
If your software is a monolith with much tight coupling between the various components, it's challenging to deliver your software continuously.
Every part that is changed might impact other parts that didn't change. Automated tests can track many of these unexpected dependencies, but it's still hard.
There's also the time aspect when working with different teams. When Team A relies on the service of
Team B, Team A can't deliver until Team B is done. It introduces another constraint on delivery.
Continuous Delivery for large software products is complex.
For smaller parts, it's easier. So, breaking up your software into smaller, independent pieces is a good
solution in many cases.
One approach to solving these issues is to implement microservices.
Continuous Integration is one of the key pillars of DevOps.
Once you have your code in a version control system, you need an automated way of integrating the
code on an ongoing basis.
Azure Pipelines can be used to create a fully featured cross-platform CI and CD service.
It works with your preferred Git provider and can deploy to most major cloud services, including Azure.
This module details continuous integration practice and the pillars for implementing it in the develop-
ment lifecycle, its benefits, and properties.
Learning objectives
After completing this module, students and professionals can:
●● Describe deployment patterns.
●● Explain microservices architecture.
●● Understand classical and modern deployment patterns.
●● Plan and design your architecture.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Traditionally, the software was built, and when all features had been implemented, the software was deployed to an environment where a group of people could start using it.
The traditional or classical deployment pattern was moving your software to a development stage, a
testing stage, maybe an acceptance or staging stage, and finally a production stage.
The software moved as one piece through the stages.
The production release was, in most cases, a Big Bang release, where users were confronted with many
changes at the same time.
Despite the different stages to test and validate, this approach still involves many risks.
By running all your tests and validation on non-production environments, it's hard to predict what
happens when your production users start using it.
You can run load tests and availability tests, but in the end, there's no place like production.
Summary
This module introduced deployment patterns, explained microservices architecture to help improve the deployment cycle, and examined classical and modern deployment patterns.
You learned how to describe the benefits and usage of:
●● Describe deployment patterns.
●● Explain microservices architecture.
●● Understand classical and modern deployment patterns.
●● Plan and design your architecture.
Learn more
●● Deployment jobs - Azure Pipelines | Microsoft Docs1.
●● What are Microservices? - Azure DevOps | Microsoft Docs2.
●● Design a CI/CD pipeline using Azure DevOps - Azure Example Scenarios | Microsoft Docs3.
1 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs
2 https://docs.microsoft.com/devops/deliver/what-are-microservices
3 https://docs.microsoft.com/azure/architecture/example-scenario/apps/devops-dotnet-webapp
Learning objectives
After completing this module, students and professionals can:
●● Explain deployment strategies.
●● Implement blue-green deployment.
●● Understand deployment slots.
●● Implement and manage feature toggles.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see: Create an organization - Azure DevOps4.
●● If you already have your organization created, use the Azure DevOps Demo Generator5 and
create a new Team Project called “Parts Unlimited” using the template "PartsUnlimited." Or feel
free to create a blank project. See Create a project - Azure DevOps6.
4 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
5 https://azuredevopsdemogenerator.azurewebsites.net/
6 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
Swap
The swap operation eliminates downtime when you deploy your app: traffic is redirected seamlessly, and no requests are dropped because of swap operations.
Steps
Let's now look at how a release pipeline can be used to implement blue-green deployments.
We'll start by creating a new project with a release pipeline that can deploy the Parts Unlimited template
again.
3. Click on the PartsUnlimited project (not the PartsUnlimited-YAML project), click Select Template,
and click Create Project. When the deployment completes, click Navigate to the project.
4. In the main menu for PU Hosted, click Pipelines, then click Builds, then Queue, and finally Run to
start a build.
The build should succeed.
Note: Warnings might appear but can be ignored for this walkthrough.
7 https://docs.microsoft.com/azure/app-service/deploy-staging-slots
8 https://blogs.msdn.microsoft.com/devops/2017/04/10/considerations-on-using-deployment-slots-in-your-devops-pipeline/
9 https://docs.microsoft.com/azure/app-service/deploy-staging-slots
5. In the main menu, click Releases. Because a continuous integration trigger was in place, a release was
attempted. However, we haven't yet configured the release so it will have failed. Click Edit to enter
edit mode for the release.
6. Select the Dev stage from the drop-down list beside Tasks, then click to select the Azure Deploy-
ment task.
7. In the Azure resource group deployment pane, select your Azure subscription, then click Authorize
when prompted. When authorization completes, choose a Location for the web app.
Note: You might be prompted to sign in to Azure at this point.
8. Click Azure App Service Deploy in the task list to open its settings. Again, select your Azure sub-
scription. Set the Deployment slot to Staging.
Note: The template creates a production site and two deployment slots: Dev and Staging. We'll use Staging for our Green site.
9. In the task list, click Dev, and in the Agent job pane, select Azure Pipelines for the Agent pool and
vs2017-win2016 for the Agent Specification.
10. From the top menu, click Pipelines. Click the Dev stage, and in the properties window, rename it to
Green Site. Click the QA stage and click Delete and Confirm. Click the Production stage and click
Delete and Confirm. Click Save, then OK.
11. Hover over the Green Site stage and click the Clone icon when it appears. Change the Stage name to
Production. From the Tasks drop-down list, select Production.
12. Click the Azure App Service Deploy task and uncheck the Deploy to slot option. Click Save and OK.
The production site isn't deployed to a deployment slot. It's deployed to the main site.
13. Click Create release, then Create to create the new release. When created, click the release link to
view its status.
15. Open a new browser tab and navigate to the copied URL. It will take the application a short while to
compile, but then the Green website (on the Staging slot) should appear.
Note: You can tell that the staging slot is being used because of the -staging suffix in the website URL.
16. Open another new browser tab and navigate to the same URL but without the -staging slot. The
production site should also be working.
20. From the Tasks drop-down list, click to select the Swap Blue-Green stage. Click the + to the right-
hand side of Agent Job to add a new task. In the Search box, type CLI.
21. Hover over the Azure CLI template and when the Add button appears, click it, then click to select the
Azure CLI task to open its settings pane.
22. Configure the pane as follows, with your subscription, a Script Location of Inline script, and the
Inline Script:
az webapp deployment slot swap -g $(ResourceGroupName) -n $(WebsiteName) --slot Staging --target-slot production
23. From the menu above the task list, click Pipeline. Click the Pre-deployment conditions icon for the
Swap Blue-Green stage, then in the Triggers pane, enable Pre-deployment approvals.
24. Configure yourself as an approver, click Save, then OK.
We'll make a cosmetic change to see that the website has been updated. We'll change the word tires
in the main page rotation to tyres to target an international audience.
26. Click Edit to allow editing, then find the word tires and replace it with the word tyres. Click Commit
and Commit to save the changes and trigger a build and release.
27. From the main menu, click Pipelines, then Builds. Wait for the continuous integration build to
complete successfully.
28. From the main menu, click Releases. Click to open the latest release (at the top of the list).
You're now being asked to approve the deployment swap across to Production. We'll check the green
deployment first.
29. Refresh the Green site (that is, Staging slot) browser tab and see if your change has appeared. It now
shows the altered word.
30. Refresh the Production site browser tab and notice that it still isn't updated.
31. As you're happy with the change, in release details, click Approve, then Approve and wait for the
stage to complete.
32. Refresh the Production site browser tab and check that it now has the updated code.
Final notes
If you check the Green site (the Staging slot), you'll see it now has the previous version of the code.
That's the critical difference between a swap and a typical deployment from one staged site to another: you have a rapid fallback option by swapping the sites back if needed.
When the switch is off, it executes the code in the IF, otherwise the ELSE.
You can make it much more intelligent, controlling the feature toggles from a dashboard or building
capabilities for roles, users, and so on.
If you want to implement feature toggles, many different frameworks are available commercially as Open
Source.
For more information, see also Explore how to progressively expose your features in production for
some or all users10.
10 https://docs.microsoft.com/azure/devops/articles/phase-features-with-feature-flags
You can classify the different types of toggles based on two dimensions, as described by Martin Fowler.
He states that you can look at how long a toggle should live in your codebase on one dimension and, on the other, how dynamic the toggle needs to be.
The most important thing is to remember that you need to remove the toggles from the software.
If you don't do that, they'll become a form of technical debt if you keep them around for too long.
As soon as you introduce a feature flag, you've added to your overall technical debt.
Like other technical debt, they're easy to add, but the longer they're part of your code, the bigger the
technical debt becomes because you've added scaffolding logic needed for the branching within the
code.
The cyclomatic complexity of your code keeps increasing as you add more feature flags, as the number of
possible paths through the code increases.
Using feature flags can make your code less solid and can also add these issues:
●● The code is harder to test effectively as the number of logical combinations increases.
●● The code is harder to maintain because it's more complex.
●● The code might even be less secure.
●● It can be harder to duplicate problems when they're found.
A plan for managing the lifecycle of feature flags is critical. As soon as you add a flag, you need to plan
for when it will be removed.
Feature flags shouldn't be repurposed. There have been high-profile failures because teams decided to
reuse an old flag that they thought was no longer part of the code for a new purpose.
Azure App Configuration offers a Feature Manager. See Azure App Configuration Feature Manager11.
Summary
This module described the blue-green deployment process and introduced feature toggle techniques to
implement in the development process.
You learned how to describe the benefits and usage of:
●● Explain deployment strategies.
●● Implement blue-green deployment.
●● Understand deployment slots.
●● Implement and manage feature toggles.
Learn more
●● Release Engineering Continuous deployment - Azure Architecture Center | Microsoft Docs12.
●● Deployment jobs - Azure Pipelines | Microsoft Docs13.
●● Configure canary deployments for Azure Linux virtual machines - Azure Virtual Machines |
Microsoft Docs14.
●● Progressive experimentation with feature flags - Azure DevOps | Microsoft Docs15.
●● Set up staging environments - Azure App Service | Microsoft Docs16.
11 https://docs.microsoft.com/azure/azure-app-configuration/manage-feature-flags
12 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-cd
13 https://docs.microsoft.com/azure/devops/pipelines/process/deployment-jobs
14 https://docs.microsoft.com/azure/virtual-machines/linux/tutorial-azure-devops-blue-green-strategy
15 https://docs.microsoft.com/devops/operate/progressive-experimentation-feature-flags
16 https://docs.microsoft.com/azure/app-service/deploy-staging-slots
Learning objectives
After completing this module, students and professionals can:
●● Describe deployment strategies.
●● Implement canary release.
●● Explain traffic manager.
●● Understand dark launching.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Canary releases can be implemented using a combination of feature toggles, traffic routing, and deploy-
ment slots.
●● You can route a percentage of traffic to a deployment slot with the new feature enabled.
●● You can target a specific user segment by using feature toggles.
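In Azure Pipelines YAML, a deployment job also offers a built-in canary strategy that runs your deployment and traffic-routing steps in increments. The following is a minimal sketch; the environment name and echo steps are placeholders for your own logic:

jobs:
- deployment: DeployWeb
  environment: production-web          # hypothetical environment
  strategy:
    canary:
      increments: [10, 25]
      deploy:
        steps:
        - script: echo "Deploy the new version for the current increment"
      routeTraffic:
        steps:
        - script: echo "Shift the next traffic increment to the new version"
      on:
        failure:
          steps:
          - script: echo "Canary unhealthy - stop and roll back"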
●● Geographic: Select Geographic routing to direct users to specific endpoints based on where their DNS queries originate from geographically. It's useful when knowing a user's geographic region and routing based on that is necessary. Examples include following data sovereignty mandates, localization of content & user experience, and measuring traffic from different regions.
●● Multivalue: Select MultiValue for Traffic Manager profiles that can only have IPv4/IPv6 addresses as
endpoints. When a query is received for this profile, all healthy endpoints are returned.
●● Subnet: Select the Subnet traffic-routing method to map sets of end-user IP address ranges to a
specific endpoint within a Traffic Manager profile. The endpoint returned will be mapped for that
request's source IP address when a request is received.
When we look at the options the Traffic Manager offers, the most used option for Continuous Delivery is
routing traffic based on weights.
Note: Traffic is only routed to endpoints that are currently available.
For more information, see also:
●● What is Traffic Manager?17
●● How Traffic Manager works18
●● Traffic Manager Routing Methods19
17 https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview
18 https://docs.microsoft.com/azure/traffic-manager/traffic-manager-how-it-works
19 https://docs.microsoft.com/azure/traffic-manager/traffic-manager-routing-methods
Summary
This module described deployment strategies around canary releases and dark launching and examined
traffic managers.
You learned how to describe the benefits and usage of:
●● Describe deployment strategies.
●● Implement canary release.
●● Explain traffic manager.
●● Understand dark launching.
Learn more
●● Release Engineering Continuous deployment - Azure Architecture Center | Microsoft Docs20.
●● Canary deployment strategy for Kubernetes deployments - Azure Pipelines | Microsoft Docs21.
●● Azure Traffic Manager | Microsoft Docs22.
●● Progressive experimentation with feature flags - Azure DevOps | Microsoft Docs23.
20 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-cd
21 https://docs.microsoft.com/azure/devops/pipelines/ecosystems/kubernetes/canary-demo
22 https://docs.microsoft.com/azure/traffic-manager/traffic-manager-overview
23 https://docs.microsoft.com/devops/operate/progressive-experimentation-feature-flags
Learning objectives
After completing this module, students and professionals can:
●● Implement progressive exposure deployment.
●● Implement A/B testing.
●● Implement CI/CD with deployment rings.
●● Identify the best deployment strategy.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
A/B testing isn't part of continuous delivery or a pre-requisite for continuous delivery. It's more the other
way around.
Continuous delivery allows you to deliver MVPs to a production environment and your end-users quickly.
Common aims are to experiment with new features, often to see if they improve conversion rates.
Experiments are continuous, and the impact of change is measured.
When you have identified multiple groups of users and see value in investing in a ring-based deploy-
ment, you need to define your setup.
Some organizations that use canary releasing have multiple deployment slots set up as rings.
The first release of the feature to ring 0 targets a well-known set of users, mostly the internal organization.
After things have proven stable in ring 0, the release is propagated to the next ring, this time with a limited set of users outside the organization.
Finally, the feature is released to everyone, often by flipping the switch on the feature toggles in the software.
As in the other deployment patterns, monitoring and health checks are essential.
By using post-deployment release gates that check a ring for health, you can define an automatic
propagation to the next ring after everything is stable.
When a ring isn't healthy, you can halt the deployment to the following rings to reduce the impact.
For more information, see also Explore how to progressively expose your Azure DevOps extension
releases in production to validate before impacting all users24.
Steps
Let's look at how a release pipeline can stage features using ring-based deployments.
24 https://docs.microsoft.com/azure/devops/articles/phase-rollout-with-rings
When I have a new feature, I might want to release it to a few users first, just in case something goes
wrong.
I could do it in authenticated systems by having those users as members of a security group and letting
members of that group use the new features.
However, on a public website, I might not have logged-in users. Instead, I might want to direct a small
percentage of the traffic to use the new features.
Let's see how that's configured.
We'll create a new release pipeline that isn't triggered by code changes but is run manually when we want to slowly release a new feature.
We start by assuming that a new feature has already been deployed to the Green site (the staging slot).
1. In the main menu for the PU Hosted project, click Pipelines, then click Release, click +New, then
click New release pipeline.
2. When prompted to select a template, click Empty job from the top of the pane.
3. Click on the Stage 1 stage and rename it to Ring 0 (Canary).
4. Hover over the New release pipeline name at the top of the page, and when a pencil appears, click it,
and change the pipeline name to Ring-based Deployment.
5. Select the Ring 0 (Canary) stage from the Tasks drop-down list. Click the + to add a new task and, from the list of tasks, hover over Azure CLI. When the Add button appears, click it, then select the Azure CLI task in the task list for the stage.
6. In the Azure CLI settings pane, select your Azure subscription, set Script Location to Inline script,
set the Inline Script to the following, then click Save and OK.
az webapp traffic-routing set --resource-group $(ResourceGroupName) --name
$(WebsiteName) --distribution staging=10
This distribution will cause 10% of the web traffic to be sent to the new feature Site (currently the
staging slot).
7. From the menu above the task list, click Variables. Create two new variables, ResourceGroupName and WebsiteName, matching the names used in the script. (Make sure to use your correct website name.)
8. From the menu above the variables, click Pipeline to return to editing the pipeline. Hover over the
Ring 0 (Canary) stage and click the Clone icon when it appears. Select the new stage and rename it
to Ring 1 (Early Adopters).
9. Select the Ring 1 (Early Adopters) stage from the Tasks drop-down list and select the Azure CLI task.
Modify the script by changing the value from 10 to 30 to cause 30% of the traffic to go to the new
feature site.
11. Click the Pre-deployment conditions icon for the Ring 1 (Early Adopters) stage and add yourself as a pre-deployment approver. Do the same for the Public stage, then click Save and OK.
The first step in releasing the new code to the public is to swap the new feature site (that is, the
staging site) with the production so that production is now running the new code.
12. From the Tasks drop-down list, select the Public stage. Select the Azure CLI task, change the Display
name to Swap sites and change the Inline Script to the following command:
az webapp deployment slot swap -g $(ResourceGroupName) -n $(WebsiteName)
--slot staging --target-slot production
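After the swap, it's common to route all traffic back to the production slot. Assuming the same pipeline variables as the earlier tasks, a follow-up inline script could clear the routing rules, for example:
az webapp traffic-routing clear -g $(ResourceGroupName) -n $(WebsiteName)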
Summary
This module introduced A/B testing and progressive exposure deployment concepts and explored CI/CD
with deployment rings–ring-based deployment.
You learned how to describe the benefits and usage of:
●● Implement progressive exposure deployment.
●● Implement A/B testing.
●● Implement CI/CD with deployment rings.
●● Identify the best deployment strategy.
Learn more
●● Progressively expose your releases using deployment rings - Azure DevOps | Microsoft Docs25.
●● Testing your app and Azure environment - Azure Architecture Center | Microsoft Docs26.
●● What is Continuous Delivery? - Azure DevOps | Microsoft Docs27.
25 https://docs.microsoft.com/azure/devops/migrate/phase-rollout-with-rings
26 https://docs.microsoft.com/azure/architecture/framework/devops/release-engineering-testing
27 https://docs.microsoft.com/devops/deliver/what-is-continuous-delivery
Learning objectives
After completing this module, students and professionals can:
●● Integrate Azure DevOps with identity management systems.
●● Integrate GitHub with single sign-on (SSO).
●● Understand and create a service principal.
●● Create managed service identities.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
28 https://docs.github.com/organizations/managing-saml-single-sign-on-for-your-organization/enforcing-saml-single-sign-on-for-your-organization
Azure AD applications
Applications are registered with an Azure AD tenant within Azure Active Directory. Registering an applica-
tion creates an identity configuration. You also determine who can use it:
●● Accounts in the same organizational directory.
●● Accounts in any organizational directory.
●● Accounts in any organizational directory and Microsoft Accounts (personal).
●● Microsoft Accounts (Personal accounts only).
Client secret
Once the application is created, you then should create at least one client secret for the application.
Grant permissions
The application identity can then be granted permissions within services and resources that trust Azure
Active Directory.
Service principal
To access resources, an entity must be represented by a security principal. To connect, the entity must
know:
●● TenantID.
●● ApplicationID.
●● Client Secret.
For more information on Service Principals, see App Objects and Service Principals29.
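A service principal that exposes these three values can be created directly with the Azure CLI. The following is a minimal sketch; the display name, role, and subscription ID are placeholders, and the output contains the appId (ApplicationID), password (client secret), and tenant (TenantID) that a pipeline service connection would use:
az ad sp create-for-rbac \
    --name MyPipelineServicePrincipal \
    --role Contributor \
    --scopes /subscriptions/<subscription-id>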
The traditional answer would have been to use SQL Authentication with a username and password. It
leaves yet another credential that needs to be managed on an ongoing basis.
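A managed identity avoids that extra credential altogether. As an illustrative sketch (the resource names are hypothetical), a system-assigned identity can be enabled on a web app so the app can authenticate to services such as Azure SQL Database without a stored username and password:
# Enable a system-assigned managed identity on the web app
az webapp identity assign \
    --resource-group MyResourceGroup \
    --name MyWebApp
The identity can then be granted access to the database, and the connection string no longer needs to carry a password at all.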
29 https://docs.microsoft.com/azure/active-directory/develop/app-objects-and-service-principals
Summary
This module described the integration with GitHub and single sign-on (SSO) for authentication, service
principal, and managed service identities.
You learned how to describe the benefits and usage of:
●● Integrate Azure DevOps with identity management systems.
●● Integrate GitHub with single sign-on (SSO).
●● Understand and create a service principal.
●● Create managed service identities.
Learn more
●● About security, authentication, authorization, and security policies - Azure DevOps | Microsoft
Docs31.
●● Azure Identity and Access Management Solutions | Microsoft Azure32.
●● About authentication with SAML single sign-on - GitHub Docs33.
●● Connect to Microsoft Azure - Azure Pipelines | Microsoft Docs34.
30 https://docs.microsoft.com/azure/active-directory/managed-identities-azure-resources/overview
31 https://docs.microsoft.com/azure/devops/organizations/security/about-security-identity
32 https://azure.microsoft.com/product-categories/identity/
33 https://docs.github.com/authentication/authenticating-with-saml-single-sign-on/about-authentication-with-saml-single-sign-on
34 https://docs.microsoft.com/azure/devops/pipelines/library/connect-to-azure
Learning objectives
After completing this module, students and professionals can:
●● Rethink application configuration data.
●● Understand separation of concerns.
●● Integrate Azure Key Vault with Azure Pipelines.
●● Manage secrets, tokens, and certificates.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Example
It's 2:00 AM. Adam is done making all changes to his super fantastic code piece.
The tests are all running fine. Adam hits commit -> push -> all commits are pushed successfully to Git. Happily, Adam drives back home. Ten minutes later, Adam gets a call from the SecurityOps engineer: “Adam, did you push the Secret Key to our public repo?”
YIKES! That blah.config file, Adam thinks. How could I have forgotten to include that in .gitignore? The nightmare has already begun.
We can surely blame Adam for checking in sensitive secrets and not following the recommended practices for managing configuration files.
Still, the bigger point is this: if the underlying toolchain had abstracted configuration management away from the developer, this fiasco would never have happened!
History
The virus was injected a long time ago. Since the early days of .NET, the app.config and web.config files have carried the notion that developers can make their code flexible by moving typical configuration into these files.
When used effectively, these files enable dynamic configuration changes. Too often, however, we see misuse of what goes into these files.
A common culprit is how samples and documentation have been written. Most samples on the web use these config files to store key elements such as ConnectionStrings and even passwords.
The values might be obfuscated, but what we're telling developers is: “Hey, it's a great place to put your secrets!”
So, in a world where we're preaching the use of configuration files, we can't blame developers for mismanaging their governance.
We aren't challenging the use of configuration here. It's an absolute need for an exemplary implementation. Instead, we should debate using multiple JSON, XML, and YAML files to maintain configuration settings.
Configuration files are great for ensuring the flexibility of the application. However, managing them isn't straightforward, especially across environments.
Let's define some roles to elaborate on them. None of those are new concepts but rather a high-level
summary:
●● Configuration custodian: Responsible for generating and maintaining the life cycle of configuration values. This includes CRUD on keys, ensuring the security of secrets, regeneration of keys and tokens, and defining configuration settings such as log levels for each environment. This role can be owned by operations and security engineers, who inject configuration files through proper DevOps processes and CI/CD implementation. They don't define the actual configuration but are custodians of its management.
●● Configuration consumer: Responsible for defining the schema (loose term) for the configuration that needs to be in place and then consuming the configuration values in the application or library code. This role is typically owned by the development and test teams. They shouldn't be concerned with the values of keys but rather with what each key's capability is. For example, a developer may need a different ConnectionString in the application but not know the actual value across different environments.
●● Configuration store: The underlying store used to hold the configuration. While it can be a simple file, in a distributed application it needs to be a reliable store that can work across environments. The store is responsible for persisting values that modify the application's behavior per environment but aren't sensitive and don't require any encryption or HSM modules.
●● Secret store: While you can store configuration and secrets together, it violates our separation of concerns principle, so the recommendation is to use a different store for persisting secrets. It allows a secure channel for sensitive configuration data such as ConnectionStrings, enables the operations team to keep credentials, certificates, and tokens in one repository, and minimizes the security risk if the configuration store gets compromised.
Depending on the type of backing store used and its latency, it might be helpful to implement a caching
mechanism within the external configuration store.
For more information, see the Caching Guidance. The figure illustrates an overview of the External Config-
uration Store pattern with optional local cache.
Keys
Keys serve as the name for key-value pairs and are used to store and retrieve corresponding values.
It's common to organize keys into a hierarchical namespace by using a character delimiter, such as / or :.
Use a convention that's best suited for your application.
App Configuration treats keys as a whole. It doesn't parse keys to figure out how their names are struc-
tured or enforce any rule on them.
Keys stored in App Configuration are case-sensitive, Unicode-based strings.
The keys app1 and App1 are distinct in an App Configuration store.
When you use configuration settings within an application, keep it in mind because some frameworks
handle configuration keys case-insensitively.
You can use any Unicode character in key names entered into App Configuration except for *, ,, and \.
These characters are reserved. If you need to include a reserved character, you must escape it by using \
{Reserved Character}.
There's a combined size limit of 10,000 characters on a key-value pair.
This limit includes all characters in the key, its value, and all associated optional attributes.
Within this limit, you can have many hierarchical levels for keys.
Label keys
Key values in App Configuration can optionally have a label attribute.
Labels are used to differentiate key values with the same key.
A key app1 with labels A and B forms two separate keys in an App Configuration store.
By default, the label for a key value is empty or null.
Label provides a convenient way to create variants of a key. A common use of labels is to specify multiple
environments for the same key:
Key = AppName:DbEndpoint & Label = Test
Key = AppName:DbEndpoint & Label = Staging
Key = AppName:DbEndpoint & Label = Production
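For instance, the environment-specific variants above could be created in an App Configuration store with the Azure CLI; the store name and value below are hypothetical:
# Create the Test variant of the key (repeat with --label Staging and --label Production)
az appconfig kv set \
    --name MyAppConfigStore \
    --key AppName:DbEndpoint \
    --value "test-sql.example.com" \
    --label Test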
Values
Values assigned to keys are also Unicode strings. You can use all Unicode characters for values.
There's an optional user-defined content type associated with each value.
Use this attribute to store information, for example, an encoding scheme, about a value that helps your
application process it properly.
Configuration data stored in an App Configuration store, which includes all keys and values, is encrypted
at rest and in transit.
App Configuration isn't a replacement solution for Azure Key Vault. Don't store application secrets in it.
Basic concepts
Here are several new terms related to feature management:
●● Feature flag: A feature flag is a variable with a binary state of on or off. The feature flag also has an
associated code block. The state of the feature flag triggers whether the code block runs or not.
●● Feature manager: A feature manager is an application package that handles the lifecycle of all the
feature flags in an application. The feature manager typically provides more functionality, such as
caching feature flags and updating their states.
●● Filter: A filter is a rule for evaluating the state of a feature flag. A user group, a device or browser type,
a geographic location, and a time window are all examples of what a filter can represent.
Effective implementation of feature management consists of at least two components working in concert:
●● An application that makes use of feature flags.
●● A separate repository that stores the feature flags and their current states.
How these components interact is illustrated in the following examples.
if (featureFlag) {
// Run the following code.
}
In this case, if featureFlag is set to True, the enclosed code block is executed; otherwise, it's skipped.
You can set the value of featureFlag statically, as in the following code example:
bool featureFlag = true;
You can also evaluate the flag's state based on certain rules:
bool featureFlag = isBetaUser();
A slightly more complicated feature flag pattern includes an else statement as well:
if (featureFlag) {
    // This code will run if the featureFlag value is true.
} else {
    // This code will run if the featureFlag value is false.
}
35 https://docs.microsoft.com/azure/key-vault/key-vault-overview
●● Store secrets backed by hardware security modules - The secrets and keys can be protected by software or by FIPS 140-2 Level 2 validated HSMs.
RBAC is used when dealing with the management of the vaults, and a key vault access policy is used
when attempting to access data stored in a vault.
Azure Key Vaults may be either software- or hardware-HSM protected.
You can import or generate keys in hardware security modules (HSMs) that never leave the HSM boundary when you require added assurance.
Microsoft uses Thales hardware security modules. You can use Thales tools to move a key from your HSM
to Azure Key Vault.
Finally, Azure Key Vault is designed so that Microsoft doesn't see or extract your data.
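As a quick sketch of the software-protected option, a vault and a secret can be created with the Azure CLI; the names and value here are placeholders:
# Create the vault
az keyvault create \
    --resource-group MyResourceGroup \
    --name my-example-vault-001 \
    --location westus
# Store a secret in it
az keyvault secret set \
    --vault-name my-example-vault-001 \
    --name ExampleSecret \
    --value "<secret-value>"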
Summary
This module explored ways to rethink application configuration data and the separation of concerns
method. It helped you understand configuration patterns and how to integrate Azure Key Vault with
Azure Pipelines.
You learned how to describe the benefits and usage of:
●● Rethink application configuration data.
●● Understand separation of concerns.
●● Integrate Azure Key Vault with Azure Pipelines.
●● Manage secrets, tokens, and certificates.
Learn more
●● Use Azure Key Vault secrets in Azure Pipelines - Azure Pipelines | Microsoft Docs36.
●● Define variables - Azure Pipelines | Microsoft Docs37.
●● Integrate Azure App Configuration using a continuous integration and delivery pipeline |
Microsoft Docs38.
36 https://docs.microsoft.com/azure/devops/pipelines/release/azure-key-vault
37 https://docs.microsoft.com/azure/devops/pipelines/process/variables
38 https://docs.microsoft.com/azure/azure-app-configuration/integrate-ci-cd-pipeline
Labs
Lab 11: Configuring pipelines as code with YAML
Lab overview
Many teams prefer to define their build and release pipelines using YAML. This allows them to access the
same pipeline features as those using the visual designer, but with a markup file that can be managed
like any other source file. YAML build definitions can be added to a project by simply adding the corre-
sponding files to the root of the repository. Azure DevOps also provides default templates for popular
project types, as well as a YAML designer to simplify the process of defining build and release tasks.
Objectives
After you complete this lab, you will be able to:
●● configure CI/CD pipelines as code with YAML in Azure DevOps
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions39
39 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
●● configure pipeline to retrieve the password from the Azure Key vault and pass it on to subsequent
tasks.
Objectives
After you complete this lab, you will be able to:
●● Create an Azure Active Directory (Azure AD) service principal.
●● Create an Azure key vault.
●● Track pull requests through the Azure DevOps pipeline.
Lab duration
●● Estimated time: 40 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions40
Objectives
After you complete this lab, you will be able to:
●● Configure a self-hosted Azure DevOps agent
●● Configure release pipeline
●● Trigger build and release
●● Run tests in Chrome and Firefox
Lab duration
●● Estimated time: 60 minutes
40 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
41 http://www.seleniumhq.org/
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions42
42 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which of the following choices is a job type you can run?
Multi-configuration.
Multi-job.
Multi-pool.
Multiple choice
Which of the following choices is how many deployment jobs can be run concurrently by a single agent?
3
5
1
Multiple choice
Which of the following choices describes the job type correctly?
None: Tasks will run on a single agent or more agents in parallel.
Multi-configuration: Run the same set of tasks with a single configuration per agent.
Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents.
Multiple choice
Which of the following choices isn't a technology or tool for configuring a target server during the deploy-
ment?
PowerShell Desired State Configuration (DSC).
Chef.
Azure App Configuration.
Multiple choice
Which of the following choices do you need to create when a pipeline needs access to resources?
Service Hook.
Service Connection.
Agent Pool.
Multiple choice
Which of the following choices is recommended to authenticate to Azure with Service Connections?
User and Password.
Personal Access Token.
Service Principal.
Multiple choice
Which of the following choices is correct about Azure Pipelines?
Azure Pipelines allow creating custom build/release tasks.
Azure Pipelines only allow native tasks or extensions from the marketplace.
Azure Pipelines automatically create tasks based on PowerShell.
Multiple choice
Which of the following choices should you create to store values that you want to make available across
multiple builds and release pipelines?
Task Group.
Variable Group.
Deployment Group.
Multiple choice
Which of the following choices isn't an advantage of creating your own task?
You can safely and efficiently distribute across your organization.
You can use and reuse a secure endpoint to a target server.
Users can see the implementation details.
Multiple choice
Which of the following choices allows automatic collection of health signals from external services and
promotes the release when all the signals are successful?
Release gates.
Events.
Service Hooks.
Multiple choice
Which of the following choices are raised when certain actions occur, like when a release is started or a build
completed?
Service Hooks.
Events.
Release gates.
Multiple choice
Which of the following choices are usually emails you receive when an action occurs to which you're
subscribed?
Notifications.
Service Hooks.
Events.
Multiple choice
Which of the following roles is responsible for generating and maintaining the life cycle of configuration
values?
Configuration Custodian.
Configuration Consumer.
Developer.
Multiple choice
Which of the following choices enables applications to have no direct access to the secrets, which helps
improve the security & control?
Library.
Azure Key Vault.
Azure Pipelines.
Multiple choice
Which of the following choices isn't a benefit of Azure Key Vault?
Code secure files.
Certificate management.
Secrets management.
Multiple choice
Which of the following level choices do you need to connect your identity provider to GitHub to use SSO?
At the organization level.
At the project level.
At the repository level.
Multiple choice
Applications are registered with an Azure AD tenant within Azure Active Directory. Registering an applica-
tion creates an identity configuration. Which of the following choices isn't correct about who can use it?
Accounts in the same organizational directory.
Azure DevOps groups.
Accounts in any organizational directory and Microsoft Accounts (personal).
Multiple choice
Which of the following choices isn't a type of managed identities?
Organization-assigned.
System-assigned.
User-assigned.
Multiple choice
Which of the following choices isn't a scenario that App Configuration makes it easier to implement?
Centralize management and distribution of hierarchical configuration data for different environments
and geographies.
Dynamically change application settings without the need to redeploy or restart an application.
Create, manage and centralize packages as a Package Management.
Multiple choice
Which of the following choices isn't a general approach to naming keys used for configuration data?
Cross-key.
Flat.
Hierarchical.
Multiple choice
Which of the following choices isn't one of two components working in concert for effective implementation
of feature management?
An application that makes use of feature flags.
A monolith application that needs a redesign to microservices.
A separate repository that stores the feature flags and their current states.
Answers
Multiple choice
Which of the following choices is a job type you can run?
■■ Multi-configuration.
Multi-job.
Multi-pool.
Explanation
There are three different types of jobs you can run. Multi-configuration, Multi-agent, and None (Tasks will
run on a single agent).
Multiple choice
Which of the following choices is how many deployment jobs can be run concurrently by a single agent?
3
5
■■ 1
Explanation
The agent can only execute one job at the same time.
Multiple choice
Which of the following choices describes the job type correctly?
None: Tasks will run on a single agent or more agents in parallel.
Multi-configuration: Run the same set of tasks with a single configuration per agent.
■■ Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents.
Explanation
It's Multi-agent: Run the same set of tasks on multiple agents using the specified number of agents.
Multiple choice
Which of the following choices isn't a technology or tool for configuring a target server during the
deployment?
PowerShell Desired State Configuration (DSC).
Chef.
■■ Azure App Configuration.
Explanation
Azure App Configuration provides a service to manage application settings and feature flags centrally.
When you want to configure the server, you should look at technologies like PowerShell Desired State
Configuration(DSC), or use tools like Puppet and Chef.
Multiple choice
Which of the following choices do you need to create when a pipeline needs access to resources?
Service Hook.
■■ Service Connection.
Agent Pool.
Explanation
When a pipeline needs access to resources, you will often need to provision service connections.
Multiple choice
Which of the following choices is recommended to authenticate to Azure with Service Connections?
User and Password.
Personal Access Token.
■■ Service Principal.
Explanation
The Service Principal is a type of service account that only has permissions in the specific subscription and
resource group. This makes it a very safe way to connect from the pipeline.
Multiple choice
Which of the following choices is correct about Azure Pipelines?
■■ Azure Pipelines allow creating custom build/release tasks.
Azure Pipelines only allow native tasks or extensions from the marketplace.
Azure Pipelines automatically create tasks based on PowerShell.
Explanation
Azure Pipelines allows creating custom build/release tasks.
Multiple choice
Which of the following choices should you create to store values that you want to make available across
multiple builds and release pipelines?
Task Group.
■■ Variable Group.
Deployment Group.
Explanation
A variable group is used to store values that you want to make available across multiple builds and release
pipelines.
Multiple choice
Which of the following choices isn't an advantage of creating your own task?
You can safely and efficiently distribute across your organization.
You can use and reuse a secure endpoint to a target server.
■■ Users can see the implementation details.
Explanation
Users do not see implementation details.
Multiple choice
Which of the following choices allows automatic collection of health signals from external services and
promotes the release when all the signals are successful?
■■ Release gates.
Events.
Service Hooks.
Explanation
Release gates allow automatic collection of health signals from external services and then promote the
release when all the signals are successful at the same time or stop the deployment on timeout.
Multiple choice
Which of the following choices are raised when certain actions occur, like when a release is started or a
build completed?
Service Hooks.
■■ Events.
Release gates.
Explanation
Events are raised when certain actions occur, like when a release is started or a build completed.
Multiple choice
Which of the following choices are usually emails you receive when an action occurs to which you're
subscribed?
■■ Notifications.
Service Hooks.
Events.
Explanation
Notifications are usually emails that you receive when an event occurs to which you are subscribed.
Multiple choice
Which of the following roles is responsible for generating and maintaining the life cycle of configuration
values?
■■ Configuration Custodian.
Configuration Consumer.
Developer.
Explanation
It's the Configuration Custodian. It includes CRUD on keys, ensuring the security of secrets, regeneration of
keys and tokens, defining configuration settings such as Log levels for each environment.
Multiple choice
Which of the following choices enables applications to have no direct access to the secrets, which helps
improve the security & control?
Library.
■■ Azure Key Vault.
Azure Pipelines.
Explanation
Azure Key Vault allows you to manage your organization's secrets and certificates in a centralized reposito-
ry.
Multiple choice
Which of the following choices isn't a benefit of Azure Key Vault?
■■ Code secure files.
Certificate management.
Secrets management.
Explanation
Azure Key Vault helps with Secrets management, Key management, Certificate management, and store
secrets backed by hardware security modules.
Multiple choice
Which of the following level choices do you need to connect your identity provider to GitHub to use
SSO?
■■ At the organization level.
At the project level.
At the repository level.
Explanation
To use SSO, you need to connect your identity provider to GitHub at the organization level.
Multiple choice
Applications are registered with an Azure AD tenant within Azure Active Directory. Registering an applica-
tion creates an identity configuration. Which of the following choices isn't correct about who can use it?
Accounts in the same organizational directory.
■■ Azure DevOps groups.
Accounts in any organizational directory and Microsoft Accounts (personal).
Explanation
It can be used by accounts in the same organizational directory, accounts in any organizational directory, accounts in any organizational directory plus Microsoft Accounts (personal), or Microsoft Accounts (personal accounts only).
Multiple choice
Which of the following choices isn't a type of managed identities?
■■ Organization-assigned.
System-assigned.
User-assigned.
Explanation
There are two types of managed identities. System-assigned and User-assigned.
Multiple choice
Which of the following choices isn't a scenario that App Configuration makes it easier to implement?
Centralize management and distribution of hierarchical configuration data for different environments
and geographies.
Dynamically change application settings without the need to redeploy or restart an application.
■■ Create, manage and centralize packages as a Package Management.
Explanation
App Configuration makes it easier to implement the following scenarios, Centralize management and
distribution of hierarchical configuration data for different environments and geographies. Dynamically
change application settings without the need to redeploy or restart an application. Control feature availabil-
ity in real-time.
Multiple choice
Which of the following choices isn't a general approach to naming keys used for configuration data?
■■ Cross-key.
Flat.
Hierarchical.
Explanation
There are two general approaches, Flat and Hierarchical.
Multiple choice
Which of the following choices isn't one of two components working in concert for effective implementa-
tion of feature management?
An application that makes use of feature flags.
■■ A monolith application that needs a redesign to microservices.
A separate repository that stores the feature flags and their current states.
Explanation
Effective implementation of feature management consists of at least two components working in concert, an
application that uses feature flags and a separate repository that stores the feature flags and their current
states.
Module 6 Manage infrastructure as code using
Azure and DSC
Learning objectives
After completing this module, students and professionals can:
●● Understand how to deploy your environment.
●● Plan your environment configuration.
●● Choose between imperative versus declarative configuration.
●● Explain idempotent configuration.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
The following table lists the significant differences between manual deployment and infrastructure as
code.
configuring machines. The term infrastructure as code is also sometimes used to include configuration as
code, but not vice versa.
●● Imperative (procedural). In the imperative approach, the script states the how for the final state of the machine by executing the steps to get to the finished state. It defines what the final state needs to be but also includes how to achieve that final state. It can also include coding concepts such as for and if-then constructs, loops, and matrices.
Best practices
The declarative approach abstracts away the methodology of how a state is achieved. As such, it can be
easier to read and understand what is being done.
It also makes it easier to write and define. Declarative approaches also separate the final desired state
and the coding required to achieve that state.
So, it doesn't force you to use a particular approach, allowing for optimization.
A declarative approach would generally be the preferred option where ease of use is the primary goal.
Azure Resource Manager template files are an example of a declarative automation approach.
An imperative approach may have some advantages in complex scenarios where changes in the environ-
ment occur relatively frequently, which need to be accounted for in your code.
There's no absolute on which is the best approach to take, and individual tools may be used in either
declarative or imperative forms. The best approach for you to take will depend on your needs.
In essence, if you apply a deployment to a set of resources 1,000 times, you should end up with the same
result after each application of the script or template.
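A simple way to see idempotency in practice is with the Azure CLI; the resource group name below is illustrative. Running the same declarative command twice leaves you with exactly one resource group in the desired state:
az group create --name MyResourceGroup --location westus
# Running it again changes nothing; the result is still one resource group
az group create --name MyResourceGroup --location westus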
Summary
This module described key infrastructure concepts as code and environment deployment creation and
configuration and explored imperative, declarative, and idempotent configuration and how it applies to
your company.
You learned how to describe the benefits and usage of:
●● Understand how to deploy your environment.
●● Plan your environment configuration.
●● Choose between imperative versus declarative configuration.
●● Explain idempotent configuration.
Learn more
●● Create target environment - Azure Pipelines | Microsoft Docs2.
1 https://www.wintellect.com/idempotency-for-windows-azure-message-queues/
2 https://docs.microsoft.com/azure/devops/pipelines/process/environments
●● Integrate DevTest Labs environments into Azure Pipelines - Azure DevTest Labs | Microsoft
Docs3.
●● What is Infrastructure as Code? - Azure DevOps | Microsoft Docs4.
●● Repeatable Infrastructure - Azure Architecture Center | Microsoft Docs5.
●● Infrastructure as code | Microsoft Docs6.
3 https://docs.microsoft.com/azure/devtest-labs/integrate-environments-devops-pipeline
4 https://docs.microsoft.com/devops/deliver/what-is-infrastructure-as-code
5 https://docs.microsoft.com/azure/architecture/framework/devops/automation-infrastructure
6 https://docs.microsoft.com/dotnet/architecture/cloud-native/infrastructure-as-code
Learning objectives
After completing this module, students and professionals can:
●● Create Azure resources using Azure Resource Manager templates.
●● Understand Azure Resource Manager templates and template components.
●● Manage dependencies and secrets in templates.
●● Organize and modularize templates.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
GIT. Its revision history also records how the template (and your deployment) has evolved when you
change the template.
●● Templates promote reuse. Your template can contain parameters that are filled in when the template
runs. A parameter can define a username or password, a domain name, and other necessary items.
Template parameters also enable you to create multiple versions of your infrastructure, such as
staging and production, while still using the same template.
●● Templates are linkable. You can link Resource Manager templates together to make the templates
themselves modular. You can write small templates that define a solution and then combine them to
create a complete system.
Azure provides many quickstart templates. You might use them as a base for your work.
Parameters
This section is where you specify which values are configurable when the template runs.
For example, you might allow template users to set a username, password, or domain name.
Here's an example that illustrates two parameters: one for a virtual machine's (VMs) username and one
for its password:
"parameters": {
"adminUsername": {
"type": "string",
"metadata": {
"description": "Username for the Virtual Machine."
}
},
"adminPassword": {
"type": "securestring",
"metadata": {
"description": "Password for the Virtual Machine."
}
}
}
Variables
This section is where you define values that are used throughout the template.
Variables can help make your templates easier to maintain.
For example, you might define a storage account name one time as a variable and then use that variable
throughout the template.
If the storage account name changes, you need only update the variable once.
Here's an example that illustrates a few variables that describe networking features for a VM:
"variables": {
"nicName": "myVMNic",
"addressPrefix": "10.0.0.0/16",
"subnetName": "Subnet",
"subnetPrefix": "10.0.0.0/24",
"publicIPAddressName": "myPublicIP",
"virtualNetworkName": "MyVNET"
}
Functions
This section is where you define procedures that you don't want to repeat throughout the template.
Like variables, functions can help make your templates easier to maintain.
Here's an example that creates a function for creating a unique name to use when creating resources that
have globally unique naming requirements:
"functions": [
{
"namespace": "contoso",
"members": {
"uniqueName": {
"parameters": [
{
"name": "namePrefix",
"type": "string"
}
],
"output": {
"type": "string",
"value": "[concat(toLower(parameters('namePrefix')), uniqueS-
tring(resourceGroup().id))]"
}
}
}
}
],
Resources
This section is where you define the Azure resources that make up your deployment.
Here's an example that creates a public IP address resource:
{
"type": "Microsoft.Network/publicIPAddresses",
"name": "[variables('publicIPAddressName')]",
"location": "[parameters('location')]",
"apiVersion": "2018-08-01",
"properties": {
"publicIPAllocationMethod": "Dynamic",
"dnsSettings": {
"domainNameLabel": "[parameters('dnsLabelPrefix')]"
}
}
}
Outputs
This section is where you define any information you'd like to receive when the template runs.
For example, you might want to receive your VM's IP address or fully qualified domain name (FQDN), the
information you won't know until the deployment runs.
Manage dependencies
For any given resource, other resources might need to exist before you can deploy the resource.
For example, a Microsoft SQL Server must exist before attempting to deploy a SQL Database.
You can define this relationship by marking one resource as dependent on the other.
You define a dependency with the dependsOn element or by using the reference function.
Resource Manager evaluates the dependencies between resources and deploys them in their dependent
order.
When resources aren't dependent on each other, the Resource Manager deploys them in parallel.
You only need to define dependencies for resources that are deployed in the same template.
Circular dependencies
A circular dependency is a problem with dependency sequencing, resulting in the deployment going
around in a loop and unable to continue.
As a result, the Resource Manager can't deploy the resources.
Resource Manager identifies circular dependencies during template validation.
If you receive an error stating that a circular dependency exists, evaluate your template to find whether
any dependencies are unnecessary and can be removed.
If removing dependencies doesn't resolve the issue, you can move some deployment operations into
child resources that are deployed after the resources with the circular dependency.
Modularize templates
When using Azure Resource Manager templates, it's best to modularize them by breaking them into
individual components.
The primary way to do this is by using linked templates.
It allows you to break out the solution into targeted components and reuse those various elements
across different deployments.
Linked template
To link one template to another, add a deployment resource to your main template.
"resources": [
{
"apiVersion": "2017-05-10",
"name": "linkedTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
<link-to-external-template>
}
}
]
Nested template
You can also nest a template within the main template by using the template property and specifying the template syntax inline.
Nesting somewhat aids modularization, but dividing up the various components this way can result in a sizeable main file, because all the elements are within that single file.
"resources": [
{
"apiVersion": "2017-05-10",
"name": "nestedTemplate",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"resources": [
{
"type": "Microsoft.Storage/storageAccounts",
"name": "[variables('storageName')]",
"apiVersion": "2015-06-15",
"location": "West US",
"properties": {
"accountType": "Standard_LRS"
}
}
]
}
}
}
]
Note: For nested templates, you can't use parameters or variables defined within the nested template
itself. You can only use parameters and variables from the main template.
The properties you provide for the deployment resource will vary based on whether you're linking to an external template or nesting an inline template within the main template.
Deployments modes
When deploying your resources using templates, you have three options:
●● validate. This option compiles the template, validates the deployment, and ensures the template is functional (for example, has no circular dependencies) and syntactically correct.
●● incremental mode (default). This option only deploys whatever is defined in the template. It doesn't
remove or modify any resources that aren't defined in the template. For example, if you've deployed a
VM via template and then renamed the VM in the template, the first VM deployed will remain after
the template is rerun. It's the default mode.
●● complete mode: Resource Manager deletes resources that exist in the resource group but aren't specified in the template. For example, only resources defined in the template will be present in the resource group after the template deploys. As a best practice, use this mode for production environments where possible to try to achieve idempotency in your deployment templates.
When deploying with PowerShell, use the Mode parameter to set the deployment mode; in the template itself, the mode property plays the same role, as in the nested template example earlier in this topic.
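With the Azure CLI, the deployment mode is set with the --mode parameter. A minimal sketch, assuming a resource group and template file like those used elsewhere in this module:
az deployment group create \
    --resource-group MyResourceGroup \
    --template-file azuredeploy.json \
    --mode Complete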
Note: As a best practice, use one resource group per deployment.
Note: For both linked and nested templates, you can only use incremental deployment mode.
As such, you can only provide a Uniform Resource Identifier (URI) value that includes either HTTP or
HTTPS.
One option is to place your linked template in a storage account and use the URI for that item.
You can also provide the parameter inline. However, you can't use both inline parameters and a link to a
parameter file.
The following example uses the templateLink parameter:
"resources": [
{
"name": "linkedTemplate",
"type": "Microsoft.Resources/deployments",
"apiVersion": "2018-05-01",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri":"https://linkedtemplateek1store.blob.core.windows.net/
linkedtemplates/linkedStorageAccount.json?sv=2018-03-28&sr=b&sig=dO9p7Xnbh-
Gq56BO%2BSW3o9tX7E2WUdIk%2BpF1MTK2eFfs%3D&se=2018-12-31T14%3A32%3A29Z&sp=r"
},
"parameters": {
"storageAccountName":{"value": "[variables('storageAccount-
Name')]"},
"location":{"value": "[parameters('location')]"}
}
}
},
The Key Vault can exist in a different subscription than the resource group you're deploying it to.
The following template deploys an SQL database that includes an administrator password.
The password parameter is set to a secure string. However, the template doesn't specify where that value
comes from:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminLogin": {
"type": "string"
},
"adminPassword": {
"type": "securestring"
},
"sqlServerName": {
"type": "string"
}
},
"resources": [
{
"name": "[parameters('sqlServerName')]",
"type": "Microsoft.Sql/servers",
"apiVersion": "2015-05-01-preview",
"location": "[resourceGroup().location]",
"tags": {},
"properties": {
"administratorLogin": "[parameters('adminLogin')]",
"administratorLoginPassword": "[parameters('adminPassword')]",
"version": "12.0"
}
}
],
"outputs": {
}
}
Now you can create a parameter file for the preceding template. In the parameter file, specify a parame-
ter that matches the name of the parameter in the template.
For the parameter value, reference the secret from the Key Vault. You reference the secret by passing the
resource identifier of the Key Vault and the secret's name.
The Key Vault secret must already exist; in the following parameter file, you provide a static value for its resource ID.
Copy this file locally, and set the subscription ID, vault name, and SQL server name:
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/
deploymentParameters.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"adminLogin": {
"value": "exampleadmin"
},
"adminPassword": {
"reference": {
"keyVault": {
"id": "/subscriptions/<subscription-id>/resourceGroups/
examplegroup/providers/Microsoft.KeyVault/vaults/<vault-name>"
},
"secretName": "examplesecret"
}
},
"sqlServerName": {
"value": "<your-server-name>"
}
}
}
You would need to deploy the template and pass the parameter file to the template.
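For example, assuming the template and parameter file above were saved as sqlserver.json and sqlserver.parameters.json (the file names are illustrative), the deployment could be run with the Azure CLI:
az deployment group create \
    --resource-group examplegroup \
    --template-file sqlserver.json \
    --parameters @sqlserver.parameters.json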
For more information, see Use Azure Key Vault to pass secure parameter values during deployment7.
That page also includes details on referencing a secret with a dynamic ID.
Summary
This module explores Azure Resource Manager templates and their components and details dependen-
cies and modularized templates with secrets.
You learned how to describe the benefits and usage of:
●● Create Azure resources using Azure Resource Manager templates.
●● Understand Azure Resource Manager templates and template components.
●● Manage dependencies and secrets in templates.
7 https://docs.microsoft.com/azure/azure-resource-manager/resource-manager-keyvault-parameter
Learn more
●● Connect to Microsoft Azure - Azure Pipelines | Microsoft Docs8.
●● CI/CD with Azure Pipelines and templates - Azure Resource Manager | Microsoft Docs9.
●● Security through templates - Azure Pipelines | Microsoft Docs10.
8 https://docs.microsoft.com/azure/devops/pipelines/library/connect-to-azure
9 https://docs.microsoft.com/azure/azure-resource-manager/templates/add-template-to-azure-pipelines
10 https://docs.microsoft.com/azure/devops/pipelines/security/templates
Learning objectives
After completing this module, students and professionals can:
●● Create Azure resources using Azure CLI.
●● Understand and work with Azure CLI.
●● Run templates using Azure CLI.
●● Explains Azure CLI commands.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps11.
●● If you already have your organization created, use the Azure DevOps Demo Generator [https://
azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called “Parts Unlimit-
ed” using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project -
Azure DevOps12.
Azure CLI provides cross-platform command-line tools for managing Azure resources.
You can install it locally on computers running the Linux, macOS, or Windows operating systems.
You can also use Azure CLI from a browser through Azure Cloud Shell.
11 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
12 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
In both cases, you can use Azure CLI interactively or through scripts:
●● Interactive. For Windows operating systems, launch a shell such as cmd.exe, or for Linux or macOS,
use Bash. Then issue the command at the shell prompt.
●● Scripted. Assemble the Azure CLI commands into a shell script using the script syntax of your chosen
shell, and then execute the script.
If you know the name of the command you want, the help argument for that command will get you more detailed information on it, and a list of the available subcommands for a command group.
For example, here's how you would get a list of the subgroups and commands for managing blob
storage:
az storage blob --help
Creating resources
When creating a new Azure resource, typically, there are three high-level steps:
1. Connect to your Azure subscription.
2. Create the resource.
3. Verify that creation was successful.
1. Connect
Because you're working with a local Azure CLI installation, you'll need to authenticate before you can
execute Azure commands.
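Typically, you do that by running the az login command:
az login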
Azure CLI will typically launch your default browser to open the Azure sign in page.
If it doesn't work, follow the command-line instructions, and enter an authorization code in the Enter
Code13 dialog box.
After a successful sign in, you'll be connected to your Azure subscription.
2. Create
You'll often need to create a new resource group before you create a new Azure service.
So we'll use resource groups as an example to show how to create Azure resources from the Azure CLI.
The Azure CLI group create command creates a resource group.
You need to specify a name and location.
The name parameter must be unique within your subscription.
The location parameter determines where the metadata for your resource group will be stored.
You can use strings like “West US,” “North Europe,” or “West India” to specify the location. Alternatively, you can use single-word equivalents, such as “westus,” “northeurope,” or “westindia.”
The core syntax to create a resource group is:
az group create --name <name> --location <location>
3. Verify
For many Azure resources, Azure CLI provides a list subcommand to get resource details.
For example, the Azure CLI group list command lists your Azure resource groups.
It's helpful to verify whether resource group creation was successful:
az group list
To get more concise information, you can format the output as a simple table:
az group list --output table
If you have several items in the group list, you can filter the return values by adding a query option.
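For example, the following command returns only the resource group with a given name (the name here is illustrative):
az group list --query "[?name=='TestResourceGroup']" --output table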
13 https://aka.ms/devicelogin
Note: You format the query using JMESPath, which is a standard query language for JSON requests.
You can learn more about this filter language at http://jmespath.org/.
If you use a PowerShell environment for running Azure CLI scripts, you'll need to use the following syntax
for variables:
$variable="value"
$variable=integer
Steps
In the following steps, we'll deploy the template and verify the result using Azure CLI:
1. Create a resource group to deploy your resources to by running the following command:
az group create --name <resource group name> --location <your nearest
datacenter>
Note: Check which regions are available to you in Choose the Right Azure Region for You15. If you can't create resources in the nearest region, feel free to choose another one.
2. From Cloud Shell, run the curl command to download the template you used previously from GitHub:
curl https://raw.githubusercontent.com/Microsoft/PartsUnlimited/master/Labfiles/AZ-400T05_Implementing_Application_Infrastructure/M01/azuredeploy.json > azuredeploy.json
14 https://azure.microsoft.com/free/
15 https://azure.microsoft.com/global-infrastructure/geographies
3. Validate the template by running the following command, replacing the values with your own:
az deployment group validate \
--resource-group <resource group name> \
--template-file azuredeploy.json \
--parameters adminUsername=$USERNAME \
--parameters adminPassword=$PASSWORD \
--parameters dnsLabelPrefix=$DNS_LABEL_PREFIX
4. Deploy the resource by running the following command, replacing the same values as earlier:
az deployment group create \
--name MyDeployment \
--resource-group <resource group name> \
--template-file azuredeploy.json \
--parameters adminUsername=$USERNAME \
--parameters adminPassword=$PASSWORD \
--parameters dnsLabelPrefix=$DNS_LABEL_PREFIX
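The next step uses an $IPADDRESS variable that the text doesn't populate; one hedged way to set it, assuming you know the resource group and the VM name created by the template, is:
IPADDRESS=$(az vm show --resource-group <resource group name> --name <vm name> --show-details --query publicIps --output tsv)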
6. Run curl to access your web server and verify that the deployment and running of the custom script
extension were successful:
curl $IPADDRESS
Note: Don't forget to delete any resources you deployed to avoid incurring extra costs from them.
Summary
This module explained how to use Azure CLI to create Azure resources and run templates, and it detailed common Azure CLI commands.
You learned how to describe the benefits and usage of:
●● Create Azure resources using Azure CLI.
Learn more
●● Azure CLI task - Azure Pipelines | Microsoft Docs16.
●● How to install the Azure CLI | Microsoft Docs17.
●● Get started with Azure Command-Line Interface (CLI) | Microsoft Docs18.
16 https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-cli
17 https://docs.microsoft.com/cli/azure/install-azure-cli
18 https://docs.microsoft.com/cli/azure/get-started-with-azure-cli
19 https://azure.microsoft.com/documentation/articles/automation-intro/
20 https://docs.microsoft.com/azure/automation/automation-dsc-overview
update installations within a defined maintenance window. For more information, visit Azure Auto-
mation Update Management Deployment Plan21.
●● Start and stop virtual machines (VMs). Azure Automation provides an integrated Start/Stop VM–relat-
ed resource that enables you to start and stop VMs on user-defined schedules. It also provides
insights through Azure Log Analytics and can send emails by using action groups. For more informa-
tion, go to Start/Stop VMs during off-hours solution in Azure Automation22.
●● Integration with GitHub, Azure DevOps, Git, or Team Foundation Version Control repositories. For
more information, go to Source control integration in Azure Automation23.
●● Automate Amazon Web Services (AWS) Resources. Automate common tasks with resources in AWS
using Automation runbooks in Azure. For more information, go to Authenticate Runbooks with
Amazon Web Services24.
●● Manage Shared resources. Azure Automation consists of a set of shared resources (such as connec-
tions, credentials, modules, schedules, and variables) that make it easier to automate and configure
your environments at scale.
●● Run backups. Azure Automation allows you to run regular backups of non-database systems, such as
backing up Azure Blob Storage at certain intervals.
Azure Automation works across hybrid cloud environments in addition to Windows and Linux operating
systems.
This module describes Azure automation with Azure DevOps, using runbooks, webhooks, and PowerShell
workflows.
You'll learn how to create and manage automation for your environment.
Learning objectives
After completing this module, students and professionals can:
●● Implement automation with Azure DevOps.
●● Create and manage runbooks.
●● Create webhooks.
●● Create and run a workflow runbook and PowerShell workflows.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
21 https://docs.microsoft.com/en-us/azure/automation/update-management/plan-deployment
22 https://docs.microsoft.com/en-us/azure/automation/automation-solution-vm-management
23 https://docs.microsoft.com/azure/automation/source-control-integration
24 https://docs.microsoft.com/azure/automation/automation-config-aws-account
25 https://azure.microsoft.com/documentation/articles/automation-security-overview/
Steps to create an Azure Automation account are available on the Create an Azure Automation ac-
count26 page.
Automation accounts are like Azure Storage accounts in that they serve as a container to store automation artifacts.
The Automation account is a container for all your runbooks, runbook executions (jobs), and the assets on which your runbooks depend.
An Automation account gives you access to managing all Azure resources via an API. To safeguard it, the
Automation account creation requires subscription-owner access.
You must be a subscription owner to create the Run As accounts that the service creates.
If you don't have the proper subscription privileges, you'll see the following warning:
26 https://docs.microsoft.com/azure/automation/automation-quickstart-create-account
To use Azure Automation, you'll need at least one Azure Automation account.
However, as a best practice, you should create multiple automation accounts to segregate and limit the
scope of access and minimize any risk to your organization.
For example, you might use one account for development, another for production, and another for your
on-premises environment. You can have up to 30 Automation accounts.
What is a runbook?
Runbooks serve as repositories for your custom scripts and workflows.
They also typically reference Automation shared resources such as credentials, variables, connections, and
certificates.
Runbooks can also contain other runbooks, allowing you to build more complex workflows.
You can invoke and run runbooks either on-demand or according to a schedule by using Automation
Schedule assets.
Creating runbooks
When creating runbooks, you have two options. You can either:
●● Create your runbook and import it. For more information about creating or importing a runbook in
Azure Automation, go to Start a runbook in Azure Automation27.
●● Modify runbooks from the runbook gallery. It provides a rich ecosystem of runbooks that are available
for your requirements. Visit Runbook and module galleries for Azure Automation28 for more
information.
27 https://docs.microsoft.com/azure/automation/start-runbooks
28 https://docs.microsoft.com/azure/automation/automation-runbook-gallery
There's also a vibrant open-source community that creates runbooks you can apply directly to your use
cases.
You can choose from different runbook types based on your requirements and Windows PowerShell
experience.
If you prefer to work directly with Windows PowerShell code, you can use a PowerShell runbook or a
PowerShell Workflow runbook.
Using either of these, you can edit offline or with the textual editor in the Azure portal.
If you prefer to edit a runbook without being exposed to the underlying code, you can create a graphical
runbook using the Azure portal's graphic editor.
Graphical runbooks
Graphical runbooks and Graphical PowerShell Workflow runbooks are created and edited with the
graphic editor in the Azure portal.
You can export them to a file and import them into another automation account, but you can't create or
edit them with another tool.
PowerShell runbooks
PowerShell runbooks are based on Windows PowerShell. You edit the runbook code directly, using the
text editor in the Azure portal.
You can also use any offline text editor and then import the runbook into Azure Automation. PowerShell
runbooks don't use parallel processing.
Python runbooks
Python runbooks compile under Python 2.
You can directly edit the code of the runbook using the text editor in the Azure portal, or you can use any
offline text editor and import the runbook into Azure Automation.
You can also use Python libraries. However, only Python 2 is supported at this time.
To use third-party libraries, you must first import the package into the Automation Account.
Note: You can't convert runbooks from graphical to textual type, and the other way around.
For more information on the different types of runbooks, visit Azure Automation runbook types29.
As a best practice, always try to create global assets to be used across your runbooks.
It will save time and reduce the number of manual edits within individual runbooks.
29 https://azure.microsoft.com/documentation/articles/automation-runbook-types
30 https://github.com/azureautomation/runbooks
31 https://docs.microsoft.com/powershell/azure/new-azureps-module-az
32 https://github.com/azureautomation
Note: Python runbooks are also available from the Azure Automation organization on GitHub, in the runbooks repository.
To find them, filter by language and select Python.
Note: You can't use PowerShell to import directly from the runbook gallery.
Examine webhooks
You can automate starting a runbook either by scheduling it or by using a webhook.
A webhook allows you to start a particular runbook in Azure Automation through a single HTTPS
request.
It allows external services such as Azure DevOps, GitHub, or custom applications to start runbooks
without implementing more complex solutions using the Azure Automation API.
More information about webhooks is available at Starting an Azure Automation runbook with a
webhook33.
33 https://docs.microsoft.com/azure/automation/automation-webhooks
Create a webhook
You create a webhook linked to a runbook using the following steps:
1. In the Azure portal, open the runbook for which you want to create the webhook.
2. In the runbook pane, under Resources, select Webhooks, and then choose + Add webhook.
3. Select Create new webhook.
4. In the Create new webhook dialog, there are several values you need to configure. After you config-
ure them, select Create:
●● Name. Specify any name you want for a webhook because the name isn't exposed to the client.
It's only used for you to identify the runbook in Azure Automation.
●● Enabled. A webhook is enabled by default when it's created. If you set it to Disabled, then no
client can use it.
●● Expires. Each webhook has an expiration date, after which it can no longer be used. You can continue to modify the date after creating the webhook, provided the webhook hasn't expired.
●● URL. The webhook URL is the unique address that a client calls with an HTTP POST to start the
runbook linked to the webhook. It's automatically generated when you create the webhook, and
you can't specify a custom URL. The URL contains a security token that allows the runbook to be
invoked by a third-party system with no further authentication. For this reason, treat it like a
password. You can only view the URL in the Azure portal for security reasons when the webhook is
created. Make a note of the URL in a secure location for future use.
Note: When creating it, make sure you copy the webhook URL and then store it in a safe place. After you
create the webhook, you can't retrieve the URL again.
5. Select the Parameters run settings (Default: Azure) option. This option has the following characteristics:
●● If the runbook has mandatory parameters, you'll need to provide these required parameters
during creation. You aren't able to create the webhook unless values are provided.
●● If there are no mandatory parameters in the runbook, there's no configuration required here.
●● The webhook must include values for any mandatory parameters of the runbook and can include values for optional parameters.
●● When a client starts a runbook using a webhook, it can't override the parameter values defined.
●● To receive data from the client, the runbook can accept a single parameter called $WebhookData
of type [object] that contains data that the client includes in the POST request.
Using a webhook
To use a webhook after it has been created, your client application must issue an HTTP POST with the
URL for the webhook.
●● The syntax of the webhook is in the following format:
http://<Webhook Server>/token?=<Token Value>
●● The client receives one of the following return codes from the POST request.
The response will contain a single job ID, but the JSON format allows for potential future enhance-
ments.
●● You can't determine when the runbook job completes or determine its completion status from the webhook. You can only obtain this information using the job ID with another method such as PowerShell or the Azure Automation API.
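As a hedged sketch (the URL, parameter names, and values below are placeholders, not values from this module), a client could start the runbook from PowerShell like this:
$webhookUrl = "https://<webhook URL copied when the webhook was created>"
$body = ConvertTo-Json -InputObject @(@{ VMName = "vm001"; ResourceGroupName = "rg-demo" })
$response = Invoke-WebRequest -Method Post -Uri $webhookUrl -Body $body -UseBasicParsing
# The JSON response contains the job ID, which can be used later to query job status
$response.Content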
More details are available on the Starting an Azure Automation runbook with a webhook34 page.
34 https://docs.microsoft.com/azure/automation/automation-webhooks
Note: You'll require a GitHub account to complete the next step.
4. When the browser page opens, prompting you to authenticate to https://www.github.com35, select
Authorize azureautomation and enter your GitHub account password. If successful, you should
receive an email notification from GitHub stating that A third-party OAuth Application (Automation
Source Control) with repo scope was recently authorized to access your account.
5. After authentication completes, fill in the details based on the following properties, and then select Save.
●● Name. Friendly name.
●● Source control type. GitHub, Azure DevOps Git, or Azure DevOps TFVC.
●● Repository. The name of the repository or project.
●● Branch. The branch from which to pull the source files. Branch targeting isn't available for the TFVC source control type.
●● Folder Path. The folder that contains the runbooks to sync.
●● Autosync. Turns on or off automatic sync when a commit is made in the source control repository.
●● Publish Runbook. If set to On, after runbooks are synced from source control, they'll be automatically published.
●● Description. A text field to provide more details.
6. If you set Autosync to Yes, full sync will start. If you set Autosync to No, open the Source Control
Summary blade again by selecting your repository in Azure Automation and then selecting Start
Sync.
35 https://www.github.com/
7. Verify that your source control is listed on the Azure Automation Source control page for you to
use.
PowerShell Workflow lets IT pros and developers apply the benefits of Windows Workflow Foundation
with the automation capabilities and ease of using Windows PowerShell.
Tip: Refer to A Developer's Introduction to Windows Workflow Foundation (WF) in .NET 436 for
more information.
Windows PowerShell Workflow functionality was introduced in Windows Server 2012 and Windows 8 and
is part of Windows PowerShell 3.0 and later.
Windows PowerShell Workflow helps automate distribution, orchestration, and completion of multi-de-
vice tasks, freeing users and administrators to focus on higher-level tasks.
Activities
An activity is a specific task that you want a workflow to do. Just as a script is composed of one or more
commands, a workflow is composed of activities carried out in sequence.
You can also use a script as a single command in another script and use a workflow as an activity within
another workflow.
Workflow characteristics
A workflow can:
●● Be long-running.
●● Be repeated over and over.
●● Run tasks in parallel.
●● Be interrupted—can be stopped and restarted, suspended, and resumed.
●● Continue after an unexpected interruption, such as a network outage or computer/server restart.
Workflow benefits
A workflow offers many benefits, including:
●● Windows PowerShell scripting syntax. Workflows are built on PowerShell.
●● Multidevice management. Simultaneously apply workflow tasks to hundreds of managed nodes.
●● Single task runs multiple scripts and commands. Combine related scripts and commands into a single task. Then run the single task on multiple computers. The activity status and progress within the workflow are visible at any time.
●● Automated failure recovery.
●● Workflows survive both planned and unplanned interruptions, such as computer restarts.
●● You can suspend a workflow operation, then restart or resume the workflow from the point it was
suspended.
●● You can author checkpoints as part of your workflow so that you can resume the workflow from
the last persisted task (or checkpoint) instead of restarting the workflow from the beginning.
36 https://docs.microsoft.com/en-us/previous-versions/dotnet/articles/ee342461%28v=msdn.10%29
●● Connection and activity retries. You can retry connections to managed nodes if network-connection
failures occur. Workflow authors can also specify activities that must run again if the activity cannot be
completed on one or more managed nodes (for example, if a target computer was offline while the
activity was running).
●● Connect and disconnect from workflows. Users can connect and disconnect from the computer
running the workflow, but the workflow will remain running. For example, suppose you're running the
workflow and managing the workflow on two different computers. In that case, you can sign out of or
restart the computer from which you're managing the workflow and continue to monitor workflow
operations from another computer without interrupting the workflow.
●● Task scheduling. You can schedule a task to start when specific conditions are met, as with any other
Windows PowerShell cmdlet or script.
Create a workflow
Use a script editor such as the Windows PowerShell Integrated Scripting Environment (ISE) to write the
workflow.
It enforces workflow syntax and highlights syntax errors. For more information, review the tutorial
Tutorial - Create a PowerShell Workflow runbook in Azure Automation37.
A benefit of using PowerShell ISE is that it automatically compiles your code and allows you to save the
artifact.
Because the syntactic differences between scripts and workflows are significant, a tool that knows both
workflows and scripts will save you considerable coding and testing time.
Syntax
When you create your workflow, begin with the workflow keyword, which identifies a workflow com-
mand to PowerShell.
A script workflow requires the workflow keyword. Next, name the workflow, and have it follow the
workflow keyword.
The body of the workflow will be enclosed in braces.
1. A workflow is a Windows PowerShell command type, so select a name with a verb-noun format:
workflow Test-Workflow
{
...
}
2. To add parameters to a workflow, use the Param keyword. It's the same technique that you use to add parameters to a function.
3. Finally, add your standard PowerShell commands.
workflow MyFirstRunbook-Workflow
{
    Param(
        [string]$VMName,
        [string]$ResourceGroupName
    )
    ....
    Start-AzureRmVM -Name $VMName -ResourceGroupName $ResourceGroupName
}
37 https://docs.microsoft.com/azure/automation/learn/automation-tutorial-runbook-textual
●● The agent on the local computer starts all communication with Azure Automation in the cloud.
●● When a runbook is started, Azure Automation creates an instruction that the agent retrieves. The
agent then pulls down the runbook and any parameters before running it.
To configure your on-premises servers that support the Hybrid Runbook Worker role with DSC, you must
add them as DSC nodes.
For more information about onboarding them for management with DSC, see Onboarding machines for
management by Azure Automation State Configuration38.
For more information on installing and removing Hybrid Runbook Workers and groups, see:
●● Automate resources in your datacenter or cloud by using Hybrid Runbook Worker.39
●● Hybrid Management in Azure Automation40
Steps
38 https://docs.microsoft.com/azure/automation/automation-dsc-onboarding
39 https://docs.microsoft.com/azure/automation/automation-hybrid-runbook-worker#installing-hybrid-runbook-worker
40 https://azure.microsoft.com/blog/hybrid-management-in-azure-automation/
41 https://azure.microsoft.com/free
2. Select Start to start the test. This should be the only enabled option.
A runbook job is created, and its status is displayed. The job status will start as Queued, indicating
that it's waiting for a runbook worker in the cloud to become available. It moves to Starting when a
worker claims the job and then Running when the runbook starts running. When the runbook job
completes, its output displays. In this case, you should see Hello World.
3. When the runbook job finishes, close the Test pane.
5. You want to start the runbook, so select Start, and then when prompted, select Yes.
6. When the job pane opens for the runbook job you created, leave it open to watch the job's progress.
7. Verify that when the job completes, the job statuses displayed in Job Summary match the status you
saw when you tested the runbook.
Checkpoints
A checkpoint is a snapshot of the current state of the workflow.
Checkpoints include the current value for variables and any output generated up to that point. (For more
information on what a checkpoint is, read the checkpoint42 webpage.)
If a workflow ends in an error or is suspended, the next time it runs, it will start from its last checkpoint
instead of at the beginning of the workflow.
You can set a checkpoint in a workflow with the Checkpoint-Workflow activity.
For example, in the following sample code, if an exception occurs after Activity2, the workflow will end.
42 https://docs.microsoft.com/azure/automation/automation-powershell-workflow
When the workflow is rerun, it starts with Activity2 because it follows just after the last checkpoint set.
<Activity1>
Checkpoint-Workflow
<Activity2>
<Exception>
<Activity3>
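A minimal concrete sketch of the same idea (the workflow and activity names here are assumed, not taken from this module):
workflow Invoke-CheckpointDemo
{
    Write-Output "Activity 1 complete"
    Checkpoint-Workflow
    # If an exception occurs below and the workflow is resumed,
    # it restarts from this checkpoint rather than from the beginning.
    Write-Output "Activity 2 complete"
    Write-Output "Activity 3 complete"
}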
Parallel processing
In a Parallel script block, multiple commands run concurrently (or in parallel) instead of sequentially, as in a typical script.
It's referred to as parallel processing. (More information about parallel processing is available on the Parallel processing43 webpage.)
In the following example, two VMs, vm0 and vm1, will be started concurrently, and vm2 will only start after vm0 and vm1 have started.
Parallel
{
    Start-AzureRmVM -Name $vm0 -ResourceGroupName $rg
    Start-AzureRmVM -Name $vm1 -ResourceGroupName $rg
}
Start-AzureRmVM -Name $vm2 -ResourceGroupName $rg
Another parallel processing example would be the following constructs that introduce some extra
options:
●● ForEach -Parallel. You can use the ForEach -Parallel construct to concurrently process commands for
each item in a collection. The items in the collection are processed in parallel while the commands in
the script block run sequentially.
In the following example, Activity1 starts at the same time for all items in the collection.
For each item, Activity2 starts after Activity1 completes. Activity3 starts only after both Activity1 and
Activity2 have been completed for all items.
●● ThrottleLimit - We use the ThrottleLimit parameter to limit parallelism. Too high of a ThrottleLimit can
cause problems. The ideal value for the ThrottleLimit parameter depends on several environmental
factors. Try starting with a low ThrottleLimit value, and then increase the value until you find one that
works for your specific circumstances:
ForEach -Parallel -ThrottleLimit 10 ($<item> in $<collection>)
{
<Activity1>
<Activity2>
}
<Activity3>
43 https://docs.microsoft.com/azure/automation/automation-powershell-workflow
A real-world example of it could be similar to the following code: a message displays for each file after
it's copied. Only after all files are copied does the completion message display.
Workflow Copy-Files
{
    $files = @("C:\LocalPath\File1.txt","C:\LocalPath\File2.txt","C:\LocalPath\File3.txt")
Summary
This module described Azure automation with Azure DevOps, using runbooks, webhooks, and PowerShell
workflows. You learned how to create and manage automation for your environment.
You learned how to describe the benefits and usage of:
●● Implement automation with Azure DevOps.
●● Create and manage runbooks.
●● Create webhooks.
●● Create and run a workflow runbook and PowerShell workflows.
Learn more
●● Manage runbooks in Azure Automation | Microsoft Docs44.
●● Use source control integration in Azure Automation | Microsoft Docs45.
●● Webhooks with Azure DevOps - Azure DevOps | Microsoft Docs46.
●● Learn PowerShell Workflow for Azure Automation | Microsoft Docs47.
44 https://docs.microsoft.com/azure/automation/manage-runbooks
45 https://docs.microsoft.com/azure/automation/source-control-integration
46 https://docs.microsoft.com/azure/devops/service-hooks/services/webhooks
47 https://docs.microsoft.com/azure/automation/automation-powershell-workflow
Learning objectives
After completing this module, students and professionals can:
●● Implement Desired State Configuration (DSC).
●● Describe Azure Automation State Configuration.
●● Implement DSC and Linux Automation on Azure.
●● Plan for hybrid management.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Security considerations
Configuration drift can also introduce security vulnerabilities into your environment. For example:
●● Ports might be opened that should be kept closed.
●● Updates and security patches might not be applied across environments consistently.
●● Software might be installed that doesn't meet compliance requirements.
DSC components
DSC consists of three primary parts:
●● Configurations. These are declarative PowerShell scripts that define and configure instances of
resources. Upon running the configuration, DSC (and the resources being called by the configuration)
will apply the configuration, ensuring that the system exists in the state laid out by the configuration.
DSC configurations are also idempotent: The Local Configuration Manager (LCM) will ensure that
machines are configured in whatever state the configuration declares.
●● Resources. They contain the code that puts and keeps the target of a configuration in the specified
state. Resources are in PowerShell modules and can be written to a model as generic as a file or a
Windows process or as specific as a Microsoft Internet Information Services (IIS) server or a VM
running in Azure.
●● Local Configuration Manager (LCM). The LCM runs on the nodes or machines you wish to configure.
It's the engine by which DSC facilitates the interaction between resources and configurations. The
LCM regularly polls the system using the control flow implemented by resources to maintain the state
48 https://azure.microsoft.com/services/azure-policy/
49 https://docs.microsoft.com/powershell/dsc/overview
defined by a configuration. If the system is out of state, the LCM calls the code in resources to apply the configuration as specified.
There are two methods of implementing DSC:
●● Push mode - A user actively applies a configuration to a target node and pushes out the configura-
tion.
●● Pull mode is where pull clients are automatically configured to get their desired state configurations
from a remote pull service. This remote pull service is provided by a pull server that acts as central
control and manager for the configurations, ensures that nodes conform to the desired state, and
reports on their compliance status. The pull server can be set up as an SMB-based pull server or an
HTTPS-based server. HTTPS-based pull-server uses the Open Data Protocol (OData) with the OData
Web service to communicate using REST APIs. It's the model we're most interested in, as it can be
centrally managed and controlled. The following diagram provides an outline of the workflow of DSC
pull mode.
50 https://docs.microsoft.com/windows/win32/wmisdk/managed-object-format--mof-
51 https://docs.microsoft.com/powershell/scripting/dsc/configurations/configurations
●● Configuration block. The Configuration block is the outermost script block. In this case, the name of
the configuration is LabConfig. Notice the curly brackets to define the block.
●● Node block. There can be one or more Node blocks. It defines the nodes (computers and VMs) that
you're configuring. In this example, the node targets a computer called WebServer. You could also
call it localhost and use it locally on any server.
●● Resource blocks. There can be one or more resource blocks. It's where the configuration sets the
properties for the resources. In this case, there's one resource block called WindowsFeature. Notice
the parameters that are defined. (You can read more about resource blocks at DSC resources52.) A sketch of such a configuration appears below.
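A minimal sketch of a configuration matching this description; the feature name and parameter values below are assumptions for illustration, since the original listing isn't included in this text:
Configuration LabConfig
{
    Node WebServer
    {
        WindowsFeature IIS
        {
            Ensure               = 'Present'
            Name                 = 'Web-Server'
            IncludeAllSubFeature = $true
        }
    }
}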
Here's another example:
Configuration MyDscConfiguration
{
    param
    (
        [string[]]$ComputerName='localhost'
    )
    Node $ComputerName
    {
        WindowsFeature MyFeatureInstance
        {
            Ensure = 'Present'
            Name = 'RSAT'
        }
        WindowsFeature My2ndFeatureInstance
        {
            Ensure = 'Present'
            Name = 'Bitlocker'
        }
    }
}
MyDscConfiguration
52 https://docs.microsoft.com/powershell/scripting/dsc/resources/resources
In this example, you specify the node's name by passing it as the ComputerName parameter when you
compile the configuration. The name defaults to “localhost.”
Within a Configuration block, you can do almost anything that you normally could in a PowerShell
function.
You can also create the configuration in any editor, such as PowerShell ISE, and save the file as a Power-
Shell script with a .ps1 file type extension.
2. In your Azure Automation account, under Configuration Management > State configuration (DSC), select the Configurations tab, and select +Add.
53 https://docs.microsoft.com/azure/automation/automation-dsc-compile#compiling-a-dsc-configuration-with-the-azure-portal
3. Point to the configuration file you want to import, and then select OK.
4. Once imported, double-click the file, select Compile, and then confirm by selecting Yes.
Note: If you prefer, you can also use the PowerShell Start-AzAutomationDscCompilationJob cmdlet.
More information about this method is available at Compiling a DSC Configuration with Windows
PowerShell54.
5. In the resultant Registration pane, configure the following settings, and then select OK.
54 https://docs.microsoft.com/azure/automation/automation-dsc-compile#compiling-a-dsc-configuration-with-windows-powershell
55 https://docs.microsoft.com/azure/automation/automation-dsc-onboarding#physicalvirtual-windows-machines-on-premises-or-in-a-cloud-
other-than-azureaws
●● Registration key. Primary or secondary, for registering the node with a pull service.
●● Node configuration name. The name of the node configuration that the VM should be configured to pull for Automation DSC.
●● Refresh Frequency. The time interval, in minutes, at which the LCM checks a pull service to get updated configurations. This value is ignored if the LCM isn't configured in pull mode. The default value is 30.
●● Configuration Mode Frequency. How often, in minutes, the current configuration is checked and applied. This property is ignored if the ConfigurationMode property is set to ApplyOnly. The default value is 15.
●● Configuration mode. Specifies how the LCM gets configurations. Possible values are ApplyOnly, ApplyAndMonitor, and ApplyAndAutoCorrect.
●● Allow Module Override. Controls whether new configurations downloaded from the Azure Automation DSC pull server can overwrite the old modules already on the target server.
●● Reboot Node if Needed. Set this to $true to automatically reboot the node after a configuration that requires a reboot is applied. Otherwise, you'll have to reboot the node manually for any configuration that needs it. The default value is $false.
●● Action after Reboot. Specifies what happens after a reboot during the application of a configuration. The possible values are ContinueConfiguration and StopConfiguration.
The service will then connect to the Azure VMs and apply the configuration.
6. Return to the State configuration (DSC) pane and verify that the status now displays as Compliant
after applying the configuration.
Each time Azure Automation DSC does a consistency check on a managed node, the node sends a
status report back to the pull server. You can review these reports on that node's blade. Access it by
double-clicking or pressing the spacebar and then Enter on the node.
Note: You can also unregister the node and assign a different configuration to nodes.
For more information about onboarding VMs, see also:
●● Enable Azure Automation State Configuration.56
●● Configuring the Local Configuration Manager57
56 https://docs.microsoft.com/azure/automation/automation-dsc-onboarding
57 https://docs.microsoft.com/powershell/scripting/dsc/managing-nodes/metaconfig
58 https://docs.microsoft.com/powershell/scripting/dsc/getting-started/lnxgettingstarted
●● Ubuntu Server 14.04 LTS, 16.04 LTS, 18.04 LTS, and 20.04 LTS (x64)
Summary
This module described Desired State Configuration (DSC) and its components for implementation. You practiced how to import and compile configurations, automate your environment creation, and use DSC for Linux automation on Azure.
You learned how to describe the benefits and usage of:
●● Implement Desired State Configuration (DSC).
●● Describe Azure Automation State Configuration.
●● Implement DSC and Linux Automation on Azure.
●● Plan for hybrid management.
Learn more
●● Building a pipeline with DSC - Azure Pipelines | Microsoft Docs59.
●● Azure Automation State Configuration overview | Microsoft Docs60.
●● Desired State Configuration for Azure overview - Azure Virtual Machines | Microsoft Docs61.
59 https://docs.microsoft.com/en-us/azure/devops/pipelines/release/dsc-cicd?view=azure-devops
60 https://docs.microsoft.com/en-us/azure/automation/automation-dsc-overview
61 https://docs.microsoft.com/en-us/azure/virtual-machines/extensions/dsc-overview
Implement Bicep
Introduction
This module explains what Bicep is and how it's integrated into different tools such as Azure CLI and
Visual Studio Code for environment deployment configuration.
Learning Objectives
●● Learn what Bicep is.
●● Learn how to install it and create a smooth authoring experience.
●● Use Bicep to deploy resources to Azure.
●● Deploy Bicep files in Cloud Shell and Visual Studio Code.
Prerequisites
●● Basic understanding of DevOps and its concepts.
●● Familiarity with version control principles.
●● Beneficial if responsible for deploying resources to Azure via Infrastructure as Code (IaC).
What is Bicep?
Azure Bicep is the next revision of ARM templates designed to solve some of the issues developers
were facing when deploying their resources to Azure. It's an Open Source tool and, in fact, a domain-spe-
cific language (DSL) that provides a means to declaratively codify infrastructure, which describes the
topology of cloud resources such as VMs, Web Apps, and networking interfaces. It also encourages code
reuse and modularity in designing the infrastructure as code files.
The new syntax allows you to write less code compared to ARM templates, and it's more straightforward and concise and automatically manages the dependencies between resources. Azure Bicep comes with its own command-line interface (CLI), which can be used independently or with Azure CLI. The Bicep CLI allows you to transpile Bicep files into ARM templates and deploy them, and it can be used to convert an existing ARM template to Bicep.
Note: Beware that when converting ARM templates to Bicep, there might be issues since it's still a work
in progress.
There's also an excellent integration with Visual Studio Code that creates a superb authoring experience.
Azure Bicep supports types that are used to validate templates at development time rather than runtime.
The extension also supports linting, which can be used to unify the development experience between
team members or across different teams.
For more information about Azure Bicep, see Bicep language for deploying Azure resources62.
Next steps
In the next unit, you'll find out various ways to install Bicep and set up your development environment.
62 https://docs.microsoft.com/azure/azure-resource-manager/bicep/overview
Install Bicep
To start, install the Bicep CLI or the Visual Studio Code Extension63. Having both installed will give you a
great authoring experience.
To verify you have it installed, create a file with the .bicep extension and watch the language mode change in the lower right corner of VS Code.
You can upgrade the Bicep CLI by running az bicep upgrade, and to validate the installation, use az bicep version.
We deliberately avoided breaking down the installation for Windows, macOS, and Linux since Azure CLI is
cross-platform, and the steps would be the same.
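For reference, the Azure CLI-integrated Bicep commands mentioned in this unit can be run from any shell where Azure CLI is installed:
az bicep install
az bicep upgrade
az bicep version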
63 https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-bicep
Manual installation
You can manually install it if you don't have Azure CLI installed but still want to use Bicep CLI.
Windows
You can use Chocolatey or Winget to install the Bicep CLI:
choco install bicep
winget install -e --id Microsoft.Bicep
bicep --help
Linux
To install the Bicep CLI on Linux manually, use the following script:
curl -Lo bicep https://github.com/Azure/bicep/releases/latest/download/bicep-linux-x64
chmod +x ./bicep
sudo mv ./bicep /usr/local/bin/bicep
bicep --help
macOS
And for macOS, use homebrew or the previous script for Linux:
brew tap azure/bicep
brew install bicep
bicep --version
Next steps
In the next unit, you'll create your first Bicep template and deploy it to Azure.
64 https://docs.microsoft.com/azure/azure-resource-manager/bicep/installation-troubleshoot/
Prerequisites
To follow along, you will need to have access to an Azure subscription. You also need to have:
●● VS Code.
●● Azure CLI.
●● Bicep extension for VS Code.
This file will deploy an Azure Storage Account; however, we need to modify the file to make it ready for deployment. First, let's add two parameters: one for the name, since it should be unique, and one for the location.
param storageName string = 'stg${uniqueString(resourceGroup().id)}'
param location string = resourceGroup().location
The value you assign to the parameters is the default value, which makes the parameters optional.
Replace the name property with storageName; because location is already used, you're ready to go ahead with the deployment.
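Taken together, a minimal sketch of the modified template might look like the following; the resource shape here is an assumption for illustration, not the exact file from this unit:
param storageName string = 'stg${uniqueString(resourceGroup().id)}'
param location string = resourceGroup().location

resource stg 'Microsoft.Storage/storageAccounts@2021-02-01' = {
  name: storageName
  location: location
  kind: 'StorageV2'
  sku: {
    name: 'Standard_LRS'
  }
}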
Visualize resources
You can use VS Code to visualize the resources defined in your Bicep file by clicking on the visualizer
button at the top right-hand corner:
Feel free to take a look at the resulting ARM template and compare the two.
Note: Replace the uniqueName with a unique name, but you can also ignore providing the parameter
since it has a default value.
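The deployment command itself isn't shown above; a typical Azure CLI invocation (file name and resource group assumed) would be similar to:
az deployment group create --resource-group <resource group name> --template-file main.bicep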
When the deployment finishes you will be getting a message indicating the deployment succeeded.
Next steps
Now that you have learned how to create a basic template and deploy it, head over to the next unit to
learn more about the constructs in a Bicep file.
Scope
By default, the target scope of all templates is set to resourceGroup; however, you can customize it by setting it explicitly. Other allowed values are subscription, managementGroup, and tenant.
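As a brief illustration (not taken from this unit), deploying at subscription scope instead of the default looks like this at the top of a Bicep file:
targetScope = 'subscription'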
Parameters
You have already used the parameters in the previous unit. They allow you to customize your template
deployment at run time by providing potential values for names, location, prefixes, etc.
Parameters also have types that editors can validate, and they can have default values that make them optional at deployment time. Additionally, they can have validation rules that make the deployment more reliable by preventing invalid values right from authoring time. For more information, see Parameters in Bicep65.
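As a hedged illustration of such a validation rule (the names and limits are assumed), decorators can constrain a parameter at authoring time:
@minLength(3)
@maxLength(24)
param storageName string = 'stg${uniqueString(resourceGroup().id)}'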
Variables
Similar to parameters, variables play a role in making a more robust and readable template. Any complex
expression can be stored in a variable and used throughout the template. When you define a variable, the
type is inferred from the value.
In the above example, the uniqueStorageName is used to simplify the resource definition. For more
information, see Variables in Bicep66.
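The example the text refers to isn't reproduced here; a sketch of such a variable (value assumed) could be:
// A complex naming expression stored once and reused wherever a storage account name is needed
var uniqueStorageName = 'stg${uniqueString(resourceGroup().id)}'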
Resources
The resource keyword is used when you need to declare a resource in your templates. The resource
declaration has a symbolic name for the resource which can be used to reference that resource later
either for defining a sub-resource or for using its properties for an implicit dependency like a parent-child
relationship.
There are certain properties that are common for all resources such as location, name, and proper-
ties. There are resource-specific properties that can be used to customize the resource pricing tier, SKU,
and so on.
You can define sub-resources within a resource or outside by referencing the parent. In the above
example, a file share is defined within the storage account resource. If the intention was to define the
resource outside of it, you would need to change your template to the following:
resource storage 'Microsoft.Storage/storageAccounts@2021-02-01' = {
name: 'examplestorage'
location: resourceGroup().location
kind: 'StorageV2'
sku: {
name: 'Standard_LRS'
}
}
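For completeness, a hedged sketch of declaring the file share outside the storage account by using the parent property (names and API version are assumptions):
resource fileService 'Microsoft.Storage/storageAccounts/fileServices@2021-02-01' = {
  parent: storage
  name: 'default'
}

resource fileShare 'Microsoft.Storage/storageAccounts/fileServices/shares@2021-02-01' = {
  parent: fileService
  name: 'exampleshare'
}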
65 https://docs.microsoft.com/azure/azure-resource-manager/bicep/parameters/
66 https://docs.microsoft.com/azure/azure-resource-manager/bicep/variables/
Modules
If you want truly reusable templates, you can't avoid using a module. Modules enable you to reuse a Bicep file in other Bicep files. In a module, you define what you need to deploy and any parameters needed; when you reuse it in another file, all you need to do is reference the file and provide the parameters. The rest is taken care of by Azure Bicep.
In the above example, you are using a module that presumably is deploying an App Service. For more
information, see Using Modules in Bicep68.
Outputs
You can use outputs to pass values from your deployment to the outside world, whether it's within a CI/CD pipeline or in a local terminal or Cloud Shell. That enables you to access a value such as a storage endpoint or application URL after the deployment is finished.
All you need is the output keyword and the property you would like to access:
output storageEndpoint object = stg.properties.primaryEndpoints
Other features
There are many other features available within a Bicep file such as loops, conditional deployment,
multiline strings, referencing an existing cloud resource, and many more. In fact, any valid function within
an ARM template is also valid within a Bicep file.
Next steps
In the next unit, you will learn how to use Bicep in an Azure Pipeline.
67 https://docs.microsoft.com/azure/azure-resource-manager/bicep/resource-declaration/
68 https://docs.microsoft.com/azure/azure-resource-manager/bicep/modules/
69 https://docs.microsoft.com/azure/azure-resource-manager/bicep/outputs/
Prerequisites
You'll need an Azure Subscription, if you don't have one, create a free account70 before you begin.
You also need an Azure DevOps organization, similarly, if you don't have one, create one for free71.
You'll need to have a configured service connection72 in your project that is linked to your Azure
subscription. Don't worry if you haven't done this before, we'll show you an easy way to do it when you're
creating your pipeline.
You also need to have that Bicep file you created earlier pushed into the Azure Repository of your project.
70 https://azure.microsoft.com/free/
71 https://docs.microsoft.com/azure/devops/pipelines/get-started/pipelines-sign-up/
72 https://docs.microsoft.com/azure/devops/pipelines/library/connect-to-azure/
4. Replace everything in the starter pipeline file with the following snippet.
trigger:
- main

variables:
  vmImageName: 'ubuntu-latest'
  azureServiceConnection: 'myServiceConnection'
  resourceGroupName: 'Bicep'
  location: 'eastus'
  templateFile: 'main.bicep'

pool:
  vmImage: $(vmImageName)

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: $(azureServiceConnection)
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az --version
      az group create --name $(resourceGroupName) --location $(location)
      az deployment group create --resource-group $(resourceGroupName) --template-file $(templateFile)
Note: Don't forget to replace the service connection name with yours.
5. Select Save and run to create a new commit in your repository containing the pipeline YAML file and
then run the pipeline. Wait for the pipeline to finish running and check the status.
6. Once the pipeline runs successfully, you should be able to see the resource group and the storage
account.
Prerequisites
You'll need a GitHub account that can be created for free here74. A GitHub repository is also required to
store your Bicep file and workflows. Once you've created your GitHub repository, push the Bicep file into
it. And for deployment to Azure, access to an Azure subscription is needed, which can be created for
free here75.
73 https://docs.github.com/actions
74 https://github.com/join
75 https://azure.microsoft.com/free/
"tenantId": "<GUID>",
(...)
}
Make a note of this object since you'll need to add it to your GitHub secrets.
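The command that produced this JSON isn't included above; a typical way to create such a deployment credential (the service principal name and scope are assumptions) was:
az ad sp create-for-rbac --name "bicep-github-deploy" --role contributor --scopes /subscriptions/<subscription-id> --sdk-auth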
- uses: actions/checkout@main
Feel free to replace the storage account prefix with your own.
Note: The first part of the workflow defines the trigger and a name. The rest defines a job and uses a
few tasks to check out the code, sign in to Azure, and deploy the Bicep file.
3. Select Start commit, and enter a title and a description in the pop-up dialog. Then select Commit
directly to the main branch, followed by Commit a new file.
4. Navigate back to the Actions tab and select the newly created action that should be running.
5. Monitor the status and when the job is finished, check the Azure portal to see if the storage account is
being created.
Summary
This module introduced the new revision of ARM templates called Azure Bicep, which is designed to give developers a great authoring experience through its integration with Visual Studio Code and Azure CLI. You learned how it simplifies deployment and encourages reusability, requires less code, and is easy to write and deploy.
You also learned some of the benefits and how to:
●● Install and get started with Azure Bicep.
●● Understand its syntax and basics.
●● Write and deploy Azure Bicep templates locally.
●● Deploy Azure Bicep templates using Azure DevOps.
Learn more
●● What is Bicep? | Azure Bicep documentation76.
●● Bicep on Microsoft Learn77.
●● Best practices for Bicep | Microsoft documentation78.
●● Migrate to Bicep | Microsoft documentation79.
76 https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/overview?tabs=bicep
77 https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/learn-bicep
78 https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/best-practices
79 https://docs.microsoft.com/en-us/azure/azure-resource-manager/bicep/migrate
Labs
Lab 14: Deployments using Azure Resource Man-
ager templates
Lab overview
In this lab, you will create an Azure Resource manager template and modularize it by using a linked
template. You will then modify the main deployment template to call the linked template and updated
dependencies, and finally deploy the templates to Azure.
Objectives
After you complete this lab, you will be able to:
●● Create Resource Manager template
●● Create a Linked template for storage resources
●● Upload Linked Template to Azure Blob Storage and generate SAS token
●● Modify the main template to call Linked template
●● Modify main template to update dependencies
●● Deploy resources to Azure using linked templates
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions80
80 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which of the following approaches abstracts away the methodology of HOW a state is achieved?
Imperative.
Declarative.
Exclamatory.
Multiple choice
Which of the following choices defines the ability to apply one or more operations against a resource, resulting in the same outcome every time?
Increment.
Idempotence.
Configuration Drift.
Multiple choice
Which of the following choices is a benefit of infrastructure as code?
Treats infrastructure as a flexible resource that can be provisioned, de-provisioned, and reprovisioned
as and when needed.
Uses mutable service processes.
Manual scale-up and scale-out.
Multiple choice
Which of the following choices are the deployment modes available when using Azure Resource Manager
templates?
Transactional, Complete.
Incremental, Complete.
Differential, Validate.
Multiple choice
Which of the following choices isn't a component or section of an Azure Resource Manager template?
Outputs.
Functions.
Secrets.
Multiple choice
Which of the following choices is an element that enables you to define one resource as dependent on one
or more other resources?
template.
properties.
dependsOn.
Multiple choice
Which of the following choices can you use to get a list of VM using Azure CLI?
List-VM.
az vm list.
Get-AzVm.
Multiple choice
Which of the following choices can you use to create a resource group using Azure CLI?
az group create.
az creates group.
az deployment group create.
Multiple choice
Which of the following choices is a valid syntax for string variables if you want to use Azure CLI commands
in PowerShell?
variable="value".
variable=value.
$variable="value".
Multiple choice
Which of the following choices isn't an option when creating runbooks?
Import runbooks from Azure Artifacts.
Create your runbook and import it.
Modify runbooks from the runbook gallery.
Multiple choice
Which of the following choices isn't a supported source control by Azure Automation?
Azure DevOps (Git or Team Foundation Version Control).
BitBucket.
GitHub.
Multiple choice
Which of the following choices is the required keyword at the beginning when creating your workflow to
PowerShell?
ResourceGroupName.
Param.
workflow.
Multiple choice
Which of the following choices is the process whereby a set of resources change their state over time from
the desired state in which they were deployed?
Configuration Drift.
Idempotence.
Increment.
Multiple choice
Which of the following choices isn't a primary component of DSC?
Configuration.
Resources.
PowerShell.
Multiple choice
Which of the following choices isn't a method of implementing DSC?
Pull mode.
Fetch mode.
Push mode.
Multiple choice
Which of the following describes how to define the dependencies in a bicep file?
Bicep uses implicit dependency using symbolic names and parent-child properties.
By adding the LinkTo property.
Bicep doesn't support resource dependencies.
Multiple choice
Which of the following choices best describe the behavior of the webAppName parameter for a team that created a template that contains this line: param webAppName string = 'mySite${uniqueString(resourceGroup().id)}'?
Whoever is deploying the template must provide a value for the webAppName.
When you redeploy the template to the same resource group, the value of the webAppName remains
the same.
The webAppName parameter will have a different value every time the template gets deployed.
Multiple choice
Which of the following choices describe how you can reuse a Bicep template in other Bicep templates?
By adding a local reference to that file.
By defining a module and referencing it in other files.
By adding a remote reference to a template from an online repository.
Answers
Multiple choice
Which of the following approaches abstracts away the methodology of HOW a state is achieved?
Imperative.
■■ Declarative.
Exclamatory.
Explanation
The declarative approach abstracts away the methodology of how a state is achieved.
Multiple choice
Which of the following choices defines the ability to apply one or more operations against a resource, resulting in the same outcome every time?
Increment.
■■ Idempotence.
Configuration Drift.
Explanation
Idempotence is a mathematical term that can be used in Infrastructure as Code and Configuration as Code.
It is the ability to apply one or more operations against a resource, resulting in the same outcome.
Multiple choice
Which of the following choices is a benefit of infrastructure as code?
■■ Treats infrastructure as a flexible resource that can be provisioned, de-provisioned, and reprovisioned
as and when needed.
Uses mutable service processes.
Manual scale-up and scale-out.
Explanation
Treats infrastructure as a flexible resource that can be provisioned, de-provisioned, and reprovisioned as and
when required.
Multiple choice
Which of the following choices are the deployment modes available when using Azure Resource Manager
templates?
Transactional, Complete.
■■ Incremental, Complete.
Differential, Validate.
Explanation
When deploying your resources using templates, you have three options: validate, incremental mode
(default), and complete mode.
Multiple choice
Which of the following choices isn't a component or section of an Azure Resource Manager template?
Outputs.
Functions.
■■ Secrets.
Explanation
A Resource Manager template can contain sections like Parameters, Variables, Functions, Resources,
Outputs.
Multiple choice
Which of the following choices is an element that enables you to define one resource as dependent on
one or more other resources?
template.
properties.
■■ dependsOn.
Explanation
The dependsOn element enables you to define one resource as dependent on one or more other resources.
Multiple choice
Which of the following choices can you use to get a list of VM using Azure CLI?
List-VM.
■■ az vm list.
Get-AzVm.
Explanation
It's az vm list. For many Azure resources, Azure CLI provides a list subcommand to get resource details.
Multiple choice
Which of the following choices can you use to create a resource group using Azure CLI?
■■ az group create.
az creates group.
az deployment group create.
Explanation
Create a resource group to deploy your resources by running az group create.
Multiple choice
Which of the following choices is a valid syntax for string variables if you want to use Azure CLI com-
mands in PowerShell?
variable="value".
variable=value.
■■ $variable="value".
Explanation
If you use PowerShell for running Azure CLI scripts, you'll need to use the following syntax for string variables $variable="value", and $variable=integer for integer values.
Multiple choice
Which of the following choices isn't an option when creating runbooks?
■■ Import runbooks from Azure Artifacts.
Create your runbook and import it.
Modify runbooks from the runbook gallery.
Explanation
When creating runbooks, you have two options. You can either create your runbook and import it or modify
runbooks from the runbook gallery.
Multiple choice
Which of the following choices isn't a supported source control by Azure Automation?
Azure DevOps (Git or Team Foundation Version Control).
■■ BitBucket.
GitHub.
Explanation
Azure Automation supports three types of source control: GitHub, Azure DevOps (Git), and Azure DevOps
(Team Foundation Version Control).
Multiple choice
Which of the following choices is the required keyword at the beginning when creating your workflow to
PowerShell?
ResourceGroupName.
Param.
■■ workflow.
Explanation
When you create your workflow, begin with the workflow keyword, which identifies a workflow command to
PowerShell.
Multiple choice
Which of the following choices is the process whereby a set of resources change their state over time
from the desired state in which they were deployed?
■■ Configuration Drift.
Idempotence.
Increment.
Explanation
Configuration drift is the process of a set of resources changing over time from their original deployment
state.
Multiple choice
Which of the following choices isn't a primary component of DSC?
Configuration.
Resources.
■■ PowerShell.
Explanation
DSC consists of three primary components: Configurations, Resources, and Local Configuration Manager
(LCM).
Multiple choice
Which of the following choices isn't a method of implementing DSC?
Pull mode.
■■ Fetch mode.
Push mode.
Explanation
There are two methods of implementing DSC: Push mode and Pull mode.
Multiple choice
Which of the following describes how to define the dependencies in a bicep file?
■■ Bicep uses implicit dependency using symbolic names and parent-child properties.
By adding the LinkTo property.
Bicep doesn't support resource dependencies.
Explanation
Bicep uses implicit dependency using symbolic names and parent-child properties.
Multiple choice
Which of the following choices best describe the behavior of the webAppName parameter for a team that created a template that contains this line: param webAppName string = 'mySite${uniqueString(resourceGroup().id)}'?
Whoever is deploying the template must provide a value for the webAppName.
■■ When you redeploy the template to the same resource group, the value of the webAppName remains
the same.
The webAppName parameter will have a different value every time the template gets deployed.
Explanation
When you redeploy the template to the same resource group, the value of the webAppName remains the
same.
Multiple choice
Which of the following choices describe how you can reuse a Bicep template in other Bicep templates?
By adding a local reference to that file.
■■ By defining a module and referencing it in other files.
By adding a remote reference to a template from an online repository.
Explanation
By defining a module and referencing it in other files.
Module 7 Implement security and validate
code bases for compliance
DevOps teams have access to unprecedented infrastructure and scale thanks to the cloud. They can also be
targeted by some of the most nefarious actors on the internet, and they risk the security of their
business with every application deployment.
Perimeter-class security is no longer viable in such a distributed environment, so now companies need to
adopt more micro-level security across applications and infrastructure and have multiple lines of defense.
With continuous integration and continuous delivery, how do you ensure your applications are secure
and stay secure? How can you find and fix security issues early in the process? It begins with practices
commonly referred to as DevSecOps.
DevSecOps incorporates the security team and their capabilities into your DevOps practices, making
security the responsibility of everyone on the team. Security needs to shift from an afterthought to being
evaluated at every step of the process.
Securing applications is a continuous process that encompasses secure infrastructure, designing architec-
ture with layered security, continuous security validation, and monitoring attacks.
Security is everyone's responsibility and needs to be looked at holistically across the application life cycle.
This module introduces DevSecOps concepts, SQL injection attacks, threat modeling, and security for
continuous integration.
We'll also see how continuous integration and deployment pipelines can accelerate the speed of security
teams and improve collaboration with software development teams.
You'll learn the key validation points and how to secure your pipeline.
Learning objectives
After completing this module, students and professionals can:
●● Identify SQL injection attack.
●● Understand DevSecOps.
●● Implement pipeline security.
●● Understand threat modeling.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Getting started
●● Use the SQL Injection Azure Resource Manager Template1 to provision a web app and an SQL
database with a known SQL injection vulnerability.
●● Ensure you browse the ‘Contoso Clinic’ web app provisioned in your SQL injection resource group.
1 https://azure.microsoft.com/resources/templates/sql-injection-attack-prevention/
How it works
1. Navigate to the Patients view, and in the search box, type "'" and hit enter. You'll see an error page
with a SQL exception indicating that the search box feeds the text into a SQL statement.
The helpful error message is enough to guess that the text in the search box is being appended into
the SQL statement.
2. Next, try passing a SQL statement 'AND FirstName = 'Kim'-- in the search box. You'll see that
the results in the list below are filtered down to only show the entry with the first name Kim.
3. You can try to order the list by SSN by using this statement in the search box 'order by SSN--.
4. Now for the finale, run this drop statement to drop the table that holds the information displayed on
this page… 'AND 1=1; Drop Table Patients --. Once the operation is complete, try and load
the page. You'll see that the view errors out with an exception indicating that the dbo.patients table
can't be found.
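By contrast, here's a hedged sketch (not the Contoso Clinic app's actual code) of the same Patients search written with a parameterized query, so user input is treated as data rather than as executable SQL; the connection string is a placeholder:
Add-Type -AssemblyName System.Data
$connectionString = "Server=tcp:<your-server>.database.windows.net;Database=ContosoClinic;..."
$searchText = "Kim'--"   # malicious-looking input is harmless here
$connection = New-Object System.Data.SqlClient.SqlConnection($connectionString)
$command = $connection.CreateCommand()
$command.CommandText = "SELECT FirstName, LastName FROM Patients WHERE FirstName LIKE @search"
$null = $command.Parameters.AddWithValue("@search", "%$searchText%")
$connection.Open()
$reader = $command.ExecuteReader()
while ($reader.Read()) { Write-Output $reader["FirstName"] }
$connection.Close()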
There's more
The Azure Security Center team has other playbooks2 you can look at to learn how vulnerabilities are
exploited to trigger a virus attack and a DDoS attack.
Understand DevSecOps
While the adoption of cloud computing is on the rise to support business productivity, a lack of security
infrastructure can inadvertently compromise data.
The 2018 Microsoft Security Intelligence Report finds that:
●● Data isn't encrypted both at rest and in transit by:
2 https://azure.microsoft.com/blog/enhance-your-devsecops-practices-with-azure-security-center-s-newest-playbooks/
A Secure DevOps pipeline allows development teams to work fast without breaking their project by
introducing unwanted security vulnerabilities.
Note: Secure DevOps is also sometimes referred to as DevSecOps. You might encounter both terms, but
each term refers to the same concept.
Two essential features of Secure DevOps Pipelines that aren't found in standard DevOps Pipelines are:
●● Package management and the approval process associated with it. The previous workflow diagram
details other steps for adding software packages to the Pipeline and the approval processes that
packages must go through before they're used. These steps should be enacted early in the Pipeline to
identify issues sooner in the cycle.
●● Source Scanner is also an extra step for scanning the source code. This step allows for security scanning
and checking for security vulnerabilities that may be present in the application code. The scanning
occurs after the app is built but before release and pre-release testing. Source scanning can identify
security vulnerabilities earlier in the cycle.
In the rest of this lesson, we address these two essential features of Secure DevOps Pipelines, the prob-
lems they present, and some of the solutions for them.
You can use threat modeling to shape your application's design, meet your company's security goals, and
reduce risk.
With non-security experts in mind, the tool makes threat modeling easier for all developers by providing
clear guidance on creating and analyzing threat models.
3 https://docs.microsoft.com/azure/security/azure-security-threat-modeling-tool-feature-overview
Getting started
●● Download and install the Threat Modeling tool5.
How to do it
1. Launch the Microsoft Threat Modeling Tool and choose the option to Create a Model.
4 https://blogs.msdn.microsoft.com/secdevblog/2018/09/12/microsoft-threat-modeling-tool-ga-release/
5 https://aka.ms/threatmodelingtool
2. From the right panel, search for and add Azure App Service Web App and Azure SQL Database, then link
them up to show a request and response flow as demonstrated in the following image.
3. From the toolbar menu, select View -> Analysis view. The analysis view will show you a complete list
of threats categorized by severity.
4. To generate a full report of the threats, from the toolbar menu, select Reports -> Create full report,
and select a location to save the report.
A full report is generated with details of each threat, the SDLC phase it applies to, and possible mitigations
and links to more information.
There's more
You can find a complete list of threats used in the threat modeling tool here6.
Summary
This module introduced DevSecOps concepts, SQL injection attacks, threat modeling, and security for
continuous integration.
You learned how to describe the benefits and usage of:
●● Identify SQL injection attack.
●● Understand DevSecOps.
●● Implement pipeline security.
●● Understand threat modeling.
Learn more
●● DevSecOps Tools and Services | Microsoft Azure7.
●● Enable DevSecOps with Azure and GitHub - DevSecOps | Microsoft Docs8.
6 https://docs.microsoft.com/azure/security/develop/threat-modeling-tool-threats
7 https://azure.microsoft.com/solutions/devsecops
8 https://docs.microsoft.com/azure/devops/devsecops
●● Advanced Threat Protection - Azure SQL Database, SQL Managed Instance, & Azure Synapse
Analytics | Microsoft Docs9.
●● Securing Azure Pipelines - Azure Pipelines | Microsoft Docs10.
●● SQL Injection attack on a web app (microsoft.com)11.
9 https://docs.microsoft.com/azure/azure-sql/database/threat-detection-overview
10 https://docs.microsoft.com/azure/devops/pipelines/security/overview
11 https://azure.microsoft.com/resources/templates/sql-injection-attack-prevention/
Learning objectives
After completing this module, students and professionals can:
●● Implement open-source software.
●● Explain corporate concerns for open-source components.
●● Describe open-source licenses.
●● Understand the license implications and ratings.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
It's open-source code as opposed to closed source. Closed source means that the source code isn't
available, even though the components are available.
The .NET platforms, such as the original .NET Framework and even more so .NET Core, use several
components created by the open-source community and not by Microsoft itself. In ASP.NET12 and ASP.NET
Core13, many of the frontend development libraries are open-source components, such as jQuery,
Angular, and React.
Instead of creating new components themselves, the teams at Microsoft are using the open-source
components and taking a dependency on them.
The teams also contribute and invest in the open-source components and projects, joining in on the
collaborative effort. Besides adopting external open-source software, Microsoft has also made significant
parts of its software available as open-source.
.NET is a perfect example of how Microsoft has changed its posture towards open source. It has made the
codebase for the .NET Framework and .NET Core available, along with many other components.
The .NET Foundation aims to advocate for the needs of the .NET platform, evangelize its benefits, and
promote the use of .NET open source for developers.
For more information, see the .NET Foundation website14.
12 http://asp.net/
13 http://asp.net/
14 http://www.dotnetfoundation.org
Types of licenses
Multiple licenses are used in open-source software, and they differ substantially.
The license spectrum is a chart showing licenses from the developer's perspective and the implications of
their use, in terms of the downstream requirements imposed on the overall solution and source code.
15 http://opensource.org/
16 http://opensource.org/osd
On the left side, there are the “attribution” licenses. They're permissive and allow practically every type of
use by the software that consumes it. An example is building commercially available software, including
the components or source code under this license.
The only restriction is that the original attribution to the authors remains included in the source code or
as part of the downstream use of the new software. The right side of the spectrum shows the “copyleft”
licenses.
These licenses are considered viral, as the use of the source code and its components, and distribution of
the complete software, implies that all source code using it should follow the same license form.
The viral nature is that the use of the software covered under this license type forces you to forward the
same license for all work with or on the original software.
The middle of the spectrum shows the “downstream” or “weak copyleft” licenses. These also require that,
when the covered code is distributed, it must be distributed under the same license terms.
Unlike the copyleft licenses, this requirement doesn't extend to improvements or additions to the covered code.
License rating
Licenses can be rated by the impact that they have. When a package has a specific type of license, the
use of the package implies complying with the requirements of that license.
The license's impact on the downstream use of the code, components, and packages can be rated as
High, Medium, and Low, depending on the copy-left, downstream, or attribution nature of the license
type.
A high license rating can be considered a risk for compliance, intellectual
property, and exclusive rights.
Package security
The use of components creates a software supply chain.
The resultant product is a composition of all its parts and components.
It applies to the security level of the solution as well. So, like license types, it's essential to know how
secure the components being used are.
If one of the components used isn't secure, then the entire solution isn't either.
Summary
This module explored open-source software and corporate concerns with software components. Also, it
explained common open-source licenses, license implications, and ratings.
You learned how to describe the benefits and usage of:
●● Implement open-source software.
●● Explain corporate concerns for open-source components.
●● Describe open-source licenses.
●● Understand the license implications and ratings.
Learn more
●● Deploy Open Source Apps With Your Free Account | Microsoft Azure17.
●● Microsoft’s Open Source Program | Microsoft Open Source18.
17 https://azure.microsoft.com/free/open-source/search
18 https://opensource.microsoft.com/program/
Learning objectives
After completing this module, students and professionals can:
●● Inspect and validate code bases for compliance.
●● Integrate security tools like WhiteSource with Azure DevOps.
●● Implement pipeline security validation.
●● Interpret alerts from scanning tools.
●● Configure GitHub Dependabot alerts and security.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● Third, we need to ensure that the application follows the rules and regulations it's required to meet. We
need to test it while building the code and retest it periodically, even after deployment.
It's commonly accepted that security isn't something you can add to an application or a system later.
Secure development must be part of the development life cycle. It's even more important for critical
applications and those that process sensitive or highly confidential information.
Application security concepts haven't been a focus for developers in the past. Apart from the education
and training issues, their organizations have emphasized the fast development of features.
However, with the introduction of DevOps practices, security testing is much easier to integrate. Rather
than being a task done by security specialists, security testing should be part of the day-to-day delivery
processes.
Overall, when the time for rework is taken into account, adding security to your DevOps practices can
reduce the overall time to develop quality software.
Package management
Just as teams use version control as a single source of truth for source code, Secure DevOps relies on a
package manager as the unique source of binary components.
Using binary package management, a development team can create a local cache of approved compo-
nents and a trusted feed for the Continuous Integration (CI) pipeline.
In Azure DevOps, Azure Artifacts is an integral part of the component workflow for organizing and
sharing access to your packages. Azure Artifacts allows you to:
●● Keep your artifacts organized. Share code easily by storing Apache Maven, npm, and NuGet packages
together. You can store packages using Universal Packages, eliminating the need to keep binaries in Git.
●● Protect your packages. Keep every public source package you use (including packages from npmjs
and NuGet.org) safe in your feed, where only you can delete it and where it's backed by the
enterprise-grade Azure Service Level Agreement (SLA).
●● Integrate seamless package handling into your Continuous Integration (CI)/Continuous Delivery
(CD) pipeline. Easily access all your artifacts in builds and releases. Azure Artifacts integrates natively
with the Azure Pipelines CI/CD tool.
For more information about Azure Artifacts, see What are Azure Artifacts?19
19 https://docs.microsoft.com/azure/devops/artifacts/overview
Note: After publishing a particular package version to a feed, that version number is permanently
reserved.
Note: You can't upload a newer revision package with that same version number or delete that version
and upload a new package with the same version number. The published version is immutable.
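As a hedged sketch of publishing to an Azure Artifacts feed with the dotnet CLI (the feed name and package path are placeholders, and the feed must already be registered as a NuGet source):
# With the Azure Artifacts credential provider, any non-empty API key such as "az" is accepted.
dotnet nuget push .\packages\MyLibrary.1.0.0.nupkg --source "MyFeed" --api-key az
# Pushing the same version a second time fails, because published versions are immutable.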
This practical approach to reuse includes runtimes such as Microsoft .NET Core and Node.js, which are
available on the Windows and Linux operating systems.
However, OSS component reuse comes with the risk that reused dependencies can have security vulnera-
bilities. As a result, many users find security vulnerabilities in their applications because of the Node.js
package versions they consume.
A concept called Software Composition Analysis (SCA) has emerged to address these security concerns,
shown in the following image.
When consuming an OSS component, whether you're creating or consuming dependencies, you'll
typically want to follow these high-level steps:
1. Start with the latest, correct version to avoid old vulnerabilities or license misuse.
2. Validate that the OSS components are the correct binaries for your version. In the release pipeline,
validate binaries to ensure accuracy and keep a traceable bill of materials.
3. Get notifications of component vulnerabilities immediately, correct them, and redeploy the compo-
nent automatically to resolve security vulnerabilities or license misuses from reused software.
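One hedged way to act on step 3 for NuGet dependencies, run from the solution folder (assumes a recent .NET SDK):
dotnet restore
# Lists direct and transitive packages with known vulnerabilities reported by the configured sources.
dotnet list package --vulnerable --include-transitive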
WhiteSource
The WhiteSource extension is available on the Azure DevOps Marketplace. Using WhiteSource, you can
integrate extensions with your CI/CD pipeline to address Secure DevOps security-related issues.
The WhiteSource extension specifically addresses open-source security, quality, and license compliance
concerns for a team consuming external packages.
Because most breaches target known vulnerabilities in standard components, robust tools are essential to
securing complex open-source components.
It can include links to patches, fixes, relevant source files, even recommendations to change system
configuration to prevent exploitation.
For more information on Dependabot Alerts, see About alerts for vulnerable dependencies20.
See Supported package ecosystems for details on the supported package ecosystems that alerts can be
generated from.
For notification details, see: Configuring notifications21.
Security updates
A key advantage of Dependabot security updates is that they can automatically create pull requests.
A developer can then review the suggested update and triage what is required to incorporate it.
For more information on automatic security updates, see About GitHub Dependabot security up-
dates22.
20 https://docs.github.com/free-pro-team@latest/github/managing-security-vulnerabilities/about-alerts-for-vulnerable-dependencies
21 https://docs.github.com/free-pro-team@latest/github/managing-subscriptions-and-notifications-on-github/configuring-notifications
22 https://docs.github.com/free-pro-team@latest/github/managing-security-vulnerabilities/about-github-dependabot-security-updates
23 https://www.whitesourcesoftware.com/
24 https://www.checkmarx.com/
25 https://www.veracode.com/
26 https://www.blackducksoftware.com/
Tool and type:
●● Artifactory: Artifact repository.
●● SonarQube: A static code analysis tool.
●● WhiteSource (Bolt): Build scanning.
Configure pipeline
The configuration of scanning for license types and security vulnerabilities in the pipeline is done by
using appropriate build tasks in your DevOps tooling. For Azure DevOps, these are build pipeline tasks.
By setting a security bug bar in the Definition of Done and specifying the allowed license ratings, one can
use the reports from the scans to identify the work for the development team.
Summary
This module explained Composition Analysis, inspecting and validating code bases for compliance,
integration with security tools, and integration with Azure Pipelines.
You learned how to describe the benefits and usage of:
●● Inspect and validate code bases for compliance.
●● Integrate security tools like WhiteSource with Azure DevOps.
●● Implement pipeline security validation.
●● Interpret alerts from scanning tools.
●● Configure GitHub Dependabot alerts and security.
Learn more
●● Develop secure applications on Microsoft Azure | Microsoft Docs27.
●● Azure DevOps Code Quality & Code Security Improvement | SonarCloud28.
●● Configuring Dependabot security updates - GitHub Docs29.
27 https://docs.microsoft.com/azure/security/develop/secure-develop
28 https://sonarcloud.io/azure-devops
29 https://docs.github.com/code-security/supply-chain-security/managing-vulnerabilities-in-your-projects-dependencies/configuring-
dependabot-security-updates
Static analyzers
Introduction
This module introduces the static analyzers SonarCloud and CodeQL in GitHub.
Learning objectives
After completing this module, students and professionals can:
●● Understand Static Analyzers.
●● Work with SonarCloud.
●● Work with CodeQL in GitHub.
●● Interpret alerts from scanning tools.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Explore SonarCloud
Technical debt can be described as the measure of the gap between the codebase's current state and an optimal
state.
Technical debt saps productivity by making code hard to understand, easy to break, and difficult to
validate, creating unplanned work and ultimately blocking progress.
Technical debt is inevitable! It starts small and grows over time through rushed changes, lack of context,
and lack of discipline.
Organizations often find that more than 50% of their capacity is sapped by technical debt.
The hardest part of fixing technical debt is knowing where to start.
SonarQube is an open-source platform that is the de facto solution for understanding and managing
technical debt.
We'll learn how to use SonarQube in a build pipeline to identify technical debt in this recipe.
Getting ready
SonarQube is an open platform to manage code quality.
Originally famous in the Java community, SonarQube now supports over 20 programming languages.
The joint investments made by Microsoft and SonarSource make SonarQube easier to integrate with
Pipelines and better at analyzing .NET-based applications.
You can read more about the capabilities offered by SonarQube here: https://www.sonarqube.org/.
SonarSource, the company behind SonarQube, offers a hosted SonarQube environment called SonarCloud.
Summary
This module introduced the static analyzers SonarCloud and CodeQL in GitHub.
You learned how to describe the benefits and usage of:
●● Understand Static Analyzers.
●● Work with SonarCloud.
●● Work with CodeQL in GitHub.
●● Interpret alerts from scanning tools.
Learn more
●● Automatic Code Review, Testing, Inspection & Auditing32
●● CodeQL33
30 https://codeql.github.com/docs/codeql-overview/about-codeql/
31 https://codeql.github.com/docs/codeql-overview/codeql-tools/
32 https://sonarcloud.io
33 https://codeql.github.com
Learning objectives
After completing this module, students and professionals can:
●● Understand OWASP and Dynamic Analyzers.
●● Implement OWASP Security Coding Practices.
●● Understand compliance for code bases.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
34 http://owasp.org
●● System Configuration
●● Database Security
●● File Management
●● Memory Management
●● General Coding Practices
OWASP also publishes an intentionally vulnerable web application called The Juice Shop Tool Project35
to learn about common vulnerabilities and see how they appear in applications.
It includes vulnerabilities from all the OWASP Top 1036.
In 2002, Microsoft underwent a company-wide re-education and review phase to produce secure applica-
tion code.
The book Writing Secure Code by David LeBlanc and Michael Howard37 was written by two of the people
involved and provides detailed advice on writing secure code.
For more information, you can see:
●● The OWASP Foundation38.
●● OWASP Secure Coding Practices Quick Reference Guide39.
●● OWASP Code Review Guide40.
●● OWASP Top 1041.
35 https://owasp.org/www-project-juice-shop/
36 https://owasp.org/www-project-top-ten/
37 https://www.booktopia.com.au/ebooks/writing-secure-code-david-leblanc/prod2370006179962.html
38 http://owasp.org
39 https://owasp.org/www-pdf-archive/OWASP_SCP_Quick_Reference_Guide_v2.pdf
40 https://owasp.org/www-pdf-archive/OWASP_Code_Review_Guide-V1_1.pdf
41 https://owasp.org/www-project-top-ten/
42 https://github.com/deliveron/owasp-zap-vsts-extension
The following figure outlines the steps for the Application CI/CD pipeline and the longer-running Nightly
OWASP ZAP pipeline.
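As a hedged sketch of the nightly ZAP steps (pull the weekly image, start the container, run the baseline scan, and write a report), run on an agent with Docker installed; the target URL is a placeholder:
docker pull owasp/zap2docker-weekly
docker run -v "${PWD}:/zap/wrk" -t owasp/zap2docker-weekly zap-baseline.py `
    -t https://your-web-app.azurewebsites.net -r zap-report.html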
Even with continuous security validation running against every change to help ensure new vulnerabilities
aren't introduced, hackers continuously change their approaches, and new vulnerabilities are being
discovered.
Good monitoring tools allow you to help detect, prevent, and remediate issues discovered while your
application runs in production.
Azure provides several tools for detection, prevention, and alerting that use rules, such as OWASP
Top 1043, and machine learning to detect anomalies and unusual behavior and help identify attackers.
Minimize security vulnerabilities by taking a holistic and layered approach to security, including secure
infrastructure, application architecture, continuous validation, and monitoring.
DevSecOps practices enable your entire team to incorporate these security capabilities in the whole
lifecycle of your application.
Establishing continuous security validation into your CI/CD pipeline can allow your application to stay
secure while improving the deployment frequency to meet the needs of your business to stay ahead of
the competition.
Summary
This module explored OWASP and Dynamic Analyzers for penetration testing, results, and bugs.
You learned how to describe the benefits and usage of:
●● Understand OWASP and Dynamic Analyzers.
●● Implement OWASP Security Coding Practices.
●● Understand compliance for codebases.
43 https://owasp.org/www-project-top-ten/
Learn more
●● Vulnerability Scanning Tools | OWASP Foundation44.
●● OWASP Secure Coding Practices Quick Reference Guide45.
44 https://owasp.org/www-community/Vulnerability_Scanning_Tools
45 https://owasp.org/www-pdf-archive/OWASP_SCP_Quick_Reference_Guide_v2.pdf
46 https://owasp.org/www-pdf-archive/OWASP_Code_Review_Guide-V1_1.pdf
47 https://owasp.org/www-project-top-ten/
Learning objectives
After completing this module, students and professionals can:
●● Configure Microsoft Defender for Cloud.
●● Understand Azure policies.
●● Describe initiatives, resource locks, and Azure Blueprints.
●● Work with Microsoft Defender for Identity.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
48 https://aka.ms/jea
security projects such as the Open Web Application Security Project (OWASP49) Foundation, then
adopt these projects into your processes.
●● Production monitoring. It's a critical DevOps practice. The specialized services for detecting anomalies
related to intrusion are known as Security Information and Event Management. Microsoft Defender
for Cloud50 focuses on the security incidents related to the Azure cloud.
Note: In all cases, use Azure Resource Manager Templates or other code-based configurations. Imple-
ment IaC best practices, such as making changes in templates to make changes traceable and repeatable.
Also, you can use provisioning and configuration technologies such as Desired State Configuration (DSC),
Azure Automation, and other third-party tools and products that can integrate seamlessly with Azure.
Microsoft Defender for Cloud is part of the Center for Internet Security (CIS) Benchmarks51 recom-
mendations.
49 https://www.owasp.org
50 https://azure.microsoft.com/services/defender-for-cloud
51 https://www.cisecurity.org/cis-benchmarks/
●● Standard. This tier provides a full suite of security-related services, including continuous monitoring,
threat detection, JIT access control for ports, and more.
To access the full suite of Microsoft Defender for Cloud services, you'll need to upgrade to a Standard
version subscription.
You can access the 60-day free trial from the Microsoft Defender for Cloud dashboard in the Azure portal.
You can read more about Microsoft Defender for Cloud at Microsoft Defender for Cloud52.
52 https://azure.microsoft.com/services/defender-for-cloud
The following examples are how you can use Microsoft Defender for Cloud to detect, assess, and diag-
nose your incident response plan stages.
●● Detect. Review the first indication of an event investigation. For example, use the Microsoft Defender
for Cloud dashboard to review a high-priority security alert's initial verification.
●● Assess. Do the initial assessment to obtain more information about suspicious activity. For example,
you can get more information from Microsoft Defender for Cloud about a security alert.
●● Diagnose. Conduct a technical investigation and identify containment, mitigation, and workaround
strategies. For example, you can follow the remediation steps described by Microsoft Defender for
Cloud for a particular security alert.
●● Use Microsoft Defender for Cloud recommendations to enhance security.
You can reduce the chances of a significant security event by configuring a security policy and then
implementing the recommendations provided by Microsoft Defender for Cloud. A security policy defines
the set of controls that are recommended for resources within a specified subscription or resource group.
In Microsoft Defender for Cloud, you can define policies according to your company's security require-
ments.
Microsoft Defender for Cloud analyzes the security state of your Azure resources. When it identifies
potential security vulnerabilities, it creates recommendations based on the controls set in the security
policy.
The suggestions guide you through the process of configuring the corresponding security controls.
For example, if you have workloads that don't require the Azure SQL Database Transparent Data Encryp-
tion (TDE) policy, turn off the policy at the subscription level and enable it only on the resource groups
where SQL Database TDE is required.
You can read more about the Microsoft Defender for Cloud at the Microsoft Defender for Cloud53.
More implementation and scenario details are also available in the Microsoft Defender for Cloud
planning and operations guide54.
Azure Policy uses policies and initiatives to provide policy enforcement capabilities.
Azure Policy evaluates your resources by scanning for resources that don't follow the policies you create.
For example, you might have a policy that specifies a maximum size limit for VMs in your environment.
53 https://azure.microsoft.com/services/security-center/
54 https://docs.microsoft.com/azure/defender-for-cloud/security-center-planning-and-operations-guide
After you implement your maximum VM size policy, Azure Policy will evaluate the VM resource whenever
a VM is created or updated to ensure that the VM follows the size limit you set in your policy.
Azure Policy can help maintain the state of your resources by evaluating your existing resources and
configurations and automatically remediating non-compliant resources.
It has built-in policy and initiative definitions for you to use. The definitions are arranged into categories:
Storage, Networking, Compute, Security Center, and Monitoring.
Azure Policy can also integrate with Azure DevOps by applying any continuous integration (CI) and
continuous delivery (CD) pipeline policies that apply to the pre-deployment and post-deployment of your
applications.
Understand policies
Applying a policy to your resources with Azure Policy involves the following high-level steps:
●● Policy definition. Create a policy definition.
●● Policy assignment. Assign the definition to a scope of resources.
●● Remediation. Review the policy evaluation results and address any non-compliances.
Policy definition
A policy definition specifies the resources to be evaluated and the actions to take on them. For example,
you could prevent VMs from deploying if they're exposed to a public IP address. You could also prevent a
specific hard disk type from being used when deploying VMs to control costs. Policies are defined in the
JavaScript Object Notation (JSON) format.
The following example defines a policy that limits where you can deploy resources:
{
"properties": {
"mode": "all",
"parameters": {
"allowedLocations": {
"type": "array",
"metadata": {
"description": "The list of locations that can be
55 https://docs.microsoft.com/azure/devops/pipelines/tasks/deploy/azure-policy-check-gate
56 https://azure.microsoft.com/services/azure-policy/
Policy assignment
Policy definitions, whether custom or built-in, need to be assigned.
A policy assignment is a policy definition that has been assigned to a specific scope. Scopes can range
from a management group to a resource group.
Child resources will inherit any policy assignments applied to their parents.
It means that if a policy is applied to a resource group, it applies to all the resources within that resource
group.
However, you can define subscopes for excluding resources from policy assignments.
You can assign policies via:
●● Azure portal.
●● Azure CLI.
●● PowerShell.
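As a hedged sketch using the Az PowerShell module (the resource group name and locations are placeholders), assigning the built-in "Allowed locations" policy definition to a resource group:
$rg = Get-AzResourceGroup -Name "rg-demo"
$definition = Get-AzPolicyDefinition | Where-Object { $_.Properties.DisplayName -eq "Allowed locations" }
New-AzPolicyAssignment -Name "allowed-locations" `
    -Scope $rg.ResourceId `
    -PolicyDefinition $definition `
    -PolicyParameterObject @{ listOfAllowedLocations = @("eastus", "westeurope") }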
Remediation
Resources found not to follow a deployIfNotExists or modify policy condition can be put into a compliant
state through Remediation.
Remediation instructs Azure Policy to run the deployIfNotExists effect or the tag operations of the policy
on existing resources.
To minimize configuration drift, you can bring resources into compliance using automated bulk Remedia-
tion instead of going through them one at a time.
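As a hedged sketch (Az.PolicyInsights module; the assignment ID is a placeholder), a bulk remediation task for an existing deployIfNotExists or modify policy assignment can be started like this:
Start-AzPolicyRemediation -Name "bulk-remediation" `
    -PolicyAssignmentId "/subscriptions/<subscription-id>/providers/Microsoft.Authorization/policyAssignments/<assignment-name>"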
You can read more about Azure Policy on the Azure Policy57 webpage.
Explore initiatives
Initiatives work alongside policies in Azure Policy. An initiative definition is a set of policy definitions to
help track your compliance state for meeting large-scale compliance goals.
Even if you have a single policy, we recommend using initiatives if you anticipate increasing your number
of policies over time.
Applying an initiative definition to a specific scope is called an initiative assignment.
Initiative definitions
Initiative definitions simplify the process of managing and assigning policy definitions by grouping sets of
policies into a single item.
For example, you can create an initiative named Enable Monitoring in Azure Security Center to monitor
security recommendations from Azure Security Center.
Under this example initiative, you would have the following policy definitions:
●● Monitor unencrypted SQL Database in Security Center. This policy definition monitors unencrypted
SQL databases and servers.
●● Monitor OS vulnerabilities in Security Center. This policy definition monitors servers that don't satisfy
a specified OS baseline configuration.
●● Monitor missing Endpoint Protection in Security Center. This policy definition monitors servers
without an endpoint protection agent installed.
Initiative assignments
Like a policy assignment, an initiative assignment is an initiative definition assigned to a specific scope.
Initiative assignments reduce the need to make several initiative definitions for each scope.
Scopes can range from a management group to a resource group. You can assign initiatives in the same
way that you assign policies.
You can read more about policy definition and structure at Azure Policy definition structure58.
57 https://azure.microsoft.com/services/azure-policy/
58 https://docs.microsoft.com/azure/governance/policy/concepts/definition-structure
Azure Blueprints provides a declarative way to orchestrate deployment for various resource templates
and artifacts, including:
●● Role assignments
●● Policy assignments
●● Azure Resource Manager templates
●● Resource groups
To implement Azure Blueprints, complete the following high-level steps:
1. Create a blueprint.
2. Assign the blueprint.
3. Track the blueprint assignments.
With Azure Blueprints, the relationship between the blueprint definition (what should be deployed) and
the blueprint assignment (what is deployed) is preserved.
59 https://docs.microsoft.com/azure/azure-resource-manager/resource-group-lock-resources
The blueprints in Azure Blueprints are different from Azure Resource Manager templates.
When Azure Resource Manager templates deploy resources, they have no active relationship with the
deployed resources. They exist in a local environment or source control.
By contrast, with Azure Blueprints, each deployment is tied to an Azure Blueprints package. It means that
the relationship with resources will be maintained, even after deployment. Keeping this relationship
improves deployment tracking and auditing capabilities.
Usage scenario
Adhering to security and compliance requirements, whether governmental, industry, or organizational, can
be difficult and time-consuming.
To help you to trace your deployments and audit them for compliance, Azure Blueprints uses artifacts and
tools that speed up your path to certification.
Azure Blueprints is also helpful in Azure DevOps scenarios, mainly where blueprints are associated with
specific build artifacts and release pipelines and can be tracked rigorously.
You can learn more about Azure Blueprints at Azure Blueprints60.
60 https://azure.microsoft.com/services/blueprints/
61 https://securitycenter.windows.com/
●● Microsoft Defender cloud service. The Microsoft Defender cloud service runs on the Azure infrastruc-
ture and is deployed in the United States, Europe, and Asia. The Microsoft Defender cloud service is
connected to the Microsoft Intelligent Security Graph.
You can acquire a license directly from the Enterprise Mobility + Security pricing options62 page or the
Cloud Solution Provider (CSP) licensing model.
Note: Microsoft Defender isn't available for purchase via the Azure portal. For more information about
Microsoft Defender, review the Azure Defender | Microsoft Azure63 webpage.
Summary
This module described security monitoring and governance with Microsoft Defender for Cloud and its
usage scenarios, Azure Policies, Microsoft Defender for Identity, and security practices related to the
tools.
You learned how to describe the benefits and usage of:
●● Configure Microsoft Defender for Cloud.
●● Understand Azure policies.
●● Describe initiatives, resource locks, and Azure Blueprints.
●● Work with Microsoft Defender for Identity.
Learn more
●● What is Microsoft Defender for Identity? | Microsoft Docs64.
●● Overview of Azure Policy - Azure Policy | Microsoft Docs65.
●● Overview of Azure Blueprints - Azure Blueprints | Microsoft Docs66.
62 https://www.microsoft.com/cloud-platform/enterprise-mobility-security-pricing
63 https://azure.microsoft.com/services/azure-defender/
64 https://docs.microsoft.com/defender-for-identity/what-is
65 https://docs.microsoft.com/azure/governance/policy/overview
66 https://docs.microsoft.com/azure/governance/blueprints/overview
Labs
Lab 15: Implement security and compliance in
Azure DevOps Pipelines
Lab overview
In this lab, we will create a new Azure DevOps project, populate the project repository with sample
application code, and create a build pipeline. Next, we will install WhiteSource Bolt from the Azure DevOps
Marketplace to make it available as a build task, activate it, add it to the build pipeline, use it to scan the
project code for security vulnerabilities and licensing compliance issues, and finally view the resulting
report.
Objectives
After you complete this lab, you will be able to:
●● Create a Build pipeline
●● Install WhiteSource Bolt from the Azure DevOps marketplace and activate it
●● Add WhiteSource Bolt as a build task in a build pipeline
●● Run build pipeline and view WhiteSource security and compliance report
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions67
67 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
SonarQube68 is an open source platform for continuous inspection of code quality that facilitates
automatic reviews with static analysis of code to improve its quality by detecting bugs, code smells, and
security vulnerabilities.
In this lab, you will learn how to set up SonarQube on Azure and integrate it with Azure DevOps.
Objectives
After you complete this lab, you will be able to:
●● Provision SonarQube server as an Azure Container Instance69 from the SonarQube Docker image
●● Set up a SonarQube project
●● Provision an Azure DevOps Project and configure CI pipeline to integrate with SonarQube
●● Analyze SonarQube reports
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions70
68 https://www.sonarqube.org/
69 https://docs.microsoft.com/en-in/azure/container-instances/
70 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which two of the following elements does Secure DevOps combine?
DevOps, Security.
SCA, OSS.
Development, Operations.
Multiple choice
Which of the following choices is the term that broadly defines what security means in Secure DevOps?
Access Control.
Securing the pipeline.
Perimeter protection.
Multiple choice
Which of the following descriptions best describes the term software composition analysis?
Assessment of production hosting infrastructure just before deployment.
Analyzing open-source software after it has been deployed to production to identify security vulnera-
bilities.
Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide
validation that the software meets a defined criterion to use in your pipeline.
Multiple choice
Which of the following choices is a monitoring service that can provide threat protection and security
recommendations across all your services in Azure and on-premises?
Azure Policy.
Microsoft Defender.
Azure Key Vault.
Multiple choice
Which of the following choices can you use to create, assign and manage policies?
Azure Machine Learning.
Microsoft Defender.
Azure Policy.
Multiple choice
Which of the following choices is a tool to prevent accidental deletion of Azure resources?
Locks.
Policy.
Key Vault.
Multiple choice
Which of the following license types is considered viral in nature?
Downstream.
Attribution.
Copyleft.
Multiple choice
Which of the following choices best describes open-source software?
A type of software where code users can review, modify and distribute the software.
It's a type of software where code users can use anywhere without license restrictions or pay for it.
A type of software where the license describes usage only for non-profit organizations.
Multiple choice
Which of the following choices isn't an issue often associated with the use of open-source libraries?
Bugs.
Code property.
Security Vulnerabilities.
Multiple choice
Which of the following choices describe OWASP ZAP?
Security Testing Tool.
Code Quality Tool.
A non-profit foundation.
Multiple choice
Which of the following choices isn't a Secure Coding Practice guideline that OWASP regularly publishes?
Authentication and Password Management.
Code Smells.
Error Handling and Logging.
Multiple choice
Which of the following choices is a static analysis tool that scans binary files?
Azure Artifacts.
SonarCloud.
BinSkim.
Multiple choice
Which of the following steps represents the correct sequence of OWASP ZAP execution in a pipeline?
Pull OWASP Zap Weekly, Start Container, Run Baseline, Report Results and Create Bugs.
Start Container, Report Results, Run Baseline, Pull OWASP ZAP.
Start Container, Pull OWASP ZAP Weekly, Run Baseline, Spider Site, Report Results, Create Bugs.
Multiple choice
Which of the following tools helps discover vulnerabilities by letting you query code as though it were data?
SonarCloud.
OWASP ZAP.
CodeQL.
Multiple choice
Which of the following tools can be used to assess open-source security and licensing compliance?
SonarCloud.
Mend Bolt.
OWASP.
Multiple choice
In which of the following situations does GitHub Dependabot detect vulnerable dependencies and send
Dependabot alerts about them?
A new vulnerability is added to the GitHub Advisory database.
A new code is committed to the repository.
A deployment succeeds.
Multiple choice
Which of the following choices is a characteristic in source code possibly indicating a deeper problem?
Memory Leak.
Bug.
Code smell.
Multiple choice
Which of the following choices represents what Mend Bolt is used for?
Penetration Testing.
Finding and fixing open-source vulnerabilities.
Scanning for code quality issues.
Multiple choice
Which of the following tools can you use to do code quality checks?
Veracode.
SonarCloud.
Microsoft Defender for Cloud.
Multiple choice
Which of the following choices is a type of attack that makes it possible to execute malicious SQL state-
ments?
Man-in-the-Middle (MitM).
Denial-of-Service (DOS).
SQL Injection.
Multiple choice
Which of the following choices is a principle or process of which Threat Modeling is a core element?
Microsoft Solutions Framework (MSF).
Microsoft Security Development Lifecycle (SDL).
Application Lifecycle Management (ALM).
Multiple choice
Which of the following choices isn't one of the five major threat modeling steps?
Don't deploy with less than 90% of code quality.
Defining security requirements.
Mitigating threats.
Answers
Multiple choice
Which two of the following elements does Secure DevOps combine?
■■ DevOps, Security.
SCA, OSS.
Development, Operations.
Explanation
Secure DevOps brings together the notions of DevOps and Security. DevOps is about working faster.
Security is about emphasizing thoroughness, typically done at the end of the cycle, potentially generating
unplanned work right at the end of the pipeline.
Multiple choice
Which of the following choices is the term that broadly defines what security means in Secure DevOps?
Access Control.
■■ Securing the pipeline.
Perimeter protection.
Explanation
With Secure DevOps, security is more about securing the pipeline, determining where you can add protec-
tion to the elements that plug into your build and release pipeline.
Multiple choice
Which of the following descriptions best describes the term software composition analysis?
Assessment of production hosting infrastructure just before deployment.
Analyzing open-source software after it has been deployed to production to identify security vulnera-
bilities.
■■ Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide
validation that the software meets a defined criterion to use in your pipeline.
Explanation
Analyzing open-source software (OSS) to identify potential security vulnerabilities and provide validation
that the software meets a defined criterion to use in your pipeline.
Multiple choice
Which of the following choices is a monitoring service that can provide threat protection and security
recommendations across all your services in Azure and on-premises?
Azure Policy.
■■ Microsoft Defender.
Azure Key Vault.
Explanation
Microsoft Defender is a monitoring service that provides threat protection across all your services in Azure
and on-premises.
Multiple choice
Which of the following choices can you use to create, assign and manage policies?
Azure Machine Learning.
Microsoft Defender.
■■ Azure Policy.
Explanation
Azure Policy is a service in Azure that you use to create, assign, and manage policies. These policies enforce
different rules and effects on your resources, ensuring they stay compliant with your corporate standards
and service-level agreements (SLAs).
Multiple choice
Which of the following choices is a tool to prevent accidental deletion of Azure resources?
■■ Locks.
Policy.
Key Vault.
Explanation
Locks help you prevent accidental deletion or modification of your Azure resources. You can manage these
locks from within the Azure portal.
Multiple choice
Which of the following license types is considered viral in nature?
Downstream.
Attribution.
■■ Copyleft.
Explanation
The Copyleft license is considered viral in nature, as the use of the source code and its components and
distribution of the complete software implies that all source code using it should follow the same license
form.
Multiple choice
Which of the following choices best describes open-source software?
■■ A type of software where code users can review, modify and distribute the software.
It's a type of software where code users can use anywhere without license restrictions or pay for it.
A type of software where the license describes usage only for non-profit organizations.
Explanation
A type of software where code users can review, modify, and distribute the software. The open-source license
type can limit the actions that can be taken, such as sale provisions.
Multiple choice
Which of the following choices isn't an issue often associated with the use of open-source libraries?
Bugs.
■■ Code property.
Security Vulnerabilities.
Explanation
Bugs, security vulnerabilities, and licensing issues are often associated with the use of open-source libraries.
Multiple choice
Which of the following choices describe OWASP ZAP?
■■ Security Testing Tool.
Code Quality Tool.
A non-profit foundation.
Explanation
OWASP Zed Attack Proxy Scan. Also known as OWASP ZAP Scan is an open-source web application security
scanner that is intended for users with all levels of security knowledge.
Multiple choice
Which of the following choices isn't a Secure Coding Practice guideline that OWASP regularly publishes?
Authentication and Password Management.
■■ Code Smells.
Error Handling and Logging.
Explanation
OWASP regularly publishes a set of Secure Coding Practices. Their guidelines currently cover advice in
areas such as Authentication and Password Management, Error Handling and Logging, System
Configuration, and others.
Multiple choice
Which of the following choices is a static analysis tool that scans binary files?
Azure Artifacts.
SonarCloud.
■■ BinSkim.
Explanation
BinSkim is a static analysis tool that scans binary files. BinSkim replaces an earlier Microsoft tool called
BinScope.
Multiple choice
Which of the following steps represents the correct sequence of OWASP ZAP execution in a pipeline?
■■ Pull OWASP Zap Weekly, Start Container, Run Baseline, Report Results and Create Bugs.
Start Container, Report Results, Run Baseline, Pull OWASP ZAP.
Start Container, Pull OWASP ZAP Weekly, Run Baseline, Spider Site, Report Results, Create Bugs.
Explanation
The correct sequence is: Pull OWASP Zap Weekly, Start Container, Run Baseline, Report Results and Create
Bugs.
Multiple choice
Which of the following tools helps discover vulnerabilities by letting you query code as though it were
data?
SonarCloud.
OWASP ZAP.
■■ CodeQL.
Explanation
Developers use CodeQL to automate security checks. CodeQL treats code like data that can be queried.
Multiple choice
Which of the following tools can be used to assess open-source security and licensing compliance?
SonarCloud.
■■ Mend Bolt.
OWASP.
Explanation
Mend Bolt can be used to assess open-source security and licensing compliance.
Multiple choice
In which of the following situations does GitHub Dependabot detect vulnerable dependencies and send
Dependabot alerts about them?
■■ A new vulnerability is added to the GitHub Advisory database.
A new code is committed to the repository.
A deployment succeeds.
Explanation
GitHub Dependabot detects vulnerable dependencies and sends Dependabot alerts about them in several
situations: A new vulnerability is added to the GitHub Advisory database, New vulnerability data from Mend
is processed, and the Dependency graph for a repository changes.
Multiple choice
Which of the following choices is a characteristic in source code possibly indicating a deeper problem?
Memory Leak.
Bug.
■■ Code smell.
Explanation
Code smells are characteristics in your code that could be a problem. Code smells hint at more profound
problems in the design or implementation of the code.
Multiple choice
Which of the following choices represents what Mend Bolt is used for?
Penetration Testing.
■■ Finding and fixing open-source vulnerabilities.
Scanning for code quality issues.
Explanation
Mend Bolt is used for finding and fixing open-source vulnerabilities.
Multiple choice
Which of the following tools can you use to do code quality checks?
Veracode.
■■ SonarCloud.
Microsoft Defender for Cloud.
Explanation
SonarCloud is the cloud-based version of the original SonarQube and would be best for working with code
in Azure Repos.
Multiple choice
Which of the following choices is a type of attack that makes it possible to execute malicious SQL
statements?
Man-in-the-Middle (MitM).
Denial-of-Service (DOS).
■■ SQL Injection.
Explanation
SQL Injection is a type of attack that makes it possible to execute malicious SQL statements.
Multiple choice
Which of the following choices is a principle or process of which Threat Modeling is a core element?
Microsoft Solutions Framework (MSF).
■■ Microsoft Security Development Lifecycle (SDL).
Application Lifecycle Management (ALM).
Explanation
Threat Modeling is a core element of the Microsoft Security Development Lifecycle (SDL).
Multiple choice
Which of the following choices isn't one of the five major threat modeling steps?
■■ Don't deploy with less than 90% of code quality.
Defining security requirements.
Mitigating threats.
Explanation
There are five major threat modeling steps. Defining security requirements, Creating an application dia-
gram, Identifying threats, Mitigating threats, and Validating that threats have been mitigated.
Module 8 Design and implement a dependency management strategy
Learning objectives
After completing this module, students and professionals can:
●● Define dependency management strategy.
●● Identify dependencies.
●● Describe elements and componentization of dependency management.
●● Scan your codebase for dependencies.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
Dependencies in software
Modern software development involves complex projects and solutions.
Projects have dependencies on other projects, and solutions aren't single pieces of software.
The solutions and software built consist of multiple parts and components and are often reused.
As a codebase expands and evolves, it needs to be componentized to remain maintainable.
A team that is writing software won't write every piece of code by itself; it will use existing code written by
other teams or companies, and open-source code that is readily available.
Each component can have its maintainers, speed of change, and distribution, giving both the creators and
consumers of the components autonomy.
A software engineer will need to identify the components that make up parts of the solution and decide
whether to write the implementation or include an existing component.
The latter approach introduces a dependency on other components.
●● Package formats and sources. The distribution of dependencies can be performed by a packaging
method suited to your solution's dependency type.
Each dependency is packaged using its usable format and stored in a centralized source.
Your dependency management strategy should include the selection of package formats and corre-
sponding sources where to store and retrieve packages.
●● Versioning. Just like your own code and components, the dependencies in your solution usually
evolve.
While your codebase grows and changes, you need to consider the changes in your dependencies as
well.
It requires a versioning mechanism for the dependencies so you can be selective about the version of a
dependency you want to use.
Identify dependencies
It starts with identifying the dependencies in your codebase and deciding which dependencies will be
formalized.
Your software project and its solution probably already use dependencies.
It's common to use libraries and frameworks that are not written by yourself.
Additionally, your existing codebase might have internal dependencies that aren't treated as such.
For example, take a piece of code that implements a particular business domain model.
It might be included as source code in your project and consumed in other projects and teams.
You should investigate your codebase to identify pieces of code that can be considered dependencies
and treat them as such.
It requires changes to how you organize your code and build the solution. It will bring your components.
2. Package componentization. The second way uses packages. Distributing software components is
performed by using packages as a formal way of wrapping and handling the components.
A shift to packages adds characteristics needed for proper dependency management, like tracking
and versioning packages in your solution.
See also Collaborate more and build faster with packages1.
1 https://docs.microsoft.com/azure/devops/artifacts/collaborate-with-packages
Summary
This module explored dependency management concepts and helped you identify project dependencies.
You learned how to decompose your system, identify dependencies, and apply package componentization.
You learned how to describe the benefits and usage of:
●● Define dependency management strategy.
●● Identify dependencies.
●● Describe elements and componentization of dependency management.
●● Scan your codebase for dependencies.
Learn more
●● Azure Artifacts overview - Azure Artifacts | Microsoft Docs2.
●● NuGet documentation | Microsoft Docs3.
●● npm Docs (npmjs.com)4.
●● Maven – Welcome to Apache Maven5.
2 https://docs.microsoft.com/azure/devops/artifacts/start-using-azure-artifacts
3 https://docs.microsoft.com/nuget/
4 https://docs.npmjs.com/
5 https://maven.apache.org/
Learning objectives
After completing this module, students and professionals can:
●● Implement package management.
●● Manage package feed.
●● Consume and create packages.
●● Publish packages.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps6.
●● If you already have your organization created, use the Azure DevOps Demo Generator [https://azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called "Parts Unlimited" using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project - Azure DevOps7.
Explore packages
Packages are used to define the components you rely on and depend upon in your software solution.
They provide a way to store those components in a well-defined format with metadata to describe them.
What is a package?
A package is a formalized way of creating a distributable unit of software artifacts that can be consumed
from another software solution.
The package describes the content it contains and usually provides extra metadata.
This additional information uniquely identifies the individual packages and makes them self-descriptive.
It helps to better store packages in centralized locations and consume the contents of the package predictably.
6 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
7 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
Types of packages
Packages can be used for different kinds of components.
The types of components you want to use in your codebase differ for the different parts and layers of the solution you're creating.
They range from frontend components, such as JavaScript code files, to backend components like .NET assemblies or Java components, complete self-contained solutions, or reusable files in general.
Over the past years, the packaging formats have changed and evolved. Now there are a couple of de
facto standard formats for packages.
●● NuGet packages (pronounced “new get”) are a standard used for .NET code artifacts. It includes .NET
assemblies and related files, tooling, and sometimes only metadata. NuGet defines the way packages
are created, stored, and consumed. A NuGet package is essentially a compressed folder structure with
files in ZIP format and has the .nupkg extension. See also An introduction to NuGet8.
●● NPM An NPM package is used for JavaScript development. It originates from node.js development,
where it's the default packaging format. An NPM package is a file or folder containing JavaScript files
and a package.json file describing the package's metadata. For node.js, the package usually
includes one or more modules that can be loaded once the package is consumed. See also About
packages and modules9.
●● Maven is used for Java-based projects. Each package has a Project Object Model file describing the
project's metadata and is the basic unit for defining a package and working with it.
●● PyPi The Python Package Index, abbreviated as PyPI and known as the Cheese Shop, is the official
third-party software repository for Python.
●● Docker packages are called images and contain complete and self-contained deployments of compo-
nents. A Docker image commonly represents a software component that can be hosted and executed
by itself without any dependencies on other images. Docker images are layered and might be de-
pendent on other images as their basis. Such images are referred to as base images.
8 https://docs.microsoft.com/nuget/what-is-nuget
9 https://docs.npmjs.com/about-packages-and-modules
Choosing tools
The command-line nature of the tooling offers the ability to include it in scripts to automate package management. Ideally, one should use the tooling in build and release pipelines for creating, publishing, and consuming packages from feeds.
Additionally, developer tooling can have integrated support for working with package managers, provid-
ing a user interface for the raw tooling. Examples of such tooling are Visual Studio 2017, Visual Studio
Code, and Eclipse.
Public
In general, you'll find that publicly available package sources are free to use.
Sometimes they have a licensing or payment model for consuming individual packages or the feed itself.
These public sources can also be used to store packages you've created as part of your project.
They don't have to be open-source, although they are in most cases.
Public and free package sources that offer feeds at no expense will usually require that you make the packages you store publicly available as well.
Private
Private feeds can be used in cases where packages should be available to a select audience.
The main difference between public and private feeds is the need for authentication.
Public feeds can be anonymously accessible and optionally authenticated.
Private feeds can be accessed only when authenticated.
There are two options for private feeds:
●● Self-hosting Some of the package managers are also able to host a feed. Using on-premises or
private cloud resources, one can host the required solution to offer a private feed.
●● SaaS services A variety of third-party vendors and cloud providers offer software-as-a-service feeds
that can be kept private. It typically requires a consumption fee or a cloud subscription.
The following table contains a non-exhaustive list of self-hosting options and SaaS offerings for hosting private package feeds for each type covered.
Consume packages
Each software project that consumes packages to include the required dependencies will use a package manager and one or more package sources.
The package manager will download the individual packages from the sources and install them locally on
the development machine or build server.
The developer flow will follow this general pattern:
1. Identify a required dependency in your codebase.
2. Find a component that satisfies the requirements for the project.
3. Search the package sources for a package offering a correct version of the component.
4. Install the package into the codebase and development machine.
5. Create the software implementation that uses the new components from the package.
The package manager tooling will help search and install the components in the packages.
How it's achieved varies for the different package types. Refer to the documentation of the package
manager for instructions on consuming packages from feeds.
To get started, you'll need to specify the package source to be used. Package managers will have a
default source defined that refers to the standard package feed for its type.
Alternative feeds will need to be configured to allow consuming the packages they offer.
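For example, an additional NuGet feed can be registered as a package source from the command line. This is a minimal sketch; the organization, feed name, and source name are placeholders:
dotnet nuget add source "https://pkgs.dev.azure.com/ORGANIZATION/_packaging/FEEDNAME/nuget/v3/index.json" --name MyPrivateFeed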
Upstream sources
Part of the package management involves keeping track of the various sources.
It's possible to refer to multiple sources from a single software solution. However, when combining
private and public sources, the order of resolution of the sources becomes essential.
One way to specify multiple package sources is by choosing a primary source and selecting an upstream source.
The package manager will evaluate the primary source first and switch to the upstream source when the
package isn't found there.
The upstream source might be one of the official public sources or a private source. The upstream source
could refer to another upstream source itself, creating a chain of sources.
A typical scenario is to use a private package source referring to a public upstream source for one of the
official feeds. It effectively enhances the packages in the upstream source with packages from the private
feed, avoiding publishing private packages in a public feed.
A source that has an upstream source defined may download and cache packages that were requested but that it doesn't contain itself.
The source will include these downloaded packages and start to act as a cache for the upstream source.
It also offers the ability to keep track of any packages from the external upstream source.
An upstream source can be a way to avoid giving developers and build machines direct access to external sources.
The private feed uses the upstream source as a proxy to the otherwise external source. It is your feed manager and private source that communicate with the outside. Only privileged roles can add upstream sources to a private feed.
See also Upstream sources10.
Packages graph
A feed can have one or more upstream sources, which might be internal or external. Each of these can
have additional upstream sources, creating a package graph of the source.
Such a graph can offer many possibilities for layering and indirection of origins of packages. It might fit
well with multiple teams taking care of packages for frameworks and other base libraries.
The downside is that package graphs can become complex when not correctly understood or designed.
It's essential to know how you can create a proper package graph.
See also Constructing a complete package graph11.
10 https://docs.microsoft.com/azure/devops/artifacts/concepts/upstream-sources
11 https://docs.microsoft.com/azure/devops/artifacts/concepts/package-graph
Previously, we discussed the package types for NuGet, NPM, Maven, and Python. Universal packages are
an Azure Artifacts-specific package type. In essence, it's a versioned package containing multiple files and
folders.
A single Azure Artifacts feed can contain any combination of such packages. You can connect to the feed
using the package managers and the corresponding tooling for the package types. For Maven packages,
this can also be the Gradle build tool.
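For example, a Universal Package can be published to such a feed with the Azure CLI and its Azure DevOps extension. This is a hedged sketch; the organization, feed, and package names are placeholders:
az artifacts universal publish --organization https://dev.azure.com/ORGANIZATION --feed FEEDNAME --name my-component --version 1.0.0 --description "Sample universal package" --path .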
Publish packages
As software is developed and components are written, you'll most likely produce components as dependencies packaged for reuse.
Earlier, we discussed guidance for finding components that can be isolated into dependencies.
These components need to be managed and packaged. After that, they can be published to a feed, allowing others to consume the packages and use the components they contain.
Creating a feed
The first step is to create a feed where the packages can be stored. In Azure Artifacts, you can create
multiple feeds, which are always private.
During creation, you can specify the name and visibility and prepopulate the default public upstream sources for NuGet, npm, and Python packages.
Each feed can have one or more upstream sources and manages its own security.
Controlling access
The Azure Artifacts feed you created is always private and not available publicly.
You access it by authenticating to Azure Artifacts with an account that has access to Azure DevOps and the team project.
By default, a feed will be available to all registered users in Azure DevOps.
You can select it to be visible only to the team project where the feed is created.
Whichever option is chosen, you can change the permissions for a feed from the settings dialog.
Updating packages
Packages might need to be updated during their lifetime. Technically, updating a package is done by pushing a new version of the package to the feed.
The package feed manager ensures the updated package is properly stored alongside the existing packages in the feed.
Note: Updating packages requires a versioning strategy.
Push a package
For this exercise, use the full version of PartsUnlimited12.
Important: Make source code for PartsUnlimited available in your Azure DevOps repo.
●● Copy the command for “Add this feed” by clicking the Copy icon.
3. Switch back to your command line.
●● Look at the existing NuGet sources with the command: nuget sources. You'll see two NuGet sources available now.
●● Paste and run the copied instructions.
●● Look at the existing NuGet sources again with the command: nuget sources. You'll see a third NuGet source available.
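●● Push the package to the feed with the nuget push command. The exact command depends on your feed and package names; the following is a hedged sketch with placeholder values:
nuget push -Source "PartsUnlimited" -ApiKey az PartsUnlimited.Models.1.0.0.nupkg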
12 http://microsoft.github.io/PartsUnlimited/
We have published the package to the feed, and it is pushed successfully.
4. Check if the package is available in Azure Artifacts.
●● Close the dialog instructions.
●● Refresh the Artifacts page.
Summary
This module describes package feeds, common public package sources, and how to create and publish
packages.
You learned how to describe the benefits and usage of:
●● Implement package management.
●● Manage package feed.
●● Consume and create packages.
●● Publish packages.
Learn more
●● Azure Artifacts overview - Azure Artifacts | Microsoft Docs13.
●● What are feeds? - Azure Artifacts | Microsoft Docs14.
●● Get started with NuGet packages - Azure Artifacts | Microsoft Docs15.
●● Use feed views to share your packages - Azure Artifacts | Microsoft Docs16.
13 https://docs.microsoft.com/azure/devops/artifacts/start-using-azure-artifacts
14 https://docs.microsoft.com/azure/devops/artifacts/concepts/feeds
15 https://docs.microsoft.com/azure/devops/artifacts/get-started-nuget
16 https://docs.microsoft.com/azure/devops/artifacts/feeds/views
Learning objectives
After completing this module, students and professionals can:
●● Identify artifact repositories.
●● Migrate and integrate artifact repositories.
●● Secure package feeds.
●● Understand roles, permissions, and authentication.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Walkthroughs
For details on how to integrate NuGet, npm, Maven, Python, and Universal Feeds, see the following
walkthroughs:
●● Get started with NuGet packages in Azure DevOps Services and TFS17.
●● Use npm to store JavaScript packages in Azure DevOps Services or TFS18.
●● Get started with Maven packages in Azure DevOps Services and TFS19.
●● Get started with Python packages in Azure Artifacts20.
●● Publish and then download a Universal Package21.
17 https://docs.microsoft.com/azure/devops/artifacts/get-started-nuget
18 https://docs.microsoft.com/azure/devops/artifacts/get-started-npm
19 https://docs.microsoft.com/azure/devops/artifacts/get-started-maven
20 https://docs.microsoft.com/azure/devops/artifacts/quickstarts/python-packages
21 https://docs.microsoft.com/azure/devops/artifacts/quickstarts/universal-packages
Securing access
Package feeds must be secured for access by authorized accounts, so only verified and trusted packages
are stored there.
No one should be able to push packages to a feed without the proper role and permissions.
This prevents others from pushing malicious packages. It still assumes that the people who can push packages will only add safe and secure packages.
Especially in the open-source world, this vetting is done by the community. A package source can further guard its feed by using security and vulnerability scanning tooling.
Additionally, consumers of packages can use similar tooling to do the scans themselves.
Securing availability
Another aspect of security for package feeds is about the public or private availability of the packages.
The feeds of public sources are available for anonymous consumption.
Private feeds have restricted access most of the time.
It applies to the consumption and publishing of packages. Private feeds will allow only users in specific roles or teams access to their packages.
Package feeds need to have secure access for different kinds of reasons.
The access should involve allowing:
●● Restricted access for consumption Whenever only a particular audience should consume a package feed and its packages, access must be restricted. Only those granted access will consume the packages from the feed.
●● Restricted access for publishing Secure access is required to restrict who can publish, so that unauthorized or untrusted persons and accounts can't modify the packages in a feed.
Examine roles
Azure Artifacts has four different roles for package feeds. These are incremental in the permissions they
give.
The roles are in incremental order:
●● Reader: Can list and restore (or install) packages from the feed.
●● Collaborator: Can save packages from upstream sources.
●● Contributor: Can push and unlist packages in the feed.
●● Owner: has all available permissions for a package feed.
When creating an Azure Artifacts feed, the Project Collection Build Service is given contribu-
tor rights by default.
This organization-wide build identity in Azure Pipelines can access the feeds it needs when running tasks.
If you changed the build identity to be at the project level, you need to give that identity permissions to
access the feed.
Any contributors to the team project are also contributors to the feed.
Project Collection Administrators and administrators of the team project, plus the feed's creator, are
automatically made owners of the feed.
The roles for these users and groups can be changed or removed.
Examine permissions
The feeds in Azure Artifacts require permissions for the various features they offer. The list of permissions consists of increasingly privileged operations.
You can assign users, teams, and groups to a specific role for each permission, giving them the permissions corresponding to that role.
You need to have the Owner role to do so. Once an account has access to the feed from the permission
to list and restore packages, it's considered a Feed user.
Like permissions and roles for the feed itself, there are extra permissions for access to the individual
views.
Any feed user has access to all the views, whether the default views of @Local, @Release, @Prerelease or
newly created ones.
When creating a feed, you can choose whether the feed is visible to people in your Azure DevOps
organization or only specific people.
Examine authentication
Azure DevOps users will authenticate against Azure Active Directory when accessing the Azure DevOps
portal.
After being successfully authenticated, they won't have to provide any credentials to Azure Artifacts itself.
The user's roles, based on their identity or team and group membership, determine authorization.
When access is allowed, the user can navigate to the Azure Artifacts section of the team project.
The authentication from Azure Pipelines to Azure Artifacts feeds is taken care of transparently. It will be
based upon the roles and their permissions for the build identity.
The previous section on Roles covered some details on the required roles for the build identity.
The authentication from inside Azure DevOps doesn't need any credentials for accessing feeds by itself.
However, when accessing secured feeds outside Azure Artifacts, such as other package sources, you most
likely must provide credentials to authenticate to the feed manager.
Each package type has its way of handling the credentials and providing access upon authentication. The
command-line tooling will provide support in the authentication process.
For the build tasks in Azure Pipelines, you'll provide the credentials via a Service connection.
Summary
This module detailed package migration, consolidation, and configuration to secure access to package
feeds and artifact repositories.
You learned how to describe the benefits and usage of:
●● Identify artifact repositories.
●● Migrate and integrate artifact repositories.
●● Secure package feeds.
●● Understand roles, permissions, and authentication.
Learn more
●● Azure Artifacts overview - Azure Artifacts | Microsoft Docs23.
22 https://docs.microsoft.com/azure/devops/artifacts/feeds/feed-permissions
23 https://docs.microsoft.com/azure/devops/artifacts/start-using-azure-artifacts
●● Best practices when working with Azure Artifacts - Azure Artifacts | Microsoft Docs24.
●● Set up permissions - Azure Artifacts | Microsoft Docs25.
24 https://docs.microsoft.com/azure/devops/artifacts/concepts/best-practices
25 https://docs.microsoft.com/azure/devops/artifacts/feeds/feed-permissions
Immutable packages
As packages get new versions, your codebase can choose when to use a new version of the packages it
consumes.
It does so by specifying the specific version of the package it requires. This implies that packages them-
selves should always have a new version when they change.
Whenever a package is published to a feed, it should not be allowed to change anymore.
If it were, it would be at the risk of introducing potential breaking changes to the code. In essence, a
published package is immutable.
Replacing or updating an existing version of a package is not allowed. Most of the package feeds do not
allow operations that would change a current version.
Regardless of the size of the change, a package can only be updated by introducing a new version.
The new version should indicate the type of change and impact it might have.
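As an illustration, a semantic versioning scheme signals the impact of a change through the version number itself. The versions below are examples only:
1.4.2 -> 1.4.3 (a backward-compatible bug fix, or patch)
1.4.3 -> 1.5.0 (new, backward-compatible functionality, or minor)
1.5.0 -> 2.0.0 (a breaking change, or major)
2.0.0 -> 2.1.0-beta.1 (a prerelease label for early validation)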
This module explains versioning strategies for packaging, best practices for versioning, and package
promotion.
Learning objectives
After completing this module, students and professionals can:
●● Implement a versioning strategy.
●● Promote packages.
●● Push packages from pipeline.
●● Describe semantic versioning and explore best practices for versioning.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create an Azure DevOps Organization and a Team Project for some exercises. If you don't
have it yet, see:
●● Create an organization - Azure DevOps26.
●● If you already have your organization created, use the Azure DevOps Demo Generator [https://azuredevopsdemogenerator.azurewebsites.net] and create a new Team Project called "Parts Unlimited" using the template "PartsUnlimited." Or feel free to create a blank project. See Create a project - Azure DevOps27.
26 https://docs.microsoft.com/azure/devops/organizations/accounts/create-organization
27 https://docs.microsoft.com/azure/devops/organizations/projects/create-project
28 https://docs.microsoft.com/azure/devops/pipelines/artifacts/nuget#package-versioning
29 https://semver.org/
Feeds in Azure Artifacts have three different views by default. These views are added when a new feed is
created. The three views are:
●● Local. The @Local view contains all release and prerelease packages and the packages downloaded
from upstream sources.
●● Prerelease. The @Prerelease view contains all packages that have a label in their version number.
●● Release. The @Release view contains all packages that are considered official releases.
Using views
You can use views to help consumers of a package feed filter between released and unreleased versions of packages.
Essentially, it allows a consumer to make a conscious decision to choose from released packages or
opt-in to prereleases of a certain quality level.
By default, the @Local view is used to offer the list of available packages. The format for this URI is:
https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}/nuget/
v3/index.json
When consuming a package feed by its URI endpoint, the address can have the requested view included.
For a specific view, the URI includes the name of the view, which changes to be:
https://pkgs.dev.azure.com/{yourteamproject}/_packaging/{feedname}@
{Viewname}/nuget/v3/index.json
The tooling will show and use the packages from the specified view automatically.
Tooling may offer an option to select prerelease versions, such as shown in this Visual Studio 2017 NuGet
dialog. It doesn't relate or refer to the @Prerelease view of a feed. Instead, it relies on the presence of
prerelease labels of semantic versioning to include or exclude packages in the search results.
See also:
●● Views on Azure DevOps Services feeds30.
●● Communicate package quality with prerelease and release views31.
Promote packages
Azure Artifacts has the notion of promoting packages to views to indicate that a version is of a certain
quality level.
By selectively promoting packages, you can plan when packages have a certain quality and are ready to
be released and supported by the consumers.
You can promote packages to one of the available views as the quality indicator.
The two views Release and Prerelease might be sufficient, but you can create more views when you want finer-grained quality levels, such as alpha and beta.
30 https://docs.microsoft.com/azure/devops/artifacts/concepts/views
31 https://docs.microsoft.com/azure/devops/artifacts/feeds/views
Packages will always show in the @Local view, but they only show in a particular view after being promoted to it.
Depending on the URL used to connect to the feed, the available packages will be listed.
Upstream sources will only be evaluated when using the @Local view of the feed.
After packages have been downloaded and cached in the @Local view, you can see and resolve them in other views once they've been promoted to those views.
It's up to you to decide how and when to promote packages to a specific view.
This process can be automated by using an Azure Pipelines task as part of the build pipeline.
Packages that have been promoted to a view won't be deleted based on the retention policies.
32 https://docs.microsoft.com/learn/modules/understand-package-management/10-create-package-feed
4. Open the Views tab. By default, there will be three views: Local (includes all packages in the feed and all packages cached from upstream sources), Prerelease, and Release. In the Default view column, there is a check next to Local; it's the default view that will always be used.
33 https://docs.microsoft.com/azure/devops/artifacts/concepts/best-practices
Summary
This module explained versioning strategies for packaging, best practices for versioning, and package
promotion.
You learned how to describe the benefits and usage of:
●● Implement a versioning strategy.
●● Promote packages.
●● Push packages from pipeline.
●● Describe semantic versioning and explore best practices for versioning.
Learn more
●● Key concepts for Azure Artifacts34.
●● Publish and download universal packages - Azure Artifacts | Microsoft Docs35.
●● Get started with NuGet packages - Azure Artifacts | Microsoft Docs36.
34 https://docs.microsoft.com/azure/devops/artifacts/artifacts-key-concepts#immutability
35 https://docs.microsoft.com/azure/devops/artifacts/quickstarts/universal-packages
36 https://docs.microsoft.com/azure/devops/artifacts/get-started-nuget
GitHub Packages give you the flexibility to control permissions and visibility for your packages. You can
publish packages in a public or private repository. The permission can be inherited from the repository
where the package is hosted or defined for specific user or organization accounts for packages in the
container registry.
You can integrate GitHub Packages with GitHub APIs, GitHub Actions, and webhooks.
Learning objectives
After completing this module, students and professionals can:
●● Publish packages.
●● Install packages.
●● Delete and restore packages.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
●● You need to create a GitHub account at GitHub.com and a project for some exercises. If you don't
have it yet, see: Join GitHub · GitHub37. If you already have your GitHub account, create a new
repository Creating a new repository - GitHub Docs38.
Publish packages
GitHub Packages use native package tooling commands to publish and install package versions.
When creating a package, you can provide a description, installation and usage instructions, and other
details on the package page. It helps people consuming the package understand how to use it and its
purposes.
If a new package version fixes a security vulnerability, you can publish a security advisory to your reposi-
tory.
Tip: You can connect a repository to more than one package. Ensure the README and description
provide information about each package.
Publishing a package
To publish your package using any supported package client, you need to:
1. Create or use an existing access token with the appropriate scopes for the task you want to accom-
plish: Creating a personal access token39. When you create a personal access token (PAT), you can
assign the token to different scopes depending on your needs. See "About permissions for GitHub
Packages40".
2. Authenticate to GitHub Packages using your access token and the instructions for your package client.
37 https://github.com/signup
38 https://docs.github.com/repositories/creating-and-managing-repositories/creating-a-new-repository
39 https://docs.github.com/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token
40 https://docs.github.com/packages/learn-github-packages/about-permissions-for-github-packages#about-scopes-and-permissions-for-
package-registries
3. Publish the package using the instructions for your package client.
Choose your package, and check how to authenticate and publish: Working with a GitHub Packages registry41. You'll see examples below for NuGet and npm.
NuGet registry
You can authenticate to GitHub Packages with the dotnet command-line interface (CLI).
Create a nuget.config file in your project directory and specify GitHub Packages as a source under packageSources.
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <clear />
    <add key="github" value="https://nuget.pkg.github.com/OWNER/index.json" />
  </packageSources>
  <packageSourceCredentials>
    <github>
      <add key="Username" value="USERNAME" />
      <add key="ClearTextPassword" value="TOKEN" />
    </github>
  </packageSourceCredentials>
</configuration>
Note: Replace USERNAME with the name of your personal account on GitHub, TOKEN with your PAT, and
OWNER with the name of the user or organization account that owns your project's repository.
You can publish a package authenticating with a nuget.config file, or by using the --api-key command-line option with your GitHub PAT.
dotnet nuget push "bin/Release/OctocatApp.1.0.0.nupkg" --api-key YOUR_GITHUB_PAT --source "github"
npm registry
You can authenticate using npm by either editing your per-user ~/.npmrc file to include your PAT or by
logging in to npm on the command line using your username and personal access token.
Edit your ~/.npmrc file for your project to include the following line:
//npm.pkg.github.com/:_authToken=TOKEN
Create a new ~/.npmrc file if one doesn't exist.
If you prefer to authenticate by logging in to npm, use the npm login command.
$ npm login --scope=@OWNER --registry=https://npm.pkg.github.com
Username: USERNAME
Password: TOKEN
Email: PUBLIC-EMAIL-ADDRESS
41 https://docs.github.com/packages/working-with-a-github-packages-registry
Note: Replace USERNAME with your GitHub username, TOKEN with your PAT, and PUBLIC-EMAIL-ADDRESS with your email address.
To publish your npm package, see Working with the npm registry - GitHub Docs42.
After you publish a package, you can view the package on GitHub. See "Viewing packages43".
Install a package
You can install any package you have permission to view from GitHub Packages and use the package as a
dependency in your project.
You can search for packages globally across all of GitHub or within a particular organization. For details,
see Searching for packages50.
After you find a package, read the package's installation and description instructions on the package
page.
You can install a package using any supported package client following the same general guidelines.
1. Authenticate to GitHub Packages using the instructions for your package client.
2. Install the package using the instructions for your package client.
NuGet
To use NuGet packages from GitHub Packages, you must add dependencies to your .csproj file. For more
information on using a .csproj file in your project, see "Working with NuGet packages51".
If you're using Visual Studio, expand your Solution -> Project -> Right-click on Dependencies -> Manage
NuGet Packages…
42 https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-npm-registry#publishing-a-package-
using-a-local-npmrc-file
43 https://docs.github.com/packages/learn-github-packages/viewing-packages
44 https://github.com/Codertocat/hello-world-npm/packages/10696?version=1.0.1
45 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-container-registry
46 https://docs.github.com/packages/working-with-a-github-packages-registry
47 https://docs.github.com/github/managing-security-vulnerabilities/about-github-security-advisories
48 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry
49 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-npm-registry
50 https://docs.github.com/search-github/searching-on-github/searching-for-packages
51 https://docs.microsoft.com/nuget/consume-packages/overview-and-workflow
You can browse, install and update dependencies from multiple registries. For more information, see
Create and remove project dependencies52.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <PackageId>OctocatApp</PackageId>
    <Version>1.0.0</Version>
    <Authors>Octocat</Authors>
    <Company>GitHub</Company>
    <PackageDescription>This package adds an Octocat!</PackageDescription>
    <RepositoryUrl>https://github.com/OWNER/REPOSITORY</RepositoryUrl>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="OctokittenApp" Version="12.0.2" />
  </ItemGroup>
</Project>
52 https://docs.microsoft.com/visualstudio/ide/how-to-create-and-remove-project-dependencies
Note: Replace the OctokittenApp package with your package dependency and 1.0.0 with the version
you want to use.
3. Install the packages with the restore command.
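Assuming the .NET CLI, this step is typically:
dotnet restore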
npm
You need to add the .npmrc file to your project to install packages from GitHub Packages.
1. Authenticate to GitHub Packages.
2. In the same directory as your package.json file, create or edit a .npmrc file.
3. Include a line specifying GitHub Packages URL and the account owner.
@OWNER:registry=https://npm.pkg.github.com
Note: Replace OWNER with the name of the user or organization account.
4. Add the .npmrc file to the repository. See "Adding a file to a repository53".
5. Configure package.json in your project to use the package you're installing.
{
"name": "@my-org/server",
"version": "1.0.0",
"description": "Server app that uses the @octo-org/octo-app package",
"main": "index.js",
"author": "",
"license": "MIT",
"dependencies": {
"@octo-org/octo-app": "1.0.0"
}
}
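To complete the flow, the declared dependency is then installed with the npm client (a minimal sketch):
npm install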
53 https://docs.github.com/repositories/working-with-files/managing-files/adding-a-file-to-a-repository
54 https://docs.github.com/packages/working-with-a-github-packages-registry
55 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry
56 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-npm-registry
57 https://docs.github.com/rest/reference/packages
58 https://docs.github.com/packages/learn-github-packages/deleting-and-restoring-a-package
59 https://docs.github.com/packages/working-with-a-github-packages-registry
60 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-nuget-registry
61 https://docs.github.com/packages/working-with-a-github-packages-registry/working-with-the-npm-registry
62 https://docs.github.com/packages/learn-github-packages/deleting-and-restoring-a-package
63 https://docs.github.com/packages/learn-github-packages/about-permissions-for-github-packages
You can give any person an access role for container images published and owned by a personal account.
For container images published and owned by an organization, you can provide any person or team in
the organization an access role.
Summary
This module introduced you to GitHub Packages. It explored ways to control permissions and visibility,
publish, install, delete and restore packages using GitHub.
You learned how to describe the benefits and usage of:
●● Publish packages.
●● Install packages.
●● Delete and restore packages.
●● Configure access control and visibility.
Learn more
●● Quickstart for GitHub Packages - GitHub Docs65.
●● Learn GitHub Packages - GitHub Docs66.
●● Working with a GitHub Packages registry - GitHub Docs67.
64 https://docs.github.com/packages/learn-github-packages/configuring-a-packages-access-control-and-visibility
65 https://docs.github.com/packages/quickstart
66 https://docs.github.com/packages/learn-github-packages
67 https://docs.github.com/packages/working-with-a-github-packages-registry
Lab
Lab 17: Package management with Azure Arti-
facts
Lab overview
Azure Artifacts facilitates the discovery, installation, and publishing of NuGet, npm, and Maven packages in Azure DevOps. It's deeply integrated with other Azure DevOps features such as Build, making package management a seamless part of your existing workflows.
In this lab, you will learn how to work with Azure Artifacts by using the following steps:
●● create and connect to a feed.
●● create and publish a NuGet package.
●● import a NuGet package.
●● update a NuGet package.
Objectives
After you complete this lab, you will be able to:
●● Create and connect to an Azure Artifacts feed.
●● Create and publish a NuGet package.
●● Import and update a NuGet package.
Lab duration
●● Estimated time: 40 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions68
68 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which of the following choices isn't a correct aspect of a dependency management strategy?
Standardization.
Versioning.
Binaries sharing.
Multiple choice
Which of the following choices isn't a way of componentization commonly used?
Symbol componentization.
Source componentization.
Package componentization.
Multiple choice
Which of the following choices is the aspect of dependency management that allows a repeatable, predicta-
ble process and usage that can be automated?
Versioning.
Package formats and sources.
Standardization.
Multiple choice
Which of the following visibility options should you choose if you create a feed that will allow you and the users you invite to publish?
Private.
Public.
Local.
Multiple choice
Which of the following choices isn't an option for private feeds?
Self-hosting.
Database hosting.
SaaS services.
Multiple choice
Which of the following choices isn't a package type supported by Azure DevOps?
PHP.
NPM.
Python.
Multiple choice
Which of the following choices is the minimum feed permission that will allow you to list available packages
and install them?
Collaborator.
Reader.
Contributor.
Multiple choice
Which of the following roles is given to the Project Collection Build Service when creating an Azure Artifacts
feed by default?
Owner.
Reader.
Contributor.
Multiple choice
Which of the following choices is a recommended place to store binaries?
Universal Packages.
Git.
Team Foundation Version Control.
Multiple choice
Which of the following choices is a correct fix if an existing package is broken or buggy?
Use an older version.
Publish a new version.
Repair and save.
Multiple choice
Which of the following actions is required to make a package visible in release views other than @Local?
Promote.
Move.
Push.
Multiple choice
Which of the following choices isn't a default feed view in Azure Artifacts?
Remote.
Prerelease.
Local.
Multiple choice
Which of the following choices contains valid packages that GitHub Packages can host?
Apache Maven, Gradle and Go modules.
RubyGems, Docker and NuGet.
NuGet, Conan (for C/C++), and Cargo (for Rust).
Multiple choice
Which of the following choices is the most secure and recommended authentication method for GitHub
Packages?
Personal Access Token (PAT).
User and Password.
OAuth token.
Multiple choice
Which of the following choices is the minimum user or team permission within the GitHub organization that
allows a package delete action?
Write.
Read.
Admin.
Answers
Multiple choice
Which of the following choices isn't a correct aspect of a dependency management strategy?
Standardization.
Versioning.
■■ Binaries sharing.
Explanation
There are many aspects of a dependency management strategy, like standardization, package formats and sources, and versioning.
Multiple choice
Which of the following choices isn't a way of componentization commonly used?
■■ Symbol componentization.
Source componentization.
Package componentization.
Explanation
There are two ways of componentization commonly used: source componentization and package compo-
nentization.
Multiple choice
Which of the following choices is the aspect of dependency management that allows a repeatable,
predictable process and usage that can be automated?
Versioning.
Package formats and sources.
■■ Standardization.
Explanation
Standardization allows a repeatable, predictable process and usage that can be automated as well.
Multiple choice
Which of the following visibility options should you choose if you create a feed that will allow you and the users you invite to publish?
■■ Private.
Public.
Local.
Explanation
It's private. Private feeds can only be consumed by users who are allowed access.
Multiple choice
Which of the following choices isn't an option for private feeds?
Self-hosting.
■■ Database hosting.
SaaS services.
Explanation
There are two options for private feeds, which are Self-hosting (NuGet Server, Nexus) and SaaS Services
(Azure Artifacts, MyGet).
Multiple choice
Which of the following choices isn't a package type supported by Azure DevOps?
■■ PHP.
NPM.
Python.
Explanation
Azure Artifacts currently supports feeds that can store five different package types: NuGet, npm, Maven, Universal packages, and Python packages.
Multiple choice
Which of the following choices is the minimum feed permission that will allow you to list available
packages and install them?
Collaborator.
■■ Reader.
Contributor.
Explanation
A reader can list and restore or install packages from the feed.
Multiple choice
Which of the following roles is given to the Project Collection Build Service when creating an Azure
Artifacts feed by default?
Owner.
Reader.
■■ Contributor.
Explanation
When creating an Azure Artifacts feed, the Project Collection Build Service is given contributor rights by
default. This organization-wide build identity in Azure Pipelines can access the feeds it needs when running
tasks.
Multiple choice
Which of the following choices is a recommended place to store binaries?
■■ Universal Packages.
Git.
Team Foundation Version Control.
Explanation
You can store them directly using universal packages. This is also a great way to protect your packages.
Multiple choice
Which of the following choices is a correct fix if an existing package is broken or buggy?
Use an older version.
■■ Publish a new version.
Repair and save.
Explanation
You need to publish a new version. Replacing or updating an existing version of a package is not allowed.
Most of the package feeds do not allow operations that would change an existing version.
Multiple choice
Which of the following actions is required to make a package visible in release views other than @Local?
■■ Promote.
Move.
Push.
Explanation
You can promote packages to Release and Prerelease views as the quality indicator.
Multiple choice
Which of the following choices isn't a default feed view in Azure Artifacts?
■■ Remote.
Prerelease.
Local.
Explanation
Feeds in Azure Artifacts have three different views by default. These views are Release, Prerelease, and Local.
Multiple choice
Which of the following choices contains valid packages that GitHub Packages can host?
Apache Maven, Gradle and Go modules.
■■ RubyGems, Docker and NuGet.
NuGet, Conan (for C/C++), and Cargo (for Rust).
Explanation
GitHub Packages can host npm, RubyGems, Apache Maven, Gradle, Docker, NuGet, and GitHub's Container
registry.
Multiple choice
Which of the following choices is the most secure and recommended authentication method for GitHub
Packages?
■■ Personal Access Token (PAT).
User and Password.
OAuth token.
Explanation
It's recommended to use PAT for a secure GitHub Packages authentication.
Multiple choice
Which of the following choices is the minimum user or team permission within the GitHub organization
that allows a package delete action?
Write.
Read.
■■ Admin.
Explanation
The Admin permission allows uploading, downloading, deleting, and managing packages. An admin can also read and write package metadata and grant package permissions.
Module 9 Implement continuous feedback
Learning objectives
After completing this module, students and professionals can:
●● Implement tools to track feedback.
●● Plan for continuous monitoring.
●● Implement Application Insights.
●● Use Kusto Query Language (KQL).
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
Definitions
The easiest way to define the inner loop is as the iterative process that a developer goes through when writing, building, and debugging code.
There are other things that a developer does, but the inner loop is the set of steps done repeatedly before sharing their work with their team or the rest of the world.
Exactly what goes into an individual developer's inner loop will depend significantly on the technologies
they're working with, the tools being used, and their preferences.
If I were working on a library, my inner loop would include coding, building, test execution, and debugging, with regular commits to my local Git repository.
On the other hand, if I were doing some web front-end work, my loop would probably be optimized around editing HTML and JavaScript, bundling, and refreshing the browser (followed by regular commits).
Most codebases comprise multiple moving parts, so the definition of a developer's inner loop on any single codebase might vary depending on what is being worked on.
Loop optimization
Having categorized the steps within the loop, it's now possible to make some general statements:
●● You want to execute the loop as fast as possible and for the total loop execution time to be propor-
tional to the changes made.
●● You want to minimize the time feedback collection takes but maximize the quality of the feedback
that you get.
●● You want to minimize the tax you pay by eliminating it where it's unnecessary to run through the loop
(can you defer some operations until you commit, for example).
●● As new code and more complexity are added to any codebase, the amount of outward pressure to
increase the size of the inner loop also increases. More code means more tests, which means more
execution time and slow execution of the inner loop.
Suppose you have ever worked on a large monolithic codebase. In that case, it's possible to get into a
situation where even small changes require a disproportionate amount of time to execute the feedback
collection steps of the inner loop. It's a problem, and you should fix it.
There are several things that a team can do to optimize the inner loop for larger codebases:
●● Only build and test what was changed.
●● Cache intermediate build results to speed up complete builds.
●● Break up the codebase into small units and share binaries.
How you tackle each one of those is probably a blog post on its own.
At Microsoft, for some of our genuinely massive monolithic codebases, we're investing heavily in #1 and #2, but #3 requires a special mention because it can be a double-edged sword and can have the opposite of the intended impact if done incorrectly.
Tangled loops
To understand the problem, we need to look beyond the inner loop. Let's say that our monolithic codebase has an application-specific framework that does much of the heavy lifting.
It would be tempting to extract that framework into a set of packages.
To do this, you would pull that code into a separate repository (optional, but this is generally the way it's
done), then set up a different CI/CD pipeline that builds and publishes the package.
A different pull-request process would also front this separate build and release pipeline to inspect
changes before the code is published.
When someone needs to change this framework code, they clone down the repository, make their
changes (a separate inner loop), and submit a PR that transitions the workflow from the inner loop to the
outer loop.
The framework package would then be available to be pulled into dependent applications (in this case,
the monolith).
Initially, things might work out well. However, at some point in the future, you'll likely want to develop a
new feature in the application that requires extensive new capabilities to be added to the framework.
It's where teams that have broken up their codebases in suboptimal ways will start to feel pain.
If you have to coevolve code in two separate repositories where a binary/library dependency is present,
you'll experience some friction.
In loop terms, the original codebase's inner loop now (temporarily at least) includes the outer loop of the
previously broken out framework code.
Outer loops include tax, including code reviews, scanning passes, binary signing, release pipelines, and
approvals.
You don't want to pay that every time you've added a method to a class in the framework and now want
to use it in your application.
What generally ends up happening next is a series of local hacks by the developer to try to stitch the inner loops together so that they can move forward efficiently - but it gets messy quickly, and you must pay that outer-loop tax at some point.
This isn't to say that breaking up code into separate packages is inherently bad - it can work brilliantly, but you need to make those incisions carefully.
Closing thoughts
There's no silver bullet solution to ensure that your inner loop doesn't start slowing down, but it's
essential to understand when it starts happening, what the cause is, and work to address it.
Decisions such as how you build, test, and debug, as well as the architecture itself, will all impact how productive developers are. Improving one aspect will often cause issues in another.
Continuous monitoring refers to the process and technology required to incorporate monitoring across
each phase of your DevOps and IT operations lifecycles.
It helps to continuously ensure the health, performance, and reliability of your application and infrastructure as it moves from development to production.
Continuous monitoring builds on the concepts of Continuous Integration and Continuous Deployment
(CI/CD), which help you develop and deliver software faster and more reliably to provide continuous
value to your users.
Azure Monitor1 is the unified monitoring solution in Azure that provides full-stack observability across
applications and infrastructure in the cloud and on-premises.
It works seamlessly with Visual Studio and Visual Studio Code2 during development and test and
integrates with Azure DevOps3 for release management and work item management during deployment
and operations.
It even integrates across the ITSM and SIEM tools of your choice to help track issues and incidents within
your existing IT processes.
This article describes specific steps for using Azure Monitor to enable continuous monitoring throughout
your workflows.
It includes links to other documentation that provides details on implementing different features.
1 https://docs.microsoft.com/azure/azure-monitor/overview
2 https://visualstudio.microsoft.com/
3 https://docs.microsoft.com/azure/devops/user-guide/index
It will allow you to visualize end-to-end transactions and connections across all the components easily.
●● Azure DevOps Projects gives you a simplified experience with your existing code and Git repository, or choose4 from one of the sample applications, to create a Continuous Integration (CI) and Continuous Delivery (CD) pipeline to Azure.
●● Continuous monitoring in your DevOps release pipeline5 allows you to gate or roll back your
deployment based on monitoring data.
●● Status Monitor6 allows you to instrument a live .NET app on Windows with Azure Application Insights
without having to modify or redeploy your code.
●● If you have access to the code for your application, then enable complete monitoring with Applica-
tion Insights7 by installing the Azure Monitor Application Insights SDK for .NET8, Java9, Node.js10, or
any other programming language11. It allows you to specify custom events, metrics, or page views
relevant to your application and business.
4 https://docs.microsoft.com/azure/devops-project/overview
5 https://docs.microsoft.com/azure/application-insights/app-insights-vsts-continuous-monitoring
6 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-performance-live-website-now
7 https://docs.microsoft.com/azure/application-insights/app-insights-overview
8 https://docs.microsoft.com/azure/application-insights/quick-monitor-portal
9 https://docs.microsoft.com/azure/application-insights/app-insights-java-quick-start
10 https://docs.microsoft.com/azure/application-insights/app-insights-nodejs-quick-start
11 https://docs.microsoft.com/azure/application-insights/app-insights-platforms
12 https://docs.microsoft.com/azure/azure-monitor/platform/data-sources
13 https://docs.microsoft.com/azure/azure-monitor/insights/vminsights-overview
14 https://docs.microsoft.com/azure/azure-monitor/insights/container-insights-overview
15 https://docs.microsoft.com/azure/azure-monitor/insights/solutions-inventory
16 https://docs.microsoft.com/azure/devops/learn/what-is-infrastructure-as-code
17 https://docs.microsoft.com/azure/azure-monitor/platform/template-workspace-configuration
18 https://docs.microsoft.com/azure/governance/policy/overview
19 https://docs.microsoft.com/azure/azure-monitor/insights/resource-group-insights
20 https://docs.microsoft.com/azure/devops/pipelines
21 https://docs.microsoft.com/azure/application-insights/app-insights-separate-resources
22 https://docs.microsoft.com/azure/azure-monitor/platform/metrics-charts
23 https://docs.microsoft.com/azure/azure-monitor/log-query/cross-workspace-query
24 https://docs.microsoft.com/azure/azure-monitor/platform/alerts-overview
25 https://docs.microsoft.com/azure/azure-monitor/platform/alerts-dynamic-thresholds
26 https://docs.microsoft.com/azure/azure-monitor/platform/action-groups#create-an-action-group-by-using-the-azure-portal
27 https://docs.microsoft.com/azure/azure-monitor/platform/itsmc-overview
28 https://docs.microsoft.com/azure/azure-monitor/platform/activity-log-alerts-webhook
●● Remediate situations identified in alerts with Azure Automation runbooks29 or Logic Apps30 that can
be launched from an alert using webhooks.
●● Use autoscaling31 to dynamically increase and decrease your compute resources based on collected
metrics.
Continuously optimize
Monitoring is one of the fundamental aspects of the popular Build-Measure-Learn philosophy, which
recommends continuously tracking your KPIs and user behavior metrics and optimizing them through
planning iterations.
Azure Monitor helps you collect metrics and logs relevant to your business and add new data points in the next deployment as required.
●● Use tools in Application Insights to track end-user behavior and engagement34.
●● Use Impact Analysis35 to help you prioritize which areas to focus on to drive to important KPIs.
29 https://docs.microsoft.com/azure/automation/automation-webhooks
30 https://docs.microsoft.com/connectors/custom-connectors/create-webhook-trigger
31 https://docs.microsoft.com/azure/azure-monitor/learn/tutorial-autoscale-performance-schedule
32 https://docs.microsoft.com/azure/application-insights/app-insights-tutorial-dashboards
33 https://docs.microsoft.com/azure/application-insights/app-insights-usage-workbooks
34 https://docs.microsoft.com/azure/application-insights/app-insights-tutorial-users
35 https://docs.microsoft.com/azure/application-insights/app-insights-usage-impact
In this tutorial, we'll focus on the Log Analytics part of Azure Monitor. We'll learn how to:
●● Set up a Log Analytics workspace.
●● Connect virtual machines to a Log Analytics workspace.
●● Configure the Log Analytics workspace to collect custom performance counters.
●● Analyze the telemetry using the Kusto Query Language.
Getting started
1. To follow along, you'll need a resource group with one or more virtual machines that you can access over RDP.
2. Log into Azure Shell36. Execute the commands below. They create a new resource group and a new Log Analytics workspace. Record the workspace ID of the Log Analytics workspace, as we'll be using it again.
$ResourceGroup = "azwe-rg-devtest-logs-001"
$WorkspaceName = "azwe-devtest-logs-01"
$Location = "westeurope"
36 https://shell.azure.com/powershell
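The part of the script that actually creates the resource group and the workspace, and that defines the $Solutions list used below, isn't reproduced here. A minimal sketch of what it might look like, assuming the Az PowerShell module and an example set of solution names, is:
# Create the resource group and the Log Analytics workspace (sketch using the variables above)
New-AzResourceGroup -Name $ResourceGroup -Location $Location
$Workspace = New-AzOperationalInsightsWorkspace -ResourceGroupName $ResourceGroup `
    -Name $WorkspaceName -Location $Location -Sku PerGB2018
# The workspace ID (CustomerId) is needed later when onboarding virtual machines
$Workspace.CustomerId
# Example list of solutions consumed by the loop below; adjust to the solutions you need
$Solutions = "Security", "Updates", "SQLAssessment"
3. Run the following script to enable the monitoring solutions and add a Windows event log data source to the workspace.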
# Add solutions
foreach ($solution in $Solutions) {
    Set-AzOperationalInsightsIntelligencePack -ResourceGroupName $ResourceGroup `
        -WorkspaceName $WorkspaceName -IntelligencePackName $solution -Enabled $true
}
# Windows Event
New-AzOperationalInsightsWindowsEventDataSource -ResourceGroupName $ResourceGroup `
    -WorkspaceName $WorkspaceName -EventLogName "Application" -CollectErrors -CollectWarnings `
    -Name "Example Application Event Log"
4. Map existing virtual machines to the Log Analytics workspace. The following script uses the workspace ID and workspace secret key of the Log Analytics workspace to install the Microsoft Enterprise Cloud Monitoring extension onto an existing VM.
-Settings $PublicSettings `
-ProtectedSettings $ProtectedSettings `
-Location westeurope
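Only the tail of that command is shown above. A minimal sketch of the full call, assuming a hypothetical VM name (azwe-vm-devtest-01) in the same resource group and that you've copied the workspace ID and primary key from the workspace's Agents management blade, might look like this:
# Workspace details used by the Microsoft Enterprise Cloud Monitoring (MMA) extension
$PublicSettings = @{ "workspaceId" = "<workspace-id>" }               # workspace ID recorded earlier
$ProtectedSettings = @{ "workspaceKey" = "<workspace-primary-key>" }  # primary key of the workspace
Set-AzVMExtension -ResourceGroupName $ResourceGroup `
    -VMName "azwe-vm-devtest-01" `
    -Name "MicrosoftMonitoringAgent" `
    -Publisher "Microsoft.EnterpriseCloud.Monitoring" `
    -ExtensionType "MicrosoftMonitoringAgent" `
    -TypeHandlerVersion "1.0" `
    -Settings $PublicSettings `
    -ProtectedSettings $ProtectedSettings `
    -Location westeurope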
5. Run the script to configure the below-listed performance counters to be collected from the virtual
machine.
#Login-AzureRmAccount
#Instance
##################################
$InstanceNameAll = "*"
$InstanceNameTotal = '_Total'
#Objects
##################################
$ObjectCache = "Cache"
$ObjectLogicalDisk = "LogicalDisk"
$ObjectMemory = "Memory"
$ObjectNetworkAdapter = "Network Adapter"
$ObjectNetworkInterface = "Network Interface"
$ObjectPagingFile = "Paging File"
$ObjectProcess = "Process"
$ObjectProcessorInformation = "Processor Information"
$ObjectProcessor = "Processor"
$ObjectSQLAgentAlerts = "SQLAgent:Alerts"
$ObjectSQLAgentJobs = "SQLAgent:Jobs"
$ObjectSQLAgentStatistics = "SQLAgent:Statistics"
$ObjectSystem = "System"
#Counters
#########################################################
$CounterCache = "Copy Read Hits %"
$CounterLogicalDisk =
"% Free Space" `
,"Avg. Disk sec/Read" `
,"Avg. Disk sec/Transfer" `
,"Avg. Disk sec/Write" `
,"Current Disk Queue Length" `
,"Disk Read Bytes/sec" `
,"Disk Reads/sec" `
,"Disk Transfers/sec" `
,"Disk Writes/sec"
$CounterMemory =
"% Committed Bytes In Use" `
,"Available MBytes" `
,"Page Faults/sec" `
,"Pages Input/sec" `
,"Pages Output/sec" `
,"Pool Nonpaged Bytes"
$CounterNetworkAdapter =
"Bytes Received/sec" `
,"Bytes Sent/sec"
$CounterPagingFile =
"% Usage" `
,"% Usage Peak"
$CounterProcessorInformation =
"% Interrupt Time" `
,"Interrupts/sec"
#########################################################
$global:number = 1 # Name parameter needs to be unique; that's why we will use number++ in the function
#########################################################
-InstanceName $Instance `
-CounterName $Counter `
-IntervalSeconds 10 `
-Name "Windows Performance Counter $global:number"
$global:number ++
}
}
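The fragment above is the tail of a helper that registers each counter as a workspace data source. A minimal sketch of such a helper (the function name is hypothetical, not part of the original script) is:
function Add-PerfCounters {
    param ($ObjectName, $Counters, $InstanceName)
    foreach ($Counter in $Counters) {
        # Register one performance counter as a Log Analytics data source
        New-AzOperationalInsightsWindowsPerformanceCounterDataSource `
            -ResourceGroupName $ResourceGroup -WorkspaceName $WorkspaceName `
            -ObjectName $ObjectName -InstanceName $InstanceName -CounterName $Counter `
            -IntervalSeconds 10 -Name "Windows Performance Counter $global:number"
        $global:number++
    }
}
# Example: collect the logical disk counters for all instances
Add-PerfCounters -ObjectName $ObjectLogicalDisk -Counters $CounterLogicalDisk -InstanceName $InstanceNameAll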
6. To generate some interesting performance statistics, download the HeavyLoad utility37 (a free load-testing utility) and run it on the virtual machine to simulate high CPU, memory, and IOPS consumption.
37 https://www.jam-software.com/heavyload/
How it works
1. Log Analytics works by running the Microsoft Monitoring Agent service on the machine. The service
locally captures and buffers the events and pushes them securely out to the Log Analytics workspace
in Azure.
2. Log into the virtual machine, navigate to C:\Program Files\Microsoft Monitoring Agent\MMA, and open the control panel. It shows the details of the connected Log Analytics workspace. You can also add multiple Log Analytics workspaces to publish the log data into several workspaces.
Summary
So far, we've created a log analytics workspace in a resource group.
The log analytics workspace has been configured to collect performance counters, event logs, and IIS
Logs.
A virtual machine has been mapped to the log analytics workspace using the Microsoft Enterprise cloud
monitoring extension.
HeavyLoad has been used to simulate high CPU, memory, and IOPS on the virtual machine.
Walkthrough
Note: This walkthrough continues the previous lesson on Azure Log Analytics, and the walkthrough
started within it.
1. Log in to Azure portal39 and navigate to the log analytics workspace. From the left blade in the log
analytics workspace, click Logs. It will open the Logs window, ready for you to start exploring all the
data points captured into the workspace.
2. To query the logs, we'll need to use the Kusto Query Language. Run the following query to list the last
heartbeat of each machine connected to the log analytics workspace.
38 https://docs.microsoft.com/azure/data-explorer/kusto/concepts/
39 https://portal.azure.com
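The heartbeat query itself isn't reproduced in step 2 above; a minimal Kusto sketch that returns the last heartbeat per connected computer is:
// Last heartbeat received from each computer reporting to the workspace
Heartbeat
| summarize LastHeartbeat = max(TimeGenerated) by Computer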
4. Show a count of the data points collected in the last 24 hours. The result shows that we have 66M
data points. We can query against them in near real time to analyze and correlate insights.
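The exact query isn't shown above; a sketch that counts the records ingested across all tables in the last 24 hours would be:
// Count every record ingested into the workspace over the last 24 hours
union *
| where TimeGenerated > ago(24h)
| summarize count()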
5. Run the following query to generate the max CPU Utilization trend over the last 24 hours, aggregated
at a granularity of 1 min. Render the data as a time chart.
Perf
| where ObjectName == "Processor" and InstanceName == "_Total"
| summarize max(CounterValue) by Computer, bin(TimeGenerated, 1m)
| render timechart
6. Run the following query to see all the processes running on that machine contributing to the CPU
Utilization. Render the data in a pie chart.
Perf
| where ObjectName contains "process"
and InstanceName !in ("_Total", "Idle")
and CounterName == "% Processor Time"
| summarize avg(CounterValue) by InstanceName, CounterName, bin(TimeGenerated, 1m)
| render piechart
There's more
This unit has introduced the basic concepts of Log Analytics and how to get started with the basics.
We've only scratched the surface of what is possible with Log Analytics.
We would encourage you to try out the advanced tutorials available for Log Analytics on Microsoft
Docs40.
40 https://docs.microsoft.com/azure/azure-monitor/
Also, you can pull in telemetry from the host environments such as performance counters, Azure diag-
nostics, or Docker logs.
You can also set up web tests that periodically send synthetic requests to your web service.
All these telemetry streams are integrated into the Azure portal, where you can apply powerful analytic
and search tools to the raw data.
●● Performance counters from your Windows or Linux server machines include CPU, memory, and
network usage.
●● Host diagnostics from Docker or Azure.
●● Diagnostic trace logs from your app - so that you can correlate trace events with requests.
●● Custom events and metrics that you write yourself in the client or server code to track business events
such as items sold or games won.
Application map
The components of your app, with key metrics and alerts.
Profiler
Inspect the execution profiles of sampled requests.
Usage analysis
Analyze user segmentation and retention.
41 https://docs.microsoft.com/azure/application-insights/app-insights-proactive-diagnostics
42 https://docs.microsoft.com/azure/azure-monitor/app/alerts
Dashboards
Mashup data from multiple resources and share it with others. Great for multi-component applications
and continuous display in the team room.
Analytics
Answer challenging questions about your app's performance and usage by using this powerful query
language.
Visual Studio
See performance data in the code. Go to code from stack traces.
Snapshot debugger
Debug snapshots sampled from live operations, with parameter values.
Power BI
Integrate usage metrics with other business intelligence.
REST API
Write code to run queries over your metrics and raw data.
Continuous export
Bulk export of raw data to storage as soon as it arrives.
Detect, Diagnose
If you receive an alert or discover a problem:
●● Assess how many users are affected.
●● Correlate failures with exceptions, dependency calls, and traces.
43 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
44 https://docs.microsoft.com/azure/azure-monitor/app/app-insights-dashboards
45 https://docs.microsoft.com/azure/application-insights/app-insights-live-stream
Get started
Application Insights is one of the many services hosted within Microsoft Azure, and telemetry is sent
there for analysis and presentation.
So, before you do anything else, you'll need a subscription to Microsoft Azure47.
It's free to sign up, and if you choose the basic pricing plan48 of Application Insights, there's no charge
until your application has grown to have large usage.
If your organization already has a subscription, they could add your Microsoft account to it.
There are several ways to get started. Begin with whichever works best for you. You can add the others
later.
At run time
Instrument your web app on the server. Avoids any update to the code. You need admin access to your
server.
●● IIS on-premises or on a VM49
●● Azure web app or VM50
●● J2EE51
At development time
Add Application Insights to your code. Allows you to write custom telemetry and to instrument back-end
and desktop apps.
●● Visual Studio52 2013 Update 2 or later.
●● Java53
●● Node.js
●● Other platforms54
46 https://docs.microsoft.com/azure/application-insights/app-insights-usage-overview
47 https://azure.com/
48 https://azure.microsoft.com/pricing/details/application-insights/
49 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-performance-live-website-now
50 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-performance-live-website-now
51 https://docs.microsoft.com/azure/application-insights/app-insights-java-live
52 https://docs.microsoft.com/azure/azure-monitor/app/asp-net
53 https://docs.microsoft.com/azure/application-insights/app-insights-java-get-started
54 https://docs.microsoft.com/azure/application-insights/app-insights-platforms
●● Instrument your web pages55 for page view, AJAX, and other client-side telemetry.
●● Analyze mobile app usage56 by integrating with Visual Studio App Center.
●● Availability tests57 - ping your website regularly from our servers.
Getting started
1. To add Application Insights to your ASP.NET58 website, you need to:
●● Install Visual Studio 2019 for Windows with the following workloads:
●● ASP.NET59 and web development (don't uncheck the optional components).
2. In Visual Studio, create a new .NET Core project. Right-click the project, and from the context menu, select Add > Application Insights Telemetry.
55 https://docs.microsoft.com/azure/application-insights/app-insights-javascript
56 https://docs.microsoft.com/azure/application-insights/app-insights-mobile-center-quickstart
57 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
58 http://asp.net/
59 http://asp.net/
(Depending on your Application Insights SDK version, you may be prompted to upgrade to the latest
SDK release. If prompted, select Update SDK.)
3. From the Application Insights configuration screen, click Get started to start setting up App Insights.
4. Choose to set up a new resource group and select the location where you want the telemetry data to
be persisted.
Summary
So far, we have added Application Insights to a .NET Core application.
The Application Insights getting started experience lets you create a new resource group in the desired location, where the Application Insights instance gets created.
The instrumentation key for the Application Insights instance is injected into the application configuration automatically.
How to do it
1. Run your app with F5. Open different pages to generate some telemetry. In Visual Studio, you'll see a
count of the events that have been logged.
2. You can see your telemetry either in Visual Studio or in the Application Insights web portal. Search telemetry in Visual Studio to help you debug your app. Monitor performance and usage in the web portal when your system is live. To view Application Insights data in Visual Studio, select Solution Explorer > Connected Services, right-click Application Insights, and then click Search Live Telemetry. In the Visual Studio Application Insights Search window, you'll see the data from your application for telemetry generated on the server side of your app. Experiment with the filters, and click any event to see more detail.
3. You can also see telemetry in the Application Insights web portal (unless you choose to install only the
SDK). The portal has more charts, analytic tools, and cross-component views than Visual Studio. The
portal also provides alerts.
Open your Application Insights resource. Either sign into the Azure portal and find it there, or select
Solution Explorer > Connected Services > right-click Application Insights > Open Application Insights
Portal and let it take you there.
The portal opens on a view of the telemetry from your app.
How it works
Application Insights configures a unique key (called the AppInsights Key) in your application. The Application Insights SDK uses this key to identify the Azure Application Insights workspace to which the telemetry data needs to be uploaded. The SDK and the key are merely used to pump the telemetry data points out of your application. The heavy lifting of data correlation, analysis, and insights is done within Azure.
There's more
This tutorial taught us how to get started by adding Application Insights to your .NET Core application.
App Insights offers a wide range of features.
You can learn more about these at Start Monitoring Your ASP.NET Core Web Application60.
Summary
This module introduced you to continuous feedback practices and tools to track usage and flow, such as Azure Log Analytics, Kusto Query Language (KQL), and Application Insights.
You learned how to describe the benefits and usage of:
●● Implement tools to track feedback.
●● Plan for continuous monitoring.
●● Implement Application Insights.
●● Use Kusto Query Language (KQL).
60 https://docs.microsoft.com/azure/azure-monitor/learn/dotnetcore-quick-start
Learn more
●● Give feedback with Test & Feedback extension - Azure Test Plans | Microsoft Docs61.
●● Request stakeholder feedback - Azure Test Plans | Microsoft Docs62.
●● Continuous monitoring with Azure Monitor - Azure Monitor | Microsoft Docs63.
●● Continuous monitoring of your DevOps release pipeline with Azure Pipelines and Azure Appli-
cation Insights - Azure Monitor | Microsoft Docs64.
●● What is Azure Application Insights? - Azure Monitor | Microsoft Docs65.
●● KQL quick reference | Microsoft Docs66.
61 https://docs.microsoft.com/azure/devops/test/provide-stakeholder-feedback
62 https://docs.microsoft.com/azure/devops/test/request-stakeholder-feedback
63 https://docs.microsoft.com/azure/azure-monitor/continuous-monitoring
64 https://docs.microsoft.com/azure/azure-monitor/app/continuous-monitoring
65 https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview
66 https://docs.microsoft.com/azure/data-explorer/kql-quick-reference
Learning objectives
After completing this module, students and professionals can:
●● Configure Azure Dashboards.
●● Work with View Designer in Azure Monitor.
●● Create Azure Monitor Workbooks.
●● Monitor with Power BI.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
67 https://docs.microsoft.com/azure/azure-portal/azure-portal-dashboards
Advantages
●● Deep integration into Azure. Visualizations can be pinned to dashboards from multiple Azure pages,
including metrics analytics, log analytics, and Application Insights.
●● Supports both metrics and logs.
●● Combine data from multiple sources, including output from Metrics explorer68, Log Analytics
queries69, and maps70 and availability71 in Application Insights.
●● Option for personal or shared dashboards. It's integrated with Azure role-based access control (RBAC)72.
●● Automatic refresh. Metrics refresh depends on the time range with a minimum of five minutes. Logs
refresh at one minute.
●● Parametrized metrics dashboards with timestamp and custom parameters.
●● Flexible layout options.
●● Full-screen mode.
Limitations
●● Limited control over log visualizations with no support for data tables. The total number of data series is limited to 10, with further data series grouped under an "other" bucket.
●● No custom parameters support for log charts.
68 https://docs.microsoft.com/azure/azure-monitor/platform/metrics-charts
69 https://docs.microsoft.com/azure/azure-monitor/log-query/log-query-overview
70 https://docs.microsoft.com/azure/azure-monitor/app/app-map
71 https://docs.microsoft.com/azure/azure-monitor/visualizations
72 https://docs.microsoft.com/azure/role-based-access-control/overview
Advantages
●● Rich visualizations for log data.
●● Export and import views to transfer them to other resource groups and subscriptions.
●● Integrates into the Log Analytics management model with workspaces and monitoring solutions.
●● Filters75 for custom parameters.
●● Interactive, supporting multi-level drill-in (a view that drills into another view).
Limitations
●● Supports logs but not metrics.
●● No personal views. Available to all users with access to the workspace.
●● No automatic refresh.
●● Limited layout options.
●● No support for querying across multiple workspaces or Application Insights applications.
●● Queries are limited in response size to 8 MB and query execution time of 110 seconds.
73 https://docs.microsoft.com/azure/log-analytics/log-analytics-view-designer
74 https://docs.microsoft.com/azure/azure-monitor/insights/solutions
75 https://docs.microsoft.com/azure/azure-monitor/platform/view-designer-filters
Advantages
●● Supports both metrics and logs.
●● Supports parameters, enabling interactive reports; selecting an element in a table dynamically updates associated charts and visualizations.
●● Document-like flow.
●● Option for personal or shared workbooks.
●● Easy, collaborative-friendly authoring experience.
●● Templates support the public GitHub-based template gallery.
Limitations
●● No automatic refresh.
●● No dense layout like dashboards, which makes workbooks less useful as a single pane of glass. They're intended more for providing deeper insights.
Explore Power BI
Power BI77 is beneficial for creating business-centric dashboards and reports analyzing long-term KPI
trends.
76 https://docs.microsoft.com/azure/application-insights/app-insights-usage-workbooks
77 https://powerbi.microsoft.com/documentation/powerbi-service-get-started/
You can import the results of a log query78 into a Power BI dataset so you can take advantage of its
features, such as combining data from different sources and sharing reports on the web and mobile
devices.
Advantages
●● Rich visualizations.
●● Extensive interactivity, including zoom-in and cross-filtering.
●● Easy to share throughout your organization.
●● Integration with other data from multiple data sources.
●● Better performance with results cached in a cube.
Limitations
●● Supports logs but not metrics.
●● No Azure Resource Manager integration. Can't manage dashboards and models through Azure Resource Manager.
●● Query results need to be imported into the Power BI model to configure, with limitations on result size and refresh.
●● Limited data refresh of eight times per day for Pro licenses (currently 48 for Premium).
78 https://docs.microsoft.com/azure/log-analytics/log-analytics-powerbi
Advantages
●● Complete flexibility in UI, visualization, interactivity, and features.
●● Combine metrics and log data with other data sources.
Disadvantages
●● Significant engineering effort is required.
Summary
This module explained steps to develop monitoring with Azure Dashboards, work with View Designer in Azure Monitor, and create Azure Monitor Workbooks. It also explored tools to support monitoring with Power BI.
You learned how to describe the benefits and usage of:
●● Configure Azure Dashboards.
●● Work with View Designer in Azure Monitor.
●● Create Azure Monitor Workbooks.
●● Monitor with Power BI.
Learn more
●● Create a dashboard in the Azure portal - Azure portal | Microsoft Docs79.
●● Create views to analyze log data in Azure Monitor - Azure Monitor | Microsoft Docs80.
●● Azure Monitor view designer to workbooks transition guide - Azure Monitor | Microsoft Docs81.
●● Log Analytics integration with Power BI and Excel - Azure Monitor | Microsoft Docs82.
79 https://docs.microsoft.com/azure/azure-portal/azure-portal-dashboards
80 https://docs.microsoft.com/azure/azure-monitor/visualize/view-designer
81 https://docs.microsoft.com/azure/azure-monitor/visualize/view-designer-conversion-overview
82 https://docs.microsoft.com/azure/azure-monitor/logs/log-powerbi
Learning objectives
After completing this module, students and professionals can:
●● Share knowledge with development teams.
●● Work with Azure DevOps Wikis.
●● Configure IT Service Management Connector (ITSMC).
●● Integrate with Azure Boards.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
To reflect
Azure DevOps can be used with a wide range of existing tooling used to share knowledge.
●● Which knowledge-sharing tools do you currently use (if any)?
●● What do you or don't you like about the tools?
Use the wiki to share information with your team so they can understand and contribute to your project.
Wikis are stored in a repository. No wiki is automatically provisioned.
Prerequisites
You must have permission to Create a Repository to publish code as a wiki. While the Project Adminis-
trators group has this permission by default, it can be assigned to others.
To add or edit wiki pages, you should be a member of the Contributors group.
All members of the team project (including stakeholders) can view the wiki.
Creation
The following article includes details on creating a wiki: Create a Wiki for your project83.
Markdown
Azure DevOps Wikis are written in Markdown and can also include file attachments and videos.
Markdown is a lightweight markup language: plain text that includes formatting syntax. It has become the de facto standard for how project and software documentation is now written.
One key reason for this is that because it's made up of plain text, it's much easier to merge in the same
way that program code is merged.
It allows documents to be managed with the same tools used to create other code in a project.
83 https://docs.microsoft.com/azure/devops/project/wiki/wiki-create-repo
84 https://docs.microsoft.com/azure/devops/project/wiki/publish-repo-to-wiki
Mermaid
Mermaid has become an essential extension to Markdown because it allows diagrams to be included in
the documentation.
It overcomes the previous difficulties in merging documentation that includes diagrams represented as
binary files.
In the Application Insights resource, find the Work Items option. Click on it, and the configuration blade for work items will open.
85 https://mermaid-js.github.io/mermaid/
All you need to do is fill out the information about the Azure DevOps system to which you want to
connect, along with the project where you want to write your work items:
Once that information is in place, you can click on the Authorization button, where you'll be redirected to
authorize access in your selected Azure DevOps system so that work items can be written there:
Once you've completed the authorization process, you can set defaults for “area path” and "assigned to."
Only the area path is required. (If you haven't set up specific area paths in your project, that's OK; just use the name of the project, as it's the top-level area path.)
Click OK, and assuming you have entered everything correctly, you'll see a message stating "Validation Successful," and the blade will close. You're now ready to start creating work items!
We can see that I have several exceptions that fired when the user clicked on the Home/About tab on this
web app. If I drill into this group of exceptions, I can see the list and then choose an individual exception:
Looking at the detailed blade for this exception, we see that there are now two buttons available at the
top of the blade that read “New Work Item” and "View Work Items."
To create a work item, I click on the first of these buttons, and it opens the new work item blade:
As you can see, just about everything you need in your average scenario has been filled out for you.
The default values for “area path” and "assigned to" that you chose in the initial configuration are set, and
all the detailed information we have available for this exception has been added to the details field.
You can override the title, area path, and assigned to fields in this blade if you wish, or you can add to the
captured details.
When you're ready to create your work item, click on the “OK” button, and your work item will be written
to Azure DevOps.
If you click the link for the work item that you want to view, it will open in Azure DevOps:
Advanced Configuration
Some of you may have noticed that there's a switch on the configuration blade that is labeled “Advanced
Configuration.”
We have provided another functionality to help you configure your ability to write to Azure DevOps in
scenarios where you've changed or extended some of the out-of-the-box settings.
An excellent example is needing more required fields. There's no way to handle this extra required-field mapping in the standard config, but you can handle it in advanced mode.
If you click on the switch, the controls at the bottom of the blade will change to look like this:
You can see that you're now given a JSON-based editing box where you can specify all the settings/
mappings that you might need to handle modifications to your Azure DevOps project.
Next steps
We think that it's an excellent start to integrating work item functionality with Application Insights.
But please keep in mind that it's essentially the 1.0 version of this feature set.
We have much work planned, and you'll see a significant evolution in this space over the upcoming months.
Just for starters, let me outline a few of the things that we already have planned or are investigating:
●● Support for all work item types – You probably noticed that the current feature set locks the work item
type to just “bug.” Logging bugs was our primary ask for this space, so that is where we started, but
we certainly don't think that is where things should end. One of the more near-term changes you'll
see is handling all work item types for all supported processes in Azure DevOps.
●● Links back to Application Insights – It's great to create a work item with App Insights data in it, but
what happens when you are in your ALM/DevOps system and looking at that item and want to quickly
navigate back to the source of the work item in App Insights? We plan to rapidly add links to the work
items to make this as fast and easy as possible.
●● More flexible configuration – Currently, our standard configuration only handles scenarios where users
haven't modified/extended their project in Azure DevOps. Today, if you have made these kinds of
changes, you'll need to switch to advanced configuration mode. In the future, we want to handle
everyday things that people might change (for example, making more fields require, adding new
fields) in the standard configuration wherever possible. It requires some updates from our friends on
the Azure DevOps team, but they're already working on some of these for us. Once they're available,
we'll begin to make the standard configuration more flexible. In the meantime (and in the future), you
can always use the advanced configuration to overcome limitations.
●● Multiple profiles – Setting up a single configuration means that in shops where there are several ways
users commonly create work items, the people creating work items from Application Insights would
have to override values frequently. We plan to give users the capability to set up 1:n profiles, with
common values specified for each so that when you want to create a work item with that profile, you
can choose it from a drop-down list.
●● More sources of creation for work items – We'll continue to investigate (and take feedback on) other
places in Application Insights where it makes sense to create work items.
●● Automatic creation of work items – There are certainly scenarios we can imagine where we might want
a work item to be created for us based upon criteria. It is on the radar, but we're spending some
design time to limit the possibilities of super-noisy or runaway work item creation. We believe that
this is a powerful and convenient feature, but we want to reduce the potential for spamming the ALM/
DevOps system as much as possible.
●● Support for other ALM/DevOps systems – Hey, we think that Azure DevOps is an excellent product, but
we recognize that many of our users may use some other product for their ALM/DevOps, and we want
to meet people where they are. So, we're working on other first-tier integrations of popular ALM/
DevOps products. We also plan to provide a pure custom configuration choice (like advanced config
for Azure DevOps) so that end users will hook up Application Insights to virtually any ALM/DevOps
system.
Summary
This module described how to share knowledge within teams, Azure DevOps Wikis, and the IT Service Management Connector (ITSMC). It also covered integration with Azure Boards.
You learned how to describe the benefits and usage of:
●● Share knowledge with development teams.
●● Work with Azure DevOps Wikis.
●● Configure IT Service Management Connector (ITSMC).
●● Integrate with Azure Boards.
Learn more
●● Create a project wiki to share information - Azure DevOps | Microsoft Docs86.
●● IT Service Management Connector overview - Azure Monitor | Microsoft Docs87.
86 https://docs.microsoft.com/azure/devops/project/wiki/wiki-create-repo
87 https://docs.microsoft.com/azure/azure-monitor/alerts/itsmc-overview
Learning objectives
After completing this module, students and professionals can:
●● Automate application analytics.
●● Assist DevOps with rapid responses and augmented search.
●● Integrate telemetry.
●● Implement monitoring tools and technologies.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
It's crucial to understand that the log files of applications are far less predictable than the log files of
infrastructure elements. The errors are essentially messages and error numbers that developers have introduced into the code in an inconsistent manner.
So, search queries yield thousands of results in most cases and do not include important ones, even when
the user is skilled. That leaves the user with the same “needle in the haystack” situation.
Integrate telemetry
A key factor to automating feedback is telemetry. By inserting telemetric data into your production
application and environment, the DevOps team can automate feedback mechanisms while monitoring
applications in real time.
DevOps teams use telemetry to see and solve problems as they occur, but this data can be helpful to
both technical and business users.
When properly instrumented, telemetry can also be used to see and understand how customers are
engaging with the application in real time.
It could be critical information for product managers, marketing teams, and customer support. So,
feedback mechanisms must share continuous intelligence with all stakeholders.
Benefits of telemetry
The primary benefit of telemetry is the ability of an end user to monitor the state of an object or environ-
ment while physically far removed from it.
Once you've shipped a product, you can't be physically present, peering over the shoulders of thousands
(or millions) of users as they engage with your product to find out what works, what is easy, and what is
cumbersome.
Thanks to telemetry, those insights can be delivered directly into a dashboard for you to analyze and act.
Because telemetry provides insights into how well your product is working for your end users – as they
use it – it's a unique tool for ongoing performance monitoring and management.
Plus, you can use the data you've gathered from version 1.0 to drive improvements and prioritize updates for your release of version 2.0.
Challenges of telemetry
Telemetry is a fantastic technology, but it isn't without its challenges.
The most prominent challenge, and a commonly occurring issue, isn't with telemetry itself but with your end users and their willingness to allow what some see as Big Brother-esque spying.
In short, some users immediately turn it off when they notice it, meaning any data generated from their
use of your product won't be gathered or reported.
That means the experience of those users won't be accounted for when it comes to planning your future
roadmap, fixing bugs, or addressing other issues in your app.
Although it isn't necessarily a problem by itself, the issue is that users who tend to disallow these types of
technologies can tend to fall into the more tech-savvy portion of your user base.
It can result in the dumbing-down of software. On the other hand, other users take no notice of telemetry
happening behind the scenes or ignore it if they do.
It's a problem without a clear solution—and it doesn't negate the overall power of telemetry for driving
development—but one to keep in mind as you analyze your data.
So, when designing a strategy for how you consider the feedback from application telemetry, it's neces-
sary to account for users who don't participate in providing the telemetry.
Continuously monitoring applications and advising the development teams accordingly gives them advanced knowledge and experience to better prepare and tune the production environment, resulting in far more stable releases into production.
Applications are more business-critical than ever. They must always be up, always fast, and constantly
improving. Embracing a DevOps approach will allow you to reduce your cycle times to hours instead of
months, but you must keep ensuring a great user experience!
Continuous monitoring of your entire DevOps life cycle will ensure development and operations teams
collaborate to optimize the user experience every step of the way, leaving more time for your next
significant innovation.
When shortlisting a monitoring tool, you should seek the following advanced features:
●● Synthetic Monitoring: Developers, testers, and operations staff all need to ensure that their internet
and intranet-mobile applications and web applications are tested and operate successfully from
different points of presence worldwide.
●● Alert Management: Developers, testers, and operations staff all need to send notifications via email,
voice mail, text, mobile push notifications, and Slack messages when specific situations or events
occur in development, testing, or production environments, to get the right people’s attention and to
manage their response.
●● Deployment Automation: Developers, testers, and operations staff use different tools to schedule
and deploy complex applications and configure them in development, testing, and production
environments. We'll discuss the best practices for these teams to collaborate effectively and efficiently
and avoid potential duplication and erroneous information.
●● Analytics: Developers need to look for patterns in log messages to identify if there's a problem in the
code. Operations need to do root cause analysis across multiple log files to identify the source of the
problem in complex applications and systems.
88 https://docs.microsoft.com/azure/azure-monitor/platform/itsmc-overview
Summary
This module helped you design processes around Application Insights and explored telemetry and monitoring tools and technologies.
You learned how to describe the benefits and usage of:
●● Automate application analytics.
●● Assist DevOps with rapid responses and augmented search.
●● Integrate telemetry.
●● Implement monitoring tools and technologies.
Learn more
●● What is Azure Application Insights? - Azure Monitor | Microsoft Docs89.
●● Continuous monitoring of your DevOps release pipeline with Azure Pipelines and Azure Appli-
cation Insights - Azure Monitor | Microsoft Docs90.
●● Azure Monitor Application Insights Documentation - Azure Monitor | Microsoft Docs91.
89 https://docs.microsoft.com/azure/azure-monitor/app/app-insights-overview
90 https://docs.microsoft.com/azure/azure-monitor/app/continuous-monitoring
91 https://docs.microsoft.com/azure/azure-monitor/azure-monitor-app-hub
Learning objectives
After completing this module, students and professionals can:
●● Carry out blameless retrospectives and create a just culture.
●● Improve application performance.
●● Explain server response-time degradation.
●● Reduce meaningless and non-actionable alerts.
Prerequisites
●● Understanding of what DevOps is and its concepts.
●● Familiarity with version control principles is helpful but isn't necessary.
●● Beneficial to have experience in an organization that delivers software.
92 https://docs.microsoft.com/azure/application-insights/app-insights-overview
93 https://docs.microsoft.com/azure/azure-monitor/app/asp-net
94 https://docs.microsoft.com/azure/application-insights/app-insights-java-get-started
95 https://docs.microsoft.com/azure/application-insights/app-insights-javascript
●● Slow performance pattern - Your app appears to have a performance issue that is affecting only some
requests. For example, pages are loading more slowly on one type of browser than others; or requests
are being served more slowly from one server. Currently, our algorithms look at page load times,
request response times, and dependency response times.
Smart Detection requires at least eight days of telemetry at a workable volume to establish a normal
performance baseline. So, after your application has been running for that period, any significant issue
will result in a notification.
●● Triage. The notification shows you how many users or how many operations are affected. It can help
you assign a priority to the problem.
●● Scope. Is the problem affecting all traffic, or just some pages? Is it restricted to particular browsers or
locations? This information can be obtained from the notification.
●● Diagnose. Often, the diagnostic information in the notification will suggest the nature of the problem.
For example, if response time slows down when the request rate is high, that means your server or de-
pendencies are overloaded. Otherwise, open the Performance blade in Application Insights. There,
you'll find Profiler96 data. If exceptions are thrown, you can also try the snapshot debugger97.
96 https://docs.microsoft.com/azure/application-insights/app-insights-profiler
97 https://docs.microsoft.com/azure/application-insights/app-insights-snapshot-debugger
98 https://docs.microsoft.com/azure/application-insights/app-insights-resources-roles-access-control
To change it, click Configure in the email notification, or open Smart Detection settings in Application
Insights.
●● You can use the unsubscribe link in the smart detection email to stop receiving the email notifications.
Emails about smart detection performance anomalies are limited to one email per day per Application
Insights resource.
The email would be sent only if at least one new issue was detected on that day. You won't get repeats of
any message.
Improve performance
Slow and failed responses are one of the biggest frustrations for website users, as you know from your
own experience. So, it's essential to address the issues.
Triage
●● First, does it matter? If a page is always slow to load, but only 1% of your site's users ever have to look
at it, maybe you have more important things to think about. On the other hand, if only 1% of users
open it, but it throws exceptions every time, that might be worth investigating. Use the impact
statement (affected users or % of traffic) as a general guide but be aware that it isn't the whole story.
Gather other evidence to confirm. Consider the parameters of the issue. If it's geography-dependent,
set up availability tests99 including that region: there might be network issues in that area.
●● If Send Request Time is high, the server is responding slowly, or the request is a POST with a large amount of data. Look at the performance metrics100 to investigate response times.
●● Set up dependency tracking101 to see whether the slowness is because of external services or
your database.
99 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
100 https://docs.microsoft.com/azure/azure-monitor/azure-monitor-app-hub
101 https://docs.microsoft.com/azure/azure-monitor/app/asp-net-dependencies
●● If Receiving Response is predominant, your page and its dependent parts - JavaScript, CSS, images, and so on (but not asynchronously loaded data) - are taking a long time to download. Set up an availability test102 and be sure to set the option to load dependent parts. When you get some results, open the detail of a result, and expand it to see the load times of different files.
●● High Client Processing time suggests scripts are running slowly. If the reason isn't clear, consider
adding some timing code and sending the times in track metrics calls.
●● Slow loading because of large files: Load the scripts and other parts asynchronously. Use script
bundling. Break the main page into widgets that load their data separately. Don't send plain old
HTML for long tables: use a script to request the data as JSON or another compact format, then fill
the table in place. There are remarkable frameworks to help with all of it. (They also involve large scripts.)
●● Slow server dependencies: Consider the geographical locations of your components. For example,
if you use Azure, ensure the web server and the database are in the same region. Do queries
retrieve more information than they need? Would caching or batching help?
●● Capacity issues: Look at the server metrics of response times and request counts (a query sketch follows this list). If response times peak disproportionately with peaks in request counts, your servers are likely stretched.
●● Profiler traces to help you view where operation time is spent (the link is available if Profiler
trace examples were collected for this operation during the detection period).
●● Performance reports in Metric Explorer, where you can slice and dice time range/filters for this
operation.
●● Search for this call to view specific call properties.
●● Failure reports - If count > 1 it means that there were failures in this operation that might have
contributed to performance degradation.
102 https://docs.microsoft.com/azure/application-insights/app-insights-monitor-web-app-availability
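For the capacity check above, a sketch of an Application Insights log query that plots average server response time against request volume, assuming request telemetry is being collected, is:
// Compare server response times with request volume over the last 24 hours
requests
| where timestamp > ago(24h)
| summarize avgDuration = avg(duration), requestCount = count() by bin(timestamp, 5m)
| render timechart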
What does it mean to have a ‘blameless’ retrospective? Does it mean everyone gets off the hook for
making mistakes? No.
Well, maybe. It depends on what “gets off the hook” means. Let me explain.
Having a Just Culture means that you're making an effort to balance safety and accountability. It means that by investigating mistakes in a way that focuses on the situational aspects of a failure's mechanism and on the decision-making process of individuals proximate to the failure, an organization can come out safer than it would usually be if it had simply punished the actors involved as remediation.
Having a “blameless” retrospective process means that engineers whose actions have contributed to an
accident can give a detailed account of:
●● What actions they took and at what time.
●● What effects they observed.
●● What expectations they had.
●● What assumptions they had made.
●● Their understanding of the timeline of events as they occurred.
AND that they can give this detailed account without fear of punishment or retribution.
Why should they not be punished or reprimanded? Because an engineer who thinks they'll be blamed is disincentivized to give the details necessary to understand the failure's mechanism, pathology, and operation.
This lack of understanding of how the accident occurred all but guarantees that it will repeat. If not with the original engineer, then with another one in the future.
If we go with “blame” as the predominant approach, we implicitly accept that deterrence is how organiza-
tions become safer.
This is founded on the belief that individuals, not situations, cause errors.
It's also aligned with the idea that there must be some fear that not doing one's job correctly could lead to punishment, because the fear of punishment will motivate people to act correctly in the future. Right?
This cycle of name/blame/shame can be looked at like this:
●● Engineer acts and contributes to a failure or incident.
●● Engineer is punished, shamed, blamed, or retrained.
●● Reduced trust between engineers on the ground (the “sharp-end”) and management (the "blunt end")
looking for someone to scapegoat.
●● Engineers become silent on details about actions/situations/observations, resulting in “Cov-
er-Your-Mistake” engineering (from fear of punishment)
●● Management becomes less aware of and informed about how work is being performed daily, and engineers become less educated on lurking or latent conditions for failure because of the silence mentioned in the previous point.
●● Errors are more likely, and latent conditions can't be identified, because of the previous points.
●● Repeat the first step.
We need to avoid this cycle. We want the engineer who has made an error to give details about why
(either explicitly or implicitly) they did what they did; why the action made sense to them at the time.
It's paramount to understand the pathology of the failure. The action made sense to the person when they took it, because if it hadn't made sense, they wouldn't have taken the action in the first place.
The fundamental basis here is something Erik Hollnagel103 has said:
We must strive to understand that accidents don't happen because people gamble and lose.
Accidents happen because the person believes that:
…what is about to happen isn't possible,
…or what is about to happen has no connection to what they're doing,
…or that the possibility of getting the intended outcome is worth whatever risk there is.
103 http://www.erikhollnagel.com/
104 http://en.wikipedia.org/wiki/Hindsight
105 http://en.wikipedia.org/wiki/Fundamental_attribution_error
One option is to assume the single cause is incompetence and scream at engineers to make them “pay
attention!” or "be more careful!"
Another option is to take a hard look at how the accident happened, treat the engineers involved with
respect, and learn from the event.
For more information, see also:
●● Brian Harry's Blog - A good incident postmortem106
Summary
This module examined alerts, blameless retrospectives, and creating a just culture. It covered improving application performance, reducing meaningless and non-actionable alerts, and explaining server response-time degradation.
You learned how to describe the benefits and usage of:
●● Carry out blameless retrospectives and create a just culture.
●● Improve application performance.
●● Explain server response-time degradation.
●● Reduce meaningless and non-actionable alerts.
Learn more
●● Smart detection - performance anomalies - Azure Monitor | Microsoft Docs107.
●● Overview of alerting and notification monitoring in Azure - Azure Monitor | Microsoft Docs108.
●● Create, view, and manage Metric Alerts Using Azure Monitor - Azure Monitor | Microsoft
Docs109.
106 https://blogs.msdn.microsoft.com/bharry/2018/03/02/a-good-incident-postmortem/
107 https://docs.microsoft.com/azure/azure-monitor/app/proactive-performance-diagnostics
108 https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-overview
109 https://docs.microsoft.com/azure/azure-monitor/alerts/alerts-metric
Labs
Lab 18: Integration between Azure DevOps and
Microsoft Teams
Lab overview
Microsoft Teams110 is a hub for teamwork in Office 365. It allows you to manage and use all your team's
chats, meetings, files, and apps together in one place. It provides software development teams with a hub
for teams, conversations, content and tools from across Office 365 and Azure DevOps.
In this lab, you will implement integration scenarios between Azure DevOps services and Microsoft
Teams.
Note: Azure DevOps Services integration with Microsoft Teams provides a comprehensive chat and
collaborative experience across the development cycle. Teams can easily stay informed of important
activities in your Azure DevOps team projects with notifications and alerts on work items, pull requests,
code commits, as well as build and release events.
Objectives
After you complete this lab, you will be able to:
●● Integrate Microsoft Teams with Azure DevOps
●● Integrate Azure DevOps Kanban boards and Dashboards in Teams
●● Integrate Azure Pipelines with Microsoft Teams
●● Install the Azure Pipelines app in Microsoft Teams
●● Subscribe for Azure Pipelines notifications
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions111
110 https://teams.microsoft.com/start
111 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Objectives
After you complete this lab, you will be able to:
●● Deploy Azure App Service web apps
●● Generate and monitor Azure web app application traffic by using Application Insights
●● Investigate Azure web app performance by using Application Insights
●● Track Azure web app usage by using Application Insights
●● Create Azure web app alerts by using Application Insights
Lab duration
●● Estimated time: 60 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions112
112 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Objectives
After you complete this lab, you will be able to:
●● Create a wiki in an Azure Project
●● Add and edit markdown
●● Create a Mermaid diagram
Lab duration
●● Estimated time: 45 minutes
Lab updates
The labs are updated on a regular basis. For the latest information please visit:
●● AZ400-DesigningAndImplementingMicrosoftDevOpsSolutions113
113 https://microsoftlearning.github.io/AZ400-DesigningandImplementingMicrosoftDevOpsSolutions/
Choose the best response for each question. Then select Check your answers.
Multiple choice
Which of the following choices does Azure Monitor let you create custom dashboards based on?
Alerts, Action Groups.
Metrics, Logs.
Action Groups, Workflows.
Multiple choice
Which of the following query languages can you use to query Azure Log Analytics?
Kusto.
T-SQL.
Xpath.
Multiple choice
Which of the following choices is the unique key configured by Application Insights used by the Application
Insights SDK?
ApplicationInsights Key.
AppInsightsSDK Key.
AppInsights Key.
Multiple choice
Which of the following choices is a platform integration Azure Monitor provides to visualize your logs in
real-time?
Azure Dashboards.
Power BI.
Excel.
Multiple choice
Which of the following choices is an Azure Dashboards limitation?
Log charts can only be pinned to shared dashboards.
Full-screen mode.
Parametrized metrics dashboards with timestamp and custom parameters.
Multiple choice
Which of the following choices is a feature in Azure Monitor that allows you to create custom visualizations
with log data?
Workbooks.
View Designer.
Data Export.
Multiple choice
Which of the following choices is the minimum permission or group to add or edit Wiki pages?
Project Administrator.
Contributors.
Stakeholders.
Multiple choice
Which of the following choices is a functionality that allows you to easily create work items in Azure
DevOps that have relevant Application Insights data embedded?
Work item integration.
Azure DevOps Wiki.
App Stream.
Multiple choice
Which of the following commands can you perform with the Azure Pipelines with Microsoft Teams integra-
tion?
Subscribe.
Create.
Unlink.
Multiple choice
Which of the following choices isn't automatically identified by the Augmented Search algorithm analysis?
Waiver factors.
Errors.
Risk factors.
Multiple choice
Which of the following choices is a key factor to automating feedback?
Telemetry.
Work Item creation.
Alerts.
Multiple choice
Which of the following choices isn't an advanced feature you should seek when shortlisting a monitoring
tool?
Alert Management.
Test Management.
Analytics.
Multiple choice
Which of the following choices is the feature that Server Response Time Degradation notification is part of?
Azure Analytics.
Smart Alerts.
Smart Detection.
Multiple choice
Which of the following choices isn't a way Application Insights detects that the performance of your application has degraded?
Slow performance pattern.
Response time degradation.
Application dependency management pattern.
Multiple choice
Which of the following choices doesn't the response time degradation notification tell you?
How many users are affected.
Count of this operation requests 60 days before.
Correlation between degradation in this operation and degradations in related dependencies.
Answers
Multiple choice
Which of the following choices does Azure Monitor let you create custom dashboards based on?
Alerts, Action Groups.
■■ Metrics, Logs.
Action Groups, Workflows.
Explanation
Azure Monitor lets you create custom dashboards based on Metrics and Logs.
Multiple choice
Which of the following query languages can you use to query Azure Log Analytics?
■■ Kusto.
T-SQL.
Xpath.
Explanation
Kusto is the primary way to query Log Analytics. It provides both a query language and a set of control
commands.
Multiple choice
Which of the following choices is the unique key configured by Application Insights used by the Applica-
tion Insights SDK?
ApplicationInsights Key.
AppInsightsSDK Key.
■■ AppInsights Key.
Explanation
Application Insights configures a unique key called AppInsights Key in your application. The Application
Insights SDK uses this key to identify the Azure App Insights workspace the telemetry data needs to be
uploaded.
Multiple choice
Which of the following choices is a platform integration Azure Monitor provides to visualize your logs in
real-time?
Azure Dashboards.
■■ Power BI.
Excel.
Explanation
Azure Monitor integrates with Power BI to provide logs visualization in real-time.
Multiple choice
Which of the following choices is an Azure Dashboards limitation?
■■ Log charts can only be pinned to shared dashboards.
Full-screen mode.
Parametrized metrics dashboards with timestamp and custom parameters.
Explanation
The limitation is Log charts can only be pinned to shared dashboards. Full-screen mode and Parametrized
metrics dashboards with the timestamp and custom parameters are Advantages.
Multiple choice
Which of the following choices is a feature in Azure Monitor that allows you to create custom visualiza-
tions with log data?
Workbooks.
■■ View Designer.
Data Export.
Explanation
View Designer in Azure Monitor allows you to create custom visualizations with log data. They are used by
monitoring solutions to present the data they collect.
Multiple choice
Which of the following choices is the minimum permission or group to add or edit Wiki pages?
Project Administrator.
■■ Contributors.
Stakeholders.
Explanation
To add or edit wiki pages, you should be a member of the Contributors group.
Multiple choice
Which of the following choices is a functionality that allows you to easily create work items in Azure
DevOps that have relevant Application Insights data embedded?
■■ Work item integration.
Azure DevOps Wiki.
App Stream.
Explanation
Work item integration functionality allows you to easily create work items in Azure DevOps that have
relevant Application Insights data embedded in them.
Multiple choice
Which of the following commands can you perform with the Azure Pipelines with Microsoft Teams
integration?
■■ Subscribe.
Create.
Unlink.
Explanation
You can run the @azure pipelines subscribe [pipeline url] command to subscribe to an Azure Pipeline.
Create and Unlink only works with the Azure Boards integration.
Multiple choice
Which of the following choices isn't automatically identified by the Augmented Search algorithm analy-
sis?
■■ Waiver factors.
Errors.
Risk factors.
Explanation
The analysis algorithm automatically identifies errors, risk factors, and problem indicators.
Multiple choice
Which of the following choices is a key factor to automating feedback?
■■ Telemetry.
Work Item creation.
Alerts.
Explanation
A key factor to automating feedback is telemetry. By inserting telemetric data into your production applica-
tion and environment, the DevOps team can automate feedback mechanisms while monitoring applications
in real-time.
Multiple choice
Which of the following choices isn't an advanced feature you should seek when shortlisting a monitoring
tool?
Alert Management.
■■ Test Management.
Analytics.
Explanation
When shortlisting a monitoring tool, you should seek the following advanced features: Synthetic Monitoring,
Alert Management, Deployment Automation, Analytics.
Multiple choice
Which of the following choices is the feature that Server Response Time Degradation notification is part
of?
Azure Analytics.
Smart Alerts.
■■ Smart Detection.
Explanation
Server Response Time Degradation notification is part of Smart Detection Notification.
Multiple choice
Which of the following choices isn't a way Application Insights detects that the performance of your application has degraded?
Slow performance pattern.
Response time degradation.
■■ Application dependency management pattern.
Explanation
Application Insights has detected that the performance of your application has degraded in one of the ways:
Response time degradation, Dependency duration degradation, Slow performance pattern.
Multiple choice
Which of the following choices doesn't the response time degradation notification tell you?
How many users are affected.
■■ Count of this operation requests 60 days before.
Correlation between degradation in this operation and degradations in related dependencies.
Explanation
The response time degradation notification tells you: How many users are affected, the Correlation between
degradation in this operation and degradations in related dependencies, the Count of operation requests on
the day of the detection and seven days before, and some others.