The DSC Book (PowerShell.org)
by Don Jones
and Steve Murawski
with contributions by Stephen Owen
cover design by Nathan Vonnahme
Visit Penflip.com/powershellorg to check for newer editions of this
ebook.
This guide is released under the Creative Commons Attribution-NoDerivs 3.0 Unported License. The authors encourage you to
redistribute this file as widely as possible, but ask that you do not
modify the document.
Windows Management Framework (WMF) 4.0 ships with Windows 8.1
and Windows Server 2012 R2, and is available for
Windows 7, Windows Server 2008 R2, and Windows Server 2012.
Because Windows 8.1 is a free upgrade to Windows 8, WMF 4 is not
available for Windows 8.
You must have WMF 4.0 on a computer if you plan to author
configurations there. You must also have WMF 4.0 on any
computer you plan to manage via DSC. Every computer
involved in the entire DSC conversation must have WMF 4.0
installed. Period. Check $PSVersionTable in PowerShell if you're not
sure what version is installed on a computer.
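A quick way to check, from any PowerShell prompt:

```powershell
# DSC requires version 4.0 or later
$PSVersionTable.PSVersion
```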
To figure out what DSC is and does, it's useful to compare it to Group
Policy. There are significant differences between the two, but at a high
level they both set out to accomplish something similar. With Group
Policy, you create a declarative configuration file called a Group Policy
object (GPO). That file lists a bunch of configuration items that you
want to be in effect on one or more computers. You target the GPO by
linking it to domain sites, organizational units, and so on. Targeted
machines pull, or download, the entire GPO from domain controllers.
The machines use client-side code to read the GPO and implement
whatever it says to do. They periodically re-check for an updated GPO,
too.
DSC is similar but not exactly the same. For one, it has no
dependencies whatsoever on Active Directory Domain Services
(ADDS). It's also a lot more flexible and more easily extended. A
comparison is perhaps a good way to get a feel for what DSC is all
about:
| Feature | Group Policy | DSC |
| --- | --- | --- |
| Configuration specification | Group Policy object (GPO) | Configuration script, compiled to a MOF file |
| Targeting machines | Link GPOs to sites, domains, and OUs | Node names listed in the configuration |
| Configuration implemented by | Client-side code | DSC resources |
| Extend the things that can be configured | Difficult | Write (or download) additional DSC resources |
| Primary configuration target | Mainly the registry | Anything a DSC resource can reach |
| Persistence | Periodically re-checked and reapplied | Periodically re-checked; reapplication is configurable |
| Number of configurations | Multiple GPOs per machine | One MOF per node |
With DSC, you start by writing a configuration script in Windows
PowerShell. This script doesn't do anything. That is, it doesn't install
stuff, configure stuff, or anything else. It simply lists the things you
want configured, and how you want them configured. The configuration
also specifies the machines that it applies to. When you run the
configuration, PowerShell produces a Managed Object Format (MOF)
file for each targeted machine, or node.
That's an important thing to call out: You (step 1) write a configuration
script in PowerShell. Then you (step 2) run that script, and the result is
one or more MOF files. If your configuration is written to target multiple
nodes, you'll get a MOF file for each one. MOF stands for Managed
Object Format, and it's basically a specially formatted text file. Then,
(step 3), the MOF files are somehow conveyed to the machines they're
meant for, and (step 4) those machines start configuring themselves to
match what the MOF says.
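As a minimal sketch of steps 1 and 2 (the configuration name, node name, and feature here are ours, purely for illustration):

```powershell
# Step 1: author the configuration
Configuration ExampleConfig {
    Node 'SERVER1' {
        WindowsFeature Backup {
            Ensure = 'Present'
            Name   = 'Windows-Server-Backup'
        }
    }
}

# Step 2: run it; PowerShell emits .\ExampleConfig\SERVER1.mof
ExampleConfig
```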
In terms of conveying the MOF files to their target machines, there are
two ways to do so: push mode is a more-or-less manual file copy,
performed over PowerShell's Windows Remote Management (WinRM)
remoting protocol; pull mode configures nodes to check in with a
special web server and grab their MOF files. Pull mode is a lot like the
way Group Policy works, except that it doesn't use a domain controller.
Pull mode can also be configured to pull MOF files from a file server by
using Server Message Block (SMB; Windows' native file-sharing
protocol).
Once a node has its MOF file (and it's only allowed to have one; that's
another difference from Group Policy, where you can target several
GPOs to a single machine), it starts reading through the configuration.
Each section in the configuration uses a DSC resource to actually
implement the configuration. For example, if the configuration includes
some kind of registry specification, then the registry DSC resource is
called upon to actually check the registry and make the change if
necessary.
You do have to deploy those DSC resources to your target nodes. In
push mode, that's a manual task. In pull mode, nodes can realize that
they're missing a resource needed by their configuration, and grab the
necessary resource from the pull server (if you've put it there). For that
reason, pull mode is the most flexible, centralized, and convenient way
to go if you're managing a bunch of machines. Pull mode is something
you can set up on any Windows Server 2012 R2 computer, and it
doesn't even need to belong to a domain. If you're using the usual web
server style of pull server (as opposed to SMB), you can configure
either HTTP or HTTPS at your leisure (HTTPS merely requires an SSL
certificate on the server).
In this guide, we're going to go through pretty much every aspect of
DSC. The things we configure will be simple, so that we're not
distracting from the discussion of DSC itself. This guide will evolve
over time; if you notice blank sections, it's because those haven't yet
been written. Errors, requests for more information, and so on should
be reported in the PowerShell Q&A forum at PowerShell.org.
Why MOF?
You'll notice that DSC has a heavy dependency on MOF files, and
there's a good reason for it.
The Managed Object Format (MOF) was defined by the Distributed
Management Task Force (DMTF), a vendor-neutral industry organization
that Microsoft belongs to. The purpose of the DMTF is to supervise
standards that help enable cross-platform management. In other
words, MOF is a cross-platform standard. That has a couple of
important implications.
The point of this is that you can swap out the top layer for anything
capable of producing the right MOF - and that the MOF format is
vendor-neutral and not Microsoft-proprietary. Since anyone can provide
elements of the bottom layer (and Microsoft will provide a lot), any
management tool can leverage DSC. So if you've got a cross-platform
management tool, and it can produce the right MOFs, then you don't
necessarily need that tool's agent software installed on your
computers. Instead, the tool can produce MOFs that tell the Windows
DSC components what to do.
Wave 1:
http://blogs.msdn.com/b/powershell/archive/2013/12/26/holiday-gift-desired-state-configuration-dsc-resource-kit-wave-1.aspx
Wave 2:
http://blogs.msdn.com/b/powershell/archive/2014/02/07/need-more-dsc-resources-announcing-dsc-resource-kit-wave-2.aspx
Microsoft-Provided Resources
Microsoft provides built-in resources in WMF 4, including:
Registry
File
WindowsFeature
Environment
Service
WindowsProcess
The DSC Resource Kit adds experimental resources (prefixed with x) such as:
xVMSwitch
xIPAddress
xADDomain
xADDomainController
xADUser
xSqlServerInstall
xSqlHAEndpoint
xSqlHAGroup
xWaitForSqlHAGroup
xCluster
xWaitForCluster
xSmbShare
xFirewall
xWebsite
xVhd
There isn't really a single download for the DSC Resource Kit; as of this
writing, each resource is a separate download from the TechNet Script
Gallery.
The entire module would go into that folder. The root module file will
be named something like CorpApp.psd1, and there will be a
DSCResources sub-folder that contains additional script files for the
actual resources. There's a whole section in this guide about deploying
resources, so you'll find more detail there.
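Assuming the CorpApp example above, the on-disk layout would look roughly like this:

```
C:\Program Files\WindowsPowerShell\Modules\CorpApp\
    CorpApp.psd1                 <- root module manifest
    DSCResources\
        <one sub-folder per resource, each containing a
         .psm1 script module and a .schema.mof file>
```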
Note that you'll need to install resources on any computer where you
plan to author configurations using PowerShell, and on any nodes that
will use those resources in their configurations.
````
Configuration MonitoringSoftware {
    param(
        [string[]]$ComputerName = "localhost"
    )
    Node $ComputerName
    {
        File MonitoringInstallationFiles
        {
            Ensure          = "Present"
            SourcePath      = "\\dc01\Software\Monitoring"
            DestinationPath = "C:\Temp\Monitoring"
            Type            = "Directory"
            Recurse         = $true
        }
    }
}
MonitoringSoftware
````
That, when passed the right list of computer names, would produce a
MOF file for each computer in the Clients OU of the Domain.pri domain.
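The invocation that produces those MOFs was presumably something along these lines (the OU and domain come from the text; the exact command is our sketch, and it assumes the ActiveDirectory module is available):

```powershell
# Feed the names of every computer in the Clients OU to the configuration
MonitoringSoftware -ComputerName (
    Get-ADComputer -Filter * -SearchBase 'ou=Clients,dc=domain,dc=pri' |
    Select-Object -ExpandProperty Name
)
```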
Discovering Resources
The above example should bring up one, or maybe two, questions:
Things get a little tricky because you don't really import this into the
shell as a normal module. There's an Import-DSCResource keyword,
but it only works inside Configuration scripts - you can't use it right at
the shell prompt. So part of your "finding what resources I have"
process is going to necessarily involve browsing the file system a bit to
see what's installed.
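A couple of ways to browse, from the shell (the path assumes the default module location):

```powershell
# See which resource modules are installed on this machine
Get-ChildItem "$env:ProgramFiles\WindowsPowerShell\Modules" -Directory

# WMF 4 also includes a cmdlet that lists the DSC resources it can find
Get-DscResource
```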
Pushing Configurations
Conceptually, pushing configurations is the easiest to talk about, so
we'll start there. It also requires the least setup in most environments.
There's really only one prerequisite, which is that your target nodes
must have PowerShell remoting enabled. If they don't, you should read
through _Secrets of PowerShell Remoting_, another free ebook at
http://PowerShell.org/wp/newsletter. In a domain environment, running
Enable-PSRemoting on the target nodes is a quick and easy way to get
remoting up and running.
````
Node $ComputerName
{
    WindowsFeature DSCServiceFeature
    {
        Ensure = "Present"
        Name   = "DSC-Service"
    }
    xDscWebService PSDSCPullServer
    {
        Ensure                = "Present"
        EndpointName          = "PSDSCPullServer"
        Port                  = 8080
        PhysicalPath          = "$env:SystemDrive\inetpub\wwwroot\PSDSCPullServer"
        CertificateThumbPrint = "AllowUnencryptedTraffic"
        ModulePath            = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Modules"
        ConfigurationPath     = "$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration"
        State                 = "Started"
        DependsOn             = "[WindowsFeature]DSCServiceFeature"
    }
    xDscWebService PSDSCComplianceServer
    {
        Ensure                = "Present"
        EndpointName          = "PSDSCComplianceServer"
        Port                  = 9080
        PhysicalPath          = "$env:SystemDrive\inetpub\wwwroot\PSDSCComplianceServer"
        CertificateThumbPrint = "AllowUnencryptedTraffic"
        State                 = "Started"
        IsComplianceServer    = $true
        DependsOn             = ("[WindowsFeature]DSCServiceFeature",
                                 "[xDSCWebService]PSDSCPullServer")
    }
}
````
Notice that this file creates a pull server as well as a compliance server
(more on that toward the end of this guide). Each is basically just a
website under IIS, running on a different port. Both are configured to
use unencrypted HTTP, rather than an SSL certificate. We think that's
probably going to be a common configuration on private networks,
although HTTPS does provide authentication of the server, which is
nice to have.
The configuration includes three items, with each successive one
depending on the previous one. The first installs the DSC pull server
feature, the second sets up the pull server website, and the third sets
up the compliance server website. We've used paths that are more or
less the defaults.
We saved the script as C:\dsc\PullServerConfig.ps1. We then ran it,
resulting in the creation of C:\dsc\CreatePullServer. Notice that the new
subfolder name matches the name we gave our configuration in the
script. In that folder, we found Pull1.lab.pri.mof, the MOF file for server
PULL1. That's the name of the Windows Server 2012 R2 computer that
we plan to turn into a pull server.
Next, we ran Start-DscConfiguration .\CreatePullServer -Wait.
Notice that we gave it the path where the MOF files live; it then
enumerates the files in that path and starts pushing the MOF files to
their target nodes. Because we used -Wait, the command runs right
away and not in a job. That's often the best way to do it when you're
first starting, since any errors will be clearly displayed. If you don't use
-Wait, you'll get back a job object, and it can be a bit difficult to track
down any errors.
You shouldn't get any errors. If you did, triple-check that KB2883200 is
installed on the target node. You will likely get a warning if Windows
automatic updating isn't enabled - that's just reminding you to run
Windows Update to make sure the components you just installed are
completely up to date.
````
Configuration WindowsBackup {
    Node 'Member2.lab.pri' {
        WindowsFeature Backup {
            Ensure = 'Present'
            Name   = 'Windows-Server-Backup'
        }
    }
}
WindowsBackup
````
This is obviously not doing anything incredibly magical. It's just telling
a computer, MEMBER2, to make sure Windows Backup is installed. So
now we need to run it and create the appropriate MOF. We do that by
simply running the script. It'll create a folder named WindowsBackup
(since that's the name of the configuration), and in there will be a MOF
file for Member2.lab.pri, the node we're targeting.
Now we need to get that MOF to the pull server. Thing is, we also need
to rename the file, because the node name isn't what the pull server
wants in the filename. Instead, the pull server wants the MOF's
filename to be a globally unique identifier (GUID). If you look back at
the configuration we used to create the pull server, we said that it
should store configurations in
$env:PROGRAMFILES\WindowsPowerShell\DscService\Configuration. So
that's where the renamed MOF needs to go.
First, create the GUID:
$guid = [guid]::NewGuid()
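The copy-and-rename itself would look something like this (the destination folder comes from the pull server configuration above; reaching it through the c$ administrative share is our assumption):

```powershell
$guid = [guid]::NewGuid()
$source = ".\WindowsBackup\Member2.lab.pri.mof"

# The backtick stops PowerShell from treating the $ in c$ as a variable,
# while the double quotes still let $guid expand inside the string
$dest = "\\pull1\c`$\program files\windowspowershell\dscservice\configuration\$guid.mof"
Copy-Item -Path $source -Destination $dest
```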
Notice the cute trick with double quotes to insert the GUID into the
destination path, and the backtick that keeps PowerShell from treating
the $ in the administrative share name as the start of a variable name.
Also notice that our pull server is named Pull1.lab.pri.
With the MOF in place, we now need to generate a checksum file for it
on the pull server. Computers that attempt to pull the configuration
use the checksum to ensure the integrity of the file.
New-DSCChecksum $dest
When run, this will create a folder named SetPullMode, which will
contain Member2.lab.pri.meta.mof. The "meta" portion of the
filename indicates that this is configuring the target's Local
Configuration Manager (LCM), rather than configuring something
else on the target.
At the end of the script, we're actually running the command and
pushing the MOF file out to the target node.
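Put together, the SetPullMode script being described follows this general shape (the download manager settings match the LCM snippet shown later in this guide; the rest is our sketch, not the book's exact listing):

```powershell
Configuration SetPullMode {
    param([guid]$guid)
    Node 'Member2.lab.pri' {
        LocalConfigurationManager {
            ConfigurationMode   = 'ApplyOnly'
            ConfigurationID     = $guid
            RefreshMode         = 'Pull'
            DownloadManagerName = 'WebDownloadManager'
            DownloadManagerCustomData = @{
                ServerUrl = 'http://pull1.lab.pri:8080/PSDSCPullServer.svc';
                AllowUnsecureConnection = 'true' }
        }
    }
}

# Produce the meta.mof, then push it to the target's LCM
SetPullMode -guid $guid
Set-DscLocalConfigurationManager -Path .\SetPullMode `
    -ComputerName 'Member2.lab.pri' -Verbose
```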
Running this script produces the MOF, and then pushes it to the target
node. Very quickly, the target node will contact the pull server, grab its
configuration, and then start evaluating the configuration. Before we
run this script, then, let's quickly check the status of the Windows
Backup feature on Member2:
Now we may need to wait just a smidge. By default, this pull check
happens every half-hour. But eventually it kicks in and works,
installing Windows Backup for us.
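One way to confirm, once the pull has happened (both commands assume the lab names used above):

```powershell
# Check the feature directly on the target node
Get-WindowsFeature -Name Windows-Server-Backup -ComputerName Member2

# Or ask DSC itself whether the node matches its configuration
Test-DscConfiguration -CimSession Member2
```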
Setting Things Up
We're going to assume you've read through the HTTP(S) pull server
setup, and will just point out the major differences involved in using
SMB instead.
First, you need to make sure that target nodes are configured to use
the pull server. In the LCM configuration, instead of this:
````
DownloadManagerName = "WebDownloadManager"
DownloadManagerCustomData = @{
    ServerUrl = 'http://pull1.lab.pri:8080/PSDSCPullServer.svc';
    AllowUnsecureConnection = 'true' }
````
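For SMB, you would swap in the file download manager, along these lines (a hedged sketch; DscFileDownloadManager is the SMB download manager in WMF 4, and the share path matches the one described next):

```powershell
DownloadManagerName = "DscFileDownloadManager"
DownloadManagerCustomData = @{
    SourcePath = '\\PULL1\PullShare' }
```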
This tells the target node to get its configuration from the
\\PULL1\PullShare shared folder. Easy enough. You don't actually have
to go through any complex setup on the pull server; you just create
the specified shared folder.
Drop your MOF files (which must have a GUID.mof filename) and their
checksum files (GUID.mof.checksum; use New-DscChecksum to
generate checksum files) into the shared folder. Boom, you're done.
An SMB pull server can also include custom resources. As we discuss in
the Deploying Resources section of this guide, you have to ZIP up the
resource modules, name them according to convention, and provide
checksum files. But then you just drop those ZIP and checksum files
right into the pull server shared folder - nothing more to do.
For an HTTP(S) pull server, you would create a farm of web servers,
each configured identically. You could load balance between them
using a hardware or software load balancer, Windows Network Load
Balancing (NLB), or even IIS Application Request Routing (ARR). To
ensure that each web server has the same content, you could replicate
between them using DFS-R, or have them all draw their content from a
common back-end shared folder (that would normally be a clustered
file server, so that the shared folder wasn't a single point of failure).
For an SMB pull server, the LCM will present that credential to gain
access to the SMB file share.
For an HTTP(S) pull server, the LCM will present that credential to
the web server if prompted to do so. Note that the credential is
sent via the HTTP authentication protocol, which means you'll
usually configure IIS to use Basic authentication, which means the
password and username will be transmitted in the clear. You should
only do that if you've configured the pull server and the LCM to use
encrypted (HTTPS) connections, so that the password and
username can't be intercepted.
Each resource lives in its own folder, and the folder name is the
resource's complete name (often longer than the resource's
friendly name).
name, perhaps their role or job title, and other pieces of information.
Each piece of information becomes a _property_, or setting, of the
resource.
Take the built-in File resource as another example. Its properties
include the source path and destination path for a file. At a minimum, it
needs those two pieces of information to see if a file exists on the
target node, and to go get the file if necessary.
As a sort of running example, we're going to create a sample resource
for a fictional line-of-business application. Our resource will be named
adatumOrdersApp. Naming is important: we're pretending that we
work for Adatum, one of Microsoft's fictional company names. Prefixing
our resource name with the company name helps ensure that our
resource doesn't conflict with any other resources that relate to order
applications.
We've decided that our resource will be responsible for maintaining the
user list of the application. Users consist of a login name, a full name,
and a role. The role must be Agent, Manager, or Auditor. So our
resource will need to expose at least three properties. In addition, it
will expose an Ensure property that can be Present or Absent
(e.g., ensuring a user is, or isn't, in the system).
Note that we're formatting the code samples here for easier reading in
the book. These won't run if you type them exactly as shown, but
they're not meant to run, because they're controlling a fictional
application.
That gives us four variables that each define one of our resource's
properties, or settings. Now we can create the skeleton for the
resource:
````
New-DscResource -Name adatumOrdersApp `
    -Properties $logon,$ensure,$fullname,$role `
    -Path 'C:\Program Files\WindowsPowerShell\Modules\adatumOrderApp' `
    -FriendlyName adatumOrderApp
````
That command should create a folder for our root module, which will be
named adatumOrderApp. It will also create the subfolder for the actual
resource, which is also named adatumOrderApp. It'll also create an
empty adatumOrderApp.psm1 script module, and the
adatumOrderApp.schema.mof file that defines the resource's structure.
If Ensure is set to Absent and the user exists, this function should
remove the user.
If Ensure is set to Present and the user exists, this function does
nothing.
If Ensure is set to Absent and the user does not exist, this
function does nothing.
If Ensure is set to Present and the user does not exist, this
function must create the user.
Your Test function compares the current state to the desired state.
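In skeleton form, a MOF-based resource module exports these three functions (the function names are the real DSC contract; the bodies here are placeholders for the fictional application):

```powershell
function Get-TargetResource {
    param([string]$Logon)
    # Return a hashtable describing the user's current state
    @{ Logon = $Logon; Ensure = 'Absent'; FullName = $null; Role = $null }
}

function Test-TargetResource {
    param([string]$Logon, [string]$Ensure,
          [string]$FullName, [string]$Role)
    # Compare current state to desired state; return $true or $false
    $false
}

function Set-TargetResource {
    param([string]$Logon, [string]$Ensure,
          [string]$FullName, [string]$Role)
    # Create or remove the user so that reality matches the MOF
}

Export-ModuleMember -Function *-TargetResource
```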
Think Modularly
As you plan a DSC resource, try to think about modular programming
practices. For example, you will probably find a lot of duplicated code
in your Test-TargetResource and Set-TargetResource functions, because
they both have to check and see if things are configured as desired.
Rather than actually duplicating code, it might make sense to move
the check it code into separate utility functions within the script
````
        IPAddress      = $IPAddress
        InterfaceAlias = 'Ethernet'
        DefaultGateway = '192.168.10.2'
        SubnetMask     = 24
        AddressFamily  = 'IPv4'
    }
    xDNSServerAddress DNSServer {
        Address        = '192.168.12.8'
        InterfaceAlias = 'Ethernet'
        AddressFamily  = 'IPv4'
    }
}
````
Notice that there's no Node section in this configuration! That's
because this isn't going to target a specific node. We did use the
xNetworking module (from
http://gallery.technet.microsoft.com/scriptcenter/xNetworking-Module-818b3583),
and we provided an input parameter so that the target
node's desired IP address can be provided. We've hardcoded the
remaining IP address and DNS server settings, presumably based on
our knowledge of how our environment is set up. You could obviously
parameterize any of those things - we just wanted to illustrate how you
can mix-and-match parameterized information with hardcoded values.
The trick is in how you save this file. We're going to name it
CommonConfig.schema.psm1 - notice the schema.psm1 filename
extension, because that's what makes this magical. You then need to
save it the same way you'd save a resource. For example, we'll save it
in \Program
Files\WindowsPowerShell\Modules\CommonConfigModule\DSCResources\CommonConfig\CommonConfig.schema.psm1.
Now you need to create a module manifest for it, in the same folder.
The manifest filename must be CommonConfig.psd1:
New-ModuleManifest -Path "\Program Files\WindowsPowerShell\Modules\CommonConfigModule\DSCResources\CommonConfig\CommonConfig.psd1" -RootModule "CommonConfig.schema.psm1"
If you don't already have one, you'll also need a manifest at the root of
the module folder:
New-ModuleManifest -Path "\Program Files\WindowsPowerShell\Modules\CommonConfigModule\CommonConfigModule.psd1"
````
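Once saved, the composite resource can be used inside any configuration, just like a regular resource (the node name and IP address here are hypothetical):

```powershell
Configuration UseCommonConfig {
    Import-DscResource -ModuleName CommonConfigModule
    Node 'SRV2' {
        # The composite resource's parameter becomes a property
        CommonConfig Baseline {
            IPAddress = '192.168.10.50'
        }
    }
}
UseCommonConfig
```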
Note that there are reports (see
http://connect.microsoft.com/PowerShell/feedback/details/804230/lcm-fails-to-extract-module-zipped-with-system-io-compression-zipfile)
of ZIP files not working with DSC if they were created by using the .NET
Framework's System.IO.Compression.ZipFile.CreateFromDirectory()
method. Be aware.
Note that DSC will always try to use the newest version of a resource
(technically, only the module in which a resource lives has a version;
every resource in that module is considered to have the same version
number as the module itself). So, when DSC pulls a configuration, it'll
check the pull server for new versions of any resources used by that
configuration. You can configure the LCM to not download newer
modules, if desired. In a configuration, you can also specify a version
number when using Import-DscResource; if you do so, then DSC will
look for, and only use, the specified version.
ConfigurationID. The GUID that the node will look for on the pull
server when it pulls its configuration.
Note that the LCM will only deal with a single configuration. If you push
a new one, the LCM will start using that one instead of any previous
one. In pull mode, it will pull only one configuration. You can use
composite configurations (we described them earlier) to combine
multiple configurations into one.
When you create a meta-configuration MOF, you apply it by running
Set-DscLocalConfigurationManager, not
Start-DscConfiguration. Configurations containing a
LocalConfigurationManager setting should not contain any other
configuration items.
For more information on LCM configuration, read
http://blogs.msdn.com/b/powershell/archive/2013/12/09/understanding-meta-configuration-in-windows-powershell-desired-state-configuration.aspx.
As of this writing, the information at
http://technet.microsoft.com/en-us/library/dn249922.aspx is
inaccurate.
````
configuration CredentialEncryptionExample
{
param(
[Parameter(Mandatory=$true)][ValidateNotNullorEmpty()]
[PsCredential] $credential
)
Node $AllNodes.NodeName
{
File exampleFile
{
SourcePath = "\\Server\share\path\file.ext"
DestinationPath = "C:\destinationPath"
Credential = $credential
}
}
}
````
This configuration, when run, will prompt you for a PSCredential - a
username and password. Notice that the File resource is being used to
copy a file from a UNC path. Assuming that UNC path requires a
non-anonymous connection, we'll need to provide a credential - and we
pass the PSCredential that you were prompted for.
The trick in this configuration comes from the $AllNodes variable.
Here's how we set that up:
````
$ConfigData = @{
    AllNodes = @(
        @{
            NodeName        = 'targetNode'
            CertificateFile = 'C:\publicKeys\targetNode.cer'
            Thumbprint      = 'AC23EA3A...'   # illustrative values; see text
        }
    )
}
````
So we've created a hashtable in $ConfigData. The hashtable has a
single key, named AllNodes, and the value of that key is an array of
one item. That one item is itself a hashtable with three keys:
NodeName, CertificateFile, and Thumbprint. The certificate isn't
installed on this computer, but has instead been exported to a .CER
file. However, the certificate must be installed on any nodes that we'll
target with this configuration. We've also configured the LCM on the
target node to have that thumbprint as its CertificateId setting.
So, we've created the configuration. We've created a block of
configuration data to pass to it, and that block includes the certificate
details. Now we need to run the configuration, passing in that
configuration data:
CredentialEncryptionExample -ConfigurationData $ConfigData
As a note, the node where we created the MOF file uses the public key
from the certificate; the target node uses the private key for
decryption.
[Parameter(Mandatory=$True)]
[string]$EthernetInterfaceAlias
)
# Ensure KB2883200 is installed on all computers
# Expecting Windows 8.1 and Windows Server 2012 R2
# Run this script on each computer to produce
# MOF files and start configuration
# You should do DC1 first, then the other two
# You will be prompted for two credentials
# For the first, provide DOMAIN\Administrator and Pa$$w0rd
# For the second, provide Administrator and Pa$$w0rd
# This assumes that the Student Materials files are at
# E:\AllFiles
# Computers must have the following downloaded and installed
# into C:\Program Files\WindowsPowerShell\Modules:
# - http://gallery.technet.microsoft.com/xActiveDirectory-f2d573f3
# - http://gallery.technet.microsoft.com/scriptcenter/xComputerManagement-Module-3ad911cc
# - http://gallery.technet.microsoft.com/scriptcenter/xNetworking-Module-818b3583
Import-DscResource -ModuleName xActiveDirectory,xNetworking,xComputerManagement
Node DC1 {
WindowsFeature ADDSInstall {
    Ensure = 'Present'
    Name   = 'AD-Domain-Services'
}
xIPAddress IP {
IPAddress = '10.0.0.10'
InterfaceAlias = $EthernetInterfaceAlias
DefaultGateway = '10.0.0.1'
SubnetMask = 24
AddressFamily = 'IPv4'
}
xDNSServerAddress DNS {
Address = '127.0.0.1'
InterfaceAlias = $EthernetInterfaceAlias
AddressFamily = 'IPv4'
}
xComputer Computer {
Name = 'DC1'
}
xADDomain Adatum {
DomainName = 'domain.pri'
DomainAdministratorCredential = $domaincredential
SafemodeAdministratorPassword = $SafeModeCredential
DependsOn = '[WindowsFeature]ADDSInstall',
'[xIPAddress]IP',
'[xDNSServerAddress]DNS',
'[xComputer]Computer'
}
}
Node CL1 {
xIPAddress IP {
    IPAddress      = '10.0.0.30'
    InterfaceAlias = $EthernetInterfaceAlias
    DefaultGateway = '10.0.0.1'
    SubnetMask     = 24
    AddressFamily  = 'IPv4'
}
xDNSServerAddress DNS {
Address = '10.0.0.10'
InterfaceAlias = $EthernetInterfaceAlias
AddressFamily = 'IPv4'
}
xComputer Computer {
Name = 'CL1'
DomainName = 'DOMAIN'
Credential = $DomainCredential
DependsOn = '[xIPAddress]IP','[xDNSServerAddress]DNS'
}
Environment Env {
Name = 'PSModulePath'
Ensure = 'Present'
Path = $true
Value = 'E:\AllFiles\Modules'
}
}
Node SRV1 {
xIPAddress IP {
    IPAddress      = '10.0.0.20'
    InterfaceAlias = $EthernetInterfaceAlias
    DefaultGateway = '10.0.0.1'
    SubnetMask     = 24
    AddressFamily  = 'IPv4'
}
xDNSServerAddress DNS {
    Address        = '10.0.0.10'
    InterfaceAlias = $EthernetInterfaceAlias
    AddressFamily  = 'IPv4'
}
xComputer Computer {
Name = 'SRV1'
DomainName = 'DOMAIN'
Credential = $DomainCredential
DependsOn = '[xIPAddress]IP','[xDNSServerAddress]DNS'
}
}
}
$ConfigurationData = @{
AllNodes = @(
@{
NodeName='LON-DC1'
PSDscAllowPlainTextPassword=$true
}
@{
NodeName='LON-SRV1'
PSDscAllowPlainTextPassword=$true
}
@{
NodeName='LON-CL1'
PSDscAllowPlainTextPassword=$true
}
)
}
LabSetup -OutputPath C:\LabSetup -ConfigurationData $ConfigurationData `
    -EthernetInterfaceAlias Ethernet0
We've included more than one Node section. So, we're using this
single configuration to set up an entire environment. When we run
this, we'll get three MOF files.
Running this script on a lab computer produces all three MOFs, but
then only the local computer's MOF is actually run. We presume
that the bare computers will have the correct computer names
already, something that's taken care of by our base images.
Obviously, not every lab setup will be that way, but ours is.
````
Wevtutil.exe set-log Microsoft-Windows-Dsc/Analytic /q:true /e:true
Wevtutil.exe set-log Microsoft-Windows-Dsc/Debug /q:true /e:true
````
The Operational log contains error messages, and is a good place to
start troubleshooting. The Analytic log has more detailed messages,
and any verbose messages produced by the DSC engine. The Debug
log contains developer-level messages that might not be useful unless
you're working with Microsoft Product Support to troubleshoot a
problem. For more information on using these logs, visit
http://blogs.msdn.com/b/powershell/archive/2014/01/03/using-event-logs-to-diagnose-errors-in-desired-state-configuration.aspx.
Wave 2 of the DSC Resource Kit includes an xDscDiagnostics module,
which you can use to help analyze DSC failures. There are two
commands in the module: Get-xDscOperation and
Trace-xDscOperation. For a quick walkthrough of using these commands,
visit http://blogs.msdn.com/b/powershell/archive/2014/02/11/dsc-diagnostics-module-analyze-dsc-logs-instantly-now.aspx.
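In practice the two commands work roughly like this (parameter names are our recollection of the Wave 2 release; output will vary by machine):

```powershell
# List the most recent DSC runs, with success/failure status
Get-xDscOperation -Newest 5

# Pull together the event-log details for one of those runs
Trace-xDscOperation -SequenceID 3
```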
Compliance Servers
A DSC pull server can actually contain two components: the pull server
bit, plus what Microsoft is presently calling a compliance server. That
name will likely change, but it's a good placeholder for now.
In short, the compliance server maintains a database (which it shares
with the pull server piece) that lists all of the computers that have
checked in with the pull server, and shows their current state of
configuration compliance - that is, whether or not their actual
configuration matches their desired configuration. So, part of the LCM's
job is to not only grab configuration MOFs from the pull server, but also
to report back on how things are looking.
You can then generate reports from that database, showing you which
nodes are compliant and which ones aren't. Remember that the LCM
can be configured to not continually apply configurations - we covered
that earlier with its ConfigurationMode setting, which can be set to
ApplyAndMonitor. In that mode, the LCM can let you know which
machines are out of compliance, but not actually do anything about it.
Note that the LCM will only report back if you're using an HTTP(S) pull
server; this trick won't work with an SMB pull server.
It's still early days for this particular aspect of DSC, so we'll expand this
section of the guide once there's more information from Microsoft on
this feature.