Datacenter Security
A compendium of our best recent coverage
February 2014
Table of Contents
Is Your Security Program Effective?
5 Protocols That Should Be Closely Watched (Most Liked)
How Cloud Security Drives Business Agility
5 Monitoring Initiatives For 2014 (Most Tweeted)
Cloud Growth Drives Need For Unified Datacenter Monitoring
Network Baseline Information Key To Detecting Anomalies
Is Your Security Program Effective?
COMMENTARY
Business leaders can, and should, insist on metrics to prove protection efforts are worth the money.
By Jeff Lowder
As we put the final touches on 2014 budgets, many security leaders are asking for more money now to keep bad things from happening later. CEOs and CISOs have done this dance for years. But today I see many business leaders asking: What do we have to show for all of these information security investments? How do I know we're spending the right amount? How do I know our security program actually works?

This last question is especially tricky. You've either had a security breach or you haven't. If you have had a major incident, were you unprepared or just unlucky to be targeted by a high-powered attacker? If you haven't had a major breach, is that because of a good security strategy? Or did you just get lucky? Can you even know for sure?

The correct answer to these questions is: risk reduction, as borne out by our risk management program. I'll explain what that looks like in a moment. But first, here are seven questions business leaders should ask their CISOs, and the answers that should worry them.

1. How do I know our risk management program works?
(Red-flag answers: "I don't know," or "We use X, and X is a best practice.")
2. Do we have a defined risk management methodology?
(Red-flag answer: No.)

3. Where did our methodology come from? Which interdisciplinary techniques do we use?
(Red-flag answers: "We invented our own," or "I don't know.")

4. How do we measure probability, frequency, and business impact? Do we use ranges of numbers?
(If the answer is no, you might be in possession of a red flag.)

5. Does our risk management methodology require detailed, calibrated estimates? Is the CSO/CISO calibrated?
(No to either question is a red flag.)

6. Can the CSO/CISO explain the base rate fallacy?
(The answer should be yes.)

7. Do we measure probability, frequency, and impact with a scale, such as high, medium, and low? Do we use risk matrices or heat maps to summarize risks?
(If the answer to both questions is yes, that's a red flag. Gotcha!)
If you've asked these questions, chances are you've also gotten a lot of wrong answers. You're not alone. Most companies use what I call a qualitative approach that, by definition, focuses on qualities, attributes, or characteristics of things. Examples include marking off checklists of compliance requirements and benchmarking the company against peers. While easy to do, qualitative approaches by themselves don't answer the important questions. Just because my peers are doing X, why does that make X the right approach for us?

You need a complementary quantitative approach that, by definition, focuses on numerical measurements that make it possible to answer our questions. For example:

Q: How can I know if a security investment is a good one?
A: First, measure the amount of risk reduction achieved by the investment. Second, find out if the investment increased risk in other areas. Third, measure the risk reduction per unit cost.
Table 1: Comparing Security Controls

Control      Cost        RRPUC
Control #1   $300,000    $1.52
Control #2   $300,000    $20.45
Control #3   $300,000    $10.00
Control #4   $300,000    $3.00
Good security investments not only reduce risk (and avoid increasing other risks), they optimize the balance between risk reduction and cost. Here's a typical conversation:

CFO: How do I know our security program actually works?

CISO: Because the expected loss from security-related events with those security investments in place is less than what it would be without them.

CFO: How so?

CISO: Take our investment in data retention controls. Without these controls, we know that we will suffer an average of one loss event per year, and the cost of a loss incident is approximately $250,000, for an annual expected loss of $250,000 per year. With data retention controls, we know that we will suffer an average of one loss event per decade, while the cost of that loss incident remains the same, for an annual expected loss of 0.1 x $250,000/year = $25,000/year. So the risk reduction is $250,000/year - $25,000/year = $225,000/year.

CFO: Where did you get these numbers? How do you know the frequency of loss events with and without the security controls?
Future IT Budget
Compared with the current fiscal year, will next fiscal year's IT budget increase, decrease, or remain the same?

                           2014    2013
Increase more than 10%      21%     18%
Increase 5% to 10%          24%     20%
Increase less than 5%        7%      9%
Stay flat                   33%     33%
Decrease less than 5%        7%      8%
Decrease 5% to 10%           6%      7%
Decrease more than 10%       2%      5%

Base: 289 respondents in October 2013 and 293 in September 2012
Data: InformationWeek IT Budget Survey of business technology professionals at organizations with 50 or more employees
CISO: When it exists, we use historical data. When it doesn't, we use calibrated estimates. The people providing these numbers have gone through calibration training. Psychological studies have consistently shown that calibration training significantly improves the accuracy of people's estimates.

CFO: How does it work?

CISO: Almost everyone is systematically biased toward overconfidence or underconfidence. Calibration training exposes people to their biases and teaches them how to avoid them. People learn, for example, how to estimate using ranges and confidence intervals. They will give a range of numbers, say, one to 10 loss events per year, and a confidence interval (CI) of, say, 90%. The range simply means that the actual number of loss events per year is between one and 10. The 90% CI means that if the expert gave 10 estimates with a 90% CI, the ranges in nine of those estimates would contain the correct number.

CFO: OK, got it. But even with calibrated estimates, how do we know we're investing the right amount?

CISO: We don't want to get the most security, because that costs too much. Nor do we want the cheapest security, because that doesn't consider risk reduction. Instead, we want the optimum balance between cost and risk reduction. So we measure the risk reduction per unit cost (RRPUC) of various options. For example, our data retention controls cost $11,000. So the RRPUC equals $225,000 divided by $11,000, or $20.45.
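To make the calibration idea concrete, here is a minimal, hypothetical sketch of how a team might score its own estimators after the fact: gather past interval estimates alongside the values that actually occurred, and check whether roughly nine out of ten of the 90% ranges contained the truth. The sample data is invented for illustration.

    # Minimal calibration check: are an estimator's 90% confidence
    # intervals actually right about 90% of the time?
    # (low, high, actual) triples are hypothetical historical estimates.
    past_estimates = [
        (1, 10, 4),    # estimated 1-10 loss events/year, 4 occurred
        (2, 6, 7),     # actual value fell outside the stated range
        (0, 3, 1),
        (5, 20, 12),
        (1, 4, 2),
    ]

    hits = sum(1 for low, high, actual in past_estimates if low <= actual <= high)
    hit_rate = hits / len(past_estimates)
    print(f"Stated confidence: 90%, observed hit rate: {hit_rate:.0%}")
    # A well-calibrated estimator's hit rate should hover near the stated
    # confidence; consistently higher suggests underconfidence, lower
    # suggests overconfidence.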
RRPUC measures a proposed control's cost-effectiveness at reducing risk. If the RRPUC is exactly one, then the proposed control isn't any more cost-effective than no control at all. A ratio much greater than one, such as the $20.45 referenced above, suggests the control is a good investment.
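The arithmetic in that exchange, and the RRPUC figure it leads to, can be written down in a few lines. This is a sketch of the calculation as described above, with the data retention example plugged in; the function names are illustrative, not part of any standard toolkit.

    # Annual expected loss = expected loss events per year x cost per event.
    def annual_expected_loss(events_per_year, cost_per_event):
        return events_per_year * cost_per_event

    # RRPUC = risk reduction delivered per dollar spent on the control.
    def rrpuc(loss_without, loss_with, control_cost):
        return (loss_without - loss_with) / control_cost

    # Data retention example from the dialogue:
    loss_without = annual_expected_loss(1.0, 250_000)   # $250,000/year
    loss_with = annual_expected_loss(0.1, 250_000)      # $25,000/year
    print(round(rrpuc(loss_without, loss_with, 11_000), 2))   # 20.45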
The beauty of the RRPUC approach is that it enables CXOs to compare options in a portfolio of proposed security investments. Suppose your CISO proposes the four controls shown in Table 1. If your security budget tops out at $300,000, control No. 2 is clearly the best option. If it's $600,000, controls 2 and 3 would be a good combination. But if your budget is $1.2 million or greater, controls 1 and 4 may be poor investments because their RRPUC values are so low.
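As a sketch of how that portfolio comparison might be automated, the snippet below ranks candidate controls by RRPUC and funds them greedily until a budget is exhausted. The control data mirrors Table 1; the greedy rule is one reasonable heuristic under the assumption that the controls are independent, not a prescription from the article.

    # Rank candidate controls by RRPUC and fund them until the budget runs out.
    controls = [
        {"name": "Control #1", "cost": 300_000, "rrpuc": 1.52},
        {"name": "Control #2", "cost": 300_000, "rrpuc": 20.45},
        {"name": "Control #3", "cost": 300_000, "rrpuc": 10.00},
        {"name": "Control #4", "cost": 300_000, "rrpuc": 3.00},
    ]

    def fund_controls(controls, budget):
        funded = []
        for c in sorted(controls, key=lambda c: c["rrpuc"], reverse=True):
            if c["cost"] <= budget:
                funded.append(c["name"])
                budget -= c["cost"]
        return funded

    print(fund_controls(controls, 300_000))   # ['Control #2']
    print(fund_controls(controls, 600_000))   # ['Control #2', 'Control #3']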
So let's revisit our original questions. How do I know we're investing the right amount? You know that you are investing the right amount because the RRPUC approach forces you to balance risk reduction with cost.

How do I know our security program actually works? The RRPUC approach provides at least part of the answer because it shows that your security investments actually reduce risk.

I hope more organizations will adopt an RRPUC approach when analyzing and managing their IT risks. It's the best way to retire those red flags.

About the Author: Jeff Lowder is president of the Society of Information Risk Analysts and director of global information security and privacy at OpenMarket, a subsidiary of Amdocs. Jeff previously served as CISO at Disney Interactive and director of information security at The Walt Disney Co. and the US Air Force Academy, and held other senior security positions at United Online and PricewaterhouseCoopers.
5 Protocols That Should Be Closely Watched
By Robert Lemos
Attackers frequently scan for open SSH, FTP, and RDP ports, but companies need to watch out for attacks against less-common protocols as well.

For decades, opportunistic attackers have scanned the Internet for open ports through which they can compromise vulnerable applications.

Such scanning has only gotten easier: The Shodan search engine regularly scans the Internet and stores the results for anyone to search; researchers from the University of Michigan have refined techniques to allow for fast, comprehensive scans of a single port across the Internet; and programs such as Nmap allow anyone to scan for open, and potentially vulnerable, ports.
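As a concrete starting point, the sketch below shows one way to sweep your own address space for the less-common ports discussed later in this piece by shelling out to Nmap. It assumes Nmap is installed and that you are authorized to scan the target range; the port list and target are illustrative placeholders.

    import subprocess

    # Ports tied to the protocols covered in this article (illustrative list):
    # IPMI (623/udp), SNMP (161/udp), H.323 videoconferencing (1720/tcp),
    # Microsoft SQL Server (1433/tcp), MySQL (3306/tcp), and common
    # embedded web interfaces (80, 443, 8080/tcp).
    PORT_SPEC = "T:80,443,1433,1720,3306,8080,U:161,623"

    def scan(target_range):
        # -sS (TCP SYN) and -sU (UDP) scans require root; --open lists only open ports.
        # Run this only against address space you own or are authorized to test.
        result = subprocess.run(
            ["nmap", "-sS", "-sU", "-p", PORT_SPEC, "--open", target_range],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(scan("192.0.2.0/24"))  # RFC 5737 documentation range as a placeholder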
While the most commonly attacked ports are those used by Secure Shell (SSH), the File Transfer Protocol (FTP), the Remote Desktop Protocol (RDP), and Web servers (HTTP), companies need to monitor network activity aimed at less-common protocols and ports, security experts say. Attackers will likely increasingly look for vulnerabilities in less-common ports, says HD Moore, chief research officer for vulnerability management firm Rapid7, which has made a name for itself scanning the Internet for just those ports.

"This stuff is not in the top bucket in terms of priority, but it tends to bite people because they are not keeping an eye on it," he says.

Companies shouldn't just monitor for malicious activity using these protocols; they should take an inventory of the applications inside their own networks and connected to the Internet that expose firms to potential opportunistic attacks, says Johannes Ullrich, dean of research for the SANS Technology Institute. The SANS Institute's DShield project collects data from contributors to analyze the ports in which attackers are most interested.

"Companies need not just detect the attacks coming in, but to inventory all the devices that they have in their network looking at traffic on these ports," he says. "It sort of comes down to inventory control on the network."

For companies looking for a place to start, Ullrich and Moore suggest five protocols where companies can check for weaknesses.
1. Intelligent Platform Management Interface
Over the past year, security researcher Dan Farmer has investigated weaknesses in the IPMI protocol. Many companies use servers that can be monitored and managed through a baseboard management controller, an embedded device that communicates using IPMI. Farmer found that the IPMI standard and various implementations have a number of security flaws.

Rapid7 investigated Supermicro's implementation, finding that the company's baseboard management controller used default passwords and was vulnerable to a number of universal plug-and-play issues.

"IPMI is used a lot by businesses, and they don't really understand what all the risks are," Moore says. "It is really difficult to have an IPMI installation that is not vulnerable."

Moore and other security experts recommend managing devices that use the IPMI protocol behind virtual private networks, firewalls, and other security, always assuming the devices are in a hostile network.
2. Embedded Web servers
A variety of devices are vulnerable not because of the native protocols that they use, but because of the lightweight Web servers embedded in the devices to provide a management interface. From printers and baseboard management controllers to routers and PBX systems, companies host a wide array of devices that likely have vulnerable Web interfaces to manage the technology.

"These undocumented, undisclosed, and unmonitored Web interfaces are a bigger deal than most people realize," Moore says. "They are really common, but they are not something that people normally keep track of."

Ullrich agrees, saying that DShield data shows that companies are seeing opportunistic scans for the devices.

"All the miscellaneous devices (routers, switches) sometimes have a management interface on an uncommon port, but you see a decent amount of scanning activity for these," he says.
3. Videoconferencing
Last year, Moore scanned the Internet for signs of videoconferencing systems connected directly to the Internet and set to auto-answer, estimating that some 150,000 devices were vulnerable to an attacker directly calling into the conferencing system.

"Most folks did not do any sort of security on the videoconferencing side, and many of them had really horrible security on the Web management interface," Moore says.

Companies should scan their public Internet space on port 1720, typically used by the H.323 messaging protocol, using a "status enquiry" to nonintrusively check for potentially vulnerable systems, according to Rapid7.
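As a rough, hedged approximation of that check, the sketch below only tests which hosts in a public range accept TCP connections on port 1720; anything that answers is a candidate for a proper follow-up with an H.225 status enquiry. The address range is a placeholder and the timeout is arbitrary.

    import socket
    from ipaddress import ip_network

    def h323_listeners(cidr, timeout=1.0):
        """Return hosts that accept TCP connections on port 1720 (H.323 call signaling)."""
        listeners = []
        for host in ip_network(cidr).hosts():
            try:
                # A successful connect only proves something is listening;
                # a real assessment would follow with an H.225 Status Enquiry.
                with socket.create_connection((str(host), 1720), timeout=timeout):
                    listeners.append(str(host))
            except OSError:
                pass
        return listeners

    print(h323_listeners("198.51.100.0/28"))  # documentation range as a placeholder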
4. SQL Servers
Databases are frequent targets of attacks. Many attackers scan for open Microsoft SQL Server and MySQL ports, but rather than attempting to compromise such systems with exploits, they instead attempt to brute-force the password protecting the databases, says the SANS Institute's Ullrich.

"They typically don't search for a vulnerability there but for a weak password," he says. "They scan for the databases and then try to connect by guessing passwords."

Companies should track down any database accessible from the Internet and ensure that adequate steps are taken to secure access to the servers.
5. Simple Network Management Protocol
The DShield project sees some scanning for SNMP, but Ullrich sees the protocol as mainly an overlooked risk.

Moore, however, sees SNMP as an engine for future attacks. Because many companies do not pay attention to SNMP, the protocol could be used as a vector for compromise and as a method of amplification for distributed denial-of-service attacks, Moore says.

"SNMP tends to get short shrift in terms of security exposure, not to mention it can be used for amplification attacks," Moore says. Amplification attacks typically use the DNS system, which can be made to respond to a single request with a multitude of packets. SNMP has similar characteristics, he says.

Companies should filter inbound malformed packets to prevent their systems from being used in a distributed denial-of-service attack, and block all outbound SNMP packets.
How Cloud Security Drives Business Agility
COMMENTARY
Cloud computing represents a unique opportunity to rethink enterprise security and risk management.
By Bankim Tejani
Cloud security has become a divisive topic within many companies. Some see cloud computing as a business necessity, required to keep up with competitors, or a vehicle to transform old-world IT. Others see daunting and dangerous security risks. To me, cloud computing represents an opportunity to rethink, redesign, and operationalize information security and risk management to drive business agility.

Cloud computing offers a unique change in managing information systems: the use of automation. While most look at automation as the cornerstone of cloud computing's cost savings and efficiency, automation is equally valuable, if not more so, for information security and risk management. Looking at today's security problems, the landscape is littered with methods that are largely manual and disconnected:

- Business systems are launched and retired faster than security teams can identify, analyze, and track.
- Risks are implicitly accepted by business sponsors during design, development, and operation, but mitigated only when pressed by security and risk management.
- Security policies are enforced primarily by manually executed audits and processes.
Scaling today's information security and risk management problems to cloud velocity is untenable, but doing so without refactoring poses an even greater risk to the enterprise.

A successful approach combines the refactoring of existing information security and risk management practices with automation that operates at cloud speed and scale. That automation consists of four key components:

- An execution engine that reliably deploys virtual systems according to a data-driven design
- Life cycle-centric systems management and operational tools
- Automated sensory and scanning systems that identify key issues and risks
- A policy evaluation engine that can drive planned automated responses and notifications
The combination of this powerful automation and refactored information security concepts creates an environment in which security requirements for cloud systems are codified and enforced in a prescriptive and proactive manner.

One example can be seen in enterprises that engage in routine security system and business application scans. The challenges with these scans begin with identifying the systems to be scanned. This is often the most time-consuming process, but it's also the critical factor to success. Once identified, systems are scheduled for scan, then scanned, and results are analyzed. Then the security team communicates the issues to the project/development/business team, and they negotiate remediation timelines, risk acceptance, and deferrals.

The IT security team typically manages the entire process, spending more time on bureaucracy than on security. Because of the overhead, these scans are usually performed on production or near-production systems. The processes are considered successful when each application or server in the enterprise is scanned annually.

In cloud-centric operations, a system may be running for only hours or days, meaning the existing processes will likely miss the system completely. While this gap may be mitigated by slowing down cloud deployments to fit existing processes, a better strategy is revising the security scanning process for the cloud.
In agile cloud operations, for instance, a cloud management platform will be aware of every system started by business and development teams. Through automation and policy, each system is scanned upon startup and restart. Results can be sent automatically to both system owners and information security. More importantly, scans can be performed during the earlier stages of system development, when it's easier, cheaper, and faster to make system changes. Further improvements are gained by automatically separating results into those that may be immediately acted upon by system owners and those that require further analysis by security experts.
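To make the scan-on-startup idea concrete, here is a minimal sketch of the kind of policy hook a cloud management platform might expose. Everything here is hypothetical: the event payload, the stubbed scanner, and the routing rule are stand-ins for whatever your platform and scanning tools actually provide.

    import json

    SEVERITY_NEEDING_EXPERT_REVIEW = {"high", "critical"}

    def run_scan(address):
        # Placeholder for invoking a real vulnerability scanner against `address`;
        # it returns canned findings so the routing logic below is runnable.
        return [
            {"id": "weak-tls", "severity": "low"},
            {"id": "default-creds", "severity": "critical"},
        ]

    def notify(recipient, findings):
        # Stand-in for email or ticketing integration.
        print(f"to {recipient}: {json.dumps(findings)}")

    def on_instance_started(event):
        # Hypothetical callback a cloud management platform would invoke
        # whenever a system starts or restarts.
        findings = run_scan(event["ip_address"])
        routine = [f for f in findings if f["severity"] not in SEVERITY_NEEDING_EXPERT_REVIEW]
        escalate = [f for f in findings if f["severity"] in SEVERITY_NEEDING_EXPERT_REVIEW]
        notify(event["owner_email"], routine)           # owner can act immediately
        notify("security-team@example.com", escalate)   # needs expert analysis

    on_instance_started({"ip_address": "198.51.100.7", "owner_email": "dev-team@example.com"})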
By adapting security scan processes to the cloud, businesses are able to act more nimbly in a cloud-centric environment while moving to more frequent scans and earlier, cheaper remediation. Such gains would not be available without the solid foundation provided by a cloud management platform.

By deploying a cloud management platform with a rich automated policy infrastructure, IT can be confident that it has established governance, compliance, and security that are configurable, automated, and enforced. In doing so, it's enabling the business to operate with cloud speed and agility, knowing that information security has been part of the journey.

About the Author: Bankim Tejani has conducted security research, assessments, training, and consulting for more than a decade. His recent focus is on helping companies and government agencies integrate application security and static analysis into their software development life cycles. Tejani is an active member of the Austin Open Web Application Security Project and co-founder of the Agile Austin Security SIG.
5 Monitoring Initiatives For 2014
By Robert Lemos
To get better visibility into the business and potential threats inside their networks, companies should collect more data, use context, and invest more in their employees' expertise.

Security information and event management (SIEM) systems became much more common in 2013, while more companies talked about using massive data sets to fuel better visibility into the potential threats inside their networks.

Yet effective security monitoring has a long way to go. To better secure their networks and improve visibility into the threats on their systems in 2014, companies first need good communication between business executives and information security managers. While 90% of managers surveyed by network security and management firm SolarWinds thought security was under control, only 30% of the actual IT practitioners believed that security is well-established, according to the firm.

A good place to start is for information technology leaders to ask themselves and their business counterparts what more they want to know about their networks, systems, and employees. Without the right questions, monitoring for threats will be hard, says Dave Bianco, Hunt Team manager for incident-response firm Mandiant, which was acquired by FireEye in January.

"It pays for companies to take a step back and look at what they are doing," Bianco says. "I can look at things that I'm really worried about because of my business, or things that might be interesting to those who are attacking me: not only figure out what you might be able to detect, but figure out what you have to detect them with."

To start the conversation, here are five initiatives that security monitoring experts say should be undertaken this year.
1. Catalogue the sources in your network.
Companies first have to know what they have to work with. A business looking at improving visibility into its network and the threats in the network should first find out what data sources are available, Bianco says.

Companies should collect the logs not only from Web servers, firewalls, and intrusion-detection systems, but from other systems that may not initially be considered sources of intrusion information, he says. One example: the authentication logs for all the systems in the environment.

"Make sure that you are logging the data from these systems correctly and sending it to a central place where you can get access to it," Bianco says. "That way you can turn all those independent log sources into new detection platforms."
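As one hedged illustration of "sending it to a central place," the sketch below forwards application events to a central syslog collector using Python's standard library. The collector address is a placeholder, and most shops would do this with an existing agent (rsyslog, a SIEM forwarder, and so on) rather than per-application code.

    import logging
    import logging.handlers

    # Forward events to a central syslog collector (address is a placeholder).
    handler = logging.handlers.SysLogHandler(address=("logs.example.internal", 514))
    handler.setFormatter(logging.Formatter("myapp: %(levelname)s %(message)s"))

    auth_log = logging.getLogger("auth")
    auth_log.setLevel(logging.INFO)
    auth_log.addHandler(handler)

    # Authentication events become one more searchable detection source.
    auth_log.info("login succeeded user=alice src=10.0.4.17")
    auth_log.warning("login failed user=admin src=203.0.113.9")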
2. Monitor users, not just devices.
Many companies continue to attribute activities to Internet addresses (that is, devices on their networks) rather than dealiasing the user behind those actions, says Patrick Hubbard, head geek for SolarWinds. Yet adding context to the actions being taken on the network is important, he says.

"With more and more Internet-connected devices on the network, the number of humans on the network relative to the number of devices on the network is beginning to decrease, so it is not as easy to have strong authentication from the device," Hubbard says.

Businesses should make an effort this year to attribute actions to specific employees and users by combining authentication information and other sources with network logs.

"You want to look at users not just as logons, but within the context of the identity breadcrumbs they are leaving behind on the network," he says.
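A minimal sketch of that attribution step: join flow or firewall records to authentication events by source IP and time, so each action is tagged with the most recent user seen on that address. The record shapes here are hypothetical; real deployments would pull from DHCP, VPN, and directory logs.

    from bisect import bisect_right

    # Hypothetical auth events: per IP, (timestamp, user) sorted by timestamp.
    AUTH_EVENTS = {
        "10.0.4.17": [(1000, "alice"), (5000, "bob")],
        "10.0.9.33": [(2000, "carol")],
    }

    def user_for(ip, timestamp):
        """Return the user most recently authenticated from `ip` at `timestamp`."""
        events = AUTH_EVENTS.get(ip, [])
        idx = bisect_right([t for t, _ in events], timestamp) - 1
        return events[idx][1] if idx >= 0 else "unknown"

    # Tag network activity with users instead of bare addresses.
    flows = [(1500, "10.0.4.17", "tcp/445"), (6000, "10.0.4.17", "tcp/22")]
    for ts, src_ip, service in flows:
        print(ts, service, "attributed to", user_for(src_ip, ts))
    # -> 1500 tcp/445 attributed to alice
    # -> 6000 tcp/22 attributed to bob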
3. Use more math.
By collecting more data and knowing the questions to ask, companies should find themselves with a lot more information on what is happening in their networks. IT security teams can ask questions of the data and discover incidents that may have otherwise been hidden. However, companies should also allow the data to speak for itself, and to do that, they need math, says Joe Goldberg, senior manager of security and compliance product marketing for data analytics firm Splunk.

By using statistical analysis, companies can determine the outliers in a big data set. If the average employee downloads 10 files from a SharePoint server in a day, then someone downloading 50 files may be an advanced threat actor harvesting data from the company's server, he says.

"Use statistics and math on the sea of data that you've collected to figure out what is abnormal and what is odd," Goldberg says.
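Here is a minimal sketch of that kind of outlier test using only the standard library: compute each user's daily download count, then flag anyone whose count is an extreme outlier by a robust, median-based score. The counts and the cutoff are illustrative choices, not a rule from the article.

    from statistics import median

    # Hypothetical daily SharePoint download counts per user.
    downloads = {"alice": 9, "bob": 11, "carol": 10, "dave": 12, "eve": 50}

    counts = list(downloads.values())
    med = median(counts)
    mad = median(abs(c - med) for c in counts)   # median absolute deviation

    def modified_z(count):
        # Robust outlier score (Iglewicz-Hoaglin); 3.5 is the conventional cutoff.
        return 0.6745 * (count - med) / mad if mad else 0.0

    outliers = {user: count for user, count in downloads.items() if modified_z(count) > 3.5}
    print(outliers)   # {'eve': 50}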
4. Find out more about attackers.
Once companies have the data and the ability to analyze it, they need to know what types of threats may be targeting them, Mandiant's Bianco says.

Companies need to know the adversaries that might be targeting their businesses or industries. Focused threat intelligence can provide that, as well as what techniques are common for those adversaries, Bianco says. Whether an attacker uses spear phishing, SQL injection, or malware to attack a business's systems makes a difference in how a company detects the threats, he says.

"You need to know all these things that influence the catalogue that a company creates of detection scenarios and how they are going to detect those threats," he says.
5. Invest more in your people.
While security practitioners continue to be in high demand, companies should do everything they can to find the necessary expertise and develop that expertise with training, Splunk's Goldberg says.

"You are going to need security practitioners to not only deploy these systems and collect the data, but also to sit behind the desk and monitor and fine-tune them," he says. "You want skilled people who know your environment well, and you cannot always outsource that."
Cloud Growth Drives Need For Unified Datacenter Monitoring
COMMENTARY
Today's complex, cloud-enabled datacenter requires network monitoring tools that provide a comprehensive view of traffic.
By Frank J. Ohlhorst
The complexities of the cloud are becoming much more evident as enterprises seek to add this technology into their datacenters. Regardless of the ultimate goal, be it public cloud, private cloud, or a hybrid cloud integration, datacenter engineers face numerous challenges ranging from security to performance to service provisioning.

However, there is one challenge that transcends all of the services offered: end-to-end data packet transport.

Simply put, a successful IT service, whether it's local (LAN), cloud (WAN), or otherwise, ultimately relies on the efficient movement of data packets across the infrastructure elements. Without this part of the equation, services are unreliable or, worse, unusable. For IT, neither is acceptable for line-of-business or customer service applications.

However, reliability and consistency are not new challenges for the enterprise. Improving both of these has been a challenge since day one of computing. That brings up another question: What exactly has changed to bring core networking reliability back to the forefront of IT services?

The answer to that question comes in two parts, the first of which amounts to the inherent complexity of today's cloud-based applications and services. The second part of the answer is geared more toward
how the complexities have affected those charged with keeping things working.

Today's networks run a plethora of services, ranging from transactional data and VoIP to secure traffic (SSL) and streaming video, each of which can be affected in varying amounts by latency, bandwidth, saturation, packet loss, and much more. Understanding and mitigating those issues isn't for the fainthearted, and further complicating the issue is the inexperience of many of today's newly minted network engineers.

Of course, many may answer the challenge with various network management platforms, yet something is still missing. That something is the unification of those management tools to provide end-to-end visibility for identifying problems and, more importantly, how to solve them.

The fragmentation in network monitoring tools has been caused by all of the different things we require of today's networks. We expect to move data, run applications, remotely access systems, process transactions, make phone calls (VoIP), host conferences, and so much more. Add to that the perceived need to "cloudify" those capabilities, and network engineers are presented with a mishmash of protocols, codecs, packets, encryption schemes, as well as other things that may live on the wire.

What we have done is create a situation that can quickly overwhelm most any network management tool, as well as the engineer using it.
Splitting the services up into different management silos is not the answer. Why should VoIP traffic be managed differently than SSL traffic, especially when it all moves across the same boxes, wires, and infrastructure equipment?

Many are finding that without the big picture, it's almost impossible to solve a problem that may have its roots in the cloud, the datacenter, or the edge of the network. The key to resolving this issue, and others, comes in the form of network management and troubleshooting tools that can deliver the whole picture, not just independent pieces of the puzzle.

That means management and troubleshooting must become a platform unto itself, and that platform needs to be able to track every packet, every conversation, every connection, and every piece of software and hardware involved in an electronic conversation. What's more, that tool (or platform) must be able to record everything that travels over the wire and offer ways to reconstruct the conversation for troubleshooting purposes.
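As one small illustration of tracking "every conversation," the sketch below summarizes a packet capture into per-conversation byte counts using the scapy library. It assumes scapy is installed and that a capture file exists at the placeholder path; production monitoring platforms do this continuously and at far greater depth.

    from collections import Counter
    from scapy.all import rdpcap, IP

    def conversation_bytes(pcap_path):
        """Aggregate bytes per (src, dst) address pair from a capture file."""
        totals = Counter()
        for pkt in rdpcap(pcap_path):
            if IP in pkt:
                totals[(pkt[IP].src, pkt[IP].dst)] += len(pkt)
        return totals

    # Placeholder path; point this at a capture taken on your own network.
    for (src, dst), nbytes in conversation_bytes("sample.pcap").most_common(10):
        print(f"{src} -> {dst}: {nbytes} bytes")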
Finding such a tool presents a major challenge, in terms of both budget and availability, that may not be easily solved. So the question becomes: What tools will you use to meet those extensive needs?

About the Author: Frank J. Ohlhorst is an award-winning technology journalist, professional speaker, and IT business consultant with more than 25 years of experience in the technology arena.
Network Baseline Information Key To Detecting Anomalies
By Ericka Chickowski
Establishing normal behaviors, traffic, and patterns across the network makes it easier to spot previously unknown bad behavior.

While so much time in network security is spent discussing the discovery of anomalies that can indicate attack, one thing that sometimes gets forgotten is how fundamental it is to first understand what normal looks like. Establishing baseline data for normal traffic activity and standard configuration for network devices can go a long way toward helping security analysts spot potential problems, experts say.

"There are so many distinct activities in today's networks with a high amount of variance that it is extremely difficult to discover security issues without understanding what normal looks like," says Seth Goldhammer, director of product management for LogRhythm.
Wolfgang Kandek, CTO of Qualys, agrees, stating that establishing baseline data makes it easier for IT organizations to track deviations from that baseline.

"For example, if one knows that the use of dynamic DNS services is at a low 0.5% of normal DNS traffic, an increase to 5% is an anomaly that should be investigated and might well lead to the detection of a malware infection," Kandek says.
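A sketch of that ratio check, assuming you already count DNS queries and can classify which ones go to dynamic DNS providers; the counts, baseline, and alert threshold below are illustrative.

    # Baseline: share of DNS queries that use dynamic DNS providers.
    BASELINE_RATIO = 0.005      # 0.5% observed during normal operations
    ALERT_MULTIPLIER = 5        # flag when the share grows well beyond baseline

    def check_dynamic_dns(total_queries, dynamic_dns_queries):
        ratio = dynamic_dns_queries / total_queries
        if ratio > BASELINE_RATIO * ALERT_MULTIPLIER:
            return f"ALERT: dynamic DNS at {ratio:.1%} vs. baseline {BASELINE_RATIO:.1%}"
        return f"ok: dynamic DNS at {ratio:.1%}"

    print(check_dynamic_dns(200_000, 1_100))    # ~0.6%: within tolerance
    print(check_dynamic_dns(200_000, 10_000))   # 5.0%: investigate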
But according to Goldhammer, simply understanding normal can be a challenge in its own right. Baselining activities can mean tracking many different attributes across multiple dimensions, he says, which means understanding normal host behavior, network behavior, user behavior, and application behavior, along with other internal information, such as the function and vulnerability state of the host. Additionally, external context, such as the reputation of an IP address, plays a factor.

"For example, on any given host, that means understanding which processes and services are running, which users access the host, how often, [and] what files, databases, and/or applications do these users access," he says. "On the network, [it means] which hosts communicate to which other hosts, what application traffic is generated, and how much traffic is generated."

A Fingerprint Of The Network
It's a hard slog, and, unfortunately, the open nature of Internet traffic and diverging user behavior make it hard to come up with cookie-cutter baseline recommendations for any organization, experts say.

"Networks, in essence, serve the needs of their users. Users are unique individuals and express their different tastes, preferences, and work styles in the way they interact with the network," says Andrew Brandt, director of threat research for the advanced threat protection group at Blue Coat Systems. "The collection of metadata about those preferences can act like a fingerprint of that network. And each network fingerprint is going to be as unique as its users who generate the traffic."
Another dimension to developing a baseline is time. The time range for sampling data for establishment of a benchmark will often depend on what kind of abnormality the organization hopes to eventually discover.

"For example, if I am interested in detecting abnormal file access, I would want a longer benchmark period, building a histogram of file accesses per user over the previous week to compare to the current week, whereas if I want to monitor the number of authentication successes and failures to production systems, I may only need to benchmark the previous day compared to the current day," Goldhammer says.
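A small sketch of that week-over-week comparison, assuming per-user file access counts have already been extracted from logs; the numbers and the "flag if at least triple the prior week" rule are illustrative.

    # Per-user file access counts extracted from logs (hypothetical data).
    last_week = {"alice": 120, "bob": 80, "carol": 95}
    this_week = {"alice": 130, "bob": 450, "carol": 90, "dave": 60}

    def unusual_access(previous, current, multiplier=3, floor=50):
        """Flag users whose count at least tripled week over week (and exceeds a floor)."""
        flagged = {}
        for user, count in current.items():
            baseline = previous.get(user, 0)
            if count >= floor and count >= multiplier * max(baseline, 1):
                flagged[user] = (baseline, count)
        return flagged

    print(unusual_access(last_week, this_week))
    # -> {'bob': (80, 450), 'dave': (0, 60)}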
While baselines can be useful for detecting deviations, TK Keanini, CTO of Lancope, warns that it may actually be useful to think in terms of pattern contrasts rather than normal and abnormal.

"The term anomaly is used a lot because people think of pattern A as normal and patterns not A as the anomaly, but I prefer just thinking about it as a contrast between patterns," Keanini says. "Especially as we develop advanced analytics for big data, the general function of data contrasts delivers emergent insights."

This kind of analysis also makes it less easy to fall prey to adversaries who understand how baselines can be used to track deviations. Instead of a single, static baseline, advanced organizations will constantly track patterns and look for contrasts across time.

"The adversary will always try to understand the target norms because this allows them to evade detection," Keanini says. "Think about how hard you make it for the adversary when you establish your own enterprise-wide norms and change them on a regular basis."
However it is done, when a contrast of patterns does flag those telltale anomalies, Kandek recommends organizing an immediate analytical response.

"To deal with network anomalies, IT departments can lean on a scaled-down version of their incident response process," he says. "Have a team in place to investigate the anomalies, document the findings, and take the appropriate actions, including adapting the baselines or escalating to a full-blown incident response action plan."

Foremost in that immediate action is information-sharing, Brandt recommends.

"When you identify the appropriate parameters needed to classify traffic from the 'unknown' to the 'known bad' column, it's important to share that information, first internally to lock down your own network, and then more widely, so others might learn how they can detect anything similar on their own networks," he says.
