DevOps Final
Contents

SOFTWARE TESTING
1. DEVOPS
   PIPELINES
2. NETWORKING
4. JENKINS
5. VAGRANT
   INTRODUCTION
   ARCHITECTURE
   COMMANDS
   WORKING
   UBUNTU INSTALL
   REFERENCES
AZURE INTRODUCTION
   1. Creating Azure Account
   2. Terminologies
   Tisco Project Roadmap
AZURE STORAGE
AZURE VIRTUAL NETWORK (VNET)
AZURE VIRTUAL MACHINES
AZURE COMPUTE
1. HISTORY
2. SERVICES
   AWS CLI: AWS Command Line Interface
   AWS SDK: Software Development Kits
   Example: Pizza Application
   Installing AWS Command Line Interface
3. CLOUDWATCH & IAM
   1. CloudWatch
   2. IAM - Identity & Access Management
4. EC2 & VIRTUAL MACHINES
   1. Virtual Private Cloud
   2. Elastic Cloud Compute (EC2)
   3. Scaling Application
5. S3 (SIMPLE STORAGE SERVICE)
6. DATABASES (RDS & DYNAMODB)
   Relational Database Service
7. CLOUDFORMATION
AWS CERTIFIED DEVOPS ENGINEER
   1. AWS CODECOMMIT
   2. AWS CODEDEPLOY
   3. AWS CODEPIPELINE
   4. CLOUDFORMATION
   5. ELASTIC BEANSTALK
   6. OPSWORKS
   7. CLOUDWATCH
ARCHITECTURE
INSTALLING
HANDSON
   1. CREATING RECIPE IN CHEF
   2. CHEF: RECIPES
   3. CHEF: COOKBOOKS
   4. COOKBOOKS + CHEF SERVER + NODES
      Chef Server: Create Online Chef Server
      Nodes: Manage Nodes using Chef Server
   MORE ON CHEF
Software Testing
Software testing is a set of processes and tasks that take place throughout the software development
life cycle. It helps to reduce the risk of failures that may occur during operational use and, thus,
ensure the quality of the software system.
1.DevOps
Yes, high deployment frequencies are possible, and leading IT organizations achieve them today.
Flickr, the popular image and video hosting portal, deploys release updates to its applications every day.
Facebook, the most used social networking site, deploys an average of 2 releases every day.
Amazon, the world's largest internet company by revenue, does an average of 50,000,000 code deployments per year.
Continuous Testing (CT): Continuous Testing is the process of executing automated tests as part of the software delivery pipeline in order to obtain rapid feedback on the business risks associated with a software release candidate.
Pipelines
In this case study we will look at how DevOps was implemented for an international telecommunications and television company. This client is one of the largest broadband internet service providers outside of the United States.
XYZ Company helped them to successfully build an overall testing governance structure using DevOps practices.
With deployment frequencies of at least 2 per week, it was challenging due to:
Total integration time, from code deployment to test readiness taking 5 weeks
Manual regression tests to test every deployment.
Manual smoke tests required to ensure all the services are up and running on different
environments.
Tools used:
Continuous integration: The most dominant player in the ‘Three Cs’ is Continuous integration (CI)
and it’s a necessary approach for any Agile team. CI requires developers to integrate code into a
shared repository several times a day. Each check-in is then verified by an automated build, allowing
teams to detect problems early.
By integrating regularly, teams can detect errors quickly, and locate them more easily. Simply, it
ensures bugs are caught earlier in the development cycle, which makes them less expensive to fix -
and maintains a consistent quality.
Teams should ensure they have a monitoring dashboard for their production environment in place in order to eliminate performance bottlenecks and respond fast to issues. This will complete an efficient CD process.
Continuous testing: Continuous testing (CT), which can also be referred to as Continuous Quality, is the practice of embedding and automating test activities into every "commit". CT helps developers use their time more efficiently: without it, fixing a bug in code written long ago means first recalling which code it was, undoing any code written on top of the original, and then re-testing the new code; not a short process. Testing that takes place every commit, every few hours, nightly and weekly, not only increases confidence in the application quality, it also drives team efficiency.
2. Networking
When two or more machines are connected with each other, it is called a network and the devices in a
network are called hosts.
Your classroom computer is actually part of the XYZ Company network. All the computers in XYZ Company are connected with each other; that's why you are able to send a mail to any XYZ Company mail ID.
What do you think is the use of a LAN cable?
The amount of data that can be transmitted in a given period of time is called Bandwidth. It is
measured in Mbps, Gbps, etc.
Fiber-optic cables, for example, offer bandwidths greater than 10 Gbps.
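As a rough sketch (ignoring protocol overhead), transfer time is data size divided by bandwidth; note that link speeds are quoted in bits, not bytes:

```python
def transfer_time_seconds(size_megabytes, bandwidth_mbps):
    # 1 byte = 8 bits, so convert megabytes to megabits first,
    # then divide by the link speed in megabits per second.
    return (size_megabytes * 8) / bandwidth_mbps

# A 100 MB file on a 100 Mbps link takes about 8 seconds in the ideal case.
print(transfer_time_seconds(100, 100))  # 8.0
```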
The LAN cable starts from your desktop. Do you know where it ends?
It ends in a switch.
A switch has many ports. Each port can be connected to an individual network device.
Check if your computer is directly connected to your friend's computer. How does your data reach his computer?
The message you send reaches the switch. The switch sends the message to your friend's machine.
A switch is a device which connects all the devices within a network. All your
computers are connected to a switch!
Here we can see that all the computers are connected to a single switch, like a star. The
layout of a network is called a topology and local networks use the star topology.
If your switch cannot send data beyond the XYZ Company Mysore network, how can you send data to a machine in the supermarket's network?
Just the way a switch has ports, a router has interfaces through which other switches and
routers are connected.
We can form a network of networks also. The Internet is the largest network of networks!
Here we can see that the routers are highly interconnected, like a mesh. Such a topology is called a mesh topology. The mesh topology improves redundancy, as the data can reach the destination by a different route if some link fails.
IP Address
Let's say you want to send a parcel to your friend in the USA. There are millions of houses all over the world. How can you uniquely identify your friend's house?
Just the way you can uniquely identify your friend's house by its unique address, we can uniquely identify every device in the network by its address, called the IP address!
Since the number of devices on the internet far exceeds the number that can be supported by IPv4, the world is gradually adopting the IPv6 system.
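The difference in scale is easy to compute: IPv4 addresses are 32 bits long, while IPv6 addresses are 128 bits long.

```python
# Number of distinct addresses each scheme can represent.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)                       # 4294967296 (about 4.3 billion)
print(ipv6_addresses > ipv4_addresses ** 3) # True: an astronomically larger space
```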
Domain Name
www.XYZ Company.com is the name given to the IP address of the computer within the XYZ Company.com domain, which stores the XYZ Company website.
DNS (Domain Name System) Server is a machine which has a database of domain names
and the corresponding IP addresses.
DNS is used for resolving domain names into IP addresses.
Ping is a command which checks connectivity with a specific machine. It also gives the IP address associated with a domain name.
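The same name-to-address lookup that ping performs can be done programmatically; a minimal sketch using Python's standard library:

```python
import socket

def resolve(hostname):
    # Ask the system resolver (which consults DNS for non-local
    # names) for an IPv4 address matching the hostname.
    return socket.gethostbyname(hostname)

# "localhost" resolves locally, without contacting a DNS server.
print(resolve("localhost"))  # typically 127.0.0.1
```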
Your machine will check if the destination machine is within the same network.
How can your device know if the destination IP address is within the same
network?
If the network IDs of the two IP addresses are the same, then both addresses are in the same network.
A subnet mask is a value which is used to separate out the network ID from the host ID.
Subnet mask        Network ID (for the address 10.68.190.x)
255.0.0.0          10
255.255.0.0        10.68
255.255.255.0      10.68.190
Applying the subnet mask of 255.255.255.0, the Network ID of sender and receiver is
10.123.45. Hence, they both are in the same network.
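The same check can be sketched with Python's standard ipaddress module: apply the mask to each address and compare the resulting network IDs.

```python
import ipaddress

def same_network(ip_a, ip_b, mask):
    # Applying the subnet mask to an address yields its network ID;
    # two hosts are in the same network when those IDs match.
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{mask}", strict=False)
    return net_a == net_b

print(same_network("10.123.45.10", "10.123.45.77", "255.255.255.0"))  # True
print(same_network("10.123.45.10", "10.123.46.77", "255.255.255.0"))  # False
```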
Among all the machines in the same network, how can your machine find out which is the receiver machine?
A similar thing happens in a network also. The switch sends a broadcast message to the rest of the devices.
The machine with the matching IP address responds back, giving details of its MAC address. A MAC address is a unique value given to a computer.
This process is called ARP (Address Resolution Protocol).
MAC stands for Media Access Control. A MAC address is a unique value given to every device.
If IP address itself can uniquely identify a computer, then why do we need a MAC address?
Different unique values are meant for different purposes. For example, your employee number, though unique, is relevant only within XYZ Company, whereas a unique passport number is relevant throughout the world.
IP Address                                     MAC Address
Unique                                         Unique
Used to identify the network and the device    Used to identify the device
A switch does not understand IP addresses      A switch understands only MAC addresses and ports
Now that your machine knows the MAC address of the receiver machine, it can now send
the data.
The switch has a MAC table. The MAC table has a list of port numbers and MAC addresses.
Depending on the MAC address passed, it will send information to that specific machine
alone.
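The switch's forwarding decision can be sketched as a lookup in that table; the MAC addresses and port numbers below are made-up values:

```python
# Hypothetical MAC table: learned MAC address -> switch port number.
mac_table = {
    "AA:BB:CC:00:00:01": 1,
    "AA:BB:CC:00:00:02": 2,
    "AA:BB:CC:00:00:03": 3,
}

def forward(dest_mac):
    # A known destination goes out of one specific port; an unknown
    # destination is flooded to all ports until the switch learns it.
    port = mac_table.get(dest_mac)
    if port is None:
        return "flood to all ports"
    return f"send out port {port}"

print(forward("AA:BB:CC:00:00:02"))  # send out port 2
print(forward("AA:BB:CC:00:00:99"))  # flood to all ports
```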
Your mobile is not connected to the network through wires. Then how is it able to send and receive data? This is done through Wi-Fi.
Internet
Open another command prompt. Call the web service using python client
python CurrencyConverter_Python_client.py 6
Open another command prompt. Call the web service using java client
java CurrencyConverter_Java_Client 6
Note 1: Ensure you are in the correct directory in the command prompt.
It is the responsibility of the vendor, the Cloud Service Provider (CSP), to develop, own and
maintain these resources and make them available to the consumers over the internet.
You, the consumer, need not know exactly where the resources are located and how it all works.
Example
If you want to use an email service, you would need the hardware and software resources to host and run it yourself.
Instead, if you use a cloud-based mail service like Gmail, Outlook, etc., all you need is a device, with an app or a browser, connected to the internet.
Organizations are looking at cloud computing solutions such as the following:
1. Infrastructure as a Service provides the basic building blocks - servers, storage and networking - on demand, with the consumer managing the operating systems and applications that run on them.
2. Platform as a Service provides all of the capabilities that you need to support a complete application lifecycle - building, testing, deploying, managing, and updating - using the same integrated environment.
3.Software as a Service describes any cloud service where a fully functional and complete software
product is delivered to users over the internet. Instead of installing and maintaining software, you
simply access it via the Internet, freeing yourself from complex software and hardware management.
Types of Cloud Deployment Models
Each deployment model aims at addressing one or more concerns of the cloud consumer. Therefore,
it is very important that consumers prioritize their concerns before opting for a particular model.
1. Public cloud
2. Private cloud
3. Community cloud
4. Hybrid cloud
1. Public Cloud is one that is available for use by the general public. Hence, it is the most common and popular deployment model available today. Public clouds are entirely owned, deployed, monitored, and managed by the cloud service provider, which delivers its computing resources over the internet.
Example
DropBox provides storage space to the general public.
Google provides Gmail and other cloud services to the general public.
2. Private Cloud is available only to users within a single organization.
Concerns addressed
Security
Compliance
Governance/Control
Performance
3. A community cloud is a private cloud that is shared by two or more organizations having shared concerns like security requirements, policy, and compliance considerations.
4. A hybrid cloud is a combination of two or more different cloud deployment models.
In Real-World
As a software services professional, you might get to work on cloud computing in one of the following
ways
1. Cloud implementations
2. Cloud based developments
3. Migration projects
Though virtualization enables logical (not physical) separation of shared hardware resources, it is
only an enabler for implementing cloud and not a mandatory requirement. Physically separate
resources, like different makes and models of mobile devices, can also be hosted on cloud, without
virtualization, to be used by developers and testers.
4.Jenkins
Whenever developers create a build, testers execute these test cases and scripts using their own frameworks, and the results are saved separately for UFT, Selenium and IDTW.
With the help of Jenkins, we can automatically call and execute all these types of test cases and test
scripts in a sequential or pipeline fashion whenever the developer pushes the code to GitHub. Also,
the results of all these scripts can be saved back to GitHub. This helps in the automatic execution of
test cases without the intervention of testers, and developers can immediately know the quality of
their code.
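The sequential execution described above can be sketched as a small driver script that runs each suite's command in order and records pass/fail per suite; the suite names and commands here are placeholders, not the actual UFT/Selenium/IDTW invocations:

```python
import subprocess
import sys

# Placeholder commands; a real pipeline would invoke the UFT,
# Selenium and IDTW runners here instead.
suites = [
    ("unit", [sys.executable, "-c", "print('unit tests passed')"]),
    ("smoke", [sys.executable, "-c", "print('smoke tests passed')"]),
]

def run_suites(suites):
    results = {}
    for name, command in suites:
        # Run suites one after another; a non-zero exit code marks failure.
        completed = subprocess.run(command)
        results[name] = (completed.returncode == 0)
    return results

print(run_suites(suites))
```

A Jenkins job would run such a script on every push and publish the per-suite results back for the developers.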
cd D:\DevOps
d:
java -jar jenkins.war

Login with: admin / admin
5.Vagrant
“Create and configure lightweight, reproducible and portable environments.”
Vagrant - the command-line utility for managing the lifecycle of virtual machines.
Vagrant is an open-source software product for building and maintaining portable virtual development environments. It is written in Ruby.
Vagrant is a tool for building and managing virtual machine environments in a single workflow. With
an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup
time, increases production parity, and makes the "works on my machine" excuse a relic of the past.
Introduction
Every developer has faced problems when it comes to setting up a development
environment. Usually the environment behaves as it should on one machine, while on another
machine it behaves differently or does not function at all.
Vagrant changes the way developers set up and maintain their work environments. Vagrant makes it possible to easily create a configurable, portable development environment. Each environment is described in a so-called Vagrantfile.
In this Vagrantfile file, the developer specifies how the environment should be set up,
configured, which software should be installed and which operating system should be used. This
Vagrantfile can then be distributed among other developers who just need this file in order to set up
the same development environment on their own machine. Vagrant will then follow every step as
defined in the provided Vagrantfile and initialise the machine.
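A minimal Vagrantfile along these lines might look like the sketch below; the box name and the provisioning commands are illustrative choices, not taken from this document:

```ruby
Vagrant.configure("2") do |config|
  # Which operating system image the environment is built from.
  config.vm.box = "ubuntu/trusty64"

  # Software installed when the machine is first provisioned.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y git
  SHELL
end
```

Distributing just this file lets another developer run 'vagrant up' and get the same environment.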
Vagrant encourages automation to set up your development environments using shell scripts
or configuration management software.
Vagrant allows you to work with the same operating system that is running in production,
whether your physical development machine is running Linux, Mac OS X, or Windows.
If you were to run the virtual development environment manually — without Vagrant’s help, that is —
you would have to follow these steps:
With Vagrant, all these tasks (and many more) are automated. The command $ vagrant up can do the
following (depending on the configuration file):
Architecture
Vagrant sits on top of existing and well-known virtualization solutions such as VirtualBox, VMWare
Workstation, VMWare Fusion, and Hyper-V; and provides a unified and simple command-line
interface to manage VMs. To work with Vagrant, you have to install at least one provider.
Commands
$ vagrant init [url]         # create a Vagrantfile in the current directory
$ vagrant up                 # create and boot the virtual machine
$ vagrant halt               # shut the virtual machine down
$ vagrant destroy [--force]  # delete the virtual machine
$ vagrant reload             # restart the machine, re-reading the Vagrantfile
$ vagrant ssh                # open an SSH session into the machine
$ vagrant status             # show the state of the machine
Working
Set the Proxy on cmdline
set http_proxy=http://10.219.2.220:80
set https_proxy=http://10.219.2.220:80
Installation
1. Download the free VirtualBox for your operating system from the VirtualBox website.
2. After download, just run the binary and install it.
3. Download Vagrant.
4. Again, just run the binary to install it.
2. Vagrant is a command-line based tool. Once installation is complete, open a console window and create a new directory called 'vagrant_intro' to work with a new Vagrant box.
cd ~
mkdir vagrant_intro
cd vagrant_intro
In the command below, you will notice that boxes are namespaced. Box names are broken down into two parts - the username and the box name - separated by a slash.
4. To create an environment, run the init command inside your folder; it will create a 'Vagrantfile'.
vagrant init ubuntu/trusty64
The Ubuntu box is downloaded onto our local machine (on the first 'vagrant up'). We can check the downloaded box at this location. Windows: C:\Users\<Username>\.vagrant.d\boxes; Linux/Mac: ~/.vagrant.d/boxes
The generated 'Vagrantfile' is a Ruby file that controls your [one or more] virtual machines.
A 'Vagrantfile' has been placed in this directory. You are now ready to 'vagrant up' your
first virtual environment! Please read the comments in the Vagrantfile as well as
documentation on 'vagrantup.com' for more information on using Vagrant.
6. Connect to the environment
$ vagrant ssh
Ubuntu Install
1.Install
sudo apt-get install vagrant
2.Complete log
satya@satya-Aspire-E5-523:~/Desktop/DevOps/vagrant$ cd centos/
This box can work with multiple providers! The providers that it
can work with are listed below. Please review the list and choose
1) hyperv
2) libvirt
3) virtualbox
4) vmware_desktop
==> box: Adding box 'centos/7' (v1804.02) for provider: virtualbox
box: Downloading:
https://vagrantcloud.com/centos/boxes/7/versions/1804.02/providers/virtualbox.box
satya@satya-Aspire-E5-523:~/Desktop/DevOps/vagrant/centos$ ls
satya@satya-Aspire-E5-523:~/Desktop/DevOps/vagrant/centos$ vagrant up
==> default: Waiting for machine to boot. This may take a few minutes...
default:
default:
default: Key inserted! Disconnecting and reconnecting using new SSH key...
default: No guest additions were detected on the base box for this VM! Guest
default: additions are required for forwarded ports, shared folders, host only
default: networking, and more. If SSH fails on this machine, please install
default:
default: This is not an error message; everything may continue to work properly,
satya@satya-Aspire-E5-523:~/Desktop/DevOps/vagrant/centos$
# This is the box (and the version) that we want to download from:
# https://app.vagrantup.com/debian/boxes/jessie64
wget https://app.vagrantup.com/debian/boxes/jessie64/versions/8.9.0/providers/virtualbox.box -O debian-jessie64-8.9.0.box
# Update the box version
cd ~/.vagrant.d/boxes/debian-VAGRANTSLASH-jessie64/
mv 0 8.9.0
References
Installation : https://youtu.be/nZrQsxCPT2s
6. Microsoft Azure
Tisco, an IT and networking firm, develops, manufactures and sells networking hardware to markets across the globe. The organization currently maintains its own infrastructure, but due to the massive growth of the organization in the global market, it is more keen on increasing the manufacturing unit than on expanding the existing infrastructure. Some of the challenges faced by the current infrastructure are as follows:
1. Exponential data growth is demanding more storage space, resulting in more maintenance cost.
2. Upgrading servers to meet resource demands often results in application/database downtime.
3. 24x7 power supply and manpower are required, adding to the cost.
4. Taking periodic backups of the critical servers and recovering them during failover is complex.
In order to overcome the above challenges, Tisco has decided to gradually move some of their new
deployments to Microsoft Azure.
Azure Introduction
Microsoft Azure is Microsoft's public cloud offering, with a wide set of cloud services that give the flexibility to build, manage and deploy infrastructure and applications on a massive, global network using diverse technologies and frameworks.
6. It will ask for an email/password; once connected, it will show the Azure account details.
PS C:\windows\system32> Add-AzureAccount
Id Type Subscriptions Tenants
-- ---- ------------- -------
satyakaveti@outlook.com User 197d6fbd-41b4-4 {bc2fc351-2b4d-}
PS C:\windows\system32>
8. To run PowerShell scripts, we must bypass the ExecutionPolicy as below.
2.Terminologies
The infrastructure of an application on Azure is made up of many components like virtual machines, storage accounts, web apps, etc. Some of the terminologies used to refer to these components are as follows:
Resource: Resources in Azure are manageable items, such as virtual machines, networks, databases, web apps, etc.
Resource group: A resource group is a logical container that holds related resources for an entire application. While creating/deploying resources on Azure, you have to specify the resource group that has to be used for storing the resource.
Resource provider: A resource provider is a service that supplies the resources which you can deploy and manage through Resource Manager. For example, Microsoft.Compute is a common resource provider, which supplies the virtual machine resource.
3. Click the "Add" option in the top middle pane to create a new resource group.
Tisco has decided to gradually migrate their existing infrastructure and also the new deployments to
Microsoft Azure.
You will be performing the following tasks to achieve this
Azure Storage
Requirement: Tisco has a huge amount of data stored in their file servers and in other object storages. As part of their migration plan, Tisco wants to move the data to a storage which can accommodate terabytes of unstructured data.
Solution: An Azure storage account can be used to store huge amounts of data as files and also as objects.
Azure Storage provides data storage solution on cloud. It is highly scalable, elastic, globally
accessible and automatically load-balances application-data based on traffic.
Replication
You can choose one of the following replication options:
Locally Redundant Storage (LRS): Maintains three copies of data within the same data
center in a single region
Geo Redundant Storage (GRS): Maintains six copies of data, three copies in the primary
region and three copies in the secondary region
Read-Access Geo Redundant Storage (RA-GRS): Similar to GRS, but read permissions are assigned to the data stored in the secondary region.
Step 2: Navigate to "Resource groups", open tisco-rg, click "Add", search for "Storage account", click on it and create.
Azure Virtual Network (VNet)
Azure Virtual Network (VNet) is a representation of a network in the cloud. It enables Azure Virtual
machines to communicate with each other, the internet and the on-premises network. Virtual
Networks can be segmented into multiple Subnets
Isolation and segmentation: Within each Azure subscription and Azure region multiple
virtual networks can be configured. Each virtual network is isolated from other virtual
networks.
Communicate with the internet: By default, all resources in a virtual network can
communicate outbound to the internet. For inbound communication with a resource, a public
IP address(A public IP address is a resource with its own configurable settings) has to be
assigned to it.
Communicate between Azure resources: Azure resources communicate securely with each
other.
Communicate with on-premises resources: Azure VNet's can be integrated with on-
premises resources.
Filter network traffic: Inbound and outbound traffic through an Azure VNet can be customized
Route network traffic: By default, Azure routes traffic between subnets, connected virtual networks, on-
premises networks, and the internet. Routing can also be controlled with an Azure route table
or with user-defined routes
IP addresses can be assigned to Azure resources so that they can communicate with other Azure resources,
with the on-premises network and with the internet. IP addresses in Azure can be of two types: public and private.
Step 3:Provide the parameters for creating the Azure Virtual Network as follows.
Name: tisconet
Address space: 10.1.0.0/16
Resource group: tisco-rg
Location: East US
Subnet: default
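The same network can also be created from the Azure CLI. This is a sketch only, assuming the `az` CLI is installed and you are logged in; the command is printed here rather than executed so it can be reviewed first.

```shell
# Build the az command matching the parameters above; printed, not executed.
# (Assumes the Azure CLI is installed and `az login` was done before running.)
VNET_CMD='az network vnet create --name tisconet --resource-group tisco-rg --location eastus --address-prefixes 10.1.0.0/16 --subnet-name default'
echo "$VNET_CMD"
# To actually create the VNet, run: eval "$VNET_CMD"
```

This mirrors the portal steps: the address space, resource group, location, and default subnet all map one-to-one onto CLI flags.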
Step 4:You will observe that the Azure Virtual Network is created as below.
Azure Compute
Hardware / resources required to run our code
We have following Azure Compute Options
1.Virtual Machines
Linux or Windows
Prebuilt images
Varying sizes
Premium Storage
You manage the operating system
2.Container
Lightweight application hosts
Chain images together
Docker client support on Windows
3.Cloud Services
Web / Worker Roles
Package application code
Declared target operating system
Azure Service Fabric managed
4.web apps
Web application code
IIS hosting at scale
Source control integration for CI
Web Jobs for background processing
https://app.pluralsight.com/library/courses/chef-planning-installing/table-of-contents
7. Amazon Web Services (AWS)
Cloud computing offers dynamic provisioning of resources based on demand, with pay-as-you-use
pricing. Instead of physical servers, cloud computing lets you spin up virtual servers. With the dynamic
scaling and load balancing features of the cloud, long-term capacity planning is not necessary.
Why Cloud?
Consider Amazon.com's Great Indian Festival or Flipkart's Big Billion Day; such retailers also declare
intermittent offer days, when traffic spikes far beyond normal capacity.
Continuous Delivery and Cloud
AWS - Compute and Network Services
AWS - Storage and Database Services
AWS – Cloud management Systems
GCP - Storage and Database Services
GCP - Compute and Network Services
1. History
https://youtu.be/jOhbTAU4OPI
Amazon Web Services (AWS) is a low-cost cloud service platform from Amazon which provides
services such as compute, storage, networking, CDN services, etc. to users. All AWS services are
exposed as web services accessible from anywhere, at any time, on a pay-per-use pricing model.
AWS services can be managed through a web based management console, command line interface
(CLI) or software development kits (SDK). With AWS, you can provision resources in seconds and
build applications without upfront capital investment.
Region
A region is a physical location around the globe that hosts your data
In each region, there are at least two availability zones for fault tolerance
Regions are completely separate from one another
Enterprises can choose to keep their data in a specific region
Availability zones
Availability zones are analogous to clusters of data centers
Availability zones are connected through redundant low-latency links
These AZs offer scalable, fault tolerant and highly-available architecture
Most of the AWS services are region dependent and only a few are region independent. A few services
may not be available in all the regions. So, while determining a region for your workloads, the
following parameters should be considered.
2.Services
55 services are currently available from AWS. The following are the various categories of services offered
by Amazon Web Services (AWS).
AWS CLI: AWS Command Line Interface
AWS provides SDKs for all major programming languages for implementing applications on AWS. For
example, the AWS SDK for Java is a collection of tools for developers creating Java-based web apps that
run on Amazon cloud components such as Amazon Simple Storage Service (S3), Amazon Elastic
Compute Cloud (EC2) and Amazon SimpleDB
Nodejs
Application Hosting > EC2
Images/Assets in website > S3
User Registrations Database > RDS
User Sessions Storage Cache > ElastiCache
Saving Pizza’s NoSQL Db > DynamoDB
Installing AWS Commandline Interface
4. An AWS access key is required to configure the CLI on a local system and to use the SDKs.
Top menu > Your name > Security Credentials > Continue to Security Credentials
Expand Access Keys > Create New Access Key > copy & download the file (rootkey.csv)
Access Key ID :AKIAJJAJRA4MN3H62CCA
Secret Access Key :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
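After downloading the keys, `aws configure` stores them in an INI-style credentials file. A sketch of what that file looks like, written to a temp dir here (the real path is ~/.aws/credentials, and the keys below are placeholders, not real credentials):

```shell
# Recreate the credentials file `aws configure` would write (placeholder keys).
AWS_DIR=$(mktemp -d)
cat > "$AWS_DIR/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEYID0000
aws_secret_access_key = exampleSecretAccessKeyDoNotUse
EOF
cat "$AWS_DIR/credentials"   # the real file lives at ~/.aws/credentials
```

The `[default]` section name is the profile; additional named profiles can be added the same way and selected with `--profile`.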
1.CloudWatch
CloudWatch is a service for setting alarms based on service metric thresholds. That means it can send
a notification when a particular event occurs, using the Simple Notification Service (SNS).
3. Create New Topic > Topic Name: admin_email ; Display Name : Email SNS
4.Subscriptions > Create Subscription > Protocol : Email, EndPoint: myemail@gmail.com
5.A confirmation mail is sent to your mail id. Check & confirm the subscription.
1.Go To Top Menu > Your name > My Billing Dashboard > Preferences > check the box for receiving billing alerts
2.Go to Services > CloudWatch > Left Menu : Alarms > Create Alarm
Select metric > Total Estimated Charge > Check USD > Set Alarm Threshold
2.IAM - Identity & Access Management
Identity & Access Management (IAM) is a service to configure authentication and access for user accounts
Multi-Factor Authentication(MFA)
Authentication that requires more than one factor
1.Go to Top > Your name > My Security Credentials > Select : Multi-factor Authentication > Activate
2.Select [.] Virtual device (your smartphone) & install an AWS MFA-compatible app on the smartphone.
Install an MFA app like Google Authenticator and open it.
3.Scan the QR code using the app, add the two consecutive keys from the app & finish
4.Sign out from AWS and try to re-login; it will ask for the MFA code. That’s it!!
IAM Policy
Used to manage permissions for different groups
IAM Policy Statement Properties
Effect : "Allow" or "Deny"
Action: Operation user can perform on services
Resource : Specific Resources user can perform action on
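These three properties combine into a JSON policy document. A minimal sketch (the bucket ARN below is a placeholder, not from the course material); `python3 -m json.tool` simply fails if the JSON is malformed:

```shell
# Write a minimal IAM policy using Effect/Action/Resource, then validate it.
cat > /tmp/iam-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": "arn:aws:s3:::example-bucket"
    }
  ]
}
EOF
python3 -m json.tool < /tmp/iam-policy.json > /dev/null && echo "valid JSON"
```

A policy like this would be attached to a user or group so its statements apply to every request they make.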
1.To create a user, go to Services > IAM > Add User > username:[ ] > Create User
2.To add the user to a group, Add New Group > choose a group name > Add
3.This will create access keys for the user; go to the command line and add the user keys to the system
Password Policy
Require at least one uppercase letter
Require at least one lowercase letter
Require at least one number
Require at least one non-alphanumeric character
Allow users to change their own password
To set these options, go to My Security > Account Settings > Password Policy
IAM > Users > user : smlcodes > Security credentials tab > Console password : Manage
That completes the security setup.
EC2 provides users with the opportunity to choose among several varieties of operating systems,
RAM and CPU configurations.
Amazon Virtual Private Cloud (Amazon VPC) enables you to launch Amazon Web Services (AWS)
resources into a virtual network that you've defined.
A "Virtual Private Cloud" is a sub-cloud inside the AWS public cloud. Sub-cloud means it is inside an
isolated logical network. Other servers can't see instances that are inside a VPC. It is like a VLAN
inside the AWS infrastructure.
We are now going to create the below architecture in AWS
2.Click on VPC > Create VPC > fill details & Create
Now the VPC is created, but its instances have no internet access by default. We
need to configure the route table for internet access.
4.To Configure VPC, Go to VPC > Left: Your VPC’s > Select : Pizza VPC > Summary: Routetable
Clicking the Route Table link opens a new tab; select the Route Table ID > Routes tab
6.Next, create a route table: VPC > Left: Route Tables > Create Route Table
7.To access Internet, we must create Internet gateway : Internet gateways > Create internet gateway
8.To add Internet gateway to VPC, go to subnets > Pizza-Public-Subnet>Routing table > Edit > add
another route & save details
9.Now we need to create another subnet in a different availability zone for replica purposes.
Go to VPCs > Edit CIDRs > Add IPv4 CIDR > 100.64.0.0/24 > save
Go to VPC > Subnets > Create subnet > Fill Details > Save
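Before adding a secondary CIDR it is worth checking that it does not overlap the VPC's existing address space. A local sanity check (10.0.0.0/16 is an assumed primary CIDR, since the VPC's actual range is not given here); this runs entirely locally with no AWS access:

```shell
# Check the secondary CIDR against an assumed primary CIDR using Python's
# ipaddress module.
OVERLAP=$(python3 -c '
import ipaddress
primary = ipaddress.ip_network("10.0.0.0/16")     # assumed primary VPC CIDR
secondary = ipaddress.ip_network("100.64.0.0/24") # CIDR from the step above
print("overlap" if primary.overlaps(secondary) else "no overlap")
')
echo "$OVERLAP"
```

AWS rejects overlapping CIDR associations, so catching this locally saves a round trip to the console.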
2.Elastic Cloud Compute (EC2)
EC2 is a webservice that enables you to launch and manage Linux/Unix/Windows operating system
instances in Amazon data centers.
EC2 requires storage space for uploading & saving files. For that, AWS provides a default service called
"Elastic Block Store" (EBS)
Independent storage volumes used with EC2 instances
3: Configure Instance Details
4: Add Storage
All Traffic
1.Download our application code, extract it, and navigate to the extracted location from the command line
3.Go to EC2 > Running Instances > click : 1 Running Instances; notice there is no Public DNS (IPv4)
4.To connect to our EC2 instance from our local machine, we need to configure a Public DNS (public
IP address).
5.The Elastic IP service manages public IP addresses as they are created, destroyed, and assigned.
EC2 > Left: NETWORK & SECURITY > Elastic IPs > Allocate new address > Create
Now, select the Elastic IP > Actions > Associate Address > click Associate
1.Open PuTTYgen > Load > select the .pem key > Save Private Key
2.Using WinSCP, move the PizzaApp code to the /PizzaApp folder & run the following commands
npm install
npm start
3. Scaling the Application
EC2 > Actions > Image > Create Image > [Done]
Load balancing is the process of distributing network traffic across multiple servers. This ensures no
single server bears too much demand. By spreading the work evenly, load balancing improves
application responsiveness
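The round-robin idea behind a basic load balancer can be illustrated with a toy script (server names are placeholders; a real load balancer also performs health checks and connection draining):

```shell
# Cycle six requests across three backends in round-robin order.
SERVERS="web1 web2 web3"
ASSIGNED=""
i=0
for request in 1 2 3 4 5 6; do
  set -- $SERVERS           # reset positional parameters to the server list
  shift $(( i % 3 ))        # rotate the list: 0, 1, 2, 0, 1, 2 ...
  echo "request $request -> $1"
  ASSIGNED="$ASSIGNED $1"
  i=$(( i + 1 ))
done
```

Each server receives exactly two of the six requests, which is the "spreading the work evenly" described above.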
Creating Load Balancer for PizzaApp
1.EC2 > left : LOAD BALANCING > Load Balancer > Create Load Balancer
S3 Bucket Example
Bucket Name: pizza-luvrs
Region: Oregon
URL: s3-us-west-2.amazonaws.com/pizza-luvrs
3.Go to S3 Bucket > Permissions > Bucket Policy : Paste JSON > save
6. Databases (RDS & DynamoDB)
Relational Database Service
Multi-AZ Deployment: database replication to a different Availability Zone, with
automatic failover in case of a catastrophic event
RDS Backups: occur daily, with a configurable backup window; backups are stored
1 - 35 days; the database can be restored from a backup
1.Services > RDS > Create Database > e.g. PostgreSQL
7. CloudFormation
CloudFormation allows you to use a simple text file to model and provision, in an automated and
secure manner, all the resources needed for your applications across all regions and accounts. This
file serves as the single source of truth for your cloud environment.
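A sketch of about the smallest useful template, declaring a single S3 bucket (the logical name ExampleBucket is arbitrary). We only write the file locally here; actually deploying it would use `aws cloudformation deploy` with real credentials:

```shell
# Write a minimal CloudFormation template; this is the "single source of
# truth" text file described above.
cat > /tmp/stack.yml <<'EOF'
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal example stack
Resources:
  ExampleBucket:
    Type: AWS::S3::Bucket
EOF
cat /tmp/stack.yml
```

Because the template is plain text, it can be version-controlled and reviewed like any other code, which is the point of modeling infrastructure this way.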
AWS Certified DevOps Engineer
1.AWS Certified DevOps Engineer: Continuous Delivery and Automation
1.AWS CodeCommit
AWS CodeCommit is a secure, highly scalable, managed source control service that hosts private
Git repositories. AWS CodeCommit eliminates the need for you to manage your own source control
system or worry about scaling its infrastructure. You can use AWS CodeCommit to store anything
from code to binaries.
>Get-AWSCredentials -ListProfiles
>Set-AWSCredentials -AccessKey AKIAJBJRAI3S3TZEETFA -SecretKey yGD05dNukCMt+oQw+so5 -StoreAs codecommit
>cd 'C:\Program Files (x86)\AWS Tools\CodeCommit\'
> .\git-credential-AWSS4.exe -p codecommit
IAM Sign in URL : user/123abc****@
To create Repository
Services > CodeCommit > Create Repo > Name : “FirstRepo” > Create
Clone Repo:
git clone https://git-codecommit.us-east-1.amazonaws.com/v1/repos/FirstRepo
2. AWS CodeDeploy
Code : https://github.com/mikepfeiffer/aws-codedeploy-linux
3.AWS CodePipeline
4.CloudFormation
5.Elastic Beanstalk
6.OpsWorks
AWS OpsWorks is a configuration management service that helps you build and operate highly
dynamic applications, and propagate changes instantly.
7. CloudWatch
Software testing is a set of processes and tasks that take place throughout the software development
life cycle. It helps to reduce the risk of failures that may occur during operational use and, thus,
ensure the quality of the software system.
8. Chef - Automate IT Infrastructure
Signup for HostedChef
Chef is a configuration management tool. Trying to coordinate the work of multiple system
administrators and developers involving hundreds, or even thousands, of servers and applications to
support a large customer base is complex and typically requires the support of a tool.
Examples of modern IT configuration management tools are CFEngine, Puppet, the Desired State
Configuration engine in Microsoft Windows, Ansible, SaltStack, and of course, Chef.
Architecture
The Chef DK workstation is the location where users interact with Chef. On the workstation
users author and test cookbooks using tools such as Test Kitchen and interact with the Chef server
using the knife and chef command-line tools.
The Chef server acts as a hub for configuration data. The Chef server stores cookbooks, the
policies that are applied to nodes, and metadata that describes each registered node that is being
managed by Chef.
Chef client nodes are the machines that are managed by Chef. The Chef client is installed on
each node and is used to configure the node to its desired state.
Once you master Chef, you can use it to
Fully automate deployments, including internal development and end-user systems
Automate scaling of infrastructure
Make your infrastructure self-healing
Installing
1. Download an Ubuntu 14.04 box using Vagrant. To create an environment, run the init
command inside your folder; it will create a 'Vagrantfile'
vagrant init ubuntu/trusty64
2. Start the environment: $ vagrant up
3. Connect to the environment
$ vagrant ssh
The Chef DK provides tools that enable you to manage your servers remotely from your workstation.
But it also provides tools that allow you to configure a machine directly.
3. Install ChefDK:
Handson
1.Downloaded chefdk in Normal ubuntu System & installed
2.Create a resource in the system using Chef:
create a hello.rb file inside chef_repo
file 'motd' do
  content 'Hello, World'
end
4.Chef is idempotent: applying the same recipe multiple times won't change the result.
If we change the hello.rb file, Chef won't pick up the changes until we run chef-apply again.
file 'motd' do
  action :delete
end
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$ ls
hello.rb
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$ chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* file[motd] action create
- create new file motd
- update content in file motd from none to 03675a
--- motd 2018-09-26 22:42:58.885793277 +0530
+++ ./.chef-motd20180926-23053-ysyfqd 2018-09-26 22:42:58.885793277 +0530
@@ -1 +1,2 @@
+Hello, World
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$ ls
hello.rb motd
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$ gedit hello.rb
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$ chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* file[motd] action delete
- delete file motd
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$
2. Chef : Recipes
In the above example, we just created a file as a resource on the host machine. Now let's go more
advanced and install software as a package on the host machine:
The apache2 package should be installed on the host machine
Apache2 should be enabled & auto-started
Create index.html & make it the Apache homepage
package 'apache2'

service 'apache2' do
  action [:enable, :start]
end

file '/var/www/html/index.html' do
  content '<h1>Hello, Chef!!</h1>'
end
2.Do chef-apply
Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$ sudo chef-apply hello.rb
[sudo] password for satya:
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* apt_package[apache2] action install
* service[apache2] action enable (up to date)
* service[apache2] action start (up to date)
* file[/var/www/html/index.html] action create
- update content in file /var/www/html/index.html from b66332 to c0086c
--- /var/www/html/index.html 2018-09-26 23:24:20.577635660 +0530
+++ /var/www/html/.chef-index20180926-27935-qxmg32.html 2018-09-26 23:24:28.713710303 +0530
@@ -1,376 +1,2 @@
Cross Checking
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$ sudo chef-apply hello.rb
Recipe: (chef-apply cookbook)::(chef-apply recipe)
* apt_package[apache2] action install (up to date)
* service[apache2] action enable (up to date)
* service[apache2] action start (up to date)
* file[/var/www/html/index.html] action create (up to date)
Or using Curl
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef_repo$ curl localhost
<h1>Hello, Chef!!</h1>
3. Chef : Cookbooks
The syntax to generate a cookbook is
chef generate cookbook <Cookbook_Name>
7 directories, 10 files
Recipe: code_generator::template
* directory[./chef_apache2/templates] action create
- create new directory ./chef_apache2/templates
* template[./chef_apache2/templates/index.html.erb] action create
- create new file ./chef_apache2/templates/index.html.erb
- update content in file ./chef_apache2/templates/index.html.erb from none to
e3b0c4
This creates a new templates folder inside chef_apache2 and places index.html.erb inside it
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/cookbooks/chef_apache2$ tree
.
├── Berksfile
├── CHANGELOG.md
├── chefignore
├── LICENSE
├── metadata.rb
├── README.md
├── recipes
│ └── default.rb
├── spec
│ ├── spec_helper.rb
│ └── unit
│ └── recipes
│ └── default_spec.rb
├── templates
│ └── index.html.erb
└── test
└── integration
└── default
└── default_test.rb
package 'apache2'

service 'apache2' do
  action [:enable, :start]
end

template '/var/www/html/index.html' do
  source 'index.html.erb'
end
Running handlers:
Running handlers complete
Chef Client finished, 1/4 resources updated in 03 seconds
Check Apache Homepage
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef$ curl localhost
Hello Satya
3.Administration > Organization > smlcodes > Actions > Starter Kit > Download Starter Kit
10 directories, 8 files
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef-starter/chef-repo$ ls
cookbooks learn_chef_apache2-0.3.0.tar.gz README.md roles
satya@satya-Aspire-E5-523:~/Desktop/DevOps/chef/chef-starter/chef-repo$
learn_chef_apache2/
learn_chef_apache2/.kitchen.yml
learn_chef_apache2/Berksfile
learn_chef_apache2/Berksfile.lock
learn_chef_apache2/chefignore
learn_chef_apache2/metadata.json
learn_chef_apache2/metadata.rb
learn_chef_apache2/README.md
learn_chef_apache2/recipes/
learn_chef_apache2/templates/
learn_chef_apache2/templates/default/
learn_chef_apache2/templates/default/index.html.erb
learn_chef_apache2/recipes/default.rb
package 'apache2'

service 'apache2' do
  supports :status => true
  action [:enable, :start]
end

template '/var/www/html/index.html' do
  source 'index.html.erb'
end
Now we are going to manage our nodes through the Chef Server.
We have two nodes created using Vagrant; bring them up [vagrant up, vagrant ssh]
1. Ubuntu : vagrant@vagrant-ubuntu-trusty-64:
2. CentOS : [vagrant@localhost ~]$
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"
  config.vm.network "forwarded_port", guest: 80, host: 5555
  config.vm.network "public_network"
end
Ubuntu : 5555
IP Address - 192.168.0.105
Username/pwd - ubuntu / ubuntu
CentOS : 6666
IP Address - 192.168.0.107
Username/pwd - root / vagrant
3.Now we can check that knife automatically registered the Ubuntu Node1 with the Chef Server
4.Go to the Node1 Ubuntu terminal and check the home page
vagrant@vagrant-ubuntu-trusty-64:~$ curl localhost
<html>
<body>
<h1>hello world</h1>
</body>
</html>
Fixed it!
So when you are using hosted Chef you need to pass in a private key with the bootstrap and have the
public key in your authorized_keys file....
1. Install the Chef DK
2. SCP your starter kit from hosted Chef
3. Extract the starter kit to ~/chef-repo
4. Generate a new keypair: ssh-keygen
5. Add the public key to your authorized_keys file: $ cat id_rsa.pub >> authorized_keys
6. run the knife bootstrap with the following:
sudo knife bootstrap {{server-ip}} --ssh-user {{your-server-user}} -i ~/.ssh/id_rsa --sudo --node-name web1
That should work!
I would also suggest that the user you pass as the --ssh-user has passwordless sudo access.
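Steps 4 and 5 above can be exercised locally. A sketch using a throwaway directory (a real setup uses ~/.ssh, with the usual 700/600 permissions):

```shell
# Generate a keypair and authorize it, as in steps 4-5 above.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "$KEYDIR/id_rsa" -q   # no passphrase
cat "$KEYDIR/id_rsa.pub" >> "$KEYDIR/authorized_keys"
chmod 600 "$KEYDIR/authorized_keys"
ls "$KEYDIR"
```

knife bootstrap then authenticates with the private half (`-i ~/.ssh/id_rsa`) against the public half sitting in authorized_keys on the node.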
3.Now we can check that knife automatically registered the CentOS Node2 with the Chef Server
More on Chef
cnode1
9. Ansible
Managing Multiple Systems/ Servers
Scaling - Big Billion Day brings more traffic, so we need to add more servers & load balancers
Rolling – rolling out new features & rolling back if users don't like them.
Push based – we can push the changes to node machines directly (without a server)
Pull based – nodes pull all the configurations through the server
NASA needed to move 65 applications from a traditional hardware based data center to a cloud-
based environment for better agility and cost savings
The rapid timeline resulted in many applications being migrated 'as is' to a cloud environment.
This created an environment which spanned multiple virtual private clouds (VPCs) and AWS
accounts that could not be managed easily.
Even simple things, like ensuring every system administrator had access to every server, or
simple security patching, were extremely cumbersome.
The solution was to leverage Ansible Tower to manage and schedule the cloud environment.
Ansible Tower provided a dashboard with a status summary of all hosts and
jobs, which allowed NASA to group all contents and manage access permissions across different
departments
Ansible Tower is a web-based interface for managing Ansible. One of the top items in Ansible
users’ wishlists was an easy-to-use UI for managing quick deployments and monitoring one’s
configurations.
Further, Ansible divided the tasks among teams by assigning various roles. It managed the clean
up of old job history, activity streams, data marked for deletion and system tracking info
As a result, NASA has achieved the following efficiencies:
NASA web app servers are being patched routinely and automatically through Ansible Tower
with a very simple 10-line Ansible playbook.
Every single week, both the full and mobile versions of www.nasa.gov are updated via Ansible,
generally only taking about 5 minutes to do.
Orchestration
Orchestration – the ordered flow of configurations.
See, if we change the order, the web application will not deploy properly; where would you place the
HTML files without the first step? 😊
Architecture
Provisioning – installing the necessary software required to run an application properly
Playbooks
Hands-on Task
sudo http_proxy=http://10.219.2.220:80 apt-get update
sudo http_proxy=http://10.219.2.220:80 apt install software-properties-common
Check the host inventory file at /etc/ansible/hosts; provide the host machine (node) IP addresses in
that file.
[test-servers]
10.219.19.87
Generate an SSH key on the Ansible machine; we have to copy it to all the remote hosts to run
deployments or configurations on them.
vagrant@vagrant-ubuntu-trusty-64:~/ansible$
ssh-keygen -t rsa -b 4096 -C "vagrant@vagrant-ubuntu-trusty-64"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/vagrant/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/vagrant/.ssh/id_rsa.
Your public key has been saved in /home/vagrant/.ssh/id_rsa.pub.
The key fingerprint is:
61:a0:f0:74:66:79:7b:99:df:af:f1:c6:54:68:3e:d6 vagrant@vagrant-ubuntu-trusty-64
The key's randomart image is displayed (truncated here).
The SSH key is generated at /home/vagrant/.ssh/id_rsa; id_rsa is the private key file
Ansible – Ubuntu Hands on
Configured in Puppet Folder under ansible
https://www.edureka.co/blog/install-ansible/
https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-centos-7
Update Pkgs
[root@AnsibleMaster vagrant]# yum update -y
Ansible is not available in the default yum packages, so to get Ansible for CentOS 7, first ensure that the
CentOS 7 EPEL (Extra Packages for Enterprise Linux) repository is installed:
sudo yum install epel-release
[root@AnsibleMaster vagrant]# sudo yum install epel-release
CHECK Version
### Copy the public key of the Ansible server to its nodes. Here my node IP is 192.168.0.108.
Then check to make sure that only the key(s) you wanted were added.
Step 3 — Using Simple Ansible Commands
Nginx is software to provide a web server. It can act as a reverse proxy server for TCP, UDP, HTTP,
HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer and an HTTP cache.
---
- become: true
  hosts: test-servers
  name: "Install nginx"
  tasks:
    - name: "Add epel-release repo"
      yum:
        name: epel-release
        state: present
    - name: "Install nginx"
      yum:
        name: nginx
        state: present
    - name: "Start NGiNX"
      service:
        name: nginx
        state: started
### Run the playbook, it will install nginx on nodes
PLAY RECAP *********************************************************************
192.168.0.110 : ok=4 changed=1 unreachable=0 failed=0
Now to check if it is installed in your node machine, type the following command in your node:
ps waux | grep nginx
[vagrant@CentOS7-Agent ~]$ ps waux | grep nginx
root 5600 0.0 0.4 120812 2096 ? Ss 19:53 0:00 nginx: master process /usr/sbin/nginx
nginx 5601 0.0 0.6 121276 3132 ? S 19:53 0:00 nginx: worker process
vagrant 5626 0.0 0.1 12520 952 pts/0 S+ 19:55 0:00 grep --color=auto nginx
Ansible Master on AWS Cloud
https://www.youtube.com/watch?v=wpIgvy34BzU
https://www.edureka.co/blog/aws-devops-a-new-approach-to-software-deployment/
1.Login to AWS
Instances : 4
The public IP will keep changing if we reboot/re-launch the system. To make the IP address static we need
to assign an "Elastic IP"
- Network & Security tab > Elastic IPs > Allocate 4 new IP addresses :: scope - vpc
Private IP: used for connecting internal system resources, e.g. Apache runs on port 8080, MySQL on 3306
4.Connect with Ansible Master & Nodes
Switch to Root
[ec2-user@ip-172-31-23-104 ~]$ sudo su
5. Create a user called "test" on the master & nodes with the password test
useradd test
passwd test
Provide root access to the test user by opening vi /etc/sudoers and adding the line below
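The sudoers line itself is not reproduced here; a commonly used form (an assumption — adapt it to your security policy) that grants the test user passwordless root is:

```
test    ALL=(ALL)       NOPASSWD: ALL
```

Passwordless sudo matters for automation tools like Ansible, which otherwise stall waiting for a password prompt.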
ssh test@13.126.179.188
ssh test@13.127.176.9
ssh test@13.232.223.255
ssh test@13.233.62.166
8.Get Public & private Ips
NODE PUBLIC Private
----------------------------------
Master 13.126.179.188 172.31.23.104
Node1 13.127.176.9 172.31.20.225
Node2 13.232.223.255 172.31.31.116
Node3 13.233.62.166 172.31.29.193
9.All the systems are internal to AWS, so check login from one node's terminal to another. For this we
need to add ICMP - All to the Security Group in AWS
ssh test@172.31.23.104
ssh test@172.31.20.225
ssh test@172.31.31.116
ssh test@172.31.29.193
See, it is asking for a password for internal communication. To make the systems connect without asking for a
password, we should generate an SSH key & share it with the nodes
ssh-copy-id 172.31.20.225
ssh-copy-id 172.31.31.116
ssh-copy-id 172.31.29.193
Now try logging into the machine, with: "ssh '172.31.20.225'" and check to make sure that only the
key(s) you wanted were added.
-Now copy the Node1 SSH key to the master and the other 2 node machines the same way
ssh-copy-id 172.31.23.104
ssh-copy-id 172.31.31.116
ssh-copy-id 172.31.29.193
-Now copy the Node2 SSH key to the master and the other 2 node machines the same way
ssh-copy-id 172.31.23.104
ssh-copy-id 172.31.20.225
ssh-copy-id 172.31.29.193
Now copy the Node3 SSH key to the master and the other 2 node machines the same way
ssh-copy-id 172.31.23.104
ssh-copy-id 172.31.20.225
ssh-copy-id 172.31.31.116
11.Installing Ansible
-----------------------
Ansible.com provides Ansible Tower commercially; the Ansible engine itself is open source, and we can use
the version from the EPEL repo.
Ansible is not available in the default yum packages, so to get Ansible for RHEL/CentOS 7, first ensure that the
EPEL (Extra Packages for Enterprise Linux) repository is installed:
sudo yum install epel-release
-Update Pkgs
[root@AnsibleMaster vagrant]# yum update -y
[root@AnsibleMaster vagrant]# sudo yum install epel-release
-----------------------------------
sudo vi /etc/ansible/hosts
[test-servers]
172.31.20.225
172.31.31.116
172.31.29.193
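With the inventory in place, connectivity is usually verified with an ad-hoc command; the uptime output shown below was most likely produced by a command of roughly this form (printed here rather than executed, since it needs Ansible and the live hosts):

```shell
# Ad-hoc command sketch: run `uptime` on every host in the test-servers group.
ADHOC='ansible test-servers -m command -a uptime'
echo "$ADHOC"
```

The `-m command` module runs one-off commands without writing a playbook, which makes it handy for smoke-testing new inventory entries.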
172.31.29.193 | SUCCESS | rc=0 >>
17:25:28 up 8:27, 2 users, load average: 0.00, 1.25, 11.17
Nginx is software to provide a web server. It can act as a reverse proxy server for TCP, UDP, HTTP,
HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer and an HTTP cache.
vi playbook.yml
---
- become: true
  hosts: test-servers
  name: "Install nginx"
  tasks:
    - name: "Add epel-release repo"
      yum:
        name: epel-release
        state: present
    - name: "Install nginx"
      yum:
        name: nginx
        state: present
    - name: "Start NGiNX"
      service:
        name: nginx
        state: started
TASK [Gathering Facts]
*******************************************************************************
ok: [172.31.31.116]
ok: [172.31.29.193]
ok: [172.31.20.225]
PLAY RECAP *********************************************************************
172.31.20.225 : ok=4 changed=0 unreachable=0 failed=0
172.31.29.193 : ok=4 changed=0 unreachable=0 failed=0
172.31.31.116 : ok=4 changed=0 unreachable=0 failed=0
Now to check if it is installed in your node machine, type the following command in your node:
ps waux | grep nginx
References
Ansible Install : https://youtu.be/XJpN8qpxWbA
10. Puppet
Puppet : a person, group, or country under the control of another.
What Is Puppet?
Puppet is a Configuration Management tool that is used for deploying, configuring and managing
servers. It performs the following functions:
Defining distinct configurations for each and every host, and continuously checking and
confirming whether the required configuration is in place and is not altered (if altered Puppet
will revert back to the required configuration) on the host.
Providing control over all your configured machines, so a centralized (master-server or repo-
based) change gets propagated to all, automatically.
Architecture of Puppet
The Puppet Agent sends the Facts to the Puppet Master. Facts are basically key/value data
pair that represents some aspect of Slave state, such as its IP address, up-time, operating
system, or whether it’s a virtual machine. I will explain Facts in detail later in the blog.
Puppet Master uses the facts to compile a Catalog that defines how the Slave should be
configured. Catalog is a document that describes the desired state for each resource that
Puppet Master manages on a Slave. I will explain catalogs and resources in detail later.
Puppet Slave reports back to Master indicating that Configuration is complete, which is
visible in the Puppet dashboard.
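The "desired state" compiled into the catalog is declared in manifests. A minimal manifest sketch (path and content are examples, not from the course material); we only write the file here, since applying it would require `puppet apply` on a managed node:

```shell
# Write a minimal Puppet manifest declaring one file resource in its
# desired state; Puppet would enforce this state on every agent run.
cat > /tmp/motd.pp <<'EOF'
file { '/etc/motd':
  ensure  => file,
  content => "Managed by Puppet\n",
}
EOF
cat /tmp/motd.pp
```

If someone edited /etc/motd by hand, the next agent run would put it back, which is the "revert to the required configuration" behaviour described earlier.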
As you can see from the above Image:
Chef vs Puppet
https://logz.io/blog/chef-vs-puppet/
What Problems It Solves
Puppet Terminology
Puppet Installation
The Puppet client is installed by default on Ubuntu machines. You can check with
vagrant@vagrant-ubuntu-trusty-64:~$ puppet --version
3.4.3
vagrant@vagrant-ubuntu-trusty-64:~$
vagrant@vagrant-ubuntu-trusty-64:~$ ls /var/lib/puppet
ls: cannot access /var/lib/puppet: No such file or directory
vagrant@vagrant-ubuntu-trusty-64:~/puppet$
wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
--2018-09-28 08:46:41-- https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
Connecting to 10.219.2.220:80... connected.
Proxy request sent, awaiting response... 200 OK
Length: 16944 (17K) [application/x-debian-package]
Saving to: ‘puppetlabs-release-trusty.deb’
100%[===========================================================================>]
16,944 --.-K/s in 0.003s
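The transcript above stops right after the download. A sketch of the remaining steps (assumed, not shown in the notes: registering the repo package, refreshing apt, and installing the agent) would be:

```shell
# Sketch of the remaining Puppet install steps on an Ubuntu/trusty VM.
# Assumption: run as root with the downloaded .deb in the current directory;
# the whole block is skipped when those conditions don't hold.
if [ "$(id -u)" -eq 0 ] && [ -f puppetlabs-release-trusty.deb ]; then
    dpkg -i puppetlabs-release-trusty.deb   # register the Puppet Labs apt repo
    apt-get update                          # refresh the package lists
    apt-get install -y puppet               # install/upgrade the Puppet agent
    puppet --version                        # verify the installed version
fi
puppet_sketch=done
```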
References
Video : https://www.youtube.com/watch?v=0yVJhb2VkVk
11. Docker
Introduction
Ref : https://www.edureka.co/blog/docker-tutorial
What is Virtualization?
Virtualization is the technique of importing a Guest operating system on top of a Host operating
system. This eliminates the need for extra physical hardware resources.
In the above picture, you can see a host operating system on which three guest operating
systems are running; these are the virtual machines.
Each guest OS runs on top of the host OS with its own kernel and set of libraries and
dependencies. This takes up a large chunk of system resources, i.e. hard disk, processor and
especially RAM.
Disadvantages of Virtualization:
Running multiple Virtual Machines leads to unstable performance
Hypervisors are not as efficient as the host operating system
Boot up process is long and takes time
What is Containerization?
All the containers share the host operating system and hold only the application-related binaries &
libraries. They are lightweight and faster than Virtual Machines.
In the diagram , you can see that there is a host operating system which is shared by all the
containers. Containers only contain application specific libraries which are separate for each
container and they are faster and do not waste any resources.
Virtualization vs Containerization
Virtualization and Containerization both let you run multiple operating systems inside a host machine.
Virtualization deals with creating many operating systems in a single host machine. Containerization,
on the other hand, creates multiple containers for every type of application as required.
As we can see from the image, the major difference is that there are multiple Guest Operating
Systems in Virtualization which are absent in Containerization. The best part of Containerization is
that it is very lightweight compared to heavyweight Virtualization.
What is Docker
Ref : https://www.edureka.co/blog/what-is-docker-container
What is Docker ? – Docker is a containerization platform that packages your application and all its
dependencies together in the form of a docker container to ensure that your application works
seamlessly in any environment
What is a Container ? – A Docker Container is a standardized unit which can be created on the fly to
deploy a particular application or environment. It could be an Ubuntu container, a CentOS container,
etc., to fulfill a requirement from an operating system point of view. It could also be an application-
oriented container, like a CakePHP container or a Tomcat-Ubuntu container.
A company needs to develop a Java application. To do so, the developer will set up an
environment with a Tomcat server installed in it. Once the application is developed, it needs to be
tested by the tester.
Now the tester will again set up a Tomcat environment from scratch to test the application. Once the
application testing is done, it will be deployed on the production server.
Again, production needs an environment with Tomcat installed on it, so that it can host the Java
application. As you can see, the same Tomcat environment setup is done thrice. There are some issues
that I have listed below with this approach:
Now, I will show you how Docker container can be used to prevent this loss.
In this case, the developer will create a tomcat docker image ( A Docker Image is nothing but a
blueprint to deploy multiple containers of the same configurations ) using a base image like Ubuntu,
which is already existing in Docker Hub (Docker Hub has some base docker images available for
free) .
Now this image can be used by the developer, the tester and the system admin to deploy the tomcat
environment. This is how docker container solves the problem.
Let’s see a comparison between a Virtual machine and Docker Container to understand this better.
Size: The following image explains how Virtual Machine and Docker Container utilizes the resources
allocated to them.
Docker Engine
Docker Engine is simply the docker application that is installed on your host machine. It works like a
client-server application which uses:
As per the above image, in a Linux Operating system, there is a Docker client which can be accessed
from the terminal and a Docker Host which runs the Docker Daemon. We build our Docker images
and run Docker containers by passing commands from the CLI client to the Docker Daemon.
Docker Image
A Docker Image can be compared to a template which is used to create Docker Containers. Images are
the building blocks of a Docker Container. Docker Images are created using the build command, and
these read-only templates are then used to create containers with the run command.
Docker Container
Containers are the ready applications created from Docker Images or you can say a Docker
Container is a running instance of a Docker Image and they hold the entire package needed to run
the application. This happens to be the ultimate utility of Docker.
Docker Registry
A Docker Registry is where Docker Images are stored. The Registry can be either a user's local
repository or a public repository like Docker Hub, which allows multiple users to collaborate in
building an application.
Multiple teams within the same organization can exchange or share containers by
uploading them to Docker Hub. Docker Hub is Docker's own cloud repository, similar to
GitHub.
Docker Architecture
Docker Architecture includes a Docker client – used to trigger Docker commands, a Docker Host –
running the Docker Daemon and a Docker Registry – storing Docker Images. The Docker Daemon
running within Docker Host is responsible for the images and containers.
To build a Docker Image, we can use the CLI (client) to issue a build command to the Docker
Daemon (running on Docker_Host). The Docker Daemon will then build an image based on
our inputs and save it in the Registry, which can be either Docker hub or a local repository
If we do not want to create an image, we can simply pull an existing image from Docker Hub,
built by a different user.
Finally, if we have to create a running instance of a Docker image, we can issue a run
command from the CLI, which will create a Docker Container.
Docker Commands
Ref: https://www.edureka.co/blog/docker-commands/
1. docker --version
This command is used to get the currently installed version of Docker.
2. docker pull
4. docker ps
5. docker ps -a
This command is used to show all the running and exited containers
6. docker exec
7. docker stop
8. docker kill
Usage: docker kill <container id>
This command kills the container by stopping its execution immediately. The difference between
‘docker kill’ and ‘docker stop’ is that ‘docker stop’ gives the container time to shutdown gracefully, in
situations when it is taking too much time for getting the container to stop, one can opt to kill it
9. docker commit
This command creates a new image of an edited container on the local system
10. docker login
13. docker rm
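The commands above can be strung together into one lifecycle. A sketch (assumes Docker is installed and the daemon is reachable; the whole block skips itself otherwise):

```shell
# Walk an image/container through its lifecycle with the commands listed above.
# The demo is skipped entirely when Docker isn't available.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker pull ubuntu                       # 2. fetch an image from Docker Hub
    cid=$(docker run -d ubuntu sleep 60)     # start a background container
    docker ps                                # 4. running containers only
    docker ps -a                             # 5. running and exited containers
    docker exec "$cid" ls /                  # 6. run a command inside it
    docker stop "$cid"                       # 7. graceful stop (SIGTERM first)
    docker commit "$cid" myubuntu:edited     # 9. snapshot the container as an image
    docker rm "$cid"                         # 13. remove the stopped container
fi
lifecycle=done
```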
Installing Docker on Ubuntu
https://www.youtube.com/watch?v=lcQfQRDAMpQ&list=PL9ooVrP1hQOHUKuqGuiWLQoJ-LD25KxI5&index=2 (19 min)
Docker Compose
When I had to containerize multiple services in separate containers, two questions came up: how can the
containers communicate with each other, and how can they all be started with a single operation? This is
where Docker Compose comes into the picture.
Docker Compose can be used to create separate containers and host them for each of the stacks in a
Full stack application which contains MongoDB Express Angular & NodeJs.
By using Docker Compose, we can host each of these technologies in separate containers on the
same host and get them to communicate with each other. Each container will expose a port for
communicating with other containers.
The communication and up-time of these containers will be maintained by Docker Compose
HandsOn
https://www.edureka.co/blog/install-docker/
Install : https://www.youtube.com/watch?v=lcQfQRDAMpQ&list=PL9ooVrP1hQOHUKuqGuiWLQoJ-LD25KxI5&index=2
Edureka : https://www.youtube.com/watch?v=h0NCZbHjIpY&list=PL9ooVrP1hQOHUKuqGuiWLQoJ-LD25KxI5
Next : https://www.youtube.com/watch?v=wi-MGFhrad0&list=PLhW3qG5bs-L99pQsZ74f-LC-tOEsBp2rK
2.Next, install a few prerequisite packages which let apt use packages over HTTPS:
3.Then add the GPG key for the official Docker repository to your system:
5.Next, update the package database with the Docker packages from the newly added repo:
6.Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:
You'll see output like this, although the version number for Docker may be different:
docker-ce:
Installed: (none)
Candidate: 18.03.1~ce~3-0~ubuntu
Version table:
18.03.1~ce~3-0~ubuntu 500
500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
7.Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository
for Ubuntu 18.04 (bionic).So, install Docker:
8.Docker should now be installed, the daemon started, and the process enabled to start on boot.
Check that it's running:
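Steps 2-8 above never show the actual commands; a sketch of what they typically look like on Ubuntu 18.04 follows. The repo URL and package name are the usual ones for docker-ce; because the block installs system packages, it is guarded so it does nothing unless you explicitly opt in as root.

```shell
# Steps 2-8 as commands (sketch). Guarded: it runs only when you set
# RUN_DOCKER_INSTALL=1 and are root on an Ubuntu 18.04 system.
if [ "${RUN_DOCKER_INSTALL:-0}" = "1" ] && [ "$(id -u)" -eq 0 ]; then
    apt-get update
    apt-get install -y apt-transport-https ca-certificates curl software-properties-common
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
    apt-get update
    apt-cache policy docker-ce           # 6. candidate should come from download.docker.com
    apt-get install -y docker-ce         # 7. install Docker
    systemctl status docker --no-pager   # 8. check the daemon is running
fi
install_steps=sketched
```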
First, Docker will check the local registry for the CentOS image. If it doesn't find it there, it
will go to Docker Hub and pull the image.
### Now, Run the CentOS container.
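The run itself (a sketch; skipped when the Docker daemon isn't available; the first run pulls centos from Docker Hub as described above):

```shell
# Run a CentOS container; Docker pulls the image from Docker Hub if the
# local registry doesn't have it. Skipped when Docker isn't available.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker run --rm centos echo "hello from centos"
    # interactively: docker run -it centos /bin/bash
fi
centos_demo=done
```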
================================================
================================================
Basically, you need one container for WordPress and one more container running MySQL as the
back end; the MySQL container should be linked to the WordPress container. We also need one more
container for phpMyAdmin, linked to the MySQL database, which is used to access the
MySQL database.
[ WordPress ]
[ MySQL ]
[ phpMyAdmin ]
Here we will write a Docker Compose file to install them & create the links between them.
Steps involved:
1. Install Docker Compose
2. Install WordPress:
mkdir wordpress
cd wordpress/
### In this directory create a Docker Compose YAML file, then edit it using gedit:
If you see a permissions error, it means you don't have enough permissions, so run the command with sudo.
http://localhost:8080/
http://localhost:8181/
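The compose file for the three containers above might look like this sketch (image names, the example password, and the environment variables are assumptions; the 8080/8181 port mappings match the URLs above):

```yaml
# Hypothetical docker-compose.yml: WordPress + MySQL + phpMyAdmin.
version: '3.3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example      # illustrative password only
  wordpress:
    image: wordpress
    ports:
      - "8080:80"                       # http://localhost:8080/
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: root
      WORDPRESS_DB_PASSWORD: example
    depends_on:
      - db
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    ports:
      - "8181:80"                       # http://localhost:8181/
    environment:
      PMA_HOST: db
    depends_on:
      - db
```

`docker-compose up -d` starts all three containers with a single operation; Compose puts them on one network, so the service name `db` resolves from the other containers.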
===========================================
===========================================
https://www.youtube.com/watch?v=HqBMEmoAd1M&index=7&list=PLhW3qG5bs-L99pQsZ74f-LC-tOEsBp2rK
-------------------------
--------------------------
sudo docker ps -a
sudo docker run hello-world
getdocker.com
===============================
Docker Commands
==============================
basic
------
sudo docker -v
Images
----------
sudo docker
sudo docker
Containers
----------
sudo docker ps
System
--------------
------------------------------
Digest: sha256:de774a3145f7ca4f0bd144c7d4ffb2931e06634f11529653b23eba85aef8e378
root@76baecadb14d:/#
root@76baecadb14d:/# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
root@76baecadb14d:/#
-----------------------------
------------------------------
Dockerfile :
FROM
RUN
CMD
----------------------
Dockerfile
--------------
FROM ubuntu
MAINTAINER smlcodes<smlcodes@gmail.com>
RUN apt-get update
# assumed final instruction; it matches the "Hello, Im Satya" output shown below
CMD ["echo", "Hello, Im Satya"]
------------------
---> cd6d8154f1e1
---> 78dd08c39f7b
------------------------------------
-------------------------------------------
docker: Error response from daemon: pull access denied for smlcodes1, repository does not exist or
may require 'docker login'.
If you get this error, log in with your Docker account, or run using the image id [sudo docker run -it
863bc0cd3f7c ]
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID,
head over to https://hub.docker.com to create one.
Username: smlcodes
Password:
Hello, Im Satya
-------------------------------------
Docker Compose
------------------------------------
Docker Compose can stop all services with a single command: docker-compose down
docker-compose -v
2 Ways
1. https://github.com/docker/compose/rel...
2. Using PIP
docker-compose.yml
------------------------
[root@ip-172-31-22-216 Docker]# cat docker-compose.yml
version: '3.3'
services:
  web:
    image: nginx
  database:
    image: redis
docker-compose config
sudo docker-compose up -d
docker-compose down
TIPS
--scale
--------------------------------------
Docker Volumes
------------------------------------
-By default all files created inside a container are stored on a writable container layer
-A container’s writable layer is tightly coupled to the host machine where the container is running.
You can’t easily move the data somewhere else.
-Docker has two options for containers to store files in the host machine so that the files are persisted
even after the container stops
Use of Volumes
===========
: docker volume ls
: docker volume rm
-Volumes are stored in a part of the host filesystem which is managed by Docker
-Non-Docker processes should not modify this part of the filesystem
-Bind mounts may be stored anywhere on the host system
-Non-Docker processes on the Docker host or a Docker container can modify them at any time
-In Bind Mounts, the file or directory is referenced by its full path on the host machine.
-Volumes are the best way to persist data in Docker
-volumes are managed by Docker and are isolated from the core functionality of the host
machine
-A given volume can be mounted into multiple containers simultaneously.
-When no running container is using a volume, the volume is still available to Docker and is
not removed automatically. You can remove unused volumes using docker volume prune.
-When you mount a volume, it may be named or anonymous.
-Anonymous volumes are not given an explicit name when they are first mounted into a
container
-Volumes also support the use of volume drivers, which allow you to store your data on
remote hosts or cloud providers, among other possibilities.
myvol1
DRIVER              VOLUME NAME
local               046cd7015ac74c659beb1f8d93293993161f1e4103715cdc29c4b447a105dcf2
local               3c6e5051ff9c98b23829c45e4a374f1426f7bbd15eab39d52720f1d8337dea82
local               9c8f01a4316e809eee624833b4538afc39e7e8194e81ee792e8ce5873b87298e
local               myvol1
[
{
"CreatedAt": "2018-10-02T19:42:26+05:30",
"Driver": "local",
"Labels": {},
"Mountpoint": "/var/lib/docker/volumes/myvol1/_data",
"Name": "myvol1",
"Options": {},
"Scope": "local"
}
]
### to remove volume
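The removal itself (a sketch; a throwaway volume is created first so the commands are self-contained, and everything is skipped when Docker isn't available):

```shell
# Remove a named volume. Skipped when the Docker daemon isn't reachable.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker volume create myvol1 >/dev/null  # ensure the demo volume exists
    docker volume rm myvol1                 # remove the named volume
    # 'docker volume prune' would remove ALL volumes unused by any container
fi
volume_demo=done
```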
====================
====================
1. Pull Jenkins.
2. Now I want to store all the /var/jenkins_home data in 'myvol1', so that if we delete the
Jenkins container, the Jenkins data will still be available. To do so, run the command below.
This command stores the Jenkins data directly in the named volume.
If we start another Jenkins instance with the same volume location, Docker won't create a separate
data set; instead it will share the same data with the newly added Jenkins on another port.
satya@satya:~/.../docker$ sudo docker run --name MyJenkins2 -v myvol1:/var/jenkins_home -p
9090:8080 -p 50000:50000 jenkins
Commands
----------------------
References
https://hub.docker.com/_/jenkins/
https://docs.docker.com/storage/volumes/
=========================
Docker Swarm
======================
-A swarm is a group of machines that are running Docker and joined into a cluster.
--------------------
- Health check on every container
Pre-requisites
2. Docker Machine (pre-installed with Docker for Windows and Docker for Mac): https://docs.docker.com/machine/insta...
https://docs.docker.com/get-started/p...
Step 1 : Create Docker machines (to act as nodes for Docker Swarm) Create one machine as
manager and others as workers
https://stackoverflow.com/questions/3...
Create one manager machine
docker-machine ls
docker-machine ip manager1
docker node ls
(this command will work only in swarm manager and not in worker)
SSH into worker node (machine) and run command to join swarm as worker
In Manager Run command - docker node ls to verify worker is registered and is ready
docker info
docker swarm
docker service ls
On manager node
docker service scale serviceName=2
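The steps above can be sketched end to end (machine and service names are hypothetical; the block is skipped when docker-machine isn't installed):

```shell
# Sketch: create a Swarm with one manager and one worker, then run a service.
# Skipped entirely when docker-machine isn't available.
if command -v docker-machine >/dev/null 2>&1; then
    docker-machine create --driver virtualbox manager1   # manager node VM
    docker-machine create --driver virtualbox worker1    # worker node VM
    docker-machine ssh manager1 \
        "docker swarm init --advertise-addr $(docker-machine ip manager1)"
    # 'swarm init' prints a 'docker swarm join --token ...' command;
    # run that inside worker1 to join it as a worker, then on manager1:
    #   docker node ls                                   # verify worker is Ready
    #   docker service create --name web -p 80:80 nginx  # start a service
    #   docker service scale web=2                       # scale to 2 replicas
fi
swarm_sketch=done
```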
REFERENCES:
https://docs.docker.com/get-started/p...
https://rominirani.com/docker-swarm-t...
A swarm is a group of machines that are running Docker and joined into a cluster
A cluster is managed by swarm manager
The machines in a swarm can be physical or virtual. After joining a swarm, they are referred
to as nodes
Swarm managers are the only machines in a swarm that can execute your commands, or
authorise other machines to join the swarm as workers
Workers are just there to provide capacity and do not have the authority to tell any other
machine what it can and cannot do
You can have a node join as a worker or as a manager. At any point in time there is only one
LEADER; the other manager nodes act as backups in case the current LEADER opts out.
References
https://www.youtube.com/watch?v=h0NCZbHjIpY&list=PL9ooVrP1hQOHUKuqGuiWLQoJ-LD25KxI5
12. Kubernetes
Kubernetes is an open-source container management (orchestration) tool. Its container
management responsibilities include container deployment, scaling & descaling of containers &
container load balancing.
It was originally designed by Google and is now maintained by the Cloud Native Computing
Foundation
1. Automatic Binpacking
Kubernetes automatically packages your application and schedules the containers
based on their requirements and available resources while not sacrificing availability.
To ensure complete utilization and save unused resources, Kubernetes balances
between critical and best-effort workloads.
3. Storage Orchestration
With Kubernetes, you can mount the storage system of your choice. You can either opt
for local storage, or choose a public cloud provider such as GCP or AWS, or perhaps
use a shared network storage system such as NFS, iSCSI, etc.
4. Self-Healing
Personally, this is my favorite feature. Kubernetes can automatically restart containers
that fail during execution and kill containers that don't respond to user-defined
health checks. If a node itself dies, it replaces and reschedules the failed
containers on other available nodes.
6. Batch Execution
In addition to managing services, Kubernetes can also manage your batch and CI
workloads, thus replacing containers that fail, if desired.
7. Horizontal Scaling
Kubernetes needs only one command to scale the containers up, or to scale them down,
when using the CLI. Scaling can also be done via the Dashboard (the Kubernetes UI).
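That one command looks like this sketch ('my-deployment' is a hypothetical name; the block is skipped unless kubectl is installed and a cluster is reachable):

```shell
# Single-command horizontal scaling with kubectl (hypothetical deployment name).
if command -v kubectl >/dev/null 2>&1 && kubectl get nodes >/dev/null 2>&1; then
    kubectl scale deployment my-deployment --replicas=5   # scale up
    kubectl scale deployment my-deployment --replicas=2   # scale down
fi
scale_sketch=done
```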
Kubernetes Architecture
The Master controls the cluster and the nodes in it. It ensures that execution only happens on the
nodes and coordinates their activity. Nodes host the containers; in fact, these containers are
grouped logically to form Pods. Each node can run multiple such Pods, which are groups of
containers that interact with each other for a deployment.
The Replication Controller is the Master's resource for ensuring that the requested number of Pods
are always running on the nodes. A Service is an object on the Master that provides load balancing
across a replicated group of Pods.
So, that’s the Kubernetes architecture in simple fashion.
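A minimal manifest sketch of these ideas (all names are hypothetical, not from the notes): a Replication Controller that keeps three nginx Pods running, plus a Service load-balancing across them.

```yaml
# Hypothetical sketch: desired state for a replicated group of Pods.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3            # the Master keeps exactly 3 Pods running
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
# Service: load-balances traffic across all Pods labelled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
  - port: 80
```

If a node dies, the Replication Controller reschedules its Pods elsewhere, and the Service keeps routing traffic to whichever replicas are healthy.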
Install Kubernetes Cluster on Ubuntu
https://www.edureka.co/blog/install-kubernetes-on-ubuntu
References
Intro : https://www.edureka.co/blog/what-is-kubernetes-container-orchestration
13.Nagios
https://www.youtube.com/watch?v=qehuAgKHFQ0
Nagios monitors your entire IT infrastructure to ensure systems, applications, services, and business
processes are functioning properly.
For years, security professionals have performed static analysis of system logs, firewall logs,
IDS logs, IPS logs, etc., but this did not provide proper analysis and response.
What is Nagios?
Nagios is used for Continuous monitoring of systems, applications, services, and business processes
etc in a DevOps culture. In the event of a failure, Nagios can alert technical staff of the problem,
allowing them to begin remediation processes before outages affect business processes, end-users,
or customers. With Nagios, you don’t have to explain why an unseen infrastructure outage affects your
organization’s bottom line.
Nagios runs on a server, usually as a daemon or a service.
It periodically runs plugins residing on the same server; they contact hosts or servers on your
network or on the internet. You can view status information using the web interface, and you can
also receive email or SMS notifications if something happens.
The Nagios daemon behaves like a scheduler that runs certain scripts at certain moments. It stores
the results of those scripts and will run other scripts if these results change.
Plugins: These are compiled executables or scripts (Perl scripts, shell scripts, etc.) that can be run
from a command line to check the status of a host or service. Nagios uses the results from
the plugins to determine the current status of the hosts and services on your network.
Nagios Architecture
Nagios is built on a server/agents architecture.
Usually, on a network, a Nagios server runs on a host, and plugins interact with the local host and
all the remote hosts that need to be monitored.
These plugins send information to the scheduler, which displays it in a GUI.
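In Nagios' own configuration, the monitored hosts and the plugin checks against them are declared as object definitions. A minimal hypothetical pair (names, address, and thresholds are illustrative, not from the notes):

```
# Hypothetical Nagios object definitions: one monitored host and a ping check.
define host {
    use                 linux-server        ; inherit a standard host template
    host_name           webserver1
    address             192.168.1.10
}

define service {
    use                 generic-service     ; inherit a standard service template
    host_name           webserver1
    service_description PING
    check_command       check_ping!100.0,20%!500.0,60%   ; warn/critical thresholds
    check_interval      5                   ; run the plugin every 5 minutes
}
```

The scheduler runs `check_ping` on the interval given, stores the result, and triggers notifications when the state changes.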
14. Splunk
https://www.youtube.com/watch?v=rvjW5LJ0vbU&index=2&list=PL9ooVrP1hQOFPP6mdp1M4Ml436pYZIyH9
Components
Architecture
Adding data to Splunk
Windows – Event Viewer
Adding data - Home > Add Data > Monitor > Local Event Logs > [Application Security] > Submit
http://localhost:8000/en-US/app/search/search?q=search%20source%3D%22WinEventLog%3A*%22%20host%3D%22HYDPCM457488D%22&earliest=0&latest=&sid=1538625810.13&display.page.search.mode=smart&dispatch.sample_ratio=1&workload_pool=
Start Searching
Go To here : http://localhost:8000/en-US/app/search/search
host=HYDPCM457488D
3.Click on Log text & Add to Search
4. To Get the logs which contain the word “auditing” in the given host
host=HYDPCM457488D auditing
SPL – Search Processing Language
3. Get all the logs where host="hyd", EventCode=5447 and Process_ID=956
host="hydpcm457488d" | search EventCode=5447 | search Process_ID=956
29,594 events (before 10/4/18 10:18:30.000 AM)
4. Get all the logs which contain the word "privileges"
host="hydpcm457488d" privileges
374 events (before 10/4/18 10:38:16.000 AM)
host="hydpcm457488d" | head 10
10 events (before 10/4/18 10:26:07.000 AM)
host="hydpcm457488d" | tail 10
10 events (before 10/4/18 10:26:07.000 AM)
Patterns – on what conditions/patterns logs are generated
Reporting
Say I want to get reports of unauthorized access to an XYZ Company account.
To see saved reports:
Home > Splunk Search > Reports tab; we can find the list of reports there.
Create Alerts
If anyone among the Infy users tries to modify account settings, it should trigger an ALERT!
To create an alert:
Search bar top > Save As > Alert > provide the required details
Forwarder
Enterprise Splunk Architecture
Splunk in DevOps
Splunkbase
Splunkbase is the marketplace for Splunk plug-ins and applications: community-driven applications
with licensed and non-licensed options for Splunk.
Website : https://splunkbase.splunk.com/apps/
IBM AppScan
IBM® Security AppScan® and Application Security on Cloud enhance web and mobile application
security, improve application security program management and strengthen regulatory compliance.
Testing web and mobile applications prior to deployment can help you identify security risks,
generate reports and get fix recommendations.
Identify and fix vulnerabilities : Reduce risk exposure by identifying vulnerabilities early
in the software development lifecycle.
Maximize remediation efforts: Classify and prioritize application assets based on business
impact and identify high-risk areas.
Decrease the likelihood of attacks: Test applications prior to deployment and for ongoing
risk assessment in production environments.
AppScan Document :
http://publibfp.dhe.ibm.com/epubs/pdf/i1328740.pdf
Links
Jenkins : https://lex.XYZ Companyapps.com/
Errors and Solutions
Vagrant
Error: Could not resolve host: vagrantcloud.com ?
set http_proxy=http://10.219.2.220:80
set https_proxy=http://10.219.2.220:80