IT Notes
What is SDLC?
SDLC (Software Development Life Cycle) is a process followed for a software project within a
software organization. It consists of a detailed plan describing how to develop, maintain,
replace, alter or enhance specific software. The life cycle defines a methodology for improving
the quality of the software and the overall development process.
The following figure is a graphical representation of the various stages of a typical
SDLC.
Once requirement analysis is done, the next step is to clearly define and
document the product requirements and get them approved by the customer or
the market analysts. This is done through an SRS (Software Requirement
Specification) document, which contains all the product requirements to be
designed and developed during the project life cycle.
The SRS is the reference for product architects to come up with the best architecture for
the product to be developed. Based on the requirements specified in the SRS, usually
more than one design approach for the product architecture is proposed and
documented in a DDS (Design Document Specification).
This DDS is reviewed by all the important stakeholders, and based on parameters
such as risk assessment, product robustness, design modularity, and budget and
time constraints, the best design approach is selected for the product.
A design approach clearly defines all the architectural modules of the product along
with their communication and data-flow representation with external and third-party
modules (if any). The internal design of all the modules of the proposed architecture
should be clearly defined in the DDS, down to the smallest detail.
In this stage of the SDLC, the actual development starts and the product is built. The
programming code is generated as per the DDS during this stage. If the design was
performed in a detailed and organized manner, code generation can be
accomplished without much hassle.
Developers must follow the coding guidelines defined by their organization, and
programming tools such as compilers, interpreters and debuggers are used to generate
the code. Different high-level programming languages such as C, C++, Pascal, Java
and PHP are used for coding. The programming language is chosen according to
the type of software being developed.
Once the product is tested and ready to be deployed, it is released formally in the
appropriate market. Sometimes product deployment happens in stages as per the
business strategy of the organization. The product may first be released in a limited
segment and tested in the real business environment (UAT, User Acceptance
Testing).
Then, based on the feedback, the product may be released as-is or with suggested
enhancements in the target market segment. After the product is released in the
market, its maintenance is done for the existing customer base.
SDLC Models
There are various software development life cycle models defined and designed
which are followed during the software development process. These models are
also referred to as "Software Development Process Models". Each process model
follows a series of steps unique to its type to ensure success in the process of
software development.
Following are the most important and popular SDLC models followed in the industry
−
Waterfall Model
Iterative Model
Spiral Model
V-Model
Big Bang Model
Other related methodologies are the Agile Model, the RAD (Rapid Application
Development) Model and the Prototyping Model.
TYPES OF COMPUTERS
Supercomputer
Supercomputers are the fastest and the most expensive computers. These huge computers are
used to solve very complex science and engineering problems. Supercomputers get their
processing power by taking advantage of parallel processing; they use lots of CPUs at the same
time on one problem. A typical supercomputer can do up to ten trillion individual calculations
every second. Example Supercomputers:
K Computer
Columbia
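The parallel-processing idea described above can be sketched in miniature: split one big problem into chunks and work on the chunks concurrently, then combine the partial results. This toy Python example uses threads to stand in for a supercomputer's many CPUs; the function names and the sum-of-squares workload are illustrative, not taken from any real system.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """Work on one piece of the problem: sum of squares over a chunk."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(n, workers=4):
    """Split the numbers 0..n-1 across workers and combine their results."""
    # Each worker gets every `workers`-th number, so the chunks cover
    # the whole range exactly once.
    chunks = [range(i, n, workers) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

result = parallel_sum_of_squares(1000)
```

A real supercomputer applies the same decomposition across thousands of physical processors rather than a handful of threads, but the principle of dividing one problem among many workers is the same.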
Mainframe
Mainframe (colloquially, "big iron") computers are similar to supercomputers in many respects; the
main difference between them is that a supercomputer uses all its raw power to focus on
very few tasks, while a mainframe's purpose is to perform thousands or millions of operations
concurrently. Due to their nature, mainframes are often employed by large organizations for bulk
data processing, such as censuses, industry and consumer statistics, enterprise resource
planning and transaction processing.
Server Computer
A server is a central computer that contains collections of data and programs. Also called a
network server, this system allows all connected users to share and store electronic data and
applications. Two important types of servers are file servers and application servers.
Servers are a step below supercomputers, because they don't focus on solving one very
complex problem but instead solve many similar, smaller ones. An example of servers would
be the computers on which Wikipedia stores its encyclopedia. Those computers have to go and
find the page you're looking for and send it to you. In itself that's not a big task, but it becomes a job
for a server when the computers have to find many pages for many people and send
each to the right place. Some servers, like the ones Google uses for something like Google
Documents, host applications rather than just files, as Wikipedia's servers do.
Workstation Computer
Workstations are high-end, expensive computers that are made for more complex procedures
and are intended for one user at a time. Some of the complex procedures consist of science,
math and engineering calculations and are useful for computer design and manufacturing.
Workstations are sometimes improperly named for marketing reasons. Real workstations are not
usually sold in retail, but this is starting to change; Apple's Mac Pro would be considered a
workstation.
The movie Toy Story was made on a set of Sun (SPARC) workstations [1].
Personal Computer or PC
PC is an abbreviation for Personal Computer; it is also known as a microcomputer. Its physical
characteristics and low cost are appealing and useful for its users. The capabilities of a personal
computer have changed greatly since the introduction of electronic computers. By the early
1970s, people in academic or research institutions had the opportunity for single-person use of a
computer system in interactive mode for extended durations, although these systems would still
have been too expensive to be owned by a single individual. The introduction of
the microprocessor, a single chip with all the circuitry that formerly occupied large cabinets, led to
the proliferation of personal computers after about 1975.
Microcontroller
Microcontrollers are mini computers that enable the user to store data and execute simple
commands and tasks. These single circuit devices have minimal memory and program length but
are normally designed to be very good at performing a niche task. Many such systems are
known as embedded systems. The computer in your car, for example, is an embedded system. A
common microcontroller platform that one might come across is the Arduino.
Smartphone
Do you own a smartphone? Your smartphone is a computer! Most smartphones run iOS or
Android. Android is an operating system based on Linux. Smartphones keep getting
faster, and their storage capacity keeps growing.
Client/server architecture
Client/server architecture is a computing model in which the server hosts, delivers
and manages most of the resources and services to be consumed by the client. This
type of architecture has one or more client computers connected to a central server
over a network or internet connection. This system shares computing resources.
Client/server architecture is also known as a networking computing model or
client/server network because all the requests and services are delivered over a
network.
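The request/response pattern described above can be demonstrated with a minimal sketch in Python's standard `socket` library: a server process listens for connections and delivers a service (here, simply echoing data back), while a client connects over the network and consumes it. The loopback address and echo behavior are illustrative choices, not part of any particular real system.

```python
import socket
import threading

def run_echo_server(server_sock):
    """The server side: accept one client and echo whatever it sends back."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

# The server hosts the service and listens on a network port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=run_echo_server, args=(server,), daemon=True).start()

# The client connects to the central server and requests the service.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"hello server")
reply = client.recv(1024)
client.close()
```

In a real deployment the server would run on a separate machine and serve many clients at once, but the division of roles, one central provider and many requesting consumers, is exactly the client/server model.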
The terms World Wide Web (WWW) and Internet are not the
same. The Internet is a collection of interconnected computer
networks, linked by copper wires, fiber-optic cables, wireless
connections, etc. The World Wide Web (WWW) is a collection of
interconnected documents and other resources, linked by
hyperlinks and URLs. The World Wide Web is one of the
services accessible via the Internet, along with various others
including email, file sharing, remote administration, video
streaming, online gaming, etc.
Application (Layer 7)
OSI Model, Layer 7, supports application and end-user
processes. Communication partners are identified, quality of
service is identified, user authentication and privacy are
considered, and any constraints on data syntax are identified.
Everything at this layer is application-specific. This layer
provides application services for file transfers, e-mail and
other network software services. Telnet and FTP are
applications that exist entirely at the application layer. Tiered
application architectures are part of this layer.
Layer 7 Application examples include WWW browsers, NFS, SNMP, Telnet, HTTP, FTP
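As a concrete illustration, an HTTP request is a Layer 7 message: plain application-level text that identifies a communication partner (the `Host` header) and the resource wanted. The sketch below assembles such a message in Python; the host and path are hypothetical examples.

```python
def build_http_get(host, path="/"):
    """Compose a minimal HTTP/1.1 GET request, an application-layer message."""
    return (
        f"GET {path} HTTP/1.1\r\n"   # request line: method, resource, version
        f"Host: {host}\r\n"          # identifies the communication partner
        "Connection: close\r\n"      # ask the server to close after replying
        "\r\n"                       # blank line ends the header section
    ).encode("ascii")

request = build_http_get("example.com", "/index.html")
```

The lower layers never look inside this text; transporting it reliably to the right machine is their job, while interpreting it is purely an application-layer concern.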
Presentation (Layer 6)
This layer provides independence from differences in data
representation (e.g., encryption) by translating from application
to network format, and vice versa. The presentation layer
works to transform data into the form that the application layer
can accept. This layer formats and encrypts data to be sent
across a network, providing freedom from compatibility
problems. It is sometimes called the syntax layer.
Layer 6 Presentation examples include encryption, ASCII,
EBCDIC, TIFF, GIF, PICT, JPEG, MPEG, MIDI.
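The ASCII/EBCDIC translation mentioned above can be shown directly in Python, whose codec library includes EBCDIC code page 500 (`cp500`). The same five characters produce different bytes under each representation; translating between such formats is the presentation layer's job. This is a standalone sketch, not a networking API.

```python
text = "HELLO"
ascii_bytes = text.encode("ascii")    # ASCII representation of the text
ebcdic_bytes = text.encode("cp500")   # EBCDIC (code page 500) representation

# Same characters, different wire formats: the presentation layer
# translates between representations so each side sees its native form.
round_trip = ebcdic_bytes.decode("cp500")
```

Encryption, compression and image formats (JPEG, GIF) are handled the same way in principle: the data's meaning is unchanged while its byte-level representation is transformed.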
Session (Layer 5)
This layer establishes, manages and terminates connections
between applications. The session layer sets up, coordinates,
and terminates conversations, exchanges, and dialogues
between the applications at each end. It deals with session and
connection coordination.
Layer 5 Session examples include NFS, NetBios names, RPC,
SQL.
Transport (Layer 4)
OSI Model, Layer 4, provides transparent transfer of data
between end systems, or hosts, and is responsible for end-to-
end error recovery and flow control. It ensures complete data
transfer.
Layer 4 Transport examples include SPX, TCP, UDP.
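The difference between the two main transport protocols shows up directly in the sockets API: TCP (`SOCK_STREAM`) provides the reliable, ordered byte stream with error recovery described above, while UDP (`SOCK_DGRAM`) just fires off self-contained datagrams. This minimal Python sketch exchanges one UDP datagram over the loopback interface; the payload and addresses are illustrative.

```python
import socket

# UDP (SOCK_DGRAM): connectionless datagrams, no delivery guarantee.
# TCP (SOCK_STREAM) would instead establish a connection and provide
# end-to-end error recovery and flow control.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram payload", addr)   # one shot, no connection setup

payload, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()
```

On the loopback interface the datagram reliably arrives, but across a real network UDP offers no such guarantee; applications that need complete data transfer use TCP, which retransmits lost segments itself.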
Network (Layer 3)
Layer 3 provides switching and routing technologies, creating
logical paths, known as virtual circuits, for transmitting data
from node to node. Routing and forwarding are functions of this
layer, as well as addressing, internetworking, error
handling, congestion control and packet sequencing.
Layer 3 Network examples include AppleTalk DDP, IP, IPX.
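The logical addressing and forwarding decision described above can be sketched with Python's `ipaddress` module: a host compares a destination IP address against its own subnet to decide whether the packet can be delivered directly or must be forwarded to a router. The addresses and subnet below are hypothetical private-network examples.

```python
import ipaddress

# Layer 3 addressing: is the destination on the local subnet, or must the
# packet be forwarded to a router (the default gateway)?
network = ipaddress.ip_network("192.168.1.0/24")
local_dst = ipaddress.ip_address("192.168.1.42")
remote_dst = ipaddress.ip_address("10.0.0.7")

is_local = local_dst in network           # same subnet: deliver directly
needs_router = remote_dst not in network  # different network: forward
```

Real routers repeat this kind of longest-prefix comparison against a whole routing table for every packet, building the node-to-node logical paths mentioned above.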
Data Link (Layer 2)
At OSI Model, Layer 2, data packets are encoded and decoded
into bits. It furnishes transmission protocol knowledge and
management and handles errors in the physical layer, flow
control and frame synchronization. The data link layer is
divided into two sub layers: The Media Access Control (MAC)
layer and the Logical Link Control (LLC) layer. The MAC sub
layer controls how a computer on the network gains access to
the data and permission to transmit it. The LLC layer controls
frame synchronization, flow control and error checking.
Layer 2 Data Link examples include PPP, FDDI, ATM, IEEE
802.5/ 802.2, IEEE 802.3/802.2, HDLC, Frame Relay.
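The MAC sub-layer's addressing can be made concrete by decoding a raw Ethernet frame header, which carries a destination MAC, a source MAC and an EtherType identifying the Layer 3 payload. The frame bytes below are a made-up example (a broadcast destination and an arbitrary source address), decoded with Python's `struct` module.

```python
import struct

def format_mac(mac_bytes):
    """Render 6 raw bytes as a human-readable MAC address."""
    return ":".join(f"{b:02x}" for b in mac_bytes)

def parse_ethernet_header(frame):
    """Unpack destination MAC, source MAC and EtherType from a raw frame."""
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    return format_mac(dst), format_mac(src), ethertype

# Hypothetical frame: broadcast destination, arbitrary source, IPv4 (0x0800).
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"payload..."
dst_mac, src_mac, ethertype = parse_ethernet_header(frame)
```

The MAC sub-layer uses the destination address to decide which station on the shared medium should accept the frame, while the EtherType tells the receiver which Layer 3 protocol the payload belongs to.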
Physical (Layer 1)
OSI Model, Layer 1 conveys the bit stream - electrical impulse,
light or radio signal — through the network at the electrical and
mechanical level. It provides the hardware means of sending
and receiving data on a carrier, including defining cables, cards
and physical aspects. Fast Ethernet, RS232,
and ATM are protocols with physical layer components.
Layer 1 Physical examples include Ethernet, FDDI, B8ZS,
V.35, V.24, RJ45.