
Blood Bank Management System


TOPIC NAME

BLOOD BANK MANAGEMENT SYSTEM

TABLE OF CONTENTS

INTRODUCTION

SYSTEM ANALYSIS AND DESIGN

SYSTEM REQUIREMENTS

TECHNOLOGY USED

OUTPUT SCREEN

SOFTWARE TESTING

CONCLUSION

FUTURE RECOMMENDATION

NON-FUNCTIONAL REQUIREMENTS

BIBLIOGRAPHY

INTRODUCTION

The software system is a blood bank management system that helps in managing various blood bank operations effectively. The project consists of a central repository containing the various blood deposits available along with their associated details, including blood type, storage area and date of storage. These details help in maintaining and monitoring the blood deposits. The project is an online system that allows users to check whether the required blood deposits of a particular group are available in the blood bank. Moreover, the system also has added features such as patient names and contacts and blood booking, and a need for a certain blood group can even be posted on the website to find available donors during a blood emergency. The system is developed on the VB.Net platform and supported by a MySQL database that stores blood and user-specific details.
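As a rough illustration of the blood-deposit record described above, the sketch below shows one way such a record could be modelled on the VB.Net side. The class name and properties are assumptions made for this example, not the project's actual code.

```vb
' Illustrative only: one row of the central repository of blood deposits.
Public Class BloodDeposit
    Public Property DepositId As Integer
    Public Property BloodGroup As String      ' e.g. "A+", "O-"
    Public Property StorageArea As String     ' shelf / refrigerator identifier
    Public Property DateOfStorage As Date
    Public Property UnitsAvailable As Integer
End Class
```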

AIM

The main aim of developing this software is to provide blood to people who are in need of it. The number of people who need blood is increasing day by day. Using this system, a user can search for the blood groups available in the city and can also get the contact number of a donor with the same blood group. In order to help people in need of blood, this blood bank software can be used effectively to obtain the details of available blood groups, along with the contact numbers of blood donors who have the same blood group and live in the same city.
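A minimal sketch of the donor search described above is shown below, using MySQL Connector/NET to query a MySQL table by blood group and city. The table and column names (donors, blood_group, city, contact_number) are assumptions made for this example, not the project's actual schema.

```vb
Imports System.Collections.Generic
Imports MySql.Data.MySqlClient

Module DonorSearch
    ' Returns the contact numbers of donors with the given blood group in the given city.
    Public Function FindDonorContacts(connectionString As String,
                                      bloodGroup As String,
                                      city As String) As List(Of String)
        Dim contacts As New List(Of String)
        Using conn As New MySqlConnection(connectionString)
            conn.Open()
            Dim sql = "SELECT contact_number FROM donors " &
                      "WHERE blood_group = @bg AND city = @city"
            Using cmd As New MySqlCommand(sql, conn)
                ' Parameters keep user-supplied search text out of the SQL itself.
                cmd.Parameters.AddWithValue("@bg", bloodGroup)
                cmd.Parameters.AddWithValue("@city", city)
                Using reader = cmd.ExecuteReader()
                    While reader.Read()
                        contacts.Add(reader.GetString(0))
                    End While
                End Using
            End Using
        End Using
        Return contacts
    End Function
End Module
```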

EXISTING SYSTEM

There are quite a good number of software packages that exist for blood bank inventory control. However, when I visited the blood bank of the Karnataka Cancer Hospital in Navanagar, I found that the existing system is limited to that particular blood bank. At present there is no software to keep any records in the blood bank, so it becomes difficult to produce a record immediately in times of emergency. Maintaining branch-related information requires more human effort. Keeping the accounts manually is a tedious and risky job, and maintaining those accounts in ledgers for a long period is also very difficult. The files are difficult to manage and maintain, and there is a chance of damage if the data is stored in files for a long duration of time. Privacy is difficult to ensure, and retrieving, storing and updating the data is time consuming. It is also difficult to keep track of when a donor or receiver last donated or received blood.

PROPOSED SYSTEM

The proposed system (Blood Bank Management System) is designed to help the blood bank administrator meet the demand for blood by sending and/or serving requests for blood as and when required. The proposed system gives a procedural approach to bridging the gap between recipients, donors and blood banks. The features of the proposed system include ease of data entry, user-friendly interfaces, no need to maintain any manual registers or forms, immediate data retrieval, and so on. The new system covers all aspects of the existing system as well as enhanced features, for example bill provision.

Problem Definition

2.1 Existing System

• Cannot upload and download the latest updates.
• No use of services and remoting.
• Risk of mismanagement and loss of data while the project is under development.
• Less security.
• No proper coordination between different applications and users.
• Less user-friendly.

2.2 Advantages of the Proposed System

1. User friendliness is provided in the application through various controls.
2. The system makes overall project management much easier and more flexible.
3. The latest updates can be uploaded readily, and users can download alerts by clicking the URL.
4. There is no risk of data mismanagement at any level while the project development is in process.
5. It provides a high level of security with different levels of authentication.

FEASIBILITY STUDY

Preliminary investigation examines project feasibility, that is, the likelihood that the system will be useful to the organization. The main objective of the feasibility study is to test the technical, operational and economic feasibility of adding new modules and debugging the old running system. Every system is feasible given unlimited resources and infinite time. There are three aspects in the feasibility study portion of the preliminary investigation:

• Technical Feasibility
• Operational Feasibility
• Economic Feasibility

Technical Feasibility

The technical issues usually raised during the feasibility stage of the investigation include the following:

• Does the necessary technology exist to do what is suggested?
• Does the proposed equipment have the technical capacity to hold the data required to use the new system?
• Will the proposed system provide adequate responses to inquiries, regardless of the number or location of users?
• Can the system be upgraded after it is developed?
• Are there technical guarantees of accuracy, reliability, ease of access and data security?

Earlier, no system existed to cater to the needs of the ‘Secure Infrastructure Implementation System’. The current system developed is technically feasible. It is a web-based user interface for the audit workflow at NIC-CSD, and thus provides easy access to the users.

The database’s purpose is to create, establish and maintain a workflow among the various entities in order to facilitate all concerned users in their various capacities or roles. Permissions are granted to users based on the roles specified. Therefore, the system provides the technical guarantee of accuracy, reliability and security.

The software and hardware requirements for the development of this project are modest and are already available in-house at NIC or are available free as open source. The work for the project is done with the current equipment and existing software technology. The necessary bandwidth exists for providing fast feedback to the users irrespective of the number of users using the system.

Operational Feasibility

Proposed projects are beneficial only if they can be turned into information systems that meet the organization’s operating requirements. Operational feasibility aspects of the project are to be taken as an important part of the project implementation. Some of the important issues raised to test the operational feasibility of a project include the following:

• Is there sufficient support for the project from management and from the users?
• Will the system be used and work properly once it is developed and implemented?
• Will there be any resistance from users that will undermine the possible application benefits?

This system is targeted to be in accordance with the above-mentioned issues. Beforehand, the management issues and user requirements have been taken into consideration, so there is no question of resistance from the users that could undermine the possible application benefits.

The well-planned design ensures the optimal utilization of computer resources and helps improve performance.

Economic Feasibility

Even if a system can be developed technically and will be used once installed, it must still be a good investment for the organization. In the economic feasibility study, the development cost of creating the system is evaluated against the ultimate benefit derived from the new system; financial benefits must equal or exceed the costs.

The system is economically feasible. It does not require any additional hardware or software. Since the interface for this system is developed using the existing resources and technologies available at NIC, there is only nominal expenditure, and economic feasibility is certain.

SYSTEM ANALYSIS AND DESIGN


The System Development Life Cycle (SDLC) describes the data design and application design. The SDLC is an iterative rather than a sequential process; thus the SDLC might help to refine the feasibility study into the user requirements.

Fig: System Development Life Cycle (SDLC)

Planning: SDLC planning yields a general overview of the company and its objectives. An initial assessment of the information flow and of the intended requirements must be made during this discovery portion of the SDLC.

Analysis: Problems defined during the planning phase are examined in greater detail during the analysis phase. The analysis phase of the SDLC is, in effect, a thorough audit of the user requirements.

Detailed System Design: In this phase, the designer completes the design of the system’s processes. This includes all the necessary technical specifications for the screens and reports.

Maintenance: As soon as the system is operable, end users begin to request changes to it. Those changes generate system maintenance activities, which can be grouped into three types:

• Perfective maintenance, to enhance the system.
• Corrective maintenance, in response to system errors.
• Adaptive maintenance, due to changes in the business environment.

Data Flow Diagrams (DFD)

A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system. DFDs are the central tool and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system; such diagrams are known as logical data flow diagrams.

The physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams, developed using one of two familiar notations: Yourdon or Gane and Sarson. Each component in a DFD is labeled with a descriptive name, and each process is further identified with a number that is used for identification purposes.

The development of DFDs is done in several levels. Each process in a lower-level diagram can be broken down into a more detailed DFD at the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes in the first-level DFD.

The idea behind the explosion of a process into more processes is that the understanding at one level of detail is exploded into greater detail at the next level. This is done until no further explosion is necessary and an adequate amount of detail is described for the analyst to understand the process.

Larry Constantine first developed the DFD as a way of expressing system requirements in a graphical form; this led to modular design.

A DFD, also known as a “bubble chart”, has the purpose of clarifying system requirements and identifying major transformations that will become programs in system design. So it is the starting point of the design, down to the lowest level of detail. A DFD consists of a series of bubbles joined by data flows in the system.

DFD SYMBOLS:

In a DFD, there are four symbols:

1. A square defines a source (originator) or destination of system data.
2. An arrow identifies data flow; it is the pipeline through which information flows.
3. A circle or a bubble represents a process that transforms incoming data flows into outgoing data flows.
4. An open rectangle is a data store: data at rest, or a temporary repository of data.


CONSTRUCTING A DFD:

Several rules of thumb are used in drawing DFDs:

1. Processes should be named and numbered for easy reference. Each name should be representative of the process.
2. The direction of flow is from top to bottom and from left to right. Data traditionally flow from the source to the destination, although they may flow back to the source. One way to indicate this is to draw a long flow line back to the source. An alternative way is to repeat the source symbol as a destination; since it is used more than once in the DFD, it is marked with a short diagonal.
3. When a process is exploded into lower-level details, the sub-processes are numbered.
4. The names of data stores, sources and destinations are written in capital letters. Process and data flow names have the first letter of each word capitalized.

A DFD typically shows the minimum contents of a data store. Each data store should contain all the data elements that flow in and out. Questionnaires should contain all the data elements that flow in and out; missing interfaces, redundancies and the like are then accounted for, often through interviews.

SALIENT FEATURES OF DFDs

1. The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.
2. The DFD does not indicate the time factor involved in any process, i.e., whether the data flows take place daily, weekly, monthly or yearly.
3. The sequence of events is not brought out on the DFD.

TYPES OF DATA FLOW DIAGRAMS


1. Current Physical
2. Current Logical
3. New Logical
4. New Physical

CURRENT PHYSICAL:

In a current physical DFD, process labels include the names of people or their positions, or the names of computer systems, that might provide some of the overall system processing. The labels include an identification of the technology used to process the data. Similarly, data flows and data stores are often labeled with the names of the actual physical media on which the data are stored, such as file folders, computer files, business forms or computer tapes.

CURRENT LOGICAL:

The physical aspects of the system are removed as much as possible, so that the current system is reduced to its essence: the data and the processes that transform them, regardless of their actual physical form.

NEW LOGICAL:

This is exactly like the current logical model if the user were completely happy with the functionality of the current system but had problems with how it was implemented. Typically, the new logical model will differ from the current logical model by having additional functions, obsolete functions removed and inefficient flows reorganized.

NEW PHYSICAL:

The new physical represents only the physical implementation of the new system.

RULES GOVERNING THE DFDs

PROCESS

1) No process can have only outputs.
2) No process can have only inputs. If an object has only inputs, then it must be a sink.
3) A process has a verb-phrase label.

DATA STORE

1) Data cannot move directly from one data store to another data store; a process must move the data.
2) Data cannot move directly from an outside source to a data store; a process, which receives the data from the source, must place it into the data store.
3) A data store has a noun-phrase label.

SOURCE OR SINK

The origin and/or destination of data.

1) Data cannot move directly from a source to a sink; it must be moved by a process.
2) A source and/or sink has a noun-phrase label.

DATA FLOW

1) A data flow has only one direction of flow between symbols. It may flow in both directions between a process and a data store to show a read before an update; however, this is usually indicated by two separate arrows, since the two operations happen at different times.
2) A join in a DFD means that exactly the same data comes from any of two or more different processes, data stores or sinks to a common location.
3) A data flow cannot go directly back to the same process it leaves. There must be at least one other process that handles the data flow, produces some other data flow, and returns the original data flow to the beginning process.
4) A data flow to a data store means an update (delete or change).
5) A data flow from a data store means retrieve or use.

DFD FOR ADMIN LOGIN

DFD FOR USER LOGIN

UML Diagrams

Use Case Diagram:

• The Unified Modeling Language allows the software engineer to express an analysis model using a modeling notation that is governed by a set of syntactic, semantic and pragmatic rules.

• A UML system is represented using five different views that describe the system from distinctly different perspectives. Each view is defined by a set of diagrams, as follows.

User Model View

i. This view represents the system from the user's perspective.

ii. The analysis representation describes a usage scenario from the end user's perspective.

Structural Model View

• In this model the data and functionality are viewed from inside the system.

• This model view models the static structures.

Behavioral Model View

It represents the dynamic (behavioral) aspects of the system, depicting the interactions or collaborations between the various structural elements described in the user model and structural model views.

Implementation Model View

In this view the structural and behavioral aspects of the system are represented as they are to be built.

Environmental Model View

In this view the structural and behavioral aspects of the environment in which the system is to be implemented are represented.

UML is specifically constructed through two different domains:

• UML analysis modeling, which focuses on the user model and structural model views of the system.

• UML design modeling, which focuses on the behavioral modeling, implementation modeling and environmental model views.

Use case diagrams represent the functionality of the system from a user's point of view. Use cases are used during requirements elicitation and analysis to represent the functionality of the system. Use cases focus on the behavior of the system from an external point of view.

Actors are external entities that interact with the system. Examples of actors include users such as an administrator or a bank customer, or another system such as a central database.

Use case Model

SEQUENCE DIAGRAMS

Sequence diagrams represent the objects participating in the interaction horizontally and time vertically.
SYSTEM REQUIREMENTS

Software requirements: A set of programs associated with the operation of a computer is called software. Software is the part of the computer system that enables the user to interact with the various physical hardware devices.

The minimum software requirement specifications for developing this project are as follows:

Software System Requirements

• Operating System : Windows 7/8/10

• Coding Language : VB.Net

• Database : MySQL

Hardware requirements: The collection of internal electronic circuits and external physical devices used in building a computer is called hardware.

The minimum hardware requirement specifications for developing this project are as follows:

Hardware System Requirements

• System : Intel i3 processor

• Hard Disk : 500 GB

• Monitor : Standard LED monitor

• Input Devices : Keyboard

• RAM : 4 GB
TECHNOLOGY USED

VB.NET stands for Visual Basic .NET, and it is a computer programming language developed by Microsoft. It was first released in 2002 to replace Visual Basic 6. VB.NET is an object-oriented programming language. This means that it supports the features of object-oriented programming, which include encapsulation, polymorphism, abstraction and inheritance.

VB.NET runs on the .NET Framework, which means that it has full access to the .NET libraries. It is a very productive tool for the rapid creation of a wide range of Web, Windows, Office and Mobile applications built on the .NET Framework.

The language was designed in such a way that it is easy to understand for both novice and advanced programmers. Since VB.NET relies on the .NET Framework, programs written in the language run with great reliability and scalability. With VB.NET, you can create applications that are fully object-oriented, similar to the ones created in other languages such as C++, Java or C#. Programs written in VB.NET can also interoperate well with programs written in Visual C++, Visual C# and Visual J#. VB.NET treats everything as an object.

It is true that VB.NET is an evolved version of Visual Basic 6, but it is not compatible with it. Code written in Visual Basic 6 cannot be compiled under VB.NET.

History of VB.NET

• VB.NET is a multi-paradigm programming language developed by Microsoft on the .NET Framework. It was launched in 2002 as a successor to the Visual Basic language. This was the first version of VB.NET (VB.NET 7.0), and it relied on .NET version 1.0.

• In 2003, the second version of VB.NET, VB.NET 7.1, was released. It relied on .NET version 1.1. This version came with a number of improvements, including support for the .NET Compact Framework and improved reliability and performance of the .NET IDE. VB.NET 2003 was also made available in the academic edition of Visual Studio .NET and distributed to scholars from various countries for free.

• In 2005, VB.NET 8.0 was released. The ".NET" portion was dropped from its name so as to distinguish it from the classic Visual Basic language, and this version was named Visual Basic 2005. It came with many features, since Microsoft wanted the language to be used by rapid application developers and to differentiate it from C#. Some of the features introduced by this version included partial classes, generics, nullable types, operator overloading and unsigned integer support. This version also saw the introduction of the IsNot operator.

• In 2008, VB 9.0 was introduced, released together with .NET 3.5. Some of the features added in this release included anonymous types, a true conditional operator, LINQ support, XML literals, lambda expressions, extension methods and type inference.

• In 2010, Microsoft released VB 2010 (version 10.0). They wanted to use the Dynamic Language Runtime for this release, but opted instead for a co-evolution strategy shared between VB.NET and C# to bring the two languages closer to each other.

• In 2012, VB 2012 (version 11.0) was released together with .NET 4.5. Its features included call hierarchy, iterators, caller information, asynchronous programming with the "Await" and "Async" keywords, and the "Global" keyword in "Namespace" statements.

• In 2015, VB 2015 (version 14.0) was released alongside Visual Studio 2015. The "?." operator was introduced to do inline null checks, and a string interpolation feature was added to help format strings inline. (A short sketch of the 2012 and 2015 features appears after this list.)

• In 2017, VB 2017 (version 15.0) was introduced alongside Visual Studio 2017, adding a better way of organizing source code in a single action.
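The sketch below illustrates the 2012 and 2015 language additions mentioned above (Async/Await, the "?." operator and string interpolation). The donor lookup is a made-up stand-in for a real database call, and the city name and count are illustrative values only.

```vb
Imports System
Imports System.Threading.Tasks

Module LanguageFeatureDemo
    ' Async/Await (VB 2012): the caller is not blocked while the lookup "runs".
    Async Function LookupDonorCountAsync(city As String) As Task(Of Integer)
        Await Task.Delay(100)                 ' stands in for a slow database query
        Return If(city = "Hubli", 12, 0)      ' illustrative value only
    End Function

    Sub Main()
        Dim donorName As String = Nothing
        ' "?." (VB 2015) returns Nothing instead of throwing when donorName is Nothing.
        Dim upperName As String = donorName?.ToUpper()
        Dim count As Integer = LookupDonorCountAsync("Hubli").GetAwaiter().GetResult()
        ' String interpolation (VB 2015) formats values inline.
        Console.WriteLine($"Donor {If(upperName, "<unknown>")}: {count} matches found.")
    End Sub
End Module
```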

VB.NET Features

VB.NET comes loaded with numerous features that have made it a popular programming language amongst programmers worldwide. These features include the following (a short illustrative sketch follows the list):

• VB.NET is not case sensitive, unlike languages such as C++ and Java.
• It is an object-oriented programming language and treats everything as an object.
• Automatic code formatting, an XML designer, an improved object browser, etc.
• Garbage collection is automated.
• Support for Boolean conditions for decision making.
• Simple multithreading, allowing your applications to deal with multiple tasks simultaneously.
• Simple generics.
• A standard library.
• Events management.
• References: you can reference an external object that is to be used in a VB.NET application.
• Attributes, which are tags for providing additional information about elements that have been defined within a program.
• Windows Forms: you can inherit your form from an already existing form.
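As a small illustration of the object-oriented features listed above (encapsulation, inheritance and polymorphism), consider the sketch below; the Person and Donor classes are invented for this example and are not part of the project's actual code.

```vb
Imports System

' Encapsulation: state is exposed through properties rather than public fields.
Public Class Person
    Public Property Name As String

    Public Overridable Function Describe() As String
        Return $"Person: {Name}"
    End Function
End Class

' Inheritance: Donor reuses Person and overrides Describe (polymorphism).
Public Class Donor
    Inherits Person

    Public Property BloodGroup As String

    Public Overrides Function Describe() As String
        Return $"Donor: {Name} ({BloodGroup})"
    End Function
End Class

Module OopDemo
    Sub Main()
        Dim p As Person = New Donor With {.Name = "Asha", .BloodGroup = "B+"}
        ' The overridden method runs even through a Person reference.
        Console.WriteLine(p.Describe())   ' prints "Donor: Asha (B+)"
    End Sub
End Module
```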

Advantages of VB.NET

The following are the benefits you will enjoy when coding in VB.NET:

• Your code will be formatted automatically.

• You can use object-oriented constructs to create enterprise-class code.

• You can create web applications with modern features such as performance counters, event logs and the file system.

• You can create your web forms with much ease through the visual forms designer, and drag-and-drop capability lets you place any elements that you may need.

• You can connect your applications to other applications created in languages that run on the .NET Framework.

• You will enjoy features such as docking, automatic control anchoring and an in-place menu editor, all useful for application development.

Disadvantages of VB.NET

Below are some of the drawbacks associated with VB.NET:

• VB.NET cannot handle pointers directly. This is a significant disadvantage, since pointers are sometimes necessary in programming; working around them requires additional code, which leads to more CPU cycles and more processing time, making the application slower.

• VB.NET is easy to learn, which has led to a large talent pool. Hence, it may be more challenging to secure a job as a VB.NET programmer.
MySQL

Security improvements: MySQL now enables database administrators to establish a policy for automatic password expiration: any user who connects to the server using an account whose password is past its permitted lifetime must change the password. Administrators can also lock and unlock accounts for better control over who can log in.
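The password-expiration and account-locking statements described above could be issued from a small administrative routine such as the hedged sketch below. The account names and the 90-day interval are assumptions for illustration; in practice a DBA would normally run these statements directly from the mysql client.

```vb
Imports MySql.Data.MySqlClient

Module AccountPolicy
    ' Applies an example password-expiration policy and locks an unused account.
    Public Sub EnforcePolicy(adminConnectionString As String)
        Dim statements() As String = {
            "ALTER USER 'bbms_app'@'localhost' PASSWORD EXPIRE INTERVAL 90 DAY",
            "ALTER USER 'old_operator'@'localhost' ACCOUNT LOCK"
        }
        Using conn As New MySqlConnection(adminConnectionString)
            conn.Open()
            For Each sql As String In statements
                Using cmd As New MySqlCommand(sql, conn)
                    cmd.ExecuteNonQuery()
                End Using
            Next
        End Using
    End Sub
End Module
```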

To make it easier to support secure connections, MySQL servers compiled using OpenSSL can automatically generate missing SSL and RSA certificate and key files at start-up.

SQL mode changes: The ERROR_FOR_DIVISION_BY_ZERO, NO_ZERO_DATE and NO_ZERO_IN_DATE SQL modes are now deprecated but enabled by default. The long-term plan is to have them included in strict SQL mode and to remove them as explicit modes in a future MySQL release.

Globalization improvements: MySQL 5.7.4 includes a gb18030 character set that supports the China National Standard GB18030 character set.

JSON support:

Beginning with MySQL 5.7.8, MySQL supports a native JSON type. JSON values are not stored as strings; instead, an internal binary format permits quick read access to document elements. JSON documents stored in JSON columns are automatically validated whenever they are inserted or updated, with an invalid document producing an error.

JSON documents are normalized on creation and can be compared using most comparison operators, such as =, <, <=, >, >=, <>, != and <=>.
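A hedged sketch of the JSON column in use is shown below (it assumes MySQL 5.7.8 or later and MySQL Connector/NET); the donor_profile table and its contents are invented for this example.

```vb
Imports System
Imports MySql.Data.MySqlClient

Module JsonDemo
    Public Sub Run(connectionString As String)
        Using conn As New MySqlConnection(connectionString)
            conn.Open()
            ' A native JSON column; invalid documents are rejected on INSERT/UPDATE.
            Using ddl As New MySqlCommand(
                "CREATE TABLE IF NOT EXISTS donor_profile (" &
                "  id INT PRIMARY KEY AUTO_INCREMENT, details JSON NOT NULL)", conn)
                ddl.ExecuteNonQuery()
            End Using
            Using ins As New MySqlCommand(
                "INSERT INTO donor_profile (details) VALUES (@doc)", conn)
                ins.Parameters.AddWithValue("@doc", "{""blood_group"": ""O-"", ""city"": ""Hubli""}")
                ins.ExecuteNonQuery()
            End Using
            ' JSON_EXTRACT reads one field out of the stored document.
            Using sel As New MySqlCommand(
                "SELECT JSON_EXTRACT(details, '$.blood_group') FROM donor_profile", conn)
                Console.WriteLine(sel.ExecuteScalar())
            End Using
        End Using
    End Sub
End Module
```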

Sys Schema:

MySQL distributions now include the sys schema, which is a set of objects that helps DBAs and developers interpret data collected by the Performance Schema. Sys schema objects can be used for typical tuning and diagnosis use cases.

Condition handling:

MySQL now supports stacked diagnostics areas. When the diagnostics area stack is pushed, the first (current) diagnostics area becomes the second (stacked) diagnostics area and a new current diagnostics area is created as a copy of it. Within a condition handler, executed statements modify the new current diagnostics area, but GET STACKED DIAGNOSTICS can be used to inspect the stacked diagnostics area to obtain information about the condition that caused the handler to activate, independent of current conditions within the handler itself. (Previously, there was a single diagnostics area.)

Master dump thread improvements:

The master dump thread was refactored to reduce lock contention and improve master throughput. Prior to MySQL 5.7.2, the dump thread took a lock on the binary log whenever reading an event; in MySQL 5.7.2 and later, this lock is held only while reading the position at the end of the last successfully written event. This means both that multiple dump threads are now able to read concurrently from the binary log file, and that dump threads are able to read while clients are writing to the binary log.

Multi-source replication is now possible:

MySQL Multi-Source Replication adds the ability to replicate from multiple masters to a single slave. Multi-source replication topologies can be used to back up multiple servers to a single server and to merge table shards.
OUTPUT SCREEN

(Application screenshots are presented in this section of the original report.)
SOFTWARE TESTING

Software testing is an investigation conducted to provide stakeholders with information about the quality of the product or service under test. Software testing can also provide an objective, independent view of the software to allow the business to appreciate and understand the risks of software implementation. Test techniques include the process of executing a program or application with the intent of finding software bugs (errors or other defects).

Software testing involves the execution of a software component or system component to evaluate one or more properties of interest. In general, these properties indicate the extent to which the component or system under test:

• meets the requirements that guided its design and development,
• responds correctly to all kinds of inputs,
• performs its functions within an acceptable time,
• is sufficiently usable,
• can be installed and run in its intended environments, and
• achieves the general result its stakeholders desire.

Software testing can provide objective, independent information about the quality of software
and risk of its failure to users and/or sponsors. As the number of possible tests for even
simple software components is practically infinite, all software testing uses some strategy to
select tests that are feasible for the available time and resources.

As a result, software testing typically (but not exclusively) attempts to execute a program or application with the intent of finding software bugs (errors or other defects). Testing is an iterative process: when one bug is fixed, it can illuminate other, deeper bugs or can even create new ones. Software testing can be conducted as soon as executable software (even if partially complete) exists.

The overall approach to software development often determines when and how testing is conducted. For example, in a phased process, most testing occurs after system requirements have been defined and then implemented in testable programs. In contrast, under an Agile approach, requirements, programming and testing are often done concurrently.

Levels of Testing:

• Unit Testing
• Module Testing
• Integration Testing
• System Testing
• User Acceptance Testing

Unit testing: In computer programming, unit testing is a software testing method by which individual units of source code, sets of one or more computer program modules together with associated control data, usage procedures and operating procedures, are tested to determine whether they are fit for use.

Intuitively, one can view a unit as the smallest testable part of an application. In procedural programming, a unit could be an entire module, but it is more commonly an individual function or procedure. In object-oriented programming, a unit is often an entire interface, such as a class, but could be an individual method. Unit tests are short code fragments created by programmers, or occasionally by white-box testers, during the development process. Unit testing forms the basis for component testing.

Ideally, each test case is independent from the others. Substitutes such as method stubs, mock objects, fakes and test harnesses can be used to assist in testing a module in isolation. Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended.
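As a concrete illustration, the sketch below unit-tests a small, hypothetical blood-group compatibility helper using MSTest; neither the helper nor the tests are part of the project described in this report.

```vb
Imports Microsoft.VisualStudio.TestTools.UnitTesting

Public Module BloodCompatibility
    ' Returns True when red cells of donorGroup can be given to recipientGroup.
    Public Function CanDonateTo(donorGroup As String, recipientGroup As String) As Boolean
        Dim donorRhPositive = donorGroup.EndsWith("+")
        Dim recipientRhPositive = recipientGroup.EndsWith("+")
        If donorRhPositive AndAlso Not recipientRhPositive Then Return False
        Dim donorAbo = donorGroup.TrimEnd("+"c, "-"c)
        Dim recipientAbo = recipientGroup.TrimEnd("+"c, "-"c)
        ' O donates to everyone, AB receives from everyone, otherwise ABO must match.
        Return donorAbo = "O" OrElse recipientAbo = "AB" OrElse donorAbo = recipientAbo
    End Function
End Module

<TestClass>
Public Class BloodCompatibilityTests
    <TestMethod>
    Public Sub ONegative_IsUniversalDonor()
        Assert.IsTrue(BloodCompatibility.CanDonateTo("O-", "AB+"))
    End Sub

    <TestMethod>
    Public Sub RhPositive_CannotDonateToRhNegative()
        Assert.IsFalse(BloodCompatibility.CanDonateTo("A+", "A-"))
    End Sub
End Class
```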

Module Testing: Module test plans must be created prior to module test execution, describing the test cases to be run against each individual module.
Integration: The purpose of integration testing is to verify functional, performance, and
reliability requirements placed on major design items. These "design items", i.e., assemblages
(or groups of units), are exercised through their interfaces using black box testing, success
and error cases being simulated via appropriate parameter and data inputs. Simulated usage of
shared data areas and inter-process communication is tested and individual subsystems are
exercised through their input interface.

Test cases are constructed to test whether all the components within assemblages interact
correctly, for example across procedure calls or process activations, and this is done after
testing individual modules, i.e., unit testing. The overall idea is a "building block" approach,
in which verified assemblages are added to a verified base which is then used to support the
integration testing of further assemblages.

Software integration testing is performed according to the Software Development Life Cycle (SDLC) after module and functional tests. The cross-dependencies for software integration testing are: the schedule for integration testing, the strategy and selection of the tools used for integration, the cyclomatic complexity of the software and the software architecture, the reusability of modules, and life-cycle / versioning management.

System testing: System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. System testing falls within the scope of black-box testing and, as such, should require no knowledge of the inner design of the code or logic.

As a rule, system testing takes as its input all of the "integrated" software components that have passed integration testing, as well as the software system itself integrated with any applicable hardware system(s). The purpose of integration testing is to detect any inconsistencies between the software units that are integrated together (called assemblages) or between any of the assemblages and the hardware. System testing is a more limited type of testing; it seeks to detect defects both within the "inter-assemblages" and within the system as a whole.

System testing is performed on the entire system in the context of a Functional Requirement Specification (FRS) and/or a System Requirement Specification (SRS). System testing tests not only the design, but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).

Static vs. dynamic testing:

There are many approaches available in software testing. Reviews, walkthroughs or inspections are referred to as static testing, whereas actually executing programmed code with a given set of test cases is referred to as dynamic testing. Static testing is often implicit, for example as proofreading, or when programming tools and text editors check source code structure, or when compilers (pre-compilers) check syntax and data flow as static program analysis.

Dynamic testing takes place when the program itself is run. Dynamic testing may begin before the program is 100% complete in order to test particular sections of code, applied to discrete functions or modules. Typical techniques for this are either using stubs/drivers or execution from a debugger environment.

Static testing involves verification, whereas dynamic testing involves validation. Together
they help improve software quality. Among the techniques for static analysis, mutation
testing can be used to ensure the test-cases will detect errors which are introduced by
mutating the source code.

Acceptance testing: In engineering and its various subdisciplines, acceptance testing is a test conducted to determine whether the requirements of a specification or contract are met. It may involve chemical tests, physical tests or performance tests.

In systems engineering, it may involve black-box testing performed on a system (for example, a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery.

In software testing, the ISTQB defines acceptance testing as formal testing with respect to user needs, requirements and business processes, conducted to determine whether a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system. Acceptance testing is also known as user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT) or field (acceptance) testing. A smoke test may be used as an acceptance test prior to introducing a build of software to the main testing process.

Testing is a set of activities conducted to facilitate the discovery and/or evaluation of properties of one or more items under test. Each individual test, known as a test case, exercises a set of predefined test activities developed to drive the execution of the test item to meet test objectives, including correct implementation, error identification, quality verification and other valued detail. The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures and/or documentation intended for or used to perform the testing of software.

Operational acceptance test (OAT) criteria (regardless of whether agile, iterative or sequential development is used) are defined in terms of functional and non-functional requirements, covering key quality attributes of functional stability, portability and reliability.

User acceptance: User acceptance testing (UAT) consists of a process of verifying that a solution works for the user. It is not system testing (ensuring software does not crash and meets documented requirements), but rather ensures that the solution will work for the user, i.e., it tests that the user accepts the solution (software vendors often refer to this as "beta testing").

This testing should be undertaken by a subject-matter expert (SME), preferably the owner or
client of the solution under test, and provide a summary of the findings for confirmation to
proceed after trial or review.

In software development, UAT as one of the final stages of a project often occurs before a
client or customer accepts the new system. Users of the system perform tests in line with
what would occur in real-life scenarios. It is important that the materials given to the tester be
similar to the materials that the end user will have. Provide testers with real-life scenarios
such as the three most common tasks or the three most difficult tasks you expect an average
user will undertake. Instructions on how to complete the tasks must not be provided.

The UAT acts as a final verification of the required business functionality and proper
functioning of the system, emulating real-world usage conditions on behalf of the paying
client or a specific large customer. If the software works as required and without issues
during normal use, one can reasonably extrapolate the same level of stability in production.
User tests, usually performed by clients or by end-users, do not normally focus on identifying
simple problems such as spelling errors or cosmetic problems, nor on showstopper defects,
such as software crashes; testers and developers previously identify and fix these issues
during earlier unit testing, integration testing, and system testing phases.

UAT should be executed against test scenarios. Test scenarios usually differ from System or
Functional test cases in the sense that they represent a "player" or "user" journey. The broad
nature of the test scenario ensures that the focus is on the journey and not on technical or
system-specific key presses, staying away from "click-by-click" test steps to allow for a
variance in users' steps through systems. Test scenarios can be broken down into logical
"days", which are usually where the actor (player/customer/operator) or system (back office,
front end) changes.

Maintenance and environment:

As the number of computer-based systems grew, libraries of computer software began to expand. In-house developed projects produced tens of thousands of program source statements, and software products purchased from outside added hundreds of thousands of new statements. A dark cloud appeared on the horizon: all of these programs, all of those source statements, had to be corrected when faults were detected, modified as user requirements changed, or adapted to new hardware that was purchased. These activities were collectively called software maintenance.
The maintenance phase focuses on change that is associated with error
correction, adaptations required as the software's environment evolves, and changes due to
enhancements brought about by changing customer requirements. Four types of changes are
encountered during the maintenance phase.

1. Correction
2. Adaptation
3. Enhancement
4. Prevention

Correction:

Even with the best quality assurance activities, it is likely that the customer will uncover defects in the software. Corrective maintenance changes the software to correct defects.

Maintenance is a set of software engineering activities that occur after software has been delivered to the customer and put into operation. Software configuration management is a set of tracking and control activities that begin when a software project starts and terminate only when the software is taken out of operation.

We may define maintenance by describing four activities that are undertaken after a program is released for use:

• Corrective Maintenance
• Adaptive Maintenance
• Perfective Maintenance or Enhancement
• Preventive Maintenance or Reengineering

Only about 20 percent of all maintenance work is spent "fixing mistakes". The remaining 80 percent is spent adapting existing systems to changes in their external environment, making enhancements requested by users, and reengineering an application for use.

Adaptation:

Over time, the original environment (e.g., CPU, operating system, business rules, external product characteristics) for which the software was developed is likely to change. Adaptive maintenance results in modification of the software to accommodate changes to its external environment.

Enhancement:

As software is used, the customer/user will recognize additional functions that would provide benefit. Perfective maintenance extends the software beyond its original functional requirements.

Prevention:

Computer software deteriorates due to change, and because of this, preventive maintenance, often called software reengineering, must be conducted to enable the software to serve the needs of its end users. In essence, preventive maintenance makes changes to computer programs so that they can be more easily corrected, adapted and enhanced. Software configuration management (SCM) is an umbrella activity that is applied throughout the software process.

ADVANTAGES

• Security of data

• Greater efficiency

• Better service

• User-friendly and interactive

• Minimum time required

• Easy to use

Scope of the Project

System development is also considered a process backed by an engineering approach. We have tried to incorporate and develop new practices as part of our education; these practices have been followed not only during coding but also during the analysis, design and documentation phases.

The Blood Bank Management System is considered an expansion of business relations. It contributes a lot by providing quick and fast services for sending documents and letters (both formal and informal), enabling the business to flourish.

The following modifications or upgrades can be made to the system:

1) More than one company can be integrated through this software.

2) Web services can be used to know the exact delivery status of packets.

3) Clients can check the delivery status online.

4) A distributed database approach can be used in place of the centralized approach.

CONCLUSION

With the theoretical inclination of our syllabus, it becomes very essential to take the utmost advantage of any opportunity to gain practical experience that comes along. Building this major project, "Blood Bank Management System", was one of these opportunities. It gave us the requisite practical knowledge to supplement the theoretical concepts already taught, thus making us more competent as computer engineers.

From a personal point of view, the project also helped us in understanding the following aspects of project development:

• The planning that goes into implementing a project.
• The importance of proper planning and an organized methodology.
• The key element of team spirit and coordination in a successful project.

The project also provided us the opportunity to interact with our teachers and to gain from their experience.
FUTURE RECOMMENDATION

The Blood Bank Management System is a software application built in such a way that it should suit all types of blood banks in future. One important future scope is the availability of location-based blood bank details and the extraction of location-based donor details, which would be very helpful to people who need blood. Network facilities cannot be used all the time, and in such cases a donor request may not arrive in proper time; this can be avoided by adding a message-sending procedure, which will help find a suitable blood donor in time and ensure the availability of blood when it is needed.
NON-FUNCTIONAL REQUIREMENTS

Performance Requirements:

The proposed system that we are going to develop will be used as the chief system for helping the organization manage the whole database of the blood bank. Therefore, it is expected that the database will perform all the functional requirements that are specified.

Safety Requirements:

The database may crash at any time due to a virus or an operating system failure. Therefore, it is required to take regular backups of the database.
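One simple way to take such a backup is to run mysqldump on a schedule; the sketch below shows the idea. The database name, user and output file are invented for illustration, and in practice the password should come from a protected option file rather than the command line.

```vb
Imports System
Imports System.Diagnostics

Module BackupJob
    Public Sub Main()
        Dim outputFile As String = $"bbms_backup_{Date.Now:yyyyMMdd}.sql"
        ' mysqldump writes a full logical backup of the database to outputFile.
        Dim psi As New ProcessStartInfo(
            "mysqldump",
            $"--user=backup_user --databases bbms --result-file={outputFile}") With {
            .UseShellExecute = False
        }
        Using proc As Process = Process.Start(psi)
            proc.WaitForExit()
            Console.WriteLine($"Backup finished with exit code {proc.ExitCode}.")
        End Using
    End Sub
End Module
```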

Security Requirements:

We are going to develop a secured database. There are different categories of users of the Blood Bank Management System who will view either all or only some specific information from the database. Depending upon the category of user, the access rights are decided: if the user is an administrator, then he can modify and append data; all other users only have the right to retrieve information from the database.
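A hedged sketch of how such role-based access rights might be represented in the application layer is shown below; the role names and the CurrentUser type are assumptions made for illustration only.

```vb
' Illustrative only: access rights decided by the category of user, as described above.
Public Enum UserRole
    Administrator
    Staff
    Viewer
End Enum

Public Class CurrentUser
    Public Property UserName As String
    Public Property Role As UserRole

    ' Only an administrator may modify or append records.
    Public Function CanModifyRecords() As Boolean
        Return Role = UserRole.Administrator
    End Function

    ' Every authenticated category of user may retrieve information.
    Public Function CanViewRecords() As Boolean
        Return True
    End Function
End Class
```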

BIBLIOGRAPHY

• Mastering VB 6.0

• Blood Bank Management System
