JVT2004 - RiskManagement SIA CCA and RA PDF
Analysis Techniques
For Validation Programs
By David W. Vincent & Bill Honeck
Introduction

In recent years, the subject of quality risk management has become a major focus of the Food and Drug Administration (FDA). On April 9-11, 2002, the FDA held a public meeting in Washington, D.C. The purpose of the meeting was for the public to comment on the following three FDA concept papers: Premarketing Risk Assessment; Risk Management Programs; and Risk Assessment of Observational Data: Good Pharmacovigilance Practices and Pharmacoepidemiologic Assessment.1

It is only a matter of time before the FDA and other regulatory agencies will expect the same quality risk assessment to be applied to all areas of the biotechnology and pharmaceutical industry.

Quality Risk Management

Quality risk management is not a new concept. It has been used in the medical device and other industries for many years, and is now becoming more accepted within the pharmaceutical and biotechnology industries. For example, Failure Mode and Effect Analysis (FMEA) techniques have been around for over 30 years. It is only recently, however, that FMEAs have gained widespread acceptance outside the safety area, thanks in large part to QS-9000.

The purpose of this article is to discuss several risk assessment techniques, and how they can be utilized to support the development of user requirement specifications, commissioning, and validation activities. Before a risk assessment technique can be utilized in any quality assessment, it is first important to understand each technique and how to implement it into your system. There are many different risk management tools; however, this article will focus on those most commonly used in the healthcare industry. The following is a list of the most commonly used risk management tools, with a brief description of their practical usage:

Cause and Effect
Fault Tree Analysis (FTA)
Hazard Analysis and Critical Control Points (HACCP)
Failure Modes and Effect Analysis (FMEA)

Cause and Effect

Cause-and-effect diagrams were developed by Kaoru Ishikawa of Tokyo University in 1943, and thus are often called Ishikawa diagrams. They are also known as fishbone diagrams because of their appearance in plotted form. Cause-and-effect diagrams are used to systematically list the different causes that can be attributed to a problem (or an effect). A cause-and-effect diagram can aid in identifying the reasons why a process goes out of control.

A fishbone diagram is one technique used to illustrate cause and effect. The following points describe the fishbone diagram technique:

FISHBONE DIAGRAM TECHNIQUE
1. The diagram, like other problem-solving techniques, is a heuristic tool. As such, it helps users organize their thoughts and structure the quality improvement process. Of course, the diagram does not provide solutions to quality problems.
2. The final diagram does not rank causes according to
M a y 2 0 0 4 Vo l u m e 1 0 , N u m b e r 3 235
their importance. Put differently, the diagram does not identify leverage points that, when manipulated, will significantly improve the quality of the process at hand.
3. The diagram is a very attractive tool. On the face of it, it is easy to learn and apply. However, it is a mistake to approach it without mastering at least some organizational learning skills, such as working together with others, seeking the truth, being open to different ideas, and seeing others who might oppose you as colleagues with different ideas. Without such skills, internal politics can dominate the process (e.g., the most powerful opinion dominates; team members bring a political agenda to the diagram construction process).

Fault Tree Analysis

A Fault Tree Analysis (FTA) is a deductive, top-down method of analyzing system design and performance. It involves specifying a top event to analyze (such as a sterilization process failure), followed by identifying all of the associated elements in the system that could cause that top event to occur. FTA is a top-down approach to failure mode analysis. It assumes a system-level failure, and identifies critical failure modes within that system. The undesirable event is defined, and that event is then traced through the system to identify possible causes. One event is addressed at a time, and all possible causes of that event are considered. The analysis proceeds by determining how these system-level failures can be caused by individual or combined lower-level failures or events. The tree is continued until the subsystem at fault is determined. By determining the underlying causes, corrective actions can be identified to avoid or diminish the effects of the failures. FTA is a great lead-in to robust experimental design techniques. For example, Figure A shows a top-down approach to understanding a basic sterilization model.

Hazard Analysis and Critical Control Points (HACCP)

HACCP is a management system in which product safety is addressed through the analysis and control of biological, chemical, and physical hazards, from raw material production, procurement, and handling, to manufacturing, distribution, and consumption of the finished product. For successful implementation of a HACCP plan, management must be strongly committed to the HACCP concept. A firm commitment to HACCP by top management provides company employees with a sense of the importance of producing safe products. While HACCP is traditionally used in the food industry, one can see the value of using this technique in determining the critical control points in the manufacturing of biological or pharmaceutical drugs.
Figure A: A top down approach to understanding a basic sterilization model
Figure 1: FMEA Team Start-Up Worksheet
Date Completed: ____________
Team Leader: ____________
9. What is the procedure if the team needs to expand beyond these boundaries? ____________
START WITH KNOWN FAILURE MODES:
Customer complaints
Process control reports
Validation failures
Test results
Product quality data

Potential Effects (System and End User)

Effects are any conditions that can occur in the early process development phase, clinical setting, and/or manufacturing conditions, potentially brought about by a failure mode, if it were present in the product used by the customer. In the case of process FMEAs, also include potential effects on subsequent operations in the manufacturing process. There may be several effects for each failure mode.

Assigning Severity, Occurrence, and Detection Ratings

In most FMEAs, the rating is based on a 10-point scale, with one (1) being lowest and ten (10) being highest. Figure 2 is an example of a typical ranking system for Severity, Occurrence, and Detection.

It is important to establish clear and concise descriptions for the points on each of the scales, so that all team members have the same understanding of the ratings. The scales should be established before the team begins the ranking of the FMEA.

In the ranking system used here, each of the three ratings (severity, occurrence, and detection) is based on a five-point scale, with one (1) being the lowest rating and five (5) being the highest. This ranking method was selected because it best suited the process analysis.

Severity

Severity ranking is an assessment of the seriousness of the effect, assuming the affected product is actually being used. This is depicted using a numbering scheme: the Severity is estimated on a scale of one through five. Figure 3 may be used as a reference for scaling. There will be a severity rank for each effect identified.

Potential Causes of Failure

For each failure mode, list all the possible mechanisms and causes that could bring about the failure. There may be more than one cause for each failure mode.

Design FMEAs: The focus is specifically on design weaknesses and deficiencies, or possible customer use/misuse situations that could lead to the failure.
Process FMEAs: The focus is on process aspects, controls, variables, or conditions that can result in the failure.

Occurrence

Occurrence is the probability that the cause listed will happen and create the failure mode described. Historical data on this or similar designs/processes may be used to estimate how often an occurrence will transpire. The probability of occurrence may be defined on a scale from one to five. There is an occurrence rank for each cause identified. (See Figure 4)

Detection

Detection ranking is specific to Current Controls. A single ranking score is assigned to represent the combined impact of all controls identified for a given cause. If there are no controls for a cause, assign the highest rank (5) in the detection column for that cause.

Design FMEAs: Detection is based on the likelihood that routine testing and inspection will detect the failure, or the cause of the failure, prior to manufacturing.
Process FMEAs: Detection is based on the probability that the process controls/inspections identified will prevent or remove the cause prior to manufacturing or customer use.

Risk Priority Number (RPN)

The Risk Priority Number (RPN) is a measure of the overall risk associated with the failure mode. The RPN is obtained by multiplying the ratings for severity, occurrence, and detection, and will be a number between 1 and 125. The higher the number, the more serious the failure mode. Each failure mode may have several RPNs, because there may be multiple effects (i.e., severity, occurrence, and detection ranks) and, therefore, several combinations of those numbers.

Severity x Occurrence x Detection = RPN
5 x 5 x 5 = 125
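The RPN arithmetic above is easy to sketch in code. The following is an illustrative sketch only; the `FailureMode` class and its field names are ours, not from the article, and the example ratings come from the coolant-hose worksheet entry shown later:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    """One FMEA line item with its 1-5 ratings (hypothetical structure)."""
    description: str
    severity: int    # 1 (none) to 5 (very high)
    occurrence: int  # 1 (remote) to 5 (very high)
    detection: int   # 1 (near-certain detection) to 5 (no controls)

    def rpn(self) -> int:
        # RPN = Severity x Occurrence x Detection, so 1 <= RPN <= 125
        return self.severity * self.occurrence * self.detection

leak = FailureMode("Coolant leak from cracked hose", severity=3, occurrence=3, detection=2)
print(leak.rpn())                        # 18
print(FailureMode("Worst case", 5, 5, 5).rpn())  # 125, the maximum RPN
```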
Figure 2: A typical ranking system for Severity, Occurrence, and Detection

Rating   Severity           Occurrence                                Detection
10       Dangerously High   Very High: Failure is almost inevitable   Absolute Uncertainty
1        None               Remote: Failure is unlikely               Almost Certainty
Figure 3: A severity rank for each effect identified.

Very High (Ranking 5): Any failure that could reasonably result in a safety issue (potential harm to worker or customer) and/or may result in a regulatory issue.
High (Ranking 4): Major failure that may render the system inoperable or result in significant reduction in performance or quality of the product.
Moderate (Ranking 3): Moderate failure likely resulting in reduction in performance or quality of the product. These failures are noticeable to the end user, and are likely to generate a moderate level of dissatisfaction or complaints from the customer.
Low (Ranking 2): Minor failure, not noticeably affecting functional quality; however, it may generate complaints due to annoyance (for example, cosmetic defects and increased maintenance).
None (Ranking 1): Minor failure, unlikely to be noticed by customers or to generate complaints.
Figure 4: An occurrence rank for each cause identified.

Very High (Ranking 5): Failures occur regularly, and one could reasonably expect the failure to occur for each component or during each process step.
High (Ranking 4): Failures occur on a frequent basis. These failures do not occur every time; however, they do occur at a rate that produces significant concern for product quality and performance.
Moderate (Ranking 3): Failures occur only occasionally, at a rate that does not significantly impact production but can be a nuisance.
Low (Ranking 2): Failures occur rarely. These failure rates create few production problems.
Remote (Ranking 1): A failure of the component or system is extremely unlikely.
Figure 5: A detection rank for each failure identified.
Prioritize the Failure Modes for Action

As a general guideline, failure modes with a severity of three (3) or greater, and an overall RPN of 50 or greater, should be considered potentially critical, and actions should be taken to reduce the RPN. However, this threshold number may vary from process to process, and the project team must make the final decision.

Pareto analysis can be applied. The top 20% of the ranked RPN numbers should account for approximately 80% of the anticipated frequent failure modes. These 20% should be a top priority for corrective action.

RECOMMENDED ACTIONS
To reduce severity: change the design or the application/use.
To reduce occurrence: change the process and/or product design.
To improve detection: improve controls as a temporary measure. Emphasis should be on prevention, e.g., develop controls with alarms.

By ranking problems in order, from the highest risk priority number to the lowest, you can prioritize the failure modes. A Pareto diagram is helpful to visualize the differences between the various ratings, and to assist in the ranking process. The FMEA team must now decide which items to work on first. Usually, it helps to set a cut-off RPN, where any failure modes with an RPN above that point are attended to. Those below the cut-off are left alone for the time being. For example, an organization may decide that any RPN above 50 creates an unacceptable risk.

Once a corrective action is determined by the team, it is important to assign an individual or group to implement the required action. Selection should be based on the experience and expertise needed to perform the corrective action. It is also important to assign a target completion date for the action item. This will help in ensuring the timely closure of any problem.

Reassessing the Risk Mode after Corrective Action

Once action has been taken to improve the product or process, a new rating for severity, occurrence, and detection should be determined, and the resulting RPN calculated. For failure modes where action was taken, there should be a significant reduction in the RPN. If not, that means the action did not reduce the severity, occurrence, or detectability. The final RPNs can be organized in a Pareto diagram and compared with the original. You should expect at least a 50% reduction in the total RPN after the FMEA.

After the action has been implemented, the severity, occurrence, and detection ratings for the targeted failure modes are re-evaluated. If the resulting RPN is satisfactory, you can move on to other failure modes. If not, you may wish to recommend further corrective action.

Use the example of a typical FMEA worksheet, shown in the Description of FMEA Worksheet section. The FMEA risk assessment method listed above is just one example of how implementing a risk management tool
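The cut-off guideline (severity of 3 or greater combined with an RPN of 50 or greater) reduces to a simple filter-and-sort. A minimal sketch; the function name and the tuple layout are ours, and the sample failure modes are invented for illustration:

```python
def prioritize(failure_modes):
    """Return potentially critical failure modes, highest RPN first.

    Each item is a (name, severity, occurrence, detection) tuple of 1-5
    ratings. Guideline used: severity >= 3 AND RPN >= 50 is potentially
    critical; the threshold may vary from process to process.
    """
    scored = [(name, sev * occ * det, sev) for name, sev, occ, det in failure_modes]
    critical = [(name, rpn) for name, rpn, sev in scored if sev >= 3 and rpn >= 50]
    return sorted(critical, key=lambda item: item[1], reverse=True)

modes = [
    ("Seal degradation", 4, 4, 4),  # RPN 64, severity 4 -> critical
    ("Cosmetic defect", 2, 5, 5),   # RPN 50 but severity < 3 -> excluded
    ("Coolant leak", 3, 3, 2),      # RPN 18 -> below the cut-off
]
print(prioritize(modes))  # [('Seal degradation', 64)]
```

A Pareto diagram of the full ranked list would then show how the top-ranked modes dominate the total risk.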
Description of FMEA Worksheet

Header fields: System, Subsystem, Component, FMEA Number, Prepared By, FMEA Date, Key Revision Date, Team Lead, Date, Core Team, Page ___ of ___.

Worksheet columns: Item/Function; Potential Failure Mode(s); Potential Effect(s) of Failure; Sev; Potential Cause(s)/Mechanism(s) of Failure; Occ; Current Design Controls; Det; RPN; Recommended Action(s); Responsibility & Target Completion Date; and Action Results (Actions Taken, New Sev, New Occ, New Det, New RPN).

Example entry (Design FMEA):
Item/Function: Coolant containment in product.
Potential Failure Mode(s): Crack/break; burst; bad seal; poor hose connection.
Potential Effect(s) of Failure: Leak (Sev 3).
Potential Cause(s)/Mechanism(s) of Failure: Over pressure; poor hose material (Occ 3).
Current Design Controls: Burst, pressure, and cycle testing (Det 2). RPN = 18.
Recommended Action(s): Test included in validation; prototype and production validation.
Responsibility & Target Completion Date: John Scientist, 2/27/04; Jim Engineer, 5/1/04.
Actions Taken: Installed durable hose material with pressure interlock to prevent over pressure. Validated new design.
Action Results: New Sev 3, New Occ 1, New Det 1, New RPN 3.

Response Plans and Tracking:
Failure modes: Write down each failure mode and the potential consequence(s) of that failure.
Severity: On a scale of 1-5, rate the severity of each failure (5 = most severe).
Occurrence: Write down the potential cause(s) and, on a scale of 1-5, rate the likelihood of each failure (5 = most likely).
Detectability: Examine the current design, then, on a scale of 1-5, rate the detectability of each failure (5 = least detectable). See the Detectability sheet.
Risk Priority Number: The combined weighting of Severity, Occurrence, and Detectability. RPN = Sev x Occ x Det.
Corrective Action: Once the corrective action is implemented, a new RPN is assigned.
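Re-rating after corrective action can be sketched numerically using the worksheet's coolant-hose figures (Sev 3, Occ 3, Det 2 before; Sev 3, Occ 1, Det 1 after). The helper function and its name are ours, not from the article:

```python
def reduction_achieved(old_rpn: int, new_rpn: int, target: float = 0.50) -> bool:
    """Check whether corrective action cut the RPN by the target fraction.

    The article suggests expecting at least a 50% reduction in RPN
    after corrective action has been implemented.
    """
    return (old_rpn - new_rpn) / old_rpn >= target

# Worksheet example: before and after the durable hose + pressure interlock.
old_rpn = 3 * 3 * 2  # 18
new_rpn = 3 * 1 * 1  # 3
print(reduction_achieved(old_rpn, new_rpn))  # True: (18 - 3) / 18 is about an 83% reduction
```

If the check fails, the failure mode goes back to the team for further corrective action, as described above.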
can decrease the potential for quality problems. The next topic will cover establishing risk management for systems that require validation.

User Requirement Specification Procedure

Getting Started

This section of the article will describe how to develop a URS system for direct and indirect impact systems. However, before a detailed URS document can be developed, a system impact assessment process should be performed for each system. Figure 6 is a brief overview of how to perform an equipment impact assessment.

Equipment Impact Assessment

An equipment impact assessment should be performed on any system or equipment before it is purchased, received, installed, commissioned, and validated. However, before URSs and protocols can be developed, a component impact assessment should be performed on that system.

In order to decrease the cost of, and potential delays in, a project, Good Engineering Practices (GEP) should be implemented. The ISPE Baseline Commissioning and Qualification guideline defines Good Engineering Practice as follows: "Established engineering methods and standards that are applied throughout the project life cycle to deliver appropriate cost-effective solutions."2

The proper design and selection of a system can be critical to any manufacturing operation. By implementing GEP, the risk of problems occurring during design and selection can be decreased substantially.

Figure 6: Impact assessment process flowchart (begins with "Develop System Boundaries").

Direct Impact systems are expected to have an impact on product quality, whereas Indirect Impact systems are not expected to have an impact on product quality. Both types of system will require commissioning; however, the Direct Impact
system will be subject to qualification practices to meet additional regulatory requirements of the FDA and other regulatory authorities. See Figure 6 for an outline of the impact assessment process based on the ISPE's Commissioning and Qualification guidelines.

System Impact Procedure

You must first identify the system, and enter the system name and system number on the system impact assessment table (Table 1). This information can usually be obtained from the P&ID drawings or other system documentation.

Complete the system description section with a general narrative of the system and its major components, design, operation, functional capabilities, and critical functions.

Mark up the system P&ID drawing(s) to clearly identify the system boundaries and all components of the system included within the boundary. Specify system boundaries by inserting a horizontal or vertical line at the boundary. These lines should be placed to clearly identify whether or not the adjacent component is part of the system.

To help in establishing the system boundary, utilize the following general guidelines (there may be exceptions to these guidelines):

If the component number of a valve, etc., is labeled as part of the main system being assessed, then it generally will be part of that system.
The control system I/O for a given system will become part of that system.
Disposable flexible piping connectors and/or portable tanks, etc., should not be highlighted as part of the system, and should be noted either on the drawing or in the comments section of the form, so it is clear that they were not highlighted on purpose.

Complete the impact assessment challenge table (Table 1). Use the seven listed challenges to evaluate the system, and place an X in the appropriate Yes or No block.

Classify the system as Direct Impact, Indirect Impact, or No Impact on the system classification line. Complete the system classification rationale section with a brief explanation as to why the classification was assigned. This is to ensure understanding by subsequent reviewers and approvers as to why the classification was chosen.

Attach the P&IDs to the system impact assessment table, fill in the page numbers, and fill in the prepared by and date fields.

Table 1 is a system impact assessment for a nitrogen air distribution system used in a manufacturing process.

Component Criticality Assessment Process

After you have established that a system is direct or indirect impact, you then perform a component impact assessment. This is usually performed after the system impact assessment has been completed and URSs have been developed. The component criticality assessment process requires that the Piping and Instrument Drawings (P&IDs) and the system instrument list be reviewed in detail.

The components within Direct Impact, Indirect Impact, and in some cases No Impact systems should be assessed for criticality. This is suggested to ensure that systems previously judged to be Indirect Impact or No Impact in the early, high-level assessment have not subsequently acquired a critical function as the detailed design has progressed to conclusion.
Applicability of any of the following criteria to a given component indicates that the component is critical:

1) The component is used to demonstrate compliance with the registered process.
2) The normal operation or control of the component has a direct effect on product quality.
3) Failure or alarm of the component will have a direct effect on product quality or efficacy.
4) Information from the component is recorded as part of the batch record, lot release data, or other GMP-related documentation.
5) The component has direct contact with product or product components.
6) The component controls critical process elements that may affect product quality, without independent verification of the control system performance.
7) The component is used to create or preserve a critical status of a system.

Evaluating the criticality of the components within each system, with respect to their roles, will help assure product quality. After the impact assessments have been performed, the qualification phase for the systems can begin. The use of risk assessment methods, as described above, can assist in developing validation protocols that are logically designed to ensure proper qualification of a system.
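Since a component is critical if any one of the seven criteria applies, the assessment reduces to an "any applies" check. A minimal sketch; the criterion strings are paraphrased from the list above, not official field names:

```python
CRITICALITY_CRITERIA = (
    "demonstrates compliance with the registered process",
    "normal operation or control directly affects product quality",
    "failure or alarm directly affects product quality or efficacy",
    "data recorded in batch record, lot release, or other GMP documents",
    "direct contact with product or product components",
    "controls critical process elements without independent verification",
    "creates or preserves a critical status of a system",
)

def is_critical(applicable: set) -> bool:
    """A component is critical if ANY of the listed criteria applies to it."""
    return any(criterion in applicable for criterion in CRITICALITY_CRITERIA)

# Example: a temperature probe whose reading goes into the batch record.
probe = {"data recorded in batch record, lot release, or other GMP documents"}
print(is_critical(probe))  # True
print(is_critical(set()))  # False: no criteria apply
```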
Table 1: Impact Assessment Challenge Table

1. Does the system have direct contact with the product (e.g., air quality) or direct contact with a product contact surface (e.g., CIP solution)? [X]
2. Does the system provide an excipient, or produce an ingredient or solvent (e.g., water for injection)? [X]
3. Is the system used in cleaning, sanitizing, or sterilizing (e.g., clean steam)? [X]
4. Does the system preserve product status (e.g., nitrogen purge for air-sensitive products)? [X]
5. Does the system produce data that is used to accept or reject product (e.g., electronic batch record system, critical process parameter chart recorder, or release laboratory instrument)? [X]
6. Is the system a process control system (e.g., PLC, DCS), or does it contain a process control system that may affect product quality, with no system in place for independent verification of control system performance? [X]
7. Is the system not expected to have a direct impact on product quality, but does it support a Direct Impact system? [X]
System Classification (Direct Impact, Indirect Impact, or No Impact): This system was classified as Direct Impact because it meets the requirements of the above risk assessment criteria.

System Classification Rationale: The function of the nitrogen air distribution system is to provide a continuous nitrogen overlay of the product. Since the nitrogen overlay preserves product status, the system impact is considered Direct: a problem with nitrogen quality will have a direct impact on product quality.
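The challenge-table logic can be sketched as a classification function. The decision rule below (any "yes" to challenges 1-6 makes the system Direct Impact; otherwise a "yes" to challenge 7 makes it Indirect Impact; otherwise No Impact) is our reading of the table, not stated verbatim in the article:

```python
def classify_system(answers: list) -> str:
    """Classify a system from its seven yes/no impact-challenge answers.

    answers[0..5] -> challenges 1-6 (direct links to product quality)
    answers[6]    -> challenge 7 (supports a Direct Impact system)
    Assumed rule: any yes on 1-6 => Direct Impact; otherwise a yes on
    challenge 7 => Indirect Impact; otherwise No Impact.
    """
    if len(answers) != 7:
        raise ValueError("expected answers to all seven challenges")
    if any(answers[:6]):
        return "Direct Impact"
    if answers[6]:
        return "Indirect Impact"
    return "No Impact"

# Nitrogen overlay system: preserves product status (challenge 4 = yes).
print(classify_system([False, False, False, True, False, False, False]))  # Direct Impact
print(classify_system([False] * 6 + [True]))  # Indirect Impact
print(classify_system([False] * 7))           # No Impact
```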
Table 2: Component Impact Assessment
…maintenance tasks and schedule. Identify any additional maintenance requirements to ensure that the equipment continues to operate as required.

Requalification Requirements: Describe requalification requirements to ensure that the equipment remains in a validated state.

System Requirements Definition Section

Identify the specific attributes that are necessary for the equipment to satisfy the requirements of its intended use. Provide acceptance criteria and acceptable ranges that can be verified, to document that the equipment is appropriate for its use and capable of functioning reliably, as required. This section provides the basis for qualification protocols, and for ongoing maintenance and calibration procedures. List only those characteristics that will provide specific evidence relevant to the equipment's intended use. Include the following requirements, as appropriate:

Procurement: Identify any special shipping, delivery, preliminary testing, certification, or other requirements for acquisition of the equipment, as necessary.
Installation: Identify requirements for installation, operating environment, and support utilities. Indicate any qualification testing and/or documentation required for utilities or peripheral equipment prior to installation of the subject equipment.
Operation: List the critical operating parameters and ranges, capacity requirements, etc., that are required for the intended function. Do not include measures that do not affect the required functionality of the equipment.
Performance: Identify measurable products or results that are required when operating the equipment under expected conditions. Include operating limits and ranges, and worst-case scenarios that may be encountered during normal use.
Safety Features & Controls: Identify safety features and controls that the equipment and installation must supply.
Instrumentation, Operating Controls, and Peripherals: Identify the required instrumentation, control components, and peripheral equipment that monitor and control the equipment. Provide necessary operating ranges, sensitivity, and calibration requirements.
Consumables: Identify consumables required for operation of the equipment, and whether they are supplied by the manufacturer or the user.
Documentation: List the documentation that will be supplied with the equipment, and that must be created by the company or vendors. Include manuals, factory acceptance tests, site acceptance tests, commissioning documents, materials of construction, parts lists, drawings, government inspections, certificates, SOPs, etc.
Training: Indicate training requirements for operators and maintenance personnel. Identify any special certification, educational, or physical requirements for operation or maintenance of the equipment.

Systematic Risk Assessment for System Qualifications

The risk assessment section discusses the potential impact on cGMP operations associated with use of the equipment, and the steps that will be taken to reduce those risks. Identify conditions that could lead to failure of the equipment, and the effects of failure on cGMP operations. Evaluate the degree of risk to product quality, company operations, and safety of personnel and equipment. During the risk assessment, it is important to perform an impact assessment on the system. Impact assessment is the process by which the impact of the system on product quality, and the critical components within the system, are evaluated. The risk assessment for systems should fall within three categories: direct product impact, indirect product impact, and no direct product impact.

By performing a design impact assessment, companies can reduce the scope of the systems and components subject to qualification, allowing appropriate focus to be placed on the components that may present a potential risk to the product.

The following is one example of how applying risk assessment to a validatable system can be beneficial in developing a scientific rationale and justification for selecting the different types of qualification needed to support a system. Summarize risks and associated controls in an impact/complexity analysis, as follows:

Impact Analysis: Rate the impact of the equipment on product quality, safety, and purity, and on the safety of personnel and equipment. Evaluate the systems in place to control those risks.

Complexity Analysis: Describe the technological risks and controls associated with the equipment. The complexity analysis evaluates the risk of failure due to the technical sophistication of the equipment, and the relative difficulty of maintaining the equipment in a state of control.
Table 3: Validation Requirements
Risk Score: This section is a calculation used to evaluate the overall risk of the equipment, by combining the individual impact and complexity scores in the following formula:

(A + B) x (C + D)

…and time in the long run. Most project cost overruns and delays can be attributed to not performing Good Engineering Practices and risk assessment at the beginning of a project. Also, implementing a risk assessment program within a firm's Quality Function will help ensure that the final product quality is achieved.
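The risk-score formula can be sketched numerically. The article does not define A, B, C, and D beyond "individual impact and complexity scores," so the assignment below (A and B as impact sub-scores, C and D as complexity sub-scores) is an assumption for illustration:

```python
def risk_score(impact_a: int, impact_b: int, complexity_c: int, complexity_d: int) -> int:
    """Overall equipment risk per the formula (A + B) x (C + D).

    Assumption: A and B are the two impact sub-scores, C and D the two
    complexity sub-scores; the article leaves the labels undefined.
    """
    return (impact_a + impact_b) * (complexity_c + complexity_d)

print(risk_score(3, 4, 2, 5))  # (3 + 4) * (2 + 5) = 49
```

A higher score would then drive a more rigorous set of qualification activities (per Table 3).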
…Effective Validation Program for Pharmaceutical, Biotechnology, and Medical Device Industries (RA 776) at San Diego State University (SDSU) for their Regulatory Affairs Master's degree program. Currently, he is the CEO of Validation Technologies, Inc., a nationwide validation services company.