Complex Systems Concurrent Engineering:
Collaboration, Technology Innovation and Sustainability

Geilson Loureiro, PhD, MSc, BSc
Laboratory of Integration and Testing (LIT)
Brazilian Institute for Space Research (INPE)
São Paulo 12227-010, Brazil

Richard Curran, PhD, BEng
Centre of Excellence for Integrated Aircraft Technology (CEIAT)
Northern Ireland Technology Centre
Queen's University Belfast
Belfast BT9 5HN, Northern Ireland
It is our great pleasure to welcome you to the proceedings of the 14th ISPE
International Conference on Concurrent Engineering, CE2007, held at an
impressive complex systems development facility, the Laboratory of Integration
and Testing (LIT, http://www.lit.inpe.br) of the Brazilian Institute for Space
Research (INPE, http://www.inpe.br) in São José dos Campos, SP, Brazil. The
previous events were held in Antibes-Juan les Pins, France (CE2006); Dallas,
Texas, USA (CE2005); Beijing, China (CE2004); Madeira Island, Portugal
(CE2003); Cranfield, UK (CE2002); Anaheim, USA (CE2001); Lyon, France
(CE2000); Bath, UK (CE99); Tokyo, Japan (CE98); Rochester, USA (CE97);
Toronto, Canada (CE96); McLean, USA (CE95); and Pittsburgh, USA (CE94).
CE2008 and CE2009 are planned for Northern Ireland, UK, and Taiwan,
respectively.
The CEXX conference series was launched by the International Society for
Productivity Enhancement (http://www.ispe-org.net) and has constituted an
important forum for international scientific exchange on concurrent engineering.
These international conferences attract a significant number of researchers,
industrialists and students, as well as government representatives, who are
interested in recent advances in concurrent engineering research and
applications. Concurrent engineering is a well-recognized engineering approach
for productivity enhancement that anticipates all product life cycle process
requirements at an early stage of product development and seeks to architect the
product and its processes in as simultaneous a manner as possible. It works via
multi-functional, multi-discipline and multi-organization team collaboration.
The theme of this CE2007 proceedings book is complex systems development,
focusing on innovative product and process solutions that require collaboration in
architecture, design, implementation and build in order to deliver sustainable value
to stakeholders. Concurrent engineering methods, technologies and tools are well
established for the development of parts and for enhancing the productivity of
isolated product life cycle processes (e.g., manufacturing or assembly). However,
nothing prevents us from exploiting the potential of the concurrent engineering
concept for complex systems development.
The development of complex systems (e.g., automobiles, aeroplanes, spacecraft,
space vehicles and launchers) requires the collaboration of many organizations
around the globe. It is therefore necessary to expand current collaborative
engineering and management concepts from the already traditional
multidisciplinary collaboration to multi-cultural and multi-organizational
collaborations. The CE2007 proceedings book offers you the opportunity to keep
track of the latest trends in knowledge and collaboration engineering and
management.
vi Preface
Local Chair: Clovis Solano Pereira, Brazilian Institute for Space Research
(LIT/INPE), Brazil
Track: Systems
Track: Information
- Rose Dieng, The French National Institute for Research in Computer Science and
Control, France
xii Organization
Communications
Editorial Review
Exhibits
Sponsors
Publication
Plenary Sessions
Technical Tours
Registration
Conference Management
Webmaster
Knowledge Oriented Process Portal for Continually Improving NPD ................. 451
Andrea Padovan Jubileu, Henrique Rozenfeld, Creusa Sayuri Tahara
Amaral, Janaina Mascarenhas Hornos Costa, Marcella Letícia
de Souza Costa
Knowledge Sharing and Reuse in Potential Failure Mode and Effects
Analysis in the Manufacturing and Assembly Processes (PFMEA) Domain....... 461
Walter Luís Mikos, João Carlos Espíndola Ferreira
1 Introduction
Concurrent Engineering is a relatively "young" (25 years old) multidisciplinary
domain of interest, but the time has come to convert it from a "best
practices"-oriented engineering methodology into a smart Distributed Concurrent
& Collaborative Engineering Science (DCCE).
Web science and Internet technology provide enterprises (large enterprises, SMEs
and µSMEs) with better and more flexible ways to fully satisfy their customers on
the global e-market. Enterprises are increasing their efforts to improve their
(intra- and inter-organizational) business processes in order to reach a more
1 Professor, PhD Eng., University Politehnica of Bucharest, Faculty of Automatic Control and
Computers, 313 Splaiul Independentei, Bucharest, Romania; Tel: +40 21 3113242; Fax: +40
21 3113241; Email: ams@cpru.pub.ro; http://www.pub.ro
4 A.M Stanescu, I Dumitrache, M. Pouly, S.I Caramihai, M.A Moisescu
competitive level, as well as to become more agile actors [Loss, Rabelo,
Pereira-Klen 2005a].
Available estimates indicate that SMEs are the main employers in Europe,
representing 60% of the job market [5]. It is also estimated that industrial
subcontractors represent roughly 20% of all industrial jobs.
The 1st of January 2007 marked the beginning of the 'new wave' of European
Union enlargement (to 27 countries), with the accession of Romania and Bulgaria.
In the economies of the newly integrated members, the contribution of SMEs is
even more striking (e.g., 92% of the Romanian economy). The challenges of
developing a sustainable digital and global economy by using advanced ICT
platforms are the key drivers of the knowledge-based economy. The academic
research community has received this 'message' since the 1980s. In a huge effort
to fill the gap between remarkable scientific achievements and real socioeconomic
needs, a "new requirements list" has been developed during the last three decades.
Although many reference models, standards, frameworks and IC technologies
have been created, 'Enterprise Interoperability' (successful IST-FP6 projects:
ATHENA, ECOLEAD, INTEROP [9, 10, 11], among others) still represents a
key problem to be fully solved in the years to come. The main paradigm that
concerns many authors in their present research is the 'Virtual
Organization'/Collaborative Networks [1]. Taking the international research
context into consideration, one can notice that the "concurrent engineering" (CE)
paradigm, which has developed rapidly since 1982, has to be re-balanced from an
advanced methodology used in engineering science (including methods, tools and
techniques) into a foundation for collaborative sciences.
The paper is structured in the following sections: Section 2 is concerned with a
General Systems Theory (GST) oriented framework; Section 3 deals with a new
approach to the ICT infrastructure architecture of collaborative platforms
supporting today's geographically distributed Concurrent Engineering (DCE);
Section 4 provides a case study (the REMEDIA project).
S^eCE(·) = { M_VIEWS, M_PLATF, A, R, G }
3 Solution Architecture
6 Conclusions
The paper reports the synthesized results of seven years of long-term research,
including the "FABRICATOR" IST-FP6 Vision & Roadmap for Virtual
Organization and the Education & Research Ministry-funded project
"Interoperability-Based 'REMEDIA' Environment-Health" (2006-2008).
The present paper supports the following key conclusion: General Systems
Theory could play the role of a "centrifugal force" for DCCE.
The following issues are to be debated during the conference:
- Information system development to solve the problem of integrating Business
Process Monitoring (collaborative P2P co-research platform)
Anderson Levati Amoroso a,1, Petrônio Noronha de Souza b and Marcelo Lopes de
Oliveira e Souza b

a Assistant Professor, Electrical Engineering Course, Pontifícia Universidade
Católica-PUC, Curitiba, PR, Brazil.
b Professor, Space Mechanics and Control Course, National Institute for Space
Research-INPE.
Abstract. In the past, space design activities mainly emphasized the system
requirements, and the available methods focused on functional aspects only; the
cost was estimated at the end of the process. Recently, new design methodologies
have been proposed. Among them, the design-to-cost (DTC) method was
developed to include cost as a parameter. In this work we propose an extension of
the DTC method for systems analysis and specification; in other work we applied
it to the design of a reaction wheel. Its basic components are described, and
general information on system development is presented. The object-oriented
approach is used as a modeling tool, performance indexes are defined, and a set of
algebraic equations describes the cost behavior as a function of system
parameters. Reliability is evaluated secondarily. The proposed model embodies
many approaches to solve and/or optimize a problem at any level of complexity.
1 Introduction
Reality shows that aspects not directly tied to the performance of equipment can
play an equal or, in certain cases, greater role than the performance itself. This led
to the development of design techniques in which aspects such as cost are treated
as design constraints, rather than as something to be evaluated at the end of the
design. This
1 Assistant Professor, Electrical Engineering Course, CCET, Pontifícia
Universidade Católica-PUC, Curitiba, PR, Brazil. Rua Imaculada Conceição,
1155 - Prado Velho - Curitiba - PR - CEP: 80215-901 - phone: (41) 3271-1515.
Email: anderson.amoroso@pucpr.br; http://www.pucpr.br/educacao/academico/
graduacao/cursos/ccet/engeletrica/index.php
12 A. Amoroso, P. Souza and M. Souza
fact brought about a reordering of priorities: in this new scheme, the performance
obtained becomes the achievable one that still ensures the technical success of the
mission, rather than the desired one, which could be unacceptable due to its cost
[1-2].
In this work we present a method for systems analysis and specification with
performance, cost and reliability requirements. In other work it is applied in
particular to reaction wheels intended for microsatellites, with emphasis on the
electronic control system. It is desirable that actuators of this nature have the
following performance characteristics: high efficiency, low power, good dynamic
response, long useful life, and the possibility of a highly integrated
implementation.
Besides the performance requirements, the system must also satisfy cost and
reliability criteria, quantities usually evaluated after the conclusion of the project.
For that, the method to be presented is an extension of the method known as
"design-to-cost" (DTC) [3], where the performance and reliability requirements
are "negotiated" during the project as functions of the limitations previously
imposed on its cost [4-5]. This work is the first part of a larger previous work [13].
NASA adopts a design cycle based on phases that organize the activities of a
project in a logical sequence of steps [3, 6]. This systematic method defines a
series of progressive steps with well-defined goals that lead to the realization of all
objectives of the project. A summary of the project phases and their respective
objectives is shown in Table 1. The activities in Pre-Phase A, Phase A and Phase
B are called the project formulation phases, since the emphasis is on requirements
analysis, project planning, concept definition and demonstration of realization.
Phase C, Phase D and Phase E are called the implementation phases, because the
operational software and hardware are designed, fabricated, integrated and put
into operation.
[Figure: two block diagrams contrasting the traditional method, in which design
requirements determine the system and its cost given the required performance,
and the DTC method, in which design objectives lead to a system whose cost and
performance are acceptable.]
Figure 1. Comparison of design methods. Adapted from [3].
The DTC method helps project teams make decisions based on more precise
information about how the technical performance parameters and design attributes
affect the cost.
Although the DTC method may be regarded as a constrained optimization
problem, it is not always approached by conventional analytical or computational
methods, since the functions that describe it often cannot be found. Instead, it is
up to the project teams to search for and negotiate possible alternatives to build a
system.
The DTC model is formed by the integration of various models and tools,
including cost models, subsystem performance models, reliability models, and
analysis and decision tools.
The DTC model is initially populated with equations and parameter values that
describe a preliminary implementation, the base model, common to the project
teams. At first this model may include only basic and elementary descriptions of
the project. The detail increases in each phase, together with the understanding of the
designers. These details include technical information - typically mass, power and
reliability, all related in an equipment list - and performance equations. The next
step consists of establishing equations that show the inter-relations among
performance variables and cost equations. The costs must be expressed by
equations that reflect their relations with design attributes, structured in such a
way that they can express cost gradients - how the cost varies when performance
attributes are altered.
Once the base model is implemented, we can begin the iterative process of system
optimization. An increase in costs that violates the restrictions leads the team to
reject the proposed change or to search for compensation in another subsystem. In
this process, when an alternative implementation is found, it becomes the new
base model.
Summarizing, the DTC model shall capture the objectives and knowledge of the
system and the associated cost information, and shall be capable of making
reliable cost projections for alternative implementations. Therein lies the major
difference with respect to traditional design methods. As for the design
implementation phase, they follow the DTC method presented previously.
2.2.1 Modeling of Cost
Cost modeling has been used to analyse the feasibility of a proposal. However,
this use becomes inadequate for decision making when treating complex systems.
The modeling shall not have the objective of deciding what or how we should do
something. On the contrary, it shall give a deep comprehension of the methods
used and the data involved, and it shall be sufficiently flexible to estimate the cost
in all phases of a project.
Cost can figure as an engineering parameter that varies with physical parameters,
with technology and with management methods.
Experience shows that models more refined than the specific cost (cost per unit
mass) are needed. This led to the estimation of costs using parametric models,
based on physical, technical and performance parameters. Starting from a
historical data base, we define coefficients and expressions that show how the cost
of a system or subsystem varies as a function of the characteristic parameters.
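Such a parametric model can be sketched as fitting a power law to a historical data base; the data points and the single-parameter (mass) form below are hypothetical:

```python
import math

# Hypothetical historical data base: (mass in kg, cost in monetary units)
history = [(20.0, 3.1), (45.0, 5.6), (90.0, 9.0), (150.0, 13.2)]

# Fit log(cost) = log(a) + b*log(mass) by ordinary least squares,
# i.e. the parametric model cost = a * mass**b.
xs = [math.log(m) for m, _ in history]
ys = [math.log(c) for _, c in history]
n = len(history)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

def estimated_cost(mass):
    """Parametric cost estimate for a new system of the given mass."""
    return a * mass ** b
```

Real parametric cost models use several characteristic parameters and adjustment factors; the one-variable fit above only illustrates how coefficients are derived from historical data.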
Other adjustment factors correct uncertainties about the level of development of a
given technology. The worst case occurs when new technologies are introduced
with which the design teams have no familiarity.
Another criterion refers to the risk of employing a technology as a function of its
degree of qualification. New technological resources tend to increase the cost
when their use in special conditions is less well determined.
A risk analysis treats the uncertainties that can jeopardize the objectives of a
system. Two sources are considered: uncertainty in the estimation of costs, and
growth of cost due to unexpected technical difficulties.
All the mentioned criteria are usually approached through an integrated analysis,
called the Concurrent Engineering Methodology, in which technical specialists
and cost analysts cooperate in mapping and interconnecting all points that affect
the cost and the performance of a complex system. They rely on valid and flexible
A Method for Systems Analysis 15
statistical analyses originating from models and data bases that are updated with
the advancement of technology and acquired experience [3, 8].
The "frames" were introduced to permit the expression of the internal structures of
objects while maintaining the possibility of representing inheritance of properties,
as in semantic networks. In general, a frame consists of a set of attributes that,
through their values, describe characteristics of the object represented by the
frame. The values given to these attributes can come from other frames, creating a
network of dependence among the frames. Frames are also organized in a
specialization hierarchy, creating another dimension of dependence among them.
The attributes also have properties that relate to the type of values and to the
restrictions on the numbers that can be associated with each attribute [11].
3.1 Introduction
The method presented here intends to provide elements that help the
decision-making process during the entire cycle of a project. Object-based
analysis was incorporated into this method because it has a direct correspondence
with the physical elements that constitute the system, which makes it clearer and
more practical than a functional analysis. The objects are identified through
information based on bibliographical searches, simulations and experiments. The
proposed tool
presents two main parts: a global analysis and a specific analysis. The first treats
the object to be acquired/designed as a single element characterized by its
attributes. The second constitutes a refinement of that object.
The most comfortable and safest way to obtain a quality product is to acquire it
from a manufacturer. The method presented in this section offers a solution to the
problem of choosing a product among similar ones. Starting from the top
hierarchical level, a frame is built with a list of attributes of the product, with
information obtained from different companies. These attributes are then filled
with the values desirable for the realization of the objectives of the mission,
resulting from a preliminary study. Such data constitute the initial base model.
Commercial models with characteristics similar to the base model are then
juxtaposed.
A comparative analysis shall determine which models available on the market are
compatible with the necessities of the mission. For this, the values of the
attributes are normalized (Equation 1): the nominal value of each attribute (Vn) of
a commercial model is expressed as a relative value (Vr) with respect to the value
of that variable in the base model (Vb).

Vr = (Vn / Vb) × 100    (1)
To indicate the variation or dispersion of the n relative values of a commercial
model (except cost) around the base value of 100, a deviation is defined by
Equation 2, inspired by the calculation of the standard deviation:

Deviation = √[ Σk (Vrk − 100)² / n ],  k = 1, …, n    (2)
We can also assign a weighting factor fk to each difference, to express the
relevance of one datum with respect to the others according to the objectives of
the project, as in Equation 3:

Deviation = √[ Σk fk (Vrk − 100)² / n ],  k = 1, …, n    (3)
In this way, the greater the deviation of a commercial model, the greater will be its
dissimilarities with respect to the ideal model.
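Equations (1) to (3) can be sketched as follows, with illustrative attribute values:

```python
import math

def relative_values(nominal, base):
    """Eq. (1): Vr = (Vn / Vb) * 100 for each attribute."""
    return [100.0 * vn / vb for vn, vb in zip(nominal, base)]

def deviation(relative, weights=None):
    """Eqs. (2) and (3): sqrt of the (weighted) mean of (Vr_k - 100)^2."""
    n = len(relative)
    if weights is None:
        weights = [1.0] * n          # Eq. (2) is the unweighted case
    return math.sqrt(sum(f * (vr - 100.0) ** 2
                         for f, vr in zip(weights, relative)) / n)

base = [2.0, 10.0, 0.5]        # base-model attribute values (illustrative)
commercial = [2.2, 9.0, 0.5]   # a commercial model's attribute values
vr = relative_values(commercial, base)  # ~[110, 90, 100]
d = deviation(vr)              # dissimilarity to the ideal model
```

A commercial model with d closest to zero would be the most similar to the base model; weighting lets the team emphasize the attributes most relevant to the mission objectives.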
The costs associated with each model are represented as a fraction (1/1000) of a
monetary unit and are assumed invariant in time. The cost of the base model is a
previously stipulated value that will serve as one of the acceptance parameters of
the project. It is desirable to establish a tolerance level above which the project is
rejected immediately.
With this global analysis we intend to provide better support to the design teams
in choosing between a system and its equivalents in a rapid and systematic way.
In discrepant cases, or in the presence of limiting factors such as cost, the
feasibility of partial or total realization/acquisition of the object can be analysed
with greater rigor by partitioning it into smaller objects. The refinement of the
base model will furnish data for a new comparison. Taking the base model as
reference, we proceed with the identification of the objects that constitute the
system and the arrangement of the attributes of these objects in frames. Over the
attributes, concepts and indexes are defined for the evaluation of performance,
cost and reliability of these inferior objects and, therefore, of the product in
question. All these parameters are related in such a way that the system can be
evaluated and optimized. Since the attributes of the smaller objects do not have
the same values as the base model, they come to constitute the current model. The
current model presents the same attributes as the base model, though with distinct
values; their values are filled in as the inferior objects are built.
3.3.1 Frames of the Inferior Objects
In this step the main objects of the system are made explicit through their
respective frames. The quantity and specificity of the selected objects depend on
the design team. The attributes can be expressed as quantitative or qualitative
variables. To each value of an attribute, a numerical concept C is associated. This
concept is assigned by a specialist (example in Table 1). The function of this
concept is to characterize the attributes with respect to the objectives of the
system. The numerical range adopted for the representation of concepts expresses
the specialist's level of knowledge of the object in question.
S = (∂c/∂O) · (O / c)    (3.6)
In this way, the value of S can be used to determine a change per unit in c due to a
change per unit in O. For example, if the sensitivity of cost relative to an attribute is
5, then an increase of 1% in O results in an increase of 5% in the value of c.
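As a numeric check of Eq. (3.6), the sensitivity can be estimated by a finite difference on an illustrative cost function (the power law below is an assumption chosen so the exact answer is known):

```python
def sensitivity(cost, O, h=1e-6):
    """Dimensionless sensitivity S = (dc/dO)*(O/c), by central difference."""
    c = cost(O)
    dc_dO = (cost(O + h) - cost(O - h)) / (2.0 * h)
    return dc_dO * O / c

cost = lambda O: 4.0 * O ** 5   # illustrative cost law: c grows as O^5
S = sensitivity(cost, 2.0)      # ~5: a 1% increase in O gives ~5% in c
```

For any power law c = a·O^b the sensitivity is exactly b, which matches the 1%-to-5% reading given in the text.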
3.3.5 Reliability
The failure rate model of an object is obtained from the manufacturer or
determined experimentally. The expected useful life of a system is fundamentally
determined by the environmental conditions of use and by its topology.
Investment in quality planning can contribute to obtaining a reliable system with
minimal redundancy, which can diminish the production cost.
Reliability was inserted into the proposed model as an attribute: the useful life of
each object. Although this concept is not adequate for electronic systems, it was
chosen for the sake of uniformity. The correlation matrix determines which
parameters have some relation with the useful life of the object. From that, the
design and test phases can be better planned.
3.3.6 Updating the Current Model
Starting from the frames of the formed objects and the tables of performance, cost
and reliability, the current model is structured. The attributes of the current model
are extracted directly from the attributes of the smaller objects, or obtained
through simulations and/or experiments with the data of these objects. Having one
or more candidates for the current model resulting from the specific analysis, we
submit them to a global analysis to select a new base model. The function of this
procedure is not to provide a definitive answer, but to clarify doubts, raise
questions and point out possible solutions.
The procedures described in the specific analysis need not be followed in the
order presented. The flexibility of the implemented model shall permit alterations
at any instant. Interactivity shall always be valued.
4 Conclusions
In this work a method was presented for the analysis and specification of systems
with performance and cost requirements, according to the DTC model. This
method was then applied to reaction wheels in another paper. A flexible tool was
proposed that aggregates different modes of treatment and modeling of complex
systems with the same objectives. It is capable of assembling information of
diverse kinds and treating it globally. Like other methodologies, it presents
advantages and disadvantages. The initial implementation can be difficult, for
financial, organizational or human reasons. For this reason, this work drew on the
most varied approaches and used the best parts of each, without deepening the
concepts involved. This work is the first part of a larger previous work [13].
[Figure: flowchart of the proposed method. Global analysis: from the target
object and its objectives, a base model is defined and candidate models are
juxtaposed; operations of normalization and determination of deviations feed the
analysis. Specific analysis: the object is partitioned into inferior objects, which
are analyzed with parametric cost models, producing the current model.]
5 References
[1] Renner U, Lübke-Ossenbeck B, Butz P. TUBSAT, low cost access to space technology.
In: Annual AIAA/USU Conference on Small Satellites, 7., Utah, 1993. Proceedings.
[2] Smith JL, Wood C, Reister K. Low-cost attitude determination and control for small
satellites. In: Annual AIAA/USU Conference on Small Satellites, 7., Utah, 1993.
Proceedings.
[3] Wertz JW, Larson WL (eds.). Reducing space mission cost. Torrance: Microcosm
Press; Kluwer Academic, 1996.
[4] Ertas A, Jones J. The engineering design process. New York: John Wiley & Sons,
1993. 345p.
[5] Cross N. Engineering design methods. New York: John Wiley & Sons, 1993. 159p.
[6] NASA. Systems engineering handbook for in-house space flight projects. Virginia:
Langley Research Center, 1994.
[7] Taguchi S. Utilize a função perda de qualidade. Controle da Qualidade, May 1999;
vol. 8, 84:80-83.
[8] Bandecchi M, Melton B, Ongaro F. Concurrent engineering applied to space mission
assessment and design. ESA Bulletin, Sept. 1999; 7:34-40.
[9] Lerner EJ. Reliable systems: design and tests. IEEE Spectrum, Oct. 1981; vol. 18,
10:50-55.
[10] Turine MAS. Fundamentos e aplicações do paradigma de orientação a objetos
[transparencies]. Universidade de Cuiabá (UNIC), Cuiabá, March 1998. 30
transparencies, 25 × 20 cm.
[11] Bittencourt G. Inteligência artificial: ferramentas e teorias. Florianópolis: Ed. da UFSC,
1998. 362p.
[12] Sedra AS, Smith KC. Microelectronic circuits. New York: Saunders College
Publishing, 1991. 1054p.
[13] Amoroso AL. A method of system analysis and specification with requirements of
performance, cost, and reliability applied to reaction wheels. Master Dissertation
(INPE-7517-TDI/730). São José dos Campos, SP, Brazil: INPE, October 1999
(CAPES).
Guidelines for Reverse Engineering Process Modeling
of Technical Systems
1 Overview
In methodological terms, Brazilian companies have made little effort to promote
innovation in a systematic way [7]. This indicates the need to provide better
support to companies in obtaining information and an understanding of their
products and technologies.
Many authors consider the conception generation process essential to innovation,
because it decouples the design problem from the known solutions through a
process of abstraction, leading to more possibilities for innovation. In spite of its
importance, this process is not carried out effectively by designers. This happens
partially due to the abstract nature of its activities - mainly in functional modeling
- and the difficulty of manipulating generically valid functions (basic operations)
to represent the product.
1 Graduate Student in Mechanical Engineering, Federal University of Santa Catarina
(UFSC); NeDIP – Núcleo de Desenvolvimento Integrado de Produtos; Departamento de
Engenharia Mecânica, Bloco B, Campus Universitário, Caixa Postal 476, Florianópolis-SC,
Brazil; Tel: +55 48 37219719; E-mail: ivojr@nedip.ufsc.br; http://www.nedip.ufsc.br
24 I. R. Montanha Jr., A. Ogliari and N. Back
2 Literature Review
In the conceptual design phase, the TS conceptions are developed through the
activities: functional modeling, solution principles and product conceptions
generation. The main functional modeling approaches are: functional deployment
[13]; axiom-based synthesis [18] and the function-means tree [17].
When the function structure of the TS is defined, the process of generation of
solution principles starts, where the functions are usually listed in a morphological
matrix. Solution principles are proposed for each function and, after that, the
solution principles are combined, generating the product conceptions.
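The combination step just described can be sketched as a Cartesian product over a morphological matrix; the functions and solution principles below are hypothetical:

```python
from itertools import product

# Hypothetical morphological matrix: solution principles listed per function.
matrix = {
    "store energy":   ["battery", "flywheel"],
    "sense attitude": ["sun sensor", "star tracker", "magnetometer"],
    "actuate":        ["reaction wheel", "magnetorquer"],
}

# Each product conception assigns one solution principle to every function.
conceptions = [dict(zip(matrix, combo)) for combo in product(*matrix.values())]
len(conceptions)  # 2 * 3 * 2 = 12 candidate conceptions
```

In practice most of the combinatorial candidates are infeasible, so the design team filters the product of the matrix rather than enumerating it exhaustively for large systems.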
The conception process demands a significant capacity for abstraction and an
accurate definition of the functions. In order to support the acquisition of this
information, the use of RE is proposed, as shown in the next section.
[Figure content, reconstructed:
- Information sources of the reverse engineering: Internet and specific
magazines; automobile events (fairs and salons); physical analysis of
(disassembled) vehicles.
- Main activities of the reverse engineering: analysis of relevant pieces of
information; taking pictures of vehicles and commenting on their main
attributes; analysis of positive and negative points of the vehicle; vehicle
disassembly and registering of systems and components; comparative analysis
of systems and components (analyzed vehicles against a similar internal
vehicle).
- Main results of the reverse engineering: e-mails (bulletin); cost-reduction
proposals; corporate teardown database; physical storage of disassembled
vehicles.]
Figure 1. General view of the reverse engineering process of the visited company [10]
The left side of Figure 1 presents the main information sources of the RE process
in the company. The RE activities are essentially related to the analysis of
information from the Internet and events, tests of the assembled vehicle (as
purchased), and analysis of the disassembled vehicle (teardown).
From the sources and activities, the results of the RE process are generated (right
side of Figure 1). One important publication is the book of the vehicle, a report
with all the results of the entire RE process; this information is inserted into the
corporate teardown database. The main information of the RE process is
introduced into the company's cost-reduction ideas database, in order to optimize
the design and redesign of the company's vehicles.
Although the visited company has a formal and well-understood RE procedure -
which is not seen in most Brazilian companies - the vehicle functions are not
considered in the TS study. The functions should support the planning of new
versions, because the designers can find new ways to satisfy the functions,
increasing the possibilities for innovation.
- Publication analysis (magazines, catalogs, sites, merchandising, etc.)
- Events analysis, related to the TS market and technologies
The goal of the Planning and Purchasing phase is to plan the activities of the RE
process and to orient the designers to purchase the right TS to be analyzed. The
main results of this phase are the project plan of the RE process and the purchase
of the TS. The Technical System Analysis phase aims at the analysis of the
purchased TS, in order to obtain information which can be used in future designs
and redesigns. The main results are: a list of components and materials, the TS
description and information about technical performance.
Another distinctive result of this research is the proposition of means to identify
the TS function structure, with the respective solution principles. These means are
based on the methods of Section 2.2 (FAST, SOP, Force Flow and the approach
of [6]) and the support methods for the RE process (Section 2.4 - value analysis,
the AHP method, interface analysis, etc.).
For example, in [6], a TS is analyzed from its technical drawing, highlighting its
mechanisms and principles. After that, the main functions carried out by the TS
are identified, as well as the energy, material and signal flows. Then the main
functions of the TS are represented in a block diagram, in a systematic process of
abstraction from the real system to its functions.
From the identification of the functions of the TS under analysis, alternatives
for the function structure can be suggested. The designers can innovate in the
insertion, combination and removal of functions, as well as in proposing
alternative solution principles for each TS function. In this phase, parallel
activities of publication analysis (books, magazines, websites, etc.) and event analysis are
also considered, both related to the market and technologies of the company.
Finally, in the Redesign Orientation phase, the goals and requirements of the
TS redesign are defined, indicating which subsystems should be optimized. Then,
orientations and means are proposed to compare the TS function structure against
those of similar competitors, guiding the function synthesis process (Figure 3).
The objective of such a comparison is to visualize the function structures of the
TS taken as references for the TS redesign. The function structure of the TS must
be deployed into subsystems, where each subsystem can be analyzed (the internal TS
against similar competitors), considering: its total number of functions; how they are
connected (ways of interaction); the sequence of operations/processes of the TS,
represented by the functions; the flows of energy, material and signal among the
functions; etc. From this, the RE team can optimize the function structure to be
considered in the design. Adequate methodological approaches were not identified
for this application and will be developed in this research study.
The optimized function structure is inserted into the morphological matrix for
the generation of new conceptions. The main results of this phase are: the redesign
goals, a comparative analysis of the function structures, the optimized function
structure of the TS, and the attributes of the new versions of the TS conceptions.
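The generation of new conceptions from a morphological matrix can be sketched as follows; the functions and solution principles used here are hypothetical examples, not taken from the paper:

```python
# Sketch of conception generation from a morphological matrix:
# each conception combines one solution principle per function.
# The functions and solution principles below are hypothetical.
from itertools import product

morphological_matrix = {
    "store energy":    ["battery", "spring", "flywheel"],
    "transmit motion": ["gear pair", "belt", "linkage"],
    "control speed":   ["mechanical governor", "electronic controller"],
}

functions = list(morphological_matrix)
conceptions = [dict(zip(functions, combo))
               for combo in product(*morphological_matrix.values())]

print(len(conceptions))   # 3 * 3 * 2 = 18 candidate conceptions
print(conceptions[0])
```

In practice the RE team would then filter these combinations for compatibility and rank the survivors, e.g. with the AHP weights from the support methods.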
In order to support the suggested RE activities, a database is to be developed,
which will be based on the "design catalogues" approach and will consider the
information structure of the TS functions. Since this research is now at a
preliminary stage, the methodology and the database need some guidelines in
terms of RE process modeling, as seen in the next section.
However, this methodology is still being developed. For this reason, practical results
in companies have not yet been obtained; they are expected by the end of this year.
6 Final Considerations
In this paper, the importance of formalizing the RE process was highlighted, in order
to allow the identification of TS functions and solution principles. A comparative
analysis between the technical systems studied and an internal TS can then be
carried out, favouring improvement of the TS.
By utilizing the RE process as a source of knowledge for innovation in TS,
companies can develop TS solutions faster and with fewer uncertainties than in
a project without comparison parameters. This requires less validation
effort regarding the concepts and technologies implemented in the TS solutions.
7 References
[1] Abe T, Starr P. Teaching the writing and role of specifications via a structured
teardown process. Journal of Design Studies. 2003, 24, 475-489.
[2] Back N. Metodologia de projeto de produtos industriais. R.J.: Guanabara Dois, 1983.
[3] Chen L-C, Lin L. Optimization of product configuration design using functional
requirements and constraints. Research in Engineering Design. 2002, 13, 167–182.
[4] Chikofsky EJ, Cross II JH. Reverse engineering and design recovery: a taxonomy.
IEEE Software. 1990, 7(1), 13-17.
[5] Eilam E. Reversing: secrets of reverse engineering. Wiley Publishing Inc., 2005.
[6] Höhne G. EMC 3241: projeto de instrumentos. Internal publication. Graduation course
of Mechanical Engineering at Federal University of Santa Catarina (UFSC), 1990.
[7] IBGE. Pintec: pesquisa industrial inovação tecnológica 2003. Rio de Janeiro, 2005.
[8] Koller R. Konstruktionslehre für den maschinenbau. Berlin: Springer-Verlag, 1985.
[9] Montanha Jr. IR. Sistemática de gestão da tecnologia aplicada no projeto de produtos:
um estudo para as empresas metal-mecânicas de micro e pequeno porte. M.Sc. thesis in
Mechanical Engineering. Federal University of Santa Catarina (UFSC), 2004.
[10] Montanha Jr. IR. Sistematização do processo de engenharia reversa de sistemas
técnicos. Qualifying report (Ph.D. in Mechanical Engineering). UFSC, 2006.
[11] Otto KN, Wood KL. A reverse engineering and redesign methodology for product
evolution. Proceedings of the 1996 ASME Design Engineering Technical Conferences
and Design Theory and Methodology Conference. 96-DETC/DTM-1523. Irvine, USA.
[12] Otto KN, Wood KL. Product design: techniques in reverse engineering and new
product development. New Jersey: Prentice Hall, 2001.
[13] Pahl G, Beitz W. Engineering design: a systematic approach. Berlin: Springer-Verlag.
2nd Ed., 1988.
[14] Roth K. Konstruieren mit konstruktionskatalogen. Berlin: Springer-Verlag, 1982.
[15] Sousa AG. Estudo e análise dos métodos de avaliação da montabilidade de produtos
industriais no processo de projeto. M.Sc. thesis in Mechanical Engin. UFSC, 1988.
[16] Tay FEH, Gu J. Product modeling for conceptual design support. Journal of Computers
in Industry. 2003, 48, 143-155.
[17] Tjalve E. A short course in industrial design. London: Butterworth, 1979.
[18] Tomiyama T, Yoshioka M, Tsumaya A. A knowledge operation model of synthesis. In:
Chakrabarti A. Engineering design synthesis: understanding, approaches and tools.
London: Springer Verlag, 2002, 67-75.
Designing a ground support equipment for satellite
subsystem based on a product development reference
model
Abstract. This work presents an application of a reference model for product development:
a ground support equipment for an environmental monitoring imager. MUX-EGSE is a
product designed on demand for INPE (the Brazilian National Institute for Space Research). It
is electronic test equipment for a camera that will equip the CBERS 3&4 satellites.
The reference model was adapted to manage the development of this product, which is quite
different from the products commercialized in the other lines of the company researched.
1 Introduction
In [11] is defined AIT: activities of assembly, integration and tests of an artificial
satellite to be launched in the Earth’s orbit. It corresponds to a set of procedures
and the execution of logically inter-related events with the purpose of reaching a
high level of reliability and robustness in the satellite performance. The
multispectral imager (MUX) of CBERS3&4 satellite, which is being developed in
a Brazilian company, ought to be submitted to a set of acceptance, calibration and
functional tests in its design and AIT phases. Hence, it demands a specific
electronic system which makes viable the satellite interface and the execution of
this complex analysis. This system is called MUX-EGSE – ground support
equipment of MUX subsystem.
For the design of this equipment it was necessary a process model of product
development that comprehended the best practices in a mechatronic design context.
The inexistence of such a model was observed in [1], that works in this gap and
1 Development engineer. Opto Eletrônica S.A., Joaquim R. de Souza, Jardim Sta. Felícia,
São Carlos, BR; Tel: +55 (16) 2106 7000; Fax: +55 (16) 3373 7001; Email:
hpazelli@opto.com.br; http://www.opto.com.br
32 H. Pazelli, S. Barbalho, V. Roda
developed the mechatronic reference model (MRM). MRM is also utilized in other
products commercialized by this company.
are made for a given set of products. In the strategy phase, the set comprises all the
products of the company, while in the portfolio phase, all the products belong to a
given PL.
Gates of the first kind are business-oriented decisions made on the basis of design
performance indicators; gates of the second kind are technical decisions made through peer
review meetings; and a final gate represents the closing of a given development
project after ramp-up of the product.
[Figure: MUX-EGSE functional block diagram – user interface (keyboard/mouse, command interpretation, user screens, formatted graphics and reports, display, printer), real-time image data generator and receiver, test execution (formatted measurements, instrument simulation and conditioning), power supply controller, measurement instrument interfaces, optical bench conditioning, electronic boards, and the TM, TC and OBDH interfaces.]
In the technical design phase the reference model prescribes some concurrent
activities, as illustrated in Figure 4.
The basic engineering of the product consisted mainly of the development and
fabrication of mechanical parts to support the equipment.
The communication and control system was developed by making all the
connections required by the specification and setting up all instruments
controlled via Ethernet, GPIB or RS232, with the proper options configured in the
router, in the software and in the operating system of the EGSE controller.
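As an illustration of how such mixed-bus instrument setups are typically addressed in software, the sketch below builds standard VISA resource names for the three buses mentioned. The addresses are hypothetical; with real hardware, these strings would be handed to an instrument-control library such as pyvisa (`rm.open_resource(resource)`):

```python
# Sketch: standard VISA resource names for Ethernet, GPIB and RS232 instruments.
# Addresses below are hypothetical examples.

def visa_resource(bus, address):
    """Return a standard VISA resource name for a given bus and address."""
    if bus == "ethernet":
        return f"TCPIP0::{address}::INSTR"   # LAN instrument
    if bus == "gpib":
        return f"GPIB0::{address}::INSTR"    # GPIB primary address
    if bus == "rs232":
        return f"ASRL{address}::INSTR"       # serial port number
    raise ValueError(f"unknown bus: {bus}")

print(visa_resource("ethernet", "192.168.0.10"))  # TCPIP0::192.168.0.10::INSTR
print(visa_resource("gpib", 10))                  # GPIB0::10::INSTR
print(visa_resource("rs232", 1))                  # ASRL1::INSTR
```

Centralizing the bus configuration this way keeps the EGSE control software independent of which physical interface each instrument happens to use.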
For the electronic design, components were chosen giving preference to those
already used in other company projects or supplied by major manufacturers such as Texas
Instruments and Analog Devices, among others. Circuit simulations, electrical
schematics, layouts and Gerber files were developed using the Altium Designer platform.
After the boards were designed, they were manufactured, assembled, tested,
revised and re-manufactured, achieving their second, fully operational version.
Figure 5. EGSE controller rack, electronic boards and image acquisition test screen
5 Final considerations
This work presents a well suceeded adaptation of MRM, a development reference
model. Althought it has been designed to consumer goods, its guidelines
emphasize the concept of phases that helped designers in the consistent
development of an equipment for aerospace industry and in the generation of its
documents. By practice viewpoint, the results obtained so far are totally
satisfactory, since the developed electronic ground support equipment for CBERS
multispectral camera is able to test all the satellite subsystem requirements,
fulfilling all specifications.
These results show the qualities of the MRM’s structured framework, which
provides a clear understanding of design status and guides the designers through
the best practices observed in academic researches and companies approaches.
6 References
[4] COOPER, R.G.; EDGETT, S.J.; KLEINSCHMIDT, E.J. Portfolio Management for
New Products. Perseus Books, Massachusetts, United States, 1998.
[5] CHRISSIS, M.B. et al. CMMI: Guidelines for process integration and product
improvement. Boston, Massachusetts, United States, 2003.
[6] CREVELING, C.M. et al.. Design for six-sigma in technology and product
development. New Jersey, United States, Prentice Hall, 2003.
[7] LVDS Owner's Manual – Low Voltage Differential Signalling. National
Semiconductor. Available at:
<http://www.national.com/appinfo/lvds/files/ownersmanual.pdf>. Accessed on: Nov.
10th 2006.
[8] NONAKA, I.; TAKEUCHI, H. Criação de conhecimento na empresa. Trad. Ana
Beatriz Rodrigues, Priscilla Martins Celeste – Rio de Janeiro: Campus, 1997.
[9] PAHL, G.; BEITZ, W. Engineering design: a systematic approach – 2Rev.ed. Springer-
Verlag London Limited, London, Great Britain, 1996.
[10] PUGH, S. Total design: integrated methods for successful product engineering.
Addison Wesley, London, United Kingdom, 1990.
[11] Qualificação de Sistemas Espaciais. Montagem, Integração e Teste de Satélites.
Laboratório de Integração e Testes. Available at:
<http://www.lit.inpe.br/qualificacao_siste mas_espaciais_montagem.htm>. Accessed
on: Jul. 20th 2006.
[12] ULRICH, K.T.; EPPINGER, S.D. Product design and development. McGraw-Hill Inc.
United States, 1995.
[13] WHEELWRIGHT, S. C.; CLARK, K. B. Revolutionizing product development
process: quantum leaps in speed, efficiency, and quality. New York, United States, The
Free Press, 1992.
Impacts of Standardization Process in the Brazilian
Space Sector: a Case Study of a R&D Institute
a Instituto de Aeronáutica e Espaço – IAE, Brazil.
Abstract: The main focus of this paper is to evaluate the impact of the standardization process
in an R&D institute of the Brazilian space sector. The research sought to identify the
organizational changes associated with the implementation of NBR 15100:2004 (a
standard for quality management systems specific to the aerospace sector), under way
since mid-2005 in one of the institutes in charge of research and
development in the space sector. In order to identify those changes, researchers
participating directly in the NBR 15100 implementation were interviewed. The results of the
research demonstrate major impacts on organizational aspects, relationships and human
resources.
1 Introduction
The Brazilian State, under coordination of Brazilian Space Agency (AEB), has
been participating in the national and international space context in several
projects, such as: partnership in the construction of satellites (CBERS),
participation in the International Space Station (ISS), microgravity programs and
others. The Brazilian participation in microgravity projects is fostered through
the design, production and launching of sounding rockets, accomplished
nationally. Besides sounding rockets, the Program of National Space Activities
(PNAE) has the objective of designing, developing and manufacturing, in Brazilian
industry, the Satellite Launch Vehicles (VLS). In 2003, during the assembly
of the VLS-03 at the launch tower, located at the Alcântara Launch Center (CLA),
an accident happened, causing many losses. As a result of the accident report,
actions were established, one of them being the implementation, in the institute of
R&D in charge of the VLS project, of a quality management system standard edited
by the Brazilian Association of Technical Standards (ABNT) in 2004. The
standard adopted was NBR 15100 (Quality Systems – Aerospace – Model
for quality assurance in design, development, production, installation and
services) [1].
Therefore, this paper aims to evaluate the impacts associated with the
adoption of standardization in the institute of R&D. The paper is organized in five sections.
1 Corresponding Author. E-mail: rvascon@iae.cta.br
42 R.R.Vasconcellos, M.A.Harada, V. F. F. Contreiro, A. L. Correia and S.Costa
In the middle of the 1990s, the aerospace industry recognized that the international
standard ISO 9001 did not meet the minimum requirements of the sector. Most
first-tier organizations in the sector's supply chain were adding requirements
on top of ISO 9001 when contracting their suppliers.
In this context, the authorities of aerospace companies in the United States,
Europe and Asia organized the International Aerospace Quality Group (IAQG)
with the intention of minimizing the complexity of the international integration process
of aerospace components, sub-systems and systems. In 1999, the IAQG, together
with the Aerospace Technical Committee of ISO [2], organized the first
international standard for the aerospace supply chain, denominated SAE 9100 [3],
based on ISO 9000 plus aerospace requirements. Thus, the IAQG and ISO
established the basic conditions for aligning the requirements of the aerospace
supply chain with the specific demands of production [4].
Following the international perspective, the aerospace sector in Brazil created
the CB-08 (Brazilian Committee of Aeronautics and Space), with the objective of
standardizing the sector regarding materials, components, equipment, design,
production, evaluation, maintenance of aircraft, subsystems, aerospace
infrastructure and space vehicles. Accordingly, in 2004 the CB-08 produced a
standard technically equivalent to SAE AS 9100, registered at ABNT as NBR
15100:2004 (Quality System – Aerospace – Model for quality assurance in
design, development, production, installation and associated services). It was
ratified by the IAQG, establishing the most favorable conditions for the
insertion of Brazilian aerospace production into the international chain.
[Figure 1: NBR 15100 quality management system model – management responsibility, resource administration, product realization, and measurement, analysis and improvement, linked in a continuous improvement loop from input to output, with customer satisfaction as the outcome.]
Figure 1 describes the model of the NBR 15100 quality management system, based
on a continuous improvement process [1]. The process begins with the
identification of the customer's needs and the evaluation of the service capacity,
considering the references of product and/or service compliance. Afterwards,
those needs are translated into technical requirements that guarantee the product's
effectiveness, observing the aerospace regulatory restrictions. Then, the established
configuration is documented, as well as the resources used for the production
process and for the operation and maintenance of the product. Production is controlled by
monitoring devices, to analyze the conformity of the process with the product
requirements and to identify opportunities for preventive actions and improvements.
The main characteristic of NBR 15100 is the continuous improvement of the
quality management system, through the use of the quality policy, quality objectives,
audit results, data analysis, corrective and preventive actions and critical analysis
by management. In that sense, the organization has to show evidence of
its commitment to the development and implementation of the quality management
system, as well as to the continuous improvement process, such as:
communicating to the organization the importance of meeting customer
requirements, complying with governmental regulations, and supplying the necessary
resources. It is important to mention that SAE AS 9100 and NBR 15100 are
characterized as consensus standards; in other words, both are of voluntary
adoption and do not substitute the regulatory requirements adopted by
aerospace production. In Brazil, for instance, producers are subject to the Brazilian
Regulations of Aerospace Quality (RBQA), whose objective is to assure, through
requirements and procedures, that the demands of product contracts and
conformity are met.
3 Research Methodology
According to Eisenhardt [5], case studies can include single or multiple cases and
can be conducted at several levels of analysis. Our research was characterized as
a single case study, seeking to go deep into the selected case through the study of a
governmental institute of R&D related to the Brazilian space sector [6].
In order to elaborate the questionnaire for the interviews, a model
developed by Nadler, Gerstein and Shaw [7] was used, together with the relationship network
perspective [8]. In this model, the variables that influence the results are
classified as external, when they cannot be controlled directly by the
organization, or as internal variables, such as technology, human
resources, organizational structure and work organization. Modifications in
the internal variables, which represent the basic organizational elements, can be
considered decisive for changing processes and the way tasks are
accomplished [4].
In the research, the following areas of impact were analyzed:
- Organizational Impacts – aspects regarding the communication process
(responsibility, content and diffusion), authority (hierarchical structures,
command and control) and tasks (formalization); and
According to the results obtained in the interviews, the graph of the
organizational and technological impacts follows below.
[Figure: levels of agreement for organizational impacts (communications process, authority, tasks) and technological impacts (infrastructure, information), ranging from about 50% to 92%.]
"neither agree nor disagree" represented, in the majority of the answers, a positive
impact, but not relevant enough to be considered "agree" or "absolutely agree" by the
interviewees. The interviewees mentioned that there was better
communication inside the organization, specifically regarding quality objectives.
The communication process was improved, with more quality representatives
and assistants in the divisions facilitating the diffusion of the NBR 15100
requirements. The interviewees affirmed that relevant modifications happened,
providing greater authority, but without significant changes in the hierarchical
structure. Finally, regarding tasks, changes were detected mainly in the type of
formalization demanded for accomplishing the new quality standards.
Regarding technological issues, impacts were identified in the infrastructure of
machines, equipment and software/hardware, but they still need to be complemented.
The graph of the impacts on human resources and on internal and external
interactions follows below.
[Figure: levels of agreement for impacts on human resources (quantity, competence) and on internal and external relationships, ranging from about 40% to 92%.]
Also, according to NBR 15100, item 7.3.1 – Design and Development Planning –
the organization has to manage the interfaces among the different
groups involved in design and development, to assure communication
effectiveness and to clarify the assignment of responsibilities; all of this has to
happen during all stages of the project life cycle, in order to meet the
requirements related to the product. This statement is similar to the definition of
concurrent engineering. As Loureiro [10] defines it: "Concurrent engineering is a
systematic approach of simultaneous and integrated design of products and their
related processes, manufacturing, and staff. This approach intends to prompt
developers to consider, in the early project stages, all elements of the product life
cycle, from conception up to disposal, including quality, cost, schedule, and user
requirements."
This paper sought to identify and analyze the impacts caused by the implementation
of NBR 15100 in a governmental institute of R&D in the Brazilian space
sector.
The methodology applied was a single case study. It was effective, because it
gave us the opportunity to go deep into the researched subjects. Besides that, the
researchers have been implementing the quality standard and were also working
at the institute at the time the accident happened. All of this provided
us with a valuable source of data.
The most important impacts observed were related to organizational aspects,
relationships and human resources. The organizational impacts on tasks and
authority demonstrate that the quality management system is
permeating several processes in the institute. The NBR 15100 implementation is
emphasizing the importance of the quality function in the professional profile of each
researcher, regardless of whether his/her original work is in quality or not. Also, it is
important to mention the impact on internal relationships, promoting
researchers' teamwork from the beginning of the project life cycle. Thus, the
standardization process has demonstrated a direct impact on the attitude of
researchers, changing from individual work to teamwork. Indirectly, the
standardization is creating an environment that facilitates a concurrent
engineering approach.
In order to generalize the results to other institutions, further research is
suggested in organizations that have been going through a quality
standardization process. Finally, although the NBR 15100 implementation began no
more than 18 months ago, it was possible to observe the beginning of a cultural change
in the functional vision of quality; in other words, from a departmental quality to a
(yet embryonic) Total Quality vision.
6 References
[1] ABNT NBR 15.100. Sistema da Qualidade - Aeroespacial – Modelo para garantia da
qualidade em projeto, desenvolvimento, produção, instalação e serviços associados. 2
ed., 25 p., 2004.
[2] INTERNATIONAL ORGANIZATION FOR STANDARDIZATION (ISO). Available at:
<http://www.iso.org>. Accessed on: Dec. 9th 2006.
[3] SAE AS9100. Available at: <http://www.sae.org/technical/standards/AS9100B>.
Accessed on: Feb. 22nd 2007.
[4] PEREIRA, M.I. e SANTOS, S.A. Modelo de gestão: uma análise conceitual, Pioneira
- Thomson, 2001.
[5] EISENHARDT, K. M. Building theory from case study research. Academy
Management Review. v. 14, n. 4, p. 532-50, 1989.
[6] YIN, R.. Estudo de Casos: planejamento e métodos. 2ª edição. Porto Alegre:
Bookman, 2001.
[7] NADLER, D.; GERSTEIN, M.S. and SHAW, R.B. Arquitetura organizacional: a chave
para a mudança empresarial. Ed. Campus. Rio de Janeiro, 1994.
[8] AMATO NETO, J. (org); Manufatura classe mundial: conceitos, estratégias e
aplicações, ed. Atlas, SP, 2001.
[9] SELLTIZ, C.; JAHODA, M.; DEUTSCH, M.; COOK, S.W. Research methods in the
social relationships. São Paulo, EPU, 1974. 687p.
[10] LOUREIRO, G. A Systems Engineering and Concurrent Engineering Framework for
the Integrated Development of Complex Products. Ph.D. thesis. Loughborough
University, England, 1999.
Proposal of an Efficiency Index for Supporting System
Configuration Design
Abstract. Demand for miniature mechanical parts and products, such as those in mobile
phones and medical devices, will continue to increase. In contrast, manufacturing
systems for those devices are becoming larger and more complicated. As a countermeasure,
AIST developed in 1999 the first prototype of a machining microfactory, consisting of
miniature machine tools and robots. An expected advantage of the
microfactory was that it could reduce the environmental impact and costs of
miniature mechanical fabrication. However, the effects of the microfactory in reducing
environmental impact and costs, or in enhancing system efficiency, have not been quantified.
So, an appropriate index that evaluates microfactories by considering environmental impact,
cost and system throughput simultaneously is necessary. In this paper, the authors propose
an evaluation index, based on the required time for each process, machine cost, operator
cost and environmental impact, using the microfactory as an example. The calculation
shows that the proposed efficiency index is useful in evaluating system configurations.
1 Introduction
machines enables flexible layout changes, it can contain the increase in costs
when product designs are modified. In addition, by replacing conventional
manufacturing systems with microfactories, electrical power consumption can be reduced.
However, since there has been no effort to evaluate the effects of microfactories
quantitatively, the abovementioned advantages remain unverified. In today's world,
where "green manufacturing" is strongly required, the environmental aspect of
microfactories should be examined. The purpose of this research is to propose a
simple efficiency index for a microfactory-like system to support its system
configuration design.
Figure 1. Microfactory
From the figure it is easy to see that the assembly processes are very time-
consuming, because they must be done sequentially
under a microscope using the micro-hand. Table 1 indicates the average process
time of the corresponding processes in Figure 3, after the operators have become
sufficiently skilled. The number of operators required for each process is also shown in the table.
[Figure 3 (excerpt): process flows for the case, shaft and cover parts – for the case: rod (⌀ 0.9 mm) → fixture 1 → surface milling → cavity milling → releasing 1 → transferring 1.]
Table 1. Process time and number of operators for each process per unit
Sub Process Process time in seconds Number of operators
Fixture 1 10 sec. 1
Fixture 2 5 sec. 1
Surface &cavity milling 3 min. 1
Turning 2 min. 1
Press 0.2 sec. 0
Releasing 1 10 sec. 1
Releasing 2 5 sec. 1
Transferring 1 1 sec. 0
Transferring 2 1 sec. 0
Transferring 3 1 sec. 0
Assembly 1-4 (total) 48 min. 1
Gluing 1 1 min. 1
Gluing 2 2 min. 1
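The process times in Table 1 can be turned into a rough throughput estimate. The sketch below assumes all sub-processes run strictly sequentially (as the text indicates for the assembly under the microscope); it is an illustration, not the paper's own calculation:

```python
# Sketch: per-unit process time and sequential throughput from Table 1.
# Assumes strictly sequential processing of all sub-processes (an assumption;
# in practice machining and assembly may partially overlap).
process_times_s = {
    "Fixture 1": 10, "Fixture 2": 5,
    "Surface & cavity milling": 180, "Turning": 120, "Press": 0.2,
    "Releasing 1": 10, "Releasing 2": 5,
    "Transferring 1": 1, "Transferring 2": 1, "Transferring 3": 1,
    "Assembly 1-4": 48 * 60, "Gluing 1": 60, "Gluing 2": 120,
}

total_s = sum(process_times_s.values())
throughput_per_hour = 3600 / total_s

print(f"total per unit: {total_s / 60:.1f} min")      # assembly dominates (48 of ~57 min)
print(f"sequential throughput: {throughput_per_hour:.2f} units/hour")
```

The dominance of the 48-minute assembly step is what makes the number of micro-hands the key variable in the efficiency calculations that follow.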
used in the microfactory. The energy consumption of each machine is also an
important factor in evaluating system efficiency. Table 3 shows the average power
consumption of the machines during operation.
the same idea can be applied. The efficiency index defined by Equation (1) is used in
the following sections.
Ef = F / (C × E)    (1)
Ef: system efficiency index; F: system functionality;
C: cost of the system (yen/hour); E: environmental impact (kg-CO2/hour)
Instead of the total performance of a product defined in the original index, "Ef",
an index expressing system efficiency, is introduced. "F" is the sum total
of the value of the various products created within a certain time. But, since the
target product does not change in this case, "F" can simply be represented by the
system throughput. By defining the throughput as the number of products assembled
in an hour, the efficiency index for the microfactory can be calculated. "C" can be
calculated as the sum total of machine costs, labor costs and electricity costs during
the corresponding time. Labor cost is assumed to be 5.0 million yen per person
per year. For "E", many indexes have been proposed. In the
microfactory, since it is not necessary to consider special waste, equivalent CO2
emission is a good estimate of environmental impact. So, "E" can be
expressed as the sum of the CO2 emissions caused by the electricity shown in Table 3
and by the machine material (1 kWh = 0.38 kg-CO2).
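A minimal sketch of Equation (1) with illustrative numbers rather than the paper's data: the hourly machine cost, electricity tariff and 1600 working hours per year are assumptions, while the 5 million yen yearly labor cost and the 0.38 kg-CO2/kWh factor come from the text.

```python
# Sketch of the efficiency index Ef = F / (C * E).
# Assumptions: machine cost per hour, electricity tariff, 1600 working h/year.
# From the text: labor of 5 million yen/person/year; 0.38 kg-CO2 per kWh.

def efficiency_index(throughput_per_hour, machine_cost_yen_per_hour,
                     operators, power_kw,
                     labor_yen_per_hour=5_000_000 / 1600,  # assumed 1600 h/year
                     electricity_yen_per_kwh=20,           # assumed tariff
                     co2_per_kwh=0.38):
    F = throughput_per_hour                                # system functionality
    C = (machine_cost_yen_per_hour                         # cost (yen/hour)
         + operators * labor_yen_per_hour
         + power_kw * electricity_yen_per_kwh)
    E = power_kw * co2_per_kwh                             # impact (kg-CO2/hour)
    return F / (C * E)

# Hypothetical configuration: 1 unit/hour, 500 yen/h machines, 1 operator, 0.5 kW
ef = efficiency_index(1.0, 500, 1, 0.5)
print(f"Ef = {ef:.6f}")
```

Because C and E both grow with the number of machines and hands while F saturates, plotting Ef over candidate configurations exposes the local maxima discussed below.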
Ef = 1.18i / {[(6.9 + 10i + 5) × 10^6 / 1600] × [(0.8 + 0.4i) × 0.38]}    (1 ≤ i ≤ 5)    (2)

Ef = 6.4i / {[(6.9 + 10i + 5) × 10^6 / 1600] × [(0.8 + 0.4i) × 0.38]}    (6 ≤ i)    (3)

Ef = 1.18i / {[(5 + 10i + 5j + 1.2k + 0.7l) × 10^6 / 1600] × [(0.25 + 0.4i + 0.3k + 0.25l) × 0.38]}    (4)
[Figure 4: efficiency index versus number of hands, for four configurations: 1 lathe, 1 mill and 1 operator; 1 lathe, 1 mill and 2 operators; 2 lathes, 2 mills and 2 operators; 2 lathes, 2 mills and 3 operators.]
Since the press and transferring processes are not significant for the
overall throughput, the figure shows the behavior of the efficiency index as
the number of hands, lathes and mills changes ("operators" means the
number of operators for machining processes). According to Figure 4, there are
some local maxima. The result suggests some simple strategies. For example, when
the system had 1 lathe, 2 machining operators and 6 hands, the efficiency was
higher than in the case with 1 lathe and 5 hands. The results also showed that
having 6 hands and only one machining operator is not efficient. Usually, the
configuration of the system is mainly determined by the required throughput. But
the calculation indicates that covering a shortage of throughput by extending the
operation time of the factory can be a better solution from the viewpoint of
green manufacturing.
[Figure: efficiency index versus average demand (%).]
The figure indicates that the microfactory is rather efficient when demand is
low and the lifetime of the system is relatively short. When the system lifetime is 5
years, the efficiency of the microfactory in its suitable configuration is about 0.04.
So, when the average demand is lower than 20% of the maximum throughput, the
microfactory is more efficient than the mass production system. Although a more
precise comparison is necessary, it can be said that the microfactory has good
potential for diverse-types-and-small-quantity production of miniature
mechanical products.
system had some suitable configurations. By using the evaluation result, it was possible to design the system configuration of microfactory-like systems.
The result was compared with a rough estimate of the efficiency of a conventional ball-bearing manufacturing system. The comparison indicated that the efficiency index of the microfactory was lower than that of a mass production line. However, when the lifetime of the system is relatively short and the demand is low, the efficiency of the microfactory can be higher than that of a mass production line. This shows that, although the microfactory is not a suitable system for mass production, it is a good solution for "diverse-types-and-small-quantity production".
As future work, more precise comparisons with mass production systems are required in order to prove the effectiveness of microfactory-like systems. In addition, modifying the efficiency index to consider frequent changes of demand and product design may become necessary to estimate the features of microfactory-like systems.
Reaching readiness in technological change through the
application of capability maturity model principles
Abstract. New technology introduction generally implies a change management plan, as the adoption of advanced technical capabilities entails restructuring of information, cooperation and coordination. When planning for potential organizational developments, the application of integrated design principles facilitates structure modelling. It enables the recommendations and constraints perceived by the individuals impacted by change to be capitalized on. Organizational structure design is treated as integrated product design, to which concurrent engineering principles are applied. The different professions concerned by process redesign collaborate in its definition so as to ensure interoperability. This methodology allows the stakeholders implied at different levels of the process, and the resources needed to ensure readiness for a given technological change, to be considered. As indicated by O. Poveda [9], even if the objective of this kind of methodology is clear, it remains complex to operate. The main difficulty is translating the recommendations and constraints captured at the specification phase into an optimal organizational structure supporting the new processes. In order to face these hurdles, we propose a potential change maturity model to tackle the technical, social and strategic factors linked to organizational performance.
1
Olivier Zephir, LIPSI – ESTIA, Technopole Izarbel, 64210 Bidart, France; Tel: +33 (0) 559 43 85 11; Fax: +33 (0) 559 43 84 05; Email: o.zephir@estia.fr
58 O.Zéphir, S.Minel
network structures. The effects are that business actors have more operational autonomy and decisional power, including an increase in transversal activities and collective data management. This kind of organization can be considered complex: the system is composed of multiple subsystems incorporating different professional corps that cooperate to design and run the activity. As a result, no single profession can manage the global system, as it is interconnected. Changing the organizational structure in this context is critical; the collective activities must be restructured while keeping up day-to-day performance. Classical change management models describe transformation levels and steps that have to be reached to adopt new operational modes. With those models we cannot estimate the capabilities that organizational agents need in order to operate under a new mode. In order to measure change capability, we propose a combination of three evaluation models measuring information, cooperation and coordination transformation. The presented model, developed in a European project, provides a framework allowing project management teams to assess the organization's maturity to integrate new practices under structural or technological change.
In this first step, the impacts of the planned change on the organizational activity are characterised. Through interviews, the impacted processes and core competencies are determined. Core competencies, as defined by Hamel and Prahalad [5], are those capabilities that are critical to a business; they embody an organization's collective learning, the know-how of coordinating diverse production skills and integrating multiple technologies. When the impacted core competencies are revealed, the link can be made to identify the impacted teams and individual competencies. This step is crucial to fix the As-Is state; it fixes the body of organizational knowledge and competencies concerned by the change. The impacted-process analysis reveals the related capability, which is supported by the knowledge, skills and abilities employed by organizational actors to achieve the process goals and objectives. This level allows identifying "who" (the organizational roles and functions) and "what" (the competencies or tools) is impacted [4].
The next figure illustrates an example of how organizational impacts are determined. Impacts on the general business workflow and the related processes or sub-processes are identified, which allows the concerned teams or individuals to be considered. The last phase is to define the impacted factors, which can be used to evaluate the resources needed for change.
4.2 As-Is vs. To-Be
When the As-Is situation is set, the To-Be one is designed considering all the impacted stakeholders in the various concerned processes. Minel's [7] Cooperative Evaluation Scale (CES) is applied to characterise the level of collaboration between two professions involved in the same activity. Useldinger's [11] model, which defines different levels of information on a six-point Likert scale, is adapted to express the level of information change in an activity. Our investigation consists in mapping the collaborating professions in the spotted impacted activities. We first carry out an "As-Is" collaboration assessment to evaluate the level of cooperation before the change. Characterising the degree of cooperation allows targets related to change implementation to be defined: when considering two collaborating professional corps, it must be determined whether the same cooperative level is to be kept after change implementation or whether it needs to be optimised. Minel's CES considers six levels of collaboration, described by the level of knowledge shared by two interacting actors. The levels are as follows: 0 stands for no shared knowledge, 1 for common vocabulary, 2 for knowledge of concepts, 3 for knowledge of methods, 4 for mastery of the domain, and 5 for expertise in the domain. Empirical studies show that, in order to attain collaboration between two different professions, level 3 of the CES is required to share a common vision of how to integrate the constraints of the other into one's own goals. Above this level, the actors' specialised skills affect the cooperation; under this level, cooperation is not efficient and can be improved. When the result of the "As-Is" cooperation state is established, it has to be linked to the evaluation of the information changing state.
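As an illustration only (the dictionary, function name and wording below are ours, not part of Minel's method), the six CES levels and the empirical level-3 threshold can be encoded as:

```python
# Illustrative encoding of Minel's Cooperative Evaluation Scale (CES).
# Only the six levels and the level-3 threshold come from the text;
# the data structure and interpretations are our assumptions.
CES_LEVELS = {
    0: "no knowledge shared",
    1: "common vocabulary",
    2: "knowledge of concepts",
    3: "knowledge of methods",
    4: "mastery of domain",
    5: "expertise in domain",
}

def collaboration_status(level):
    """Interpret a CES level per the empirical rule cited in the text."""
    if level < 3:
        return "cooperation not efficient; can be improved"
    if level == 3:
        return "collaboration attainable: common vision of constraints"
    return "specialised skills affect cooperation"
```

Such an encoding makes the As-Is assessment directly comparable to a To-Be target level for each pair of professions.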
This is carried out by using Useldinger's model, where six levels of information are defined as follows: signal, data, information, knowledge, skills and know-how.
Figure: Level 2 and Level 3, effort for change (As-Is vs. To-Be) evaluation
The model is similar to a six-point Likert scale characterising, in a hierarchy, the different levels of information through different formalised schemes. The collaborating actors have to define in common the level of information changing in their activities. Defining this allows evaluating to what extent the activity is changing, from the form of data or structure up to competencies and know-how. With this information, collaborating actors are able to redefine their common activities, and also to state the resources, effort and support they need to collaborate under a new operating scheme. A similar evaluation is applied to coordination evolution from the As-Is to the To-Be situation; no particular method is applied here, but an indication is given for each described collaboration activity.
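As an illustration of how this evaluation might be mechanised (the list, function and gap-as-effort reading below are our assumptions, not the SMMART method itself), the six information levels and an As-Is/To-Be comparison can be written as:

```python
# Hypothetical sketch: mapping an activity's As-Is and To-Be information
# levels (Useldinger's six-level scale) to an "effort for change" indicator.
# The gap-as-effort interpretation is our assumption, not the authors' method.
INFO_LEVELS = ["signal", "data", "information", "knowledge", "skills", "know-how"]

def change_effort(as_is, to_be):
    """Return the number of information levels an activity must climb."""
    gap = INFO_LEVELS.index(to_be) - INFO_LEVELS.index(as_is)
    return max(gap, 0)
```

For example, change_effort("data", "knowledge") returns 2: the activity must climb two information levels, which the collaborating actors can then translate into concrete resource and support needs.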
This last step is designed to indicate, for each transformed activity spotted in level two, the human and technical resources necessary to deliver a constant process. Once the extent to which an activity is being transformed is fixed, as in CMM models, simulations are programmed to evaluate the documentation, management and control needed to reach continuous process improvement through readjustments. The prerequisite skills, knowledge, practices and tools to ensure compliance with the corporate procedures and processes are fixed at this level. We estimate that readiness for change is reached when technical and human capability is estimated in relation to a defined service level with improvement possibilities. Readiness means here the organizational capacity to incorporate new business processes and to master their possible evolution.
5. Future developments
The presented potential change maturity model is still in its development state. The first two levels are currently being tested and fine-tuned in the SMMART European project. These first steps are crucial to determine the impacts of a new technology and the capabilities needed to support it. The definition of change readiness relies circumstantially on the definition of the impacts. Applying capability maturity model principles to model the proposed method provides an efficient and practical framework for estimating the progression of the change project. Our next task is to elaborate a robust simulation method so as to provide reliable human capability evaluation. We still have to establish an adequate method based on empirical research analysis and sound theoretical evaluation. Our main focus in this article was to present a practical model enabling the evaluation of the capabilities needed, in terms of human and technical capital, for new technology introduction. Our investigations aim at reconciling human and technical factors for optimal process design.
Acknowledgement
This work has been carried out within the SMMART (System for Mobile Maintenance Accessible in Real Time) project, which received research funding from the European Community's FP6.
References
[1] Armenakis AA, Harris SG, Mossholder KW. Creating readiness for organizational change. Human Relations, 1993; 46(6), 681-703.
[2] CMMI Product Team. Capability Maturity Model Integration (CMMI), Version 1.1. Pittsburgh, PA: Carnegie Mellon Software Engineering Institute, 2002.
[3] Cook-Davies J. Innovations, project management research, measurement of organizational maturity, 2004.
[4] Deming WE. Out of the Crisis. Massachusetts Institute of Technology, 2000.
[5] Hamel G, Prahalad CK. The core competence of the corporation. Harvard Business Review, 1990.
[6] Eby LT et al. Perceptions of organizational readiness for change: factors related to employees' reactions to the implementation of team-based selling. Human Relations, 2000; 53(3), 419-442.
[7] Minel S. Démarche de conception collaborative et proposition d'outils de transfert de données métiers. Ph.D. thesis, ENSAM Paris, 2003.
Abstract. Highly integrated and complex systems are increasingly widespread and common in people's lives. Even without an exact sense of what this means, people hope for the best product. This desire implies that system manufacturers must improve their knowledge and create solutions with more advanced technologies to satisfy consumers' expectations. Verification has traditionally been done at the end of the development process, but this has caused difficulties for manufacturers, because at that point of the development process it is very expensive and hard to implement any required modifications. Thus, many manufacturers have started the verification process at the beginning of development, decreasing erroneous implementations. This paper presents an intuitive method, applicable in any case, that uses block diagrams to assist in generating test case procedures from the moment development starts, establishing relationships among system interfaces, subsystems and functions, and enabling test traceability and test coverage analysis. In cases where a manufacturer develops the same kind of product, the block diagram can easily be reused for a new one by including or removing systems, subsystems and functions, adapting it to new features and project requirements. The proposal is to start development doing the things right as early as possible.
Keywords. Verification, System, Interfaces, Block Diagram, Reuse and Test Coverage.
1 Introduction
The verification and validation activities have generated a lot of discussion about where validation or verification is applicable. It is possible to find many ways to understand and define these activities, even though they should be executed together, without a clear and explicit separation in terms of time and stages of development. Many documents consider that validation is based on non-implemented hardware and software, while verification is based on a target implementation of hardware and software components. In spite of divergences about the best definitions of validation and verification, there is a common
1
INPE –Instituto Nacional de Pesquisas Espaciais; São José dos Campos – SP; Brazil; Av.
dos Astronautas, 1758 – Jd. da Granja; Tel: +55 (12) 39471131; Email:
chmendonca1977@yahoo.com.br;
66 C. H. Mendonça
consensus that validation means "do the right things" and verification means "do the things right". Thus, the two are complementary during the process, ensuring correct product development; in this paper, the term verification will be used disregarding the stage of development.
2 Motivation
This work is motivated by the recent changes in the development process of highly integrated and complex products, forcing developers to improve their knowledge of technologies, processes and methods in order to speed up development and ensure that new products will be accepted by the market and that consumers will be satisfied.
The intention of this paper is to propose a method to start verification even before formal requirements have been written. When a development is defined, developers have at least a minimal knowledge of the system under construction; this enables the start of the verification process, since it is possible to create a block diagram of the system and break it down into small parts. This method allows and helps many levels of verification, test traceability and coverage. A brief example will be given later.
identification of integration problems even if the systems and their interfaces are not detailed and totally defined.
Using block diagrams to depict systems is an easy way to visualize the whole integrated system, because they are more readable and intuitively understandable than formal languages. With the complete view of the system and its interfaces, it is possible to investigate details that are not so simple to think of when writing a requirement. Using a symbolic way to represent ideas about a system can be simpler than using words, spoken or written. This way of analyzing a system is valid and applicable also to subsystems, elements, components, functionalities, among others. At no moment is there any intention of replacing formal requirements by non-formal languages; it is only expected to aid developers in having a complete visualization of the systems.
Usually a developer creates each system separately, establishing interfaces between systems only by requirements, but these may not be well understood by all parties, causing problems later during integration. This example is only one case; misunderstandings can also be generated inside one's own workgroup, where subsystems do not agree on their interfaces. A device defined by one part of the system or subsystem may not meet all features requested by another, or changes made after requirements elaboration may not be transmitted completely or clearly.
With this method it is possible to integrate the parts and start a process of complete verification analysis from the system down to components and functionalities (top-down verification) or from components and functionalities up to the system (bottom-up verification). The person responsible for system verification can use this method to perform analyses of the correctness of system functions (top-down) in normal or abnormal conditions. The person responsible for component verification can start the verification from the component and device level (bottom-up). Both are able to use the same block diagram breakdown; it only depends on the fidelity and detail of the block diagram breakdown.
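A minimal sketch of such a breakdown (block names, signal names and the dictionary representation are illustrative assumptions, not taken from the paper): each block records its sub-blocks and interface signals, and the same structure supports both verification directions.

```python
# Illustrative block-diagram breakdown: each block lists its sub-blocks and
# its interface signals, so one structure serves both top-down and bottom-up
# verification walks. Block and signal names are hypothetical examples.
blocks = {
    "system":     {"children": ["electrical", "mechanical"], "signals": ["power"]},
    "electrical": {"children": ["battery", "command"], "signals": ["power", "ctrl"]},
    "mechanical": {"children": [], "signals": ["torque"]},
    "battery":    {"children": [], "signals": ["power"]},
    "command":    {"children": [], "signals": ["ctrl"]},
}

def top_down(name, order=None):
    """System-to-components verification order."""
    order = order if order is not None else []
    order.append(name)
    for child in blocks[name]["children"]:
        top_down(child, order)
    return order

def bottom_up(name):
    """Components-to-system verification order: reverse of the top-down walk."""
    return list(reversed(top_down(name)))
```

Here top_down("system") visits the system before its components, while bottom_up simply reverses that order, so both responsibles share the same breakdown, as the text suggests.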
Another advantage is that the method facilitates test coverage. Usually a verification matrix is generated to ensure that all requirements are verified. The verification matrix cross-references each requirement with each verification modality, and during the initial stages of development it is based on written requirements that can change. Here it is easy to split systems into subsystems or smaller parts and to analyze each part separately, as well as the integrations and interfaces. Each small part verified can be linked through its interfaces to start other verifications, and so on, covering wide parts of the system. In this way it is possible to cover an enormous number of test cases.
System traceability is accomplished when each block diagram carries all the important signals to the next block level, making it easier to follow the signal paths and to verify the correct links among blocks (systems, subsystems, components, etc.).
Nowadays the reuse of tools, developments, processes and methods is much discussed, but reuse is not so easy to achieve, since there are improvements and different objectives from one development to another. Another feature of this method is the reuse of the block diagram, with modifications that are not hard to implement: it is only necessary to remove or add blocks and to rearrange interfaces, inputs and outputs. It is very common for a product to have very close
- The Electric Wheel Chair shall have a control to command the chair in the desired way.
It can be understood that these two basic features are requirements for starting the project. Based on them, it is possible to start thinking about the interfaces and necessities that the system will require to satisfy them. After keeping the basic requirements in mind, the phase where detailed requirements are written starts. But since we intend to show that no requirements are necessary to start the assembly of this block diagram breakdown, we will start by drawing blocks.
First of all, it is necessary to understand what the main interfaces of the electric wheelchair are and then insert them into an initial block, as in Figure 1. After gathering all possible interfaces, in a major view, break the system into known technologies or parts and insert those interfaces, internal and external. In this case
The System Verification Breakdown Method 69
To simplify this paper, only the blocks of the electrical systems were chosen for detailing. Figure 4 depicts in more detail the electrical systems, which were broken into electrical sources, electrical commands and the torque system. At this step it is possible to insert into each subsystem its respective input and output parameters. An example is that all subsystems have some kind of protection, while comfort is not a necessary characteristic of the battery but is necessary for the other three subsystems.
In order to demonstrate reuse, a new requirement will be created to modify the system and show how this method can be reused. This requirement is proposed, for instance, to create a higher-level version of the product with more functionality and comfort. Consider the following requirement:
- The Electric Wheel Chair shall have a control to recline the back and feet to increase the comfort of the user.
The new implementation can be done by inserting a new box with a motor to recline the back and feet, as in Figure 5. Note that there are signals going from the existing boxes (command box and battery) to the new one and vice versa.
This example stops at this point, but following the present idea the diagram could be detailed further even with no requirements written; for instance, the command box could be broken into two functions, move chair and recline. Another example of modification could be adding a physical system, such as the motors, which could be broken into motors and clutch, and so on.
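To illustrate the reuse idea in code (the diagram contents, block names and signal names below are hypothetical, not from the paper), adding the recline feature amounts to inserting one block and wiring its signals, leaving the existing blocks untouched:

```python
# Illustrative sketch of block-diagram reuse: the wheelchair diagram is
# extended with a hypothetical "recline motor" block by adding one entry
# and wiring its signals; existing blocks are untouched.
diagram = {
    "command box": {"outputs": ["move cmd"]},
    "battery":     {"outputs": ["power"]},
    "motors":      {"inputs": ["move cmd", "power"]},
}

def add_block(diagram, name, inputs=(), outputs=()):
    """Reuse the diagram for a new product variant by inserting a block."""
    diagram[name] = {"inputs": list(inputs), "outputs": list(outputs)}
    return diagram

# New requirement: recline back and feet. Wire the new box to the
# existing command box and battery (signal names are assumptions).
add_block(diagram, "recline motor", inputs=["recline cmd", "power"])
diagram["command box"]["outputs"].append("recline cmd")
```

Removing a feature is the symmetric operation: delete the block entry and the signals that reference it.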
5 Conclusion
While the requirements are being created, it is possible to increase the detail of functions, subsystems, components, interface parameters, etc. When the desired details are
Systems Architecting
Hardware and Software: How Can We Establish
Concurrency between the Two?
Shuichi Fukuda
Stanford University
Abstract. Today, most of our products are combinations of hardware and software. We must remember that software works on hardware. But the function of hardware is fixed, while that of software evolves throughout its life cycle.
A piece of hardware is an individual living thing: once created, it starts to degrade. Therefore, maintenance is a very important task with respect to hardware. But hardware functions are fixed, so maintenance is relatively easy, because its objective is to restore the degraded quality to its original designed level.
Software, however, is a species. Each piece of software, once born, grows in a different way to adapt to its situation, and many new portions are added on. Therefore, decommissioning becomes very difficult for software.
Since most of our products are combinations of hardware and software, we should pay attention to how we can make them work together effectively and how we can decommission a system composed of both hardware and software. This paper discusses this issue and suggests a possible solution.
1 Introduction
Today, most of our products are combinations of hardware and software. We must remember that software works on hardware. But the function of hardware is fixed, while that of software evolves throughout its life cycle.
___________________________
Shuichi Fukuda
Stanford University, Consulting Professor
3-29-5, Kichijoji-Higashicho, Musashino, Tokyo, 180-0002, Japan
Phone: +81-422-21-1508 Fax: +81-422-21-8260
Email: shufukuda@aol.com
76 S. Fukuda
A piece of hardware is an individual living thing: once created, it starts to degrade. Therefore, maintenance is a very important task with respect to hardware. But hardware functions are fixed, so maintenance is relatively easy, because its objective is to restore the degraded quality to its original designed level.
Software, however, is a species. Each piece of software, once born, grows in a different way to adapt to its situation, and many new portions are added on. Therefore, decommissioning becomes very difficult for software.
Since most of our products are combinations of hardware and software, we should pay attention to how we can make them work together effectively and how we can decommission a system composed of both hardware and software. This paper discusses this issue and suggests a possible solution.
Software used to be produced in the same way as hardware. In fact, there were companies in Japan that called their software divisions "Software Factory". There was no distinction between hardware and software at that time.
Software engineers did their best to comply with the design requirements and to deliver software products satisfying them. But it soon became clear that complete debugging is impossible and that, with increasing diversification, these efforts do not really answer the needs of customers. Customers would like software with more adaptability and flexibility rather than software with fixed functions and no bugs.
Around 1980, knowledge engineering was proposed, and what it brought to the software sector was quite revolutionary. Its most important impact was the concept of continual prototyping.
Since then, software development has changed completely. Software engineers started to provide us with "baby" software (a beta version), and this baby grows with us. Software was no longer a "fixed function" product.
But as we came closer to the end of the 20th century, around 1990, diversification grew more and more, and situations came to change more frequently and widely. To respond to these changes, software changed once again. Its development philosophy had been to grow a product; now the product changed from an individual living thing to a species. Many new pieces came to be added on, so that software products came to "evolve" as species. A discussion based on an individual living thing or entity does not hold any more; we have to look at software as an "evolving system" or "evolving species" (Figure 2).
Since software is a species, we cannot easily predict its life: it evolves forever. And our non-physical world is boundless; it can expand to infinity. Thus, the issue is how we can reconcile the finite and bounded world with the infinite and unbounded one.
But we live in a physical world and our life is finite. For example, an organization or a company pursues prospering forever; survival is its ultimate objective. But the people who work there change after about 30 years. Generations change.
Then, would it really be worthwhile to develop software that works forever? If the generations change, their ways of thinking or of solving problems will change. Therefore, it would be far wiser to develop software that works best for this period of time at the maximum. Of course, situation changes are more frequent, so we have to add on more new pieces to increase adaptability. If the people in an organization change widely, then it is time to decommission the system. Software is the collection of the brains of these people, and if these people are no longer with the organization, there will be no one left who can make adequate judgements about the system's outcomes.
In fact, people cannot understand software with too many add-ons. If they cannot understand it, it is time for decommissioning.
But what happens if hardware degrades much faster than software? In fact, in most cases software is thrown away because the hardware is replaced. Thus, the problem is how we can decommission them at the same time. This could be possible if we change our hardware development from fixed-function products to evolving-function ones. In short, we develop hardware as a system. There have been hardware systems, but these systems were composed, so to speak, in a parallel way: there was no communication between the parts. We have to change the design so that each part communicates with the others in order to evolve. Some parts could be decommissioned earlier if other parts could take over their functions.
Thus, the new hardware system works on a complementary basis. Each part or component communicates with the others so that they can maintain the functions of the whole system. The function may not evolve, but this certainly adds adaptability and flexibility to the system. The hardware system survives changing situations and maintains its original desired functions. Thus, we should change our hardware design to be more system-oriented and adaptive. Hardware elements will serve the designed purpose as a system interacting with the physical world.
We have to remember again that we live in a physical world. So if such a hardware system does not work anymore, it is time to replace the whole combined software and hardware system. We can add any amount of adaptability and flexibility in software, but only within the non-physical world. If we come back to the basic fact that we live in a physical world, a system that no longer fits the physical world loses its meaning, and efforts to evolve its software are not worthwhile.
7 Reference
[1] Shuichi Fukuda, editor. Feature Issue: Frontiers in Reliability. Transactions of the
Institute of Electronics, Information and Communication Engineers, Dec. 2006 (in Japanese)
80 S. Fukuda
Figure 1. Hardware development – hardware as delivery of a finished product. [Figure: the function is fixed at design, realized in production and shipping, degrades in use, and is restored by maintenance.]
Figure 2. Software development – software as continuous prototyping. [Figure: the function is customized continuously through shipping and use.]
Figure 3. Hardware, software and humanware integrated into one system. [Figure: software, hardware and humanware form an integrated system embedded in the environment.]
A Simulated Annealing Algorithm based on Parallel
Cluster for Engineering Layout Design
Abstract. The layout design problem is a kind of nesting problem that is naturally NP-hard
and very difficult to solve. Machine layout design is even more difficult because its
nesting items are machine parts that have both irregular shapes and complex
constraints. A feasible way to solve the machine layout problem is to employ ameliorative
algorithms such as simulated annealing. However, these algorithms are usually
CPU-time hungry, and sometimes the computing time is unbearable. In this paper, the
authors propose parallelizing the simulated annealing algorithm on a multi-computer
network (a parallel cluster). We have combined the Message Passing Interface (MPI)
with Visual C++ to integrate the Simulated Annealing Algorithm based on Parallel
Cluster with an Engineering Layout Design Support System. An engineering example of
vehicle dynamical cabin layout design is presented to test the validity of the algorithm.
If an appropriate temperature piece is chosen and a suitable number of nodes is used,
the integration of the Simulated Annealing Algorithm based on Parallel Cluster with the
Engineering Layout Design Support System definitely improves efficiency for engineers.
1 Introduction
It is difficult to solve complex product layout problems. Many conventional
algorithms are used for this kind of problem, such as accurate algorithms [1],
simulated annealing [2], genetic algorithms [3, 4], the extended pattern search
algorithm [5], hybrid algorithms, expert systems [6] and virtual reality [7].
As a kind of heuristic algorithm, simulated annealing is usually used in
engineering layout design. However, because of shortcomings such as
inefficiency, the simulated annealing algorithm spends too much time solving
layout design problems for complex mechanical products. Moreover, a lot of
professional knowledge and
1 Ph.D. student, School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong
University, No. 3 Shangyuan Residence, Haidian District, Beijing; Tel: 8610-51685335;
Email: 06116257@bjtu.edu.cn; http://www.bjtu.edu.cn/en.
84 Nan LI, Jianzhong CHA; Yiping LU and Gang LI
Tbh = T0 - ((T0 - T1) / q)(h - 1)  (1)

Teh = Tbh - (T0 - T1) / q  (2)
In these equations, h is the index of the current temperature piece (h = 1, 2, ..., q),
Tbh is the initial temperature of stage h, Teh is the final temperature of stage h,
T0 is the initial temperature of the whole computing process, and T1 is the final
temperature of the whole computing process.
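Assuming Eqs. (1)-(2) split the annealing range [T1, T0] into q equal pieces, the temperature pieces can be computed as below; the example values T0 = 800 and q = 4 follow the paper's test case, while T1 = 0 is an assumption made here:

```python
# Temperature pieces per Eqs. (1)-(2): the annealing range [T1, T0] is
# split into q equal stages; stage h runs from Tbh down to Teh.
def temperature_piece(h, T0, T1, q):
    piece = (T0 - T1) / q
    Tb = T0 - piece * (h - 1)   # Eq. (1): initial temperature of stage h
    Te = Tb - piece             # Eq. (2): final temperature of stage h
    return Tb, Te

# Hypothetical run with T0 = 800, T1 = 0, q = 4:
stages = [temperature_piece(h, 800.0, 0.0, 4) for h in range(1, 5)]
# stages == [(800.0, 600.0), (600.0, 400.0), (400.0, 200.0), (200.0, 0.0)]
```

Note that the final temperature of stage h equals the initial temperature of stage h + 1, which is what allows the stages to be chained across synchronization points.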
(2) Run the Simulated Annealing Algorithm with the current temperature and the
current values of the design variables on every node.
For example, on node m (m is the index of nodes, m = 1, 2, ..., p), the Simulated
Annealing computation is S(Tbh, Teh, Xbh) -> Xehm, where Tbh is the current
initial temperature, Teh is the current final temperature, Xbh is the current initial
value of the design variables, and Xehm is the current final value of the design
variables after Simulated Annealing computing on stage h and node m.
(3) Collect the results from every node on the current stage after the Simulated
Annealing computation, so that the optimum point can be found by comparing the
results. The comparison rule is min(f(Xeh1), f(Xeh2), ..., f(Xehp)); the optimum
node is k (k = 1, 2, ..., p). After that, the best values of the design variables are
sent to all nodes, so that the situation is kept synchronized across the different
nodes, and Xb(h+1) = Xehk is set to prepare for the next stage of computing.
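Steps (1)-(3) can be sketched end-to-end. Real nodes would be MPI ranks running in parallel; here the p "nodes" are simulated serially, and a deterministic local-search stub stands in for the stochastic annealing kernel S, so this illustrates the staged synchronization scheme rather than annealing itself (objective and step rule are hypothetical):

```python
# Sketch of steps (1)-(3): staged temperatures with per-stage
# synchronization across p simulated nodes.
def f(x):
    # Hypothetical objective to minimize (stands in for the layout cost).
    return (x - 3.0) ** 2

def anneal_stub(Tb, Te, x, node):
    # Deterministic stand-in for S(Tbh, Teh, Xbh) -> Xehm: each node
    # tries a different step; steps shrink as the schedule cools.
    step = 0.5 * (node + 1) * (Tb - Te) / Tb
    candidate = x + step if f(x + step) < f(x) else x - step
    return candidate if f(candidate) < f(x) else x

def parallel_sa(x0, T0, T1, q, p):
    x = x0
    piece = (T0 - T1) / q
    for h in range(1, q + 1):
        Tb = T0 - piece * (h - 1)        # Eq. (1)
        Te = T0 - piece * h              # Eq. (2)
        results = [anneal_stub(Tb, Te, x, m) for m in range(p)]  # step (2)
        x = min(results, key=f)          # step (3): collect, compare, sync
    return x

best = parallel_sa(0.0, 800.0, 0.0, 4, 3)
```

In an MPI implementation, the `results` gather and the broadcast of the winning Xehk back to all ranks are the only communication points, once per temperature piece.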
[Figure: architecture of the layout design support system – a meta-system coordinating a task base, rule base, case base, model base, database and layout constraint base above a user interface.]
Figure 2 shows the detailed flow chart of the Simulated Annealing Algorithm based
on Parallel Cluster and how the algorithm works with the other modules.
[Figure 2. Flow chart: initialize p, q, T0 and T1; get the foremost layout object station value from the knowledge subsystem; run the Simulated Annealing Algorithm process per stage, getting Tb and Te; combine the results into an XML file.]
4 Example
The results in Table 1 show that, because of the message-passing time, the
Simulated Annealing Algorithm based on Parallel Cluster spends more time than
the single Simulated Annealing Algorithm does, but it achieves a much better
objective function value. A higher initial temperature yields a much better
result. Figure 4 shows the layout result (T0 = 800, q = 4).
5 Conclusions
The test shows that the Simulated Annealing Algorithm based on Parallel Cluster is
useful and efficient. The result and the computing time depend not only on
algorithm efficiency but also on the style of the layout object model, the
condition of the network, and so on. For complex mechatronic products, it is hard
to solve layout problems with a single conventional algorithm: a lot of layout
knowledge and constraints influence the process and result of a layout problem,
so the algorithms sometimes have to cooperate with other intelligent systems and
human-computer interactive systems. The integration of the Simulated Annealing
Algorithm based on Parallel Cluster with the Engineering Intelligent Layout
Design Support System is superior to others in this field.
Acknowledgement
This research is supported by the National Natural Science Foundation of China
(No. 50335040).
References
[1] Andrea Lodi, Silvano Martello, Daniele Vigo. Heuristic algorithms for the three-
dimensional bin packing problem. European Journal of Operational Research, 141,
2002, 410-420.
[2] S. Szykman, J. Cagan. A simulated annealing based approach to three-dimensional
component packing. Transactions of the ASME, 117(2), 1995, 308-314.
[3] Teng Hongfei, et al. Complex layout optimization problem: layout scheme design of
spacecraft module. Journal of Dalian University of Technology, 41(5), 2001, 581-588.
[4] Qian Zhiqin, Teng Hongfei, Sun Zhiguo. Human-computer interactive genetic
algorithm and its application to constrained layout optimization. Chinese Journal of
Computers, 24(5), 2001, 553-559.
[5] Su Yin, Jonathan Cagan. An extended pattern search algorithm for three-dimensional
component layout. Transactions of the ASME, 122, 2000, 102-108.
[6] Kuoming Chao, Marin Guenov, Bill Hills, et al. An expert system to generate
associativity data for layout design. Artificial Intelligence in Engineering, 11, 1997,
191-196.
[7] Tang Xiaojun, Cha Jianzhong, Lu Yiping. Research on the method of human-
computer cooperation for solving packing problems based on virtual reality. Chinese
Journal of Mechanical Engineering, 39(8), 2003, 95-100.
Space Mission Architecture Trade off Based on
Stakeholder Value
Márcio Silva Alves Branco a,1, Geilson Loureiro b and Luís Gonzaga Trabasso c
a Systems Engineer, National Institute of Space Research (INPE), Brazil.
b Technologist, Laboratory of Integration and Testing, INPE, Brazil.
c Associate Professor, Aeronautics Institute of Technology (ITA), Brazil.
Abstract. One of the most difficult aspects of the system conceptualization process is to
recognize, understand and manage the trade-offs in a way that maximizes the success of the
product. This is particularly important for space projects. A major part of the systems
engineer's role is therefore to provide information that the system manager can use to make
the right decisions. This includes the identification of alternative architectures and the
characterization of their elements in a way that helps managers find, among the
alternatives, a design that provides the best combination of the various technical areas
involved in the design. A space mission architecture consists of a broad system concept,
which is the most fundamental statement of how the mission will be carried out and satisfy
the stakeholders. The architecture development process starts with the stakeholder analysis,
which enables the identification of the decision drivers; then the requirements are analysed
to elaborate the system concept. Effectiveness parameters such as performance, cost, risk
and schedule are outcomes of the stakeholder analysis, and are labelled as decision drivers
to be used in a trade-off process to improve managerial mission decisions. Thus, the
proposal presented herein provides a means for innovating the mission design process by
identifying drivers through stakeholder analysis and using them in a trade-off process to
achieve stakeholder satisfaction with the effectiveness parameters.
1 Introduction
An effective system must provide a particular kind of balance among critical
parameters. An ideal solution should meet high performance requirements in a
cost-effective way in all technological areas. This is a very difficult goal to attain
because success in one area could drive a failure in another.
1 Systems Engineer, National Institute of Space Research (INPE), Av. dos Astronautas, 1758,
Jardim da Granja, CEP 12227-010, São José dos Campos, São Paulo, Brazil; Tel: +55 12
3945 7103; Fax: +55 12 3941 1890; Email: marcio@dss.inpe.br.
92 M. S. A. Branco, G. Loureiro and L. G. Trabasso
Essentially all space projects go through mission evaluation and analysis stages
many times; however, there are relatively few discussions in the literature that
tackle trade-off analysis for designing cost-effective architectures [3].
Thus, considering that about 80% of the life cycle cost, performance, risk and
schedule of a project are committed by decisions made during design concept
exploration, this paper addresses several questions: how can such decisions be
improved? How can a system architecture be evaluated through cost, performance,
risk and schedule while taking stakeholder values into account? How can such an
evaluation be anticipated to the beginning of the design process? How can the
connection between stakeholder values and the architecture elements be
established? These questions reflect the state of the art of the design process
regarding the concept phase.
An innovative method intended to investigate the system trade-off space at an
early design phase, taking into account all the questions stated above, is proposed
in this paper.
The process begins with the stakeholder analysis, where all interests towards the
system to be developed are defined. The requirements analysis can be done in the
same feedback loop as the stakeholder analysis. Then, the architecture elements can
be defined and the stakeholder values (defined earlier) allocated to them. This step
ensures a relationship between stakeholder interests (values) and architecture
elements. The definition of key trades for each architecture element is a creative
step where a set of cost-effective solutions can be found. The critical point of this
approach is to identify the decision driver for each architecture element and
stakeholder value. The method then establishes a connection between the physical
solution and the associated aspects that can commit cost, performance, risk and
schedule in a project. Comprising all these steps, the evaluation of architecture
alternatives can be done by assessing each element's impact on the architecture,
taking into account the decision drivers (performance, cost, risk and schedule)
identified earlier. A set of alternatives is evaluated and the selection can be made.
Figure 2. Stakeholder context diagram and interests for the Data Collection System (DCS). [Figure: stakeholders such as operators, sponsors, program manager and team members, each with interests and relative weights, e.g. operation easiness – operation (5%), team (10%); the weights total 100% per driver.]
The second step of the method is to identify the stakeholders' interests and the
relative importance of each one. To accomplish this stage, stakeholders should be
listed in a table or spreadsheet with their key interests and relative importance in
terms of cost, performance, risk and schedule. Special attention must be paid to
outlining multiple interests, particularly those that are overt and those that are
hidden in relation to the project objectives. It is important to keep in mind that
identifying interests is done from the stakeholders' perspective, not one's own.
Requirements largely describe aspects of the problem to be solved and
constraints on the solution [1]. Requirements analysis reflects the sometimes
conflicting interests of a given set of the system's stakeholders.
Many authors list sources of stakeholder requirements. Stevens et al. [8]
provide a list of sources of user requirements and Pugh [5] a set of additional
sources of stakeholder requirements.
Requirements analysis is conducted iteratively with functional analysis to
optimize performance requirements for the identified functions, and to verify that
the elaborated solutions can satisfy the stakeholder requirements.
The requirements refinement loop assures that the technical requirements map
the stakeholder values, assumptions and goals.
Space Mission Architecture Trade off Based on Stakeholder Value 95
Figure 3. Relationship matrix between architecture elements and decision drivers for the DCS. [Flattened matrix: rows are the architecture elements and their alternative options – Processing (space processing; some level; ground processing), Autonomy (low; medium; high level), Orbit/constellation (1 spacecraft; 2 spacecraft; 4 spacecraft, 2 planes; 8 spacecraft, 3 planes) and Altitude (LEO; MEO; GEO). Columns are the decision drivers from Table 1, weighted with relative weights from Table 2 (3, 7, 5, 5, 20, 30, ...): No. of spacecraft (cost), payload mass (cost), No. of ground stations, No. of control stations, No. of employees (cost), processing (cost), operators (cost), message size (perf.), time of transmission (perf.), interval of transmission (perf.), No. of maneuvers (perf.) and funding constraints (perf.). Each cell scores the element impact on the architecture for that decision driver, from 1 (very small) to 10 (very high cost or performance increase); the two final columns give the element stakeholder satisfaction in cost and performance (e.g. space processing: 62, 75; low level autonomy: 81, 10). Stakeholder satisfaction with an architecture is the sum of its element results, e.g. Architecture 1 (cost) = 62 (space processing) + 81 (low level autonomy) + ..., Architecture 1 (perf.) = 75 (space processing) + 10 (low level autonomy) + ...]
The last two columns are results obtained from the sum of products between the
relative weights and the element impacts on the architecture, taking into account
the decision driver relationships established in Table 1. An evaluation of
stakeholder satisfaction with the architecture effectiveness is obtained through the
sum of the element results (one option for each architecture element).
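The sum-of-products evaluation described above can be sketched as follows; the element names, weights and impact scores are hypothetical illustration values, not the paper's actual Figure 3 data:

```python
# Stakeholder satisfaction for one decision driver (e.g. cost) is the
# sum over architecture elements of (relative weight) x (impact score
# of the chosen option), impacts scored 1 (very small) .. 10 (very high).
def satisfaction(weights, impacts, chosen):
    return sum(weights[e] * impacts[e][opt] for e, opt in chosen.items())

weights = {"processing": 3.0, "autonomy": 2.0}   # hypothetical weights
impacts = {
    "processing": {"space": 9, "ground": 1},     # hypothetical scores
    "autonomy": {"low": 8, "high": 2},
}
arch1 = {"processing": "space", "autonomy": "low"}
arch2 = {"processing": "ground", "autonomy": "high"}
```

Evaluating each candidate architecture once per decision driver (cost, performance, risk, schedule) produces the per-driver satisfaction scores that feed the trade-off.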
The matrix presented in Figure 3 is just illustrative. More studies are necessary
to model cost, performance, risk and schedule as decision drivers and to improve
the integrated mission architecture trade-off.
[Figure: axes are stakeholder satisfaction with architecture cost (horizontal) and stakeholder satisfaction with architecture performance (vertical), scales 10 to 1000; efficient solutions lie on the Pareto frontier, dominated solutions behind it.]
Figure 4. The Pareto frontier obtained from the matrix results of Figure 3 (DCS)
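The efficient/dominated split of Figure 4 can be sketched as a Pareto filter; the point values are hypothetical, and the orientation assumed here is that a lower cost score and a higher performance score are preferred:

```python
# Pareto filter: a solution is efficient if no other solution is at
# least as good on both axes (and different from it).
def pareto_frontier(points):
    def dominates(b, a):
        # b dominates a: no worse cost, no worse performance, b != a.
        return b != a and b[0] <= a[0] and b[1] >= a[1]
    return [a for a in points if not any(dominates(b, a) for b in points)]

archs = [(62, 75), (37, 50), (18, 25), (70, 60)]  # (cost, performance)
efficient = pareto_frontier(archs)                # (70, 60) is dominated
```

Only the efficient set needs to be carried forward to the managerial decision; dominated architectures are worse on one driver without compensating on the other.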
8 Conclusions
Design methods present the product development process in a systemized and
organized way; however, the same does not occur with information and activities
about the creation and evaluation of design alternatives. There are relatively few
discussions about the trade-off process in the literature [4].
Defining and using performance, cost, risk and schedule parameters as decision
drivers, and transferring to them the relative importance of stakeholder interests
(values) in a trade-off process, may promote a new paradigm: an evaluation (through
a relationship matrix) of the architecture effectiveness through the value that the
stakeholders give to performance, cost, risk and schedule. In this way, stakeholder
satisfaction with the system effectiveness becomes more important in the
management decisions.
Thus, the proposal presented in this paper provides a means for innovating the
mission design process by interconnecting stakeholder needs, requirements analysis,
concept exploration and decision drivers in order to capture, in the trade-off
process, the value given by stakeholders to the architecture performance, cost, risk
and schedule. The paper proposes a subtle but closer-to-reality paradigm shift: trade
the importance stakeholders give to the performance, cost, risk and schedule
attributes rather than those attributes themselves!
9 References
[1] Egyed, Alexander et al. Refinement and evolution issues in bridging requirements and
architecture – The CBSP approach. Proceedings of the 1st International Workshops
From Requirements to Architecture (STRAW), co-located with ICSE Toronto, Canada,
May 2001, pp. 42-47.
[2] IEEE, IEEE-Std 1220-1994, IEEE trial-use standard for application and management of
the systems engineering process. The Institute of Electrical and Electronics Engineers
Inc., New York.
[3] Larson, Wiley J. and Wertz, James R., eds. 2003, Space mission analysis and design
(3rd Edition), Torrance: Microcosm Press and Dordrecht: Kluwer Academic
Publishers.
[4] Loureiro, Geilson. Systems engineering and concurrent engineering framework for the
integrated development of complex products, Ph.D. Thesis, Loughborough University,
Loughborough, UK, 1999.
[5] Pugh, S., 1991. Total design: integrated methods for successful product engineering.
Addison-Wesley Publishing Company. Wokingham (England).
[6] Shishko, Robert, 1995, “NASA systems engineering handbook”, National Aeronautics
and Space Administration.
[7] Smith, Larry W., Project clarity through stakeholder analysis. Available at:
<http://www.stsc.hill.af.mil/CrossTalk/2000/12/smith.html>. Access on: Febr. 9th
2007.
[8] Stevens, R. et al. Systems engineering: coping with complexity. Prentice Hall Europe,
London, 1998.
Product Development Process: Using Real Options for
Assessments and to support the Decision-Making at
Decision Gates
Keywords. Real options theory, product development process, risk, investment decision
making under uncertainty
1 PhD candidate at UNESP – Universidade Estadual Paulista Julio de Mesquita Filho, Av.
Dr. Ariberto Pereira da Cunha, 333, 12.516-410 – Guaratinguetá/SP, Brazil; Tel: +55 (12)
3123-2855; Email: hmartins@aedb.br; http://www.feg.unesp.br
100 H. Rocha, M. Delamaro
2 Background
The present article uses the following theoretical references to develop a model
for product development: project life cycle, product life cycle, and real options
theory.
objectives of the project will have been reached, or when it becomes clear that the
project objectives will not be reached, or when the need for it no longer exists.
PDPs are usually broken into sequential stages (or phases), so that requirements
can be checked against plans to evaluate the process alignment and trends towards
the objectives. Checkpoints between phases involve go/no-go decisions, leading
the process towards later management decisions or terminating projects that offer
neither good chances of revenue or profit to the company nor opportunities for a
better strategic positioning.
Many authors have studied the product life cycle [3, 18, 21, 31, 42]. Kotler [21]
divides it into five periods: development, introduction, growth, maturity and
decline. According to the author, during the development of the product the
company accumulates investment costs. The introduction period is characterized
by the launch of the product in the market, followed by an increase in sales. After
that, the product enters a period of stability (maturity), and from this point on,
sales and profits decline.
In the corporate finance literature, the value of a risky project is calculated as the
net present value (NPV) of its cash flows, discounted at a rate that reflects the
project risk; such a method is not able to capture the management flexibility
along the decision-making process. Decision-making during product development
requires that the existing options can be evaluated based on expected earnings
and the risks involved.
This concept can be calculated by the Capital Asset Pricing Model (CAPM)
[10, 15, 34]. Such a calculation establishes a discount rate to be used in the
analysis of an investment by its net present value (NPV): the discount rate is
increased to compensate for the existing risk, beyond the time value of money
(which would be the risk-free rate). However, in the PDP the risk variation has no
linear relation with the expected returns: at each phase transition, the project
evaluation drives a decision on whether the project goes on (if favorable
conditions occur), requires changes (due to changes in consumer needs, competition
or technology, or a combination of diverse factors), or is even cancelled.
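The conventional risk-adjusted valuation described above can be sketched as follows; all rates and cash flows are hypothetical illustration values:

```python
# CAPM sets a risk-adjusted discount rate, then future cash flows are
# discounted to a net present value.
def capm_rate(risk_free, beta, market_return):
    # r = rf + beta * (rm - rf): risk premium scales with beta.
    return risk_free + beta * (market_return - risk_free)

def npv(cash_flows, rate):
    # cash_flows[t] is the cash flow at the end of year t (t = 0, 1, ...).
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

rate = capm_rate(0.05, 1.2, 0.10)            # hypothetical: 11% per year
project_npv = npv([-100.0, 60.0, 60.0], rate)
```

Note that the discount rate is fixed for the whole project, which is exactly the rigidity criticized in the text: it cannot represent a risk profile that changes at each decision gate.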
Santos and Pamplona [39] state that in markets characterized by competitive
change, uncertainty and interactions, management has the flexibility to modify the
operation strategy to capitalize on favorable future opportunities or to minimize
losses. The probability of success in a project usually increases as the inherent
risk reduces over time [30]. Discounted cash flow undervalues projects because it
ignores and does not accommodate the main strategic questions in its analyses:
management does not have to accept an NPV calculation, positive or negative,
unless an explanation for it exists [24]. The CAPM therefore becomes inadequate,
and other models are used to measure return and risk in the decision-making
process, mainly the Black & Scholes formulae and the binomial model,
[Figure: a binomial tree of present values – an initial PV branches into PV1, PV2, ..., PVn across the development phases, from opportunity mapping and market detailing to end of life.]
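As one illustration of the binomial model mentioned in the previous section, a one-step risk-neutral valuation of the option to continue a project at a decision gate might look like this; all parameters are hypothetical:

```python
# One-step binomial sketch of a real option to continue a project.
def continuation_option(V, u, d, rf, K):
    # V: current project value; u, d: up/down factors over one period;
    # rf: risk-free rate per period; K: investment required at the gate.
    p = (1.0 + rf - d) / (u - d)      # risk-neutral probability
    up = max(V * u - K, 0.0)          # invest only if worthwhile
    down = max(V * d - K, 0.0)        # otherwise abandon (payoff 0)
    return (p * up + (1.0 - p) * down) / (1.0 + rf)

option_value = continuation_option(V=100.0, u=1.5, d=0.5, rf=0.05, K=80.0)
```

The max(., 0) terms are what a plain NPV misses: the right, not the obligation, to invest at the gate, which is the management flexibility the real options literature values.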
4 Final Considerations
The use of real options to revise the performance criteria in each of the
project phases seems to be an obvious and natural choice. Krishnan and
5 References
[1] AKEN, J, NAGEL, A. Organizing and Managing the Fuzzy Front End of New Product
Development. Eindhoven Centre for Innovation Studies, The Netherlands, Working
Paper 04.12, Department of Technology Management, 2004.
[2] ALESII, G. Rules of Thumb in Real Options Analysis. In: Real Options: Theory Meets
Practice, 8th Annual International Conference, Montreal, Jun 17-19, 2004.
[3] BAXTER, M. Projeto de Produto: Guia Pratico para o Design de Novos Produtos. 2 ed.
Sao Paulo: Edgard Blüncher, 2003.
[4] BROBOUSKI, W. Teoria das Opçoes Reais Aplicada a um Contrato de Parceria
Florestal com Preço Minimo. In: XXVI SBPO, Simposio Brasileiro de Pesquisa
Operacional, Sao Joao del-Rei: Sociedade Brasileira de Pesquisa Operacional, 2004.
[5] CHEN, M, IYIGUN, M. Generating Market Power via Strategic Product Development
Delays. In: 2004 Econometrics Society Summer Meeting, Providence, Rhode Island,
2004.
[6] CLARK, K, FUJIMOTO, T. Product Development Performance: Strategy,
Organization, and Management in the World Auto Industry, Boston: Harvard Business
School Press, 1991.
[7] COPELAND, T, ANTIKAROV, V. Opçoes Reais – Um Novo Paradigma para
Reinventar a Avaliacao de Investimentos. Rio de Janeiro: Campus, 2002.
[8] DAVILA, T. Performance and Design of Economics Incentives in New Product
Development. Research paper no. 1647, Graduate School of Business, Stanford
University, Jun 2000.
[9] FIGUEIREDO NETO, L, MANFRINATO, J, CREPALDI, A. Teoria das Opcoes reais:
de que se esta falando? In: X SIMPEP, Simposio de Engenharia de Producao,
Universidade Estadual Paulista, Departamento de Engenharia de Producao, Bauru,
2003.
[10] GITMAN, L, MADURA, J. Administracao Financeira – Uma Abordagem Gerencial.
Sao Paulo: Pearson, 2003.
[11] GRENADIER, S. Investment under Uncertainty and Time-Inconsistent Preferences.
Research paper no. 1899, Graduate School of Business, Stanford University, Jul 2005.
Product Development Process: Using Real Options 105
[12] GUSTAFSSON, J. Portfolio Optimization Models for Project Valuation. Ph.D. thesis –
Department of Engineering Physics and Mathematics, Helsinki University of
Technology, Helsinki, 2005.
[13] GUSTAFSSON, J, SALO, A. Contingent Portfolio Programming for the Management
of Risky Projects. Operations Research, 2005; 53; 6; 224-35.
[14] HEIRMAN, A, CLARYSSE, B. Do Intangible Assets and Pre-founding R&D Efforts
Matter for Innovation Speed Start-ups? Vlerick Leuven Gent Management School,
Working Paper 2004/04, Ghent University, The Netherlands, 2004.
[15] HOJI, M. Administracao Financeira: Uma Abordagem Pratica: Matematica Financeira
Aplicada, Estrategias Financeiras, Analise, Planejamento e Controle Financeiro, 5 ed.
Sao Paulo: Atlas, 2004.
[16] KAMRAD, B, ORD, K. Market and Process Risks in Production Opportunities:
Demand and Yield Uncertainty. In: Real Options: Theory Meets Practice, 7th Annual
International Conference, Montreal, Jul 10-12, 2003.
[17] KEIZER, J, VOS, J. Diagnosing Risks in New Product Development. Eindhoven
Centre for Innovation Studies, The Netherlands, Working Paper 03.11, Department of
Technology Management, 2003.
[18] KEPPLER, S. Entry, Exit, Growth and Innovation over the Product Life Cycle.
Department of Social and Decision Sciences, Carnegie Mellon University, 1996.
[19] KOK, R, HILLEBRAND, B, BIEMANS, W. Market-Oriented Product Development as
an Organizational Learning Capability: Findings from Two Cases. SOM Research
School, University of Groning, The Netherlands, 2002.
[20] KORT, P, MURTO, P, PAWLINA, G. The Value of Flexibility in Sequencing Growth
Investment. In: Real Options: Theory Meets Practice, 8th Annual International
Conference, Montreal, Jun 17-19, 2004.
[21] KOTLER, P. Administracao de Marketing – Analise, Planejamento, Implementacao e
Controle, 4a edicao. Sao Paulo: Atlas, 1996
[22] KRISHNAN, V, BHATTACHARYA, S. Technology Selection and Commitment in
New Product Development: The Role of Uncertainty and Design Flexibility.
Management Science, 2002; 48; 3; 313-349.
[23] LIN, B, HERBST, A. Valuation of a Startup Business with Pending Patent Using real
Options. Startup Junkies Organization, 2003. Available at
<http://www.startupjunkies.org/PatentCase_Valuation.pdf>. Acessed on: Jan 25th
2006.
[24] MACEDO, M. Avaliacao de Projetos: Uma Visao da Utilizacao da Teoria das Opçoes.
In: ENEGEP, XVIII Encontro Nacional de Engenharia de Producao. Niteroi: ENEGEP,
1998.
[25] MARTINEZ, A. Opçoes Reais na Analise de Contratos de Leasing. Revista de
Administracao de Empresas. EAESP, FGV, Sao Paulo, 1998, p.36-48.
[26] MINARDI, A. Teoria de Opçoes Aplicada a Projetos de Investimento. Sao Paulo:
Atlas, 2004.
[27] NORTON, D, KAPLAN, R. Estrategia em Acao: Balanced Scorecard, Rio de Janeiro:
Campus, 1997.
[28] OSTROVSKY, M, SCHWARZ, M. Synchronization under Uncertainty. Research
paper no. 1923, Graduate School of Business, Stanford University, Jun 2005.
[29] PINTO, C, MONTEZANO, R. Avaliacao por Opçoes Reais de Projeto de Sistemas de
Informaçoes Geograficas. Gestao.Org, 2005; 3; 3. Available at
http://www.gestaoorg.dca.ufpe.br. Acessed on: Feb 10th 2006.
[30] PMI. A Guide to the Project Management Body of Knowledge (PMBOK). Newton
Square: Project Management Institute, 2000.
[31] PORTER, M. Vantagem Competitiva: Criando e Sustentando o Desempenho Superior.
Sao Paulo: Campus, 1986
[32] ROCHA, H. Modos de Impacto e Efeitos dos Produtos nas Organizaçoes. In: SIMPOI,
V Simposio de Administracao de Producao, Logistica e Operaçoes Internacionais,
2002, Sao Paulo. Anais.
[33] ______. Metodologia Estruturada de Desenvolvimento de Produtos: uma Abordagem
Voltada a Excelencia nos Negocios. In: ENEGEP, 2003, Ouro Preto. XXIII Encontro
Nacional de Engenharia de Producao. Ouro Preto : ENEGEP, 2003.
[34] ROSS, S, WESTERFIELD, R, JORDAN, B. Principios de Administracao Financeira,
2a ed. Sao Paulo: Atlas, 2000.
[35] SADOWSKY, J. The Value of Learning in the Product Development Stage: A Real
Options Approach. Working Paper, Stanford University, Department of Management
Science and Engineering, 2005.
[36] SAITO, R, SCHIOZER, D, CASTRO, G. Simulacao de Tecnicas de Engenharia de
Reservatorios: Exemplo de Utilizacao de Opçoes Reais. RAE, 2000; 40; 2; 64-73.
[37] SANTIAGO, L, BIFANO, T. Management of R&D Projects under Uncertainty: A
Multidimensional Approach to Managerial Flexibility. In: IEEE Transactions on
Engineering Management, 2005; 52; 2; 269-280.
[38] SANTOS, E. Um Estudo sobre a Teoria de Opçoes Reais Aplicadas a Analise de
Investimentos em Projetos de Pesquisa e Desenvolvimento (P&D), 2001, M.Sc. thesis.
Escola Federal de Itajuba, Itajuba, 2001.
[39] SANTOS, E, PAMPLONA, E. Teoria de Opçoes Reais: Uma Atraente Opcao no
Processo de Analise de Investimentos. RAUSP, 2005; 40; 3; 235-252.
[40] SANTOS, D. A Teoria das Opçoes Reais como Instrumento de Avaliacao na Analise
de um Processo de Fusao/Incorporacao de Empresas. 2004. M.Sc. thesis. Universidade
Federal Fluminense, Niteroi, 2004.
[41] SILVA, W. Uma Aplicacao da Teoria das Opçoes Reais a Analise de Investimentos
para Internet em Tecnologia ASP. In: XXVI SBPO, Simposio Brasileiro de Pesquisa
Operacional, Sao Joao Del-Rei: Sociedade Brasileira de Pesquisa Operacional, 2004.
[42] SLACK, N, CHAMBERS, S, JOHNSTON, R. Administracao da Producao, 2a. ed. Sao
Paulo: Atlas 2002.
[43] WESSELER, J. The Option Value of Scientific Uncertainty on Pest-Resistance
Development of Transgenic Crops. In: 2nd World Congress of Environmental and
Resource Economists, Monterey, CA, Jun 2002.
A Valuation Technology for Product Development
Options Using an Executable Meta-modeling Language
1 Tsinghua University, Beijing, P. R. China
2 Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
Real options analysis is related to, but different from, financial option analysis. When
a financial option is purchased, certain rights in the future are contractually
protected [7]. Conversely, product development options in the real world usually
provide little if any guarantee. For example, the investment in certain technologies
may or may not create additional opportunities in the future. Therefore, when
modeling real options, the modeling method must deal with this additional level of
uncertainty. Furthermore, when comparing product development alternatives, it is
often necessary to preserve the structural and behavioral compositions of the
alternative scenarios. Many quantitative option analysis methods assume that the
possible behavioral and structural evolutions of the option portfolios of interest can
be abstracted into a few statistical measures. To preserve and analyze the structural
and behavioral information content in product development options, we utilize
modeling principles inspired by Hoare and Cousot [2, 6] to develop a model-driven
method for product development option analysis which can preserve the qualitative,
quantitative, and fuzzy aspects of “real” options.
When all related development options are statically related, the payoff function f
can be modeled as a conditional probability function, which represents all possible
value combinations of these options and their associated distribution.
Once any one of these options’ values is determined, the payoff function can be
used to compute the value distributions of the other related options. When two or more
options are related temporally, a payoff function can be constructed to take the
value of a temporally-causal option and compute the value(s) of the temporally-
dependent option(s), for example, V_output = f(V_input1, V_input2, ...).
Payoff functions can also be analyzed using algebraic rules to substitute, simplify,
or compose them into different functions or values.
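As an illustration of these two kinds of payoff relation, the following Python sketch models a static relation as a conditional probability table over joint option values, and a temporal relation as a function of upstream option values. All names, probabilities, and payoff rules here are invented for illustration; this is not the authors' implementation.

```python
# Static relation: a conditional probability table over joint option values.
# Once option A's value is realized, option B's distribution follows.
joint_payoff = {
    "A_high": {"B_high": 0.7, "B_low": 0.3},
    "A_low":  {"B_high": 0.2, "B_low": 0.8},
}

def conditional_distribution(observed_a):
    """Given option A's realized value, return the value distribution of option B."""
    return joint_payoff[observed_a]

# Temporal relation: a payoff function mapping causal options' values to the
# dependent option's value, V_output = f(V_input1, V_input2, ...).
def temporal_payoff(v_input1, v_input2):
    """Toy payoff: the dependent option inherits the better upstream value, discounted."""
    return 0.9 * max(v_input1, v_input2)

print(conditional_distribution("A_high"))   # {'B_high': 0.7, 'B_low': 0.3}
print(temporal_payoff(100.0, 80.0))         # 90.0
```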
The purpose of the tool is twofold. First, the above method involves certain tedious model
manipulation tasks that must be automated. Second, it should serve as an
experimental prototype to determine whether this type of automated model
construction tool can be useful in real-world engineering projects. This section
briefly describes the functional features and high-level software architecture of a
tool, Object-Process Network (OPN). OPN can be characterized as an executable
meta-language designed for model manipulation tasks [8, 9].
4 Applications
This method and its supporting tool, OPN, have been successfully applied to study
varying compositional structures of different product development portfolios and
assess the interactions between many qualitative and quantitative variables. Due to
limitations on article length, the following list briefly summarizes three published
applications:
• A study of Moon and Mars exploration architectures for the NASA Vision for
Space Exploration [10]. In this study, over a thousand alternative, feasible mission-
mode options were generated and compared for human travel to the Moon or Mars.
• A study of developmental options for flight configurations of a particular type of
military aircraft [8]. This study demonstrated OPN’s ability to reason about the
possibility space of physical configurations under incomplete information.
• A study of options for Space Shuttle derived cargo launch vehicles [11]. This
study generated and evaluated hundreds of developmental portfolio options for
evolving the Space Shuttle’s hardware into a new launch vehicle.
OPN helped streamline the exploration of many combinatorial possibilities in
different option portfolios. It also supports numeric calculation of payoff values,
and the calculations can be postponed or symbolically simplified without
sacrificing the integrity of the analysis results.
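One way to realize such postponed or symbolically simplified calculations is to represent payoffs as deferred expressions that are composed algebraically and evaluated only when a value is finally needed. The sketch below is an invented illustration of that idea, not the OPN implementation.

```python
# Represent payoffs as thunks (zero-argument functions) so that composition
# builds an expression without computing anything yet.

def add(f, g):
    """Compose two deferred payoffs into a new deferred payoff."""
    return lambda: f() + g()

def constant(v):
    """Wrap a known payoff value as a deferred expression."""
    return lambda: v

# Build the expression tree; no numeric work happens here.
portfolio = add(constant(120.0), add(constant(-30.0), constant(15.0)))

# A symbolic simplification rule: adding a zero payoff is the identity, so
# add(portfolio, constant(0.0)) can be rewritten to portfolio before evaluation.
simplified = portfolio

print(simplified())  # evaluation is forced only at this point
```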
A Valuation Technology for Product Development Options 113
Fig. 1. Screen capture of the OPN tool.
114 Benjamin H. Y. Koo, Willard L. Simmons, and Edward F. Crawley
References
1. C. Y. Baldwin and K. B. Clark. Design Rules, Vol. 1: The Power of Modularity.
The MIT Press, Mar 2000.
2. P. Cousot and R. Cousot. Compositional and inductive semantic definition in fixpoint,
equational, constraint, closure-conditioned, rule-based and game-theoretic form. In P.
Wolper, editor, Computer Aided Verification: 7th International Conference, LNCS 939,
pages 293–308. Springer-Verlag, July 3-5 1995.
3. K. Czarnecki and U. W. Eisenecker. Generative Programming: Methods, Tools and
Applications. Addison-Wesley, 2000.
4. W. Griswold, M. Shonle, K. Sullivan, Y. Song, N. Tewari, Y. Cai, and H. Rajan. Modular
software design with crosscutting interfaces. IEEE Software, 23(1):51– 60, 2006.
5. J. Guttag and J. J. Horning. Formal specification as a design tool. In Proceedings of the
7th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, pages
251–261, New York, NY, USA, 1980. ACM Press.
6. C. A. R. Hoare. Process algebra: A unifying approach. In A. E. Abdallah, C. B. Jones,
and J. W. Sanders, editors, Communicating Sequential Processes: The First 25 Years,
LNCS 3525. Springer, July 2005.
7. J. C. Hull. Options, Futures, and other Derivative Securities. Prentice Hall, 2nd
edition, 1993.
8. B. H. Y. Koo. A Meta-language for Systems Architecting. PhD thesis, Massachusetts
Institute of Technology, Cambridge, MA, 2005.
Abstract. This article intends to shed new light on the system design process. We
suggest the possibility of combining the simulation features of an executable meta-language
called Object-Process Network (OPN) with the descriptive power of well-known modeling
languages such as Object-Process Methodology (OPM), Structured Analysis (SA) or
SysML. In the Systems Architecture domain, a major issue one always faces is the large
number of options to be considered when designing a system. We must keep in mind that
modeling the space of options is actually different from modeling the system of interest.
Traditional modeling tools allow us to specify a unique solution, when we should consider
the whole set of feasible architectures. On the other hand, OPN is able to help architects
assess all these possible configurations but, as a decision-support tool, it does not offer the
descriptive power that OPM, SA and SysML do.
1 Introduction
The process of designing complex socio-technical systems often involves tasks
such as transferring knowledge across different domains and computing parameters
of interest. In addition, the space of options to be considered increases as the
system being studied becomes more complex. Experience has shown that
architectural reasoning techniques are essential when developing such systems. The concept
behind these techniques is that reasonable decisions can be made by evaluating
parameters relevant to that specific system, often related to cost and time issues.
Nonetheless, until now we have had only tools and techniques able to tackle specific needs;
none could handle the main needs in an integrated approach. That is why Object-
Process Network (OPN) [5] turns out to be a unique tool in system architecting. It
unifies the processes of decomposing the problem (and thus being able to manage
1
Rua H8-C 302, Campus do CTA, São José dos Campos, SP - Brazil; Tel: +55 (12)
39477902; Cel: +55 (12) 8134-9758; Email: gusta.pinheiro@gmail.com,
felipeeng08@gmail.com
118 F. Simon, G. Pinheiro, G. Loureiro
common visual understanding of the system architecture and, on the other hand,
when modelling you do not have instruments for deciding which way to move
forward with the required model detailing.
The state of the art comprises, on one hand, modelling frameworks such as OPN
that provide a full set of combinatorial options and strong decision support but need
enhancement in visual modelling detail and, on the other hand, frameworks such
as Structured Analysis, SysML and OPM that provide comprehensive and detailed
system models but no decision-support capability. In other words, the decision
process and solution architecture modelling are done separately, leading, for
example, to decisions made with less than the available system information or to detailed
models that will never be used.
This section provides a brief review of OPN and OPM, since they will be used
in the examples in the following sections.
Studies of system architects’ current modeling processes showed the need for a
tool that could support the tasks one always faces when
developing a system: enumerating the space of possibilities, given the constraints that
bound the problem; computing all architectures, based on the options assigned in
the first task; and, eventually, evaluating all those solutions to identify the
preferable ones. Prof. Benjamin Koo’s research gave birth to Object-Process
Network, a meta-language based on a small set of linguistic primitives that unifies
these three tasks in systems architecting. A tool [10] based on the Java programming
language supports this meta-language.
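The three tasks — enumerating the space of possibilities under bounding constraints, computing the architectures from the assigned options, and evaluating the solutions to find the preferable ones — can be sketched as follows. The options, costs and the compatibility constraint below are hypothetical, not taken from the cited tool or studies.

```python
from itertools import product

# Task 1 input: options for each architectural decision (invented example).
options = {
    "propulsion": ["chemical", "electric"],
    "lander":     ["direct", "orbit_rendezvous"],
}
cost = {"chemical": 5, "electric": 3, "direct": 4, "orbit_rendezvous": 2}

def feasible(arch):
    # A constraint bounding the problem (illustrative assumption): electric
    # propulsion is taken to be incompatible with a direct lander.
    return not (arch["propulsion"] == "electric" and arch["lander"] == "direct")

# Task 1: enumerate all combinations; Task 2: compute each architecture's
# parameter (here, total cost); Task 3: evaluate and pick the preferable one.
architectures = [dict(zip(options, combo)) for combo in product(*options.values())]
scored = [(sum(cost[v] for v in a.values()), a) for a in architectures if feasible(a)]
best_cost, best_arch = min(scored)

print(best_cost, best_arch)
```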
An OPN graph is composed solely of processes, objects and the links that connect
these two kinds of entity (links run only between blocks of different types). A process is defined by
functions that modify chosen parameters. Thus, the best architectures can be
inferred through analysis of the final values of those parameters. Each “thread” in an
OPN graph will lead to a different set of “final values”. Figure 1 shows on the right
an OPN graph and on the left a possible “path” to be followed. For further
information on OPN’s ongoing research, see [5, 6, 7, 8].
Model multiplicity due to excess symbols has been pointed out as a serious
weakness of the Unified Modeling Language (UML). A new approach to designing
information systems, Object-Process Methodology (OPM), was developed to overcome this
excess-notation problem. OPM uses three types of entities: object, process
(which modifies the state of an object) and state. Many types of links are available to
define the kind of relation the blocks have with each other. These relationships
can be either structural or procedural: the former express persistent, long-term
relations; the latter express the behavior of the system. Another OPM feature
is that it supports both graphical and textual models: Object-Process Diagrams (OPD)
present the system through symbols for objects, processes and links; Object-
Process Language (OPL) presents the system through sentences. The Object-Process
Case Tool (OPCAT) [11] is the software developed to demonstrate the
applicability of this methodology.
Towards Automatic Systems Architecting 121
All these ideas regarding the union of existing modeling languages gave birth
to what we call “The New Approach”. Essentially, it means getting the best of
OPN and the descriptive languages at once. Figure 4 illustrates “The New Approach”.
Its first main idea is that we have to define the initial and final states of the
system. For example, were we to apply this concept to the Apollo Program, the
initial state would be “Man on Earth” and the final state “Man on the Moon”.
In order not to have infinite possibilities, we have to define the “boundary
conditions” of the system. They are the main direct obstacles that hinder the
change from the initial to the final state. One may ask: how are we to overcome these
“obstacles”? For each of them, a new “subsystem” is designed that allows the
obstacle to be overcome. For instance, what hinders man going from the Earth
to the Moon? The first of the restrictions would be “Distance”. For this specific
restriction, a given number of subsystems (options) is available.
This process can continue if, for a subsystem to work properly, new
boundary conditions have to be added. We should stop iterating when the
subsystems generated are “minimum blocks”, in the sense that there are no more
boundary conditions and there is no need to zoom into them. This idea is
schematized in figure 5. It is what we will call “The General Concept” for
The New Approach.
One could argue that the concept applied here does not allow us to make high-
level decisions, since we would always be obliged to go down into lower levels
(generating the “new subsystems”). That is not true. The problem is that starting to
make decisions at the top levels means having fewer parameters with which to evaluate the
architectures. Imagine that we are starting to develop a complex new system and
there are three main (and completely different) streams to be followed. For a
completely new system, we would not have the parameters to infer the best
architectures without going down to lower levels in each of the streams.
Simulating at a top level with only macro-parameters could be deceiving. That is
why we suggest continuing the iteration process until “minimum subsystems” are
modeled. Defining a subsystem as “minimum” is purely a matter of convenience
or availability. For instance, if it is possible to infer the parameters that describe a
minimum system (for example, from an available database), then there is no need
to continue the iteration process for it. In other situations, it can be more convenient to
develop an entire model for a subsystem; only then will it be possible to
describe it in terms of the system’s main parameters. Note that Subsystem Option 1
and Subsystem Option 2 are regarded in our model (figure 5) as minimum
subsystems.
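The iteration described above can be sketched as a recursive expansion that stops at minimum blocks, i.e., options whose parameters are already available in a database. All names, obstacles and parameter values below are invented for illustration.

```python
# Parameters of known "minimum blocks", as if read from an available database.
parameter_db = {"chemical_rocket": {"cost": 9}, "ion_drive": {"cost": 6}}

# Boundary conditions: obstacles and the subsystem options that overcome them.
boundary_conditions = {
    "Distance": ["chemical_rocket", "ion_drive"],
}

def nested_conditions(subsystem):
    # A subsystem may introduce new boundary conditions of its own;
    # in this toy example none do, so every option is already a minimum block.
    return {}

def expand(conditions):
    """Recursively expand obstacles into subsystem options until minimum blocks."""
    tree = {}
    for obstacle, opts in conditions.items():
        tree[obstacle] = {}
        for opt in opts:
            if opt in parameter_db:            # minimum block: parameters known
                tree[obstacle][opt] = parameter_db[opt]
            else:                              # otherwise zoom into the subsystem
                tree[obstacle][opt] = expand(nested_conditions(opt))
    return tree

print(expand(boundary_conditions))
```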
The first and essential step, which will govern the development of the rest of the
model, is the definition of the initial and final states and the conditions related to the
central problem. The examples below (figure 6 and figure 7) show familiar systems
and the definition of the change of states linked to the macro problem (developed
with OPCAT II – Object Process Case Tool [11]).
Figure 6. Definition of Change States for the system “Spaceship from Earth to the Moon”.
Figure 8. OPM model for The General Concept with parameters to be simulated.
Now that we have options and parameters (related to the options via functions),
an OPN model can be created. Based on minimum requirements, the simulation
process will eventually point out the feasible architectures (figure 9). These
requirements could be risk, time, cost, etc.
The results of the simulation process let us derive a solution for the system to
be built (judged to be amongst the best ones).
Figure 12. OPM model with parameters (linked to the functions through dashed lines).
Based on the information obtained in the first process, we can now build an
OPN model that enumerates solutions to our problem based on the constraints we
have defined (Figure 13).
The last step in the composition of this system is, based on the decisions made, to
model the actual system. This can be done systematically by erasing all the boundary
conditions, their links and the options not selected, and by turning the dashed lines in
figure 12 into continuous lines.
5 Further Development
As a next step towards the final goal of mechanizing the conception of systems,
further research needs to be done. Some questions remain unanswered. How do we
decide the extent to which we should model, so as not to spend great effort modeling
a solution that will never be developed? When composing an architecture model
using different notations for various parts of the model, how do we make the overall
model make sense? How do we integrate different models if they are written in
modelling notations chosen at the convenience of the system architect?
6 Conclusions
In this study we presented an innovative approach to complex systems
development. The applicability of this new methodology, which can automatically
generate architectures for a system, was shown through the study of the logistics of
a soda market. Besides the prospect of mechanizing the conception of systems, this
approach would allow us to identify new solutions traditionally discarded because of
constraints based on past experience. For instance, the Apollo Program was
constrained in the 1960s by time and risk; the solution then was to go via Lunar
Orbit. Systems for the new space exploration vision are very much constrained by cost,
and the most cost-effective solutions may point to a direct flight to the lunar surface,
for example. Certainly, changing the initial constraints will change the space of
solution options. Using such a methodology means that we will not have to start the
whole decision process over again if the constraints change. Further studies are expected
to answer the current problems with this new approach.
7 References
7.1 Thesis
7.2 Papers
Abstract. In this paper, we present how to integrate several processes using a common
reference frame offering various viewpoints. This approach is applied to the integration of
two quality standards - ISO 9001:2000 and CMMI - in order to generate a multi-view quality
reference frame allowing certification relative to the two standards. This reference frame
takes into account the structure imposed by ISO and the recommendations of CMMI. The
implementation of this reference frame is accompanied by the application of the
organizational improvement model IDEAL (relative to the implementation of CMMI). This
paper is based on work completed within a software engineering company (SYLIS).
Both human and cultural aspects of the company are considered in order to mitigate the
problem of acceptability.
Keywords. Quality standards, CMMI, ISO 9001:2000, enterprise modeling, business
process, reference frame.
1 Introduction
ISO 9001 : 2000 requires that an organization’s processes undergo continuous
improvement even after ISO 9001 : 2000 certification has been achieved. CMMI
provides an organization with a means to accomplish further process improvement.
CMMI is a very detailed set of documents that contain many more of the basic
concepts for process improvement than can be found in ISO 9001 : 2000. Our
paper presents the implementation of these quality standards in a unique reference
frame allowing us to obtain certification for both standards.
The remainder of this paper is organized as follows: Section 2 gives a brief
explanation of CMMI and ISO 9001 : 2000. Section 3 introduces our proposal of
1
Corresponding author. E-mail: anis.ferchichi@sylis.com. Tel: +33(6)3 20 17 10 21.
Mobile: +33(0)6 26 16 37 45. Fax: +33(0)3 20 17 10 26
134 Anis Ferchichi , Jean-Pierre Bourey, Michel Bigand and Hervé Lefebvre
process areas, yet the processes and their actions can span different levels. The
continuous representation provides maximum flexibility for focusing on specific
process areas according to business goals and objectives [13].
All quality models and standards have their advantages and drawbacks. Among
the various models, standards and bodies of knowledge, some cover all the
activities of the company and adopt a top-down approach, like ISO 9001:2000;
others, like CMMI, are guides of best practices, are specific to certain activities
of the company, and adopt a bottom-up approach. On the basis of the two
standards that interest us (ISO 9001:2000 and CMMI), it is interesting to
combine them, or at least to bring them closer together, in order to reveal various
synergies: implementing a multi-view map of processes. Such a map
will allow the implementation of the two quality standards and the award of the
corresponding certifications.
Thus, our objective is to carry out a reference frame which:
• integrates the recommendations of various quality standards,
• allows the evaluation of these quality standards,
• is easily usable by all employees,
• is easily exploitable in a process improvement approach.
Generally, within the two models we find descriptions of activities common to both
and activities specific to each model (as shown in Figure 4). These two kinds of
activity may or may not be implemented in the enterprise’s activities.
Building on the CMMI and ISO 9001:2000 synergy [4, 5], we implement a
mapping in order to determine:
• the CMMI practices covered by ISO 9001:2000 chapters,
• the ISO 9001:2000 chapters covered by CMMI practices.
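Such a many-to-many mapping can be represented as a pair of lookup tables, one per direction. The sketch below illustrates the idea; the ISO/CMMI pairs are placeholders chosen for illustration, not the actual mapping of [5].

```python
# Placeholder mapping pairs (illustrative, not the published SEI mapping).
pairs = [
    ("ISO 7.3 Design and development", "CMMI RD (Requirements Development)"),
    ("ISO 7.3 Design and development", "CMMI TS (Technical Solution)"),
    ("ISO 8.2.2 Internal audit",       "CMMI PPQA (Process and Product QA)"),
]

iso_to_cmmi, cmmi_to_iso = {}, {}
for iso, cmmi in pairs:
    iso_to_cmmi.setdefault(iso, []).append(cmmi)   # ISO chapter -> CMMI practices
    cmmi_to_iso.setdefault(cmmi, []).append(iso)   # CMMI practice -> ISO chapters

# One ISO requirement can map to many CMMI practices, and vice versa.
print(iso_to_cmmi["ISO 7.3 Design and development"])
```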
Since the company has its own methods and ways of working, we feared that the
integrated model would not be used, so we dealt with the problem of acceptance.
We began by classifying the personnel of the company into two categories:
• The allies: they approve of the work and are convinced of the utility of, and
the need for, such a quality reference frame in the company.
• The recalcitrants: they are somewhat hostile to the implementation of such
a model because:
– they usually have their own methods and processes;
– they are against the use of rigorous methods and processes.
We concentrated our efforts on this second part of the personnel. We chose a
strategy of persuasion: presentations showing the advantages of the adopted
model compared to classic quality standards, and discussions with them to learn
their expectations and whether anything should be modified.
Implementing integration of quality standards CMMI and ISO 9001 141
5 Conclusion
To implement CMMI in an ISO 9001 : 2000-certified organization efficiently
and effectively, both the common and different parts of the ISO 9001 : 2000
standards and CMMI documents must be identified. ISO 9001:2000 requirements
can be mapped to CMMI practices [5]. However, the following limits have been
identified in this mapping process:
1. A requirement of ISO 9001 : 2000 can be mapped to many CMMI
practices. Conversely, a CMMI practice can be mapped to many ISO 9001 :
2000 requirements. These mappings are useful for comparing these two
frameworks, but they may cause confusion during the decision-making process.
2. It is difficult for organizations to understand and apply these mappings
during CMMI implementation because they only describe the degree of the
correlation between ISO 9001 : 2000 and CMMI without providing any
explanation of these mappings.
3. The structure and words that are used by CMMI are not familiar to ISO-
certified organizations, which makes it more complicated for an ISO 9001 :
2000-certified organization to implement CMMI.
We are now working on mitigating these limits and on implementing more
than two quality standards in the same reference frame.
6 References
[1] Richard Basque. “CMMI un itinéraire fléché vers le Capability Maturity Model
Integration”. 2004. Dunod.
[2] Boehm, Barry W. “Software Engineering Economics”. 1981. Prentice-Hall.
[3] Boris Mutafelija, Harvey Stromberg. “Exploring CMMI-ISO 9001:2000 Synergy
when Developing a Process Improvement Strategy”. 2003. BearingPoint and Hughes
Network Systems.
[4] Boris Mutafelija, Harvey Stromberg. “Systematic Process Improvement Using ISO
9001 : 2000 and CMMI”. 2003. Artech House.
[5] Boris Mutafelija, Harvey Stromberg, “Mappings of ISO 9001:2000 and CMMI Version
1.1”. Available from: http://www.sei.cmu.edu/cmmi/adoption.
[6] Mary Beth Chrissis, Mike Konrad, Sandy Shrum. “CMMI: Guidelines for Process
Integration and Product Improvement”. 2003. Addison Wesley Professional.
[7] Dennis M. Ahern, Aaron Clouse, Richard Turner. “CMMI Distilled: A Practical
Introduction to Integrated Process Improvement. Second Edition”. 2003. Addison-
Wesley.
[8] Frank Vandenbroecke. “Combiner CMM et ISO 9001 - 2000 pour l’amélioration de
processus ?”. N-Tech Belgium.
[9] International Organization for Standardization (ISO). “ISO 9001:2000: Quality
Management Systems Requirements”. Beuth, Berlin, 2000.
[10] International Organization for Standardization (ISO). “Quality Management Systems
Fundamentals and Vocabulary, ISO 9000:2000”, 2000.
[11] International Organization for Standardization (ISO). “Quality Management Systems
Guidelines for Performance Improvements, ISO 9004:2000”, 2000.
[12] Ketola, J., Roberts, K. “ISO 9000:2000 in a Nutshell”, 2000, Patton Press, Chico, CA.
[13] Margaret K. Kulpa and Kent A. Johnson. “Interpreting the CMMI: A Process
Improvement Approach”. 2003. Auerbach Publications.
[14] Software Engineering Institute (SEI). “CMMI version 1.1 CMU/SEI-2002-TR-012”.
2002.
Steps Towards Pervasive Software: Does Software
Engineering Need Reengineering?
Abstract. Nowadays, the definition of service demands that machines behave like human
beings. In order to work efficiently, machines need to analyze current situations, perceive
user needs and provide users with intelligent, automatic and proactive adaptation that
responds to the current context. System performance will be guaranteed only if we add new
features to system behavior, such as self-adaptation, self-organization, self-configuring, self-
healing, self-optimizing and self-protecting. These challenging automated processes can
produce proactive behavior if software engineers change the engineering logic and use the
environment context as a solution instead of thinking of it as an obstacle.
1 Introduction
Due to the revolution of Information Technology, a new computing era is taking
place. Many challenges need to be met, especially in a mobile and dynamic
environment where users are interacting with different devices, constructing ad hoc
networks, while systems should provide them with proactive value-added services.
Pervasive or Ubiquitous Computing was first introduced by Weiser as his
vision for the future of computing in the 21st century, in which computing elements
will disappear from the user’s consciousness while functioning homogeneously in
the background of his environment [13]. In pervasive computing, users compute
and communicate with each other whenever, wherever and however [9].
Pervasive computing merges physical and computational infrastructures into an
integrated environment, where different computer devices and sensors are gathered
to provide new functionalities and specialized services and to boost productivity [15].
While analyzing pervasive computing and studying its progression, it was
found that for hardware and computing elements to disappear, software needs to
disappear, and the spatial and temporal relationships between people and objects
(human-machine interaction) have to be well defined in the early design phase in
order to cope with the dynamicity of ubiquitous computing environments [17].
In this section, we have presented different definitions of pervasive computing.
Next, in section 2, we will present the challenges these systems face. In section 3,
we will present the difficulties of integrating context within content information. In
144 D. Al Kukhun, F. Sedes
section 4, we will present some enabling technologies that help the system adapt to
the heterogeneity of its software components. In section 5, we will present the
challenges that software engineers face in providing different system
requirements. Finally, we highlight the importance of changing the adaptivity logic
and using the environment as a stimulating factor for adaptation.
Figure: Software adaptation and its subcomponents.
3.1.2 Middleware
The adaptation process in pervasive systems has to deal with its context volatility
and unpredictability. Pervasive computing connects many applications together.
Matching a lot of software components is not a practical solution but transforming
them into generic and powerful middleware would facilitate the integration of
these components and would ensure homogeneous communication [5]. A uniform
and adaptive middleware technology would ensure interoperability between
different services within ad hoc networks.
Pervasive computing has introduced new high-level system requirements that
should be taken into consideration in the design and implementation process.
Interoperability is highly demanded at all levels of pervasive systems.
Software components should be built independently of the context, so that they
can be used in different computing environments and applications.
Heterogeneity is a challenge in pervasive environments; mobile users interact
with the system using different hardware devices, and the context and connectivity
become dynamic. Meanwhile, the user also has a dynamically evolving profile. As
a result, the software should provide services that adapt to different screen
resolutions, user-interaction methods, machine power and processing capacities.
Mobility is an important requirement. Actual mobility is the capability of an
autonomous software agent to dynamically transfer its execution to the nodes
where the resources it needs to access are located. Exploiting this form of mobility
will save network bandwidth and increase execution reliability and efficiency.
This aspect can be deployed to help embedded software agents follow mobile
users wherever they go. Virtual agent mobility is the ability to be aware of the
multiplicity of networked execution environments [5].
Survivability and security provide systems with powerful capacities.
Survivability is the ability of a system to fulfill its mission on time despite the
presence of attacks and failures. Such functioning requires a self-healing
infrastructure with improved qualities such as security, reliability, availability and
robustness. An application may employ security mechanisms, such as passwords
and encryption, yet may still be fragile, failing when a server or network link
dies. The literature presents two kinds of survivability in the context of
software security: survival by protection (SP) uses security mechanisms like
access control and encryption to ensure survivability by protecting applications
from malicious attacks; survival by adaptation (SA) gives the application the
ability to survive by adapting itself to changing dynamic conditions [5].
Continuity is a very demanding feature in ubiquitous applications, especially
given the uncertainty and instability of user connectivity while the user moves around.
The application should be able to pause the user session in the case of
sudden disconnection and continue working later without losing information [4].
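A minimal sketch of this pause-and-resume behavior, with an invented API (the names `on_disconnect`/`on_reconnect` and the in-memory store are illustrative assumptions, not from any cited system):

```python
# Persisted session store; a real system would use durable storage.
sessions = {}

def on_disconnect(user, state):
    """Snapshot the user's session state when connectivity is suddenly lost."""
    sessions[user] = dict(state)

def on_reconnect(user):
    """Restore the saved session so work continues without losing information."""
    return sessions.pop(user, {})

on_disconnect("alice", {"document": "report.txt", "cursor": 128})
restored = on_reconnect("alice")
print(restored)  # {'document': 'report.txt', 'cursor': 128}
```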
In the adaptation context, agility and evolvability become important
requirements. Agility allows managing changes that are timely unpredictable but
have predictable characteristics. Evolvability enables handling changes in the long-
term during the life cycle of a system.
Self-organization is the ability of a system to spontaneously increase its
organization without the control of the environment or external systems. Self-
organizing systems not only regulate or adapt their behavior but also create their
own organization. Self-organization applies the concepts of self-learning, expert
systems, chaotic theory and fuzzy logic. Self-organization may also be applied in
Figure: Software requirements (Automation, Security, Context-Independency,
Integrity, Extensibility, Scalability) and user requirements (Usability,
Accessibility, Context-awareness, Reliability, Privacy, Interoperability)
converging on Adaptation.
7 Conclusion
In a pervasive computing environment, everything becomes dynamic and
heterogeneous: data, hardware, connectivity and software. The software’s mission
is to coordinate the different system subcomponents in order to satisfy
the changing needs, requirements and user context. In this article, we present a new
dimension of software adaptation techniques in which we propose using the
context to help the system function proactively.
8 References
[1] Al Kukhun D and Sedes F, “A Taxonomy for Evaluating Pervasive Computing
Environments”, MAPS 2006, IEEE Conference on Pervasive Services, 2006, pp 29-34.
[2] Almenárez F, Marín A, Campo C and García C, “TrustAC: Trust-Based Access Control
for Pervasive Devices”. In The 2nd International Conference on Security in Pervasive
Computing, Germany, 2005, pp 225-238.
[3] Campbell R, Al-Muhtadi J, Naldurg P, Sampemane G and Mickunas MD, “Towards
Security and Privacy for Pervasive Computing”. Software Security, 2002 p 1-15.
[4] Chen E, Zhang D, Shi Y and Xu G, “Seamless Mobile Service for Pervasive
Question-Answer Means for Collaborative
Development of Software Intensive Systems
Peter Sosnin1

Head of Computer Department, Ulianovsk State Technical University (Ulianovsk),
Russia.
1 Introduction

Successful development of Software Intensive Systems (SIS) remains a problem.
The low success rate (about 30%) in this area [2] indicates that SIS developers
still lack very important technological tools. The role of such tools can be played
by the means of Artificial Intelligence, first of all means supporting interaction
with knowledge and experience, and the modeling of reasoning, decision-making
and problem solving.

The practice of SIS development shows that the negative influence of the
mentioned causes can be lowered by applying effective question-answer reasoning
for interaction with experience (and models of experience) involved in the
development process. As examples of such reasoning we can mention the reasoning
in the “inquiry cycle” [5] and the “inquiry wheel” [6]. Similar ideas are used in
the special question-answer system which supports the development of SIS [8]. In a
more general context, the place and role of reasoning are presented in [1], in [7]
where reasoning is presented at seven levels of application together with
knowledge, and in [4] as model-based reasoning.
1 Head of Computer Department, Ulianovsk State Technical University (Ulianovsk),
UlSTU, 32, Severny Venetc, 432027, Russia; Tel: +7 8422 4531556; Fax: +7 8422 431556;
Email: sosnin@ulstu.ru; http://www.ulstu.ru/SOSNIN
2 Question-Answer Models

Different types of conceptual models are developed for the tasks whose decisions
are presented during development of a software intensive system and in its results.
Such types include, for example, UML diagrams, Data Flow Diagrams (DFD) and
Entity-Relationship diagrams (ER-diagrams). Nowadays, visualized graphic
models consisting of components and connectors between them play an important
helpful role. Visual models help the stakeholders to bring their skills of working
with figurative information into the development processes.

The adequacy of the applied conceptual models essentially depends on how they
are constructed. Guidelines are often used to support conceptual modeling, but any
guideline describes a typical scheme of actions and a typical scheme of reasoning
applied to a certain subject domain. Such guidelines function properly, but they
are not useful for coordinating the various conceptual schemes of humans.

Historically, questions and answers have been the basic means to coordinate the
conceptual schemes of individuals who try to reach mutual understanding in a
definite task and work. Such activity is put in the basis of question-answer
modeling of the task.
A question-answer model of a task, QA(Zi), is formed and used in a step-by-step
process of the conceptual decision of the task. Such a decision usually includes
the “Conception”, “Architecture” and “Project” forms of SIS representation. Usage
of the QA-model of any task is aimed at the coordination of human conceptual
schemes. Construction of the QA-model is completed when the set of conceptual
models {MCk}, chosen for sufficient understanding of the task Zi, is created.
Usage of the QA-model in any state of its life cycle represents the specific kind
of modeling named “question-answer modeling”.
Thus, the conceptual decision of the task Zi includes decisions of a set of tasks
{ZCk}, each of which is aimed at the construction of the corresponding model from
the set {MCk}. The methods and means of QA-modeling presented below are applied
not only to the task of SIS design, but also to any service task ZCk.
Question-answer models are the systematized representation of the reasoning
used during the decision of the task Z(t) and kept in a special QA-database. Any
QA-model is a set of interactive objects such as “question”, “answer” and “task”
with certain attributes and operations. The structure and content of the QA-model
are defined according to the following views:

- Logic view, fixing the representation of QA(Z(t)) within the frame of the logic
of questions and answers (the visual representation includes a tree of tasks
where each task is presented by a corresponding QA-tree).
- Conceptual view, opening the ontology of the task Z(t) and the process of its
creation.
- Communicative view, opening question-answer processes as communicative
interactions of the stakeholders concerned in the conceptual decision of the
task Z(t).
- Activity view, registering “questions” and “answers” as objects of activity.
- View from the position of experience, fixing the experience involved in the
decision process.
Each of the presented views is formed step by step and registered in the QA-base
of the project. A certain set of concerns, models, documents and functions
providing the construction and usage of views is connected with each view. The
logic view is the primary base of all these conceptual units, which are produced
by developers with special means of analysis, transformation, representation and
visualization (Figure 1).
Figure 1. The logic view (a QA-tree of questions Qi and answers Ai) and the
conceptual units derived from it by analysis, transformation, representation and
visualization: project ontology, working dictionary and predicative descriptions;
communication and argumentation schemes and models; QA-PERT and Gantt diagrams
and QA-event nets; declarative and procedural precedents.
Such a set of views is used for the construction of QA-models of project and
service tasks. The general case of a QA-model of the task Z(t) is defined as an
integrated set of “views” on the task, which is realized as a special structure of
data registering the logic view, variants of its transformation and
representation, including visual representation through the results of the
analysis.
Let us pass to specifications of QA-models. The QA-model QA(Z(t)) of the task
Z(t) is created as a system S of dynamic interactive objects, QA(Z(t)) = S({XI}),
each of which has a unique indexed name XI of a definite type X. The names of
types (X = ...) reflect the types of questions and answers. The indexes of names
are assigned automatically.

Each object XI(Ti, Sb1j, Sb2k, t, Gn) uses the following attributes: Ti, the
description of the “object”; Sb1j, the identifier of the subject responsible for
the “object”; Sb2k, the identifier of the subject (generally a compound subject)
concerned in the “object”; t, the moment of time at which the current state of the
“object” was fixed; Gn, the set of other attributes of the “object” XI
representing it in the base of the project.
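As an illustration, the object structure just described can be sketched as a small data model. This is a hypothetical sketch: the class names QAObject and QAModel, the field names and the register operation are illustrative choices, not taken from the paper's implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class QAObject:
    """One interactive object of a QA-model: a 'question', 'answer' or 'task'."""
    name: str              # unique indexed name XI, e.g. "Q1.2" or "A3"
    kind: str              # type X: "question", "answer" or "task"
    text: str              # Ti - the description of the object
    responsible: str       # Sb1j - subject responsible for the object
    concerned: List[str]   # Sb2k - (compound) subject concerned in the object
    fixed_at: float = 0.0  # t - moment when the current state was fixed
    attrs: Dict[str, str] = field(default_factory=dict)       # Gn - other attributes
    children: List["QAObject"] = field(default_factory=list)  # QA-tree links

@dataclass
class QAModel:
    """QA(Z(t)): the system S({XI}) of objects for one task Z(t)."""
    task: QAObject

    def register(self, parent: QAObject, obj: QAObject) -> None:
        """Enrich the model step by step by attaching a new object to the tree."""
        parent.children.append(obj)
```

Under these assumptions, building a QA-tree amounts to registering new question and answer objects under their parent object, which matches the step-by-step enrichment of QA(Z(t)) described in the next section.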
3 Essence of QA-Modeling

Question-answer models, as well as any other models, are created “for the
extraction of answers to the questions enclosed in the model”. Moreover, the
model is a very important form of representation of questions, the answers to
which are generated during interaction. Questions are fixed in QA-models
explicitly in the form of “objects-questions” and implicitly in the form of
ambiguities used in textual QA-units.

The essence of QA-modeling is the interaction of stakeholders with the artifacts
of the process and its current results, which helps them:

- To enrich the model QA(Z(t)) by adding a new question and/or answer to its
structure.
- To realize a number of variants of analysis, interactive (and collective)
inspection and/or validation of the state QA(Z(t)) or its fragments, directed at
revealing mistakes and defects of design decisions, and also their conformity to
norms and samples.
- To perform predicative analysis aimed at establishing the adequacy of the
model.
- To use the results of the analysis for establishing understanding and mutual
understanding in the group of stakeholders.
- To extract requirements and restrictions for the SIS.
- To manage changes in the project.
- To view the results of monitoring the states of the designing process.
Figure 2. Logical view: the decision process leads from the initial statement
Z*(t0) of the task to the conceptual project as the result of the decision; the
tree of tasks {Zi} is mapped to the QA-tree of questions and answers, from which a
library of models {QA(MKj)} is produced by analysis, transformation,
representation and visualization.
[Figure: the iterative process over the tree of tasks, combining appointing of the
tasks, step-wise refinement and QA-modeling.]
oriented paradigms. The base version of the QA-processor can be opened for Web
access by stakeholders over the Internet.

The noted application of the QA-processor is developed as a model of the Rational
Unified Process (RUP) technology, providing creation of the conceptual project.
For the creation of the conceptual project the workflows “Business modelling”,
“Requirements” and “Analysis and design”, and also three supporting workflows, are
used. The application is aimed at the construction of 16 artefacts, including all
architectural artefacts of the RUP.
The basic role in the realization of the method is carried out by the workflow of
question-answer reasoning QAR(t). Therefore we shall present a number of details
connected with the dynamics of this workflow.

At a definite time ti the reasoning QAR(t) goes to the state QAR(ti), which has
its own causal potential that gives the possibility of moving the reasoning
forward to the next state QAR(ti+1). In this aspect the “history” of previously
performed work, represented in QAR(t)-codes, influences the next rational step of
reasoning. The next steps, both for reasoning and for design, can be defined by
means of question-answer analysis of the QAR(t)-codes.
The general statement of each project task should be defined before question-
answer work with this task begins. The special definition of the task (as its
general statement) uses a special pattern presenting the task as three structured
text blocks.

The first block reflects the main purpose of the system under design, which is
specified by its potential users. Here we begin the work with the basic use-case
diagram for the task in the UML language.

The second block defines the main techniques for performing the use-case diagram
for the task. It provides the construction of the basic diagram of business
objects of UML.

The third block defines the technology of implementation of the system under
design. The information of this block is applied in conceptual design as context
information.

Analysis of the text T0 of the general statement of a task and its translation to
a PROLOG-like language are used for the extraction of questions to begin and
continue QAR.
In more detail, the method is based on the step-by-step registering of questions
and answers in accordance with the following technique:

1. The set of questions {Qi} is taken from the text T0 and coded by adequate
texts T(Qi).
2. The actions of item 1 are executed for each text T(Qi); therefore the set of
questions {Qij} and their codes {T(Qij)} is formed. The actions of item 2 are
used to control the correctness of the question codes and for the choice of
those questions {Qk}, from the set Q = {Qi} ∪ {Qij}, which will be used for the
next step of detailing. The other questions are recorded for their application
in the subsequent steps.
3. The set of answers {Ak} and their codes {T(Ak)} is formed and registered in
the QA database.
4. Each text T(Ak) is processed as the text T0. The cycle 1-4 is repeated until
the project comes to the end.
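The four-step cycle above can be sketched roughly as follows. This is a hypothetical sketch: extract_questions stands in for the paper's PROLOG-based text analysis (here it naively treats "?"-terminated sentences as questions), and the answer callback stands in for the stakeholders' answers; neither is specified in the source.

```python
def extract_questions(text):
    """Stand-in for the analysis of a text T0: here, every '?'-terminated
    sentence of the text is taken as a question."""
    return [s.strip() + "?" for s in text.split("?")[:-1]]

def qar_cycle(t0, answer, max_steps=100):
    """Register questions and answers step by step until no questions remain
    (or the step budget is exhausted). Returns the QA database."""
    qa_db = []                       # the QA database of (question, answer) pairs
    pending = extract_questions(t0)  # step 1: the set {Qi} is taken from T0
    for _ in range(max_steps):
        if not pending:
            break                    # the task is conceptually decided
        q = pending.pop(0)           # the chosen next question Qk
        a = answer(q)                # step 3: the answer Ak is formed...
        qa_db.append((q, a))         # ...and registered in the QA database
        pending.extend(extract_questions(a))  # step 4: T(Ak) processed as T0
    return qa_db
```

For example, `qar_cycle("What is the goal? Who uses it?", lambda q: "Answered.")` registers two question-answer pairs and stops, since the answers raise no further questions.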
All project tasks ZP = {ZPr} are derived from the process described above. Any
task ZPr is a question, qualified by the stakeholders as a task-question, whose
answer can be
found only through a decision process. Any service task ZCm has its QA-pattern
kept in a special library. Such a pattern helps to build the model QA(ZCs) for a
definite conceptual artifact. The work with questions, answers, QAR and
conceptual artifacts is executed with the help of the technological tasks
ZT = {ZTn} generally described below.
6 Conclusion

This paper presents a QA-method for the conceptual decision of SIS project tasks.
The method is based on stepwise refinement and QA-modeling. It can help to build
the system of conceptual models which represents the SIS at the levels of
description named “Conception”, “Architecture” and “Project”. The means of the
method are organized as a set of workflows called “Interaction with Experience”,
and can be used in addition to the RUP as a model of its workflows. The means of
QA-modeling are adjusted to support the conceptual design of Software Intensive
Systems, their documenting, and the training of a design team in a closed
corporate network. Such means can be opened for stakeholders on the Internet
through protected Web access.

The proposed means have confirmed their practical usefulness in the development
of a number of SIS, including an “Automated system for management of distance
education” and an “Automated system for planning of cargo transportation”.
7 References
[1] Bass L., Ivers J., Klein M., Merson P. Reasoning Frameworks. (CMU/SEI-2005-
TR-007), Pittsburgh, PA: Software Engineering Institute, Carnegie Mellon
University, 2005.
[2] Charette R.N. Why Software Fails. IEEE Spectrum, vol. 42, no. 9, 2005; 36-43.
[3] Kruchten P. The Rational Unified Process: An Introduction. Third Edition,
Addison-Wesley Professional, 2003.
[4] Lee M.H. Model-Based Reasoning: A Principled Approach for Software
Engineering. Software - Concepts and Tools,19(4), 2000; 179-189.
[5] Potts C., Takahashi K., Anton A. Inquiry-based Requirements Analysis. IEEE
Software, 11(2), 1994; 21-32.
[6] Reiff R, Harwood W., Phillipson T. A Scientific Method Based Upon Research
Scientists’ Conceptions of Scientific Inquiry, Session. Proceedings of the 2002
Annual International Conference of the Association for the Education of Teachers
in Science, 2002; 546–556.
[7] Rich C., Feldman Y.A. Seven Layers of Knowledge Representation and
Reasoning in Support of Software Development. IEEE Transactions on Software
Engineering, Volume 18, Issue 6, 1992; 451-469.
[8] Sosnin P. Question-Answer Processor for Cooperative Work in Human-
Computer Environment. Proceedings of the 2nd International IEEE Conference on
Intelligent Systems, 2004; 452-456.
Bringing together space systems engineering and
software engineering processes based on standards and
best practices
Abstract. The growing complexity of current space systems results in an increasing
responsibility for the software embedded in them. This is particularly significant when the
systems are employed for critical space missions. Usually the software has rigid real-time
requirements to fulfil, which demands high reliability and a disciplined development process.
This paper relates the effort of defining a set of software development processes for the on-
board computer flight control software (SOAB), a component of the Brazilian Satellite
Launcher (VLS), developed by the Instituto de Aeronautica e Espaco (IAE). To achieve the
strict requirements for space missions, the SOAB development team’s degree of maturity
and technological proficiency had to harmonize with a well-defined set of software
development processes integrated into the systems engineering. Furthermore, the definition
of these processes had to consider international space systems engineering standards and
the standards of quality established by IAE. Best practices in software engineering were
considered as well.
1 Introduction
Engineering in space systems must be a team activity where the various individuals
involved are aware of the important relationship between specialties and their roles
in the development as an organizational process. Successful accomplishment of
engineering objectives requires not only a combination of technical specialties and
expertise, but also principles and best practices to harmonize the systems
engineering activities and the software development process. This is particularly
significant while developing critical systems where the embedded software is
required to perform critical functions, mostly in real time. The on-board computer
flight control software (SOAB) for the Brazilian Satellite Launcher (VLS) is
________________________
1
Software Engineer, Instituto de Aeronautica e Espaco (Sao Jose dos Campos). Praca
Marechal Eduardo Gomes, 50, Vila das Acacias, Sao Jose dos Campos, SP, CEP: 12228-
904, Brazil. Tel: +55 (12) 3947 4969; Fax: +55 (12) 3947 5019; Email:
miriamalves@iae.cta.br;
160 M. Alves, M. Abdala and R. Silva
included in this category of critical software. The flight control software takes
critical responsibility for the control of the launcher – except for the launcher
destruction – from a few minutes preceding the lift-off until the satellite has been
deployed into the Earth’s orbit. The software is also responsible for the checkout of
various launcher systems, including inertial platforms, autopilot chain and
sequencing chain. Beside that, each flight has different characteristics and the
software has to be prepared to incorporate new issues like a new type of inertial
measurement unit. Consequently, this requires well-organized processes of
development and maintenance that are directly or indirectly responsible for the
specification, acceptance testing, and execution of flight software for the VLS.
On the one hand, as SOAB is part of the VLS space project, its development
process is intrinsically related to the systems engineering activities, management
and product quality assurance of the organization. On the other hand, the way of
developing software has to be consistent with software engineering methodologies.
The final outputs of the working process are operational prototypes made available
for extended operational validation. The results in terms of technology exploitation
experience and novel implemented solutions become an asset for reuse for future
space projects.
This paper presents the result of structuring and defining a set of software
development processes based on the Brazilian Standard for Quality Management
Systems [1] which has been adopted by IAE as a reference for its Quality
Management System and also on ESA ECSS Architecture for Space Projects [3, 4,
5], particularly the E-40 family [6, 7]. The definition of the processes also
considered the software system life cycle, the development environment, and best
practices and techniques.
This paper is organized in six sections. Section 1 is the introduction, followed
by section 2 that presents the importance of the adoption of space standards.
Section 3 presents the relationship between the software development process and
the systems engineering process in the organization. Section 4 describes how a set
of processes was structured, and Section 5 presents the visual representation chosen
to symbolize this set of processes. Finally, the conclusion and future prospects are
summarized.
According to this new software development approach and based on [6], the
requirement engineering process, in which the software requirements and
specifications are defined, has a special role in the software engineering activities
considering that these activities are purely intellectual and the outputs are normally
documents. Even the code can be taken as a special form of electronic document.
At this point resides the importance of adopting ECSS-E-40 Part 2B [7], whose
focus is on the requirements for the structure and content of the documentation
produced.
The requirement engineering process [6] includes the activity of System
Analysis and Architecture, which results in a Software System Specification that is
delivered to the software team to be reviewed in the System Requirements Review
(SRR). This review has the participation of both teams: systems engineering and
software development teams. The consolidation of the Software System
Specification generates the requirement baseline, which will serve as a base for the
SOAB requirements during all its development.
Subsequently, the software development team will start the requirements
engineering analysis activity in which the requirements and specifications of the
software are defined. This activity is part of the set of processes mentioned in the
next section and usually takes a large and often underestimated amount of time in
the whole development. Aware of that, the software quality team gives it special attention when
planning the development schedule. As a result of this activity, a software technical
specification is elaborated and reviewed jointly with the systems engineering team
in the Preliminary Design Review (PDR). In this review, the systems engineering
team will verify if all the requirements with respect to the requirements baseline
are captured in the technical specification and if the software architecture is well
identified. This is the opportunity to make sure the software will totally fulfil the
systems requirements, considering all the environmental definitions and
restrictions.
From this point on the software team, based on a set of processes (section 4),
will carry on the software development, which includes two more internal formal
reviews apart from those reviews made jointly with the system engineering team:
Detailed Design Review (DDR) and Critical Design Review (CDR). The CDR is
conducted at the end of the design and the completeness of the software validation
activities with respect to the technical specification is reviewed and verified.
Once the CDR is completed, the subsequent planned activities are carried out. The
accomplishment of these activities will lead the software team to the next formal
review, the Qualification Review (QR), which is conducted in cooperation with the
systems engineering team. At this point, the system and software engineering
processes come together again to verify if the software meets all its specified
requirements in the approved requirement baseline, before its integration into the
system. The consistency and completeness of all software documentation are also
reviewed and verified.
The next formal meeting with both teams is the Acceptance Review (AR), whose aim
is to prepare the software to be integrated into the space system, and where a
summary of the test reports and the software user manual are reviewed. The
consistency and completeness of the documentation are verified again. At this
point, the system becomes internally validated and it is delivered to the systems
engineering team in order to start the Software Acceptance activity.
The Software Acceptance and System Acceptance activities [6], as part of the
systems engineering process, will integrate the software into the system and
perform all the required operational tests. These tests are executed to guarantee that
the software is working correctly and accurately in the system environment as well
as with other system components.
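As a summary, the sequence of formal reviews described in this section can be written down as plain data. Only the review order and the joint/internal split are taken from the text; the next_review helper is an illustrative addition, not part of the described process.

```python
# Formal reviews of the SOAB development, in order, with the teams involved.
REVIEWS = [
    ("SRR", "System Requirements Review", "joint"),     # Software System Specification
    ("PDR", "Preliminary Design Review",  "joint"),     # software technical specification
    ("DDR", "Detailed Design Review",     "internal"),  # software team only
    ("CDR", "Critical Design Review",     "internal"),  # end of design; validation vs. spec
    ("QR",  "Qualification Review",       "joint"),     # software vs. requirement baseline
    ("AR",  "Acceptance Review",          "joint"),     # integration into the space system
]

def next_review(done):
    """Return the acronym of the next pending review, or None when all are done."""
    for acronym, _title, _teams in REVIEWS:
        if acronym not in done:
            return acronym
    return None
```

For instance, after SRR, PDR, DDR and CDR have been held, the next gate in this model is the Qualification Review.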
adaptation of UML activity diagrams. Each process in the set was represented
by one activity diagram, where each activity in a process was mapped into an
activity of the UML activity diagram. Activities in the processes were also detailed
in another activity diagram, when necessary.

The choice of incorporating the visual representation of the process into the
CASE environment helped to bring together the development environment, the
development process itself and the best practices of the Unified Process (UP),
making it possible to apply the very same techniques to disciplines other than
software that also use models, especially systems engineering, thus promoting more
integration among them.
The CASE environment allowed the project managers to use the IBM©
RUPBuilder to select, configure, create custom views and publish the defined set
of processes for their projects. They could start from the pre-established set of
processes and make further choices based on their project's unique needs.
Additionally, they could publish the processes on Web sites.
The way of publishing a customized process was managed from the user
interface, which gave the project managers the means to describe their process
selecting components from the defined set of processes to compose their own
customized process. They could also create different process views of the
customized process for different members of the software team.
7 References
[1] ABNT Sistema da Qualidade Aeroespacial – Modelo para a garantia da qualidade
em projetos, desenvolvimento, produção, instalação e serviços associados. ABNT
NBR 15100:2004.
[2] Blanchard BS, Fabrycky, WJ. Systems Engineering and Analysis. Prentice Hall,
New Jersey, 1998. 3rd edn.
[3] ECSS-E-10 Space Engineering - Systems Engineering Part 1B: Requirements and
process. ECSS-E-10 Part 1B, 18 Nov 2004.
[4] ECSS-E-10 Space Engineering - Systems Engineering Part 6A: Functional and
technical specifications. ECSS-E-10 Part 6A, 09 Jan 2004.
[5] ECSS-E-10 Space Engineering - Systems Engineering Part 7A: Product data
exchange. ECSS-E-10 Part 7A, 25 August 2004.
[6] ECSS-E-40 Space engineering – Software Part 1: Principles and requirements.
ECSS-E-40 Part 1B, 28 November 2003.
[7] ECSS-E-40 Space Engineering - Software Part 2: Document requirements
definitions (DRDs). ECSS-E-40 Part 2B, 31 March 2005.
[8] ISO/IEC Standard for information technology – Software life cycle processes.
ISO/IEC 12207:1995.
[9] Jacobson I, Booch G, Rumbaugh J. The Unified Software Development Process.
Addison-Wesley Longman, Inc., 1999.
[10] Vorthman Jr, RG, Stephen MH. Towards a Rational Approach to Standards for
Integrated Ocean Observing Systems (IOOS) Development. In: Proceedings of the
Oceans 2006 MTS/IEEE Conference. Boston, MA. Sept 18-21, 2006.
A Brazilian Software Industry Experience in Using
ECSS for Space Application Software Development
Abstract. This paper presents the tailoring of ECSS software product assurance
requirements aiming at the development of scientific satellite payload embedded software
by a Brazilian software supplier. The software item, named SWPDC, developed by DBA
Engenharia de Sistemas LTDA within a Software Factory context, is part of an ongoing
research project, named Quality of Space Application Embedded Software (QSEE),
developed by the National Institute for Space Research (INPE) with FINEP financial support.
Among other aspects, the QSEE project allowed the evaluation of the adherence of a
Software Factory’s processes to INPE’s embedded software development process requirements.
Although not familiar with the space domain, the high maturity level of such a supplier,
formally evaluated as CMMI-3, facilitated the Software Factory’s compliance with the
requirements imposed by the customer. Following the software verification and validation
processes recommended by the ECSS standards, an Independent Verification and Validation
(IVV) approach was used by INPE in order to delegate the software acceptance activities to a
third-party team. The contributions of the tailored form of the ECSS standard along the
execution of the project and the benefits provided to the supplier in terms of process
improvements are also presented herein.
1 Senior Software Engineer in Space Applications, Space Technologies Engineering, National
Institute for Space Research - INPE, Av. dos Astronautas, 1758, Sao Jose dos Campos, SP,
Brazil; Tel: +55 (12) 3945 7124; Fax: +55 (12) 3945 7100; Email: fatima@dss.inpe.br;
http://www.inpe.br/atifs/fatima/
168 F. Mattiello-Francisco, V. Santiago, A. M. Ambrosio, L. Jogaib and R. Costa
5. The Project Designer will then issue a production order (OP) along with a
set of artefacts to be provided (Use Case, Classes and Sequence diagrams).
6. Project Test Plan - includes testing guidelines that should be followed for
acceptance of the FSW-developed product.
7. Test Design - describes test cases for each technical specification.
Since the SWPDC software supply by the FSW is subject to space-domain
application technology transfer to the Brazilian software industry, the project
team was formed by one senior DBA Project Manager and two Software Analysts.
The latter had been trained on the job in a similar embedded application at an
INPE laboratory during the six months before SWPDC was effectively started.
4 Tailored Requirements
Software product assurance plays a mandatory role within the overall software
engineering process. The complexity of software development and maintenance
requires discipline to build quality into the product from the very beginning.
According to ECSS-Q-80B, the software product assurance requirements are
grouped in three sets of activities: (i) the assurance programme implementation, (ii)
the assurance of the development and maintenance process and (iii) the assurance
of the quality of the software product.
The tailoring process was carried out following these three groups in a
supplementary way by means of careful analysis of their requirements. Table 1,
Table 2 and Table 3 summarize, as examples, some of the applicable requirements
analyzed and their tailored form for SWPDC product assurance. The first two
columns of each table contain the ECSS-Q-80 requirement and its description,
respectively. The column entitled Tailored Form describes the way the
recommended requirement has been tailored in this project. Whenever a facility is
provided to support the requirement, the Tool column introduces it. The Document
column lists the customized documents in which such requirement is complied
with. Table 1 lists some requirements corresponding to the group (i).
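The table layout described above can be represented as a simple record type. This is a hypothetical sketch of the schema only; the example row is illustrative and not copied from the paper's tables.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TailoredRequirement:
    """One row of the SWPDC tailoring tables (Tables 1-3)."""
    ecss_id: str        # the ECSS-Q-80 requirement identifier
    description: str    # what the standard requires
    tailored_form: str  # how the requirement was tailored for SWPDC
    tool: str           # facility supporting the requirement, if any
    documents: List[str]  # customized documents complying with the requirement

# Illustrative row for group (i), assurance programme implementation;
# the identifier and wording are placeholders, not quoted from ECSS-Q-80B.
example = TailoredRequirement(
    ecss_id="ECSS-Q-80B-x.y (illustrative)",
    description="A software product assurance plan shall be produced.",
    tailored_form="Covered by the supplier's quality assurance sub-process.",
    tool="",
    documents=["SDPlan"],
)
```

Keeping the rows in such a structure makes the correspondence between ECSS requirements, tools and customized documents easy to query and cross-check.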
The Requirements Baseline (RB) is the main document provided by the customer. It
imposes six formal reviews: Software Specification Review (SSR), Preliminary
Design Review (PDR), Detailed Design Review (DDR), Critical Design Review
(CDR), Qualification Review (QR), and Acceptance Review (AR). RB also defines
the documents to be provided by the supplier: Software Development Plan
(SDPlan), Software Test Plan (STPlan), Software Technical Specification
(STSpec), Software Design Document (SDD), Software Test Specification
(TestSpec) and Test Report (TRep). The following documents are required from
the independent team: Independent Verification and Validation Plan (IVVPlan),
Independent Verification and Validation Test Specification (IVVTSpec),
Independent Verification and Validation Report (IVVRep). The Formal Reviews
are documented in a Technical Review Report (TRRep), which includes the
identified discrepancies (RIDs). During the acceptance phase, a Non-Conformance
Report (RNC) is delivered by the IVV team to the supplier with a copy to the customer.
With respect to the ECSS requirements presented in Table 1, a brief analysis of
their correspondence with the DBA Software Development processes and related sub-
processes (Figure 1) shows that such requirements are met by the Progress Control
and Quality Assurance sub-processes.
Table 2 lists some requirements corresponding to group (ii). Since ECSS-Q-
80B subdivides the software assurance process requirements in three categories,
that organization was also adopted in that table.
The ECSS requirements related to the software lifecycle are met by two DBA
Software Development processes, Requirements Management and Planning, and by the
related sub-processes Progress Control and Change Management. The process
assurance requirements applicable to all software engineering processes are met by
the Peer Review, Quality Assurance and Delivery & Acceptance processes, and by the
related sub-processes Configuration Management and Quality Assurance. The process
assurance requirements related to individual software engineering activities are
met by the Construction and Test processes, and by the related sub-process Change
Management.
Table 3 lists some requirements concerning group (iii). The correspondence
between the requirements in Table 3 and the DBA processes presented in the Figure 1
macro-workflow follows from the software development lifecycle phases.
Thus, the first requirement row is met by the Requirements Management and Planning
processes, the second requirement by the Construction process, and the last two
rows by the Test, Peer Review and Delivery & Acceptance processes.
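The requirement-to-process correspondence described above is, in effect, a traceability matrix. A minimal sketch of how such a mapping can be represented and checked (the group keys and the coverage check are our illustration; the process names follow the text):

```python
# Illustrative traceability map: ECSS requirement groups -> DBA processes and
# sub-processes, following the correspondence discussed in the text.
TRACEABILITY = {
    "lifecycle": ["Requirements Management", "Planning",
                  "Progress Control", "Change Management"],
    "process_assurance_all": ["Peer Review", "Quality Assurance",
                              "Delivery & Acceptance", "Configuration Management"],
    "process_assurance_individual": ["Construction", "Test", "Change Management"],
}

def uncovered_groups(traceability):
    """Return requirement groups not mapped to any process (coverage gaps)."""
    return [group for group, procs in traceability.items() if not procs]
```

Such a check is trivial here, but the same structure scales to requirement-level granularity when each row of the tables is recorded individually.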
A Brazilian Software Industry Experience in using ECSS 173
5 Conclusions
The tailored form helped to simplify the embedded software technology
transfer process from INPE to DBA. Specific requirements concerning
independent verification and validation carried out by a third-party team were
defined because full validation of the software product on the target computer
was not feasible within the FSW context. This team's participation in the reviews
allowed an early understanding of the software's operational behavior, which
contributed to the applicability of model-based testing techniques as part of the
acceptance process.
Although the supplier was not familiar with the space domain, its formally
assessed CMMI level 3 maturity made it easier for the FSW to comply with the
requirements imposed by the customer. The project-oriented approach adopted by
DBA, building on its well-established FSW processes, minimized the difficulties
inherent in adding a new project knowledge domain to the FSW environment.
6 References
[1] Quality of Space Application Embedded Software (QSEE). Available at:
<http://www.cea.inpe.br/~qsee>. Accessed on: Feb. 23, 2007.
[2] Space Engineering - Software, ECSS-E-40 standard, May 2005.
[3] Space Product Assurance – Software, ECSS-Q-80A and B standard, October
2003.
Satellite Simulator Requirements Specification based
on Standardized Space Services
Ana Maria Ambrósio a,1, Daniele Constant Guimarães b and Joaquim Pedro Barreto a
a National Institute for Space Research, Brazil.
b T&M Testes de Software.
Abstract. The high cost and compressed development timescale of a realistic
satellite simulator motivated the definition of a set of functional requirements based on
standardized space services, which is presented in this paper. In order to improve
reusability as much as possible, and consequently to decrease development cost,
standards were intensively adopted, especially for the satellite-independent functions,
namely data handling and communication between ground and on-board systems.
Functions related to the on-board data handling computer and its communication with
the ground systems were based on the ECSS-E-70-41A standard, which focuses on
telemetry and telecommand packet utilization. The protocols supporting ground and
on-board system communication, at several layers, were based on the Consultative
Committee for Space Data Systems (CCSDS) recommendations. This paper presents a
set of generic satellite simulator functional requirements modeled with the UML and
SysML use case artifact. The satellite and ground station functions included are as
general as possible, as they were based on practical publications of related work and
on space service standards.
1 Introduction
The use of software simulators in satellite operations allows ground operations
to be simulated before and after launch. Simulators may reproduce satellite
behaviour, the space environment and ground stations, providing a good basis for
finding unexpected scenarios that could not be tested before launch. Some
advantages of developing satellite simulators are: (i) validating operational
procedures; (ii) training operators before launch; (iii) validating the Satellite
Control System (SCS) and the TM/TC database [8].
Building a satellite simulator requires a precise specification of its
requirements. In modern satellites, the on-board data-handling computer plays an
1 National Institute for Space Research (INPE), Av dos Astronautas, 1758, Jardim da
Granja, São José dos Campos, São Paulo, Brazil, CEP 12.227-010, Tel: +55 (12) 3945 6586;
Fax: +55 (12) 3945 6625; Email: ana@dss.inpe.br; http://www.inpe.br/atifs/ana
important role in the satellite behaviour. For a satellite simulator, such functions
should be precisely defined. A great part of these main functions is common from
one satellite to another, so one may use the ECSS-E-70-41A standard [3] as a
starting reference. The ECSS-E-70-41A standard defines a set of services that
address operational concepts covering the fundamental requirements of satellite
monitoring and control during satellite integration, testing and flight
operations. This standard focuses on the utilization of telecommand and telemetry
packets for the remote monitoring and control of satellite subsystems.
The protocols supporting the ground and on-board system communications, from
the physical to the transport level, including the telemetry and telecommand
protocols, were based on CCSDS standards [2].
The satellite simulator behaviour is presented in the Use Case notation of UML and
SysML [7]. One reason to adopt use cases is that they are an excellent technique for
capturing the functional requirements of a system, allowing elements to be included
or removed as the software project evolves. The use case diagrams can be
fully integrated with other analysis and design artifacts created using a CASE tool
to produce a complete requirements, design and implementation repository.
Another important reason to adopt the Use Case notation is that it is used as a
construct in the Reference Architecture for Space Data Systems (RASDS) [2].
This paper presents, in section 2, an overview of a satellite simulator, its main
functions and general requirements. Section 3 contains a short introduction to the
ECSS-E-70-41A standard. A breakdown of the software into modules, a
description of their main functions and the model for the most common functional
requirements for on-board data handling of a satellite simulator are presented in
section 4. Section 5 concludes this article.
been taken into account here are: (i) real-time housekeeping telemetry
(HKTM-RT), which is sent directly to the ground station as soon as it is
acquired on board; (ii) on-board recorded housekeeping telemetry
(HKTM-ST), which is recorded on board when the satellite is out of the ground
station visibility range and sent to the ground station during visibility;
(iii) payload telemetry (PLTM), which refers to the payload-acquired data;
- provide attitude and orbit determination according to the satellite and the
space environment;
- provide mechanisms to allow operators to perform time synchronization;
- control and monitor the ground station;
- keep ground and on-board communication according to the protocol
established for the mission.
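The distinction drawn above between real-time (HKTM-RT) and on-board recorded (HKTM-ST) housekeeping telemetry reduces to a store-and-forward rule driven by ground-station visibility. A minimal sketch of that rule (the class and method names are ours, not taken from any standard):

```python
class TelemetryHandler:
    """Routes housekeeping telemetry depending on ground-station visibility."""

    def __init__(self):
        self.visible = False   # is a ground station in visibility range?
        self.stored = []       # HKTM-ST: frames recorded while out of visibility
        self.downlinked = []   # frames actually delivered to the ground station

    def acquire(self, frame):
        if self.visible:
            self.downlinked.append(frame)   # HKTM-RT: sent as soon as acquired
        else:
            self.stored.append(frame)       # HKTM-ST: recorded on board

    def enter_visibility(self):
        self.visible = True
        self.downlinked.extend(self.stored)  # dump recorded telemetry to ground
        self.stored.clear()

    def leave_visibility(self):
        self.visible = False

tm = TelemetryHandler()
tm.acquire("hk-1")        # out of visibility: recorded on board
tm.enter_visibility()     # stored frames flushed to the ground station
tm.acquire("hk-2")        # in visibility: sent directly
```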
3 ECSS Standards
The ECSS-E-70-41A standard [3] addresses the utilization of telecommand
and telemetry packets for the remote monitoring and control of subsystems and
payloads. It applies to ground and on-board satellite operations, as well as to the
data transferred between these segments at the packet layer. The services it describes
are as general as possible, allowing them to be adapted to any mission and to
be tailored to mission-specific requirements. The following services are
standardized in this document: (1) Telecommand verification, (2) Device command
distribution, (3) Housekeeping & diagnostic data reporting, (4) Parameter statistics
reporting, (5) Event reporting, (6) Memory management, (7) Function
management, (8) Time management, (9) On-board operations scheduling, (10) On-
board monitoring, (11) Large data transfer, (12) Packet forwarding control, (13)
On-board storage and retrieval, (14) Test service, (15) On-board operations
procedure service, (16) Event-action service. Such services may also be applied in
a satellite simulator, as proposed in this article.
with the satellite simulator. Since the satellite simulator was previously divided
into four modules, a module interacting with another module plays the role of an
actor. External entities such as the Satellite Control System and the simulation
conductor are the main actors of the simulator. Due to the importance of time-
related events, such as telemetry transmission rates, time-tagged telecommands
and faults to be triggered at predefined instants, a special actor in this work is
the timer; for example, the TTimeTaggedTC timer indicates the exact instant at
which a tagged TC should be executed on board. Table 1 presents these actors.
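The TTimeTaggedTC timer described here can be modeled as a priority queue of (execution time, telecommand) pairs, released as the simulated clock advances. A sketch under that assumption (class and telecommand names are ours):

```python
import heapq

class TimeTaggedScheduler:
    """Releases time-tagged telecommands when the simulated clock reaches their tag."""

    def __init__(self):
        self._queue = []   # min-heap of (execution_time, telecommand)

    def schedule(self, exec_time, tc):
        heapq.heappush(self._queue, (exec_time, tc))

    def tick(self, now):
        """Return every telecommand whose tagged time is <= now, in time order."""
        due = []
        while self._queue and self._queue[0][0] <= now:
            due.append(heapq.heappop(self._queue)[1])
        return due

sched = TimeTaggedScheduler()
sched.schedule(10, "TC-switch-on-payload")
sched.schedule(5, "TC-adjust-attitude")
```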
Table 1. Actors

Satellite Control System (SCS)
- Establish communication with the Ground Station
- Control and monitor the GST
- Control and monitor the satellite

Simulation Conductor
- Configure, according to the Training Plan, the satellite, GST, environment and
  flight dynamics and simulation management, in order to characterize different
  situations for training operators of the Satellite Control Center
- Interact in real time with the simulator during a simulation session run: read
  and modify parameter values

Timer
- Any timer that triggers an event depending on temporization

Ground Station
- Send telecommands to the Satellite
- Receive telemetry from the Satellite
- Receive visibility status from the Environment & Flight Dynamics
- Receive parameter values from Simulation Management

Satellite
- Send telemetry to the Ground Station
- Receive telecommands from the GST
- Receive GST visibility status from the Environment & Flight Dynamics
- Receive parameter values from Simulation Management

Environment and Flight Dynamics
- Send visibility status to the Ground Station
- Send Ground Station visibility status to the Satellite
- Send satellite position in the orbit to the Satellite
- Receive parameter values from Simulation Management

Simulation Management
- Send configuration parameters to the Ground Station, the Satellite and the
  Environment & Flight Dynamics
- Send parameter value changes to the Ground Station, the Satellite and the
  Environment & Flight Dynamics
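The interactions in Table 1 amount to message passing among the simulator modules. A minimal publish/subscribe sketch of one such interaction, the visibility status that Environment & Flight Dynamics sends to both the Ground Station and the Satellite (the bus itself is our illustration, not part of the simulator design):

```python
class SimulatorBus:
    """Very small publish/subscribe bus connecting simulator modules."""

    def __init__(self):
        self._subs = {}   # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        for cb in self._subs.get(topic, []):
            cb(message)

bus = SimulatorBus()
received = []

# Per Table 1: visibility status goes to both the Ground Station
# and the Satellite modules.
bus.subscribe("visibility", lambda m: received.append(("ground_station", m)))
bus.subscribe("visibility", lambda m: received.append(("satellite", m)))
bus.publish("visibility", "in-range")
```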
5 Conclusion
This paper presented a description of the standard requirements present in a satellite
simulator using the UML Use Case approach. The Use Case notation allows actors
and their functions in a defined environment to be connected easily.
The choice of the use case notation considered the following facts: it describes how
the system shall be used by an actor to achieve a particular goal, it is not tied to any
implementation-specific language, and it does not include detail regarding user
interfaces and screens. Besides that, some other benefits can be obtained from use
cases: they are an excellent technique for capturing the functional requirements of a
system; they can be relatively easily added to and removed from a software project as
priorities change; together with use case diagrams, they can be fully integrated with
other analysis and design deliverables created using a CASE tool to produce a
complete requirements, design and implementation repository; and, finally, test cases
(for system, user acceptance and functional tests) can be directly derived from use
cases.
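The last benefit, deriving test cases directly from use cases, can be made concrete by treating a use-case scenario as a sequence of (stimulus, expected response) steps replayed against the system under test. A sketch with an invented toy scenario (not one of the simulator's actual use cases):

```python
def run_use_case_as_test(system, scenario):
    """Replay a use-case scenario; return the first failing step index, or None."""
    for step, (stimulus, expected) in enumerate(scenario):
        actual = system(stimulus)
        if actual != expected:
            return step
    return None

# Toy "send telecommand" scenario: each step is (stimulus, expected response).
scenario = [("connect", "connected"), ("send-tc", "tc-accepted")]

def toy_system(stimulus):
    # Stand-in for the system under test.
    return {"connect": "connected", "send-tc": "tc-accepted"}.get(stimulus, "error")
```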
A set of functional requirements was defined and described by means of use case
diagrams, use case function tables and detailed information about each use case
for every module.
Anyone starting a satellite simulator development could use the presented set of
use cases as a starting point and aggregate new features as they become
necessary, since the basic requirements are the same for several satellites. This
paper provided a satellite simulator development with a generic framework from
which a specific simulator design can be instantiated. The breakdown and the
choice of functions allow its reuse in future micro-satellite and small satellite
programs in different missions.
6 References
[1] Ambrosio, A.M.; Guimarães, D.C. Satellite Simulator Requirements Specification based
on standard services. Internal report INPE-13942-NTE/370. São José dos Campos: INPE,
2005.
[2] Consultative Committee for Space Data Systems (CCSDS) Standards. Available at:
<http://www.ccsds.org>. Accessed on: Jun. 2, 2004.
[3] European Cooperation for Space Standardization (ECSS). ECSS-E-70-41A – Ground
systems and operations: telemetry and telecommand packet utilization. Noordwijk: ESA
Publications Division, January 2003. Available at: <http://www.ecss.nl/>. Accessed on:
Jun. 2, 2003.
[4] Jacobson, I.; Christerson, M.; Jonsson, P.; Övergaard, G. Object-Oriented Software
Engineering: a Use Case Driven Approach. Addison-Wesley, 1992.
[5] Ryser, J.; Glinz, M. SCENT: A Method Employing Scenarios to Systematically Derive
Test Cases for System Test. Technical Report 2000/3. Institut für Informatik, Universität
Zürich, 8057 Zurich, Switzerland, 2000.
[6] Saraf, S.; Adourian, C.; Melanson, P.; Tafazoli, M. Use of an operations simulator for
small satellites. In: 15th Annual AIAA/USU Conference on Small Satellites, 2001, Utah,
USA. Available at: <http://www.aria.cec.wustl.edu/SSC01/papers/8a-6.pdf>. Accessed on:
Feb. 17, 2005.
[7] Systems Modeling Language (SysML) Specification. Version 1.0 Draft. OMG
Document: ad/2006-03-01. Needham, Massachusetts: Object Management Group, April
2006. <http://syseng.omg.org/SysML.htm>
[8] Williams, A.; Seymour, M.A. Radarsat-2 software simulator: benefits of an early
delivery. In: 8th International Conference on Space Operations, 2004, Montreal, Canada.
Proceedings... 1 CD-ROM.
Performance Analysis of Software Processes Supported
by Simulation: a Problem Resolution Process Case
Study
Abstract: The results expected by organizations do not match the effort spent to define
processes in a Software Development Environment (SDE). In practice, this is explained by
the lack of adequate instruments and of efficient programs for the implementation and
follow-up of the performance of these processes, together with the fact that the task is far
from trivial. Achieving these processes, together with the measurement and analysis of
their performance, is referred to as a key practice for process maturity and quality by the
major market frameworks. Against this background of problems and challenges, this
work presents a proposal to support the measurement and analysis of software process
performance, and the improvement of these processes. The proposal is characterized by the
combined use of software process modeling and simulation with performance analysis,
the latter supporting the former. This work contains two distinct parts: a
bibliographical review of the key areas related to software processes, and a case study
of one of the SPICE processes. The viability of the proposal, a study on appropriate
performance indicators, a theoretical-scientific contribution to the referred areas and an
instrument that supports the definition and management of software processes are some of
the results of this case study. One concludes that a capable software process is one that
produces software satisfying the needs of the customer, and that it must be duly adjusted
to the reality of the customer organization.
1. Introduction
The results expected by organizations do not match the effort spent to define
processes in an SDE (Software Development Environment). One of the main sources
of this difference lies in the characteristics of complexity and use of
1 Postgraduate student, Laboratory for Computing and Applied Mathematics, Brazilian
National Institute for Space Research, C. Postal 515, 12245-970, São José dos Campos-SP,
Brazil. Lecturer, University of Taubaté (UNITAU), Taubaté-SP, Brazil.
Email: dawilmar@lac.inpe.br, dawilmar@unitau.br
186 D. G. Araújo, N. Sant´Anna and G. S. Kienbaum
by its relationship with other activities, which are usually described by rules of the
organization and of the process itself. An activity is the part of the process that is
defined in terms of its tasks. Abilities, or roles, are specified for the
accomplishment of the tasks by their actors. Actors produce the products
(the objective) of the process when playing their roles, thereby consuming resources.
The quality of a software product is determined by the quality of the process
used for its development and maintenance [05].
The citations presented stress the importance of formalizing and improving an
organization's software processes in order to gain quality and productivity. A
process is the main element of an SDE, since the quality of a software product is
directly related to the quality of the process used for its development.
apprentices, which diminishes the capability and performance of the project in
relation to the capability of the organization's overall process [03].
Understanding capability must follow understanding performance. In other words,
the way to capability is first to obtain the capacity of the process by
understanding its performance under controllable and predictable
conditions.
Table 1. Performance indicators, metrics and global objectives for process evaluation.
It is generally accepted that the future of software metrics lies in the use of
relatively simple metrics that combine different aspects and permit various
types of estimates and evaluations.
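In that spirit, a set of simple indicators can be combined into a single aggregate, for example as a weighted mean of normalized metrics. The indicator names and weights below are illustrative only, not taken from the paper's Table 1:

```python
def combined_indicator(metrics, weights):
    """Weighted mean of normalized metrics (each metric already scaled to [0, 1])."""
    total_weight = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total_weight

# Hypothetical indicator values and weights for one process.
metrics = {"productivity": 0.80, "utilization": 0.70, "efficiency": 0.90}
weights = {"productivity": 2.0, "utilization": 1.0, "efficiency": 1.0}
score = combined_indicator(metrics, weights)   # weighted mean, approximately 0.8
```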
the remaining 1.5 hours were reserved for the necessary stoppages (routine daily
team meetings, equipment configuration etc.). The problem resolution process
adopted has one analysis activity, with one actor for it; three trouble-shooting
activities, with one actor for each of the three project phases; and one editing
activity, with one actor for it. Phase 1 comprises requirements survey, analysis
and structural design; phase 2 comprises coding, implementation of all the
products and testing; and phase 3 comprises the final tests and deployment of the
product. A typical distribution of raised problems of about 10%, 60% and 30% was
adopted for phases 1, 2 and 3, respectively. It should be noted that the first two
phases interact constantly: 10% of the problems were considered to be of primary
(initial) order in phase 1, with the remaining problems concentrated in phase 2. In
this example, in each revision between the iterations of phases 1 and 2,
approximately 30% of the defects present at that point are carried over to phase 2,
which significantly increased its rate to 60%. The trouble-shooting activities are
efficient enough to close on average 85% of the problems in the first iteration.
The remaining 15%, after being revised by the editors, return to the trouble-
shooters.
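The rates quoted above (10%/60%/30% of problems per phase, 85% closed in the first iteration) can be checked with simple flow arithmetic. The sketch below merely applies the stated rates to a hypothetical batch of problems:

```python
def problem_flow(total_problems, phase_shares, first_pass_closure):
    """Distribute problems over phases and split each phase's load into
    problems closed on the first iteration vs. sent back for rework."""
    flow = {}
    for phase, share in phase_shares.items():
        raised = total_problems * share
        flow[phase] = {
            "raised": raised,
            "closed_first_pass": raised * first_pass_closure,
            "rework": raised * (1 - first_pass_closure),
        }
    return flow

# Hypothetical batch of 100 problems, using the rates stated in the text.
flow = problem_flow(100, {"phase1": 0.10, "phase2": 0.60, "phase3": 0.30}, 0.85)
```

For 100 problems, phase 2 receives 60 of them, of which roughly 9 return for rework after editor revision.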
Figure 1. Screen of the logical model of the problem resolution process in the Simprocess®
simulation environment [11]
For this example, the question posed was: "considering the aggregation of the
actors' roles during the first two phases, what would be the impact on process
performance?" The additional data in the simulation, such as the effort involved,
the ability to solve the problem, the type of the problem (classified by phase and
criticality) and others, were limited to the codes
For this work, two simulation modalities were considered. In the first, the
team (human resources) was kept constant with respect to the organization's
current arrangement of activities, i.e., the process was kept "as it is"; in the
second modality, the roles covering the activities were aggregated, aiming at a
multi-functional team, i.e., the process was modified to "as it would be". In both
cases, the indices of productivity, utilization, efficiency and average problem
resolution time were measured.
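The as-is versus to-be comparison can be illustrated with a toy deterministic model: with specialist roles, each phase's problems queue for that phase's single actor, while a multi-functional team lets any free actor take the next problem. This only sketches the comparison logic, not the Simprocess® model used in the paper:

```python
def makespan(jobs, multi_functional, n_actors=3, duration=1.0):
    """Toy schedule: each job takes `duration` hours.
    Specialist mode: jobs of a phase go only to that phase's actor.
    Multi-functional mode: jobs go to whichever actor frees up first."""
    if multi_functional:
        free_at = [0.0] * n_actors
        for _ in jobs:
            i = free_at.index(min(free_at))   # earliest-free actor takes the job
            free_at[i] += duration
        return max(free_at)
    free_at = {}
    for phase in jobs:                         # one dedicated actor per phase
        free_at[phase] = free_at.get(phase, 0.0) + duration
    return max(free_at.values())

# 10 hypothetical problems split 10% / 60% / 30% across the three phases.
jobs = ["p1"] * 1 + ["p2"] * 6 + ["p3"] * 3
as_is = makespan(jobs, multi_functional=False)   # bottleneck: the phase-2 actor
to_be = makespan(jobs, multi_functional=True)    # load spread over the team
```

Even this crude model reproduces the qualitative effect under study: aggregating roles shortens the schedule because the phase-2 bottleneck is shared.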
Deficiencies of the simulation model and of the process executors could be
clearly perceived, such as:
- there is no standard time for the activities (as a function of three aspects: the
individual, the type of activity, and the complexity of the activities);
- no statistical information of any type is available, for example, agreed
probability distributions (reflecting the estimates of the managers of the
company under study). One objective of the research of the Software
Engineering Group at LAC-INPE is the definition of a model for calibrating
parameters for these and other aspects, which addresses these deficiencies by
means of experimental results [01].
The results of this case study highlight findings such as:
- processes must be treated continuously, and aspects like efficiency and
throughput must be analyzed jointly (Table 2 shows some partial results);
- the lack, or unavailability, of a resource at a given moment may reduce its
performance, which reflects directly on the performance of the process;
- the simulation proved to be a relatively simple resource for organizing and
understanding the systematic use of resources.
Table 2. Comparison of the evaluated alternatives (calculated from the hypothetical data)
8. Conclusion
There are various options to increase the effectiveness of the analysis, solution
and revision of a process, and to determine new potential results based on the
predictions of the simulation. However, to evaluate a potential change in more
depth, and also to evaluate alternative execution strategies, it is worthwhile to
take into consideration the resources added to the models by quality engineering
processes.
It is fundamental to point out that the types of changes to be considered depend
on the organizational culture. What works for one organization may, most of the
time, not work for another. Simulation is a very efficient tool for exploring
alternatives, as corroborated by several authors [04, 05, 07]; however, it only
assists in the aspects that were considered in the model. Feelings and decisions
still lie with the organization and the actors that control and assume them.
The process modeled in this work is particularly useful for exploring the
qualifying questions considered in the process under study. For each potential
option, a result in productivity, utilization, efficiency etc. can be predicted.
Some of these may not be immediate, because of staff training, but others may be
immediate, such as the hiring of expert developers.
The approach of this work, spanning the areas mentioned (modeling, simulation
and performance analysis), appears quite interesting, since it supports software
engineering metrology for the performance analysis of software processes, and for
the management of software development processes.
References
[01] Araújo, D.G. et al. (2006). Supporting estimates in software projects by simulations.
II UNEM-Workshop Colaboração Universidade-Empresa. Taubaté-SP.
[02] Cereja Junior, M. (2004). A software process coordination service integrated into the e-
WebProject-PSEE. (Thesis). INPE, São José dos Campos-SP, Brazil.
[03] Gonçalves, J.M.; Boas, A.V. (2001). SW-CMM: Capability Maturity Model for
Software. CMU/SEI-93-TR-24-CMM, V1.1. Telecom & IT Solution. Campinas-SP.
[04] Kellner, M.I.; Raffo, D.M.; Madachy, R.J. (1999). Software process simulation
modeling: Why? What? How? The Journal of Systems and Software, 46 (2-3), 91-105.
[05] Raffo, D.M.; Harrison, W.; Vandeville, J. (2002). Software process decision support:
making process tradeoffs using a hybrid metrics, modeling and utility framework.
Proceedings of the 14th International Conference on Software Engineering and Knowledge
Engineering. ACM International Conference Proceeding Series, pp. 803-809, July.
[06] Raffo, D.M.; Vandeville, J.V.; Martin, R.H. (1999). Software process simulation to
achieve higher CMM levels. The Journal of Systems and Software, 46 (2-3), 91-105, Apr. 15.
[07] Salviano, C.F. (2003). Introdução aos modelos CMM, ISO/IEC 15504 (SPICE) e
CMMI. V Simpósio Internacional de Melhoria de Processo de Software, Recife-PE.
[08] Sant´Anna, N. (2000). Um ambiente integrado para o apoio ao desenvolvimento e
gestão de projetos de software para sistemas de controle de satélites. (Thesis). INPE, São
José dos Campos-SP, Brazil.
[09] Software Engineering Institute (SEI). (1993). CMM, version 1.1. Pittsburgh: Software
Engineering Institute. (Technical Report CMU/SEI-93-TR-24).
[10] SPICE - Software Process Improvement and Capability dEtermination. (1993).
Software Process Assessment - Part 2: A model for process management, Version 1.00.
[11] Simprocess. Available at: <http://www.simprocess.com>. Accessed on: Jan. 15, 2007.
Concurrent Innovative Product Engineering
Be Lazy: A Motto for New Concurrent Engineering
Shuichi Fukuda
Stanford University
Abstract. This is a position paper pointing out that concurrent engineering is
entering its 3rd generation. 1st generation concurrent engineering was proposed in
1989, when the DICE project started. The primary purpose of the 1st generation
was to reduce time to market as effectively as possible. Its only concern was time,
and "earlier and faster" were the keywords then.
Then 2nd generation concurrent engineering came. We became aware that if
we really want to process things in a concurrent way, we have to discuss at a
strategic level; tactical discussion alone would not solve the problem. That was
what we found out after many years of practicing concurrent engineering. The
primary task in 2nd generation concurrent engineering was how to set a strategic
goal across all the different development processes. To achieve this,
communication and collaboration became essential, so some researchers, including
myself, called this 2nd generation concurrent engineering "collaborative
engineering".
Now we are entering the 3rd generation. With the growing diversification of
customers, and with our traditional markets fading, we have to consider every
constraint as soft and negotiable. In short, our 3rd generation concurrent
engineering is "negotiable engineering". Everything is put on a negotiable basis.
There are no more fixed dimensions: everything changes within a dimension, and
sometimes the number of dimensions itself changes. Our way of solving the
problem has changed from the traditional goal-driven approach to a
constraint-driven one.
This paper describes how concurrent engineering has changed with time and
what our new challenges will be in 3rd generation concurrent engineering.
Keywords. Concurrent engineering, yesterday, today and tomorrow, soft and hard
constraints, negotiation, constraint-driven, postponement, lazy evaluation
________________________
Shuichi Fukuda
Stanford University, Consulting Professor
3-29-5, Kichijoji-Higashicho, Musashino, Tokyo, 180-0002, Japan
Phone: +81-422-21-1508 Fax: +81-422-21-8260
Email: shufukuda@aol.com
With growing diversification, more flexibility and adaptability are called for in
product development and marketing, and new business models have been proposed
for both. One of them is one-to-one marketing, where product development focuses
more on individual aspects and considers the lifetime value of a customer. But
unless the product is very large or very special, so that it has to be made to order
(which means production does not start unless the price covers all the expenses),
we have to compromise between mass production and personalized production:
if we produce products completely to individual order, the price becomes too high
and we cannot sell the product. So if the product has to be sold at a reasonable
price to secure marketability, we have to compromise between mass and
personalized production.
This means that every item or part to be developed will be put on a negotiable
basis. In our old product development, each item or part under development was a
box with fixed sizes, i.e., height, width and length, with ordering priorities, and
the question was how to pack them appropriately into a larger box. Thus,
fundamentally, the problem of our old product development was a packaging
problem or a scheduling problem (Figure 1).
Concurrent engineering changed the situation [1], [2]. It told us that, by noting
the content of each item or part, some items or parts can be processed at the same
time. This means that the sizes of items or parts become changeable if we note
their contents (Figure 2).
This is quite revolutionary, because until the emergence of concurrent
engineering everybody endeavoured only to pack them adequately; i.e., until then
all our efforts treated all the sizes as hard, non-negotiable constraints. Until
concurrent engineering was proposed, there was no idea of relaxing the
constraints. What concurrent engineering really proposed was how we can relax
the constraints.
But initial concurrent engineering only noted a temporal constraint: if we can
process some items or parts together, we can reduce time to market. So its only
concern was how to reduce time.
But this is based on the assumption that our market does not change much. To
borrow the words of Kim and Mauborgne [5], our attention was paid only to the
red ocean. Everybody had to fight for the old traditional market, so that being
even a minute earlier meant a better chance of winning. But, as they point out, we
are now entering the age of the blue ocean. If we look aside, the blue ocean is
expanding infinitely.
With growing diversification, customers' preferences have changed
remarkably. Each customer has a different sense of value based upon his or her
life. Thus, our traditional market, which was based upon an averaged sense of
value, has lost its meaning. How we can create a new market, or how we can find
a blue ocean, becomes our most important challenge.
This means that all the constraints are now negotiable, or have turned soft. In
our traditional concurrent engineering, all data are single elements. Even if the
data are in the form of a list, the number of elements in it does not change. What
the 1st and 2nd generations of concurrent engineering did was to change the
values of the component elements effectively; the number of elements was a hard,
non-negotiable constraint.
But now the number of elements in the list can be changeable, or negotiable.
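The shift described here can be expressed directly in code: a hard constraint either holds or fails, a soft one carries a negotiation tolerance, and the constraint list itself may gain elements. This is entirely our illustration of the idea, not a formalism from the paper:

```python
class Constraint:
    def __init__(self, name, target, tolerance=0.0):
        self.name = name
        self.target = target
        self.tolerance = tolerance   # 0.0 -> hard; > 0 -> soft, negotiable

    def satisfied_by(self, value):
        return abs(value - self.target) <= self.tolerance

# 1st/2nd generation view: a fixed list of hard constraints.
constraints = [Constraint("width", 10.0), Constraint("height", 5.0)]

# 3rd generation view: constraints soften and the list itself is negotiable.
constraints[0].tolerance = 1.5                       # width becomes soft
constraints.append(Constraint("weight", 2.0, 0.5))   # a new dimension appears

negotiated = all(c.satisfied_by(v)
                 for c, v in zip(constraints, [11.0, 5.0, 2.3]))
```

A design that violates the original hard width of 10.0 becomes acceptable once the constraint is softened, which is exactly the negotiation the paper describes.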
8 References
[1] D. Sriram, R. Logcher, S. Fukuda, “Computer Aided Cooperative Product
Development”, Lecture Notes in Computer Science Series No.492, Springer-
Verlag, 1989
[2] S. Fukuda, “Concurrent Engineering”, Baifukan, 1991, (in Japanese)
[3] S. Fukuda, “Concurrent Engineering for Product, Process and Market
Development“ in M. Sobolewski eds., “Concurrent Engineering The World
Wide Engineering Grid”, Tsinghua University Press, Springer, pp.23-42, 2004
[4] R. D. Sriram, et al, “Distributed and Integrative Collaborative Engineering
Design”, Sarven Publishers, 2002
[5] W. Chan Kim and Renee Mauborgne, “Blue Ocean Strategy”, Harvard
Business School Press, 2005
[Figure panels: "Content Checking (Constraint Relaxation)"; each chart plots
Processes 1-4 and Goals 1-4 against a Time axis.]
Figure 3 (a) Old Sequential Product Development
Figure 3 (b) 1st Generation Concurrent Engineering
Figure 3 (c) 2nd Generation Concurrent Engineering
Figure 4 Diversification (Spatial Variation, Temporal Variation)
A Study on the Application of Business Plans in New
Product Development Processes
Abstract. This work presents a study on the application of business plans (BP), a
widely used document for investment decisions on new enterprises, within standard new
product development processes (PDPs). The main objective was to find out whether a
business plan may be applied and, if so, at which moments it should be used in the PDP.
The main source of information in this exploratory research was the existing literature
concerning product and business development processes, business plan models and similar
documentation, and project selection and evaluation methods. These contents were then
compared using the PDP stages as references. As a result, the study points out that a
business plan is a document that models a business and can gather enough information for
investment decision analyses. It can also be elaborated concurrently with the PDP stages
and used at the PDP decision gates. However, business dynamics have favored other forms
of documentation for early decisions, such as synopses, presentations and even web pages
(for external resources). Business plan contents remain relevant and useful for investment
decisions on new enterprises.
Keywords. Business plan, new product development processes, project selection and
evaluation methods
1 Introduction
Economic sustainability of existing and startup companies strongly depends on their ability to take advantage of market opportunities. However, identifying them and deciding which of them is most likely to be feasible in an uncertain future is a subject that still attracts attention.
The central point of this problem is the fact that one cannot foresee the future [19]. Therefore, the long-term success of a given company cannot be guaranteed [5]. In this scenario of uncertainty in several industrial sectors and of imperfect companies, the best alternative when investing is to search for elements that contribute to reducing the risk of failure [5].
1
UTFPR, 3165, Sete de Setembro St., Rebouças, Curitiba-PR, 80230-901, Brazil;
Tel: +55 (041) 3310-4776; Email: kampa@utfpr.edu.br; http://www.utfpr.edu.br
204 J. R. Kampa and M. Borsato
The decision to invest in a given project involves the analysis of both qualitative and quantitative factors. There are several approaches to project selection, which can be grouped into three main categories [4]:
- Benefit measuring techniques;
- Economic models;
- Portfolio management methods.
In the benefit measuring techniques approach, one gathers criteria to be used when a given project is analyzed. Grades can be used, as well as binary logic (yes/no), when the data from each project are compared against reference values. This approach is mostly qualitative, as it involves subjective evaluation, which can incorporate the evaluator's personal criteria.
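As an illustration of such a grading approach, the sketch below implements a simple weighted scoring model; the criteria, weights and cut-off grade are hypothetical and not taken from the paper:

```python
# Illustrative weighted scoring model for project selection.
# Criteria, weights and the 3.0 cut-off are hypothetical examples.

CRITERIA = {                      # criterion -> weight (sums to 1.0)
    "strategic_fit": 0.40,
    "market_attractiveness": 0.35,
    "technical_feasibility": 0.25,
}

def score(project_grades, threshold=3.0):
    """Weighted average of 1-5 grades; returns (score, go/no-go)."""
    total = sum(CRITERIA[c] * g for c, g in project_grades.items())
    return total, total >= threshold

# Two hypothetical projects graded 1-5 on each criterion.
project_a = {"strategic_fit": 4, "market_attractiveness": 3, "technical_feasibility": 5}
project_b = {"strategic_fit": 2, "market_attractiveness": 2, "technical_feasibility": 4}

print(score(project_a))  # project A passes the cut-off
print(score(project_b))  # project B does not
```

A binary (yes/no) variant would simply replace the weighted sum with a check that every grade meets its reference value.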
In the economic models, a time-based study of the economic behavior of each project is performed and financial indexes are used to allow a detailed comparison of different projects. This approach has a quantitative nature [4, 8, 13].
Portfolio management methods, on the other hand, aim at maximizing return on investment (ROI), selecting projects that are aligned with the company's strategy and balancing the project portfolio. They allow holistic decisions, as they mix qualitative and quantitative factors. To do that, one applies a group of techniques drawn from the two previous approaches, among others [4, 15].
Although neither economic models nor benefit measuring techniques are ideal when used exclusively, the ROI metrics that derive from them are fundamental to support decisions in each of the approaches presented [4].
After all, the main element that justifies the existence of a company is profit making. Positive results are not enough; the generated profits must be better than those expected from other investment alternatives. Hence, the essence of an economic-financial evaluation is to measure a given project's return in such a way that comparisons can be made against other investments [15].
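Such a comparison against an alternative investment can be sketched, for instance, with a net present value (NPV) calculation; the cash flows and the 10% reference rate below are hypothetical:

```python
# Illustrative economic model: compare two hypothetical projects by
# net present value (NPV) at the rate of a reference investment.

def npv(rate, cash_flows):
    """Cash flow at t=0 is the (negative) initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

RATE = 0.10  # hypothetical required return (an alternative investment)

project_a = [-1000, 400, 400, 400, 400]   # initial outlay + yearly returns
project_b = [-1000, 100, 200, 500, 800]

# The project with the higher NPV at the reference rate is preferred.
best = max([project_a, project_b], key=lambda cfs: npv(RATE, cfs))
print(npv(RATE, project_a), npv(RATE, project_b))
```

A positive NPV at the reference rate means the project is expected to outperform the alternative investment, which is exactly the comparison the text describes.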
Business plans are usually associated with the entrepreneur and the dream of fostering a new company [1, 5]. But a business plan can be used both by new companies and by established ones, large or small [1, 2, 12, 16].
Companies can be perceived as systems that, in turn, comprise several internal subsystems. Systems have their performance evaluated by means of models that represent them. Model conception and development make the people who conduct experiments aware of the concepts and values that influence the understanding, planning, action and reaction of a system's elements. Hence, initially one has a tacit/mental model of the system [2].
A business plan is used to formalize the model of a given system and make it explicit. To do that, it has qualitative and quantitative aspects. The qualitative part is discursive and describes the functioning of the system and the projection of the model inserted in it. The quantitative part reveals the economic model of the project and is targeted at measuring the economic return of the model [ibidem], while allowing for trade-offs, i.e. sensitivity tests of controllable and uncontrollable variables of the system [18]. Since the qualitative part carries the description of the technical and commercial model, it is essential for the development of the economic model [2].
Put another way, a business plan is a document that contains the description and characterization of a given business, the way to operate it, its strategy, its plan to conquer market share and its projections of revenue, expenses and financial results [17].
But does a business plan concern the company or the product? Chicken or egg? Indeed, a company does not exist without a product. The product is seen as the outcome or result of work or of a process, and as something that adds value to the one who uses or consumes it. But the product does not exist either without the process that originates it and makes it available to the market. Therefore, they are interdependent, and each one's feasibility influences the other's.
In other words, the product (good or service) is an offer of value to the market and can be used in an exchange process to generate revenue [10]. As a result, there is no offer without the process that brings it into existence. An investment decision on a company cannot treat product and business separately. In order to compare projects and decide, an understanding of the whole is necessary. Hence, the model must represent the performance of both product and business.
3 Methodology
The present work has been carried out through exploratory research on the international and national literature about product development processes and businesses, business plan models and similar documentation, and project selection/evaluation methods. To accomplish that, a list of fundamental questions and keywords was drawn up and constantly revisited as new pieces of information emerged.
Significant results have been registered and analyzed using mind maps and block diagrams. In the light of commonly cited phase-gate PDP models, it has been possible to identify relationships between terms and topics.
5 Discussions
Although the advantages presented previously are evident, no strong relationship has been found between performance and the use of a formal business plan [6, 11]. There are also doubts concerning the effectiveness of the enterprise planning activity as a whole [11].
Most of the criticism of BPs is related to startup evaluations by venture capitalists. One criticism is that investors typically focus first and foremost on the quality of a venture's personnel; they invest in people, not in paper [6]. As there is no way to foresee the future, what is desired is a team's capacity to turn the plan into reality rather than a fantasy. Therefore, the team's experience with the proposed business and its members' backgrounds are crucial.
Another criticism is that investors do not always base their decisions on business plans. The dynamics of venture capital investment has demanded faster and more practical alternatives than analyzing a document that can run to 100 pages (with appendices and annexes). Some approaches can be used before the presentation of a complete BP, such as electronic presentations, web sites, and two- or three-page summaries or synopses, among others [6, 12].
As for the application of the BP in the PDP, research is needed on how to integrate it operationally. After all, the time necessary for BP writing and analysis can be extensive, since they precede all the system synthesis. Thus, in the initial development phases, when there are many alternatives to choose from, a
6 Conclusions
It was verified that the NPD and BP literatures are seldom presented in a correlated way. The subjects are generally treated separately, even though they are known to be interdependent.
Capitalizing on a market opportunity implies the existence of both a product and a business. A model that represents the system's feasibility and performance is needed. The BP is a document that has been used to make this model explicit.
The BP is a document that can be adapted to diverse goals. It must incorporate both business- and product-related questions. It can be developed simultaneously with the PDP and be used at the process decision gates, after the initial idea selection. It may be useful, above all, for companies that need to raise external resources for their development projects, as it supports the evaluation and attraction of potential investors.
7 References
[1] Abrams RM. Business plan: segredos e estratégias para o sucesso. São Paulo: Érica,
1994.
[2] Bernardi LA. Manual de plano de negócios: fundamentos, processos e estruturação.
São Paulo: Atlas, 2006.
[3] Casarotto Filho N. Projeto de negócio: estratégias e estudos de viabilidade: redes de
empresas, engenharia simultânea, plano de negócio. São Paulo: Atlas, 2002.
[4] Cooper RG. Winning at new products: accelerating the process from idea to launch. 3rd
edn. New York: Basic Books, 2001.
[5] Dolabela F. O segredo de Luísa. São Paulo: Cultura, 1999.
[6] Gumpert DE. Burn your business plan: what investors really want from entrepreneurs.
Needham: Lauson Publishing Co., 2002.
[7] Hustad TP. Reviewing current practices in innovation management and summary of
best practices. In: Rosenau Junior MD, editor. The PDMA handbook of new product
development. New York: John Wiley & Sons Inc., 1996; 489-511.
[8] Kerzner H. Gestão de projetos: as melhores práticas. 2nd edn. Porto Alegre: Bookman,
2006.
[9] Kim WC, Mauborgne R. A estratégia do oceano azul: como criar novos mercados e
tornar a concorrência irrelevante. 11th edn. Rio de Janeiro: Elsevier, 2005.
[10] Kotler P, Keller KL. Administração de marketing. 12th edn. São Paulo: Pearson Prentice Hall, 2006.
[11] Lumpkin GT, Shrader RC, Hills GE. Does formal business planning enhance the performance of new ventures? Babson College, 1998. Available at: <http://www.babson.edu/entrep/fer/papers98/VII/VII_A/VII_A.html>. Accessed on: Feb. 20th 2007.
[12] Lechter MA. Como conseguir dinheiro: a arte de atrair dinheiro de outras pessoas para
seus empreendimentos e investimentos. Rio de Janeiro: Elsevier, 2005.
[13] Pahl G, Beitz W, Feldhusen J, Grote KH. Projeto na engenharia: fundamentos do
desenvolvimento ficaz de produtos: métodos e aplicações. São Paulo: Ed Edgard
Blücher Ltda., 2005.
[14] Rosenau Junior MD. Choosing a development process that´s right for your company.
In: Rosenau Junior MD, editor. The PDMA handbook of new product development.
New York: John Wiley & Sons Inc., 1996; 77-92.
[15] Rozenfeld H, Forcellini FA, Amaral DC, Toledo JC de, Silva SL; Aliprandini DH et
al., Gestão de desenvolvimento de produtos: uma referência para a melhoria do
processo. São Paulo: Saraiva, 2006.
[16] Sahlman WA. How to write a great business plan. Harvard business review 1997;jul-
aug:97-108.
[17] Salim CS, Hochman N, Ramal AC, Ramal AS. Construindo planos de negócios: todos
os passos necessários para planejar e desenvolver negócios de sucesso. 2nd edn. Rio de
Janeiro: Elsevier, 2003.
[18] Smith PG, Reinertsen DG. Desenvolvendo produtos na metade do tempo: a agilidade
como fator decisivo diante da globalização do mercado. São Paulo: Futura, 1997.
[19] Trout J. O líder genial. Rio de Janeiro: Best Seller, 2005.
A case study about the product development process
evaluation
Abstract. The significance of the business process approach for new product development (NPD) management has been increasingly recognized. The challenge is how to build models to support it. This paper presents a model named PDPNet designed to provide such support and describes an application case study in an agricultural machinery enterprise. The data collection instruments were participant observation and document analysis. The results contain a description and an evaluation of the maturity level, considering the proposed model. The conclusion presents considerations on the support provided by the model during the application, the challenges found, and proposals for future research.
1 Introduction
The new product development (NPD) process is vital to competitiveness in all sectors of the economy. Among the best management practices is the business process (BP) approach [10], which seeks to integrate activities from different enterprise functions in order to obtain performance excellence. To apply this approach, the use of a formal NPD process is fundamental, which means producing a map describing the new product development process and providing a set of techniques that make this possible [17].
Kalpic and Bernus [9] demonstrated the importance of this approach in a case study specifically in the new product development area and explain how reference models can be helpful in the design, management and execution of BPs. Since the emergence of the BP approach to NPD, more or less elaborate models have been proposed to help professionals identify the best available practices [12, 16].
The bibliographical review presented in section 3 analyses some of the best-known models, and two aspects are to be noted as a result. The first is the lack of NPD transformation models. The second is the need to integrate the three types
1
Assistant Professor, University of Sao Paulo, Sao Carlos School of Engineering, Industrial
Engineering Department, Integrated Engineering Research Group (EI2). Trabalhador
Saocarlense, 400; 13566-590; Sao Carlos-SP; Brazil; Tel+55(0)16 3373-8289; Fax +55(0)16
3373-8235; Email: amaral@sc.usp.br ; http://www.numa.org.br/ei_en .
212 D. Amaral, H. Rozenfeld, C. Araujo
of models, i.e. process, maturity and change management. This implies the conception of distinct models, evolving and kept independently but usable together, allowing diagnosis (by means of maturity assessment), identification of needed practices (with the process model), and prioritization and identification of transformation strategies (with the NPD change model).
Since 2002, a network of researchers and professionals interested in the PDP has developed a reference model, PDPNet, composed of three independent yet integrated parts: a process model, a maturity model and a transformation model. This work, through a case study, constitutes an evaluation of maturity according to the proposed model and a description of the application of the change management model.
The objective of this paper is to present the PDPNet model and the account of an application case at an agricultural machinery enterprise. The study contributes to the development of the model by identifying perceived improvements and assessing its application potential.
The PDPNet process reference model depicts the best practices for the
management of product development processes, presenting and relating phases and
activities to several practices and methods available in the field. Its goal is to
integrate available practices and to elucidate them in detail irrespective of the
company’s evolution level.
The reference model is divided into three macro-phases.
Pre-development macro-phase. Pre-development is the link between the projects developed by the company and its goals. It includes the company's strategic product planning, which involves deploying the Corporate or Business Unit Strategic Plan onto the project portfolio, with the evaluation and tracking of the selected projects.
The NPD maturity model is used to support the identification of the evolution level
reached by a company at a given moment. It depicts maturity levels and shows
which activities should be formalized and implemented at each one of these levels.
The description shows, therefore, a hierarchy of priorities in terms of activities so
that higher levels can only be achieved if lower levels have already been reached.
The model, following the CMMI model, utilizes five possible evolution stages:
Level 1 – Basic. The company systematically carries out a set of practices deemed essential for the effective management of product development. This level is subdivided into five sublevels, each grouping practices according to areas of knowledge: product engineering, marketing and quality, manufacturing process engineering, and projects, costs and environment management.
Level 2 – Intermediate. Practices are standardized, and thus results are predictable. At the previous level it was sufficient for them to be performed, even with variations. This level is also subdivided into four intermediate levels consistent with the knowledge areas.
Level 3 – Measurable. Besides being standardized, there are indicators that assist
in assessing the performance of activities and the quality of results.
Level 4 – Controlled. The company works systematically to correct practices
whose indicators have deviated from expected values.
Level 5 – Continuous improvement. There are institutionalized processes to improve the BP itself. They may take place in the short or the long run. The authors propose two models: the first is the "incremental improvement process", one of the processes that support the NPD reference model; the second is the NPD transformation process model, which aims at deeper, long-term improvements.
Each maturity level indicates a set of institutionalized practices, as in the CMMI model. In the PDPNet model this is verified through formalized activities.
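The cumulative logic of the levels, where a higher level can only be claimed once all lower levels have been reached, could be sketched as follows; the practice names are hypothetical placeholders, not the model's actual activities:

```python
# Illustrative sketch of the cumulative maturity-level logic:
# a level counts only if its required practices AND those of all
# lower levels are formalized. Practice names are hypothetical.

LEVELS = [  # (level, required formalized practices)
    (1, {"requirements_capture", "design_review"}),
    (2, {"standardized_procedures"}),
    (3, {"performance_indicators"}),
    (4, {"corrective_actions"}),
    (5, {"improvement_process"}),
]

def maturity(formalized):
    """Highest level whose practices (and all below) are formalized."""
    reached = 0
    for level, required in LEVELS:
        if required <= formalized:   # subset check: all required present
            reached = level
        else:
            break                    # higher levels cannot be claimed
    return reached

# Indicators alone do not raise the level if level-2 practices are missing.
company = {"requirements_capture", "design_review", "performance_indicators"}
print(maturity(company))
```

This mirrors the hierarchy-of-priorities idea in the text: the company above stays at Level 1, despite having a Level 3 practice, because its Level 2 practices are not yet formalized.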
4 Method
The research methodology employed was a single, holistic case study according to Yin [19]. The BP of the agricultural machinery company in question (soil preparation and planting) constituted the unit of analysis, considering both the product development process and the actions taken to improve it. The complete set of activities involved three steps, which are described in sections 5.1, 5.2 and 5.3.
During the investigation the researchers accompanied the implementation by visiting the company for 18 months on a fortnightly/monthly basis. Data were collected via documents and field observations.
This phase comprised a critical analysis of the company’s problems, whose goal
was to assess its maturity level in the management of the new product development
process. The diagnosis was carried out by the researchers in the first semester of
2005, during four weeks on a weekly basis.
The analysis was conducted internally by the NPD director with the help of one of the designers from the product design area, using the maturity model assisted by the process reference model. These professionals began by identifying the documents that described the company's NPD process. The most important documents were the Quality Assurance System procedures. The documents and these professionals' experience assisted in surveying the activities carried out and identifying their level of formalization. The results were discussed with the researchers.
The results showed that the company had a well established set of
institutionalized procedures for more conventional and routine development
activities, deriving from the Quality System.
However, some relevant gaps were identified according to the maturity model:
- Product portfolio: the company did not have a formal system to manage the portfolio of new products;
- Project planning: it did not seriously take into account aspects such as risks, customers' needs and project strategies;
- Phases and gates (phase transitions): although there was a procedure to carry out development activities, phases were not formalized;
- Flaws in institutionalized procedures: there were certain flaws in the identification and control process of new product projects;
- Functional structure: the company displayed a classical functional organizational structure.
The company was classified as being at Level 1.1 of the model. The assessment allowed the group to have a shared vision of the main inefficiencies that needed to be tackled.
The group chose to introduce two fundamental measures: to define formally the
project manager’s role and to define the improvement team.
A matrix structure was established by creating project manager positions in three areas, so as to incorporate new markets and types of equipment. A new functional area was also created to address the performance of tests, prototypes and high-technology projects.
A team of managers from the product engineering area, the Engineering Committee, was put in charge of the program. People from related areas are invited to participate as specific needs arise. Results generated by the teams are presented to the Committee, which has the responsibility of validating new standards.
This infrastructure evolved and was consolidated at the end of the work, when it was named the Project Office. It assists in the improvement program as well as in the work carried out by the project managers. The tasks assisted are: elaboration of standards and procedures related to the NPD process, consolidation of information and product portfolio reports, generation of information on performance and
The main goal selected for the first phase was to reach Level 1, i.e. the basic level of the maturity model, thus obtaining a stable development process. Four initial projects were chosen: 1) introduction of a system to create and manage the project portfolio (medium term); 2) implementation of a system to control resources (medium term); 3) improvement of the document management system (short term); 4) mapping of the NPD process with phases and gates (long term).
The mapping project was put on hold, and the teams’ efforts were directed to
the other projects. Two projects were finished on time: Aggregated project plan
and Solution of specific problems in the document management system.
A third project, the one to implement a resource control system, was partially implemented in this period. This project involved the introduction of information systems capable of integrating resources from diverse functional areas in a single pool, discriminating their allocation to each project. It allowed the introduction of weekly planning of the activities performed by the people involved in the development of the company's products.
As the previous projects were finished, new improvement projects were initiated: mapping and optimization of the project procedure for supplied parts, and organizational restructuring of the areas related to NPD. The first project is now complete and the second is in its final phase.
This effort resulted in a project prioritization system that helped the company deliver a larger number of products. The final result was the achievement of the basic maturity level, as initially planned.
6 Conclusions
Results from the case study and the analysis of the model indicate that the PDPNet model helped to build a permanent transformation process for the company's NPD business process: it indicated the best practices and allowed the establishment of a system, internal to the organization, to keep the transformation ongoing until the strategic goals were reached.
The PDPNet model has classifications into process areas and types of basic elements (activities, phases and gates) that are followed in the three models. This aspect was fundamental throughout the implementation, in particular the classification of knowledge areas and phases in the maturity model. They allow the direct identification of the activities that are related to gaps in the company's maturity evolution. Users may then identify which methods, tools and principles can be implemented to improve each activity.
The transformation model, surprisingly, gained more importance than the other models in the case in question and was fundamental to the success of the implementation. When the company's professionals understood the concept of systematic changes deriving from improvement projects, the team's focus was
7 References
[1] Baxter M. Product design: a practical guide to systematic methods of new product
development. CRC Press, 1995.
[2] Clark KB, Fujimoto, T. Product development performance: strategy, organization and
management in the world auto industry, 1991 (Harvard Business School Press, Boston
Mass.).
[3] Clark KB, Wheelwright SC. Managing new product and process development: texts
and cases. New York: Free press, 1993.
[4] Clausing D. Total quality development. New York: ASME Press, 1994.
[5] Cooper RG. Winning at new products: accelerating the process from idea to launch.
Reading, MA: Perseus books, 1993.
[6] Creveling C.M., Slutsky J.L. and Antis Jr D. Design for six sigma: in technology and
product development, 2002 (Prentice Hall).
[7] Chrissis MB, Konrad M, Shrum S. CMMI: guidelines for process integration and product improvement, 2003 (Addison-Wesley Professional).
[8] Fettke P, Loos P, Zwickerm J. Business process reference models: survey and
classification. In Workshop on business process reference models, Nancy – France,
September 2005:1-45 (Satellite workshop on the Third international conference on
Business Process Management – BPM)
[9] Kalpic B, Bernus P. Business process modeling in industry – the powerful tool in
enterprise management. Computers in industry, 2002:47:299-318.
[10] Lee RG, Dale BG. Business process management: a review and evaluation. Business
process management journal, 1998:4(3): 214-225.
[11] Mertins K, Jochem R. Architectures, methods and tools for enterprise engineering.
International journal of production economics, 2005:98:179-188.
[12] Pugh S. Total design: integrated methods for successful product engineering. Reading,
HA: Addison, 1978.
[13] Rozenfeld H et al. Building a community of practice on product development. Product: management and development, 2003:1(3):37-45. Accessed at http://pmd.hostcentral.com.br/index.php on 10 January 2007.
[14] Rozenfeld H et al. Gestão do desenvolvimento de produtos: uma abordagem por
processos. São Paulo: Saraiva, 2006.
[15] Rozenfeld H. PDPNet. Accessed at http://www.pdp.org.br on 10 January 2007.
[16] Smith RP, Morrow JP. Product development process modeling. Design studies, 1999:
20:237-261.
[17] Ullman DG. The mechanical design process. New York: International Editions, 1997.
[18] Ulrich KT, Eppinger, SD. Product design and development. New York: McGraw-Hill,
1995.
[19] Yin RK. Case study research: design and methods. Sage Publications, 2003.
Product Development Systematization and
Performance: a case-study in an automotive company
1 Introduction
Recognized as a source of profits, the product development process (PDP) is nowadays viewed as a key point of success, since through PDP systematization companies can reduce their costs and development time and increase their product quality. The available literature suggests that, to achieve effective systematization, it is necessary to continuously improve the PDP so that it can follow the continuous need to develop better products to be launched on the market. PDP models describing phases, best practices and methods for product development have therefore been much discussed [2, 5, 9, 12-14]. Even though it is considered a key point of success, it is not common to find companies carrying out or working on PDP systematization. In addition, few studies can be found discussing the effective implementation of PDP models; the steps such an implementation involves, how to proceed with PDP improvement, and the performance impacts are aspects that have not been explored sufficiently.
1
Juliana Silva Agostinetto holds a Master's in Production Engineering from the Universidade de São Paulo (USP) and has 7 years of experience in automotive companies. Phone: +55-19-81832505; Email: juliana.agostinetto@gmail.com
220 J.S. Agostinetto and D.C. Amaral
3 PDP Systematization
companies – can also be applied to the PDP, if they are properly adapted to its particular characteristics, such as creativity and intangibility. Some of them are: benchmarking, the 5S program, lean thinking, kaizen culture, Stage-Gate, etc.
During the analysis of the case study it was possible to find many of these activities and tools being implemented and used by the company. They are explained further in the next sections. A detailed discussion of the activities and tools available in the literature can be found in [1].
4 Case-Study
The case study of this research is a Brazilian unit of a worldwide auto parts supplier. The site selected is a technology center, located in Brazil since 1999. The first product development activities under local team responsibility took place in 2001, when the first product engineers were allocated. Before this, all responsibilities were located in the USA and the Brazilian team only supported them. In 2002 a Project Support Office (PSO) was established to support developments with local project managers, as a strategy to increase business in Brazil. In 2006 the site had 35 projects, 25 advanced projects (strategic developments without an agreement from the customer) and many business opportunities for future competition in the market.
Figure 1 gives a general view of the PDP model in the company. It is called Phase-Gates; it is the way the company found to analyze developments during their evolution and to give direction (from the higher levels of the organization). These directions can be: stop development, keep working, or re-work.
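A minimal sketch of such a gate decision follows; the decision rule and its inputs are hypothetical, not the company's actual criteria:

```python
# Illustrative sketch of a phase-gate decision. The three outcomes
# (stop, keep working, re-work) come from the text; the decision
# rule and its inputs are hypothetical simplifications.

from enum import Enum

class GateDecision(Enum):
    STOP = "stop developing"
    GO = "keep working"
    REWORK = "re-work"

def gate_review(on_schedule, within_budget, requirements_met):
    """Toy decision rule; a real gate weighs many more factors."""
    if not requirements_met:
        return GateDecision.STOP
    if on_schedule and within_budget:
        return GateDecision.GO
    return GateDecision.REWORK

print(gate_review(True, True, True).value)
```

In practice each gate would aggregate the scope, key-date, cost and customer-requirement checks described below rather than three booleans.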
There are two types of gate review: technical ones, called design reviews, represented by lozenges in Figure 1, and management gates, called project reviews, represented by circles. The latter kind of gate has the goal of verifying whether the actual status of a specific project is aligned with the plan proposed in the scope at the beginning of the project, including key dates, scope, customer requirements, costs, timing, etc. Communication with the customer happens throughout development, guaranteeing customer participation and that the voice of the customer is followed during the project.
The PDP model emphasizes the planning and early design phases. It also suggests a global project categorization by types (A, B, C and D), where category A represents the most complex product and process development and D represents a routine project. Each category has its own requirements and steps to be followed.
During the research it was possible to note that the systematization of the PDP at that site was based on the introduction of a model to structure activities, in which many areas of the company worked together, each focusing on its own tasks but pursuing a common goal: better products offered faster to the market, following company strategies. This model is known and adopted by a multifunctional team, but it was noted that some people did not know all the PDP activities as well as they should. On the other hand, they know exactly and in detail their responsibilities within projects and the best way to carry them out. The main problem regarding the PDP model in the company concerns people who belong to the multifunctional team but work outside that site. To solve those issues the PSO team established the set of activities and tools shown in Figure 2, including process culture diffusion, training and project audits.
The figure demonstrates that it was more intensive done last three years.
[Figure: bar chart comparing the number of Formal, Ad-Hoc and No-Exist activities and tools between the 1st project group (1999-2002) and the 2nd project group (2002-2006); values shown range from 5 to 72.]
Analyzing the results by the number of activities and tools found in each project group, there is a significant increase in formal activities and, at the same time, a decrease in ad-hoc and non-existent ones. This indicates that the effort devoted to PDP systematization resulted in the formalization of most of the activities and tools the teams already had or knew. Regarding key performance indicators:
Results are presented as percentages rather than absolute numbers due to confidentiality requirements of the company. The indicators show that the first projects (belonging to the first group) met the expectations of the company and the customer, because timing, budget and quality requirements were achieved. On the other hand, analyzing all the data it is possible to note that the goals of the first group were easier than those of the second group, especially for timing and costs, and consequently easier to achieve. Local responsibilities also increased considerably in the second group, since during the first projects the site only supported activities in Brazil.
The second group of projects presents better results than the first group even though its targets were more difficult to achieve; one example is the budget defined for the projects.
7 Conclusion
Examples of PDP models in the literature tend to be generic, and reported applications were not found. It is necessary to implement some of them in order to study their performance and impact in a practical business scenario. The case study showed that company models follow the literature, but usually after adaptation to a specific segment such as auto parts. The indicators used in the case study support the previous discussion found in the literature: implementing activities that contribute to PDP systematization increases profit, reduces development time and, in addition, preserves the company's competitive advantages. The project analyses show that PDP systematization brought positive results, and this research therefore suggests that it may be important to reorganize processes complementary to the PDP, such as the continuous improvement process. One hypothesis this research raises is that it may be essential to have a systematized continuous improvement process focused on the PDP, and not merely a support process for the PDP as described in the literature.
8 References
[8] MERTINS, K. and JOCHEM, R. Architectures, methods and tools for enterprise
engineering. International journal of production economics, 2005:98:179-188.
[9] PUGH, S. Total design: integrated methods for successful product engineering.
Reading, HA: Addison, 1978.
[10] ROZENFELD, H. et al. Product development management: a process approach. São
Paulo: Saraiva, 2006.
[11] TOLEDO, J.C. Reference model to product development process management:
applicability to auto parts companies. São Carlos: GEPEQ. 2002.
[12] ULLMAN D.G.The mechanical design process. NY: International Editions, 1997.
[13] ULRICH, K.T. & EPPINGER, S.D. Product design and development. NY: McGraw-
Hill, 1995.
[14] WHEELWRIGHT, S.C.; CLARK, K.B. Revolutionizing product development:
quantum leaps in speed, efficiency and quality. New York: The Free Press. 1992.
[15] YIN, R.K. Case study research: design and methods. Sage Publications, 2003
An approach to lean product development planning
Marcus Vinicius Pereira Pessôa a,1, Geilson Loureiro b and João Murta Alves c
a Departamento de Controle do Espaço Aéreo – DECEA, Brazil.
b Instituto de Pesquisas Espaciais – INPE, Brazil.
c Instituto Tecnológico de Aeronáutica – ITA, Brazil.
Abstract. A product development system (PDS) rests on two pillars: "do the thing right" and "do the right thing". While the former leads to operational efficiency and waste reduction, the latter guarantees the fulfillment of all stakeholders' needs. In this context, Toyota's PDS shows superior performance. The lack of formalization of the Toyota PDS, though, makes it difficult to replicate. Research on this system has identified several principles, tools and techniques, but has not presented a way to apply them systematically. This paper proposes a systematic approach to lean engineering product development planning. The method allows the creation of an activity network that provides value creation and waste reduction at the same time. The first part of the paper identifies the needs for lean development planning. Next, the conception of the method is presented. Finally, the method is evaluated against the identified needs and against improvement opportunities observed in an aerospace product development example.
1 Introduction
New product development (PD) can be understood as a kind of information-based factory [1]. The goal of the PD process is to create a "recipe" for producing a product [2], reducing risk and uncertainty while gradually developing a new, error-free product that can then be manufactured, sold and delivered to the customer.
PD is a problem-solving and knowledge-accumulation process resting on two pillars: "do the thing right" and "do the right thing". The former guarantees that progress is made and value is added by creating useful information that reduces uncertainty and/or ambiguity [3], [4]. The latter addresses the challenge of producing information at the right time, when it will be most useful [5], [6]. Developing complex and/or novel systems multiplies these challenges; the coupling of individual components or modules may turn engineering changes in a component
1 Process Engineer, CCA SJ - DECEA (São José dos Campos), Campus do CTA, São José dos Campos, SP, Brazil; Tel: +55 (12) 3947 3668; Fax: +55 (12) 3947 5817; Email: mvppessoa@gmail.com; http://www.ccasj.aer.mil.br
into "snowballs", in some cases causing long rework cycles and making it virtually impossible to anticipate the final outcome [7].
Not surprisingly, schedule overruns, budget overruns and low quality are commonplace in PD projects. A notable exception in this scenario, and a benchmark in the automotive industry, is the Toyota Motor Company. Toyota has consistently succeeded in its PD projects, presenting productivity four times better than that of its rivals [8]. To deliver better products faster and cheaper, some firms are attempting to use the same principles as Toyota's and create "lean PD" processes that continuously add customer value (i.e., that sustain a level of "progress" toward their goals) [9], [10]. Unfortunately, unlike the Toyota Production System (TPS), which was formalized by Shigeo Shingo and enforced by Taiichi Ohno, the Toyota development system has not been well documented [11], [12].
This paper proposes a systematic approach to lean engineering product development planning. The method allows the creation of an activity network that provides value creation and waste reduction at the same time.
This work consists of three parts: (1) identification of the needs for lean development planning; (2) conception of the method; and (3) evaluation of the method against the identified needs and against improvement opportunities observed in an aerospace product development example.
Value, as defined by the final customer, is the basis of lean thinking. In a program or project, value is the raison d'être of the project team, which means the team must understand all the required product/service characteristics with regard to the value that all stakeholders of the program expect to receive during the product life cycle [8], [5], [13].
There is no recipe, though, for value creation. Value is: (1) personal, because something of high importance to one group or person may not be valuable to others; (2) temporal, since it is not static but evolves as stakeholders' priorities change; (3) systemic and enterprise-wide, as parts, subsystems or company sectors only add value if they contribute to the whole; and (4) fuzzy at the beginning of the life cycle, due to the little information available to determine the whole value and, sometimes, even the final customer [5].
The inherent complexity of the development of complex engineering products is a serious obstacle to value creation. According to concurrent engineering principles, when a product is conceived it already constrains its life cycle processes and the organizations that perform those processes, creating a total perspective in which the product is only part of the whole complexity [14]. Thus the "total" value
includes not only the product's perceived benefits, but also the benefits achieved through the life cycle processes and the performing organization. Value for a stakeholder, then, is the total and balanced perception of all benefits provided by the results of the life cycle processes; this total perception considers not only the results related to the product, the processes and the performing organization, but also the fulfillment of all functional, cost, schedule and risk requirements [15].
Concerning waste, in manufacturing Toyota established a set of seven waste categories that make the task of finding and eliminating waste easier. These categories were later adapted to the PD environment [1], [10], [16]. Table 1 shows countermeasures to the seven wastes from the PD perspective [15].
The identified issues relate to the PD process, to the lean prerequisites for PD and to traditional planning methods.
A PD process has to be able to deal with: (1) product complexity, as customers demand ever more complex products; (2) process complexity, regarding integrated development, process standardization, the amount of information involved, etc.; and (3) uncertainty between assumed and verified characteristics, and ambiguity due to multiple and conflicting interpretations.
In order to be lean, the PD process must adhere to the lean principles (Table 2)
and avoid traps such as [5]:
- The ‘preconceived solution’: a solution that has worked in the past and that
has become institutionalized as a ‘monument’.
- The existence of a powerful advocate with a vested interest in a specific
design approach or solution to a problem.
- The tendency to underestimate the difficulties in developing a new
technology, particularly if this occurs simultaneously with the development of a
new product or system based on that technology.
pull events. Activities are performed by the teams whose deliverables incorporate the value items in the scope of the pull event. Value delivery occurs as the activities are actually performed and the benefits of the resulting deliverables are perceived by the stakeholders.
3 The Method
The approach described in this section applies the lean principles, based on value creation and waste reduction, to derive a project activity network built on a value-creation-sequenced set of confirmation events that pull only the necessary and sufficient information and materials from the product development team. The proposed method has four steps (Figure 2):
(1) Value determination: taking the product vision as input, this step defines the Value Breakdown Structure (VBS). The VBS differs from the usual WBS: the latter decomposes the work needed to produce the major project deliverables or perform the project phases into smaller, more manageable chunks, while the former deploys the stakeholders' value into unequivocal and verifiable parameters, called value items.
(2) Set-based Concurrent Engineering (SBCE) prioritization: determines the
most critical product modules or organizational processes, which will be developed
through a set of alternatives. During SBCE, the development team does not
establish an early system level design, but instead establishes sets of possibilities
for each subsystem or process, many of which are carried far into the product and
process design.
(3) Pull events determination: no process along the value flow should produce an item, part, service or piece of information without a direct request from the downstream
processes. The pull events are associated with physical evidence of progress (i.e., models, prototypes, start of production, etc.) and are important moments for knowledge capture. Unlike toll gates, where information batches are created, pull events guarantee the value flow, make quality problems visible and create knowledge.
(4) Value creation activities sequencing: the activities to be performed are
defined and sequenced based on the pull events.
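The pulling and sequencing logic of steps 3 and 4 can be illustrated with a toy plan. The sketch below uses invented activity names, deliverables and pull events (they are not from the paper); it shows only the idea that planning walks backwards from the pull events, keeping exclusively the activities whose deliverables are actually requested, and then orders them so that inputs exist before they are consumed.

```python
# Hypothetical sketch: each pull event requests a set of deliverables; each
# activity produces one deliverable and may consume others. We "pull"
# backwards from the events and topologically order the surviving activities.

from graphlib import TopologicalSorter

# activity -> (deliverable produced, deliverables consumed) -- invented data
activities = {
    "define_value_items":   ("VBS", []),
    "design_concept":       ("concept", ["VBS"]),
    "build_prototype":      ("prototype", ["concept"]),
    "write_marketing_plan": ("marketing_plan", ["VBS"]),  # never pulled below
}

pull_events = [{"name": "prototype review", "needs": ["prototype"]}]

def pulled_plan(activities, pull_events):
    produced_by = {out: act for act, (out, _) in activities.items()}
    needed, stack = set(), [d for ev in pull_events for d in ev["needs"]]
    while stack:                      # walk backwards from the pull events
        act = produced_by[stack.pop()]
        if act not in needed:
            needed.add(act)
            stack.extend(activities[act][1])
    # map each needed activity to the activities producing its inputs
    graph = {a: {produced_by[d] for d in activities[a][1]} for a in needed}
    return list(TopologicalSorter(graph).static_order())

print(pulled_plan(activities, pull_events))
# 'write_marketing_plan' is excluded: no pull event requests its deliverable.
```

Note the waste-reduction effect: an activity whose output no downstream process requests simply never enters the network.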
[Table: coverage of the lean principles (specify value; identify the value stream; guarantee the flow; pull the value; seek perfection) and of the traps to value creation (e.g., the preconceived solution) by the four method steps: value determination, SBCE prioritization, pull events determination and value creation activities sequencing. "Seek perfection" is marked against all four steps.]
As the basis for a contrived example of development planning, data were used from a finished and successful project that produced a stall recovery system for use during flight tests. Table 4 presents a comparative analysis of the original planning and the planning resulting from applying the method. In this particular example there were better results at each of the four steps.
9 Conclusions
The method presented in this paper provides a useful approach to planning the development of complex engineering products. The conclusions are that the developed method: (1) fits the product development environment; (2) adheres to the lean principles; (3) addresses the deficiencies of traditional planning; and (4) exploits the improvement opportunities identified in the studied example.
This work contributes to the PD discipline by applying the lean principles, based on value creation and waste reduction, to derive a project activity network built on a value-creation-sequenced set of confirmation events that pull only the necessary and sufficient information and materials from the product development team.
References
[1] Bauch, C. Lean product development: making waste transparent. Diploma Thesis. Cambridge, MA: Massachusetts Institute of Technology, 2004.
[2] Reinertsen, D. Lean thinking isn't so simple. Electronic Design, vol. 47, p. 48, 1999.
[3] De Meyer, A.; Loch, C. H.; Pich, M. T. Managing project uncertainty: from variation to chaos. Sloan Management Review, vol. 43, no. 2, pp. 60-67, 2002.
[4] Schrader, S.; Riggs, W. M.; Smith, R. P. Choice over uncertainty and ambiguity in technical problem solving. Journal of Engineering and Technology Management, vol. 10, pp. 73-99, 1993.
[5] Murman, E. et al. Lean enterprise value: insights from MIT's Lean Aerospace Initiative. New York, NY: Palgrave, 2002.
[6] Thomke, S.; Bell, D. E. Sequential testing in product development. Management Science, vol. 47, no. 2, pp. 308-323, 2001.
[7] Mihm, J.; Loch, C. H.; Huchzermeier, A. Modeling the problem solving dynamics in complex engineering projects. INSEAD Working Paper. Fontainebleau, France: INSEAD, March 2002.
[8] Kennedy, M. N. Product development for the lean enterprise. Richmond: Oaklea Press, 2003.
[9] Browning, T.; Deyst, J.; Eppinger, S.; Whitney, D. Adding value in product development by creating information and reducing risk. IEEE Transactions on Engineering Management, 49(4):443-458, 2002.
[10] Walton, M. Strategies for lean product development: a compilation of Lean Aerospace Initiative research. Research Paper 99-02. Cambridge, MA: Massachusetts Institute of Technology, 1999.
[11] Ward, A. C.; Liker, J. K.; Cristiano, J. J.; Sobek, D. K. The second Toyota paradox: how delaying decisions can make better cars faster. Sloan Management Review, pp. 43-61, Spring 1995.
[12] Sobek, D. K.; Ward, A. C.; Liker, J. K. Toyota's principles of set-based concurrent engineering. Sloan Management Review, pp. 67-83, Winter 1999.
[13] Mascitelli, R. Building a project-driven enterprise. Northridge: Technology Perspectives, 2002.
[14] Loureiro, G. A systems engineering and concurrent engineering framework for the integrated development of complex products. Thesis (PhD). Department of Manufacturing Engineering, Loughborough University, Loughborough, UK, 1999.
[15] Pessôa, M. V. P. Proposta de um método para planejamento de desenvolvimento enxuto de produtos de engenharia [Proposal of a method for lean engineering product development planning]. Doctoral thesis, Instituto Tecnológico de Aeronáutica, São José dos Campos, 2006.
[16] Millard, R. L. Value stream analysis and mapping for product development. Thesis (S.M.). Cambridge, MA: Aeronautics and Astronautics, Massachusetts Institute of Technology, 2001.
[17] Laufer, A.; Tucker, R. L. Is construction project planning really doing its job? A critical examination of focus, role and process. Construction Management and Economics, 5, 243-266, 1987.
[18] Koskela, L.; Howell, G. The underlying theory of project management is obsolete. Proceedings of the PMI Research Conference, pp. 293-302, 2002.
[19] Womack, J. P.; Jones, D. T. A mentalidade enxuta nas empresas [Lean thinking in companies]. São Paulo: Ed. Campus, 1998.
Managing the new product development process: a proposal of a theoretical model of its dimensions and the dynamics of the process
Abstract. Product development is a process that involves knowledge and several functional areas and presents a high degree of complexity and iteration in its execution. The literature presents several models and approaches for characterizing the new product development process; however, they usually do not adequately represent its dynamics. The present work aims at characterizing the product development process based on the nature of the elements covered by the literature. A conceptual model of this process is proposed with two levels of integration. The theoretical model is based on six dimensions (strategic, organizational, technical, planning, control and operational) integrated at the structural and operational levels. It also identifies the elements that compose the operational dimension and how the interaction between them might be characterized. The paper also emphasizes the need for new studies with a detailed analysis of the interaction and integration of the elements presented here.
1 Introduction
1 MSc researcher, Universidade de São Paulo (São Paulo), Avenida Professor Almeida Prado, Travessa 2, Nº 128, Cidade Universitária, Brasil; Tel: +55 11 3091 5363; Fax: +55 11 3091 5399; Email: almeida.leandro@hotmail.com.
240 L. Almeida, P. Miguel
the process as a linear system, with discrete and sequential stages, while more recent studies consider that the development process evolves through stages, but with overlaps and feedback loops [10].
According to ref. [8], development projects have become collaborative endeavors with highly complex interdependencies. Accordingly, the search for more effective organizational patterns in the development process should include a detailed analysis of how development really happens [4].
Considering the inadequate representativeness and applicability of theoretical models and frameworks for dealing with the dynamics of the product development process, this paper aims at characterizing this process through a representation of its dimensions and the elements that compose it, as well as analyzing the interaction between them.
3 Literature Background
NPD in different, and thus complementary, ways. Other emphases can be found in the product development literature, for example the use of methods and techniques.
Ref. [3] presented a framework classifying the most relevant topics in product development management. Three dimensions were proposed: strategic, operational, and performance evaluation of product development. The strategic dimension is divided into two main topics, covering subjects related to portfolio management, capacity dimensioning, and inter-organizational and inter-functional integration. The operational dimension is divided into the following topics: the development process itself, the use of methods and techniques, and work organization.
Ref. [9] proposes a decision-based approach. The authors argue that while how products are developed differs not only across firms but within the same firm over time, what is being decided seems to remain fairly consistent. They therefore propose a classification that organizes the decisions into two categories: decisions within the context of a single project and decisions in setting up a development project. On one hand, the authors in ref. [9] divide the decisions within the context of a single project into four categories: concept development, supply-chain design, product design, and production ramp-up and launch. On the other hand, the decisions in setting up a development project are divided into three categories: product strategy and planning, product development organization, and project management.
Other authors in the product development literature have used the decision-based approach. Ref. [10] considered three levels of NPD decisions: strategic, review and in-stage. The 'strategic' decisions are related to market and product strategies and portfolio management. Decisions at the 'review' level occur between stages, while 'in-stage' decisions are those at the operational level of each phase. Along the same line, in ref. [1] the decisions are classified into four levels: strategic planning, tactical planning, operational planning and planning infrastructure. The work in ref. [13] stems from ref. [9] and proposes a division into two systems: the operational system and the development system.
From the classifications shown above, the strategic and operational dimensions are explicitly cited in almost all of them [1,3,9,10], and their concepts are also considered in the classification proposed in ref. [13]. Other dimensions that can also be highlighted are the organizational and project management dimensions, which are cited in almost all classifications.
The increasing complexity and the cooperative environment of the design process require more effective coordination [8]. At the same time, coordination underlies many of the management problems in designing products rapidly and effectively [2]. The most commonly used representations and techniques do not adequately describe the dynamics of the development process [6], and this requires an analysis of how development really occurs [4]. Considering this, several authors are working on new approaches and concepts aiming to provide tools that help in coordination, integration and decision making in the development process.
The planning and control dimensions lie within the context of project management. The classification proposed here, based on the differentiated nature of its elements, divides the project management techniques into two sub-dimensions: planning and control. The items presented in Table 1 are based on ref. [12]. Finally, the operational dimension does not present specific topics but consists of the project execution itself: the application of the strategic definitions, within a defined organizational structure, in accordance with project plans, making use of specific methods and tools.
The decision levels proposed in this article are based on the two-category division proposed by ref. [9], though they differ in some respects. That classification is based on the decision perspective, which considers what is decided in the development process, rather than the way development happens (i.e., how). In this sense, the decisions were organized into two categories: decisions in the context of a single project and decisions in setting up a development project.
The present work considers both how the product is developed and what is decided. Therefore, it proposes two levels of integration in the product development process. The first level refers to how the product is developed and is called here the structural level. At this level, the decisions aim at setting up the organizational context and refer to corporate standards [9]. Thus, integration at the structural level corresponds to the definition and alignment, across the company, of the standards to be used during the development project.
The second level refers to the application of the organizational standards in a specific project. Therefore, this level contains the decisions in a single project,
such as the planning and execution of the development project. Integration at the operational level corresponds to the application of the organizational standards in the project being developed. In this way, the development process can be represented, in a macro view, by six dimensions integrated at two levels. At the higher level there is integration in the organizational context. Development then happens through integration at the operational level, where the standards are applied according to the particularities of the project. The development of the product is the result of the application of the elements that compose the five dimensions (strategic, organizational, technical, planning and control) in the operational dimension, which characterizes the operational integration.
Figure 1 illustrates these dimensions and the integration levels mentioned.
4.3 The elements and the dynamics of the product development process
Since product development takes place in the operational dimension, understanding how it really happens is of great importance [4]. It is thus necessary to consider the information exchange and issues associated with rework during development [4], changes [2] and overlaps [9], among other things. The current analysis did not consider only approaches or methods that deal exclusively with subjects associated with coordination or the dynamics of the product
5 Concluding Remarks
Since this study is part of on-going research and is not fully complete, the conclusions should be treated with care. Nevertheless, some concluding points deserve attention. Firstly, the literature review suggests that there is a lack of a conceptual model representing all the dimensions and interactions in the new product development process. Secondly, the theoretical model shown in this article meets the need, pointed out by the authors cited earlier, for a more adequate representation of the dynamics of the development process. It integrates different perspectives on the new product development process, taking the nature of the elements as the basis for its elaboration. Finally, even as a preliminary study, the conceptual model proposed here seems to contribute to the understanding of the dynamics of the product development process, given the separation of the operational dimension from the other five that constitute the structure of the development project. This makes clear that, although the methods and techniques that compose each dimension are well understood, their joint application in a development project needs to be better detailed and studied.
In this sense, future studies should consider the points addressed here, and a detailed analysis of the joint application, in the operational dimension, of the topics that compose each dimension also seems important. In addition, an analysis of how the integration of all these elements occurs, together with methods to optimize it, would contribute to the understanding of such a complex process as new product development.
6 References
A support tool for the selection of statistical techniques
M. E. Echeveste a, C. S. T. Amaral b and H. Rozenfeld c
a Professor, Federal University of Rio Grande do Sul, Brazil.
b Post-graduate student, University of São Paulo, Brazil.
c Full Professor, Engineering School of São Carlos, University of São Paulo, Brazil.
Abstract. This paper presents a structured model to help the user choose the most
appropriate statistical technique to solve problems relating to the product
development process. Starting from a well-defined problem, the model helps the
user convert the problem into statistical objectives. Based on those objectives, the
decision model then defines a sequence of structured questions whose responses
lead to the selection of the statistical technique. The sequence of questions is
supported by examples, detailed explanations of the concepts involved, links to
sites associated with the case, and a glossary of statistical terms. Statistical
techniques are support tools for the New Product Development Process (NPD) and
are used in various activities of this process. The main result expected from the use
of the model is the dissemination of the application of statistical techniques during
the NPD process in companies, especially small and medium companies, where
this type of support is most lacking. To enable companies to use and test the
structured model, a decision support system will be developed for free access on
the Internet.
1 Introduction
The industrial sector shows an increasing demand for statistical knowledge to deal
with quantitative or qualitative data. This demand is partly a result of normative
1 Production Engineering Post Graduate Program – Federal University of Rio Grande do Sul – UFRGS, Brazil, Oswaldo Aranha Street, 99, Porto Alegre, RS – Brazil – 90035-190, Tel: +55 (51) 3308 4248, E-mail: echeveste@producao.ufrgs.br; http://www.producao.ufrgs.br
248 M. E. Echeveste, C. S. T. Amaral and H. Rozenfeld
processes (ISO, TS, QS) and of quality improvement programs strongly based on
statistical techniques. These techniques include the Six Sigma programs for the
improvement of manufacturing processes and DFSS (Design for Six Sigma) for the
improvement of new product development processes. These programs have created
methodologies based on quality techniques and tools to lead to the solution of
industrial problems. These tools and techniques can aid in the solution of different
problems relating to the improvement of products and processes and of the new
product development (NPD) process.
DFSS is based on the integration into NPD of tools such as QFD, Pugh's matrix, and the statistical tools of multivariate analysis and design of experiments. The objective is to establish an integrated set of tools to efficiently translate information across the development phases, favoring the incorporation of technical, strategic and financial value into the product in order to meet consumer and market needs. While Six Sigma programs work reactively, correcting existing problems in the domain of the process, DFSS is used to prevent problems through the use of quality tools and statistical techniques in the product's conception phase [2].
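Of the DFSS tools mentioned above, Pugh's matrix is simple enough to illustrate in a few lines. The sketch below is a minimal, hypothetical example (criteria, concepts and scores are invented): each concept is judged against each criterion relative to a datum concept, scoring +1 (better), 0 (same) or -1 (worse), and the net scores are compared.

```python
# Minimal sketch of Pugh's concept-selection matrix. All data are invented
# for illustration; a real matrix would also weight criteria and iterate
# over several rounds with a new datum.

criteria = ["cost", "reliability", "ease of assembly"]

# scores[concept][criterion]: +1 better, 0 same, -1 worse than the datum
scores = {
    "concept A": {"cost": +1, "reliability": 0,  "ease of assembly": -1},
    "concept B": {"cost": 0,  "reliability": +1, "ease of assembly": +1},
}

def pugh_totals(scores, criteria):
    """Net score per concept: sum of the +1/0/-1 judgements."""
    return {c: sum(s[k] for k in criteria) for c, s in scores.items()}

totals = pugh_totals(scores, criteria)
best = max(totals, key=totals.get)
print(totals, "->", best)   # concept B scores highest in this toy example
```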
The product development process, in turn, has been systematized and structured
in so-called reference models, which represent the process and serve as a guide for
its application. Starting from a generic reference model, a company can define its
specific model, also known as standard process, which becomes a “manual of
procedures” and serves as the basis for the specification of product development
projects, thus ensuring the repeatability of the company’s projects as well as
constituting a repository of best practices.
Many of the activities of product development and process improvement make use of quality tools involving statistical techniques. Throughout NPD, from the conception phase of a product to its removal from the market, methods and tools are employed to support the execution of the diverse activities of this process. However, for the business user to apply the best statistical technique, he must be familiar with the available techniques and with all the assumptions required for their correct application. This knowledge is normally a specialist skill, since a variety of questions and conditions must be analyzed before a specific technique is selected. In particular, the translation of a practical problem into statistical/research objectives is the first obstacle to choosing the most suitable technique. Despite advances in computational systems, which seek to improve the interface with the user and to automate some steps of statistical analysis, the phase prior to the application of a technique, involving the choice and planning of which analysis to use, is not effectively aided by existing software programs.
Based on the authors' experience and on interviews with company users and
specialists, a list was drawn up of problems commonly encountered in the use of
statistics, which include learning strongly based on teaching the execution of a
technique. Six Sigma programs, for example, are usually based on exercises
carried out immediately after the presentation of a technique, which, to a certain
extent, directs or guides its application. The interpretation of the data centers on
the needs of the statistical software and on the output data supplied in the
analysis. The companies interviewed indicated that one problem is the difficulty
of recognizing which technique to use in real situations.
A support tool for the selection of statistical techniques 249
2 Methodology
The approach most widely adopted in this project is the hypothetical-deductive
approach proposed by Karl Popper in 1935. In this approach, one creates a set of
postulates, tools and hypotheses, deduces their consequences, and attempts to
refute them by means of experiments; hypotheses that are refuted are replaced by
others [4].
With regard to technical procedures, this work is classified as action research, since
it focuses both on action and on the creation of knowledge or theory about the
realization of innovative projects. Action research is recommended for new
approaches where new ideas must be explored and knowledge must be created
from the standpoint of practical aspects. It must be conceived in close association
with an action or with the solution of a collective problem, in which the
researchers and the participants representing the problem are involved cooperatively
or participatively [8].
The model was built based on consolidated information available in textbooks
about statistical techniques and their application in a wide range of areas of
knowledge. In this bibliographic review, information was collected that enabled us
to outline the decision structure for selecting the technique best suited to the
problem in question. The most exhaustively consulted references were [2, 3, 5, 7].
2.2.2 Modeling of the decision process for the choice of a statistical technique
Based on the classification of the techniques, the possibilities and ramifications
were defined, resulting in a structured decision model with the logic of what
information would be necessary to recognize the statistical technique.
The structured model was used as the basis to elaborate the support contents, which
will enable the user to know and learn more about the concepts involved in the
analysis. From previous experience, it is known that the use of statistics produces
better results when one presents practical examples relating to the user’s daily
routines. Thus, for each concept, the system intends to associate a description, with
additional references (links), examples of cases and a glossary of terms.
The definition of the target public refers to the user of the model. The initial target
public is a professional of the industrial sector possessing basic knowledge of
statistics, with a degree in engineering or administration, or a graduate of training
courses such as Six Sigma or other quality programs encompassing statistical
techniques. Thus, the user of the model is not a layman in the use of statistical
techniques. We believe the system will serve as a support tool in various activities
involving an understanding of data on the market and the product that is being
developed. Therefore, the target public may be the product development team
itself, which already has a way to treat and analyze data integrated with the team’s
technical knowledge.
Based on this context, the structured model should aid users in their selection of
statistical techniques applicable to the improvement of industrial products and
processes. This model includes some basic properties that are summarized in four
main points: (1) It should help the user understand the research problem for the use
of statistical techniques. (2) It must be a learning instrument that teaches terms
employed in statistics, concepts and access to links to delve deeper into the subject.
(3) It should allow the user to standardize his research projects following the logic
of the model. (4) It should favor the insertion of new techniques, broadening the
range of possibilities for the problems involved in new product development.
Control techniques are those that monitor and ensure system quality, and are
basically the techniques pertaining to Statistical Process Control (SPC).
Proceeding along the decision structure, one questions the number of variables
involved in the analysis (stage 3). This option helps the user understand and classify
his study as one variable at a time (univariate), two variables (bivariate) or K variables
simultaneously (multivariate). In each stage, the process of choice is supported by
examples, a glossary of terms with an explanation of the meaning of each of the
statistical terms used in that stage, a more detailed technical description about the
terms, and links for the user to consult.
In stage 4, one questions the type of relationship among the variables, e.g.,
dependence (simple or multiple) or independence. In stage 5, one verifies the number
of explanatory or independent variables in the problem, or else the number of samples.
Lastly, in stage 6, a question common to all the possible alternatives, the user
classifies the measurement scale of the variables involved, i.e., as metric
(interval) or nonmetric (nominal or ordinal).
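The staged questioning described above (stage 3: number of variables; stage 6: measurement scale) can be sketched as a small rule function. The technique names returned below are illustrative placeholders, not the actual rule base of the authors' model.

```python
# Illustrative sketch of the staged decision model. The question order follows
# the text (number of variables, then measurement scale); the technique names
# returned are hypothetical examples, not the paper's actual rule base.

def suggest_technique(n_variables: int, scale: str) -> str:
    """Suggest a candidate statistical technique from stage-3 and stage-6 answers."""
    if scale not in ("metric", "nonmetric"):
        raise ValueError("scale must be 'metric' or 'nonmetric'")
    if n_variables == 1:        # univariate study
        return "t-test" if scale == "metric" else "chi-square goodness of fit"
    if n_variables == 2:        # bivariate study
        return ("correlation / simple regression" if scale == "metric"
                else "contingency table analysis")
    # k variables analysed simultaneously -> multivariate family
    return ("multiple regression / MANOVA" if scale == "metric"
            else "logistic regression / correspondence analysis")

print(suggest_technique(2, "metric"))   # correlation / simple regression
```

In the full model, stages 4 and 5 (type of relationship, number of explanatory variables or samples) would add further branches before a technique is suggested.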
To validate the proposed structure, new tests should be conducted to check the
sequence of questions and their applicability to different cases. We believe that
statistical technical language should always be accompanied by simpler language
supported by examples. When users are able to translate and relate statistical reasoning
to their daily routine, they will also be better qualified to read books, articles and other
studies that apply statistical techniques in their area. As the model is used in various
studies (projects), this language will become incorporated into everyday practice,
and experiments planned more scientifically and reliably will gradually become
part of the routine.
It cannot be stated that all problems can be classified using the questions suggested
here; the classification encompasses most cases, but situations that do not fit the set
of foreseen possibilities can be expected. However, this is an initial proposal, to be
improved and perfected based on the results of its application in companies and on
refinements of the available knowledge about the use of statistical techniques.
254 M. E. Echeveste , C. S. T. Amaral and H. Rozenfeld
Some of the principal contributions expected from the use of this model are: (1)
greater utilization of statistics in tests and experiments on existing products and in the
development of new products, helping improve quality through a better understanding
of how the characteristics and quality of a product are correlated and affect its
performance; (2) an understanding of the nature and workings of alterations to
products, obtained through statistical analysis such as the treatment of data from tests
and simulations, contributing to basic research on new technologies and materials;
(3) the possibility of applying and perfecting techniques and methodologies for
disseminating statistical knowledge integrated with product and process quality
improvement; (4) the possibility of generating new methodologies, adapted to the
sector under study, to understand which variables are critical to product quality;
(5) rendering viable the integration of statistical techniques as PDP tools.
5 Final considerations
This article presented a proposal for a support model to help users select statistical
techniques applicable to industrial product and process improvement. The result is a
decision model that attempts to reproduce the questions a professional knowledgeable
about statistics would ask when selecting and planning a statistical technique.
The main users are professionals with a basic knowledge of statistics, who aid
product development or quality improvement teams in the use of statistical
techniques for treating data, helping them understand, conceive and measure product
performance. The model is also easily applicable to the stages of a Six Sigma project.
REFERENCES
a Faculty member of the Department of Design, UFPR.
b Faculty member of the Department of Mechanical-Aeronautical Engineering, ITA.
Abstract. The objective of this article is to compare the Engineering and Design
fields in relation to the Product Development Process (PDP). In both areas we can
identify different methodologies that guide, each from its own perspective, the
product project. Although they pursue the same objective, product development,
these two fields show a certain disconnection when the models presented in the
literature are compared. This can be explained by the fact that Engineering
traditionally develops products with an emphasis on their technical aspects, while
Design investigates the interfaces between users and products. Considering this,
the article consists of a theoretical discussion of the appropriation of product
development planning models by the Design field, covering the problem-solving
process as well as the systematization and coordination of the creative activity. In
conclusion, the work presents a methodological systematization with the
implementation of new techniques for the Design process, focusing on the trends
adopted by corporations that seek constant innovation, efficiency of products and
services, and adaptation to change, among other factors.
1 Introduction
1 Faculty member, Design Program, UFPR (Designer, BSc; MSc in Engineering), UFPR,
Paraná. Rua General Carneiro 460, 8º andar, Curitiba, PR, Brazil. Tel/Fax: +55 (41)
33605210; Email: viviane.gasparibas@ufpr.br; http://www.design.ufpr.br
258 Viviane Gaspar Ribas, Virgínia Borges Kistmann, Luiz Gonzaga Trabasso
that they do not go bankrupt. In this case, the demand for a better understanding of
processes is very important, in business as well as in production terms.
The concept of process is considered here, as in its origin from the Latin word
procedere, as a verb indicating the action of going forward, going ahead
(pro+cedere) [12]. It is also considered a particular sequence of changes that
intends to reach a certain goal. Process is used to create, invent, project, transform,
produce, control, keep and use products or systems.
For a better understanding, the concept of Business Process [11] will be
differentiated in this work from the concept of Design Process. The first will be
considered the management action, the second the project action [13]. Accordingly,
in this paper Design is considered a process-based activity that extends from the
enterprise's strategic business, termed Strategic Design, to its operational aspects,
termed Operational Design [18]. The term Design will refer not only to the result
of the design action, but also to the process of transforming actions.
Another initial consideration must be made with respect to the differentiation
between the Design Process and Product Development concepts. The Design Process
is considered here as a project activity oriented to the resolution of the interface
problems that someone may have with his surroundings, while Product Development
is considered the activity developed by the Engineering field, in which many
functional sectors of the enterprise are involved.
Comparing the definitions of DP and PD, we can see that both areas develop a
group of systematic project activities, and this group of activities involves the
organization, people, functional areas and the product itself through the
transformation of information, data and inputs, considering the enterprise's
technological and human resources. Through these activities it is possible to create
and produce products that fulfill the expectations of the markets where they are
requested, in the Design as much as in the Engineering approach.
Considering this, it is possible to verify that there is an overlap between the
Design Process (DP) and Product Development (PD) activities, whether developed
by Engineering or by Design. For both areas, the basic principle of Product
Development (PD) or of the Design Process (DP) is founded on one kind of process,
hence the initials PDP, for Product Development Process: a transformation process
through which an idea becomes an object (product), with the premise that it be
industrially produced at a sufficiently large scale to satisfy stakeholder conditions,
while conjugating and harmonizing knowledge of several different natures.
Subsequently, the terminology Product Development Process (PDP) will be
adopted in this work, considering that in the PDP of both the Engineering and
Design areas, Product Development or the Design Process includes activities that
act in an integrative, interdisciplinary way, since they develop a group of
systematic activities that encloses products, processes, people and the organization,
whether simple or direct.
This paper thus aims to discuss the overlap between these two fields, Design
and Engineering, the first under the concept of the Design Process and the second
as Product Development. In doing so, it seeks a new correlation between the two
areas, improving the Design Process.
Is the design process integrated to product development? 259
Part of the difficulty in understanding Design, its benefits and its actuation inside
enterprises arises from the need to study how this activity is seen and how its
relations in a business environment happen. A discussion about design practice
and its importance and influence on business objectives may contribute to the
understanding of this activity and of how relations should happen for its
effectuation.
It is also important to emphasize that, throughout history, the profession has
developed in distinct manners due to the socio-economic, cultural and technological
characteristics of each country, which has led to different roles of the designer in
society.
2 DMI's goal is to help design managers become leaders in their professions; to make
studies available; to finance, promote and conduct research in design management; and to
sustain the economic and cultural importance of design (DMI, 2004).
of view, but also, they materialize when what is important is to correctly develop
the efficiency-product in the design process, being integrated into and participating
in the enterprise's strategic definitions, starting from the highest decision level and
interacting with all the relevant areas of an enterprise [16].
From the Design perspective, at the operational level, this activity is defined by
[16] as "actions turned towards the design process, sorted as work 'from the inside
to the outside' in the style of intellectual conception and functional simplicity
(European) as well as of what is worth selling and advertising (American). It does
not integrate with other areas, and form follows function (with an emphasis on the
practical-operational functions)".
Nevertheless, contrary to what was presented in Cheng's work [7], the Design
field has seen no significant development of methods and techniques to guide the
Design Process in the way Product Development presents nowadays, in constant
improvement and in accordance with new business structures, horizontalized and
organized by processes.
Despite a moment in which Design sought to develop a methodology of its own,
one that would congregate many branches of knowledge, whether artistic or
technological, apparently no project methodologies, methods or techniques were
developed throughout history according to the requirements of the product, as
Engineering does.
On the contrary, empirical methods and a strong basis in creative processes are
common practice among professionals dedicated to this activity. Perhaps that is
what led other areas to see Design as a field concerned only with a product's
aesthetics, due to its lack of attachment to a methodology more strongly based on
analytical thought, whether reductionist (Cartesian) or deterministic (cause/effect),
or even to mechanistic methods or to the deepening of systems theories.
4 Conclusions
From what has been presented, it is important to observe that the Design Process
(DP), as well as Product Development (PD), the activity developed by Engineering,
are very wide branches of human activity that center on problem resolution,
creation, and coordinating and systemic activities.
Each problem to be solved implies generating balanced results for the products
developed under the optics of technology, production, the market, the user and the
economy, among other factors presented by the two activities.
This fact has led processes to be systematized, the information flow to be mapped
and the group of activities to be made clear and objective, so that the activities and
tasks of the process that aggregate value to PD can be carried out, while also
training people with different skills and knowledge and generating indicators that
improve the process's performance for constant improvement.
However, in the Design Process activity, one can see a stronger preoccupation
with developing a body of knowledge and operational models with strong links to
business, marketing, planning, strategy and management. That means an advance
in the development of Design at a strategic level, as shown before. In this case, the
term used by professionals of the area is Design Management. With this new
scenario, the methods and techniques previously used became outdated or simply
were no longer used by most designers. Thus the methodologies that had been
developed more rigidly until the 1970s failed. The teaching models and,
consequently, the professional activity were slowly replaced by empirical
methodologies with strong links to the creative process.
5 Reference List
Abstract. To shorten the lead-time of software, the design tasks should be arranged
reasonably. Development process reconfiguration is the key to the concurrent design.
Axiomatic design builds the functional-structure model of products by zigzag mapping
among domains. The independence axiom demands that the independence of the functional
requirements should be maximized. The relationship between tasks is established by
analyzing the design matrix. The diagonal matrix shows that the design tasks are mutually
independent, and can be concurrently processed so that the overall developing time can be
greatly shortened. The triangular matrix shows that the design tasks should be processed
sequentially so that the whole process can be managed effectively. By using axiomatic
design to analyze the design tasks, the design process can be arranged reasonably and the
lead-time can be shortened. The module-junction structure diagram shows the sequence of
the software development.
1 Introduction
With the rapid development of computer technologies, the applications of
computers are becoming more and more complicated. Large-scale and highly
complicated software projects emerge continually. Software development has
become a systems-engineering effort that needs many people to participate. To
shorten the development time to the greatest extent, the tasks should be arranged
reasonably. Object-oriented technologies, such as the Object Modeling Technique
[1] and Object-Oriented Software Engineering [2], support the development of
large-scale software from the perspective of technology and management. But
there are still many problems at the stages of development and maintenance; for
example, no method can optimize the software system as a whole [3].
Concurrent design is a collaborative design methodology that enhances
productivity and leads to better overall designs.
1
Postdoctoral fellow, School of Mechanical, Electronic and Control
Engineering, Beijing Jiaotong University, Beijing 100044, PR China; Tel: +86 10
51688175; Email: ruihong0613@yahoo.com.cn.
268 Ruihong Zhang, Jianzhong Cha, Yiping Lu
The core of concurrent design is process integration, which includes two parts:
the multidiscipline team method and
the engineering of the product life cycle [4]. Concurrent means that development
activities occur at the same time and that multidisciplinary development teams
collaborate to carry out the design. Development process reconfiguration converts
a sequential development process into a concurrent one, reducing the product
lead-time while considering all the factors of design and development. However,
effective methods to guide this reconfiguration are lacking.
Concurrent design can be applied to software development to optimize the
software system and shorten the lead-time. Development process reconfiguration is
the key to concurrent design. Function decomposition, which reveals the
relationships between the design tasks, is the basis of process reconfiguration.
This is precisely what Axiomatic Design (AD) provides: a framework and criteria
for function decomposition. AD can ensure that software modules are combined in
a reasonable sequence and manner.
2.1 Background of AD
{FRs} = [A]{DPs}     (1)
where [A] is a matrix, defined as the design matrix, that characterizes the product
design. Equation (1) is the design equation for product design.
Concurrent Design in Software Development Based on Axiomatic Design 269
The design matrix is of the following form for a square matrix (i.e., the number of
FRs is equal to the number of DPs):
{ FR1 }   [ a  b ] { DP1 }
{ FR2 } = [ c  d ] { DP2 }     (3)
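As a numeric illustration of the design equation and of the structure check behind the independence axiom, one can compute {FRs} from a small design matrix and classify that matrix as uncoupled (diagonal), decoupled (triangular) or coupled. The matrix values in this sketch are invented for illustration.

```python
# Sketch: the design equation {FRs} = [A]{DPs} and a structural check of [A].
# The 2x2 matrix values (a=1, b=0, c=0.5, d=2) are invented for illustration.

def matvec(A, dps):
    """Compute {FRs} = [A]{DPs} for a square design matrix A."""
    return [sum(aij * dp for aij, dp in zip(row, dps)) for row in A]

def classify(A):
    """Diagonal -> 'uncoupled' (tasks can run concurrently);
    triangular -> 'decoupled' (tasks run in sequence); else 'coupled'."""
    n = len(A)
    upper = any(A[i][j] for i in range(n) for j in range(i + 1, n))
    lower = any(A[i][j] for i in range(n) for j in range(i))
    if not upper and not lower:
        return "uncoupled"
    if not upper or not lower:
        return "decoupled"
    return "coupled"

A = [[1.0, 0.0],
     [0.5, 2.0]]
print(matvec(A, [3.0, 1.0]))   # [3.0, 3.5]
print(classify(A))             # decoupled -> sequential design tasks
```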
(Figure: mapping between object-oriented software concepts and axiomatic
design: Object = FR; Attribute / data structure = DP; Method = the relationship
FRi = Aij DPj.)
performed in sequence so that the effect of former modules can be considered and
the iterations of design can be reduced. In this step, we can get the module-junction
structure diagram that indicates the design sequence. The design tasks can be
assigned according to the diagram.
…
…
In this section, we show only part of the software. Figure 4 is the functional-
structure model obtained by zigzag mapping, and Table 1 lists the full design
matrix, in which the element "1" indicates that the corresponding FR is influenced
by the corresponding DP and "0" means no interaction. The shaded parts of Table 1
show the interaction of FRs within the same decomposition branch, and the white
parts show the interaction of FRs in different branches.
Figure 4 illustrates the objects and attributes of the software. There are thus seven
design modules of the software, that is,
(1) Zigzag mapping
(2) Independence axiom
(3) Algorithm of rearranging matrix
(4) Algorithm of blocking
(5) Algorithm of AHP
(6) Data management
(7) GUI
Table 1 shows that the design is decoupled. That is to say, these modules must be
performed in some sequence so that the effects of former modules can be
considered and the design iterations can be reduced. Figure 5 is the module-
junction structure diagram: "S" indicates modules that can be designed
concurrently, and "C" indicates modules that should be designed in the
sequence the arrows show.
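The derivation of the module-junction structure from a full design matrix can be sketched as repeatedly collecting the modules whose prerequisites are already scheduled: modules collected in the same pass form an "S" (concurrent) group, and consecutive passes are linked by "C" (sequential) junctions. The 4-module matrix below is invented for illustration; it is not Table 1.

```python
# Sketch: derive a concurrent/sequential schedule from a full design matrix.
# A[i][j] == 1 means module i is influenced by module j. The matrix is an
# invented 4-module example, not Table 1 from the paper.

A = [
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
]

def schedule(matrix):
    """Group modules into stages: same stage = "S" (concurrent),
    consecutive stages = "C" (sequential)."""
    remaining = set(range(len(matrix)))
    stages = []
    while remaining:
        # modules with no unscheduled prerequisites are ready now
        ready = [i for i in sorted(remaining)
                 if all(matrix[i][j] == 0 for j in remaining if j != i)]
        if not ready:
            raise ValueError("coupled design: no valid sequence exists")
        stages.append(ready)
        remaining -= set(ready)
    return stages

print(schedule(A))   # [[0, 2], [1, 3]]
```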
(Figure 5: module-junction structure diagram linking modules such as M11, M12
and M3 through S and C junctions; figure not reproduced.)
4 Conclusion
Large-scale software projects are systems-engineering efforts in which many
people need to participate. To shorten the lead-time, the design tasks should be
arranged reasonably. Development process reconfiguration is the key to concurrent
design, and function decomposition, which reveals the relationships between the
design tasks, is the basis of process reconfiguration. This is what AD can provide.
AD provides a framework and criteria of function decomposition. Axiomatic
design builds the functional-structure model of products by zigzag mapping among
different domains. The relationship between software modules is established by
analyzing the design matrix. The uncoupled design shows that software modules
are mutually independent, and can be concurrently processed so that the overall
developing time can be greatly shortened. The decoupled design shows that
software modules should be processed in sequence so that the whole process can
be managed effectively. The module-junction structure diagram shows the
sequence of the software development. The diagram can be used to guide the
corresponding work.
References
[1] Rumbaugh J, Blaha M. Object-oriented modeling and design. Prentice Hall, New York,
1991.
[2] McGregor J D, Sykes D A. Object-oriented software development: engineering
software for reuse. Van Nostrand Reinhold, 1992.
[3] Clapis P, Hintersteiner J D. Enhancing Object-oriented software development through
axiomatic design. Proceedings of ICAD2000 First International Conference on
Axiomatic Design, Cambridge, MA-June 21-23, 2000: 272-277.
[4] Xiong GL. Theory and practice of concurrent engineering. Tsinghua Press, Beijing,
2000.
[5] Suh N P. Axiomatic design-advances and applications. Oxford University Press, New
York, 2001.
[6] Suh N P, Sung-Hee Do. Axiomatic design of software systems. Annals of the CIRP,
2000; 49(1): 95-100.
[7] Zhang RH. Study on enabling technologies of axiomatic design. Ph.D. thesis, Hebei
University of Technology, 2004.
A Systematical Multi-professional Collaboration
Approach via MEC and Morphological Analysis for
Product Concept Development
1 Introduction
In order to develop an innovative product, it is important to know the demands of
consumers. The designer must understand the user’s specific requirements and
marketing strategies and then integrate all of the information into a distinct design
which differentiates competing products from each other.
User-oriented assistive device development is a usability-based innovation
concept, which focuses on the use of disabled patients’ current and future needs, as
well as their characteristics, in the design of innovative and/or improved assistive
products. Consequently, to develop a successful eating assistive device for patients
with cervical cord injuries, user requirements need to be carefully considered by
1
Department of Industrial Management, National Taiwan University of Science &
Technology, 43 Keelung Road, Sec. 4, Taipei, Taiwan, Tel: +886 (2) 2737 6327; Fax: +886
(2) 2737 6344; Email: wanggg@ntit.edu.tw
276 C.H. Wang, S.Y. Chou
2 Related work
It is well known that assistive devices can help disabled persons become more
independent and increase their quality of life. However, disabled persons'
conditions differ widely, and the number of available assistive devices is limited.
Generally, seriously disabled persons have not yet had access to the functionality
of assistive devices. For example, patients with cervical cord injuries have two
types of eating assistance to choose from: one is an expensive, power-controlled
device that is unsuitable for patients with a serious cervical cord injury, while the
other is feeding by a caregiver. In order to develop an eating assistive device that a
patient with a serious cervical cord injury can use independently, it is important to
construct a systematic approach that integrates MPC knowledge and an
information technology support system to implement requirement identification,
knowledge synthesis and idea creation.
Consumer needs initiate the early stage of the new product development process,
and the determination of correct and complete information requirements sets the
stage for an effective development process, increasing the likelihood of satisfaction
in the implementation and allowing for the early correction of errors while their
cost is still low [1].
One of the primary goals of collecting and adopting user-needs information in new
PCD is the identification of customer preferences [2]. To deal with this task, an
MEC approach is used to identify the attributes, consequences and values
perceived by the user. Since its introduction into the marketing literature by
Reynolds and Gutman [3], MEC has become a frequently used qualitative
technique for formulating product development and marketing promotion
strategies in many fields.
An MEC methodology illustrates the connections between product attributes, the
consequences or benefits of using the product, and personal values: the A-C-V
structure (Figure 1), where the means is the product and the end is the desired
value state. The purpose of MEC theory is to explain how product preference and
choice are related to the achievement of central life values [3].
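A minimal way to hold such ladders in software is to store each chain as an attribute-consequence-value triple and group attributes by the value they ultimately serve; the ladder content below is invented for illustration.

```python
# Minimal sketch of means-end chains (A-C-V triples) for an eating assistive
# device. The ladder content is invented for illustration.
from collections import defaultdict

chains = [
    ("lightweight utensil", "less arm fatigue",    "independence"),
    ("stable bowl frame",   "less spilled food",   "self-esteem"),
    ("easy-to-grip handle", "eating without help", "independence"),
]

# Group attribute-consequence links by the personal value they serve.
by_value = defaultdict(list)
for attribute, consequence, value in chains:
    by_value[value].append((attribute, consequence))

for value, links in sorted(by_value.items()):
    print(value, "<-", links)
```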
(cost, ease of implementation, functionality, etc.) that will be used in the MAEM
process.
The simple multi-attribute rating technique (SMART) is an extension of the direct
rating technique, one of the MAEM methods, and is suitable for verifying the
alternative PCD solutions generated by MA. In a basic design of SMART, the
alternatives are rank-ordered for each attribute, setting the best to 100 and the
worst to zero and interpolating in between. By weighting the performance values
with relative weights, a utility value for each alternative is calculated. In SMART,
the weighted average is simply given as Ui = Σj wj·uij, subject to Σj wj = 1, where
Ui is the aggregate utility of the ith alternative, wj is the normalised importance
weight of the jth attribute and uij is the normalised value of the ith alternative on
the jth attribute. In general, the alternative whose Ui has the highest weighted-
average value is defined as the optimal solution.
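The SMART computation amounts to a weighted average per alternative; a short sketch, with invented attributes, weights and scores:

```python
# SMART weighted-average utility: U_i = sum_j w_j * u_ij with sum_j w_j = 1.
# The attributes, weights and normalised scores are invented for illustration.

weights = {"cost": 0.5, "ease of implementation": 0.3, "functionality": 0.2}
scores = {   # normalised ratings u_ij on a 0-100 scale
    "concept A": {"cost": 100, "ease of implementation": 40,  "functionality": 60},
    "concept B": {"cost": 20,  "ease of implementation": 100, "functionality": 100},
}

assert abs(sum(weights.values()) - 1.0) < 1e-9   # weights must sum to 1

utility = {alt: sum(weights[a] * u[a] for a in weights) for alt, u in scores.items()}
best = max(utility, key=utility.get)

print({alt: round(u, 6) for alt, u in utility.items()})   # {'concept A': 74.0, 'concept B': 60.0}
print(best)                                               # concept A
```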
(Figure: engineers' design knowledge mapped to product components; figure not
reproduced.)
towards the differential product attributes. In the third stage, the solution pool of
PCD is produced from MA, and the optimal solution is verified with the SMART
method of MAEM. Finally, the result obtained from the MPC approach is
incorporated into the product usability experiment to derive user satisfaction with
finite revisions.
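The MA step that produces the solution pool is essentially a Cartesian product over component options; a minimal sketch, with invented component names and options:

```python
# Sketch of a morphological-analysis solution pool: every combination of one
# option per component. Components and options are invented for illustration.
from itertools import product

options = {
    "handle": ["rigid grip", "strap grip"],
    "spoon":  ["fixed", "swivel"],
    "tray":   ["flat", "raised edge"],
}

pool = [dict(zip(options, combo)) for combo in product(*options.values())]
print(len(pool))   # 2 * 2 * 2 = 8 candidate concepts
print(pool[0])     # {'handle': 'rigid grip', 'spoon': 'fixed', 'tray': 'flat'}
```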
5 Conclusions
Table 2 demonstrates the differential importance of the components for the PCD
of an eating assistive device that must satisfy the need of feeding independently.
Table 2 QFD matrix for mapping the differential needs of manipulation into components
Usability differentiation need (importance): Feeding independently (5)
Component ratings: Linkage 9; Handle frame 3; Handle cover 5; Spoon 3; Fork 3;
Tray 9; Bowl frame 3; Bowl 3; Cup 5; Cup mat 5; Box 1
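Reading the Table 2 row numerically (with the column alignment as inferred from the flattened layout), each component's priority can be taken as the need importance times its relationship rating:

```python
# Component priorities from the QFD row of Table 2:
# priority = need importance (5) x relationship rating.
# The column alignment is inferred from the flattened table, so treat the
# per-component ratings as a best-effort reading.
importance = 5
ratings = {
    "Linkage": 9, "Handle frame": 3, "Handle cover": 5, "Spoon": 3,
    "Fork": 3, "Tray": 9, "Bowl frame": 3, "Bowl": 3,
    "Cup": 5, "Cup mat": 5, "Box": 1,
}

priority = {component: importance * r for component, r in ratings.items()}
top = sorted(priority, key=priority.get, reverse=True)[:2]
print(top)   # ['Linkage', 'Tray'] -- strongest contributors to feeding independently
```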
According to the results of the MEC and QFD analyses, the MPC team members
used the MC technique with a CAD system and a collaborative communication
environment to perform the PCD, which was transferred to a prototype and then
verified by the subjects.
282 C.H. Wang, S.Y. Chou
The best alternative of PCD is finalized from MC with the SMART process and
presented as follows (Figure 6):
DFX Platform for life-cycle aspects analysis
1 Introduction
Today's highly competitive global manufacturing environment requires continuous
improvement of producers' efficiency. One way to achieve this is to increase the
efficiency of individual engineering activities, e.g., through the introduction of IT
technologies. Another way is to improve the coordination between development
activities by applying the Concurrent Engineering (CE) methodology and its
means for supporting teamwork. Typical objectives of CE are to (1) optimize
product quality, (2) minimize manufacturing cost, and (3) shorten delivery time.
In this context, the application of the “Design for X” philosophy, which is
commonly regarded as a systematic and proactive designing of products to
optimize total benefits over the whole product life-cycle, seems to be appropriate.
1 Dr Piotr Ciechanowski; ABB Corporate Research; ul. Starowislna 13A, 31-038
Kraków, Poland; piotr.ciechanowski@pl.abb.com, tel. +48 12 4244114
284 P. Ciechanowski, L. Malinowski and T. Nowak
2 Problem Definition
Similarly, Vliet and co-workers stated that an integrated system for continuous
DFX design support should offer (i) coordination of the design process, and (ii)
generic estimators to adequately evaluate and quantify life-cycle aspects [15], [14].
For the quantification of life-cycle properties they proposed: cost, quality, flexibility,
risk, lead-time, efficiency and environmental hazard.
A generalized framework (shell) for manufacturability analysis is proposed in
[13]. Unlike previous approaches, in this solution the user is able to choose the
criterion for evaluating manufacturability and thus can ensure that the most
appropriate measure is selected.
But, as Hazelrigg concluded in his book [5], the true objective of
engineering design is to make money. The other design targets, namely to (1) optimize
product quality, (2) minimize cost, and (3) be available sooner, just describe how
the company maximizes its profits.
Today’s integrated DFX tools consider mainly production phase of a product life
span. However there are other aspects, which need to be covered. Design for
environment (DFE), together with Life Cycle Assessment (LCA) - its most
powerful instrument, is one of the most difficult to integrate with other DFX tools,
which are much more related to economic benefits [8]. Life time environmental
impact can be expressed in terms of price of pollutions treatment (Tellus and EPS
methods) [12], [11], however these costs would not be covered directly by a
producer. Nevertheless LCA can be easily integrated into the DFX framework by
taking into account customer willingness to pay for “green product” [6]. More and
more producers are forced (WEEE, Waste Electrical and Electronic Equipment -
EU Directive) to take back their product at the “end of life” therefore Design for
Recycling is the most important part of the DFE.
Rising warranty costs focus attention on the issue of Design for Serviceability
[2]. Service Mode Analysis and probabilities of failure modes will be the key
issues in warranty cost evaluations.
Design for Performance and Design for Compliance would complete the other
required design aspects related to the operational and "end of life" life cycle phases.
The key issue of this research was to develop the means to reliably estimate and
verify the costs/benefits of different design concepts at different stages of product
development. Various design approaches, the X's, are collected and offered in a
harmonized way via the DFX Platform. The role of this framework is to provide a
structured workflow specifying how and when the different X methodologies can
be applied, and also to unify DFX measures (to combine different DFX metrics,
like direct material cost, number of articles, assembly times, failure probability,
etc.).
3 DFX Framework
[Figure: DFX Platform layers, with continuous estimation of the targets cost, quality and speed]
The Information layer stores the input data required by a given engineering task
and the output information created in the following project phases. Through this
module the intermediate technical results and design proposals are also transmitted
between different DFX tools.
The Domain evaluation layer is designed to manage DFX approaches, the
specialized methods evaluating the design concept from a given product life-cycle
perspective. The most common DFX approaches are:
- Modularization (maximizes external product variety),
- Standardization (minimizes the number of different article types and
  manufacturing processes and tools),
- Manufacturability (assigns suitable manufacturing processes and materials),
- Assemblability (optimizes the assembly process),
- Late Customization (differentiates product variants by application of
  supplementary manufacturing steps or optional modules),
NPV = Σ(t=1..n) Ct / (1 + r)^t - C0
where:
t - time of the cash flow
n - total time of the project
r - interest rate
Ct - net cash flow (the amount of cash) at that point in time
C0 - capital outlay at the beginning of the investment time (t = 0)
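Using the variables just defined, a minimal net-present-value sketch (the figures are invented for illustration) is:

```python
# Net present value with the variables defined above: each net cash
# flow Ct is discounted by the interest rate r, and the initial capital
# outlay C0 is subtracted.

def npv(cash_flows, r, c0):
    """cash_flows[t-1] is the net cash flow Ct at the end of period t."""
    return sum(ct / (1 + r) ** t
               for t, ct in enumerate(cash_flows, start=1)) - c0

value = npv([400.0, 400.0, 400.0], r=0.10, c0=900.0)  # 3-period example
```

A positive value indicates that the concept is expected to earn more than the interest rate over the project time.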
The typical application scenario covers: log-in to the DFX Platform web site;
selection of an adequate DFX approach and related tool; execution of the domain
analysis; evaluation of the results in terms of the domain-specific measures (e.g.
material cost, assembly time); and finally the total profit calculation.
To find the most profitable product design variants, the "Cost of Variety"
calculation method was applied, as described in [9]. The goal was to find the
optimal production volume per variant, minimizing the total manufacturing costs.
It was calculated that the most profitable modularization scenario is to manufacture
only two variants out of four, which gives more than 25% savings in comparison
to the original production costs.
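The non-linear model itself is given in [9]; the toy search below is only a sketch of the idea, with an invented demand-folding rule (a dropped variant's demand is served by the cheapest kept variant that is at least as rich) and made-up numbers:

```python
# Toy "cost of variety" search over variant subsets. Fewer variants save
# setup (tooling) cost but raise material cost, because folded demand is
# served by a richer, more expensive variant. All figures are invented.
from itertools import combinations

variants = ["V1", "V2", "V3", "V4"]          # ordered basic -> rich
demand = {"V1": 100, "V2": 60, "V3": 30, "V4": 10}
unit_cost = {"V1": 10.0, "V2": 12.0, "V3": 15.0, "V4": 20.0}
setup_cost = 400.0                           # tooling cost per kept variant

def total_cost(kept):
    cost = len(kept) * setup_cost
    for i, v in enumerate(variants):
        # serve v's demand with the first kept variant at least as rich
        server = next((k for k in variants[i:] if k in kept), None)
        if server is None:
            return float("inf")              # demand cannot be served
        cost += demand[v] * unit_cost[server]
    return cost

subsets = [set(s) for n in range(1, 5) for s in combinations(variants, n)]
best = min(subsets, key=total_cost)
```

With these numbers the search also settles on two of the four variants, illustrating the trade-off between setup savings and material cost.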
In the second analysis stage, the manufacturing and assembly aspects of the new
product design were taken into account. Each component in the assembly was
examined with the support of dedicated DFA and DFM tools. This study started with
a simplification analysis aiming to reduce the number of product parts. As a result,
one could state that potentially about 37% of the components might be eliminated.
Next, the manufacturing aspects for all product components were further
studied, and the most cost efficient manufacturing technologies were assigned
based on the production scale.
Product design optimization related to the life phases after the "factory gate" was
limited to warranty and "take back" obligations.
Failure costs were calculated according to the following formula [10]:
Costfailure = fstop * (CRepair + CConsequence) + fnonstop * CRepair
where:
Costfailure = total cost of failures
CRepair = repair and/or replacement cost
CConsequence = consequence costs, i.e. standstill cost from failure
fstop = number of stopping failures in the life time
fnonstop = number of non-stopping failures in the life time
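One plausible reading of these variables, assumed here for illustration rather than taken from [10], charges stopping failures with repair plus consequence costs and non-stopping failures with repair costs only:

```python
# Hypothetical failure-cost model based on the variables listed above
# (an assumption, not necessarily the exact formula of reference [10]):
# stopping failures incur repair plus standstill (consequence) costs,
# non-stopping failures incur repair costs only.

def failure_cost(c_repair, c_consequence, f_stop, f_nonstop):
    return f_stop * (c_repair + c_consequence) + f_nonstop * c_repair

# Example: 2 stopping and 5 non-stopping failures over the life time.
total = failure_cost(c_repair=1500.0, c_consequence=8000.0,
                     f_stop=2, f_nonstop=5)
```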
The LCC tool supports failure mode calculations as well as recycling and disposal
options. The average service time and cost can be reduced by 25% due to the
proposed greasing system modifications.
Decommissioning cost can be minimized due to the reduced number of parts and
the use of recyclable materials, eliminating the disposal alternative.
One of the key advantages of the DFX Platform is the possibility to reliably
estimate the profit of the analyzed product concept. The business impact coming from
the different DFX analyses is summed up and a total cost/benefit figure is calculated. In
this way, different product concepts can be compared over the total life-cycle.
5 Summary
Most of today’s DFX methods and tools (software packages, manufacturing
guidelines, check lists, etc.) consider product and process design in unilateral way
mainly, e.g. manufacture- or assembly centric. This research proposed the
framework, which manages the different design approaches from whole life-cycle
(including “end of life”) perpective, and involves trade-offs between different
design objectives and business profitability measured by present value of net
benefit. Based on the proposed framework, the DFX Platform was developed. The
solution was designed as a web service, which manages the different design
approaches, controls the application of specific tools according to the phase of the
development process, transfers the information between and within engineering
domains and ensures consistency of cost/benefit estimations. The practical solution
supporting proactive, profit oriented design, was implemented and successfully
6 References
[1] Boothroyd G., Dewhurst P. and Knight W. Product design for manufacture and
assembly, 1994 (M. Dekker, New York).
[2] Bryan, C., Eubanks, C.F., Ishii, K. Design for Serviceability Expert System. ASME
Computers in Engineering. August 1992, San Francisco, CA. Vol.1. ISBN 0-7918-
0935-8. pp. 91-98.
[3] Gupta S.K., Regli W.C. and Nau D.S. Integrating DFM with CAD through Design
Critiquing. Concurrent Engineering: Research and Applications, 1994, 2(2), pp. 85-94.
[4] Hazelrigg G. An Axiomatic Framework for Engineering Design. ASME Journal of
Mechanical Design, 1999, 121, pp. 342-347.
[5] Hazelrigg G. Systems Engineering: An Approach to Information-Based Design, 1996
(Prentice Hall, New York).
[6] Hunkeler D., “Life Cycle Profit Optimization”, International Journal of LCA vol. 5 (1),
pp. 59-62, 2000.
[7] Maropoulos P., Bramall D. and McKay, K. Assessing the manufacturability of early
product design using aggregate process models. Journal of Engineering Manufacture,
2003, 217, pp. 1203-1214.
[8] Norris G. A., “Integrating Life Cycle Cost Analysis and LCA”, International Journal of
LCA vol. 6 (2), pp. 118-120, 2001.
[9] Nowak T., Chromniak M. The Cost of Internal Variety: A Non-Linear Optimization
Model. In Proceedings of International Design Conference – Design, Dubrovnik, 2006.
[10] Ravemark D., LCC/LCA experience - developing and working with LCC tools. ABB
2004, available at: http://www.dantes.info/Publications/publications_date.html
[11] Steen B., "A systematic approach to environmental priority strategies in product
development (EPS). Version 2000 - Models and data of the default method", CMP
report 1999.
[12] Tellus Institute, “The Tellus Packaging Study”, Tellus Institute Boston, 1992, available
at: www.epa.gov/opptintr/acctg/rev/7-10.htm
[13] Tharakan P., Zhao Z. and Shah J. Manufacturability evaluation shell: A re-configurable
environment for technical and economic manufacturability evaluation. In Proceedings
of DETC03/DFM-48155, Chicago, 2003.
[14] Vliet J., Luttervelt C. and Kals H. An integrated system architecture for continuous DFX
design support. In Proceedings of 9th International Machine and Production
Conference, UMTIK, Ankara, 2000.
[15] Vliet J., Luttervelt C. and Kals H. Quantification of life-cycle aspects in a DFX
context. In Proceedings of 9th International Machine and Production Conference,
UMTIK, Ankara, 2000.
[16] Wassenaar H. J. and Chen W. An Approach to Decision-Based Design. In Proceedings
of DETC01/DTM-21683, Pittsburgh, 2001.
Design For Lean Systematization Through
Simultaneous Engineering
1 Introduction
stamping case resulted from interviews with experts in the main automotive
industry processes: stamping, welding and assembly. These guidelines are related
to one or another of the seven main wastes encountered in running production.
2 Lean Manufacturing
Since the purpose of this study is to propose a DFX tool to be inserted into the
product development environment, it is necessary to choose a PDP model among
the various available. Figure 1 shows the Rozenfeld et al. model, which will serve
as a basis for understanding in which development phases the tool will be
applied.
The proposed DFX tool will be applied in the Conceptual and Detailed
Design phases, with its requisites already defined in the informational phase.
The product concept has a considerable impact on the final cost. It is during the
production phase that design change costs are highest. To help the designer evaluate the
impact of his decisions on the product's life cycle, auxiliary methods for design
decisions called DFX (Design for X) were developed. Among the most applied DFX
methods is Design for Manufacturability (DFM).
For better results, this tool must be used as early as possible in the PDP
process, within a Simultaneous Engineering environment.
At present, the guidelines contained in the DFM tool have provided input so that
product development teams contribute, although modestly, to the
implementation of the Lean Manufacturing philosophy, mainly in what regards
standardization. There is, however, a marked absence of tools to
support product development teams specifically in the task of contributing to the
implementation of Lean Manufacturing in its entirety.
According to Rozenfeld [6 apud Womack 8], in order to promote opportunities
that will have an impact on manufacturing efficiency, cost and product quality, Lean
Production must be aligned with the product not only in its manufacture, but also in its
conception in the PDP.
But, then, what new approach would this tool bring with regard to the already
known DFM? The answer is a set of specific guidelines for the development of
products focused on the implementation of the Lean Manufacturing philosophy
in the manufacturing phase.
294 M. Raeder, F. Forcellini
4 Simultaneous Engineering
The term simultaneous engineering can denote either parallel cooperation and
work discipline towards a set of common objectives in the development step, or a
form of design time reduction through the accomplishment of independent
activities that can be performed simultaneously.
According to Rozenfeld [6], one of the first attempts was to increase the degree
of parallelism among development activities, seeking the simultaneous
accomplishment of design and process planning activities. In this way, activities
that were previously started only after the former activity had been finished and
approved can be performed in parallel.
The following benefits can be attained through simultaneous engineering:
- reduction of the time for development of new products
- reduction of the cost of development of new products
- better quality of new products as per customer needs
So, through Simultaneous Engineering, one can make product concept
changes in the initial development process steps.
Our objective is not exactly the reduction of product development time, but rather the
assurance that the developed product meets essential requisites for the
Since the environment of this study is mainly the automotive industry, it is only
natural that the study focus shall be on it.
As explained in the initial stages, the objective of this work is to supply
directives which will enable a wider penetration of the Lean Manufacturing philosophy
through a strong contribution of the product development team as well as other DFX
tools.
Therefore, it has been decided to stratify the main steps of automotive production
so as to supply orientations for each of them. Whenever possible, these orientations
will be directly related to the seven main wastes mentioned in the Lean
Manufacturing definition.
The following recommendations were extracted from experience gained
in daily production, and from interviews with specialists in the respective areas. It
is expected that these recommendations will allow a leaner and more efficient
production system all the way from its conception to the final delivery of the good.
First step, stamping: if the manufacturing process of an automobile is analyzed
as a whole, we will notice that, in what concerns product design, the higher the quality
of a stamped part, the lower the investment needed. Higher productivity gains in
stamping have been obtained through improvements by way of highly flexible
equipment. Less complex geometry and the right choice and adequate thickness of
materials will allow the following benefits, shown in Table 1, according to the seven
main wastes previously mentioned:
Table 1. Relation between benefit and avoided waste during the stamping process

Benefit | Waste
Less complex geometry, bringing lower tool adjusting time | Over and incorrect processing
Adequate raw material and thickness, bringing lower template numbers | Defects
Less complex geometry, meaning less time spent on measuring and rework | Over and incorrect processing
Adequate raw material and thickness, reducing stamping stages | Over processing
Table 2. Relation between waste causes and consequences during the welding process

Causes | Waste
Necessary process checks to ensure good product quality delivered to assembly, due to low design robustness | Over processing and incorrect processing
Amount/complexity of gadgets to ensure welding geometry | Over processing and incorrect processing
Number of welding spots to ensure product rigidity | Over processing and incorrect processing
Part degradation in storage due to high degree of ductility and thickness | Defects; over production
Table 3. Relation between waste causes and their consequences during the final assembly process

Causes | Wastes
Components with geometric variations received from internal suppliers | Waiting time; over and incorrect processing; unnecessary moving
Components with undefined positions in the assembly operation (oblongs, keyways), requiring many adjustments at assembly | Over processing and incorrect processing; unnecessary moving
Parts that could be assembled through fitting instead of screws, decreasing the number of components, inventory and operations | Unnecessary moving; excess inventory; defects; over processing and incorrect processing
Lack of standardization of components that have very similar functions (screws, nuts), meaning high inventory, process faults and excessive equipment needed | Unnecessary moving; excess inventory; defects; over processing and incorrect processing
Inadequate raw material which deforms during the process, requiring adjustment steps | Defects; excess inventory; over and incorrect processing; unnecessary moving
6 Conclusion
Through this study we can observe that the integrated development of products
makes an important contribution to the implementation of the Lean Manufacturing
philosophy in its entirety. The DFX tools available to date, like DFM, are not
sufficient, there being room for the creation of a new tool: DFL.
Simultaneous Engineering has an important role in the search for the
development of products with a focus on Lean Manufacturing, and is still underutilized,
not only in the reduction of product development time, but mainly in the development
teams' interaction with their client areas.
Much has been done in the search for lean manufacturing, with contributions
from the product side in an isolated manner and in isolated steps of the manufacturing
process. Isolated changes to the product in search of cost reductions in the
production process can bring, as a consequence, an excess of operations that do not
add value along the production chain.
The present study has shown that product development can make a positive
contribution to the implementation of the Lean Manufacturing philosophy, mainly if
the guidelines outlined here are applied in the initial steps of the manufacturing
process.
7 Bibliographic References
[1] Edwards KL. Towards more strategic design for manufacture and assembly: priorities
for concurrent engineering. Materials and Design 23, 2002
[2] Bralla JG. Design for Excellence. New York: McGraw Hill, 1996.
[3] Forcellini FA. Introdução ao desenvolvimento sistemático de produtos. Florianópolis,
2000. 110 f. Apostila (Disciplina de Projeto Conceitual) – Pós-Graduação em
Engenharia Mecânica, Departamento de Engenharia Mecânica, Universidade Federal
de Santa Catarina.
[4] Huang GQ. Design for X – Concurrent Engineering Imperatives. 1 ed. Dundee:
Chapman & Hall, 1996.
[5] Liker JK. O Modelo Toyota. Porto Alegre: Bookman, 2004.
[6] Rozenfeld H, Forcellini AF, Amaral DC, Toledo JC, Silva SL, Alliprandini DH,
Scalice RK. Gestão de Desenvolvimento de Produtos: uma Referência para Melhoria
do Processo. São Paulo: Editora Saraiva, 2006.
[7] Womack PW, Jones DT, Roos D. A máquina que Mudou o Mundo. Rio de Janeiro,
Elsevier, 1990.
[8] Womack JP, Jones DT. A Mentalidade Enxuta nas Empresas. 6 ed. Rio de Janeiro:
Editora Elsevier, 1996.
Postponement planning and implementation from CE
perspective
1 Introduction
The markets for mass-produced, low cost standard goods are currently a hostile
environment of decreasing profitability. Wide scale production is still a
requirement; however, a new market characteristic emerges: customizing products
according to specific customer needs. The current economic scenario is
characterized by uncertainty and a high level of competition.
As a consequence, companies are facing difficulties in forecasting product
demand. For many companies, a bad demand forecast means changes in the
schedule of customer orders, generating product reconfiguration down to the
assembly line [10]. The high level of competition in the market creates a more
stringent customer profile, who asks for more customized products, shorter
delivery times and lower prices [14]. Then, the following conflict is generated: the
1 Corresponding author: Cássio Dias Gonçalves, Av. Independência, 531, apto 112A,
CEP.: 12031-000, Taubaté, São Paulo, Brazil; E-mail: cdgoncal@yahoo.com.br.
302 C. Gonçalves, G. Loureiro and L. Trabasso
The main challenges observed in the literature that impair postponement
implementation are: little knowledge about the benefits and associated costs,
technology limitations, difficulties in estimating gains and weak alignment among
departments in the organisation [21].
Other factors, such as inefficient transportation, manufacturing and information
technology systems, also make postponement implementation difficult [29].
There are no references in the literature to the lack of a Concurrent
Engineering (CE) culture in the company as a factor that makes postponement
implementation difficult. In this paper it is shown that CE practice is
essential for success in postponement strategy implementation, especially
when an enterprise develops complex products.
According to Bullock [3], there are two main kinds of postponement referred to in the
literature: form and time postponement.
Form postponement aims at delaying certain stages of the product
manufacturing process until a customer order has been received.
Time postponement refers to the situation where the distribution or the actual
delivery of a product is delayed until customer demand is known.
Iyer et al. [10] presented a postponement strategy where the customer order is
postponed, thus establishing a trade-off between the payment of contractual
penalties and the reduction of operational costs. There are some authors who propose
mixing postponement and speculation strategies [23][25].
The present paper proposes a different strategy from those just
mentioned. It consists of applying CE concepts to define some manufacturing
strategies during the preliminary design phase. Manufacturing and market teams
work together to define optional item kits, using analysis tools which cope with
uncertainty [2]. Then, the company builds up buffers of these kits that might be
applied in the product assembly line to increase the flexibility and agility of the
customization process.
Zinn and Bowersox [28] classify the postponement costs as: inventory, process,
transportation and the cost of lost sales.
One can observe that there is no agreement among the authors about
postponement costs. It is found that higher postponement levels lead to lower
inventory holding costs [28]; there is also a claim in the literature [7] that, in some
cases, customized components must be on the shelf (stock) to keep the
production line flexible and agile.
The transportation costs and the cost of lost sales are not well elaborated,
because of the uncertainties in shipment sizes and the unknown balance between
customization levels and product delivery times.
Some authors say that manufacturing costs increase with postponement
implementation because of new technology requirements [3][12]; however, this
viewpoint is questioned by Waller [28].
The kind of relationship between postponement and its associated costs can
change from one case to another. This evidences that the current formulations for
postponement cost are not yet mature.
This paper takes into account a cost-postponement relation that has not been
found in the literature so far: the product reconfiguration cost.
There are authors that relate postponement to other theories, pointing out cases
where one theory can help the other, for example Just in Time (JIT). Some authors
say that JIT often results in postponement [28].
304 C. Gonçalves, G. Loureiro and L. Trabasso
Shihua [13] and Yang [30] relate postponement to modular design. Shihua says
that, through commonality, it is possible to reduce the risks and uncertainties
associated with lead-times. This might help to reduce stock in a safe way, improve
wide scale production, simplify the planning task and improve the product
development process. However, commonality brings higher costs per unit
due to excess performance, greater workload and work-in-process variability
[13]. Another consequence of modular design is that components tend to be
heavier, which is extremely bad for aerospace industries.
As can be observed, just a few authors relate postponement to product design,
and an even smaller number mention the importance of Concurrent Engineering as an
enabling factor for postponement. When this does happen, it is usually presented in
a superficial, ad hoc manner, without practical examples to support it [26][23][3].
This gap in the literature is partially filled by Tseng [4], Du [25] and Jiao
[26]. They propose a certain relationship between postponement and concurrent
engineering, bringing the customer close to product development, to help the
creation of a Product Family Architecture (PFA) that enables postponement and
generates value for the customer.
2.5 Benefits
Most authors say that the advantages of postponement are: improvement in customer
satisfaction, reduction in inventory costs and reduction in the uncertainty of demand
forecasts [21][3].
However, there are other authors who, in particular cases, relate postponement
to investment reduction [10], by delaying customer orders when there are
unpredicted demands, to avoid production capacity overloads and, as a
consequence, investments in new equipment. Then, they propose the
following trade-off: either pay contractual penalties or invest in production
capacity. In the next section, yet another benefit brought by postponement
utilization becomes evident: the reduction of product reconfiguration cost.
3 Proposed method
The detailed presentation of the proposed method, including case study
applications, is described in previous papers [7][8]. Its main characteristic is the
utilization of CE tools to:
- identify, during the product development phase, the best way and level of
  postponement that should be adopted in an aerospace company;
- map the relationships of the functional and physical characteristics
  with the customer needs and, at the same time, assure that the costs
  comply with the product development limitations;
- determine the best moment to take the main decisions related to
  postponement during the product development phase.
Figure 1 provides an overview of the proposed postponement method:
Postponement planning and implementation from CE perspective 305
The Critical Path Method (CPM) is used to identify the processes most critical
to the customization cycle time [6][17][20][24]. Then, through a technical
analysis, design alternatives are created for customized product components and
manufacturing processes. After that, the Quality Function Deployment (QFD) [11]
and Design to Cost (DTC) tools help to determine the best product design and
manufacturing alternative according to cost constraints and customer needs. The
customer needs are related to the company's engineering and manufacturing
requirements through the QFD matrices; these also help to weight the relationships
between the requirements and the product's parts and manufacturing processes.
The DTC technique, in its turn, is used to compare the target costs with the
estimated costs, singling out the product part or process that must be redesigned to
meet the target cost. All the CE tools mentioned are combined to perform the
postponement strategy definition process. Then, the activities which compose this
decision process are classified as questions, decisions and milestones [15], to be
scheduled through the Design Structure Matrix (DSM) [9][15][18][22], generating the
postponement strategy definition process planning.
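The CPM step can be sketched as a longest-path computation over the activity network; the activities, durations and dependencies below are hypothetical:

```python
# A minimal critical-path (CPM) sketch: compute the longest path through
# a small customization process network to find the activities that
# drive the customization cycle time. All names and durations are
# invented for illustration.

durations = {"paint": 4, "harness": 6, "avionics": 10, "interior": 8, "test": 3}
predecessors = {
    "paint": [], "harness": [], "avionics": ["harness"],
    "interior": ["paint"], "test": ["avionics", "interior"],
}

earliest_finish = {}

def finish(activity):
    """Earliest finish time: latest predecessor finish plus own duration."""
    if activity not in earliest_finish:
        start = max((finish(p) for p in predecessors[activity]), default=0)
        earliest_finish[activity] = start + durations[activity]
    return earliest_finish[activity]

makespan = max(finish(a) for a in durations)

# Walk back from the last-finishing activity along latest-finishing
# predecessors to recover the critical path.
path, current = [], max(durations, key=finish)
while current is not None:
    path.append(current)
    preds = predecessors[current]
    current = max(preds, key=finish) if preds else None
critical_path = list(reversed(path))
```

Here the critical path is harness, avionics, test, with a cycle time of 19; those are the processes worth redesigning for postponement.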
5 Conclusions
After comparing the proposed postponement strategy definition method with the
existing postponement approaches found in the literature, its main contribution can be
formulated: it helps to define the best postponement strategy that should be
adopted to develop a complex product, from a CE perspective.
Some works were found which propose product customization strategies
aligned with customer needs, but due to their non-pragmatic approach, they cannot
be used for complex products. The main contribution of this paper is to provide a
method, based on CE concepts and supported by tools such as QFD, DTC, DSM and
CPM, to create systematic links from customer needs to product functional
Postponement planning and implementation from CE perspective 307
requirements and product physical characteristics, and at the same time, assuring
that the costs comply with the product development constraints, to determine the
best postponement level. The method also assures that any change in the functional
or physical characteristics in the product or its manufacturing process will be
evaluated according to the customer needs, product development costs limitations
and postponement level.
6 References
[1] Aviv Y, Federgruen A. Design for postponement: a comprehensive characterization of
its benefits under unknown demand distributions. Operations Research 2001. Vol. 49,
No 4, pp. 578 - 598.
[2] Beckman OR, Neto PLOC. Análise Estatística da Decisão. Edgard Blücher, editor. São
Paulo, 2000.
[3] Bullock PJ. Knowing When to Use Postponement. International Logistics. 2002.
Available on-line at:
<http://www.its.usyd.edu.au/past_assignments/tptm6260_bullock_2002.pdf>. Access
date: September 10, 2005
[4] Chen SL, Tseng MM. Defining Specifications for Custom Products: A Multi-Attribute
Negotiation Approach. Annals of the CIRP 2005. Vol. 54, pp. 159-162.
[5] Cunha DC. Avaliação dos resultados da aplicação de postponement em uma grande
malharia e confecção de Santa Catarina. PhD Thesis, UFSC, Florianópolis. 2002.
[6] Darci P. PERT/CPM. INDG, editor. São Paulo, 2004.
[7] Gonçalves CD, Trabasso LG and Loureiro G. Integrated CE tools for postponed
aerospace product and process decisions. Annals of the International Conference on
Concurrent Engineering, 2006. Vol. 143, pp. 477-488.
[8] Gonçalves CD, Trabasso LG and Loureiro G. Integrated postponement and concurrent
engineering applied to the aerospace industry. Annals of the International Conference
on Production Research, 2006.
[9] Hoffmeister AD. Sistematização do processo de planejamento de projetos: Definição
e seqüenciamento das atividades para o desenvolvimento de produtos industriais. PhD
Thesis, UFSC, Florianópolis, 2003.
[10] Iyer AV; Deshpande V, Wu Z. A postponement model for demand management.
Management Science 2003, vol. 49, n. 8, pp. 983-1002.
[11] Lee GH and Kusiak A. The house of quality for design rule priority. The International
Journal of Advanced Manufacturing Technology, 2001. Vol. 17, pp. 288-296.
[12] Lembke RST, Bassok Y. An inventory model for delayed customization: A hybrid
approach. European Journal of Operational Research 2004. Los Angeles, vol. 165, pp.
748-764.
[13] Ma S, Wang W, Liu L. Commonality and postponement in multistage assembly.
European Journal of Operational Research 2002. Vol. 142, pp. 523-538.
[14] Manufacturing postponement strategies come of age. AME Info. 2004. Available on-
line at: <http://www.ameinfo.com/40996.html>. Access date: October 8, 2005.
[15] MIT & UIUC DSM Research Teams. The Design Structure Matrix - DSM Home
Page. 2003. Available on-line at: <http://www.dsmweb.org/>. Access date: December
17, 2005
[16] Nasa, presentation on training for the use of DSM for scheduling questions, decisions
and milestones for the definition of roadmaps, Washington DC, 2005.
[17] Peralta AC, Tubino DF. O uso do DSM no processo de projeto de edificações. 2003.
Available on-line at: <http://www.eesc.sc.usp.br/sap/projetar/files/A018.pdf>. Access
date: December 18, 2005
[18] Peralta AC. Um modelo do processo de projeto de edificações, baseado na engenharia
simultânea, em empresas construtoras incorporadoras de pequeno porte. PhD Thesis,
UFSC, Florianópolis, 2002.
[19] Piller F, Reichwald R, Tseng M. Editorial. International Journal of Mass
Customisation 2006, Vol. 1, Nos 2/3.
[20] Pinedo M. Scheduling – Theory, Algorithms, and Systems. Prentice Hall, editor. New
Jersey, 2002.
[21] Prats L, Patterson J. New Supply Chain Study Finds Postponement Strategies Critical
For Reducing Demand Uncertainty And Improving Customer Satisfaction. 2003.
Available on-line at: <http://www.oracle.com/corporate/press/2436365.html>.
Access date: August 14, 2005.
[22] Primavera Project Planner. User manual-version 2.0b, Primavera Systems Inc, 1997.
[23] Sampaio M, Duarte ALCM, Csillag JM. O poder estratégico do postponement. 3rd
International Conference of the Iberoamerican Academy of Management. 2003.
Available on-line at: <http://www.fgvsp.br/iberoamerican/Papers/0304_ACF55A.pdf>.
Access date: August 8, 2005.
[24] Souza CE. Primavera Project Planner – Guia de aprendizado básico versão 3.0. Verano
Engenharia de Sistemas, 2000.
[25] Tseng MM, Du X. Design by Customers for Mass Customization Products. Annals of
the CIRP 1998. Vol. 47, pp.103-106.
[26] Tseng MM, Jiao J. Concurrent design for mass customization. Business Process
Management Journal 1998. Vol. 4, No. 1, pp. 10-24.
[27] Vitaliano WJ. Three Design to Cost Myths. International Conference of the Society of
American Value Engineering. New Orleans, LA, 1994.
[28] Waller MA, Dabholkar PA, Gentry JJ. Postponement, product customization, and
market-oriented supply chain management. Journal of Business Logistics. 2000.
Available on-line at:
<http://www.findarticles.com/p/articles/mi_qa3705/is_200001/ai_n8889247>. Access
date: September 14, 2005
[29] Yang B, Burns ND, Backhouse CJ. The Application of Postponement in Industry. IEEE
Transactions on Engineering Management 2005. Vol. 52, No 2.
[30] Yang B, Burns ND. Implications of postponement for supply chain. International
Journal of Production Research 2003. Vol. 41, no 9, pp. 2075 – 2090.
Neural Network and Model-Predictive Control for
Continuous Neutralization Reactor Operation
Flávio Perpétuo Briguente a, Marcus Venícius dos Santos b and Andreia Pepe
Ambrozin c

a Chemical Engineer, Monsanto Company, São José dos Campos, Brazil.
b Mechanical Engineer and c Chemical Engineer, Monsanto Company.
Abstract. This paper outlines neural network non-linear models to predict moisture in real
time as a virtual on-line analyzer (VOA). The objective is to reduce the moisture variability
in a continuous neutralization reactor by implementing model-predictive control (MPC) to
manipulate the water addition. The acid-base reaction requires the right balance of raw
materials. Moisture control is essential to the reaction yield and avoids downstream
process constraints. The first modeling step was to define variables that have statistical
correlation with, and a strong effect on, the predicted one (moisture). Then, enough
historical data was selected to represent long-term plant operation. Outliers such as plant
shutdowns, downtimes and unusual events were removed from the database. The VOA
model was built by training the digital control system neural block on this historical data.
The MPC was implemented considering constraint and disturbance variables to establish
the process control strategy. Constraints were configured to avoid equipment damage.
Disturbances were defined to cause feed-forward action. The MPC receives the predicted
moisture from the VOA and anticipates the water addition control. The process is
monitored via computer graphic displays. The project achieved a significant reduction in
moisture variability and eliminated off-grade products.
1 Introduction
A neural network, also known as a parallel distributed processing network, is a
computing solution that is loosely modeled after cortical structures of the brain. It
consists of interconnected processing elements called nodes or neurons that work
together to produce an output function. The output of a neural network relies on the
cooperation of the individual neurons within the network to operate. Processing of
information by neural networks is characteristically done in parallel. Since it relies
on its member neurons collectively to perform its function, a unique property of a
1 Monsanto do Brasil Ltda, Avenida Carlos Marcondes, n. 1200; CEP: 12240-481,
Jardim Limoeiro, São José dos Campos, São Paulo State, Brazil; Tel: +55 (12) 3932 7184;
Fax: +55 (12) 3932 7353; Email: flavio.p.briguente@monsanto.com;
http://www.monsanto.com.br
310 F.P. Briguente, M.V. Santos and A.P. Ambrozin
neural network is that it can still perform its overall function even if some of the
neurons are not functioning; it is robust enough to tolerate error or failure, as
described by Mandic [1].
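The node-and-connection structure described above can be made concrete with a minimal one-hidden-layer feed-forward network. The weights below are fixed toy values; in the VOA described later, the weights are learned from plant history.

```python
import math

def neuron(inputs, weights, bias):
    # Each processing element: a weighted sum of its inputs through a sigmoid.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, hidden_layer, output_layer):
    # The nodes in each layer work in parallel on the same inputs.
    hidden = [neuron(x, w, b) for w, b in hidden_layer]
    return [neuron(hidden, w, b) for w, b in output_layer]

# Two inputs -> two hidden neurons -> one output, with fixed toy weights.
hidden_layer = [([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)]
output_layer = [([1.0, -1.0], 0.0)]
y = forward([1.0, 0.5], hidden_layer, output_layer)
print(y)  # a single value between 0 and 1
```

Because the output pools contributions from all hidden neurons, zeroing any single weight shifts the result rather than breaking it, which is the robustness property noted above.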
Neural network theory is sometimes used to refer to a branch of computational
science that uses neural networks as models to simulate or analyze complex
phenomena and/or study the principles of operation of neural networks
analytically. It addresses problems similar to artificial intelligence (AI) except that
AI uses traditional computational algorithms to solve problems whereas neural
networks use software or hardware entities linked together as the computational
architecture to solve problems, Saint-Donat[2]. Neural networks are trainable
systems that can "learn" to solve complex problems from a set of exemplars and
generalize the "acquired knowledge" to solve unforeseen problems as in stock
market and environmental prediction. They are self-adaptive systems as shown in
figure 1, according to Wikipedia[3].
The United States is a major player in all of the technologies which make up
predictive process control. For example, historically Honeywell has had a major
presence, having introduced the first distributed control system (the Honeywell
TDC 2000) in 1975. Many other countries are also players in this area, however. In
the UK, BNFL has developed advanced control systems. In Germany, Siemens
Industrial Automation has been a leader in designing control systems with open
architecture. The Japanese company Yokogawa is active in the International
Fieldbus Consortium.
Model Predictive Control (MPC) is widely adopted in industry as an effective
means to deal with large multivariable constrained control problems. The main
idea of MPC is to choose the control action by repeatedly solving on line an
optimal control problem. This aims at minimizing a performance criterion over a
future horizon, possibly subject to constraints on the manipulated inputs and
outputs, where the future behavior is computed according to a model of the plant,
as shown in Figure 2. Issues arise in guaranteeing closed-loop stability, handling
model uncertainty, and reducing on-line computations, according to Bemporad [4].
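The receding-horizon idea can be sketched as follows: at each step the controller evaluates short sequences of future moves against a plant model, applies only the first move, and re-solves. The first-order moisture model, gains and candidate moves below are invented for illustration, not the plant's actual model.

```python
import itertools

def predict(x, moves, gain=0.5, decay=0.8):
    # Hypothetical first-order plant model: next_x = decay * x + gain * u
    trajectory = []
    for u in moves:
        x = decay * x + gain * u
        trajectory.append(x)
    return trajectory

def mpc_step(x, setpoint, candidates=(0.0, 1.0, 2.0, 3.0), horizon=3):
    # Enumerate candidate move sequences over the horizon and pick the one
    # minimizing the squared tracking error of the predicted trajectory.
    best = min(itertools.product(candidates, repeat=horizon),
               key=lambda seq: sum((xi - setpoint) ** 2
                                   for xi in predict(x, seq)))
    return best[0]  # apply only the first move (receding horizon)

x = 8.0          # current moisture (%), hypothetical starting point
setpoint = 5.0
for _ in range(10):
    u = mpc_step(x, setpoint)       # water-addition move chosen by the MPC
    x = 0.8 * x + 0.5 * u           # plant follows the model in this sketch
print(round(x, 2))  # settles close to the 5.0 set point
```

Restricting the candidate moves to a finite set plays the role of an input constraint: the optimizer can only choose actions the equipment can actually deliver.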
MPC is an advanced method of process control that has also been in use in the
process industries such as chemical plants and oil refineries since the 1980s. Model
predictive controllers rely on dynamic models of the process, most often linear
empirical models obtained by system identification. The models are used to predict
the behavior of dependent variables (outputs) of a dynamical system with respect
to changes in the process independent variables (inputs). In chemical processes,
independent variables are most often set points of regulatory controllers that
govern valve movement (e.g. valve positioners with or without flow, temperature
or pressure controller cascades), while dependent variables are most often
constraints in the process (e.g., product purity, equipment safe operating limits).
The model predictive controller uses the models and current plant measurements to
calculate future moves in the independent variables that will result in operation that
honors all independent and dependent variable constraints. The MPC then sends
this set of independent variable moves to the corresponding regulatory controller
set points to be implemented in the process.
2 Baseline Process
Monsanto has implemented a manufacturing unit in São José dos Campos that
uses a continuous process to make a specific salt through a continuous acid-base
reaction. The basic process consists of the continuous addition of an acid to be
stoichiometrically neutralized with a base, in the presence of water, as shown in
Figure 3. Since the start-up of the plant, several operating constraints have been
observed regarding the high variability in moisture control. Moisture is an
important parameter to ensure that the acid-base reaction takes place properly. The
control is done by feeding water into the continuous reactor, creating a product
dough. It is done automatically via a closed loop configured in the Distributed
Control System (DCS). The set point for the water feed rate is determined by the
operators through a prior visual analysis of the product in the reactor outlet
pipeline.
Figure 3. Continuous neutralization reactor: acid, base and water feeds; reactor
amperage; moisture control; product outlet.
The ideal moisture operating range is 4 – 6%. Working outside this range, many
plant shutdowns are observed due to pluggages in the downstream equipment. It
can also impact product quality, since higher moisture causes lump formation and
lower moisture creates dust in the subsequent process steps.
In order to establish the process baseline, several Six Sigma statistical tools were
applied to the historical data of the manufacturing unit. MINITAB® software was
used to calculate the indices and support the technical evaluation. According to
Hayashi [7], the capability tool was applied and the result showed a process Cpk of
0.29 for moisture control. This value is much lower than 1.32, the reasonable figure
for a capable process; see Figure 4.
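The capability index used here divides the distance from the process mean to the nearest specification limit by three standard deviations. A minimal sketch, using synthetic moisture readings rather than the plant's 157 samples:

```python
import statistics

def ppk(data, lsl, usl):
    # Distance from the mean to the nearest spec limit, over 3 sigma.
    mean = statistics.mean(data)
    sd = statistics.stdev(data)
    return min(usl - mean, mean - lsl) / (3 * sd)

# Synthetic moisture readings (%) centred near the upper limit, mimicking the
# baseline situation (mean around 6.05 against a 4-6% specification).
moisture = [5.8, 6.3, 5.9, 6.4, 6.0, 6.2, 5.7, 6.1]
print(round(ppk(moisture, lsl=4.0, usl=6.0), 2))  # negative: mean above USL
```

A negative value, as in the baseline figure, means the process mean sits outside the specification limits, so a large fraction of output is off-grade regardless of the spread.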
Figure 4. Baseline capability analysis for moisture (LSL 4.0, USL 6.0): sample
mean 6.05, N = 157, Pp 0.29, Ppk -0.04; expected overall performance of about
596,562 PPM outside specification.
3 Project Implementation
In order to implement a more reliable system, this project outlines neural
network models to predict the moisture in the continuous reactor in real time as a
virtual on-line analyzer. The project also considered a model-predictive control to
manipulate the water feed rate based on the predicted moisture in the continuous
reactor.
A benchmarking study was done to find potential software packages for
implementing the VOA and MPC applications. The market search considered the
restrictions of the current plant DCS. A versatile software package was chosen as
the platform to run the neural network and MPC applications, Emerson [8].
The neural network was trained on a 6-month period of historical data to
establish a control block system to replace the linear regression equation
previously used in the DCS. The amount of data used represented long-term plant
operation, excluding the outliers, which are in fact unusual events, plant
shutdowns, downtimes or experiments. The model is expected to predict the
moisture in the continuous process.
The first step of modeling was to calculate the correlation between the chosen
variables and select those with the strongest statistical effect on the predicted
variable (moisture). Based on the engineering flow diagram, the reactor
temperature, amperage and raw material feed rates, assays and moistures were
selected as potential variables for the model correlation. The selected variables
were the acid feed rate, acid assay and water feed rate, which demonstrated
correlations higher than 0.80. The virtual on-line analyzer was obtained by
training the digital control system neural block on historical data of the unit, as
shown in Figure 5.
Neural Network and MPC for Continuous Neutralization Reactor Operation 315
The predictive model gains were obtained by introducing step changes in the
process, then collecting and treating the data through the software. Figure 6 shows
the software template. To establish the process control strategy for the MPC, the
following variables were considered:
• Constraint variable: reactor amperage, which cannot exceed a certain value,
to avoid damage to the reactor's mechanical structure.
• Disturbance variables: acid feed rate and acid assay, which have a direct
correlation with the reactor moisture and influence the parameter predicted
by the VOA.
• Controlled variable: moisture predicted by the VOA. The operators insert
the moisture set point in the DCS and, based on that, the MPC manipulates
the water feed rate set point.
• Manipulated variable: water feed rate, the variable adjusted automatically
by the MPC in remote operation mode.
Capability analysis for moisture after implementation (LSL 4.0, USL 6.0): sample
mean 5.13, N = 414, Pp 2.22, Ppk 1.78; expected overall performance of about
5.5 PPM outside specification.
5 Conclusions
The project was implemented accomplishing its goals and was recognized as a
breakthrough solution. Technology innovation and business strategies were
brought to bear on this project by searching for modern ways of manufacturing
process control and management. Smart tools, new control strategies, process
modeling and teamwork were the keys to the success of this implementation. The
engineering approach in this work allowed process behavior to be anticipated,
avoiding waste of resources in the manufacturing organization and supporting a
proactive vision. The technology innovation provided a user-friendly tool for the
operators and knowledge exchange among the team. Thus, by applying intelligent
control, it was possible to increase the overall productivity of the manufacturing
unit. The collaboration among all the individuals involved, from different areas of
knowledge, was essential to achieving the results in an integrated manner. The
overall results lead the company to a sustainable business strategy due to the large
potential to increase instantaneous plant capacity. This project also opens new
opportunities to reduce costs in the manufacturing units by applying smart control
systems.
6 References
[1] Mandic, D. & Chambers, J.. Recurrent Neural Networks for Prediction: Architectures,
Learning algorithms and Stability. Wiley (2001).
[2] Saint-Donat, J., N. Bhat and T. J. McAvoy. Neural net based model predictive control,
1991.
[3] Wikipedia The Free Encyclopedia. Neural networks articles.
http://en.wikipedia.org/wiki/Neural_network , viewed on April 20, 2006.
[4] A. Bemporad, A. Casavola, and E. Mosca. Nonlinear control of constrained linear
systems via predictive reference management. IEEE Trans. Automatic Control, vol.
Modelling and Management of Manufacturing Requirements

Fredrik Elgh1
1 Introduction
Today, many companies have adopted the strategy of product customization. To be
able to reduce the workload and handle the large amount of information that this
strategy entails, companies have to make use of appropriate methods and tools.
Further, companies have to capture the knowledge behind a design for internal
reuse and/or to be able to provide design history documentation as requested by
customers and authorities. This implies that they have to consider the modelling
and management of the knowledge that governs the designs. This includes the core
elements of the knowledge, the range of the knowledge, its origin, its structure, and
its relations to other systems and life-cycle aspects.
The purpose of this work is to integrate the properties and the functions for
knowledge execution and information management into one system. The work is

1 Lecturer, School of Engineering, Jönköping University, Gjuterigatan 5, 551 11
Jönköping, Sweden; Tel: +46 (0) 36 101672; Fax: +46 (0) 36 125331; Email:
fredrik.elgh@jth.se; http://www.jth.hj.se
322 F. Elgh
based on two previously developed systems: one system for automated variant
design [1] and one system for management of manufacturing requirements [2].
Both systems can be used as analysis or synthesis tools concerning producibility
aspects [3]. The systems have different functionalities and properties, e.g.
regarding knowledge execution and information management, and it would be
fruitful to combine these in one system. The aim of the work is to provide an
approach for modelling of manufacturing requirements in systems for automated
variant design, supporting both knowledge execution and information
management.
One strong reason for using IT-support to manage requirements is the need for
traceability. This implies that changes should propagate to the product definition
guided by traceability links. According to [4], a requirement is traceable if one can
detect:
• the source that suggested the requirement,
• the reason why the requirement exists,
• what other requirements are related to it,
• how the requirement is related to other information such as function
structures, parts, analyses, test results and user documents,
• the decision-making process that led to derivation of the requirement, and
• the status of the requirement.
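The six properties above map naturally onto a requirement record with explicit traceability fields. A minimal sketch with illustrative field names and data (not taken from the cited tools):

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    source: str                   # who suggested the requirement
    rationale: str                # why the requirement exists
    related_reqs: list = field(default_factory=list)   # other requirements
    linked_items: list = field(default_factory=list)   # functions, parts, tests
    decisions: list = field(default_factory=list)      # decision-making records
    status: str = "proposed"

# Hypothetical manufacturing requirement for the heated-seat case.
r = Requirement("RQ-17", "Heating wire loop amplitude within tool limits",
                source="Manufacturing engineering",
                rationale="Wire-laying tool cannot form tighter loops")
r.linked_items.append("Part: cushion heating element")
r.status = "approved"
print(r.req_id, r.status)
```

With every change recorded against these fields, a change in a linked part can be propagated back to its requirement, which is the traceability that motivates IT support.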
To support traceability between customer requirements and systems/parts, the
employment of three additional structures (functions, solutions, and concepts) has
been proposed [5]. A similar approach is to enhance traceability using additional
structures for functions and function-carriers [6]. Both approaches are based on the
chromosome model [7], which is a further development of the theory of technical
systems [8,9].
The introduction of a process function domain, with process requirements, in
the four domains of the design world [10] has also been proposed [11]. The
purpose is to enable the manufacturing requirements for the physical product to be
mapped. However, the approach focuses on the management of process
requirements set by the product. This is intended for a company strategy where the
design of the manufacturing system is subordinate to the design of the product
and a new manufacturing system is developed for every new product. Another
approach argues for the structuring of manufacturing requirements in accordance
with the product and manufacturing domains [12]. It is suggested that the
manufacturing structures (processes, functions, functional solutions, and resources)
could be used for the structuring of manufacturing requirements. However, it is not
described how to support the conceptual phases, where different manufacturing
alternatives are to be evaluated, or how to model requirements arising from the
combination of resources. The approach is applicable for product documentation
and configuration systems. However, its applicability for systems supporting the
evaluation of different courses of action, or for generative process-based systems,
is considered to be limited.
Modelling and Management of Manufacturing Requirements 323
Figure 2. Different concepts for modelling of manufacturing requirements
(requirements attached to design, material, station and equipment). The Olsson
table [13] supports the definition of requirements.
The initial steps in the system development procedure [3] are to define: the
variables and requirements originating from the customers within a Customer
space, the resources within a Company design space, and the product variables
within a Product design space, and finally to formulate the design algorithms,
rules, and relations that transform customer and company variables into product
variables (Figure 3).
Figure 3. An analysis and modelling of the design algorithms, rules, and relations
that transform customer and company variables into product model variables
results in a generic product structure. The items in this structure have to be
clustered into executable knowledge objects by deploying a process view to
resolve the bidirectional and/or recursive dependencies.
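The clustering step in the caption can be sketched as follows: mutually dependent items (those that reach each other through the dependency structure) are merged into one knowledge object, and the resulting groups are ordered so that each executes after its inputs. The dependency data is hypothetical.

```python
def reachable(deps, start):
    # All items transitively depended on by `start`.
    seen, stack = set(), [start]
    while stack:
        for m in deps.get(stack.pop(), ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen

def cluster_and_order(deps):
    """deps: {item: set of items it depends on} -> ordered list of clusters."""
    items = list(deps)
    reach = {i: reachable(deps, i) for i in items}
    # Items that reach each other are mutually dependent: one knowledge object.
    clusters = []
    for i in items:
        group = frozenset([i] + [j for j in items
                                 if j in reach[i] and i in reach[j]])
        if group not in clusters:
            clusters.append(group)

    def depends_on(c1, c2):
        return c1 != c2 and any(j in reach[i] for i in c1 for j in c2)

    # The condensed graph is acyclic, so a dependency-free cluster always exists.
    ordered, remaining = [], clusters[:]
    while remaining:
        for c in remaining:
            if not any(depends_on(c, other) for other in remaining):
                ordered.append(c)
                remaining.remove(c)
                break
    return ordered

# Hypothetical dependencies: geometry and mechanics depend on each other
# (a bidirectional dependency), so they form one executable knowledge object.
deps = {
    "geometry": {"mechanics", "customer_input"},
    "mechanics": {"geometry", "customer_input"},
    "customer_input": set(),
    "process_plan": {"geometry"},
}
order = cluster_and_order(deps)
print([sorted(c) for c in order])
```

The bidirectional geometry/mechanics dependency cannot be sequenced item by item, so the two are executed together as one object, exactly the resolution the process view is meant to provide.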
Figure 4. Initial and generated system objects (Requirement Objects and
Knowledge Objects), structures, and relations. For the mapping of the knowledge
objects to the product structure there are two solutions: explicit mappings of
individual knowledge objects to related item(s), or implicit relations that are
realised when the knowledge objects are executed. At system execution two
relations are created: one for the creation and one for the definition of the product
items.
Figure 5. Upper left: a car seat with heating elements in the cushion and backrest.
Lower left: a cushion element glued to the seat foam. On the right: a cushion
element on a lighting table showing the heating wire with sinusoidal loops, the
thermostat, and the connection cable between two layers of carrier material.
4 Conclusion
The presented work provides an approach for modelling manufacturing
requirements in design automation. The approach promotes the integration of
properties and functions for knowledge execution and information management
into one system, i.e. integration of design know-how with life-cycle related
know-why. The focus in this work has been on requirements originating from
manufacturing, although the presented principles are perceived as applicable to
other life-cycle requirements. An expanded support for requirements modelling
and mapping will support different stakeholders’ needs of requirement traceability
and system maintenance.
The proposed approach has been adopted during the planning and setting up of
a first solution for a design automation system. The system provides the company
with the opportunity to work with producibility issues in a systematic way. It can
also serve as a tool that enables the evaluation of different courses of action in the
early stages of the development of product variants. Future work includes further
system development, user tests, and evaluations. Issues to be studied include: the
relation between Knowledge Objects and Product Elements, the scope and
re-execution of the Knowledge Objects, how general the Knowledge Objects shall
be, how to include process planning and cost estimation, how to handle
implications on the knowledge base resulting from system-generated product
structures and process plans, and the suitable execution principle (depth-first or
breadth-first) to be deployed.
(Figure 6 depicts: wire specification algorithms for length, diameter, resistance
and number of strands; computations; an inference engine; output; and an Access
database.)
Figure 6. System architecture. The graphical user interface (GUI) and the
interfaces to different software applications and databases are programmed in
Visual Basic. The knowledge base comprises rules in Catia Knowledge Ware
Advisor (KWA). The rules are linked (through an Access database) to different
Knowledge Objects. A Knowledge Object is a database object that has a number
of input parameters and output parameters. The Knowledge Objects can be of
different types (e.g. Catia KWA rules, Mathcad worksheets) in which the methods
of the different Knowledge Objects are implemented. The rule firing, invoking the
Knowledge Objects, is controlled by an inference engine, Catia KWA. The
company resources with associated manufacturing requirements are stored in an
Access database together with the Knowledge Objects. The product items and
structure, together with the two relations, Created by and Defined by, are created
at runtime. The system is fed with customer-specific input (parameters with
associated values together with a 2D outline of the heated seat areas). The main
output is the pattern for the heating wire's centre line, an amplitude factor for the
sinusoidal loops, and the wire specification.
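A Knowledge Object as described in the caption (a named object with input parameters, output parameters and an implementing method) might be sketched as below; the wire-specification rule and its numbers are invented for illustration.

```python
class KnowledgeObject:
    """A named object with input and output parameters and a method."""
    def __init__(self, name, inputs, outputs, method):
        self.name, self.inputs = name, inputs
        self.outputs, self.method = outputs, method

    def execute(self, params):
        # Read the inputs from the shared parameter set, write outputs back.
        values = self.method(*(params[i] for i in self.inputs))
        params.update(dict(zip(self.outputs, values)))
        return params

# Hypothetical rule: wire length from centre-line length and amplitude factor.
wire_spec = KnowledgeObject(
    "WireSpecification",
    inputs=["centreline_length_mm", "amplitude_factor"],
    outputs=["wire_length_mm"],
    method=lambda length, amp: (length * amp,),
)
params = {"centreline_length_mm": 1200.0, "amplitude_factor": 1.6}
wire_spec.execute(params)
print(params["wire_length_mm"])
```

Because each object only declares parameter names, the same shell can wrap a Catia KWA rule or a Mathcad worksheet, with the inference engine deciding when to invoke it.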
5 Acknowledgements
This work was conducted within the Vinnova program for “Collaborative platform
and rule based product development” and financial support is gratefully
acknowledged. The author would also like to express his gratitude to Kongsberg
Automotive for information and knowledge about the case of application, as well
as for helpful discussions.
6 References
[1] Elgh F, Cederfeldt M. A design automation system supporting design for cost –
underlying method, system applicability and user experiences. In: Sobolewski M,
Ghodus P, (eds) Next generation concurrent engineering. International society of
productivity enhancement, New York, 2005;619-627.
[2] Elgh F, Sunnersjö S. Ontology based management of designer’s guidelines for
motorcar manufacture. In: Indrusiak LS, Karlsson L, Pawlak A, Sandkuhl K, (eds)
Challenges in collaborative engineering - CCE’06. School of Engineering, Jönköping
University, Jönköping, 2006;71-83.
[3] Elgh F, Cederfeldt M. Producibility awareness as a base for design automation
development – analysis and synthesis approach to cost estimation. In: Ghodus P,
Dieng-Kuntz R, Loureiro G, (eds) Leading the Web in Concurrent Engineering. IOS
press, Amsterdam, 2006;715-728.
[4] Kirkman DP. Requirement decomposition and traceability. Requirements Engineering,
1998;3;107-111.
[5] Sutinen K, Almefelt L, Malmqvist J. Implementation of requirements management in
systems engineering tools. In: Proceedings of Product models 2000. Linköping
University, Linköping, 2000;313-330.
[6] Sunnersjö S, Rask I, Amen R. Requirement-driven design processes with integrated
knowledge structures. In: Proceedings of Design Engineering Technical Conferences
and Computers in Information in Engineering Conference 2003. American Society of
Mechanical Engineering, New York, 2003.
[7] Andreasen MM. Designing on a ”Designer’s Workbench” (DWB). In: Proceedings of
the ninth WDK Workshop, Rigi, Switzerland, 1992.
[8] Hubka V, Eder WE. Principles of engineering design. Heurista, Zurich, 1987.
[9] Hubka V, Eder WE. Theory of technical systems. Springer-Verlag, Berlin, 1988.
[10] Suh NP. The principles of design. Oxford University Press, New York, 1990.
[11] Sohlenius G. Concurrent Engineering. In: CIRP annals. The International Academy for
Production Engineering, Paris, 1992;41;645-655.
[12] Nilsson P, Andersson F. Process-driven product development – managing
manufacturing requirements. In: Horváth I, Xirouchakis P, (eds) Proceedings fifth
international symposium on tools and methods of competitive engineering. Millpress,
Rotterdam, 2004;395-404.
[13] Olsson F, Principkonstruktion (in Swedish). Lund University, Lund, 1978.
Integrating Manufacturing Process Planning with
Scheduling via Operation-Based Time-Extended
Negotiation Protocols
Izabel Cristina Zattar a,1, João Carlos Espindola Ferreira a, João Gabriel Ganacin
Granado a and Carlos Humberto Barreto de Sousa a

a Universidade Federal de Santa Catarina, Departamento de Engenharia Mecânica,
GRIMA/GRUCON, Florianópolis, SC, Brazil.
Abstract. This paper proposes the on-line adaptation of process plans with alternatives,
through the application of an operation-based time-extended negotiation protocol, for
decision-making about the real-time routing of job orders of parts composed of machining
operations in a job-shop environment. The protocol is modified from the contract net
protocol to cater for multiple tasks and many-to-many negotiations. The grouping of
machining operations enables a reduction in setup times, resulting from the reduction of
machine changes. For each part, all feasible routings are considered as alternative process
plans, provided the different manufacturing times on each machine are taken into account.
The time-extended negotiation period allows the visualization of all of the times involved
in the manufacture of each part, including times that are not considered in systems of this
nature, such as the negotiation times among agents. Extensive experiments have been
conducted with the system, and the performance measures, including routings, makespan
and flow time, are compared with those obtained by a search technique based on a
co-evolutionary algorithm.
1 Introduction
A major obstacle to the integration of process planning and production scheduling in dynamic manufacturing environments is the lack of flexibility to analyze alternative resources when allocating jobs on the shop floor. According to Shen et al. [4], the integration problem becomes even more complex when both process planning and manufacturing scheduling are to be done at the same time. This paper will
1 Universidade Federal de Santa Catarina, Departamento de Engenharia Mecânica, GRIMA/GRUCON, Caixa Postal 476, CEP 88040-900, Florianópolis, SC, Brazil; Tel: +55 (48) 3721-9387 extension 212; Fax: +55 (48) 3721-7615; Email: izabel.zattar@gmail.com; http://www.grima.ufsc.br
2 Related Research
The problem of integrating process planning and production scheduling has been under investigation in recent years, and many different approaches have been applied to it. More recently, many authors have suggested multiagent systems (MAS) as an adequate approach for solving this problem.
In spite of the advances in this area, it is observed that many works that use the multiagent approach place greater emphasis on production scheduling, both predictive and reactive, while process planning is treated in a static way, i.e. it is determined before the part is released into production. It is also noticed that, although some authors use dynamic process planning in their approaches, there is a recurring problem in research on the integration of process planning and production scheduling, which is how the activities that compose the scheduling of an order or a part are considered. In [3], the features that compose each of the parts are treated independently of each other, i.e. a single feature is negotiated at a time between the part and the resources. This kind of treatment may lead to an increase in the setup and queue times, resulting in longer makespan and flow times. This increase in the manufacturing times results from the many changes of the machines on which the parts are manufactured; if, instead, the features or operations are grouped based on the setup, the manufacturing and transport times may improve. These possible gains are investigated in this paper.
The proposal time is the sum of all the times considered by a resource agent for
the elaboration of a proposal in response to a request made by a part agent. This
proposal time indicates the time predicted to start manufacturing the job on the
resource. Equations (1) and (2) represent the times that compose the proposal time:

(1)  Proposal time = Tqm + [ Σj=1..gi (pi,m + si,m) ] / gi

(2)  Tqm = Σi=1..n (qi + ci + wi)

Substituting (2) into (1) gives the expanded form:

Proposal time = Σi=1..n (qi + ci + wi) + [ Σj=1..gi (pi,m + si,m) ] / gi

where:
- Tqm: queue time to carry out all manufacturable jobs on a resource. These jobs are already in the resource processing queue, but have not yet started manufacturing at the instant of negotiation. If there are no jobs in the resource processing queue at the negotiation instant, then Tqm = 0;
- qi: resource queue time of job i;
- ci: contract time. These orders have already contracted a resource, but have not yet arrived at the resource processing queue (for instance, they are still being manufactured at a previous resource);
- wi: waiting time, i.e. the interval between the sending of a proposal for job execution by a resource agent and the acceptance of the proposal by the part agent that is negotiating with the resource. If no jobs are in the waiting interval at the negotiation instant, then wi = 0.
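The composition of the proposal time described above can be sketched as follows; the function and variable names are illustrative assumptions, not taken from the authors' implementation.

```python
# Illustrative sketch of the proposal-time computation (Equations 1 and 2).
# All names are assumptions for illustration, not the authors' code.

def queue_time(jobs):
    """Tq_m: sum of queue (q), contract (c) and waiting (w) times of the
    jobs already negotiated with this resource."""
    return sum(q + c + w for (q, c, w) in jobs)

def proposal_time(jobs, operations):
    """Tq_m plus the mean processing + setup time over the g_i grouped
    operations requested by the part agent."""
    g = len(operations)
    return queue_time(jobs) + sum(p + s for (p, s) in operations) / g

# Example: two jobs ahead of the request, which groups two operations.
jobs = [(3.0, 1.0, 0.5), (2.0, 0.0, 0.0)]   # (q_i, c_i, w_i)
ops = [(4.0, 1.0), (6.0, 1.0)]              # (p_{i,m}, s_{i,m})
print(proposal_time(jobs, ops))             # 12.5
```

Averaging over the grouped operations reflects the idea that the request is negotiated as one block rather than one feature at a time.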
For a better understanding of the contract time (ci), waiting time (wi), and resource queue time (qi), which compose the proposal time defined above, Figure 1 presents an example illustrating the exchange of messages between three part agents i1, i2 and i3, and three resource agents R1, R2 and Rn. Each resource agent has an internal counter of the total queue time, Tq1, Tq2 and Tqn, responsible for accumulating the total queue time (Tqm) that will later be used in the proposal.
Figure 1. Exchange of messages between part agents i1, i2 and i3 and resource agents R1, R2 and Rn, showing how the waiting (wi), contract (ci) and resource queue (qi) time intervals accumulate in the counters Tq1, Tq2 and Tqn as proposals are made, accepted, rejected or cancelled.
The negotiation starts with part agents i1, i2 and i3, which send a call for proposals (CFP), requesting a proposal time from resource agents R1, R2 and Rn. As soon as resource R1 sends a proposal to i1, the counting of the waiting time (wi) starts. This time is accumulated in column Tq1 until resource R1 receives an "accept" or "reject" from i1 referring to its proposal. In the case of a positive response ("accept") by i1, the time will no longer be counted as waiting time (wi); instead it will be considered contract time (c1), remaining that way until job i1 is moved to the resource processing queue. At that instant job i1 sends resource R1 a message informing it that it has arrived at the resource queue. When resource R1 receives this message, it considers the time related to i1 as a portion of the resource queue time (q1). If the resource proposal is rejected, as occurs in the negotiation between i2 and R1, where R1 receives a "reject" of the proposal made to job i2, the waiting time (w2) is not considered part of the resource negotiation time, and it is discarded.
Figure: continuation of the message exchange, annotated with steps (a) to (e) explained below, in which a rejected proposal is stored by the resource agent and later re-issued as a new proposal when the resource's queue changes.
(a) When sending a “reject” proposal, the part agent also needs to send the best
proposal that it received until that moment of negotiation, i.e., the proposal that
motivated its refusal;
(b) This proposal, which includes the information about the part that originated it,
is stored temporarily by the resource agent;
(c) If an alteration occurs in the queue of jobs of the resource agent, such as an
order cancellation, this resource will calculate new queue times, updating the
proposal time of all of the jobs that are in its physical queue or negotiation queue;
(d) After calculating all the new times, the resource agent will analyze the
proposals stored in item (b), comparing them with its new availability. In case the
new proposal is better than the stored proposal, the new proposal is sent again to
the part agent;
(e) Finally, the part agent analyzes this new proposal, deciding whether or not to accept it.
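Steps (a) to (e) above can be sketched as simple state held by the resource agent; the class and attribute names here are hypothetical, not the paper's implementation.

```python
# Illustrative sketch of steps (a)-(e): a resource agent stores the best
# competing proposal it lost to, and re-offers when its queue improves.
# All names are assumptions for illustration.

class ResourceAgent:
    def __init__(self):
        self.stored = {}          # part_id -> best competing proposal time (b)

    def on_reject(self, part_id, best_competing_time):
        # (a)/(b): with its "reject", the part sends the proposal that beat us,
        # which we store temporarily
        self.stored[part_id] = best_competing_time

    def on_queue_change(self, new_proposal_times):
        # (c): queue times are recomputed after e.g. an order cancellation
        offers = []
        for part_id, new_time in new_proposal_times.items():
            best = self.stored.get(part_id)
            # (d): re-offer only if we now beat the stored competing proposal
            if best is not None and new_time < best:
                offers.append((part_id, new_time))   # (e): part re-evaluates
        return offers

r = ResourceAgent()
r.on_reject("i2", best_competing_time=10.0)
print(r.on_queue_change({"i2": 8.0}))   # [('i2', 8.0)]
```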
Problem  Number of jobs  Job numbers
1        6    1, 2, 3, 10, 11, 12
2        6    4, 5, 6, 13, 14, 15
3        6    7, 8, 9, 16, 17, 18
4        6    1, 4, 7, 10, 13, 16
5        6    2, 5, 8, 11, 14, 17
6        6    3, 6, 9, 12, 15, 18
7        6    1, 4, 8, 12, 15, 17
8        6    2, 6, 7, 10, 14, 18
9        6    3, 5, 9, 11, 13, 16
10       9    1, 2, 3, 5, 6, 10, 11, 12, 15
11       9    4, 7, 8, 9, 13, 14, 16, 17, 18
12       9    1, 4, 5, 7, 8, 10, 13, 14, 16
13       9    2, 3, 6, 9, 11, 12, 15, 17, 18
14       9    1, 2, 4, 7, 8, 12, 15, 17, 18
15       9    3, 5, 6, 9, 10, 11, 13, 14, 16
16       12   1, 2, 3, 4, 5, 6, 10, 11, 12, 13, 14, 15
17       12   4, 5, 6, 7, 8, 9, 13, 14, 15, 16, 17, 18
18       12   1, 2, 4, 5, 7, 8, 10, 11, 13, 14, 16, 17
19       12   2, 3, 5, 6, 8, 9, 11, 12, 14, 15, 17, 18
20       12   1, 2, 4, 6, 7, 8, 10, 12, 14, 15, 17, 18
21       12   2, 3, 5, 6, 7, 9, 10, 11, 13, 14, 16, 18
22       15   2, 3, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 16, 17, 18
23       15   1, 4, 5, 6, 7, 8, 9, 11, 12, 13, 14, 15, 16, 17, 18
24       18   1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18
(5)  Improved rate = [(mean of SEA approach − mean of proposed protocol) / mean of SEA approach] × 100%.
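As a quick illustration of Equation (5); the function name and example figures are hypothetical, not values from the paper.

```python
def improved_rate(sea_mean, protocol_mean):
    """Equation (5): percentage improvement of the proposed protocol
    over the SEA approach, from the two mean performance values."""
    return (sea_mean - protocol_mean) / sea_mean * 100.0

# e.g. a hypothetical SEA mean makespan of 200 vs 180 for the protocol
print(improved_rate(200.0, 180.0))  # 10.0
```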
Table 2 shows the experimental results for mean makespan and flow time, respectively. Since the test-bed proposed by Kim [1] does not consider the setup times of the operations, it is necessary to carry out two performance comparisons in order to better characterize the nature of this investigation. Column Setup_0 in Table 2 refers to the same conditions presented by Kim, i.e. the setup time is not considered. In column Setup_10_10, on the other hand, the total operation time used by Kim was divided into three parts: 80% processing time on the machine (pi,m); 10% machine setup (spi,m); 10% fixturing setup (sfj,m).
Table 2 reveals that, for several test-bed problems, the proposed operation-based time-extended negotiation protocol provides the best makespan performance among the compared algorithms. The global average obtained is also better than that generated by the SEA algorithm used in the comparison. The cases in which the results for the makespan are worse than those attained by the SEA
5. Conclusion
The system proposed in this paper uses a hierarchical multiagent model that allows dynamic process planning while reducing makespan and flow time through the reduction of the setup time between jobs. In order to reach this objective, an operation-based time-extended negotiation protocol was used.
One of the most significant contributions to the efficacy of the proposed operation-based time-extended negotiation protocol is the use of flexible process plans that can be verified step by step during the sequencing and routing of jobs, which allows the resources to group the operations that they are capable of manufacturing, reducing the machine setup time. This grouping allows the reduction of both the makespan and the flow time, due to the reduction in the number of machine changes the jobs undergo during manufacture. This shows that simplifying the scheduling problem in a job shop layout by folding setup times into the total processing time of the machines may result in an incorrect analysis of the problem.
As future work, an analysis of the influence of the setup times on the reduction of makespan and flow time will be carried out, based on a gradual increase of the contribution of the machine setup time. A mechanism will also be created for analyzing the effects caused by disturbances on the flow time. At first, two types of disturbance will be analyzed: machine failures, and the cancelling of orders that have already been released for manufacture. Solutions that minimize the effects of these disturbances will also be investigated through the dynamic re-scheduling of the orders.
References

[1] Kim YK, Park K, Ko J. A symbiotic evolutionary algorithm for the integration of process planning and job shop scheduling, Computers & Operations Research 2003; 30:1151-1171.
[2] Kim Y. A set of data for the integration of process planning and job shop scheduling,
2003. Available at: <http://syslab.chonnam.ac.kr/links/data-pp&s.doc>. Access on:
Sept. 13th 2006.
[3] Usher J. Negotiation-based routing in job shops via collaborative agents, Journal of
Intelligent Manufacturing 2003; 14:485-558.
[4] Shen W, Wang L, Hao Q. Agent-Based Distributed Manufacturing Process Planning
and Scheduling: A State-of-Art Survey, IEEE Transactions on Systems, Man and
Cybernetics – Part C: Applications and Reviews 2006; 36: 563-571.
Using Differing Classification Methodologies to Identify a Full Complement of Potential Changeover Improvement Opportunities
Geraint Owen, Steve Culley, Michael Reik, Richard McIntosh1 and Tony Mileham

Department of Mechanical Engineering, University of Bath, UK.
1 Introduction
From the end of the 1970s and into the 1980s and beyond, western volume manufacturers were confronted with an ever-worsening competitive position, particularly relative to leading Japanese manufacturers [11]. With the emergence of new manufacturing paradigms such as lean and mass customisation, awareness has grown that competitive criteria extend considerably beyond the high product quality and low unit cost which traditionally dominated in mass production [17]. A leading changeover capability greatly assists manufacturers in being more responsive and is widely identified as being at the heart of modern small-lot, multi-product manufacturing practice [14].
1 Research Fellow, Department of Mechanical Engineering, University of Bath, Claverton Down, Bath, Somerset, BA2 7AY, UK; Tel: +44 (0) 1225 386131; Fax: +44 (0) 1225 386928; Email: ensrim@bath.ac.uk; http://www.bath.ac.uk
become involved in the research, based throughout the UK and mainland Europe.
Many highly diverse industrial situations have been investigated.
The difficulty of understanding changeover in all its complexity and of
understanding all the myriad potential improvement opportunities which can be
available is reflected in the number of classifications University researchers alone
have adopted. Many of these classifications have been modeled from others’ work,
sometimes being adopted unchanged. Conversely some novel working
classifications have also been developed.
Recognizing that an overall changeover is typically interwoven with aspects of
technical, personnel and behavioral issues (both in the immediate confines of the
tasks being undertaken and beyond) the paper now investigates some selected key
global classifications – which are later collated together into a new improvement
framework.
For many years University researchers have drawn a distinction between design-
led and organization-led improvement [7]. Any chosen improvement lies on a
spectrum between being 100% design based and 100% organization based (where
only what people do is altered). Whereas retrospective improvement can be
undertaken with either a design or an organizational bias, OEMs (Original
Equipment Manufacturers) can only realistically pursue better equipment design.
It is for such personnel in particular that the University team has been developing
its DfC tool.
2.4 Changing when tasks occur and changing what tasks occur
Figure: changeover activities comprise change elements spanning people, practice, products and process.
Whether seen as a series of tasks, activities, events or actions (or any other similar
terms), all changeovers take a time to complete and represent a level of
complexity. Reducing complexity by reducing the difficulty of individual tasks (or
activities or events or actions) and/or their number should result in an easier and
Importantly, despite the scope of the above options, it needs to be recognized that
reducing complexity per se (including doing so by reducing variety) is not the only
potential way that quicker changeovers may ensue.
An alternative strategy in many industrial circumstances is to change when
tasks occur [8] – for example externalizing them, prior to production being halted
[12]. The time at which tasks commence is changed, rather than the tasks
themselves intrinsically being changed.
Changing when tasks take place has been argued elsewhere [8] to be strongly influenced by the ability to match tasks to the resources needed to complete them, most notably labor (which is the reason externalizing tasks is often particularly attractive). In other words, task re-sequencing is potentially possible if a resource is
not being used at any given time. Normally the task start time would be altered to
minimize the period when production is interrupted by the changeover (‘internal
time’ in SMED nomenclature).
Conversely, better matching of resources to the tasks which need to be
completed can also be beneficial, for example adding more skilled labor or
providing pre-setting jigs [8]. Once more the effect is potentially to allow change
to the time when tasks can be undertaken, hence again reducing production losses.
Figure: the improvement framework maps options onto an organisation/design spectrum covering products and process, within the realm of all available improvement techniques; the final options are 5, reduce variability (standardize work practices), and 6, reduce variability (standardize physical entities).
Similarly, amending when tasks occur – for which once again any relevant
techniques can be used, with either an organization or design emphasis – involves
better matching tasks to resources and/or better matching resources to tasks [8]. A
design-led opportunity might be to provide specialist jigs and an organization-led
opportunity might be to provide additional manpower. In either case the aim is to optimize the task sequence and thereby reduce production downtime. A particularly potent technique is that of externalizing tasks.
The final option of variability reduction equally guides improvement by use of
any potentially relevant technique, once again with either an organization or a
design bias. Variability of work practices relates to what people do. Variability of
entities relates to physical hardware entities such as change parts, physical product
entities such as surface finish and operational entities such as temperature.
5 Conclusions
Flexibility and responsiveness are watchwords of modern manufacturing, driven by
a desire to reduce non-value-added activity and better respond to customer
demands. Rapid changeover between products is paramount if genuine
manufacturing flexibility and efficiency are to be achieved.
To date the quest for better changeover performance has been substantially
guided by the pioneering work of the late Japanese engineer-consultant, Dr Shigeo
Shingo. However it is becoming clear that his SMED methodology does not
readily embrace all potential improvement opportunities, not least those which are
6 References
[1] Boothroyd G, Dewhurst P, Knight W, 1994, Product Design for Manufacture and
Assembly, Marcel Dekker, New York.
[2] Carlson JG, Yao AC, Girouard WF, 1994, The role of master kits in assembly
operations, International Journal of Production Economics, 35, 253-258.
[3] Chan D, 2006, Product design and changeover, MSc. Thesis, University of Bath.
[4] Claunch JW, 1996, Set-Up-Time Reduction, McGraw-Hill.
[5] Eldridge C, Mileham AR, McIntosh RI, Culley SJ, Owen GW, Newnes LB, (ed. Pinto-
Ferreira JJ), 2002, Rapid changeovers - the run up problem, Proc. 18th ISPE/IFAC
Int.Conf on CAD/CAM, Robotics and Factories of the Future, Porto, INESC Porto
(Manufacturing Systems Dept), July 2002, 161-167.
[6] McIntosh RI, Culley SJ, Mileham AR, Owen GW, 2000, A critical evaluation of
Shingo’s SMED (Single Minute Exchange of Die) Methodology, International Journal
of Production Research, 38, 2377-2395.
[7] McIntosh RI, Culley SJ, Mileham AR, Owen GW, 2001, Improving Changeover
Performance, Butterworth-Heinemann, Oxford, UK.
[8] McIntosh RI, Owen GW, Culley SJ, Mileham AR, 2007, Changeover Improvement:
Reinterpreting Shingo's ‘SMED’ Methodology, IEEE Transactions on Engineering
Management, Vol. 54, No. 1, pp 98-111.
[9] Reik MP, McIntosh RI, Culley SJ, Mileham AR, Owen GW, 2006a, A formal design
for changeover methodology Part 1: theory and background, J. Proc. IMechE., Part B:
J. of Engineering Manufacture, Vol. 220, No 8, pp 1225-1235.
[10] Reik MP, McIntosh RI, Culley SJ, Mileham AR, Owen GW, 2006b, A formal design
for changeover methodology Part 2: methodology and case study, J. Proc. IMechE.,
Part B: J. of Engineering Manufacture, Vol. 220, No 8, pp 1237-1247.
[11] Shimokawa K, 1994, The Japanese automobile industry, Athlone Press, London.
[12] Shingo S, 1985, A Revolution in Manufacturing: The SMED System, Productivity
Press, Portland, USA.
[13] Smith D. A., 2004, Quick die change, SME.
[14] Spencer MS and Guide VD, 1995, An exploration of the components of JIT: case study
and survey results, Int. J. of Operations and Production Management, 15, pp 72-83.
[15] Tickle S and Johnston B, 2001, People power: the secret weapon to high speed
changeover, Conf. rapid changeover of food production lines, 13 Dec 2001, IMechE.
[16] Tseng MM and Piller FT, 2003, The Customer Centric Enterprise – Advances in Mass
Customization and Personalization, Springer-Verlag, Berlin.
[17] Womack JP, Jones DT, Roos D, 1990, The machine that changed the world, Rawson
Associates, New York, USA.
Museum Visitor Routing Problem with the Balancing of
Concurrent Visitors
Abstract. In the museum visitor routing problem, each visitor has some exhibits of interest, and the visiting route must go through the locations of all the exhibits that he/she wants to visit. Routes need to be scheduled based on certain criteria to avoid congestion and/or prolonged touring time. In this study, museum visitor routing problems (MVRPs) are formulated by mixed integer programming and can be solved as open shop scheduling (OSS) problems: visitors can be viewed as jobs, and exhibits are like machines. Each visitor views an exhibit for a certain amount of time, which is analogous to the processing time required for each job at a particular machine. The traveling distance from one exhibit to another can be modeled as the setup time at a machine. Such setup times are clearly sequence-dependent, which is not considered in OSS problems; this kind of routing problem is therefore an extension of OSS problems. Since problems of this kind are NP-hard, a simulated annealing approach is proposed to solve MVRPs. The computational results show that the proposed approach solves the MVRPs within a reasonable amount of computational time.
Keywords. Museum Visitor Routing Problem, Open Shop Scheduling, Vehicle Routing, Simulated Annealing, Sequence-dependent Setup Times
1 Introduction
Museums provide people a physical environment for leisure sight-seeing and
knowledge acquisition. By enhancing museum collection and services, countries
all over the world are able to use museums as a core facility to facilitate the
elevation of culture, art and tourism industry. Due to the advent of IT and
networking technologies, museum services can be strengthened by providing
context-aware guidance systems for visitors with different background, interests
and/or time constraints.
1 Professor, Department of Industrial Management, National Taiwan University of Science and Technology, No. 43, Keelung Road, Section 4, Taipei, Taiwan; Tel: +886 (2) 27376327; Fax: +886 (2) 27376344; Email: sychou@im.ntust.edu.tw.
S.-Y. Chou and S.-W. Lin
In order to establish a prototype tour guide system for the National Palace
Museum of Taiwan, a context-aware framework where visitors in different
contexts can obtain information customized to their needs is established. Visitors
provide their personal data, special needs and constraints to the guidance system.
The system in turn extracts suitable information from the structured museum
contents for the visitors to utilize during the visit. Such context data are classified
by demographic data, preferences and interests such as ages, genders, education,
professions, languages, media type preferred, time available, special subject of
interest, specific assignment, device used, and the location.
This research implements a route guidance function to automatically provide
real-time information to visitors. Routes need to be scheduled based on certain
criteria to avoid congestion and/or prolonged touring time. In a museum visitor
routing problem, each visitor has some locations to visit; the visiting sequence is not restricted as long as the visitor goes through all the stations that he/she wants to visit. Visitors are like jobs, and display locations are like machines. Each visitor stays at a display for a certain amount of time, which is analogous to the processing time required for each job at a particular machine. This routing problem can therefore be solved as an open shop scheduling (OSS) problem. In an OSS problem, the jobs follow no definite sequence of operations; as long as all the operations needed for a job are done, the processing of the job is considered done. The open shop formulation allows much flexibility in scheduling, but it is more difficult to develop rules that give an optimum sequence for the problem.
The objective of the museum visitor routing problem can therefore be modeled
as (1) minimizing the makespan, that is minimizing the visiting completion time of
the last visitor; (2) minimizing the variation of flow time, that is, each visitor is
required to spend as close amount of time in visiting as possible; and (3)
minimizing the maximum lateness, that is, if an expected visiting time is pre-
determined, the visiting completion time of each visitor is scheduled as close to the
expected visiting time as possible. In this study, the objective is to minimize the
makespan.
Yet, the visitor's travelling distance between two display locations depends on the physical distance between them, meaning that there exist sequence-dependent setup times in the OSS. In addition, each display has a maximum space in which visitors can stay and browse. The museum visitor routing problem is therefore more complicated than the OSS. This study applies the simulated annealing (SA) approach to the museum visitor routing problem to find a (near-)globally optimal solution with respect to the objective function of the problem.
This research takes into account the potential congestion that may occur when a great number of concurrent visitors share the same set of interests in the exhibits. To provide smoother visiting experiences, museums can adopt the concept of concurrent engineering to avoid congestion by recommending different routes to different visitors, such that the total visiting time of all visitors is minimized. The remainder of this paper is organized as follows. The problem formulation and related literature are presented in Section 2. Section 3 elaborates the proposed approach. Experiments are reported in Section 4. The conclusion is given in the last section.
comes first, then we need the constraint cik − cjk ≥ tik. An indicator variable xijk can be used: xijk = 1 if visitor i precedes visitor j on exhibition k; otherwise, xijk = 0.
The MVRPs with a makespan objective can be formulated as follows:

min  max1≤i≤n {ci0}

The objective is to minimize the makespan. Constraints (1) and (2) ensure that each visitor visits only one exhibition at a time. Constraints (3) and (4) ensure that each exhibition is visited by only one visitor at a time. Constraint (5) ensures that the traveling distance from the entrance to any exhibition is included in the model. Constraint (6) ensures that the traveling distance from any exhibition to the exit is considered in the model.
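The disjunctive structure of the precedence constraints can be illustrated with a small feasibility check; the encoding and all names below are assumptions for illustration, not the paper's formulation.

```python
# Hypothetical check of the disjunctive constraints for one exhibit k:
# when visitor i precedes visitor j (x[i,j,k] == 1), visitor j's completion
# time must exceed visitor i's by at least j's viewing time on k.

def no_overlap_on_exhibit(c, t, x, visitors, k):
    """c: completion times {(visitor, exhibit): time}; t: viewing times;
    x: precedence indicators {(i, j, exhibit): 0 or 1}."""
    for i in visitors:
        for j in visitors:
            if i == j:
                continue
            if x[(i, j, k)] == 1:                      # i precedes j on k
                if c[(j, k)] - c[(i, k)] < t[(j, k)]:  # j started too early
                    return False
    return True

# Two visitors on exhibit 0: visitor 1 views for 5, visitor 2 for 3.
c = {(1, 0): 5, (2, 0): 8}
t = {(1, 0): 5, (2, 0): 3}
x = {(1, 2, 0): 1, (2, 1, 0): 0}
print(no_overlap_on_exhibit(c, t, x, [1, 2], 0))  # True
```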
As far as heuristics are concerned, only a few heuristic procedures for the general m-machine open shop problem have been published in the literature. Röck and Schmidt [16] introduced a machine aggregation algorithm based on the result that the two-machine cases are polynomially solvable. Gueret and Prins [10] proposed simple list scheduling heuristics based on priority dispatching rules. The shifting bottleneck procedure, originally designed for the job shop problem, has been adapted by Ramudhin and Marier [15] to the open shop problem. Bräsel et al. [4] proposed efficient constructive insertion algorithms based on an analysis of the structure of feasible combinations of job and machine orders.
Recently, meta-heuristic approaches have been developed to solve the open shop problem, including tabu search (TS) [2][12], genetic algorithms (GA) [7] and simulated annealing [13]. Liaw [14] developed a powerful hybrid genetic algorithm (HGA) that incorporates TS as a local improvement procedure into a basic GA. Furthermore, Blum [3] proposed a hybrid of ant colony optimization with beam search for open shop scheduling, obtaining better solutions on existing benchmark instances. Meta-heuristic approaches can obtain (near-)optimal solutions at the expense of large computing resources. To the best of our knowledge, there is little literature dealing with museum visitor routing problems, which are an extension of OSS with sequence-dependent setup times.
between obj(X) and obj(Y); that is, Δ = obj(Y) − obj(X). The probability of replacing the current solution X with the next solution Y, given that Δ > 0, is exp(−Δ/T); this is accomplished by generating a random number r ∈ [0, 1] and replacing X with Y if r < exp(−Δ/T). Meanwhile, if Δ ≤ 0, the probability of replacing X with Y is 1. If the solution X is replaced by Y, the search continues from Y. If T falls below TF, the algorithm is terminated. Xbest records the best solution found as the algorithm progresses.
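The acceptance rule and cooling schedule described above can be sketched as follows; obj and neighbour stand in for the MVRP-specific objective and move operator, and the defaults echo the parameter values reported in Section 4. This is an illustrative sketch, not the authors' implementation.

```python
import math
import random

# Minimal sketch of simulated annealing with the acceptance rule above:
# always accept if delta <= 0, otherwise accept with probability exp(-delta/T).

def anneal(x0, obj, neighbour, t0=100.0, tf=1.0, alpha=0.965, iters=500):
    x = x0
    best = x0
    t = t0
    while t > tf:                      # terminate once T drops below TF
        for _ in range(iters):         # moves per temperature level (Iiter)
            y = neighbour(x)
            delta = obj(y) - obj(x)    # worse solutions have delta > 0
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y                  # replace X with Y
            if obj(x) < obj(best):
                best = x               # Xbest: incumbent best solution
        t *= alpha                     # geometric cooling schedule
    return best

# Toy usage: minimize |v| over the integers with a +/-1 random walk.
random.seed(42)
result = anneal(10, abs, lambda v: v + random.choice([-1, 1]), iters=100)
print(abs(result) <= 10)   # True
```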
4 Computational results
The problem set from Taillard [17] is used to verify the developed approach. This set consists of six different problem types and 10 instances of each problem type, for a total of 60 different problems; however, only 3 instances of each problem type are tested. These problems are all square problems, i.e. n = m, and range from small ones with 16 visits to problems with 400 visits (Taillard observed that open shop problems with n = m are harder to solve than those with n >> m). Since the problem set from Taillard was originally used in open shop scheduling, we add sequence-dependent setup times to the problems. The setup times are generated in a way similar to that of Taillard's problem set: they are uniformly distributed over the interval [1, 5], with a mean of 3. The proposed SA approach is implemented in the C language and run on a Pentium-IV 2.4 GHz PC with 512 MB of memory. After running a few problems with several combinations of parameters, the parameter values for SA were set to Iiter = m*n*500, T0 = 100, TF = 1 and α = 0.965, where n is the number of visitors and m is the number of exhibitions to be visited. Each problem is solved 10 times. The worst, average, and best objective function values among the 10 runs are shown in Table 5, together with the lower bounds of the objective function values. As can be seen in Table 5, for the small-scale problems (4×4 and 5×5), the solutions obtained by the proposed SA approach are the same as the optimal solutions obtained by CPLEX. For the larger-scale problems, the range of the solutions obtained across the runs for each problem is small, which means the proposed SA approach is stable. Thus, the proposed SA approach is suitable for the MVRPs.
5. Conclusions
In this study, museum visitor routing problems (MVRPs) are formulated as a mixed integer programming problem. While the problem can be formulated in this way, solving it using exact mathematical methods is not feasible when the problem scale is large. A simulated annealing approach is therefore proposed to solve the MVRPs. The effectiveness of the proposed SA approach is demonstrated in experiments with encouraging results. The SA approach presented in this paper provides an effective method to generate very good solutions to this problem with computer technology that is well within the reach of museums.
References
[1] Adiri I, Aizikowitz N, Open shop scheduling problems with dominated machines. Nav
Res Log 1989;36:273-281.
[2] Alcaide D, Sicilia J, Vigo D, Heuristic approaches for the minimum makespan open
shop problem. J Spanish Oper Res Soc 1997;5:283-296.
[3] Blum C, Beam-ACO – hybridizing ant colony optimization with beam search: an application to open shop scheduling. Comput Oper Res 2005;32:1565-1591.
[4] Bräsel H, Tautenhahn T, Werner F, Constructive heuristic algorithms for the open shop
problem. Computing 1993;51:95-110.
[5] Brucker P, Hurink J, Jurish B, Wostmann B, A branch and bound algorithm for the
open-shop problem. Discrete Appl Math 1997;76:43-59.
[6] Dorndorf U, Pesch E, Phan-Huy T, Solving the open shop scheduling problem. J
Scheduling 2001;4:157-174.
[7] Fang HL, Ross P, Corne D, A promising hybrid GA/heuristic approach for open-shop
scheduling problems. In: Cohn A (eds), Proceedings of ECAI 94, 11th European
Conference on Artificial Intelligence, 1994; 590-594.
[8] Fiala T, An algorithm for the open-shop problem. Math Oper Res 1983;8:100-109.
[9] Gonzalez T, Sahni S, Open shop scheduling to minimize finish time. J ACM
1976;23:665-679.
[10] Gueret C, Prins C, Classical and new heuristics for the open-shop problem: a
computational evaluation. Eur J Oper Res 1998;107:306-314.
[11] Pinedo M, Scheduling: Theory, Algorithms, and Systems, Prentice-Hall, Englewood
Cliffs, NJ, 1995.
[12] Liaw C-F, A tabu search algorithm for the open shop scheduling problem. Comput
Oper Res 1999;26:109-126.
Museum Visitor Routing Problem with the Balancing of Concurrent Visitors 353
[13] Liaw C-F, Applying simulated annealing to the open shop scheduling problem. IIE
Trans 1999;31:457-465.
[14] Liaw C-F, A hybrid genetic algorithm for the open shop scheduling problem. Eur J
Oper Res 2000;24:28-42.
[15] Ramudhin A, Marier P, The generalized shifting bottleneck procedure. Eur J Oper Res
1996;93:34-48.
[16] Röck H, Schmidt G, Machine aggregation heuristics in shop-scheduling. Meth Oper
Res 1983;45:303-314.
[17] Taillard E, Benchmarks for basic scheduling problems. Eur J Oper Res 1993;64:278-
285.
[18] Tanaev VS, Sotskov YN, Strusevich VA, Scheduling Theory: Multi-Stage Systems,
Kluwer Academic Publishers, Dordrecht, 1994.
Improving Environmental Performance of Products by
Integrating Ecodesign Methods and Tools into a
Reference Model for New Product Development
1 Introduction
The increasing consumption of products is at the origin of most of the pollution and
resource depletion caused by our society [10]. The environmental impacts observed
throughout a product's lifecycle are, to a large extent, determined during its
development phase [9]. Hence, taking environmental aspects into consideration
during new product development (NPD) can play an essential role in reducing the
environmental impacts related to the product lifecycle. Ecodesign can be
1 University of São Paulo, São Carlos School of Engineering, Department of Production
Engineering, Nucleus of Advanced Manufacturing, Av. Trabalhador São-Carlense, 400 –
Centro; CEP: 13566-590 - São Carlos/SP – Brazil; Tel: +55 (16) 33739433; Fax: +55
(16) 33738235; Email: agf@sc.usp.br; http://www.numa.org.br
356 A. Guelere Filho, H. Rozenfeld, D. Pigosso and A. Ometto
2 Literature Review
members, who in turn should take into account the dynamics of their activities and
their ecodesign maturity level. This selection may be assisted by a specialist.
Table 2. Integration of ecodesign methods and tools into the NPD reference model

Strategic Product Planning: Porter's guidelines; the seven steps for managing an
enterprise according to sustainable development principles; Baumann's organizing tools
Informational Design: The Ten Golden Rules; QFDE (Phase I); LCA
Conceptual Design: QFDE (Phases II and III); EEA; LCA
Detailed Design: QFDE (Phase IV); LCA
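The table's phase-to-tool mapping can be expressed as a simple lookup structure, for example as a selection aid in a workflow system. The phase and tool names below come from Table 2; the dictionary and helper function are purely an illustration, not part of the reference model itself.

```python
# Ecodesign methods/tools per NPD phase, as listed in Table 2.
ECODESIGN_TOOLS = {
    "Strategic Product Planning": [
        "Porter's guidelines",
        "Seven steps for managing an enterprise according to "
        "sustainable development principles",
        "Baumann's organizing tools",
    ],
    "Informational Design": ["The Ten Golden Rules", "QFDE (Phase I)", "LCA"],
    "Conceptual Design": ["QFDE (Phases II and III)", "EEA", "LCA"],
    "Detailed Design": ["QFDE (Phase IV)", "LCA"],
}

def tools_for_phase(phase: str) -> list:
    """Return the ecodesign tools recommended for an NPD phase
    (empty list for phases with no entry in Table 2)."""
    return ECODESIGN_TOOLS.get(phase, [])
```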
The use of the selected methods and tools depends mainly on the stage of the
product development process, i.e., on how detailed the available information is.
Time and cost may be reduced and a more environmentally friendly product may be
produced if ecodesign methods and tools are used early in the design process.
Since the success factors of ecodesign are, to a large extent, similar to those
of NPD [15], integrating the ecodesign concept as a whole will be easier for
companies with high NPD maturity levels.
4 Conclusion
NPD plays an important role in reducing the environmental impacts of a product's
lifecycle. The use of a reference model for NPD may contribute to the
standardization of practices, the use of a common language, and the repeatability
and quality of projects, thus increasing the probability of successful products
by structuring this business process. Despite the existence of many ecodesign
methods and tools, a systematic way to use them in NPD is lacking. Introducing
ecodesign methods and tools into designers' daily activities through a reference
model for NPD may bridge this gap. The task of selecting ecodesign methods and
tools and deciding at which NPD phase they should be used is something companies
have to perform internally. However, in order to ensure concrete results from
the implementation of ecodesign methods and tools, it is also necessary to
introduce the topic of sustainability into the company's core business as a
preliminary measure. The proposed integration is a set of NPD-oriented
structured activities that can successfully combine environmental and business
perspectives.
5 References
[1] Baumann H, Boons F, Bragd A. Mapping the green product development field. Journal
of Cleaner Production, 2002; 10; 409-425.
[2] Boks C. The soft side of Ecodesign. Journal of Cleaner Production, 2005; 1346-1356.
[3] Boothroyd P, Dewhurst W. Product Design and Manufacture for Assembly. Marcel
Dekker, 1994.
[4] Brezet H. Dynamics in ecodesign practices. Industry and Environment, 1997; 21-24.
[5] Byggeth S, Hochschorner E. Handling trade-offs in ecodesign tools for sustainable
product development and procurement. Journal of Cleaner Production 2006.
[6] Charter M.. Managing eco-design, In: Industry and Environment, 1997; 20(1-2); 29-31.
[7] Clark KB, Fujimoto T. Product development performance: strategy, organization and
management in the world auto industry. Harvard Business School Press, 2001.
[8] Goedkoop M, Schryver AD, Oele M. Introduction to LCA with SimaPro 7. PRé
Consultants, 2006.
[9] Graedel E, Allenby R. Industrial ecology. Prentice Hall, New Jersey, 1995.
[10] Green paper on integrated product policy, 2001. Available at: < http://eur-
lex.europa.eu/LexUriServ/site/en/com/2001/com2001_0068en01.pdf>. Accessed on:
Jan. 15th 2007.
[11] Handfield B, Melnyk A, Calontone J, Curkovic S. Integrating environmental concerns
into the design process. Institute of Electrical and Electronics Engineers Transactions
on Engineering Management, 2001; 48(2); 189-208.
[12] Hauschild M, Jeswiet J, Alting L. From Life cycle Assessment to sustainable
Production: Status and Perspectives. Annals of the CIRP, 2005; 54/2; 625–636.
[13] Hauschild M., Jeswiet J, Alting, L. Design for environment – do we get the focus
right? Annals of the CIRP, 2004; 53/1; 1-4.
[14] International Institute for Sustainable Development. Available at:
<http://www.iisd.org>. Accessed on: Jan 15th, 2007.
[15] Johansson G. Success factors for integration of ecodesign in product development.
Environmental Management and Health, 2002; 13(1); 98-107.
[16] Karlsson R, Luttropp C. EcoDesign: what’s happening? Journal of Cleaner Production,
2006; 14; 1291-1298.
[17] Lindahl M. E-FMEA - A new promising tool for efficient design for environment.
Kalmar University, Sweden.
[18] Lindahl M. Environmental Effect Analysis (EEA) – an approach to Design for
Environment. Licenciate Thesis. Royal Institute of Technology, 2000.
[19] Luttropp C, Lagerstedt J. EcoDesign and The Ten Golden Rules. Journal of Cleaner
Production 2006;1-13.
[20] Masui K, Sakao T, Inaba A. Quality function deployment for environment: QFDE (1st
report) – a methodology in early stage of DFE. 2001.
[21] PDMA Glossary for New Product Development. Available at:
<http://www.pdma.org/library/glossary.html>. Accessed on: Feb. 15th 2007.
[22] Porter ME, Kramer MR. Strategy and society. Harvard Business Review, 2006.
[23] Pugh S. Total design: integrated methods for successful product engineering. Addison
Wesley, 1990.
[24] Rozenfeld H, Eversheim, W. An architecture for management of explicit knowledge
applied to product development processes. CIRP Spain, 2002; 51(1); 413-416.
[25] Rozenfeld H., Amaral D, Forcellini F, Toledo J, Silva S, Alliprandini D, Scalice R.
Gestão de Desenvolvimento de Produtos: uma referência para a melhoria do processo.
São Paulo, 2006.
[26] Sakao T, Masui K, Kobayashi M, Aizawa S. Quality function deployment for
environment (2nd report) – verifying the applicability by two case studies. 2001.
[27] Valeri G, Rozenfeld H. Improving the flexibility of new product development (NPD)
through a new quality gate approach. Journal of Integrated Design and Process
Science, 2004; 8(3); 17-36.
[28] van Hemel C. Towards sustainable product development. Journal of Cleaner
Production, 1995; 3(1-2); 95-100.
Sustainable Packaging Design Model
Abstract. For many consumer products the packaging is as important as the product
itself: one does not exist without the other. The product development process, in
this case, is only complete when the packaging is also developed. This work
proposes an integrated Sustainable Packaging Design Model (SPkDM). In view of the
pressing need for concurrence between the PDP (Product Development Process) and
the PkDP (Packaging Development Process), their interdependence and the resulting
solutions, a model was established that also integrates environmental aspects
from their initial phases. Ecodesign strategies and tools must be incorporated
in each phase of the development process, together with an assessment of impacts
before moving to the next phase. These assessments also provide designers with
data that will enable them to refine the packaging and the product. The model
will equip companies' packaging designers, guaranteeing a process with better
ecological efficiency. Regular use of the model will bring reductions in time,
cost and the environmental impact of packaging.
1 Introduction
The packaging development process is an activity becoming more important
every day in the economic context. It has its coverage and direct relation with
practically all the productive sectors. The packaging industry has a structural role
in the capitalist society. Through packaging millions of people in the whole world
have access to all types of consumer products. For the consumer, the packaging
represents the symbol of the modern world, the consumerism, the practicality, the
1 Universidade Regional de Blumenau (FURB), Departamento de Engenharia
Química (DEQ), Rua São Paulo 3250, CEP 89030-00- Blumenau, SC, Brazil; Tel: +55 (47)
3221 6049; Fax: +55 (47) 3221 6000; Email: doris@furb.br; http://www.furb.br
364 DZ. Bucci, FA. Forcellini
convenience, the comfort, the ability to conserve food and the desire of ownership
[1].
Being itself a product, packaging generates environmental impacts throughout
its life cycle. Packaging development has become a greatly challenging task and
an enormous responsibility for the companies and professionals involved. In
practice, most companies run their product development process totally
independently from the PkDP. As a result, they face losses in competitiveness,
cost increases and longer lead times, as well as an unfavorable environmental
performance for both the product and the packaging.
Currently, companies have focused their concerns on the environmental
performance of their products. Both product and packaging development have
therefore become of great importance, since their environmental performance is
determined mainly during the PDP. The PDP is the key to developing
environmentally friendly products. Ecodesign, also known as Design for
Environment (DfE), is a proposal that incorporates environmental aspects into
the traditional product development process [5]. It comprises a systematic
examination, during the PDP, of the aspects related to environmental protection
and human health, performed throughout all phases of the product's life cycle.
It allows the development team to include environmental considerations from the
initial phases and throughout development, while also taking into account
traditional aspects such as function, production cost, aesthetics and
ergonomics. Charter and Tischner [5], however, also consider sustainability in
product development: for them, developing sustainable products is more than just
ecodesign, since there is also a concern with balancing economic, social and
environmental aspects in the PDP.
The objective of this article is to propose a Sustainable Packaging Design
Model (SPkDM), integrating the Product Development Process (PDP) and the
Packaging Development Process (PkDP), as well as environmental aspects, from the
initial phases of the process.
2 Packaging
Throughout its history, packaging has represented an important tool for the
development of trade and the growth of cities. It guarantees that the product
reaches the final consumer in safe condition and with its quality preserved.
Packaging can be defined as an industrial and marketing technique to contain,
protect, identify and facilitate the sale and distribution of consumer,
industrial and agricultural products [15]. Saghir [17] defines packaging as a
coordinated system of preparing goods for safe, efficient and effective
handling, transportation, distribution, storage, retailing, consumption and
recovery, reuse or disposal, combined with maximizing value to the consumer,
sales and, consequently, profit.
The models for the packaging development process available in the current
literature are limited and scarce, and they do not include environmental issues
from the initial phases of the design.
The existing processes are also related to more specific areas, for example
food product development, as in Fuller [7] and Brody and Lord [4]. Few authors
have published on the packaging development process itself; among them are
Griffin [8], Paine [12], Romano [13], DeMaria [6] and ten Klooster [18].
Bramklev's model [3], however, integrates product and packaging but does not
consider environmental issues; it is therefore incomplete for the demands of
eco-efficient products.
A Sustainable Packaging Design (SPkD) model was developed with the relevant need
for, and simultaneity of, the Product Development Process (PDP) and the
Packaging Development Process (PkDP) in mind. It also includes environmental
aspects from the initial phases. It is based upon the updated and generic model
of Rozenfeld et al. [16]. The model is integrated with the product, and
ecodesign strategies and tools are incorporated in each phase of the
development process. The model is shown in Figure 1.
The interdependency of the product and packaging processes is represented by
arrows, and each phase of the model is described in a generic form. The model
comprises the macro phases of Pre-Development, Development and Post-Development.
It includes the phases of Packaging Strategic Planning (macro phase of
pre-development), Packaging Planning, Concept Design, Detail Design, Proving
Functionality, Packaging Launch and Packaging Review (macro phase of
post-development).
Management areas, etc. There will be two teams, one for the product and one for
the packaging, working in parallel. The two teams will interact when necessary,
as established in the project timeline. For an environmentally friendly
packaging development, the packaging team will have to interact strongly with
the Logistics, Supply Chain and Environmental areas. In addition, there will be
exchanges of information with, and participation of, external members such as
packaging, raw material and packaging equipment suppliers.
In this part of the phase the project scope is also worked out in detail; it
is a common guideline document for both the PDP and the PkDP. Furthermore,
Stage 1 of this phase includes situational analysis, task description,
establishment of responsibilities and timeline preparation. Cost indicators,
necessary resources, critical success factors, the establishment of performance
indicators and the financial feasibility of the project cannot be ignored
either.
In this phase the design team searches for, creates and presents solutions for
the product-packaging system, guided by the target specifications of the
project with an environmental focus and considering environmental issues
throughout the life cycle. In looking for solutions, the team will have to pay
attention to marketing and logistics requirements and adopt DfE and DfX (Design
for X), where X may be any of a number of design attributes, in order to achieve
efficiency, better use of materials and energy, careful soil use and cleaner
production. Packaging alternatives or generated concepts must be analyzed and
combined with product alternatives and concepts, evaluating them together in
order to generate the concepts of the product-packaging system.
The concept that best fits the target specifications, consumer expectations,
costs, and technical and environmental criteria will have to be chosen.
In this phase various decisions are made, such as the materials to be used,
shapes and colours, all of which substantially influence the flow cycle of
materials and the production processes used. Concepts are developed to comply
with the product-packaging design specification and to detail the
product-packaging system before production or introduction for use. In this
phase most activities run simultaneously for both product and packaging. It is
essential to use DfX (Design for X) considerations, including DfE (Design for
Environment) ecodesign tools such as strategies, matrices, catalogues and
checklists, as well as LCA.
It is also very important that the design team works closely together in
order to specify the product-packaging system thoroughly, sharing information
and the characteristics or properties of both systems in order to assure the
design's success.
Several tests are necessary to determine the primary packaging; subsequently,
all levels of packaging are detailed.
The purpose of this phase is to assure that the company is able to produce the
volume established in the project scope, complying with the client's
requirements during the life cycle of the product-packaging system.
The evaluation and assaying of a pilot lot of the product-packaging system
will improve the detailing of the project, indicating necessary adjustments to
the process conditions and sometimes even a change of suppliers. An LCA
revision can also be performed in this phase.
3.6 Integration between the Product Launching Phase and the Packaging
Launching Phase
The launching phase covers the activities of the logistics chain in delivering
the product to the market, involving the sales and distribution processes,
customer service, technical assistance and the marketing campaign. It
encompasses the planning, production and packing of the product and its
delivery to the point of sale. For the product launch to happen as planned in
the project timeline, the various activities related to the manufacturing,
packing and distribution of the product will have to be properly coordinated to
assure that the product reaches the point of sale during the advertising period
and sales promotions. This phase also includes presenting information on the
characteristics and benefits of the product and its packaging, stimulating
customers to look for and buy the product. The sales catalogue must mention the
information and benefits of the environmental packaging; this will help to
attenuate the environmental impacts during the use and disposal phases of the
product and packaging. Environmental information and communication
(environmental labelling) can be propagated in different forms: at the point of
sale, on the packaging label, on the Web, through the company's customer
service and through other media channels.
The company can also supply collectors at the point of sale for the disposal
of the packaging after use, stimulating recycling.
3.7 Packaging Review and its Development Process and Integration with the
Phases of Follow-up and Discontinuing Products
3.7.1 Packaging Review and its Development Process and Integration with
Product and Process Follow-up Phase
It is extremely important that the company reviews the packaging and its
development process six months after launch [6]. This review must encompass
consumer satisfaction, product functionality, manufacturing records, packaging
and waste indicators, workers' health records, energy and water consumption,
and environmental impacts.
3.7.2 Packaging Review and its Development Process and Integration with the
Product Discontinuing Phase
An item of the company's product portfolio is discontinued when it no longer
shows benefits or importance. The final use or disposal of the packaging, which
has a life cycle different from the product's, starts with the use of the
product. In this case the packaging is discarded and then sent for recycling,
to landfills or to collection points; it may also be returned to the
manufacturer.
In reality, the correct destination of discarded packaging depends on local
authorities and the laws applicable in each country, as well as on people's
education and awareness. This creates an adequate structure of shared
responsibilities among all interested parties, leading to a gradual expansion
of the production and consumption of products and packaging with less
environmental impact.
4 Conclusion
The SPkD model presented complies with the current demands of competitiveness
and eco-efficiency, besides filling an important gap in the product development
literature.
This model will contribute to a greater integration of knowledge within
companies, as well as to partnership commitments among different companies. It
will equip designers and packaging developers with a complete SPkDM, including
the environmental variables in all phases and assuring a more consistent and
sustainable environmental efficiency. By integrating the PDP and the PkDP, and
by including the environmental variable from the beginning of the project,
there will certainly be gains in quality and reductions in cost, besides a
shorter development lead time and the great benefits that follow from reduced
environmental impacts.
5 References
[1] Bucci DZ. Avaliação de Embalagens de PHB (Poli (Ácido 3-Hidroxibutírico)) para
Alimentos de Produção, UFSC, Florianópolis, 2003.
[2] Back N. Metodologia de projeto de produtos industrial. Rio de Janeiro: Guanabara Dois,
1983.
[3] Bramklev C, Jönson G, Johnsson M. Towards an Integrated Design of Product and
Packaging. In: ICED 05, 2005, Melbourne. International Conference on Engineering
Design. Melbourne: ICED, 2005; 01-14.
[4] Brody AL. Development of Packaging for Food Products. In: Brody AL, Lord JB.
Development of New Food Products for Changing Market Place. CRC Press LLC:
Philadelphia, 1999; chap. 8.
[5] Charter, M; Tischner, U (Eds.). Sustainable solutions: developing products and services
for the future Sheffield, U.K : Greenleaf Pub, 2001;118-138
[6] DeMaria K. The packaging development process: a guide for engineers and project
managers. Lancaster: Technomic Publishing, 2000.
[7] Fuller GW. New food product development: from concept to marketplace. Boca Raton:
CRC, 1994.
[8] Griffin Jr RC. Materials and package testing. In: Principles of package development.
2. ed. Connecticut: Avi Publishing Company Inc, 1985; 130-167.
[9] Hundal, MS. Mechanical life cycle handbook: good environmental design and
manufacturing. New York: Marcel Dekker, 2002.
[10] Moura, RA and Banzato, M. Embalagem, unitização e conteinerização. 4. ed. São
Paulo : IMAM, 2003; 354 vol.3.
[11] Pahl, G. Projeto na engenharia: fundamentos do desenvolvimento eficaz de produtos,
métodos e aplicações. São Paulo: Edgard Blücher, 2005.
[12] Paine, FA. The packaging Design and Performance. Surrey: Pira, 1996.
[13] Romano, LN. Metodologia de projeto de embalagem. Dissertação (Mestrado em
Engenharia Mecânica)-Universidade Federal de Santa Catarina, Florianópolis, 1996.
[14] Raper SA. Toward the Development of an Integrated Packaging Design Methodology
Quality Function – An Introduction and Example. In: Brody AL, Lord JB.
Development of New Food Products for Changing Market Place. CRC Press LLC:
Philadelphia, 1999; chap. 14.
[15] Robertson, GL. Food packaging: principles and practice. New York : Marcel Dekker,
1993.
[16] Rozenfeld, H. et al. Gestão de desenvolvimento de produtos: uma referência para a
melhoria do processo. São Paulo: Saraiva, 2006; 542.
[17] Saghir, M. Packaging logistics evaluation in the Swedish retail supply chain, licentiate
thesis, Department of Design Sciences, Division of Packaging Logistics, Lund
University,Lund, 2002.
[18] ten Klooster R. Packaging design: a methodological development and simulation of
the design process. Thesis, Delft University of Technology, Delft, 2002.
[19] Ulrich KT and Eppinger SD. Product design and development. 3rd ed. Boston:
McGraw-Hill/Irwin, 2004.
Information Modelling for Innovation and
Sustainability
Environmental Regulations Impose New Product
Lifecycle Information Requirements
Abstract. In a global response to increasing health and environmental concerns, there has
been a trend towards governments enacting legislation to encourage sustainable
manufacturing where industry creates products that minimize environmental impact. This
legislative trend seeks to shift the environmental responsibility of product manufacturing to
the finished goods manufacturer. To meet this new responsibility, data relevant to the
material composition of a product must flow unimpeded from the raw material producers to
the final producers. Unfortunately, existing systems are ill-prepared to handle the new data
requirements. For example, the European Union’s (EU) Energy Using Product (EuP)
Directive will require that companies provide total energy used during a product’s lifecycle,
including manufacturing and transportation energy. To meet these new requirements, new
systems must be designed and implemented, or modifications made to existing data
management systems. Because every law poses its own unique requirements on industry, it
is not always clear what information will need to be collected and stored. This paper
seeks to provide industry with a forward-looking view of the new data exchange
requirements needed within the manufacturing supply chain of the future. It surveys
current and forthcoming environmental legislation, including the EU Restriction of
Hazardous Substances directive (RoHS), China RoHS, California RoHS, the EU EuP
directive, and the EU Registration, Evaluation and Authorization of Chemicals
regulation (REACH). The paper identifies the unique data requirements that will need
to be incorporated into a product's supply chain in order for companies to comply with
each law.
1 Introduction
Many governments, corporations, and other regulating political bodies are
seeking to address health and environmental problems through initiatives and laws.
1 Computer Scientist, National Institute of Standards and Technology (NIST), 100 Bureau
Drive, Gaithersburg, Maryland, 20899, USA; Tel: 301 975-4284; Fax: 301 975-8069;
Email: john.messina@nist.gov; http://www.eeel.nist.gov/812/IIEDM/
374 J. Messina, E. Simmon, M. Aronoff
The European Union's Directive 2002/95/EC on the restriction of the use of certain
hazardous substances in electrical and electronic equipment is the best known of this
new type of toxic substance legislation. The directive bans various substances (such
as heavy metals) from being incorporated into electronic devices. In essence, it
seeks to shift the responsibility for a product's environmental impact back to the
product's manufacturer. The directive, which went into effect on July 1, 2006, has
left the electronics industry scrambling to develop substitute materials [1] and new
data exchange mechanisms in order to ensure compliance.
The directive has several key elements that directly impact any company importing
electronic products into the EU [2]:
Compliance with RoHS is the responsibility of the company that seeks to market
the product within the EU. There is no detailed declaration requirement, simply a
yes/no declaration of compliance by the manufacturer. However, since the six
banned substances are tracked at the homogeneous-material level, it is now the
responsibility of the final product producer to track those six substances
through the supply chain, from raw materials to the final product, and to
ensure that its products do not exceed the MCVs [3] listed in Table 1.
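As an illustration of this tracking obligation, the check below compares each homogeneous material in a product against maximum concentration values. The limits used are the commonly cited EU RoHS thresholds (0.1 % by weight, 0.01 % for cadmium); they stand in for the paper's Table 1, which is not reproduced here, and the function is a sketch, not a legal compliance tool.

```python
# Maximum concentration values as a fraction by weight in a
# homogeneous material (commonly cited EU RoHS limits, standing in
# for the paper's Table 1).
MCV = {
    "lead": 0.001,
    "mercury": 0.001,
    "cadmium": 0.0001,
    "hexavalent chromium": 0.001,
    "PBB": 0.001,
    "PBDE": 0.001,
}

def rohs_compliant(materials):
    """materials: one dict per homogeneous material in the product,
    mapping substance name -> weight fraction in that material.
    Returns True only if no material exceeds any MCV."""
    return all(
        fraction <= MCV.get(substance, 1.0)
        for material in materials
        for substance, fraction in material.items()
    )

# A solder joint containing 0.2 % lead exceeds the 0.1 % limit.
leaded = rohs_compliant([{"lead": 0.002}])
# Trace levels below the limits pass.
trace = rohs_compliant([{"lead": 0.0005}, {"cadmium": 0.00005}])
```

The per-material granularity is the key point: a product fails if any single homogeneous material exceeds a limit, even when the whole-device average is far below it.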
This proposed law is very similar to the European Union’s RoHS. So much so, in
fact, that it is commonly referred to as China RoHS by industry. Unfortunately,
while both pieces of legislation target the same set of six substances [4], there are
substantial differences between the two that could pose problems for industry.
The directive has several key elements that are very similar to the EU's RoHS:
- It bans the same six hazardous substances (lead, mercury, cadmium,
hexavalent chromium, PBBs, PBDEs) in products, with specified MCVs.
- It maintains the right to ban or restrict future toxic or hazardous
substances.
However, beyond the six restricted substances and their MCVs, there are
differences between the two sets of legislation in both their scope and how they
are to be implemented.
Several unique data elements stand out in the China RoHS legislation: the
"environmentally-friendly use period", an alternative basis for calculating the
maximum permitted substance concentration, product catalogue information, and
compliance testing data. The new use-period requirement identifies the period
during which the toxic and hazardous substances contained within an electronic
information product will not leak out or mutate. It is still undecided which
substances qualify for this requirement or how the use period will be
determined, but the requirement will be put in place. China RoHS also allows
reporting on the basis of the mass proportion of the entire device for devices
smaller than 4 mm3 (the approximate size of a surface-mount transistor). This
will lead to differences in the reporting mechanisms for the EU and China
legislation; a product could pass one but fail the other based on size. With the
product catalogue subject to yearly modifications, it will be important to keep
a link between a product and the specific version of the catalogue. Finally,
with products being tested for compliance, it will be important to propagate
the test results throughout the supply chain.
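The size-dependent concentration basis described above can be sketched as follows. The 4 mm³ threshold comes from the text; the function, its parameters and the sample figures are purely illustrative, not part of either regulation.

```python
SMALL_DEVICE_MM3 = 4.0  # below this volume, whole-device reporting applies

def declared_fraction(device_volume_mm3, substance_mass_g,
                      material_mass_g, device_mass_g):
    """Concentration basis for a China RoHS declaration: mass
    proportion of the entire device for very small devices, mass
    proportion of the homogeneous material otherwise."""
    if device_volume_mm3 < SMALL_DEVICE_MM3:
        return substance_mass_g / device_mass_g
    return substance_mass_g / material_mass_g

# 10 ug of lead in a 2 mm^3 part: judged against the whole 4 mg device.
small_basis = declared_fraction(2.0, 1e-5, 2e-5, 4e-3)   # 0.0025
# The same amount in a larger part: judged against its 20 ug solder alloy.
large_basis = declared_fraction(10.0, 1e-5, 2e-5, 4e-3)  # 0.5
```

The two bases can differ by orders of magnitude for the same part, which is exactly why a product could pass one regime and fail the other.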
The California law also seeks to limit the levels of hazardous materials in some
electronic devices. It is in fact so closely linked to the EU's RoHS that it
prohibits an electronic device from being sold or offered for sale in
California if it is currently prohibited from being sold in the EU under
Directive 2002/95/EC. However, as California RoHS was implemented through two
sets of emergency legislation in late December 2006 [5], there are noticeable
differences between the pieces of legislation. Specifically, California RoHS
targets fewer restricted substances and focuses on a smaller set of covered
electronic devices (CEDs) [6]:
- California RoHS bans only the four heavy metals (lead, mercury,
cadmium, hexavalent chromium) and establishes allowable MCVs
(harmonized to match the MCV values in the EU's RoHS).
- The law covers only the CEDs enumerated in Public Resources Code
section 42463, i.e., video display devices beyond a certain size. More
specifically, in 2005 the California Department of Toxic Substances
Control (DTSC) established eight broad categories of displays.
- The second piece of emergency legislation added portable DVD players to
the list of CEDs.
- The law targets only retail products, not business-to-business products.
- California RoHS allows exemptions for some classes of products.
REACH is unlike any existing restrictive substance legislation, and its unique data
requirements will have far reaching implications for industry. First, REACH will
require new data information flows: downstream to customers (chemical safety
information), upstream to suppliers (chemicals’ expected use) and sideways
(dossiers) to the new ECA. While systems for the exchange of chemical safety
data sheets (SDS) exist and work, new and expanded SDS will be required to
include information such as proposed use, exposure scenarios, etc. Also, implicit
with REACH is that there will likely be a need for material declarations for the
substances of high concern, to avoid legal liability. It is likely that upwards of
2,000 substances will need to be declared if they appear in a product.
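At its simplest, the declaration flow described above reduces to screening a product's substance inventory against the candidate list of substances of high concern. A minimal sketch follows; the substance names are chosen only for illustration, and the real candidate list and data formats were still undefined at the time of writing:

```python
def declarations_needed(inventory, candidate_list):
    """Return the subset of a product's substances that appear on the
    candidate list and therefore require a material declaration."""
    return sorted(s for s in inventory if s in candidate_list)

candidate_list = {"DEHP", "HBCDD", "MDA"}        # hypothetical entries
inventory = {"copper": 12.0, "DEHP": 0.3, "tin": 4.1}  # substance -> mass (g)
print(declarations_needed(inventory, candidate_list))  # ['DEHP']
```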
The EuP is different from the other legislation in that, rather than focusing on
restricting hazardous substances, it focuses on improving energy efficiency [8]. In
fact, the EuP is part of the EU-wide Energy Efficiency Action Plan [9], which
seeks to reduce energy consumption by improving energy efficiency. The EuP
targets the negative environmental impacts of energy-using products that occur
throughout the product life-cycle. These impacts arise from the extraction of raw
materials, the manufacturing process, distribution, use and eventual disposal. In
essence, the EuP will require manufacturers to calculate and track the energy used
to produce, transport, sell, use and dispose of any non-transportation product all the
way from extracting raw materials to the product’s end of life disposal. The energy
data collected by manufacturing companies will then be used in a variety of ways
to spur energy conservation (consumer labeling, setting limits, etc.).
• The idea is that these new energy labels will allow customers to change
their buying behavior in favor of products that require less energy to
produce and use.
The EuP introduces data tracking requirements that could be an order of magnitude
greater than RoHS, REACH, or any other pending environmental legislation. Most
companies do not currently track this information and have no systems or
processes in place to measure energy usage, let alone pass it along to customers. To
make matters worse, no metrics for tracking rate of energy usage have been
finalized and no specialized reporting format for energy usage has been introduced.
While EuP is limited to energy, it is likely that additional data reporting
requirements, such as greenhouse gases and water usage, will be added.
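The scale of the tracking burden can be pictured as a per-product energy ledger spanning every lifecycle stage. Since no metrics or reporting format have been finalized, the stage names and figures below are purely illustrative:

```python
# Hypothetical lifecycle energy ledger for one product.
STAGES = ("raw_materials", "manufacturing", "distribution", "use", "disposal")

def total_energy_kwh(ledger):
    """Sum reported energy over every lifecycle stage; a missing stage is
    treated as unreported rather than silently counted as zero."""
    missing = [s for s in STAGES if s not in ledger]
    if missing:
        raise ValueError("unreported stages: %s" % ", ".join(missing))
    return sum(ledger[s] for s in STAGES)

ledger = {"raw_materials": 12.0, "manufacturing": 35.5,
          "distribution": 4.25, "use": 310.0, "disposal": 1.75}
print(total_energy_kwh(ledger))   # 363.5
```

Rejecting incomplete ledgers matters here: a supplier that simply omits a stage would otherwise appear more efficient than one that reports it honestly.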
<http://www.aeanet.org/governmentaffairs/gabl_ChinaRoHS_FINAL_March2006
.asp >. Accessed on Feb 1, 2007.
[5] California Office of Administrative Law. Available at: <
http://www.oal.ca.gov/Recent%20Action%20Taken%20on%20Emergency%20Re
gs.htm >. Accessed on Feb 1, 2007.
[6] California Department of Toxic Substance Control: Electronic Hazardous Waste.
Available at: <http://www.dtsc.ca.gov/HazardousWaste/EWaste/index.cfm >.
Accessed on Feb 1, 2007.
[7] European Commission website information on REACH. Available at: <
http://ec.europa.eu/environment/chemicals/reach/reach_intro.htm >. Accessed on:
Feb 1 2007.
[8] European Commission website on Energy Using Products. Available at: <
http://ec.europa.eu/enterprise/eco_design/dir2005-32.htm>. Accessed on Feb 1,
2007.
[9] European Union: Action Plan for Energy Efficiency. Available at: <
http://ec.europa.eu/energy/action_plan_energy_efficiency/index_en.htm>.
Accessed on Feb 1, 2007.
Data Modeling to Support Environmental Information
Exchange throughout the Supply Chain
1 Electrical Engineer, National Institute of Standards and Technology (NIST), 100 Bureau
Drive, Gaithersburg, Maryland, 20899, USA; Tel: 301 975-3956; Fax: 301 975-8069;
Email: eric.simmon@nist.gov; http://www.eeel.nist.gov/812/IIEDM/
384 E. Simmon, J. Messina
1 Introduction
Countries around the world are creating legislation designed to encourage
manufacturing practices that promote human health and environmental protection.
Many of these laws are built around a concept called extended producer
responsibilities (EPR). EPR shifts the responsibility of a product’s negative
environmental or health impact directly to the producing company. The first major
regulation of this type was the EU RoHS directive [1], which went into legislative
force in 2006. The RoHS directive restricts imports of new electrical and
electronic equipment containing six substances specified in the directive. For
manufacturers to successfully comply with RoHS and similar legislation, they
need the ability to exchange material content data. This information would then
propagate through the supply chain from the raw material suppliers to the final
producer. At the time the RoHS directive was developed, there was no standard to
support material data exchange through the supply chain. This necessitated the
creation of a new data exchange standard. The design of this standard was
complicated by the diverse nature of the electronics industry business processes
and reporting practices. In fact, the complexity of the reporting requirements
meant that traditional ad hoc standards development processes would likely fail to
produce a viable standard. To overcome this, software development methodologies
were chosen as the basis for the standards development process. After
reviewing several design methodologies, simple Unified Modeling Language
(UML) [2] modeling with class diagrams was chosen. The main benefit of UML
was that it offered a relatively high degree of improvement in the
development process at a relatively low cost of implementation. The
UML is one part of a structured design approach that starts with domain experts
defining the scope and use requirements which are then developed into a data
model.
NIST developed a data model (using UML class diagrams) that described the
required underlying material content based on the requirements specified by the
IPC 2-18 experts. This data model was used to generate an eXtensible Markup
Language (XML) schema that defines the IPC 1752 Material Declaration standard
[3]. An important result of this approach is that, while the IPC 1752 standard was
developed to support EU RoHS, the data model was designed to be flexible enough
that it could be modified to support additional material data from other content
regulations (China RoHS, California RoHS, etc) with little effort. The salient point
is that, even if different data exchange solutions were developed for every new
piece of legislation, as long as they were based on the same data model, the
solutions would be interoperable. This paper looks at the data model designed for
the IPC1752 standard and how it can be revised for similar laws and directives.
A UML design model was created to show the different material composition
data, the associated business information, and the relationships between
them (see Figure 1).
While these issues could not be resolved for the first implementation of the
design model, the IPC standards committee has begun developing version 2.0 of
1752. This presented the opportunity to revise the data model for the next version
of the standard; because the development methodology is based around UML, changes
made to the data model can be propagated easily into the new standard.
One of the easiest requirements to address in the new data model will be the
safe use period information requirement in China RoHS. Because of the
methodology used to design version 1.0, the structure and attributes already present
within the v1.0 design model and schema will require very little modification to
support this information, since it can fit within the new generalized declaration.
Another issue that needs to be addressed is the ability to report concentration in
ppm of homogeneous materials (as for EU RoHS) or mass (if the mass is under the
China RoHS weight limit for small parts). Since the inception of EU RoHS there
has been concern about how to report the quantity of a substance. RoHS requires
that companies report the mass of substance relative to a homogeneous material,
while other legislation and the Joint Industry Guide (JIG) require that a company
report mass relative to the entire part (unless specified by a specific regulation), not
just the material. Adding an attribute to qualify the reported mass as being
reported at the part level or the homogeneous material level resolves this
discrepancy.
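Such a qualifying attribute might look like the following sketch; the class and field names are illustrative and are not taken from the IPC 1752 schema:

```python
from dataclasses import dataclass

@dataclass
class SubstanceMass:
    """A reported substance mass qualified by its reporting level, resolving
    the EU RoHS (homogeneous material) vs. JIG (whole part) discrepancy."""
    substance: str
    mass_mg: float
    reporting_level: str       # "homogeneous_material" or "part"
    reference_mass_mg: float   # mass of the material or part reported against

    def concentration_ppm(self):
        # Same arithmetic either way; the qualifier records what the
        # reference mass actually is.
        return self.mass_mg * 1e6 / self.reference_mass_mg

lead = SubstanceMass("lead", 0.5, "homogeneous_material", 200.0)
print(lead.concentration_ppm())   # 2500.0
```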
The EU EuP Directive does not cover materials, but instead covers the energy
used during the production and use of a product. Since the directive has not been
finalized, it is not clear what information will be required, but it is likely the
electronics supply chain will want to track this information in a similar manner to
material data. Using a modular design, this additional energy related information
may be added by creating a class associated with the product which holds the
energy information for the product. Likewise, the manufacturing data will be
modularized.
To provide declaration information for the new regulations, the RoHS
declaration class will become more generic, which will allow declarations for other
regulations. To support products that are shipped to multiple markets, it will
support multiple declarations for a single part. Figure 2 shows a high level view of
the prototype design model for version 2.0. It does not show the association
multiplicity or sequence information. Additional information related to the XML
schema has been removed to make the diagram easier to read. Interested parties
may contact the authors for more detailed models.
Figure 2. Prototype high level design model for next generation material
composition data standard
The prototype proposed for the next generation model has several key classes,
which include:
Declaration – defines the basic information for the data transfer, such as an
identifier and version for the standard. This class represents the top-level element
in the XML implementation.
Product class – defines the product or group of products to which the declaration
applies. This is the central class in the model. The other main classes are
associated with the product class, and the product class is associated with itself as a
subclass to allow nesting of product information. Structuring products this way
means that an entire product declaration can be wrapped up and easily used as part
of another product declaration.
ProductID information – holds the identifying information for the product being
declared. Version 2.0 will support multiple product IDs by allowing multiple
ProductID objects to be included within a file. A part family ID may still be used
as with v1.0.
Legal Information – includes the regulatory information and any legal language
required for the data exchange to happen. This class is flexible so that it can
handle legal information for specific regulations. Legal language about the file’s
data, along with regulation-specific language, is included here.
Material Information – holds the actual material content data. This data can be
reported at the product level (e.g., is the product within the limits set by RoHS
yes/no), the category level (e.g., JIG substance categories), or the substance level
(e.g., JIG substances).
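The key classes above can be sketched as a minimal object model; the names follow the text, but the code is illustrative rather than generated from the actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class MaterialInfo:
    level: str   # "product", "category", or "substance"
    data: dict

@dataclass
class Product:
    product_ids: list                                # multiple IDs in v2.0
    materials: list = field(default_factory=list)
    subproducts: list = field(default_factory=list)  # self-association

    def all_materials(self):
        """Walk the nested product tree, so a supplier's declaration can be
        wrapped up inside another product's declaration."""
        for m in self.materials:
            yield m
        for sub in self.subproducts:
            yield from sub.all_materials()

@dataclass
class Declaration:
    identifier: str
    version: str
    product: Product

board = Product(["BRD-1"], [MaterialInfo("substance", {"lead": "2500 ppm"})])
unit = Product(["UNIT-1"], [MaterialInfo("product", {"rohs_compliant": True})],
               [board])
decl = Declaration("decl-001", "2.0", unit)
print(sum(1 for _ in decl.product.all_materials()))   # 2
```

The self-association on Product is what makes a subassembly's entire declaration reusable as part of a larger product's declaration.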
5 Conclusions
The electronics industry is building advanced manufacturing facilities and supply
networks to produce goods that have a complexity only imagined a few years ago.
The data exchange standards to support this are equally complex. Creating and
implementing these standards is a continuing task. By using simple tools such as
those available with UML modeling in a structured development process, robust
data exchange standards may be created that are easily modified to meet future
requirements. These material declaration models and resultant XML
implementations are a good example of how to apply software development
methodologies to data exchange standards development.
[1] European Commission website information on RoHS and WEEE. Available at:
<http://ec.europa.eu/environment/waste/weee_index.htm>. Accessed on Feb 1, 2007.
[2] Unified Modeling Language Specification. Available at:
<http://www.omg.org/technology/documents/modeling_spec_catalog.htm>. Accessed
on April 3, 2007.
[3] IPC 1752 for Material Declarations. Available at:
<http://members.ipc.org/committee/drafts/2-18_d_MaterialsDeclarationRequest.asp>.
Accessed on Feb 1, 2007.
EXPRESS to OWL morphism: making possible to
enrich ISO10303 Modules
Abstract. ISO10303 STEP has been acknowledged by the world’s largest industrial
companies as the most important family of standards for the integration and exchange of
product data in manufacturing domains. With the advent of globalization, small and
medium-sized enterprises (SMEs) looking to level up with world-class competitors and
raise their effectiveness are also realizing the importance of using this kind of
standard. However, to enable model-based interoperability, the STEP industrial
standards, the Application Protocols (APs), follow a modular approach, i.e. they are
composed of a set of general-purpose modules sharable by a number of different APs. As
a consequence, the core STEP reference models contain vague definitions that sometimes
raise ambiguous interpretations.
A possible solution to overcome this barrier would be to add further semantics to the
concepts defined and enable STEP modules as ontologies, thus providing an alternative to
traditional implementations. SMEs can benefit even more from this alternative, since OWL
is currently a widespread technology, with abundant low-cost supporting tools compared
to the ones dealing directly with STEP.
Introduction
Interoperability and standardization have been playing important roles in lowering
costs related to production, sales and delivery processes, which permits final prices
to be reduced and competitiveness to be increased. Enterprise and systems interoperability
is frequently associated with the usage of several dedicated reference models,
1 Group for the Research in Interoperability of Systems (GRIS), UNINOVA, Campus da
Caparica, 2829-516 Caparica, Portugal; Tel: +351 21 2948365; Fax: +351 21 2957786;
Email: ca@uninova.pt
392 C. Agostinho, M. Dutra, R. Jardim-Gonçalves, P. Ghodous, A. Steiger-Garção
covering many industrial areas and related application activities, from design phase
to production and commercialization [1].
ISO10303, most commonly known as the Standard for the Exchange of Product
Model Data (STEP), is one of the most important standards for representation of
product information. However, despite the many success stories involving large
enterprises (e.g. from the aeronautics, shipbuilding, automotive or aerospace
sectors), where STEP enables estimated savings of $928 million per year, it still
has some drawbacks [2]: STEP reference models are somewhat vague and contain
definitions that can raise ambiguous interpretations among industrial experts
who have not been involved in the standardization process; and they rely on
languages that are unfamiliar to most application developers [1,4,5].
A solution to overcome the last problem would be to enable STEP industrial
models, the Application Protocols (APs), in more user-friendly and supported
technologies and standards, such as Extensible Markup Language (XML) [1,3-6].
Regarding the first drawback, an innovative approach would be to link STEP to the
semantic web. If that were possible, it might be easier to reduce
misinterpretations by associating sector-specific semantics with each model.
In this paper we focus on the harmonization of STEP with the Web Ontology
Language (OWL), to cover both needs. OWL is an ontology language produced by the
W3C Web Ontology Working Group. It is structured to be a major formalism for
the design and dissemination of ontology information, particularly in the Semantic
Web. OWL is intended to be used when the information contained in documents
needs to be processed by applications, as opposed to situations where the content
only needs to be presented to humans [7].
Because OWL is part of W3C's Semantic Web, the official exchange syntax for
OWL is XML/RDF, a way of writing RDF in XML. However, since OWL has
more facilities for expressing meaning and semantics than XML, RDF, and RDF-S
(RDF Schema), it goes beyond these languages in its ability to represent machine
interpretable content on the Web [7,8].
STEP is nowadays one of the most important families of standards for the
representation of product information. It contains more than forty APs that reflect
the consolidated expertise of major industrial worldwide specialists working
together for more than twenty years, covering the principal product data
management areas for the main industries, e.g. oil and gas, automotive,
aeronautics, aerospace. This kind of knowledge should not be wasted by the
market, and gives STEP a distinct advantage over similar technologies and
standards [1,3-6]. STEP is also one of the most innovative families of standards on
the reusability sense. Application modules were recently introduced to the STEP
architecture and are the key component in making its APs more interoperable,
cheaper, easier to understand and manage, and quicker to develop [10].
However, in spite of promoting reusability, the modular architecture raises the
problem of over-abstracting the standard's definitions, because they stop being
associated with any particular environment. Also, smaller industries such as SMEs
still don't use STEP because of another problem: it is associated with technologies
that lack tool support and require big initial investments [1,3-6]. The EXPRESS
modelling language, specified by STEP part 11 (ISO 10303-11) [11], is an example
of this. In spite of being a very powerful language, it is unfamiliar to most
application developers and is consequently almost ignored by users outside the
STEP community [1,3-6].
Figure: the Semantic Web layer stack underlying OWL, with OWL on top of RDF and
RDF Schema, over XML with Namespaces, URIs and Unicode.
One of the main benefits of OWL is the support for automated reasoning, and
to this effect, it has a formal semantics based on Description Logics (DL). They are
suitable for representing structured information about concepts, concept hierarchies
and relationships between concepts. The decidability of the logic ensures that
sound and complete DL reasoners can be built to check the consistency of an OWL
ontology, i.e., verify whether there are any logical contradictions in the ontology
axioms. Furthermore, reasoners can be used to derive inferences from the asserted
information, e.g., infer whether a particular concept in an ontology is a subconcept
of another, or whether a particular individual in an ontology belongs to a specific
class [7,15].
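The subsumption inference mentioned above can be illustrated in miniature: a reasoner treats subsumption as (at least) the transitive closure of the asserted subclass axioms. The toy code below is illustrative only and is in no way a real DL reasoner:

```python
def superclasses(cls, subclass_of):
    """Compute all (possibly inferred) superclasses of cls as the
    transitive closure of the asserted subclass axioms."""
    seen = set()
    stack = [cls]
    while stack:
        for parent in subclass_of.get(stack.pop(), ()):
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

subclass_of = {"Lemon": ["Citric_Fruit"], "Citric_Fruit": ["Fruit"]}
print("Fruit" in superclasses("Lemon", subclass_of))   # True: inferred,
                                                       # not asserted
```

A real DL reasoner does far more (consistency checking, instance classification over restrictions), but this is the flavor of inference that the formal semantics makes sound and complete.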
Each OWL file represents an ontology. An OWL file header extends the RDF file
header by adding URIs for the OWL vocabulary and for the ontology being
described. In our approach, EXPRESS schemas were translated into OWL
ontologies by creating a separate file to represent each one of them.
So, a typical OWL file representing an EXPRESS schema named
Fruit_schema (which uses definitions from the Fruit_description schema) should
look like this:
<rdf:RDF
xmlns="http://www.uninova.pt/ontology/Fruit_schema#"
xmlns:owl="http://www.w3.org/2002/07/owl#"
xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:xsd="http://www.w3.org/2001/XMLSchema#"
xml:base="http://www.uninova.pt/ontology/Fruit_schema">
<owl:Ontology rdf:about="">
<owl:versionInfo>1.0</owl:versionInfo>
<rdfs:comment>Fruit_schema</rdfs:comment>
<owl:imports rdf:resource="Fruit_description.owl"/>
</owl:Ontology>
</rdf:RDF>
EXPRESS entities are used to define concepts from the real world, which have
properties that characterize them. In the entity-relationship model they would be
tables, but in OWL they are classes. Using this principle, we can map any
entity directly, as well as its subtypes and supertypes (profiting from OWL class
hierarchies).
Table 1. Entities

EXPRESS

ENTITY Tree
  SUBTYPE OF (Thing);
  root : Root;
END_ENTITY;

ENTITY Fruit;
  description : OPTIONAL STRING;
END_ENTITY;
OWL
<owl:Class rdf:about="#Fruit">
<rdfs:subClassOf>
<owl:Restriction>
<owl:minCardinality
rdf:datatype="http://www.w3.org/2001/XMLSchema#int">0
</owl:minCardinality>
<owl:onProperty>
<owl:DatatypeProperty rdf:ID="Fruit_description">
<rdfs:domain rdf:resource="#Fruit"/>
<rdfs:range rdf:resource="http://www.w3.org/2001/XMLSchema#string"/>
</owl:DatatypeProperty>
</owl:onProperty>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<owl:Class rdf:about="#Tree">
<rdfs:subClassOf rdf:resource="#Thing"/>
<rdfs:subClassOf>
<owl:Restriction>
<owl:cardinality
rdf:datatype="http://www.w3.org/2001/XMLSchema#int"> 1
</owl:cardinality>
<owl:onProperty>
<owl:ObjectProperty rdf:ID="Tree_root">
<rdfs:range rdf:resource="#Root"/>
</owl:ObjectProperty>
</owl:onProperty>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
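The mapping pattern of Table 1 can be read as a small translation rule: an OPTIONAL attribute becomes a minCardinality-0 restriction on a datatype or object property, while a required attribute becomes a cardinality-1 restriction. A sketch of that rule follows (illustrative only, not the actual UniSTEP-toolbox implementation):

```python
def attribute_to_restriction(entity, attr, attr_type, optional):
    """Emit the OWL restriction fragment for one EXPRESS attribute,
    following the pattern of Table 1."""
    prop = "%s_%s" % (entity, attr)   # property named entity_attribute
    if optional:
        card = ('<owl:minCardinality rdf:datatype='
                '"http://www.w3.org/2001/XMLSchema#int">0</owl:minCardinality>')
    else:
        card = ('<owl:cardinality rdf:datatype='
                '"http://www.w3.org/2001/XMLSchema#int">1</owl:cardinality>')
    # Simple types map to datatype properties, entity types to object properties.
    kind = "DatatypeProperty" if attr_type == "STRING" else "ObjectProperty"
    return ("<owl:Restriction>%s<owl:onProperty>"
            '<owl:%s rdf:ID="%s"/></owl:onProperty></owl:Restriction>'
            % (card, kind, prop))

frag = attribute_to_restriction("Fruit", "description", "STRING", True)
print("Fruit_description" in frag and "minCardinality" in frag)   # True
```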
There are two kinds of constructed data types in EXPRESS: enumeration data
types and select data types. The enumeration is a concept common to many other
languages, and defines a set of names to be used in a domain. Regarding the select,
<owl:Class rdf:about="#Lemon">
<owl:disjointWith rdf:resource="#Orange"/>
<owl:disjointWith rdf:resource="#Grapefruit"/>
</owl:Class>
<owl:Class rdf:about="#Grapefruit">
<owl:disjointWith rdf:resource="#Orange"/>
<owl:disjointWith rdf:resource="#Lemon"/>
</owl:Class>
<owl:Class rdf:ID="Citric_Fruit">
<owl:equivalentClass>
<owl:Class>
<owl:unionOf rdf:parseType="Collection">
<owl:Class rdf:about="#Orange"/>
<owl:Class rdf:about="#Lemon"/>
<owl:Class rdf:about="#Grapefruit"/>
</owl:unionOf>
</owl:Class>
</owl:equivalentClass>
</owl:Class>
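The select-type mapping shown above (pairwise-disjoint member classes plus a unionOf equivalence) can likewise be sketched as a generation rule; the output is illustrative, not the toolbox's actual serializer:

```python
def select_to_owl(select_name, members):
    """Render an EXPRESS SELECT type as OWL: every member class is pairwise
    disjoint, and the select itself is equivalent to the union of its
    members, mirroring the Citric_Fruit example."""
    parts = []
    for m in members:
        others = "".join('<owl:disjointWith rdf:resource="#%s"/>' % o
                         for o in members if o != m)
        parts.append('<owl:Class rdf:about="#%s">%s</owl:Class>' % (m, others))
    union = "".join('<owl:Class rdf:about="#%s"/>' % m for m in members)
    parts.append('<owl:Class rdf:ID="%s"><owl:equivalentClass><owl:Class>'
                 '<owl:unionOf rdf:parseType="Collection">%s</owl:unionOf>'
                 '</owl:Class></owl:equivalentClass></owl:Class>'
                 % (select_name, union))
    return "\n".join(parts)

owl = select_to_owl("Citric_Fruit", ["Orange", "Lemon", "Grapefruit"])
print(owl.count("disjointWith"))   # 6: each pair, in both directions
```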
It was not possible to translate EXPRESS rules into OWL. Concerning the
EXPRESS uniqueness rule, the two languages interpret this principle differently:
while in EXPRESS a unique value is assigned to an object and understood by all
involved actors, there is no such possibility in OWL, where infinitely many URIs
may be set to represent the same object. Moreover, as OWL is a declarative
language, it cannot be used to represent function statements. Likewise, the
mapping of EXPRESS domain rules (the WHERE clause) was not possible.
OWL
<owl:Class rdf:ID="OWL_Set_Tree">
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty>
<owl:ObjectProperty rdf:ID="OWL_Set_belongTo_Tree"/>
</owl:onProperty>
<owl:allValuesFrom rdf:resource="#Tree"/>
</owl:Restriction>
</rdfs:subClassOf>
<rdfs:subClassOf>
<owl:Restriction>
<owl:onProperty rdf:resource="#OWL_Set_belongTo_Tree"/>
<owl:minCardinality rdf:datatype=
"http://www.w3.org/2001/XMLSchema#int"> 1
</owl:minCardinality>
</owl:Restriction>
</rdfs:subClassOf>
</owl:Class>
<owl:Class rdf:ID="Orchard">
<owl:sameAs rdf:resource="#OWL_Set_Tree"/>
</owl:Class>
To improve the efficiency of the MRS, the more morphisms are classified the
better. Indeed, all the morphisms that are part of the UniSTEP-toolbox are
classified under MoMo's reference ontology, so that any user who wants to work
with STEP models but prefers to use different technologies can be advised to use
the morphisms implemented in this toolbox.
Figure 4 - Snapshot of the Instance of the MoMo Ontology for the EXP2OWL Morphism
The diagram in Figure 4 uses UML notation to illustrate how the EXP2OWL
is classified using the reference ontology for model morphisms. For the sake of
simplicity, this figure does not reflect the entire set of elements of the ontology,
especially some class properties. Having this classification performed, anyone
(human or machine) should have only one interpretation of what the morphism is
about, i.e. “UniSTEP is an automatic SoftwareTool with academic background that
realizes an EXPRESS to OWL ModelTransformation implemented in the JAVA
TransformationLanguage (a restrictive classification for Java, but one that
expresses its role in this scenario). This transformation defines the ModelMorphism EXP2OWL
which takes as input the source ApplicationPlatformModel represented using the
EXPRESS ModelingLanguage, processes the Architectural part and generates a
target model represented using the OWL ModelingLanguage”.
5 Closing Remarks
With so many different modelling and implementation standards being used
nowadays, STEP is one of the most distinguished regarding product data. To
promote the reusability of its industrial standards, ISO adopted the modular
approach for STEP to enable more efficient development, standardization,
implementation and deployment. Compared with the classic STEP architecture,
this emerging approach promises to bring major advantages for users and
developers [9].
However, the modular standards may raise the problem of becoming quite hard
to understand, due to vague definitions not associated with any particular
environment. Yet another problem arises when the chosen product model is
described using one particular technology (e.g. EXPRESS) and is required to be
integrated with end-user systems that use totally different technologies with
different degrees of expressiveness like XML or OWL.
The integration of the EXPRESS language with the Web Ontology Language
can be the means to put away the enumerated STEP problems, since OWL provides
a valuable link with the emerging field of the Semantic Web, which is gaining high
relevance in the global market, and has an XML syntax for easy data exchange using
web-based systems. The Semantic Web aims to extend the current Web
infrastructure in such a way that information is given a well-defined meaning,
enabling software agents and people to work in cooperation by sharing knowledge
[18]. This way, if the STEP standards are transformed into OWL, they could in the
future be easily complemented with links to semantic information contextualizing
the scope of the defined concepts for the environment where they are applied.
Moreover, representing EXPRESS modules as ontologies enables the use of
OWL reasoning, a very powerful way to check information for inconsistency and
incoherence. This can lead us to a scenario where human users can exchange and
verify EXPRESS-represented data more easily. Such a scenario can enhance
the use of the EXPRESS language, promoting its adoption by a large number of
platform-independent and language-independent users. The morphism developed is
also part of a collaborative design project, described in [19].
The different degrees of expressiveness of the referred languages impede a full
binding (e.g. of EXPRESS rules), thus originating a partial morphism. In this
case, the morphism results in the loss of some information. Thus, if a user needs
to transform an EXPRESS model into XML-based languages, namely OWL, without
losing much information, more than one technique and tool should probably be
combined. This combination could be suggested automatically by the MRS, which
reasons over the knowledge base provided by the MoMo reference ontology.
6 Acknowledgments
The authors would like to thank all the organizations supporting the international
projects that resulted in the development of the UniSTEP-Toolbox, namely the
funStep community (www.funstep.org) and its members who contributed to the
presentation of this work, and also CEN/ISSS and ISO TC184/SC4 for their
effort in developing industrial standards and cooperating with the presented
research.
7 References
[1] Agostinho, C., Delgado, M., Steiger-Garção, A., and Jardim-Gonçalves, R., (2006),
‘Enabling Adoption of STEP Standards Through the Use of Popular Technologies’, in
proceedings of 13th ISPE International Conference on Concurrent Engineering (CE
2006), Juan-Les-Pins, France, September 2006.
[2] White W. J., O’Connor A. C., Rowe B. R., (2004) Economic Impact of Inadequate
Infrastructure for Supply Chain Integration. National Institute of Standards &
Technology (NIST), Planning Report 04-2.
[3] Delgado, M., Agostinho, C., and Jardim-Gonçalves, R. (2006), ‘Taking advantage of
STEP, MDA, and SOA to push SMEs towards a Single European Information Space’,
in proceedings of eChallenges e2006, Barcelona, Spain, October 2006.
[4] Jardim-Gonçalves, R., Agostinho, C., Malo, P., and Steiger-Garção, A., (2005) AP236-
XML: a framework for integration and harmonization of STEP Application Protocols.
Proc. of IDETC/CIE’05, ASME 2005 International Design Engineering Technical
Conferences and Computers and Information in Engineering Conference; Long Beach,
California, USA, Sep 24- 28, 2005.
[5] Delgado M., Agostinho C., Malo P., Jardim-Gonçalves R., (2006) A framework for
STEP-based harmonization of conceptual models. 3rd International IEEE Conference
on Intelligent Systems (IEEE-IS’06), Westminster, Jul 4-6, 2006.
[6] Lubell J., Peak R. S., Srinivasan V., and Waterbury S. C., (2004) STEP, XML, and
UML: Complementary Technologies. Proc. of DETC 2004, ASME 2004
Design Engineering - Technical Conferences and Computers and Information in
Engineering Conference, Salt Lake City, Utah USA, Sep 28- Oct 2, 2004.
[7] OWL Web Ontology Language Overview. Available at: http://www.w3.org/TR/owl-
features/. Accessed on: April 2007.
[8] Semantic Web, http://www.w3.org/2001/sw, Accessed on: April 2007.
[9] Jardim-Gonçalves, R., Olavo, R., Steiger-Garção, A., (2003) ‘Modular Application
Protocol for Advances in Interoperable Manufacturing Environments in SMEs’ in
proceedings of 10th ISPE International Conference on Concurrent Engineering (CE
2003), Madeira, Portugal, July 2003.
[10] Feeney, A. B., “The STEP modular architecture”, Trans. ASME: J. Comput. Inform.
Sci. Eng., vol. 2, pp. 132–135, Jun. 2002.
[11] ISO10303-11 (2003), Product data representation and exchange — Part 11: Description
methods: The EXPRESS language reference manual.
[12] InterOP NOE consortium, (2005) Deliverable DTG3.2: TG MoMo Roadmap.
[13] D'Antonio F., Missikoff M., Bottoni P., Hahn A., Hausmann K., (2006) An ontology
for describing model mapping/transformation tools and methodologies: the MoMo
ontology. EMOI’06 - Open InterOP Workshop on Enterprise modelling and Ontologies
for Interoperability, Luxembourg, June 2006.
[14] InterOP NOE consortium, (2006) Deliverable DTG3.3: Toolbox definition and
Workshop report.
[15] Patel-Schneider, P. F. (2004), ‘What is OWL (and why should I care)?’, invited paper
for the Ninth International Conference on the Principles of Knowledge Representation
and Reasoning, Whistler, Canada, June 2004
[16] OMG Model Driven Architecture (2006). www.omg.org/mda/
[17] ISO 10303-236 (2006), ‘Industrial automation systems and integration — Product data
representation and exchange — Application protocol: Furniture catalog and interior
design’.
[18] Berners-Lee, T., Hendler, J. and Lassila O. (2001), ‘The Semantic Web, a new form of
Web content that is meaningful to computers will unleash a revolution of new
possibilities’, Scientific American, May, 2001.
[19] Dutra M, Slimani K, Ghodous P (2007) A Distributed and Synchronous Environment
for Collaborative Work, submitted to the Integrated Computer-Aided Engineering
journal (ICAE), ISSN 1069-2509, vol. 14, IOS Press, Amsterdam, The
Netherlands, 2007.
Complex Modelling Platform based on Digital Material
Representation
1 Introduction
Numerical modelling is currently applied to predict material behaviour under
manufacturing and exploitation conditions. The method most commonly used for
this purpose is the Finite Element Method (FEM), which can be applied to simulate a
variety of problems, from simple experimental tests (e.g. tensile, torsion) to
sophisticated processes involving complex structures and systems (e.g. cars,
buildings, implants). FEM is able to simulate the real processes occurring inside the
material and its environment (tools), as well as the interactions between them.
Although the method is gradually being introduced in industrial applications, it still
1 AGH – University of Science and Technology, Krakow, Poland, Modelling and Information Technology (MiTI) Department; Tel: +48 12 6172921; Fax: +48 12 6172921; Email: lrauch@agh.edu.pl; www.miti.agh.edu.pl
404 L. Rauch, L. Madej, T. Jurczyk and M. Pietrzyk
and then investigated to obtain the properties of the product after processing. The
required additional data, such as material phases, grain rheological models or their
chemical composition, are stored in an external database. Since in this approach
the phenomena occurring in other scales are not taken into consideration, the final
results are reliable mainly in macro scale analysis.
The meshing process consists of two tasks: (a) preparation of a special control
space structure to provide the required sizing of elements throughout the domain
and (b) discretization of the domain following the control space as closely as
possible. The sizing information in the control space is stored in the form of an
anisotropic metric and can be gathered automatically from two sources: user input
or the numerical adaptation process. All available sources are processed and stored
in a single adapted control space structure (either a quadtree or a background mesh)
using an adaptive procedure [9]. In the presented approach the control space is
initialized with a uniform coarse sizing, which can then be conveniently refined
by introducing a number of discrete metric sources in the areas of interest. After
all discrete sources are inserted, the metric field is adjusted according to the
prescribed element size gradation and is then used to guide the generation of a
triangular mesh.
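The interplay of the uniform base sizing, the discrete metric sources, and the size gradation can be illustrated with an isotropic simplification (the actual control space stores an anisotropic metric; the function and parameter names below are ours, not taken from [9]):

```python
import math

def element_size(p, h_uniform, sources, gradation):
    """Target element size at point p.

    The field starts from a uniform coarse size; each discrete source
    imposes a finer size locally, and the gradation factor bounds how
    fast the size may grow with distance from that source.
    """
    h = h_uniform
    for (sx, sy), h_src in sources:
        dist = math.hypot(p[0] - sx, p[1] - sy)
        h = min(h, h_src + gradation * dist)  # size allowed by this source
    return h

# One fine source (size 0.1) at the origin inside a coarse field (size 1.0)
sources = [((0.0, 0.0), 0.1)]
print(element_size((0.0, 0.0), 1.0, sources, 0.3))  # finest at the source
print(element_size((5.0, 0.0), 1.0, sources, 0.3))  # grows with distance, capped at 1.0
```

Far from all sources the size saturates at the uniform coarse value, reproducing the coarse-to-fine refinement described above.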
The meshing procedure starts with the discretization of domain contours. The
created boundary points are triangulated using a modification of the Delaunay
incremental insertion algorithm working in Riemannian space. The constrained
triangulation is obtained by recovering all missing edges and removing obsolete
elements. A number of additional points are inserted within the domain in order to
achieve the unit-mesh property (i.e. a mesh where all edges have unit metric length)
according to the control space. If a quadrilateral mesh is requested, a conversion
procedure can be used. Finally, several methods of mesh post-processing are
applied in order to improve the quality of the elements (Figure 3).
The mesh prepared on the basis of the DMR can be used to investigate the
behaviour of particular grains with different properties, i.e. different
crystallographic orientations.
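The unit-mesh criterion can be made concrete with a simplified, scalar version of the metric (the real algorithm works with an anisotropic Riemannian metric; the names and the splitting tolerance below are ours):

```python
import math

def metric_length(p, q, h):
    """Length of edge (p, q) measured in the metric: here simply the
    Euclidean length divided by the local target size h."""
    return math.dist(p, q) / h

def needs_refinement(p, q, h, tol=math.sqrt(2)):
    """An edge noticeably longer than unit metric length calls for the
    insertion of a new point (a common splitting criterion)."""
    return metric_length(p, q, h) > tol

# With target size 0.25, a unit-long edge has metric length 4 -> split it
print(metric_length((0.0, 0.0), (1.0, 0.0), 0.25))     # 4.0
print(needs_refinement((0.0, 0.0), (1.0, 0.0), 0.25))  # True
print(needs_refinement((0.0, 0.0), (0.3, 0.0), 0.25))  # False
```

Iterating this test over all edges, inserting midpoints where it fails, drives the triangulation toward the unit mesh described above.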
Figure 3. Digital microstructure obtained from an optical microscope image, with the mesh
applied to the grain areas.
Analysis of the phenomena which occur at the scale of single grains and their
interactions is key to solving various problems that the modelling of deformation
processes has to face, e.g. changes in the deformation path and strain localization.
Since such an analysis is crucial to support experimental observations, scientists
are trying to capture these microscale processes [10].
Figure 5. Strain distribution obtained from a) the FE and b) the CAFE simulation.
4 Conclusions
Following the concept of Digital Material Representation presented in [3, 4], a
complex solution dedicated to the investigation of material behaviour under loading
conditions at the micro, meso and macro scales was developed. The presented model
is an example of a multidisciplinary approach to materials science: on one side,
image processing techniques and a MySQL database system; on the other, mesh
generation algorithms and multiscale computational methods. Such an approach
gives the opportunity to overcome difficulties in the precise modelling of material
behaviour at various scales, which is important from the industrial application
point of view. Future work will focus on further development of the system to
extend its capabilities and to create a user-friendly environment.
Acknowledgements. The financial support of the Polish Ministry of Science and Higher
Education, project no. COST/203/2006, is acknowledged.
5 References
[1] PASZYNSKI, M., KOPERNIK, M., MADEJ, L., PIETRZYK, M., Automatic hp
adaptivity to improve accuracy of heat transfer model and linear elasticity problems in
engineering solving, J. Machine Eng., vol. 16, pp. 73-82, 2006.
[2] MADEJ, L., HODGSON, P.D., PIETRZYK, M., Multi scale analysis of material
behavior during deformation processes, in: Foundation of Materials Design, eds,
Kurzydlowski, K.J., Major, B., Zieba, P., Research Signpost, Kerala, pp. 17-47, 2006.
[3] DAWSON, P.R., MILLER, M.P., The digital material – an environment for
collaborative material design, project poster. Available at:
<http://anisotropy.mae.cornell.edu/downloads/dplab/>
[4] BERNACKI, M., CHASTEL, Y., DIGONNET, H., RESK, H., COUPEZ, T., LOGÉ,
R.E., Development of numerical tools for the multiscale modelling of recrystallisation
in metals, based on a digital material framework, Comp. Meth. Mater. Sci., 7,
pp. 142-149, 2007.
[5] RAUCH, L, KUSIAK, J., Edge Detection and Filtering Approach Dedicated to
Microstructure Image Analysis, Comp. Meth. Mater. Sci., vol. 7, pp. 305-310, 2007.
[6] NIXON, M.S., AGUADO, A.S, Feature Extraction and Image Processing, First
Edition, Newnes, 2002.
[7] RAUCH, L., KUSIAK, J., Image filtering using the Dynamic Particles Method, Proc.
Conf. CMS’05 Computer Methods and Systems, eds, Tadeusiewicz, R., Ligeza, A.,
Szymkat, M., Krakow, pp. 115-121, 2005.
[8] KOPERNIK, M., SZELIGA, D., Modelling of nanomaterials – sensitivity analysis
with respect to the material parameters, Comp. Meth. Mater. Sci., vol. 7, pp. 255-262,
2007.
[9] JURCZYK, T, GLUT, B., Adaptive Control Space Structure for Anisotropic Mesh
Generation. In: Proc. of ECCOMAS CFD 2006 European Conference on
Computational Fluid Dynamics. Egmond aan Zee, 2006.
[10] PAUL, H., DRIVER, J.H., JASIENSKI, Z., Shear banding and recrystallization
nucleation in a Cu-2%Al alloy single crystal, Acta Mater., vol. 50, pp. 815-830, 2002.
[11] DAS, S., PALMIERE, E.J., HOWARD, I.C., CAFE: a tool for modeling
thermomechanical processes, Proc. Conf., Thermomech. Processing: Mechanics,
Microstructure & Control, eds, Palmiere, E.J., Mahfouf, M., Pinna, C., Sheffield,
pp. 296-301, 2002.
[12] MADEJ, L., HODGSON, P., GAWĄD, J., PIETRZYK, M., Modeling of rheological
behavior and microstructure evolution using cellular automaton technique, Proc. 7th
Conf. ESAFORM 2004, ed., Støren, S., Trondheim, pp. 143–146, 2004.
[13] DING, R., GUO, Z., Coupled quantitative simulation of microstructural evolution and
plastic flow during dynamic recrystallization, Acta Mater., vol. 49, pp. 3163–3175,
2001.
[14] MADEJ, L., HODGSON, P.D., PIETRZYK, M., Multi scale rheological model for
discontinuous phenomena in materials under deformation conditions, Comp. Mat. Sci.,
vol. 38, pp. 685-691, 2007.
Interoperability for Collaboration
Collaborative Implementation of Inter-organizational
Interoperability in a Complex Setting
Abstract. This paper explores the challenges in the collaborative implementation of inter-
organizational interoperability through observations of social dynamics. We focus on an
inter-organizational information system that has interfaces with several information systems
managed by different organizations. This complexity increases the difficulty of the
implementation projects. Our main finding is that as the inter-organizational nature of the
problem considerably increases the technical complexity of the implementation, it also
significantly increases the difficulties in the social dynamics. We argue that a careful analysis
of these social issues can reveal interesting viewpoints that might otherwise stay hidden.
We limit this paper to the implementation of an inter-organizational information system
that is implemented to support pre-defined joint functionalities.
1 Introduction
Inter-organizational information systems are implemented because they inevitably
increase the possibilities for organizations to collaborate with each other [7, 15, 26].
Inter-organizational information systems allow, for example, enterprises to
participate in the e-economy by enabling cross-organizational connections in a
network or supply chain. Information system implementations are described as
challenging efforts that require the expertise, insights and skills of several individuals
[23]. Information technology has enabled ever quicker information sharing and
transfer across organizational borders, which has become possible because modern
technology enables interaction without physical attendance. By saying this, we
want to emphasize the increased interaction among people in organizations but also
between organizations. However, implementing collaboration technology in inter-
1 Corresponding Author, Email: raija.halonen@gmail.com
414 Raija Halonen and Veikko Halonen
3 Research Approach
This research was qualitative. It therefore enables and requires the researchers to
explain the research setting in enough detail to help the reader understand the
research approach. When explaining the case, the role of interpretation is
recognized, paying attention also to the experience that inevitably influences our
actions [25]. Case study and participatory observation were chosen as methods
because of their suitability to our research.
The importance of the new information system was realized by one participant:
“If the information system will not be implemented, the actions will be declined in
our organization. The stipulation for the nation-wide actions will be an
information system!” (Memorandum September 12, 2003). The chairman stated in
the same meeting: “Our motive is to get this information system as soon as
possible because it’s impossible to act in the current way.”
I-System was to have both intra-organizational and inter-organizational
interfaces. The intra-organizational interfaces are described in Figure 1.
Figure 1. The intra-organizational interfaces of I-System: Department A, Department B,
codes, user identification, and the legacy information system.
2006). The answer remained the same: “So far no automatic writing to the
receiving systems may be done.” (Memorandum November 2, 2006).
I-System was planned to support collaboration between the participating
organizations, and this character necessitated interfaces to be built for each
organization. Because the organizations were interdependent and had their own
information systems, the interfaces differed from each other. In the beginning only
three user organizations participated, but their number increased steadily. In
addition, the development work required other organizations that were responsible
for, e.g., some of the legacy systems and data administration. We perceived
problems with commitment by some of the organizations, though. The project
manager received an email (September 16, 2004): "It really seems that all tasks
that were assigned to Acro [pseudonym] are left half-way." There were problems
with the user organizations, too. The project manager received an email on May 12,
2005: “The situation is as before. We’ll start the technical implementation at the
end of the summer.” This email discussion continued on February 10, 2006: “The
progress has been slow. The specifications are almost ready. We’ll try to get this
fixed in the second quarter.” However, the assignment was still not completed
when the project ended.
We also perceived reluctance to deliver information within organizations when
changes needed to be made in other information systems. On several occasions it
was found that knowledge was not available where it was needed. “I’m sorry about
this outburst but we don’t really know anything about this task and this ‘cgi’ is
everything we have been told even if we wanted to know something else about it,
too!” (Email August 8, 2005). This problem in interaction was evident, e.g., when
interfaces to offer data from a legacy information system were needed. We also
noticed that the actors responsible for the new functionality were not informed of
the need. “We [project managers] cannot push them to transfer information in their
organization. They have been present when we have discussed about transferring
data between I-System and their information system.” (A phone call to the project
manager from the vendor).
This reluctance also influenced interoperability between organizations, because
the information needed was not available in I-System, which was to use it and
deliver it forward.
5 Conclusions
When analyzing research material we must be conscious of the interpretative
nature of the task. In this sense, our personal experience also influences this,
however objective we try to be in our approach.
We witnessed that a high perceived motivation from the very beginning drove the
project forward. All the same, motivation perceived by some people is not enough
if the pertinent people are not motivated. Due to the diversified setting with several
actors, interoperability was difficult to carry out. In our case, not all the desired
information was available when users needed it in the new information system.
Further, we also noticed that not all the needed information was forwarded within
the organizations, especially in the case of distributed departments and units. This lack of
7 References
[1] Clark T, Moon T. Interoperability for Joint and Coalition Operation. Australian
Defence Force Journal 2001;151:23-36.
[2] Coghlan D, Brannick T. Doing action research in your own organization. Sage
Publications, London, 2002.
[3] Daniel EM, White A. The future of inter-organisational system linkages: findings of an
international Delphi study. European Journal of Information Systems 2005;14(2):188-
203.
[4] Davis GB, Olson MH. Management information systems: Conceptual foundations,
structure and development. McGraw-Hill Book Company, New York, 1985:561-601.
[5] Flick U. An Introduction to Qualitative Research. Sage Publications Inc., Thousand
Oaks, 1999.
[6] Hasselbring W. Information system integration, Communications of the ACM
2000;43(6):32-38.
[7] Hevner AR, March ST, Park J, Ram S. Design Science in Information Systems
Research. MIS Quarterly 2004;28(1):75-105.
[8] Hong IB, A new framework for interorganisational systems based on the linkage of
participants’ roles. Information & Management 2002;39:261-270.
[9] IEEE. IEEE Standard Computer Dictionary: A Compilation of IEEE Standard
Computer Glossaries. Institute of Electrical and Electronics Engineers, New York,
1990.
[10] Johnston HR, Vitale MR. Creating Competitive Advantage with Interorganizational
Information Systems. MIS Quarterly 1988;12 (2):153-165.
[11] Kemmis S. Exploring the Relevance of Critical Theory for Action Research:
Emancipatory Action Research in the Footsteps of Jurgen Habermas. In: Reason P,
Bradbury H (eds) Handbook of Action Research. SAGE Publications, London, 2002;
91-102.
[12] Kemmis S, McTaggart R. Participatory action research. In: Denzin NK, Lincoln YS
(eds) Handbook of Qualitative Research. Sage Publications Inc., Thousand Oaks, 2000;
567-605.
[13] Klein K, Myers M. A set of principles for conducting and evaluating interpretative field
studies in information systems. MIS Quarterly 1999;23(1):67-94.
[14] Klischewski R. Information Integration or Process Integration? How to Achieve
Interoperability in Administration, 2004. Available at:
http://is.guc.edu.eg/uploads/egov2004_klischewski.pdf. Access on: Nov. 15, 2006.
[15] Laudon KC, Laudon JP. Management Information Systems, New Approaches to
Organization and Technology. Prentice-Hall, New Jersey, 1998.
[16] Manola F. Interoperability issues in Large-Scale Distributed Object Systems. ACM
Computing Surveys 1995;27(2):268-270.
[17] Markus ML. Building successful interorganizational systems. In: Chen C-S, Filipe J,
Secura I, Cordeiro J (eds) Enterprise Information Systems VII, Springer, Heidelberg,
2006.
[18] Munkvold BE. Challenges of IT implementation for supporting collaboration in
distributed organizations. European Journal of Information Systems 1999;8:260-272.
[19] Park J, Ram S. Information Systems Interoperability: What Lies Beneath? ACM
Transactions on Information Systems 2004;22(4):595-632.
[20] Sawyer S, Southwick R. Temporal Issues in Information and Communication
Technology-Enabled Organizational Change: Evidence From an Enterprise Systems
Implementation. The Information Society 2002;18:263-280.
[21] Schultze U. A Confessional Account of an Ethnography About Knowledge Work. MIS
Quarterly 2000;24(1):3-41.
[22] Southon G, Sauer C, Dampney CNG. Lessons from a failed information systems
initiative: issues for complex organizations. International Journal of Medical
Informatics 1999;55:33-46.
[23] Tiwana A, McLean ER. Expertise Integration and Creativity in Information Systems
Development. Journal of Management Information Systems 2005;22(1):13-43.
[24] Van Maanen J. Tales of the Field: On Writing Ethnography. University of Chicago
Press, Chicago, 1988.
[25] Walsham G. Interpretive case studies in IS research: nature and method. European
Journal of Information Systems 1995;4:74-81.
[26] Wognum PM, Mensink G, Bühl H, Ma X, Sedmak-Wells M, Fan IS. Collaborative
enterprise system implementation. In: Sobolewski M, Cha J (eds) Concurrent
engineering - The worldwide engineering grid. Tsinghua University Press, Beijing,
2004; 583-588.
FICUS - A Federated Service-Oriented File Transfer
Framework
Abstract. The engineering data of a large enterprise is typically distributed over a wide area
and archived in a variety of file systems and databases. Access to such information is crucial
for team members and relevant processing services (applications, tools and utilities) in a
concurrent engineering setting. However, this is not easy, because there is no simple way to
access the information efficiently without being knowledgeable about various file systems,
file servers, and networks, especially as complex domain-related data files get bigger. In a
concurrent engineering environment, there is every need to be aware of the transparent and
dynamic data perspectives of the other members of the team.
Our paper describes the methodology of how FICUS works, along with the details of the
implementation and the extensions planned for the future. We believe the performance and
reliability offered by FICUS will make it a very useful distributed file transfer framework
for a large design team and will make it very convenient to integrate heterogeneous legacy
file systems.
Keywords. Data sharing, distributed file systems, federated systems, collaborative work.
1 Introduction
Managing engineering data is becoming an increasingly complex task.
Heterogeneity of hardware and software platforms is one of the barriers to be
overcome in achieving this end. With increasing use of computers, we have islands
of automation that have resulted in information archival in legacy file systems.
1 SORCER Research Group, Computer Science Dept., Texas Tech University, Box 43104, Boston & 8th, Lubbock, TX 79409, USA; Tel: +1-806-742-5851; Email: sorcer@cs.ttu.edu; http://sorcer.cs.ttu.edu
422 A. Turner and M. Sobolewski
This makes access to information easy for someone who uses a single repository
as their primary or native environment. However, the information access problem
increases manyfold when one wishes to access enterprise-wide information. Access
to enterprise-wide information is very important in a collaborative setting, when a
number of people access a corpus of information, albeit with different perspectives.
Major problems to be addressed include how to integrate all the information, how
to deal with legacy systems, and how to provide the user with a wide view that
abstracts away the hardware and software specifics of the
individual systems. The federated service-oriented file transfer framework
(FICUS) we describe in this paper enables access to distributed and replicated
information over a wide area with special emphasis on efficient access to data
repositories. This includes CAD drawings and other kinds of raster and vector data,
in addition to voice and video clips that will be archived in the future. The need for
accessing distributed information by people viewing from different perspectives
arises in a concurrent engineering setting. Also, the efficient file download by
many services sharing the same data becomes essential when requestors share the
same copy of a master file at the same time.
Building on the OO paradigm is the service-object-oriented (SOO) paradigm, in
which the objects are distributed, or more precisely they are remote (network)
objects that play some predefined roles. A service provider is an object that accepts
remote messages, called service exertions, from service requestors to execute an
item of work. A task exertion is an elementary service request – a kind of
elementary remote instruction (statement) executed by a service provider. A
composite exertion, called a job exertion, is defined in terms of tasks and other jobs
- a kind of procedure executed by a service provider. The executing exertion is a
SOO program that is dynamically bound to all relevant and currently available
service providers on the network. This collection of providers, identified at runtime,
is called an exertion federation, or an exertion space. While this sounds similar to
the OO paradigm, it really isn’t. In the OO paradigm, the object space is a program
itself; here the exertion space is the execution environment for the exertion, which
is a network OO program. This changes the game completely. In the former case,
the object space is hosted by a single computer, but in the latter case the service
providers are hosted by the network of computers. The overlay network of service
providers is called the service provider grid [5-7, 9] and an exertion federation is
called a virtual metacomputer. The metainstruction set of the metacomputer
consists of the method set defined by all service providers in the grid. Do you
remember the eight fallacies of network computing? Creating and executing a SO
program in terms of metainstructions requires a completely different approach than
creating a regular OO program. In other words, we apply in FICUS the OO
concepts directly to the service provider grid.
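The distinction between task and job exertions can be sketched abstractly. SORCER itself is a Java/Jini environment and its actual API differs; the Python below only mirrors the concepts, with plain callables standing in for network service providers:

```python
class Task:
    """Elementary exertion: a single remote instruction for a provider."""
    def __init__(self, selector, provider):
        self.selector = selector  # the operation requested
        self.provider = provider  # stand-in for a network service provider

    def exert(self):
        return self.provider(self.selector)

class Job:
    """Composite exertion: a procedure built from tasks and other jobs."""
    def __init__(self, exertions):
        self.exertions = exertions

    def exert(self):
        # Each part is executed by whichever provider it is bound to.
        return [e.exert() for e in self.exertions]

# Stand-in providers; in SORCER these are discovered on the network at runtime.
def adder(selector):
    return selector + " done by adder"

def plotter(selector):
    return selector + " done by plotter"

job = Job([Task("add", adder), Job([Task("plot", plotter)])])
print(job.exert())  # ['add done by adder', ['plot done by plotter']]
```

A real federation would bind each task to a currently available provider at execution time; here the binding is fixed, which is exactly the simplification the SOO paradigm removes.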
The SORCER environment [1, 10, 14-16, 20, 22-28] provides the means to
create interactive SOO programs and execute them without writing a line of source
code via zero-install, interactive service interfaces. Exertions can be created using
interactive user interfaces downloaded directly from service providers, allowing
the user to execute and monitor the execution of exertions in the SOO
metacomputer. The exertions can also be persisted for later reuse. This feature
allows the user to quickly create new applications or programs on the fly in terms
of existing tasks and jobs. SORCER introduces federated method invocation based
on peer-to-peer (P2P) [17, 18] and dynamic service-oriented Jini technology [4, 8,
12, 13, 18, 19, 21].
To integrate applications and tools on a B2B grid with shared engineering data,
the File Store Service (FSS) [5] was developed as a core service in
FIPER/SORCER. The value of FSS is enhanced when both web-based user agents
and service providers can readily share the content in a seamless fashion. The FSS
framework fits the SORCER philosophy of grid interactive SOO programming,
where users create distributed programs using exclusively interactive user agents.
However, FSS does not provide the S2S flexibility with separate specialized and
collaborating service providers for file storage, replication, and meta information
that have been added in the SILENUS federated file system [1].
In this paper we describe the FICUS federated service-oriented file transfer
framework, which allows an exertion federation to share data collaboratively and
efficiently across federating service providers, with files split into smaller chunks
that are replicated and stored at multiple locations.
2 FICUS Architecture
FICUS has been designed to explore the file sharing concepts used in modern peer-
to-peer technologies such as BitTorrent [2, 3], and investigates how they can be
applied to a file system. FICUS is an extension to SILENUS, a federated file
system developed at Texas Tech University [1]. The SILENUS file system is
comprised of several network services that run within the SORCER environment,
each of which provides a functional aspect of the file system. These services
include a byte store service for holding file data, a metadata service for holding
metadata information about the files (such as file names), several optional
optimizer services, and façade services to assist in using these services. SILENUS
is designed so that many instances of these services can run on a network, and the
required services will federate together to perform the necessary functions of a file
system. FICUS adds support for storing very large files within the SILENUS file
system by providing two more services: a splitter service and a tracker service.
When a file is uploaded to the file system, the splitter service determines how that
file should be stored. If a file size is above a predetermined threshold, the file will
be split into multiple parts, or chunks, and stored across many byte store services.
Once the upload is complete, a tracker service keeps a record of where each chunk
was stored. When a user requests to download the full file later on, the tracker
service can be queried to determine the location of each chunk and the file can be
reassembled.
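The split-and-track flow just described can be sketched as follows (toy thresholds and in-memory dictionaries stand in for the real byte store, splitter, and tracker services; all names are hypothetical):

```python
import uuid

CHUNK_THRESHOLD = 64          # split files larger than this (bytes; toy value)
CHUNK_SIZE = 32

byte_stores = {0: {}, 1: {}}  # store id -> {chunk uuid: chunk bytes}
tracker = {}                  # file name -> ordered list of (store id, chunk uuid)

def upload(name, data):
    """Splitter: store small files whole, split large ones across stores."""
    chunks = ([data] if len(data) <= CHUNK_THRESHOLD
              else [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)])
    record = []
    for i, chunk in enumerate(chunks):
        store = i % len(byte_stores)   # naive placement policy
        cid = str(uuid.uuid4())
        byte_stores[store][cid] = chunk
        record.append((store, cid))
    tracker[name] = record             # tracker remembers where each chunk went

def download(name):
    """Query the tracker and reassemble the file from its chunks."""
    return b"".join(byte_stores[s][c] for s, c in tracker[name])

data = bytes(range(100))
upload("big.bin", data)
assert download("big.bin") == data
print(len(tracker["big.bin"]))  # 4 chunks of at most 32 bytes
```

The round-robin placement is purely illustrative; the paper leaves the placement policy to the splitter service and its configuration.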
BitTorrent [2, 3] does not really have any set rules for determining file splitting
parameters, other than that all pieces must be of the same size (except possibly
the last piece) and that the piece size in bytes must be a power of 2. It is normally
left up to the hosting user to determine what piece size should be used. In a file
system, this decision should be handled automatically and transparently, without
bothering end users with the specifics. Thus, a splitter service is responsible for
determining whether or not a file should be split, and if so, what parameters should
be used for splitting. Administrators can affect many of the parameters used in
making this decision in order to optimize network, storage, and file usage. For
example, an administrator can specify the minimum file size required for a file to
be considered for splitting. A minimum and maximum chunk size can be specified
along with a minimum and maximum number of file splits to use for any file. The
splitter can use this information to calculate the optimum chunk size to use for a
file based on the file's size and possibly other parameters as well, such as available
storage space on each of the byte store services.
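One possible chunk-size calculation under such administrator-set bounds is sketched below; the paper does not prescribe an exact formula, so the policy here is purely illustrative:

```python
def choose_chunk_size(file_size, min_chunk, max_chunk, min_splits, max_splits):
    """Pick a chunk size so the number of chunks falls inside the
    administrator-configured split range, clamped to the allowed
    chunk-size range. One possible policy, not the actual FICUS one."""
    target_splits = max(min_splits, min(max_splits, file_size // max_chunk + 1))
    chunk = -(-file_size // target_splits)   # ceiling division
    return max(min_chunk, min(max_chunk, chunk))

MiB = 1 << 20
# A 10 MiB file with 1-64 MiB chunks and 4-32 splits -> 4 chunks of 2.5 MiB
print(choose_chunk_size(10 * MiB, 1 * MiB, 64 * MiB, 4, 32) == 10 * MiB // 4)  # True
```

Further inputs mentioned in the text, such as free space on each byte store, could be folded in by adjusting the target split count before the division.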
The splitter also provides services for splitting and reassembling large files on
the requestor side through the use of proxies. A splitter proxy object can
concurrently manage multiple byte store proxy objects for communicating directly
with various byte store services. When a user or other agent uploads a large file,
the splitter proxy can send parts of the file simultaneously to individual byte store
services to store as chunk files. A splitter proxy could then download these chunk
files from the multiple byte stores simultaneously and save them as file segments
to recreate a copy of the original file.
BitTorrent uses a tracker to help peers discover each other and to help peers
determine the location of desired file pieces. A tracker service for FICUS provides
similar functionality. When a large file is uploaded in chunks to the file system, the
location of each chunk is recorded and this information is given to the tracker for
storage in a database. During replication, a replicator service can also notify the
tracker of any new chunk files that have been made. In addition to chunk locations,
the tracker also records the size of the original file, the chunk size used, how many
chunks a file has been split into, and an optional checksum for each chunk to see if
the chunk file has been corrupted. When a split file needs to be retrieved, the
tracker can be queried to find the locations of each chunk needed to completely
reassemble the file.
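The optional per-chunk checksum can be sketched with a standard hash (the paper does not name an algorithm; SHA-256 is our choice for illustration):

```python
import hashlib

def checksum(chunk: bytes) -> str:
    """Digest the tracker records when a chunk file is stored."""
    return hashlib.sha256(chunk).hexdigest()

def verify(chunk: bytes, recorded: str) -> bool:
    """On retrieval, detect a corrupted chunk file before reassembly."""
    return checksum(chunk) == recorded

digest = checksum(b"chunk-data")
print(verify(b"chunk-data", digest))   # True
print(verify(b"chunk-dat4", digest))   # False: corruption detected
```

A failed check means the chunk must be fetched again, ideally from a replica recorded by the tracker rather than the corrupted copy.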
Since a tracker service handles location information for files, it acts as a logical
extension to a metadata store service. When a file is stored on a byte store service,
the byte store names the file with a UUID to provide unique and persistent
identification for the file content. The metadata store normally keeps track of
where a file is located using the service ID of the byte store upon which a file is
stored along with the file's UUID. In the case of split files, a tracker service records
this information for each chunk file while the metadata store records the service ID
of any tracker services that are tracking the file along with a similar UUID with a
numeric extension to refer to the record number of the file within the tracker.
With the inclusion of these new services, an updated façade service is needed to
assist with the coordination of the various file system services. Specifically, the
façade service is responsible for discovering the metadata store services needed for
browsing through the file system, and splitter services to initiate file storage and
retrieval. When uploading files or performing other operations that require data
alterations on more than one file system service, the façade is also responsible for
keeping these actions under transactional semantics to verify that all required
operations either complete successfully or abort. Due to the nature of the façade
service, it can act as an entry point for other services into the file system. Through
the use of a service browser such as Inca X [11], a user could obtain a GUI
(ServiceUI [21]) for a façade service and interact with the file system directly
without the need to install any additional software. The façade could also be used
to manage and distribute remote event notifications. For example, once an upload
has completed, a façade service could notify a replication service with the
appropriate information to begin replicating the new file to additional storage
services.
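The all-or-nothing coordination the façade performs can be sketched generically (our illustration of transactional semantics, not the actual SORCER transaction mechanism):

```python
def transactional(operations):
    """Run all (apply, undo) operation pairs; if any apply fails, undo the
    ones already applied so the file system services stay consistent."""
    done = []
    try:
        for apply, undo in operations:
            apply()
            done.append(undo)
    except Exception:
        for undo in reversed(done):
            undo()  # roll back in reverse order
        return False
    return True

def fail():
    raise RuntimeError("byte store down")

log = []
ok = transactional([
    (lambda: log.append("meta written"), lambda: log.remove("meta written")),
    (fail, lambda: None),
])
print(ok, log)  # False [] -- the metadata write was rolled back
```

In FICUS the pairs would correspond to metadata store, tracker, and byte store updates for a single upload.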
3.1 BitTorrent
BitTorrent is a peer-to-peer technology that has become quite popular over the
past few years. It allows a person to share a large file (or set of files) with many
users while transferring relatively little data. It accomplishes this task by breaking
the file into smaller pieces that can be quickly shared between other users. Once a
user has downloaded a file piece, it can send the completed piece to other users
who still require it. In essence, the upload bandwidth required to share the file is
now distributed amongst the peers rather than making a single server solely
responsible for doing all the uploading. User systems within the peer “swarm” are
able to find each other and figure out which systems have desired pieces through
the use of a tracker service running on the Internet.
In order to begin serving a file, a user would go through the following steps.
1. Find a tracker to manage the peers involved in transferring file pieces.
2. Generate a metainfo (torrent) file using the complete file to be served and
the URL of the tracker.
3. Upload the torrent file to a website.
4. Start a BitTorrent client using the torrent file to begin seeding the full file.
Downloading the full file involves the following steps.
1. The user finds and downloads the torrent file from the web server.
2. The user loads the torrent file into their BitTorrent client.
3. The BitTorrent client connects to the tracker to find other peers.
4. The BitTorrent client downloads pieces of the file from others and shares
the pieces it has with others.
Once all pieces are distributed into the peer swarm, clients can share with each
other until they all have the full file.
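Step 2 above, generating the metainfo, essentially amounts to hashing the file piece by piece. A simplified sketch (a real torrent file is additionally bencoded; the piece size shown is merely typical):

```python
import hashlib

PIECE_LENGTH = 256 * 1024  # a typical piece size

def make_metainfo(data: bytes, tracker_url: str, name: str) -> dict:
    # Hash every fixed-size piece; peers later verify each downloaded
    # piece against its 20-byte SHA-1 digest before sharing it onward.
    pieces = [hashlib.sha1(data[i:i + PIECE_LENGTH]).digest()
              for i in range(0, len(data), PIECE_LENGTH)]
    return {
        "announce": tracker_url,          # step 1: the tracker's URL
        "info": {
            "name": name,
            "length": len(data),
            "piece length": PIECE_LENGTH,
            "pieces": b"".join(pieces),   # concatenated piece digests
        },
    }
```

Because every piece is independently verifiable, a peer can safely re-share a piece as soon as it has been downloaded and checked, which is what lets the swarm distribute the upload load.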
FICUS - A Federated Service-Oriented File Transfer Framework 427
3.2 FICUS
4 Conclusions
FICUS is able to provide several benefits over traditional client-server file systems in which files are stored in their entirety. Many of these benefits stem from the
service-to-service oriented nature of SORCER along with the file splitting
capabilities of FICUS. In a traditional client-server based network file system, a
large amount of storage space must either be found or created in order to store
large amounts of data, especially if the data is contained in only a single file. The
speed at which this data can be provided to others is usually limited by the
maximum bandwidth available to the server, which can cause severe bottlenecks if
many clients request the same data at once. If an error occurs during a file transfer,
the client often has to restart the transfer from the beginning, which wastes time
and magnifies the bottleneck issue. If the file server goes down, then these files are
typically unavailable until the server can be restored. Basically, the major disadvantage of storing whole files within a client-server file system is that the server can easily become a single point of failure. To help alleviate this
problem, many file servers run on expensive, high end, redundant server
equipment. High speed RAID arrays are often employed to not only help recover
from a hard drive failure, but also to provide increased throughput for client
requests. Additionally, servers are often placed on high speed network segments to
handle the necessary bandwidth requirements.
Many of these problems can be avoided by splitting large files into chunks and
by using a service-to-service type architecture as provided by SORCER. By storing
files in chunks across multiple storage locations, storage and network requirements
become much more distributed. For example, when storing a large file, it is usually
much easier to find several storage locations with lesser amounts of free space than
it is to find a single location with a massive amount of free space. When
downloading files, there may be several different storage locations that have the
requested data rather than just a single server, thus it is far less likely for any single
storage location to become a bottleneck due to a high number of file requests. If
files are spread across multiple locations in chunks, a client could download
multiple chunks simultaneously, thereby using the aggregate bandwidth of all
storage nodes rather than the available bandwidth of just a single server. If an error
occurs during a file transfer, then only the erroneous chunk would have to be
transferred again rather than the full file, which can save a lot of time and waste
less bandwidth.
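The per-chunk error recovery argued for above can be sketched as follows, assuming each chunk is accompanied by its own checksum; the function names and chunk size are illustrative, not part of FICUS:

```python
import hashlib

def split_file(data: bytes, chunk_size: int = 4 * 1024 * 1024):
    """Split data into chunks plus a manifest of per-chunk digests."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    manifest = [hashlib.sha256(c).hexdigest() for c in chunks]
    return chunks, manifest

def verify_chunks(chunks, manifest):
    """Return the indices of corrupted chunks; only these need to be
    transferred again, not the whole file."""
    return [i for i, (c, h) in enumerate(zip(chunks, manifest))
            if hashlib.sha256(c).hexdigest() != h]
```

The same manifest also enables parallel downloads: each chunk can be fetched from a different storage node and verified independently.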
The concepts and methodologies proposed by FICUS provide other
opportunities for enhancing a file system as well. For example, after making a
modification to a large file, it may be possible to save only the relevant chunk files
that have changed rather than the entire file. This technique has the potential to save dramatic amounts of bandwidth and storage space, especially when storing multiple versions of the same file. Special optimizer services could be designed to replicate the most frequently used chunk files to more stable, higher-powered machines, thus providing greater availability for frequently accessed data. Overall,
the concepts proposed by FICUS provide many different avenues for exploration to
enhance the scalability, reliability, and performance of distributed network file
systems.
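The chunk-level versioning idea mentioned above can be sketched with a content-addressed chunk store, in which a new version of a file re-stores only the chunks that changed; the class and the tiny chunk size are illustrative only:

```python
import hashlib

class ChunkStore:
    """Content-addressed chunk storage: identical chunks are stored once."""
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.blobs = {}  # digest -> chunk bytes

    def put_version(self, data: bytes):
        """Store a file version; return its manifest (list of digests)."""
        manifest = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.blobs.setdefault(digest, chunk)  # unchanged chunks reused
            manifest.append(digest)
        return manifest

    def get_version(self, manifest):
        return b"".join(self.blobs[d] for d in manifest)
```

Storing two versions that differ in a single chunk costs only one extra chunk of space, which is the saving the text describes for versioned files.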
References
[1] Berger M, Sobolewski M. SILENUS – A Federated Service-oriented Approach to
Distributed File Systems. In: Next Generation Concurrent Engineering. New York:
ISPE, Inc., 2005; 89-96.
[2] BitTorrent.org – BitTorrent Protocol Specification. Available at:
<http://www.bittorrent.org/protocol.html>. Accessed on: Mar. 8th 2007.
[3] Brian's BitTorrent FAQ and Guide. Available at: <http://www.dessent.net/btfaq/>.
Accessed on: Mar. 8th 2007.
[4] Edwards WK. Core Jini, 2nd edn. Prentice Hall, 2000.
[5] Foster I. The Grid: A New Infrastructure for 21st Century Science. In: Physics Today.
American Institute of Physics, 2002; 55(2): 42-47.
[6] Foster I, Kesselman C, Nick J, Tuecke S. The Physiology of the Grid: An Open Grid
Services Architecture for Distributed Systems Integration. Open Grid Service
Infrastructure WG. Global Grid Forum, June 22nd 2002.
[7] Foster I, Kesselman C, Tuecke S. The Anatomy of the Grid: Enabling Scalable Virtual
Organizations. International Journal of Supercomputer Applications, 2001; 15(3).
[8] Freeman E, Hopfer S, Arnold K. JavaSpaces™ Principles, Patterns, and Practice.
Addison-Wesley, 1999.
[9] The Globus Project. Available at: <http://www.globus.org>. Accessed on: Mar. 8th
2007.
Abstract: The major objective of the Service Oriented Computing Environment (SORCER)
is to form dynamic federations of network services that provide engineering data,
applications and tools on an engineering grid with exertion-oriented programming. To meet
the requirements of these services in terms of data sharing and management in the form of
data files, a corresponding federated file system, SILENUS, was developed. This
system fits the SORCER philosophy of interactive exertion-oriented programming,
where users create service-oriented programs and can access data files in the same
way they use their local file system. This paper gives a brief overview of SORCER
and then the SILENUS methodology is described.
Next, we discuss the SILENUS gateway, management, and data services with the related disconnected operation and data synchronization mechanisms. We also discuss experimental
results of the implemented system.
1 Introduction
In an integrated environment, all entities must first be connected, and they then
must work cooperatively. Services that support concurrency through
communication, team coordination, information sharing, and integration in an
interactive and formerly serial product development process provide the foundation
for any CE environment. Product developers need a CE programming and execution environment in which they can build programs from other developed programs, built-in tools, and persisted data describing how to perform complex design processes. Like any other services in the environment, a CE distributed file system can be structured as a collection of collaborating distributed services, enabling a robust, secure, and shared vast repository of engineering enterprise data.
432 Max Berger and Michael Sobolewski
Several systems exist to access data that is spread across multiple hosts.
However, with few exceptions, all of them require manual management and knowledge of the exact data location. Very few offer features like local caching or data replication.
Under the sponsorship of the National Institute for Standards and Technology (NIST), the Federated Intelligent Product Environment (FIPER) [12][13][11] was developed (1999-2003) as one of the first service-to-service (S2S) CE computing environments. The Service-Oriented Computing Environment (SORCER) ([15], [14], [16]) builds on top of FIPER to drastically reduce design cycle time and time-to-market by intelligently integrating elements of the design process and by providing true concurrency between design and manufacturing.
The systematic and agile integration of humans with the tools, resources, and
information assets of an organization is fundamental to concurrent engineering
(CE).
Two years ago we introduced a novel approach to sharing data across multiple service providers using dedicated storage, meta-information, replication, and optimization services in the SORCER/SILENUS environment [1], a service-oriented approach to distributed file systems. Access via WebDAV adds to the idea of heterogeneous interactive programming, where users, through their diverse operating system interfaces, can manage shared data files and folders. The same data can be accessed and updated by different service providers, and authorized users can monitor data processing activities executed by the service providers involved with cooperating WebDAV user agents. Like any other services in the P2P environment, the SILENUS services are also peers in the SORCER environment.
The paper is organized as follows. Section 2 provides a brief description of dynamic service object-oriented computing; section 3 describes the SILENUS methodology; section 4 presents disconnected operation and data synchronization;
section 5 describes experimental results using the NFS adapter; section 6 provides
concluding remarks.
2 SILENUS Methodology
SILENUS is based on a dynamic service object oriented architecture. As such, it
consists of individual service objects, which, when combined, provide the
SILENUS functionality. These components can broadly be categorized into
gateway components, data services, and management services. Figure 2 gives an
overview of the SILENUS architectural components.
To store data in the SILENUS file system, the following assumptions about the
data are made:
The Byte Store provides functionality for creating and retrieving file content. The
Byte Store does not provide file attribute storage. It does, however, provide support
for retrieving attributes that are derived from the file data. Such attributes include
file size and checksums. These can be used to verify the integrity of file contents.
The Byte Store provides fast access to the files stored on the provider’s host.
Stored files are usually encrypted, but can be stored unencrypted for performance
reasons.
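Such derived attributes follow directly from the content itself; a minimal sketch (the attribute names are illustrative, not the SILENUS interface):

```python
import hashlib

def derived_attributes(content: bytes) -> dict:
    """Attributes computable from the file data alone, usable to
    verify the integrity of stored file content."""
    return {
        "size": len(content),
        "sha256": hashlib.sha256(content).hexdigest(),
    }

def verify(content: bytes, expected: dict) -> bool:
    # Recompute the derived attributes and compare against the stored ones.
    return derived_attributes(content) == expected
```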
The Metadata Store provides functionality to create, list, and traverse directories. It also provides functionality to retrieve the file data location. File metadata is all the information that is either included in the actual file data or that can be derived from it, such as file name, creation date, file type, type of encryption, and so on. As a matter of fact, the file storage location, the file name, and even the directory a file is in are simply three more file attributes. This allows all these attributes to be handled in a standard way and persisted in the Byte Store's embedded relational database. Multiple versions of one file may exist in the database for recovery purposes.
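The observation that name, directory, and storage location are just three more attributes suggests a single uniform attribute table in an embedded relational database. A sketch with a hypothetical schema (not the actual SILENUS schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE metadata (
    file_id TEXT, version INTEGER, attr TEXT, value TEXT)""")

# Every metadata fact, including location and directory, is a uniform row;
# multiple versions of the same file coexist for recovery purposes.
rows = [
    ("f1", 1, "name",      "design.cad"),
    ("f1", 1, "directory", "/projects/engine"),
    ("f1", 1, "location",  "bytestore-07"),
    ("f1", 2, "name",      "design.cad"),
    ("f1", 2, "location",  "bytestore-03"),
]
conn.executemany("INSERT INTO metadata VALUES (?, ?, ?, ?)", rows)

def list_directory(path):
    """Directory listing is just a query over the 'directory' attribute."""
    cur = conn.execute(
        "SELECT DISTINCT file_id FROM metadata "
        "WHERE attr = 'directory' AND value = ?", (path,))
    return [r[0] for r in cur]
```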
4 Experimental Results
Fig. 3. Data collected for SILENUS performance using the NFS adapter in a 100 Mbit network. The NFS adapter was run locally (1.8 GHz Core Duo); the byte store on a remote machine (1 GHz AMD Duron).
Figure 3 shows that the performance of the SILENUS system is not so much
dependent on the actual file size but rather on the number of requests.
Lessons Learned from the SILENUS Federated File System 437
Creating an empty file is almost instant, but it still requires creating file metadata. Retrieving an empty file is instant, as there is no file content to retrieve. For small files, the time for creating the file is about 2 seconds, largely independent of the file size. Retrieving a file is much faster: no transaction is needed and no modifications are made. For a large file, the actual network performance shows up, as indicated in Figure 3. Without any overhead, a 100 MB file could be transferred in about 9.3 seconds. For file upload, the SILENUS system reaches 40% of the maximum network performance. For file download this increases to 56% of the maximum network performance.
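These percentages can be sanity-checked with simple arithmetic, assuming they are taken relative to the no-overhead transfer rate quoted above:

```python
# 100 MB transferred in 9.3 s without overhead gives the practical ceiling
raw_mbit_s = 100 * 8 / 9.3        # ~86 Mbit/s on the 100 Mbit network

# SILENUS reaches 40% (upload) and 56% (download) of that maximum, so a
# 100 MB transfer takes roughly 9.3 s divided by the respective fraction.
upload_seconds = 9.3 / 0.40       # ~23 s to upload 100 MB
download_seconds = 9.3 / 0.56     # ~17 s to download 100 MB
```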
Given the overhead of locating the file, transferring it from a byte store to the NFS adapter, and through the NFS protocol to the local host, these values are very satisfactory. For concurrent engineering environments these values are good enough to share large data files, such as CAD designs. As reading files is more efficient,
this system could be used with large files that must quickly be distributed to
multiple engineers.
5 Conclusions
This paper highlights the issues involved in designing and implementing federated
file systems and demonstrates the feasibility of such deployment in CE federated
environments. The presented SILENUS architecture shares the attributes of grid systems, P2P systems, and dynamic service object-oriented programming, and inherits the security provided by Java/Jini security services.
It is modularized into a collection of core distributed providers with multiple remote Facades. Facades supply uniform access points via their smart proxies, which are dynamically available to file requesters. A Facade smart proxy encapsulates inner proxies to federating providers that are accessed directly (P2P) by file requesters.
Core SILENUS services have been successfully deployed as SORCER services along with WebDAV and NFS adapters. The SILENUS file system scales very well, with a virtual disk space adjusted as needed by the corresponding number of byte store providers and the appropriate number of metadata stores to satisfy the needs of current users and service requesters.
Work is underway to improve upload and download speeds through a BitTorrent-like system with the FICUS framework [17].
The system handles very well several types of network and computer outages by
utilizing the presented disconnected operation and data synchronization
mechanisms. It provides a number of user agents, including a zero-install file browser (service UI) attached to the SILENUS Facade. This file browser, with file upload and download functions, is combined with an HTML editor and multiple viewers for documents in HTML, RTF, and PDF formats. A simpler version of the SILENUS file browser is also available for MIDP smart phones.
References
[18] Zhao S, Sobolewski M. Context model sharing in the FIPER environment. In: 8th Int. Conference on Concurrent Engineering: Research and Applications, Anaheim, CA, 2001.
A P2P Application Signatures Discovery Algorithm
Keywords. P2P application signatures discovery, sequence mining, digital search tree
1 Introduction
P2P is a kind of distributed network in which the participants share resources. P2P applications are now spreading rapidly across the Internet. P2P traffic already exceeds that of traditional web applications, occupying more than half of total network capacity. The huge growth of P2P traffic will soon bring with it a variety of security problems. We need to identify P2P systems in real time in order to discover and monitor P2P networks and block malicious information. There are two methods commonly used to identify P2P systems. One recognizes a P2P application by its network transmission behavior and does not analyze the payloads [1]. The other inspects the payloads and looks for signatures that match known P2P application signatures. To hide their flows and avoid port-based detection, some P2P applications often change ports. As more and more P2P applications can operate on any port, identification based on port numbers is no longer valid for P2P traffic [2]. Therefore, deep inspection must be carried out on every data packet; that is, the payloads of the TCP transport protocol should be examined, so that it can be judged whether the packets contain
1 Associate Professor, College of Computer Science and Technology, Beijing University of Technology, 100 Pingleyuan, Chaoyang District, 100022 Beijing, China; Tel: +86 (10) 6739 6063; Fax: +86 (10) 6739 1742; Email: ljduan@bjut.edu.cn
442 L.J. Duan, L. Han, Y.F. Yu and J. Li
the sample signatures. That is the main idea of Deep Packet Inspection (DPI) based on application signatures.
A Key Tree is also called a Digital Search Tree. It is a multiway tree (of degree greater than 2). Each node in the tree contains an element that is one character of a string, so the characters along each path from the root to a leaf node represent a string. The special symbol ‘$’ in a leaf node indicates the end of the string, and a variable in the leaf node counts the string's frequency.
A Key Tree has the following characteristics.
(1) The characters of a string are distributed along a path from the root node to a leaf node. Therefore, the size of the keyword set is irrelevant to the depth of the tree.
(2) A Key Tree is an ordered tree: for each internal node x, the elements stored in the left brother nodes of x are less than the elements stored in the right brother nodes of x. It is assumed that the symbol ‘$’ is less than any other symbol.
There are two storage structures for representing a Key Tree. The first is the child-brother list (double list); the second is the multilinked list (trie). This paper selects the first to implement the Key Tree.
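A minimal sketch of a Key Tree in the child-brother representation described above, with siblings kept in sorted order and the frequency counter held in the ‘$’ leaf (an illustration, not the authors' implementation):

```python
class Node:
    __slots__ = ("ch", "child", "sibling", "count")
    def __init__(self, ch):
        self.ch, self.child, self.sibling, self.count = ch, None, None, 0

class KeyTree:
    END = "$"  # terminator; in ASCII, '$' sorts before alphanumerics

    def __init__(self):
        self.root = Node("")  # dummy root

    def _child(self, parent, ch):
        # Find or create the child labelled ch, keeping siblings sorted.
        prev, cur = None, parent.child
        while cur is not None and cur.ch < ch:
            prev, cur = cur, cur.sibling
        if cur is not None and cur.ch == ch:
            return cur
        node = Node(ch)
        node.sibling = cur
        if prev is None:
            parent.child = node
        else:
            prev.sibling = node
        return node

    def insert(self, s):
        node = self.root
        for ch in s + self.END:
            node = self._child(node, ch)
        node.count += 1  # frequency counter lives in the '$' leaf

    def frequency(self, s):
        node = self.root
        for ch in s + self.END:
            cur = node.child
            while cur is not None and cur.ch != ch:
                cur = cur.sibling
            if cur is None:
                return 0
            node = cur
        return node.count
```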
minimum support, transforms the database, and then finds sequential patterns, but it does not handle time constraints. GSP [4] explores a candidate generation-and-test approach to reduce the number of candidates to be examined. These previously developed sequential pattern mining methods have been applied to customer databases, DNA databases, and the like.
For P2P applications, we wish to find sequential patterns in payloads. The difference from traditional sequence mining applications is that location information must be considered: byte values that appear at specific positions are application signatures, while the same values at other positions are just general data. Based on this peculiarity, this paper designs a new algorithm that uses the idea of sequence mining and the statistical function of the Key Tree to discover the application signatures of network packets. Since the algorithm is very similar to the Apriori algorithm, it faces the same problem: a scan of the original database is needed on each pass, incurring a great deal of I/O and hurting efficiency. The Key Tree solves exactly this problem of massive I/O: only one I/O pass is needed to read the payloads, and subsequence selection can then obtain its data from the Key Tree.
As a result of using the Key Tree's structure, the data is compressed: when large amounts of data held in external storage are added to the tree, the memory space occupied is smaller than the external storage, yet the original data can still be recovered without information loss. The statistical function of the Key Tree can be used directly, so no additional statistics program needs to be designed. The implementation discovers frequent itemsets in steps similar to traditional sequence mining. First it finds the frequent 1-sequences L1, which occur at the same positions in different payloads. Then it joins L1 into C2, the set of candidate 2-sequences. It deletes all sequences c ∈ C2 such that some 1-subsequence of c is not in L1, obtaining L2. It continues in the same way until some Ck turns out to be empty, and then outputs all frequent k-sequences.
After simple processing, the protocol type, the source IP address, the source port, the destination IP address, the destination port number, and the DATA segment are obtained from each packet.
In fact, counting corresponding bytes at the same location can be performed column by column. So we establish a Key Tree for each column and store the Key Trees in a linked list (each node of the list is a Key Tree). The values in a column, together with the identifiers of the packets containing each value, are stored in the nodes of the Key Tree; to track the packet identifiers of each value, the identifiers are also kept in a linked list. After all Key Trees are created, the frequent 1-sequences can be found using the statistical function of the Key Tree. By itself, however, this finds only application signatures that are one byte long; it does not organize an effective memory structure for discovering long frequent sequences.
3 Experiments
In order to obtain results under different conditions, we test the algorithm using simulated data. The input simulated data is as follows.
> 01 23 03 23 44
> 23 23 03 33
> 01 23 93 84 77
> 01 23 03 43 44
> 66 32 03 45 67
> 55 44 93 57 83
> 44 02 03 04 42
It contains the DATA segments of 7 data packets. We can see that ‘01 23 03 44’ occurs in the 0th, 1st, 2nd, and 4th bytes of the 1st and 4th packets, and that ‘01 23’ and ‘23 03’ each appear three times. The most frequent byte is ‘03’, which occurs five times.
If the min-support is 25%, the output is ‘[01, 23, 03, 44] [0, 1, 2, 4] [1, 4]’. Here, [01, 23, 03, 44] represents the application signature; [0, 1, 2, 4] represents the locations of the application signature; and [1, 4] represents the 1st and 4th packets.
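The position-aware, Apriori-style search can be sketched as follows; for simplicity the supporting packet-id sets are kept in plain dictionaries rather than in Key Trees, and only maximal frequent patterns are reported:

```python
def mine_signatures(packets, min_support):
    """Find maximal byte patterns occurring at fixed positions in at least
    min_support of the packets. Returns {((pos, byte), ...): [packet ids]}
    with 1-based packet ids, as in the example above."""
    threshold = min_support * len(packets)
    # Frequent 1-sequences: (position, byte value) -> set of packet ids.
    occ = {}
    for pid, pkt in enumerate(packets, 1):
        for pos, b in enumerate(pkt):
            occ.setdefault((pos, b), set()).add(pid)
    freq_items = sorted(k for k, v in occ.items() if len(v) >= threshold)
    # Apriori-style growth: extend each pattern with items at later
    # positions, intersecting the supporting packet-id sets.
    patterns, level = {}, {(k,): occ[k] for k in freq_items}
    while level:
        patterns.update(level)
        nxt = {}
        for pat, ids in level.items():
            for k in freq_items:
                if k[0] > pat[-1][0]:
                    both = ids & occ[k]
                    if len(both) >= threshold:
                        nxt[pat + (k,)] = both
        level = nxt
    # Keep only maximal patterns (not contained in a longer frequent one).
    maximal = [p for p in patterns
               if not any(set(p) < set(q) for q in patterns)]
    return {p: sorted(patterns[p]) for p in maximal}
```

Run on the seven sample packets with min-support 25%, this sketch yields the signature [01, 23, 03, 44] at positions [0, 1, 2, 4] in packets 1 and 4 (the single byte ‘93’ at position 2 also meets the threshold).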
We also test the algorithm on real data captured from the network while the Skype application logs in. It is known that the 3rd byte in the DATA segments of many of these packets is ‘02’. The algorithm finds the sequence [02] [2] [7, 8, 55, 56, 59, 60, 65, 66, 140, 142, 143, 144, 145, 146, 148, 149, 150, 152, 153, 156, 158, 159, 176, 177, 178, 179, 180, 181, 298, 299, 300, 301]. ‘[02]’ represents that ‘02’ is the application signature. ‘[2]’ represents that the location of the signature is position 2; in fact it is the 3rd byte of the DATA segment, since indexing starts from 0. The final list gives the identifiers of the packets whose 3rd byte is ‘02’. Comparing with the real data, we find that the result is correct. For example, the DATA segment of the 7th packet is ‘03 3f 02 3b 8a 55 97 05 12 1a 6c 7d 46 97 51 5c 24 91 6f b3 eb 9e a3’.
4 Conclusions
This paper presents an application signature discovery algorithm that can be regarded as a variant of the Apriori algorithm. The design of the new algorithm does not follow the Apriori mode completely, because it must focus on location information. The kernel idea of the algorithm comes from Apriori, but the coding is completely independent. We verified the completeness of the algorithm through tests on simulated data, and tests on real data show the algorithm's value in practical applications. Although the algorithm does not yet discover low-frequency signatures well, it provides an effective method for discovering high-frequency signatures. On the other hand, since the algorithm adopts the idea of Apriori, it still cannot avoid some of Apriori's drawbacks: the efficiency of discovering extremely long sequences is low. Since P2P application signatures are not very long, this inefficiency in such extreme cases can be tolerated. Meanwhile, we can also set a limit on the length of signatures, so that segments of extremely long sequences can be obtained quickly, without a long wait.
Acknowledgement
This research is partially sponsored by the National 242 Information Security Project (2005C47) and the Beijing Municipal Education Committee (No. KM200610005012).
References
[1] Karagiannis T, Broido A, Faloutsos M, Claffy KC. Transport Layer Identification of P2P Traffic. In: proceedings of the 4th ACM SIGCOMM conference on Internet measurement, Taormina, Sicily, Italy, ACM Press, 2004; 121–134
[2] Sen S, Spatscheck O, Wang DM. Accurate, Scalable In-Network Identification of P2P Traffic Using Application Signatures. In: proceedings of the 13th international conference on World Wide Web, New York, NY, USA, ACM Press, 2004; 512–521
[3] Agrawal R, Srikant R. Mining Sequential Patterns. In: Yu PS, Chen ALP, eds; Proc. of
the 11th Int'l Conf. on Data Engineering, ICDE'95; Taipei: IEEE Computer Society,
1995; 3-14
[4] Srikant R, Agrawal R. Mining Sequential Patterns: Generalizations and Performance Improvements. In: Apers PMG, Bouzeghoub M, Gardarin G, eds; Proc. of the 5th Int'l Conf. on Extending Database Technology: Advances in Database Technology; London: Springer-Verlag, 1996; 3-17
Knowledge Management
Knowledge Oriented Process Portal for Continually
Improving NPD
Abstract. Business process management integrated with new product development (NPD) provides practices that help companies improve their competitiveness. However, few companies know the benefits of these practices, and few have the culture of systematically sharing knowledge about them to continually improve their NPD. In order to encourage companies to use process management with systematic knowledge sharing, this paper proposes the development of a knowledge-oriented process portal. This portal comprises information related to generic NPD reference models and their continual renewal using the body of knowledge (BOK) made available by a community of practice (CoP).
1. Introduction
New product development (NPD) is a business process aimed at converting needs
into technical and commercial solutions. To accomplish an efficient NPD it is
important to define and to manage it in agreement with business process
management (BPM). BPM along with strategic planning provide inputs to monitor
business process performance indicators. This combination may indicate the way
to perform necessary changes to continually improve NPD. The definition of a
NPD reference model promotes a single vision of product development to all actors
in the process. This vision has the purpose of leveling important knowledge
between the stakeholders.
Existing elements of the reference model, i.e. tools, methods, and templates, can be adapted to the company's NPD maturity level. The knowledge associated with these elements should be dynamic, that is, it should be updated in agreement with the
creation of new solutions and corresponding information. Thus, it is possible to
1 University of São Paulo, Trabalhador São Carlense Av., 400, NUMA, São Carlos, SP, Brazil; Tel.: +55-16-3373.9433; E-mail: andreapjubileu@gmail.com
452 A. P. Jubileu, H. Rozenfel, C. S. T. Amaral, J. M. H. Costa, M. L. S. Costa
Business Process Management (BPM) is about both business and technology [15].
BPM is regarded as a best practice management principle to help companies
sustain a competitive edge. Based on a holistic perspective, BPM combines Total
Quality Management (TQM) that is incremental, evolutionary and continuous in
nature and Process Re-engineering that is radical, revolutionary and a one-time
undertaking, and is regarded as suitable for performance improvement in most
circumstances [7]. In the so called third wave the goal is to apply BPM toward
Knowledge oriented Process Portal for Continually Improving NPD 453
innovation [15]. On the other hand there are author that emphasizes BPM as a
technological tool, seen as an evolution of workflow management systems [1].
In this case the BPM life-cycle has four phases: process design, system configuration, process enactment, and diagnosis. The focus of traditional workflow management systems is on the process design and process enactment phases of the BPM life-cycle. This definition of BPM extends the traditional workflow management (WFM) approach by supporting the diagnosis phase and allowing for new ways to support operational processes [1, 2].
Since the scope of this paper is to integrate NPD process improvement into a broader BPM framework, the focus is on the business vision of BPM. The implementation of BPM technology is nevertheless a very important issue, though here it is considered within Product Lifecycle Management (PLM) systems.
Hung [7] defines BPM as an integrated management philosophy and set of
practices that incorporates both incremental and radical changes in business
processes, and emphasizes continuous improvement, customer satisfaction and
employees’ involvement.
BPM is a structured and systematic approach to analyze, improve, control and
manage processes to increase the quality of products and services. This approach
depends on the alignment of business operations with strategic priorities,
operational elements, use of modern tools and techniques, people involvement and,
most importantly, on a horizontal focus that can best suit and deliver customer
requirements in an optimum and satisfactory way [10, 17].
New product development (NPD) is often recognized as the key process to
enhance competitiveness [4]. To this end, NPD improvement should be in harmony
with BPM practices to ensure the commitment with the company’s strategic goals.
process models by adapting the generic reference model to their contexts. The
instances may be created based upon standard processes for many types of
projects. It is possible to obtain with a reference model a single vision of product
development and, thus, to equalize the knowledge among all stakeholders
participating in a specific development [13].
Some examples of NPD reference models with different levels of detail are: the PDPnet reference model [13], the MIT Process Handbook [9], and the Capability Maturity Model Integration (CMMI) [16]. The PDPnet model, which will be described in Section 5, was adopted as the generic NPD reference model in this work. This model synthesizes the best NPD practices [13]. This reference model contains the
following phases: product strategic planning; design planning; informational
planning; conceptual design; detailed design; product production preparation;
product launching; product and process follow-up; product discontinuation. The
model highlights the integration of strategic planning and portfolio management;
the incorporation of PMBOK concepts [12] into the planning phase; definition of
integrated cycles for detailing, acquisition and optimization of products in the
detailed design phase; insertion of optimization activities; validation of productive
processes and techniques to meet ergonomic and environment requirements; and
integration of product launching phase where other business processes such as
technical assistance and sales processes are defined and implemented.
Existing elements in the reference model—i.e., tools, methods, templates, best
practices—can be adapted to the company’s NPD maturity level. This knowledge
should be dynamic, that is, it should be updated to be consistent with new
information. Some ways to share knowledge—knowledge sharing concepts
(strategic resource to companies) and portals (one of the web-based tools)—are
explained in the next section.
5.1 Objectives
5.2 Methodology
5.3 Requirements
Figure 1 shows the functional structure of the portal. The idea is that
users/companies could refer to a model—initially the PDP reference model [13]—
existing in the portal. The model contains information about tools, methods,
procedures or routines, and document templates to assist in the accomplishment of
the activities.
The portal may also be used by users/companies as an instance to create their
knowledge bases using the structure developed in the portal. Once users create
their standard process models, the information in them could be modified or
criticized when allowed by the companies/users.
Support for users in improving the knowledge related to standard process models
will be provided by a community of practice (CoP), which will allow users to
contact NPD specialists and to access links to relevant websites, papers, books
and other research material. The portal will thus have a dynamic nature,
promoting knowledge sharing among users, who in turn can improve the bodies of
knowledge (BOKs) behind their standard process models. At the moment, CoP
support is provided by means of PDPNet.
Knowledge oriented Process Portal for Continually Improving NPD 457
The integration with modeling tools allows the conversion of textual models into
graphical models and vice versa.
6. Conclusion
In the context of NPD management under BPM principles, portals can be used to
integrate people and to provide collaborative access to information and
knowledge, with a cycle of knowledge retention, use and sharing. This cycle
generates new knowledge, which is incorporated into the NPD reference model
activities, continually increasing companies' capability in product development.
It is hoped that knowledge about NPD reference models will be used by a larger
number of companies, since the content of the portal is open and free.
The integration between the NPD reference model and the dynamic BOK allows
users to select the knowledge most appropriate to their needs and even to define
their own standard NPD process. Moreover, a company can include part of the
available BOK in its standard NPD process and thereby leverage the competency
of its NPD team members. Participation in the community of practice is important
to keep the company up to date. Being free and open makes the portal usable by
small and medium enterprises (SMEs), which sometimes cannot afford to hire
consultants to improve their NPD processes.
458 A. P. Jubileu, H. Rozenfel, C. S. T. Amaral, J. M. H. Costa, M. L. S. Costa
Acknowledgement
The authors are grateful to GEI2 colleagues for suggestions and to CNPq for the
financial support.
References
[1] Aalst, W. M. P. van der; Hofstede, A. H. M. ter; Weske, M. Business Process
Management: A Survey. BPM Center Report BPM-03-02, BPMcenter.org, 2003,
http://is.tm.tue.nl/staff/wvdaalst/BPMcenter/reports/2003/BPM-03-02.pdf.
[2] Aalst, W. M. P. van der. Business Process Management Demystified: A Tutorial on
Models, Systems and Standards for Workflow Management. In: Desel, J.; Reisig, W.;
Rozenberg, G. (Eds): ACPN 2003, LNCS 3098, pp. 1-65, Springer-Verlag Berlin
Heidelberg, 2004.
[3] Benbya, H.; Passiante, G.; Belbaly, N. A. Corporate portals: a tool for KM
synchronization. Inter. Journal of Inf. Manag., 24, 201–220, 2004.
[4] Büyüközkan, G; Baykasoglu, A.; Dereli, T. Integration of Internet and web-based
tools in new product development process. Production Planning & Control, 18(1), 44–
53, 2007.
[5] Clark, K. B.; Fujimoto, T. Product Development Performance: strategy,
organization and management in the world auto industry. Boston: Harvard
Business School Press, 1991.
[6] Hahn, J.; Subramani, M. R. A Framework of KMS: Issues and Challenges for
theory and practice,
http://ids.csom.umn.edu/faculty/mani/Homepage/Papers/Hahn&Subramani_ICIS2000.
pdf.
[7] Hung, R. Y. Business process management as competitive advantage: a review and
empirical study. Total quality management. 17(1), 21–40, Jan, 2006.
[8] Kruchten, P. The Rational Unified Process: An Introduction. 3rd ed. Harlow:
Addison-Wesley, 2003.
[9] Malone, T. W.; Crowston, K.; Herman, G. Organizing Business Knowledge: The
MIT Process Handbook. Cambridge, MA: MIT Press, 2003.
[10] Mckay, A.; Radnor, Z. A characterization of a business process.
International Journal of Operational and Production Management, 18(9/10), 924-936,
1998.
[11] Özer, M. The role of the internet in new product performance: a conceptual
investigation. Indust. Market. Manage., 33, 355–369, 2004.
[12] PROJECT MANAGEMENT INSTITUTE. A guide to the project
management body of knowledge (PMBOK guide), Pennsylvania, 2002.
[13] Rozenfeld, H.; Forcellini, F. A.; Amaral, D. C.; Toledo, J. C. de; Silva, S.
L. da; Alliprandini, D. H.; Scalice, R. K. Gestão de Desenvolvimento de Produtos –
Uma referência para a melhoria do processo. São Paulo: Saraiva, 2006.
[14] Smith, M. A. Portals: toward an application framework for interoperability.
Commun. ACM, 47(10), 93–97, 2004.
[15] Smith, H.; Fingar, P. Business Process Management: The Third Wave.
Meghan-Kiffer Press, 2003.
Abstract. Potential Failure Modes and Effects Analysis in Manufacturing and
Assembly Processes (PFMEA) is an important preventive method for quality
assurance, in which several specialists investigate the causes and effects of all
possible failure modes of a manufacturing process while it is still in the initial
phases of its development. Decisions can thus be planned and prioritized based on
the severity of effects and on the probabilities of occurrence and detection of the
failure modes. The result of this activity is a valuable source of knowledge about
manufacturing processes. However, this knowledge is hardly reusable in
intelligent retrieval systems, because the related information is generally recorded
in natural language and is not semantically organized, so its meaning depends on
the understanding of the specialists involved in the production chain. In this
context, this paper describes the development and implementation of a formal
ontology based on description logic (DL) for knowledge representation in the
PFMEA domain, which fundamentally aims to enable computational inference
and ontology-based knowledge retrieval in support of organizational knowledge
activities in manufacturing environments with distributed resources.
1 Introduction
Potential Failure Mode and Effects Analysis in Manufacturing and Assembly
Processes (PFMEA) is an analytical quality engineering method for analyzing a
manufacturing process, while it is still in the initial phases of its development, in
order to identify all of the potential failure modes and their causes and effects on process
1 Universidade Federal de Santa Catarina, Departamento de Engenharia Mecânica,
GRIMA/GRUCON, Caixa Postal 476, CEP 88040-900, Florianópolis, SC, Brazil; Tel: +55
(48) 3721-9387 extension 212; Fax: +55 (48) 3721-7615; Email: jcarlos@emc.ufsc.br;
http://www.grima.ufsc.br
462 W. L. Mikos, J. C. E. Ferreira
The PFMEA-DL ontology proposed in this work was developed, in its conceptual
phase, in consonance with the concepts and terms established in the SAE J1739
standard [15] and in the AIAG reference manual [2], both widely used in quality
engineering. The knowledge domain was thus modeled by describing concepts
and their relationships, starting from seven main concept elements: Product,
Process, Function, Failure, Actions, FMEA Description and Images.
In this scenario, the Product Concepts represent the domain of the product model,
particularly its logical structure, and correspond to the classes and subclasses of
products. The Process Concepts represent the logical and temporal structure of the
processes and their respective operations and equipment for a given industrial
plant. The Function Concepts comprise a model of functions associated with each
process or operation.
The Failure Concepts represent the fundamental concepts and relationships of the
PFMEA method, including potential failure mode, effect of failure and causes of
failure. In an innovative way, the ontology links the concept of potential failure
mode with the concepts of primary and secondary identifiers, as well as with the
allocation of the failure in a feature model as proposed by Shah and Mäntylä [14].
This is done in order to reduce possible ambiguity between instances of the
concept Potential Failure Mode and to increase the expressiveness of the semantic
representation and the capacity of the inference and knowledge retrieval services.
Figure 1 illustrates the model of concepts and roles (binary relationships) among
instances for the Failure Concepts.
(Figure 1. Failure Concepts model: the Potential Failure Mode, related through
isRelatedToFunction to an Operation Function within the Process Concepts
(Process, Sub Process, Operation), is linked by the roles hasPrimaryID to Primary
Identifier, hasSecondaryID to Secondary Identifier, hasFailureCause to Potential
Causes of Failure, hasFeature to Location of Failure, and
isRecommendedToFailureMode.)
failure or presence of other factors or specific means. For example: Failure Mode:
Direct chemical attack; Primary Identifier: Corrosion; Secondary Identifier: surface
exposed to corrosive media [19].
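The identifier scheme above can be sketched as a simple record; the field names mirror the roles in the text but are illustrative, not the ontology's actual vocabulary, and the feature location is invented for the example:

```python
# Hypothetical sketch of a Failure Concepts record: a potential failure mode
# disambiguated by primary and secondary identifiers and allocated to a
# feature-based location [14]. Field names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class PotentialFailureMode:
    name: str
    primary_id: str      # role hasPrimaryID
    secondary_id: str    # role hasSecondaryID
    location: str        # role hasFeature -> Location of Failure

# The example from the text [19]:
fm = PotentialFailureMode(
    name="Direct chemical attack",
    primary_id="Corrosion",
    secondary_id="Surface exposed to corrosive media",
    location="Exposed outer surface",  # illustrative feature allocation
)
```

Disambiguating by both identifiers means two failure modes with the same name but different locations or media remain distinct instances.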
The Action Concepts represent the concepts and relationships resulting from the
risk analysis of the PFMEA method, such as current process controls for
prevention and detection, the rating criteria (severity, occurrence and detection
scales), the risk priority number, recommended actions, actions taken and
responsibilities. The FMEA Description represents the remaining concepts
regarding the core teams and responsibilities.
Finally, the ontology includes the Image Concepts, whose objective is to
represent concepts and relationships such as material description, metallographic
preparation and material processing history, besides concepts related to image
type and image source, allowing semantics-based indexing of images related to a
failure effect.
The PFMEA-DL ontology was implemented in OWL DL (Web Ontology
Language - Description Logic), the standard ontology language developed by the
W3C [20], which combines great expressive power with the inference services
common to description logics [9], using the Protégé-OWL ontology editor [12].
In this context, Figure 2 presents the logical structure of classes and subclasses
modeled in the Protégé-OWL editor, demonstrating the application of a property
restriction to the OWL subclass PotentialFailureMode.
It is important to observe that the existential quantifier restriction applied in
Equation 1 is analogous to the existential quantifier of predicate logic. It describes
an anonymous ("unnamed") class that restricts the individuals (instances) of the
subclass PotentialFailureMode to those connected to individuals of the
PotentialCausesOfFailure subclass through the hasFailureCause object property,
which is determined automatically by the inference service of the reasoning
engine.
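Equation 1 itself is not reproduced above; assuming it follows the standard description-logic existential-restriction pattern just described, it would take the form:

```latex
\mathit{PotentialFailureMode} \sqsubseteq
  \exists\, \mathit{hasFailureCause}.\,\mathit{PotentialCausesOfFailure}
\qquad (1)
```

In OWL DL terms this corresponds to an `owl:someValuesFrom` restriction on the `hasFailureCause` property.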
? (RETRIEVE
(?POTENTIAL_CAUSES_OF_FAILURE ?POTENTIAL_END_EFFECT_OF_FAILURE)
(AND (?POTENTIAL_CAUSES_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#PotentialCausesOfFailure|)
(?POTENTIAL_END_EFFECT_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#EndEffect|)
(?POTENTIAL_CAUSES_OF_FAILURE
?POTENTIAL_END_EFFECT_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#isRelatedToEndEffect|)
(?POTENTIAL_CAUSES_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#Oil_remainder_on_cage|
|http://www.owl-ontologies.com/Ontology1144700912.owl#isACauseOfFailureMode|)
(?POTENTIAL_END_EFFECT_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#Oil_remainder_on_cage|
|http://www.owl-ontologies.com/Ontology1144700912.owl#isAEndEffectOfFailureMode|)))
> (((?POTENTIAL_CAUSES_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#Temperature_to_low_in_washing_bath|)
(?POTENTIAL_END_EFFECT_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#Stained_phosphating|))
((?POTENTIAL_CAUSES_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#Spooling_mouthpiece|)
(?POTENTIAL_END_EFFECT_OF_FAILURE
|http://www.owl-ontologies.com/Ontology1144700912.owl#Stained_phosphating|))
Figure 3. Complex ABox query and its results (RacerPro log snapshot)
Knowledge sharing and reuse in PFMEA domain 467
The objective of this complex query (Figure 3) is to retrieve all the potential end
effects and potential causes of failure for a given potential failure mode, in this
case the instance “Oil remainder on cage”. It is important to observe that the
query combines concept atoms (#PotentialCausesOfFailure) and role atoms
(#isRelatedToEndEffect) through the query constructor “and”.
The end effect is the impact of a possible failure mode at the highest process
level; it is evaluated by analyzing all intermediate levels and may result from
multiple failure modes. Thus, in Figure 3, the instance “Temperature too low in
washing bath” is a potential cause of the failure mode “Oil remainder on cage”,
and its respective end effect is the instance “Stained phosphating”.
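The logic of the conjunctive query can be replayed, as a hypothetical sketch, over an in-memory set of role assertions; the instance and role names follow Figure 3, but the triple representation is illustrative and is not RacerPro's API:

```python
# Hypothetical replay of the Figure 3 query over (subject, role, object) triples.
triples = {
    ("Temperature_to_low_in_washing_bath", "isACauseOfFailureMode", "Oil_remainder_on_cage"),
    ("Spooling_mouthpiece", "isACauseOfFailureMode", "Oil_remainder_on_cage"),
    ("Stained_phosphating", "isAEndEffectOfFailureMode", "Oil_remainder_on_cage"),
    ("Temperature_to_low_in_washing_bath", "isRelatedToEndEffect", "Stained_phosphating"),
    ("Spooling_mouthpiece", "isRelatedToEndEffect", "Stained_phosphating"),
}

def causes_and_end_effects(failure_mode):
    """Return (cause, end_effect) pairs for a given potential failure mode,
    mirroring the conjunction of concept and role atoms in Figure 3."""
    causes = {s for s, r, o in triples
              if r == "isACauseOfFailureMode" and o == failure_mode}
    effects = {s for s, r, o in triples
               if r == "isAEndEffectOfFailureMode" and o == failure_mode}
    # Keep only pairs actually connected by the isRelatedToEndEffect role.
    return {(c, e) for c in causes for e in effects
            if (c, "isRelatedToEndEffect", e) in triples}

result = causes_and_end_effects("Oil_remainder_on_cage")
```

Running this yields the same two cause/end-effect bindings shown in the RacerPro log snapshot.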
In addition to ABox queries, nRQL provides complex TBox queries to search for
certain patterns of sub/superclass relationships in a taxonomy (OWL document) [8].
4 Summary
This paper presented the development of a formal ontology based on description
logic (DL) for knowledge representation in the PFMEA domain. The proposed
ontology was developed, from the conceptual point of view, in consonance with
concepts and terms widely recognized in the quality context.
The proposed ontology was implemented in OWL DL (Web Ontology Language
- Description Logic), and the RacerPro server was used as the core engine for
reasoning services and nRQL query processing.
The functional evaluation showed the semantic consistency and applicability of
the proposed ontology for supporting knowledge sharing and reuse in the PFMEA
domain; the ontology is also promising as support for organizational knowledge
activities in manufacturing environments with distributed resources.
Finally, the proposed ontology was developed to serve as a terminological
component (TBox) for other specific knowledge bases aimed at complex
manufacturing processes (e.g., thermoplastic injection moulding), in future
ontology-based knowledge retrieval systems, and for agent-mediated knowledge
management.
5 References
[1] Abdullah MS, Kimble C, Benest I, Paige R. Knowledge-based systems: a re-evaluation.
Journal of Knowledge Management 2006;10;3;1212-142.
[2] Automotive Industry Action Group (AIAG). Reference Manual: Failure Modes and
Effects Analysis, 3rd edition. 2001.
[3] Baader F, Nutt W. Basic Description Logics. In: Baader F, editor. Description Logic
Handbook: Theory, Implementation and Applications. New York USA: Cambridge
University Press, 2003; 43-95.
[4] Baader F, Sattler U. An Overview of Tableau Algorithms for Description Logics.
Studia Logica 2001;69;1;5-40.
[5] Dittmann L, Rademacher T, Zelewski S. Performing FMEA using ontologies. In: 18th
International Workshop on Qualitative Reasoning. Evanston USA, 2004;209– 216.
[6] Fernández-López M, Gómez-Pérez A, Pazos-Sierra A, Pazos-Sierra J. Building a
chemical ontology using METHONTOLOGY and the ontology design environment.
IEEE Intelligent Systems & their applications 1999;4;1;37–46.
[7] Gangemi A, Catenacci C, Ciaramita M, Lehmann J. Modelling Ontology Evaluation
and Validation. In: 3rd European Semantic Web Conference. The Semantic Web:
Research and Applications. Lecture Notes in Computer Science, vol 4011, Springer,
2006;140-154.
[8] Horrocks I. Application of Description Logics: State of Art and Research Challenges.
In: 13th International Conference on Conceptual Structures. Lectures Notes in Artificial
Intelligence, vol 3596, Springer, 2005;78-90.
[9] Horrocks I. OWL: A Description Logic Based Ontology Language. In: 11th
International Conference on Principles and Practice of Constraint Programming.
Lecture Notes in Computer Science, vol 3709, Springer, 2005;5-8.
[10] Lee B. Using FMEA models and ontologies to build diagnostic models. Artificial
Intelligence for Engineering Design, Analysis and Manufacturing 2001;15;281-293.
[11] Lennartsson M, Vanhatalo E. Evaluation of possible SixSigma implementation
including a DMAIC project: A case study at the cage factory, SKF Sverige AB. Master
Thesis, Lulea University of Technology, 2004.
[12] Protégé-OWL Editor. Available at: <http://protege.stanford.edu>. Accessed on: Feb.
15th 2006.
[13] Racer Systems, RacerPro - User’s Guide Version 1.9. Available at: < http://www.racer-
systems.com>. Accessed on: Feb. 10th 2006.
[14] Shah J, Mäntylä M. Parametric and Feature-Based CAD/CAM: concepts, techniques
and applications. New York: John Wiley and Sons Inc., 1995.
[15] Society of Automotive Engineers International. SAE J1739, Potential Failure Mode and
Effects Analysis, PA, USA, 2002.
[16] Stamatis D. Failure Mode and Effect Analysis: FMEA from Theory to Execution. New
York: ASQ Quality Press, 2003.
[17] Teoh PC, Case K. Failure modes and effects analysis through knowledge modeling.
Journal of Materials Processing Technology 2004;153;253–260.
[18] Tsarkov D, Horrocks I. Optimised Classification for Taxonomic Knowledge Bases. In:
International Workshop on Description Logics. CEUR Workshop vol 147, 2005.
[19] Tumer IY, Stone RB, Bell DG. Requirements for a failure mode taxonomy for use in
conceptual design. In: International Conference on Engineering Design, Stockholm,
Sweden, Paper N. 1612, 2003.
[20] W3C: World Wide Web Consortium, Web Ontology Language (OWL). Available
<http://www.w3.org/2004/OWL/>. Accessed on: Feb. 15th 2006.
[21] Zúñiga G. Ontology: its transformation from philosophy to information systems. In:
International Conference on Formal Ontology in Information Systems, ACM Press,
New York, 2001.
Collaboration Engineering
Collaborative Product Pre-development: an
Architecture Proposal
Abstract. Nowadays, designers usually interact with teams of distributed stakeholders
through information and communication technology, aiming at time and cost reductions and
quality improvement. However, a lack of collaboration and knowledge management support
mechanisms persists, especially in product pre-development. Best practices for product
pre-development are still ill-defined, because the information available to designers in this
phase is still unstable and too abstract. Portfolio management highlights reasons,
restrictions, tendencies and impacts, using competitive intelligence insights from a
knowledge management perspective, in order to classify project proposals in accordance
with the organizational strategy. An agreement about what is really important to the
organizational strategy, along with the appointment of the right team, can help empower
portfolio management decisions. To achieve such an agreement, it is necessary to
understand the different viewpoints in the negotiation process, reducing impositions and the
dependence on senior professionals with established reputations. The proposed architecture
can contribute to commitment in portfolio management, increasing the rate of right
decisions and the support for them, and enabling coherence in similar situations.
Collaborative product pre-development can extend an organization's capacity to obtain
competitive advantages, because a consistent pre-development results in smaller deviations
in subsequent phases of new product development.
1 Introduction
Globalization and technological evolution call for adaptive organizations.
Product lifecycles are increasingly short, requiring more agility and flexibility
from project teams in new product development (NPD). The great challenge is
how to make collaboration feasible in the early NPD phases, when vague and
incomplete information makes collaboration hard [1].
1 Product and Process Engineering Group (GEPP), Federal University of Santa Catarina
(UFSC), Postal Box 476, 88040-900, Florianopolis-SC, Brazil; Tel: +55 (48) 3331 7063;
Email: moeckel@emc.ufsc.br, forcellini@deps.ufsc.br; http://www.gepp.ufsc.br
472 A. Moeckel, F. A. Forcellini
4 Product Pre-development
Product pre-development is characterized basically by the definition of the
projects that will be developed in the organization. The mission of
pre-development is to guarantee that the strategic direction, stakeholders' ideas,
opportunities and constraints are systematically mapped and transformed into a
project portfolio [3].
In product innovation management, portfolio management represents the
expression of the business strategy, defining where and how resources will be
invested in the future [11]. Project selection can involve value measurement
approaches as well as other decision criteria, for example, building competence in
a strategic area that may be important for the organization's survival [12]. The
complexity involved in product pre-development requires the accumulation of
know-how and wisdom, and tools that can adequately support designers in the
initial phases of new product development are highly desirable [2]. The efficacy
of portfolio management can be improved by using collaborative systems in
product innovation, through the overall visibility offered in decision-making and
through the stimulus to share ideas, increasing commitment and minimizing
domination.
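As a hypothetical illustration of a value-measurement approach to project selection, proposals can be ranked by a weighted score over strategic criteria; the criteria names, weights and scores below are invented for the sketch and are not taken from the cited studies:

```python
# Hypothetical weighted-scoring sketch for portfolio ranking.
# Criteria, weights (summing to 1.0) and scores (0-10) are illustrative.
weights = {"strategic_fit": 0.5, "expected_value": 0.3, "competence_building": 0.2}

proposals = {
    "Project A": {"strategic_fit": 8, "expected_value": 6, "competence_building": 9},
    "Project B": {"strategic_fit": 5, "expected_value": 9, "competence_building": 4},
}

def score(p):
    """Weighted sum of a proposal's criterion scores."""
    return sum(weights[c] * p[c] for c in weights)

ranking = sorted(proposals, key=lambda name: score(proposals[name]), reverse=True)
```

A collaborative system could let stakeholders negotiate the weights themselves, which is precisely where the agreement discussed above matters.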
5 Architecture Overview
A system for collaboration support in product innovation management should
focus on the portfolio and should provide a dynamic structure that encourages the
generation and capture of abstract ideas, which can later be expressed as concrete
functional specifications [6]. It should offer a universal workspace, accessed via
the Web, with integrated resources for project management, document
management, and project agenda and calendar [5], among others. Problem solving
should occur not only reactively but also proactively. Beyond supporting portfolio
management, the architecture should incorporate techniques and best practices
that stimulate the alignment of project goals, the effective use of knowledge and
information, and the improvement of decision-making ability [6]. Figure 1
illustrates a general view of the proposed architecture.
(Figure 1. General view of the proposed architecture, involving expertise,
competences and knowledge.)
6 Related Works
[19] presents POPIM (Pragmatic Online Project Information Management), a
collaborative management system for product development projects in extended
environments. Its structure offers a shared workspace with online access to
information, improving team communication, sharing and collaboration on
projects. The system supports collaboration and knowledge management during
the project development phase, but does not address pre-development.
[17,20] describe computational systems based on fuzzy logic for establishing
project selection criteria in portfolio management, but they do not consider
collaboration among decision-makers during the ranking of criteria.
[21] adopts the collaborative knowledge approach to product innovation
management to conceive the eProduct Manager, a Web system prototype for
portfolio management. The authors state that the final version will be built on a
structure integrating four modules: goals, actions, teams and results. Forms are
used to store the organization's goals, making explicit knowledge that was
previously tacit, and also to record problems and ideas, for which the user
performs a strategic alignment ranking for each new product concept. The main
elements of its architecture are described in [6]: controls (consumer requirements,
strategic impulses and performance measurement); mechanisms (individuals,
teams and reviewers); inputs (ideas, problems); and outputs (project revisions,
scorecards and exception reports). It seems that a new prototype would use
intelligent agents and the Semantic Web.
[10] proposes a business model called "collaborative product services in virtual
enterprises", based on Web services, intended to integrate the majority of the
functions related to collaboration throughout the product lifecycle. The authors
advocate the application of PLM systems to support collaboration during the
whole product lifecycle, but they do not clearly define the requirements of such a
system (based on a classification of the nature of the collaboration process) and
do not consider the peculiarities of pre-development.
[22] presents WS-EPM (Web Services for Enterprise Project Management), a
service-oriented architecture for business project management. Regarding product
pre-development, it includes an operation for project prioritization named
Prioritization Web Service (PWS), which considers the following criteria for
coordinating different projects: tangible value, intangible value, project scope,
required time to market, and the convenience of developing the project inside or
outside the company. The PWS appears graphically associated with all WS-EPM
operations in the product lifecycle, suggesting that the system is intended for the
entire new product development process, not only for pre-development.
8 References
[1] Kim, KY et al. (2004) Design formalism for collaborative assembly design. Computer-
Aided Design 36(9):849-871.
[2] Huang, GQ and Mak, KL (2003) Internet applications in product design and
manufacturing. Berlin: Springer.
[3] Rozenfeld, H et al. (2006) Gestão de desenvolvimento de produtos: uma referência para
a melhoria do processo. São Paulo: Saraiva.
[4] Santoro, R and Bifulco, A (2006) The concurrent innovation paradigm for integrated
product/service development. In: Proceedings of the 12th International Conference on
CONCURRENT ENTERPRISING. ESoCE-Net: Milan, Italy, June 26-28, 2006.
[5] Monplaisir, L and Singh, N (Eds.) (2002) Collaborative engineering for product design
and development. Stevenson Ranch: American Scientific Publishers.
[6] Cormican, K and O’Sullivan, D (2004) Groupware architecture for R&D managers.
International Journal of Networking and Virtual Organisations. 2(4):367-386.
[7] Cybis, WA (2003) Engenharia de usabilidade: uma abordagem ergonômica.
Florianópolis: Laboratório de Utilizabilidade de Informática, UFSC.
[8] Orlikowski, WJ (1992) Learning from notes: organizational issues in groupware
implementation. In: Proceedings of the 4th Conference on COMPUTER SUPPORTED
COOPERATIVE WORK. ACM: Toronto, Canada, October 31 - November 4, 1992.
[9] Ming, XG et al. (2005) Technology solutions for collaborative product lifecycle
management – status review and future trend. Concurrent Engineering. 13(4):311-319.
[10] Ming, XG and Lu, WF (2003) A framework of implementation of collaborative product
service in virtual enterprise. In: SMA ANNUAL SYMPOSIUM - Innovation in
Manufacturing Systems and Technology. Singapore–MIT Alliance: Singapore,
Malaysia, January 17-18, 2003.
[11] Cooper, RG and Edgett, SJ and Kleinschmidt, EJ (2001) Portfolio management for new
product development: results of an industry practices study. R&D Management.
31(4):361-380.
[12] Project Management Institute (2004) A guide to the project management body of
knowledge (PMBOK® Guide), Third Edition. ANSI/PMI 99-001-2004. Newton
Square, PA: Project Management Institute.
[13] Rabelo RJ (2006) Arquiteturas orientadas a serviços. Florianópolis: Universidade
Federal de Santa Catarina, Curso de Engenharia de Controle e Automação.
[14] Camarinha-Matos, LM and Afsarmanesh, H (2005) Collaborative networks: a new
scientific discipline. Journal of Intelligent Manufacturing. 16(1):439-452.
[15] Ratti, R and Gusmeroli, S (2006) An advanced collaborative platform for professional
virtual communities. In: Proceedings of the 12th International Conference on
CONCURRENT ENTERPRISING. ESoCE-Net: Milan, Italy, June 26-28, 2006.
[16] Tramontin Jr, RJ (2006) Web services. Florianópolis: Universidade Federal de Santa
Catarina, Grupo de Sistemas Inteligentes de Manufatura.
[17] Bilalis, N et al. (2002) A fuzzy sets approach to new product portfolio management. In:
Proceedings of the IEEE INTERNATIONAL ENGINEERING MANAGEMENT
CONFERENCE (IEMC '02). Cambridge, UK, August 18-20, 2002. pp. 485- 490.
[18] Tacla, CA and Barthès, J-P (2003) A multi-agent system for acquiring and sharing
lessons learned. Computers in Industry. 52(1):5-16.
[19] Huang, GQ and Feng, XB and Mak, KL (2001) POPIM: Pragmatic online project
information management for collaborative product development. In: Proceedings of the
6th International Conference on COMPUTER SUPPORTED COOPERATIVE WORK
IN DESIGN. London, Ontario, Canada, July 12-14, 2001. pp. 255-260.
[20] Montoya-Weiss, MM and O'Driscoll, TM (2000) From experience: applying
performance support technology in the fuzzy front end. Journal of Product Innovation
Management. 17(2):143-161.
[21] Cormican, K and O’Sullivan, D (2003) A collaborative knowledge management tool
for product innovation management. International Journal of Technology Management
26(1):53-67.
[22] Zhang, L-J et al. (2004) WS-EPM: web services for enterprise project management. In:
Proceedings of the IEEE International Conference on SERVICES COMPUTING
(SCC’04). Shanghai, China, September 15-18, 2004.
Collaborative Augmented Reality for Better Standards
1 Problem
Concurrent engineering depends on clear communication between all members
of the development process. As that communication becomes increasingly
complex (including CAD models, diagnostic data, process control data, etc.), the
quality of the standards used to move and understand that information likewise
becomes more and more important. Current information transfer standards tend to
be designed by domain experts. Traditionally, a group of domain experts gathers
and begins to develop the new standard in an ad hoc manner.
Requirements and document structure are discussed, a written description is
produced, and the standard itself is created in a way that agrees with the
documentation.
Standards developed this way often suffer from ambiguity, redundancy,
missing information, and/or excessive complexity. Terms used in the textual
description may be ill-defined; the result is often that the standard contains
1 Computer Scientist, National Institute of Standards and Technology (NIST), 100 Bureau
Drive, Gaithersburg, Maryland, 20899, USA; Tel: 301 975-8778; Fax: 301 975-8069;
Email: matthew.aronoff@nist.gov; http://www.eeel.nist.gov/812/IIEDM/
480 M. Aronoff and J. Messina
a vaguely defined element, or that one developer expects an element to mean one
thing, while another developer is equally sure that it means something else. Apart
from the confusion caused, ambiguities mean that constraints upon the valid range
of element data are often underused, which can impede automated processing of
the standard. Furthermore, as the standard itself is created by hand, information
found in the textual description may be missing from the standard altogether. If
information is repeated, or referenced multiple times, in the textual description, it
may also appear several times in the standard. Empty (and therefore unnecessary)
“container” elements may become part of the standard because they were part of
the implementing experts’ mental framework during the conversion from textual
description to formal standard.
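The point about underused range constraints can be illustrated with a hedged sketch: when a standard declares an explicit, machine-readable valid range for an element, a tool can reject bad data automatically instead of relying on each developer's reading of the prose description. The element name and range below are invented:

```python
# Hypothetical sketch: machine-readable range constraints enable automated
# validation of element data. The element name and bounds are illustrative,
# not drawn from any actual standard.
constraints = {
    # element name -> (min, max) of the valid value range
    "junction_temperature_C": (-55.0, 150.0),
}

def validate(element, value):
    """Return True if the value falls inside the element's declared range."""
    lo, hi = constraints[element]
    return lo <= value <= hi

ok = validate("junction_temperature_C", 85.0)     # within range
bad = validate("junction_temperature_C", 300.0)   # out of range
```

Without such a declaration, the same out-of-range value would pass silently and the ambiguity would only surface when two implementations disagree.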
All of the above problems stem from an attempt to combine two separate steps
of the development process into one. The first step is to determine the standard’s
domain, which is generally the answer to the question “what information and/or
interactions are we trying to capture?” The second step is to create a particular
implementation, which can be characterized by the question “how will that
information be stored and moved around?” When those two steps are performed
simultaneously or in reverse order, the standard often suffers. What is needed is a
tool that allows domain experts to work solely on the domain of a standard without
falling into the trap of designing an implementation at the same time.
Each user in the system is given a virtual representation (an avatar) that shows
their position relative to the model.
Actions performed by the user are captured in one of two ways. A data glove,
capable of tracking finger positions, can be worn, in which case the user interacts
with Focus using a simple gesture language. Alternatively, the user can simply
hold any device that can provide the system with left- and right-click functionality;
in this case, the system uses standard mouse events to trigger actions. Since Focus
uses the secondary fiducial to track the hand’s position, a small presentation
remote works just as well as a mouse. Focus treats all actions as occurring at the
hand’s position. For example, if the user wants to create a new object, they would
reach their tracked hand into the model at the position where the new object should
be, and perform the “Create” gesture. The resulting configuration is flexible, in
that only a camera and a pointing device need to be connected to the system, and
scales well with increasing video (and therefore fiducial-tracking) quality, since no
changes are required when an upgraded camera or display device is attached.
Development was done on Mac OS X, but all software components work on Linux and Windows as well. All Focus software will also be made freely available for further development.
For video capture, we are using a few different Firewire-based web cameras;
all provide video frames in standard delivery formats, and are therefore supported
on multiple platforms. Gesture input is handled with an inexpensive data glove
which tracks finger positions, but the system can also be operated using a standard
mouse. The most expensive component of the system is the head-mounted display
(HMD). This is also the component likely to see the most improvement over the
next few years. We are primarily using a display based on relatively new organic light-emitting diode (OLED) technology, which provides a bright, high-resolution image compared with other, similarly priced HMDs. However,
as the technology matures, further increases in both resolution and the user’s
perceived field of view will certainly improve the usability of the system. HMDs in
this class can be found for less than $1000.
Focus utilizes the following software:
- ARToolkit [2] for user position tracking and real video capture
- Coin3D [5] for 3D scene management and rendering
- Libp5glove [10] to interface with the data glove
- DRb [6] for the underlying distribution of our modules
- Trimurti [12] to decouple the application modules
Additionally, Focus is written in Ruby. As a number of the software components did not provide Ruby interfaces, we created Ruby-language bindings for ARToolkit, Coin3D, and Libp5glove using SWIG [11].
Using Ruby, a higher-level, object-oriented language, shortened the overall
development time; adapting these existing and relatively mature toolkits was
considered a better solution than attempting to write our own tracking algorithms
and glove interface driver.
All data models are also persistent. A user may create a new model, but may
also resume work on an existing model, which may have other participants already
at work. When the current session is complete, the model simply waits on the
server for further modifications.
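Model persistence and distribution rest on DRb [6], Ruby's standard distributed-object library. The sketch below shows a minimal DRb-backed repository in that spirit; `ModelRepository` and its methods are hypothetical stand-ins, not Focus's actual interface.

```ruby
require 'drb/drb'

# Minimal sketch of a persistent model repository served over DRb.
# Models live on the server and simply wait between sessions.
class ModelRepository
  def initialize
    @models = {}
  end

  # Resume an existing model, or create it on first access.
  # Note: DRb clients receive a marshalled *copy* of the returned hash,
  # so mutations must go through server-side methods like add_element.
  def open(name)
    @models[name] ||= { elements: [] }
  end

  def add_element(name, element)
    open(name)[:elements] << element
  end
end

# Server side: expose the repository on an ephemeral local port.
DRb.start_service("druby://localhost:0", ModelRepository.new)

# Client side: a client resumes work by connecting to the repository's URI.
repo = DRbObject.new_with_uri(DRb.uri)
repo.add_element("connector", "HousingClass")
```

When the session ends, the repository object (and the models inside it) remains on the server, matching the "model simply waits" behaviour described above.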
Part of designing for simplicity required providing users with instant, non-
blocking feedback. When an action is invalid, the user is told during the action
rather than after failure. If items cannot be dropped in a particular location, their
appearance changes while the user is dragging them. If an action does fail (e.g. the
user drops the items even though they are marked as un-droppable in that location),
Focus informs the user via an information display along the edge of the screen, and
does not request user intervention before continuing. The same information
display is used to inform the user of other changes to the environment, such as new
users joining a domain modeling session.
Searches follow the instant, non-blocking philosophy as well. As search
terms are entered and modified, results are highlighted within the model, and the
resulting selection can be used for any of the other available actions in Focus. One
of the advantages of a 3D model layout is the ease of defining a spatial search. A
user can query the model in a traditional way (looking for occurrences of a
particular piece of text or range of numbers), but can also include spatial criteria (results within a certain distance of the user’s position, or of a selected element in the model).
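A spatial criterion of this kind can be sketched as a plain filter over model elements. `Element`, its fields, and the search signature below are illustrative assumptions, not Focus internals.

```ruby
# Hypothetical stand-in for a model element with a label and 3D position.
Element = Struct.new(:label, :position)

# Euclidean distance between two 3D points given as arrays.
def distance(a, b)
  Math.sqrt(a.zip(b).sum { |p, q| (p - q)**2 })
end

# Combine a textual criterion with a spatial one: keep elements whose
# label matches the query AND which lie within `radius` of `origin`
# (e.g. the user's position, or a selected element's position).
def spatial_search(elements, query, origin:, radius:)
  elements.select do |e|
    e.label.include?(query) && distance(e.position, origin) <= radius
  end
end
```

The resulting selection is an ordinary array, so it can feed any of the other actions (move, delete, etc.) just as the text describes.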
7 How it works
Now that the underlying technology and design philosophy have been
discussed, the next step is to explore what a Focus modeling session is like. In this
example, a data modeling project has already been created on the repository server
at some previous time. The user wishes to resume work on this project. She places
the main fiducial, which is a rectangle roughly 5x7 inches in size, on the desk in
front of her. She puts on the HMD and glove, and starts the Focus client program.
At startup, it connects to the repository server; the user authenticates for a
particular project, and is logged in. She is now participating in that project’s
modeling session. The data model appears atop the main fiducial on her desk, and
other users’ avatars appear around the desk at their respective relative locations.
Each object in the data model has a representation in 3D. Classes have a
default representation (see Figure 2), but can also be assigned model fragments.
For example, a circuit board class might have a 3D model of a board as its
representation. Focus can read model fragments stored in SGI’s OpenInventor
format, which can be created from any number of free and for-pay 3D modeling
tools. Associations, generalizations, and other connections between classes are
drawn as lines.
Collaborative Augmented Reality for Better Standards 485
Each user working on the model has a 3D palette of tools available to them; one
user’s palette is active at any moment, and only that user may modify the model.
(It should be noted that this was a design decision intended to enhance
collaboration, and the single active palette may be abandoned in the future in favor
of any user being able to modify the model at any time.) While inactive, the
palette contains a representation of the active tool being used on the model, the
name of the active user, and a button to request control of the model. Our user’s
palette is inactive to start. She presses the button to request control. When control
is relinquished by the active user, her palette becomes active and populates with
the available tools. These tools perform functions familiar to users of graphical editors: tools for creating the different types of classes and associations, plus move, delete, and search tools. At this point, the user can make changes to the
data model, rearranging, adding, modifying, and deleting model elements; all other
users will see her changes reflected immediately in their own client applications.
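The single-active-palette rule amounts to a control token passed between users on request and relinquish. A minimal sketch follows; the class and method names are invented for illustration, not Focus's actual implementation.

```ruby
# One token is shared by all palettes; only the holder may modify the model.
class ControlToken
  attr_reader :holder

  def initialize(first_holder)
    @holder = first_holder
    @requests = [] # users queued behind the active one
  end

  # Pressing the "request control" button on an inactive palette.
  def request(user)
    @requests << user unless user == @holder || @requests.include?(user)
  end

  # Called when the active user relinquishes control: the next requester's
  # palette becomes active and can populate with the available tools.
  def relinquish
    @holder = @requests.shift
  end

  def active?(user)
    user == @holder
  end
end
```

Dropping the rule in favour of simultaneous editing, as the text anticipates, would simply mean removing this token and resolving conflicting edits another way.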
8 Status
The design phase of Focus is complete, including the following steps:
- Use case catalog compiled
- Focus architecture published
- Component technologies investigated
- GUI design whitepaper published [1]
The implementation phase of Focus is ongoing. The following components have been completed:
- I/O hardware purchased
- Software components developed and tested
- Basic system functionality complete:
  - Create, modify, rearrange, and delete model elements
  - Simple associations between model elements
9 Future steps
Development on Focus continues, with several organizations planning to test it
as the center of a model-driven process for standards development. Once the first
iteration is complete, additional features such as more complete UML coverage
and automated schema generation are planned. Additionally, as hardware prices
continue to fall, it is possible that “see-through” HMDs (which display only the
rendered 3D content on a transparent display) will be integrated into Focus,
removing the display resolution issues inherent to small, inexpensive cameras.
10 References
[1] Aronoff M, Messina J, Simmon E. Focus 3D Telemodeling Tool: GUI Design for
Iteration 1. NIST Interagency/Internal Report (NISTIR) 32470, 2006.
[2] ARToolkit. Available at: <http://artoolkit.sourceforge.net/>. Accessed on Feb. 23,
2007.
[3] Billinghurst M, Kato H. Collaborative Augmented Reality. Communications of the
ACM, July 2002;45;7:64-70.
[4] Billinghurst M, Cheok A, Kato H, Prince S. Real World Teleconferencing. IEEE
Computer Graphics and Applications, Nov/Dec 2002;22;6:11-13.
[5] Coin3D. Available at <http://www.coin3d.org/>. Accessed on Feb. 23, 2007.
[6] DRb (Distributed Ruby). Available at <http://chadfowler.com/ruby/drb.html>.
Accessed on Feb. 23, 2007.
[7] Fowler, M. UML Distilled: A Brief Guide to the Standard Object Modeling Language,
3rd edn. Boston San Francisco New York: Addison-Wesley, 2004; 35-52, 65-84.
[8] Griesser A. Analysis Catalog for Focus 3D Telemodeling Tool. NIST
Interagency/Internal Report (NISTIR) 32090, 2005.
[9] Griesser A. Focus 3D Telemodeling Tool Use Cases For Iteration 1. NIST
Interagency/Internal Report (NISTIR) 32096, 2005.
[10] Libp5glove. Available at <http://www.simulus.org/p5glove/>. Accessed on Feb. 23,
2007.
[11] SWIG. Available at <http://www.swig.org/>. Accessed on Feb. 23, 2007.
[12] Trimurti. Available at <http://trimurti.rubyforge.org/>. Accessed on Feb. 23, 2007.
A Pedagogical Game based on Lego Bricks for
Collaborative Design Practices Analysis
1 Introduction
Given the competitive character of the market, product design is affected and driven by the constitution of multidisciplinary teams capable of efficient collaboration. Collaboration practices between trades become an essential catalyst for the creative sharing of skills. Because of its socio-technical character, collaboration is a relatively complex phenomenon to study and to formalize within an organization [1]. The interaction between the individuals themselves, as well as the interaction with the surrounding systems (objects, context, etc.), is a major concern in the academic and industrial worlds. We must keep in mind that
collaborative work starts at the moment the actors exchange opinions on the
existing information, share their experiences, define common targets, and compile
1
Laboratory LIPSI - ESTIA Engineering Institute, Technopôle Izarbel, 64210 Bidart
FRANCE; Tel: +33 (0) 559438486; Fax: +33 (0) 559438405; Email: j.legardeur@estia.fr;
http://www.estia.fr
488 J. Legardeur, S. Minel
data and capabilities. All of these activities are carried out together with a common
objective. However, this description hides both the difficulties and the obstacles.
We share Reynaud’s perspective on this matter. He notes that “individual competencies fight each other as much as they add up” [2]. Regarding the social aspect of a project, multidisciplinary work usually involves actors from various departments, possessing complementary skills and different cultures (reference frames, vocabulary, training, history, experience, etc.), yet with each participant envisioning the creation of a different end product. As a result, the participants
independently develop their own strategies when forming personal objectives and
criteria of evaluation. The study of collaboration is relatively complex in the
industrial context. Likewise, the education of individuals is also a major concern
for most academic institutions and particularly among engineering institutes. In the
end, preparing future engineers with real technical knowledge while allowing them
to acquire collaborative competencies remains a challenge for these institutions.
The actors’ technical and economic knowledge is constantly growing, so engineering institutes must teach an ever greater number of disciplines to their students. This situation leads to difficulties in comprehending other areas of expertise and generates significant problems when exchanging and sharing across trade interfaces [3]. Nowadays, training individuals to develop collaborative know-how and aptitudes remains a major problem. In fact, it is necessary to recognize the strongly contextual character of collaboration, which stems from its numerous “parameters” (personal development, individual and group psychology, enterprise culture, power games, general working habits, etc.). For this reason, the aptitude to collaborate is often perceived as a competence that is essentially learned through experience and real situations. It is accordingly very
difficult to reproduce such a training environment when striving for a pedagogical
goal. However, internships and student projects are the first answers delivered by academic institutions to let their future engineers experience collaboration in real situations. In this paper we propose a design game with the
intention that engineering institutes use it as a pedagogical tool for the teaching of
collaboration. This game, essentially based on LEGO blocks, was developed to
simulate the multi-disciplined design of a technical object. Also, the game aims to
lead a group of students to experience a constrained situation where everybody's
participation is needed. Even though the students attend the same class (or course),
they all have different backgrounds (normal or technical high school, etc.) and
engineering options (General Product design, industrial organization or Process
Automation Master). These differences enable the groups to be heterogeneous,
although not entirely representative of the diversity to be found in the industry and
enterprise world.
In the Delta Design context, life on DeltaP is presented as being far from what we actually have on Earth. First of all, DeltaP is not a planet but a plane, a flat world. Next, the teams design in two-dimensional space rather than the three dimensions most are accustomed to. In order to ensure that the designed product respects
specifications (requirement sheet) and that it pleases the client in terms of
aesthetics and function, a representation on a simple piece of paper is sufficient.
The team must design the residence by assembling these equilateral triangles in
either red (heating triangles) or blue (cooling triangles). Because the triangles serve as the residential building component, their aesthetic and thermal functions are much more complex than their form and dimensions. Therefore, each team’s
task is to design a house that takes into account and integrates all of the constraints
and tastes of the customer, as well as the characteristics and components of the
triangles.
stronger players try to impose their preferences and lobby for them in a dominating way over the other trades. Subsequently, the instruction not to show the specification document is interpreted in varying ways. Indeed, because the specification document is not shown, and thus the players cannot talk or exchange about it, some people see it as a form of competition within the same group. On the other hand, some people reinterpret their assigned constraints by organising them hierarchically in an opportunistic way.
Analysing the corpus of pictures and videos of a Delta Design session gives us several opportunities for insight into, and observation of, individuals in a collaborative situation. In this paper we will not develop the pedagogical interest of this approach; the reader may consult [5]. Through reflective analysis [6], a player can better understand and analyse his own behaviour during a collective action. However, we have identified some limits of the Delta Design game.
First of all, the future product design engineers occasionally lack enthusiasm or concern for the experiment. This is related to the fact that the final objective of the experiment, the design of a house, is assimilated to architectural training, with characteristics not focused on mechanical product technology.
The representation of the object to be designed (an assembly of triangles) seems too abstract to solicit much interaction between the players. Moreover, the 2D format differs from the 3D formats (CAD) traditionally used in product design.
The game format imposes co-located play and does not allow for a test in a distributed environment.
Taking these limits into consideration, we designed a different game focused
more upon mechanical product design. This aforementioned game is presented
below.
The objective of the proposed game is to design a space shuttle (cf. the example in Figure 1) by assembling Lego® blocks while respecting various constraints. The choice of blocks is essentially based upon their ease of use and assembly, their resemblance to mechanical products (an airplane, a car), and the free software available (e.g. MLCad) to remotely build and share virtual assembly models. It is important to note that numerous pedagogical games (engineering-specific, marketing training, and enterprise coaching) have been designed using these types of blocks (http://www.seriousplay.com/).
This is an extract of the introduction that each player will receive once the team is
composed.
Our galaxy is not only composed of these atoms, but also of a magnitude called “slowness” that impregnates matter and modifies its characteristics. You have to picture the slowness as a homogeneous field of anti-energy of variable potential. Slowness lies on a standard scale between −15 L° and +15 L°, although this scale is not always adequate. The average slowness of a living organism is 0 L°, and slowness is endurable without bodily damage up to around 5 L°. It is important to realise that it is normally impossible to partition slowness or to harness its potential and effects; the properties of certain blocks exist for that purpose.
In order to achieve the global objectives, we have created a random draw principle to set the target values. We use it to define the different targets for the different teams; for example, a team will be in charge of designing a space shuttle to transport x individuals at a speed y, with a given autonomy t. Each type of transport has different characteristics and obligations. All of the variables are recorded in the common specification sheet. The elementary blocks used to construct the final product come from a reduced, specific database drawn from the set of Lego®-type blocks. This database can be used virtually with the free software MLCad (http://www.lm-software.com/mlcad/) or in a tangible way when the Lego® blocks have actually been purchased. The blocks are differentiated by their attributes
(form, type, and color). These attributes also give the blocks special characteristics
in terms of weight, cost, mechanics, or heat resistance. There are 5 types of blocks:
red represents heat resistance, pink represents energy reserves, blue represents
mechanical resistance, yellow represents aerodynamics, and green represents
ergonomics and aesthetics. Weight, heat, mechanical resistance, and cost
characterize each block. There are three types of assemblies that can be made to fix
the blocks: the high heat resistant assemblies, the structural assemblies, and the
standard assemblies. Two motor families (big and small) with non-proportional
power, weight, and heat diffusion parameters are proposed. Each model within
each family has similarities (the “sport” model which is powerful, but not
economic; the “big carrier” model, which is cheaper; etc.). Different examples of wing forms are available as well: rectangular wings, which yield a high carrying capacity but are not well adapted to speed and offer high air resistance; “delta” wings, which yield high performance but need more power, and thus more fuel; etc. The
dimensions solely characterize the frame plate (the number of “wedges”). They can
be fixed amongst each other by means of mechanical resistance type blocks.
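The block attributes described above map naturally onto a small data structure. In the sketch below the structure follows the game's rules (five colour types; weight, heat, mechanical resistance, and cost per block), but every numeric value is invented for illustration.

```ruby
# A block, differentiated by its attributes as in the game's database.
Block = Struct.new(:form, :type, :color, :weight, :heat_resistance,
                   :mechanical_resistance, :cost)

# The five block types from the game rules.
TYPES = {
  red:    "heat resistance",
  pink:   "energy reserve",
  blue:   "mechanical resistance",
  yellow: "aerodynamics",
  green:  "ergonomics and aesthetics",
}.freeze

# A reduced database, usable virtually (MLCad) or with real bricks.
# All numeric values here are hypothetical.
DATABASE = [
  Block.new("2x4", :brick, :red,  2, 9, 4, 10),
  Block.new("2x4", :brick, :blue, 3, 3, 9, 8),
]

# The assembly manager's cost bookkeeping reduces to sums over attributes.
def total_cost(blocks)
  blocks.sum(&:cost)
end
```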
In this manner we constrain the students and force them to enter an unknown world within which their calculations will be reassuring and manageable.
3.3 The rules from each discipline in the design game context
Within the game, we have a series of rules belonging to each of the disciplines involved, assigned to the corresponding design game players. These disciplines are: the assembly manager, the motor engineer, the ergonomist, the wings’ responsible, and the “slowness-man”. We have expressed the rules in both a quantitative and a subjective manner. This conditions the work habits necessary for the project and counters the tendency to cooperate only when required to satisfy the constraints. The rules are designed to encourage the actors to cooperate, negotiate, and converge on a compromise acceptable to everyone.
The Assembly Manager must guarantee that the budget is followed. He is also responsible for monitoring the project’s energy consumption: “It is imperative that your point of view is taken into account in every design choice made by your team, all along the game. Only you can keep the budget and the energy usage at an acceptable level.” He must calculate the material costs and the salaries given to the team members. The motor engineer has the following mission: to choose the dimensions of the motor(s) to be used for the shuttle, to place it (them), and to guarantee their proper operation under any and every circumstance. He is also responsible for the motor power supply. The “slowness-man” requirements begin as follows: “You must choose the shuttle’s internal slowness from the beginning, as it influences the shuttle’s speed. Once it has been chosen, you must disperse the slowness and keep it contained to the best of your ability.” The ergonomist must define the internal organization of the space shuttle: “The distribution of rooms will have an impact on the design choices for which the rest of the team is responsible. Think about the consequences of your own choices. You are responsible for the comfort and practicality within the shuttle. It is your duty to make everybody follow the slowness imposed for each room by the specification sheet.”
Lastly, the wings’ responsible must choose “the size of the wings, which responds approximately to the following law: Sails = 30 % number of frame
References
[1] L.L. Bucciarelli “An ethnographic perspective on engineering design”, Design Studies,
v. 9.3, 1988.
[2] J.D. Reynaud "Le management par les compétences : un essai d'analyse, Sociologie du
travail", 43, p7-31, 2001.
[3] S. Finger, S. Konda, E. Subrahmanian, “Concurrent design happens at the interfaces”,
Artificial Intelligence for Engineering Design, Analysis and Manufacturing 95, v. 9,
89-99, 1995.
[4] L.L. Bucciarelli “Delta Design Game”, MIT, 1991.
[5] G. Prudhomme, J.F. Boujut, D. Brissaud, “Toward Reflective Practice in Engineering
Design Education”, International Journal of Engineering Education, Vol 19, n°2 328-
337, 2003.
[6] D.A. Schön, “The Reflective Practitioner: How Professionals Think in Action”, Ashgate Publishing, Cambridge, 1991.
[7] E. Subrahmanian, A. Westerberg, S. Talukdar, J. Garrett, A. Jacobson, C. Paredis, C. Amon, “Integrating Social Aspects and Group Work Aspects in Engineering Design
Education”, in the proceedings of the Workshop on Social Dimensions of Engineering
Design, pp. 117-126, 17–19 May 2001.
[8] T. Kurfess “Producing the Modern Engineer”, in the proceedings of the Workshop on
Social Dimensions of Engineering Design, pp. 201-208, 17–19 May 2001.
A Reasoning Approach for Conflict Dealing in
Collaborative Design
1 Introduction
Current trends in the collaborative design domain have focused on the increasing need for teamwork. Complex industrial problems involving different knowledge areas increasingly require heterogeneous virtual teams to collaborate in order to solve them. These teams exchange their knowledge and expertise, creating a collaborative network.
The enterprises’ collaborative design process increasingly uses geographically distributed knowledge, resources and equipment [16]. Virtual team members work in parallel with different tools, languages and time zones. Collaborative engineering proposes to face these problems by reorganizing and integrating the set
1
Head of Collaborative Modeling Team
LIRIS – Laboratory of Computer Graphics, Images and Information Systems
Bâtiment Nautibus, 43, bd. Du 11 Nov. 1918
69622 Villeurbanne cedex, France
Phone: (33)4 72 44 58 84
Fax: (33)4 72 43 13 12
E-mail : ghodous@bat710.univ-lyon1.fr
496 M. Dutra, P. Ghodous
of development activities, starting from the early stages of design [6]. An important part of collaborative work consists of the communication and sharing of data, information and knowledge among individuals with different points of view and specific objectives concerning their particular domains. Once these points of view and objectives are considered within the same scope, divergences arising from this process frequently lead to conflicts.
This paper proposes an approach to attenuate design conflicts. The approach is based on OWL reasoning: all exchanged data and information are represented as OWL classes so that an OWL reasoner can be used. To validate this proposal, a small prototype was implemented, taking as its scenario a distributed architecture for collaborative design previously described in [2] and [14].
cross-reference and reuse information; and it has an XML syntax (RDF/XML) for
easy data exchange.
One of the main benefits of OWL is the support for automated reasoning, and
to this effect, it has a formal semantics based on Description Logics (DL). DLs are
typically a decidable subset of First Order Logic. They are suitable for representing
structured information about concepts, concept hierarchies and relationships
between concepts. The decidability of the logic ensures that sound and complete
DL reasoners can be built to check the consistency of an OWL ontology, i.e.,
verify whether there are any logical contradictions in the ontology axioms.
Furthermore, reasoners can be used to derive inferences from the asserted
information, e.g., infer whether a particular concept in an ontology is a subconcept
of another, or whether a particular individual in an ontology belongs to a specific
class.
Examples of OWL reasoners [11]:
- F-OWL [5]: an ontology inference engine for OWL based on Flora-2 (an Object-Oriented Knowledge Base Language).
- Euler [3]: an inference engine supporting logic-based proofs. It is a backward-chaining reasoner and will tell you whether a given set of facts and rules supports a given conclusion.
- Pellet [13]: an OWL DL reasoner based on the tableaux algorithms developed for expressive Description Logics. After parsing OWL documents into a triple store, the OWL abstract syntax is separated into a TBox (axioms about classes), an ABox (assertions about individuals) and an RBox (axioms about properties), which are passed to the tableaux-based reasoner.
- Hoolet [7]: an OWL DL reasoner that uses a first-order prover to reason about OWL ontologies.
- Cerebra [1]: a product of Network Inference whose technology provides a commercial-grade, robust, scalable implementation of the DL algorithms, using OWL documents in their native form. These algorithms are encapsulated in a run-time engine that is provided as a service to other applications or services and can respond to queries about ontologies from those applications.
The OWL Test Cases document [9] defines an OWL consistency checker as follows: a consistency checker takes a document as input and returns a single word: consistent, inconsistent, or unknown.
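That three-outcome contract can be illustrated with a toy checker handling a single kind of axiom, class disjointness. This is a drastic simplification of a real DL reasoner, using an ontology encoding invented purely for illustration.

```ruby
# Toy illustration of the OWL Test Cases contract: take an ontology,
# return :consistent, :inconsistent, or :unknown. Only disjointness
# clashes are handled; anything else is beyond this sketch.
def check_consistency(ontology)
  disjoint   = ontology.fetch(:disjoint, [])   # pairs of class names
  assertions = ontology.fetch(:assertions, {}) # individual => its classes

  # A real checker never guesses: axioms it cannot decide yield :unknown.
  return :unknown if ontology.key?(:unsupported_axioms)

  # Clash: some individual is asserted to belong to two disjoint classes.
  clash = assertions.any? do |_individual, classes|
    disjoint.any? { |a, b| classes.include?(a) && classes.include?(b) }
  end
  clash ? :inconsistent : :consistent
end
```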
But, while consistency checking is an important task, it does not, in itself, allow one to do anything interesting with an ontology. Traditionally, in the ontology and Description Logic community, there is a suite of inference services held to be key to most applications or knowledge engineering efforts. It is imperative that a practical OWL reasoner provide at least the standard set of Description Logic inference services, namely [13]:
- Consistency checking, which ensures that an ontology does not contain any contradictory facts. The OWL Abstract Syntax & Semantics document provides a formal definition of ontology consistency that Pellet uses. In DL
3 A reasoning approach
The architecture proposed in [2] is based on the Function-Behavior-Structure framework. This framework takes users’ requirements, transforms them into formal specifications, and uses these specifications to describe the product’s functions, behaviours and structures (Figure 1). This whole process is carried out by several geographically distributed designer groups, called agencies.
Figure 1. Function-Behavior-Structure: the client’s requirements (performance, appearance, cost, etc.) are transformed into formal specifications (e.g. Height = 3 m, Weight <= 500 g) that feed the function, behavior and structure models.
agent’s private workspace, but at a higher level, because all data and information contained there were previously agreed upon among all the agents concerned. In the same way, inter-agency communication is provided by a project shared workspace [15], a global workspace that stores the final design models (Figure 2).
Figure 2. Publication from agents’ private workspaces (client’s requirements, formal specifications, and function, behavior and structure models) to the project shared workspace.
4 Prototype
A prototype has been built in order to validate the proposed approach. Providing a computational environment, it is intended as an infrastructure tool through which several experts can collaborate to achieve a product design. Collaboration in this context comprises interaction among different knowledge areas. In an effective collaborative environment, experts coming from different areas can integrate their knowledge simultaneously, looking for an expressive gain in productivity.
Experts have their own ways of seeing and understanding the subject, according to their duties. Therefore, providing mechanisms of data representation that create a “common language” understood by all the experts was the way chosen to detect specification conflicts.
An electrical connector design, as described in [6], was chosen as the scenario for this prototype. The work required to deploy this connector is decomposed into several subsystems, corresponding to different disciplines (mechanical, electrical, thermal, etc.). The goal is to simulate the design process of this connector, trying to identify potential conflicts by using OWL reasoning. Figure 3 shows the connector’s overall structure and its components.
[Figure: conflict detection in the agency shared workspace. Two design agencies (a thermal engineer and a mechanical engineer), each holding specification, function, behavior, and structure models, publish requirements: one specification requires Height >= 1 while the other imposes Height <= 40 cm, and the rule "if Height > 40 cm then conflict" flags the inconsistency at publication time.]
In this case, the conflict was avoided because, before the second agent published his proposition, he was informed by the system that the content to be published was neither consistent nor coherent. He must therefore redo his work or try to negotiate with the other agent involved.
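The conflict-avoidance behaviour described above can be sketched in simplified form: each agent publishes numeric constraints on a shared parameter, and the workspace checks the combined set for satisfiability before accepting the publication. This is a minimal illustration only; the prototype itself performs the check with OWL reasoning, and all names below are hypothetical.

```python
# Minimal sketch of shared-workspace conflict detection on numeric
# constraints (illustrative only; the prototype uses OWL reasoning).

class SharedWorkspace:
    def __init__(self):
        # parameter -> list of (operator, value) constraints already published
        self.constraints = {}

    def _bounds(self, cons):
        """Reduce a list of >= / <= constraints to a [lo, hi] interval."""
        lo, hi = float("-inf"), float("inf")
        for op, v in cons:
            if op == ">=":
                lo = max(lo, v)
            elif op == "<=":
                hi = min(hi, v)
        return lo, hi

    def publish(self, param, op, value):
        """Accept the constraint only if it stays consistent with the others."""
        proposed = self.constraints.get(param, []) + [(op, value)]
        lo, hi = self._bounds(proposed)
        if lo > hi:
            return False  # conflict detected: agent must revise or negotiate
        self.constraints[param] = proposed
        return True

ws = SharedWorkspace()
assert ws.publish("height_cm", ">=", 1)       # thermal engineer: Height >= 1
assert ws.publish("height_cm", "<=", 40)      # mechanical engineer: Height <= 40 cm
assert not ws.publish("height_cm", ">=", 50)  # would force Height > 40 cm: conflict
```

Here the second agent's inconsistent proposition is rejected before it reaches the workspace, mirroring the negotiate-or-redo behaviour described above.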
6 References
[1] Cerebra. Available at: http://www.semtalk.com/cerebra_construct.htm. Accessed on:
March, 2007.
[2] Dutra M, Slimani K, Ghodous P (2007) A Distributed and Synchronous Environment
for Collaborative Work, submitted to the Integrated Computer-Aided Engineering
journal (ICAE), ISSN 1069-2509, vol. 14, IOS Press, Amsterdam, The
Netherlands, 2007.
[3] Euler proof mechanism. Available at: http://www.agfa.com/w3c/euler/. Accessed on:
March 2007.
[4] Ferreira da Silva C, Médini L, Ghodous L (2004) Atténuation de conflits en conception
coopérative [in French]. In 15èmes journées francophones d'Ingénierie des
Connaissances (IC'2004), Nada Matta ed. Lyon. pp. 127-138. University Press of
Grenoble, Grenoble, France, ISBN 2 7061 1221 2, 2004.
[5] F-OWL. Available at: http://fowl.sourceforge.net. Accessed on: March 2007.
[6] Ghodous P (2002) Modèles et Architectures pour l’Ingénierie Coopérative [in French].
Habilitation Thesis, University of Lyon 1, Lyon, France, 2002.
[7] Hoolet. Available at: http://owl.man.ac.uk/hoolet/. Accessed on: March 2007.
[8] Klein M (2000) Towards a systematic repository of knowledge about managing
collaborative design conflicts. In Gero J (ed.), Artificial Intelligence in Design ’00,
Boston. Dordrecht: Kluwer Academic Publishers, pp. 129-146, 2000.
[9] Kalyanpur A (2006) Debugging and Repair of OWL Ontologies. PhD Thesis, Faculty
of the Graduate School of the University of Maryland, USA, 2006.
[10] Matta N, Corby O (1996) Conflict Management in Concurrent Engineering: Modelling
Guides. Proceedings of the European Conference in Artificial Intelligence, Workshop
on Conflict Management, Budapest, Hungary, 1996.
[11] Mei J, Bontas EP (2004) Reasoning Paradigms for OWL Ontologies. Technical Report
B-04-12, Institut für Informatik, Freie Universität Berlin, Germany, November 2004.
[12] OWL Web Ontology Language Overview. Available at: http://www.w3.org/TR/owl-
features/. Accessed on: March 2007.
[13] Sirin E, Parsia B, Grau BC (2004) Pellet: A Practical OWL-DL Reasoner. Proceedings
of 3rd International Semantic Web Conference (ISWC2004), Hiroshima, Japan,
November 2004.
[14] Slimani K (2006) Système d'échange et de partage de connaissances pour l'aide à la
Conception Collaborative [in French]. PhD Thesis, University of Lyon 1, Lyon, France,
September 2006.
[15] Sriram RD (2002) Distributed and Integrated Collaborative Engineering Design.
Savren, 2002. ISBN 0-9725064-0-3.
[16] Xie H, Neelamkavil J, Wang L, Shen W and Pardasani A (2002) Collaborative
conceptual design - state of the art and future trends. Proceedings of Computer-Aided
Design, vol. 34, pp. 981-996, 2002.
Interface design of a product as a potential agent for a
concurrent engineering environment
Abstract. Product design is currently a subject that involves various kinds of knowledge. In
order to develop products that are successful in the market it is necessary to involve people
from different areas of the production cycle, such as: marketing, assembly, processing, sales,
among others, in addition to considering clients’ wishes. To address these issues in the
development of a product, a design method that employs tools based on the principles of
simultaneous engineering must be adopted. Interface design of a product should consider
information from those areas as well. Thus, a method for interface design in the initial
phases of product design could be an effective vehicle for Simultaneous Engineering in the
design process. The purpose of this paper is to describe how the
development of a method for interface design can contribute to the establishment of a
Simultaneous Engineering environment in the initial phases of product design. To
accomplish this, it is necessary to use techniques such as DFA, DFM, and FMEA in those
stages of the design.
1 Introduction
Concurrent Engineering (CE) is a work philosophy whose objective is to enhance the product development process. This is pursued through information sharing among the different knowledge areas involved in this task.
CE is defined both as a philosophy and as an environment. As a philosophy, it is based on each participant recognizing his or her own responsibility for product quality. As an environment, it is based on the simultaneous development of the product and of the processes that affect it over its lifecycle [1]. For Noble [2], CE is typically defined as the integration of the design processes of product and
1
Lecturer, Centro de Educação Tecnológica de Santa Catarina, Av. Mauro Ramos, 950,
Centro. Florianópolis, Santa Catarina, Brazil. CEP 88020-300; Tel: +51 (048) 3221 0575;
Email: luizsegalin@cefetsc.edu.br
504 L.F.S. Andrade, F. A. Forcellini
manufacture. The objective of this integration is to reduce the time and cost of product development and to produce a product that meets client expectations. Kerzner [3] points out that CE is an attempt to execute work in parallel rather than sequentially; the major drawback of the sequential approach is that the chosen conception passes through all the design stages without a detailed evaluation of the difficulties of manufacturing the product. CE thus seeks to design ‘right the first time’ by means of concurrence between the product design and its related processes.
Thus we can see that there are limitations in the sequential design process. Rozenfeld et al. [4] point out some of them:
- isolation between the product research and development area and the product design area, which causes a lack of integration with the general strategy of the business; this is aggravated by the different languages and understandings of the design problem used by those areas;
- existence of organizational and communication barriers between these areas and the rest of the company;
- little participation of top management in the main definitions of the goals of these areas;
- hierarchization and linearity of the information flow between the areas of the company;
- little involvement of suppliers in the creation phases of the product;
- resistance to controls and to cost accounting in R&D and product design, because these areas deal with risky activities;
- extreme specialization of the professionals in the area.
Accordingly, the authors present a model of the design process based on CE principles, formed by three phases: Informational Design, Conceptual Design and Detailed Design.
In this context, interface design appears as a tool to effectively implement the CE environment, owing to its characteristic of connecting different components of the product. Related to interface design are the areas of manufacture, assembly and reliability [5]. Hence, the development of a methodology for interface design is a potential factor in the implementation of a CE environment.
The objective of this paper is to develop a bibliographical survey of interface design to provide the basis for developing a methodology for interface design in the conceptual design phase, using the DFA, DFM and FMEA methods as support tools.
on to the concretization of the product. As a result of its importance, this phase has
some points requiring a better definition.
[Figure: activities of the conceptual design phase: model functionality; develop solution principles for the functions; develop solution alternatives; define architecture; define development partnerships; define ergonomics and aesthetics; analyze SSCs; monitor economic viability; evaluate phase; approve phase; document decisions made and record lessons learned.]
Within conceptual design we can see some areas that still need detailing, such as interface design. The existing methodologies address interface design only in the Preliminary Design phase. This causes troubles such as the need for iterations due to connection failures between solution principles, or the need for very complex interfaces between two principles. These troubles can delay the design and increase its costs.
Moreover, the role of interface design is shown by Ullman [8], who considers the development of interfaces in the Preliminary Design phase. The author points out that “functions occur in the interfaces between components”. This is based on the following definitions of an interface: “the boundary area between adjacent regions that constitutes a point where independent systems of diverse groups interact”, or “the interconnection between two pieces of equipment that possess different functions and could not be connected directly” [9].
The great advantage of determining the product interfaces is that the change from the abstraction level to the concrete level becomes less abrupt and uncertain. That is, instead of creating the solution principles directly from the structure of functions, we can reduce the possibility of problems in the later phases by determining the architecture of the product, which includes the definition of its interfaces. Thus, parameters for the product development, such as the variables involved, the flows of material, energy and signal, and the interfaces themselves, become more apparent to the designer.
In this direction, Ulrich and Eppinger [10] proposed a way to determine the product architecture by mapping the interactions between functions and the possible product interfaces. However, the authors’ proposal is inserted in the Preliminary Design phase, which indicates that a definite conception already exists when it is applied. Otto and Wood [11], in turn, describe the possibility of using the method of Ulrich and Eppinger [10] as early as the Conceptual Design phase and, moreover, present another proposal for generating the architecture and determining the possible modules, which is carried out by heuristic means. This proposal has the advantage of using the design specifications and the restrictions of the product as parameters for the development of the product architecture.
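The kind of mapping Ulrich and Eppinger propose can be illustrated with a small interaction matrix between functional elements; under a candidate module assignment, every interaction that crosses a module boundary suggests a product interface. The function names and module assignment below are hypothetical, not taken from the authors.

```python
# Hypothetical sketch: a function-interaction matrix used to expose candidate
# product interfaces (in the spirit of Ulrich and Eppinger's mapping).

functions = ["convert energy", "transmit torque", "support shaft", "enclose parts"]

# interactions[i][j] == 1 means functions i and j exchange energy/material/signal
interactions = [
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
]

# a candidate module assignment (e.g. obtained by heuristics a la Otto and Wood)
module_of = {"convert energy": "A", "transmit torque": "A",
             "support shaft": "B", "enclose parts": "B"}

# every interaction that crosses a module boundary implies a product interface
interfaces = [
    (functions[i], functions[j])
    for i in range(len(functions))
    for j in range(i + 1, len(functions))
    if interactions[i][j] and module_of[functions[i]] != module_of[functions[j]]
]
print(interfaces)
# [('convert energy', 'enclose parts'), ('transmit torque', 'support shaft')]
```

Interactions inside a module stay internal to it; only the boundary-crossing ones surface as interfaces to be designed explicitly.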
Looking at the proposals for defining product interfaces in the literature, some are relevant here. Ullman [8] proposes a systematics of interface development in the Preliminary Design phase, with steps that range from the concern with the balance of energy, material and information at the level of functions, through the determination of the most critical interfaces, to the maintenance of the functional independence of subsystems and components and the care in separating the design into distinct components. Another proposal for interface development is made by Otto and Wood [11]. The authors point out the necessity of first defining the product architecture and, based on it, defining preliminary layouts, from which the product interfaces can be established. However, the authors do not say how to do this.
Another work carried out in the direction of evaluating new techniques of concept development is that of Sousa [12], who analyzed the use of Design for Manufacture and Assembly methods in the Conceptual Design phase. In this study the author points out that the requirements of manufacture and assembly, beyond the functional requirements, are important for the evaluation of the product structures. He also notes that, because a correct evaluation of the product’s assembly and manufacture is not made in the Conceptual Design phase, unnecessary iterations occur when the design is already in the Preliminary and Detailed phases, returning it to the Conceptual Design. These iterations end up causing cost increases, delays and reworks that could have been prevented.
Ullman [8] and Linhares and Dias [13] state that CE must consider four basic elements: function, geometry, material and manufacturing processes. Moreover, they state that design and manufacture (including assembly) must be developed simultaneously, and that the development of each part needs to be integrated with the definition and refinement of the functions and their respective interfaces. Linhares and Dias [13] also point out that the design must take two tasks into account in the individual conception of a part: (1) to design it as if it were a product, and (2) to consider that it is part of a realizable module and to take its interfaces into account. Siqueira and Forcellini [14] observe that the requirements of the unions are relevant factors in the selection of conceptions.
In this way, we can see the role of executing the product interface design in the initial stages of the design process. This can increase the design team’s knowledge about the designed product. However, it is necessary to use support methods and tools, so we will analyse some of them.
Interface design of a product as a potential agent for a CE environment 507
DFA and DFM
The DFA and DFM methods aim to optimize the design in the phase of definition of processes and final shapes, seeking shorter times and lower costs. These methods were developed by Boothroyd, Dewhurst and Knight [15] and, initially, were used together (DFMA). However, owing to the importance of each of the processes and the possibility of applying them separately according to the case, they were branched into two methods: DFM and DFA. To give an idea of the relevance of such methods, Boothroyd, Dewhurst and Knight [15] and Pereira and Manke [16] estimate that 50% of the manufacturing costs are related to the assembly process, and that together they represent a large share of the final cost of the product.
Moreover, the two methods are based on past experience and seek to expose and systematize that knowledge [17]. Thus, their role is observed not only as mechanisms of technical aid but also as support to the management of the company’s knowledge.
However, their application requires an integrated product development environment, with process and product engineers working together in the early phases of design. This is another characteristic of the DFMA methods: they require the implementation of a CE environment with representatives of different knowledge areas in the design team.
For Keys apud Valeri and Trabasso [18], the DFX methods can be defined as “a set of techniques generally applied in the early phases of integrated product development, to guarantee that the whole lifecycle will be considered in the product design”. In this definition the authors point out that the DFM method is a “technique applied in design, aiming at the definition of alternatives which optimize the manufacturing system as a whole, identifying concepts of easily manufactured products, helping the design of this kind of product, and facilitating the integration between the development of manufacturing processes and the design of the product”.
Accordingly, Stoll apud Valeri and Trabasso [18] presents a list of directives for DFM:
- to minimize the number of parts;
- to develop modular designs;
- to minimize the variations of parts;
- to design multi-functional parts;
- to design parts for multiple uses;
- to facilitate the manufacture;
- to avoid the use of separate fasteners;
- to reduce the number of assembly directions;
- to maximize compliance in assembly.
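As an illustration of how the DFA side quantifies such directives, the Boothroyd-Dewhurst design-efficiency index compares a theoretical minimum part count against the estimated total assembly time, taking 3 s as the ideal handling-plus-insertion time per part. The part data below are invented for the sketch.

```python
# Sketch of the Boothroyd-Dewhurst DFA design-efficiency index:
# efficiency = 3 * Nmin / total_assembly_time, where 3 s is the ideal
# time per part. Part names and times below are made up.

def dfa_efficiency(parts, n_min):
    """parts: list of (name, estimated assembly time in seconds);
    n_min: theoretical minimum number of parts."""
    total_time = sum(t for _, t in parts)
    return 3.0 * n_min / total_time

parts = [("housing", 4.0), ("shaft", 6.0),
         ("screw 1", 8.0), ("screw 2", 8.0), ("cover", 4.0)]

# theoretical minimum part count after applying the directives
# (e.g. the separate screws could be eliminated or integrated)
print(round(dfa_efficiency(parts, n_min=3), 2))  # 3 * 3 / 30 = 0.3
```

A low index like this flags the design for part-count reduction, which is exactly what the first and seventh directives above demand.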
FMEA
The Failure Modes and Effects Analysis (FMEA) tool was developed with the intention of assisting in the diagnosis and forecasting of failures in military and aeronautical equipment. However, owing to its predictive character, it came to be used in product design.
FMEA is a standardized analytical tool used to detect and eliminate potential problems in a systematic and complete way [17]. It uses the design team’s knowledge of quality, performance and process. Moreover, FMEA allows the hierarchization of the causes of problems and establishes parameters for the adoption of preventive or corrective actions [17].
Thus, when FMEA is applied, we can determine the critical points of the design, which helps the team to define the design priorities. This tool, like DFA and DFM, is used in the Preliminary Design phase. However, because FMEA implies a preview of the product’s functioning, it is also useful in Conceptual Design.
Applying FMEA in Conceptual Design has the advantage of detecting and correcting problems earlier. However, it presents the disadvantage that, in this phase, little information is available. This lack of information can be a source of uncertainties, but FMEA can be useful in interface design precisely to resolve some of those uncertainties.
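A common way FMEA hierarchizes the causes of problems is the Risk Priority Number, RPN = severity × occurrence × detection, with each factor rated on a 1-10 scale. A minimal sketch, with invented failure modes and ratings:

```python
# Sketch of FMEA prioritization via the Risk Priority Number (RPN).
# RPN = severity * occurrence * detection, each rated 1-10.
# The failure modes and ratings below are illustrative.

def rank_failure_modes(modes):
    """modes: list of (description, severity, occurrence, detection).
    Returns (description, RPN) pairs, highest priority first."""
    scored = [(desc, s * o * d) for desc, s, o, d in modes]
    return sorted(scored, key=lambda x: x[1], reverse=True)

modes = [
    ("contact overheating",         8, 4, 5),
    ("inadequate contact fixation", 6, 5, 3),
    ("housing crack",               4, 2, 6),
]
ranking = rank_failure_modes(modes)
print(ranking[0])  # ('contact overheating', 160)
```

The ranking tells the team which failure modes deserve preventive or corrective action first, which is the prioritization role the text attributes to FMEA.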
4 Conclusions
From the analysis presented above, it can be seen that interface design and CE look at different aspects of product development. Nevertheless, they converge on the reduction of uncertainties in the design process, because they involve people from different areas of the product lifecycle and can therefore map and forecast otherwise neglected product aspects.
However, there are barriers to using them.
Although it has been studied extensively, CE still faces a set of barriers to its effective implementation. It cannot be used in many companies because it is a work philosophy that demands a great deal of information exchange and debate between different functional areas of the company. These exchanges of experience depend basically on the profile of the people involved in the design team, and difficulties can appear due to internal disputes or relationship problems.
Interface design has another kind of implementation difficulty: there are no interface design methodologies for conceptual design. There are proposals that suggest its use in the Preliminary Design phase; however, these preserve the sequential status of product development.
Based on these barriers, we perceive a need to improve aspects of both subjects. This improvement entails anticipating interface development to the Conceptual Design phase. This anticipation, with the concurrent application of DFA, DFM and FMEA, tends to bring together information from different areas of the product lifecycle, facilitating the implementation of the CE environment.
This occurs because the DFx techniques can be considered a knowledge base whose objective is to design products maximizing characteristics such as quality, reliability, serviceability, security, usability and environmental friendliness, as well as time-to-market, while minimizing the costs of the product lifecycle and manufacture. Thus, the use of DFx in Conceptual Design can play a great role in decisions taken at the abstract level and in the product costs of the later phases of its development. In addition, these techniques help define the product’s performance over its lifecycle.
The use of DFA, DFM and FMEA in Conceptual Design is a new perspective in the field of integrated product development, since it can exploit all the advantages that those tools present with respect to the type of knowledge used in the generation and selection of product concepts. Because they expose tacit knowledge, they can contribute to diffusing the experts’ knowledge throughout the whole organization.
Thus the viability of using these techniques is evidenced, with indications for possible work in the Artificial Intelligence (AI) area. This is because AI makes it possible to deal with human tacit knowledge, transforming it into explicit knowledge. In addition, AI is, in a certain sense, linked to problems of a heuristic nature, such as the selection of the best conception for a product.
5 References
[1] MOLLOY, E. and BROWNE, J. A knowledge-based to design for manufacture using
features. In: PARSAEI, H.R. and SULLIVAN, W. G. Concurrent engineering:
contemporary issues and modern design tools. London: Chapman & Hall, 1993, p. 386-
401.
[2] KERZNER, H. Project management: a systems approach to planning, scheduling and
controlling. John Wiley & Sons, Inc. 1998.
[3] ROZENFELD, H. et al. Gestão de Desenvolvimento de Produtos: Uma referência para
a melhoria do processo. São Paulo, SP: Saraiva, 2006. 542 p.
[4] ANDRADE, L.F.S and FORCELLINI, F.A. Estudo da viabilidade de utilização do
DFA, DFM e FMEA como ferramentas de auxílio para o projeto de interfaces na fase
de projeto conceitual. In: 3º CONGRESSO NACIONAL DE ENGENHARIA
MECÂNICA, 2004, Belém, PA.
[5] HÖLTTÄ, K.M.M. and OTTO, K.N. Incorporating design effort complexity measures
in product architectural design and assessment. Design Studies, Elsevier, v. 26, n. 5,
p. 463-485, Sept. 2005.
[6] BACK, N. and FORCELLINI, F. A. Projeto de produto. Florianópolis, 2000.
Coursebook for the Conceptual Design Course, Post-Graduation in Mechanical
Engineering, Department of Mechanical Engineering, UFSC.
[7] ULLMAN, D.G. The Mechanical Design Process. Highstown, NJ, USA: McGraw-
Hill, 1992. 336 p.
[8] FERREIRA, A.B.H. Interface. Novo Aurélio Século XXI. Rio De Janeiro: Nova
Fronteira, 2003.
[9] ULRICH, K. T. and EPPINGER, S. D. Product Design and Development. 3rd ed. New
York, USA: McGraw-Hill/Irwin, 2004. 366 p.
[10] OTTO, K.N. and WOOD, K.L. Product Design: Techniques in Reverse Engineering
and New Product Development. Upper Saddle River: Prentice-Hall, 2001. 1065 p.
[11] SOUSA, A.G. Estudo e Análise dos Métodos de Avaliação da Montabilidade de
Produtos Industriais no Processo de Projeto. Post-Graduation Program in Mechanical
Engineering, UFSC, 1998. Master’s Dissertation
[12] LINHARES, J.C. and DIAS, A. Modelamento de dados para o desenvolvimento e
representação de peças – estudos de caso. Post-Graduation Program in Mechanical
Engineering, UFSC, 1998. Master’s Dissertation
[13] SIQUEIRA, O. C. and FORCELLINI, F. A. Sistemática para seleção do tipo de união de
componentes de plástico injetados. In: Brazilian Congress of Product Development
Management, 3rd, 2001, Florianópolis - SC.
[14] BOOTHROYD, G., DEWHURST, P. and KNIGHT, W. Product Design for Manufacture
and Assembly. New York: Marcel Dekker Inc, 1994.
[15] PEREIRA, M. W. and MANKE, A. L. MDPA – Uma metodologia de desenvolvimento
de produtos aplicado à engenharia simultânea. In: Brazilian Congress of Product
Development Management, 3rd, 2001, Florianópolis - SC.
[16] FERRARI, F. M., MARTINS, R. A. and TOLEDO, J. C. Ferramentas do processo de
desenvolvimento do produto como mecanismos potencializadores da gestão do
conhecimento. In: Brazilian Congress of Product Development Management, 3rd,
2001, Florianópolis - SC.
[17] VALERI, S. G. and TRABASSO, L. G. Desenvolvimento integrado do produto: uma
análise dos mecanismos de integração das ferramentas DFX In: Brazilian Congress of
Product Development Management, 4th, 2003, Gramado - RS.
Knowledge Engineering: Organization Memory,
Ontology, Description logics and Semantics
Organizational Memory for Knowledge and
Information Management in the Definition, Analysis
and Design Phases of Civil Engineering Projects using
an XML Model
1 Introduction
Organizational Memories (OM) are knowledge-based systems, and knowledge acquisition is a central aspect of developing this type of system. In early systems, knowledge was extracted from domain experts and represented as a set of heuristic rules for solving problems. Finding these heuristics is not easy. Reynaud [9] argues that the difficulties arise, among other things, from the absence of a methodological guide for constructing heuristic knowledge bases, from the fact that the heuristics are not directly expressed by the experts, and from the fact that the ability to solve problems depends directly on the existence of these heuristics.
1
Assistant Professor, National University of Colombia, Faculty of Mines, Systems School
(Of. M8A-313). Street 80, 65 223. Tel: (+574) 4255358. Fax: (+574) 4255365. E-mail:
glgiraldog@unalmed.edu.co Url: www.unal.edu.co Medellín, Colombia.
514 G.L Giraldo, G. Urrego-Giraldo
Active research on these difficulties began in 1980. The dynamics of research in this area has led to approaches based on the construction of models, in opposition to those centred on the knowledge extracted from experts.
The knowledge of a study domain for developing an application, named “domain knowledge”, has been differentiated from “reasoning knowledge”, namely the knowledge obtained by reasoning, which abstractly describes the process of an application in terms of “tasks” and “methods”.
Reasoning knowledge is abstract knowledge. It focuses on the relationships with objects, taking account of the objects’ roles in the reasoning model. The reasoning model is a frame of knowledge categories [12] supporting the interventions of the agents on the relationship between the agents and the objects of the domain.
Domain knowledge is centered on objects, their associated semantic concepts, the relationships among these objects, and physical interventions on domain objects. It expresses structured knowledge arising from well-defined processes and oriented towards communication. This characterization corresponds to the concept of information.
Domain knowledge is currently structured in ontologies containing the domain objects and the relationships among them. Many domain ontologies have been proposed for specific economic sectors. For example, [5] presents an ontology for the civil engineering construction domain. Other ontologies are centered on phases or aspects of the construction work: for example, the definition phase of a project is considered in [14], and in [2] a project ontology is organized in terms of process, organization and product. We propose an enterprise ontology containing a project ontology in the domain of studies and design in civil engineering, for constructing an organizational memory in order to store, manage, retrieve and reuse project knowledge.
In the domain of interoperability for enterprise applications, the European project InterOp, www.interop-noe.org, aims to facilitate the emergence of an interoperability research corpus through the fusion of three knowledge components: software architecture, enterprise modelling and ontology. These integration concepts are considered in that European project as requirements for interoperability. We introduce a model for integrating information and knowledge, and a model of Organizational Memory for knowledge and information management related to the definition, analysis and design phases of civil engineering projects.
Many definitions and interpretations of organizational memory exist in the literature. The concepts expressed, among others, in [1,6,13] are extended in [8], which considers the corporate memory as an «explicit, disembodied, persistent representation of knowledge and information in an organization, in order to facilitate its access and reuse by members of the organization, for their tasks». Organizational Memory as a model for knowledge management concentrates the research work on particular domains, in order to pay attention to specific knowledge and to develop an organizational culture favourable to an extended use of the Organizational Memory. In this regard, recent studies in [7] and [3] highlight the gap between the external diffusion of Organizational Memory models and their limited impact within the enterprises.
2 Meta-model of Knowledge
A first generation of knowledge systems was centered on the knowledge obtained from domain experts, while the second is characterized by the use of models for supporting the stages of knowledge management. We use the knowledge meta-model of [4], in Figure 1, intending to integrate knowledge and information concepts as well as to construct structured reasoning models and ontologies representing knowledge of different natures.
In our work, knowledge is considered as all that may be known, understood and imagined about a subject, considering existent or created objects, agents, means, methods and agents’ interventions. Equally, we define information as predefined communication-oriented knowledge arising from well-structured processes, from the systematic intervention of existent agents, means and methods on existent or created objects, considering known and created relationships among these objects. We extend the concept of knowledge involved in a Knowledge Base and in Knowledge-Based Systems in order to incorporate and differentiate information and knowledge. Knowledge-Based Systems (KBS) thus become Extended Knowledge-Based Systems (EKBS), including both knowledge and information.
Our work takes the domain and context concepts of [10,11], where the domain is an activity field characterized by its objects, while the context defines a space of agents’ interventions, including the means, methods and circumstances that complement the description of a phenomenon.
[Figure 1: knowledge meta-model of the Extended Knowledge Systems. Knowledge divides into knowledge centered on domain objects, their features and relationships, and knowledge centered on agents, means and methods, their features and relationships; the concept ExternalInteraction links InteractionOnSpecificContextKnowledge and InteractionOnGlobalContextKnowledge (covering ExistentKnowledge and GlobalContextCientificalIntervention), down to Document elements such as Title, Section and Text.]
Offered Services, as the top concept of the Engineering Service Ontology, has two derived branches: CivilEngineeringConsultingServices and InternalServices. The first generalizes the concepts Design, Study and Control, which in turn are specialized into particular study and design services. For example, Design is specialized, among others, into StructuralDesign, and this last into BridgeStructuralDesign, BuildingStructuralDesign, DamStructuralDesign, WallStructuralDesign, etc.
The concepts PreviousInformation, ProjectInformation, DeliveryInformation and PostDeliveryInformation specialize the concept ConsultingServiceInformation, which is contained in CivilEngineeringConsultingServices. The concept PreviousInformation covers the knowledge necessary for defining, contracting and developing an engineering service and for managing the related project. PreviousInformation contains TechnicalInformation, Proposal and Contract; this last concept is detailed into Head, Clauses and FinalPart.
engineering, involves and deploys, among others, the concepts of civil engineering consulting service and project information. Thus, the ontology concepts are used to tag the documents related to the specific services and projects.
The tagged documents are incorporated into an XML file, which constitutes the Organizational Memory for the enterprise of Studies and Design. The small excerpt of the XML file in the next paragraph illustrates the storage and availability of the registered knowledge for retrieval and reuse. The concept ExternalInteraction of the ontology links domain knowledge (information) related to the project with knowledge of the specific and global contexts. There, the chapter “Earth Pressure and Hydraulic Pressure” of the book “Monobe-Okabe Method” is a scientific report required at short notice by the engineers for accomplishing project activities.
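A minimal sketch of such an XML-based memory, assuming illustrative element names and document titles (the actual file structure is defined by the enterprise ontology described in the paper):

```python
# Sketch of storing and retrieving concept-tagged documents in an XML-based
# organizational memory (element names, concepts and titles are illustrative).
import xml.etree.ElementTree as ET

memory = ET.Element("OrganizationalMemory")

def store(concept, title, text):
    """Tag a document with an ontology concept and add it to the memory."""
    doc = ET.SubElement(memory, "Document", concept=concept)
    ET.SubElement(doc, "Title").text = title
    ET.SubElement(doc, "Text").text = text

def retrieve(concept):
    """Return titles of all documents tagged with the given ontology concept."""
    return [d.find("Title").text
            for d in memory.findall(f"Document[@concept='{concept}']")]

store("ExternalInteraction", "Earth Pressure and Hydraulic Pressure",
      "Chapter consulted for retaining-wall design.")
store("ProjectInformation", "Bridge X design brief", "...")

print(retrieve("ExternalInteraction"))  # ['Earth Pressure and Hydraulic Pressure']
```

Tagging documents with ontology concepts is what lets the memory answer concept-level queries rather than plain keyword searches.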
Some conclusions and future work are presented in the next section.
Organizational memory supporting the continuous
transformation of engineering curricula
Abstract. We consider seven knowledge components that constitute the pillars for building
a document-based organizational memory for engineering curriculum design: epistemology,
pedagogy, philosophy, universal knowledge, internal academic knowledge, external
academic knowledge and extra-academic knowledge. We present a domain ontology for
guiding access to, and management and retrieval of, knowledge and information stored in
annotated documents. The curriculum-oriented organizational memory supports the
construction, evaluation and continuous evolution of engineering curricula. The integration
of knowledge and information for continuous curricular transformation is illustrated with a
case study of an informatics curriculum.
1 Introduction
[1] Professor, University of Antioquia, Faculty of Engineering (Of. 21-409), Street 67, 53 108, Medellin, Colombia.
Tel: (+574) 210 5530. Fax: (+574) 2638282. E-mail: gaurrego@udea.edu.co. Url: www.udea.edu.co
522 G. Urrego-Giraldo and G. L. Giraldo
Moreover, knowledge extracted from experts does not clearly distinguish Domain
knowledge from Reasoning knowledge. Although this aspect has captured the
interest of approaches oriented to model construction, the characterization of
reasoning knowledge is not yet easy, as it remains constrained to reasoning about
identified domain objects. We propose to extend the reasoning to consider created
domain objects, additional semantic features, agents created from the specific and
global contexts, existent and created contextual objects, and abstract interventions
of existent and created agents. In this way, our work uses the meta-model of
knowledge proposed in [4] in order to clarify the knowledge categories outside of
domain knowledge. The concept Utilisation Context of a system, for example, is
only a part of the Context and exceeds the scope of traditional reasoning models.
Indeed, the Utilisation Context includes the existent and created agents, means and
methods, as well as the physical and abstract interventions of these agents,
involving existent and created domain objects.
First, general knowledge areas are conceptualized using the knowledge meta-model
proposed in [4]. This knowledge meta-model contains two principal
knowledge categories: Domain Knowledge and Context Knowledge. For the sake of
space, the knowledge meta-model is not depicted.
The natural domain for curricula development encompasses the part of the universal
knowledge necessary to satisfy social needs belonging to the field of engineering.
This fraction of knowledge feeds the curriculum model with knowledge arising
from these two contexts. The next two paragraphs examine which categories of the
knowledge meta-model provide the knowledge for engineering curriculum models.
Domain Knowledge includes knowledge centred on existent and created
domain objects, considering objects' features and their relationships, object
semantic concepts and Agent Interventions, which involve physical and intellectual
treatments of objects. An agent intervention expresses actions and interactions of
agents; it uses known and created relationships among objects. Thus, Domain
Knowledge contains the knowledge associated with the domain objects, which
represent the part of the universal knowledge necessary to satisfy social needs
belonging to the field of engineering.
Context Knowledge is focused on existent and created agents, means and
methods, contextual objects, semantic concepts and Agent Interventions involving
existent and created context objects. Agent Interventions may be physical or
intellectual. We consider three context types: Social, Organizational and Systemic.
Context Knowledge is divided into Global Context Knowledge and Specific
Context Knowledge. Existent and created agents, means and methods, their
semantic characteristics and their relationships constitute the Specific Context
Knowledge, while contextual objects, their semantic characteristics, their
relationships and the agent interventions involving the contextual objects compose
the Global Context Knowledge. Agent interventions involving means, methods and
domain objects determine the Utilisation Context, which defines the interactions of
agents of the Specific Context Knowledge involving the part of the universal
knowledge necessary to satisfy social needs belonging to the field of engineering.
This part of universal knowledge belongs to Domain Knowledge and is found in the
particular knowledge areas constituting the support of engineering curricula
models.
The problem at this point consists of determining the general knowledge categories
of engineering curricula models, leading to the construction of these models.
To this end, we construct the Utilisation Context of engineering curricula,
Figure 1, aiming to define general knowledge categories considering typical
interactions of agents. From a pattern of typical interactions of commercial agents,
presented in [9], we keep only two interactions: “Request and delivery of services
and objects of information and knowledge” and “Request and delivery of
information and knowledge: scientific, technological, technical, social, marketing,
organizational, commercial, economic, legal and personal”.
The agents related by the engineering curricula are introduced in the Utilisation
Context, in Figure 1. By examining the typical agent interactions proposed above,
using the envisioned engineering curricula models as a means, the following
general categories of knowledge are identified: epistemology, pedagogy,
philosophy, universal knowledge, internal academic knowledge, external academic
knowledge and extra-academic knowledge.
Figure 1. Utilisation Context of engineering curricula. The agents include universities and institutions, government, the work field, candidate students, the program direction, and researchers and teachers, together with specific and general knowledge sources, national and external.
Domain Knowledge and Context Knowledge specialize the top concept of the
curricular knowledge ontology. The general knowledge categories of engineering
curricula models, defined in the previous section, constitute the second level of the
ontology depicted in Figure 2. The other ontology levels consider the particular
categories of knowledge, which are essential meta-concepts for constructing
curricula models. The meta-concepts of the Ontology of Curricular Knowledge
Categories lead to the definition of the concepts for constructing the engineering
curricula models. For instance, the meta-concept Universal Knowledge leads to the
definition of the concepts Mathematics, Physics, Chemistry, Biology, Economics,
Management, etc.
Figure 2. Ontology of Curricular Knowledge Categories. The second level comprises, among others, the concepts Philosophy, Epistemology, Pedagogy, Universal Knowledge, Specific Context, Global Context, Utilisation Context, Context Information and Expanding Knowledge, related by “contribute” and “satisfies” links.
The ontology drives the definition of what curricula models may do, how they
do it, why they do it and for what they do it, as well as the construction of pertinent
and effective curricula. To ensure that curricula answer these questions, we propose
a set of criteria that engineering curricula models have to incorporate:
a) Be focused on an undergraduate engineering curriculum adopting specific
formation purposes.
b) Be based on the solution of problems related to social needs.
c) Be pertinent and effective in the solution of problems.
d) Be supported on pedagogical models centred on research, in which the
students are directly related to knowledge.
e) Be adaptable in the face of social, economic, scientific, technological and
technical changes.
f) Be fed by scientific and technological developments.
g) Be oriented to create open-minded, autonomous and creative engineers.
h) Provide an integral formation of persons, considering three human
dimensions: to be, to know and to do.
i) Be flexible, aiming to offer alternative formations and to develop
vocational options for the students.
j) Be pertinent in contents, according to the specific areas of knowledge of
other national and international institutions and to new knowledge trends.
k) Be subject to continuous revision and improvement of the curriculum.
Relating these criteria to the concepts of the second ontology level, the
remaining concepts of the ontology, in Figure 2, are defined.
Criterion a) leads to the discovery of the concepts curriculum model and
formation purposes. Social needs and problems arise from criterion b). From
criterion c) arises the concept extra-academic knowledge. Criteria c), d) and i)
induce the concepts knowledge general areas, curricular organization units,
thematic units and didactical strategies. Personal competences is a concept
suggested by criterion g). Criterion h) provides the concept curriculum conceptual
dimensions. The analysis of criterion j) produces the concept external academic
knowledge.
The predominant relationships among ontology concepts are the “is-a”,
“composition”, “aggregation” and “specific of domain” relationships. These
relationships start from domain knowledge, in particular from the concepts
expressing semantic characteristics of the domain objects.
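The typed relationships above can be made concrete with a small sketch. The helper and data structure are our own illustration; the concept names are drawn from the text:

```python
# A minimal sketch (our own, not the paper's implementation) of recording the
# predominant typed relationships among curricular ontology concepts.
from collections import defaultdict

RELATION_TYPES = {"is-a", "composition", "aggregation", "specific-of-domain"}
relations = defaultdict(list)

def relate(source, relation, target):
    """Record a typed relationship between two ontology concepts."""
    assert relation in RELATION_TYPES, f"unknown relation: {relation}"
    relations[relation].append((source, target))

# Concept names taken from the text; the edges are illustrative.
relate("DomainKnowledge", "is-a", "CurricularKnowledge")
relate("ContextKnowledge", "is-a", "CurricularKnowledge")
relate("CurriculumModel", "composition", "CurricularOrganizationUnit")
relate("CurricularOrganizationUnit", "composition", "ThematicUnit")
relate("Mathematics", "specific-of-domain", "UniversalKnowledge")
```

Such an edge list is enough to answer simple structural queries, e.g. listing every concept that specializes CurricularKnowledge.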
6 References
[1] Abel M-H, Benayache A, Lenne D, Moulin C, Barry C, Chaput B. Ontology-based
Organizational Memory for e-learning. Educational Technology & Society 7(4): 98-
111, 2004.
[2] Colace F, De Santo M, Liguori C, Pietrosanto A, Vento M. Building Lightweight
Ontologies for E-Learning Environment. IPSI, Hawaii, January 2005.
[3] Gašević D, Hatala M. Ontology mappings to improve learning resource search. British
Journal of Educational Technology 2006, 37(3), 375–389.
[4] Giraldo G, Urrego-Giraldo G. Integrating Information and Knowledge Systems.
(Research Report KEP-1).Medellin, Colombia, 2007.
[5] Gupta A, Ludäscher B, Moore RW. Ontology Services for Curriculum Development
in NSDL. ACM/IEEE-CS Joint Conference on Digital Libraries (JCDL), July 2002.
[6] Ronchetti M, Saini PS. Knowledge Management in an e-Learning System. ICALT,
2004
[7] Soffer P, Wand Y. Goal-Driven Analysis of Process Model Validity. In: Proceedings
of CAiSE’04, Riga, Latvia. 2004.
[8] Toronto Virtual Enterprise. University of Toronto Canada. Enterprise Integration
Laboratory - TOVE Ontologies. Available at http://www.eil.utoronto.ca/enterprise-
modelling/tove/index.html Accessed on: Feb. 15th 2007.
[9] Urrego-Giraldo G, Giraldo GL. Estructuras de servicios y de objetos del dominio: una
aproximación al concepto de ontología. TECNO LÓGICAS. Medellín, Colombia:
v.15, 2006.
[10] Uschold M, King M, Moralee S, Zorgios Y. The Enterprise Ontology, The Knowledge
Engineering Review , Vol. 13, Special Issue on Putting Ontologies to Use (eds. Mike
Uschold and Austin Tate), 1998. pp 31-89
[11] Wand Y, Weber R. An Ontological Model of an Information System, IEEE
Transactions on Software Engineering, November 1992, pp. 1282-92.
Development of an Ontology for the Document
Management Systems for Construction
Alba Fuertes a,1, Núria Forcada a, Miquel Casals a, Marta Gangolells a and Xavier Roca a
a Construction Engineering Department, Technical University of Catalonia, Spain.
Abstract. This paper describes the development of an ontology for the management of
AEC/FM project documentation that allows documents to be classified along the lifecycle
of AEC/FM projects. This ontology is aimed at reducing the interoperability and
information exchange problems inherent nowadays in AEC/FM projects, by establishing a
hierarchical structure of the different areas that make up the lifecycle of AEC/FM projects
and a system of interrelationships between them. Therefore, all the documentation created
along a project can be classified in the different areas of the project lifecycle and related to
them by this hierarchical structure. Moreover, metadata such as identifier, creation date,
etc. have been incorporated into documents, to be completed and modified by the author in
order to facilitate users' understanding. This ontology is thus a first step towards improving
Document Management Systems in AEC/FM projects and overcoming their
interoperability limitations.
1 Introduction
The architecture, engineering, construction and facility management industry
(AEC/FM) is fragmented due to the many stakeholders and phases involved in a
construction project as well as to the complexity and diversity of their projects.
This fact has led to a huge amount of organizational information formalized in
unstructured documents.
Electronic Document Management Systems (EDMS) are an Information and
Communication Technology (ICT) application that has started to be used in the
construction industry as a tool to reduce some of the problems generated by
fragmentation, creating an environment that allows documentation to be stored
centrally on a server. However, EDMS also have some limitations, most of
them related to interoperability, the ability for information to flow from one
computer application to the next throughout the life cycle of a project, which
1 Construction Engineering Department, Technical University of Catalonia, Spain.
C/Colom, 11, Building TR-5, 08222 Terrassa (Barcelona, Spain); Tel: +34 93
7398947; Fax: +34 7398101; Email: alba.fuertes@upc.edu; nuria.forcada@upc.edu
becomes difficult to achieve because it depends on the development and use of
common information structures by all the different EDMS users. To solve this
problem, several initiatives are being developed: standards aimed at establishing
internationally recognised information classification structures, projects based on
them, and ontologies, another kind of system for representing the concepts that a
domain contains and the relations that exist between them. Examples include
ISO 12006 [3] and the Industry Foundation Classes (IFC) [4], standards that
establish a structure for the classification of objects in the AEC/FM sector;
Lexicon [5], a project based on ISO 12006 that provides general information such
as building regulations, product information, cost data and quality assessments in a
common and standardized language; and eCognos [2], a project that establishes
and deploys a domain ontology for knowledge management in the AEC/FM sector,
among others. From the study of these different initiatives it can be observed
that most of them are object-oriented and define an information classification
structure in different fields of the AEC/FM project domain; however, none of
them is oriented to document management.
3 Background
Classification systems that attempt to organize the knowledge base of national
construction industries have a long history. The Swedish SfB system has been
under development since 1945 and although it has long been superseded in Sweden
itself it remains the basis for many existing knowledge classification systems such
as CI/SfB [6], which is widely used in the UK.
The growing experience with classification systems and the development of
ICTs have led to the development of the ISO 12006 series [3], which is aimed at
establishing internationally recognized classification principles. Projects such as
Uniclass [1], Lexicon [5], etc. are examples of adaptations of ISO 12006.
character of the processes which occur within it, and Stages, defined as sub-
processes of the project Phases. See Figure 1.
On the other hand, documents are a result of the different activities and subactivities
that occur along the project. In this way, Activities, defined as working areas of
the project, and Subactivities, understood as the types of information of special
importance in a project, are included in the Concept Model. The
Activities defined are: Advance, Changes, Contractings, Costs, Environment,
Project, Quality and Safety & Health. The Subactivities identified are:
Communication, Documentation, Control and Planning.
As a result, with the Stages and Phases that complete the lifecycle of the
project, the documents listed, and the Activities and Subactivities identified, all the
relationships between them can be defined. These relationships classify the
documents along the lifecycle of the project, bearing in mind also the activity and
subactivity they come from. Therefore, the concepts and the relationships that
constitute the Concept Model of the Documentation flow are identified.
These concepts are classified into two kinds of metadata related to each
document. Content-related metadata is understood as the metadata that relates
the documents to the phases, stages, activities and subactivities in order to
situate them along the project lifecycle; this metadata is inherent in the document.
On the other hand, content-properties metadata is related to what the document
contains or is about, thus providing users and applications with useful hints to help
document search and retrieval and to improve the reuse of documented
information. This metadata is not compulsory and depends on the author's needs;
examples include the name of the creator or the receiver, the format type, the
creation date, the version, etc. See Figure 2.
From the definition of this Concept Model of the Documentation flow,
documents are classified as the result of the intersection of an Activity and a
Subactivity that take place along one or more Stages that are part of a Phase. In
this way, documents are located along the lifecycle of the construction project
based on a three-dimensional model. See Figure 3.
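The two kinds of metadata and the intersection-based location of documents can be sketched as follows. This is our own simplification; the phase and stage values used in the example are hypothetical:

```python
# A sketch (our own, with hypothetical phase/stage values) of content-related
# metadata, optional content-properties metadata, and locating documents at the
# intersection of an Activity, a Subactivity and a Stage within a Phase.
from dataclasses import dataclass, field

@dataclass
class Document:
    identifier: str
    # content-related metadata: inherent in the document
    phase: str
    stage: str
    activity: str                 # e.g. Costs, Quality, Safety & Health
    subactivity: str              # e.g. Communication, Control, Planning
    # content-properties metadata: optional, filled in by the author
    properties: dict = field(default_factory=dict)

def locate(documents, activity, subactivity, stage):
    """Documents at the intersection of an Activity, Subactivity and Stage."""
    return [d for d in documents
            if (d.activity, d.subactivity, d.stage) == (activity, subactivity, stage)]

docs = [
    Document("D-001", "Construction", "Structure", "Costs", "Control",
             {"creator": "site manager", "format": "pdf"}),
    Document("D-002", "Design", "Preliminary", "Quality", "Planning"),
]
found = locate(docs, "Costs", "Control", "Structure")
```

The triple (activity, subactivity, stage) plays the role of the three axes of the model; the phase is recovered through the stage it contains.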
Figure 2. Concept Model of the Documentation flow. A DOCUMENT is part of the Project documentation, is a result of an Activity & Subactivity and comes from a Project Stage, which is part of a Project Phase in the PROJECT LIFECYCLE; the document has the properties Identifier, Status, Description, Version, Creation date, Creator, Receiver, Format and Late submittal date.
Figure 3. Three-dimensional location of documents along the project lifecycle (Phase 1, Phase 2, ...).
4.2 Implementation
The implementation of the ontology has been carried out in the Protégé editor,
chosen for its open source code, free access and simplicity, working with the
OWL DL language.
Reusing ontologies is an important point to bear in mind before developing a
new ontology. Since no ontologies related to DMS have been found, reuse has not
been possible; however, the terminology used to describe the phases, stages,
authors, activities and subactivities in AEC/FM has been extracted from existing
standards.
From the concepts identified in the Concept Model, classes, subclasses and
properties are defined. Classes describe concepts in the domain, and subclasses
describe kinds of the already defined classes. The classes defined in the proposed
ontology are shown in Figure 4, and the subclasses of each class are represented in
the same way.
Figure 4. Classes of the Ontology for DMS for Construction displayed in Protégé
Documents are related to the Stages, Activities and Subactivities they come
from by means of logic expressions such as “and” and “or”, which express
intersection and union, respectively. Thus, an “and” expression is used to state that
a document is a result of the intersection of an Activity, a Subactivity and a Stage
(the Phase is already related to the Stage), and an “or” expression is used to state
that this document can be located at different locations (intersections) along the
project lifecycle. See Figure 7.
6 References
[1] Construction Industry Project Information Committee (CIPIC). Available online at:
<http://www.productioninformation.org/> Accessed on Jan. 25th 2007.
[2] Ei-Diraby TE, Lima C and Feis B. Domain taxonomy for construction concepts:
Toward a formal ontology for construction knowledge. Journal of Computing in Civil
Engineering, Vol. 19, Issue 4, 2005, pp. 394-406.
[3] International Organization for Standardization ISO 12006-22. International
Organization for Standardisation. Building construction - Organization of information
about construction works. Part 2: Framework for classification of information, 2001.
[4] Liebich T and Wix J Eds. IFC Technical Guide. International Alliance for
Interoperability, October 27, 2000.
[5] Woestenek K. From lexicon to XTD. In Z. Turk & R. Scherer (eds), European
Conference on Product and Process Modelling in the Building and Related Industries;
Proc. eWork and eBusiness in Architecture, Engineering and Construction, Portoroz,
Slovenia, 9-11 September, 2002. The Netherlands: Balkema.
[6] Wright T. Classifying Building Information: A Historical Perspective. Technical report,
Construction Information Systems Australia Pty Ltd., 1998.
Some Approaches of Ontology Decomposition in
Description Logics
Introduction
Previous studies on DL-based ontologies focus on such tasks as ontology
design, ontology integration, ontology deployment,… Starting from the fact that
one wants to effectively solve and reason with a large ontology, instead of
integrating multiple ontologies we examine the decomposition of an ontology into
several sub-ontologies that have overlapping content. Some reasoning algorithms
on the system of decomposed ontologies, and criteria for the decomposition, have
been proposed in a previous paper [4]. In this paper, we address the problem of
decomposition more concretely: we delve into the techniques for decomposing an
ontology into several sub-ontologies. Our principal question is how to select a
"good" decomposition. A decomposition is called "good" if it improves the
efficiency of reasoning and guarantees the properties proposed in [4]. Our
computational analysis of the reasoning algorithms guides us to suggest the
parameters of such a decomposition: the number of concepts and roles included in
the semantic mappings between
* Laboratoire I3S, UMR 6070, CNRS, Les Algorithmes, Bât. Euclide 2000, Route des
Lucioles, BP121, 06903 Sophia-Antipolis, France; Email: tpham@i3s.unice.fr
538 Thi Anh Le PHAM, Nhan LE-THANH, Peter SANDER
partitions, the size of each component ontology (the number of axioms in each
component) and the topology of the decomposition graph. There are two
approaches to be considered here, based on two representations of the ontology.
First, we represent the ontology by a symbol graph and implement the
decomposition based on the minimal separator method. Second, the ontology is
represented by an axiom graph, corresponding to the image segmentation method.
The paper is organized as follows. Section 1 describes two ways of representing
an ontology by an undirected graph (symbol graph or weighted graph). Section 2
defines the overlap decomposition of a TBox and the criteria for a good
decomposition. In sections 3 and 4, we discuss the methods and two partitioning
algorithms for the graphs representing an ontology, corresponding to the above
graphs. Section 5 presents some evaluations of the effects of the decomposition
algorithms. Section 6 provides some conclusions and future work.
Fig. 1. TBox T
To simplify, we use the notation symbol instead of primitive concept (role), i.e., a
symbol is either a primitive concept or a primitive role in a TBox. A symbol graph
will be introduced in the following section.
2 Ontology Decomposition
2.1 Definition
In this paper, the ontology is examined as a graph; therefore the notion of ontology
decomposition is based on graph partitioning. Assume that a graph G is partitioned
into m sub-graphs G_i, i ≤ m; then the partitioning of G is defined as follows:
roles of the original ontology will be kept through the decomposition. As a result,
we propose two techniques for decomposition based on graphs. In this paper, we
examine only the simplest case, decomposing a TBox into two smaller TBoxes.
The general case is presented in [7].
We defined the decomposition of a TBox as a distributed TBox consisting of
sub-TBoxes and the semantic mappings between each pair of sub-TBoxes. Our
decomposition approach is most closely related to graph formulation and
partitioning. A delicate aspect of decomposition-based logical reasoning is the
selection of a good partitioning of the ontology, and we also need to ask the
following questions. These questions have also been posed for image segmentation
and data clustering. In the general DL case, our purpose is not to reduce
computational complexity, but the results for reasoning in decomposition suggest a
similar relationship between the decomposed TBoxes and the original TBox. The
computational analysis of our decomposition-based reasoning algorithms provides
a metric for identifying the parameters of the decomposition that influence our
computation: the size of the communication part between component TBoxes, the
size of each component, and the topology of the decomposition graph. Our goal is
to minimize the disassociation between the component TBoxes and to maximize
the association within the components. Moreover, these component TBoxes must
preserve the properties of decomposition that we proposed in [4]. These parameters
guide us to propose two greedy algorithms for decomposing an ontology into
sub-ontologies by trying to optimize them. The decomposition algorithms depend
on the representation graphs of the ontology, which will be presented in the
following sections.
3.1 Definitions
Note that the minimal vertex separator can be defined as follows: S is a minimal
separator of the graph G = (V,E) if and only if there are two different connected
components of G[V \ S] such that every vertex of S has a neighbor in both of these
components. This definition corresponds to a lemma that has been proved in [6]. If
S is a minimal a,b-separator which contains only neighbors of a, then S is called
close to a.
3.2 Algorithm
• Split G into the two subgraphs G1,G2 that are separated by S*, with S* included in
both subgraphs.
• Create an undirected graph Gp = (Vp,Ep) with Vp = {G1,G2} and Ep = S*
Figure 4 illustrates the connected graph of the TBox T in Figure 2.
In this example, we have two minimum vertex separators, {X} and {Y}. If we
choose {X} to split T, then we collect two groups of symbols, {C1,C2,C3,X} and
{X,C4,C5,C6,Y,H,T}.
Hence, we obtain two sub-TBoxes, respectively:
T1 = {A1,A2,A7,A8} and T2 = {A3,A4,A5,A6,A9,A10}. Similarly, if we choose {Y},
we obtain two sub-TBoxes T1 = {A1,A2,A3,A4,A7,A8,A9,A10} and T2 = {A5,A6}.
We take the first result, because there is a better balance between the number of
axioms of the two sub-TBoxes. Thus, the axioms of the original TBox are
distributed into T1 and T2 with N1 = 4 and N2 = 6 axioms, respectively.
The number of symbols in the minimum vertex separator is |S*| = |{X}| = 1.
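The separator-based splitting can be sketched in plain Python. The symbol graph below is our own hypothetical construction, loosely modeled on the example (the paper's actual TBox graph, Figure 4, is not reproduced here); the code checks singleton separators by removing a vertex and counting connected components, keeps the separator in both parts, and prefers the most balanced split:

```python
# Sketch of separator-based TBox splitting on a hypothetical symbol graph
# (our own illustration; the edge set is not the paper's actual graph).
from itertools import combinations

def components(vertices, edges, removed):
    """Connected components of the graph after removing a vertex set."""
    adj = {v: set() for v in vertices if v not in removed}
    for u, v in edges:
        if u not in removed and v not in removed:
            adj[u].add(v); adj[v].add(u)
    seen, comps = set(), []
    for start in adj:
        if start not in seen:
            stack, comp = [start], set()
            while stack:
                x = stack.pop()
                if x not in comp:
                    comp.add(x)
                    stack.extend(adj[x] - comp)
            seen |= comp
            comps.append(comp)
    return comps

def best_singleton_separator(vertices, edges):
    """Among size-1 separators, pick the one giving the best balance."""
    best, best_parts = None, None
    for (s,) in combinations(vertices, 1):
        comps = components(vertices, edges, {s})
        if len(comps) == 2:
            p1, p2 = (c | {s} for c in comps)   # separator kept in both parts
            if best is None or abs(len(p1) - len(p2)) < abs(len(best_parts[0]) - len(best_parts[1])):
                best, best_parts = {s}, (p1, p2)
    return best, best_parts

V = ["C1", "C2", "C3", "X", "C4", "C5", "C6", "Y", "H", "T"]
E = [("C1","C2"), ("C2","C3"), ("C1","X"), ("C3","X"), ("X","C4"), ("X","H"),
     ("H","T"), ("T","C4"), ("C4","Y"), ("Y","C5"), ("Y","C6"), ("C5","C6")]
sep, (part1, part2) = best_singleton_separator(V, E)
```

On this toy graph the procedure selects {X} and yields the symbol groups {C1,C2,C3,X} and {X,C4,C5,C6,Y,H,T}, mirroring the balance criterion applied in the example.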
3. Once the eigenvectors are computed, we can decompose the graph into two
partitions using the second smallest eigenvector. In the ideal case, this eigenvector
takes on only two discrete values, and the signs of the values tell us exactly how to
decompose the graph. However, our eigenvectors can take on continuous values,
and we need to choose a "good splitting point" to decompose the graph into two
parts.
4. After the graph is broken into two partitions, we can recursively apply our
algorithm to the two decomposed partitions.
Figure 5 illustrates the NCut obtained (denoted by the blue line) in the example of
Figure 2, and the resulting decomposition graph is shown in Figure 6. This result is
the same as that of the method based on minimal separators.
References
1. Dieter Jungnickel, Graphs, Networks and Algorithms. Springer, 1999.
2. Eyal Amir and Sheila McIlraith, Partition-Based Logical Reasoning for First-
Order and Propositional Theories. Artificial Intelligence, Volume 162, February
2005, pp. 49-88.
3. Jianbo Shi and Jitendra Malik, Normalized cuts and Image segmentation. IEEE
Transactions on Pattern Analysis and Machine Intelligence, 22(8), 888-905,
August 2000.
4. Thi Anh Le Pham and Nhan Le Thanh, Decomposition-Based Reasoning for
Large Knowledge Bases in Description Logics. In Proceedings of the 13th ISPE
International Conference on Concurrent Engineering: Research and Applications,
pp. 288-295, Antibes, France, September 2006.
5. Kirill Shoikhet and Dan Geiger, Finding optimal triangulations via minimal
vertex separators. In Proceedings of the 3rd International Conference, p. 270-281,
Cambridge, MA, October 1992.
6. T. Kloks and D. Kratsch, Listing all minimal separators of a graph. In
Proceedings of the 11th Annual Symposium on Theoretical Aspects of Computer
Science, Springer, Lecture Notes in Computer Science, 775, pp. 759-768.
7. Thi Anh Le Pham, Raisonnement sur des ontologies décomposées. Rapport de
recherche, Lab I3S, Université de Nice-Sophia Antipolis, 2006.
Modeling ORM Schemas in Description Logics
Abstract: In recent years, there has been a growing interest in the integration of semantics
into the Semantic Web environment, whose goal is to access, relate and combine information
from multiple sources. With regard to this tendency, our work studies a mechanism to model
ORM schemas in the Description Logic language SHOINK(D), the underpinning of a Web
ontology language. This mechanism meets the key features required by ORM schemas (i.e.
identification and functional dependency constraints). It can be applied to integrate
information not only from systems described in ORM schemas but also from relational
databases into the Semantic Web environment.
Key words: Information integration, object role modeling (ORM), description logics (DLs),
web ontology language, semantic web.
1 Introduction
In recent years, the problem of interoperability and semantic integration of
heterogeneous information sources in the Semantic Web environment has triggered
a variety of research [2, 11, 15]. Besides the universal environment of the WWW
(World Wide Web), this is thanks to the expressivity of Web ontology languages,
which deals with the problem of the heterogeneity of sources, and to their
well-formalized annotations, which can be used in automated reasoning [9].
These features are provided by the underlying description logic (DL) languages,
a family of knowledge representation formalisms based on First-Order Logic (FOL)
[17], e.g. SHOIN(D) [7, 8].
One of the most popular information modeling approaches nowadays is ORM
(Object Role Modeling) [4, 5]. It facilitates modeling, transforming, and querying
information using facts and constraints, which may be verbalized in a close-to-
natural language. This increases the ease of use, especially for non-technical
users. Unlike Entity-Relationship (ER) modeling [19] and the Unified Modeling
Language (UML) [16], ORM is attribute-free, which avoids the problems
caused by attribute instability in ER and UML, as clearly shown in [3].
1
Tel: +33 (0)492942743; Fax: +33 (0)492942898; Email: tnguyen@i3s.unice.fr
548 Thi Dieu Thu NGUYEN, Nhan LE THANH
Moreover, for information modeling, ORM graphical notations are far more
expressive than those of ER and UML. It provides procedures for mapping to other
database schemas (e.g. ER, UML). ORM has been fully formalized in FOL [4], so
that its semantics is very close to that of DLs.
Hence, with the aim of integrating information systems into the Semantic
Web, we introduce a formalization of ORM schemas in terms of a DL.
In particular, we show how ORM schema semantics can be captured in the
Semantic Web thanks to the expressivity of the DL SHOINK(D) [14, 12], which in its
turn is the underpinning of a Web ontology language [13].
Several formalizations of ORM schemas have been proposed in the literature
[10, 18]. They have proved very useful in establishing a common
understanding of the formal meaning of ORM schemas. However, to the best of
our knowledge, none of them has the explicit goal of building a framework to
integrate information into the Semantic Web environment.
The rest of the paper is organized as follows. Section 2 briefly introduces the
SHOINK(D) language. In section 3, we show how ORM schemas can be
formalized in this DL. Section 4 concludes the paper with some future
perspectives.
2 The SHOINK(D) language

Concrete datatypes are used to represent literal values such as numbers or strings.
They compose a concrete domain D, as introduced for SHOQ(D) [7]. Each
datatype d ∈ D is associated with a set d^D ⊆ Δ_D, where Δ_D is the domain of
interpretation of all datatypes. For example, the datatype ≥21 in D defines a set
(≥21)^D of integer values greater than or equal to 21.
The syntax and semantics of SHOINK(D) can be seen in Table 1, where C, D are
concept descriptions; o is a nominal, i.e. a singleton concept; R, R1,…,Rn, S are roles;
R+ is the set of transitive roles; # denotes cardinality; ·^I is the interpretation
function; Δ^I is the interpretation domain, disjoint from Δ_D, the concrete
interpretation domain. Further details of SHOINK(D) can be found in [14, 12].
It has been proved that n-ary predicates can be transformed into binary ones [20]
when designing ORM schemas. We do not support nested fact types or
derivation constraints (the latter are not part of the ORM graphical notation).
Example 1. Fig. 1 describes in ORM graphical notation [6] the information of the
students who attended an International meeting of the best students in the world.
Each country, coded by CountryId, had only one student per subject (which is
coded by SubjectId) who was elected to represent his/her country in this discipline.
For social events, each student could attend many groups, each of which has only
one topic. Different groups can share the same topic, but the student can only
attend groups having different topics. These descriptions can be expressed in
SHOINK(D) as shown in Table 2.
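As an illustration, under our own hedged reading of the mapping described below (this sketch is not a verbatim reproduction of Table 2), some of the statements of Example 1 might be rendered as DL axioms along these lines:

```latex
\begin{align*}
% every student must represent some country (mandatory constraint)
\mathit{Student} &\sqsubseteq \exists\,\mathit{represent}.\mathit{Country}\\
% ...and represents at most one country (internal uniqueness)
\mathit{Student} &\sqsubseteq\; {\leq}1\,\mathit{represent}\\
% each group has exactly one topic (mandatory + uniqueness on hasTopic)
\mathit{Group} &\sqsubseteq \exists\,\mathit{hasTopic}.\mathit{Topic} \sqcap {\leq}1\,\mathit{hasTopic}\\
% attend and contain are the two inverse roles of one fact type
\mathit{attend} &\equiv \mathit{contain}^{-}
\end{align*}
```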
The principal components of ORM are object types, which include entities (e.g.
Student, Country) and values (e.g. Topic, CountryId), and roles (e.g. contain,
represent). Object types are connected by the roles they play, composing the fact
types. Two roles in a fact type make up its binary predicate. Constraints are applied
to roles to constrain the population of a database and restrict the data permitted in it.
We here focus on the representation of the mandatory, internal uniqueness
and external uniqueness constraints. We briefly explain the formalization shown
in Table 2, where C1, C2, …, Cn are entity names; R1, R2, …, Rn are role names;
(C Ri = Rinv(i) Ci) are fact types, Rinv(i) being the inverse of Ri, C an entity name and
1 ≤ i ≤ n; C(Id) is the reference scheme of C; Rid is a role name used to translate a
reference scheme; Cfd and Rfd are respectively an entity and a role name used to
translate a functional dependency.
The basic process is to go through a given schema, just as an automated system
would do, looking for the following descriptions in the schema. It is obvious
that this mapping is polynomial w.r.t. the number of elements in the schema.
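The schema walk can be sketched in code. This is a minimal illustration under our own assumptions: the function name `orm_to_dl`, the dictionary encoding of the schema, and the textual axiom syntax are all hypothetical, not the authors' notation.

```python
# Hypothetical sketch: walk a toy ORM schema and emit DL-style axiom strings.
# The schema encoding (dicts and keys) is our own assumption for illustration.

def orm_to_dl(schema):
    axioms = []
    for fact in schema["fact_types"]:
        entity, role, inv_role, target = (
            fact["entity"], fact["role"], fact["inverse"], fact["target"])
        # the two roles of one binary predicate are inverses of each other
        axioms.append(f"{role} == inverse({inv_role})")
        # context of the role: the at-least-1 set is subsumed by the entity
        axioms.append(f">=1 {role} <= {entity}")
        # the value restriction on the role subsumes the universe
        axioms.append(f"Top <= forall {role}.{target}")
        if "mandatory" in fact["constraints"]:
            # mandatory constraint -> existential restriction
            axioms.append(f"{entity} <= exists {role}.{target}")
        if "unique" in fact["constraints"]:
            # internal uniqueness on one role -> at-most-1 restriction
            axioms.append(f"{entity} <= <=1 {role}")
    return axioms

schema = {"fact_types": [{
    "entity": "Student", "role": "represent",
    "inverse": "representedBy", "target": "Country",
    "constraints": ["mandatory", "unique"]}]}
print(orm_to_dl(schema))
```

Since the walk visits each fact type and each constraint once, its cost is linear in the number of schema elements, consistent with the polynomial bound stated above.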
Since constraints are applied to fact types, object types and roles must be
formalized first of all. Values are object types that cannot be defined by others. They are
either strings or numbers, which correspond exactly to datatypes in SHOINK(D).
Entities can be defined by other entities and/or values. When defined by values,
they can be seen as atomic concepts in SHOINK(D). Otherwise, constraints must
be used to describe them (see section 3.2).
A role can attribute a value to an entity (attributive role, e.g. hasTopic) or connect
one entity to another (e.g. represent). The former corresponds to a concrete role
while the latter corresponds to an abstract role in SHOINK(D). In any case, the
instances participating in a given fact compose a set subsumed by their entity, so we
can model the context of the role they play as an axiom where the set generated by
the at-least restriction ≥1 on the role is subsumed by the entity playing the role
(e.g. ≥1 hasTopic ⊑ Group). Besides, whenever the entity plays its role, the object
it plays with is defined by the role. So we can formalize this expression by an
axiom specifying that the value restriction on the role subsumes the universe
(e.g. ⊤ ⊑ ∀hasTopic.Topic). In binary predicates, one role needs to be the inverse
of the other. Hence, for a given fact type, two predefined non-attributive roles are
mapped as the inverse of each other (e.g. attend ≡ contain⁻).
A mandatory constraint (MC) on a role specifies that all instances of an object type
must play the role. For example, it is applied on the role represent in Example 1 to
show that every student must represent some country. This constraint is
expressed by the existential restriction in SHOINK(D).
Uniqueness constraints (UCs) are used to express that each instance of an object
type plays a set of roles at most once. These constraints suggest at-most
restrictions in SHOINK(D). However, at-most restrictions are applied to a single
role only. UCs in this case (e.g. on represent, bestAt) can thus be described by
at-most restrictions ≤1.
In case UCs are applied to a set of two roles of the same predicate (e.g.
contain/attend), the two objects are in an n-n relationship (in Example 1, the objects
are Group and Student). Implicitly, a binary relation is n-n unless otherwise
constrained, so there is no need to make a translation for this case.
In Fig. 1, we also see entities with so-called reference schemes, e.g.
Country(CountryId). The latter specifies that CountryId uniquely identifies Country.
Essentially, reference schemes can be explained by adding an attributive role and
imposing an MC and two UCs on both roles in the predicate (see Fig. 2).
However, the inverse of an attributive role cannot be represented in SHOINK(D)
(see Table 1). Nevertheless, ICs in SHOINK(D) allow us to represent this ORM
formalization, e.g. (idForCountry IdFor Country).
• Using EUCs to describe FDs. Except for the case described above, the
application of EUCs results in FDs. For example, the EUC on attend and
the inverse of hasTopic specifies that Group is functionally dependent on
the couple (Student, Topic). Note that in SHOINK(D), constructors,
except for ICs, are applied to one role only. In that way, it is capable
of expressing neither FDs nor ICs on many roles [1]. However, observe that an
FD of an object A on two objects B and C can be described as an FD of A
on an object A', which is uniquely identified by the couple (B, C). Hence,
we make use of this idea to formalize this kind of EUC. For example,
to model the given EUC on attend and the inverse of hasTopic, we create
an entity StudentTopic which is uniquely identified by the couple
(Student, Topic). A new role, specGroup, is created to set the functional
dependency of Group on StudentTopic. The two ORM schemas in Fig. 3
are equivalent.
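Under our reading of this construction, the rewritten schema could be axiomatised roughly as follows (a speculative sketch: the role names ofStudent and ofTopic are our own illustrative inventions; only specGroup and StudentTopic come from the text):

```latex
\begin{align*}
% StudentTopic participates with exactly one student and one topic
\mathit{StudentTopic} &\sqsubseteq \exists\,\mathit{ofStudent}.\mathit{Student} \sqcap {\leq}1\,\mathit{ofStudent}\\
\mathit{StudentTopic} &\sqsubseteq \exists\,\mathit{ofTopic}.\mathit{Topic} \sqcap {\leq}1\,\mathit{ofTopic}\\
% the FD: each (Student, Topic) pair determines at most one Group
\mathit{StudentTopic} &\sqsubseteq\; {\leq}1\,\mathit{specGroup}
\end{align*}
```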
References
1. A Borgida and Grant E Weddell. Adding uniqueness constraints to description
logics (preliminary report). In DOOD '97: Proceedings of the 5th International
Conference on Deductive and Object-Oriented Databases, pages 85–102, London,
UK, 1997. Springer-Verlag.
2. Li Ding, P Kolari, Z Ding, and S Avancha. Using Ontologies in the Semantic
Web: A Survey. Springer, October 2005. UMBC CS Technical Report 05-07.
3. TA Halpin. Business rules and object role modeling. Available at:
http://www.orm.net/pdf/dppd.pdf.
4. TA Halpin. A logical analysis of information systems: static aspects of the data-
oriented perspective. PhD thesis, University of Queensland, Australia, 1989.
5. TA Halpin. Object-role modeling (ORM/NIAM). In Handbook on Architectures
of Information Systems, 2nd edition, pages 81–103. Springer, Heidelberg, 2006.
6. TA Halpin. ORM 2 graphical notation. Technical report, Neumont University,
September 2005.
7. I Horrocks and U Sattler. Ontology reasoning in the SHOQ(D) description logic.
In Proc. of the 17th Int. Joint Conf. on Artificial Intelligence (IJCAI 2001), pages
199–204. Morgan Kaufmann, Los Altos, 2001.
8. I Horrocks and U Sattler. A tableaux decision procedure for SHOIQ. In Proc. of
the 19th Int. Joint Conf. on Artificial Intelligence (IJCAI 2005), pages 448–453,
2005.
9. I Horrocks, P Patel-Schneider, and F van Harmelen. From SHIQ and RDF to
OWL: The making of a web ontology language. Journal of Web Semantics,
1(1):7–26, 2003.
10. M Jarrar and E Franconi. Formalizing ORM using the DLR description logic.
2007. Submitted.
11. Y Kalfoglou, Bo Hu, D Reynolds, and N Shadbolt. Semantic integration
technologies survey. Technical report, University of Southampton, 2005.
12. T D Thu Nguyen and N Le-Thanh. La contrainte d'identification dans la
logique de description SHOIN(D). ISRN I3S/RR-2006-34-FR, Laboratoire I3S
(CNRS - UNSA), France, 2006.
13. T D Thu Nguyen and N Le-Thanh. Extending OWL-DL with identification
constraints. ISRN I3S/RR-2007-03-FR, Laboratoire I3S (CNRS - UNSA), France,
2007.
14. T D Thu Nguyen and N Le-Thanh. Identification constraints in SHOIN(D). In
Proc. of the 1st Int. Conf. on Research Challenges in Information Science (RCIS
2007), 2007. Accepted for publication.
15. Natalya F Noy. Semantic integration: a survey of ontology-based approaches.
SIGMOD Rec., 33(4):65–70, 2004.
16. Keng Siau and TA Halpin, editors. Unified Modeling Language: Systems
Analysis, Design and Development Issues. Idea Group, 2001.
17. Raymond M Smullyan. First-Order Logic. Dover Publications, 1995.
18. P Spyns, SV Acker, M Wynants, M Jarrar, and A Lisovoy. Using a Novel
ORM-Based Ontology Modelling Method to Build an Experimental Innovation
Router. In EKAW, pages 82–98, 2004.
Vitaly Semenov1
1 Optimistic replication
1
Professor (Computer Science), Institute for System Programming of the Russian Academy
of Sciences; 25, Bolshaya Kommunisticheskaya str., Moscow, 109004, Russia; Tel: +7
(495) 9125317; Fax: +7 (495) 9121524; Email: sem@ispras.ru; http://www.ispras.ru
558 V. Semenov
Figure 1. The reconciliation pipeline: logic deduction (detection of implication
chains and equivalence groups, yielding the reconciliation graph), scheduling
(forming promising candidate schedules and plans), simulation (validation of
schedules upon all the constraints), and correction (correction of violations,
weakening of relations)
At the matching and harmonization stage the divergent data replicas are
compared with each other to identify and harmonize differences and to correctly
reconstruct logs. The actions contained in transaction logs are joined at the
semantic analysis stage and analyzed against all the preconditions and constraints
defined by the data model. Dependence and precedence relations are established
among the actions. Logic deduction is applied to determine transitive closures of
the relations and to form implication chains and equivalence groups for the actions.
At the scheduling stage alternative solutions are explored to satisfy the
representativeness requirement for the consolidated schedule and the consistency
requirement for the convergent data representation. The simulation stage is
necessary to verify whether the built schedules really satisfy all the conditions. If
no acceptable schedules have been found, correction may be applied to
weaken the established relations and to bring the resulting data representation into
a consistent state. Correction can be executed iteratively in a feedback cycle as shown
in Figure 1. Finally, at the functional selection stage the schedules that pass
simulation are chosen in accordance with functional requirements and user intents.
Reconciliation should combine the initial logs in some way to produce a new log,
which can be replayed to bring the replicated data from its last common consistent
state to a new common consistent state. Ideally, the reconciled schedule would
contain all the actions and satisfy all the constraints and user intents. Nevertheless,
careful analysis of semantic relations among actions of concurrent transactions has
to be conducted to guarantee that in the general case.
The tentative log T = T' ∪ T'' is a set of actions formed by the union of the
reconciled transactions T', T''. In other words, if x is an ancestor state of the data
and x', x'' are the divergent replicas of the data, then x' = T'(x) and x'' = T''(x).
The dependence relations D are defined using boolean operations as follows.
For actions t1, t2 ∈ T, if t1 → t2 then the schedule must contain t2 on condition that
it contains t1. If ¬t1 → ¬t2 then the schedule must exclude t2 on condition that it
does not contain t1. These relations are non-symmetric, reflexive and transitive. We
also find it reasonable to utilize the symmetric implication relations t1 → ¬t2, ¬t1 → t2
as well as the relations induced by the equivalence operation t1 ~ t2 ≡ (t1 → t2) ∧ (t2 → t1)
and by the exclusive disjunction operation t1 ⊕ t2 ≡ (t1 → ¬t2) ∧ (¬t1 → t2).
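To make the role of these relations concrete, a candidate schedule (the set of kept actions) can be checked against implication relations of the forms above. This is a sketch under our own encoding (`holds`, `consistent`, and the literal pairs are hypothetical names, not the authors' implementation):

```python
# Sketch: check a candidate schedule (a set of kept actions) against
# implication relations over action literals. Encoding is our own:
# a relation ((a, True), (b, True)) means "a in schedule -> b in schedule";
# a False flag negates the corresponding side.

def holds(schedule, literal):
    action, positive = literal
    return (action in schedule) == positive

def consistent(schedule, relations):
    # every implication lhs -> rhs must be satisfied by the schedule
    return all(holds(schedule, rhs)
               for lhs, rhs in relations if holds(schedule, lhs))

# t1 -> t2, together with its contrapositive ~t2 -> ~t1 stated explicitly
rels = [(("t1", True), ("t2", True)), (("t2", False), ("t1", False))]
print(consistent({"t1", "t2"}, rels))  # both actions kept: satisfied
print(consistent({"t1"}, rels))        # t1 kept without t2: violated
```

The equivalence t1 ~ t2 would then simply be encoded as the two implications t1 → t2 and t2 → t1 in the same relation list.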
In some cases multiple dependency relations should be employed for
comprehensive semantic analysis. They can be represented in a general form by
characteristic boolean functions D(t1, t2, t3, …) and corresponding truth tables [13].
As an example, the relation D(n:m)(t1+,…, tI++, t1−,…, tI−−) should be taken into
account if some cardinality constraint is imposed upon a collection and transaction
actions are capable of inserting new elements into the collection and removing
elements from it. The relation holds if and only if n ≤ card(T+) − card(T−) ≤ m,
where T+ = { ti+, i ∈ (1, I+) | ti+ = true }, T− = { ti−, i ∈ (1, I−) | ti− = true }, and
the function card() returns the cardinal number of the action subsets T+, T−.
Semantics-based Reconciliation of Divergent Replicas 561
The characteristic functions of the cardinality relations D(0:0)(t1+, t2+, t1−, t2−),
D(1:1)(t1+, t2+, t1−, t2−) and D(2:2)(t1+, t2+, t1−, t2−) are given by trivial truth tables [8].
The functions of derived cardinality relations can be expressed using the following
recursive identities:
D(n:m)(t1+,…, t1−,…) ≡ D(−m:−n)(t1−,…, t1+,…),
D(n:m)(t1+,…, t1−,…) ≡ D(n:n)(t1+,…, t1−,…) ∨ … ∨ D(m:m)(t1+,…, t1−,…)
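Under our reading, D(n:m) can be evaluated directly from the kept insert/remove flags. The following is an illustrative sketch only; the function name and argument encoding are our own assumptions:

```python
# Sketch: evaluate the cardinality dependence relation D(n:m).
# t_plus / t_minus are boolean flags saying whether each insert / remove
# action is kept in the candidate schedule.

def card_relation(n, m, t_plus, t_minus):
    # card(T+) and card(T-) count the kept insert and remove actions
    inserted = sum(t_plus)
    removed = sum(t_minus)
    # the relation holds iff n <= card(T+) - card(T-) <= m
    return n <= inserted - removed <= m

# a collection constrained to gain between 0 and 1 net elements:
print(card_relation(0, 1, [True, True], [True]))   # 2 - 1 = 1 -> True
print(card_relation(0, 1, [True, True], [False]))  # 2 - 0 = 2 -> False
```

The first recursive identity above can be checked numerically on this sketch: swapping the insert and remove flags while negating and swapping the bounds leaves the result unchanged.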
Often, if some constraint c(x1, x2,…) is given by a predicate function of several
variables and it is required to satisfy it, the algebraic dependence relation Dc(t1', t2',
…, t1'', t2'',…) can be set among the transaction actions which modify the variables
of the given constraint. The relation holds if the resulting schedule contains
all the modification actions belonging either to one or to the other transaction:
Dc(t1', t2',…, t1'', t2'',…) ≡ (t1' ⊕ t1'') ∧ (t1' ~ t2') ∧ (t2' ~ t3') ∧ … ∧ (t1'' ~ t2'') ∧ (t2'' ~ t3'') ∧ …
Once established, the relation guarantees that the algebraic constraint will be satisfied.
The precedence relation P is defined as follows. For actions t1, t2 ∈ T, if t1 ≺ t2
then action t1 must appear before (not necessarily immediately before) action t2 in
any schedule that contains both t1 and t2. The precedence relation is non-
symmetric, non-reflexive, but transitive.
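For instance, whether an ordered candidate schedule respects a set of precedence pairs can be checked as follows (our own sketch; the function and action names are hypothetical):

```python
# Sketch: check that an ordered schedule respects precedence pairs
# (t1, t2), meaning t1 must appear before t2 whenever both are present.

def respects_precedence(schedule, pairs):
    pos = {t: i for i, t in enumerate(schedule)}
    return all(pos[a] < pos[b]
               for a, b in pairs if a in pos and b in pos)

pairs = [("create_line", "set_destination")]
print(respects_precedence(["create_line", "set_destination"], pairs))  # True
print(respects_precedence(["set_destination", "create_line"], pairs))  # False
print(respects_precedence(["set_destination"], pairs))  # True: t1 absent
```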
Once the relations are established among actions, logic deduction can be
applied to form a resulting schedule that satisfies all the action preconditions
and the semantic constraints. Earlier we presented a method exploiting a graph
representation for the introduced semantic relations and a poly-syllogistic
deduction on the graph to form consistent schedules. The method makes it possible to
determine implication chains, equivalence groups, and conflicts by transforming
the semantic relations into logical ones and by computing their transitive closures.
An important feature is that the method tends to consolidate within the final
schedule as many actions as possible (see the published works [7, 8]).
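The transitive-closure step of such a deduction can be illustrated with a classic Warshall-style computation over the implication relation (a generic sketch, not the published algorithm of [7, 8]):

```python
# Sketch: Warshall-style transitive closure of an implication relation,
# exposing implication chains (a -> b and b -> c give a -> c).

def transitive_closure(edges, nodes):
    reach = {(a, b) for a, b in edges}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if (i, k) in reach and (k, j) in reach:
                    reach.add((i, j))
    return reach

nodes = ["t1", "t2", "t3"]
closure = transitive_closure([("t1", "t2"), ("t2", "t3")], nodes)
print(("t1", "t3") in closure)  # True: the chain t1 -> t2 -> t3
```

Equivalence groups would then correspond to sets of actions that are mutually reachable in the closure.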
Figure 3. The original, divergent and resulting replicas of the statechart diagram
A similar semantic analysis can be performed for the second transaction and
across concurrent transactions to detect possible conflicts. In the example
considered, there is a conflict between the actions wr(Line4.Destination, state2) and
wr(Line4.Destination, state3) in the first and the second transactions respectively. If
the conflict is resolved by taking the changes made by the second transaction, the
result looks as shown in Figure 3. It is important that this semantically consistent
and functionally meaningful result has been derived in a completely formal way,
using the underlying data model and applying logical, poly-syllogistic deduction.
Certainly, the example is not exhaustive. Nevertheless, it outlines the essential
advantages of the developed method.
5. Conclusions
Thus, a method for semantics-based reconciliation of divergent replicas in
advanced concurrent engineering environments has been presented. Its main
advantages are a significant degree of formalization, which makes it possible to
employ the method for sophisticated data models and applications; mathematically
strong guarantees of the correctness and meaningfulness of the result; the capability
to use it in autonomous and user-interactive modes; and avoidance of the
combinatorial explosion peculiar to many other methods. In the future, some
adjacent problems will be investigated. These are semantics-based matching of
divergent replicas, building well-balanced reconciliation plans, adaptive correction
of transactions, as well as application-specific reconciliation solutions.
This work is supported by the Russian Foundation for Basic Research (RFBR,
grant 07-01-00427).
6. References
[1] Anderson T, Breitbart Y, Korth H, Wool A. Replication, consistency, and practicality:
are these mutually exclusive? In: Proceedings ACM SIGMOD, Seattle, WA, USA,
1998; 484-495.
[2] Cederqvist P, Pesch R, et al. Version management with CVS, 2001. Available at:
http://www.cvshome.org/docs/manual. Accessed on: Feb. 26th 2007.
[3] ISO 10303. Industrial automation systems and integration — Product data
representation and exchange, 1994.
[4] ISO 10303-11: 1994, Industrial automation systems and integration — Product data
representation and exchange — Part 11: Description methods: The EXPRESS language
reference manual.
[5] Model driven architecture: how systems will be built, 2006. Available at:
http://www.omg.org/mda. Accessed on: Feb. 26th 2007.
[6] Ramamritham K, Chrysanthis P. Executive briefing: advances in concurrency control
and transaction processing. IEEE Computer Society Press, 1997.
[7] Saito Y, Shapiro M. Optimistic Replication. ACM Computing Surveys 2005; 37(1):
42–81.
[8] Semenov V, Bazhan A, et al. Distributed STEP-compliant platform for multi-modal
collaboration in architecture, engineering and construction. In: Proceedings of X
International Conference on computing in civil and building engineering, Weimar,
2004; 318-319.
[9] Semenov V, Bazhan A, et al. Efficient verification of product model data: an approach
and an analysis. In: Proceedings of 22nd Conference on information technology in
construction, Dresden, 2005; 261-268.
[10] Semenov V, Karaulov A. Semantic-based decomposition of long-lived optimistic
transactions in advanced collaborative environments. In: Proceedings of the 6th European
Conference on product and process modeling, Valencia, 2006; 223–232.
[11] Semenov V, Eroshkin S, et al. Semantic reconciliation of product data using model
specifications. In: Proceedings of Institute for System Programming, Moscow, ISP
RAS, 2007; 21-42.
[12] Unified Modeling Language (UML), Version 2.0. Available at:
http://www.uml.org/#UML2.0. Accessed on: Feb. 26th 2007.
[13] Zakrevskij A. Recognition logic. Editorial Press. Moscow. 2003.
Controlled Vocabularies in the European Construction
Sector: Evolution, Current Developments, and Future
Trends
Abstract. In the last 40 years, the development of Controlled Vocabularies (CVs), such as
dictionaries, classifications, taxonomies, and of course the “appealing” ontologies, has been
the focus of many research projects around the world targeting the Construction sector.
Being involved in several pan-European initiatives, the authors of this paper show
milestones on the path of evolution (what has happened so far), the current situation (where
we are now) in terms of development and adoption of results, the main problems found
regarding both development and adoption of the CVs, and finally, present some speculative
and provocative ideas about the future of CVs in the European Construction sector.
1 Introduction
In the last 40 years, the development of Controlled Vocabularies such as
dictionaries, classifications, taxonomies, and of course the appealing ontologies,
has been the focus of many research projects in Europe. A non-exhaustive list of
well-known efforts in this area is the following: ISO 12006 parts 2 and 3, LexiCon
(the Netherlands), BARBI (Norway), bcBuildingDefinitions taxonomy (e-Construct
project), ICONDA terminology (Fraunhofer IRB), BS6100 and UNICLASS
(British Standards), e-COGNOS ontology (e-COGNOS project), and the Standard
Dictionary for Construction in France (SDC). It is worth recalling that in other
continents similar efforts were also conducted, such as CI/SfB, MasterFormat,
OmniClass, and the Canadian Thesaurus, just to name a few.
Even a brief review of the above listed projects/initiatives allows us to imagine
how much effort has been devoted to this area around Europe2, (likely) guided by
1
CSTB, Centre Scientifique et Technique du Bâtiment, Route des Lucioles, BP 209, Sophia
Antipolis CEDEX, France. Tel: +33 4 93956722; Fax: +33 4 93956733; Email:
celson.lima@cstb.fr.
566 C. Lima, A. Zarli, G. Storer
a single aim: to put the Construction sector at the leading edge of semantic-related
ICT resources. Preliminary thoughts were about developing useful
e-Commerce/e-Business related tools and resources to help construction companies
publish their own catalogues using their own languages and, at the same time,
become actors in the European arena.
Based on the results achieved so far, however, a quite pragmatic question arises:
after a decade (or more) of investing in this topic, what is the reality in Europe
nowadays? Are Controlled Vocabularies really used on a daily basis by
Construction actors, or do they still remain a piece of art contemplated and
admired behind the security protection provided by the 'research walls'?
If Controlled Vocabularies are fully adopted and used on a daily basis, what
might we do with them in the future, and what are the trends currently observed in
the research area? What are the domain(s) of work where Controlled Vocabularies
will likely play an important role? What is the future of e-business and e-
commerce related activities in Europe, a very fragmented market where
national (sometimes regional) norms and regulations impose strict control on
construction-oriented products and services? And, a more ambitious question, what
can Construction do in order to position itself properly regarding the business
exploitation of the Semantic Web?
This paper discusses the questions raised above, based on the experience
gathered by the authors through their involvement in several European initiatives
related to the subject. Section 2 presents the main reasons behind the development
of CVs in Construction (why). Section 3 discusses very briefly a selected set of
European/International initiatives on this area. Section 4 draws a picture about
where we are now, preparing the ground for the speculative and provocative
discussion in Section 5, about where we are going versus where we could/should
go. Finally, some conclusions close the paper.
2
Not only in Europe, but for the purposes of this paper the discussion will remain within
European borders.
simply use another name "dachshund" or "sausage dog". These need to have an
agreed meaning, not least because to the English or French the word
dachshund is foreign and the other is a descriptive nickname. The deeper we
need to go with meaning to add detail or to differentiate, the more control
there needs to be in the use of the language. Between specialists in one
discipline there can be quite precise understanding of words (in this case
zoologists, who might even use Latin names), but between experts and non-
experts, and between different kinds of expert, there can be misunderstanding. To
change to a construction example, what is the difference between a "brick
pillar" and a short length of thick wall made from brick? A bricklayer and a
cost estimator might use different terms. The answer (in the UK at least) is that the
difference is defined by rules related to the dimensions.
• Vocabularies are important for conveying human thought in a concise
way and with precision in a given working context. Vocabularies must
be controlled to achieve this, or we have the Humpty Dumpty situation
(from Through the Looking-Glass) pictured in Figure 1. There must be as much
precision as possible, although in human exchanges we sometimes say
that something is like something else, e.g. the dog is like a dachshund but
with longer legs. We can then ask questions to refine meaning and
(perhaps finally) identify the breed of dog.
[Figure: timeline (1960–2006) of Controlled Vocabulary initiatives, including
Bibliographic vocabularies, CI/SfB, ICONDA, MasterFormat, LexiCon, SDC,
bcBuildingDefinitions, BARBI, eCOGNOS, IFC, and OCCS/OmniClass]
The ISO 12006 family (part 2 and part 3) came from another level of concern:
the International Organisation for Standardisation. ISO was also targeting the
development of standard CVs for the Construction sector on a world-wide scale. On
the one hand, ISO 12006-2 targeted the definition of a model for classification systems
(it is not a classification system in itself); rather, it sets out an approach whereby
particular classification systems that meet regional or national requirements can be
developed according to a common international approach. On the other hand,
ISO 12006-3 defines a schema for a taxonomy model, providing the ability to
define concepts by means of properties, to group concepts, and to define
relationships between concepts. Objects, collections and relationships are the basic
entities of the model.
The ISO foundation work was adopted and used by some institutions around
the world. Among them, we can cite Stabu (Netherlands), Edibatec (France), and
the Norwegian construction industry, which respectively started their own
implementations of ISO-based tools, namely LexiCon, SDC, and BARBI. In
other words, the three of them are independent implementations of dictionaries that
are compliant with the specification given in ISO 12006-3.
Next we turn to the International Alliance for Interoperability (IAI) and its
Industry Foundation Classes (IFC). The IFC model has been progressively
developed by the IAI since 1995 through several releases implemented in software
for data exchange and sharing across applications. Since the IFC2x release
(October 2000), a core part of the model has been protected against change and
formally accepted as ISO PAS 16739 in November 2002 under the external
"harvesting" procedures of ISO TC184/SC4. IFC is the IAI vehicle aiming to
promote and support the implementation of the concept of a Building Information
Model (BIM) to increase the productivity of design, construction, and
maintenance operations within the life cycle of buildings.
The IFC model is rooted in approaches initially developed within the work of
ISO TC184/SC4; most particularly in the development of the ISO 10303 series of
standards (STEP q.v.). In particular, IFC has adopted and/or adapted certain parts
of the STEP standards including: formal specification of IFC is in the EXPRESS
language from ISO 10303 part 11; encoding of files for data exchange is
undertaken using ISO 10303 part 21; and the IFC model uses schema that have
been adopted from the resource standards within ISO 10303, particularly parts 41,
42, 43 and 46. Despite the fact that, from a "semantic perspective", the IFC model
per se cannot be considered an ontology or a taxonomy, part of it has been used to
support reasoning and to exchange meaningful pieces of information among
different software tools.
Many other European projects (research-oriented, standards-biased, etc.) were
performed after this. A brief list includes: eConstruct [1], e-COGNOS [2], the
CEN/ISSS eConstruction series of Workshops [3], FUNSIEC [4], and the on-going
CONNIE [5] and SEAMLESS [6] projects.
3
It is worth noticing that both standards were officially released by 2001, after the normal
"evolution" through the standardisation channel (PAS-Publicly Available Specification,
DIS-Draft International Standard), although they had started being used before that.
4
English, French, Dutch, German, Norwegian, and Greeklish (Greek language written with
Latin characters). Additional information about e-Construct can be found at (Lima 2003).
of our knowledge, quite difficult to change. For instance, in a new IST project5
where CVs are required, the development team finds arguments to justify the
development of 'yet another' ontology editor and a new tool to produce semantic
mappings. What is behind this behaviour? Is it the need to differentiate from
previous efforts? The need to leave 'fingerprints' in this area? The authors do not know;
what they do know is that recommendations from standards-related initiatives, like the
CEN/ISSS eConstruction workshops, are not really being taken into account. These
recommendations talked about 'analysing what is available', 'reusing current
results', etc., which is definitely relegated to second place. What matters is to
propose new ideas, develop new things and try to be innovative and revolutionary,
even when this means running in place and going nowhere!
Business initiatives, even those supported by less advanced solutions, are pushing
things forward. IFCs have catalysed the adoption of the BIM concept and,
considering the fact that part of the IFC model can be considered as an ontology,
new experiments in the area have been launched, and we can expect solid results
very soon. Other more modest but also very useful initiatives (e.g. the ICONDA family,
CSTB products) are making money using very dedicated CVs, which for some
people is more than enough. But it is not for others. ICONDA is trying to
push its 'semantic side' towards something more modern, supported by new
technologies and CVs. For instance, the ICONDA agency has started agreements
with several countries around the world in order to enrich its terminology; CSTB
has started an internal project to extend the capabilities of the dictionaries and
taxonomies supporting the search process of content-based products (such as their
CD-REEF and I-REEF); and so on. Both examples are also looking very closely at
the standards-related initiatives, aiming at capitalising on them.
5 Conclusions
Communication is about exchanging signs. Humans are able to use words, body
gestures, images, etc. Jargon is often used inside a given community, and those not
belonging to that community will have problems communicating. If we are to be
clear and unambiguous, we must ‘control’ the vocabulary we use in
communication. Only parties knowing the words and their meanings are equipped
to engage in communication free from misunderstanding. When it comes to
computer-based communication, this is even more crucial, since computers cannot
establish dialogues in order to elucidate precisely ‘what is meant by that’.
The conceptual approach to handling this situation often relies on the adoption of
formal CVs (as much as possible), which help define the universe of discourse (the
working context) of those involved in the communication process.
Several examples can be found around the world, coming from very different
initiatives ranging from industrial support to feasibility projects funded by research
programmes. Results are emerging; education is gaining a new status on the
European scene for several reasons, including European policies, business profits,
and the natural evolution of the area. LexiCon and BARBI (two implementations of ISO
5 The authors intentionally avoid identifying the initiative, for obvious reasons.
12006) have joined forces; IFCs are becoming the standard supporting the
inevitable BIM concept; IFD is attracting worldwide attention, and governments
have published policies that directly or indirectly enforce the adoption of shared
CVs and semantic-related resources. This is the future path we are called to follow,
with no acceptable excuses and no choice but to move forward.
Recalling McGuinness (and adapting her words to our context), an ontology (or
CV) is required when there is a need to communicate/exchange (transfer and/or
share) various sorts of information where meaning is fundamental. An ontology
(CV) is also useful when the reuse of existing knowledge is required. From a non-
exhaustive list of uses, an ontology (CV) can be used for simple kinds of
consistency checking, interoperability support, validation and verification testing,
configuration support, help to perform structured, comparative and customised
search as well as to exploit generalisation/specialisation of information [7]. This
means whenever we must communicate precisely, our vocabulary must be
controlled, our jargon must be shared and meaningful, and our semantics must be
refined for the sake of the communication process. This is the mission behind the
development and use of CVs in the Construction sector. This is the justification for
proposing, developing, and assessing CVs. This is the quest that keeps the authors
of this paper involved in this field. Results are still in their infancy, but they are
promising and exciting, and hold the key to the future.
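The simple consistency checking a CV enables can be sketched as follows (a toy illustration only; the vocabulary entries and function names are invented, not drawn from bcXML, LexiCon or BARBI):

```python
# A minimal controlled vocabulary: each term carries one agreed meaning
# and an optional broader term, as in a simple taxonomy.
cv = {
    "door":   {"definition": "movable barrier closing a wall opening",
               "broader": "opening product"},
    "window": {"definition": "glazed element filling a wall opening",
               "broader": "opening product"},
}

def check_terms(terms):
    """Flag terms outside the shared vocabulary: candidates for misunderstanding."""
    return [t for t in terms if t not in cv]

# Parties using only CV terms communicate without ambiguity; anything
# else is flagged so its meaning can be negotiated explicitly.
print(check_terms(["door", "casement"]))  # ['casement']
```

The same lookup structure supports the generalisation/specialisation exploitation mentioned above, by following the `broader` links upward through the taxonomy.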
6 References
[1] Lima C, Stephens J, Böhms M. The bcXML: Supporting eCommerce and Knowledge
Management in the construction industry. Itcon Journal, v. 8, p. 293-308, 2003.
[2] El-Diraby T, Lima C, Fiès B. Domain Taxonomy for Construction Concepts: Toward a
Formal Ontology for Construction Knowledge, Journal of Computing in Civil
Engineering, Vol. 19, No. 4, October 2005, pp. 394-406.
[3] Böhms M, Lima C, Storer G, Wix J. Framework for Future Construction ICT,
International Journal of Design, Sciences & Technology, Volume 11, Number 2, p.
153-162, editor: Dr. Reza Beheshti.
[4] Lima C, Silva C, Sousa P; Pimentão JP, Duc CL. Interoperability among Semantic
Resources in Construction: Is it Feasible? In proceedings of CIB / W78 22nd
Conference on Information Technology in Construction, p. 285-292, ISBN 3-86005-
478-3, CIB Publication No. 304, Dresden, Germany, July 2005.
[5] Cerovsek T, Gudnason G, Lima C. CONNIE - Construction News and Information
Electronically. Joint International Conference on Computing and Decision Making in
Civil and Building Engineering, Montreal, Canada, 14-16 June 2006.
[6] Lima C, Bonfatti F, Sancho S, Yurchyshyna A. Towards an Ontology-enabled
Approach Helping SMEs to Access the Single European Electronic Market, In
proceedings of the 13th ISPE International Conference on Concurrent Engineering:
Research and Applications, 18 - 22 September, 2006, Nice, France.
[7] McGuinness D. Ontologies Come of Age. In Dieter Fensel, Jim Hendler, Henry
Lieberman, and Wolfgang Wahlster, editors, Spinning the Semantic Web: Bringing the
World Wide Web to Its Full Potential. MIT Press, 2002.
Technology for Collaborative Engineering
Supporting Collaborative Engineering Using an
Intelligent Web Service Middleware
Abstract. Collaborative Engineering tasks are difficult to manage and involve a high
amount of risk – as such, CE tasks generally involve only well-known pre-established
relationships. Such collaborations are generally quite static and do not allow for dynamic
reactions to changes in the environment. Furthermore, not all optimal resource providers can
be utilised for the respective tasks as they are potentially unknown.
The TrustCoM project elaborated the means to create and manage Virtual Organisations
in a trusted and secure manner, integrating different providers on demand. However,
TrustCoM focused more on the VO than on the participant. The BREIN project enhances
the intelligence of such VO systems to support even providers with little business expertise
and to provide them with capabilities to optimise their performance.
This paper analyses the capabilities of current VO frameworks, using TrustCoM as an
example, and identifies the gaps from the participant’s perspective. It then shows how
BREIN addresses these gaps.
1. Introduction
Modern-day engineering tasks typically demand a complexity not supported by
individual companies; accordingly, enterprises join in collaborations to outsource
and distribute work according to the tasks that need to be fulfilled. Such
collaborations are normally difficult to manage, considering their size and complexity.
In recent years, the concept of Virtual Organisations has been developed to
describe such collaborations on the basis of resources exposed to the Internet.
Following the Grid concept, such organisations allow for managed and dynamic
collaboration between different resource types, or in other words enable
transactions between different companies in a coordinated manner.
The TrustCoM project has delivered a framework, as well as a reference
implementation, that enables organised and contract-managed collaborations in a
secure and trusted manner. Even though TrustCoM principally allows for dynamic
on-demand creation of Virtual Organisations, as well as their autonomous
management according to predefined collaboration descriptions, the project does not
address all the issues needed to ensure full uptake by the eBusiness community.
578 L. Schubert, A. Kipp, B. Koller
This way, participants are considered real business entities, with their own
existing typical workflows to generate the “products” they sell, and with an
infrastructure they do not want to expose, let alone have controlled by, external bodies.
From the TrustCoM perspective, enterprises participate in a Virtual Organisation
according to the roles they bring in rather than according to their resources. This
respects one of the main issues in (electronic) business: the confidentiality of
providers’ infrastructures, leaving complete control over them in their hands.
(Figure: the CE3 VO example scenario, spanning Preparation and Evolution phases: a
Design Team populates the product design database, stores results to and buys storage
from a Storage Provider, and uses analysis services from a Production Facilities provider
and Production Consultants, via Resource Planning sub-systems.)
Besides the support for managing the lifecycle of a VO, the main point here is that
the TrustCoM framework enables such collaboration by providing a secure,
contract-managed middleware that allows the individual participants to expose
“virtual” resources reflecting the capabilities of the respective “local” and private
business processes. The framework provides participants with the means to host such
an interface, which secures message exchange, controls access according to the
overall collaboration description, and ensures that the corresponding transaction
requirements are met and automatically updated. From the individual participant’s
Supporting Collaborative Engineering 581
(Figure: the BREIN concept: customer and business interactions define individual
business goals; agents, semantic descriptions, processes and supply-chains, and the
Grid configure, execute, control and fulfil them.)
The BREIN framework builds strongly upon Web Service technologies and
incorporates existing VO middleware solutions, as well as communication
standards and specifications that promise, or have already found, wide acceptance
in the research and eBusiness communities. It will thus realise a Service Oriented
Architecture that integrates the most relevant aspects of supporting and
realising Virtual Organisations.
Given the significant relationship between Grid and Agent technologies [5], the
BREIN project focuses on extending common Grid technologies with the autonomy
and negotiation capabilities of multi-agent approaches, thereby implicitly
extending multi-agent technologies with the stability and reliability of the Grid.
With respect to modern-day eBusiness requirements, the BREIN consortium
identified in particular the following technical areas as most important for future
VO middleware needs (cf. Figure 5).
With the enhancements pursued by the BREIN project, scenarios such as the
Collaborative Engineering one described above will profit greatly from both the
provider and the customer perspective:
• Customers and providers may describe their requirements and capabilities,
respectively, in a more abstract way.
This way, no additional background knowledge about the underlying
common language model needs to be acquired, and participants can expose
and make use of functionalities in their own way. In particular, this allows
providers to be integrated according to their capabilities, rather than having
to deal with interoperability issues first.
• Business processes and collaborations may be described in a more intuitive
manner with only limited business expertise; collaboration details are
derived automatically from capabilities and requirements.
Since complex engineering processes are difficult to design and require
expertise, in particular to optimise their execution, such an approach allows
providers to implement and realise new services more effectively. Given
the business processes and the requirement/capability descriptions, the
BREIN framework furthermore supports the design process in a way that
allows customers to define complex collaborations more easily.
• Contract details are (more) human-readable.
• The collaboration is capable of adapting to changes in the environment in a
more autonomous manner.
With the intelligence to monitor and integrate environmental information,
participants in the VO are enabled to react more quickly and effectively.
This may involve both changes on the local infrastructure side (such as
limited resources) and external effects (such as additional customer
requirements).
Given these capabilities, the BREIN framework will allow participants to
generate and integrate their services more efficiently and with less effort. From the CE
perspective, this in particular allows more complex engineering tasks to be realised
without the additional effort of having to “understand” the system first.
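The capability-based integration idea can be illustrated with a toy matchmaking sketch (illustrative only; none of the names below are taken from the BREIN framework):

```python
# Toy matchmaking between abstract customer requirements and provider
# capabilities, in the spirit of integrating providers by what they can
# do rather than by interface-level interoperability details.
providers = {
    "provider-a": {"cad-analysis", "storage"},
    "provider-b": {"storage"},
    "provider-c": {"cad-analysis", "simulation"},
}

def match(required: set) -> list:
    """Return providers whose advertised capabilities cover the request."""
    return sorted(p for p, caps in providers.items() if required <= caps)

# The customer states WHAT is needed, not HOW it must be implemented.
print(match({"cad-analysis"}))             # ['provider-a', 'provider-c']
print(match({"cad-analysis", "storage"}))  # ['provider-a']
```

In a real VO middleware the capability sets would be semantic service descriptions and the matching would be done by a discovery service; the set-containment test above is only the simplest possible stand-in.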
Acknowledgements
The work reported in this paper has been partially funded by the EC through
an FP6 IST programme grant to the TrustCoM integrated project under contract
01945. The project partners, who have all contributed to the work reported in the
paper, are: Atos Origin, BAE Systems, BT, IBM, Microsoft, SAP, CCLRC, ETH,
HLRS, SICS, SINTEF, and the universities of Kent, London (Imperial and King’s
colleges), Milan and Oslo.
References
[1] M. Wilson, L. Schubert. TrustCoM Framework V4. http://www.eu-trustcom.com
[2] W. Saabeel, T. Verduijn, L. Hagdorn & K Kumar. A Model for Virtual Organisation:
A structure and Process Perspective, in 'Electronic Journal of Organizational
Virtualness'.
[3] D. Golby, M. Wilson, L. Schubert & C. Geuer-Pollmann. An assured environment for
collaborative engineering using web services, in 'CE2006 Conference Proceedings'.
[4] T. Mahler, A. Arenas & L. Schubert. Contractual Frameworks for Enterprise Networks
& Virtual Organisations in E-Learning, in Paul Cunningham & Miriam Cunningham,
ed.,'Exploiting the Knowledge Economy - Issues, Applications, Case Studies, Vol. 3'.
[5] I. Foster, N. Jennings & C. Kesselman. Brain meets brawn: Why grid and agents need
each other, in 'Autonomous Agents and Multi-Agent Systems'
[6] J. Haller, L. Schubert & S. Wesner. Private Business Infrastructures in a VO
Environment, in 'Innovation and the Knowledge Economy - Issues, Applications, Case
Studies, Vol. 3'.
Research on Concepts and Technologies of Grid
Collaborative Designing to Support Cross-Enterprise
Collaboration
Abstract. The original cross-enterprise collaborative designing systems could not meet
industrial demands for dynamic sharing of varied and transient manufacturing resources,
owing to their inherent weaknesses. Guided by Grid theory, the concept of Grid
collaborative designing technology is based on the principles of facing business users and
supporting "integration on demand" and "instantaneous integration." During the
application-integration process, business-driven Grid technology breaks the barriers
between computer experts, domain experts, business designers and business executants, and
supports instantaneous integration. Unlike traditional network collaborative design
technology, resources are integrated instantly according to the directives of business
users; hence the concept of Grid Collaborative Designing. This paper analyses the
differences between manufacturing and computing resources; gives the definition, the
connotation and the problems Grid Collaborative Designing attempts to solve; discusses the
differences and similarities between Grid Collaborative Designing and other related
technologies; and analyses the limitations of traditional technologies supporting
cross-enterprise collaboration. Finally, a WSRF-based OGSA architecture is presented that
encapsulates manufacturing resources into services using WSDL descriptions and service
mapping and deployment methods; the core research issues in Grid Collaborative Designing
are presented and the related technologies are summarised.
1 Introduction
Modern complicated products are achievements of multi-field knowledge, usually
involving fields such as electromechanics, hydraulic pressure, control and
information. In today's highly developed information-technology environment, how
to integrate design knowledge from multiple disciplinary fields effectively, with the
assistance of computers, is the key to successful product design. A single
enterprise, relying only on its own
1 School of Sciences, Hebei Polytechnic University, No. 46 Xinhua West Street, Tangshan
063009, Hebei Province, China; Email: chxb@heut.edu.cn; http://www.heut.edu.cn
588 Chen, Xuebin, Duan, Guolin
resources and abilities, can hardly meet the demands of its product design alone.
With increasing collaboration among enterprises, more and more enterprises have
realised the necessity of information sharing and resource sharing among
enterprises through the Internet. Traditional Web-based collaborative design and
manufacturing systems, because of their inherent weaknesses, cannot cope with
dynamically changing environments, and in particular cannot handle the sharing
of transient resources.
The appearance of the Grid offers a new theory and infrastructure for design, for
exploiting resources and for design innovation. The Grid can virtualise many kinds
of distributed computing resources, such as processing, memory capacity, data and
network bandwidth, thus establishing a single virtual system that offers users and
applications seamless access to a large number of IT functions.
The Grid concept has already been applied in many fields, such as
computation, astronomy and bioinformatics, and in computation-intensive and
data-intensive applications; research projects such as Globus [1], SETI@home [2],
the Global Grid Forum [3] and the European Union (EU) DataGrid [4] have found
wide application.
In order to fundamentally satisfy the demand for dynamic resource sharing and
problem solving among dynamically changing cooperative enterprises, the concept
of Grid Collaborative Designing has arisen in the field of product design. To
demonstrate the feasibility of Grid Collaborative Designing in the manufacturing
field, this paper presents a WSRF-based OGSA architecture that encapsulates
manufacturing resources into services using WSDL descriptions and service
mapping and deployment methods; the core research issues in Grid Collaborative
Designing are presented and the related technologies are summarised.
users, and separate the perceived behaviour of software and hardware resources
from their physical realisation [6]. In the manufacturing field, the virtualisation of
resources is the foundation for realising enterprise-facing service integration:
resource entities in the environment are realised through virtualised Web services [7].
In terms of management style, manufacturing resources share characteristics
with computing resources. They are distributed across different organisations in
different regions; each resource owner has top administrative authority over its
resources, and each decides, according to its own situation, whether to share a
resource on the Grid, so the quantity of resources across the whole network
changes dynamically. Meanwhile, because of fast market changes and changes in
the surrounding economic and political environment, the demand of distributed
resource requesters for resources in a region also changes dynamically. This gives
shared resources very obvious distribution and dynamic characteristics.
on the Internet as readily as local resources. Users submit design tasks to the
portal; each task is assigned to the corresponding resources through task
decomposition and resource mapping.
Grid Collaborative Designing uses networks and related advanced computer and
information technologies to realise resource sharing across the network and to
pose and solve collaborative designing problems. To realise this goal, on the basis
of Grid computing research, one must fully consider the characteristics of the
manufacturing industry, study and solve the key technologies matching
manufacturing operations, design a Grid framework suitable for manufacturing
resource sharing and collaborative product design, develop middleware
applications facing the manufacturing field, and construct a Grid collaborative
designing system that meets the demands of manufacturing operations.
Grid framework
After several years of research and practice, the Grid technology architecture
has matured day by day. At the abstract structural level, the most important and
most representative is the five-layer hourglass structure [8]. In 2002, the GGF
proposed the Open Grid Services Architecture [9][10]. In the OGSA framework,
everything is abstracted into a service, including computers, programs, data,
instruments and equipment; this idea helps to manage and use the Grid through
unified standard interfaces [11].
However, a traditional computing Grid or general service Grid framework cannot
be fully competent for a manufacturing Grid. First, manufacturing resources are
dispersed, interactive and complicated, and their operation is not always automatic,
which makes building a manufacturing service Grid more complicated and
challenging. Second, the traditional computing Grid emphasises solving large-scale
computing problems, and the jobs submitted to it are often comparatively simple
computation tasks; a manufacturing Grid must instead handle much concrete,
complicated, non-computational business processing, characterised by long
duration, complexity, flexibility and cross-organisational collaboration. Third, the
node-level institutional framework of a manufacturing Grid is complicated; the
design must fully consider node autonomy and organisational structure to suit the
construction, expansion and business operation of the Grid.
Manufacturing resource modelling and encapsulation
Manufacturing resource modelling is important basic work in manufacturing
Grid research. Implementation methods for manufacturing Grid resource sharing
need to be studied, especially systematic approaches to realising resource sharing
under a service-oriented framework, so that resource representation, resource
encapsulation and resource scheduling are considered as a whole. An expression
model for resources must be studied that can cover different types of
manufacturing resources, solving the problems of their varied types and forms
while also satisfying the demands of resource discovery and matching. On this
basis, the encapsulation and virtualisation of resources should be studied.
(Fig. 1. Architecture of the Grid collaborative designing system: an application layer
with a portal; a Grid programming interface; a server layer (collaborative designing
environment, design data management, design information query) within a Grid
security frame; a run environment based on GT4 core/Tomcat hosting services; and a
resource layer encapsulating design flow, computing, fixture, analysis and data
resources.)
This paper proposes a Grid architecture supporting cross-enterprise
collaborative design, as shown in Fig. 1. The structure is service-oriented and
follows the hierarchical structure of the OGSA standard. Dispersed manufacturing
resources are encapsulated as Web services and registered, forming the resource
layer of the Grid. GT4 Core / Tomcat / Axis, at present a comparatively mature
Grid hosting environment that has also been adopted by industry (e.g. in GRIA),
serves as the runtime foundation; MDS4, GridFTP, GRAM, OGSA-DAI, UDDIe,
etc. are deployed to construct the underlying support environment based on core
services, providing resource management, data transmission, resource scheduling,
data access and other core Grid support functions. Supported by the design
middleware, the differences between the underlying resources are shielded,
enabling collaborative design across the Grid in the virtual-organisation mode.
Resource layer
The resource layer gathers all kinds of resources, including all kinds of data,
knowledge and some logical processing, and even the relevant designers and
apparatus. These manufacturing resources are offered by the enterprises
participating in Grid collaborative designing; the resources are packaged in the
form of services and thereby virtualised. According to the resource types and
characteristics of the manufacturing field, different kinds of encapsulation models
are offered to encapsulate them.
Physical resources can be used by the Grid only after encapsulation, so the
main job of the resource layer is virtualisation. A resource can be virtualised in
two ways. In the first, the resource provider expresses the resource as a standard
Web service; when the shared resource is used, the request is translated into the
corresponding Web service invocation, and the service is deployed on the
provider's host. In the second, a WSRF-compliant service (a WSRF service) acts
as a "container" for the resource: a concrete manufacturing resource is registered
with the WSRF service, and the WSRF service itself can be registered with the
Grid service registration centre. For each kind of resource an encapsulation
template is offered; the resource provider can download this template and use it to
encapsulate a manufacturing resource, registering the encapsulated resource either
in a resource container on the provider's local host or, through the Grid portal, in a
public resource container.
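The encapsulate-then-register flow described above can be sketched as follows (a minimal illustration; the class and field names are hypothetical stand-ins for the WSRF/WSDL machinery, not part of GT4):

```python
from dataclasses import dataclass, field

@dataclass
class ManufacturingResource:
    """A concrete resource described for discovery (WSDL-like metadata)."""
    name: str
    resource_type: str          # e.g. "fixture", "analysis", "compute"
    endpoint: str               # URL of the Web service wrapping the resource

@dataclass
class ResourceContainer:
    """Plays the role of the WSRF-service 'container' holding registrations."""
    registry: dict = field(default_factory=dict)

    def register(self, res: ManufacturingResource) -> None:
        self.registry.setdefault(res.resource_type, []).append(res)

    def lookup(self, resource_type: str) -> list:
        """Resource discovery: match a request to registered services."""
        return self.registry.get(resource_type, [])

# A provider encapsulates a resource and registers it in the container.
container = ResourceContainer()
container.register(ManufacturingResource(
    "FEA-solver", "analysis", "http://provider.example/services/fea"))

# A design task is then mapped to a matching service endpoint.
matches = container.lookup("analysis")
print([m.endpoint for m in matches])
```

In the architecture above, the container itself would in turn be registered with the Grid service registration centre, so that discovery can proceed from the portal down to the individual encapsulated resource.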
Grid Middleware
Because manufacturing resources are heterogeneous, distributed over a wide
area, and belong to different enterprises at the same time, it is difficult to use them
in the Grid directly; corresponding software systems are needed to dispel the
differences between the joined systems and to realise unified management of all
the resources in the Grid. Research institutions around the world have carried out a
large amount of research on Grid architectures, Grid middleware, protocols, and
programming and runtime environments, and have developed middleware tools
and frameworks such as Globus, Condor-G, Storage Resource Broker, DataCutter,
Legion, Cactus and common frameworks. These research results establish the
foundation for designing and implementing Grid collaborative designing
middleware.
Design Middleware
The upper stratum of the Grid middleware is the design middleware for
collaborative design. It includes special-purpose services for the manufacturing
field, developed on top of the core middleware, in order to support collaborative
design applications across the Grid. It mainly comprises job management, data
management, Grid information management, manufacturing resource
management, resource virtualisation tools and the visual Grid environment
foundation, together with the collaborative design environment, design data
management, design information management, etc.
Application Layer
The application layer offers the interface to the contents of the Grid, which is
the aspect ordinary users perceive. It includes the Web-based portal entry and the
application interfaces that other systems develop using the Grid service interfaces.
Users of the application layer need not understand the underlying structure of the
collaborative design Grid.
6 Conclusions
Through the integration and sharing of manufacturing resources, Grid
technology will bring prominent advantages to the manufacturing industry.
Research on and application of the collaborative design Grid show a good
development trend. However, Grid collaborative designing is still a newly
developing field; its research is at the starting stage, and neither its theory nor its
application is yet mature. This paper encapsulates manufacturing resources into
services using WSDL descriptions and service mapping and deployment methods,
and gives a reference model of Grid collaborative designing, in order to address
the main problems of research on Grid collaborative designing and to make
collaborative design techniques over the Grid widely used in industry.
References
[1] Globus Alliance http://www.globus.org/
[2] SETI@home http://setiathome.ssl.berkeley.edu/
[3] Global Grid Forum http://www.gridforum.org
[4] European Union (EU) DataGrid http://eu-datagrid.org
[5] Ian Foster, Carl Kesselman, A Distributed Resource Management Architecture that
Supports Advance Reservations and Co-Allocation.
http://www.globus.org/documentation/incoming/iwqos.pdf
[6] Renato Figueiredo, Peter A. Dinda, Jose Fortes. Resource Virtualization Renaissance.
Computer, Vol. 38, Issue 5, 2005, pp. 28-31.
[7] Rajaram Shivram Kumar, Zhonghua Yang, Jing Bing Zhang, and Liqun Zhuang.
Virtualization for Manufacturing Web Services: a WS-RF approach. In Grid Asia
Abstract. The control of the design process requires taking into account three closely
overlapping dimensions relating to the product, as the object to be defined; the process, as
the generator of this object; and, finally, the organization. Within the framework of the
work undertaken during the IPPOP project, an integrated model was proposed with a view
to developing a software prototype. We are interested here in the software support that can
be brought to the actors to manage the design process by correlating the organization of the
company and the definition of the product development process with the structure of
design projects and the control of the real process. This tool is materialised by the
PEGASE software, for which we present an application inspired by an industrial case
study in an SME.
1 Introduction
Today, to increase design performance, companies must not only control the
design process but also manage the design system. The aims of design
management are to improve the performance of the company and to give it
reactivity to evolving customer expectations and market constraints. Controlling
the design system requires being able to understand and evaluate the design
process, in particular the activities which make it up, but also the context of the
design. Thus the evaluation of design must propose a set of measurement
elements, identified thanks to a model of the system, to provide relevant
information ensuring coherent decision-making with respect to the real state of the
system. The difficulty lies in modelling the system for its evaluation.
Concerning the design process, it is necessary to focus on the definition of the
product and its evolution, and on the design objectives constrained by the
1 Associate Professor. IMS, CNRS UMR 5218, LAPS department, University Bordeaux 1,
351 Cours de la Libération, 33405 Talence. Tel: +33540003584, Fax: +33540006644, email:
vincent.robin@laps.ims-bordeaux.fr
598 V. Robin, C. Merlo, Ph. Girard
organization of the company [1], but also on the factors that influence the system,
such as technologies, human resources and physical implementations [2].
Considering this viewpoint and the integrated model defined during the IPPOP
project [3], Robin et al. proposed a model to evaluate the design system [4], a
methodology to implement this model and a software prototype (PEGASE) to
assist design actors in making this methodology operational [5]. In this paper, we
present some results of the IPPOP project and focus more particularly on the
PEGASE software application. The first part of the paper describes the integrated
product – process – organization model of the IPPOP project. On the basis of this
model, we then propose a detailed presentation of PEGASE, a software prototype
supporting actors throughout a design project. We show how the prototype makes
it possible to model and follow up the evolution of the design system, but also to
create a project and follow up its progress.
(Figure: the IPPOP integrated model, linking the organization model, the process
model and the product model.)
According to the concepts and the models proposed in the IPPOP project, we
developed a software prototype, PEGASE, to support actors during a design
project. To ensure that the prototype respects criteria of conformity, reliability,
safety, dimensioning and maintainability [7], the design phase was based on
concepts proposed by the creators of the UML language [8]. This choice is justified
by the fact that the method is very structuring. Moreover, as we wished the
application to be based on open source principles and to be easily and quickly
usable in a network of actors, the Graphical User Interfaces were developed in the
PHP language. This language, widely used in the development of Internet websites,
offers the advantage of being a script language, not compiled but directly
interpreted. Finally, the large amount of data to be stored and handled implied the
use of a database, and MySQL was retained to manage it. The initial objective
PEGASE must answer is to ensure the connection between the structuring of the
company organization relating to design and the control of a design project, as
considered in the IPPOP project.
These actions associated with the integrated model ensure that the organization
of the company, the multilevel management of projects, the differentiation
between decisions and the transformation of product–process knowledge, the
synchronization of informational flows and, finally, the follow-up of projects are
all taken into account.
A prototype of software to manage design system 601
(Figure: the PEGASE GUI. A tree pane shows the company structure: Society, Administrative Direction, Marketing Department, Machine Shop, Mechanic Devices Dept. The design framework area (Summary / Structure / Project tabs) presents a decision, e.g. "Mass of the mast: same mass as the A350", its variables (target: X kg, actual value of the PI: X kg), the allocated human and material resources, and a form for creating a performance indicator with the fields Name, Type, Target, Unit and Associated Objective.)
The decision frame enables him to know his work context: his objectives, his
criteria and decision variables, his constraints, his performance indicators and the
resources allocated to achieve his goals with regard to those performance
indicators. He is then able to begin the control phase previously structured,
assigned and planned. The coordinator has the opportunity to create sub-projects,
which will automatically be associated with decision centres at the lower decisional
level. Finally, he defines the tasks to be carried out, by completing all or part of
the tasks specified by the administrator or by introducing new tasks depending on
the needs of the project. This guarantees the flexibility of the design process
evolution during the project. By using the preset informational links, PEGASE
informs each new local coordinator of sub-projects and each designer assigned to
specific tasks of their objectives. Project managers and designers have the same
GUI (Figure 3) to understand the context in which they must carry out their tasks.
The difference is that the project manager creates performance indicators whereas
the designer can only complete them: at the end of their task, designers must indicate
the values of the performance indicators.
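To make the decision-frame concept concrete, the following Python sketch is a hypothetical data model of our own (PEGASE itself is implemented in PHP with a MySQL database; the class and field names here are illustrative only, not taken from the tool):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class PerformanceIndicator:
    name: str
    target: float                   # target value fixed by the project manager
    unit: str = ""
    actual: Optional[float] = None  # value reported by the designer at task end

@dataclass
class DecisionFrame:
    """Work context handed to a coordinator or designer (hypothetical model)."""
    objectives: List[str]
    criteria: List[str]
    decision_variables: List[str]
    constraints: List[str]
    indicators: List[PerformanceIndicator] = field(default_factory=list)
    human_resources: List[str] = field(default_factory=list)
    material_resources: List[str] = field(default_factory=list)

    def indicators_met(self) -> bool:
        """Compare the reported indicator values with their targets."""
        return all(pi.actual is not None and pi.actual <= pi.target
                   for pi in self.indicators)
```

A coordinator could then call `indicators_met()` to check the values reported by designers against the targets before deciding on corrective actions.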
The coordinator is informed of the new values at his next connection. If the
indicators do not correspond to the expected values, he analyses the situation and
may then decide to start new activities, to modify some elements of the decision
frame (objectives, constraints, resources...), etc.
References
[1] Mintzberg H. Le management: voyage au centre des organisations. Les Editions
d'Organisation, 1989.
[2] Wang F, Mills J J, Devarajan V. A conceptual approach managing design resource.
Computers in Industry, 2002, Vol. 47, pp 169-183.
[3] Roucoules L, Noel F, Teissandier D, Lombard M, Debarbouillé G, Girard P, Merlo C,
Eynard C. IPPOP: an opensource collaborative design platform to link product, design
process and industrial organisation information. Proceedings of the International
Conference on Integrated Design and Manufacturing in Mechanical Engineering,
Grenoble, France 2006.
Abstract. Owing to its flexibility, stigmergy and self-organization, ant-based clustering has
been applied in a variety of areas, from problems arising in commerce to circuit design,
text-mining, etc. This paper presents a new ant-based clustering method, the AMC
algorithm. Firstly, an artificial ant movement (AM) model is presented; secondly, the new
ant clustering algorithm is constructed on the basis of the AM model. In this algorithm,
each ant is treated as an agent representing a data object, and each ant has two states: a
resting state and a moving state. The ant's state is controlled by two predefined functions.
By moving dynamically, the ants form different subgroups adaptively, and consequently the
whole ant group dynamically self-organizes into distinctive and independent subgroups
within which highly similar ants are closely connected. The algorithm can be accelerated
by the use of a global memory bank, an increasing radius of perception and a density-based
'look ahead' method for each agent. Experimental results show that the AMC algorithm is
superior to other ant clustering methods: it is adaptive, robust and efficient, and achieves
high autonomy and simplicity. It is suitable for solving high dimensional and complicated
clustering problems.
1. Introduction
Clustering analysis is an important method in data mining. It is a discovery process
that groups a set of data such that the intra-cluster similarity is maximized and the
inter-cluster similarity is minimized. Clustering of data in a large dimension space
is of great interest in many data mining applications [10].
Clustering has been widely studied since the early 1960s. Classic
approaches include hierarchical algorithms, partitioning methods such as K-means
and Fuzzy C-means, graph theoretic clustering, neural network clustering, and
statistical mechanics based techniques. Recently, several papers have highlighted
1 Associate Professor, Business College of Beijing Union University, A3 Yanjingdongli,
Chaoyang, Beijing, 100025, P.R. China; Phone: 086-10-65940712; Fax: 086-10-65940655;
E-mail: jianbin.chen@bcbuu.edu.cn
606 J.Chen,J.Sun,Y.Chen
the efficiency of stochastic approaches based on ant colonies for data clustering
[2, 3, 5, 8].
Ant-based clustering and sorting was originally introduced for tasks in
robotics by Deneubourg [3]. Entomologists observing societies of ants
found that larvae and food are not scattered randomly about the nest; in fact
they are sorted into homogeneous piles. Deneubourg et al. proposed a basic model
that explains this spatial structure of cemetery formation as a result of simple, local
interactions without any centralized control or global representation of the
environment [3]. Holland et al. applied a related model in robotics to accomplish
complex tasks with several simple robots [9]. Lumer and Faieta modified the
algorithm and extended it to numerical data analysis by introducing a measure of
dissimilarity between data objects [5]. Kuntz et al. applied it to graph
partitioning [11], text-mining [7] and VLSI circuit design [12]. Note that Monmarché
has introduced an interesting AntClass algorithm, a hybridization of an ant colony
with the k-means algorithm, and compared it to traditional k-means on various data
sets, using the classification error for evaluation purposes [8]. However, the results
obtained with this method are not applicable to ordinary ant-based clustering, since
it differs significantly from the latter.
The research presented here proposes a new ant-based clustering
algorithm (AMC), inspired by the behaviour of gregarious ant colonies.
Firstly, an artificial ant movement (AM) model is presented; secondly, the new ant
clustering algorithm is constructed on the basis of the AM model. In this algorithm,
each ant is treated as an agent representing a data object, and each ant has two
states, resting and moving, controlled by two predefined functions. By moving
dynamically, the ants form different subgroups adaptively, and consequently the
whole ant group dynamically self-organizes into distinctive and independent
subgroups within which highly similar ants are closely connected. The algorithm
can be accelerated by the use of a global memory bank, an increasing radius of
perception and a density-based 'look ahead' method for each agent.
Let us assume that an ant $o_i$ is located at site $r$ at time $t$. The local density of objects
similar to $o_i$ at site $r$ is given by

$$
f(i) =
\begin{cases}
\dfrac{1}{\sigma^2} \displaystyle\sum_{o_j \in Neigh_{\sigma \times \sigma}(r)} \left[ 1 - \dfrac{d(o_i, o_j)}{\alpha} \right] & \text{if } f(i) > 0 \\[4pt]
0 & \text{otherwise}
\end{cases}
\qquad (1)
$$
clusters. The parameter $\alpha$ also determines the number of clusters and the speed of
convergence: the bigger $\alpha$ is, the smaller the number of clusters and the faster the
algorithm converges.
The probability activation function is the function which switches an ant agent
from one state to the other, resting or moving. Obviously, similarity is one of its
variables, and its value domain is [0, 1]. There are two situations in which an ant
agent may change its state, and we define two functions to determine whether the
change occurs. The probability of an ant agent switching from resting to moving is
calculated by $p_M(i)$:
$$
p_M(i) = \left( \frac{k_1}{k_1 + f(i)} \right)^2 \qquad (2)
$$

and the probability of an ant agent switching from moving to resting is calculated by $p_R(i)$:

$$
p_R(i) = \left( \frac{f(i)}{k_2 + f(i)} \right)^2 \qquad (3)
$$

where commonly $k_1 = 0.1$ and $k_2 = 0.3$.
1: procedure BASIC_ALGORITHM
   /* INITIALIZATION PHASE */
2:   preprocess data and initialize parameters
3:   for each agent do
4:     randomly scatter the agent on the toroidal grid
5:     let ci = data item ID
6:     let si = moving
7:   end for
8:   while (not termination) do
9:     for each agent do
10:      if si = moving, calculate pR(i)
11:      if si = resting, calculate pM(i)
12:      switch state based on pR(i) or pM(i)
13:      update ci
14:      if si = moving, select next site
15:    end for
16:  end while
17:  output cluster information of all agents
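The basic procedure above, together with Eqs. (1)–(3), can be sketched in Python. This is a minimal illustration of our own, not the authors' implementation: the Euclidean dissimilarity, the parameter values, the grid size and the termination rule are assumptions, and the memory-bank and 'look ahead' accelerations are omitted.

```python
import math
import random

# Parameter values below are assumptions for illustration, not from the paper.
ALPHA = 0.5        # alpha of Eq. (1): dissimilarity scale
SIGMA = 3          # sigma of Eq. (1): side of the sigma x sigma patch
K1, K2 = 0.1, 0.3  # constants of Eqs. (2) and (3)
GRID = 20          # side of the toroidal grid

def dissimilarity(a, b):
    """d(oi, oj): Euclidean distance between two data objects (assumed)."""
    return math.dist(a, b)

def local_density(i, ants, data):
    """f(i) of Eq. (1), computed over the sigma x sigma patch around ant i."""
    xi, yi = ants[i]["pos"]
    total = 0.0
    for j, ant in enumerate(ants):
        if j == i:
            continue
        xj, yj = ant["pos"]
        # toroidal Chebyshev distance between grid sites
        dx = min(abs(xi - xj), GRID - abs(xi - xj))
        dy = min(abs(yi - yj), GRID - abs(yi - yj))
        if max(dx, dy) <= SIGMA // 2:
            total += 1.0 - dissimilarity(data[i], data[j]) / ALPHA
    f = total / (SIGMA * SIGMA)
    return f if f > 0 else 0.0

def p_moving(f):
    """Eq. (2): probability of switching from resting to moving."""
    return (K1 / (K1 + f)) ** 2

def p_resting(f):
    """Eq. (3): probability of switching from moving to resting."""
    return (f / (K2 + f)) ** 2

def amc(data, iterations=200, seed=0):
    """Basic AMC loop: one ant agent per data object on a toroidal grid."""
    rng = random.Random(seed)
    ants = [{"pos": (rng.randrange(GRID), rng.randrange(GRID)),
             "state": "moving"} for _ in data]        # pseudocode lines 3-7
    for _ in range(iterations):                        # line 8
        for i, ant in enumerate(ants):                 # line 9
            f = local_density(i, ants, data)
            if ant["state"] == "moving" and rng.random() < p_resting(f):
                ant["state"] = "resting"               # lines 10 and 12
            elif ant["state"] == "resting" and rng.random() < p_moving(f):
                ant["state"] = "moving"                # lines 11 and 12
            if ant["state"] == "moving":               # line 14
                x, y = ant["pos"]
                ant["pos"] = ((x + rng.choice((-1, 0, 1))) % GRID,
                              (y + rng.choice((-1, 0, 1))) % GRID)
    return ants
```

Resting ants that end up on adjacent grid sites would then be read off as clusters; the memory bank and density-based 'look ahead' accelerations of Section 3 are left out of this sketch.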
A new Ant-based Clustering Algorithm on High Dimensional Data Space 609
3 Algorithm Optimization
In data clustering, many algorithms (like K-means and ISODATA [1]) require that
an initial partition be given as input before the data can be processed. This is
one common drawback of these methods, for it is not easy to specify a proper
number of clusters for a set of data in advance. Moreover, these methods are often
led to locally optimal solutions by their use of a deterministic search, which is
another major drawback.
In contrast with the methods mentioned above, ant clustering boasts a
number of advantages due to its use of mobile agents, which are autonomous
entities, both proactive and reactive, with the capability to adapt, cooperate
and move intelligently from one location to another in the bi-dimensional grid
space. These advantages are:
Autonomy: No prior knowledge (such as an initial partition or the number of
classes) about the future classification of the data set is required. Clusters are
formed naturally through the ants' collective actions.
Flexibility: Rather than a deterministic search, a stochastic one is used,
avoiding local optima.
Parallelism: Agent operations are inherently parallel.
While many of these advantages look attractive, two important defects remain.
The first is that there may be some data objects which have never been
assigned to an ant when the algorithm terminates. This is due to the fact that each
time an ant is assigned a new data object to inspect, the data object is selected
randomly. Unassigned objects lead to a high misclassification error rate,
for they have never taken part in the clustering loop. The second defect is the
long time consumed by clustering: due to the random motion of the ants,
ant-based clustering algorithms have a slow convergence rate.
To improve the performance of the traditional ant-clustering algorithm, we
have made several improvements. We treat each data object as an ant agent, to be
sure that every object is visited in each iteration. This also saves memory,
because there are no additional agents besides the data objects themselves. In
addition, we adopt techniques such as a memory bank, the 'look ahead' method
and so on.
The results demonstrate that, if clear cluster structures exist within the data, the ant
clustering algorithms, both CSI and AMC, are quite reliable at identifying the
correct number of clusters. In contrast with k-means, the AMC algorithm shows
its strength in its ability to automatically determine the number of clusters within
the data.
Comparing the runtimes of the three algorithms, we can see that AMC is the
fastest, and its time consumption changes little with the scale of the data set; it is
thus a fast clustering algorithm with good scalability. The CSI algorithm is the
slowest of the three but, compared with k-means, the growth gradient of its time
consumption decreases as the data set grows.
5. Conclusion
In this paper, we have proposed a new ant-based clustering algorithm, derived
from the AntClass and LF clustering algorithms and CSI. Firstly, the AM model
has been proposed and a new clustering method has been presented based on it.
Secondly, the device of a memory bank is proposed, which brings forth heuristic
knowledge guiding ant movement in the bi-dimensional grid space, so that the
classification error rate drops. Thirdly, we proposed a density-based method
permitting each ant to 'look ahead', which reduces the number of region inquiries
and consequently saves clustering time. We performed experiments on real and
synthetic data sets, and compared the results with those obtained using other
classical clustering algorithms. The results demonstrated that AMC is a viable and
effective clustering algorithm.
Future work focuses on:
(1) giving the ant more powerful heuristic rules to guide its motion and
therefore speed up the clustering rate;
(2) combining AMC with other clustering algorithms such as k-means and
DBSCAN to further improve the clustering quality.
References
[1] Ball G.H. and Hall D.J. ISODATA, a novel method of data analysis and pattern
classification. Technical report, Stanford Research Institute, 1965.
[2] Wu B., Zheng Y., Liu S. and Shi Z. SIM: A document clustering algorithm based on
swarm intelligence. IEEE World Congress on Computational Intelligence, Hawaii,
pp. 477-482, 2002.
[3] Deneubourg J.L., Goss S., Frank N., Sendova-Franks A., Detrain C. and Chretien L.
The dynamics of collective sorting: robot-like ants and ant-like robots. In: Proceedings
of the 1st International Conference on Simulation of Adaptive Behavior: From Animals
to Animats, MIT Press/Bradford Books, Cambridge, MA, 1991, pp. 356-363.
[4] Bonabeau E., Dorigo M. and Theraulaz G. Swarm Intelligence: From Natural to
Artificial Systems. Oxford University Press, New York, NY, 1999.
[5] Lumer E. and Faieta B. Diversity and adaptation in populations of clustering ants. In:
Proceedings of the Third International Conference on Simulation of Adaptive
Behaviour: From Animals to Animats 3, pp. 501-508. MIT Press, Cambridge, MA,
1994.
[6] Handl J. and Meyer B. Improved ant-based clustering and sorting in a document
retrieval interface. LNCS 2439, 2002, pp. 913-923.
[7] Hoe K., Lai W. and Tai T. Homogeneous ants for web document similarity modeling
and categorization. In: Proceedings of the Third International Workshop on Ant
Algorithms (ANTS 2002), volume 2463 of LNCS, pp. 256-261. Springer-Verlag,
Berlin, Germany, 2002.
[8] Monmarché N., Slimane M. and Venturini G. AntClass: discovery of clusters in
numeric data by a hybridization of an ant colony with the k-means algorithm. Internal
Report No. 213, E3i, January 1999.
[9] Holland O.E. and Melhuish C. Stigmergy, self-organization, and sorting in collective
robotics. Artificial Life, 5, 1999, pp. 173-202.
[10] Berkhin P. Survey of clustering data mining techniques. Accrue Software Research
Paper, 2002.
[11] Kuntz P., Snyers D. and Layzell P. A stochastic heuristic for visualizing graph clusters
in a bi-dimensional space prior to partitioning. Journal of Heuristics, 5(3):327-351,
1998.
[12] Kuntz P., Layzell P. and Snyers D. A colony of ant-like agents for partitioning in VLSI
technology. In: P. Husbands, I. Harvey (Eds.), Proceedings of the Fourth European
Conference on Artificial Life, MIT Press, Cambridge, MA, 1997, pp. 417-424.
Tools for Designing Collaborative Working
Environments in Manufacturing Industry
1 Introduction
The main business objective of the work presented in this paper is to provide a
comprehensive solution for extending the products of manufacturing companies
acting in the global market. The objective is to enable equipment manufacturers,
especially SMEs, to provide new services and new business models, e.g. selling
services (instead of selling classical equipment such as control systems) by renting
the equipment and guaranteeing its optimal use, which in turn calls for
innovative monitoring of product usage conditions and functions. So companies
1 Dr. Dragan Stokic, ATB – Institute for Applied System Technology Bremen GmbH,
Wiener Straße 1, 28359 Bremen, Germany. Email: dragan@atb-bremen.de – Web:
www.atb-bremen.de, Tel.: 0049-421-22092-40, Fax: 0049-421-22092-10
614 D. Stokic, A.T. Correia, C. Grama
will not sell (only) the classical products, but also the knowledge of how to
optimally select and use them. Such an approach, i.e. the provision of effective
customer support for equipment operating worldwide, is of vital importance
for survival in the global market. Effective provision of such application
services requires means for efficient collaboration between the different actors
within a supply chain on one side and customers on the other, within an extended
enterprise (EE) context. Although a number of ICT solutions for product extension
are available, they do not allow for efficient collaboration within an EE, taking into
account the different, dynamically changing collaboration patterns and different
technical backgrounds of the actors involved.
The research work presented here is aimed at pushing the use of advanced service
oriented architectures for Collaborative Working Environments (CWE) into real
industrial practice. Such a CWE solution has to enable cost-effective product
extension independently of the geographical locations of customers and manufacturers.
The paper is structured as follows. In Section 2 a brief overview of the State-of-
the-art of CWE in manufacturing industry is provided. Section 3 explains the
concept of the proposed CWE platform. Section 4 is dedicated to the tools for
design of CWE in manufacturing industry. Section 5 briefly presents applications
of the results in specific industrial environments. Conclusions indicate the key
innovations of the research work presented in this paper and outline future work.
2 State-of-the-art
Collaborative work in manufacturing industry requires solving several
fundamental problems. Collaboration amongst teams in an enterprise, often
geographically dislocated, is currently burdened with a number of problems
concerning the distribution of work, the synchronisation and persistence of
workspaces, knowledge activation, etc. The teams in modern and highly flexible
manufacturing industry often require different collaboration patterns (e.g. a
combination of synchronous and asynchronous collaboration) [6]. For example,
collaboration for product problem solving has to integrate effective information
sharing and activity synchronization [4].
Based on the analysis of the requirements of users in manufacturing industry in
two European projects [8, 2], and on the analysis of the state-of-the-art (SotA), the
main gaps between the requirements and the SotA were identified. This led to the
definition of the key RTD challenges which have to be addressed in order to satisfy
the requirements of industry regarding CWEs for product extensions.
The analysis of users' requirements clearly indicates that different so-called
collaborative application services (see the next section) are needed which satisfy
the (basic) requirements to support (a) different collaboration patterns, with
special emphasis on the temporal aspect: synchronous, asynchronous and multi-
synchronous patterns, (b) distributed work in an EE environment, which includes
identification of appropriate expertise, team forming, checking the availability of
experts, etc., with special challenges regarding collaboration between organized
teams and the wider community, (c) effective (on-line) provision of knowledge on
the product/equipment and on the actors involved, (d) effective knowledge
management (KM), (e) dynamic changes in collaborative work conditions, and
(f) decision making in CWE.
Many of the problems addressed are common to collaborative work in
multiple different domains. However, there are several specific issues related to
CWE in manufacturing industry which impose the need for RTD activities
specifically focused upon manufacturing industry. Such issues are:
- large differences in the working environments of the collaborating teams
(e.g. shop-floor, logistics area, office area for design teams, etc.)
- different technical backgrounds of teams collaborating on common
problems (e.g. shop-floor workers with practical experience but (often) little
ICT background, designers with technical expertise, etc.)
- specific security requirements
- a strong focus on organized groups, but an evident need to include more
ad-hoc groups in the collaboration.
3 Proposed CWE
The objective is to develop a generic, widely applicable, modular collaborative
platform to support product extensions. The platform will provide various
Collaborative Application Services (CAS) to support product extensions, e.g.
support in solving problems related to product use (including support in
diagnostics, etc.), maintenance services, product/equipment reconfiguration, etc.
The targeted platform will be open to various services supporting different
products/equipment and the involvement of different actors (product designers and
service providers, maintenance providers, shop-floor operators, end-customers).
By CAS we understand services which involve the collaboration of
different actors, teams and artefacts within an EE, and which are focused on
specific application areas [7]. These services use the information middleware
which connects machines/equipment with the platform and provides the
information on products/processes/production units (equipment) needed for
collaborative work within a specific application area.
The analysis has shown that the required application services supporting product
extension have common 'collaborative' actions which may be supported by a
common approach. Therefore, the new CWE solution will include so-called core
collaborative services, addressing application-independent functionality to support
collaboration and covering these common collaboration actions (such as resource
discovery, collaboration traceability, knowledge provision, etc.). The core
collaborative services (CCS) will be combined with application-specific software
solutions and semantic-based knowledge management (SBKM) tools, allowing for
effective collaborative work and knowledge sharing among different actors.
Since the application services will have to be dynamically updated, due to
frequent changes in collaboration needs and conditions, and have to be integrated
with other collaborative services, there is a clear need to provide a platform which
(Figure: overview of the CCS – Product/Process Knowledge Provision, Resource Discovery, Decision Support, Selection of Communication Services, Team Composition, Collaboration Traceability, ...)
Three tools are investigated and developed, addressing several RTD problems:
Service Creation and Edition of CAS: This tool allows users to create a new
application service, or to edit/modify an existing one. The functionality is
available to expert users: it may require programming skills and certainly requires
high-level access rights, as it implies access to sensitive information of the
company (e.g. the list of all users and their respective rights). The tool allows users
to define issues such as: the purpose of the CAS; the text and structure of the help
system to be used in the CAS; the users who are allowed to use the CAS; the
collaboration patterns supported by the CAS; the CCS and additional functionality
necessary to implement the CAS; and the history of all actions related to creating
and editing the CAS.
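The set of attributes the tool records could be pictured, purely for illustration, as a service descriptor. The following Python sketch is our own; the class and field names are invented, not taken from the tool:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CASDescriptor:
    """What the Service Creation and Edition tool records for one CAS
    (hypothetical data model)."""
    purpose: str
    help_structure: List[str]          # text/structure of the help system
    allowed_users: List[str]           # users permitted to use the CAS
    collaboration_patterns: List[str]  # e.g. "synchronous", "asynchronous"
    required_ccs: List[str]            # core collaborative services composed in
    edit_history: List[str] = field(default_factory=list)

    def record_edit(self, entry: str) -> None:
        # the tool keeps the history of all create/edit actions
        self.edit_history.append(entry)
```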
Identification of Knowledge Flow for CAS: This tool helps the service
developer in defining which information is needed for the service, a selection of
SBKM tools and auxiliary functionality, and their potential use in the service.
The tool gets from the Service Creation tool the information on which
collaboration patterns the CAS will support, as well as the target user group.
Knowing these is the starting point for the definition process. For instance, as can
be seen in Figure 2, the expected collaboration patterns restrict the possible range
of available knowledge objects. The visibility of a knowledge object is defined as
follows:
Visibility(Object) = v(collaboration pattern, user rights)
That means that whether a knowledge object will be available in a service
depends both on the desired collaboration pattern (for instance, some objects could
be available only in asynchronous mode) and on the rights of the users in the group
cleared to use the service.
The relevance of an SBKM tool is defined in an analogous way:
Relevance(Tool) = r(collaboration pattern, user rights)
As a particular case of tool relevance: for instance, communication tools like
email are not highly relevant if the spatial collaboration pattern is local.
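The two mappings above can be read as simple functions of (collaboration pattern, user rights). The following Python sketch is a hypothetical illustration of our own; the object names and the concrete rules are invented, except the email/local case, which is taken from the text:

```python
def visibility(obj: str, pattern: str, rights: str) -> bool:
    """v(collaboration pattern, user rights) for a knowledge object.
    Example rule (invented): some objects exist only in asynchronous mode."""
    async_only = {"annotation archive"}  # hypothetical set of knowledge objects
    if obj in async_only and pattern != "asynchronous":
        return False
    return rights in ("member", "manager")

def relevance(tool: str, spatial_pattern: str, rights: str) -> float:
    """r(collaboration pattern, user rights) for a communication/SBKM tool.
    Rule from the text: email is not highly relevant when the spatial
    collaboration pattern is local; other scores are invented."""
    if rights not in ("member", "manager"):
        return 0.0
    if tool == "email" and spatial_pattern == "local":
        return 0.1
    return 1.0
```

Whether such rules would be hard-coded or configured per service is not stated in the paper; the sketch only shows the shape of the lookup.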
2 The CCS are available as atomic Web Services which can then be composed/orchestrated
in order to obtain composite CASs. An orchestration of the CCS could be done with BPEL
tools: the BPEL partner links will point to the CCS used by the composite service.
The key layer includes the CCS – a generic set of services supporting collaboration
among teams in an EE. The key issue is that the design tools and the CCS support
different patterns of collaboration among the teams [5]. Table 1 provides an
overview of the key CCS.
Different (existing) KM tools are combined within a CAS to support knowledge
sharing among teams. The CCS for product/process knowledge provision has
the task of selecting the appropriate tool to provide knowledge (e.g. for searching
documents related to problems, etc.). This CCS provides, for example, problems
'similar' to the one to be solved by an actor/group, by applying Case Based
Reasoning and Rule Based Reasoning tools [8]. The existing tools are used, but
they are upgraded by adding collaborative aspects: for each actor, the knowledge
of his/her collaboration within different groups is used in defining the search
criteria and/or the weighting of the different similarity criteria for Case Based
Reasoning. The problem of ontology is addressed by applying a new approach for
distributed set-up and maintenance of ontologies [3].
5 Application
One application addresses a CAS for problem solving within complex assembly
lines (for small motors at an automotive industry supplier), supporting
collaboration of the product (line) designers with the operators/foremen at the
shop-floor, the maintenance service and the control system providers, in order to
identify problems/possible improvements and support the design of new/re-
engineered lines. The currently used information middleware (based on Siemens'
ePS) is 'upgraded' with the CCS.
Another application concerns a company that designs, manufactures and trades
air conditioning equipment. The company needs effective solutions for
maintenance and after-sales services, since such CAS have a strong impact on the
brand image. In one CAS, oriented to the support of acclimatisation units, three
different entities in distinct geographical locations interact simultaneously to solve
a problem in the machine. A second CAS is oriented to product design and
reconfiguration.
6 Conclusions
The main innovation of the work presented is the provision of a new CWE
platform, including a set of CCS, combined with existing technologies to provide
application services for product extensions in manufacturing industry. The
proposed approach is fully in line with the findings of the Expert Group on CWE
[1]. The group identified the need to develop collaboration services layered in
three blocks: generic services that define basic components (i.e. CCS); domain-
specific services (e.g. for manufacturing industry); and context-specific services.
Every layer of this architecture has to be supported by services and tools for
collaboration design. Exactly these are the objectives of the work presented:
- to develop a set of CCS allowing the building of CAS for product extensions;
- to develop a set of tools for the design of CAS in manufacturing industry.
The solution will be general enough to be used for different products and
scalable to support future products, and thus usable by a wide spectrum of
companies and their customers. Future work will focus on automatic design/update
of CAS by the developed design tools.
Acknowledgement
The work presented was partly carried out in the scope of the RTD project InAmI,
supported by the Commission of the European Community under the Information
Society Technology Programme, contract FP6-2004-IST-NMP-2-16788.
7 References
[1] Expert Group. Towards Activity-oriented CWE – A Research Roadmap 2007–2020.
DG Information Society, EC, 2006. Rapporteurs: N. Mehandjiev and D. Stokic.
[2] InAmI consortium. Presentation of InAmI Project. 2005. Available at:
<http://www.inami.eu>.
[3] Kuczynski A, Stokic D, Kirchhoff U. Set-up and maintenance of ontologies for
innovation support in extended enterprises. Springer, London, 2005.
[4] Miao Y, Haake J M. Supporting Concurrent Design by Integrating Information Sharing
and Activity Synchronization. Presentation, CE'98, Tokyo, Japan, July 15-17, 1998.
[5] Molli P, Skaf-Molli H, Oster G, Jourdain S. SAMS: Synchronous, asynchronous,
multi-synchronous environments. Proceedings, International Working Group on CSCW
in Design, September 2002.
[6] Stokic D. Towards Activity-oriented CWE in Manufacturing Industry. Presentation,
CWE FP7 Consultation Workshops, Brussels, March 16-17, 2006.
[7] Stokic D. A New CWE for Concurrent Engineering in Manufacturing Industry. 13th
ISPE International Conference on Concurrent Engineering, Juan-les-Pins, France, 2006.
[8] Urosevic Lj, Kopaczi S, Stokic D, Campos A. Knowledge Representation and Case-
Based Reasoning in a Knowledge Management System for Ambient Intelligence
Products. AIA 2006 Conference, Innsbruck, February 2006.
The Collaborative Digital Process Methodology
achieved the half lead-time of new car development
Hiroshi Katoh1
Digital Process Ltd
NOTE, a new automobile released by Nissan Motor Co., Ltd. in January 2005, was
brought to market after a product development period of 10 and a half months, the first
such achievement in the world. (1) This "super-shortened process" reduced the lead-time
of new automobile development, which had previously needed more than 20 months, by
half. The Japanese automotive industry achieved this less-than-one-year "super-shortened
process" by moving from the "five-lot process" of the 1980s through the "shortened
process" of the 1990s, as a result of its continuous efforts (Fig. 1). I have distilled the
methodology called the "Digital Collaboration Process Methodology" from the
countermeasures which contributed to these 20 years of process innovation.
¹ Hiroshi Katoh, Digital Process Ltd., Aida Bldg., 2-9-6 Nakacho, Atsugi-city, 243-0018, Japan; Tel: +81-46-225-3903; Fax: +81-46-225-3907; Hkatoh@DIPRO.co.jp
In the 1970s, automotive manufacturers around the world developed and began using their own "in-house CAD", first-generation CAD programs based on wireframes and surfaces. Through practical use of in-house CAD, they reached "CAD/CAM unification" in the 1980s. As expressed by the keyword "Clay to Die", a conceptual design model created in clay was converted directly into 3D CAD data, and die processing was unified under NC manufacturing. This contributed greatly to quality improvement in automotive manufacturing.
In model-based die manufacturing, by contrast, a wood-pattern-making expert produces a press die by creating a 3D plaster model (master model) from the curves expressed in conceptual design and drafting, and then profiling that model.
design information of the internal structure has been incorporated. Adding the die-specific addendum and die-face geometry to this 3D surface geometry creates the "die geometry master data"; NC data for die machining is then produced with CAM software. In other words, the "CAD/CAM unification method" is a procedure in which the series of die-creation steps is consistently connected by CAD data unified from conceptual design through die creation. (Fig. 2)
In the 1990s, with the progress of computer technology, the first-generation "in-house CAD" began to be replaced by "second-generation CAD": commercial solid-modeling CAD software that had reached a practical level of maturity. Automotive manufacturers began to adopt and use this software. Second-generation CAD delivered the efficiency represented by DMU (Digital Mock-Up) and enabled manufacturers to realize the "shortened process": a development period of a year and a half with three prototyping cycles. "Vehicle layout design", typified by engine-room layout, had already been practiced with first-generation (wireframe-based) CAD; verifying it with solid models instead brought tremendous efficiency gains and improved verification quality (Fig. 3).
This realization of DMU has contributed greatly not only to the design process but also to digitizing and bringing forward productivity verification, by enabling the "production engineering verification" that could not be performed on drawings. In the field of body structural analysis, furthermore, "digital performance evaluation" became possible once CAD DMU data was supplied by the design department; such evaluation had been difficult to perform in a timely way because of the long lead time needed to prepare the structural analysis model.
Not only the design department itself but also the production engineering and performance evaluation departments became able to use DMU data for their tasks, trusting the data as delivered. Both the departments creating data and those using it work under the scheme of "data master = data criteria". For this reason, we call this DMU-based design process the "Data Master Process" (Fig. 4).
Fig. 4. Achieving the "super-shortened" development lead time of less than one year
By defining all the components composing a vehicle with 3D solids and surfaces, a digital prototype of a new car could be created that looked like a real experimental car. The key factors were progress in second-generation CAD software and improved performance of the workstation platforms running such CAD software. This is indeed an innovation in automotive engineering supported by progress in information technology.
However, automotive manufacturers encountered many issues when trying to apply DMU (Digital Mock-Up) at the stages of new product development:
(1) A DMU is not complete until all the components have been collected.
(2) Presenting the variations of a product with DMU requires management of the product requirements and of the related assembly components.
(3) The timing of replacing intermittently modified assembly-component geometry with new versions in the DMU must be managed.
To solve these issues, the following measures were taken:
(a) Establish a system that supports creating data for reused components, in order to ease the designers' burden.
(b) For components in provision (components on approved drawings), make the responsible component suppliers deliver the data in 3D.
(c) Plan DMUs only as often as verification requires, list the components needed for each DMU, and regulate the level of detail at which the components' CAD data is created.
(d) Using the created DMUs, clarify the items and simulation criteria for "analysis evaluation" and "production engineering verification", and obtain more precise design-verification results.
Complying with these "DMU operation rules" across the entire company increases the value of using the "digital mock-up car" and brings successful results. The key here is that the design department creating the data assures the separate analysis and production engineering groups of the reliability of the product assembly data, so that engineers can use it at each engineering stage without concern. Concurrent engineering with DMU cannot succeed unless the reliability of the CAD data is assured.
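Rule (c) above, planning each DMU build as a list of required components at a prescribed level of detail, can be pictured as a small data model. The following sketch is illustrative only; every class and field name here is invented, not taken from any actual DMU system:

```python
from dataclasses import dataclass, field
from enum import Enum

class DetailLevel(Enum):
    OUTLINE = 1      # envelope geometry only
    SURFACE = 2      # exact outer surfaces
    FULL_SOLID = 3   # internal structure included

@dataclass
class ComponentSpec:
    part_number: str
    detail_level: DetailLevel
    delivered: bool = False  # has the designer/supplier issued 3D data yet?

@dataclass
class DMUPlan:
    """One planned digital mock-up build, per rule (c)."""
    milestone: str
    components: list[ComponentSpec] = field(default_factory=list)

    def missing(self) -> list[str]:
        """Issue (1): a DMU is complete only when every component is in."""
        return [c.part_number for c in self.components if not c.delivered]

plan = DMUPlan("basic-design review", [
    ComponentSpec("BODY-001", DetailLevel.SURFACE, delivered=True),
    ComponentSpec("ENG-114", DetailLevel.FULL_SOLID),
])
print(plan.missing())  # parts still blocking this DMU build
```

Listing the blocking parts per planned DMU is exactly the bookkeeping that keeps issue (1) from delaying verification.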
system". This "CAD template" advances matters by incorporating dimension-modification functionality into the manipulation history of the CAD system, so that the same 3D structural operations can be re-executed for different dimensions. Knowledge CAD thus performs effective 3D verification in the "basic design" phase, early in the new automobile development process [2].
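The idea of re-executing a recorded manipulation history with new dimensions can be shown with a toy parametric template. This is a sketch under deliberately simplified semantics; none of these function names come from a real CAD API:

```python
# A recorded "history" is a list of operations, each consuming named
# dimensions; re-running it against a new dimension table regenerates
# the model, which is what a CAD template does.

def extrude(profile_w, profile_h, depth):
    return {"op": "extrude", "volume": profile_w * profile_h * depth}

def fillet(model, radius):
    return {**model, "fillet_radius": radius}

# The template: operations plus the dimension names each one consumes.
HISTORY = [
    (extrude, ("width", "height", "depth")),
    (fillet, ("radius",)),
]

def rebuild(dims):
    """Re-execute the manipulation history against a dimension table."""
    model = None
    for fn, names in HISTORY:
        args = [dims[n] for n in names]
        model = fn(*args) if model is None else fn(model, *args)
    return model

# The same structural work, re-executed for two different dimension sets.
a = rebuild({"width": 10, "height": 5, "depth": 2, "radius": 1})
b = rebuild({"width": 12, "height": 5, "depth": 2, "radius": 1})
print(a["volume"], b["volume"])  # 100 120
```

Only the dimension table changes between the two builds; the sequence of 3D operations is replayed unchanged, which is the essence of the template approach.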
CAM, the computer technology for creating dies and tools with NC-controlled machining equipment, was introduced in the 1970s and is currently the typical approach to die creation. Beyond it, there is a more comprehensive computer-based verification of production engineering, called CAPE (Computer Aided Production Engineering). CAPE provides a verification method for production engineering aspects by simulating a series of workers' assembly activities on a computer and evaluating the procedure and its problems in advance. CAPE also has a long history as software and as a method; its use has spread rapidly in recent years, however, as upstream designers have turned to creating 3D product models and supplying them to production engineers. To shorten the development period of a new automobile and reduce the rework cycle of prototyping and testing, the product must be manufacturable efficiently and at reasonable cost, in addition to "maintaining functionality and performance".
The three key simulations for "productivity evaluation" are "equipment movement simulation (robot simulation)", such as NC data teaching for welding robots (Fig. 9); "workers' assembly activities"; and "forming simulation", such as die sequencing.
All of these have more than ten years of history as applications of digitization technology, and full application of CAPE has made it possible to maintain productivity with only one round of mass prototyping. Conventional CAPE, which used to "check extracted portions" of processes containing potential problems, has been improved to standardize the items to be checked based on error information from past manufacturing, and to extend the checking from a limited number of portions to all of them. This raised CAPE to a level that can be called "digital prototyping" and realized a successful level of productivity with one-time mass prototyping.
In the CAPE area, the necessary technology was ready for use, but the foundation of "product data preparation" was incomplete, which had prevented any large achievement. This was rapidly remedied by "reinforcing product data with complete DMUs" and by "arranging the environment", and CAPE became an innovation.
"Knowledge CAD" tools always ensure that the up-to-date "3D data of component parts" and "DMU data" needed for design verification of each portion are available.
(6) Use of analysis simulation
The issued 3D DMU data is needed to build the structural analysis model; the "digital experiment model" is built from the "digital prototype model".
(7) Digital engineering process verification
Digital engineering process verification enabled digital verification of production engineering, such as in-advance verification of manufacturability.
Table 3: "The Collaborative Digital Process Methodology" and "Seven countermeasures for digitalization"
By speeding up design studies, it contributed to early feedback on styling changes and early consideration of engineering requirements.
(6) Use of analysis simulation
Establishing "early analysis evaluation" through DMU enables a speedy feedback cycle between "design specifications" and "functionality and performance evaluation".
(7) Digital engineering process verification
The collaboration cycle between design and production engineering became fast because all parts were processed as 3D data and the BOM system kept the related data, including specifications, consistent with the design division. This made the production engineers' studies efficient and tremendously improved the accuracy of verification.
To draw up a "process innovation action plan" for the targeted product development work, the characteristics of the product, the development organization and the current progress of digitalization must all be considered. The "seven schemes for digitalization" are not automatically applicable to every product or organization. We first study and analyze the current work, decide which work to focus on, and select the schemes to apply from the "seven schemes for digitalization" and the 25 detailed schemes. Through this process we draw up the most suitable action plan and obtain the responsible person's approval to start the action.
Fig. 13 shows this practical process, beginning with "study the current situation" and proceeding through "make schemes" to the final "action plan".
4. Conclusion
Today's "super-shortened process" is the result of the automaker extracting the maximum synergy effect from combining the "seven schemes for digitalization" organically. I have established "The Collaborative Digital Process Methodology" from my experience of achieving these process innovations at the automaker. Japanese automotive manufacturers have advanced both product design and product manufacturing technologies. Their great challenge is to use digitization technology efficiently and appropriately and to keep being able to release their attractive
products to the market in a short period of time. It can be said that 30 years of improvement efforts in the Japanese way underlie their ability to develop the best products in the world, an ability derived from their cleverness in using digital technology under a scrap-and-build policy and from their conviction that "continuation is power".
I intend to work toward digital process innovation across all manufacturing industries with this "Collaborative Digital Process Methodology", using the "seven schemes for digitalization" and the 25 detailed schemes.
References
[1] Nikkan Kogyo Shinbun (newspaper), 18 January 2005.
[2] "Nissan Motor's New Development Process V-3P", Nikkei Monozukuri, June 2005, Nikkei BP.
[3] Toshiaki Mase, "Progress and Future Trend of Virtual Development Technology in Vehicle Development Process", Japan Automotive Engineering, Vol. 55, No. 1, 2004.
[4] Hiroshi Katoh, "Digital Development Process of Automotive: Review of History and Present Know-How", Japan Automotive Engineering, Vol. 60, No. 6, 2006.
Stakeholder Value Sustainability

Improvement of the Efficiency Model in Health Care through the Use of Stakeholders' Analysis Techniques

Clarissa C. Pires, Carolina D. Vidal¹
Abstract. The pursuit of efficiency in health care has become the aim of many stakeholders in the sector, because of the increasing demand for health care services and the rising expenses in the sector. Efficiency analysis is complex, however, in systems such as the health system, where there are conceptual challenges, multiple objectives and great scope for error. One of the difficulties is selecting the prominent variables of the efficiency model, those that represent the stakeholders' requirements. Stakeholders' Analysis is a technique used to evaluate different clusters of interest in complex systems, yet its application to efficiency analysis in the health sector is still rare. This paper uses Stakeholders' Analysis to support the health efficiency model and verifies its advantages and restrictions.
1 Introduction
Over the last decades, an astonishing increase in the pursuit of health care has been seen in all countries, regardless of their economic classification. The rising demand stems from the actions of important social actors: consumers, anxious for long and healthy lives, and suppliers, who bring forward new medicines and technological advances in the health area. These changes are favorable, since they increase people's life expectancy and well-being. From another point of view, however, countries face rising health expenditures, which consume a sizeable proportion of their gross domestic product. Because of this characteristic of the health sector, policy makers, administrators and clinicians worry not only about supplying quality services but also about delivering health services efficiently. According to [10], efficiency improvements in the health sector, even small ones, can yield considerable savings of resources or expansion of services for the community.
¹ Engineering, Area of Production Engineering, Instituto Tecnológico de Aeronáutica (ITA)/Comando-Geral de Tecnologia Aeroespacial (CTA), Praça Marechal do Ar Eduardo Gomes, 50, Vila das Acácias, CEP 12228-900, São José dos Campos, SP; Email: clarissa@ita.com.br; http://www.ita.com.br/
[2]. The distinct stakeholders' points of view become the variables of the model's dynamics. Figure 1 shows an example: the variable "number of hospital beds" can be an output or an input in a hospital efficiency context. From the administrator's point of view, this variable represents a system resource (an input); for the beneficiary, it is an output, one he wants available whenever he needs it.
Fig. 1. The administrator's and the beneficiaries' points of view
Another difficulty is trust in the results of the efficiency model, since the choice of these variables is, most of the time, subjective; the goal of the analysis may be compromised if some of the stakeholders are biased. [4] stress that, to attain the objectives of efficiency analysis and benchmark identification in the health sector, the correct identification of the model's inputs and outputs is necessary. The methodology for performance evaluation depends on the clarity of the objectives and goals of the health systems one wishes to evaluate [6].
3 Stakeholders’ Analysis
Stakeholders' Analysis is a methodology that provides ways to evaluate and understand the people, groups of people and institutions involved in a system, through the identification of stakeholder requirements. Stakeholder requirements express what the stakeholders of a system require of the product, processes and organization of that system; these requirements may be expressed as needs, wants, expectations, desires, priorities, objectives, or capabilities [8].
[14] defines Stakeholders' Analysis as a systematic process of capturing and analyzing qualitative information, used to identify the interests of third parties when developing and/or implementing a policy or a program. For [15], Stakeholders' Analysis is a methodology for identifying the key stakeholders of a project, searching out their interests and verifying how those interests can affect the risk and viability of the project. In the health sector, [11] concludes that Stakeholders' Analysis is an important task for guaranteeing the success of reform in the sector, and that this question must therefore be answered: what have been the roles of stakeholders, and their effect, in the process of health sector reform in developing countries?
There is no single way to specify the stakeholders' requirements when applying Stakeholders' Analysis, but it is useful to know the goals of the project before starting the analysis. The first step in approaching stakeholder analysis is determining the purpose of your inquiry, which in turn determines the time focus of interest and the issues to consider in conducting the analysis [13].
The methodology is composed of three phases (Table 1), illustrated here by a case study. The case seeks to identify the variables of an efficiency model for a comparative analysis among the health care systems of the member countries of the WHO. The variables defined through this methodology will be compared with the variables used in other studies, such as [7] and [9].
The first phase is characterized by the identification and context analysis of the system. Describing the objective is the first activity; afterwards it is possible to specify the DMUs that will be used in the efficiency model.
Following phase I of the methodology, the fundamental objective of our study is an international comparison of health systems; each DMU will therefore be represented by the health care system of one of these countries. For this reason we need to define what will be considered a health care system. According to [16], "the health care system" is the sum of all the organizations, institutions and resources whose primary purpose is to improve health.
In the second phase the stakeholders are identified and clustered (Figure 3), and then their requirements in the sector are captured, the most important stage of the methodology. It is important to observe that the stakeholders are part of the health system. This determines the approach used by the Stakeholders' Analysis methodology, which is based on two questions: what do stakeholders receive from the health system (expectations), and what do stakeholders supply to the system (responsibilities)? To identify the stakeholders of the health system, we used a guidance list provided by [3], which helped us run a brainstorming session; after the brainstorming we interviewed health sector experts.
Fig. 3. Stakeholder clusters, including investors, government, physicians, managers, regulatory authorities, nurses, voluntaries, technology, scientists, laboratorians and pharmacists (people who work in a laboratory, perform examinations or tests, or prepare medicines), and beneficiaries (people who receive medicines and other benefits as a result of the health care system)
To apply phase three, we chose two clusters of stakeholders, the policy makers and the beneficiaries (Figure 4). The variables are found by answering the two questions above: the inputs tend to come from the stakeholders' responsibilities and the outputs from their expectations.
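These two questions map directly onto data: each stakeholder cluster carries responsibilities (candidate inputs) and expectations (candidate outputs). A minimal sketch follows, with invented field names and only one example indicator per list, both taken from the paper's own examples:

```python
from dataclasses import dataclass, field

@dataclass
class Stakeholder:
    name: str
    # What they supply to the system -> candidate model inputs.
    responsibilities: list[str] = field(default_factory=list)
    # What they receive from the system -> candidate model outputs.
    expectations: list[str] = field(default_factory=list)

stakeholders = [
    Stakeholder("policy makers",
                responsibilities=["total expenditure on health"],
                expectations=["% population covered by health programs"]),
    Stakeholder("beneficiaries",
                responsibilities=["payment of taxes"],
                expectations=["life expectancy"]),
]

# Candidate efficiency-model variables, deduplicated across clusters.
inputs = sorted({r for s in stakeholders for r in s.responsibilities})
outputs = sorted({e for s in stakeholders for e in s.expectations})
```

Deduplicating across clusters matters because the same indicator (e.g. the number of laws and national programs) can appear under more than one stakeholder.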
Fig. 4. Requirements and indicators of the two chosen stakeholder clusters

Policy makers
  Responsibilities (indicators as possible inputs):
  1. Promoting conditions for the protection and recovery of the population's health: basic infrastructure (number of professionals, hospitals, medicines); % coverage of sanitation services and garbage collection; % of population in urban and rural areas.
  2. Promoting laws and programs aiming at improvements in health: number of laws and national programs (prevention, immunization, vaccination).
  3. Financial accounting of the health system: total expenditure on health (as % of GDP, per capita, government, private); number of beneficiaries of private and public plans.
  Expectations (indicators as possible outputs):
  1. Supplying health for the whole population with quality and fairness: infant mortality; life expectancy; % of population covered by health programs.

Beneficiaries
  Responsibilities (indicators as possible inputs):
  1. Payment of taxes: the beneficiaries' contribution to health expenses; expenses with taxes.
  2. Respect for the law.
  3. Awareness of the health programs: number of laws and national programs (prevention, immunization, vaccination).
  Expectations (indicators as possible outputs):
  1. To have a long and healthy life: life expectancy.
  2. Basic infrastructure: numbers of clinicians, laboratorians, pharmacists and hospital beds.
  3. Indicators of quality.
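Once variables like these are selected, their role in an efficiency model can be illustrated with a deliberately simplified ratio model: each DMU scores its weighted outputs over its weighted inputs, normalized against the best performer. The figures and weights below are made up for illustration; real comparisons, as in the studies cited here, use full DEA formulations rather than this single fixed weighting:

```python
# Each DMU (a country's health care system) carries stakeholder-derived
# inputs (from responsibilities) and outputs (from expectations).
DMUS = {
    "A": {"inputs": {"expenditure_pct_gdp": 8.0},  "outputs": {"life_expectancy": 80}},
    "B": {"inputs": {"expenditure_pct_gdp": 10.0}, "outputs": {"life_expectancy": 78}},
    "C": {"inputs": {"expenditure_pct_gdp": 6.0},  "outputs": {"life_expectancy": 75}},
}

def ratio(dmu, in_w, out_w):
    """Weighted outputs over weighted inputs for one DMU."""
    o = sum(out_w[k] * v for k, v in dmu["outputs"].items())
    i = sum(in_w[k] * v for k, v in dmu["inputs"].items())
    return o / i

def efficiencies(dmus, in_w, out_w):
    """Normalize each ratio by the best one: the frontier DMU scores 1.0."""
    raw = {name: ratio(d, in_w, out_w) for name, d in dmus.items()}
    best = max(raw.values())
    return {name: r / best for name, r in raw.items()}

eff = efficiencies(DMUS, {"expenditure_pct_gdp": 1.0}, {"life_expectancy": 1.0})
```

Here DMU "C" defines the frontier (most output per unit of input), so it scores 1.0 and the others score below it.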
Table 2 compares the variables used in two other efficiency analysis studies with the variables found through our methodology. As can be seen, several approaches can be used for efficiency analysis, for example historical, monetary or quantitative approaches. The stakeholder approach, however, identifies different variables, based on the requirements of the stakeholders. Through inputs such as "% sanitation coverage" and "average years of schooling", the methodology captures the complexity of evaluating the health sector.
[3] Hull, E., Jackson, K., Dick, J. Requirements Engineering. USA: Springer, 2004.
[4] Jacobs, R., Smith, P.C., Street, A. Measuring Efficiency in Health Care: Analytic Techniques and Health Policy. Cambridge, 2006.
[5] Slack, N. et al. Administração da produção. São Paulo: Editora Atlas, 1997.
[6] Hurst, J., Hughes, M.J. Performance measurement and performance management in OECD health systems. OECD Labour Market and Social Policy, Occasional Papers n. 47, pp. 1-60, 2001. <www1.oecd.org>.
[7] Liu, C., Ferguson, B., Laporte, A. Ranking the Health System Efficiency among Canadian Provinces and American States, 2006.
[8] Loureiro, G. A Systems Engineering and Concurrent Engineering Framework for the Integrated Development of Complex Products. Doctoral thesis, Department of Manufacturing Engineering, Loughborough University, 1999.
[9] Marujo, E.C., Martins, C.B., Saito, C., Pires, C.C. Saúde suplementar frente às demandas de um mundo em transformação. IESS, Volume 1, 2006.
[10] Peacock, S., Chan, C., Mangolini, M., Johansen, D. Techniques for Measuring Efficiency in Health Services. Productivity Commission Staff Working Paper, July 2001.
[11] Semali, I. Understanding stakeholders' roles in the health sector reform process in Tanzania: the case of decentralizing the immunization program. PhD thesis, University of Basel, Basel, Switzerland, 2004.
Enterprise Integration for Value Creation

Aravind Betha¹
Abstract. In today's competitive and dynamic business environment, the complexity of the technology applied in industry is increasing, and a new kind of system is required for industries to maintain their competitive advantage. Increasingly, successful business leaders recognize that the integration of management, organization and facilities is the key to inspiring organizational performance and value creation.
Three primary resources, people, place and tools, must be integrated as a coherent whole and aligned to support a robust strategy. The new frontier of knowledge-worker effectiveness lies in integrating the design and implementation of these three keys. Industries have begun to integrate their operations along the value chain of the products they design, produce or sustain, and value creation is one of the important tasks in integration. Meeting technical performance, cost and schedule goals effectively and efficiently is a serious challenge, and the process of enterprise integration can achieve this target. Enterprise integration is the process of linking applications, and creating linkage between the different sources is an important aspect of it; information is considered the most important factor in implementing integration in the enterprise. A second step takes place in close interaction between the customer and the supplier: the customer is integrated into the value creation of the supplier. Value is created mutually among the actors at different levels. Customer integration can be defined as a form of industrial value creation in which consumers take part in activities and processes that used to be the domain of the companies.
The current practice of enterprise architecting has contributed significantly to creating and sustaining modern enterprises. However, the current field is not a sufficient approach for the enterprises of this new century; a broader and more holistic approach can be achieved by drawing on the emerging systems-architecting field.
The objective of this paper is to set out a framework for value generation in the enterprise based on a strong integration of the customer. The main part of the paper explores customer integration.
¹ Department of Mechanical Engineering, University of Louisiana at Lafayette
650 B.Aravind, S.Dwivedi
1. Introduction
Companies have to adopt strategies that embrace both cost efficiency and a closer response to customers' needs. The consumer is regarded as a partner in value creation: the customer is integrated into the value creation of the supplier, and customer-related value added is produced at the information level. Industrial value production is most often conceptualized in terms of the value chain. In this concept, value creation is sequential: value is added from one step to the next, the customer is not part of the value chain, and value is considered only in the transaction between customer and producer. Large-scale projects often have high complexity, significant technical risks and a large number of diverse stakeholders; this environment is challenging for effective and efficient execution. The objective of this discussion is to set out a framework for value generation in the enterprise based on a strong integration of the customer.
2 Enterprise
3 Integration
The term "integration" refers to bringing two entities together in such a way that unites and coordinates not only their computing resources but also their strategies, processes and organization, so that the integrated enterprise behaves as a coherent entity. Integration plays an important role for enterprise networks, where it can quickly multiply in several directions in the face of growing technological as well as organizational complexity. Enterprise integration deals with the accelerating rate of technological change [1].
Specify value: this is the first stage in applying lean integration. The basic task is to generate value for the product; the value specification is generally done by the customer, through the "pull" system [2].
Identify the value stream: products require the process to be streamed, and this streaming of information, or of the process cycle, is a principle of lean. The end-to-end linked resources are mapped, and inputs and outputs are identified in order to eliminate waste [2].
Flow continuously: the process should flow continuously; by eliminating waste in the process, the value-creating steps flow [2].
Pull system: the customer determines the value of a product. The customer's pull cascades down to the lowest-level supplier, enabling just-in-time production [2].
Pursue perfection: a process is perfected through gradual improvement, applicable to any product; achieving perfection requires continuous modification [2].
The first level is free-market coordination: implicit coordination exists between enterprises in a free market. The second level is cooperation: when two enterprises cooperate, they communicate directly and identify divisions of labor and desired directions and outcomes. The third level is collaboration: collaborating enterprises begin to exchange sensitive information such as performance metrics, long-term strategy and process data [1,4].
(Figure: the three levels of integration, from the first level, free-market coordination, through the second level, cooperation, to the third level, collaboration.)
Integration efforts have proven to be fraught with many barriers; these barriers have held back large gains, and integration has often proven ineffective [1]. Our aim is to examine the most common barriers to integration across enterprises and to identify best practices and strategies for integration that mitigate the observed barriers.
5 Value Creation
(Figure: identify the program, the product and the stakeholders; customer value is determined by price, schedule, product cost and quality.)
Four stages make up the basic life cycle of all limited-duration enterprise networks [1]. The first major stage is creation, the stage most critical to the overall success of the network; its key activities include defining many crucial aspects of business, technology and organizational strategy. The second major stage is operation; its processes include secure data exchange and information sharing, order management, dynamic planning and scheduling, and task management and coordination. The third stage is evolution, which handles exceptions to routine operation, such as a change in the environment, a change of network membership, or other events that necessitate a change of course and restructuring of the network. The final life-cycle stage is dissolution: when an enterprise network has reached the end of its useful life, either by completing its goals or by the determination of a network collaborator, it must dissolve.
(Figure: the four life-cycle stages, creation, operation, evolution and dissolution.)
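The four stages described above form a strict progression, which can be sketched as a small state machine. This is an illustrative model only; the stage names come from the text, while the transition rules and class names are assumptions for the sketch:

```python
from enum import Enum, auto

class Stage(Enum):
    CREATION = auto()
    OPERATION = auto()
    EVOLUTION = auto()
    DISSOLUTION = auto()

# Allowed transitions: operation and evolution alternate as exceptions
# arise and are resolved; dissolution is terminal.
TRANSITIONS = {
    Stage.CREATION: {Stage.OPERATION},
    Stage.OPERATION: {Stage.EVOLUTION, Stage.DISSOLUTION},
    Stage.EVOLUTION: {Stage.OPERATION, Stage.DISSOLUTION},
    Stage.DISSOLUTION: set(),
}

class EnterpriseNetwork:
    def __init__(self):
        self.stage = Stage.CREATION

    def advance(self, to: Stage):
        if to not in TRANSITIONS[self.stage]:
            raise ValueError(f"cannot go from {self.stage.name} to {to.name}")
        self.stage = to

net = EnterpriseNetwork()
net.advance(Stage.OPERATION)
net.advance(Stage.EVOLUTION)    # e.g. a membership change is handled
net.advance(Stage.OPERATION)
net.advance(Stage.DISSOLUTION)  # goals completed, the network dissolves
```

Encoding the transitions as data makes the life-cycle constraint explicit: a network cannot, for instance, jump from creation straight to dissolution in this model.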
6. Stakeholders
A stakeholder is "any group or individual who can affect or is affected by the achievement of the organization's objectives" [8]. Shareholders provide capital and expect a positive return on their investment; this is the center of value creation [9].
6.1. Organization
Organizations are, generically, large numbers of people unified by common goals. Programs generally involve teams from many different organizations, with large subunits of organizations coordinating between the different departments [4,10].
7. Conclusion
Integration of enterprises is the key element for the success of future business structures. The basic goal is to identify opportunities for integration and to establish strategies to overcome the enterprise's barriers to integration. The integrated network should represent a sound architecture; this strategy has been successful in large part [10].
The identification of value creation is an important factor in enterprise integration. The aim is to create the right products with the required value, which requires an efficient life cycle and enterprise integration. The customer is a part of the integration in the value creation of the supplier [11].
References:
New Products Success in Small Brazilian Medical and Hospital Firms
José Carlos de Toledo a,1, Sergio Luis da Silva a, Sabrina Medina de Paula b, Glauco
Henrique de Sousa Mendes b and Daniel Jugend b
a Professor, Federal University of São Carlos, Brazil.
b Postgraduate student, Federal University of São Carlos, Brazil.
Keywords. New product development management, high-technology small firms, success
factors, small Brazilian medical-hospital firms.
1. Introduction
In new product development (NPD) management, many strategies, methodologies
and tools are employed, aiming at improving efficiency, quality, speed and
innovation indicators. Accordingly, the most innovative companies seek to adopt
strategies and structures that combine operational efficiency and high
innovation capacity in NPD in order to continually develop new products.
1 Titular Professor (Department of Production Engineering), Federal University of São
Carlos (Brazil), Washington Luís Road, Km 235, São Carlos, São Paulo, CEP 13565-905,
Brazil; Tel: +55 (0) 3351 8236; Fax: +55 (0) 3351 8240; Email: toledo@dep.ufscar.br;
http://www.dep.ufscar.br
658 J.C. Toledo, S.L. Silva, S.M. Paula, G.H.S. Mendes, D. Jugend
Various authors [3, 17, 19] identify many product characteristics that propel
products to success: low cost, high quality, superior performance and unique
attributes. Also acknowledged is the need to integrate the product development
strategy with other business strategies. Technology sources can also contribute
to the success or failure of a new project, since they demand acquisition,
adaptation and management capacities from technology-based companies [17].
Competency levels of the areas involved in NPD have been correlated with the
success and failure of new products [3]. In this research, technical competency was
defined as the competency and capacity to accurately execute activities, which
directly affects the quality of the tasks that make up product development.
The main organizational aspects of NPD mentioned in the literature include the
organization methods of project development, the degree of integration among the
functional areas, the structure of NPD, and the characteristics of the key
individuals involved in executing the project [11]. However, the foremost factors
that affect NPD performance are the project team, the project leader, the
manager's role and the involvement of suppliers and clients during the execution
of new product projects [2]. With regard to the execution of NPD activities, it is
recommended that attention be given to pre-development, especially when conducting
technical and market studies and feasibility analysis [5]. Similarly, quality in
the activities of idea generation and analysis, technical development and market
presentation is very important [9].
3. Research Method
The Brazilian industry of medical-hospital equipment is composed of 374
companies [1]. Based on criteria such as size (small companies with less than 100
employees and mid-sized companies with more than 100 and less than 500
employees), operation segment (equipment manufacturers), geographical
localization (the State of São Paulo) and their own active NPD, totalizing 52
companies (in the State of São Paulo) that fit in the desired profile. From this
number, 30 companies agreed to participate in the research.
For data collection, a questionnaire composed of 64 closed-ended questions was
applied. Companies were asked to choose two development projects that resulted
in new products, one considered a success and the other considered unsuccessful.
All answers were to be grounded in the history, facts and situations experienced
at the time of project execution. This procedure resulted in a sample of 30
companies and 49 new product projects, of which 30 were considered successful
and 19 unsuccessful.
In the data treatment stage, various statistical techniques were applied. For the
responses related to new product projects, the association of each investigated
variable with the result of the product project (success or non-success) was
first measured by means of the respective contingency coefficients. The aim was
to determine which variables, considered in isolation, elucidated the new
product's success or failure. The individual variables were then reduced and
summarized using factor analysis techniques, more specifically principal
component analysis.
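As a rough illustration of the association measure described above, the sketch below computes Pearson's contingency coefficient, C = sqrt(chi2 / (chi2 + N)), for a practice (adopted / not adopted) cross-tabulated against project result (success / failure). The coefficient choice and the counts are illustrative assumptions, not data from the study.

```python
import math

def contingency_coefficient(table):
    """Pearson's contingency coefficient C for an r x c table of observed counts."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / n  # expected count under independence
            chi2 += (obs - exp) ** 2 / exp
    return math.sqrt(chi2 / (chi2 + n))

# Hypothetical counts for 49 projects: rows = practice adopted / not adopted,
# columns = success / failure (30 successes, 19 failures, as in the sample).
table = [[20, 4], [10, 15]]
print(round(contingency_coefficient(table), 3))
```

A coefficient of zero indicates no association; values in the range reported in Table 1 (roughly 0.2 to 0.6) indicate progressively stronger association with the project result.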
4. Analysis of Results
Each main component illustrated in Table 1 corresponds to a set of isolated
variables (Table 2), which were reduced by applying the multivariate analysis
technique to facilitate data interpretation. The results in Table 1 show the
association coefficients and their respective significance levels (p-values)
between ten main components (critical factors) and the result of the new product
(success or not) for the companies. Table 2 shows the isolated variables
considered significant for the companies investigated.
Table 1: Association between the Main Components and the Result of the New Product

Main Component                     Eigenvalue   % Variance Explained   Association Coefficient
Characteristics of market target      2.21            44.2                 0.630*
NPD proficiency(1)                    2.94            48.3                 0.576*
Integration                           2.70            27.0                 0.534*
Proficiency – other activities        2.89            48.3                 0.484*
Degree of innovation                  1.86            46.0                 0.436**
Project leader competency             4.14            51.9                 0.408**
Product characteristics               1.91            47.9                 0.327 (ns)
Company competency                    2.46            49.0                 0.278**
Organization                          1.52            50.8                 0.208 (ns)
Technology sources                    2.25            32.0                 0.730 (ns)

* significant at p < 0.001; ** significant at p < 0.05; (ns) not significant at p > 0.1.
(1) Proficiency refers to thoroughness, completeness and competency in carrying out these
activities.
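The "Eigenvalue" and "% Variance Explained" columns of Table 1 are related by a simple ratio: each component explains eigenvalue / p of the total variance, where p is the number of standardized variables in the set (e.g. the Integration row, 2.70 over ten variables, gives 27.0%). The sketch below shows the relation in the simplest case of two variables with an assumed correlation r, for which the correlation matrix [[1, r], [r, 1]] has eigenvalues 1 + r and 1 − r.

```python
r = 0.52   # assumed correlation between two standardized variables
p = 2      # number of variables

# Eigenvalues of the 2x2 correlation matrix [[1, r], [r, 1]]
eigenvalues = [1 + r, 1 - r]

# Percentage of total variance explained by each principal component
explained = [ev / p * 100 for ev in eigenvalues]

assert abs(sum(eigenvalues) - p) < 1e-9  # eigenvalues sum to the trace (= p)
print(explained)
```

With r = 0.52 the first component explains 76% of the variance and the second 24%, mirroring how a strongly correlated variable set collapses onto one dominant component.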
It can be observed in Table 1 that three main components were associated with
the success of the new product: characteristics of the market target, proficiency
of NPD activities, and integration of the areas involved in NPD. The successful
projects are those in which the market assessment was well carried out and in
which user requirements were well interpreted into new product specifications.
The first management implication of the research is to guide NPD to the market,
that is, to strategically align it with the needs of the client and the market.
The consequence is that companies need to develop competencies to constantly
understand and assess consumer needs. From the data in Tables 1 and 2 it can be
observed that positioning a new product on superior performance compared to
competitors, together with the capacity to recognize and elucidate consumer
needs, is important for these companies. In the companies studied, marketing is
basically the responsibility of personnel from the commercial areas, who maintain
a close relationship with those responsible for product development.
New Products Success in Small Brazilian Medical and Hospital Firms 661
The need for integration is the second management implication of the research.
Integration in Table 1 shows a strong association with the result of the product.
Through the analysis of the integration-related practices (Table 2) adopted
during the course of the new product projects, it was found that the involvement
of the functional areas is fundamental during the pre-development phase.
5. Final Considerations
This paper analyzed practices adopted during the execution of development
projects and their impact on the results of the new product, in a specific type
of company. The limitation of this exploratory study is that it was carried out
with a small sample of Brazilian companies belonging to only one sector of
technology-based companies.
The four management implications highlighted in this paper reinforce concurrent
engineering principles, which enable companies to continuously innovate.
Concurrent engineering is defined as a systematic approach to the parallel
development of all product life-cycle activities, from initial conception through
design, planning, production and disposal. It encourages right-first-time methods
through cross-functional teamwork and consensus.
Increased involvement of the functional areas in NPD (integration) was observed,
mainly in pre-development activities. When solutions appear in the initial stages
of development, the shortening of the time spent on some procedures becomes more
evident, which helps attenuate or even eliminate errors at more advanced stages
of development and favours an earlier launch of products onto the market. For
small firms, the correct involvement of the functional areas at this stage can
guarantee the rational use of the resources employed in product development.
Notwithstanding, some results are not compatible with the success factors
reported in the literature on critical success factors in NPD. Given that these
are high-technology firms, it was expected that technology acquisition and
transfer processes would be critical for them; however, this hypothesis was not
confirmed by the results of the research. Another issue relates to the type of
organizational arrangement for project development and the success of a new
product. Cross-functional teams are perceived as an important form of
integration, yet the functional approach seems to be more common in the
companies investigated. Nevertheless, the natural behavior found in the small
and mid-sized companies ends up compensating for the potential deficiencies of
such an organizational arrangement.
Especially in environments where the technological level is high, improving
communication among individuals and sectors becomes a determining factor for the
success of the work. The support of the project leader for the success of the
new product was therefore also observed.
Market orientation, concern with the efficiency and effectiveness of NPD
activities, integration of know-how, and leadership are key elements in any model
of product development. NPD must combine technical elements that need to be
planned with natural behaviors that bring diversity to organizations. To these
elements one can add a vision that adapts to contingencies as a way to promote
new configurations of process, structure and resources.
6. References
[1] ABIMO - Associação Brasileira da Indústria de Artigos e Equipamentos Médicos,
Odontológicos, Hospitalares e de Laboratórios. Available at: <http://www.abimo.org.br>.
Accessed on: Feb. 14th 2006.
[2] Brown SL, Eisenhardt KM. Product development - past research, present findings, and
future-directions. Academy of Management Review 1995;20, 2:343-378.
[3] Cooper RG, Kleinschmidt EJ. What makes a new product a winner: success factors at
the project level. The Journal of Product Innovation Management 1987;4:175-189.
[4] Cooper RG, Kleinschmidt EJ. Determinants of timeliness in product development. The
Journal of Product Innovation Management 1994;11:381-396.
[5] Cooper RG, Kleinschmidt EJ. Benchmarking the firm's critical success factors in new
product development. The Journal of Product Innovation Management 1995;12:374-
391.
[6] Ernst H. Success factors of new product development: a review of the empirical
literature. International Journal of Management Reviews 2002;4, 1:1-40.
[7] Fernandes AC, Côrtes MR, Oshi J. Innovation Characteristics of Small and Medium
Sized Technology-Based Firms. In São Paulo, Brazil: A Preliminary Analysis 2000
Proceedings of 4th International Conference of Technology Policy and Innovation;
Curitiba, Brazil, August.
[8] Griffin A. PDMA research on new product development practices: updating trends
and benchmarking best practices. Journal of Product Innovation Management 1997;
14:429-458.
[9] Kahn KB, Barczak G, Moss, R. Perspective: Establishing an NPD best practices
Framework. Journal of Product Innovation Management. 2006; 23:106-116.
[10] Ledwith A. Management of new product development in small electronics firms.
Journal of European Industrial Training 2000; 24:137-148.
[11] Lee, J, Lee, J, Souder WE. Differences of organizational characteristics in new product
development cross-cultural comparison of Korea and US. Technovation 1999;20, 497-
508.
[12] March-chorda I, Gunasekaran A, Lloria-Aramburo B Product development process in
Spanish SMEs: an empirical research. Technovation 2002; 22, 301–312.
[13] Montoya-Weiss M, Calantone R. Determinants of new product performance: a review
and meta-analysis. Journal of Product Innovation Management. 1994;11,397-417.
[14] Pawar KS, Haque B, Barson RJ. Analysing organisational issues in concurrent new
product development. International Journal of Production Economics 2000;67:169-
182.
[15] Pinho, M, Fernandes AC, Cortes MR, Pereira RC, Smolka RB, Calligaris AB et al
Empresa de Base Tecnológica. Relatório de Pesquisa. 2005 São Carlos: UFSCar,
mimeo.
[16] Poolton J, Barclay I. New product development from past research to future
applications. Industrial Marketing Management 1998;27:197-212.
[17] Scott GM. Critical Technology Management Issues of New Product Development in
High Tech Companies. Journal of Product Innovation Management 2000;17, 57-77.
[18] Souder WE, Buisson D, Garret T. Success through customer-driven new product
development: a comparison of US and New Zealand small entrepreneurial high
technology firms. Journal of Product Innovation Management 1997; 14: 459-472.
[19] Souder WE, Yap CM. Factors Influencing New Product Success and Failure in Small
Entrepreneurial High-Technology Electronics Firms. Journal of Product Innovation
Management 1994; 11, 418-432.
Systematic for Increase of the Operational Efficiency
from the Allocation of Resources in Intangible Assets
Claudelino Martins Dias Junior a,1, Osmar Possamai b and Ricardo Luís Rosa Jardim
Gonçalves c
a PhD student, CNPq scholarship, Brazil.
b Professor, Universidade Federal de Santa Catarina - PPGEP.
c Professor, Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa.
Abstract. This article presents a model for managing operational efficiency
through the identification and allocation of resources in organizational
intangible assets. For that purpose, the intangible assets linked to strategic
priority products are identified, considering these products as the ones that
determine the organization's economic sustainability. Concomitantly,
organizational objectives are established that are compatible with the
development of performance indicators linked to the organization's internal
intangible assets, classifying them according to their contribution to reaching
the goals of the manufacturing sections. In addition, the article aims to
establish criteria for the application of resources to the elements that form
the intangible assets considered crucial to maintaining production capacity.
1 Introduction
It can be observed that the need for better use of the organizational assets
employed in the internal context still constitutes an imperative, since these
assets can raise manufacturing performance and adjust it to the strategic goals.
It is assumed that compatibility with these goals can be achieved by
demonstrating better levels of acceptance of the products offered to the
consumer market, and by considering the profit margins arising from improved
efficiency in the execution of production activities at the operational level.
The aim is to identify organizational intangible assets in order to
1 PhD student, CNPq scholarship, Brazil. Universidade Federal de Santa Catarina -
Programa de Pós-Graduação em Engenharia de Produção e Sistemas / Universidade Nova de
Lisboa - Group for Research in Interoperability of Systems. Faculdade de Ciências e
Tecnologia, Quinta da Torre, 2829-516, Caparica, Portugal; Tel.: +351 212948365 or
+351 960355083; e-mails: dias.jr@deps.ufsc.br and cdj@uninova.pt.
666 C.M. Dias Jr.; O. Possamai, R. L. R. Jardim Gonçalves
This Preparation Stage aims to analyze the company's current product portfolio.
In this case, the positioning map adopted by Siemens [4] is used, seeking to
position the products in homogeneous groups and to demonstrate their
representativeness, using the analysis of the determinant factors of market
tendencies and the profit percentage desired for each group.
To determine what the intangible assets are, the concept of [10] is used, which
defines them as the generators, in the organizational context, of research and
development that can effectively represent future industrial or intellectual
property rights, together with the criteria for the normalization of intangible
assets defined by FAS 141 (Financial Accounting Standards) [7]. In a
complementary way, to frame which intangible assets are considered internal to
the company (IIAs), the proposal elaborated by [12] is used, owing to its concise
separability criteria for organizational intangible assets.
that represent the squares that need to be considered in determining the value
of the intangible assets according to the methodology of [9].
The goal is to determine the participation of the IIAs in the manufacture of the
PSPs, focusing on the sectional efficiency of the manufacturing unit. In this
case, the efficiency levels are calculated for the PSPs.
Given that very distinct sectional efficiency levels may be obtained, it is not
possible to establish an average level of efficiency for the sections involved
in the production of the PSPs. This step serves as a base for determining the
manufacturing and section goals (SGs), as described in Stage 3.
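The caveat above — that very distinct section levels should not be averaged — can be sketched as simple bookkeeping. Here efficiency is taken as actual output over standard capacity, with a dispersion check before reporting a mean; the function names, figures and the 0.15 spread threshold are illustrative assumptions, not part of the published model.

```python
from statistics import mean, pstdev

def efficiency(actual_output: float, standard_capacity: float) -> float:
    """Operational efficiency of a section for a PSP (1.0 = at capacity)."""
    return actual_output / standard_capacity

def averageable(levels, max_spread=0.15):
    """Only report a mean efficiency when the section levels are homogeneous."""
    return pstdev(levels) <= max_spread

# Hypothetical PSP produced across three manufacturing sections
levels = [efficiency(80, 100), efficiency(45, 100), efficiency(95, 100)]
print(mean(levels), averageable(levels))
```

In this example the spread across sections is large, so the check refuses an average and the per-section levels would instead feed the goal-setting of Stage 3.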
3.3 Stage 3 – Establishing and ranking the manufacturing and section goals by
hierarchy
Considering what was presented in Step 4, this stage establishes and ranks the
manufacturing goals that are directly linked to maintaining and improving the
operational efficiency levels in the effective use of the internal tangible and
intangible assets related to the PSPs, as described in Step 5.
3.3.1 Step 6 – Determining the manufacturing goals for the production of the
PSPs
This step determines and ranks the manufacturing goals in order to reach
operational efficiency levels close to 1 (one) for the PSPs with related
IIAs.
The manufacturing goals (MGs) are ranked in decreasing order of the score
obtained. When scores are equal, the hierarchy is established by the relation of
the goal to the manufacturing section in which the PSP obtained the lowest
efficiency level.
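The ranking rule above can be sketched as a two-key sort: decreasing score, with ties broken in favour of the goal tied to the section where the PSP showed the lowest efficiency. The goal names, scores and efficiencies below are hypothetical.

```python
goals = [
    {"goal": "reduce rework",    "score": 8, "section_efficiency": 0.62},
    {"goal": "cut setup time",   "score": 8, "section_efficiency": 0.87},
    {"goal": "raise throughput", "score": 9, "section_efficiency": 0.75},
]

# Primary key: score, descending. Secondary key (tie-break): the efficiency of
# the related section, ascending, so the weakest section's goal comes first.
ranked = sorted(goals, key=lambda g: (-g["score"], g["section_efficiency"]))
print([g["goal"] for g in ranked])
```

Here "raise throughput" leads on score, and the two score-8 goals are ordered by their sections' efficiency, putting "reduce rework" (0.62) ahead of "cut setup time" (0.87).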
This stage aims at proposing PIs related to the IIAs. For that, the occurrence
of different indicators for each kind of PSP must be considered, bearing in mind
also the different ways in which the same IIA is used by the sections involved.
3.4.1 Step 8 – Proposing indicators related to the IIAs for the manufacturing
sections
To define the relations of the PIs with the section goals in the production of
the PSPs, considerations obtained from [15] for determining manufacturing
performance indicators are used, with adaptations, having adopted as a reference
the following question: "Do the manufacturing performance indicators have any
relation with the IIAs?"
3.4.2 Step 9 – Establishing importance levels for the Performance Indicators for
the Internal Intangible Assets (PIIIAs)
The procedure starts from identifying the importance level of each PIIIA, based
on the "Source of Relations between the Manufacturing Goals and the Performance
Indicators Related to Flexibility" [15]. In parallel, the importance level of
the indicator to be adopted by each section is determined, using the information
contained in Step 7.
In this stage, resources are allocated to the IIAs considered critical to
raising the efficiency levels in the manufacturing context, so that these IIAs
promote a more rational use of the other categories of tangible assets. For
this, a prioritization order must be established for the use of the IIAs,
matching the needs indicated by the manufacturing managers.
3.5.2 Step 12 – Prioritizing the allocation of resources to the elements that
form the critical IIAs
With the contribution margins calculated for each element that composes the IIAs
considered critical to manufacturing (by section), the aim is to
Systematic for Increase of the Operational Efficiency 671
4 Development Context
Nowadays, prioritizing the product portfolio is a very complex task, owing to
differing knowledge about concepts of value among the different market segments.
Working with the perception that intangible assets can be used as
competitiveness criteria, manufacturing units need knowledge about intangible
assets in order to pursue better levels of operational efficiency. In this way,
the model described can be applied in the manufacturing context. The intention
is to investigate how significant the improvement of intangible assets is for
better levels of internal efficiency.
6 Conclusions
This study developed a model for managing operational efficiency through the
allocation of resources in intangible assets in the context of product
manufacturing (goods and services). Furthermore, criteria were established for
the allocation of resources to the internal intangible assets considered
critical to the production activity, incorporating the organization's aim of
knowing the market preferences to be served, as well as taking advantage of
opportunities arising from changes in general consumption patterns and from
falls or rises in profit that result in significant oscillations in purchasing
power.
In this way, through the model's application, a superior performance of the
activities connected to the manufacturing segment is expected, through the
targeting of resources to intangible assets.
7 References
1. Boyer, K. K. and Pagell, M. Measurement issues in empirical research: improving
measures of operations strategy and advanced manufacturing technology. Journal of
Operations Management, 2000, n. 18, p. 361-374.
2. Brynjolfsson, E. et al. Intangible Assets: how the interaction of computers and
organization structure affects stock market valuations. Paper of Shinkyu Yang, New
York University and Erik Brynjolfsson, MIT Sloan School of Management, May, 2001.
3. Butler, J.; Cameron, H. and Miles, I. Feasibility study concerning a programme for
research into the measurement and valuation of intangible assets, carried out for the
Department of Trade and Industry. CRIC and Policy Research in Engineering, Science
and Technology (PREST), University of Manchester, University Precinct Centre,
Oxford Road, Manchester M13 9QH, England, UK, April, 2000.
4. Cassapo, F. Siemens – Case. Available at:
<http://www.kmol.online.pt/casos/Siemens/caso_3.html>. Accessed in: jan. 20th, 2005.
5. Copeland, T.; Koller, T. and Murrin, J. Valuation, university edition: measuring and
managing the value of companies. University Edition, 2000.
6. Dias Jr., C. M. Proposta de Detecção de Intangíveis do Consumidor como forma de
Priorizar os Investimentos em Ativos Intangíveis da Organização. Master's thesis in
Engenharia de Produção - Programa de Pós-graduação em Engenharia de Produção e
Sistemas, Universidade Federal de Santa Catarina. Florianópolis, 2003.
7. Congrès International De Coûts. Goodwill – De la rouque. Available at:
<http://www.candidomendestijuca.edu.br/artigos_professores/andrea/Goodwill%20-
%20De%20la%20reoque%20-%201o%20lugar.doc> Léon, France, 2001.
8. França, R. B. Avaliação de indicadores de ativos intangíveis. Doctoral Dissertation of
PPGEP/UFSC. Florianópolis, 2004.
9. Hoss, O. Modelo de avaliação de ativos intangíveis para instituições de ensino superior
privado. Doctoral Dissertation of PPGEP/UFSC. Florianópolis, 2003.
10. Iudícibus, S. Teoria da contabilidade. 5. ed. São Paulo: Atlas, 1997. 330 p.
11. Oliveira, A. B. S. Contribuição de modelo decisório para intangíveis por atividade –
uma abordagem de gestão econômica. Doctoral Dissertation of the Departamento de
Contabilidade e Atuária - USP, São Paulo, 1999.
12. Peña, D.N.; Ruiz, V.R.L. El capital intelectual: valoración y medición. Spain:
Financial Times-Prentice Hall, 2002, 246p.
13. Reis, E. A. Valor da empresa e resultado econômico em ambientes de múltiplos ativos
intangíveis: uma abordagem de gestão econômica. Doctoral Dissertation of the
Departamento de Contabilidade e Atuária - USP. São Paulo, 2002.
14. Silva, C. E. S. Método para avaliação do desempenho do processo de desenvolvimento
de produtos. Doctoral Dissertation of Programa de Pós-graduação em Engenharia de
Produção e Sistemas, Universidade Federal de Santa Catarina. Florianópolis, 2001.
15. Teixeira, R .N .G. Desenvolvimento de um modelo para o planejamento de
investimentos em flexibilidade de manufatura em situações de mudanças estratégicas
da organização. Doctoral Dissertation of PPGEP/UFSC. Florianópolis, 2005.
Geotraceability and life cycle assessment in
environmental life cycle management: towards
sustainability
Aldo Ometto a,1, Mateus Batistella b, Américo Guelere Filho c, Gérard Chuzel d and
Alain Viau e
a Professor (Engineering), Engineering School of São Carlos, University of São
Paulo, Institute Factory of Millennium, Brazil.
b Researcher (Environmental Science), Embrapa Satellite Monitoring, Brazil.
c Researcher (Engineering), Engineering School of São Carlos, University of São
Paulo, Institute Factory of Millennium, Brazil.
d Researcher (Agricultural Science), Cemagref, France.
e Professor (Geographic Science), University of Laval, Canada.
1 Professor (Engineering), Engineering School of São Carlos, University of São Paulo,
Institute Factory of Millennium, Brazil. Av. Trabalhador São-Carlense, 400, CEP
13566-590, São Carlos/SP, Brazil; Tel: +55 (16) 3373 8608; Fax: +55 (16) 3373 8235;
Email: aometto@sc.usp.br; http://tigre.prod.eesc.usp.br/producao/docentes.htm
674 A. Ometto, M. Batistella, A. G. Filho, G. Chuzel and A Viau
1 Introduction
Sustainability has many definitions, but the basic principles and concepts
remain constant: balancing economic aspects, protection of the environment and
social responsibility, so that together they lead to an improved quality of life
for ourselves and future generations. "This concept of sustainability encompasses
ideas, aspirations, and values that continue to inspire public and private
organizations to become better stewards of the environment and promote positive
economic growth and social objectives. The principles of sustainability can
stimulate technological innovation, advance competitiveness and improve our
quality of life" [1].
The environmental, social, and economic impacts of the products have to be
analyzed according to their life cycles. Product life cycle thinking is important in
the path towards sustainability by expanding the focus from the production process
to the product life cycle (figure 1).
2 Environmental Management
Environmental management can be defined as the management of human activities
so that natural resources are used adequately to meet human needs and the
environment’s continuing capacity to provide those resources is sustained [4].
This approach is illustrated in figure 2, which shows the phases required to
achieve the environmental viability of an activity.
(Figure 2: phases of environmental management: environmental characterization,
human activity, environmental analysis, and environmental mitigation.)
3 Geotraceability
Geotraceability is the ability to describe the history, use and location of a
product, allowing tracing and tracking from its production to its consumption.
Thus, it is necessary to retrieve and store information about the
characteristics and the history of the product (tracing), as well as to follow
its real-time location (tracking), in particular for recall operations in crisis
situations, such as avian influenza.
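The two halves of the definition above can be sketched as a minimal data structure: tracing is an append-only history of what happened to the product and where, tracking is its latest known position. The class and field names are our own shorthand, not a published schema.

```python
from dataclasses import dataclass, field

@dataclass
class TraceEvent:
    timestamp: str    # ISO 8601 date/time
    location: tuple   # (latitude, longitude)
    description: str

@dataclass
class ProductTrace:
    product_id: str
    events: list = field(default_factory=list)

    def record(self, event: TraceEvent) -> None:
        self.events.append(event)   # tracing: keep the whole history

    def current_location(self):
        return self.events[-1].location if self.events else None  # tracking

# Hypothetical beef lot moving through the chain
batch = ProductTrace("beef-lot-0042")
batch.record(TraceEvent("2007-03-01T08:00:00", (-20.45, -54.62), "slaughterhouse, Campo Grande"))
batch.record(TraceEvent("2007-03-04T14:30:00", (-23.55, -46.63), "distribution centre, São Paulo"))
print(batch.current_location())
```

In a recall, the full `events` list answers the tracing question (where has this lot been?), while `current_location` answers the tracking question (where is it now?).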
The importance of such tools is evident, as they integrate a spatial component
into the product life cycle, adding value to market products, to certification
and labeling in the retail business, and to communication with consumers, with
the potential to support future policies for the sector.
Geotraceability may be used to increase consumers' confidence in the products
they acquire, through knowledge of their trajectory, safety and quality from
production to consumption. The process is carried out through standard spatial
indicators, in conformity with defined norms, to integrate information from
various sources, qualities and scales of observation. Much remains to be
improved in terms of standardization, but efforts have been made in several
countries. All these issues are associated with the availability of information
and knowledge about the product chain.
Some food chains are particularly important due to the emerging sanitary risks
attached to international commercial relations [6]. For obvious reasons, beef is
among the most important products to be tracked and traced using a spatially
explicit system.
In Brazil, various sectors are interested in such tools, as they may become
crucial in the near future. Recently, a specific support action proposed by a
partnership among Cemagref (France), University of Laval (Canada), Embrapa
(Brazil), and Cirad (France) was funded by the European Commission. Its goal is
to develop an operational management and geodecisional prototype to track and
trace agricultural production, with a major focus on the beef chain (figure 3). The
prototype will be implemented in Campo Grande, where Embrapa Beef Cattle is
located.
Geotraceability and life cycle assessment in environmental life cycle management 677
(Figure: environmental life cycle management, integrating environmental
characterization, human activity, geotraceability, LCA and environmental
mitigation.)
6 Conclusion
Geotraceability and LCA are important tools with the potential to introduce, in
a practical way, life cycle thinking into environmental management (i.e., ELCM).
The adoption of geotraceability systems and LCA can enhance product safety
and quality, providing industries, consumers and all stakeholders with a level
of information compatible with the demands of a global market and with the need
for effective environmental management, taking into account the environmental
characteristics and the product life cycle.
7 References
[1] Environmental Protection Agency – EPA – US. Available at:
<http://www.epa.gov/sustainability/>. Accessed on: Jan. 15th 2007.
[2] Franke, C. Product Life Cycle. CRI. TUBerlin: 2004.
[3] Souza, M.P. Instrumentos de Gestão Ambiental: teoria e prática. Riani: 2000
[4] Tolba, M.K. Development without destruction: involving environmental perceptions.
Dublin, Ireland: Tycooly International Publishing Ltd. 1982
[5] United Nations Environment Programme - UNEP. Evaluation of Environmental
Impacts of Life Cycle Assessment. Available at: <http://www.unep.org/lci/>. Accessed
on: Jan. 25th 2007. 2003.
[6] Vinholis, M. B., Azevedo, P. F., 2000, Efeito da rastreabilidade no sistema
agroindustrial da carne bovina brasileira, World Congress of Rural Sociology, Rio de
Janeiro, v.1, pp.1-14.
[7] Wenzel, H.; Hauschild, M. And Alting, L. Environmental Assessment of Products.
1997
Enterprise Architecture for Innovation
Experimentation of an Enterprise Architecture in
aerospace electrical engineering process
Xavier Rakotomamonjy a,1
a Collaborative Systems, EADS Innovation Works (Suresnes), France
1 Collaborative Systems, EADS Innovation Works, 12, rue Pasteur, BP 76, 92152 Suresnes
Cedex, France; Tel: +33 (0) 146973755; Fax: +33 (0) 146973008; Email:
xavier.rakotomamonjy@eads.net; http://www.eads.net
684 X.Rakotomamonjy
2 Enterprise Architecture
Current economic competitiveness forces enterprises to streamline their
manufacturing and management operations to be more productive and efficient.
Information technology potentially extends business capabilities by enabling
collaborative practices on the virtual product [2] and shared spaces through
Product Data Management solutions [3]. Enterprise Architecture is a discipline
that considers the enterprise as a system.
An architecture intends to capture the components of a system, their
organization, and their relation to the environment. This definition is provided in
IEEE Std 1471-2000 [4], which gives a frame of reference and a set of definitions
targeting architectural description.
The standard defines the model entity. In our case, models encompass
entity-relationship (ER) grammars such as UML class diagrams [5, 21], process
grammars such as BPMN (Business Process Modeling Notation) [6], and data flow
diagram (DFD) grammars such as Gane & Sarson diagrams [7].
Stakeholders and their associated concerns [8] are expressed in viewpoints. A
view may consist of one or more models conforming to a viewpoint. The standard
does not provide recommendations for the choice of viewpoints [9]; external
reference viewpoints are provided by many frameworks [10]. For the purposes of
this study we reviewed several architectures: the Zachman Framework [11], the
Department of Defense Architecture Framework System View [12], the 4+1 view
model [13], and TOGAF [14]. We focused on Zachman as a framework to think
about and organize viewpoints, and on the DoD AF for a ready-to-use system
definition view.
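As an illustrative sketch only, the IEEE Std 1471-2000 vocabulary used above (stakeholders holding concerns, viewpoints framing concerns, views aggregating models) can be expressed as a minimal data model. All class and attribute names here are our own, not taken from the standard or from the EADS repository:

```python
from dataclasses import dataclass, field

# Minimal sketch of the IEEE Std 1471-2000 vocabulary: a stakeholder holds
# concerns, a viewpoint frames concerns, and a view aggregates models that
# conform to one viewpoint. Names are illustrative assumptions.

@dataclass
class Stakeholder:
    name: str
    concerns: list[str]

@dataclass
class Viewpoint:
    name: str
    framed_concerns: list[str]   # concerns this viewpoint addresses
    grammars: list[str]          # e.g. "UML class diagram", "BPMN", "DFD"

@dataclass
class View:
    viewpoint: Viewpoint
    models: list[str] = field(default_factory=list)  # model identifiers

def covered_concerns(stakeholder: Stakeholder, views: list[View]) -> set[str]:
    """Concerns of the stakeholder that at least one view's viewpoint frames."""
    framed = {c for v in views for c in v.viewpoint.framed_concerns}
    return set(stakeholder.concerns) & framed

# Hypothetical example: an electrical-design stakeholder and a process viewpoint.
designer = Stakeholder("electrical designer", ["process flow", "data access"])
process_vp = Viewpoint("process", ["process flow"], ["BPMN"])
views = [View(process_vp, models=["electrical-design-process.bpmn"])]
print(covered_concerns(designer, views))  # {'process flow'}
```

Checking which stakeholder concerns remain unframed is one simple way to detect missing viewpoints in a repository built along these lines.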
The first diagram is a big picture of the electrical design process. The second
diagram shows the data repository accessed by the electrical department after the
“program commission” approves the design solution.
The electrical business process is described in BPMN; the association between
processes and actors is expressed by horizontal pools in the BPMN diagram.
6 Experimentation results
The electrical architecture was implemented in the System Architect tool from
Telelogic [27]. Version 10.1 of the tool offers extensive customization
capabilities and functions to extend grammars and entity relations, and a Visual
Basic module enables querying the database and checking constraints. Many
grammars are available in the standard version; the high expressivity of the tool
requires a well-defined methodology in order to limit the scope of exploitation.
Although the demonstrator was not used at the operational level, the electrical
repository was tested with a subset of stakeholders from the business and IT
departments. Several viewpoints were implemented, but the architecture was not
used in a configured multi-user project mode. We can nevertheless draw general
conclusions from the project.
In general, modelling happens at a high level and deals with enterprise
processes. The intention is to capture key performance indicators and to provide
guidance on process compliance with the business scorecard; further detailed
models describing how a process is performed are not covered.
Our architecture was intended to describe the electrical operational level. This
operational level describes “how to do” rather than “what to do”. Processes, but
also know-how and business data, are within the scope of the architecture. At this
level, a huge amount of information is processed by actors in day-to-day tasks.
Processes are more flexible, and working hypotheses change rapidly during
exchanges and collaborative engineering work. Consolidated criteria for process
evaluation depend on real data and a countless number of situations.
In our case, interviews provided the actors’ knowledge about their activities.
This is a good input for creating diagrams, identifying concepts and building
scenarios. The numerous aspects of engineering activities imply numerous artifacts
and grammars. The exploitation and creation of diagrams do not belong to the
operational actors’ domain of competence, so we had to organize learning events to
help users and to fill the gap. We noticed that the more the users’ interests are
taken into account, the more learning time is required.
Displays and view extractions play an important part in user adoption. The
development of view extraction required customization outside the project scope.
As in the methodology used in TOVE [26] for ontology creation, we put effort
into queries and user scenarios to handle complexity. Ideally, each object and each
property should be the consequence of a user-oriented scenario. Accordingly, in
order to demonstrate how users can gain confidence in the model, our testing was
based on scenarios. We believe that effort in interface development cannot be
avoided for exploitation. The current exploitation was to develop the architecture
as a communication support, but also as a knowledge base for the extraction of
insightful information.
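A minimal sketch of this scenario-driven testing can make the idea concrete: each user scenario is phrased as a query, and the model is trusted only insofar as the queries it was built for return the expected answers. The repository triples and the scenario below are invented for illustration and are not taken from the actual EADS repository:

```python
# Sketch: a tiny triple store standing in for an architecture repository.
# A user scenario ("which data does this activity read?") becomes a query;
# answering it correctly is the test of the model. All content is hypothetical.

repository = [
    ("Electrical designer", "performs", "Route harness"),
    ("Route harness", "reads", "Harness geometry"),
    ("Route harness", "writes", "Routing report"),
    ("Program commission", "approves", "Design solution"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the pattern (None acts as a wildcard)."""
    return [t for t in repository
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# User scenario: "which data does the 'Route harness' activity read?"
answers = [o for _, _, o in query(subject="Route harness", predicate="reads")]
print(answers)  # ['Harness geometry']
```

Each object and property in such a store then traces back to at least one scenario, which is the design rule the text advocates.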
The most promising use at the operational level, encompassing both
communication and knowledge retrieval, deals with the formalization of user
guides and best practices for engineering tools such as CAD and PDM applications.
In our opinion, the architecture repository contains enough information and
semantic data to structure a methodological web site for electrical engineering
activities.
Concerning the tool, we acknowledge that implementations are not restricted by
the System Architect tool, but should be constrained by a methodology in order to
provide the means for a consistent repository. Conversely, a tool restricted by a
fixed grammar or method does not allow the full expressivity required to formalize
product engineering activities.
7 Conclusion
The analysis performed in this paper shows that enterprise architecture can be used
to represent an overall picture of a business electrical process rather than to
provide performance indicators on business processes. The complexity of
knowledge representation was subdivided into manageable viewpoints, and
viewpoint artefacts are linked through a metamodel based on a reference construct
including element descriptions, element depictions and cross-domain references.
In addition to a methodological repository, enterprise paradigms can assist the
product decision-making process. Terminology is one of the major challenges in
the field of integrated architecture engineering. The ontology development
approach used in this paper is top-down oriented: user requirements and scenarios
are the basis for deepening concept identification and syntactic definition. From an
engineering user’s point of view, the acquisition of modelling competencies is a
prerequisite to foster the deployment of an operational architecture.
8 References
[1] EADS Space, Communications and Public relations Directorate. EADS Space leaflet.
www.space-eads.net.
[2] Mager R, Hartmann R. The Satellite design office at Astrium – A Success Story of an
Industrial Design Center Application. Proceedings of EuSEC 2000.
[3] Kesseler E, Kos J. The next step in collaborative aerospace engineering. 3rd
International conference RIVF’05 2005.
[4] IEEE. Recommended Practice for Architectural Description of Software-Intensive
Systems. IEEE Std 1471-2000.
[5] OMG- Object Management Group Inc. Unified Modeling Language Specification,
version 1.3. http://www.omg.org
[6] Business Process Management Initiative (BPMI.org). Business Process Modeling
Notation (BPMN), version 1.0, May 2004.
[7] McClure C. The CASE for structured development. PC TECH J. 1988; 6: 50-69.
[8] Koning H, Bos R, Brinkkemper S. An Inquiry Tool for Stakeholder Concerns of
Architectural Viewpoints: a Case Study at a Large Financial Service Provider.
EDOCW'06 2006. p. 31.
[9] Hilliard R. IEEE Std 1471 and Beyond. SEI's First Architecture Representation
Workshop, January 2001.
[10] Fox MS, Gruninger M. Enterprise Modeling. American Association for Artificial
Intelligence 1998: 109-121.
[11] Sowa JF, Zachman JA. Extending and formalizing the framework for information
systems architecture. IBM Systems Journal 1992; 31: 590-616.
[12] C4ISR Architecture Working Group, US Department of Defense. C4ISR Architecture
Framework Version 2.0
[13] Kruchten P. Architectural Blueprints – The “4+1” View Model of Software
Architecture. IEEE Software 1995; 12:42-50
[14] TOGAF: The Open Group Architecture Framework, "Enterprise Edition", version 8.1.
Available at: <http://www.opengroup.org/architecture/togaf8-doc/arch>. Accessed on:
Feb. 5th 2007.
[15] Maiden NAM, Rugg G. ACRE: a framework for acquisition of requirements. Software
Engineering Journal, 1996; 11:183-192
[16] Zachman JA. Enterprise Architecture – a framework. ZIFA, Zachman International.
Available at: <www.zifa.com>. Accessed on: Jan. 17th 2005.
[17] Noran O. A systematic evaluation of the C4ISR AF using ISO 15704 Annex A
(GERAM). Computers in Industry 2005; 5: 407-427.
[18] Ducq Y, Chen D, Vallespir B. Interoperability in enterprise modelling:
requirements and roadmap. Advanced Engineering Informatics 2004; 18: 193-203.
[19] Wegmann A, Balabko P, Lê LS, Regev G, Rychkova I. A Method and Tool for
Business-IT Alignment in Enterprise Architecture. The 17th Conference on Advanced
Information Systems Engineering 2005: 161.
[20] Toussaint P, Bakker AR, Groenewegen LPJ. Constructing an Enterprise Viewpoint:
evaluation of four business modelling techniques. Computer Methods and Programs in
Biomedicine 1998; 55: 11-30.
[21] Hilliard R. Using the UML for Architectural Description. Proceedings of UML 1999;
1723.
[22] Kosanke K, Nell JG. Standardisation in ISO for enterprise engineering and integration.
Computers in Industry 1999; 40: 311-319.
[23] Mertins K, Jochem R. Architectures, methods and tools for enterprise engineering. Int J
Productions Economics 2005;98:179-188
[24] GERAM: Generalised Enterprise Reference Architecture and Methodology.
IFIP–IFAC Task Force on Architectures for Enterprise Integration, version 1.6.3, 1999.
[25] Bernus P. Enterprise models for enterprise architecture and ISO9000:2000. Annual
Reviews in Control 2003; 27:211-220
[26] TOVE. Toronto Virtual Enterprise. Available at:
<http://www.eil.toronto.ca/tove/ontoTOC.html>. Accessed on: Feb. 6th 2007.
[27] Telelogic Company <http://www.telelogic.com>
In search of the elements of an Intra-organizational
Innovation System for Brazilian automotive
subsidiaries
Abstract: The present study provides a theoretical basis for the development of product
technological competence in the Brazilian subsidiaries of global automotive organizations.
It is argued that the necessary knowledge is fragmented in the literature among studies on
new product development, knowledge management and organizational learning,
organizational competences, and technological innovation.
It presents two concepts: (a) the concept of Intermediate Technological Leadership (ITL), as
an enterprise purpose to be reached and (b) the concept of Intra-organizational Innovation
System (IIS), as a model to be constructed and applied in local subsidiaries in order to
enable the achievement of all necessary technological competences. The integration of
theoretical sources reveals six fundamental elements for an IIS: strategic adequacy,
interpretation of external environment, conception of internal organizational structure,
integration of external structure, systematization of organizational basic processes, and
consideration of human factors and relationships. It is expected that the theoretical basis
presented in this study will serve as a reference to be validated in real-world applications.
Introduction
Product development in the Brazilian automotive industry has been incorporating
new methods and technologies due to legislative requirements, market needs and
new organizational strategies. Since the great market opening of the nineties, this
industrial sector has been experiencing a fast transformation of its subsidiary
structures. Such transformation aims at the continuous preparation of these local
organizations for a more competitive market.
1
NTQI, UFMG (Production Engineering Department), Av. Presidente Antônio Carlos, 6627
Belo Horizonte - MG, Brazil; Tel: +55 (31) 3499 4889; Email: rbagno@uai.com.br;
http://www.dep.ufmg.br/labs/ntqi/index.html
694 R. B. Bagno, L. C. Cheng
Nonaka and Takeuchi are seen as essential references on knowledge creation for
technological innovation [17]. Representing the Japanese approach, these authors
affirm that the success of Japanese companies is mainly due to their ability in
organizational knowledge creation, which they define as the capacity a company
has to create knowledge, to spread it within the organization and to incorporate it
into products, services and systems.
The acquired learning or created knowledge takes the form of new concepts of
products, archetypes, procedures or services. In the Western approach, Senge,
DiBella and Nevis, and Argyris and Schön present strategies of organizational
learning that emphasize explicit knowledge more than the Japanese authors do
[24, 6, 1]. Nonaka and Takeuchi center their organizational knowledge creation
theory on four mechanisms of knowledge conversion: tacit-tacit (socialization),
tacit-explicit (externalization), explicit-explicit (combination) and explicit-tacit
(internalization) [17]. These mechanisms have been identified throughout studies
and reviews of innovative processes in Japanese organizations. Both approaches
recognize organizational characteristics such as managers’ roles, people’s
autonomy, objectives, etc., and discuss the ideal conditions for improving the
learning process.
Organizational competences
In any organization beginning to develop new products and technologies, there are
activities, work processes, physical structures, and organizational and professional
profile definitions that were not there before. Organizational competences refer to
the systematization of all these necessary elements, which will sustain the new
abilities now present within the organization.
Prahalad and Hamel studied the concept of competences with a focus on a
product’s base technologies. According to them, core competences are related to
the technological domain of the product’s base, and this would be the major
argument to explain the differences between technology-based corporations [20].
Prahalad and Hamel present an important relation between competence formation
and innovation dynamics, as they recognize that companies which are not focused
on technological abilities become increasingly limited to identifying innovations
within their current product line or simple expansions of it [20].
For organizational competence formation, careful consideration of the local
context is highly necessary, as the differences between local and foreign
environments are very relevant. It is also important to consider strategic alliances,
which are normally focused on the complementarity of strengths and weaknesses.
Such strategies are discussed by Fleury and Fleury, Prahalad and Hamel, and
Medcof [11, 20, 14].
innovation. It can be noted that they are mainly from research institutes such as
IPEA and from the main local universities [22].
Conclusion
The formation of technological competence within the context of ITL has great
social and economic relevance for a country’s development. This article aimed to
point out an important way of changing the roles currently played by Brazilian
subsidiaries within their organizations. This change includes setting new forms of
investment attraction, placing higher value on the jobs generated in Brazil, and
motivating the development of the organizations that compose the local innovation
support structure. Such a reality, however, should be based on complete IISs,
conceived from a rigorous theoretical search, and carefully integrated into the
practical environment. Each singular organizational context will demand a specific
and adequate system.
References
[1] Argyris C, Schön D. Organizational learning II: Theory, method and practice. Reading,
Mass: Addison Wesley, 1996.
[2] Cheng LC. Caracterização da Gestão de Desenvolvimento de Produto: delineando o seu
contorno e tópicos básicos. In: Anais do 2o. Congresso Brasileiro de Gestão de
Desenvolvimento do Produto. São Carlos: UFSCar, 2000; 1: 1-10.
[3] Clark KB, Wheelwright SC. Managing New Product and Process Development. New
York: The Free Press, 1993. 896pp.
[4] Cooper RG. Winning at New Products: accelerating the process from idea to launch. 2.
edn. Reading: Addison-Wesley Publishing, 1993; 358pp.
[5] Dias AVC. Produto Mundial, Engenharia Brasileira: integração de subsidiárias no
desenvolvimento de produtos globais na indústria automobilística. Ph.D. thesis, Escola
Politécnica, USP, 2003.
[6] Dibella A, Nevis, EC. Como as organizações aprendem. São Paulo:
Educator, 1999.
[7] Dolan RJ. Managing the New Product Development Process. Reading: Addison
Wesley, 1993; 392pp.
[8] Figueiredo PN. Aprendizagem Tecnológica e Performance Competitiva. Rio de
Janeiro: Ed. FGV, 2003; 292pp.
[9] Fleury A. Gerenciamento do Desenvolvimento de Produtos na Economia Globalizada.
In: Anais do 1o. Congresso Brasileiro de Gestão de Desenvolvimento do Produto. Belo
Horizonte: Universidade Federal de Minas Gerais, 1999; 1-10.
[10] Fleury A, Fleury MTL. Aprendizagem e inovação organizacional: as experiências de
Japão, Coréia e Brasil. 2. edn. São Paulo: Atlas, 1997; 237pp.
[11] Fleury A, Fleury MTL. Estratégias empresariais e formação de competências: um
quebra-cabeça caleidoscópico da indústria brasileira. São Paulo: Atlas, 2000; 160pp.
[12] Galina SVR. Desenvolvimento global de produtos: o papel das subsidiárias brasileiras
de fornecedores de equipamentos do setor de telecomunicações. Ph.D. thesis, Escola
Politécnica, USP, 2003.
[13] Griffin A, Page A. PDMA Success measurement project: recommended measures for
product development success and failure. Journal of Product Innovation Management,
1996; vol. 13, 6: 478-496.
[14] Medcof JW. Why too many alliances end in divorce. Long Range Planning, 1997;
vol.30, 5: 718-732.
[15] Meyer MH. Revitalize your product lines through continuous platform renewal.
Research Technology Management, 1997; vol. 40, 2: 17-28.
[16] Nelson RR, Winter SG. In search of a useful theory of innovation. Research Policy,
1977; vol.6, 1:36-77. In: Revista Brasileira de Inovação, 2004; vol.3, 2: 243-282.
[17] Nonaka I, Takeuchi H. Criação de conhecimento na empresa. São Paulo: Campus,
1997.
[18] OICA. OICA Statistics 2005. Available at <http://www.oica.net> Accessed on Dec.
31st 2006.
[19] Pavitt K. Key characteristics of the large innovating firm. British Journal of
Management, 1991; 2: 41-50.
[20] Prahalad CK, Hamel G. The core competence of the corporation. Harvard Business
Review, 1990; 79-91.
[21] Pugh S. Total design: integrated methods for successful product engineering. Addison
Wesley, 1991.
[22] Salerno MS, De Negri JA (Orgs.). Inovação, padrões tecnológicos e desempenho das
firmas industriais brasileiras. Brasília: IPEA, 2005.
[23] Schumpeter J. (1911) A Teoria do Desenvolvimento Econômico. São Paulo: Nova
Cultural, 1985.
[24] Senge P. A Quinta Disciplina. 2 edn. São Paulo: Best Seller, 1990.
Mectron's Innovation Management: Structural and
Behavioral Analysis
1 INTRODUCTION
The objective of the present work is to identify, based on the literature on
technology and innovation management, structural and behavioral aspects that
foster innovation within Mectron – Engenharia, Indústria e Comércio Ltda, a
Brazilian company operating in the aerospace industry.
A literature review is provided, seeking to establish the theoretical basis upon
which the company’s structure and behavior are analyzed. Subsequently, the
authors present the elements that compose the history of Mectron and its
organizational architecture; in addition, Mectron’s mission statement, vision and
quality policy are shown, in an attempt to characterize the company according to
its principles and values.
1
Instituto Tecnológico de Aeronáutica, Vila das Acácias, 50, 12228-900, São José dos
Campos, SP, Brazil, Tel +55 39473836, Email: limaceaae@gmail.com
702 A. Lima, J. Paula
After that, the work describes the environment in which the company carries out
its activities. At this point, the intention is to comprehend how the company and its
environment influence each other. Then, an attempt is made to demonstrate how
the company performs innovations and how these innovations are disseminated to
the market.
Finally, the authors analyze the information collected, attempting to identify the
adherence of Mectron’s structure and behavior to the theoretical expectations
foreseen in the literature review.
2 LITERATURE REVIEW
The present section aims to expose the theoretical basis upon which the company’s
structure and behavior will be further analyzed in Section 5. The concepts used
throughout the work emanate from the following approaches.
The concept of competitive strategy emanates from the relationship between the
organization and its environment. On the one hand, the environment conditions the
organization’s activities; on the other hand, it offers important opportunities.
Porter understands competitive strategy as actions that aim to create a
defensible position in an industry, in order to successfully confront competitive
forces and obtain a superior return on investment. The same author also establishes
that “competitive strategy is a combination of the ends (goals) for which an
organization is striving and the means (policies) by which it is seeking to get there”
[5].
3 METHODOLOGY
The authors examined the literature in search of the basis upon which the
company’s characteristics would be evaluated. Then, the authors contacted some of
Mectron’s managers. At that first meeting, the steps through which the research
would be carried out were established, and the level of confidentiality was set.
The participants of the meeting reached an understanding concerning the
formulation of a questionnaire, which would be submitted to Mectron’s Chief
Systems Engineer (CSE).
The company also supplied the authors with an electronic copy of its
institutional portfolio [4], which contained Mectron’s history as well as a
description of its products. Relevant information was acquired from its
examination, providing support to the present work.
The methodology adopted throughout the research also involved the
4 Results
The previous section exposed the procedures adopted for conducting the research
and the consequent collection of information. The following section presents the
data collected, according to the methodology previously described.
4.3 Innovation
4.5 Management
5 ANALYSIS
6 CONCLUSION
In the present work, the authors have sought to answer how Mectron Engenharia,
Indústria e Comércio Ltda is structured and how it is positioned in the market, in
view of the strategic management of innovation approach.
The company was characterized according to its history and its organizational
architecture, as well as its guiding values, exhibited in its statements of mission,
vision and quality policy, the last constituting the company’s body of values.
After that, the authors described the environment in which the company carries
out its activities and the ways in which the company and that environment
influence each other. The authors attempted to demonstrate how the company
performs innovations and how those innovations are disseminated into the market.
The authors have also commented on some strategies used by the company to
manage its business.
Finally, the authors performed an analysis of the collected information,
verifying the adherence of the investigated characteristics to the theoretical
7 REFERENCES
1 Introduction
The strategy management process has been recommended by many authors, e.g.
Porter [9], Lobato [8] and Shapiro [10], to guide organizations toward a desired
position.
A small educational enterprise in Brazil set up a business development project
to achieve this goal. Araujo and Trabasso [1] describe the initial planning phase of
this project, in which quality function deployment (QFD) was used to assist the
deployment of the project requirements into a determined set of action plans,
which were further deployed into the organizational business processes.
The analysis done by the authors endorsed the hypothesis that the QFD
methodology can assist the deployment of company strategic objectives and ease
the planning stage of a Business Development Project (BDP). The quality of the
BDP, measured by its completeness, for instance, was not within the scope of that
analysis. This paper addresses this very aspect and proposes slight changes to the
QFD methodology in order to assess the completeness of business development
projects.
1
Quality Assurance Manager, Mectron EIC S.A. (S. J. Campos, Brazil) Av. Brig. Faria
Lima, 1399 – 12227-000 – Tel: +55 (12) 2139-3524; Fax: +55 (12) 2139-3535; Email:
m.f.araujo@terra.com.br. – MSc student, Aeronautics Institute of Technology.
710 M. Araujo, L. Trabasso.
The text is organized as follows: initially, the BDP applied by the case-study
enterprise is presented; then, the completeness of business development projects is
analyzed and a literature review is presented. Next, the proposed procedure to
analyze project completeness is described. Finally, the conclusions concerning
the specific case study and the modifications suggested to the QFD are presented
and discussed.
Table 1. QFD matrices: inputs and outputs in product development and in business
development.

Matrix   Product development: Input -> Output                Business development: Input -> Output
1        Customer needs -> System requirements               Stakeholder needs -> Model requirements
2        System requirements -> Characteristics of parts     Model requirements -> Action plans
3        Characteristics of parts -> Production processes    Action plans -> Business processes
4        Production processes -> Manufacturing operations    Business processes -> Critical tasks
The action plans were identified after an extensive internal survey to find out
which actions, programs and efforts performed by the enterprise could be
correlated with any PNQ requirement. These were grouped into a set of 12 action
plans, and an initial QFD matrix of their relations with the PNQ requirements was
drawn, as shown in Figure 1.
Completeness of Development Projects Assisted by QFD: a Case Study 711
[Figure 1 appears here: the initial QFD planning matrix relating the 12 action
plans (e.g. External Communication, General Administration, Pedagogic
Excellence, Customer Satisfaction, Social Responsibility, ISO9001:2000, Child
care) to the PNQ requirements (Senior Leadership, Continuous Improvement
Culture, Organization Performance Review, Strategy Development, Strategy
Deployment, Customer and Market Knowledge, Customer Relationship and
Satisfaction, Social Responsibility), with relationship intensities marked strong,
average (value = 3) or weak (value = 1) and PNQ point values of 40 or 30 per
requirement.]
The QFD matrix shown in Figure 1 depicts how the action plans support the
PNQ requirements. The action plans are rated according to their relative
importance; the more important plans can be recursively deployed through the
company structure down to the critical tasks that support plan execution.
Although the QFD matrix shows the relationships between all PNQ
requirements and the selected set of action plans, the full coverage of the
requirements is not easily assessed. For instance, observing Figure 1, it is possible
to infer that the requirement “Vendor Outcome” is weakly associated with the set
of action plans, since only three weak relations were found, whereas the causes of
this poor relationship intensity are not directly presented.
On the other hand, three strong relations were identified for the requirement
“Value Creation and Support Processes”; however, the associated action plans
could be correlated, leading to an overestimation of the overall relationship
intensity.
From the enterprise’s viewpoint, the completeness of the business development
project, i.e., a plan that addresses every portion of the PNQ requirements, is worth
knowing in order to evaluate the actual effort required to complete the business
improvement process.
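The coverage question can be made concrete with a small computation. The weights below follow the figure legend (weak = 1, average = 3); the strong weight of 9 is an assumption borrowed from the usual QFD scale, and the matrix rows are invented for illustration:

```python
# Sketch: scoring how well each PNQ requirement is covered by its action-plan
# relations. Weak = 1 and average = 3 come from the figure legend; strong = 9
# is an assumed value (common QFD practice), and the rows are hypothetical.

WEIGHTS = {"strong": 9, "average": 3, "weak": 1}

relations = {  # requirement -> intensities of its relations with action plans
    "Vendor Outcome": ["weak", "weak", "weak"],
    "Value Creation and Support Processes": ["strong", "strong", "strong"],
}

def coverage(requirement: str) -> int:
    """Summed relation intensity: a crude proxy for requirement coverage."""
    return sum(WEIGHTS[i] for i in relations[requirement])

for req in relations:
    print(req, coverage(req))
# "Vendor Outcome" scores only 3, flagging it as weakly supported; but, as the
# text notes, correlated plans can inflate a high score such as 27.
```

The summed score flags weak coverage but cannot, by itself, detect overlap between correlated plans, which is exactly the limitation the paper addresses next.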
Kim et al. [7] report that “The limitations of the current QFD practices mainly
come from the fact that a HOQ (House of Quality) requires subjective, interrelated
and complicated information”; additionally, Chen and Chen [2] state that design
teams should use their own experience, knowledge and intuition to determine the
engineering characteristics that would support the client requirements. These
observations grant an intrinsic uncertainty to the QFD methodology.
Fehlmann [4], Kim et al. [7], Shin and Kim [11], and Chen and Chen [2] observed
that the selected engineering characteristics could be dependent on each other
(multicollinearity), i.e., they could enlighten the same portion of the requirements,
leading to an over- or underestimation of the requirement coverage.
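One simple way to surface such overlap is to compare the portions of the requirements that two characteristics (or action plans) address. The sketch below uses Jaccard similarity as an illustrative overlap measure; the aspect sets are invented:

```python
# Sketch: two engineering characteristics "enlighten the same portion of the
# requirements" when the requirement aspects they address overlap. Jaccard
# similarity is one simple measure of that overlap; the sets are hypothetical.

def jaccard(a: set, b: set) -> float:
    """|A & B| / |A | B|: 0.0 = disjoint coverage, 1.0 = identical coverage."""
    return len(a & b) / len(a | b) if a | b else 0.0

plan_a = {"leadership review", "scorecard reporting"}
plan_b = {"scorecard reporting", "strategy deployment"}

overlap = jaccard(plan_a, plan_b)
print(round(overlap, 2))  # 0.33 -- a third of the joint coverage is shared
```

A high overlap score between two columns of the matrix would signal the multicollinearity the cited authors warn about.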
3.3 Absence of formal criteria to identify the intensity of the relation between
requirements and the engineering characteristics.
Some authors, e.g. Cohen [3], Kim et al. [7] and Franceschini and Rupil [6],
proposed directives to analyze the intensity of the relations between the
requirements and the engineering characteristics; however, these procedures are
not able to clearly assess the sufficiency of the engineering characteristics to fully
support requirement accomplishment.
Even though the relations inferred in the QFD matrices can be associated with a
measure of effectiveness (MOE), as recommended by Cohen [3], they do not
specify or reference how a requirement is achieved or verified. Chen and Chen [2]
corroborate this statement: “Wasserman formulated the QFD planning process as a
linear programming model that select the mix of design features which resulted in
the highest level of customer satisfaction. The model focused on prioritizing the
allocation of resources among design features, rather than determining the target
levels of engineering characteristics”.
Table 2. PNQ assessment criteria and relevant action plans for the requirement
“Information and Knowledge Management”.

PNQ assessment criterion   Relevant action plan
Adequacy                   Not related
Proactive                  General Administration
Refinement                 Balanced Scorecard
Innovation                 Not related
Dissemination              Balanced Scorecard
Continuity                 General Administration
Integration                Not related
Table 3. Heuristic rule used to determine the relationship intensity between PNQ
requirements (“approach and process”) and action plans.

Intensity of the relationship   Performance areas associated to the action plan
Strong                          Four or more
Average                         Two or three
Weak                            One
Inexistent                      None
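The heuristic of Table 3 is simple enough to state as a function. This sketch uses exactly the thresholds of the table, with only the function and label names as our own:

```python
def relation_intensity(n_performance_areas: int) -> str:
    """Heuristic of Table 3: map the number of performance areas associated
    to an action plan onto a relationship intensity."""
    if n_performance_areas >= 4:
        return "strong"
    if n_performance_areas >= 2:
        return "average"
    if n_performance_areas == 1:
        return "weak"
    return "inexistent"

print([relation_intensity(n) for n in (0, 1, 2, 3, 4, 7)])
# ['inexistent', 'weak', 'average', 'average', 'strong', 'strong']
```

Stating the rule this explicitly makes the alternatives discussed next (different weights per performance area, multiple plans per area) easy to substitute and compare.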
Naturally, many possible alternatives could be used instead of the heuristic rule
presented, e.g. the usage of different weights for each performance area, or the
association of more than one action plan with a single performance area. These
rules could lead to slight differences in the relation intensities.
Even though some uncertainty is expected in the relation intensities, some
interesting results were gathered when the procedure was applied to the case study.
The new QFD planning matrix shown in Figure 2 has substantial differences
compared to the first draft presented in Figure 1:
- The superposition among the action plans was reduced, as only one plan was
labeled as relevant to each PNQ assessment criterion;
- Some action plans were not identified as relevant to any of the PNQ
requirements;
- It was possible to identify the portion of the PNQ assessment criteria that was
not covered by the selected set of action plans; the column “Not Related”,
added in Figure 2, highlights this information;
- The priority order of the action plans was modified as a consequence.
[Figure 2 appears here: the revised QFD planning matrix, in which each PNQ
assessment criterion is related to at most one action plan and a “Not Related”
column collects the uncovered criteria.]
Figure 2. QFD planning matrix: relations determined with PNQ assessment criteria.
The outcome of this analysis compelled the project team to review the
business development project, and a new set of action plans was determined. Some
plans had their scopes enlarged, new ones were added, and non-relevant ones were
merged into more significant plans; e.g., "Child care" and "Making the student a
citizen" were merged into "Pedagogic Excellence". Figure 3 shows the final result
yielded by the steps just described.
[Figure 3: revised QFD planning matrix relating the same PNQ requirements (point values 40 or 30) to the reviewed set of action plans, whose columns now include Corporate Governance and Pedagogic Excellence alongside General Administration, External Communication, Customer Satisfaction, Social Responsibility, ISO 9001:2000 and a "Not related" column, using the same strong/average/weak relation symbols.]
Figure 3. QFD planning matrix: relations determined with PNQ assessment criteria.
5 Conclusions
Although the QFD methodology may be worth using in the planning phase of
business development projects, the completeness of the derived plan is not easily
confirmed, because:
- the QFD methodology relies on relations that are arbitrarily and subjectively
determined;
- the engineering characteristics may be insufficient to cover the requirements,
and several may support the same portion of a requirement;
6 References
[1] Araujo M. and Trabasso L. Business Development Process Assisted by QFD. Leading
the Web in Concurrent Engineering. P. Ghodous et al. (eds.), IOS Press 2006; 469–476.
[2] Chen Y. and Chen L. A non-linear possibilistic regression approach to model
functional relationships in product planning, International Journal of Advanced
Manufacturing Technology, 2006; 28:1175–1181.
[3] Cohen L. Quality Function Deployment: how to make QFD work for you, Reading:
Addison-Wesley Longman (ed.), 1995.
[4] Fehlmann T. The impact of linear algebra on QFD, International Journal of Quality &
Reliability Management, Emerald Group Publishing Limited (ed.), 2005;22:83-96.
[5] FNQ – Fundação Nacional para a Qualidade, Critérios de Excelência. FNQ (Ed.) São
Paulo 2006.
[6] Franceschini F. and Rupil A. Rating scales and prioritization in QFD, International
Journal of Quality & Reliability Management, MCB (ed.), 1999; 16:85-97.
[7] Kim K, et al. A Synopsis of recent methodological enhancements of quality function
deployment, International Journal of Industrial Engineering, 2003; 10:462-466.
[8] Lobato D. Administração Estratégica: Uma visão orientada para a busca de vantagens
competitivas. Rio de Janeiro: Editoração (ed.), 2002.
[9] Porter M. Estratégia competitiva: Técnicas para análise de indústrias e da concorrência.
Rio de Janeiro: Campus (ed.), 1986.
[10] Shapiro B. A liderança de mercado sustentável. São Paulo: HSM 2005; 48:98–104.
[11] Shin J. and Kim K. Restructuring a House of Quality Using Factor Analysis, Quality
Engineering, 1997; 9:739-746.
The Effects of Teams’ Co-location on Project
Performance
M. M. N. Zenun, G. Loureiro and C. S. Araujo

a Instituto Tecnológico de Aeronáutica – ITA, Brazil.
b Instituto de Pesquisas Espaciais – INPE, Brazil.
c Empresa Brasileira de Aeronáutica S. A. – Embraer, Brazil.
Abstract: This paper presents an analysis of the relationship between teams' co-location and
project performance. To achieve product development project success, many decisions must
be made before the project kick-off. One of these decisions is whether or not to co-locate the
project team. But what are the effects of teams' co-location on project performance? The
paper provides a literature review on teams' co-location, its advantages and
disadvantages, virtual teams, and project performance parameters. A table is then proposed
to be used as a guide to determine the degree of success of projects. The paper also presents
a case study in which 3 pairs of similar New Product Development (NPD) projects were
analyzed. In each pair, the first NPD project used a co-located team and, in the
second case, a virtual team (not co-located team) was adopted. The project performance
parameters for each case were identified using the proposed table, from which we concluded
that co-located teams appear to deliver better performance, at least in the "internal project
efficiency" parameters. Further research involving a larger sample of cases is still necessary
to confirm these conclusions.
1 Introduction
NPD project performance has been widely studied in the last 20 years by
researchers from both the Product Development and the Project Management
arenas. According to these authors [1, 2, 3], many factors may cause a project to
fail. Among these, a classic reason may be pointed out: the project is not
structured appropriately (see, for instance, [3]).
Within the broad topic "project structuring" we find the theme "project
organization approach". Many authors [4, 5, 6] and practitioners believe that an
ideal situation for project organization is getting the team members into a common
physical area, which is called team co-location. Some other authors, on the other
hand, believe that co-location is not always a must, and that in some cases it is
completely unnecessary and even counter-productive [7, 8]. For the companies, in
turn, co-location always means extra costs, incurred in the expectation of better
team results.

1 Corresponding Author e-mail: marina.natalino@embraer.com.br

718 M. M.N. Zenun, G. Loureiro and C. S. Araujo
In this context, this paper aims to present and discuss the early results of a
study at a major aerospace company which tries to shed some light on the complex
relationship between teams' co-location and overall project performance.
In order to achieve this goal we start by providing a literature review on
project teams, describing teams' co-location, its advantages and disadvantages, and
virtual teams (Section 2). In the third section, we propose a table with project
performance parameters to be used as a guide to determine the degree of success of
a given project along a number of dimensions. In the fourth section, we present a
case study performed at an aerospace company showing the project performance
parameters with co-located and not co-located teams. Finally, the paper concludes
with limitations and directions for future research.
2.1 Teams
The key point is that co-location enables informal communication. The
water cooler metaphor is used to explain this phenomenon: the water cooler effect
represents the belief that conversations that develop in and around a water fountain,
or in a cafeteria, significantly enable knowledge transfer, which indirectly
contributes to positive work relationships [10].
When the team members are co-located, they can focus their collective energy
on creating the product. This situation can result in lasting camaraderie among
team members, addressing a major project challenge: team spirit [4].
Besides communication and team spirit, the literature shows that
co-location provides adequate environmental conditions for decision making,
collaboration, trust between team members, and effective interpersonal
relationships [11, 12, 13].
Co-location is regarded as one of the key ingredients in shortening development
cycles at many companies, such as Chrysler, Black & Decker, and Motorola [5].
However, team co-location means a significant project cost increase,
sometimes including the need to relocate people or even the requirement for
new infrastructure to accommodate the complete team. During the development of its
ERJ 170/190 series, for instance, Embraer had to erect an entirely new
building in order to accommodate the entire product team of around 600 engineers
from various countries. These co-location costs increase drastically when
we consider that in some industries, such as aerospace, the needed specialists
are spread around the world. Some further concerns are summarized below:
- lack of a permanent office home; as a consequence, the employee is
distant from his functional area, losing some technological updating [5];
- functional bosses worried about losing control of their employees [5].
Further, based on the authors' experience, some more concerns can be added:
- the fact that moving very often represents an inconvenience and/or a problem
for the people involved;
1 – Quality: The project affects quality at two levels: the quality of the design
itself (design quality) and the organization's ability to produce the design
(conformance quality) [15].
2 – Lead time: Achieving high performance in lead time is not just a matter of
meeting the schedule. Lead time is a measure of how quickly an organization can
move from concept to market, and development lead time matters because time to
market is shorter than ever [15].
3 – Productivity: This is the level of resources required to take the
project from concept to commercial product. It includes engineering hours
worked, materials used for prototype construction, and any equipment and services
the organization may use. Productivity has a direct though relatively small effect
on unit production cost, but it also affects the number of projects an organization
can complete for a given level of resources [15].
Figure 2 shows the interaction among these 3 dimensions of project
performance.
In addition, the key success indicators proposed by Shenhar et al. [16] are a set
of measurable success criteria, divided into four categories:
1) Project efficiency: Internal project objectives such as meeting time and
budget goals.
2) Impact on the customer: Immediate and long-term benefit to the customer.
3) Direct and business success: Direct contribution to the organization.
4) Preparing the future: Future opportunity (e.g. competitiveness or technical
advantage) [16].
The table shown in Figure 3 is proposed as a guide to determine whether the
analyzed projects achieved success. It is applied in the case studies
discussed in the following section.
Figure 3: Primary success categories, key success indicators, and project performance
parameters
4 Case Study
What are the effects of teams' co-location on project performance? What is the
relationship between co-location and lead time, co-location and productivity, and
co-location and quality? In order to answer these questions, a case study was
performed at a major aerospace company. Figure 4 illustrates the relationships to
be investigated empirically through this case study.
Figure 4. A framework of the possible relationship between teams’ co-location and project
performance
Three pairs of similar NPD projects were chosen and analyzed. In this study we defined
"similar NPD projects" as those involving the development of systems with similar
design characteristics and an identical or similar number of technologies. The
following selection criteria were also used: a minimum of seven different
technologies involved in the project (including manufacturing) and a minimum of
10 people involved in each project team. For each pair of projects, the first
occurred with a co-located team whereas the second was carried out by a non-co-
located team.
The previously proposed table (Table 1) was used to evaluate project success.
The project performance parameters were identified according to the ranking below.
Values from 1 to 5 were attributed to each parameter.
Very low (1)    About 20% of the total
Low (2)         About 40% of the total
Medium (3)      About 60% of the total
High (4)        About 80% of the total
Very high (5)   About 100% of the total
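The scale above, plus an aggregation step, can be sketched as follows; treating the overall case scores (e.g. 4.2) as simple averages of the parameter values is our assumption, not something the paper states:

```python
def rating(fraction):
    """Map an achieved fraction of the total (0.0-1.0) to the paper's
    1-5 scale by rounding to the nearest 20% band."""
    score = min(5, max(1, round(fraction / 0.2)))
    labels = {1: "Very low", 2: "Low", 3: "Medium", 4: "High", 5: "Very high"}
    return score, labels[score]

def project_score(parameter_values):
    """Aggregate the 1-5 parameter values into one figure per project;
    a plain average is assumed here."""
    return sum(parameter_values) / len(parameter_values)
```

For example, a parameter achieved at about 80% of the total maps to High (4), and a project rated 5, 4, 4, 4, 4 on five parameters aggregates to 4.2.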
The data used to attribute the values were: project planning data, such as
planned and actual project duration, time to market, and planned and actual
budget; data from the commercial and marketing areas, such as customer daily
reports, marketing perception and customer complaints; and interviews with the
people involved.
The case study results are presented in Figure 5.
In the 1st case, the performance achieved with the co-located team is slightly higher
than that achieved with the not co-located team (4.2 and 4.0, respectively). The
difference appears in the Internal Project Efficiency (pre-completion), in the
parameters "how quickly the project is completed" and "completing within budget",
which indicate lead time and productivity.
In the 2nd case, the performance achieved with the co-located team is also slightly
higher than that achieved with the not co-located team (4.9 and 4.0, respectively).
The difference appears in the Internal Project Efficiency (pre-completion), in the
parameters "how quickly the project is completed", "meeting schedule" and
"completing within budget", which indicate lead time and productivity.
In addition, a difference appears in the Impact on the Customer dimension
(short term), where the NPD with a co-located team achieved a performance
lower than the not co-located team in the "fulfilling customer's needs" parameter,
which indicates quality.
In the 3rd case, the performance difference between the two projects is the highest
(4.9 and 3.3). Besides the differences in the Internal Project Efficiency (pre-
completion), differences also appear in Impact on the Customer (short term),
Business and Direct Success (medium term) and Preparing for the Future (long
term). These differences are shown in Figure 6.
The common differences in the three cases, between the NPD projects with co-located
and not co-located teams, are associated with the Internal Project Efficiency,
involving parameters which highlight lead time and productivity, such as project
duration, meeting schedule and completing within budget. Except for the 3rd case,
the performance in quality is almost the same in the NPD projects with co-located
and not co-located teams, in the parameters which represent Impact on the Customer
(short term), Business and Direct Success (medium term) and Preparing for the
Future (long term).
Analyzing the collected data, it was also observed that the number of product
modifications in the NPD projects with not co-located teams was much higher than in
the NPD projects with co-located teams. These product modifications probably
provoke a lead time increase; however, it seems that they also help the NPD
projects with not co-located teams achieve the same quality as those with
co-located teams.
6 References
[1] STANDISH GROUP, The extreme chaos report. Standish Group International, 2001.
Available at: <http://www.standishgroup.com/sample_research/>. Accessed on:
02/15/06.
[2] ANDREASEN, M. M.; HEIN, L. Integrated product development. A reprint of the
1987 edition. Institute for Product Development, IPU, 2000.
[3] McGRATH, M. E. Setting the PACE in product development – a guide to product and
cycle-time excellence. Butterworth-Heinemann, 1996.
[4] ULRICH, K. T.; EPPINGER, S. D. Product design and development. McGraw-Hill,
1995.
[5] SMITH, P. G.; REINERTSEN, D. G. Developing products in half the time: new rules,
new tools. 2nd ed. John Wiley & Sons, 1998.
[6] HERBSLEB, J. D.; MOCKUS, A.; FINHOLT, T. A.; GRINTER, R. E. Distance,
dependencies, and delay in a global collaboration. Proceedings of the 2000 ACM
Conference on Computer Supported Cooperative Work, 2000, pp. 319 - 328.
[7] LEHMANN, J. Virtual meetings: not just an option anymore! Proceedings of the 2003
IEEE Managing Technologically Driven Organizations: The Human Side of Innovation
and Change, 2003, pp. 443 - 447.
726 M. M.N. Zenun, G. Loureiro and C. S. Araujo
[8] KATZENBACH, J. R.; SMITH, D. K. The wisdom of teams: creating the high
performance organization. Harvard Business School Press, 2003.
[9] ALLEN, T. J. Managing the flow of technology. Harvard Business School Press, 1977.
[10] DAVENPORT, T. H.; PRUSAK, L. Working knowledge: how organizations manage
what they know. Harvard Business School Press, 1998.
[11] KAHN, K. B.; McDONOUGH III, E. F. An empirical study of the relationships among
co-location, integration, performance, and satisfaction. Journal of Product Innovation
Management, Vol. 14, pp. 161 – 178, 1997.
[12] McDONOUGH III., E. F., KAHN, K. B., BARCZAK, G. An investigation of the use of
global, virtual, and collocated new product development teams. Journal of Product
Innovation Management, Vol. 18, pp. 110 – 120, 2001.
[13] LAKEMOND, N., BERGGREN, C. Co-locating NPD? The need for combining project
focus and organizational integration. Technovation, Vol. 26, pp. 807 – 819, 2006.
[14] PATTI, A. L., GILBERT, J. P., HARTMAN, S. Physical co-location and the success of
new product development projects. Engineering Management Journal, Vol. 9, No. 3,
pp. 31 – 37, 1997.
[15] CLARK, K. B.; FUJIMOTO, T. Product Development Performance: Strategy,
Organization, and Management in the World Auto Industry. Boston: Harvard Business
School Press, (1995).
[16] SHENHAR, A. J.; WIDEMAN, R. M. Optimizing Project Success by Matching
Management Style to Project Type. PMForum, 2000. Available at:
<http://www.pmforum.org/library/papers/2000/PM_Style&Scss.pdf>. Accessed
on: 02/15/06.
Product Development Management
A DEA Benchmarking Methodology for New Product
Development Process Optimization
Amy J.C. Trappey a, Tzu-An Chiang a,b,1, Wen-Chih Chen c, Jen-Yau Kuo d, Chia-Wei Yu d

a Department of Industrial Engineering and Engineering Management, National Tsing Hua University, Taiwan.
b Department of Industrial Engineering and Management, Mingchi University of Technology, Taiwan.
c Department of Industrial Engineering and Management, National Chiao Tung University, Taiwan.
d RFID Technology Center, Industrial Technology Research Institute, Taiwan.
Abstract. Developing new products on time and within budget constraints is crucial for
survival in today's competitive marketplace. However, unpredictable incidents occur during
new product development (NPD) processes, which often cause expense, resource and
schedule overruns. Traditional project management tools lack efficient and effective
methods to solve these problems and challenges. Hence, this study applies the data
envelopment analysis (DEA) concept to develop a novel project planning and management
decision support methodology for NPD that can optimally allocate resources and
dynamically respond to unexpected delays and budget overruns. The research applies the
methodology to a mobile phone NPD project case to demonstrate the method's real-world
application and illustrate the effectiveness of the proposed methodology in depth.
1 Introduction
Introducing new products on time and within resource and budget constraints is a
key to success in today's competitive marketplace. Thus, distributed and
collaborative product development paradigms have emerged, driven by time-to-market
and cost-efficiency pressures and by the complexity of modern product design. However,
unpredictable incidents usually occur during new product development (NPD)
processes, which cause expense, resource and schedule overruns. Conventional
project planning methods, which estimate the times, budgets and resources of NPD
activities, are often based on project managers' expertise and subjective judgment.
1 Department of Industrial Engineering and Engineering Management, National Tsing Hua
University, 101, Sec. 2 Kuang Fu Road, Hsinchu, Taiwan 300, R.O.C.; Tel: +886 (2) 2908
9899; Fax: +886 (2) 2904 1914; Email: tachiang@mail.mit.edu.tw
730 A.J.C. Trappey, T.-A. Chiang, W.-C. Chen, J.-Y. Kuo, C.-W. Yu.
NPD project managers lack objective benchmarking models to gain valuable
insights into the relations between resource allocations and activity times, in
order to support NPD engineers in the best collaborative practices, especially in
the planning and execution phases. During the planning phase of NPD, the proposed
schedule may not satisfy the due date, so the project managers need to alter the
plan accordingly. In addition, during the execution phase, the initially proposed
schedule may become infeasible due to unexpected delays of NPD activities, so the
project plan needs to be modified while the project is being executed.
However, traditional project management tools provide no
mechanisms to dynamically modify NPD projects to avoid schedule and cost
overruns. This research develops a novel project planning and management
decision support methodology and tool for the NPD process. In order to demonstrate
the method's real-world application, a mobile-phone development project scenario
is used as a case study to illustrate the effectiveness of the proposed methodology.
2 Literature Review
Many published papers on NPD management have put forward a wide variety of
models related to NPD planning and performance evaluation. Using a simulation
model, Yang and Sum [8] investigate the performance of due date, resource
allocation and activity scheduling rules in a multi-project environment. The results
show that, when due date nervousness is not mitigated, the first-in-system-first-served
(FISFS) resource allocation rule performs better than due-date-sensitive resource
allocation rules. Sicotte and Langley [5] examine the efficacy of five types of
integration mechanisms for project performance in a sample of 121 R&D projects.
Their study shows that managers adjusted their use of horizontal structures, planning
and process specification, and informal leadership according to project uncertainty.
In order to identify the key determinants that affect project performance, an
artificial neural network (ANN) technique has been used to check whether performance
metrics can reasonably predict design-build project performance [4]. Vandevoorde and
Vanhoucke [7] compare three different methods to forecast the duration of a
project. Using real-life project data, they find that the planned value rate [1] and
the earned duration [3] are unreliable; instead, the earned schedule method [2] seems
to provide reliable results throughout the project lifecycle.
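The earned schedule method [2] mentioned above locates, on the planned-value curve, the time at which the value now earned was planned to have been earned. A minimal sketch under that definition (function name and data are ours, not from the paper):

```python
def earned_schedule(pv, ev_now, at):
    """Earned schedule (Lipke): find the time at which the cumulative
    planned-value curve equals the value actually earned, by linear
    interpolation between whole periods.
    pv: cumulative planned value per period (pv[0] at time 0)
    ev_now: earned value at actual time `at` (in the same period units)."""
    # last whole period N whose planned value does not exceed EV
    n = max(i for i, v in enumerate(pv) if v <= ev_now)
    if n == len(pv) - 1:
        es = float(n)
    else:
        es = n + (ev_now - pv[n]) / (pv[n + 1] - pv[n])
    spi_t = es / at  # time-based schedule performance index
    return es, spi_t

# made-up plan: cumulative PV per month; EV of 45 after 3 months
es, spi_t = earned_schedule([0, 10, 30, 60, 100], ev_now=45, at=3)
```

Here the 45 units earned were planned to be earned at month 2.5, so the project is behind schedule (SPI(t) below 1) even though a cost-based SPI could mask this late in a project.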
Figure 1. The architecture of the benchmarking methodology for NPD process optimization
During the execution phase, if a certain NPD activity is delayed, the feed-back
modification mechanism (FBMM) is activated to assess whether the subsequent
NPD activities need to be modified, by applying the time estimation with added
resource allocation. Figure 3 shows the process architecture of the FBMM.
This model investigates the feasibility of reducing the NPD resource allocation
given the most likely time. Resource allocation given the most likely time is
represented as follows.
A DEA Benchmarking Methodology for New Product Development 733
$\Delta ie_{ef} = ie_{ef} - \theta_f^* \, ie_{ef} + es_f^*$  (2)

$\Delta ih_{mf} = ih_{mf} - \theta_f^* \, ih_{mf} + hs_m^*$  (3)
This model analyzes the feasibility of shortening the NPD project time without
adding resources, as shown in Model (4).

$\min \; \varphi_f$  (4)

s.t. $\sum_{j=1}^{n} \lambda_j t_j \le \varphi_f t_f$, (j = 1, 2, …, f, …, n),

$\sum_{j=1}^{N} \lambda_j ie_{ej} \le ie_{ef} \;\; \forall e \in E$, $\quad \sum_{j=1}^{N} \lambda_j ih_{mj} \le ih_{mf} \;\; \forall m \in M$, $\quad \sum_{j=1}^{N} \lambda_j = 1$

$\Delta t_f = t_f - t_f \times \varphi_f^*$  (5)
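Model (4) is a DEA envelopment program: it looks for a convex combination of reference activities (weights λ) that uses no more of any resource than activity f while scaling f's time down by a factor φ. As a rough illustration only (not the authors' implementation, and with made-up data), the sketch below grid-searches the λ simplex instead of solving the linear program exactly:

```python
from itertools import product

def min_time_scale(times, resources, f, step=0.01):
    """Grid-search approximation of DEA Model (4): minimise
    phi = sum(lambda_j * t_j) / t_f over lambda on the unit simplex,
    subject to sum(lambda_j * r_j) <= r_f for every resource vector r."""
    n = len(times)
    steps = int(round(1 / step))
    best = None
    # enumerate lambda vectors on the simplex with the given granularity
    for combo in product(range(steps + 1), repeat=n - 1):
        if sum(combo) > steps:
            continue
        lam = [c * step for c in combo] + [1 - step * sum(combo)]
        # feasibility: the composite activity must not use more of any
        # resource than activity f currently has
        if any(sum(l * r[j] for j, l in enumerate(lam)) > r[f] + 1e-9
               for r in resources):
            continue
        phi = sum(l * times[j] for j, l in enumerate(lam)) / times[f]
        if best is None or phi < best:
            best = phi
    return best

# made-up data: three activities, one resource (e.g. engineer-hours ih_m)
times = [10, 20, 30]       # most likely activity times
resources = [[1, 2, 3]]    # one resource vector
phi_star = min_time_scale(times, resources, f=2)
```

A value phi_star below 1 means activity f's most likely time can, in principle, be shortened to phi_star times its current value without extra resources, which is exactly the adjustment Equation (5) quantifies. In practice an LP solver would replace the grid search.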
This model further analyzes the feasibility of shortening NPD time by adding
activity resources. First, we calculate the decrease in activity time and the
increase in resource quantities for activity j, and then the project manager
confirms whether the increased resource quantities violate the resource
constraints. If these resource requirements do not exceed the resource limits, we
can obtain the CTP value using Equation (6).
3.5 The relative economic efficiency of the NPD processes within a product
portfolio
$\max \; \gamma_k$  (7)

s.t. $\sum_{p=1}^{n} \lambda_p npv_p - npv_s \ge \gamma_k \, npv_k$, (p = 1, …, k, …, n),

$\sum_{p=1}^{n} \lambda_p \left[ tie_{ep} fc_e + tih_{mp} hc_m \right] \le \left[ tie_{ek} fc_e + tih_{mk} hc_m \right]$, $\quad \sum_{p=1}^{n} \lambda_p = 1$, $\quad \lambda_p \ge 0$
4 Case Study
Figure 4 shows the network diagram of the new product development (NPD) process
for a music mobile phone (MMP) project. In order to evaluate the completion time
performance of the MMP project, the project manager and NPD engineers provide
the optimistic, pessimistic and most likely time estimates for each activity within
the MMP NPD process. Using the PERT/CPM approach, we obtain the
expected value and variance of each MMP development activity time. In this case,
the network diagram for the MMP development project has
four paths. The longest expected-time path, A-D-E-F-G-M-O, is the critical path.
Thus, the expected MMP project time is approximately the sum of the expected
activity time on the critical path, i.e., 197 days. Furthermore, the variance of the
MMP project time is 56.
In this case, the customer requires that the probability of meeting the MMP
project deadline (within 200 days) is 98%. The expected deadline is unlikely to be
met because the estimated probability of meeting that deadline is only 64%.
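The PERT/CPM figures quoted here (expected times from three estimates, path variance, and the deadline probability under a normal approximation of the critical-path total) can be reproduced with a short sketch; the helper names are ours:

```python
import math

def pert_expected(optimistic, most_likely, pessimistic):
    """PERT expected activity time: (o + 4m + p) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def pert_variance(optimistic, pessimistic):
    """PERT activity-time variance: ((p - o) / 6) ** 2."""
    return ((pessimistic - optimistic) / 6) ** 2

def deadline_probability(mean, variance, deadline):
    """P(project time <= deadline), treating the critical-path total as
    normally distributed (standard normal CDF via erf)."""
    z = (deadline - mean) / math.sqrt(variance)
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# with the critical-path totals reported in the text (197 days, variance 56),
# the normal approximation puts the chance of finishing within 200 days in
# the mid-60s percent, in line with the roughly 64% reported
p = deadline_probability(197, 56, 200)
```

The expected project time is the sum of the expected activity times along the critical path, and its variance the sum of those activities' variances, which is what justifies the normal approximation above.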
Firstly, the FFMM analyzes the feasibility of diminishing the activity times on
the critical path without adding resources, by employing the time estimation with
fixed resource allocation. Except for the mechanism verification activity, the
output-oriented technical efficiency of the other activities is less than one.
Therefore, the FFMM employs Model (4) to optimally shorten the most likely times
of these activities without adding resources. The expected project time and the
probability of meeting the deadline are adjusted to 191 days and 97%, respectively,
which still does not achieve the time-performance goal of the MMP project. The FFMM
therefore analyzes the feasibility of adding resources to shorten the critical path
time. Using the time estimation with added resource allocation, the FFMM
calculates the CTP values of the critical path activities for optimally reducing the
critical path time. The CTP value of the mechanism verification is the largest,
so the FFMM increases the activity resources of the mechanism verification.
The expected project time and the probability of meeting the due date change to
185 days and 98%, respectively. Because the mechanism verification activity has
no other benchmarking activities, the FFMM then selects the next best CTP, i.e.,
the development and test of the molds. The probability of meeting the project's
completion day changes to 99%, which satisfies the customer's requirement.
Then, the FFMM analyzes the feasibility of decreasing the resource allocations
of non-critical-path activities without affecting time performance.
From the analytical results, we can see that the efficiencies of the design and
verification of the circuit, the verification of the antenna module, and the system
analysis and design are less than one. By employing resource allocation given the
most likely time, the FFMM adjusts the resource allocations of these activities.
Finally, the FFMM achieves the MMP development process optimization in the
planning phase.
In the execution phase, the FBMM detects a delay in the development and test of
the molds of the MMP project. After understanding the reason for the delay, the
project manager decides to maintain the resource allocation and modify the three
time estimates for this activity. The probability of meeting the deadline changes
to 96%. Hence, the FBMM is activated to adjust the subsequent activities of the
MMP project. By increasing the testing engineers' working time by 208 hours, the
most likely time of the integration and test of the system and the probability of
meeting the project schedule change to 32 days and 98%, respectively, which
achieves the customer's required level.
The third phase evaluates the performance of NPD processes within a product
portfolio of three NPD projects, i.e., the MMP, smart mobile-phone (SMP), and
multimedia mobile-phone (MMP2) projects [6]. By employing Model (7), we
obtain the performance assessment values of these projects. Because the
efficiencies of the SMP and MMP2 projects are less than one, this research further
analyzes and provides improvement directions for both projects. The output-
oriented efficiency analyses suggest NPD strategies that improve the expected
profits of the SMP and MMP2 projects to US$2,457,000 and US$2,457,770
(increases of US$910,000 and US$660,000), respectively.
5 Conclusion
Currently, most performance evaluation methods focus mostly on the
planning and completion phases of a project and cannot utilize the assessment
results to further support NPD process improvement with consistent and
quantitative comments and suggestions. From the entire process
lifecycle point of view, this research develops a novel DEA benchmarking
methodology, which consists of modification mechanisms to optimize NPD
processes and avoid unexpected delays and budget overruns during the planning,
execution and completion phases. By applying the DEA methodology to a real
product development project, this research shows that the methodology can be
adapted to general NPD applications with significant advantages.
6 References
[1] Anbari, F. Earned value method and extensions. Project Management Journal 2003;
34(4): 12-23.
[2] Lipke, W. Schedule is different. The Measurable News 2003; 31: 4.
[3] Jacob, D. Forecasting project schedule completion with earned value metrics. The
Measurable News 2003; 1: 7-9.
[4] Ling, YY, Liu, M. Using neural network to predict performance of design-build
projects in Singapore. Building and Environment 2004; 39: 1263-1274.
[5] Sicotte, H, Langley, A. Integration mechanisms and R&D project performance. Journal
of Engineering and Technology Management 2000; 17: 1-37.
[6] Trappey, A.J.C., Chiang, T.-A., Trappey, C.V., and Kuo, J.-Y. Develop a novel
methodology for strategic product portfolio management considering multi-objectives
and operational constraints. Proceedings, CE2006, French Riviera, France, 2006; 18-22.
[7] Vandevoorde, S, Vanhoucke, M. A comparison of different project duration forecasting
methods using earned value metrics. International Journal of Project Management
2006; 24: 289-302.
[8] Yang, KK, Sum, CC. An evaluation of due date, resource allocation, project release,
and activity scheduling rules in a multi-project environment. European Journal of
Operational Research 1997; 103: 139-154.
Critical success factors on product development
management in Brazilian technological based
companies
Sérgio Luis da Silvaa1, José Carlos de Toledoa, Daniel Jugendb and Glauco
Henrique de Sousa Mendesb
a Professor at Federal University of São Carlos – UFSCar – Brazil
b Postgraduate student at Federal University of São Carlos – UFSCar – Brazil
1. Introduction
products launched, ended up in failure. The vast amount of literature in the area
has produced a collection of factors associated with the success of new products [3, 12].
For the purpose of this paper, the following factors were investigated: new product
innovation degree, characteristics of the target markets, product characteristics,
technology sources, company skills/abilities, project leader skills, integration of the
PDP, PDP organization and execution quality of PDP activities. These factors are
briefly discussed below.
Market orientation is critical to success [3, 12]. This factor covers aspects such as the company's capacity to evaluate the market potential of a new product, understand the needs of the target market and translate that information into PDP language [10].
Numerous product characteristics propel products to success: low cost, high quality, superior performance and unique attributes [2, 10]. The need to integrate product development strategy with company strategies at the program and project levels is also recognized [2].
Technology sources can also contribute to the success or failure of a new project, because they demand acquisition, adaptation and management skills [5].
The main organizational aspects of the PDP mentioned in the literature include the company's organization for product development, the degree of integration between the functional areas, the level of PDP structuring and the characteristics of key individuals involved in project execution [10]. Reference [3] indicates five important factors linked to the organizational characteristics of the PDP: setting up multifunctional teams, the authority and responsibility of the project leader, the scope of the development team's responsibility over the project, the commitment of team members and a high degree of communication during the entire project.
Regarding the execution of PDP activities, [10] recommends paying attention to the pre-development phase, the handling of technical and market studies, and feasibility analysis. Reference [4] emphasizes the need for quality in the activities of generating and analyzing ideas, technical development and market introduction.
As regards PDP management in TBCs, [13] indicates that many studies of product development are carried out in companies located in relatively stable environments, a reality quite different from the areas or markets where technology-based companies (TBCs) are usually established.
3. Research Method
The research was designed in three phases. First, a literature review of PDP management and of critical success factors in product development and in technology-based companies was carried out. This phase enabled the formulation of a set of factors that could explain the success of a new product.
The second phase consisted of choosing the participating companies and collecting data. Based on criteria such as size, operating segment (manufacturers of medical and hospital equipment and of process control automation equipment), location (State of São Paulo) and the existence of an active in-house PDP, the sample amounted to 62 TBCs, totaling 104 products, of which 62 were considered successful and 42 unsuccessful.
Product                               Success   Failure
Medical and Hospital Equipment           30        19
Process Control Automation Equipment     32        23
Total                                    62        42
Success or failure was the classification given by the respondent, who compared the product's performance with the company's expectations at launch. Products whose performance met or exceeded expectations were classified as successful; unsuccessful products were those whose performance was considered below or far below expectations.
For data collection, a questionnaire of 64 closed-ended questions was employed to retrieve information about the management of the product development that gave rise to each successful or unsuccessful product.
In the third phase, statistical techniques were applied to the collected data. First, the association of each investigated variable with the result of the product project (successful or unsuccessful) was measured through the respective contingency coefficients; the aim was to determine which variables, considered in isolation, explained the success or failure of the new product. Factor analysis was then applied to reduce and summarize the individual variables into factors.
The interpretation of the generated results from statistical procedures enabled
finding a set of factors that affect the success of product development in the TBC,
thus indicating priorities and information focus in PDP management.
4. Analysis of Results
The results in Table 2 show the correlation coefficients, and their respective significance levels (p), between ten principal components (critical factors) and the new product result, both for the medical and hospital equipment (MHE) companies and for the process control automation (PCA) companies. In accordance with the methodology, each principal component corresponds to a set of individual variables that were reduced by applying the multivariate analysis technique, aiming to facilitate data interpretation. Table 3 shows the individual variables considered equally significant for both sectors.
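The reduction-and-correlation step can be sketched roughly as follows: standardize the individual variables, extract the first principal component, and correlate its scores with the binary project result. The data below are invented for illustration; they are not the study's variables, and the authors' actual software and procedure are not specified here.

```python
# Illustrative sketch only: hypothetical scores for 8 projects on 2
# PDP variables, reduced to one principal component and correlated
# with the project result (1 = success, 0 = failure).
import numpy as np

X = np.array([[4., 5.], [3., 4.], [5., 5.], [2., 2.],
              [4., 4.], [1., 2.], [5., 4.], [2., 3.]])
result = np.array([1, 1, 1, 0, 1, 0, 1, 0])

Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize variables
eigvals, eigvecs = np.linalg.eigh(np.cov(Z.T))  # PCA via covariance matrix
pc1 = Z @ eigvecs[:, np.argmax(eigvals)]        # first-component scores

r = np.corrcoef(pc1, result)[0, 1]              # correlation with result
print(f"|r| = {abs(r):.2f}")
```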
Table 2: Correlation between main components and the result of new product
Table 3: Association between isolated variables and the result of new product
The results suggest that the two sectors emphasize different aspects in their PDP management systems to generate new product success. It can be concluded that the PCA companies are more product oriented, while the MHE companies are more process oriented.
The PCA companies are more concerned with product characteristics and the degree of innovation they incorporate. For this reason, they give priority and much attention to structuring the technical and economic requirements of the product to be developed (the detailing stage of the product design and fabrication process), depending on the characteristics of the project leaders during this process. For the MHE companies these components were also found to be relevant, although with moderate degrees of correlation.
Success in MHE is more dependent on the company's organizational characteristics, such as proficiency in carrying out PDP activities and the company's marketing skills. The successful projects are those in which the marketing evaluation was carried out well and user requirements were well translated into new product specifications. Thus, it is important that such companies pay more attention to the proficiency of PDP activities, above all those related to pre-development (generating ideas, selecting ideas, formulating concepts and analyzing viability), because these were identified as critical for success. These results are compatible with studies performed in many countries [12].
The principal components referring to the target market and to the quality of PDP activities present, for the PCA companies, a reasonable correlation with the new product result (Table 2). In the first component, besides the variables shown in Table 3, the need for synergy between the new product and the markets already explored by the company can also be indicated as a critical success factor. From the individual variables that form the quality-of-PDP-activities component, it can be verified that pre-development and design are the factors that such companies should manage most carefully.
The results in Table 3 regarding the preparation and follow-up of the documents and reports necessary to certify the product were considered equally significant by the companies of both sectors. While for the PCA companies the quality of this stage is driven by client pressure, in the MHE companies it is driven by the legal norms imposed on the product.
It is presumed that in small companies integration of the functional areas occurs naturally and freely, since the proximity of individuals raises the level of contact and facilitates communication and information exchange during the PDP. Integration is substantially correlated with the new product result in the MHE companies; however, the same is not true for the other sector. The need for integration in this sector was verified as being decisive during the execution of pre-development activities, which strengthens the results described above.
According to [3], the project leader plays an important role in handling the development process of a new product, since he is directly responsible for organizing and directing the members of the development team. Besides leading the team, he must know how to negotiate with the directors in order to obtain the necessary resources for the project. To perform this role, the leader must be endowed with managerial qualifications and relationship skills to create an environment of trust, coordination and control.
5. Conclusions
This paper analyzed management practices and critical success factors in the realization of new product development projects. Product development is a complex process, and any research in this area has limitations. The main restriction of this paper relates to the choice to examine critical success factors in new product development projects only within specific sectors of small and medium-sized Brazilian TBCs. Future research may investigate the same subject in other sectors, such as software and biotechnology. Despite the limitations, some considerations can be made in view of the results obtained.
Interpreting the results, it can be understood that such companies assign priority to, and are concerned with, the characteristics of their products and their articulation with company strategy. Accordingly, they should pay close attention to the pre-development stage, when the technical and economic requirements of the products to be developed are being structured (the detailing stage of the product design and manufacturing process), and keep this posture so that future products converge with the strategy and target market of the company.
The pre-development stage tends to be effective when the right decisions are made to properly articulate the product project with company strategies, to capture the desired technology and market information, and to analyze, at an early stage, the cost and price of the product to be produced. Good decision making in this phase can be facilitated by creating a multifunctional development team right at the beginning of the PDP, as suggested by [10].
Thus, from the beginning of the PDP, analyses and screenings within the areas of Production, Engineering, R&D (which develops the technology to be incorporated into the product) and Marketing are intensified and concentrated on the product to be developed. That integration can be deemed an important management mechanism, since a multifunctional team boosts the exchange of the knowledge accumulated by and among the company's functions. Integration also diminishes uncertainty and consequently increases the quality of the decisions made at the beginning of development; this is likely to lower project cost owing to the probable reduction of problems throughout the PDP.
This type of organizational arrangement for product development can be implemented more easily in small and medium-sized companies, such as those studied in this research; owing to their size, integration and inter-functional communication tend to occur more naturally. It is a management mechanism to be better explored by the small and medium-sized TBCs in the PCA and MHE sectors.
Some results are not compatible with the literature on critical success factors in the PDP. Since the studied companies are TBCs, the processes of technology acquisition and transfer were expected to be critical for them; however, this hypothesis was not confirmed by the results of this research. Lastly, it is hoped that the results of this work can be added to the theoretical body concerning success factors in specific product development management environments and, at the same time, contribute to improvements in PDP indicators by evidencing practices that condition the success or failure of new products.
6. References
[1] Carvalho MM, Machado SA, Pilzysieznig Filho J, Rabechini Jr R. Fatores críticos de sucesso de empresas de base tecnológica. Produto & Produção 2000; 4 (número especial): 47-59.
[2] Clark KB, Wheelwright SC. Managing new product and process development: text and cases. New York: The Free Press, 1993.
[3] Ernst H. Success factors of new product development: a review of the empirical literature. International Journal of Management Reviews 2002; 4(1): 1-40.
[4] Griffin A. PDMA research on new product development practices: updating trends and benchmarking best practices. Journal of Product Innovation Management 1997; 14: 429-459.
[5] Kappel TA. Perspectives on roadmaps: how organisations talk about the future. The Journal of Product Innovation Management 2001; 18: 39-50.
Aline Patricia Mano a1, Julianita Maria Scaranello Simões b, Luciano Silva Lima c, José Carlos de Toledo d and Sérgio Luis da Silva e
a Production Engineering Department – UFSCar, Brazil / Production Engineering Department – UNIFEG, Brazil
b Production Engineering Department – UFSCar, Brazil
c Production Engineering Department – UFSCar, Brazil
d Production Engineering Department – UFSCar, Brazil
e Production Engineering Department – UFSCar, Brazil
1 UNIFEG Professor and Professor of the Production Engineering Department, Av. Dona Floriana 483, Centro, Guaxupé, Minas Gerais, Brasil. Tel: (55) 35 35515267. Cell: (55) 1681344922. Email: alineatricia@yahoo.com
1 Introduction
The Brazilian agricultural machines and implements industry (AMIs) shows high growth potential for the years to come, linked to the growth of domestic agriculture; the increased utilization of agricultural machines and implements is one of the main reasons for agriculture's good productivity performance. This demand calls for the development of products that yield better performance and better fit the territorial topographic conditions; for example, a machine utilized in a flat region should be adapted to operate in more irregular terrain. However, product development departments face many daily difficulties that directly affect the performance of this process.
Some of those problems are rooted in the historical origins of this industrial sector, since basing projects on equipment developed in other countries was a common practice of the local industry. This resulted in various difficulties, because the climate and topographic characteristics of the Brazilian territory are different, jeopardizing product performance and, consequently, the crops [6].
Among the main problems experienced by the AMIs are those related to work accidents due to product dimensioning [7], as well as precision problems, chiefly in seeding and spraying activities [6, 4]. Those problems are failures of the product development process (PDP).
This paper discusses the difficulties AMIs face during the PDP that directly affect their performance. It is important that companies be aware of their difficulties because, with the aid of a diagnosis, it is possible to work on solutions before the problems consolidate into a barrier that blocks competitiveness.
This paper aims, through exploratory descriptive research, to identify the main difficulties faced by large AMIs located in the State of São Paulo. To achieve this objective, all the agricultural machine companies matching this profile were identified. The screening yielded five companies, which were then visited to interview the person responsible for product development and to identify the main difficulties those companies cope with during product development.
Although all five companies showed the same profile, their problems were heterogeneous, leading to at least three categories of difficulty: financial, personnel, and technical problems such as product design and validation, each adversely affecting specific aspects of the PDP.
launch, the PDP is responsible for monitoring the new product's behavior in actual use and production, carrying out eventual product specification or process changes whenever necessary, until the product's withdrawal from the market [5].
The PDP is the manner in which the company realizes and manages the set of projects that will originate new products. Namely, product projects flow along the PDP, which in any given company shows some basic phases common to all projects, although the phases of each project are treated individually [5].
The PDP shows some peculiarities: a high degree of uncertainty; the administration of information from multiple sources, such as customers, suppliers and various company areas; and the diversity of requirements the new product project must meet, involving customer requirements, the manufacturing capacity to realize the project, the necessary technical assistance services and recycling at the end of the product life cycle [5].
Any organization searching for competitiveness through a new product should focus on time-, quality- and productivity-based indicators [5]. Good PDP performance depends on characteristics such as well-defined project objectives, focus on time and market, internal integration of the project, integration among the company areas involved in the project, high-quality prototyping and strong leadership exercised by the project team. Through those management characteristics, quick and efficient development is sought, which yields competitive processes [1].
Nevertheless, owing to the complexity inherent in the PDP, companies commonly face difficulties in managing this process that directly affect its performance. The reality of a new product development process does not always match theory; problems commonly surface along this process and must be known by companies to prevent their repetition.
Among the most common causes of product development problems [1], the following can be mentioned:
- Moving objectives: frequently, the basic product or process concept does not take into account changes, whether technological or market-related, that occur during the project. This is likely to happen when the project is based on an apparently stable technology aimed at a specific market that suddenly changes, or when distribution channels are assumed to be fairly constant.
- Isolation of the product development department (DP): when the company developing a product owns more than one productive unit, the DP commonly operates in isolation, which is likely to cause communication problems between the DP and the marketing, production, finance and other departments.
- Misunderstandings among the company's functional areas: what a given area of the company expects or imagines of another area may be unrealistic or impossible to achieve. Frequently, the areas involved in the PDP do not understand each other, use different languages or measure results differently. Frequent misunderstandings between the marketing and technical areas, for example, are due to unreliable market research.
752 A. P Mano, J. M. S. Simoes, L. S. Lima, J. C. Toledo, S. L. Silva
3 Research Method
The research was carried out by means of an exploratory descriptive study involving the following activities: bibliographic research on the subject; mapping and identification of companies matching the intended profile; design and preparation of a semi-structured interview checklist; visits to the companies and application of the checklist while interviewing the person responsible for the PDP; and description and analysis of the data obtained.
The bibliographic review covered the product development process, its management and the main related problems; this activity also involved getting to know the outstanding characteristics of the Brazilian agricultural machines and implements industry.
Based on the knowledge acquired about PDP management and the industry, a semi-structured checklist was prepared to identify the noteworthy difficulties those industries face during the PDP.
Data obtained from ANFAVEA - Associação Nacional dos Fabricantes de Veículos Automotores and from IBGE - Instituto Brasileiro de Geografia e Estatística made it possible to identify, between August 2005 and January 2006, five large companies with national capital (classified by head count) in the State of São Paulo. When visited, all those companies showed evidence of having developed products between 2003 and 2005. After the visits and interviews, the data obtained were analyzed.
The main problems encountered in the product development process 753
4 Field Research
This section shows the results of the field research conducted in five large national companies located in the State of São Paulo, denominated herein EA (Enterprise A), EB, EC, ED and EE.
Between August 2005 and January 2006, each of the companies under study had more than 500 employees, the largest having 2,000 employees and the smallest fewer than 600. Three of the five companies exhibited total sales above one hundred million Reais; it was not possible to establish a direct relationship between total sales and head count, since the smallest company presented one of the highest total sales.
As regards company administration, a trend of migration from a family to a professional style was observed; currently, two companies are in the transitional stage, one has already professionalized its administration and two are still family run.
Four of the five companies are certified to ISO 9001, which evidences their interest in meeting stringent requirements and increasing their export operations, since the export contribution to total sales is fairly low in all the companies except one, in which it reaches 50%. However, the leading customers of that company are developing countries, which are not very stringent regarding quality and advanced technologies. Chart 4.1 summarizes this information.
According to Chart 4.2, the problems encountered are peculiar to each company, although they can be gathered into five groups: Group 1, quality-related problems; Group 2, time-related problems (meeting target times); Group 3, problems related to leadership and to overall development process management; Group 4, personnel-related problems; and Group 5, finance problems. Through this grouping, it is observable that most difficulties relate to process management.
The variables related to development time are also quite critical, owing to delays in launching new products or to difficulties in meeting established targets or deadlines. Companies B and C face problems due to the frequent need for changes to the original project, which is also a time-related problem, because changes end up delaying the running project.
Compared with other industrial sectors, the new product development time available to AMIs is extremely short; as previously discussed, this variable is vital for product success. When a new product launch is delayed, besides the problem of having to wait until the next harvest, there are aggravating factors: a competitor may launch a similar product, causing heavy financial loss for the delayed company, since this equipment has a long useful life and a customer who purchased from a competitor will most likely not purchase a similar product for at least 10 years.
Development costs are also a problem for companies EC and ED: for EC the development cost is high, while for ED the problem is the lack of financial resources to invest in new projects.
EE faces many people management problems, as well as small development teams. Beyond that, the company has problems managing the information flow between Engineering (responsible for the PDP) and the other company areas.
Although grouped separately, the problems are interrelated. For example, delays due to changes in the original project are frequent, and companies face difficulties in meeting deadlines because of small project teams, communication difficulties among departments and the lack of quick response from information systems. There are indications that solving any one of those problems will contribute to solving the others.
5 Conclusions
The various problems detected show the companies' preference for hiring employees with technical knowledge, which can be one of the causes of the PDP management problems. Hiring professionals with a solid PDP background would facilitate the solution of such problems, owing to their broad vision of the PDP rather than of isolated activities only. This would also allow the early identification of the most critical functions and activities, as well as better integration among the areas involved in the PDP.
Time-related problems, together with changes to original projects, are likely to provoke misunderstandings between the PDP team/department and the commercial area. In such situations, it is ideal that companies foresee clients' future needs ahead of time, thus preventing projects with targets that move over time. Also, the lack of integration between product design and process design may result in faulty prototypes or in a manufacturing operation incapable of producing the new product, causing further delays. Carrying out tests and design validation, involving both product and process, emerges as an alternative prior to actual scale production, helping to avoid customer dissatisfaction and complaints about product performance.
Starting from the identification of the problems and their categories, it is important that companies concentrate on the respective causes in order to prevent them, since all the identified difficulties result in delays or in products performing below expectations.
In more complex situations, the new product project and its development are likely to provoke conflicts among the different company areas, thus harming the overall PDP. The ideal situation during the PDP is to have clear project objectives, shared throughout the organization and articulated with market needs and the company's strategy, thus facilitating the early solution of problems at all hierarchical levels.
6 References
[1] Clark, K.B.; Fujimoto, T. Product Development Performance: strategy, organization and
management in the world auto industry. Boston: HBS Press, 1991.
[2] Clark, K.B.; Wheelwright, S.C. Revolutionizing product development: quantum leaps in speed, efficiency, and quality. New York: The Free Press, 1992.
[3] Fernandes, H. C. Rápido e Rentável. Cultivar Máquinas 2005; 41; 10-12.
[4] Mattoso JR. M., Destefano A., Procurando a precisão. Caderno Técnico Máquinas:
Mecanização 2005; 40; 3-10.
[5] Rozenfeld H. et al. Gestão de Desenvolvimento de Produtos - uma referência para a
melhoria do processo. São Paulo: Saraiva, 2006.
[6] Tomelero E. Hora de acertar. Cultivar Máquinas 2006; 52; 12-15.
[7] Yamashita R. Y. Sem acidentes. Cultivar Máquinas 2005; 41; 13-15.
Identification of critical points for the implementation
of a PDP reference model in SMEs
Abstract. Numerous practices and principles are available to improve the company’s
Product Development Process (PDP), including multifunctional teams, integrated
development process, integration of market evaluation to product development, and product
life cycle analysis. Indeed, the importance of PDP systematization and organization is
widely recognized, and the existing reference models offer a representation of the PDP.
However, most companies fail to incorporate these practices into their routine to improve
their PDP, since the implementation of a reference model or the PDP transformation process
is influenced by the company's organizational structure. This paper identifies and discusses
several critical aspects of the PDP transformation process of SMEs, based on an analysis of
the implementation of a reference model in a Brazilian SME. The analysis of this experience
enabled us to pinpoint various difficulties attending the transformation of the PDP, which
we then compared with the literature on the transformation process. This comparison led to
the identification of critical points for the SMEs structure and organization for PDP
improvement. These observations are expected to support the design of PDP transformation
models, thus helping SMEs to enhance their competitiveness.
1 Introduction
Knowledge about product development process (PDP) management has gradually evolved, accompanying the evolution of management theory. The view of the product was broadened, and the process came to involve the participation of several knowledge areas. To this end, the PDP became composed of multifunctional teams and concurrent activities. Nevertheless, several companies still have an initial-stage PDP, without standardization and with a sequential development process.
Other aspects highlighted by the new approaches are the initial stages of the PDP, a very well defined information flow and a customer-focused philosophy. A PDP reference model comprises the activities, tasks and tools related to the execution of the product development steps. Those models are developed to indicate PDP best practices, which comprise the principles indicated by reference philosophies, tools and approaches such as Concurrent Engineering, Product-Based Business and Integrated Product Development. Many authors describe such models; the best known are Hollins and Pugh [1], Pahl and Beitz [2], Roozenburg and Eekels [3], Cooper [4], Crawford and Benedetto [5], Dickson [6], and Kotler [7]. They summarize the answer to the need for PDP improvement in order to reduce development time and costs. High competitiveness has made the development process steps incorporate strategic items, focusing on customer needs [8].
1 Industrial Pharmacist, Master in Engineering, Doctoral student, Laboratório de Otimização de Produtos e Processos – UFRGS – Oswaldo Aranha, 99, Porto Alegre – RS – BRASIL, +55-51-3308-4005, tomoe@producao.ufrgs.br – www.producao.ufrgs.br
758 T. Gusberti, M. Echeveste
The PDP literature is widely disseminated and offers different models. The difficulty lies in implementing a reference model in company practice, especially in small and medium enterprises (SMEs).
The most common way to classify companies is by size. Nevertheless, SMEs do not have homogeneous cultures and practices. The specificity of SME management tends to disappear with modern practices such as networked business, the use of risk capital and the global market; in this context, those companies tend to have management similar to that of large corporations [9]. In this paper we suppose that the initial approach in low-maturity companies is sequential development. In this context, reference models are presented as an important guide, providing a benchmark for PDP structuring.
Hunter [10] emphasizes that organizational structure is related to the company's innovation posture. According to the author, organizational structure design is defined by contextual and structural elements. The contextual elements include strategy, environment, technology, business size/life cycle and culture. The structural elements include reporting relationships, decision-making processes, communication processes, coordination of work, forms of complexity and distinguishing characteristics.
This paper aims to identify the critical points for PDP reference model implementation in SMEs. Those points are identified from the characterization of a medium-sized company.
2 Method
The Hunter’s [10] elements are related to the company’s behavior on its ambient.
This paper characterizes those elements in a low to moderate maturity level
company’s reality. From this evaluation, some aspects to understand the low to
moderate maturity level company’s culture were identified.
The evaluation of the Hunter’s elements was conducted based on the
management areas. The relationship between the Hunter’s elements and the
management areas is presented in the Figure 1. The closed cell represents the
management areas contemplated by the Hunter’s elements.
Identification of critical points for the implementation of a PDP reference model 759
[Figure 1 (matrix, not reproduced here): the management areas (Strategic
planning, Information management, Quality management, Financial management,
Human resource management, Product management, Process management,
Technologic management) crossed with Hunter's (2002) contextual elements
(strategy, environment, technology, business size/life cycle, culture) and
structural elements (reporting relationship, decision making process,
communication process, work coordination process, forms of complexity,
distinguishing characteristics); closed cells mark the areas contemplated by
each element.]
Figure 1: Management areas and Hunter's (2002) structural and contextual
elements
The critical points identified in the management areas relate to the contextual
and structural elements and will assist the diagnostic step of reference model
implementation. The management areas evaluated are Strategic Planning,
Information Management, Quality Management, Financial Management, Human
Resource Management, Product Management, Process Management, and
Technologic Management.
3 Literature Review
The literature on small-to-medium-sized and low-to-moderate-maturity
companies was investigated. It suggests that all companies need some level of
flexibility, depending on their environment, regardless of their size [9]. Some new
tendencies in modern management that define high-maturity companies were
identified. These new approaches highlight organizational structures featuring
flexibility, employee empowerment, decentralized power, and systematized
communication as advantageous to innovative capacity in an unstable
environmental context [10; 14; 15; 16; 17].
The change management literature understands that techniques, tools, and
conceptually (technologically and scientifically) correct systems must result from
the company's own learning in order to foster a proper innovative culture. This
culture must be capable of leading the company to generate its own knowledge for
continuous improvement.
The literature on change management [18; 19; 20], situational method
construction [13], and action research [21; 22] converges on a central
760 T. Gusberti, M. Echeveste
idea: the individual's role in the context of the company and the social group is
critical for effective change implementation.
The company is expected to become more innovative after improving its PDP
through reference model implementation. Innovativeness is built upon a learning-
favorable structure that actively captures data from the environment, transforms it
efficiently into information, and uses it to generate knowledge [23]. Such a system
is important both for the newly structured PDP and for the transformation (and
continuous improvement) process, i.e., the reference model implementation. It will
provide the organizational memory vehicle that supports innovative product
generation.
The company’s structure consideration is important for the evaluation tool
selection on a process improvement [13]. This description correlates with the
situational method construction. The process improvement project must start
considering the company’s maturity level and structure to select the most adequate
evaluation tool.
Several maturity assessment tools are available in the literature. One adapted to
product development is useful for reference model implementation; a possibility is
to adapt the tool by Sturkenboom et al. [15], which is directed at SMEs.
4 Case study
The selected company is medium-sized by the traditional classification based on
number of employees. It is located in the south of Brazil and is family-owned in
origin. Its products were developed long ago, at the beginning of the company's
activities. It has a Product Development Department, but no new products have
been launched for several years; the products developed are inspired by products
already available on the market.
The company belongs to the pharmaceutical sector. Locally, this sector is
characterized by low maturity both in the production process and in the product
development process. The local companies in the sector have a quality view
focused on control. The company intends to improve and is seeking to adopt
management practices such as strategic planning and the use of metrics.
The company’s used representation of its functionality is the organogram. This
kind of representation shows the hierarchic organizational structure. The existing
metrics did not help the management system improvement. In fact, the company
did not have an effective performance measurement system.
Strategic planning was not effectively deployed to the tactical level. The
company presented no investment planning or financial evaluation; this applies
both to equipment and technology replacement policies and to product
development projects. The company was not in the habit of planning, evaluating,
and controlling projects.
In this company, pharmaceutical product development was understood as
different from other segments. The same idea is emphasized by Sharp [11], who
reports the prevailing posture that product development project quality does not
necessarily affect quality perception in the pharmaceutical sector. The
pharmaceutical area, as observed in this company, considered no product
dimension other than the generic one (the technical characteristics).
The training system had an essentially technical emphasis. Managers appeared
to understand knowledge as coming solely from professional education; they did
not acknowledge the need for more structured training in management practices
and tools, for example.
The company has existed for more than 50 years. It has had difficulty investing
in new technologies and, for this reason, uses old technologies in its production
processes. Most raw materials are imported, as few companies in the country
produce them.
4.1 Discussion
Process mapping is the fundamental step for process improvement: it allows
identification of the process steps and of the material and information flows.
The literature, combined with observation of the management areas in the case
study, allowed identification of the critical points for reference model
implementation. These points express the PDP improvement project's needs in
terms of methods, tools, or techniques; they are the requisites of the
implementation process. Project management is a well-structured practice that
allows planning, conducting, and controlling the PDP improvement project.
The following critical points for PDP improvement are summarized from the
previous discussion: (1) conversion of the sequential PDP into an integrated PDP
with concurrent activities; (2) implementation of systematic information capture;
(3) contemplation of change management theory; (4) maturing of project
management knowledge.
To define a method for PDP improvement, these critical points must be
converted into method principles. These principles define the deployed directives
that guide the implementation process: (1) mapping of the existing PDP,
identifying whether a process view exists, the needed interactions between
departments, the existing problems, and the existing management mechanisms; (2)
mapping of the existing information capture system and incorporation of tools
customized to the company's reality; (3) establishment of statements that avoid
change resistance, fostering a culture and philosophies based on learning to allow
continuous improvement; (4) analysis and implementation of project management
concepts.
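The one-to-one correspondence between the critical points and the method principles above can be sketched as a simple mapping. The labels below are shortened paraphrases of the two numbered lists, not the authors' exact wording:

```python
# Mapping of the four critical points to the deployed method principles;
# all labels are shortened paraphrases of the lists in the text.
critical_points_to_principles = {
    "sequential-to-integrated PDP conversion":
        "map the existing PDP: process view, departmental interactions, "
        "existing problems, management mechanisms",
    "systematic information capture":
        "map the existing information-capture system and incorporate tools "
        "customized to the company's reality",
    "change-management contemplation":
        "establish statements that avoid change resistance and foster a "
        "learning-based culture for continuous improvement",
    "project-management maturity":
        "analyze and implement project-management concepts",
}

for point, principle in critical_points_to_principles.items():
    print(f"{point} -> {principle}")
```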
6 Conclusions
The present work identified principles to guide reference model
implementation. A method built on these principles would help conduct a low-to-
moderate-maturity company toward improving its PDP and, thereby, its
competitiveness.
This paper was based on the organizational structure defining elements
proposed by Hunter [10], identified through theoretical review. These elements
were associated with the management areas interrelated with the PDP, and the
SME improvement aspects for each management area were detailed. Based on
these improvement aspects, the critical requisites, here called the PDP
implementation critical points, were identified. The paper presented the PDP
improvement guiding principles that address these critical points.
For more details see Gusberti [24].
References
[1] HOLLINS, B.; PUGH, S. Successful product design. [S.l.]: Butterworth & Co, 1990.
[2] PAHL, G.; BEITZ, W. Engineering Design: a systematic approach. London: Springer,
1996.
[3] ROOZENBURG, N. F. M.; EEKELS, J. Product design fundamentals and methods.
    [S.l.]: John Wiley and Sons, 1996.
[4] COOPER, R. G. New products: the factors that drive success. International Marketing
Review, v. 11, n. 1, p. 60-76, 1994.
[5] CRAWFORD, C. M.; BENEDETTO, C. A. New products management. 6 ed. Chicago:
McGraw-Hill, 2000.
[6] DICKSON, P. Marketing management. Fort Worth: Dryden Press, 1994.
[7] KOTLER, P. Administração de marketing: análise, planejamento, implementação e
controle. 5 ed. São Paulo: Atlas, 1997.
[8] CUNHA, G. D. Uma Análise da Evolução dos Procedimentos de Execução do
Desenvolvimento de Produtos. Rev. Produto&Produção, Porto Alegre, v. 7, n. 1. 2004.
[9] TORRÈS, O.; JULIEN, P. A. Specificity and Denaturing of Small Business.
International Small Business Journal, London, v. 23, no. 4, 2005, 355-377.
[10] HUNTER, J. Improving organizational performance through the use of effective
elements of organizational structure. International Journal of Health Care Quality
Assurance incorporating Leadership in Health Services. V. 15, N. 3 (2002). P. xii-xxi
[11] SHARP, J. Quality in the Manufacture of Medicines and other Healthcare Products.
London: Pharmaceutical Press, 2000.
[12] MINTZBERG, H; VAN DER HEYDEN, L. Organigraphs: Drawing How Companies
Really Work. Harvard Business Review, September-October 1999, p. 87-94.
[13] BENAVENT, F. B.; ROS, S. C.; MORENO-LUZON, M. A model of quality
management self-assessment: an exploratory research. International Journal of Quality
& Reliability Management. Vol. 22, No. 5, 2005, p. 432-451.
[14] GARVIN, D. A. Gerenciando a qualidade: a visão estratégica e competitiva. Rio de
Janeiro: Qualitymark, 1992.
[15] STURKENBOOM, J.; VAN DER WIELE, T.; BROWN, A. An action-oriented
approach to quality management self-assessment in small and medium-sized
enterprises. Total Quality Management, Abingdon, v. 12, N. 2, 2001, 231-246.
[16] LIEBERMAN, B. A. Gambling with success: software risk management. Available at:
<http://www.therationaledge.com>. Accessed on: January 2005.
[17] FLANNERY, T. P.; HOFRICHTER, D.; PLATTEN, P. E. Pessoas, desempenho e
salários: As mudanças na forma de remuneração nas empresas. São Paulo: Editora
Futura, 1997.
[18] ARMSTRONG, J. S. Strategies for Implementing Change: an Experiential Approach.
Group & Organization Studies, Thousand Oaks, V. 7, No. 4, p. 457-475, 1982.
[19] RENTES, A. F. TransMeth – Proposta de uma Metodologia para Condução de
Processos de Transformação de Empresas. São Paulo: USP: 2000. Tese de Livre
Docência, Escola de Engenharia de São Carlos, Universidade de São Paulo, 2000.
[20] WOOD Jr., T. (coordenador). Mudança Organizacional: Liderança; Teoria do Caos;
Recursos Humanos; Logística Integrada; Inovações Gerenciais; Cultura
Organizacional; Arquitetura Organizacional. 3ed, São Paulo: Editora Atlas, 2002.
[21] THIOLLENT, M. Metodologia da pesquisa-ação. 13ª ed. São Paulo: Cortez,
2004.108p.
[22] DE HOLANDA, V. B.; RICCIO, E. L. A Utilização da Pesquisa Ação para Perceber e
Implementar Sistemas de Informações Empresariais. Available at:
<www.tecsi.fea.usp.br>. Accessed on: 08/03/05.
[23] BLUMENTRITT, T. Does small and mature have to mean dull? Defying the ho-hum
at SMEs. The Journal of Business Strategy; ABI/INFORM Global, v. 25, No. 1, 2004,
p. 27 – 33.
[24] GUSBERTI, T. D. Modelo de intervenção para processo de desenvolvimento de
produto farmacêutico para pequenas e médias empresas. Porto Alegre: UFRGS: 2006.
Dissertação (Mestrado em Engenharia de Produção), Escola de Engenharia,
Universidade Federal do Rio Grande do Sul, 2006
A Reference Model for the Pharmaceutical PDP
Management – an architecture
Abstract. The purpose of this article is to introduce the reference model architecture used in
the development of a reference model for the pharmaceutical Product Development Process.
The model was founded on renowned methods such as Concurrent Engineering, Stage
Gates and Product Based Business. It was developed using legislation, information from
interviews with professionals of Brazilian pharmaceutical companies, and information from
Project Management. This architecture supported the development of a reference model for
pharmaceutical PDP management, adjusted to Brazilian companies' reality and demand.
1 Introduction
Since the 1990’s product development has been considered under a broader
standpoint, in which the idea of development centered in technical activities was
substituted by the concept of business supported by product development. This
new concept has been called, afterwards, Product Development Process (PDP) [5-
9, 19]. Along the last twenty five years several product development approaches
were proposed, supported by methods and tools [6]. Each of them has particularly
contributed to the evolution of this knowledge area. Among the development
approaches, outstands those that are considered under the expression Integrated
Product Development (IPD) as Concurrent Engineering (CE) [22]; Stage Gates
methodology (SG) [6,7]; Product Based Business (PBB) [9,19]; and more recently
the Lean (L); Design for Six Sigma (DfSS) and Maturity Models (MM) considered
as new IPD approaches [23]. Some authors [1,16] discuss IPD as a separate
methodology, but Rozenfeld et al. [23] group CE, SG and PBB as being Integrated
Product Development expressions.
1 Production Engineering Post Graduate Program, Federal University of Rio Grande do Sul
(UFRGS). Osvaldo Aranha Street, 99, Porto Alegre, RS, Brazil, 90.035-190; Tel. (55) 51
3308-4298. E-mail: istef@producao.ufrgs.br; ribeiro@producao.ufrgs.br.
766 I. C. de Paula, J. L. D. Ribeiro
2 Reference Models
A reference model serves as a description of how a product development
process progresses, providing a common language, a minimum global vision of the
project development, and a perception of the expected contribution the project will
bring to the company. The reference model may assume several formats. Some
represent only the activities that must be performed in product development; other
models detail which procedures and methods are supposed to be adopted; they
may include evaluation criteria and may indicate which literature has to be
consulted in order to accomplish a specific activity. The model may be a
manuscript, a manual, or even a graphical representation available on an intranet
[23]. Models may be classified as generic, which may be adopted by different
production companies, or specific, which describe a particular type of product
development, such as the model proposed in this paper.
A qualitative approach was used for data collection, performed in two interview
blocks. The objective of the first block was to gather information for construction
of the reference model; the objective of the second was reference model
validation. The latter was performed by submitting the reference model to
pharmaceutical professionals' analysis of its performance and applicability in the
field.
Five national companies’ professionals were interviewed in first block, from
two large and two medium size companies, from the medicine and cosmetic fields.
The interviewed professional areas were those considered important for product
development and it was respected the company development team or professional
interview availability. The areas included: marketing, and Sales, R&D, Quality
assurance, Production planning and control, Medicine registration, Finances,
Information Technology and High Administratrion. A referee for generic product
registration from ANVISA (Agência Nacional de Vigilância Sanitária), the
Brazilian medicine registration body from the Government Health Ministry, was
also interviewed for the reference model construction. Only one referee was
interviewed at ANVISA, since the legislation information is of deterministic
nature. Concerning validation, the reference model was analyzed by professionals
from seven companies, three large and four medium sizes (medicine, veterinary
and cosmetic fields). The analysis was conducted in a collective approach inside
each company, where the interviewed group exchanged ideas and impressions
about the model. The interviews lengths were two hours in average, in both blocks,
and semi-structured questionnaires were used.
All interviews were recorded and afterwards transcribed. The first-block
interviews were analyzed through internal comparison: between the companies'
information, and between that information and the ANVISA referee's information.
The data gathered were important for construction of the reference model macro-
phases and activities. The second-block interviews were analyzed through
consensus ordination and importance ordination: the model elements about which
the interviewed professionals agreed or disagreed were identified, as well as the
elements they considered interesting or a matter of concern. Elements mentioned
by professionals from one company were compared with the opinions of
professionals from other companies, characterizing internal comparison in the
second block as well. The data gathered in the validation interviews were
important for changing, excluding, or including phases and activities in the
reference model, and for reinforcing its value as a reference for generic product
development in pharmaceutical companies.
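The consensus-ordination step described above can be sketched as a simple tally of agreement and disagreement per model element across the interviewed groups. The element names and votes below are invented placeholders, not the study's data:

```python
from collections import Counter

# Tally, per model element, how many interviewed groups agreed or
# disagreed with it (consensus-ordination sketch; data is invented).
mentions = [
    ("gate between phases", "agree"),
    ("gate between phases", "agree"),
    ("registration activities placement", "disagree"),
    ("gate between phases", "disagree"),
]
tally = Counter(mentions)

# Order (element, opinion) pairs by frequency: strongest consensus first.
by_consensus = sorted(tally.items(), key=lambda kv: -kv[1])
print(by_consensus[0])  # -> (('gate between phases', 'agree'), 2)
```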
The product development methods that support the reference model are
Concurrent Engineering, Stage Gates and Product Based Business.
Concurrent Engineering (CE) focuses on multidisciplinary teams and the co-
located, simultaneous performance of activities, mainly those that are independent.
The physical co-location of teams and multidisciplinarity will depend on the
company's culture, but the latter element is mandatory for development efficiency.
Much rework may take place when a new product's design is analyzed
sequentially, rather than simultaneously, by the specialists of each organizational
sector. The application of tools and methods such as IT (Information Technology)
and DfM (Design for Manufacturability), among others, is important [12,16,23].
Stage Gates (SG) is a methodology that focuses on two aspects: the business
character of product development and managerial control of the product
development process. The first aspect is guaranteed by the portfolio management
methodology, which analyzes which business-products the company is investing
in; it is normally performed along with Corporate Strategic Planning (CSP)
implementation. The process control aspect of SG is the phase transition
evaluation/control, performed systematically via process interruptions named
'gates'. The gates are generally located between important transition phases and
have a decision nature: process abortion, modification, or continuation. A gate
may include control checklists that confirm the conclusion of the most important
activities of the phase, although the central managerial question is 'will the
product development be continued into the next phase, changed, or aborted?' The
number of gates is a function of the risk level involved in the product
development process, but Cooper suggests six gates [6,7,23].
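A gate of the kind just described can be sketched in code as a checklist plus a three-way decision. This is a hedged illustration only: the gate name, checklist items, and decision labels are hypothetical, not Cooper's:

```python
from dataclasses import dataclass, field

# Minimal sketch of a Stage-Gate 'gate': a checklist confirms the
# phase's key activities, and the gate returns one of the three
# decisions described in the text (continue, modify, abort).
@dataclass
class Gate:
    name: str
    checklist: dict = field(default_factory=dict)  # activity -> done?

    def decide(self, risk_acceptable: bool) -> str:
        if not risk_acceptable:
            return "abort"          # process abortion
        if all(self.checklist.values()):
            return "continue"       # proceed to the next phase
        return "modify"             # rework the open activities first

# Hypothetical gate with one open checklist item.
gate3 = Gate("Gate 3: go to development",
             {"business case approved": True, "technical feasibility": False})
print(gate3.decide(risk_acceptable=True))  # -> modify
```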
Product Based Business (PBB) is a methodology that reinforces the innovation
mechanism, represented by two elements: the pair 'portfolio analysis-Corporate
Strategic Planning' (at the strategic level) and the activities of identification,
selection, and development of opportunities identified in the market (at the
tactical level). Business growth is a result of innovation in products or services,
since these must provide both income and profit. The income from mature and
new products maintains the innovation mechanism, since it may finance new
market evaluation and technology acquisition; in this sense, a feedback
mechanism is generated in terms of cash and information. Products must be
followed after launch throughout their lives (product life cycle management), and
their market performance must be measured. The information gathered from
products feeds back into the development process for a new 'portfolio analysis-
Corporate Strategic Planning', and the improvement cycle is maintained [9,19].
Summarizing, the IPD methodologies have in common: (i) a strong market
orientation, based on knowledge of client demand; (ii) the practice of screening
business opportunities, benchmarking competitors, and managing the portfolio as
support for deciding which projects to invest in; (iii) the practice of prior
technical, financial, and economic analysis of projects, before product
development; (iv) the continuous analysis of products after launch, providing the
feedback character of the PDP. The grouped practices (i) to (iii) form the Pre-
Development Stage of the product development process, and practice (iv) outlines
the Post-Development Stage (see Table 1).
The first effort in organizing a project is the thorough description of its scope.
Most authors in project management indicate the WBS (Work Breakdown
Structure) as an efficient tool for scope definition [15,21,24]. The WBS is a
hierarchical decomposition (a top-down flow chart) oriented to the project
deliverables, including internal and external project products, aiming to reach the
project goals. This tool organizes the project's global scope by dividing it into
work packages that are decomposed into activities. The WBS is therefore the first
step of project planning, since it provides the base from which the scope, time,
human resources, cost, quality, risk, and other plans derive. A WBS may be
presented as an indented list or graphically, as seen in Figure 1.
[Figure not reproduced: WBS diagram rooted at 'Pharmaceutical Reference
Model'.]
Figure 1. Partial view of the WBS of the pharmaceutical PDP reference model [20]
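The indented-list presentation of a WBS mentioned above can be sketched as a small tree-rendering routine. The work-package names below are hypothetical placeholders, not the actual packages of the pharmaceutical model shown in Figure 1:

```python
# Render a WBS as an indented list: a top-down decomposition into work
# packages and activities. All package names here are hypothetical.
wbs = {
    "Pharmaceutical Reference Model": {
        "Pre-Development": {"Portfolio analysis": {}, "Project charter": {}},
        "Development": {"Formulation design": {}, "Stability studies": {}},
        "Post-Development": {"Market follow-up": {}},
    }
}

def render(node: dict, depth: int = 0) -> list[str]:
    """Return one indented line per WBS node, depth-first."""
    lines = []
    for name, children in node.items():
        lines.append("  " * depth + name)
        lines.extend(render(children, depth + 1))
    return lines

print("\n".join(render(wbs)))
```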
4. Conclusion
The qualitative approach adopted in the construction of the reference model
proved efficient, since it permitted gathering information from professionals in
greater depth. The choice of medium and large companies was adequate: their
development processes and relationships with ANVISA presented particularities,
and the different types of business these companies conduct brought robustness to
the final reference model architecture. The same differences would not be so clear
if the interviews had included small companies; moreover, smaller companies
hardly ever produce generic medicines.
The interview with the ANVISA referee was important for the delineation of
legislation-related activities in all macro-stages and phases. Such details are not
represented in this paper.
The interviews in the construction phase were important for the reference
model configuration, since each company's PDP was modeled in the block 1
interviews. Besides that, professionals from seven pharmaceutical companies,
totaling 40 people with extensive experience in pharmaceutical product
development, expressed their impressions of the reference model's final graphical
representation in the block 2 interviews. All the interviewed experts recognized
the importance of PDP management, although some of the companies still have a
not fully formalized product development process. Further details of the final
graphical reference model are not part of this paper.
References
[1] Andreasen, MM, Hein L. Integrated Product Development. Berlin: Springer Verlag,
1987.
[2] Balant, LP, Gex-Fabry, M. Modelling During Drug Development. Eur J Ph Bioph
2000;50;13-26.
[3] Boggs, RW et al. Speeding development cycles. Res Tech Manag 1999; 42; 5; 33-38.
[4] Cavalla, D. Technology Providers and Integrators-a Virtual Architecture for Drug
R&D. In: Bristol, JA (ed), Annual Reports in Medicinal Chemistry London :Academic
Press, 1998;365-374.
[5] Clark, KB, Fujimoto, T. Product Development Performance: Strategy, Organization
    and Management in the World Auto Industry. Boston: Harvard Business School Press,
    1991.
[6] Cooper, RG. New Products: the factors that drive success. Int Mark Rev 1994;11;1; 60-
76.
[7] Cooper, RG, Edgett, SJ, Kleinschmidt, EJ. New product portfolio management:
practices and performance. J Prod Inn Manag 1999;16; 333-351.
[8] Corso, M, Pavesi, S. How management can foster continuous product innovation. Integ
Manuf Sys 2000;11;3; 199-211.
[9] Crawford, M, Benedetto, CA. New Products Management. 6ed. Boston: McGraw Hill,
2000.
[10] Getz, KA, Bruin, A. Speed demons of drug development. Phar Exe 2000;20;7;78-84.
[11] Gieschke, R, Reigner, BG, Steimer, JL. Exploring clinical study design by computer
simulation based on pharmacokinetic/pharmacodynamic modelling. Int J Clin Phar
Therap 1997;35; 469-474.
[12] Goldense, BL. Concurrent Product Development Processes: Structured Product
     Development Processes. Electro/92 Conference, Institute of Electrical and Electronic
     Engineers, Boston, Massachusetts, 1992.
[13] Hall, A H. Computer modeling and computational toxicology in new chemical and
pharmaceutical product development. Tox Letters 1998;102-103; 623-626.
[14] Hunt, CA, et al. A Forecasting Approach to Accelerate Drug Development. Stat Med,
1998;17;1725-1740.
[15] Kerzner, H. Gestão de Projetos. As melhores Práticas. Porto Alegre:Bookman, 2002
[16] Kormos, J. Lessons from the best. Machine Design 1998; 70; 22-52.
[17] Koufteros, X, Vonderembse, MA, Doll, WJ. Integrated product development practices
and competitive capabilities: the effects of uncertainty, equivocality and platform
strategy. J Op Manag, 2002; 20; 331-355.
[18] Moos, WH. Foreword: Combinatorial Chemistry at a Crossroads. In: Gordon, E; Kerwin
Jr, JFK (ed) Combinatorial Chemistry and Molecular Diversity in Drug
Discovery,1998;xi-xvii.
[19] Patterson, M L, Fenoglio, JA. Leading Product Innovation Accelerating Growth in a
Product-Based Business.New York :JohnWiley & Sons Inc, 1999.
[20] Paula, IC. Proposta de um modelo de referência para o Processo de Desenvolvimento
de Produtos Farmacêuticos. PhD. Thesis UFRGS, 2004.
[21] PMBoK Guide. 3ed. 2004. Available at:
     http://www.oodesign.com.br/forum/index.php?showtopic=1469.
[22] Prasad, B. Concurrent Engineering Fundamentals: Integrated Product Development.
     New Jersey: Prentice Hall, 1997.
[23] Rozenfeld, H et al. Gestão do Desenvolvimento de Produtos. São Paulo:Saraiva 2006.
[24] Verzuh, E. Gestão de Projetos. 6ed. Rio de Janeiro:Campus, 2000.
[25] Wechsler, J. FDA: A history of leadership, partnership, and transformation. Phar Tech
2001; 25; 14-22.
[26] Wermuth, CG. Strategies in the search for new lead Compounds or Original Working
hypotheses In: Wermuth, CG. The Practice of Medicinal Chemistry. London:
Academic Press, 1996;6; 82-98.
Supply Chain Collaboration
Product Development Process Managing in Supply
Chain
Andréa Cristina dos Santos a,1, Rafael Ernesto Kieckbusch b and Fernando Antonio
Forcellini c
a Grupo de Engenharia de Produto e Processo – GEPP, POSMEC, UFSC, Brazil.
b Pós-graduação em Engenharia de Produção – PPGEP, UFSC, Brazil.
c Grupo de Engenharia de Produto e Processo – GEPP, PPGEP, UFSC, Brazil.
Abstract: Today, businesses depend on strategic relations with their customers and
suppliers to create value, develop products, and obtain better market share. Designing
products to match the processes and supply chains, processes to match the product
platforms and supply chains, and supply chains to match the product platforms and
processes are the ingredients of today's fast-developing markets. If this co-design is done
well up front, with sufficient focus on product development process management, the
product will cost much less overall and time-to-market will decrease substantially.
However, the evidence supporting supplier integration is less clear than the evidence on the
positive contribution of customer integration in the product development process.
Considering this problem, the purpose of the present paper is to provide a path toward
identifying management techniques and practices for the involvement of suppliers in the
PDP. A model for managing the product development process in the supply chain is
proposed. The model focuses on the following factors: the outsourcing process, involving
suppliers in the PDP, knowledge management, and design considerations.
1 Introduction
This paper is introduced in the context of a study on the relations between the
supply chain and product design. The importance of beginning the study of supply
chains in product development process (PDP) is mainly because it is at this product
of lifecycle phase that the decisions responsible for 80% (eighty percent) of a
product’s final costs are made [6,15].
In recent years a large number of papers have been published emphasizing the
effects of the suppliers’ participation in PDP, stating the benefits and drawbacks of
the suppliers’ involvement [1, 2, 3, 4, 7, 9, 10]. One of the main drivers behind
involvement suppliers early in the in the PDP is to gain better leverage of
supplier’s technical capabilities and expertise to improve product development
1 Corresponding Author. E-mail: andreakieck@gmail.com
776 A. C. Santos, R. E. Kieckbusch, F. A. Forcellini.
[Figure not reproduced: development methodologies, including Sequential Design,
Concurrent Engineering, the Funnel model, Stage Gate, Integrated Product
Development, Lean, and Six Sigma.]
Supplier integration into the PDP motivated by the possibility of product
innovation is a critical process, requiring a long-term strategic partnership
between those involved. However, [14] points out that such partnerships rely on a
long-term relationship policy or on the establishment of alliances for the
development of both products and PDP managerial aspects.
Among the articles studied, of those contributing most to decision making
related to supplier involvement in the PDP, the model of [5], applied by [11, 12],
stood out for addressing the greatest number of decision-making factors. This
model unfolds activities from the strategic to the operational level in order to
decide what kind of relationship to have with a supplier and when to involve the
supplier in the PDP. However, the model focuses on systematizing technical
information, temporarily setting aside aspects related to process management.
[Figure: framework quadrants formed by strategic and operational activity levels crossed with the partnership process and the change process]
The first quadrant focuses on strategy activities in the partnership process: the
main activities in this group involve the definition of guidelines for supplier
involvement in PDP based on company strategies.
The second quadrant focuses on strategy activities in the change process: the
main activities involved in this group aim to define company structure to carry out
the change process in the company to have supplier involvement in PDP.
The third quadrant focuses on operational activities in the partnership process:
the main activities in this group involve the definition of technical activities for
supplier involvement in the PDP. A focus on (technical) engineering activities,
based on the established strategic activities, is the main characteristic of this
quadrant.
The fourth quadrant focuses on operational activities in the change process: the
main activities in this group are the definition of methods and tools for
implementing the change process in the company to enable supplier involvement
in the PDP.
8 References
[1] BIDAULT, F.; DEPRES, C.; BUTLER, C. New product development and early supplier
involvement (ESI): the drivers of ESI adoption. In: Proceedings of the Product
Development Management Association International Conference, Orlando, 1996.
[2] BIROU, L.; FAWCETT, S. Supplier involvement in integrated product development: a
comparison of US and European practices. International Journal of Physical
Distribution & Logistics Management, v.24, n.5, p.4-14, 1994.
[3] BIROU, L.; FAWCETT, S.; MAGNAN, M. Integrating product life cycle and
purchasing strategies. International Journal of Purchasing and Materials Management,
v.33, n.1, winter 1997.
[4] CLARK, K. B. Project scope and project performance: the effect of parts strategy and
supplier involvement on product development. Management Science, n.35, p.1247-1263,
1989.
[5] HANDFIELD, R. et al. Involving suppliers in new product development. California
Management Review, v.42, n.1, p.59-82, 1999.
[6] HANDFIELD, R.; NICHOLS JR., E. Supply chain redesign: transforming supply chains
into integrated value systems. Prentice Hall, 2002. 371 p.
Acknowledgments
We would like to thank the company involved in this research, CNPq (the Brazilian
National Council for Scientific and Technological Development), and IFM (Instituto
Fábrica do Milênio). The present study was carried out with the support of CNPq.
Level of knowledge and formalization of logistics and
SCM in the Brazilian automotive industries suppliers
Abstract. Vanguard companies have understood that real competition takes place not among
companies but among supply chains. The supply chain management (SCM) concept is an
extension of logistics: while logistics management is concerned with optimizing flows within
the organization, SCM recognizes that internal integration alone is not enough to achieve
competitiveness. This paper reports the level of knowledge and formalization of logistics and
SCM already established among suppliers of the Brazilian automotive industry, a segment
that can be considered representative of logistics and SCM practices in Brazil and that
therefore deserves attention. To this end, an applied, exploratory, descriptive, and qualitative
study was carried out through an inductive approach. The technical procedure used was a
survey: data were collected through questionnaires sent to fifty representative suppliers of
the automotive industry, with a return rate of 64%. The results show that the main
impediment to implementing the SCM concept is precisely the inconsistency, in the culture
of the companies surveyed, of attitudes toward logistics and SCM, particularly concerning
partnerships and the exchange of information.
1 Introduction
Presently the trend is to integrate all logistics activities of a company, from the
client's order to the supplier through to delivery to the final consumer, supported
by services and information that add value. To make this integration feasible, the
supply chain management (SCM) concept is paramount. This concept covers not
only business processes but also the relationships between clients and suppliers,
aiming at strategic partnerships that benefit all members of the supply chain.
1 Professor of the Graduate Course in Production Engineering at Federal University
of Technology – UTFPR – Address: Monteiro Avenue, km 04, Jardim Pitangui,
Ponta Grossa, Parana State, Brazil, Zip Code: 84.016-210 – Phone Number: 55 42
3220-4878 – Fax Number: 55 42 3220-4805 – E-mail: khatakeyama@uol.com.br
784 K. Hatakeyama, P. Guarnieri
improvement in the cost structure throughout the whole process, and reduction of
delivery time.
The concept of integrated logistics, according to [1], is an organized way of
perceiving all the processes that generate value for the final client, regardless of
where each process is executed, whether in the company itself or in another with
which it maintains some kind of relationship.
Supply chain management is the sharing of key business processes with the
other members of the supply chain, and it requires a conceptual change in the
behavior of companies as to how they manage their relationship with the goods
offered to the market [1].
In integrated logistics management the processes involved are Plan,
Supply, Make, and Delivery. Plan starts the logistics process; Supply is
addressed within supply logistics; Make is treated within production logistics; and
Delivery is managed within distribution logistics. On the other hand, [6] asserts that
SCM is a more complex task than the logistics management of the flows of goods,
information, and services from the point of origin to the point of consumption.
To sum up, this concept involves, besides integrated logistics management,
strategies for relationships with suppliers and customers that aim at longer-lasting
business, through partnerships based on trust and collaboration. These factors
generate sustainable competitive advantages: many companies have discovered
that through such partnerships they could improve product design, marketing
strategies, and customer service, and, beyond that, find ways to work together
more efficiently. A close relationship between supplier and buyer allows the skills
of both to be applied to mutual benefit.
Thus, according to [4], one of the main objectives of SCM is to serve the final
client with greater efficiency, by reducing costs and adding more value to final
products. Cost reduction has been obtained through the reduction of transactions,
paperwork, and information handling, besides the reduction of transport and
stocks, the elimination of quality control points, and the reduction of demand
variability for products and services. The creation of customized goods and
services, and the joint development of competences by suppliers and clients along
the productive chain, add value to products and increase profitability for the whole
chain.
In the automotive industry, approximately 12% of the material costs of
automakers are accounted for by suppliers' logistics costs. Thus, when the cause is
a lack of integration between suppliers and automakers, there is a great opportunity
for cost reduction. In the traditional adversarial relationship, the automaker could
reduce material costs by exerting pressure on the profit margins of component
suppliers [3].
Data collection for this research was carried out by sending questionnaires to
23 automakers and 50 suppliers of the automotive industry, which were selected in the
It is important to emphasize that this research did not aim at statistical
representativeness; it is not intended to assert that the findings represent the whole
population of industries in the segment researched, the intention being limited to
demonstrating a behavioral trend. Chart 01 presents the results found, together
with the comparison between automakers and suppliers.
Chart 01. Level of knowledge of the logistics and SCM concepts
                                  Automakers %   Suppliers %
Knowledge of the logistics concept:
  Enough knowledge                      44             44
  High knowledge                        44             25
  Medium level of knowledge             12             19
  Low knowledge                          -              9
  No knowledge                           -              3
Knowledge of the SCM concept:
  Enough knowledge                      67             44
  High knowledge                        33             28
  Medium level of knowledge              -             19
  Low knowledge                          -              9
  No knowledge                           -              -
knowledge, and only 12% a medium level of knowledge. These results are similar
for the suppliers: 44% asserted having enough knowledge, while 25% claimed a
high level and 19% a medium level.
On the other hand, regarding the SCM concept, 67% of automakers assert
having a sufficient level of knowledge and 33% a high level, while among the
suppliers surveyed 44% answered having enough knowledge, 28% a high level,
and 19% a medium level. These results demonstrate that the majority of suppliers
believe they know these two concepts which, being intrinsically linked, will yield
greater profitability to supply chain members. However, it was perceived that in
the automaker segment the SCM concept is more consolidated, since most
automakers are foreign companies.
Adoption of the SCM concept:
                    Automakers %   Suppliers %
  Yes, partially        56             59
  Yes, totally          44             25
  No                     -             16
also 22% have both. Furthermore, 71% of suppliers have a logistics department,
only 3% have an SCM department, 13% have both, and 13% have neither.
When these results are compared, it is possible to verify that suppliers have not
yet formalized SCM activities as the automakers have. Meanwhile, most suppliers
do formalize logistics management in their processes; however, there are still
suppliers that have neither one nor the other, a fact that can jeopardize production
supply.
Analyzing the adoption of the SCM concept by automakers, 56% asserted
adopting it partially and 44% totally. Likewise, regarding the suppliers, it was
noted that 59% adopt it partially, 25% totally, and 16% do not adopt it at all. It was
perceived that suppliers do not adopt the SCM concept totally, which can be
considered an aggravating factor: the SCM concept requires that all members of
the supply chain work jointly, otherwise the automakers cannot adopt the SCM
concept successfully. This concept is relatively recent in the business environment
and needs time to mature, which explains why it has not been implemented totally;
however, it is important to emphasize that, according to the literature, the
automotive segment is still the one demonstrating the greatest success in the
implementation of SCM.
Regarding the main motivating factors for the adoption of logistics
management by the companies surveyed, it was noticed that 32% of automakers
cite resource rationalization, 32% the reduction of logistics costs with stocks, and
21% the reduction of logistics costs with transport. Among the suppliers, 79%
intend to reduce their logistics costs with stocks, 71% cite the satisfaction and
demands of their clients (the automakers), 68% resource rationalization, and 54%
the reduction of logistics costs with transport.
These results confirm the data collected in the bibliographic research, which
emphasizes that stock and transport management activities are the major cost
generators for companies and thus impact directly on the service level offered to
clients. When well managed, these activities rationalize financial assets, human
resources, and time. The suppliers also present a high level of interest in this
adoption because of the satisfaction and demands of their clients (the automakers).
This fact demonstrates that the evolution and improvement of the suppliers is
motivated, or pulled, by the automakers.
The interest in adopting the SCM concept was also researched. The
automakers point out, respectively, the following factors: optimization and
integration of company processes, with 38% of answers; elimination of costs and
of activities that do not add value, with 21%; and integration with clients and
suppliers, also with 21%. The suppliers emphasize the following factors:
elimination of costs and of activities that do not add value (25%), integration of
clients and suppliers (20%), and optimization and integration of company
processes (18%). These results confirm the theory on the theme, which points out
as the main factors in SCM adoption the integration between suppliers and clients
aiming at partnerships, and the optimization of the processes of the company and
of its partners, besides the reduction of costs and unnecessary activities, always
with the main purpose of satisfying the final client, thereby maximizing the
profitability of the supply chain. In this sense, automakers and suppliers share the
same view, which is favorable to both and is a positive factor for success in
consolidating the SCM concept.
Concerning the perspective of implementing the SCM concept and,
consequently, logistics management, it was noted that 55% of automakers will do
so in the short term, 34% in the medium term, and only 10% intend to implement it
in the long term. Regarding the suppliers, 47% asserted that they will implement it
in the short term, 41% in the medium term, and 6% in the long term, while another
6% have no implementation perspective. Therefore, the suppliers and automakers
converge in their perspectives of SCM implementation.
The research also verified the factors that drive adoption of the SCM concept.
The interest of automakers is motivated mainly by the optimization of company
processes, a factor chosen by 39% of the companies surveyed; 30% chose the
interest in reducing costs, and 26% the obtaining of competitive advantage. In this
context, 27% of suppliers believe that the optimization of company processes is
the main driving factor for adopting the SCM concept, 24% pointed out cost
reduction as the relevant factor, whereas 20% cited obtaining competitive
advantage and 19% meeting the demands of clients and suppliers.
4 Final Considerations
5 References
[1] ARBACHE, F. S.; SANTOS, A. G.; MONTENEGRO, C.; SALLES, W. F.
Gestão de logística, distribuição e trade marketing. Rio de Janeiro: FGV, 2004, 75-
77.
Abstract. Many successful technology forecasting models have been developed, but little
research has explored the relationship between sample set size and forecast prediction
accuracy. This research studies the forecast accuracy of large and small data sets using the
simple logistic, Gompertz, and extended logistic models. The performance of the
models was evaluated using the mean absolute deviation and the root mean square error. A
time series dataset of four electronic products and services was used to evaluate model
performance. The results show that the extended logistic model fits large and small datasets
better than the simple logistic and Gompertz models. The findings also show that the
extended logistic model is well suited to predicting market growth with limited historical
data, as is typically the case for short lifecycle products and services.
1 Introduction
With the rapid introduction of new technologies, electronic products and services
are often replaced within a year. The product life cycle for electronic goods, which
used to be about ten years in the 1960s, fell to about five years in the 1980s and is
now less than two years for cell phones and computer games.
cycles become shorter, less data becomes available for analysis. Given this market
situation, it is important to use smaller data sets to forecast future trends of new
electronic products and services as they are introduced.
A product life cycle is typically divided into four stages: introduction, growth,
maturity, and decline [3]. At the introduction stage the product is new to the
market and product awareness has not yet been built, so the feature of this stage is
1 Professor (Management Science), National Chiao Tung University, 1001 Ta Hsueh Road,
Hsinchu, Taiwan 300; Tel: +886 (3) 5727686; Fax: +886 (3) 5713796; Email:
trappey@cc.nctu.edu.tw
794 C. Trappey, H-Y. Wu
slow sales growth. The growth stage is a period of rapid sales growth, since the
product is widely accepted by the market and sales volume booms. As sales
growth begins to slow down, the mature stage starts. The product life cycle curve
can therefore be illustrated as an S-shaped curve from the introduction stage to the
beginning of the mature stage. Since the lifecycle curve grows as a sigmoid curve,
growth curve methods can be applied to forecast the future trend of products.
Growth curves are widely used in technology forecasting [4, 6-10] and are
referred to as “S-shaped” curves. Technology product growth follows this curve
since the initial growth is often very slow (e.g., a new product replacing a mature
product), followed by rapid exponential growth when barriers to adoption fall,
which then falls off as a limit to market share is approached. The limit reflects the
saturation of the marketplace with the product or the replacement of the product
with another. The curve also models an inflection or break point where growth
ends and decline begins.
Many growth curve models have been developed to forecast the adoption rate
of technology based products with the simple logistic curve and the Gompertz
curve the most frequently referenced [1, 6]. However, when using these two
models to forecast market share growth, care must be taken to set the upper limit of
the curve correctly or the prediction will become inaccurate. Setting the upper limit
to growth can be difficult and ambiguous. If the product is a necessity or will likely be
popular for decades, then the upper limit can be set to 100% of the market share.
This means that the product will be completely replaced only after everyone in the
market has purchased the product. However, when marketers consider new
technology products such as a computer game or a new model cell phone, the value
for the upper limit to market share growth can be difficult to estimate. That is, a
computer game can be quickly replaced by another game after only reaching 10%
market share.
In order to avoid the problem of estimating the market share growth capacity
for the simple logistic and the Gompertz models, Meyer and Ausubel [8] proposed
the extended logistic model. Under this model, the capacity (or upper limit) of the
curve is not constant but is dynamic over time. Meyer and Ausubel also proposed
that technology innovations do not occur evenly through time but instead appear in
clusters or “innovation waves.” Thus, they formulated an extended logistic model,
which is a simple logistic model with a carrying capacity K(t) that is itself a
logistic function of time. The saturation or ceiling value therefore becomes
dynamic and can model the pulses of bi-logistic growth. Bi-logistic growth
represents growth in which market share seems to reach a limit but then is
suddenly rejuvenated and begins again. The researchers thus extend the constant
capacity K of the simple logistic model to the carrying capacity K(t). This study
applies this idea to the study of electronics products, with K(t) representing an
extended logistic model.
The proposition of this research is that the extended logistic model, with its
time-varying capacity, will perform better than the simple logistic model and the
Gompertz model, which require a constant capacity to be set. Therefore, this
research studies the forecast accuracy of large and small data sets using the simple
logistic, Gompertz, and extended logistic models. A time-series dataset describing the
An Evaluation of the Extended Logistic, Simple Logistic, and Gompertz 795
Taiwan market growth rates for two types of electronic products and two types of
services were used to evaluate model performance.
Most biological growth follows an S-shaped, or logistic, curve, which best
models growth and decline over time [8]. Since the adoption of technology and
technology-based products is similar to biological growth, the simple logistic
model is widely used for technology forecasting.
The simple logistic curve is expressed as

  y_t = L / (1 + a e^(-bt))

where L is the upper bound of y_t, a describes the location of the curve, and b
controls the shape of the curve. To estimate the parameters a and b, the equation of
the simple logistic model is transformed into a linear function using natural
logarithms. The linear model is expressed as

  Y_t = ln[y_t / (L - y_t)] = -ln a + bt

and the parameters a and b are then estimated using simple linear regression.
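As a concrete illustration of the transform-and-regress procedure described above, the following sketch fits the linearized simple logistic model by ordinary least squares. It is a minimal example under stated assumptions, not the authors' code: the function names and the use of NumPy's `polyfit` as the linear-regression routine are ours.

```python
import numpy as np

def fit_simple_logistic(t, y, L):
    """Estimate a and b in y_t = L / (1 + a*exp(-b*t)) by linear
    regression on Y_t = ln(y_t / (L - y_t)) = -ln(a) + b*t."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    Y = np.log(y / (L - y))             # linearized response
    b, intercept = np.polyfit(t, Y, 1)  # slope = b, intercept = -ln(a)
    return np.exp(-intercept), b        # recovered a, b

def simple_logistic(t, L, a, b):
    """Evaluate the fitted S-curve."""
    return L / (1.0 + a * np.exp(-b * np.asarray(t, dtype=float)))
```

With noise-free data the transform recovers the generating parameters exactly; with real market data the regression gives least-squares estimates on the log scale, so the upper bound L must still be chosen in advance.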
The Gompertz model was first used to calculate mortality rates in 1825 but has
since been applied to technology forecasting [5]. Although the Gompertz curve is
similar to the simple logistic curve, it is not symmetrical about the inflection point,
which occurs at t = (ln a)/b. The Gompertz model reaches the point of inflection
early in the growth trend and is expressed as

  y_t = L e^(-a e^(-bt))

where L is the upper bound, which must be set before estimating the parameters a
and b. As with the simple logistic model, natural logarithms are used to transform
the original Gompertz model into the linear equation

  Y_t = ln[ln(L / y_t)] = ln a - bt

and the parameters are then estimated.
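The Gompertz fit follows the same pattern via the double-log transform; again a sketch with assumed function names, using least squares on the linearized equation.

```python
import numpy as np

def fit_gompertz(t, y, L):
    """Estimate a and b in y_t = L*exp(-a*exp(-b*t)) by linear
    regression on Y_t = ln(ln(L / y_t)) = ln(a) - b*t."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    Y = np.log(np.log(L / y))               # double-log transform
    slope, intercept = np.polyfit(t, Y, 1)  # slope = -b, intercept = ln(a)
    return np.exp(intercept), -slope        # recovered a, b

def gompertz(t, L, a, b):
    """Evaluate the (asymmetric) Gompertz S-curve."""
    return L * np.exp(-a * np.exp(-b * np.asarray(t, dtype=float)))
```

Note that the transform requires every observation to satisfy 0 < y_t < L, which is another way the choice of the upper bound L constrains these two models.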
The simple logistic model and the Gompertz model assume that the capacity for
technology adoption is fixed, so there is an upper bound to growth in these models.
However, the adoption of new technology is seldom constant and changes over
time. As shown by Meyer and Ausubel [8], the original form of the simple logistic
model,

  dy/dt = b y (1 - y/k),

is extended to

  dy/dt = b y (1 - y/k(t)),

where k is the constant upper limit of the logistic curve and k(t) is the time-varying
capacity, itself a function similar to a logistic curve.
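A sketch of how a time-varying capacity changes the dynamics: the logistic ODE above is integrated numerically while k(t) itself rises logistically. The particular parameterization of k(t) below is an illustrative assumption of ours, not Meyer and Ausubel's exact form, and the function name is hypothetical.

```python
import numpy as np

def extended_logistic_path(t_end, b, k0, k1, bk, tk, y0=1.0, n=20000):
    """Forward-Euler integration of dy/dt = b*y*(1 - y/k(t)), where the
    carrying capacity k(t) = k0 + k1 / (1 + exp(-bk*(t - tk))) rises
    logistically from k0 toward k0 + k1 (illustrative form)."""
    grid = np.linspace(0.0, t_end, n)
    dt = grid[1] - grid[0]
    y = np.empty(n)
    y[0] = y0
    for i in range(1, n):
        k_t = k0 + k1 / (1.0 + np.exp(-bk * (grid[i - 1] - tk)))
        y[i] = y[i - 1] + dt * b * y[i - 1] * (1.0 - y[i - 1] / k_t)
    return grid, y
```

Adoption first saturates near the initial capacity k0 and is then "rejuvenated" as k(t) rises, reproducing the bi-logistic pulse behavior described above.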
There are many statistics used to measure forecast accuracy. In this study, the
Mean Absolute Deviation (MAD) and Root Mean Square Error (RMSE) are used
to measure performance. The mathematical representations are shown below:
  MAD = (1/n) Σ_{i=1}^{n} |f_i − f̂_i|

  RMSE = √[ (1/n) Σ_{i=1}^{n} (f_i − f̂_i)² ]

where f_i is the actual value at time i, f̂_i is the estimate at time i, and n is the
number of observations. These measurements are based on the residuals, which
represent the distance between the real data and the predicted data. Consequently,
if the values of these measurements are small, then the fit and prediction
performance is acceptable.
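The two error measures translate directly into code; a small helper sketch, with function names of our choosing:

```python
import numpy as np

def mad(actual, forecast):
    """Mean Absolute Deviation of the residuals."""
    r = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    return float(np.mean(np.abs(r)))

def rmse(actual, forecast):
    """Root Mean Square Error; penalizes large residuals more than MAD."""
    r = np.asarray(actual, dtype=float) - np.asarray(forecast, dtype=float)
    return float(np.sqrt(np.mean(r ** 2)))
```

Because RMSE squares the residuals before averaging, a model with one large miss scores worse on RMSE than on MAD, which is why reporting both gives a fuller picture of fit quality.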
There are 31 data points for color TV set purchases (from 1974 to 2004) and 35
data points for telephone purchases (1964 to 2004). Twenty-six quarterly data
points for ADSL subscriptions were collected from the second quarter of 2000 to
the third quarter of 2006, and 19 quarterly data points for mobile Internet
subscriptions were collected from the third quarter of 2000 to the third quarter of
2006.
Figure 1 shows the saturation rate for the four products. The growth rates for
color TVs and telephones follow an S-shaped curve, and the curves for ADSL and
mobile Internet subscriptions also have a basic sigmoid shape. As can be seen in
Figure 1, it took 20 years (1964-1984) for the telephone to reach the mature stage,
which is characterized by a slower growth rate than that of the growth stage in the
product lifecycle. Color TV likewise took 10 years (1974-1984) to enter the mature
stage. However, ADSL and mobile Internet subscriptions took only 5 years to
begin the mature stage. Therefore, the data for color TV and telephones were
categorized as long life cycle products, while the data for ADSL and mobile
Internet were categorized as short life cycle products.
These four datasets are fitted to the models after removing the last five data
points, which were reserved to test the prediction accuracy of the models. Further,
to compare the performance of the three models on large and small data sets, the
data for color TVs and telephones were used to represent longer lifecycles (larger
data sets), since these two products have more than 30 years of historic data.
Compared to color TVs and telephones, the data for ADSL and mobile Internet
subscriptions rely on small data sets of less than 6 years and represent shorter
lifecycles.
Figure 1. Market growth (saturation rate over time) for color TVs, telephones, ADSL
subscriptions, and mobile Internet subscriptions
4 Analysis
The first step is to fit the data to the simple logistic, Gompertz, and extended
logistic models. After reserving the last five data points, the coefficients of the
models and the MAD and RMSE statistics are computed. The second step uses the
fitted models to forecast the five reserved data points and compares the forecasts
with the true observations.
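The two-step procedure just described (fit on all but the last five observations, then score the reserved five) can be sketched generically. The function names and the synthetic series below are ours, and any of the three fitted models can be plugged in through the `fit`/`predict` arguments; the data are illustrative, not the paper's Taiwan market series.

```python
import numpy as np

def holdout_eval(t, y, fit, predict, n_test=5):
    """Fit a growth model on all but the last n_test points, forecast the
    reserved points, and return (MAD, RMSE) on the hold-out set."""
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float)
    params = fit(t[:-n_test], y[:-n_test])   # step 1: fit on history
    y_hat = predict(t[-n_test:], *params)    # step 2: forecast reserved points
    resid = y[-n_test:] - y_hat
    return float(np.mean(np.abs(resid))), float(np.sqrt(np.mean(resid ** 2)))
```

Smaller hold-out MAD and RMSE indicate better prediction, matching the evaluation rule used for Table 1.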
Table 1 summarizes the fit and prediction performance of the three models. The
evaluation rule is that the smaller the values of MAD and RMSE, the better the
prediction performance. The results show that the extended logistic model has the
best fit and prediction performance for both long and short data sets. Thus, the
extended logistic model is suitable for predicting both long and short lifecycle
products.
The simple logistic and the Gompertz models require that the value of the
upper limit be set correctly; otherwise the accuracy of the models can suffer. For
the long data sets, the upper limit for color TVs and telephones is easy to see and is
set at 100%. However, for the short data sets, the capacity for ADSL and mobile
Internet service is difficult to determine. Thus, for the simple logistic and the
Gompertz models, different upper limits are set for the short data sets. The upper
limit for the saturation rate of ADSL is set to 100%, 50%, or 30%, whereas the
upper bound for mobile Internet is set to either 100% or 50%. As shown in Table 1,
the extended logistic model with time-varying capacity yields the best fit and
prediction performance.
logistic model is most suitable, and research is planned to conduct a more rigorous
and systematic testing of the model as proposed by Meade and Islam [7].
Table 1. Performance measures for the extended logistic, Gompertz, and the simple logistic
models
5 References
[1] Bengisu M, Nekhili R. Forecasting emerging technologies with the aid of science and
technology databases. Technological Forecasting and Social Change 2006;73:835-844.
[2] Chen C-P. The Test of Technological Forecasting Models: Comparison between
Extended Logistic Model, Fisher-Pry Model, and Gompertz Model. Master thesis,
National Chiao Tung University, 2005.
[3] Kotler P. Marketing management. 11th ed. 2003, New Jersey: Prentice Hall.
[4] Levary RR, Han D. Choosing a technological forecasting method. Industrial
Management 1995;37:14.
[5] Martino JP. Technological Forecasting for Decision Making. 3rd ed. 1993, New York:
McGraw-Hill.
[6] Meade N, Islam T. Forecasting with growth curves: An empirical comparison.
International Journal of Forecasting 1995;11:199-215.
[7] Meade N, Islam T. Technological forecasting--model selection, model stability, and
combining models. Management Science 1998;44:1115.
[8] Meyer PS, Ausubel JH. Carrying Capacity: A Model with Logistically Varying Limits.
Technological Forecasting and Social Change 1999;61:209-214.
[9] Meyer PS, Yung JW, Ausubel JH. A Primer on Logistic Growth and Substitution.
Technological Forecasting and Social Change 1999;61: 247-271.
[10] Rai LP, Kumar N. Development and application of mathematical models for
technology substitution. PRANJANA 2003;6: 49-60.
a School of Economics – University of Applied Sciences at Zwickau, Germany.
b School of Informatics – University of Edinburgh – UK.
c School of Management and Economics – University of Edinburgh – UK.
Abstract: Regional clusters at different stages in their life-cycle provide opportunities for
benchmarking regional and trans-regional strategies for innovation and change management.
The paper reports on trans-regional knowledge transfer and benchmarking strategies used to
enhance the alignment of SMEs, operators, and other stakeholders in regional oil and gas
clusters in two regions with ongoing projects. These were part of separate regional
initiatives to enhance innovation and competitiveness in the supply chain through support
for SMEs as key repositories of niche expertise and local knowledge relevant to the
competitiveness of large operators in particular and to the cluster and the region in general.
The Western Australia and UK North Sea oil and gas clusters are used as examples to
highlight the recurring sociotechnical problem/solution scenarios that arose in facilitating
communication and coordination of diverse stakeholders within and across regional
clusters. This is part of a wider set of case studies developed by the network in the oil and
gas and automotive supply chain sectors.
1 Introduction
The competitiveness of regions increasingly depends on their innovative ability.
Clusters can be innovation drivers and are therefore key to economic development.
The term “cluster”2 was coined by Porter, who describes it as a geographical
concentration of sector-specific companies, suppliers, service providers and
associated institutions (e.g. universities, research institutes, funding bodies), all of
1 Corresponding Author E-mail: Gudrun.Jaegersberg@fh-zwickau.de
2 This article does not discuss different concepts of clusters. We refer here to clusters as
Porter [1] defines them.
802 G. Jaegersberg, J. Ure, A.D. Lloyd
which are interconnected [14].3 Clusters are linked by extended enterprises and
their supply chains. Therefore, some successful regions have set up knowledge-
sharing networks [20, 18, 12, 16, 25] across clusters, and this is increasingly the
focus of research and development funding, in particular in the European Seventh
Framework Programme [4].4
A cluster-based initiative is currently being carried out by the authors in the
Western Australian (WA) oil and gas supply chain to identify stakeholder
perceptions of gaps, barriers, and opportunities for innovation in the supply chain,
and also to support knowledge-sharing between the oil and gas regions of WA and
the UK North Sea. This highlights the need to consider strategies that can develop
the human communication infrastructure required for stakeholders to identify gaps
and barriers and to coordinate or reuse strategies and practices. In this case the
focus is on strategies that can facilitate SME-led innovations to meet the needs of
large operators in the supply chain.
2 Theoretical Background
Porter’s theory of national competitive advantage [14] can help to understand the
structure and dynamics of clusters. He suggests that four broad attributes of a
nation shape the environment in which local firms compete: factor conditions
(basic factors such as natural resources and advanced factors such as
communication infrastructure, sophisticated and skilled labour, research facilities
and technological expertise); firm strategy (different management ideologies),
structure and rivalry; related and supporting industries; and demand conditions
(sophisticated customers in the home market create pressure for innovation and
quality). These four determinants promote or impede the creation of competitive
advantage and constitute the so-called diamond which is a mutually reinforcing
system, where the effect of one determinant is contingent on the state of others.
[Figure: Porter's diamond — factor conditions, demand conditions, firm strategy, structure and rivalry, and related and supporting industries, with chance and government as external influences]
3 The phenomenon of “industrial districts” was discussed by the British economist A.
Marshall [2] as early as 1890.
4 Cf. also the Aho report 2006 [1].
Trans-regional Supply Chain Research Network 803
Porter argues that two additional variables can influence the national diamond:
chance (e.g. innovation) and government (e.g. policies such as investment in
education, or incentives).
According to Porter, a precondition for a cluster to emerge is a critical mass of
companies that agglomerate in spatial proximity and start combining their activities
along a value chain. The identification of clusters of related industries is one of the
most influential findings of Porter's research. The diamond model is an ideal tool
for analysing the hard factors of a cluster and for identifying a cluster's structure.
However, although it stresses that mutual reinforcement between the determinants
is key to successful clusters, it offers less clarity about how to build up the
communication structures (an advanced factor) and align the knowledge and
interests of diverse and distributed stakeholders towards common ends.
In global distributed markets clusters are networked globally through extended
enterprises [3] and their supply chains [2]. As clusters, like organisms, pass
through a life-cycle [15] - they are born, evolve and decline - there is an
opportunity for cluster cooperation in a range of ways such as strategic process
benchmarking [7] in core areas such as innovation, where emerging clusters can
learn from more mature ones.
Innovation itself may refer to changes to products, processes or services. Tidd et
al. [21] refer to four types of innovation as Product, Process, Position and
Paradigm. From an organisational perspective it may be linked to performance and
growth through improvements in efficiency, productivity, quality, competitive
positioning, market share, etc. From a change management perspective it is
increasingly perceived as a complex process that links many different players
together - not only developers and users, but a wide variety of intermediary
organisations such as consultancies, standards bodies, etc. Sawhney and Parikh [17]
and Molina [11] suggest that much of the most successful innovation occurs at the
boundaries of organisations and industries, where the problems and needs of users
and the potential of technologies can be aligned. This requires the development of
sociotechnical constituencies, "dynamic ensembles of technical constituents and
social constituents", through a process of sociotechnical alignment - creation,
adoption, accommodation (adaptation) and interaction (interrelation) - to achieve
these common ends.
The evolving WA oil and gas cluster can be regarded as the engine of economic
development in WA, where the oil and gas industry is of strategic importance for a
wide range of stakeholders in government, education and industry. All four
determinants of the Porterian model can be found in the structure of the cluster. Oil
and gas resources (basic factor) off the WA coast have stimulated related and
supporting national and domestic industries to cluster around global operators such
as Chevron, Woodside, BP, Halliburton, Shell, Agip and Schlumberger. Advanced
factors such as communication infrastructure, sophisticated and skilled labour,
research facilities and technological know-how, which, according to Porter, are the
most significant for the fostering of competitive advantage, are present in WA.
However, the core question is how can conditions be created so that all
determinants mutually reinforce each other and effectively support the change
process so that the diamond begins to shine on value creation.
This is one of a set of regional studies where researchers and students on placement
have used collaborative action research with a range of stakeholders in the supply
chain to identify gaps and barriers, and stakeholder requirements. Students on
placement in WA initiated interviews with a wide range of stakeholders, to identify
gaps and barriers, moving from open interviews to more formalised questions with
larger reference groups.
[Figure 2: matrix of existing WA groups (SMEs, O&G ICC Focus Gr., ICNL, SAMP, ICN WA, VRS, WAERA) against stakeholder requirements - R&D of new technology, testing of technology, commercialisation of technology, R&D/industry interactions, R&D/commercialisation-partner interactions, R&D/R&D interactions, public-private sector partnerships, operator/supplier interactions (national and international), supplier/supplier interactions, international benchmarking of best practices, and strategies for recurring communication problems.]
5 In many respects it is the equivalent of the benchmarking process, and ideal for
contexts where knowledge transfer and negotiated change are important outcomes.
This involved an extensive number of institutions, groups and initiatives that
had already been created to facilitate coordination, strategic development and
sustainable economic development within the WA oil and gas industry. Students
were able to work across boundaries, often bringing together groups who would
not normally be able to discuss problems and strategies in this way, and appeared
to have a catalytic effect in raising issues and fostering exchange. The study
revealed that there is great potential in terms of organisational infrastructure
(advanced factors, see above) to create a prosperous oil and gas cluster; however,
there are large gaps in the fulfilment of stakeholder requirements for success, as
indicated in Figure 2. This initiative has facilitated collective awareness and
understanding of the gaps and barriers among stakeholders and provides a template
for development.
In addition, the study found that interaction among all of the above-mentioned
stakeholders was insufficient, with a significant lack of communication
and coordination across the entire industry [19]. Thus, the possibilities of
transferring competitive knowledge are very limited; technology diffusion cannot
take place to the extent required. There was also evidence of a lack of awareness
among large and medium sized operators of technical innovations by SMEs in the
region which could address operational problems they were encountering. This was
one of a series of gaps in the innovation process. Gaps such as these were a focus
of the research, as for example in Figure 3.
Figure 3. Identifying barriers in the WA Oil and Gas Supply Chain [19].
used in the North Sea PILOT project [13].6 The growing interest in strategies for
developing innovation as a source of competitiveness (as opposed to the previous
emphasis on cost-cutting) is also reflected internationally in current research by the
European Community to identify strategies and practices that can support
innovation [1].
The competitiveness of the UK oil and gas supply chain has been the focus of
different government- and industry-sponsored initiatives: initially as part of a cost-
cutting model, in which the LOGIC [9] organization took a significant role, and
latterly through support for SME-based innovation with the PILOT project, which
faced issues very comparable to those uncovered by researchers in the WA project7
in relation to the need to support SMEs. These were seen as holders of the niche
and local expertise underpinning innovation and the implementation of complex
technologies in difficult local terrains. It is notable that in both regions
From a benchmarking perspective, the initial exchanges with the PILOT project
meant that shared ideas could be implemented more effectively and some new ones
considered - notably the use of templates for fair contracting and payment
practices as an industry standard, and the setting up of operator/contractor/SME
work groups to look at key issues. It became evident from discussion that the role
of PILOT itself was perceived as effective because of the brokering role of the
team, with very senior representatives of all stakeholders, and because senior
ministerial commitment underlined the importance of participation and the
potential of the process to execute change (i.e. not a talking shop). The intention is
to extend this trans-regional process through the linkage now established at
different levels across the regions through governmental, industry, support and
higher education organisations.8 From a socio-technical perspective, the technical
networking within
6 PILOT is a joint programme involving the UK Dept. of Trade and Industry and the oil and gas industry, with the aim of securing the long-term future of the industry in the UK.
7 One difference specific to mature fields was the more pressing requirement for innovative technological solutions for extraction from the difficult pockets of recoverable oil and gas, which can extend the life of a field.
8 Cp. the tri-partite alliances between government, industry and education in the German-Brazilian auto supply chain project, where they have proven successful [23, 24].
5 Conclusions
The paper demonstrates an approach to supporting cluster development within and
across regions through the provision of ‘shared spaces’ for collaborative
stakeholder communication as a basis for aligning efforts towards the common end
of innovation in the supply chain. Porter's theory is a helpful tool for understanding
the structure of successful clusters; however, less is known about the means by
which the stakeholders in these complex, dynamic, socio-technical systems can
provide the human communication infrastructure through which some of these
processes need to be realized. The research outcomes from the project in Australia
reinforce the findings of earlier work on the automotive clusters in Germany and
Brazil, and parallel work on Grid-enabled systems as complex hybrid systems [24]
aligning technical infrastructures with heterogeneous, distributed human
infrastructures across national boundaries. In conclusion, the intention has been to
demonstrate the value of trans-regional action research networks and collaborative
benchmarking as a framework for developing and sharing policies and practices
between regions, as a means of enhancing innovation and competitiveness both for
the region itself and for the cluster in the wider global context [26].
6 Acknowledgements
The authors would like to thank the many collaborators in participating regions,
and national funding agencies who have supported the project in different regions.
References
[1] Aho E. Creating an Innovative Europe. European Communities, Belgium, 2006.
<http://europa.eu.int/invest-in-research/>.
[2] Christopher M. Logistics and Supply Chain Management. London: FT Prentice
Hall/Pearson Education, 1992.
[3] Dyer JH. Collaborative Advantage: Winning through Extended Enterprise Supplier
Networks. Oxford: OUP, 2000.
[4] European Seventh Framework Programme. Available at:
http://cordis.europa.eu/fp7/capacities/home_en.html. Accessed March 1st 2007.
[5] Jaegersberg G, Hatakeyama K, Ure J, Lloyd AD. Leveraging regional, organizational
and human resources to create competitive advantage: a new framework for
808 G. Jaegersberg, J. Ure, A.D. Lloyd
Abstract. New product development is a business process with many functional interactions
within a company. The concurrency of these interactions must be managed in order to meet
the pre-established schedule, budget and scope. The issue of procurement is central to a
successful project. When a new project belongs to an aerospace program, this issue is even
more crucial; and when the aerospace program belongs to a developing country such as
Brazil, the core issue involves its budget and schedule planning. This article addresses the
question of procurement in a small company designing a new satellite camera for the
Brazilian Government. The procurement process was mapped, a monitoring structure was
created and performance indicators were developed. The performance indicators are
discussed to understand the leverage of each kind of purchased item and each process step
on costs and schedule.
1 Introduction
In developing countries, new product development (NPD) is still an obstacle rather
than a common practice. The same holds true for aerospace projects. What happens
when a small Brazilian company attempts to develop a new product for the
aerospace industry?
The purpose of this article is to address that question. However, an examination
of the entire picture would require the analysis of too many aspects. Therefore, this
article analyzes solely the question of procurement, since a new product in the
aerospace sector requires the importation of numerous and miscellaneous items.
The company under study has been engaged in the development of a new
aerospace product since December 2004, to which end it set up a project
1 PhD, Senior Engineer, OPTO ELETRÔNICA S.A., Joaquim R. de Souza, Jardim Sta.
Felícia, São Carlos, BR; Tel: +55 (16) 2106 7000; Fax: +55 (16) 3373 7001; Email:
sanderson@opto.com.br; http://www.opto.com.br/mrm
810 S. Barbalho, E. Richter, M. Stefani
All the projects for the BAA follow the phasing structure prescribed by the
European Aerospace Committees. Therefore, there is a breadboard model, two
engineering models - one for environmental and another for functional tests - a
qualification model, and flight models. Although the qualification and flight models
contain a large portion of imported items, their purchase is the responsibility of the
BAA, according to the project contract. These items involve microelectronic
components, whose trade is constrained by US anti-terror legislation. The purchase
of the engineering models and GSEs is the supplier's responsibility.
The BAA’s supplier selection criteria include penalty clauses. The amount of
money foreseen in these clauses makes it less costly to invest in project
management than to pay the fines. This article discusses the delivery of
engineering models and GSEs.
shelf item and is purchased on the domestic market, a quotation is requested and
submitted to senior management for approval. Senior management decisions are
made after price and timetable negotiations have been completed.
After the negotiation, a supplier contract is signed, after which the item goes
through the normal process of manufacturing, intercontinental transportation and
customs release. The steps outlined in Figure 1 are related to parts, materials and
processes (PMP) and configuration management processes. However, a discussion
about these steps is outside the scope of this article.
Because of the large number of steps in the mapped process, they have been
summarized and their number reduced to allow for monitoring of the process,
especially for imported items. Figure 2 presents the major milestones identified.
The plan was that a date would be set for each milestone and its lead-times
monitored. The process illustrated in Figure 1 was mapped in August 2006 and a
weekly monitoring began in September 2006. A person was appointed to purchase
every item required for the aerospace projects, and to monitor the status of each
item. This employee was allocated to the project office shown in Figure 1.
One person was appointed to own each milestone, and the monitoring process
was discussed with each of them. These milestone owners report weekly to the
project management group on the schedule and status of each purchased item.
The last step in structuring the monitoring process was to create performance
indicators and a procedure for periodically monitoring and reporting their status.
The indicators are monitored weekly and consolidated on a monthly basis.
5 Findings
Figure 3 depicts the number of acquired items monitored, showing the imported
and domestic items purchased per month. The data were systematized on February
15, 2007. This figure reveals that every domestic purchase process was concluded
while the import processes were not. In fact, there are import processes dating back
to July 2006 whose status is still open. Taking into account only purchases initiated
after September 2006, one can see that almost 43% are from other countries. In the
company’s traditional projects, this number is less than 5%.
This analysis is complemented by Figure 4, which compares item lead-times.
The materials are classified according to their technological background, while
equipment is classified as fixed assets and software is listed explicitly. Note that
the lead-times of imported mechanical and electronic items are 11 and 8 times
longer, respectively, than the national lead-times.
Procurement and Importing in New Product Projects 813
[Figure 3: numbers of importing and national purchase processes initiated and concluded ('available') per month, July-December.]
[Figure 4: average item lead-times by class - equipment, optical items, mechanical items, electronic items and software.]
Figure 5 shows the percentage of the item price in the overall purchase cost for
both national and imported items, revealing a very substantial difference: the price
of imported items represents only 25% of the overall cost of the import process,
whereas national items rate up to 98% of total cost. This demonstrates how
expensive the importation process in Brazil is.
From the number of national and imported items delivered, an average time can
be established between the beginning (the order) and the end (the component’s
transfer to assembly) of the purchase process, or simply the lead-time of the
process of acquiring new items. Figure 6 presents the lead-times of both national
and imported items according to the month when they were ordered. A mean lead-
time for each monthly average was calculated for national and imported items to
analyze the trends.
The line of averages for the import process is higher than the national one by a
factor of about six. The importation lead-time fell after the monitoring process
started, and the difference between national and imported item lead-times also
decreased.
Figure 5. Average of item prices over the total cost of purchasing them, per month (July-December), for the importing process and national items.
[Figure 6: average lead-times of imported and national items by order month (July-December), with the mean of the monthly averages plotted for each series.]
[Figure: breakdown of the import lead-time into stages - request to approval, approval to transportation, transportation to arrival, arrival to customs approval, and customs approval to delivery for assembly.]
6 Final considerations
The goal of the data presented in this paper is to help managers make decisions
about purchasing and design strategies.
As reported in the literature, there are considerable delays and cost overruns
involved in procurement. In November 2006, the three GSE models were to be
delivered to the BAA. However, due to importation lead-times and costs, this
delivery was postponed.
The company’s management has opted to strictly monitor imported items,
especially mechanical and electronic items, and attempts have been made to
nationalize them. A team has been set up to study the composition of the period
elapsed between an item’s arrival in Brazil and its release from customs. This
period represents one month of the total lead-time and almost all the cost overruns.
The team is trying to apply a lean office program to the overall procurement
process to decrease the other partial lead-times.
Abstract. Outsourcing refers to the arrangement by which an organization contracts a
supplier to perform a specific activity or service. The outsourcing of some activities has
become a common practice in industry: it significantly reduces production costs and, at the
same time, adds value to the business organization. However, it is necessary to measure the
performance of these activities. Data Envelopment Analysis (DEA) is a non-parametric
method for measuring comparative performance, with a wide range of applications in
measuring comparative efficiency. The Analytic Hierarchy Process (AHP) is a multiple
criteria decision-making method that uses hierarchic structures to represent a decision
problem and then develops priorities for the alternatives based on the decision-maker's
judgments. This paper presents an integrated application of DEA and AHP to evaluate the
efficiency of subcontracted companies in a Brazilian aerospace factory.
1 Introduction
1 Corresponding Author. Email: angelo.ferreira@hotmail.com
820 A.J.C.A.Ferreira Filho, V. A.P. Salomon, F. A.S. Marins
2 Theoretical considerations
In DEA, the efficiency of a decision-making unit (DMU) is defined as the ratio of
the weighted sum of its outputs to the weighted sum of its inputs [6]. DEA can
consider n DMUs, each consuming m inputs and producing p outputs, where X and
Y are matrices of nonnegative elements containing the observed input and output
measures for the DMUs [6]. Usually there are multiple inputs and outputs, so it is
necessary to form a single virtual output and a single virtual input for the observed
DMU. Using linear programming (LP) [4], we can find the weights that maximize
the ratio of output to input through the model shown in Figure 1 [4]:
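The LP model itself (the "Figure 1" referred to here is not reproduced in this extract) is, in the standard CCR fractional form of Charnes, Cooper and Rhodes [3], approximately the following, writing o for the DMU under evaluation:

```latex
\max_{u, v}\; h_o = \frac{\sum_{r=1}^{p} u_r\, y_{ro}}{\sum_{i=1}^{m} v_i\, x_{io}}
\quad \text{subject to} \quad
\frac{\sum_{r=1}^{p} u_r\, y_{rj}}{\sum_{i=1}^{m} v_i\, x_{ij}} \le 1
\;\; (j = 1, \dots, n), \qquad u_r,\, v_i \ge 0 ,
```

which is linearized by normalizing the virtual input, i.e. fixing \(\sum_i v_i x_{io} = 1\) and maximizing \(\sum_r u_r y_{ro}\).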
The CCR model was chosen for this purpose because all the outsourced
companies have similar scales of operation [5]; in other words, scale effects are
irrelevant for each company. An important point is that the orientation of the DEA
model must be chosen. In this case study the output-oriented model was adopted,
since it attempts to maximize outputs while using no more than the observed
amount of any input [3]. For this model it is necessary to exchange the numerator
and the denominator and to minimize the objective function [3].
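As a sketch of what an output-oriented CCR computation looks like in practice (the paper used the FSDA package; this is an independent illustration with SciPy, and the data in the example are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_output_oriented(X, Y):
    """Output-oriented CCR (constant returns to scale) efficiencies.

    X is m inputs x n DMUs, Y is p outputs x n DMUs.  For each DMU o the
    envelopment LP  max phi  s.t.  X@lam <= X[:,o],  Y@lam >= phi*Y[:,o],
    lam >= 0  is solved; the reported efficiency is 1/phi (1.0 = efficient).
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    m, n = X.shape
    p = Y.shape[0]
    eff = []
    for o in range(n):
        c = np.zeros(n + 1); c[0] = -1.0           # variables [phi, lam]; max phi
        A_in = np.hstack([np.zeros((m, 1)), X])    # X @ lam <= x_o
        A_out = np.hstack([Y[:, [o]], -Y])         # phi*y_o - Y @ lam <= 0
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([X[:, o], np.zeros(p)]),
                      bounds=[(0, None)] * (n + 1))
        eff.append(float(1.0 / res.x[0]))
    return eff

# Hypothetical data: one input, one output, two DMUs; B dominates A
print(ccr_output_oriented([[1.0, 1.0]], [[1.0, 2.0]]))
```

Here DMU B is efficient, while DMU A could double its output with the same input, so its efficiency is one half.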
The aerospace industry has some particular processes that differ from those of
other industries, and the process of spare parts technical publications is one of
these particularities.
Generally, an aircraft manufacturer produces different models of aircraft, and
each of these models must be supported over a long period. The aerospace industry
follows the procedures and standards of IATA (International Air Transport
Association), which elaborated important international standards such as ATA 100,
ATA 200, SPEC 2000 and SPEC 2200. All these standards were elaborated by
operators, regulatory agencies, manufacturers and other government authorities
around the world.
Safety and quality are therefore of great importance in the aerospace industry.
As in many other industries, outsourcing has been adopted by many manufacturers,
although always following the rules and standards based on ATA 100 and SPECs
2000/2200. As a result, outsourcing came to be applied to spare parts technical
publications, because it was necessary to increase productivity and, at the same
time, to improve the efficiency of the whole spare parts engineering team, given
the increase in modifications related to the new aircraft models being developed. It
was therefore necessary to elaborate a process to control the activities of the
outsourced companies. The employment of DEA integrated with AHP was
consequently proposed, since it offers a good tool for controlling and deriving
efficiency values for these subcontracted companies, with the possibility of
incorporating the decision-maker's preferences through the criteria adopted in this
case study.
It is necessary to define a set of criteria sufficient to characterize the process
of outsourcing spare parts technical publications. The criteria should be relevant to
the decision-maker, since he/she may emphasize different aspects of the outsourced
companies' performance. It is also important to use multiple criteria in the
evaluation, because it is extremely difficult, if not impossible, to aggregate the
criteria into a single criterion [6]. These criteria were defined by a group of
specialists from the spare parts technical publications team, drawing on their
experience and on the monthly reports covering all technical documents handled
by the subcontracted companies. The following criteria were defined:
- Quality (C1)
- Time (C2)
- Cost (C3)
- Quantity of technical documents released by project (C4)
with them [6]. It is also interesting to create indicators for some of the criteria,
since they carry enough information about the criteria's values; indicators were
proposed for all criteria except the fourth.
The first step in this analysis was to determine the weighted sums for the scaled
indicators [6]. The values for these judgements were obtained from a group of
specialists who attributed values to the indicators of each criterion. For this
purpose the AHP was used, and Table 1 presents the judgements, following the
explanation in section 2.2.
Criterion C1 is the quality of the spare parts technical publications. This
quality can be measured by the illustrations created or revised by each outsourced
company, by the quantity of questions received per month from operators, and by
the revision of parts lists, which must show exactly all part numbers and the
interchangeability relations between parts. Criterion C2 is the time these
companies take to deliver the package of activities sent to them; the indicators in
this case are the time spent by each company to deliver a package of activities for a
new aircraft (a program still under development), for a delivered aircraft, or for an
aircraft in assembly. The cost of each outsourced company is represented by
criterion C3, whose indicators are the external costs, internal costs and the costs of
specific tools such as software and systems. Criterion C4 is the input of the
process, measured by the quantity of technical documents released by the project
and engineering departments, reflecting modifications to the aircraft. The next step
in the evaluation was to determine the values for the criteria. A group of five
specialists was asked to scale all indicators in a range from zero (lowest) to one
(highest). The weighted sum of the indicators was used to determine the final value
for each criterion [6], and Table 2 gives the result of this analysis.
Table 1. Pairwise judgements and resulting weights for the indicators of each criterion.

Criterion C1
              Illustration  Part List  Questions  Weight
Illustration      1.0         0.33       0.2       0.10
Part List         3.0         1.0        0.33      0.26
Questions         5.0         3.0        1.0       0.64

Criterion C2
              Delivered  New    Assembly  Weight
Delivered        1.0      3.0     3.0      0.59
New              0.33     1.0     3.0      0.28
Assembly         0.33     0.33    1.0      0.13

Criterion C3
              Internal  External  Tools  Weight
Internal         1.0      3.0      2.0    0.54
External         0.33     1.0      3.0    0.30
Tools            0.5      0.33     1.0    0.16
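The weights in Table 1 can be reproduced from the pairwise matrices. The sketch below uses the geometric-mean (logarithmic least squares) approximation rather than Saaty's principal-eigenvector computation [8]; for near-consistent matrices like these the two agree to the precision shown, and the indicator scores at the end are hypothetical.

```python
import numpy as np

def ahp_weights(M):
    """Approximate AHP priority weights via row geometric means."""
    M = np.asarray(M, dtype=float)
    g = np.prod(M, axis=1) ** (1.0 / M.shape[0])  # geometric mean of each row
    return g / g.sum()                             # normalize to sum to 1

# Pairwise judgements for criterion C1 (quality), from Table 1
C1 = [[1.0, 0.33, 0.2],
      [3.0, 1.0, 0.33],
      [5.0, 3.0, 1.0]]
w = ahp_weights(C1)
print([round(float(x), 2) for x in w])  # -> [0.1, 0.26, 0.64], the Table 1 weights

# A criterion value is then the weighted sum of its scaled indicators
scaled_indicators = np.array([0.8, 0.6, 0.9])  # hypothetical scores in [0, 1]
criterion_value = float(w @ scaled_indicators)
```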
The criterion values for each alternative will be used to evaluate the
production efficiency of these outsourced companies with DEA; in this application
of DEA the outputs are represented by criteria C1, C2 and C3, while the input is
represented by criterion C4. The results of the efficiency evaluation with DEA can
be viewed in Table 3.
Measuring the efficiency of outsourcing: an illustrative case study 825
Table 3. DEA efficiency of the outsourced companies (DMUs).

DMU  Efficiency
A      0.57
B      0.55
C      0.80
D      1.00
E      0.91
F      0.67
G      0.84
H      1.00
The results of the DEA evaluation were elaborated with the software FSDA - Free
Software for Decision Analysis [1]. Observing the results in Table 3, it is possible
to note that DMUs D and H are efficient (standard efficiency), while the others are
not as efficient. In this paper, an efficiency analysis of 8 outsourced companies
was performed considering the values obtained with the AHP analysis for
indicators chosen according to the decision-maker's preference. This case study
could also be developed with other decision-making tools, although the results
obtained here sought to reflect the preference of the decision-maker in establishing
the efficiency of these outsourced companies.
6 Conclusion
7 References
[1] Angulo-Meza L, Biondi Neto L, Soares de Mello JCCB, Gomes EG, Coelho PHG.
FSDA - Free Software for Decision Analysis (SLAD - Software Livre de Apoio à
Decisão): a software package for data envelopment analysis models. Congreso
Latino-Iberoamericano de Investigación de Operaciones y Sistemas, La Habana, Cuba,
2004.
[2] Kao C. Interval efficiency measures in data envelopment analysis with imprecise data.
European Journal of Operational Research 174 (2006) 1087-1099.
[3] Charnes A, Cooper WW, Rhodes E. Measuring the efficiency of decision making
units. European Journal of Operational Research 2 (1978) 429-444.
[4] Jahanshahloo GR. Finding strong defining hyperplanes of production possibility set.
European Journal of Operational Research 177 (2007) 42-54.
[5] Soares de Mello JCCB. Engineering post-graduate programmes: a quality and
productivity analysis. Studies in Educational Evaluation 32 (2006) 136-152.
[6] Korhonen P, Tainio R, Wallenius J. Value efficiency analysis of academic research.
European Journal of Operational Research 130 (2001) 121-132.
[7] Ertay T, Ruan D, Tuzkaya UR. Integrating data envelopment analysis and analytic
hierarchy for the facility layout design in manufacturing systems. Information Sciences
176 (2006) 237-262.
[8] Saaty TL. The Analytic Hierarchy Process. McGraw-Hill, New York, 1980.
Author Index