Studies in Computational Intelligence 521

Hiram Ponce-Espinosa
Pedro Ponce-Cruz
Arturo Molina

Artificial Organic
Networks
Artificial Intelligence Based on
Carbon Networks
Studies in Computational Intelligence

Volume 521

Series Editor
J. Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: kacprzyk@ibspan.waw.pl

For further volumes:
http://www.springer.com/series/7092
About the Series

The series ‘‘Studies in Computational Intelligence’’ (SCI) publishes new developments and advances in the various areas of computational intelligence – quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life sciences, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Of particular value to both the contributors and the readership are the short publication timeframe and the world-wide distribution, which enable both wide and rapid dissemination of research output.
Special offer: For all clients with a print standing order we offer free access to
the electronic volumes of the Series published in the current year.
Indexed by DBLP, Ulrichs, SCOPUS, MathSciNet, Current Mathematical
Publications, Mathematical Reviews, Zentralblatt Math: MetaPress and
Springerlink.
Hiram Ponce-Espinosa
Pedro Ponce-Cruz
Arturo Molina
Artificial Organic Networks


Artificial Intelligence Based on
Carbon Networks

Hiram Ponce-Espinosa
Pedro Ponce-Cruz
Arturo Molina
Instituto Tecnológico de Estudios
Superiores de Monterrey
Campus Ciudad de México
Tlalpan Distrito Federal
Mexico

ISSN 1860-949X ISSN 1860-9503 (electronic)
ISBN 978-3-319-02471-4 ISBN 978-3-319-02472-1 (eBook)
DOI 10.1007/978-3-319-02472-1
Springer Cham Heidelberg New York Dordrecht London

Library of Congress Control Number: 2013950734

© Springer International Publishing Switzerland 2014


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed. Exempted from this legal reservation are brief
excerpts in connection with reviews or scholarly analysis or material supplied specifically for the
purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the
work. Duplication of this publication or parts thereof is permitted only under the provisions of
the Copyright Law of the Publisher’s location, in its current version, and permission for use must
always be obtained from Springer. Permissions for use may be obtained through RightsLink at the
Copyright Clearance Center. Violations are liable to prosecution under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of
publication, neither the authors nor the editors nor the publisher can accept any legal responsibility for
any errors or omissions that may be made. The publisher makes no warranty, express or implied, with
respect to the material contained herein.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.springer.com)


To Omar who makes me believe in the unbelievable
To my parents and grandfather
—Hiram Ponce-Espinosa

To Norma, Pedro and Jamie who are always very close to my heart
—Pedro Ponce-Cruz

To my lovely family: Silvia, Julio and Monserrat
—Arturo Molina
Preface

This book was written for undergraduate and graduate students as well as researchers and scientists interested in artificial intelligence techniques. In fact, the intention of this book is to introduce and fully describe the artificial organic networks technique, a novel machine learning method inspired by chemical carbon networks. In addition, an organic network-based algorithm named artificial hydrocarbon networks is presented to show the advantages and the scope of the technique.
On one hand, the book is complemented with several examples throughout the chapters and the description of real-world applications using artificial hydrocarbon networks. On the other hand, the text is accompanied by an artificial organic networks toolkit implemented in LabVIEW™, allowing readers a hands-on experience.
The organization of the book is as follows: Chapter 1 introduces an overview of machine learning and the modeling problem, while Chap. 2 describes key concepts of organic chemistry needed to understand the technique. Chapters 3 and 4 describe the artificial organic networks technique and the artificial hydrocarbon networks algorithm. Then, Chap. 5 offers some improvements to the basic artificial hydrocarbon networks algorithm. Finally, Chaps. 6 and 7 provide experimental results and discuss how to implement the algorithm in real-world applications like audio filtering, control systems and facial recognition.
We would also like to express our gratitude to all those who provided support and reviewed details over and over, those who read, offered comments and allowed us to quote their remarks, and those who assisted us in the editing, proofreading and designing stages. A special acknowledgement to the Tecnologico de Monterrey.

Mexico City, Mexico, August 2013 Hiram Ponce-Espinosa


Pedro Ponce-Cruz
Arturo Molina

Contents

1 Introduction to Modeling Problems . . . . . . . . . . . . . . . . . . . . . . . 1


1.1 The Modeling Problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Review of Machine Learning . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2.1 Learning Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.2.2 Classification Algorithms . . . . . . . . . . . . . . . . . . . . . . . 6
1.3 Nature-Inspired Computing. . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3.1 Metaheuristic Algorithms . . . . . . . . . . . . . . . . . . . . . . . 8
1.3.2 Evolutionary Algorithms . . . . . . . . . . . . . . . . . . . . . . . 9
1.3.3 Biologically Inspired Algorithms . . . . . . . . . . . . . . . . . 9
1.3.4 Chemically Inspired Algorithms . . . . . . . . . . . . . . . . . . 14
1.4 Comparison of Algorithms for Modeling Problems . . . . . . . . . . 17
1.4.1 Complexity and Stability in Modeling Problems . . . . . . . 17
1.4.2 Artificial Organic Networks and Modeling Problems. . . . 20
1.5 Motivation of Artificial Organic Networks . . . . . . . . . . . . . . . . 24
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2 Chemical Organic Compounds . . . . . . . . . . . . . . . . . . . . . . . . . . . 31


2.1 The Importance of Organic Chemistry . . . . . . . . . . . . . . . . . . . 32
2.2 Basic Concepts of Organic Compounds . . . . . . . . . . . . . . . . . . 33
2.2.1 Structural Definitions. . . . . . . . . . . . . . . . . . . . . . . . . . 34
2.2.2 Chemical Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 36
2.3 Covalent Bonding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
2.3.1 Characterization of Covalent Bonds . . . . . . . . . . . . . . . 40
2.4 Energy in Organic Compounds . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4.1 Energy Level Scheme . . . . . . . . . . . . . . . . . . . . . . . . . 42
2.4.2 Measures of Energy . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.5 Classification of Organic Compounds . . . . . . . . . . . . . . . . . . . 45
2.5.1 Hydrocarbons . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
2.5.2 Alcohols, Ethers, and Thiols. . . . . . . . . . . . . . . . . . . . . 46
2.5.3 Amines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
2.5.4 Aldehydes, Ketones, and Carboxylic Acids . . . . . . . . . . 47
2.5.5 Polymers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
2.5.6 Carbohydrates, Lipids, Amino Acids, and Proteins . . . . . 48
2.5.7 Nucleic Acids . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


2.6 Organic Compounds as Inspiration . . . . . . . . . . . . . . . . . . . . . 49


2.6.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
2.6.2 Characteristics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3 Artificial Organic Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53


3.1 Overview of Artificial Organic Networks . . . . . . . . . . . . . . . . . 53
3.1.1 The Metaphor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
3.1.2 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.2 Artificial Organic Compounds. . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2.2 Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.3 Networks of Artificial Organic Compounds . . . . . . . . . . . . . . . 66
3.3.1 The Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.3.2 The Behavior . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.3.3 Mixtures of Compounds. . . . . . . . . . . . . . . . . . . . . . . . 66
3.4 The Technique of Artificial Organic Networks . . . . . . . . . . . . . 67
3.4.1 Levels of Energy in Components . . . . . . . . . . . . . . . . . 67
3.4.2 Formal Definition of Artificial Organic Networks . . . . . . 68
3.4.3 Model of Artificial Organic Networks . . . . . . . . . . . . . . 69
3.5 Implementation Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
3.5.1 The Search Topological Parameters Problem . . . . . . . . . 70
3.5.2 The Build Topological Structure Problem . . . . . . . . . . . 71
3.5.3 Artificial Organic Networks-Based Algorithms . . . . . . . . 72
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

4 Artificial Hydrocarbon Networks . . . . . . . . . . . . . . . . . . . . . . . . . 73


4.1 Introduction to Artificial Hydrocarbon Networks . . . . . . . . . . . . 73
4.1.1 Chemical Inspiration . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1.2 Objectives and Scope . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.2 Basics of Artificial Hydrocarbon Networks. . . . . . . . . . . . . . . . 75
4.2.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.2.2 Interactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
4.2.3 The Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
4.2.4 Mathematical Formulation . . . . . . . . . . . . . . . . . . . . . . 101
4.3 Metrics of Artificial Hydrocarbon Networks . . . . . . . . . . . . . . . 102
4.3.1 Computational Complexity. . . . . . . . . . . . . . . . . . . . . . 102
4.3.2 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
4.4 Artificial Hydrocarbon Networks Practical Features. . . . . . . . . . 108
4.4.1 Partial Knowledge Representation . . . . . . . . . . . . . . . . . 108
4.4.2 Practical Issues in Partial Knowledge Extraction. . . . . . . 110
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

5 Enhancements of Artificial Hydrocarbon Networks . . . . . . . . . . . . 113


5.1 Optimization of the Number of Molecules . . . . . . . . . . . . . . . . 113
5.1.1 The Hess’ Law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
5.1.2 Boiling and Melting Points in Hydrocarbons . . . . . . . . . 114
5.1.3 Enthalpy in Artificial Hydrocarbon Networks . . . . . . . . . 115
5.2 Extension to the Multidimensional Case . . . . . . . . . . . . . . . . . . 119
5.2.1 Components and Interactions . . . . . . . . . . . . . . . . . . . . 120
5.2.2 Multidimensional AHN-Algorithm . . . . . . . . . . . . . . . . 124
5.3 Recursive Networks Using Aromatic Compounds . . . . . . . . . . . 127
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

6 Notes on Modeling Problems Using Artificial Hydrocarbon Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.1 Approximation Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
6.1.1 Approximation of Univariate Functions . . . . . . . . . . . . . 132
6.1.2 Approximation of Multivariate Functions. . . . . . . . . . . . 137
6.2 Clustering Problems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6.2.1 Linear Classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
6.2.2 Nonlinear Classifiers . . . . . . . . . . . . . . . . . . . . . . . . . . 145
6.3 Guidelines for Real-World Applications . . . . . . . . . . . . . . . . . . 149
6.3.1 Inheritance of Information . . . . . . . . . . . . . . . . . . . . . . 150
6.3.2 Catalog Based on Artificial Compounds . . . . . . . . . . . . 152
6.3.3 Using Metadata. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

7 Applications of Artificial Hydrocarbon Networks . . . . . . . . . . . . . 155


7.1 Filtering Process in Audio Signals . . . . . . . . . . . . . . . . . . . . . . 155
7.1.1 Background and Problem Statement . . . . . . . . . . . . . . . 156
7.1.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
7.1.3 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . 159
7.2 Position Control of DC Motor Using AHN-Fuzzy Inference Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
7.2.1 Background and Problem Statement . . . . . . . . . . . . . . . 166
7.2.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
7.2.3 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . 177
7.3 Facial Recognition Based on Signal Identification Using AHNs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
7.3.1 Background and Problem Statement . . . . . . . . . . . . . . . 183
7.3.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
7.3.3 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . 187
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

Appendix A: Brief Review of Graph Theory . . . . . . . . . . . . . . . . . . . 191

Appendix B: Experiment of Signal-Molecule Correlation . . . . . . . . . . 195

Appendix C: Practical Implementation of Artificial Hydrocarbon Networks . . . . . . . . . . . . . . . . . . . . . . . . . . 201

Appendix D: Artificial Organic Networks Toolkit Using LabVIEW™ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211

Appendix E: Examples of Artificial Hydrocarbon Networks in LabVIEW™ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
Chapter 1
Introduction to Modeling Problems

Computational algorithms for modeling problems are widely used in real-world applications, such as predicting the behavior of systems, describing systems, or finding patterns in unknown and uncertain data. Interesting applications have been developed in engineering, biomedicine, chemistry, economics, physics, statistics, and so forth.
To that end, a large set of computational algorithms has been developed: classical methods, parametric and nonparametric methods, probabilistic methods, heuristic and metaheuristic based methods, and naturally inspired methods. However, these modeling algorithms have several drawbacks. For instance, some algorithms assume partial knowledge of the system to model; others suppose specific dependences on variable instances; some others act as black boxes that do not explain how their models work; and other algorithms are not stable because they rely on heuristics or arbitrary parameters that are difficult to set, except by trial-and-error or expertise.
It is well known that there is no algorithm that actually deals with all classes of modeling problems. In addition, some algorithms are ineffective with respect to others, depending on the problem domain. Hence, focusing on two key features of computational algorithms for modeling problems, i.e. stability in algorithms and partial knowledge of model behavior, a new paradigm of algorithms called artificial organic networks is emerging, in order to enhance both features.
This framework is a chemically inspired technique based on carbon networks that aims to model engineering systems, capturing stability in the topology of the algorithm and partial knowledge of the given system behavior. In particular, the idea behind artificial organic networks is to build a stable topology of mathematical units, so-called molecular units, that capture information of a system in small clusters. As chemical molecules do, these units can interact among themselves, forming complex entities, i.e. artificial organic compounds, that capture nonlinearities among attribute variables in the system. In the end, artificial organic compounds can mix, forming mixtures of nonlinear behaviors. Moreover, chemically inspired rules are applied in the interaction of molecular units and in the resultant topologies, exploiting

this “intelligent” chemical behavior to build stable and organized topologies that may help to explain systems in a better way.
Furthermore, the artificial organic networks framework has been used to develop a computational algorithm for modeling problems named artificial hydrocarbon networks. In advance, that algorithm is inspired by chemical hydrocarbons (the most stable organic compounds in nature as well as the simplest compounds in terms of their structures) and it is considered a class of supervised and parametric learning algorithms for modeling causal, continuous, multivariate systems.
Thus, in this chapter, an introduction to modeling problems and a brief review of machine learning are presented. Later on, naturally inspired computing is discussed in order to situate the chemically inspired algorithms where the artificial organic networks framework fits. An outline of common methods is described, as well as some aspects of computational complexity and stability that will be important in the following chapters. Then, a comparison between artificial organic networks and other learning algorithms is discussed to highlight the advantages of using artificial organic networks in modeling problems. Finally, the motivation of artificial organic networks summarizes this chapter.

1.1 The Modeling Problem

Consider a system Σ that presents a response Y to a stimulus X. Let us also assume that a set of observations of X and Y, denoted as ordered pairs (X, Y), is known. Then, it is possible to find a functional relationship M between X and Y such that M clearly describes the behavior of Σ due to a stimulus X, denoted as MΣ. Moreover, if MΣ(X) can estimate the observed values of Y and can generalize to other values of unknown pairs (xi, yi) ∉ (X, Y), then MΣ is considered a model of Σ.
In engineering, a model can be understood as a representation of a system. However, in real-world applications there only exists a set of observations of the system, and no functional relationship that helps to describe it. In that sense, one may wish to build a model of that system. But this is not an easy task, and several methods have been developed to solve the so-called modeling problem, as stated in Definition 1.1:
Definition 1.1 (modeling problem) Given an unknown system Σ and a set of ordered pairs of observations (X, Y) of the stimulus and the response of Σ, the modeling problem refers to finding, or building, a functional relationship MΣ between X and Y such that MΣ describes Σ closely and generally. Moreover, MΣ is called the model of Σ, X is known as the set of attribute variables and Y as the set of target variables.
As noted in Definition 1.1, building a model of a system requires describing it closely, i.e. the error between the model response and the target has to be minimal. But it also needs to describe it generally. In other words, the model has to describe closely

the target values and other values not seen in the set of data observations, in order to give the model a predictive property. As seen later in Sect. 1.2, some metrics of models in these terms are also described.
In any case, modeling systems have two steps: the training and the reasoning processes. The training process refers to the usage of datasets to build a model of a particular system. This is implemented as a learning algorithm. Once a model is trained, the reasoning process is the step in which the model is applied. Figure 1.1 shows this two-step modeling system.

Fig. 1.1 Block diagram of modeling systems implementing the training and reasoning processes
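To make the two processes concrete, consider the following minimal Python sketch. It is an illustration of ours (not code from the book or its toolkit; the class and the toy data are hypothetical) that separates the single training call from the repeated reasoning calls:

import numpy as np

class MeanModel:
    """Toy model: its 'knowledge' is just the mean target seen in training."""
    def fit(self, X, Y):
        # Training process: build the model from the observations (X, Y)
        self.y_hat = float(np.mean(Y))
        return self

    def predict(self, x):
        # Reasoning process: answer a query with the trained model
        return self.y_hat

X = np.array([1.0, 2.0, 3.0])       # stimulus observations
Y = np.array([2.1, 3.9, 6.2])       # response observations
model = MeanModel().fit(X, Y)       # training step
print(model.predict(2.5))           # reasoning step on a new query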
Learning algorithms are widely used in machine learning to develop systems that simulate human intelligence. In that sense, unknown systems or uncertain environments can be dealt with using machine learning methods, by building models of these systems or environments and then applying reasoning processes.
However, before running a training process, some assumptions about the model are required. Actually, models can be classified as follows:
• Continuous and discrete models. A continuous model assumes that the system is
continuous in its input domain, e.g. the indoor temperature in terms of time; while
a discrete model assumes that the system is constrained to be discrete in its input
domain, e.g. a set of different geometric figures.
• Linear and nonlinear models. A linear model considers that the relationship
between attribute and target variables is linear; while a nonlinear model con-
siders that the relationship between attribute and target variables is nonlinear, e.g.
logarithmic, quadratic, etc.
• Recursive and nonrecursive models. A recursive model depends on attributes and
target variables such that MΣ (X, Y ); while a nonrecursive model only depends on
attribute variables MΣ (X).

1.2 Review of Machine Learning


Machine learning refers to the study, construction and analysis of systems that can learn from information [2, 18]. Computationally, these systems apply learning algorithms that model a set of observations and generalize from it to predict or infer new information.

Applications of machine learning are vast. For example: inference systems to predict future data, pattern recognition, audio and video signal processing, classification of data, natural language processing, search engines, data mining, financial analysis, modeling systems, artificial intelligence based approaches, medical diagnosis, recommender systems, intelligent tutoring systems, intelligent assistant systems, robotic approaches, scheduling, and so forth.
Focusing on modeling problems, the literature reports different computational algorithms that allow suitable solutions for them [18]. In either direction, classical methods and machine learning techniques might be classified in two different ways for solving modeling problems: learning algorithms and clustering algorithms. In a nutshell, any system with suitable conditions can be modeled or inferred using learning algorithms, while any set of unknown objects can be classified into several groups using clustering algorithms. Actually, the most important applications of these learning algorithms are inference and classification systems.
Since the artificial organic networks framework falls into machine learning techniques, this section introduces the notion of these algorithms and presents a brief review of the most common methods used.

1.2.1 Learning Algorithms

Interesting real-world phenomena are highly complex in terms of their behavior. If these phenomena are delimited to be a system (physical, medical, economical, biological, etc.), they can be analyzed, predicted or described by a model. In fact, a model is a simple representation of a system or process. In suitable cases, the representation of these systems as models is done by experts or by extracting information from amounts of data. However, the process is intractable if systems have complex, unpredictable behaviors or there are huge amounts of data to analyze. Thus, other techniques need to be applied. In that sense, learning algorithms intend to make this process easier. For example, Fig. 1.2 shows a typical situation in which a system (data points) has to be modeled using a learning algorithm, obtaining a close function (continuous line) that best represents the system and allows generalization.
Depending on the way data observations are treated, learning algorithms can be
classified as: supervised and unsupervised algorithms, parametric and nonparametric
algorithms, probabilistic algorithms, or univariate and multivariate algorithms.

1.2.1.1 Supervised and Unsupervised Learning Algorithms

Let Σ = (X, Y) denote any given system Σ with ordered pairs (X, Y) of attribute and target variable observations, respectively. Supervised learning algorithms build a model MΣ of a system Σ using past and present data or experience in the form of ordered pairs of attribute and target variables, also called inputs and outputs. The idea behind these algorithms is to minimize a metric that measures the distance

between the target observations and the model. This metric is known as the loss function or error function E, and measures how precise the model is in terms of the target data. In optimization procedures, this loss function is called the objective function. Actually, the most common metric is the so-called squared loss function (1.1); where q is the number of data observations (samples), xk ∈ Rn is the k-th sample of the attribute variables x1, ..., xn ∈ X, yk ∈ Rm is the k-th sample of the target variables y1, ..., ym ∈ Y and MΣ(xk) is the response of the model due to xk.

E = \frac{1}{2} \sum_{k=1}^{q} \left( y_k - M_\Sigma(x_k) \right)^2 \qquad (1.1)

In contrast, unsupervised learning algorithms build models MΣ of a system Σ using only the attribute variables X. These learning processes recognize regularities in the input domain of the system. Different metrics are widely used, but a squared loss function E can be computed as (1.2); where q is the number of input samples, xk ∈ Rn is the k-th sample of the attribute variables x1, ..., xn ∈ X and θi is a set of parameters defined in the model.

E = \frac{1}{2} \sum_{k=1}^{q} \left( x_k - \theta_i \right)^2 \qquad (1.2)
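For illustration, both loss functions can be computed directly. The sketch below is our own, with toy data; for (1.2) it pairs each sample xk with its nearest parameter vector θi, which is an assumption of ours, since the pairing of k and i is left implicit in the text:

import numpy as np

def supervised_loss(Y, Y_pred):
    # Eq. (1.1): E = 0.5 * sum_k (y_k - M(x_k))^2
    Y, Y_pred = np.asarray(Y, float), np.asarray(Y_pred, float)
    return 0.5 * np.sum((Y - Y_pred) ** 2)

def unsupervised_loss(X, theta):
    # Eq. (1.2): E = 0.5 * sum_k (x_k - theta_i)^2, matching each x_k
    # to its nearest theta_i (our interpretation, as in clustering)
    X, theta = np.atleast_2d(X), np.atleast_2d(theta)
    d = np.linalg.norm(X[:, None, :] - theta[None, :, :], axis=2)
    return 0.5 * np.sum(d.min(axis=1) ** 2)

print(supervised_loss([1.0, 2.0, 3.0], [0.9, 2.2, 2.8]))         # 0.045
print(unsupervised_loss([[0.0], [0.2], [4.1]], [[0.0], [4.0]]))  # 0.025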

1.2.1.2 Parametric and Nonparametric Learning Algorithms

On the one hand, parametric learning algorithms build a model MΣ that depends on suitable parameters θi, such that MΣ(X|θi). In fact, parametric learning algorithms reduce to an optimization problem in which the set of optimal parameters θi satisfies the minimization of E in (1.1) or (1.2). The best-known types of algorithms for continuous models fall into regression methods, and variants thereof, like linear regression, quadratic regression, or polynomial regression. For example, the artificial hydrocarbon networks algorithm uses the well-known least squares estimates method, a regression technique, as a supervised and parametric learning algorithm to find suitable parameters inside it.
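For illustration, the least squares step itself is easy to state in code. The sketch below is ours, with assumed toy data, and shows only the underlying regression technique (not the artificial hydrocarbon networks algorithm); it fits a line y = ax + b by minimizing exactly the squared loss (1.1):

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)   # noisy line

# Least squares: minimize ||A w - y||^2 over w = (a, b)
A = np.column_stack([x, np.ones_like(x)])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
print(f"slope={w[0]:.3f}, intercept={w[1]:.3f}")         # close to 2 and 1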
On the other hand, nonparametric learning algorithms build a model MΣ that only depends on data observations from the system, such that MΣ(X). Actually, nonparametric learning algorithms suppose similarities in input data; thus, similar inputs may have similar outputs. For this reason, these methods are also called smoothers. Common algorithms are the running mean smoother, the kernel smoother and the running-line smoother. Each of them makes a guess of the model MΣ(X) in terms of relative measurements among samples in X in order to minimize (1.1).
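As a sketch of a smoother (our own minimal example, written from scratch rather than taken from any library), a Gaussian kernel smoother guesses the model at a query point as a distance-weighted average of the observed targets, so similar inputs indeed yield similar outputs:

import numpy as np

def kernel_smoother(x_query, X, Y, bandwidth=1.0):
    # Weighted average of targets; weights decay with distance in X
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    w = np.exp(-0.5 * ((x_query - X) / bandwidth) ** 2)
    return float(np.sum(w * Y) / np.sum(w))

X = [0.0, 1.0, 2.0, 3.0]
Y = [0.1, 0.9, 4.2, 8.8]
print(kernel_smoother(1.5, X, Y))   # lies between the neighbors' targets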

1.2.1.3 Probabilistic Learning Algorithms

Probabilistic learning algorithms account for uncertainty in the input-output data observations (X, Y). In that case, a model is built in terms of dependencies of units of information that are related by a conditional probability via Bayes' rule. Typical algorithms are Bayesian decision algorithms, such as Bayesian networks or belief networks. The maximum likelihood estimation and the bias and variance model are other learning approaches using probabilities.

1.2.1.4 Univariate and Multivariate Learning Algorithms

Univariate learning algorithms build models that only depend on one attribute variable x ∈ X, while multivariate learning algorithms build models that depend on several attribute variables x1, ..., xn ∈ X. In real-world applications, multivariate learning algorithms are preferred over univariate ones because complex systems depend on several input variables.

1.2.2 Classification Algorithms

As said previously, unsupervised learning algorithms assume similarities in input data observations. Classification algorithms implement unsupervised learning algorithms for discriminating whether input data are part of a category called a class. In the general case, classification algorithms can act as clustering algorithms in which the input data is partitioned into several classes or clusters. In order to do that, classification algorithms have a discriminant function which decides whether any input data is part of a class or not. Notice that parametric/nonparametric algorithms, probabilistic algorithms, and univariate/multivariate algorithms can be used

for classification algorithms, too. The significant change is that there is no objective function for comparison purposes.
As an application, clustering is a special case of classification algorithms. For example, mixture densities, k-means clustering, fuzzy c-means clustering, and expectation-maximization are the most common algorithms for partitioning the input space into several categories. Figure 1.3 shows an example of clustering given a dataset.
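A minimal sketch of the k-means iteration (our illustrative implementation on assumed toy points, not a library call) shows how input data are partitioned into clusters using only a discriminant step, without any target variable:

import numpy as np

def kmeans(X, k, iters=100, seed=1):
    # Alternate: assign points to nearest centroid, then move centroids
    rng = np.random.default_rng(seed)
    X = np.asarray(X, float)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)                      # discriminant step
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

X = [[0, 0], [0.2, 0.1], [5, 5], [5.1, 4.9], [9, 0], [8.8, 0.3]]
labels, centroids = kmeans(X, k=3)
print(labels)   # one cluster label per point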

1.3 Nature-Inspired Computing

Many machine learning algorithms and other optimization procedures involved in learning and classification algorithms are inspired by nature; these are called naturally inspired algorithms [20, 21]. Roughly speaking, those are algorithms that use information about natural processes like water activities, social behavior, living organisms and their different levels, environmental actions, physical phenomena, and so forth. In fact, during the last decades, these algorithms have been developed and used more and more due to the advantages they bring to computing. In addition, naturally inspired, or simply nature-inspired, algorithms are related to artificial intelligence, a field in which theories of knowledge, soft computing, mathematical models and biological or natural studies are joined together in order to give an autonomous agent a specific behavior [20].
In computer science, nature-inspired algorithms are applied to achieve approximate solutions to problems like modeling, optimization, searching, scheduling, classification, clustering, etc. Actually, they employ heuristics that can handle a large variety of problems while reaching approximate solutions, giving robustness. In some problems, it is preferable to reach an approximate solution rather than an exact solution, if the algorithm gets the solution in less time or solves different problems with minor changes.
Furthermore, two subclasses of algorithms can be found under nature-inspired algorithms. Biologically inspired, or simply bio-inspired, algorithms are those that emulate the way of reasoning, developing and solving problems using observations of biological processes. In a similar fashion, chemically inspired, or chemistry-inspired, algorithms are those that use chemical rules, interactions and organic agents to solve problems.
The most common nature-inspired algorithms are [9, 20, 21]: metaheuristic algorithms, evolutionary algorithms, artificial neural networks, swarm intelligence, and DNA computing algorithms. Neurobiological systems, social behaviors, organic compounds, and the adaptability of populations are just a few examples on which these techniques are based. In the following, a brief description of these and other techniques is presented.

1.3.1 Metaheuristic Algorithms

In optimization problems, nature-inspired metaheuristic algorithms employ nature-based strategies to guide the search process, in an efficient way, toward feasible solutions [31]. Usually, nature-inspired metaheuristics are approximate and solve non-specific problems. In addition, two search strategies are commonly used: intensification and diversification. The first refers to searching domain spaces locally, while the second refers to avoiding local minima. Actually, metaheuristics emerged because they can get approximate solutions in a reasonable amount of time, in comparison with many optimization problems that are NP-hard (refer to Sect. 1.4).
General metaheuristics can be classified into trajectory-based algorithms and population-based algorithms [3]. Firstly, trajectory-based algorithms are those that start from an initial state and reach a final state (also called the attractor), drawing a path of transient states during the search process. Simulated annealing is a good example of these kinds of nature-inspired metaheuristic algorithms. Secondly, population-based algorithms deal with a set of solutions in each iteration of the algorithm, which provides exploration of the search space. In contrast, single-point search algorithms use one entity at a time to explore the domain space. For example, genetic algorithms and ant colony optimization are population-based nature-inspired metaheuristic algorithms.
Furthermore, some parametric learning and classification algorithms implement nature-inspired metaheuristics in the training process to find the optimal values of parameters. Some metaheuristics used are [3]: genetic algorithms, simulated annealing, ant colony optimization, particle swarm optimization, and others. In other cases, nature-inspired metaheuristic algorithms are directly applied to solve learning and classification problems, as artificial neural networks do.

1.3.2 Evolutionary Algorithms

Nature evolves to adapt to an environment. Evolutionary algorithms are inspired by natural evolution, in which information is coded in chromosomes. In fact, the process of evolution uses two simple operations: mutation and recombination. Mutation meets the criterion of variability in a pool of basic information, while recombination exchanges information among individuals. Since evolutionary algorithms are population-based metaheuristics, the above operations are done over a population, or set of individuals, that evolves through epochs (or iterations). At last, natural selection causes extinction of non-adapted organisms and provides better resources to adapted organisms [18, 20].
In context, evolutionary algorithms can be classified into three families: evolutionary programming (aiming at the adaptation of individuals rather than their evolution), evolution strategies (looking for optimization in engineering problems with some rules of adaptation in operations and individuals), and genetic algorithms (the most popular family of evolutionary algorithms, which primarily uses recombination because independent solutions can derive better solutions to a suitable problem). Hybrid combinations of these algorithms can also be found. In that way, some other techniques that can be found are evolution programs, genetic programming, and memetic algorithms.
Other techniques like gene expression and memetic algorithms are added to evolutionary algorithms. For instance, gene expression algorithms are bio-inspired by the central dogma of biology, which states that DNA is the central carrier of genetic information and is translated to proteins via mRNA. In fact, genes are expressed in different ways depending on the transcription. Actually, in fitness functions of genetic algorithms, individuals are not properly prepared to be encoded and evaluated by these functions. Thus, genes in genetic algorithms can be evaluated in the fitness function via gene expression algorithms, which take into account the expression of the gene or chromosome in individuals and then process a phenotype to translate them properly. In contrast, memetic algorithms introduce cultural units, called memes, into genetic algorithms [20]. The idea is to encode specific problem constraints in memes and run genetic algorithms as usual. Then, at each iteration of the genetic algorithm, a subset of the population is evaluated using memes.
Some applications of evolutionary algorithms focus on solving problems of [18, 20]: combinatorial optimization, scheduling, classification, modeling, placement problems, searching, telecommunications, design, mechanical optimization, power planning, error coding design, and so forth.

1.3.3 Biologically Inspired Algorithms


Biologically inspired, or simply bio-inspired, algorithms are those that emulate the
way of reasoning, developing and solving problems using observations of biology
processes. Following, two common bio-inspired techniques are described.

Fig. 1.4 Simple neuron model of artificial neural networks

Fig. 1.5 Different topologies of artificial neural networks: single, multilayer, recurrent, and self-organizing
1.3.3.1 Artificial Neural Networks

Artificial neural networks are biologically inspired algorithms that also fall into machine learning techniques. Roughly, they are mathematical models based on the nervous system that emulate learning. They can improve their behavior by applying a training process [4, 11]. Artificial neural networks define neurons as their basic entities: cells that create relationships among themselves, shaping networks. In fact, reasoning is performed in cells using the information coming from input messages or other output responses of neurons via an activation function, a mathematical model of neural behavior. Figure 1.4 shows a simple neuron model of artificial neural networks.
Several types of neural networks are available. For example, the well-known supervised perceptron, single or multilayered, was the first neural network implemented. It uses a threshold to stimulate or inhibit neurons. Another supervised neural network is the general single or multilayered artificial neural network, which differs from the previous one in the activation function. In contrast, examples of unsupervised neural networks are the feedforward and feedback associative memory networks like Hopfield's nets, and the competitive learning networks like self-organizing networks (e.g. Kohonen maps) [4, 11]. In addition, there are the recurrent neural networks that create linking cycles in neurons to perform dynamic behavior. Figure 1.5 shows the most common topologies of artificial neural networks.

One of the most appreciated features of artificial neural networks is the ability to model a process or learn the behavior of a system. In that sense, artificial neural networks have to be trained to fit a model or to learn behaviors. Supervised and unsupervised learning algorithms are applied in artificial neural networks. For instance, some algorithms for supervised learning [4, 20] are perceptron learning, linear auto-associators learning, iterative learning, Hopfield's model, mean square error algorithms, and the Widrow-Hoff rule or least mean squares algorithm. In addition, the most popular algorithm for multilayered neural networks is the back-propagation method [4]. Several unsupervised learning algorithms [4, 20] are k-means clustering, Kohonen clustering, self-organizing models, and adaptive resonance theory (e.g. ART1 or ART2).
Applications of artificial neural networks range over modeling, clustering, classification, pattern recognition, control, robotics, filtering, forecasting, fault detection, feature extraction, and so forth.

1.3.3.2 Swarm Intelligence

Swarm intelligence is an artificial intelligence technique initially inspired by social behavior in animals [20], e.g. insects. For insects, social behavior means the interaction of individuals by which they solve an optimization problem like finding the optimal path to food, task allocation, optimization based on communication, and so forth. Actually, swarm intelligence is one of the most useful bio-inspired techniques for optimization problems. Nowadays, swarm intelligence also refers to agents that interact among themselves following bio-inspired metaheuristic rules, as explained below.

Algorithm 1.1 AGENT-CYCLE: Main routine of an agent.

Determine the initial state of the environment
Determine the initial internal state of the agent
while goal is not reached do
    Sense the environment
    Update the internal state of the agent
    Define the next action to do using the internal state and the perception of the environment
    Execute the action over the environment
end-while

Formally, an agent is a computational system that is located in a given environment, and it has two important characteristics: it is autonomous in the environment and it can reach goals previously assigned [28]. For example, consider a mobile robot inside a maze; the maze is the environment and the robot is the agent that interacts in the environment with one objective: solving the maze. In terms of swarm intelligence, ants are agents that find optimal paths to food between their colony and the food source (the space on the ground corresponds to the environment).
The interaction between the agent and the environment can be modeled as shown in Fig. 1.6. In particular, the agent can sense the environment and then execute actions on it, until the goal is reached. This sense-act loop is called the agent cycle [28], as depicted in Algorithm 1.1.

Fig. 1.6 The agent-environment model
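A direct Python rendering of Algorithm 1.1 may clarify the loop. The environment here is an assumed toy one of ours (a 1-D line where the agent walks toward a goal position), not an example from the book:

def agent_cycle(goal=5, start=0, max_steps=100):
    state = start                                # initial state of the environment
    belief = state                               # initial internal state of the agent
    steps = 0
    while belief != goal and steps < max_steps:  # goal is not reached
        belief = state                           # sense env. / update internal state
        if belief == goal:
            break
        action = 1 if belief < goal else -1      # define the next action
        state += action                          # execute the action
        steps += 1
    return steps

print(agent_cycle())   # 5 steps from position 0 to position 5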
Prominent algorithms in swarm intelligence reported in the literature include [20, 21, 31]: ant colony optimization, particle swarm optimization, honeybee forager allocation, intelligent water drops, artificial immune systems, and bee colony optimization. To illustrate swarm intelligence algorithms, particle swarm optimization and ant colony optimization are presented below.
Particle swarm optimization (PSO) is inspired by the social behavior of bird flocking [16]. Because it is a population-based algorithm, a population of particles (possible solutions) is randomly initialized. In addition, each particle keeps track of its own best position pbest = (p1, ..., pn) in the search space Rn, and the best global position gglobal = (g1, ..., gn) of all particles is also stored. Iteratively, the position of each particle is updated based on rule (1.3); where xi represents the position of a particle in the i-th dimension of the search space and vi is the velocity of the particle in the i-th dimension, computed as (1.4). The momentum parameter ω is used to follow the tendency of velocity in a particle, φp and φg are known as learning factors and are commonly set to a value of 2, and rp and rg are random values uniformly distributed over the interval [0, 1].
x_i = x_i + v_i \qquad (1.3)

v_i = \omega v_i + \phi_p r_p (p_i - x_i) + \phi_g r_g (g_i - x_i) \qquad (1.4)

If the new position x = (x1, ..., xn) is better than the last one, it is stored as the best position of the particle. After all current positions are calculated, the global position is updated if the best position of some particle is better than the global position stored so far. Eventually, the algorithm reaches an approximate solution gglobal of the global optimum. In order to measure how good a particle is, a fitness function f(p) must be designed for the specific problem. Figure 1.7 shows the flow diagram of the simple PSO algorithm.
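The update rules (1.3) and (1.4) translate almost line for line into NumPy. The sketch below is our own compact rendering, minimizing an assumed toy fitness (the sphere function) rather than maximizing a problem-specific one:

import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, w=0.7, phi_p=2.0, phi_g=2.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # each particle's best position
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # best global position

    for _ in range(iters):
        rp = rng.random((n_particles, dim))      # r_p ~ U[0, 1]
        rg = rng.random((n_particles, dim))      # r_g ~ U[0, 1]
        v = w * v + phi_p * rp * (pbest - x) + phi_g * rg * (g - x)   # Eq. (1.4)
        x = x + v                                                     # Eq. (1.3)
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val                # update personal bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        g = pbest[pbest_val.argmin()].copy()     # update global best
    return g

sphere = lambda p: float(np.sum(p ** 2))         # toy fitness, minimum at origin
print(pso(sphere))                               # approximately the origin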
On the other hand, ant colony optimization (ACO) is inspired by ants that find optimal paths between their colony and a source of food [6]. Typically, it is implemented to optimize trajectories. Again, this algorithm is a population-based metaheuristic technique that initializes a population of ants. Each ant has a pheromone level assigned. Then, each ant can move in the search space iteratively, depositing its pheromone and leaving a pheromone trail. An ant moves from position x

to position y based on the probability of movement pxy calculated by (1.5); where ηxy is the attractiveness level between positions x and y, τxy is the trail level indicating how good the path from x to y has been so far, α ≥ 0 is the influence of pheromone and β ≥ 1 is the influence of attractiveness. Commonly, ηxy = 1/dxy, where dxy is the distance between x and y.

p_{xy} = \frac{\tau_{xy}^{\alpha} \cdot \eta_{xy}^{\beta}}{\sum_{y} \tau_{xy}^{\alpha} \cdot \eta_{xy}^{\beta}} \qquad (1.5)

In each iteration, the trail level of each ant k is updated following rule (1.6); where σ is the pheromone evaporation coefficient, C is a pheromone gain value and Lk is the length of the path that ant k has travelled.


\tau_{xy} = (1 - \sigma)\,\tau_{xy} + \sum_{k} \Delta\tau_{xy}^{k}, \qquad \Delta\tau_{xy}^{k} = \begin{cases} C/L_k & \text{if ant } k \text{ uses path } xy \\ 0 & \text{otherwise} \end{cases} \qquad (1.6)

At last, the path that has the highest pheromone trail level is the optimal trajectory.
Figure 1.8 shows the flow diagram of the simple ACO algorithm.
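Rules (1.5) and (1.6) can be sketched on an assumed toy graph. The code below is ours (not from the book): ants build node-visiting paths from node 0 and reinforce shorter ones:

import numpy as np

def aco(dist, n_ants=20, iters=100, alpha=1.0, beta=2.0, sigma=0.5, C=1.0, seed=0):
    rng = np.random.default_rng(seed)
    n = len(dist)
    eta = 1.0 / (dist + np.eye(n))     # eta_xy = 1/d_xy (diagonal padded, never used)
    tau = np.ones((n, n))              # initial pheromone trails
    best_len, best_path = np.inf, None
    for _ in range(iters):
        paths = []
        for _ant in range(n_ants):
            path = [0]
            while len(path) < n:       # visit every node once
                x = path[-1]
                allowed = [y for y in range(n) if y not in path]
                w = np.array([tau[x, y]**alpha * eta[x, y]**beta for y in allowed])
                path.append(allowed[rng.choice(len(allowed), p=w / w.sum())])  # Eq. (1.5)
            length = sum(dist[path[i], path[i + 1]] for i in range(n - 1))
            paths.append((path, length))
            if length < best_len:
                best_len, best_path = length, path
        tau *= (1.0 - sigma)                       # evaporation, Eq. (1.6)
        for path, length in paths:                 # deposit C / L_k on used edges
            for i in range(n - 1):
                tau[path[i], path[i + 1]] += C / length
    return best_path, best_len

dist = np.array([[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]], float)
print(aco(dist))   # a short path over the four nodes and its length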

1.3.4 Chemically Inspired Algorithms

Chemically inspired, or chemistry-inspired, algorithms are those that use chemical rules, interactions and organic agents to solve problems. In the following, some chemically inspired techniques are described.

1.3.4.1 DNA Computing

This technique is (bio-)chemically inspired (i.e. by organic chemistry, biochemistry and molecular biology), based on deoxyribonucleic acid (DNA) [1, 9]. The idea is to represent information as DNA strands or chains and use chemical reactions, as operations, over the DNA strands. In that way, parallel computation is done using several strands and applying the same operation at once. In fact, DNA is a string of bases, i.e. adenine, guanine, cytosine, and thymine; data can thus be coded in these bases.
In that case, bit strings (or strands) of bases are used to codify information, via a ligation reaction that produces an initial random set of strands. Secondly, these strands are amplified by polymerase chain reaction in order to find all strands that start and end in specific positions of the search space, and then they are run on an agarose gel to purify them and keep only specialized strands. Affinity separation of these strands is also implemented with a biotin-avidin magnetic bead system. Finally, another amplification via polymerase chain reaction is done. Several methods are presented in the literature, such as the sticker-based model.
Solving NP-hard optimization problems (see Sect. 1.4), implementations in molecular computing, and parallel computing approaches are examples of applications of DNA computing.

1.3.4.2 Organic Computing

Since 2004, the German Research Foundation (DFG, by its German acronym) has promoted organic computing [30]. This is an emerging paradigm of developing large collections of agent systems inspired by living organisms, aiming at the following characteristics: self-organizing, self-configuring, self-optimizing, self-healing, self-protecting, self-explaining, and context aware. Thus, organic computing systems dynamically adapt to the current state of the environment. The paradigm primarily focuses on autonomous systems. Several artificial intelligence techniques and control system methods are used in it to achieve the final objective.

1.3.4.3 Chemical Reaction Optimization

This is an emerging algorithm proposed in 2010 by Lam and Li [15]. In fact, the chemical reaction optimization (CRO) algorithm loosely mimics what happens to molecules microscopically when they are subjected to chemical reactions, following the energy minimization rule on the potential energy surface (PES). Figure 1.9 summarizes the simple CRO algorithm.
As noted in Fig. 1.9, there are three stages: initialization, iteration and finalization. The first step considers the initialization of parameters and control variables that model the minimization problem and determine the size of the set of molecules in the chemical reaction. The second step loops until a stop criterion is reached. In this stage, molecules are subjected to chemical reactions: inter-molecular ineffective collision,

synthesis, decomposition, or on-wall ineffective collision. These cause each molecule to react, changing its inner structure. The algorithm stops in the finalization step when a stop criterion is reached. In order to determine the minimization of energy on the PES, the objective function of the problem is evaluated with each molecular structure, storing the best molecular structure so far.

1.4 Comparison of Algorithms for Modeling Problems

In general, computational algorithms for modeling problems are considered nondeterministic polynomial time (NP) problems, which means that the solution might not be computable with deterministic algorithms in polynomial time (with respect to the inputs), but it is easy to check a solution in polynomial time. For instance, consider the modeling system in Fig. 1.1. The training process is hard to solve computationally because finding the proper topology and parameters of models (i.e. building and optimizing) requires too much effort in terms of time, and non-deterministic algorithms have to be used. However, once the model is obtained, the reasoning process is easy to compute because the response to a query is totally deterministic. Thus, different techniques have been studied and proposed in order to reduce the time of the learning phase, for example: randomization, conditional probabilities, heuristics and metaheuristics, naturally inspired algorithms, or parallel computing.
For instance, randomization refers to using random events to probably obtain good solutions in less time, while conditional probabilities are implemented in techniques that have to compute the joint probability of events. Heuristics and metaheuristics, as described in Sect. 1.3.1, guide the algorithm to find an approximate solution rapidly. In the same way, naturally inspired algorithms assume that nature implements optimal rules that might be used to guide algorithms to a feasible solution. In contrast, parallel computing accelerates the learning process by dividing an algorithm into small tasks that run in multiple threads and cores simultaneously.
In a nutshell, computational algorithms for modeling problems have the following
weaknesses in terms of the computational time:
• Most of the algorithms are in the class of nondeterministic polynomial time com-
plexity.
• Fast and efficient algorithms reach at least an approximate solution.
• Some algorithms depend on (meta) heuristics or randomization.
• Some algorithms are considered as black boxes (they cannot offer an interpretation
of the system behavior).
• Fast and efficient algorithms are designed for specific problems.
In some cases, it is useful to know which modeling algorithms are better than
others. In that sense, learning algorithms can be compared in terms of: computational
time complexity, stability (generalization), characteristics of the learning algorithm,
characteristics of the models built, types of problems that solve, and so forth.

1.4.1 Complexity and Stability in Modeling Problems

Two important metrics in learning algorithms are the complexity and the stability. In
this section, both of them are briefly introduced.

Table 1.1 Computational time complexities of algorithms

Running time          Representation in O-notation
Constant              O(1)
Logarithmic           O(log n)
Linear                O(n)
k-order polynomial    O(n^k)
Exponential           O(2^n)

1.4.1.1 Computational Time Complexity

Complexity is a metric of how well or how efficiently an algorithm performs, and it can be used for comparing computational algorithms that solve the same class of problems. Formally, the computational time complexity, denoted by O(t(n)), measures the time taken by an algorithm to run in terms of the size of the instance n, where t(n) is a given function [7]. Table 1.1 shows the most important time complexities of algorithms, in ascending order, and their representations. Notice that algorithms that run in exponential time will not converge for very large instances [7].
On the other hand, three important time complexity notations have been developed: the upper bound, the lower bound and the average time complexities. In that way, the O-notation measures the running time of algorithms in the worst case, also known as the upper bound computational time complexity. The Ω-notation represents the lower bound complexity and asserts that an algorithm runs proportionally to at least Ω(t(n)) for input size n. Finally, the Θ-notation represents the average time complexity of an algorithm, applying when, for some values of n, the lower and upper bounds of the time complexity of the algorithm are equivalent [19].
In computational complexity theory, problems can be classified depending on the running time that their algorithms take to solve them with respect to an input size n. For instance, among complexity classes, P and NP are two fundamental classes [7, 19]. All problems that can be solved at most in polynomial time with deterministic algorithms are P problems. In contrast, all problems that can be solved with non-deterministic algorithms and whose solutions can be checked in polynomial time are NP problems. In fact, P ⊂ NP (whether P = NP remains an open question). As noted, P problems are preferred over NP problems when n tends to be large.
The hardest problems in the NP class are considered NP-complete problems. Furthermore, problems that can be solved with non-deterministic algorithms but whose solutions cannot be checked in polynomial time are NP-hard problems. Roughly speaking, NP-hard problems are at least as hard as NP-complete problems. Figure 1.10 shows a complexity class diagram of the P, NP, NP-complete and NP-hard classes. Moreover, it is known, by Ladner's theorem [23], that there exist some problems in NP that are neither as easy as P problems nor as hard as NP-complete problems, if and only if P ≠ NP. Figure 1.10 also shows these types of problems.

Fig. 1.10 Diagram of complexity classes (P, NP, NP-complete, NP-hard, and the NP \ P problems given by Ladner's theorem)

Fig. 1.11 Example of generalization in learning algorithms (system samples and model response over input x1 and output y1)

1.4.1.2 Stability and Generalization

Stability is the property of algorithms that requires, whenever possible, that for small
changes in the input data, algorithms produce small changes in the final results [2, 18]. If
this property is not satisfied, then an algorithm is said to be unstable. Furthermore, if
an algorithm is stable only for certain choices of input data, then it is called conditionally
stable [5].
In learning algorithms, stability is important because this property is closely
related to generalization [2, 18]. The latter is considered the ability of an algorithm
to return an accurate output value for a new, unseen data point. For example, consider
the training set (data points) and the model response (continuous line) depicted in
Fig. 1.11. As seen, the model responds over the continuous input domain with output
values close to the training set; then, the model is considered stable. In contrast,
Fig. 1.12 shows the same training set but with another model response (continuous
line). Clearly, the model does not respond closely to the training set; thus, the model
is considered unstable.
In order to perform generalization in learning algorithms, two loss functions have
to be minimized: the error due to bias and the error due to variance [8]. The first

Fig. 1.12 Example of an unstable model in learning algorithms (system samples and model response over input x1 and output y1)

minimizes (1.1), allowing the model to fit the target data. The second minimizes the
variability of the model response for a given target data, if this model has different
realizations, as (1.7), where E is the error due to variance, q is the number of observations,
x_k ∈ R^n is the k-th observation of the attribute variable x = (x_1, ..., x_n), M_Σ is the model
and M̄_Σ is the average prediction of model M_Σ at a given point x_k.

E = (1/2) ∑_{k=1}^{q} ( M_Σ(x_k) − M̄_Σ(x_k) )²   (1.7)

This tradeoff is important because models might be over-fitted or under-fitted
[8]. Roughly, an over-fitted model predicts the training data points very precisely, but new
data points are not well generalized. In comparison, an under-fitted model predicts
both training and new data points loosely. Neither behavior is preferred, as illustrated
in the sketch below.
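As an illustrative sketch of this tradeoff, the following Python fragment fits polynomials of increasing degree to noisy samples of an assumed target y = sin(x); the degrees, the target function and the noise level are illustrative choices, not taken from the text.

import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(-3, 3, 15)
y_train = np.sin(x_train) + rng.normal(0, 0.2, x_train.size)
x_test = np.linspace(-3, 3, 200)            # new, unseen inputs
y_test = np.sin(x_test)

for degree in (1, 3, 10):                   # under-fitted, balanced, over-fitted
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE={train_err:.3f}, test MSE={test_err:.3f}")

The low-degree model has high error on both sets (bias), while the high-degree model has nearly zero training error but a larger test error (variance).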

1.4.2 Artificial Organic Networks and Modeling Problems

In this section, the most common algorithms in machine learning are compared
in terms of computational time complexity [10, 12–14, 17, 22, 24–27, 29, 32],
characteristics of learning algorithms, characteristics of built models and types of
problems solved. In addition, artificial organic networks technique is also contrasted
with these learning algorithms.
For instance, Table 1.2 summarizes the time complexity of learning algorithms as
well as their characteristics, such as: supervised or unsupervised, parametric or nonparametric,
deterministic or nondeterministic, probabilistic, and univariate or multivariate
algorithms. In addition, Table 1.2 also presents the direct type of problems
that the algorithms solve (approximation or prediction, classification, optimization),
Table 1.2 Characteristics of common learning algorithms

Method/Algorithm | Time Complexity | Supervised | Unsupervised | Parametric | Nonparametric | Deterministic | Nondeterministic | Probabilistic | Univariate | Multivariate | Approximation | Classification | Optimization

General
Linear regression (LR) O(c^2 n) X X X X X X
General regression (GR) O(c^2 n) X X X X X X
Running mean smoother (RMS) O(n) X X X X X X
Kernel smoother (KS) O(nd) X X X X X X
Decision trees (DT) O(nc^2) X X X X X X X X X
Random forest (RF) O(Mcn log n) X X X X X X X X X
Naive Bayes classifier (NBC) O(nc) X X X X X X X
Bayesian networks (BN) O(cd^e) X X X X X X X X
Gaussian mixture models (GMM) O(Sikn) X X X X X X X X
Support vector machine (SVM) O(n^3) X X X X X X X
k-nearest neighbor (kNN) O(knd) X X X X X X X X
k-means algorithm (kM) O(nd^(k+1) log n) X X X X X X
Fuzzy clustering means (FCM) O(indk) X X X X X X
Expectation-maximization (EM) O(Sikn) X X X X X X X
Simulated annealing (SA) OP * * * * X X X X X
Tabu search (TS) OP * * * * X X X X
Evolutionary
Genetic algorithms (GA) NA * * * * X X X X
Gene expression algorithms (GE) NA * * * * X X X X
(continued)

Table 1.2 (continued)

Method/Algorithm | Time Complexity | Supervised | Unsupervised | Parametric | Nonparametric | Deterministic | Nondeterministic | Probabilistic | Univariate | Multivariate | Approximation | Classification | Optimization

Memetic algorithms (MA) NA * * * * X X X X


Artificial neural networks
Backpropagation (BP) TD X X X X X X X
Generalized Hebbian algorithm (HA) TD X X X X X X X
Hopfield’s nets (HN) TD X X X X X X
Kohonen maps (SOM) TD X X X X X X
Swarm intelligence
Particle swarm optimization (PSO) NA * * * * X X X X
Ant colony optimization (ACO) NA * * * * X X X X X
Bees colony optimization (BCO) NA * * * * X X X X X
Intelligent water drops (IWD) NA * * * * X X X X
Cuckoo search (CS) NA * * * * X X X X
Bacterial foraging optimization (BFO) NA * * * * X X X X
Chemically inspired
DNA computing (DNA) NA * * * * X X X X
Chemical reaction optimization (CRO) NA * * * * X X X X
Artificial hydrocarbon networks (AHN) O(Cmn ln(1/ε)) X X X X X X X
X marks stand for a characteristic found in the method/algorithm. * marks refer to a method/algorithm that does not present that characteristic. OP marks
refer to a method/algorithm whose time complexity varies depending on the specific problem and/or optimization algorithm. TD marks stand for a method
whose time complexity is topology-dependent. NA marks refer to a method/algorithm that does not build a model directly

Table 1.3 Symbols used in Table 1.2


Symbol Description
n Number of samples
c Number of attribute variables
d Dimensionality of inputs
k Number of clusters
i Number of iterations
e Maximum number of parents in Bayesian networks
M Number of trees in random forests
S Number of random repetitions
C Number of compounds in AHNs
m Number of molecules in AHNs
ε Tolerance greater than zero in AHNs

i.e. the main purpose of the algorithms; however, this does not mean that the algorithms
cannot be used in other problems. Furthermore, it is worth noting that the table
reports the time complexity of the classical or simple algorithms unless otherwise noted.
Table 1.3 shows the symbols used in Table 1.2.
Focusing on artificial organic networks, i.e. the artificial hydrocarbon networks
algorithm (AHN), it is closely related to backpropagation-based multilayer neural
networks (BP) and support vector machines (SVM) in the sense of supervised learning
and non-probabilistic models used for approximation/classification problems. This is
important to note because it gives a general view of where artificial hydrocarbon
networks is located, as revealed in Fig. 1.13. Notice that artificial hydrocarbon networks
is also located between regression algorithms, like linear regression (LR) and
general regression (GR), and clustering algorithms like k-nearest neighbor (kNN),
the k-means algorithm (kM) and fuzzy clustering means (FCM). Smoothers are not too
far away from regression algorithms and AHNs. Roughly speaking, the above discussion
means that artificial hydrocarbon networks builds approximation and classification
models as well as smoother-like models (i.e. filtering systems).
Unfortunately, in terms of time complexity, artificial hydrocarbon networks cannot
be easily compared with support vector machines or backpropagation-based
multilayer neural networks. However, if the number of compounds and molecules
(units of AHNs) is fixed, artificial hydrocarbon networks would be less complex
than support vector machines when the number of training samples is large. The
backpropagation algorithm depends on the topology of the artificial neural network,
thus a comparison of AHNs and BP cannot be computed in terms of computational
time complexity (see Table 1.2). At most, the backpropagation algorithm is based on
gradient descent methods that actually have time complexity O(ln(1/ε)), where ε > 0 is
a small tolerance value used as the stop criterion, as the sketch below illustrates.
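The following minimal sketch makes this concrete: gradient descent on a simple quadratic converges linearly, so the number of iterations to reach a tolerance ε grows like ln(1/ε). The learning rate, starting point and target below are illustrative assumptions, not values from the text.

import math

def gradient_descent(lr=0.25, eps=1e-6, w=0.0, target=3.0):
    """Minimize f(w) = (w - target)^2 until |w - target| <= eps."""
    iterations = 0
    while abs(w - target) > eps:
        grad = 2.0 * (w - target)   # derivative of (w - target)^2
        w -= lr * grad
        iterations += 1
    return iterations

for eps in (1e-2, 1e-4, 1e-6):
    print(f"eps={eps:.0e}: {gradient_descent(eps=eps)} iterations "
          f"(ln(1/eps) = {math.log(1 / eps):.1f})")

Halving the tolerance by two orders of magnitude adds only a constant number of iterations, which is the O(ln(1/ε)) behavior quoted above.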
Finally, it is important to highlight that some time complexities summarized
in Table 1.2 are specialized. For example, the time complexity of linear/general
regression is based on the least squares estimates algorithm, the time complexity of

decision trees is computed with the standard C4.5 training algorithm, the time complexity
of Bayesian networks is computed on polytrees, and the time complexity of kernel
smoothers is calculated with the simplest known nearest-neighbor smoother. In addition,
the time complexity of expectation-maximization (EM) is computed for random EM
algorithms, as well as for Gaussian mixture models.

Fig. 1.13 Clustering map of learning algorithms/methods of Table 1.2. Labels are acronyms of
the algorithms/methods
On the other hand, Table 1.4 reports characteristics of the models built by some
learning algorithms. In this chart, the simple artificial hydrocarbon networks
algorithm builds continuous, nonlinear and static models representing white or
gray boxes of the system. Notice that nonrecursive models are trained; thus, dynamic
models cannot be handled. However, the artificial organic networks framework does not
limit training models, allowing recursive models for dynamic systems.

1.5 Motivation of Artificial Organic Networks

In a nutshell, most computational algorithms for modeling problems, also known
as learning algorithms, deal with NP-complete problems, which means that, if P ≠ NP,
they have to be improved. For instance, naturally inspired algorithms have been designed for
that purpose, using heuristics and metaheuristics to face NP-complete problems. In
that sense, there are important key features in learning algorithms that have to be
improved: computational time complexity, stability or generalization in algorithms,
and ease of model interpretation to understand systems.
The above features motivate artificial organic networks as a learning algorithm that
is chemically inspired by carbon networks. It assumes that the easiness of topology,
Table 1.4 Characteristics of models built by common learning algorithms
Method/Algorithm Continuous Discrete Linear Nonlinear Static Dynamic Recursive Nonrecursive White/Gray Box Black Box
General
Linear regression (LR) X X X X X X X
General regression (GR) X X X X X X X X
Running mean smoother (RMS) X X X X X X
Kernel smoother (KS) X X X X X X
Decision trees (DT) X X X X X X X
Random forest (RF) X X X X X X X
Naive Bayes classifier (NBC) X X X X X X
Bayesian networks (BN) X X X X X X X X X
Gaussian mixture models (GMM) X X X X X X X
Support vector machine (SVM) X X X X X X X X X
k-nearest neighbor (kNN) X X X X X X X
k-means algorithm (kM) X X X X X X X
Fuzzy clustering means (FCM) X X X X X X X
Expectation-maximization (EM) X X X X X X X
(continued)

Table 1.4 (continued)


Method/Algorithm Continuous Discrete Linear Nonlinear Static Dynamic Recursive Nonrecursive White/Gray Box Black Box
Simulated annealing (SA) NA
Tabu search (TS) NA
Evolutionary
Genetic algorithms (GA) NA
Gene expression algorithms (GE) NA
Memetic algorithms (MA) NA
Artificial neural networks
Backpropagation (BP) X X X X X X X X X
Generalized Hebbian algorithm (HA) X X X X X X X X
Hopfield's nets (HN) X X X X X X X X
Kohonen maps (SOM) X X X X X X X
Swarm intelligence
Particle swarm optimization (PSO) NA
Ant colony optimization (ACO) NA
(continued)
Table 1.4 (continued)
Method/Algorithm Continuous Discrete Linear Nonlinear Static Dynamic Recursive Nonrecursive White/Gray Box Black Box
Bees colony optimization (BCO) NA
Intelligent water drops (IWD) NA
Cuckoo search (CS) NA
Bacterial foraging optimization (BFO) NA
Chemically inspired
DNA computing (DNA) NA
Chemical reaction optimization (CRO) NA
Artificial hydrocarbon networks (AHN) X X X X X X
X marks stand for a characteristic found in the model. NA marks refer to a method/algorithm that does not train a model directly

i.e. the multilevel arrangement of molecular units as well as mixtures of organic
compounds, captures the behavior of a given system; but also, that this topology brings
other properties like stabilization, encapsulation, inheritance and robustness. Lastly,
the topology can also be built on-line using chemical rules.
Thus, in the rest of the book, artificial organic networks technique and artificial
hydrocarbon networks algorithm are introduced and fully described. In particular,
the book is organized as follows:
Chapter 2 introduces basic knowledge of organic chemistry in order to under-
stand the inspiration of artificial organic networks. The chapter begins with an overall
description of organic chemistry and its importance in the real world. Then, basic
concepts like atoms, molecules, mixtures and interactions in chemical compounds
are described. In particular, energy minimization in organic compound structures is
discussed because it will be very important in the design phase of artificial organic
networks. At last, an overview of classification of organic compounds and charac-
teristics that artificial organic networks can mimic are finally described.
Chapter 3 fully describes artificial organic networks technique from its metaphor
of carbon networks to definitions of concepts treated. Actually, artificial organic net-
works is also defined from three different views: the levels of energy in the topology,
the mathematical definition and the framework of the technique. Finally, implemen-
tation issues of training procedures and on-line building models are discussed.
Chapter 4 introduces and describes artificial hydrocarbon networks algorithm, a
chemically inspired learning method based on artificial organic networks and chem-
ical hydrocarbon compounds. The algorithm is described deeply. For instance, two
theorems about time complexity and stability of the algorithm are stated. At last,
the chapter presents a discussion about how to interpret parameters and obtained
topologies of artificial hydrocarbon networks in order to understand the behavior of
a system.
On the other hand, Chap. 5 describes three enhancements of the artificial hydrocar-
bon networks algorithm. First, the chapter describes how to optimize the number of
molecules in artificial hydrocarbon compounds using the enthalpy rule of molecules.
Second, it extends the algorithm to the n-dimensional case in which several attribute
variables can be taken into account in artificial hydrocarbon networks. Finally, the
artificial aromatic compounds algorithm is introduced as the recursive version of artificial
hydrocarbon networks, which can be used for dynamic systems.
Chapter 6 presents experimental results of artificial hydrocarbon networks. It
summarizes examples of approximation problems, inference applications, clustering
problems and classification applications. The last section discusses some guidelines
to implement artificial hydrocarbon networks in real-world applications.
Finally, Chap. 7 encloses real applications of artificial hydrocarbon networks. For
example, it describes two different digital signal processing systems in audio and
image, the design of fuzzy-molecular control systems in direct current motors, the
design of a hybrid algorithm using genetic algorithms, and finally, the development
of multi-agent based systems using artificial hydrocarbon networks.

References

1. Adleman L (1994) Molecular computation of solutions to combinatorial problems. Science 266:1021–1024
2. Alpaydin E (2004) Introduction to Machine Learning. MIT Press, United States of America
3. Blum C, Roli A (2003) Metaheuristics in combinatorial optimization: overview and conceptual
comparison. ACM Comput Surv 35(3):268–308
4. Boden M (1996) Artificial Intelligence. Academic Press, United States of America
5. Burden R, Faires J (2005) Numerical Analysis. Cengage Learning, United States of America
6. Dorigo M, Gambardella LM (1997) Ant colony systems: a cooperative learning approach to
the traveling salesman problem. IEEE Trans Evol Comput 1(1):53–66
7. Evans J, Minieka E (1992) Optimization algorithms for networks and graphs. Marcel Dekker,
United States of America
8. Hastie T, Tibshirani R, Friedman JH (2009) The elements of statistical learning: data mining,
inference, and prediction. Springer, New York
9. Hromkovic J (2001) Algorithms for hard problems: introduction to combinatorial optimization,
randomization, approximation, and heuristics. Springer-Verlag, Germany
10. Hung MC, Yang DL (2001) An efficient fuzzy C-means clustering algorithm. In: Proceedings
of IEEE international conference on data mining. California, San Jose, pp 225–232
11. Irwin G, Warwick K, Hunt K (1995) Neural network applications in control. The Institution of
Electrical Engineers, England
12. Jana PK, Sinha BP (1997) Fast parallel algorithms for forecasting. ELSEVIER Comput Math
Appl 34(9):39–49
13. Kolahdouzan MR, Shahabi C (2004) Continuous K nearest neighbor queries in spatial network
databases. In: Proceedings of the 2nd workshop on spatio-temporal database management.
Toronto, Canada
14. Kramer KA, Hall LO, Goldgof DB (2009) Fast support vector machines for continuous data.
IEEE Trans Syst Man Cybern 39(4):989–1001
15. Lam AYS, Li VOK (2010) Chemical-reaction-inspired metaheuristic for optimization. IEEE
Trans Evol Comput 14(3):381–399
16. Lazinica A (ed) (2009) Particle swarm optimization. InTech
17. Memisevic R (2003) Unsupervised kernel regression for nonlinear dimensionality reduction.
Ph.D. thesis, Universitat Bielefeld
18. Mitchell T (1997) Machine Learning. McGraw Hill, United States of America
19. Moret B, Saphiro H (1991) Algorithms from P to NP. The Benjamin/Cummings Publishing
Company, United States of America
20. Olariu S, Zomaya A (2006) Handbook of bioinspired algorithms and applications. CRC Press,
United States of America
21. Pazos A, Sierra A, Buceta W (2009) Advancing artificial intelligence through biological process
applications. Medical Information Science Reference, United States of America
22. Robnik-Sikonja M (2004) Improving random forests. In: Boulicaut JF (ed) Proceedings of
ECML machine learning. Springer, Berlin
23. Rudich S, Wigderson A (eds) (2004) Computational complexity theory. American Mathemat-
ical Society, Providence
24. Schoukens J, Rolain Y, Gustafsson F, Pintelon R (1998) Fast calculation of least-squares esti-
mates for system identification. In: Proceedings of the 37th IEEE conference on decision and
control, vol 3. Tampa, Florida, pp 3408–3410
25. Sreenivasarao V, Vidyavathi S (2010) Comparative analysis of fuzzy C-mean and modified
fuzzy possibilistic C-mean algorithms in data mining. Int J Comput Sci Technol 1(1):104–106
26. Su J, Zhang H (2006) A fast decision tree learning algorithm. In: Proceedings of the 21st
national conference on artificial intelligence, vol 1, pp 500–505
27. Vens C, Costa F (2011) Random forest based feature induction. In: Proceedings of IEEE 11th
international conference on data mining. Vancouver, pp 744–753

28. Wooldridge M (2002) An introduction to multiagent systems. John Wiley and Sons, England
29. Wu D, Butz C (2005) On the complexity of probabilistic inference in singly connected bayesian
networks. In: Proceedings of the 10th international conference on rough sets, fuzzy sets, data
mining, and granular computing, vol Part I. Springer, Berlin, pp 581–590
30. Wurtz RP (ed) (2008) Organic computing. Springer, Berlin
31. Yang XS (2010) Nature-inspired metaheuristic algorithms. Luniver Press, University of Cambridge, United Kingdom
32. Zhao Q, Hautamaki V, Karkkainen I, Franti P (2012) Random swap EM algorithm for gaussian
mixture models. ELSEVIER Pattern Recognit Lett 33:2120–2126
Chapter 2
Chemical Organic Compounds

For centuries, human beings have found inspiration in nature, from the macro-scale to
the micro-scale. Animals have inspired designs of cars, robots, and even computational
algorithms based on their behaviors. Some new super-tough materials were
inspired by deer antlers. Environmental analysis of pressure, temperature or humidity
has inspired new greenhouse designs. Shapes found in nature have inspired painting,
digital art, and sculpture. Chemical products like waterproofing sprays
were inspired by specific nanostructures of lotus leaves. Also, burdock seeds inspired
the well-known hook-and-loop fastener.
In particular, scientists and researchers take advantage of natural inspiration
because nature has shown that it can adapt itself to respond better to changes and can
reach feasible solutions to problems like better configurations of the structure of matter,
or animal survival in ecosystems. In fact, nature tends to optimality in many different
ways. For instance, consider atomic structures that tend to minimize energy in bonds,
but also preserve particular characteristics depending on atomic relationships. This
notion will be highly important throughout this book because the
study of chemical organic compounds inspires artificial organic networks.
For instance, consider a given physical system with some input and output signals.
Then, the supervised artificial organic networks technique can model the system
using signal information to build a structure made of atoms that are clustered into molecules.
In fact, these molecules will be used to enclose related information found in
the signals of the system. Moreover, if molecules cannot completely approximate
the behavior of the system, they can be joined together forming complex molecules
(also referred to as compounds). At last, compounds can also be mixed, forming
mixtures of molecules that represent linear combinations of the behaviors of subsystems,
giving an organized structure approximating the overall system. Actually, this
structure will be referred to as an artificial organic network because it is inspired by
carbon-based networks, especially studied in organic chemistry, which present highly
stable molecules due to the electronegativity of carbon atoms. Interestingly, an artificial
organic network presents characteristics like structural organization, clustering


information, inheritance of behavior, encapsulation of data, and stability in structure
and response.

Fig. 2.1 Branches of chemistry (organic, inorganic, analytical, physical, and biochemistry)
Thus, this chapter introduces fundamentals on organic chemistry to deeply under-
stand artificial organic networks. In particular, basic concepts of organic compounds
are described from the point of view of structural stability and energy minimization.
In addition, classification of organic compounds is outlined. Finally, hydrocarbons,
the most stable organic compounds in nature, are described.

2.1 The Importance of Organic Chemistry

Chemistry studies matter, its properties and the laws that govern it. As known,
chemistry is divided into five branches: organic chemistry, inorganic chemistry, analytical
chemistry, physical chemistry, and biochemistry, as shown in Fig. 2.1. For
instance, organic chemistry is the study of the compounds of carbon while inorganic
chemistry is the study of all other compounds [10]. Analytical chemistry is
the study of methods that determine and identify elements in compounds. Physical
chemistry applies physics to chemistry and is the study of the thermodynamics and
kinetics of chemical reactions, and biochemistry is the study of chemical processes
inside living organisms.
Centering on organic chemistry, it is important in several ways. Looking
around, organic products are present daily in cleaning accessories such as soaps, shampoos
or perfumes; they are also present in food as proteins, lipids and carbohydrates
that give and store energy, participate in structural formation, transport
other molecules, or regulate growth in living organisms. In fact, all compounds
responsible for life are organic substances denominated biomolecules, e.g. proteins,
lipids, carbohydrates; but also nucleic acids, the complex molecules involved in genetics,
are considered part of them [1, 3, 4].
It is interesting how compounds made of carbon atoms can define millions and
millions of different organic compounds. Actually, carbon atoms can interact among
themselves to form chains and rings, giving the opportunity of changing the chemical properties

of compounds [4]. This is possible because their bonds with other atoms are very
strong in comparison with other atomic interactions, as explained later in this chapter.
Moreover, organic chemistry is not only present in living organisms; it is also
involved in human health technologies like the development of new drugs or the
study of materials, e.g. hypoallergenic materials used in prostheses or cookware coating;
the production of food; and the development of cancer treatments such as the so-called
"sharpshooter" drugs. Also, it is widely used for developing new materials applied in space.
Other industrial applications of organic compounds are: solvents like degreasers or
dry cleaning products, chemical gums for wood like latex or white glue, plastics like
toys or plastic coatings, etc.
Organic chemistry is also generating new tendencies in technology and applied
sciences. For example, consider the development of alternative energy sources such as
biofuels that can be produced from plants or organic waste; or consider organic
photo-sensors that convert light into electrical signals to capture images in cameras.
Furthermore, in computer science, organic chemistry is inspiring DNA computing,
which solves hard problems with strands of DNA, and the organic computing paradigm, which
transfers organic properties to multiagent systems.
In addition, current trends of organic chemistry are related to green chemistry,
which encourages minimizing the usage of substances hazardous to the environment;
applications of fullerenes (molecules completely made of carbon atoms) in the strength
of materials, biology, treatments for industrial wastewater in the field of nanotechnology,
architectural inspiration, conductive materials, antiviral production, etc.; and other
trends like solid-state materials, organic nanostructures, liquid crystals, organic dyes,
organometallic applications to polymer production, agrochemistry, and so forth.
On the other hand, it is well known that organic chemistry influences the global
economy with hydrocarbons, especially in the production of petroleum and the industry
of polymers [3]. Thus, organic chemistry is highly important in daily life, from
the study of living organisms to medical applications, through its economic and
technological impact.

2.2 Basic Concepts of Organic Compounds

Organic chemistry is the study of the compounds of carbon [3, 10]. Interestingly,
most of these compounds consist of carbon and a few other elements like hydrogen,
oxygen, nitrogen, phosphorus, sulfur, and halogens (fluorine, chlorine, bromine,
iodine, astatine). In fact, organic compounds have important physical and chemical
properties derived from carbon bonding to itself and the specialization of the few
other elements, like the strength of structures or the effect of heat.
In the following sections, physical properties are associated with the structure of
organic compounds, while chemical properties are associated with their behavior. Notice that
this review is organized in order to highlight the role of energy stabilization in organic
compounds.

Fig. 2.2 Example of the molecular structure of polyvinyl acetate, an organic compound used in white glue (showing its atoms and bonds)
2.2.1 Structural Definitions

Organic compounds are constituted with basic units called atoms. When atoms
interact among them, they form molecules and compounds, as shown in Fig. 2.2.
Thus, the structure of organic compounds considers the set of atoms and the ways
they are bonded. But also, energy minimization and geometric configuration play an
important role in the structure [5]. Following, basic components and interactions of
chemical organic compounds are summarized.

2.2.1.1 Atoms

The basic unit of organic compounds is the atom [4, 5]. Each atom is a set of charged
particles, including protons and electrons, and the neutral particles so-called neutrons.
Roughly, any atom has protons and neutrons joined together at its nucleus, and the
latter is surrounded by a set of electrons. The sum of protons and neutrons is equal
to the mass number of an atom. In addition, the number of protons in the nucleus is
referred to as the atomic number. Hence, different atoms have different atomic numbers.
The well-known periodic table summarizes this and other concepts about atoms.
Thus, different atoms have distinct physical characteristics. In the sense of this book,
the distinction of an atom is limited to the atomic number, and different atoms refer
to different behaviors due to their physical characteristics.
For instance, the model of atoms assures that they have a particular way to distribute
their electrons known as the electron configuration [3, 4]. It assumes that electrons
can only orbit in specific spaces called energy shells. Additionally, shells are divided
into subshells (i.e. s, p, d and f) in which electrons are grouped in orbitals. Finally,
each orbital can be occupied by at most two electrons. In principle, electrons might
adopt an infinite set of configurations. In that sense, studies in chemistry and
quantum mechanics reveal that there is one electron configuration in which electrons
preserve the minimum energy of the atomic system, the so-called ground-state (see
Sect. 2.2.2.3).

Fig. 2.3 Simple model of the carbon atom and its energy-level diagram (shells n = 1, 2 with orbitals 1s, 2s, 2p)

Atoms are also characterized by valence electrons. These are the electrons that
occupy the last energy shell defined in the atom; particularly, this energy shell is
called the outer shell. Since electrons need some energy to be in shells, the ground-state
assures that energy minimization occurs when electrons follow the quantum chemistry
principles (see Sect. 2.2.2.3). Otherwise, atoms are in an excited state.
Basically, noble gases (e.g. helium, neon, argon, krypton, xenon, radon) are the
only atoms with minimum energy per shell. Then, atoms must satisfy a chemical rule
stating that atoms in ground-state that do not have a complete outer shell (there are
some valence electrons but not all of them) need more electrons in order to complete
it. This is the basis of atomic interactions in organic chemistry: ionic interactions and
covalent bonds, treated later in this chapter. Figure 2.3 shows a simple model and the
energy-level diagram of the carbon atom.

2.2.1.2 Molecules

A set of two or more atoms acting as a single unit is called a molecule. As mentioned
previously, atoms need to complete the outer shell with electrons; thus, the creation
of molecules is one mechanism to do that. Three important types of molecules are
defined as follows:

Elements. These molecules are made of like atoms, e.g. the hydrogen element, because
atoms are not found isolated in nature. Elements can be classified into
three categories:

Metals. Physically, they are solid at room temperature (except mercury),
they can also be shiny, malleable and ductile, and
they serve as conductors of electricity. Chemically, they
react by losing electrons.

Nonmetals. Physically, they are the opposite of metals. Chemically,
they react by gaining electrons. These elements are the only
ones present in chemical organic compounds.

Metalloids. They have a hybrid behavior combining metal and nonmetal
properties.

Compounds. They are molecules made of other molecules, formed when two or
more of them react. In the general case, a compound is made up of elements in definite
proportions by mass. For instance, organic compounds are made of carbon atoms; if there
are no carbon atoms, then compounds are considered inorganic.

Functional groups. These molecules are made of a carbon atom bonded with one
of the other atoms allowed in organic compounds. Functional groups are important
because organic compounds are classified based on them; chemical reactions act
on them in order to form other stable compounds, simple or complex; and they
are the basis for naming organic compounds.

2.2.1.3 Chemical Bonding

In molecules, chemical bonding refers to the relationship among atoms. In fact, this
process is based on charged-particle attraction. In chemistry, two main types of chemical
bonding are present: ionic interactions and covalent bonds.

Ionic interactions. A metal element tends to lose electrons while a nonmetal element
tends to gain electrons. Thus, an ionic interaction is present in a metal-and-nonmetal
reaction. These chemical bondings are not present in organic compounds.

Covalent bonds. When two nonmetal atoms interact, they cannot lose or
gain electrons; thus, they share electrons of their outer shells. Due to the set
of atoms in organic compounds, covalent bonds will be considered throughout this
book.

In both cases, chemical bonding is explained by electronegativity, which is
presented in Sect. 2.3.

2.2.2 Chemical Definitions

The study of the interactions of atoms and molecules is based on chemical rules and
properties. Depending on them, organic compounds have specific characteristics
and behaviors. In the following section, chemical principles are depicted in order to
understand the behavior of molecules and chemical bonding [5].

2.2.2.1 Electronegativity

In chemistry, the tendency of atoms to attract electrons is a chemical
property of atoms named electronegativity [4, 5]. For instance, consider
two atoms closely located. If they interact making a chemical bond, then both atoms
have high electronegativity. Otherwise, the atoms will not relate at all. Interestingly,

Fig. 2.4 Tendency of electronegativity in the periodic table

Table 2.1 Electronegativity values of atoms related to chemical organic compounds

Element   Electronegativity (Pauling scale)
H         2.1
C         2.5
O         3.5
N         3.0
P         2.1
S         2.5
F         4.0
Cl        3.0
Br        2.8
I         2.6
At        2.2

elements in the periodic table are arranged so that electronegativity increases from
left to right and from bottom to top, except for the hydrogen atom, which has an
electronegativity similar to the atoms related to organic compounds. Figure 2.4 shows
the tendency of electronegativity in the periodic table. In addition, the electronegativity values
of the set of atoms related to organic compounds [2, 3] are shown in Table 2.1.
Electronegativity is measured using the Pauling scale. It represents the absence
of electronegativity with a zero value, and higher values represent higher electronegativity.
For example, from Table 2.1, the hydrogen and astatine elements have smaller

electronegativity than fluorine which has the highest electronegativity even in the
whole periodic table.
Moreover, electronegativity is useful when classifying chemical bondings. If the
difference in electronegativity is less than 0.5 in the Pauling scale, then the chemi-
cal bonding is considered as covalent bond. If the difference in electronegativity is
between 0.5 and 1.7, then it is a polar covalent bond (see Sect. 2.3). Finally, if the
difference in electronegativity is greater than 1.7, then it is an ionic interaction.
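As an illustration, the following minimal Python sketch classifies a bond from the electronegativity difference of two atoms, using the Pauling values of Table 2.1 and the 0.5/1.7 thresholds just described; the function name is hypothetical.

ELECTRONEGATIVITY = {  # Pauling scale, values from Table 2.1
    "H": 2.1, "C": 2.5, "O": 3.5, "N": 3.0, "P": 2.1, "S": 2.5,
    "F": 4.0, "Cl": 3.0, "Br": 2.8, "I": 2.6, "At": 2.2,
}

def classify_bond(a: str, b: str) -> str:
    """Classify the bond between atoms a and b by electronegativity difference."""
    diff = abs(ELECTRONEGATIVITY[a] - ELECTRONEGATIVITY[b])
    if diff < 0.5:
        return "covalent bond"
    if diff <= 1.7:
        return "polar covalent bond"
    return "ionic interaction"

print(classify_bond("C", "C"))   # covalent bond (like atoms)
print(classify_bond("C", "O"))   # polar covalent bond (difference of 1.0)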

2.2.2.2 Quantum Chemistry Principles

As mentioned previously, the ground-state principle assures energy minimization in


chemical structures [1, 8]. However, three important quantum chemistry principles
need to be stated before introducing ground-state formally. These quantum chemistry
principles refer to theoretical observations that regulate chemical interactions among
atoms.

Aufbau principle. This principle states that electrons must fill the lowest energy
shells first. As an exception, electrons in a d orbital have more stability when this
orbital is half filled or fully filled; thus, one electron from the s orbital above is
placed on the d orbital.

Pauli exclusion. This principle states that a pair of electrons in an orbital must
have opposite spins.

Hund's rule. It states that, when filling orbitals with electrons, these must not be paired
in an orbital until each orbital of the subshell contains one electron, except when
filling s orbitals.

Figure 2.5 shows the diagram for filling electrons in order to reach a ground-state
in atoms [1, 8]. The arrows indicate the path for picking orbitals in order to fill
them. Notice that the above three quantum chemistry rules must be satisfied.
Example 2.1 Determine the ground-state electron configuration of: (a) the carbon
atom, and (b) the copper atom.

Fig. 2.5 Diagram model for writing the ground-state electron configuration of atoms (diagonal order over the orbitals 1s; 2s, 2p; 3s, 3p, 3d; 4s, 4p, 4d, 4f; 5s, 5p, 5d, 5f; 6s, 6p, 6d; 7s, 7p)

Solution 2.1 Using Fig. 2.5 and the atomic number of each atom, the ground-state
electron configuration is:
(a) The carbon atom has atomic number six, thus six electrons are needed:
C: 1s^2 2s^2 2p^2.
(b) The copper atom has atomic number 29, thus 29 electrons are needed:
Cu: 1s^2 2s^2 2p^6 3s^2 3p^6 4s^1 3d^10.
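A minimal sketch of this filling procedure is given below: it fills orbitals in the diagonal order of Fig. 2.5 with the subshell capacities s = 2, p = 6, d = 10, f = 14. Note that this simple version reproduces part (a) but not the d-orbital exception of part (b), where one 4s electron moves to the 3d orbital.

AUFBAU_ORDER = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p",
                "5s", "4d", "5p", "6s", "4f", "5d", "6p", "7s", "5f", "6d", "7p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}  # electrons per subshell

def ground_state(atomic_number: int) -> str:
    """Fill orbitals in Aufbau order until all electrons are placed."""
    electrons, config = atomic_number, []
    for orbital in AUFBAU_ORDER:
        if electrons <= 0:
            break
        filled = min(electrons, CAPACITY[orbital[-1]])
        config.append(f"{orbital}^{filled}")
        electrons -= filled
    return " ".join(config)

print(ground_state(6))    # carbon: 1s^2 2s^2 2p^2
print(ground_state(29))   # copper: yields ...4s^2 3d^9 (actual exception: 4s^1 3d^10)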

2.2.2.3 Ground-State Principle

Structures of chemical organic compounds preserve stability while minimizing
energy. In practice, electrons have an infinite number of possible configurations for each atom,
and each electron configuration has a specific energy level. The electron configuration
that has the minimum energy level is named the ground-state [8]. Any electron
configuration different from the ground-state is called an excited-state. In organic chemistry,
the ground-state electron configuration is considered for all chemical structures
because it aims to study and understand physical and chemical properties.
On the other hand, the ground-state has theoretical applications in energy minimization
and chemical bonding. For instance, noble gases are the only elements with
the minimum energies for each shell, meaning that noble gases have outer shells completely
filled with electrons. Thus, atoms tend to fill the outer shell with all possible
valence electrons. In organic chemistry, this rule is called the octet rule. In general,
not all atoms satisfy the rule, but in organic chemistry it is perfectly possible.

2.2.2.4 Lewis Dot Structure

A visual interpretation of the ground-state, chemical bonding and electronegativity
concepts all together is the so-called Lewis dot structure. This diagram considers the
chemical symbol of a given atom and some dots around it representing the valence
electrons of that atom. Thus, sharing, gaining or losing electrons is seen explicitly.
Figure 2.6 shows the Lewis dot structure of a carbon atom and the Lewis dot structure
of the interaction between carbon and hydrogen atoms.

Fig. 2.6 Examples of Lewis dot structures. (left) Diagram of carbon atom. (right) Diagram of
interaction between carbon and hydrogen atoms

2.2.2.5 Chemical Balance Interaction

After atoms find the ground-state electron configuration and molecules are formed via
the octet rule, another chemical interaction can be applied to minimize energy in
structures. This refers to the chemical balance theory [5, 8, 10]. Notice that it
does not use atomic interactions.
In fact, chemical balance interaction determines different ways to mix two or
more substances, e.g. molecules or compounds, by reacting and forming products,
i.e. new compounds. In the simplest form, compounds might be determined as a
linear mixture of substances in definite ratios; these ratios are named
stoichiometric coefficients. In fact, if optimal stoichiometric coefficients are found,
then the chemical balance interaction can be viewed as an energy stabilization process,
as sketched below. In chemistry, these structures are called mixtures.
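As a sketch of how stoichiometric coefficients can be computed, the following Python fragment balances the illustrative reaction CH4 + O2 → CO2 + H2O by finding the null space of its element-balance matrix; the chosen reaction and the numerical method are assumptions for illustration.

import numpy as np

# Rows: C, H, O balances; columns: CH4, O2, CO2, H2O (products with negative sign).
M = np.array([
    [1, 0, -1,  0],   # carbon balance
    [4, 0,  0, -2],   # hydrogen balance
    [0, 2, -2, -1],   # oxygen balance
], dtype=float)

# The right-singular vector of the smallest singular value spans the null space of M.
_, _, vt = np.linalg.svd(M)
coeffs = vt[-1]
coeffs = coeffs / np.min(np.abs(coeffs[np.abs(coeffs) > 1e-9]))  # normalize ratios
print(np.round(np.abs(coeffs), 3))   # [1. 2. 1. 2.] -> CH4 + 2 O2 -> CO2 + 2 H2O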

2.3 Covalent Bonding

In Sect. 2.2.1.3, covalent bonds were defined. Since they are important to organic
chemistry, this section introduces the classification and an accepted chemical model
for covalent bonds.
To restate, covalent bonds appear when two nonmetal atoms share electrons
of their outer shells because they cannot lose or gain electrons. In general, chemical
interactions of two atoms with a difference in electronegativity of less than 1.7 on the Pauling
scale are considered covalent bonds. A usual classification of covalent bonds is
the following:
Polar covalent bonds. It appears when two different atoms are interacting, e.g. in
C − H interaction.
Nonpolar covalent bonds. It appears when two similar atoms are interacting, e.g.
in C − C interaction.
Below, there is a description of an accepted model of covalent bonds. In particular, it
introduces the empirical model and two important parameters which are: the length
and the minimum energy of bond.

2.3.1 Characterization of Covalent Bonds

In organic chemistry, the model of chemical covalent bonds refers to a mathematical


expression that characterizes the chemical covalent bond between two atoms [5, 8]. In
fact, this model relates the energy of bond and the interatomic distance. For instance,
the energy of bond is the value of energy necessary to break the atomic bond; while the
interatomic distance is the length between the nuclei of the two atoms participating in
the chemical interaction. Additionally, the notion of covalent bonds between atoms

can be extended to covalent bonds between molecules, relating the energy of bond
and the intermolecular distance, which refers to the interatomic distance of the two
atoms located on distinct molecules participating in the interaction.
Without loss of generality, interatomic distances are considered as follows [8].
On one hand, if the interatomic distance is too large, there is no chemical bond
between atoms; thus, the energy of bond is equal to zero. On the other hand, let
E_attraction be the energy related to the attraction force between the electrons of one
atom and the nucleus of the other, which increases rapidly when the distance is
short. However, if atoms are too close, a repulsive force appears due to the electrons
of both atoms, and the energy associated to this force, say E_repulsion, increases.
Then, the sum of both energies E_attraction and E_repulsion gives the total
energy of bond E_bond as (2.1):

E_bond = −E_attraction + E_repulsion   (2.1)

In fact, the empirical model of the chemical covalent bond (2.1) [8] can be rewritten
in terms of the interatomic distance as expressed in (2.2):

E_bond = −A/r^m + B/r^n,  (m < n)   (2.2)

where A and B are constants related to the charge of the atoms, r is the interatomic
distance, and m, n are two positive empirical coefficients defining the attractive and
repulsive forces, typically n ≈ 12 and m ≥ 6. Figure 2.7 shows the relationship
between E_attraction and E_repulsion.
Actually, two interesting parameters of covalent bonds are the length of bond
and the minimum energy, and one parameter for specific nonpolar covalent bonds is
called the order of bond:

Fig. 2.7 Empirical model of chemical covalent bonds (energy versus interatomic distance, showing the attraction and repulsion components and the stable state)



Length of bond. It is the interatomic or intermolecular distance at which the
minimum energy of bond is reached (see the sketch after this list).

Minimum energy of bond. It is the minimum energy required to break the bond,
and it also represents the most stable state of the chemical covalent bond.

Order of bond. (Nonpolar covalent bonds) It is the number of pairs of electrons
shared in the covalent bond. As the order of bond increases, the covalent bond
becomes physically stronger and more stable. The orders are:

Simple bonds. One pair of electrons is shared.
Double bonds. Two pairs of electrons are shared.
Triple bonds. Three pairs of electrons are shared.
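A minimal sketch of the first two parameters, assuming the empirical model (2.2): setting dE_bond/dr = 0 gives the length of bond r0 = (nB/(mA))^(1/(n - m)), and evaluating (2.2) at r0 gives the minimum energy. The constants A and B below are illustrative, not measured values.

def bond_parameters(A=1.0, B=1.0, m=6, n=12):
    """Length of bond and minimum energy for E(r) = -A/r^m + B/r^n, m < n."""
    r0 = (n * B / (m * A)) ** (1.0 / (n - m))   # root of dE/dr = 0
    e_min = -A / r0 ** m + B / r0 ** n          # minimum energy of bond
    return r0, e_min

r0, e_min = bond_parameters()
print(f"length of bond r0 = {r0:.4f}, minimum energy = {e_min:.4f}")
# With A = B = 1, m = 6, n = 12 (a Lennard-Jones-like case): r0 = 2^(1/6), e_min = -0.25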

2.4 Energy in Organic Compounds

Energy is an important concept in physics, chemistry, thermodynamics
and engineering in general. It can be defined as the ability of a system to do work.
Referring to organic compounds, chemical energy is responsible for structural
transformation, combustion, electricity due to nuclear fission, batteries, etc.
Typically, energy is classified into kinetic and potential energies. Roughly speaking,
kinetic energy is the one used when any mass is in movement, while potential
energy refers to stored energy. Considering the latter, potential energy determines
stability in chemical structures. If potential energy is low, structures are stable.
If potential energy increases, chemical structures are unstable. Since potential energy
can transform into kinetic energy, the higher the potential energy is, the greater
the motion of chemical structures is, implying rearrangements in the geometric structures of
atoms, molecules and compounds. Thus, energy minimization refers to chemical
structural stability.
In that sense, organic compounds have some strategies to minimize energy in their
chemical structures, summarized in the following energy level scheme. Finally, three
measurements of energy in thermodynamics are introduced.

2.4.1 Energy Level Scheme

In this section, a model of energy minimization in chemical structures,
named the energy level scheme, is proposed. In fact, this is a synthesis of concepts of quantum
chemical structures, molecular formation and mixture formation.
Stability in organic compounds comes from the fact that every structural unit
claims for energy minimization. Then, three levels of energy are considered. The
first level of energy minimization refers to the ground-state electron configuration.
Before any atom interacts with others, atoms tend to minimize energy following

Fig. 2.8 Representation of the energy level scheme (atoms and molecules at the bottom, compounds in the middle, mixtures at the top; energy increases upward)

quantum chemical structures (see Sect. 2.2.2.2). In addition, since atoms aim to hold
the octet-rule, they interact among them via chemical bonds forming molecules.
The next level of energy minimization considers chemical reactions on mole-
cules to create compounds. These compounds are made of different molecules but
interaction of them needs minimizing energy to conform bonds. In particular, these
interactions release energy in structures by breaking and making bonds. Finally, the
last level of energy minimization is presented in chemical balance interaction while
creating mixtures of molecules or compounds.
As noticed, atoms are more stable than molecules, molecules are more stable than
compounds, and the latter are more stable than mixtures. Figure 2.8 represents this
energy level scheme visually. It concludes that stability is preserved easily at the
bottom level and structures are more unstable at the top level.

2.4.2 Measures of Energy

In thermodynamics—a branch of physics which describes the relationship between
heat and work, changes in temperature, and the transformation of energy—the transference
and conversion of energy in chemical reactions are studied [1, 4]. Actually, three
important thermodynamic properties are considered to measure energy in chemical
reactions: enthalpy, Gibbs free energy, and entropy. In practice, these properties cannot
be measured directly in the system; thus, their differences are used instead, as
mentioned later.
Consider a chemical reaction from reactants (R) to products (P) as depicted in
Fig. 2.9. At the beginning, reactants have some energy and at a given time a chemical
reaction is produced and the products have less energy than reactants. Visually, this
is known as reaction coordinate diagram.

Fig. 2.9 Example of a reaction coordinate diagram (energy versus reaction coordinate, showing reactants, the activation energy to the transition state, and products)

2.4.2.1 Enthalpy

Enthalpy H is a thermodynamic property of chemical bonds [6]. In practice, the
energy involved in these bonds is directly related to the chemical energy at constant
pressure; thus, enthalpy can be measured using the heat of reaction. When computing
the enthalpy in chemical reactions, Hess' law [6] states that the enthalpy of a reaction
is independent of the pathway between the initial and final states; then, the heat of
reaction¹ ΔH° is used instead of H, defined as (2.3), where H°_i is the initial enthalpy
(e.g. energy of reactants) and H°_f is the final enthalpy (e.g. energy of products). In
the laboratory, the heat Q measured with a calorimeter is equal to ΔH° such that (2.4)
holds.

ΔH° = H°_f − H°_i   (2.3)

ΔH° = Q   (2.4)

In organic chemistry, the enthalpies of several chemical bonds and molecules have
already been obtained. Table 2.2 summarizes the enthalpies of chemical bonds related to
important atomic interactions in organic compounds [4, 6, 8, 9].
Finally, the heat of reaction determines whether the reaction releases heat, known as
exothermic (ΔH° < 0), or absorbs heat, known as endothermic (ΔH° > 0).

Table 2.2 List of enthalpies of chemical bonds in organic compounds

Chemical bond   Enthalpy (kJ/mol)
C−C             350
C−H             415
C=C             611
C≡C             837

¹ The superscript (°) means that the property is measured at standard states of 298 K in temperature
and 1 atm in pressure.
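As an illustrative use of Table 2.2, the following sketch estimates the energy needed to break a molecule into its atoms by summing bond enthalpies; the molecules and bond counts are illustrative (ethane: one C−C and six C−H bonds; ethene: one C=C and four C−H bonds).

BOND_ENTHALPY = {"C-C": 350, "C-H": 415, "C=C": 611, "C#C": 837}  # kJ/mol, Table 2.2

def atomization_energy(bonds: dict) -> int:
    """Sum bond enthalpies over the bond counts of a molecule."""
    return sum(BOND_ENTHALPY[b] * count for b, count in bonds.items())

print("ethane:", atomization_energy({"C-C": 1, "C-H": 6}), "kJ/mol")  # 2840
print("ethene:", atomization_energy({"C=C": 1, "C-H": 4}), "kJ/mol")  # 2271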

2.4.2.2 Gibbs Free Energy

The Gibbs free energy G is another thermodynamic property of chemical reactions
that measures the velocity (rate) of a reaction and its equilibrium [7]. At standard
states, the change in Gibbs free energy ΔG° is related to the equilibrium constant
K_eq as (2.5), where R is the gas constant (8.314 J/mol·K) and T is the temperature.
A value ΔG° < 0 refers to a spontaneous reaction; otherwise, the reaction needs external
energy to proceed.

ΔG° = −RT ln(K_eq)   (2.5)

Interestingly, enthalpy, entropy and Gibbs free energy are expressed together in
the Gibbs-Helmholtz equation as (2.6), where T is the temperature, ΔG° is the change
in Gibbs free energy, ΔH° is the heat of reaction, and ΔS° is the change in entropy.

ΔG° = ΔH° − TΔS°   (2.6)
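A minimal numerical sketch of (2.5) and (2.6): given an assumed equilibrium constant and an assumed heat of reaction at standard temperature, it computes ΔG° and then the change in entropy; the numeric values are illustrative.

import math

R = 8.314        # gas constant, J/(mol K)
T = 298.0        # standard temperature, K

K_eq = 10.0                           # illustrative equilibrium constant
dG = -R * T * math.log(K_eq)          # Eq. (2.5), in J/mol
dH = -40_000.0                        # illustrative heat of reaction, J/mol (exothermic)
dS = (dH - dG) / T                    # Eq. (2.6) rearranged: dS = (dH - dG) / T

print(f"dG = {dG / 1000:.2f} kJ/mol ({'spontaneous' if dG < 0 else 'non-spontaneous'})")
print(f"dS = {dS:.1f} J/(mol K)")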

2.4.2.3 Entropy

Entropy S is the measurement of the number of processes or possible conformations
in chemical reactions [7]. Roughly speaking, entropy measures disorder in reactions.
It is a statistical property associated with stability, related to the second
law of thermodynamics, which states that stable systems are those with maximum entropy.
For instance, at standard states, if the change in entropy ΔS° > 0, then the system is
more stable; and if the change in entropy ΔS° < 0, then the system is more unstable.
This behavior is also observed in (2.6).
In practice, the change in entropy is calculated with (2.6) because there are no
instruments to measure it directly in the system.

2.5 Classification of Organic Compounds

As discussed previously, chemical organic compounds are classified based on their own
functional group (see Sect. 2.2.1.2). In fact, the most studied organic compounds are [4]:
hydrocarbons, alcohols, amines, aldehydes, ketones, carboxylic acids, polymers, carbohydrates,
lipids, amino acids, proteins, and nucleic acids. Following, there is a brief
description of each type of compound.

2.5.1 Hydrocarbons

Hydrocarbons are the simplest organic compounds with functional group CH [10].
Hydrocarbons are classified as alkanes, alkenes and alkynes, depending on the order
of bonds among carbon atoms of their functional group.

On one hand, alkanes are the simplest hydrocarbons, formed by single (nonpolar) covalent
bonds between carbon atoms [10]. The general formula can be expressed as
C_nH_{2n+2}, in which n is the number of carbon atoms in the molecule. Structural
isomers are the different forms that alkanes can take in terms of the topology
of molecules with the same number of elements. Structural isomers have particular
physical and chemical properties.
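As a small illustration of the general formula, the following sketch generates the molecular formula C_nH_{2n+2} of an alkane and estimates its molar mass from standard atomic weights; the helper name is hypothetical.

ATOMIC_MASS = {"C": 12.011, "H": 1.008}  # g/mol, standard atomic weights

def alkane(n: int):
    """Return the formula C_nH_{2n+2} and molar mass of an alkane with n carbons."""
    hydrogens = 2 * n + 2
    formula = ("C" if n == 1 else f"C{n}") + f"H{hydrogens}"
    mass = n * ATOMIC_MASS["C"] + hydrogens * ATOMIC_MASS["H"]
    return formula, mass

for n in (1, 2, 8):
    formula, mass = alkane(n)
    print(f"{formula}: {mass:.2f} g/mol")   # CH4 (methane), C2H6 (ethane), C8H18 (octane)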
Another type of hydrocarbon is the cyclic hydrocarbons, in which carbons are
bonded in a ring shape [10]. If all carbons in cyclic hydrocarbons are bonded with
hydrogen fulfilling the octet rule, they are called cycloalkanes. In nature, the most
abundant are cyclopentanes and cyclohexanes.
When alkanes and cycloalkanes have low molecular weight they tend to be gases, and
as their molecular weight grows they become solids. This depends on the intermolecular
forces and the melting and boiling points, which become smaller as the
dispersion forces become weak. The average density of alkanes is 0.8 g/ml, so they
float on water [4]. Also, the less branched isomers have a boiling
point higher than the highly branched isomers. These organic compounds are made
stable by strong C−H bonds, but may react with oxygen (oxidation). This oxidation
can release heat that can be used as an energy source. Mainly, they come from
fossil fuels.
Other types of hydrocarbons are alkenes and alkynes. Alkenes are unsaturated
hydrocarbons having carbon atoms with a double bond, and alkynes are those with
carbon atoms with a triple bond. Also, arenes are the functional groups formed by
a carbon ring with double bonds. In general, they share the same physical and
chemical properties of alkanes, but they are not soluble in water, only in themselves.
The most common processes in nature are the formation of C−H skeletons that allow
biodiversity, mainly in plants, where the structures are iterative and the ramifications
are taken for enzymatic reactions. If any of the hydrogen atoms of an alkane is
removed, the result is known as an alkyl group; if a hydrogen atom is replaced by a halogen,
the compound is known as a haloalkane.
Figure 2.10 shows the difference among alkanes, alkenes, alkynes and arenes.

2.5.2 Alcohols, Ethers, and Thiols

On one hand, alcohols [4] are considered the central organic compounds because they
can be transformed in other organic compounds or can be expressed as a product
of reactions of other compounds. Their functional group is the hydroxyl –OH that
is bonded to a carbon atom. In general, alcohols are more soluble in water than
hydrocarbons since their molecular weight comes to grow. In addition, alcohols can
form weak acids in presence of water; and in presence of strong acids, alcohols
can form weak bases. Several compounds that alcohols can be converted are alkyl
halides, alkenes, aldehydes, and carboxylic acids, among others.
On the other hand, ethers [4] have a functional group with oxygen bonded to two sp³-hybridized carbon atoms.

Fig. 2.10 Different types of hydrocarbons: alkane, alkene, alkyne, and arene

Ethers are more soluble in water than hydrocarbons because oxygen may form hydrogen bonds (a weak bond between a hydrogen and an electronegative element). In fact, ethers are as stable as hydrocarbons and they do not react with the majority of organic compounds. Actually, ethers can assist chemical reactions in other organic compounds.
At last, thiols [4] are organic compounds with the sulfhydryl group –SH as functional group. Physically, they cannot form hydrogen bonds; thus, the boiling point of thiols is lower than that of alcohols or ethers. Moreover, thiols are not very soluble in water.

2.5.3 Amines

The functional group of amines is the amino, a compound formed with nitrogen and one, two, or three groups of carbon atoms [3, 4]. Depending on the ammonia substitution, amines can be classified as primary amines if one hydrogen atom of ammonia is replaced; secondary amines if two hydrogen atoms are replaced; or tertiary amines if three hydrogen atoms are replaced. In fact, amines can present hydrogen bonds with nitrogen atoms. The greater the molecular weight of an amine, the less soluble in water it is.

2.5.4 Aldehydes, Ketones, and Carboxylic Acids

First, aldehydes [3] have a carbonyl functional group C=O in union with one or two hydrogen atoms. Ketones [3] have the same carbonyl functional group in union

with two carbon atoms. In general, the boiling point of aldehydes and ketones is higher than the boiling point of nonpolar compounds. Aldehydes can react to form carboxylic acids because they are among the easiest compounds to oxidize. In contrast, ketones cannot oxidize easily.
On the other hand, carboxylic acids [3] have the carboxyl functional group –CO2H. This functional group is a hybrid between a carbonyl and a hydroxyl. In reactions, carboxylic acids can be converted into acid chlorides, esters and amides. Because carboxylic acids have oxygen and hydrogen atoms, they are quite stable and can form hydrogen bonds. In comparison with aldehydes, ketones or amines, carboxylic acids have higher boiling points.

2.5.5 Polymers

In organic compounds, polymers are long-chain molecules formed by bonded monomers (simple organic compounds) [3, 4]. In general, polymers have different structures, the most common being: linear, branched, comb-shaped, ladder, star, cross-linked network, and dendritic. The properties of polymers depend on the size and topology of the molecules. Physical properties of polymers are related to resistance, elasticity, and so forth. In general, the monomers used in polymers are iteratively repeated through the whole network, and relations are made by covalent bonds.
There exist two processes of polymer synthesis: step-growth polymerization, a synthetic process in which any two growing chains or monomers react with each other in single steps, and chain-growth polymerization, in which monomers are added to the chain one at a time without loss of atoms. The most important polymer structures are polyamides, polyesters, polycarbonates, polyurethanes, and epoxies.

2.5.6 Carbohydrates, Lipids, Amino Acids, and Proteins

These are considered biomolecules and they are associated with a chemical function [1]. In that context, carbohydrates [1] are compounds that can store energy, form part of structural tissues, and form part of nucleic acids. The majority of carbohydrates are based on the formula Cn(H2O)m.
Lipids [1] are organic compounds typically known as the energy source of living beings. In contrast, amino acids [1] are organic compounds formed with a carboxyl and an amino group, used in the transportation of enzymes. Alpha-amino acids are the monomers of proteins.
Finally, proteins [1] are organic compounds with one or more chains of polypeptides, macromolecules containing ten or more amino acids joined together with peptide bonds. In general, proteins are the structural basis of organisms, the growth regulators, the transporters of other molecules, etc.

2.5.7 Nucleic Acids

Other biomolecules are the nucleic acids [1, 4]. They form the basis of information for the organization, maintenance, and regulation of cellular functions. This information is expressed in genes via deoxyribonucleic acids (DNA) translated by ribonucleic acids (RNA) in the synthesis of proteins. Roughly speaking, the structure of DNA is based on deoxyribose units and phosphate, to which simple bases of aromatic heterocyclic amines mate: adenine, guanine, thymine, and cytosine. The entire structure has two helices. Observations can be summarized as follows:
• The composition of bases in any organism is the same in every cell of the organism, and it is unique to it.
• Molar percentages of adenine and thymine are equal.
• Molar percentages of guanine and cytosine are equal, too.
• Molar percentages of the purine bases (adenine and guanine) and the pyrimidine bases (cytosine and thymine) are equal.
• In comparison to DNA, RNA is based on one structure in which uracil is a base rather than thymine.

2.6 Organic Compounds as Inspiration

As reviewed, organic compounds are represented physically and chemically. However, these representations are intimately related because structures are conformed under chemical rules. In addition, these compounds aim to reach stability via energy minimization. In fact, this point of view inspired artificial organic networks. Thus, this section introduces the motivation of that inspiration and the characteristics studied.

2.6.1 Motivation

When looking at nature, and more specifically at the structure of matter, some characteristics of chemical compounds are rapidly noticeable. For instance, just around eleven elements can derive more than twenty million different organic compounds. In addition, compounds are made of simpler molecules that can be described as organized units. Also, this organization is based on chemical rules that are applied over and over again from bottom to top in the energy level scheme. But the most important observation is that chemical structures look for stabilization via energy minimization. Thus, organic compounds are built from minimal resources in optimal ways.
Actually, notice that the relationships among atoms make molecular units possible. Different atoms and different arrangements of those make different molecules. Notice

that the differentiation of molecules may be observable from their physical and chemical properties. In that sense, physical properties may refer to the structure of the molecule while chemical features may refer to the behavior of that molecule. Thus, molecules can be seen as basic units of information characterized by the atoms inside them. Moreover, molecules might be seen as encapsulation and potential inheritance of information. However, relationships among atoms cannot be performed without two possible interactions: chemical bonds and chemical reactions. The first interaction forms basic and complex molecules, and the second one forms mixtures of molecules.

2.6.2 Characteristics

From the above motivation, systems inspired on chemical organic compounds will have the characteristics described below. Figure 2.11 shows a diagram explaining these characteristics, assuming a system inspired on chemical organic compounds.

2.6.2.1 Structural and Behavioral Properties

On one hand, structural properties refer to units or subsystems with specific prop-
erties. For instance, these units are molecules while specific properties are atoms.
On the other hand, behavioral properties refer to functions or processes inside units.
These processes are chemical interactions and chemical behaviors of compounds.

Fig. 2.11 Example of a system inspired on chemical organic compounds

2.6.2.2 Encapsulation

Structural units with processes can be encapsulated. Thus, information of subsystems can be easily clustered. In fact, inheritance and organization are derived from this characteristic.

2.6.2.3 Inheritance

Consider a given, known system that is inspired on chemical organic compounds. If another, unknown system has similar behavior to the given one, then structural units from the known system might be inherited by the unknown system. This characteristic comes from the encapsulation property.

2.6.2.4 Organization

Since structural units encapsulate information, these can be organized from basic to
complex units. Basic units are the ones that have elemental information, e.g. with
low energy, and complex units are made of combination of basic units, e.g. with
high energy. The energy level scheme can be used in order to organize information
into molecular units, compound units and mixture units. For example, mixtures can
contain several compounds, each one assimilating different characteristics of the
whole system inspired on chemical organic compounds.

2.6.2.5 Mixing Properties

Organization can derive in mixing properties. As described previously, molecular units can be seen as pieces of a puzzle; thus, molecules can chemically interact among them in order to create complex units such as compounds or mixtures.

2.6.2.6 Stability

Since it is the central observation of organic compounds, stability is one of the most important characteristics of systems inspired on chemical organic compounds. Structural units, encapsulation, inheritance, mixing properties and organization are possible due to energy minimization. Structures of these kinds of systems will be made of minimal units, and optimality will be sought. Thus, energy is an important property of systems inspired on chemical organic compounds. As explained before, chemical rules (i.e. the Aufbau principle, Pauli exclusion and Hund's rule) help molecular structures to be stable since atoms and molecules are arranged suitably such that the overall structure has the minimum energy.

2.6.2.7 Robustness

The notion of a finite, small set of atoms producing a huge amount of different organic compounds gives systems inspired on chemical organic compounds robustness. This characteristic can be seen as the power of a finite set of parameters to model a large set of different systems.

References

1. Bettelheim FA, Brown WH, Campbell MK, Farrell SO (2008) Introduction to general, organic
and biochemistry. Cengage Learning, Belmont
2. Bhushan B (2007) Springer handbook of nanotechnology. Springer, Berlin
3. Brown TL, Lemay HE, Bursten BE, Murphy CJ, Woodward PM (2009) Chemistry: the central
science. Pearson Education, Upper Saddle River
4. Brown WH, Foote CS, Iverson BL, Anslyn EV (2011) Organic chemistry. Cengage Learning,
Belmont
5. Carey FA, Sundberg RJ (2007) Advanced organic chemistry: part A: structure and mechanisms.
Springer, New York
6. Ganguly J (2009) Thermodynamics in earth and planetary sciences. Springer, Berlin
7. Greiner W, Neise L, Stocker H (1995) Thermodynamics and statistical mechanics. Springer,
Berlin
8. Klein DR (2011) Organic chemistry. Wiley, Hoboken
9. Lide DR (2008) CRC handbook of chemistry and physics. Taylor and Francis, Boca Raton
10. Quinkert G, Egert E, Griesinger C (1996) Aspects of organic chemistry: structure. Wiley, New
York
Chapter 3
Artificial Organic Networks

Chemical organic compounds are based on a finite, small set of elements that can create more than twenty million known compounds. These organic compounds are the most stable ones in nature, primarily due to chemical rules aiming at energy minimization. In fact, they have characteristics like: structural and behavioral properties, encapsulation, inheritance, organization, stability, robustness, and complexity. Because of these features, the artificial organic networks technique is inspired on chemical organic compounds.
For instance, consider any unknown system that has to be described, analyzed, or predicted. Modeling techniques would be selected to solve that problem. However, as explained later in this chapter, some of these techniques have drawbacks, especially in stability and structural modeling understanding. Then, artificial organic networks is proposed to enhance these characteristics in computational algorithms for modeling problems.
In this chapter, artificial organic networks are introduced and fully described, from their components and interactions to the formal definition. In order to better understand the technique, an overview and the metaphor behind it are discussed. Finally, some implementation issues and the outline of their solutions are described.

3.1 Overview of Artificial Organic Networks

Let Σ be an unknown given system as in Fig. 3.1. Also, suppose that X is a set of exciting signals to Σ such that each x ∈ X represents an input signal, and Y is a set of output signals y ∈ Y of Σ. Moreover, suppose that Σ needs to be described, analyzed, or predicted. Then, a model MΣ might be used to represent Σ (see Fig. 3.1).
In that way, it would be interesting for MΣ to represent Σ as faithfully as possible. Thus, MΣ cannot be a black or a gray box. In contrast, the model would try to understand what happens inside the unknown system, as shown in Fig. 3.2. However, in Chap. 1, it was explained that modeling the inner behavior of a system is difficult


Fig. 3.1 Example of the model of an unknown system

Fig. 3.2 The idea behind modeling with artificial organic networks
and in some cases intractable due to the initial guess of the system, possibility of
local minima, uncertainty in data, etc., deriving in an unstable algorithm.
To this end, suppose that there exists a model MΣ representing Σ that can encapsulate and organize information coming from the X and Y signals, and that can stay stable under suitable conditions. Moreover, suppose that MΣ is a model inspired on chemical organic compounds such that all characteristics of these compounds are also present in it. Then, intuitively, the model MΣ is considered an artificial organic network, as shown in Fig. 3.2.
Notice that for MΣ to be considered a model of Σ inspired on chemical organic compounds, its structure, behavior and rules must satisfy chemical conditions. In the next section, some key concepts from chemical organic compounds are matched to concepts related to artificial organic networks, so that in the following sections the modeling technique is easily understandable.

3.1.1 The Metaphor

In nature, the environment plays an important role when creating or modifying chemical organic compounds. It starts with a bunch of atoms subjected to conditions that allow favorable interactions among them. Chemically, the relationships among those atoms search for optimal solutions, like energy minimization. This optimality makes possible organized structures, seen as modules or units of information. Moreover, the whole assembly responds to the environment. At last, the obtained chemical structure not only satisfies environmental constraints and stability, but also develops a proper behavior.
In fact, similar chemical organic compounds have similar structures and behaviors. In other words, molecules can package information and pass it on to other

compounds, representing the same structure and behavior. For instance, consider Fig. 3.3. It shows the structure of a chemical organic compound that has similar structure to another chemical organic compound, having one equal package of information. In nature, functional groups are an example of that property.
Now, consider that there are artificial structures inspired on chemical organic
compounds satisfying some inspired chemical rules. Then, the same characteris-
tics of modularity, inheritance, organization and stability found in chemical organic
compounds are present in the artificial structures. At last, these artificial structures
will have a behavior, such that the model of artificial organic networks presented in
Fig. 3.2 can be realized. In that sense, Table 3.1 presents the elements of chemical
organic compounds used in artificial organic networks.

3.1.2 Objectives

Artificial organic networks (AONs for short) is a computational technique inspired on chemical organic compounds that models unknown engineering systems [1, 2]. In particular, this technique has two important characteristics: it is a stable algorithm under suitable conditions (see Sect. 3.2) and it represents, at least partially, the internal mechanisms of unknown engineering systems.
Fig. 3.3 The metaphor of chemical organic compounds

Table 3.1 Elements of chemical organic compounds and their meanings in artificial organic networks

Chemical organic compounds | Meaning in artificial organic networks
Atoms | Basic structural units; parameters of properties
Molecules | Basic units of information
Compounds | Complex units of information made of molecular units
Mixtures | Combination of compounds
Chemical bonds | Rules of interaction in atomic and molecular units
Stoichiometric coefficients | Definite ratios in mixtures; weights of compounds
Chemical balance interaction | Solution to definite ratios of mixtures
Enthalpy | Energy of molecules or compounds
Environment | Input or exciting signals
Chemical properties | Output signals; behaviors
Chemical organic compound | Structure and behavior of the obtained model
In addition, obtained models of artificial organic networks also hold the chemical characteristics listed in Chap. 2.
To reach the goal, the artificial organic network technique defines components
and their interactions, mathematically. It also provides the levels of energy expected
in components in order to follow chemical rules.

3.2 Artificial Organic Compounds

The artificial organic networks technique defines four components and two interac-
tions inspired on chemical organic compounds. In particular, the topology of compo-
nents in AONs is based on the Lewis dot structures; thus, graph theory is the simplest
way to model it (refer to Appendix A).

3.2.1 Components

An artificial organic network is a set of graphs. Each graph represents a molecule with atoms as vertices and chemical bonds as edges. In addition, these molecules interact via the so-called chemical balance interaction, forming a mixture of compounds. Thus, in the general case, an artificial organic network is a mixture of compounds. Four components of AONs can be identified: atomic units, molecular units, compounds, and mixtures.

3.2.1.1 Atomic Units

The simplest structural component of artificial organic networks is the atom. It parameterizes a molecular unit (see Sect. 3.2.1.2). In order to do that and to satisfy the ground-state principle (the electron configuration inside the atom with the lowest energy), the atom may be described with the number of valence electrons or the number of degrees of freedom with which the atom can connect to others. Moreover, in organic compounds, the octet rule supposes the ground-state principle; but in the general case, an α-rule can be considered, where α is the maximum number of valence electrons that each atom must have in the outer shell. The following definition formalizes the above description of atoms:
Definition 3.1 (atomic unit) Let A(α) be a set of elements ai and ei a positive integer. Moreover, let α be a positive integer related to A. An element ai is said to be an atom if it has a fixed value ei denoting the number of valence electrons, satisfying ∀i, ei ≤ α/2, where α stands for the full number of valence electrons. Then, A(α) is said to be a set of atomic units. Let also ai, aj ∈ A(α) be two atomic units and ei, ej be their valence electrons. If ei = ej then ai and aj are called similar atoms, denoted by ai = aj. If ei ≠ ej then ai and aj are called different atoms, denoted by ai ≠ aj.
It is remarkable to say that the number of valence electrons ei is the actual number of electrons that an atom ai has in its outer shell, and α stands for the total number of electrons shared in ai, so that the α-rule holds. Moreover, the maximum number of pairs of electrons shared in the outer shell of ai is equal to α/2.
Example 3.1 Represent a hydrogen atom H and a carbon atom C using the above
definition. Consider α = 8.
Solution 3.1 Let a1 = H and a2 = C be two atomic units in the set A(8). Since a hydrogen atom has one valence electron and a carbon atom has four valence electrons, then e1 = 1 and e2 = 4.
Example 3.2 Represent a set of atoms that can share at most 12 valence electrons. Can the atom a1 with e1 = 4 be part of that set? Can the atom a2 with e2 = 8 be part of it?
Solution 3.2 The set of atoms with these characteristics is represented as A(12). In fact, a1 ∈ A(12) because e1 < 12/2; but a2 ∉ A(12) because e2 > 12/2.
In addition, atomic units also have associated the number of degrees of freedom with which they can connect to others, as follows:
Definition 3.2 (degree of atomic units) Let ai ∈ A(α) be an atom with a number of valence electrons ei. Let di, fi be two positive integers with di, fi ≤ α. Then, di is the degree of the atomic unit ai, defined as the number of valence electrons shared with other atoms, and fi is the number of free valence electrons in atom ai, both of them holding: ei = di + fi.
Figure 3.4 shows an atomic unit. Notice that ai is the identifier and ei, di, fi are values representing the number of valence electrons, the degree of the atom and the free valence electrons, respectively. Actually, the latter values are completely defined by the notion of the generalized octet rule realized on the full number of valence electrons α. In that sense, an atomic unit is completely defined, as expected from chemical organic compounds theory.
Moreover, atomic units are also considered parameters of artificial organic networks, as described later in this chapter. In that sense, the following definition holds:

Fig. 3.4 Example of an atomic unit

Definition 3.3 (atomic value) Let ai be an atomic unit and vai a complex value. Then, vai is said to be the value of atom ai.
Throughout this book, the values of atoms vai ∈ C will be written as the identifiers of atoms ai indistinctly, except when a clear difference between them is required.
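Definitions 3.1–3.3 can be made concrete with a minimal Python sketch (an illustration added in this text, not part of the original formulation; all names are hypothetical): an atomic unit carries its valence electrons ei, its degree di, the implied free electrons fi = ei − di, and a complex atomic value, under the α-rule ei ≤ α/2.

```python
from dataclasses import dataclass

@dataclass
class AtomicUnit:
    """Atomic unit a_i of an artificial organic network (Defs. 3.1-3.3)."""
    e: int                # valence electrons e_i
    d: int = 0            # degree d_i: valence electrons shared with other atoms
    alpha: int = 8        # full number of valence electrons (octet rule default)
    value: complex = 0j   # atomic value v_ai (Def. 3.3)

    def __post_init__(self):
        # alpha-rule of Def. 3.1: every atom must hold e_i <= alpha / 2
        if self.e > self.alpha / 2:
            raise ValueError(f"e={self.e} violates the {self.alpha}-rule")

    @property
    def f(self) -> int:
        # free valence electrons f_i, from e_i = d_i + f_i (Def. 3.2)
        return self.e - self.d

# Example 3.1: hydrogen (e=1) and carbon (e=4) in A(8)
H, C = AtomicUnit(e=1), AtomicUnit(e=4)
# Example 3.2: e=4 fits A(12), but e=8 does not (8 > 12/2)
a1 = AtomicUnit(e=4, alpha=12)
try:
    a2 = AtomicUnit(e=8, alpha=12)
except ValueError as err:
    print(err)
```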

3.2.1.2 Molecular Units

In artificial organic networks, the basic unit with information is the molecule. It is defined with a structure and a behavior. In particular, the structure is associated with an arrangement of atoms and the behavior is a response to some input signals. Then, the following statements hold:
Definition 3.4 (structure of a molecular unit) Let M = (A(α), B) be a graph, where A(α) is a set of n ≥ 2 atoms (vertices) connected with a set of covalent bonds B (edges). Also, let di and ei be the degree and the number of valence electrons of atomic units ai ∈ A(α), respectively. Then, M is said to be a molecule if for all di, di ≤ ei. Moreover, if for each di, di = ei, then M is said to be a stable molecule. If for any di, di < ei, then M is said to be an unstable molecule. Those are the only states of a molecular unit M.
Definition 3.5 (behavior of a molecular unit) Let M be a molecule and X be the set of inputs that excite the molecule M. Also, let ϕ be a function such that ϕ : X → R. Then, ϕ is referred to as the behavior of the molecule M due to X if there exists an atom ac ∈ A(α) of M with degree dc such that the following holds:

1. ϕ = ϕ(X, dc).
2. ϕ converges in the input domain X.

Example 3.3 Let M be a molecule defined as M = ({a1, a2,1, a2,2, a2,3}, {b12, b12, b12}) with numbers of valence electrons {e1 = 3, e2,i = 1}, ∀i = 1, 2, 3. Draw M, assuming that a covalent bond bij means that atom ai is connected with atom aj.
Solution 3.3 Figure 3.5 shows molecule M. Notice that covalent bonds are undi-
rected edges in the graph.
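As a hedged illustration of Definition 3.4 (the representation of bonds as plain pairs is an assumption of this text), the molecule M of Example 3.3 can be encoded as an adjacency structure and its stability tested by comparing each atom's degree di with its valence electrons ei.

```python
# Molecule M of Example 3.3: valence electrons e_i and polar covalent bonds
valence = {"a1": 3, "a2,1": 1, "a2,2": 1, "a2,3": 1}
bonds = [("a1", "a2,1"), ("a1", "a2,2"), ("a1", "a2,3")]

def degrees(valence, bonds):
    """Degree d_i of each atom: number of valence electrons it shares."""
    d = dict.fromkeys(valence, 0)
    for p, q in bonds:
        d[p] += 1
        d[q] += 1
    return d

def is_stable(valence, bonds):
    """Def. 3.4: M is a molecule if d_i <= e_i for all atoms; it is stable
    if d_i = e_i for every atom, and unstable if any d_i < e_i."""
    d = degrees(valence, bonds)
    assert all(d[a] <= valence[a] for a in valence), "not a molecule"
    return all(d[a] == valence[a] for a in valence)

print(is_stable(valence, bonds))  # True: every degree equals e_i, M is stable
```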

Example 3.4 Consider the above molecule M in Example 3.3. Propose a molecular behavior ϕ(x, d1) for the input signal x ∈ X. Which one is the atom ac?
Solution 3.4 One possible molecular behavior may be written as (3.1) with atom
ac = a1 since the degree of atom in the molecular behavior depends on d1 .


ϕ(x, d1) = Σ_{i=1}^{d1} a2,i · x^i        (3.1)
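Equation (3.1) can be evaluated directly; the following minimal Python sketch (illustrative only, with arbitrary atomic values a2,i) shows how the same function turns linear for d1 = 1 and cubic for d1 = 3, as discussed below.

```python
def phi(x, a2):
    """Molecular behavior (3.1): phi(x, d1) = sum_{i=1}^{d1} a2[i-1] * x**i,
    with d1 implicitly len(a2): one coefficient per atom attached to a1."""
    return sum(a * x**i for i, a in enumerate(a2, start=1))

print(phi(0.5, [2.0]))             # d1 = 1: linear behavior, slope a_{2,1}
print(phi(0.5, [2.0, -1.0, 0.5]))  # d1 = 3: cubic behavior -> 0.8125
```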

Fig. 3.5 Graph of molecule M in Example 3.3
Notice that ϕ gains expressiveness as the degree of atom d1 increases.
It is remarkable to say that stable molecules are preferred over unstable molecules because a stable molecule means a minimum of energy due to the ground-state principle of atomic units. But consider that only unstable molecules might connect to others in order to increase the complexity of their behavior. Moreover, Definitions 3.4 and 3.5 specify that molecules encapsulate information via the ϕ-function, but they are constrained to interact with other molecules by the structure of the graphs (Definitions 3.1 and 3.2). Also, consider that the ϕ-function in Definition 3.5 is inspired to follow chemical rules. For example, property 1 assures that the function depends on the number of valence electrons shared with the atom ac, as shown in Fig. 3.5. As noticed, if the number of atoms associated to ac changes, the ϕ-function needs to have a different behavior. For example, in Fig. 3.5, the addition of a2,i atoms increases the meaning of (3.1). For instance, consider (3.1) with d1 = 1; then ϕ is a linear equation with slope equal to the value of a2,1, while if d1 = 3, ϕ is a cubic equation with coefficients a2,1, a2,2, a2,3. Thus, atomic units can also be considered as properties of molecules, as described in Sect. 3.2.1.1.
Moreover, property 2 of Definition 3.5 models the stability of chemical structures in a mathematical fashion. Here, the behavior of the molecule is inspired on reaching a steady state of the molecule; in mathematics, convergence is one way to do that. Again, consider the ϕ-function in (3.1). This is a power series with a radius of convergence R such that |x| < R. In this case, it is easy to see that the interval of convergence of the ϕ-function is |x| < 1.
On the other hand, two kinds of molecules are defined in artificial organic net-
works: functional groups and primitive molecules.
Definition 3.6 (functional group) Let M be a molecule and G be a subgraph of M,
denoted by G ⊆ M. Then, G is a functional group if G is an unstable molecule.
It is clear from Definition 3.6 that a functional group can be any subgraph of any molecule. However, consider the inverse process: given a functional group G, a set of molecules Ω = {M1, ..., Mk} can be spanned from G. Then, all molecules in Ω will have the same primary structure defined by G, as shown in Fig. 3.6. This notion of spanning will be useful when formalizing artificial organic networks.

Fig. 3.6 Example of a functional group

In addition, a functional group can act as a kernel of molecules (and later of compounds).
According to the ground-state principle and bonds in chemistry, molecules interact with each other in order to minimize the energy of the structure, reaching a steady state of energy. This assumption requires that molecules be unstable. Otherwise, molecules are in steady state and only chemical reactions can perform interactions among molecules. Thus, using the criterion of unstable molecules, primitive molecules are introduced in Definition 3.7.
Definition 3.7 (primitive molecule) Let G be a functional group and M = (A(α), B) be an unstable molecule spanned from G. Then, M is said to be a primitive molecule, and it satisfies the following properties:

1. M is unique.
2. For each ai ∈ A(α), there is at least one di such that di < ei.
3. B is a set of polar covalent bonds.
It is remarkable to say that primitive molecules can only be made if all of the relationships among atomic units are polar covalent bonds (see Sect. 3.2.2), because this means there are no relationships between similar atoms inside primitive molecules. Figure 3.7 shows all possible primitive molecules spanned from a given functional group G.
Finally, artificial organic networks makes a distinction between simple and complex molecules. For instance, consider a simple molecule to be any molecule only made of polar covalent bonds, e.g. functional groups, primitive molecules, or any other molecule with an atom ac and atoms different from it attached directly to it, like the molecule in Fig. 3.5. If molecules are made of polar and nonpolar covalent bonds (refer to Sect. 3.2.2), then these molecules are complex. In order to distinguish

Fig. 3.7 Example of primitive molecules spanned from G ~ CH: CH2 and CH3

both simple and complex molecules in artificial organic networks, the first ones are
simply named molecular units, while the second ones are named compounds.

3.2.1.3 Compounds

In artificial organic networks, a compound unit is a complex molecule made of two or more primitive molecules that interact together in order to increase the complexity of its behavior. Since compounds are molecules, they have a structure and a behavior, as described next:
Definition 3.8 (structure of a compound) Let Ω = {M1, ..., Mk} be a set of primitive molecules spanned from a functional group G. Also, let C = (Ω, BN) ⊇ G be a molecule consisting of the primitive molecules Ω linked with a set of nonpolar covalent bonds BN. If for the |BN|-pairs¹ of molecules (Mi, Mj) ∈ Ω² with i ≠ j, there exists a pair of similar atomic units (ai, aj) such that bn^{ij} ∈ BN, then C is a compound. Also, let Di be the set of degrees of atoms and Ei be the set of numbers of valence electrons in each Mi ∈ Ω. If ∪i Di = ∪i Ei, then C is said to be a stable compound. Otherwise, C is an unstable compound.
Definition 3.9 (behavior of a compound) Let C be a compound made of a set of molecules Ω = {M1, ..., Mk} with molecular behaviors ϕ1, ..., ϕk. Also, let X be the set of inputs that excite the compound C. Then, ψ is the behavior of the compound C due to X, such that ψ : ϕ1 × · · · × ϕk → R.

1 |(·)| stands for the cardinality of the set (·).



Fig. 3.8 Example of a compound: carbon atoms joined by nonpolar covalent bonds

As noticed, compounds are based on nonpolar covalent bonds, as shown in Fig. 3.8. Roughly speaking, a compound is necessarily made of at least two similar atoms joined together. Additionally, compounds map from the set of behaviors of primitive molecules to a real value, giving a more complex behavior than simple molecules. This is important when different information needs to be crossed over. In practice, compounds might be nonlinear relationships among molecular behaviors.
Finally, stable compounds are preferable to unstable ones. In order to achieve this, primitive molecules or atomic units have to be used.
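Since Definition 3.9 only fixes the signature ψ : ϕ1 × · · · × ϕk → R, the concrete map is a design choice. The following hedged Python sketch (arbitrary behaviors and nonlinearity, chosen only for illustration) shows a compound behavior crossing two molecular behaviors nonlinearly.

```python
import math

def phi1(x):
    return 5 * x**2 + 1          # molecular behavior of M1 (arbitrary)

def phi2(x):
    return 4 * x**2 - 6 * x + 2  # molecular behavior of M2 (arbitrary)

def psi(x):
    """Compound behavior (Def. 3.9): a nonlinear map over the molecular
    behaviors; a product through tanh is just one possible choice."""
    return math.tanh(phi1(x) * phi2(x))

print(psi(0.2))  # ~0.818: richer than any linear mixture of phi1, phi2
```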

3.2.1.4 Mixtures

In nature, when two or more molecules are mixed together, the resultant mixture contains more information. For instance, assume that any number of molecules can interact without sharing electrons, mixing them up in definite ratios. In such cases, artificial organic networks defines the chemical balance interaction, useful to find the optimal ratios of molecules that achieve the minimum loss of energy in the whole structure. In the following, mixtures of molecules and the chemical balance interaction of AONs are introduced.
Definition 3.10 (mixture of molecules) Let Γ = {M1, ..., Mk} be a set of molecules with a set of behaviors Φ = {ϕ1, ..., ϕk}. Then, the mixture of molecules S is a linear combination of the behaviors of molecules in Φ such that there exists a set of coefficients Λ = {α1, ..., αk} of real values, called the stoichiometric coefficients. Hence,

S(X) = Σ_{i=1}^{k} αi ϕi(X)        (3.2)

Moreover, Φ is the basis of the mixture of molecules, Γ is the structure of the mixture of molecules, and S(X) is the behavior of the mixture of molecules.
In general, the notion of mixtures can be expanded to a mixture of compounds, allowing the opportunity to represent segmented, understandable information and behaviors. This follows from the fact that compounds are defined as molecules. Let Γ = {C1, ..., Ck} be a set of compounds with behaviors Φ = {ψ1, ..., ψk}. Then,

Fig. 3.9 Example of mixtures

S is the mixture of compounds if Γ is its structure, Φ is its basis and S(X) is its
behavior such that (3.2) holds.
Figure 3.9 shows an example of a mixture of molecules and a mixture of com-
pounds.

3.2.2 Interactions

The above section introduces all components in artificial organic networks. In this
section, two interactions are defined: covalent bonds and the chemical balance. The
first interaction refers to the relational rules that allow atoms to join together, forming
molecules and compounds. The second interaction refers to the way that molecules
and compounds relate among them without sharing electrons.

3.2.2.1 Covalent Bonds

Atomic interaction is very important in artificial organic networks because it defines the way atoms link to each other. In chemical organic compounds, this interaction is known as chemical covalent bonding. Artificial organic networks defines covalent bonding as follows:
Definition 3.11 (covalent bond) Let ai, aj ∈ A(α) be two atomic units. Also, let B be a set of elements bk^{ij}. If bk^{ij} links atomic units ai and aj, then it is called a covalent bond. Hence, B is called the set of covalent bonds. In particular, if ai = aj then bk^{ij} is called a nonpolar covalent bond. Similarly, if ai ≠ aj then bk^{ij} is called a polar covalent bond.

Fig. 3.10 Structure of molecule M in Example 3.5 explaining covalent bonds

Example 3.5 Suppose that there exist two atoms a1, a2 and four atoms a3, a4, a5, a6 in A(8) with numbers of valence electrons e1,2 = 4, e3,4,5,6 = 1. Also, consider that there is a molecule M made of these atoms with covalent bonds B = {b1^{13}, b2^{14}, b3^{15}, b4^{12}, b5^{26}}. Determine the shape of M and classify the covalent bonds into polar and nonpolar bonds.
Solution 3.5 Using Definition 3.11, the structure of molecule M is absolutely determined, as shown in Fig. 3.10. From this figure, notice that a1 and a2 are two similar atoms linked with b4^{12}. Thus, from Definition 3.11, b4^{12} is a nonpolar covalent bond, and the set B\{b4^{12}} contains polar covalent bonds because the atoms joined by them are different. Remember that similarity of atoms is based on the number of valence electrons ei.
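Solution 3.5 can be reproduced mechanically; the sketch below (a hypothetical helper, not from the book) classifies each bond of Example 3.5 by comparing the valence electrons of the atoms it joins, per Definition 3.11.

```python
# Example 3.5: valence electrons of atoms a1..a6 and the bonds of M
e = {1: 4, 2: 4, 3: 1, 4: 1, 5: 1, 6: 1}
bonds = [(1, 3), (1, 4), (1, 5), (1, 2), (2, 6)]

def classify(i, j):
    """Def. 3.11: nonpolar if the atoms are similar (e_i = e_j), else polar."""
    return "nonpolar" if e[i] == e[j] else "polar"

for i, j in bonds:
    print(f"b({i},{j}): {classify(i, j)}")
# Only the bond (1, 2) is nonpolar: a1 and a2 are similar atoms (e = 4 each)
```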
As discussed in Sect. 3.2.1.2, the behavior of molecules is built using the combination of atoms according to a central atom ac. In that case, polar covalent bonds are used. However, compounds use nonpolar covalent bonds to define their behaviors in terms of mapping simple molecular behaviors. Focusing on that mapping, nonpolar covalent bonds play an important role. Thus, artificial organic networks makes a specification of them. In fact, nonpolar covalent bonds have some properties that promote a kind of behavior on them, as stated next:
Definition 3.12 (properties of nonpolar covalent bonds) Let bk^{ij} ∈ B be a nonpolar covalent bond of a pair of atoms ai, aj ∈ A(α). Also, let Δ = (δ1, ..., δn) be an n-tuple of interesting properties of bk^{ij}. Then, bk^{ij} is said to be a nonpolar covalent bond characterized by Δ. Also, let π be a mapping such that π : Δ → R. Then, π is referred to as the behavior of the nonpolar covalent bond bk^{ij}.
In fact, Definition 3.12 does not determine any particular property Δ; but it would
be interesting to get inspiration from some properties of chemical covalent bonds as
reported in Chap. 2. For instance, the length, the order and the minimum energy of
bonds might be proposed, as explained later in Chap. 4.

3.2.2.2 Chemical Balance Interaction

On the other hand, when molecules and compounds are completely stable, they cannot interact with each other unless a chemical reaction is carried out. In that sense, a chemical balance interaction can be used instead. In artificial organic networks, the chemical balance interaction is defined as follows:

Definition 3.13 (chemical balance interaction) Let X be the set of inputs that excites the set of molecules Γ = {M1, ..., Mk} with behaviors Φ = {ϕ1, ..., ϕk}. Let S(X) be the response of the mixture of molecules S, such that

S(X) = Σ_{i=1}^{k} αi ϕi(X)        (3.3)

The chemical balance interaction is the solution to the problem of finding the set of stoichiometric coefficients Λ = {α1, ..., αk} of the mixture of molecules S.
Example 3.6 Let M1 and M2 be two molecules in the set of molecules Γ of a mixture S with molecular behaviors ϕ1 and ϕ2, respectively. Also, let Λ = {1.0, −0.5} be the set of stoichiometric coefficients of S. Obtain the response of S with respect to an input signal x, if ϕ1 and ϕ2 are defined as (3.4) and (3.5). In addition, determine the response of S in the input domain x ∈ [−1, 1].

ϕ1(x) = 5x² + 1        (3.4)

ϕ2(x) = 4x² − 6x + 2        (3.5)

Solution 3.6 Using (3.3), the response of S can be expressed as (3.6):

S(x) = α1 ϕ1 + α2 ϕ2        (3.6)

Substituting (3.4) and (3.5) with Λ in (3.6), the response of S is finally written as (3.7).

S(x) = 3x² + 3x        (3.7)

The response S(x) for all x ∈ [−1, 1] is shown in Fig. 3.11.

Fig. 3.11 Response of the mixture of molecules S of Example 3.6
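The closed form (3.7) can be checked numerically. The following sketch (illustrative; function names are hypothetical) evaluates the linear combination (3.3) with the stoichiometric coefficients of Example 3.6 and asserts that it matches 3x² + 3x.

```python
def phi1(x):
    return 5 * x**2 + 1          # (3.4)

def phi2(x):
    return 4 * x**2 - 6 * x + 2  # (3.5)

alphas = [1.0, -0.5]             # stoichiometric coefficients Lambda

def S(x):
    """Mixture behavior (3.3): linear combination of molecular behaviors."""
    return alphas[0] * phi1(x) + alphas[1] * phi2(x)

# Agrees with the closed form S(x) = 3x^2 + 3x of (3.7)
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    assert abs(S(x) - (3 * x**2 + 3 * x)) < 1e-12
print(S(0.5))  # 2.25
```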



3.3 Networks of Artificial Organic Compounds

As described above, the artificial organic networks technique inherits characteristics of chemical organic compounds. In this section, these characteristics are discussed in terms of the structure, behavior and mixtures of artificial organic compounds.

3.3.1 The Structure

The structure of artificial organic networks is organized from basic units to complex units. As discussed later in Sect. 3.4.1, the structure of AONs is obtained from a chemically inspired strategy that assures self-organizing artificial organic compounds. If necessary, this organization in structures might be found using optimization algorithms via energy minimization.²
Then, the idea behind organizing structures in AONs refers to encapsulation and inheritance of information. For instance, molecular topologies can be inherited by other artificial organic compounds, obtaining similar structure and behavior. In that sense, modular units (or sub-models) can also be obtained when AONs are used for modeling systems.
3.3.2 The Behavior

In terms of the behavior of artificial organic networks, molecules encapsulate nonlinear relationships among input signals, but they are also parametrized by atomic values. For instance, consider any molecule with instanced atomic values, such that they can be used as metadata; then, artificial organic networks are self-organizing topologies with classified data for posterior analysis of the model, giving AONs a partial understanding of the modeled system.
On the other hand, artificial organic compounds can also assimilate nonlinear relationships among molecules. This characteristic offers encapsulation of complex behaviors due to input signals of the given system.

3.3.3 Mixtures of Compounds

At last, mixtures can be used to linearly combine complex behaviors in definite ratios, capturing correlations among properties assimilated by molecules and compounds. It is remarkable to say that mixtures can also be used for combining compounds from different systems in order to get hybrid models. For example, image processing systems can use artificial compounds as different filters that act, via mixtures, over other compounds modeling images, getting processed images.

2 In Chap. 4, this statement is proved and a search algorithm based on optimization processes is presented.

3.4 The Technique of Artificial Organic Networks

Now that components and interactions have been defined, the artificial organic
networks technique can be described from three different but complementary per-
spectives. Firstly, a chemically inspired rule of levels of energy in artificial organic
compounds is introduced in order to determine how atoms, molecules, compounds
and mixtures will be organized. Then, the mathematical definition of artificial organic
networks is presented. Finally, all aspects described earlier are summarized into the
model of artificial organic networks.

3.4.1 Levels of Energy in Components

The artificial organic networks technique considers energy minimization as its central observation. For instance, take a look at the objective of AONs: modeling systems. In that way, information like input and output signals is required. From signal theory, information in systems might be seen as energy, i.e. energy signals. Thus, the structure of artificial organic networks 'grows' from this information; but chemically, it has to be minimized. Components and interactions are prepared to do that; however, they need a rule for this process. It is summarized in the three-level energy rule as follows.
The rule identifies three levels of energy around the structure of artificial organic networks in which the energy is supposed to be minimized. The information of the system allows organizing the structure of artificial organic networks. First, atoms interact among them to make molecules. Then, molecules interact to make compounds. And finally, compounds interact, making mixtures. In a nutshell, the three-level energy rule is presented next:
1st level: (ground-state electron configuration) Information tends to be pack-
aged in stable molecules using atomic units, i.e. as parameters, with
free valence electrons. Polar covalent bonds are required at this level of
energy minimization.
2nd level: (composite units) Information not covered in stable molecules tends to
be packaged using a set of molecules, i.e. primitive molecules, to create
compounds. Actually, nonpolar covalent bonds are required at this level
of energy minimization.

3rd level: (mixtures) Complex information tends to be packaged as mixtures of molecules and compounds. At this level, the chemical balance interaction regulates energy in structures.

Fig. 3.12 The three-level energy rule of artificial organic networks
Figure 3.12 shows this energy rule of artificial organic networks. Notice that this rule allows components of AONs to interact among them, as chemical organic compounds do in nature. Furthermore, this rule implicitly organizes the structure of the artificial organic network because simple information is stored in primitive molecules, and complex information can either be modeled as compounds or mixtures.
Additionally, a mathematical reference and the components of AONs related to each level of energy are also depicted in Fig. 3.12. It is remarkable to say that in this rule, lower levels guarantee energy minimization more than upper levels. In practice, this energy rule has to be implemented in an algorithm, say f.

3.4.2 Formal Definition of Artificial Organic Networks

Mathematically, the notion of artificial organic networks is formalized in the following definition:
Definition 3.14 (artificial organic networks) Let Γf be the structure of a mixture S built by an algorithm f, with basis Φ and a set of stoichiometric coefficients Λ = {α1, ..., αm}, and let S(X) be the behavior of the mixture S due to a set of inputs X. Then, an artificial organic network AON = ⟨Γf, Φ, Λ, X⟩ is a quadruple consisting of the structure of a mixture S, a basis, a set of stoichiometric coefficients and a set of inputs, that holds the following properties:

1. There exists a finite set A(α) of atomic units spanning Γf.
2. There is a unique functional group G = (A(α), BP) with polar covalent bonds BP.
3. Each molecule Mk^{(i)} ∈ Ω^{(i)} in Ci ∈ Γf is spanned from G, and each molecular behavior ϕk^{(i)} represents a single unit of information.
4. Each compound Ci ∈ Γf is spanned from G, and each compound behavior ψi is a function of the form ψi : ϕ1^{(i)} × · · · × ϕk^{(i)} × · · · × ϕr^{(i)} → R, where r is the cardinality of Ω^{(i)}, representing a composite unit of information.
5. f is an algorithm based on the three-level energy rule.
6. S(X) is the behavior of the AON due to X.

The above statement represents the formal definition of artificial organic networks. In fact, all properties assume that molecules and compounds are made of the same primary structure G and can be used for representing information in compact form (molecules) or in composite form (compounds). According to the three-level energy rule, Γf is defined with molecules as the first compact units with information. Compounds are the next level of packing information, and finally, the mixture of compounds is the last level of packing information. This notion of packaging or encapsulation of information reveals some understanding of how artificial organic networks might be used for partial interpretation of modeled systems. Figure 3.13 shows a simple artificial organic network.

3.4.3 Model of Artificial Organic Networks

In order to use artificial organic networks as a model of any given system, two steps are defined: the training process and the inference process. In the first case, the training process refers to building the AON-structure based on atoms, molecules, compounds, mixtures and interactions, and then fixing all parameter values in the structure, capturing all relevant information from the given system, as described in Sect. 3.5.

Fig. 3.13 A simple artificial organic network



Fig. 3.14 The training and inference processes of the artificial organic networks technique

The second process refers to actually using the AON-structure, obtained in the first step, as an inference system. In that sense, the AON-structure can return an output value based on an input value of the system. Figure 3.14 shows the two steps of the artificial organic network model. Notice that once the AON-structure is found by the training process, it can be used separately as a model of the given system.
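As a hedged sketch of this two-step model (hypothetical class and method names; the concrete training procedure is the subject of the following chapters), the interface can be reduced to a train call that fixes the structure and an infer call that queries it:

```python
class AONModel:
    """Two-phase interface of Fig. 3.14: train once, then infer.
    Illustrative skeleton only; not the book's implementation."""

    def __init__(self):
        self.structure = None  # stands in for the trained AON-structure

    def train(self, X, Y):
        # Build the AON-structure and fix all parameters from (X, Y);
        # a trivial placeholder here: memorize the training pairs.
        self.structure = dict(zip(X, Y))
        return self

    def infer(self, x):
        # Query the fixed structure with an input x.
        if self.structure is None:
            raise RuntimeError("model must be trained first")
        return self.structure.get(x)

model = AONModel().train([0.0, 0.5, 1.0], [0.0, 2.25, 6.0])
print(model.infer(0.5))  # 2.25
```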

3.5 Implementation Issues

Formally, the artificial organic networks technique faces two open problems when it has to be used as an inference system or as a modeling system. These problems are introduced in the following.

3.5.1 The Search Topological Parameters Problem

Suppose that there is any given system Σ = (X, Y) with a set of input signals X and a set of output signals Y. Also, suppose that there exists an artificial organic network AON with structure Γf and basis Φ that has to model the system Σ. However, all parameters in Γf have to be found in order to use the AON as a model of Σ. In other words, a topology of molecules and mixtures is already fixed, but atomic parameters and stoichiometric coefficients are not set. Then, a process to find these values is necessary. Equivalently, this problem can be stated as Definition 3.15.
Definition 3.15 (search topological parameters problem) Let Σ be any given system with input signals X and output signals Y, and let AON = ⟨Γf, Φ, Λ, X⟩ be an artificial organic network as a model of Σ. Also, let ϕk^{(i)} be the behavior of molecule Mk^{(i)} ∈ Ω^{(i)}, for all compounds Ci ∈ Γf, as a parametric function ϕk^{(i)} = ϕk^{(i)}(θ1k^{(i)}, ..., θlk^{(i)}) with atomic parameters θ1k^{(i)}, ..., θlk^{(i)}. Also, let S(X) be the behavior of the AON due to X, such that

Fig. 3.15 The search topological parameters problem (STPP) refers to finding all parameters in the AON-structure


S(X) = Σ_{i=1}^{m} αi ϕk^{(i)}(θ1k^{(i)}, ..., θlk^{(i)}, X)        (3.8)

The search topological parameters problem (or STPP for short) refers to finding the set of stoichiometric coefficients Λ = {α1, ..., αm} and the collection of atomic parameters θ1k^{(i)}, ..., θlk^{(i)} for all i = 1, ..., m, such that S(X) is equivalent to Y.
As noticed, the search topological parameters problem indicates that there has to be a procedure to find all parameter values in a prior structure of artificial organic networks, as shown in Fig. 3.15. In this case, Definition 3.15 does not define how to obtain Γf.
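Although the book develops its own chemically inspired solution to the STPP, one common way to illustrate the Λ-part of the problem is ordinary least squares: with the molecular behaviors fixed (atomic parameters assumed known), the stoichiometric coefficients that make S(X) approximate Y solve a linear system. A minimal NumPy sketch, reusing the behaviors of Example 3.6 and offered only as an assumption of this text:

```python
import numpy as np

# Molecular behaviors with atomic parameters assumed already fixed
def phi1(x):
    return 5 * x**2 + 1

def phi2(x):
    return 4 * x**2 - 6 * x + 2

X = np.linspace(-1.0, 1.0, 21)
Y = 3 * X**2 + 3 * X  # observed output of the system Sigma

# Design matrix: one column per molecular behavior evaluated on X
A = np.column_stack([phi1(X), phi2(X)])

# Least-squares fit of Lambda so that A @ Lambda approximates Y
Lambda, *_ = np.linalg.lstsq(A, Y, rcond=None)
print(Lambda)  # approximately [1.0, -0.5]
```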

3.5.2 The Build Topological Structure Problem

Suppose that there is the same system Σ and an artificial organic network AON that has to model it. But now, the AON does not have any structure Γf. Then, there exists the problem of finding the structure. Mathematically, it can be stated as follows:
Definition 3.16 (build topological structure problem) Let Σ be any given system with input signals X and output signals Y, and let AON = ⟨Γf, Φ, Λ, X⟩ be an artificial organic network with behavior S(X) due to X as a model of Σ. Then, the build topological structure problem (or BTSP for short) refers to finding any Γf ≠ ∅, such that S(X) is equivalent to Y and the AON holds for the input set X.
The build topological structure problem only refers to finding the structure of the artificial organic network, as shown in Fig. 3.16. In practice, this problem has to be merged with the search topological parameters problem in order to find the whole artificial organic network AON that models Σ.

Fig. 3.16 The build topological structure problem (BTSP) refers to finding the structure of the AON

3.5.3 Artificial Organic Networks Based Algorithms

In summary, the artificial organic networks technique is inspired on chemical organic compounds. Actually, the technique defines components, interactions, energy minimization rules and the open problems in order to design practical algorithms for modeling systems. Three kinds of algorithms based on artificial organic networks may be identified:
Chemically inspired algorithms They find inspiration in organic compounds to
define their functional groups and molecular structures in order to obtain certain
interesting characteristics of the chemical organic compounds.
Artificial basis algorithms They define specific functional groups and sets of atoms independently of chemical organic compounds.
Hybrid algorithms They mix artificial organic network structures coming from
both chemically inspired and artificial basis algorithms.
In particular, Chap. 4 presents the first artificial organic network based algorithm
named Artificial Hydrocarbon Networks that falls into the chemically inspired cate-
gory in order to exploit some characteristics of chemical hydrocarbons.

References

1. Ponce H, Ponce P (2011) Artificial organic networks. In: Proceedings of IEEE conference on
electronics, robotics, and automotive mechanics, Cuernavaca, Mexico, pp 29–34
2. Ponce H, Ponce P (2012) Artificial hydrocarbon networks: a new algorithm bio-inspired on
organic chemistry. Int J Art Intell Comput Res 4(1):39–51
Chapter 4
Artificial Hydrocarbon Networks

Hydrocarbons, chemical organic compounds based on hydrogen and carbon atoms, are the most stable compounds in nature. Actually, the effectiveness of stability in hydrocarbons comes from the electronegativity property between carbon atoms, allowing strong covalent bonds. For instance, consider an artificial organic network that uses only two atoms with high stability in its structure, like hydrocarbons; then, any given system may be modeled by using hydrocarbon-inspired networks. In that sense, this chapter introduces artificial hydrocarbon networks.
The first section introduces an overview of the artificial hydrocarbon networks algorithm, from its inspiration to its objectives. Then, this chapter presents the mathematical definition of the components and interactions occupied in the approach, as well as the formal definition of artificial hydrocarbon networks. In addition, it introduces and formulates the basic algorithm of artificial hydrocarbon networks. In following chapters, improvements to this algorithm are also presented. After that, two important theorems related to the metrics of the basic algorithm are introduced and proved. Finally, this chapter discusses the implementability of the algorithm.

4.1 Introduction to Artificial Hydrocarbon Networks

Hydrocarbons are simple and stable chemical structures that are very attractive for performing and implementing artificial organic networks in an algorithm named artificial hydrocarbon networks. In order to understand the approach, this section presents an overview of chemical hydrocarbon compounds, used as inspiration, and the scope of the algorithm.

4.1.1 Chemical Inspiration

In nature, hydrocarbons are the simplest and the most stable chemical organic com-
pounds. These two properties may hydrocarbons be used as artificial organic net-
works because they are made of hydrogen and carbon atoms, delimiting the definition

Fig. 4.1 Examples of different hydrocarbons due to the carbon property

Table 4.1 Bond enthalpies present in hydrocarbons in kJ/mol

Chemical bond | Enthalpy (kJ/mol)
C−C | 350
C−H | 415
C=C | 611
C≡C | 837

of atomic units in AONs [1]. Also, stability in hydrocarbons is highly important to the artificial organic networks technique.
To date, more than 20 million different chemical hydrocarbon compounds have been identified and catalogued. This versatility is due to the property of carbon atoms to form chains and rings, as shown in Fig. 4.1. That is one of the reasons hydrocarbons inspire the algorithm introduced in this chapter: combinations of two different atoms have the potential to form an infinite number of different chemical molecules. For the approach, this potentiality translates into having many compounds with different behaviors formed by two atomic units.
Moreover, the property of carbon chains is chemically explained via the bond energy of carbon elements [2, 3]. For instance, consider only the chemical bonds present in hydrocarbons. Actually, the energies of these particular bonds are high in comparison to the energies of chemical bonds between other elements, giving the stability property to hydrocarbons. In other words, bond enthalpies represent the minimum energy that has to be applied to break them. Table 4.1 summarizes the enthalpies of the chemical bonds present in hydrocarbons. Thus, stability is another reason to use hydrocarbons as inspiration for the algorithm introduced.

4.1.2 Objectives and Scope

Artificial hydrocarbon networks (AHNs for short) is a computational algorithm inspired on chemical hydrocarbon compounds and based on artificial organic networks (discussed in Chap. 3). The objective of this mathematical model is to constrain an artificial organic network to the conditions of stability and simplicity shown by hydrocarbons, in order to allow it to model unknown engineering systems and to partially understand unknown information inside systems, e.g. as gray boxes. Moreover, the artificial hydrocarbon networks algorithm solves the search topological parameters (STPP) and the build topological structure (BTSP) problems of artificial organic networks discussed in Sect. 3.5.
In terms of machine learning, artificial hydrocarbon networks is a supervised learning algorithm because it needs the inputs and the outputs of the unknown system to be modeled.
On the other hand, the scope of artificial hydrocarbon networks is limited to modeling causal and continuous physical systems. Examples of engineering systems that can be modeled and inferred by AHNs are audio signals, environmental variables, voltage and current signals, and so forth.

4.2 Basics of Artificial Hydrocarbon Networks

The artificial hydrocarbon networks algorithm is based on the framework of artificial organic networks. In that way, the following section introduces all components and interactions instantiated from the AONs technique, arising in mathematical and algorithmic formulations to derive the basic AHN-algorithm. In the first part, atomic units, molecular units, compounds and mixtures are introduced. Then, covalent bonding and chemical balance interactions are presented. After that, the formal definition of AHNs is described and the chemical rule of artificial compound formation is discussed. At last, the basic algorithm of artificial hydrocarbon networks is introduced.
It is remarkable to say that artificial hydrocarbon networks is inspired on the way chemical hydrocarbon compounds are formed. In that sense, some chemical elements are adopted, like atoms, their valence electrons, and the chemical rule of nonpolar covalent bonding formation in hydrocarbons.

4.2.1 Components

The artificial hydrocarbon network algorithm defines four components to produce a set of artificial compounds: atomic units, molecular units, compounds and mixtures. Recall that these definitions were presented and discussed when the artificial organic networks technique was introduced. Refer to Chap. 3, if necessary.

4.2.1.1 Atomic Units

The AHN-algorithm defines two different atomic units. Inspired on chemical hydro-
carbon compounds, these two atomic units are the hydrogen atom and the carbon

atom. In addition, the valence electrons of both atomic units are chemically based, as in the following definition:
Definition 4.1 (atomic units) Let H and C be two different atomic units with valence
electrons eH and eC , respectively, in the set of atomic units A(8) . Then, H is called
hydrogen atom with eH = 1 and C is called carbon atom with eC = 4. Also, A(8) is
called the set of atom units of artificial hydrocarbon networks.
Notice that the set of atom units of artificial hydrocarbon networks complies with the octet rule by defining α = 8. This is important because this condition constrains the structural forming of artificial hydrocarbon compounds.
Definition 4.2 (atomic values) Let vH and vC be two complex numbers vH, vC ∈ C. Then, vH and vC are atomic values of hydrogen and carbon atoms, respectively.

4.2.1.2 Molecular Units

The interaction among hydrogen and carbon atoms is important to build artificial hydrocarbon compounds. Definition 4.3 introduces the notion of a particular interaction of atoms in AHNs denominated CH-molecules.
Definition 4.3 (CH-molecules) Let Hi be a hydrogen atom with atomic value vHi, and C be a carbon atom with atomic value vC. Also, let M = (A(8), BP) be a molecular unit formed with a set of atom units ak ∈ A(8) and a set of polar covalent bonds bpq ∈ BP, for all ap ≠ aq. Then, M is said to be a CH-molecule if Hi, C ∈ A(8) and bCHi ∈ BP, and there exists a molecular behavior ϕ around some atom ac ∈ A(8) due to some input X, such that ϕ = ϕ(X, dc, vHi). Moreover, ac = C, and then ϕ = vC.
In other words, CH-molecules are built with one carbon atom and up to four
hydrogen atoms. Actually, any CH-molecule has the carbon atom as the central
one of M with structure M = ({C, H1 , . . . , Hd }, {bCH1 , . . . , bCHd }), where, d is the
degree of freedom of the carbon atom, such that, 0 < d ≤ eC . Figure 4.2 shows

Fig. 4.2 Structure of a CH-molecule

a CH-molecule. Throughout this book, CH-molecules with a defined degree of freedom d will be referred to as CHd.
In fact, the AHN-algorithm proposes the carbon atom as the central one because
chemical hydrocarbon compounds change their behavior by adding or subtracting
carbon atoms. Thus, carbon atoms have more complex functionality than hydrogen
atoms.
It is remarkable to say that CH-molecules have at least a carbon and a hydrogen
atom because of the definition of a molecular unit.
Once the structure of a CH-molecule is defined, the behavior ϕ of that molecule
has to be introduced. In the following propositions of CH-molecular behaviors, the
set of input signals is restricted to a single input x.
Proposition 4.1 (first model of CH-molecules) Let M be a CH-molecule with ac = C. Also, let ϕ be the behavior of molecule M due to an input signal x with |x| < 1. Then, the behavior ϕ holds:

    vHi = hi,  hi ∈ C    (4.1)

    ϕ(x) = Σ_{i=1}^{d≤eC} hi · x^i    (4.2)

where d is the degree of freedom of the C atom, hi are the valued constants of the Hi atoms, and eC is the number of valence electrons of C.
The above Proposition 4.1 constrains the behavior of molecules to a polynomial form. In fact, the functions of molecules are expressed in a truncated power series fashion in order to mimic an outer shell of atoms as in nature. Notice that the radius of convergence in the open interval |x| < 1 guarantees the mathematical stability of CH-molecules.
Remark 4.1 Consider the expanded form of ϕ with d = 4, defined as ϕ(x) = h1·x + h2·x^2 + h3·x^3 + h4·x^4. Be careful with the order of the polynomial terms because the values of hydrogen atoms are not interchangeable.
Example 4.1 Calculate the behavior of the CH3 molecule due to x = 0.4, if the values of its hydrogen atoms are: h1 = 2, h2 = 6, h3 = −5. What is the value of the carbon atom at x = 0.4?
Solution 4.1 Using (4.2),

    ϕ(x) = (2)(0.4) + (6)(0.4)^2 + (−5)(0.4)^3 = 0.80 + 0.96 − 0.32 = 1.44

Then, the value of the behavior of CH3 is 1.44. Actually, the value of C at x = 0.4 is vC = 1.44 because vC = ϕ(x).

Example 4.2 Calculate the behavior of the CH3 molecule due to x = 0.4, if the values of its hydrogen atoms are: h1 = 2, h2 = −5, h3 = 6. Is this molecule the same as the molecule depicted in Example 4.1?
Solution 4.2 Using (4.2),

    ϕ(x) = (2)(0.4) + (−5)(0.4)^2 + (6)(0.4)^3 = 0.800 − 0.800 + 0.384 = 0.384

Then, the value of the behavior of CH3 is 0.384. In fact, this molecule is not the same as the CH3 in Example 4.1 because the order of the hydrogen atoms is not the same. It can be demonstrated by comparing the results from the two molecules: 1.44 ≠ 0.384 at x = 0.4.
In order to compute behaviors of CH-molecules, and following Remark 4.1, a convention for linking hydrogen atoms to the carbon atom is proposed. This is known as the counterclockwise convention: hydrogen atoms are linked to the carbon atom starting from the top of the molecule and proceeding in a counterclockwise direction. Figure 4.3 shows this convention.
Remark 4.1 also shows that Proposition 4.1 has a disadvantage in the ordering of hydrogen atoms linked to the carbon atom in CH-molecules. In addition, in this particular arrangement hydrogen atoms do not offer any insight into the meaning of these artificial hydrocarbon molecules. Thus, the model in Proposition 4.2 improves CH-molecules.
Proposition 4.2 (second model of CH-molecules) Let M be a CH-molecule with ac = C. Also, let ϕ be the behavior of molecule M due to an input signal x with |x| < 1. Then, the behavior ϕ holds:

    vHi = hi,  hi ∈ C    (4.3)

    ϕ(x) = Π_{i=1}^{d≤eC} (x − hi)    (4.4)

where d is the degree of freedom of the C atom, hi are the valued constants of the Hi atoms, and eC is the number of valence electrons of C.

Fig. 4.3 Counterclockwise convention of hydrogen atoms in CH-molecules

Notice that the values of the hydrogen atoms associated to the carbon atom represent the roots of the CH-molecule behavior, increasing the meaning of these values. Additionally, the order of hydrogen atoms is unimportant due to the commutative law of multiplication (in contrast to the geometric interpretation of molecules in organic chemistry). Thus, the artificial hydrocarbon networks algorithm uses the factored form of Proposition 4.2. This enhancement allows a more flexible structure of artificial hydrocarbon molecules in which hydrogen atoms can be associated freely without any order. Thus, there is no importance or priority in selecting any of these values, in contrast to the previous model (Proposition 4.1). Other remarkable characteristics are that ϕ is also normalized and hydrogen values remain unchanged.
Example 4.3 Calculate the behavior of the CH3 molecule due to x = 0.4, if the values of its hydrogen atoms are: h1 = 0.2, h2 = 0.6, h3 = −0.5. Use Proposition 4.2.
Solution 4.3 Using (4.4),

    ϕ(x) = (0.4 − 0.2)(0.4 − 0.6)(0.4 + 0.5) = (0.2)(−0.2)(0.9) = −0.036

Then, the value of the behavior of CH3 is −0.036.

Example 4.4 Calculate the behavior of the CH3 molecule due to x = 0.4, if the values of its hydrogen atoms are: h1 = 0.2, h2 = −0.5, h3 = 0.6. Is this molecule the same as the molecule depicted in Example 4.3?
Solution 4.4 Using (4.4),

    ϕ(x) = (0.4 − 0.2)(0.4 + 0.5)(0.4 − 0.6) = (0.2)(0.9)(−0.2) = −0.036

Then, the value of the behavior of CH3 is −0.036. This molecule is the same as the molecule depicted in Example 4.3 because −0.036 = −0.036.
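For concreteness, the two molecular-behavior models can be compared numerically. The following minimal Python sketch (illustrative code of ours, not part of the original formulation) reproduces the values of Examples 4.1–4.4:

def phi_power(h, x):
    # First model (4.2): truncated power series; the order of h matters.
    return sum(hi * x**i for i, hi in enumerate(h, start=1))

def phi_factored(h, x):
    # Second model (4.4): product over roots; the order of h is irrelevant.
    result = 1.0
    for hi in h:
        result *= (x - hi)
    return result

print(phi_power([2, 6, -5], 0.4))           # 1.44   (Example 4.1)
print(phi_power([2, -5, 6], 0.4))           # 0.384  (Example 4.2: order changes the value)
print(phi_factored([0.2, 0.6, -0.5], 0.4))  # -0.036 (Example 4.3)
print(phi_factored([0.2, -0.5, 0.6], 0.4))  # -0.036 (Example 4.4: order does not matter)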
Furthermore, the artificial hydrocarbon networks algorithm spans CH-molecules to form artificial hydrocarbon compounds. In that way, the functional group of artificial hydrocarbon networks is defined as follows.
Definition 4.4 (functional group) Let H and C be a hydrogen atom and a carbon atom, respectively, joined together by a polar covalent bond bCH. Also, let G be a molecule. Then, G is said to be a functional group of CH-molecules if G = ({C, H}, {bCH}).
Figure 4.4 shows the functional group of CH-molecules. In fact, because G is equal to the molecule CH, G is also known as the CH functional group.
Definitions 4.3 and 4.4 can be used to derive CH-primitive molecules, unstable
artificial hydrocarbon molecules that will be used to form complex molecules, so-
called artificial hydrocarbon compounds. Thus, CH-primitive molecules are intro-
duced in the following Lemma 4.1.

Fig. 4.4 Functional group of CH-molecules

Lemma 4.1 (CH-primitive molecules) Let G be a CH functional group, and let CH, CH2, CH3 be three different CH-molecules spanned from G with behaviors ϕCH, ϕCH2, ϕCH3 due to the set of inputs X. Then, there are at most three CH-primitive molecules. No molecules other than CH, CH2, CH3 are CH-primitive molecules.
Proof Let M be a primitive molecule as in Definition 3.7. By induction, using Definition 4.3, M can be CH, CH2, CH3 or CH4; no other CH-molecules can be depicted because the degree of freedom d is delimited to the interval 0 < d ≤ eC = 4. But CH4 is a stable molecule, thus CH4 is not a primitive molecule. Then, the only CH-primitive molecules are CH, CH2, CH3. □

4.2.1.3 Compounds

The artificial hydrocarbon networks algorithm builds complex molecules called artificial hydrocarbon compounds. In fact, those are made of CH-primitive molecules, as follows:
Definition 4.5 (hydrocarbon compounds) Let C = (Ω, BN) be a compound consisting of primitive molecules Ω linked with a set of nonpolar covalent bonds BN. Then, C is said to be a hydrocarbon compound if Ω = {M1, . . . , Mk} is a set of CH-primitive molecules, spanned from the CH functional group, with molecular behaviors ϕ1, . . . , ϕk. Moreover, ψ is the behavior of C due to any set of input signals X, such that ψ : ϕ1 × · · · × ϕk → R.
The behavior ψ of artificial hydrocarbon compounds is a mapping from a composite function of CH-molecular behaviors to a real value. In Sect. 4.2.2, different mappings are proposed.

4.2.1.4 Mixtures

At last, the artificial hydrocarbon networks algorithm uses the same definition of mixtures as described in artificial organic networks (see Definition 3.10). In that sense, the AHN-algorithm has mixtures of hydrocarbon compounds with basis Φ = {ψ1, . . . , ψk}. Figure 4.5 shows a mixture of artificial hydrocarbon compounds.


Fig. 4.5 Example of a mixture of artificial hydrocarbon compounds

4.2.2 Interactions

In this section, an interaction among CH-primitive molecules is presented in order to determine the mechanisms of how artificial hydrocarbon compounds are built. Then, an interaction among these artificial hydrocarbon compounds is defined to mix them up. In practice, these two different interactions are implemented to perform the artificial hydrocarbon networks algorithm.

4.2.2.1 Nonpolar Covalent Bonds

As detailed in Chap. 3, covalent bonds are the interactions among atomic units. Actually, polar covalent bonds are present between two different atomic units. Thus, they are trivial in artificial hydrocarbon networks, as noticed in Remark 4.2.
Remark 4.2 Let BCH be the set of all possible combinatorial interactions between two atomic units in artificial hydrocarbon networks. Using Definition 4.1, these interactions are the covalent bonds bCC, bCH, bHH ∈ BCH. Moreover, using Definition 3.11, the only polar covalent bond in the set of all possible interactions BCH is bCH. However, bCH defines CH-molecules as stated in Definition 4.3. Thus, bCH bonds do not transmit (map) information from atom to atom. Concluding, bCH polar covalent bonds are trivial in artificial hydrocarbon networks.
In addition, the nonpolar covalent bond bHH is not possible because a molecule of two hydrogen atoms is not defined in artificial hydrocarbon networks. In other words, only molecules with one carbon atom and at least one hydrogen atom are totally defined (see Definition 4.3). At last, bCC nonpolar covalent bonds are the only ones that have behaviors associated, as introduced in Proposition 4.3.
Proposition 4.3 (first model of nonpolar covalent bonds) Let b_k^ij ∈ B be a nonpolar covalent bond of a pair of molecules Mi, Mj with behaviors ϕi, ϕj due to an input x. Let Δ = ⟨δ1, . . . , δ5⟩ be a tuple of properties of b_k^ij. Then, the behavior of nonpolar covalent bonds π holds:

    π(ϕi, ϕj, x) = (wi ϕi(x) + wj ϕj(x)) / (wi + wj)    (4.5)

With,

    wi(x) = max(1 − (δ1/δ3)|δ2 − x|, 0)    (4.6)

    wj(x) = max(1 − (δ1/δ5)|δ4 − x|, 0)    (4.7)

where δ1 ∈ {1, 2, 3} is called the order of bond; δ2, δ4 ∈ R represent the centers of molecules Mi, Mj, respectively; and δ3, δ5 ∈ R represent the spreads of molecules Mi, Mj, respectively. Moreover, the behavior Ψ of an artificial hydrocarbon compound C consisting of molecules Mi, Mj is equal to the behavior of the nonpolar covalent bond, such that Ψ = π(ϕi, ϕj, x).
As noted, this model of π takes into account the centers of the molecules involved in the bond: the centroid of the molecules is used to compute the behavior of the bond as the weighted sum of the molecular behaviors using the values wi, wj. These weights are triangular functions that smooth the molecular behaviors ϕi, ϕj.
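As an illustration of how (4.5)–(4.7) blend two molecular behaviors, consider the following Python sketch (our own code; the roots of the two molecules are read from Fig. 4.6, and the convention returned when x falls outside the support of both weights is an assumption of ours):

def bond_behavior(phi_i, phi_j, delta, x):
    # First model of nonpolar covalent bonds (4.5)-(4.7).
    # delta = (d1, d2, d3, d4, d5): order of bond, center/spread of Mi, center/spread of Mj.
    d1, d2, d3, d4, d5 = delta
    wi = max(1.0 - (d1 / d3) * abs(d2 - x), 0.0)  # triangular weight of Mi (4.6)
    wj = max(1.0 - (d1 / d5) * abs(d4 - x), 0.0)  # triangular weight of Mj (4.7)
    if wi + wj == 0.0:
        return 0.0  # assumed convention when x is outside both supports
    return (wi * phi_i(x) + wj * phi_j(x)) / (wi + wj)  # weighted centroid (4.5)

# Example 4.5: two CH3 molecules given by their roots, Delta = <1, -0.5, 2.0, 0.5, 2.0>
phi1 = lambda x: (x + 0.8) * (x + 0.4) * (x - 0.2)
phi2 = lambda x: (x - 0.8) * (x - 0.4) * (x + 0.2)
print(bond_behavior(phi1, phi2, (1, -0.5, 2.0, 0.5, 2.0), 0.0))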
Example 4.5 Determine the behavior of compound C in the input domain x ∈ (−1, 1), if it is made of two CH-primitive molecules as in Fig. 4.6 (hydrogen atoms represent the roots of molecules). Use the first model of nonpolar covalent bonds with properties Δ = ⟨1, −0.5, 2.0, 0.5, 2.0⟩.
Solution 4.5 Using Proposition 4.2, the molecular behaviors in x ∈ (−1, 1) are shown in Fig. 4.7, together with the weight values of the first model of nonpolar covalent bonds. Figure 4.8 shows the behavior of compound C. Remember that Ψ = π(ϕi, ϕj, x).
The above Proposition 4.3 depends on several properties of the nonpolar covalent bonds. In order to simplify it, the second model of nonpolar covalent bonds is introduced next. Notice that the behavior of these bonds is proposed to be

Fig. 4.6 Artificial hydrocarbon compound C of Example 4.5


Fig. 4.7 Molecular behaviors and weight values of Example 4.5

a nonlinear mapping, such that the composite of molecular behaviors can capture their nonlinearities.
Proposition 4.4 (second model of nonpolar covalent bonds) Let b_k^ij ∈ B be a nonpolar covalent bond of a pair of molecules Mi, Mj with behaviors ϕi, ϕj due to an input x. Then, the behavior of nonpolar covalent bonds π holds:

    π(ϕi, ϕj, x) = exp(ϕi(x) + ϕj(x))    (4.8)

where exp(·) stands for the exponential function. Moreover, the behavior Ψ of an artificial hydrocarbon compound C consisting of molecules Mi, Mj is equal to the behavior of the nonpolar covalent bond, such that Ψ = π(ϕi, ϕj, x).
In contrast with the first model stated in Proposition 4.3, the behavior of the bond π in (4.8) is not characterized by Δ properties. This means that neither the order of bond nor any other property is taken into account to calculate π.


Fig. 4.8 Behavior of compound C of Example 4.5

However, in both cases, Propositions 4.3 and 4.4 do not follow any energy stabilization. In that sense, Proposition 4.5 defines the behavior of nonpolar covalent bonds using an energy function coming from the model of covalent bonds (refer to Sect. 2.2.1).
Proposition 4.5 (third model of nonpolar covalent bonds) Let b_k^ij ∈ B be a nonpolar covalent bond of a pair of molecules Mi, Mj with behaviors ϕi, ϕj due to an input x. Let Δ = ⟨δ1, δ2, δ3⟩ be a tuple of properties of b_k^ij. Then, the behavior of nonpolar covalent bonds π holds:

    π(ϕi, ϕj, x) = δ3 (1 − (δ1 δ2)²) exp(−(1/2)(δ1 δ2)²),  π ≥ 0    (4.9)

With,

    δ2 : θi × θj → R    (4.10)

where δ1 ∈ {1, 2, 3} is called the order of bond; δ2 ∈ R with δ2 ≥ 0 represents the length of bond and it is a metric on the parameters θi, θj that characterize ϕi, ϕj; and δ3 ≥ 0 represents the minimum energy of bond. Moreover, the behavior Ψ of an artificial hydrocarbon compound C consisting of molecules Mi, Mj is equal to the composite molecular behavior, such that Ψ : ϕi(θi) × ϕj(θj) → R.
Actually, (4.9) models (2.2). For instance, consider two CH3 molecules with a minimum energy of bond δ3 = 5. Then, δ1 = 1 because these molecules are joined together with a simple bond. Figure 4.9 shows the behavior π over an intermolecular distance domain r ∈ [0, 10]. Notice that the optimum occurs at δ2 = 3.
In that sense, (4.9) can be seen as an objective function for minimizing the behavior π in terms of the intermolecular distance r. In other words, the optimum value of

Fig. 4.9 Behavior of a nonpolar covalent bond with Δ = ⟨1, 3, 5⟩

the intermolecular distance, arg min_r π = δ2, represents the optimum distance at which two CH-molecules have to be separated.

4.2.2.2 Chemical Balance Interaction

The artificial hydrocarbon networks algorithm uses the same definition of the chemical balance interaction as described in artificial organic networks (see Definition 3.13). In that sense, the chemical balance interaction in the AHN-algorithm finds the set of stoichiometric coefficients Λ = {α1, . . . , αk} of the mixture of artificial hydrocarbon compounds Γ = {C1, . . . , Ck} with basis Φ = {ψ1, . . . , ψk}.

4.2.3 The Algorithm

Once the components and the interactions of the artificial hydrocarbon networks algorithm have been introduced, a proposition concerning the chemical rule for forming compounds is necessary to finally state the mathematical formulation of artificial hydrocarbon networks.

4.2.3.1 Chemical Rule in Forming of Compounds

As described in Sect. 3.4.1, the three-level energy model is required to propose the
chemical rule that will be useful to form artificial hydrocarbon compounds. In fact,
CH-primitive molecules have to follow that rule in order to determine how they bond.

Fig. 4.10 An artificial hydrocarbon compound presented as a saturated linear chain with n ≥ 2 molecules

Proposition 4.6 (first chemical rule) Let f be an algorithm and n ≥ 2 be an integer number. Also, let C = (Ω, BN) be a compound made of CH-primitive molecules Ω = {M1, . . . , Mn} linked with a set of nonpolar covalent bonds BN. Then, f returns the structure of C as a saturated linear chain, such that M1 = Mn = CH3 and Mi = CH2 for all i = 2, . . . , n − 1.
Notice that in Proposition 4.6, the algorithm f that forms an artificial hydrocarbon compound C = (Ω, BN) returns a linear chain of CH-primitive molecules with simple nonpolar covalent bonds, as shown in Fig. 4.10. Actually, this linear chain is the easiest artificial hydrocarbon compound that can be formed. However, the order of bond does not vary, and in some cases this arbitrariness is not the optimal way to form compounds.
In order to derive a forming rule closer to chemical hydrocarbon compounds, basic observations of the trend of nonpolar covalent bonds in carbon-carbon bonds were made. Tendencies of bonds in pairs of carbon atoms indicate that simple bonds are more frequent than double bonds, and the latter are more frequent than triple bonds. This follows from the fact that triple bonds need more energy (837 kJ/mol) than double bonds (611 kJ/mol) to be formed/broken; in the same way, double bonds need more energy than simple bonds (350 kJ/mol) to be formed/broken, as summarized in Table 4.1. At last, the priority of occurrence of nonpolar covalent bonds in artificial hydrocarbon compounds can be written as (4.11); where bi ≻ bj stands for the operation bi inhibits bj, and b1, b2, b3 represent simple, double and triple bonds, respectively.

    b1 ≻ b2 ≻ b3    (4.11)

Now, consider any given system Σ with an input signal x and an output signal y. Also, consider that the system is equally split into n different partitions Σi for all i = 1, . . . , n, such that Σ = ∪i Σi. Then, it is possible to capture the behavior of system Σi with a CH-primitive molecule via its molecular behavior ϕi, as depicted in Fig. 4.11. In that sense, a proper algorithm to find the best i-th CH-primitive molecule to model Σi is required.
Since there is no other information than the input x and output y signals of system Σ, the approximation procedure using artificial hydrocarbon networks is difficult to realize. Thus, implicit information needs to be extracted from Σ, i.e. the energy of the system. Based on signal theory, the energy E of signal y = Σ(x) in the interval x ∈ [a, b] is calculated using (4.12); where ‖g‖ stands for the norm of g:


Fig. 4.11 Example of a given system Σ modeled with n CH-primitive molecules

    E(y) = ∫_a^b ‖y‖² dx,  x ∈ [a, b]    (4.12)

Thus, using (4.12), the energy Ei of a partition Σi is expressed as (4.13); where xi is the input and yi the output signal of Σi in the interval xi ∈ [L1, L2]:

    Ei = E(yi) = ∫_{L1}^{L2} ‖yi‖² dx,  xi ∈ [L1, L2]    (4.13)

In fact, energy Ei is used for selecting the i-th CH-primitive molecule in artificial
hydrocarbon compounds, as explained below. Actually, the notion of energy can be
used because behaviors of both molecules and nonpolar covalent bonds might be
treated as energy signals.

Table 4.2 Relationship among the order of bond bk, the bond energy Ek and the type of CH-primitive molecules

  Order of bond bk    Bond energy Ek    Type of CH-molecules
  1                   152               CH3 − CH3
  2                   132               CH2 = CH2
  3                   48                CH ≡ CH

In that way, an experiment was run in order to prove a significant relationship between the energy Ei and the i-th primitive molecule that best models Σi. That experiment is described in Appendix B, in which the energy-molecule relationship is proved. In a nutshell, the experiment reveals that two CH-primitive molecules, say M1 and M2, that model a given system Σ form a nonpolar covalent bond bk with order k, if the energy of the system E calculated as in (4.12) is close to the bond energy Ek as depicted in Table 4.2.
Thus, the two CH-primitive molecules M1, M2 can be selected by comparing the energy of the system E with the bond energy Ek and selecting the best order of bond k* as expressed in (4.14). Once done, the type of the two CH-primitive molecules can be looked up in Table 4.2.

    k* = arg min_k {|E − Ek|},  ∀k ∈ {1, 2, 3}    (4.14)

In that sense, the above experiment with two molecules can be extended to form an artificial hydrocarbon compound with n CH-primitive molecules in order to model a system Σ split into n partitions Σi for all i = 1, . . . , n. Algorithm 4.1 computes that artificial hydrocarbon compound assuming that it is a saturated linear chain. The inhibition rule (4.11) is also used. Notice that from Lemma 4.1, a CH-primitive molecule cannot be a single carbon atom without hydrogen atoms; therefore, triple bonds can only appear in artificial hydrocarbon compounds with fixed length n = 2, e.g. of the form CH ≡ CH, as reflected in Algorithm 4.1.
Concluding, the following Proposition 4.7 raises the usage of Algorithm 4.1 as the chemical rule applied in the artificial hydrocarbon networks algorithm to form the structure of artificial hydrocarbon compounds.
Proposition 4.7 (second chemical rule) Let Σ = (x, y) be any given system with input signal x and output signal y, and n ≥ 2 be an integer number. In addition, let f be Algorithm 4.1 with inputs Σ and n. Also, let C = (Ω, BN) be a compound made of CH-primitive molecules Ω = {M1, . . . , Mn} linked with a set of nonpolar covalent bonds BN. Then, f returns the structure of C as a saturated linear chain, such that C is closely related to the energy of Σ.
Example 4.6 Using Algorithm 4.1, build an artificial hydrocarbon compound C with
three CH-primitive molecules to capture the energy of the system Σ depicted in
Fig. 4.12.

Fig. 4.12 System Σ split into 3 partitions (E1 = 60, E2 = 70, E3 = 10), for Example 4.6

Algorithm 4.1 CREATE-COMPOUND(Σ, n): Compound formation based on bond energy.

Input: the system Σ = (x, y) and the number of molecules n ≥ 2.
Output: the structure of hydrocarbon compound C.

k*_0 = 0
split Σ into n equal partitions Σi
for j = 1 : (n − 1) do
  Ej ← energy of Σj using (4.13)
  Ej+1 ← energy of Σj+1 using (4.13)
  E = Ej + Ej+1
  k*_j ← using (4.14) and E
  Mj, Mj+1 ← select two CH-primitive molecules using k*_j and Table 4.2
  while (k*_j + k*_{j−1}) ≥ 4 do
    if k*_{j−1} == 1
      k*_j = k*_j − 1
      Mj, Mj+1 ← select two CH-primitive molecules using k*_j and Table 4.2
    else
      k*_{j−1} = k*_{j−1} − 1
      Mj−1, Mj ← select two CH-primitive molecules using k*_{j−1} and Table 4.2
    end-if
  end-while
end-for
Ω ← {M1, . . . , Mn}
BN ← {b1, . . . , bi, . . . , bn−1}, bi with order k*_i for all i = 1, . . . , n − 1
C = (Ω, BN)
return C

Solution 4.6 Let n = 3 be the number of molecules in the compound, k* = (k*_0, k*_1, k*_2) be the set of the best orders of bonds in the compound, and Ω be the set of CH-primitive molecules forming the compound C. Initialize k* = (0, 0, 0) and Ω = ∅. Then, the system Σ has to be split into three partitions Σi as shown in Fig. 4.12.
For j = 1:
  E = E1 + E2 = 60 + 70 = 130.
  k*_1 = 2. Then, k* = (0, 2, 0).
  M1, M2 are CH2 molecules from Table 4.2. Then, Ω = (CH2, CH2).
  Since k*_1 + k*_0 = 2 + 0 < 4, this ends the first iteration.
For j = 2:
  E = E2 + E3 = 70 + 10 = 80.
  k*_2 = 3. Then, k* = (0, 2, 3).
  M2, M3 are CH molecules from Table 4.2. Then, Ω = (CH2, CH, CH).
  Since k*_2 + k*_1 = 3 + 2 > 4, then:
    k*_1 = 2 ≠ 1, so k*_1 = k*_1 − 1 = 2 − 1 = 1.
    Refreshing, k* = (0, 1, 3) and Ω = (CH3, CH3, CH).
  Since k*_2 + k*_1 = 3 + 1 = 4, then:
    k*_1 = 1, so k*_2 = k*_2 − 1 = 3 − 1 = 2.
    Refreshing, k* = (0, 1, 2) and Ω = (CH3, CH, CH2).
  Since k*_2 + k*_1 = 2 + 1 < 4, this ends the second iteration.
Finally, C = (Ω, BN) with Ω = (CH3, CH, CH2) and BN = k* \ {k*_0} = (1, 2).

Figure 4.13 shows the resultant artificial hydrocarbon compound.
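A compact Python sketch of Algorithm 4.1, working directly on partition energies, reproduces the result of Example 4.6. Recovering each molecule type as a carbon with 4 minus the sum of the orders of its attached bonds is a shortcut of ours (CH1 stands for CH):

BOND_ENERGY = {1: 152, 2: 132, 3: 48}  # bond-energy table (Table 4.2)

def create_compound(partition_energies):
    # Sketch of CREATE-COMPOUND (Algorithm 4.1).
    n = len(partition_energies)
    k = [0] * n  # k[j]: order of the bond between molecules j and j+1 (k[0] unused)
    for j in range(1, n):
        E = partition_energies[j - 1] + partition_energies[j]
        k[j] = min(BOND_ENERGY, key=lambda o: abs(E - BOND_ENERGY[o]))  # rule (4.14)
        while k[j] + k[j - 1] >= 4:  # inhibition of adjacent high-order bonds
            if k[j - 1] == 1:
                k[j] -= 1
            else:
                k[j - 1] -= 1
    bonds = k[1:]
    # each carbon keeps 4 - (sum of the orders of its attached bonds) hydrogen atoms
    molecules = []
    for i in range(n):
        attached = (bonds[i - 1] if i > 0 else 0) + (bonds[i] if i < n - 1 else 0)
        molecules.append("CH%d" % (4 - attached))
    return molecules, bonds

print(create_compound([60, 70, 10]))  # (['CH3', 'CH1', 'CH2'], [1, 2]) as in Example 4.6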


As noted, Proposition 4.6 forms a saturated, linear artificial hydrocarbon compound without taking into account the system to be modeled. To improve it, Proposition 4.7 uses Algorithm 4.1 to link the energy of the system to be modeled with the structure of an artificial hydrocarbon compound. However, in both cases these propositions generate a single compound; but the system might be so complex that one hydrocarbon compound is not sufficient to model it. Then, another algorithm has to be designed in order to form mixtures of artificial hydrocarbon compounds, reaching the third level in the three-level energy model described in the artificial organic networks technique.
Therefore, consider a system Σ = (x, y) with input x and output y. Also consider an artificial hydrocarbon compound C = (Ω, BN) with n CH-primitive molecules with behaviors ϕi for all i = 1, . . . , n. Finally, let ψ be the behavior of C, such that ψ = ψ(ϕ1, . . . , ϕn, x). Then, the approximation error eC of compound C
Fig. 4.13 Artificial hydrocarbon compound capturing the energy of Σ using Algorithm 4.1

between the output y and its behavior ψ can be computed as (4.15); where ‖g‖ stands for the norm of g:

    eC = ‖y − ψ‖    (4.15)

On the other hand, suppose that there exists a small positive constant ε. If eC > ε, then a new artificial hydrocarbon compound C of n CH-primitive molecules has to be formed; but now, this new compound will model the residue of the system. This process iterates c times until eC ≤ ε or c reaches a maximum number of compounds cmax. At the end, there will be c compounds with n CH-primitive molecules each. The procedure is summarized in Algorithm 4.2 and Proposition 4.8 states its application.

Algorithm 4.2 CREATE-MIXTURE(Σ, n, cmax, ε): Mixture formation based on the approximation error.

Input: the system Σ = (x, y), the number of molecules n ≥ 2, the maximum number of compounds cmax > 0 and the small tolerance value ε > 0.
Output: the structure of mixture Γ.

i = 0
R = Σ
while (‖R‖ > ε) and (i < cmax) do
  i = i + 1
  Ci = CREATE-COMPOUND(R, n)
  R ← using (4.15)
end-while
Γ ← {C1, . . . , Ci}
return Γ
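The loop of Algorithm 4.2 can be sketched in Python as follows; fit_compound is a placeholder of ours for the compound-building and fitting machinery (Algorithm 4.1 plus the parameter estimation of Sect. 4.2.3.2), assumed to return the compound structure and its behavior evaluated over x:

import numpy as np

def create_mixture(x, y, n, c_max, eps, fit_compound):
    # Sketch of CREATE-MIXTURE: each new compound models the residue
    # left by the previous compounds, until the tolerance or c_max is reached.
    mixture = []
    residue = np.asarray(y, dtype=float)
    while np.linalg.norm(residue) > eps and len(mixture) < c_max:
        structure, behavior = fit_compound(x, residue, n)
        mixture.append(structure)
        residue = residue - behavior  # approximation error (4.15)
    return mixture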

Proposition 4.8 (forming of mixtures) Let Σ = (x, y) be any given system with input signal x and output signal y, n ≥ 2 and cmax be two positive integer numbers, and ε be a small positive real number. In addition, let f be Algorithm 4.2 with inputs Σ, n, cmax and ε. Also, let Γ = {C1, . . . , Cc} be a mixture of c artificial hydrocarbon compounds Ci for all i = 1, . . . , c. Then, f returns the structure of Γ as a mixture of artificial hydrocarbon compounds that is closely related to the energy of Σ. Moreover, f satisfies the three-level energy model.
Example 4.7 Using Algorithm 4.2, build a mixture of compounds Γ with three CH-primitive molecules per compound and a maximum of 2 compounds, to capture the energy of the system Σ = (2/9)x^5 + (1/9)x^3 − 2/3 for all x ∈ [−1, 1] as depicted in Fig. 4.14. Consider a tolerance value of ε = 0.001.
Solution 4.7 Let n = 3 be the number of molecules per compound and cmax = 2 be the maximum number of compounds. Also, let Γ be the set of compounds Ci. Initialize the residue R = Σ. Then, the residue R has to be split into three partitions Σi as shown in Fig. 4.14.

Fig. 4.14 System Σ split into 3 partitions with two residues R, for Example 4.7 (partition energies of Σ: E1 = 39.8202, E2 = 29.7780, E3 = 22.1958; of residue R1: E1 = 3.4E−5, E2 = 4.1E−5, E3 = 0.0028)

For i = 0:
  i = 1.
  C1 = ({CH3, CH, CH2}, {1, 2}) using Algorithm 4.1.
  R = R − ψ1 is shown in Fig. 4.14; ψ1 is obtained later from Proposition 4.9.
  ‖R‖ = 0.0029 using the L2-norm of R.
  ‖R‖ > ε and i < cmax.
For i = 1:
  i = 2.
  C2 = ({CH3, CH, CH2}, {1, 2}) using Algorithm 4.1.
  R = R − ψ2 is shown in Fig. 4.14; ψ2 is obtained later from Proposition 4.9.
  ‖R‖ = 0.0029 using the L2-norm of R.
  ‖R‖ > ε and i = cmax; this ends the loop.
Finally, Γ = (C1, C2). Figure 4.15 shows the resultant mixture of compounds.

4.2.3.2 Basic Algorithm of Artificial Hydrocarbon Networks

Notice that (4.15) requires a valued function ψ. Moreover, ψ depends on the CH-primitive molecular behaviors ϕi for all i = 1, . . . , n. For instance, consider (4.4) in Proposition 4.2 to define the molecular behaviors ϕi. Note that (4.4) is defined via hydrogen-atom values of the form (4.3). Then, suppose that ψ is defined as (4.16):

    ψ(ϕ1, . . . , ϕn, x) = { ϕ1(x),  L0 ≤ x < L1
                             · · ·
                             ϕn(x),  Ln−1 ≤ x ≤ Ln    (4.16)

where the set of Lj for all j = 0, . . . , n represents the bounds of the input domain of the molecular behaviors, with lower bound L0 = a and upper bound Ln = b in the overall input domain x ∈ [a, b]. Particularly for Algorithm 4.1, these bounds are set by the partitions Σi, such that they follow (4.17):

    Lj − Lj−1 = (b − a)/n,  j = 1, . . . , n    (4.17)
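A minimal Python sketch of the piecewise behavior (4.16) with the equally spaced bounds (4.17) follows; the molecular behaviors used in the example are of our own choosing:

import numpy as np

def compound_behavior(molecule_behaviors, a, b, x):
    # Evaluate psi as in (4.16): select the molecule whose interval contains x.
    n = len(molecule_behaviors)
    L = np.linspace(a, b, n + 1)  # equally spaced bounds (4.17): L0 = a, ..., Ln = b
    j = min(np.searchsorted(L, x, side='right') - 1, n - 1)
    return molecule_behaviors[j](x)

phis = [lambda x: (x + 0.5)**3, lambda x: x, lambda x: (x - 0.5)**2]
print(compound_behavior(phis, -1.0, 1.0, 0.4))  # 0.4 falls in [1/3, 1]: third molecule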
Lastly, an algorithm is needed for finding the collection of hydrogen values that parameterizes all ϕi functions and obtaining a valued function ψ due to input x, as follows.
Suppose that each molecular behavior ϕi = ϕi(hi1, . . . , hid, x), where the set of him is the set of hydrogen values associated to the i-th CH-primitive molecule and d is the degree of freedom of that molecule, models a partition Σi = (xi, yi) of the overall system Σ. In order to do that, an optimization process is proposed to find the values of the hydrogen atoms him using the objective function Ei in (4.18); where xik ∈ xi is the k-th sample of xi for all k = 1, . . . , qi; yik ∈ yi is the k-th sample of yi for all k = 1, . . . , qi; and qi is the number of samples in the partition Σi:

    Ei = (1/2) Σ_{k=1}^{qi} (yik − ϕi(xik))²    (4.18)
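Under the factored model (4.4), ϕi is a monic polynomial of degree d, so minimizing (4.18) reduces to a linear least squares problem; the hydrogen values are then recovered as polynomial roots (possibly complex, as in Solution 4.8 below). A sketch of ours, assuming NumPy:

import numpy as np

def fit_molecule(xi, yi, d):
    # LSE fit of phi(x) = prod_i (x - h_i): since the polynomial is monic,
    # solve for the lower-order coefficients and take the roots as hydrogen values.
    xi, yi = np.asarray(xi, float), np.asarray(yi, float)
    A = np.vander(xi, d, increasing=True)  # columns 1, x, ..., x^(d-1)
    c, *_ = np.linalg.lstsq(A, yi - xi**d, rcond=None)
    return np.roots(np.concatenate(([1.0], c[::-1])))  # roots of x^d + ... + c0

x = np.linspace(-1, 1, 50)
y = (x - 0.3) * (x + 0.7)
print(fit_molecule(x, y, 2))  # approximately [0.3, -0.7]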

Fig. 4.15 Mixture of compounds capturing the energy of Σ using Algorithm 4.1

Thus, using the least squares estimates method (LSE) with objective function (4.18) and ϕi = ϕi(hi1, . . . , hid, x), the set of all him is obtained. In that sense, ψ is now a valued function with parameters him, and (4.15) can be computed.
On the other hand, Algorithm 4.2 does not calculate the behavior S(x) of the final mixture Γ. Since the structure Γ has to model Σ, (4.19) has to be minimized; where EΣ stands for the objective function between the system and the model; xk ∈ x is the k-th sample of x for all k = 1, . . . , q; yk ∈ y is the k-th sample of y for all k = 1, . . . , q; q is the total number of samples in Σ; ψi is the i-th compound behavior of Ci in the mixture Γ of the form (4.16); αi is the i-th stoichiometric coefficient αi ∈ Λ; and c is the number of compounds in the mixture:

    EΣ = (1/2) Σ_{k=1}^{q} (yk − Σ_{i=1}^{c} αi ψi(xk))²    (4.19)

Again, the least squares estimates method with objective function (4.19) can be applied in order to obtain the set of stoichiometric coefficients Λ.
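A sketch of this chemical balance step in Python, solving (4.19) as a linear least squares problem in the coefficients αi (the compound behaviors in the example are illustrative):

import numpy as np

def stoichiometric_coefficients(psi_matrix, y):
    # psi_matrix is a (q, c) matrix whose column i holds psi_i at the q samples.
    alphas, *_ = np.linalg.lstsq(np.asarray(psi_matrix), np.asarray(y), rcond=None)
    return alphas

x = np.linspace(-1, 1, 100)
psi = np.column_stack([np.sin(x), np.cos(x)])
y = np.sin(x) - 0.99 * np.cos(x)
print(stoichiometric_coefficients(psi, y))  # approximately [1.0, -0.99]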
In addition, Algorithm 4.2 can be modified to determine both the structure and the behavior of the mixture of artificial hydrocarbon compounds, as depicted in Algorithm 4.3. Actually, this algorithm builds a simple artificial hydrocarbon network, as stated in Proposition 4.9.
Proposition 4.9 (basic AHN-algorithm) Let Σ = (x, y) be any given system with input signal x and output signal y, n ≥ 2 and cmax be two positive integer numbers, and ε be a small positive real number. In addition, let f be Algorithm 4.3 with inputs Σ, n, cmax and ε. Also, let Γ = {C1, . . . , Cc} be a mixture of c artificial hydrocarbon compounds Ci for all i = 1, . . . , c, let H be the set of hydrogen values in the molecular behaviors of each Ci, and let Λ be the set of stoichiometric coefficients that weights the mixture of compounds Γ. Then, f satisfies the three-level energy model and returns the structure of Γ as a mixture of artificial hydrocarbon compounds that is

Algorithm 4.3 SIMPLE-AHN(Σ, n, cmax, ε): Basic algorithm of artificial hydrocarbon networks.

Input: the system Σ = (x, y), the number of molecules n ≥ 2, the maximum number of compounds cmax > 0 and the small tolerance value ε > 0.
Output: the structure of mixture Γ, the set of hydrogen values H, and the set of stoichiometric coefficients Λ.

i = 0
R = Σ
while (‖R‖ > ε) and (i < cmax) do
  i = i + 1
  Ci = CREATE-COMPOUND(R, n)
  Hi = {hi1, . . . , hid} minimizing (4.18) using LSE and information of Ci
  R ← using (4.15)
end-while
Γ ← {C1, . . . , Ci}
H ← {H1, . . . , Hi}
Λ ← minimizing (4.19) using LSE
return Γ, H and Λ

closely related to the energy of Σ, and completely determines the parameters of Γ in H and Λ. Moreover, f approximates Σ.
Example 4.8 Using Algorithm 4.3, build a mixture of compounds Γ with three CH-primitive molecules per compound and a maximum of 2 compounds, to capture the energy of the system Σ = (2/9)x^5 + (1/9)x^3 − 2/3 for all x ∈ [−1, 1] as depicted in Fig. 4.14. Consider a tolerance value of ε = 0.001.
Solution 4.8 Let n = 3 be the number of molecules per compound and cmax = 2 be the maximum number of compounds. Also, let Γ be the set of compounds Ci. Initialize the residue R = Σ. Then, the residue R has to be split into three partitions Σi as shown in Fig. 4.14.
For i = 0:
  i = 1.
  C1 = ({CH3, CH, CH2}, {1, 2}) using Algorithm 4.1.
  ψ1(x) = { ϕ1(x) = (x − h11)(x − h12)(x − h13),  −1 ≤ x < −1/3
            ϕ2(x) = (x − h21),                    −1/3 ≤ x < 1/3
            ϕ3(x) = (x − h31)(x − h32),           1/3 ≤ x ≤ 1
  using (4.16) and C1.
  H1 = {0.42, −0.79 + 0.76i, −0.79 − 0.76i, 76.84, 1.26, −0.39} using ψ1 with least squares estimates.
  R = R − ψ1 is shown in Fig. 4.14.
  ‖R‖ = 0.0029 using the L2-norm of R.
  ‖R‖ > ε and i < cmax.
For i = 1:
  i = 2.

  C2 = ({CH3, CH, CH2}, {1, 2}) using Algorithm 4.1.
  ψ2(x) = { ϕ1(x) = (x − h11)(x − h12)(x − h13),  −1 ≤ x < −1/3
            ϕ2(x) = (x − h21),                    −1/3 ≤ x < 1/3
            ϕ3(x) = (x − h31)(x − h32),           1/3 ≤ x ≤ 1
  using (4.16) and C2.
  H2 = {−0.31, −0.78 + 0.16i, −0.78 − 0.16i, 0.14, 0.95, 0.50} using ψ2 with least squares estimates.
  R = R − ψ2 is shown in Fig. 4.14.
  ‖R‖ = 0.0029 using the L2-norm of R.
  ‖R‖ > ε and i = cmax; this ends the loop.
Finally:
  Γ = (C1, C2)
  H = (H1, H2)
  Λ = (1.0, −0.99) using S(x) = α1ψ1 + α2ψ2 with least squares estimates.
Figure 4.16 shows the resultant mixture of compounds.
As noted so far, Algorithm 4.3 sets the input domain of the CH-primitive molecular behaviors ϕi constrained to (4.17). In other words, CH-primitive molecules in hydrocarbon compounds are distributed equally over the input domain x ∈ [a, b]. In fact, this assumption is not optimal because some molecules should act over bigger regions while others should act in smaller regions. To solve this problem, Proposition 4.5 is used for finding the optimal values of the lengths rj between molecules.
For instance, consider an artificial hydrocarbon compound with two CH-primitive molecules M1 and M2 with molecular behaviors ϕ1 and ϕ2, respectively. Then, this compound will model a system Σ = (x, y) with partitions Σ1 = (x1, y1) and Σ2 = (x2, y2) split using an arbitrary value of length r ∈ x. Also, suppose that the objective function (4.18) is applied to obtain the hydrogen values associated to the behaviors ϕ1, ϕ2 using Σ1, Σ2, resulting in two error function values E1 and E2.

Fig. 4.16 Mixture of compounds capturing the energy of Σ using Algorithm 4.3

In that sense, the problem is to find the optimal value of r such that E1 and E2 are minimized and E1 = E2.
Let π be the behavior of the nonpolar covalent bond b12 between molecules M1, M2, as described in (4.9). Then, π can be used as the objective function to find the optimal value of r, guiding the process in the search space. Thus, proposing the property δ2 = r/Em, π is expressed as (4.20); where Er is the objective function to be minimized, δ1 is selected depending on the order of bond between M1, M2, r is the intermolecular distance (length of bond), and Em = E1 − E2 is the error between the two error values E1, E2. It is remarkable that δ2 measures the length of bond r relative to the quantity Em. Then, the larger the quantity Em is, the shorter the length of bond needs to be.

    Er = π = δ3 (1 − (δ1 r/Em)²) exp(−(1/2)(δ1 r/Em)²),  δ1 ∈ {1, 2, 3}    (4.20)

Since (4.20) depends on the error functions E1, E2 calculated from the current partitions due to r, and r is updated from (4.20), the gradient descent method is used as a strategy to optimize the length of bond r. The updating rule of r is given in (4.21); where rt+1 is the future value of r, rt is the current value of r, and Δrt is the current change in the length of bond r, defined as (4.22) with step size 0 < η < 1 and current error function values E1t, E2t:

    rt+1 = rt + Δrt    (4.21)

    Δrt = −η (E1t − E2t)    (4.22)

In the general case where there are n CH-primitive molecules, there are n − 1 nonpolar covalent bonds of the form bjk between two molecules Mj, Mk with j ≠ k. Thus, for each nonpolar covalent bond, there is a length of bond r^jk associated to it. In particular, each r^jk is found by applying (4.21). Interestingly, these lengths of bonds follow the arrangement of bounds (4.23) in the overall input domain x ∈ [a, b]. However, notice that r^{n−1,n} = b; thus, it can be set initially.

    [a, r^12], [r^12, r^23], [r^23, r^34], . . . , [r^{n−2,n−1}, b]    (4.23)

The above discussion is summarized in Algorithm 4.4, which obtains the optimal values of r^jk and generates the optimal artificial hydrocarbon compound that uses these bounds.
Example 4.9 Using Algorithm 4.4, optimize the artificial hydrocarbon compound C = ({CH3, CH2, CH3}, {1, 1}) that captures the energy of the system Σ = sin(πx + 0.75) for all x ∈ [0, 2]. Consider a step size of η = 0.1.
Solution 4.9 Let n = 3 be the number of molecules in C. Also, let a and b be the lower and upper bounds of the input domain, such that a = 0 and b = 2. Then,

Algorithm 4.4 OPTIMUM-COMPOUND(Σ, C, n, η): Optimal artificial hydrocarbon compound.

Input: the system Σ = (x, y), the compound C to be optimized, the number of molecules n ≥ 2 and the learning rate 0 < η < 1.
Output: the set of hydrogen values H and the set of intermolecular distances Π.

determine the initial values of r^jk ∈ Π using (4.17)
while stop condition is not reached do
  split Σ into n partitions Σi using r^jk
  for each Σi do
    Hi = {hi1, . . . , hid} minimizing (4.18) using LSE and information of C
    Ei ← error function value using (4.18)
  end-for
  update all r^jk ∈ Π applying rule (4.21) and (4.22) with step size η
end-while
H ← {H1, . . . , Hn}
return H and Π

Algorithm 4.4 requires initializing the intermolecular distances r^jk ∈ Π such that (4.17) holds, as written in (4.24). Moreover, the bounds of molecules L are computed as (4.25), giving L = (0.0, 0.66, 1.33, 2.0):

    ri = (b − a)/n = 2/3,  ∀i = 1, . . . , n − 1    (4.24)

    L0 = a
    Lk = rk + Lk−1,  ∀k = 1, . . . , n − 1    (4.25)
    Ln = b

Since the following process is iterative, consider the first iteration at t = 0:
Split Σ into n = 3 partitions using the bounds L as in (4.26):

    Σi = (xk, yk),  ∀k : Li−1 ≤ xk ≤ Li,  ∀i = 1, . . . , n    (4.26)

Figure 4.17 (first) shows the partitions at t = 0.
For Σ1:
  H1 = {−5.47, 1.34, 0.22} using molecule CH3 with LSE.
  E1 = 0.0083 using (4.18).
For Σ2:
  H2 = {1.32, 0.53} using molecule CH2 with LSE.
  E2 = 0.2694 using (4.18).
For Σ3:
  H3 = {1.06, 0.57, 0.39} using molecule CH3 with LSE.
  E3 = 0.0008 using (4.18).
Update intermolecular distances using (4.21) and (4.22):

  r1 = r1 − η(E1 − E2) = 2/3 − 0.1(0.0083 − 0.2694) = 0.6928
  r2 = r2 − η(E2 − E3) = 2/3 − 0.1(0.2694 − 0.0008) = 0.6398
  L = (0.0, 0.6928, 1.3326, 2.0)
Figure 4.17 (second) shows the partitions at t = 1, and the following frames in Fig. 4.17 represent the evolution of compound C until a stop criterion is reached, e.g. a maximum number of iterations.
Finally:
  H = (H1, H2, H3)
  Π = (r1, r2)
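The first update of Solution 4.9 can be written out directly with the rule (4.21)–(4.22); the following Python fragment (ours) reproduces the numbers above:

eta = 0.1
E = [0.0083, 0.2694, 0.0008]       # partition errors at t = 0, from (4.18)
r = [2.0 / 3.0, 2.0 / 3.0]         # initial lengths of bond from (4.24)

r[0] += -eta * (E[0] - E[1])       # rule (4.21)-(4.22): r1 -> 0.6928
r[1] += -eta * (E[1] - E[2])       # r2 -> 0.6398

L = [0.0, r[0], r[0] + r[1], 2.0]  # bounds via (4.25)
print(r, L)                        # [0.6928, 0.6398], [0.0, 0.6928, 1.3326, 2.0]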
To this end, Algorithm 4.4 is applied within Algorithm 4.3, obtaining the modified version of the basic artificial hydrocarbon networks algorithm (Algorithm 4.5) as stated in Proposition 4.10.

Algorithm 4.5 SIMPLE-AHN(Σ, n, cmax, ε, η): Modified algorithm of artificial hydrocarbon networks.

Input: the system Σ = (x, y), the number of molecules n ≥ 2, the maximum number of compounds cmax > 0, the small tolerance value ε > 0 and the learning rate η.
Output: the structure of mixture Γ, the set of hydrogen values H, the set of stoichiometric coefficients Λ and the set of intermolecular distances Π.

i = 0
R = Σ
while (‖R‖ > ε) and (i < cmax) do
  i = i + 1
  Ci = CREATE-COMPOUND(R, n)
  [Hi, Πi] = OPTIMUM-COMPOUND(R, Ci, n, η)
  R ← using (4.15)
end-while
Γ ← {C1, . . . , Ci}
H ← {H1, . . . , Hi}
Λ ← minimizing (4.19) using LSE
Π = {Π1, . . . , Πi}
return Γ, H, Λ and Π

Proposition 4.10 (modified AHN-algorithm) Let Σ be any given system. Also, let f be Algorithm 4.5 with proper inputs. Then, f satisfies the three-level energy model and returns an artificial hydrocarbon network with structure closely related to the energy of Σ and behavior totally parameterized, such that it approximates Σ.
It is important to notice that the intermolecular distances define the bounds of the molecular behaviors. Thus, in the notation of (4.16), the intermolecular distances in (4.23) are expressed as (4.27), recalling that L0 = a and Ln = b in the input domain x ∈ [a, b]:

    ri = r^{i,i+1},  i = 1, . . . , n − 1    (4.27)


Fig. 4.17 Evolution of the optimum compound using Algorithm 4.4: (first) at iteration t = 0, (second) at iteration t = 1, (third) at iteration t = 10, (fourth) at iteration t = 20, (fifth) at iteration t = 30. Dashed lines represent bounds of molecules
4.2 Basics of Artificial Hydrocarbon Networks 101

Example 4.10 Using Algorithm 4.5, build an artificial hydrocarbon network with n = 3 and cmax = 1 for the system Σ = sin(πx + 0.75) for all x ∈ [0, 2]. Consider a step size of η = 0.1 and a tolerance value of ε = 0.1.
Solution 4.10 Let n = 3 be the number of molecules in C. Also, initialize i = 0 and the residue R = Σ.
For i = 0:
  i = 1.
  C1 = ({CH3, CH2, CH3}, {1, 1}) using Algorithm 4.1.
  H1 = {−5.91, 1.31, 0.36, 1.30, 0.54, 1.08, 0.57, 0.38} using Algorithm 4.4.
  Π1 = (0.7809, 0.4793) using Algorithm 4.4.
  ‖R‖ = 1.0231 using the L2-norm of R.
  ‖R‖ > ε and i = cmax; this ends the loop.
Finally:
  Γ = (C1)
  H = (H1)
  Λ = (1.0) because there is only one compound.
  Π = (Π1)
Actually, the resultant response of the artificial hydrocarbon network is the one depicted last in Fig. 4.17.
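Pulling the pieces together, a compact end-to-end sketch of the optimization loop for one compound (in the spirit of Example 4.10) might look as follows. This is illustrative code of ours with simplified choices (monic polynomial fits, fixed iteration count), not the toolkit implementation, and its numeric output need not match the book's values exactly:

import numpy as np

def fit_partition(x, y, d):
    # LSE fit of a monic degree-d molecular behavior; returns the error (4.18).
    A = np.vander(x, d, increasing=True)
    c, *_ = np.linalg.lstsq(A, y - x**d, rcond=None)
    pred = A @ c + x**d
    return 0.5 * np.sum((y - pred) ** 2)

def optimum_compound(x, y, degrees, eta=0.1, iters=30):
    # Sketch of Algorithm 4.4: adapt the molecule bounds with rule (4.21)-(4.22).
    n = len(degrees)
    a, b = x[0], x[-1]
    r = np.full(n - 1, (b - a) / n)  # initial lengths of bond (4.24)
    E = []
    for _ in range(iters):
        L = np.concatenate(([a], a + np.cumsum(r), [b]))  # bounds (4.25)
        E = [fit_partition(x[(x >= L[i]) & (x <= L[i + 1])],
                           y[(x >= L[i]) & (x <= L[i + 1])], degrees[i])
             for i in range(n)]
        r += eta * np.diff(E)  # r_i <- r_i - eta*(E_i - E_{i+1})
    return r, E

x = np.linspace(0, 2, 200)
y = np.sin(np.pi * x + 0.75)
print(optimum_compound(x, y, degrees=[3, 2, 3]))  # CH3-CH2-CH3, as in Example 4.10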
Previous algorithms were implemented in the Artificial Organic Networks Toolkit
using LabVIEW™ (see Appendix D). Please, refer to Appendix E for some examples
of this toolkit.

4.2.4 Mathematical Formulation

Once the components, interactions and different chemical rules have been defined and proposed, the model of artificial hydrocarbon networks is formally introduced in Definition 4.6:
Definition 4.6 (artificial hydrocarbon networks) Let AHN = ⟨Γf, Φ, Λ, X⟩ be an artificial organic network made of a mixture S, and A(8) be the set of hydrogen and carbon atomic units that exist in AHN. Also, let G be the unique functional group of AHN. Then, AHN is an artificial hydrocarbon network and G is called the CH functional group. Moreover, AHN has the following properties:
1. Γf is a mixture of artificial hydrocarbon compounds with basis Φ and a set of stoichiometric coefficients Λ.
2. Γf and S are obtained using Algorithm 4.5 as the three-level energy rule f.
3. S(X) is the behavior of AHN due to X.

4.3 Metrics of Artificial Hydrocarbon Networks

In this section, two metrics of the modified artificial hydrocarbon networks algo-
rithm are computed in order to determine the computational time complexity and the
stability of Algorithm 4.5. For further information about computational complexity
and stability analysis theory, refer to Sect. 1.4.

4.3.1 Computational Complexity

Consider any given system of the form Σ = (x, y) with an input signal x and an output signal y. Also, consider q training samples of Σ in order to get a model AHN using Algorithm 4.5 with c compounds and n molecules per compound. Assume that a minimum error ε > 0 satisfies the stop criterion for finding the best intermolecular distances r^jk ∈ Π. Then, Theorem 4.1 holds for Algorithm 4.5.
Theorem 4.1 (time complexity) Algorithm 4.5 has time complexity O(cnq ln(1/ε)) with q ≥ n ≥ c ≥ 2 and a small value ε > 0.
Proof Consider Algorithm 4.5. Time complexities are measured using the worst-case assumption.
CREATE-COMPOUND: Consider Algorithm 4.1.
The first instruction is an assignment, thus it has time complexity O(1).
The second step splits signal Σ into n partitions Σk for k = 1, . . . , n, using the set of intermolecular distances {r^{j,j+1}} with j = 0, . . . , n − 1, and r^{01} = a and r^{n,n+1} = b; where x ∈ [a, b]. Assume a simple algorithm for splitting the signal using the set of rules of the form (4.28), for all i = 1, . . . , q:

    Rα : if r^{α−1,α} ≤ xi < r^{α,α+1}, then (xi, yi) ∈ Σα,  with α = 1, . . . , n    (4.28)

Then, each rule Rα in (4.28) requires 4 operations (i.e. 2 comparisons, one AND-operation and one assignment). Since there are n rules in (4.28), and assuming the worst case when the q samples are in the last rule Rn, the time complexity is O(4nq).
The third step considers a for-loop. First, the measurement of energy in a partition is required. Suppose that Σk has qk samples. Using (4.13), the energy of the partition can be obtained from (4.29); where yi represents the i-th sample of the output signal in Σk. Notice that this representation is valid because the integral of (4.13) depends on the number of qk samples in Σk:

    Ek = Σ_{i=1}^{qk} yi²    (4.29)

Clearly, (4.29) needs qk − 1 additions and qk multiplications. Thus, the time complexity for computing the energy in each partition is O(qk). Since there are two measures of energy, one sum of those, three comparisons and a decision from (4.14), and a selection of two CH-primitive molecules in a lookup table, the time complexity until now is O(qk + qk + 1 + 4 + 1) ∼ O(qk).
Then, a while-loop encloses a decision procedure. At most, the while-loop runs three times, only if two subsequent bonds with orders k*_j, k*_{j+1} are equal to 3. In addition, the if-else decision has order O(3) ∼ O(1) because the total operations are: one for the decision, one for the subtraction and one for the selection in a lookup table. At last, the while-loop has time complexity O(9) ∼ O(1).
At the end, the for-loop has time complexity Σ_{k=1}^{n−1} (O(qk) + O(1)) ∼ O(q + n), since q = Σ_{k=1}^{n} qk.
The overall time complexity of Algorithm 4.1 is O(1 + 4nq + q + n + 1) ∼ O(4nq + n + q + 2), assuming that the returning operation is one instruction.
OPTIMUM-COMPOUND: Consider Algorithm 4.4.
The first step requires a split procedure. From the above discussion, splitting the signal using (4.28) has time complexity O(4nq).
The second step has a while-loop. Starting with the inner instructions, the first step is again a splitting function with time complexity O(4nq). The second step has a for-loop from k = 1, . . . , n. Firstly, it computes the parameters His with s = 1, . . . , d (d represents the maximum number of hydrogen atoms associated to the carbon atom) using least squares estimates. In fact, this method has time complexity O(C²N) [4]; where C is the number of features and N is the number of training samples. In terms of Algorithm 4.4, the time complexity of this step is O(d²qk), and it can be approximated to O(qk) because d ∈ {1, 2, 3} is restricted to the number of hydrogen atoms in the CH-primitive molecules, and d ≪ qk. Then, the for-loop also computes the error between the partition Σk and the structure of CH-primitive molecules using (4.18). It can be seen that this computation requires qk subtractions, qk − 1 additions and qk + 1 multiplications. Then, the time complexity is O(3qk) ∼ O(qk). At last, the loop runs n times for the n partitions of Σ. Thus, the upper bound complexity of the for-loop in Algorithm 4.4 is Σ_{k=1}^{n} O(qk) ∼ O(q), since q = Σ_{k=1}^{n} qk. Finally, the third step of the while-loop updates the set of intermolecular distances using (4.21) and (4.22). Clearly, they need three operations (i.e. two subtractions and one multiplication) for the n − 1 intermolecular distances, not including the extremes of the interval r^{01} and r^{n,n+1}. Thus, the time complexity for the last step is O(3n − 3) ∼ O(n). In that sense, the overall time complexity of one iteration in the while-loop is O(4nq + q + n) ∼ O(nq + q + n) ∼ O(nq) because O(nq) > O(q) ≥ O(n) if q ≥ n ≥ 2.
In order to determine the number of cycles in the while-loop, the following discussion uses the notion of the analytical complexity of descent methods such as (4.21) and (4.22). For instance, consider the optimization problem (4.30),

    min_{r∈R^m} f(r)    (4.30)

where f(r) is the objective function to find the optimal values of the intermolecular distances {r^{j,j+1}}* for j = 0, . . . , n. Moreover, consider a relaxation of (4.30) using a sequence of values {f(r)}, such that f(rt+1) ≤ f(rt) for all t = 0, 1, . . . . This problem can be solved applying a descent method, if the following properties hold [5]:
• f(r) is bounded below.
• f(r) has a first derivative f′(r) continuous in R^m.
• f′(r) is Lipschitz, such that ‖f′(rt+1) − f′(rt)‖ ≤ L‖rt+1 − rt‖ with some positive value L, and ‖·‖ standing for the Euclidean norm (4.31); where x = (x1, . . . , xn) is a vector in R^n:

    ‖x‖ = √(x1² + · · · + xn²)    (4.31)

If this is the case, the values of the set {r}* are reached with an accuracy ε > 0 at time complexity O(ln(1/ε)), using a gradient descent method [5]. Notice that the steepest descent method (4.21) and (4.22) is valid for solving (4.30) because the gradient descent method coincides with the steepest descent method when ‖·‖ stands for the Euclidean norm [6].
In order to apply (4.21) and (4.22) correctly, let f(r) be a function Ef(r) = Eu − Ev representing the difference between the energy of any molecule u and the energy of any molecule v with intermolecular distance r^{uv}, as in (4.22). It is enough to prove that Ef(r) has the above properties.
By definition of (4.18), the energies Eu and Ev are bounded below by Eu = Ev = 0; thus, Ef(r) is bounded below at Ef(r) = 0, proving the first property. On the other hand, it is easy to see that E′f(r) ≈ ΔEf(rt) = Ef(rt+1) − Ef(rt), and because Ef(rk) ∈ R^m, ∀k = 0, 1, . . . , then ΔEf(r) ∈ R^m, proving the second property. Finally, to prove the third property, Ef(r) needs to have a second derivative E″f(r), such that ‖E″f(r)‖ ≤ L, ∀r ∈ R^m. It is easy to see that E″f(r) ≈ Δ²Ef(rt), then Δ²Ef(rt) = Ef(rt+2) − 2Ef(rt+1) + Ef(rt) is properly defined. Moreover, ‖Δ²Ef(rt)‖ ≤ L. Thus, it proves the third property.
At last, the while-loop in Algorithm 4.4 has time complexity O(nq ln(1/ε)). Then, Algorithm 4.4 has time complexity O(4nq + nq ln(1/ε) + 1) ∼ O(nq + nq ln(1/ε)), assuming that the returning operation is one instruction.
SIMPLE-AHN: Consider Algorithm 4.5.
This algorithm has the first two assignments with time complexity O(2) ∼ O(1). Then, the while-loop runs at most c times, once per compound. Inside, the first assignment has time complexity O(1), the CREATE-COMPOUND algorithm has time complexity O(4nq + n + q + 2), the OPTIMUM-COMPOUND algorithm has O(nq + nq ln(1/ε)), and the residue of (4.15) has time complexity O(2q − 1) ∼ O(q). At last, the while-loop has time complexity O(c(1 + 4nq + n + q + 2 + nq + nq ln(1/ε) + q)) ∼

O(cnq + cn + cq + cnq ln(1/ε) + 1). Then, computing the set of stoichiometric coefficients implies the least squares method. Using the above discussion, the time complexity of this procedure is O(c²q). The step of returning parameters might be considered constant, such that O(1).
In the worst case, the upper bound of time complexity in Algorithm 4.5 is calculated based on (4.32):

    O(cn + cnq + cq + cnq ln(1/ε) + c²q + 3)    (4.32)

It is easy to show that asymptotically O(1) < O(cn) ≤ O(cnq) ≤ O(cnq ln(1/ε)). Then, (4.32) can be expressed as (4.33):

    O(cnq ln(1/ε) + c²q)    (4.33)

If the transformation x = cq is applied, then (4.33) is rewritten as O(xn ln(1/ε) + xc). Also, if n ≥ c ≥ 2, then n ln(1/ε) ≥ c. At last, O(xn ln(1/ε)) ≥ O(xc), thus (4.33) is asymptotically equal to (4.34) with q ≥ n ≥ c ≥ 2 and a small value ε > 0:

    O(cnq ln(1/ε))    (4.34)

So, it is shown that Algorithm 4.5 has an upper bound time complexity as in (4.34) with q ≥ n ≥ c ≥ 2 and a small value ε > 0. □

4.3.2 Stability

The efficiency in accuracy of learning algorithms is desirable when proposing new learning methodologies. In fact, stability analysis of algorithms gives an interesting measurement in terms of efficiency. Particularly, the analysis of stability of Algorithm 4.5 is presented, using the sensitivity analysis developed by Bousquet and Eliseeff [7], which aims to determine how much a variation in the input of the algorithm modifies the behavior of its output [7].
Consider any given system of the form Σ = (x, y) with an input signal x and
an output signal y. Also, consider q training samples of Σ in order to get a model
AHN using Algorithm 4.5 with c compounds and n molecules per compound. Also,
let L be a loss function such that it measures accuracy in Algorithm 4.5 between the
model AHN in x and the output value y, expressed as (4.35).

L(AHN(x), y) = (AHN(x) − y)²    (4.35)



Moreover, the loss function L is said to be σ-admissible if L is convex with respect to its first argument z ∈ R and (4.36) holds:

∀z₁, z₂ ∈ R, ∀y ∈ Y, |L(z₁, y) − L(z₂, y)| ≤ σ|z₁ − z₂|    (4.36)

Then, Algorithm 4.5 has uniform stability β with respect to the loss function L if (4.37) holds; where, L(AHN_Σ, ·) represents the loss function of all samples in Σ and L(AHN_{Σ\i}, ·) represents the loss function of all samples in Σ without the i-th sample.

∀Σ, ∀i ∈ {1, ..., q}, ‖L(AHN_Σ, ·) − L(AHN_{Σ\i}, ·)‖_∞ ≤ β    (4.37)

Thus, (4.37) assures that the maximum difference between the cost of some input data set to Algorithm 4.5 and the cost of the same input data set with one variation is less than or equal to an upper bound β. At last, AHN will be stable if β decreases proportionally to 1/q.
In order to define the response of algorithm AHN, a reproducing kernel Hilbert space (RKHS) F is introduced. The fundamental property of a RKHS is written as (4.38); where, there is a unique kernel k in the Hilbert space H, such that, ∀x, y ∈ H, k_x(y) = k(y, x) is a function in the Hilbert space, and f ∈ H is a set of functions in a convex subset of a linear space F.

∀f ∈ F, ∀x, f(x) = ⟨f, k(x, ·)⟩    (4.38)

Thus, the following Theorem 4.2 holds for stability in artificial hydrocarbon networks, in which it assures that Algorithm 4.5 is spanned by the optimum set of functions f that minimizes the loss function L, and by a unique kernel k.

Theorem 4.2 (uniform stability) Let k be a bounded kernel such that k(x, x) ≤ κ² and Y ∈ [a, b] be the output vector of AHN bounded in the range B = b − a. Then, Algorithm 4.5 has uniform stability β with respect to the loss function L defined in (4.35) that is 2B-admissible if AHN is defined by

AHN_Σ = arg min_{f ∈ F} (1/q) Σ_{i=1}^{q} L(f, y_i) + λ‖C‖²_k, ∀y_i ∈ Y    (4.39)

Moreover,

β ≤ 2κ²B² / (λ min{q_ij}), ∀i ∈ {1, ..., n}, j ∈ {1, ..., c} with q = (1/c) Σ_{j=1}^{c} Σ_{i=1}^{n} q_ij    (4.40)

Proof The definition of Algorithm 4.5 in terms of (4.39) comes directly from the
theorem related to the definition of algorithms in Hilbert spaces as described by
Bousquet and Eliseeff [7]. In fact, the uniform stability β of any given algorithm A_S

in the Hilbert space reproduced by the kernel k is like (4.41):

β ≤ σ²κ² / (2λm)    (4.41)
Where, σ refers to the admissibility of the loss function L associated to A_S, κ is the upper bound of the kernel k such that ∀x, k(x, x) ≤ κ² < ∞, the parameter λ is the penalty in the norm of the basis N(f), ∀f ∈ F such that N(f) = ‖f‖²_k ≤ B/λ with some positive value of B, and m is the number of training samples.
In order to prove (4.40), consider that the uniform stability β of least squares estimates is equal to (4.41) with σ = 2B (as described by Bousquet and Eliseeff [7]) like in (4.42):

β ≤ 2κ²B² / (λm)    (4.42)
Actually, Algorithm 4.5 uses least squares estimates in each of the n molecules to form a compound. Thus, let q_i be the number of samples in the i-th molecule, such that q = Σ_{i=1}^{n} q_i. Then, using (4.42), each molecule provides a maximum risk bounded by (4.43).

β_i ≤ 2κ²B² / (λq_i), ∀i ∈ {1, ..., n}    (4.43)

Notice that from definition, B = b − a remains constant for all molecules. Thus, the overall uniform stability β_C of a compound is given by the maximum of all uniform stabilities {β_i} as depicted in (4.44).

β_C = max_{i=1,...,n} β_i ≤ (2κ²B²/λ) · max{1/q_i}, ∀i ∈ {1, ..., n}    (4.44)

Moreover, Algorithm 4.5 uses c compounds. Thus, the total uniform stability β_T of AHN is given by the maximum of all uniform stabilities {β_C^(j)} of compounds like (4.45); where q_ij is the number of samples in the i-th molecule of the j-th compound such that cq = Σ_{j=1}^{c} Σ_{i=1}^{n} q_ij.

β_T = max_{j=1,...,c} β_C^(j) ≤ (2κ²B²/λ) · max{1/q_ij}, ∀i ∈ {1, ..., n}, j ∈ {1, ..., c}    (4.45)

Then, (4.45) proves the uniform stability of Algorithm 4.5 depicted in (4.40). □
Notice that Theorem 4.2 provides the upper bound of uniform stability. In fact,
(4.40) gives the notion that increasing the number of samples per molecule per com-
pound qij gives a better stability of Algorithm 4.5. Finally, it is remarkable to say that

Theorem 4.2 guarantees stability of Algorithm 4.5 only for approximation problems, since artificial hydrocarbon networks are mainly developed for these problems.
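The bound in (4.40) can also be probed empirically. The following minimal Python sketch (an illustration only; the least squares routine stands in for the AHN training step, and the data are synthetic assumptions) estimates a leave-one-out stability in the sense of (4.37), which tends to shrink as the number of samples q grows:

    # Empirical leave-one-out stability probe, in the spirit of (4.35)-(4.37).
    # `train` is a stand-in regressor (degree-2 least squares), not the AHN-algorithm.
    import numpy as np

    def empirical_beta(train, X, Y):
        full = train(X, Y)
        beta = 0.0
        for i in range(len(X)):
            loo = train(np.delete(X, i), np.delete(Y, i))
            # largest change of the squared loss over all samples, cf. (4.37)
            diff = np.max(np.abs((full(X) - Y) ** 2 - (loo(X) - Y) ** 2))
            beta = max(beta, diff)
        return beta

    train = lambda X, Y: np.poly1d(np.polyfit(X, Y, 2))
    X = np.linspace(0.0, 1.0, 50)
    Y = X ** 2 + 0.05 * np.random.randn(50)
    print(empirical_beta(train, X, Y))  # shrinks roughly like 1/q as q grows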

4.4 Artificial Hydrocarbon Networks Practical Features

In this section, some characteristics of artificial hydrocarbon networks are highlighted in order to exploit them, but also to understand them. In practice, artificial hydrocarbon networks approximate given systems, arising in different applications like inference, clustering, classification, pattern recognition, and even optimization.

4.4.1 Partial Knowledge Representation

One of the objectives of artificial organic networks, and in particular of artificial hydrocarbon networks, is to extract information from the modeled system in order to provide metadata of it and, in consequence, to better understand it. For instance, the following five different levels of information are given when the training process of AHNs finishes: molecular clustering, clustering of compounds, stoichiometric coefficients, hydrogen coefficients, and intermolecular distances.
In order to have a visual representation of artificial hydrocarbon networks, con-
sider the adoption of the structural formulas in chemistry. Figure 4.18 shows an
example of a structural formula applied to AHNs. Notice that there are two com-
pounds, each one showing its associated stoichiometric coefficient at the bottom. In

Fig. 4.18 Example of a structural formula applied to visual representations of artificial hydrocarbon networks

addition, each compound is made of different CH-primitive molecules. Each CH-molecule is identified by the carbon atom C. Then, the number of carbon atoms in the compound refers to the number of CH-primitive molecules associated to the compound. Also, each molecule has a finite number of hydrogen atoms h_ij (the j-th hydrogen atom of the i-th molecule in the compound) representing instances of the specific trained AHN. Finally, the values shown above the stoichiometric coefficients represent intermolecular distances in compounds, e.g. the length between carbon atoms.

4.4.1.1 Molecular Clustering

Artificial hydrocarbon networks organize molecules in order to capture clusters of information. In fact, these clusters are grouped by similar characteristics. For example, Algorithm 4.5 implicitly determines the criterion of clustering similar points. Each of those clusters is approximated by one molecule; thus, the number of molecules in the AHN-structure corresponds to the number of clusters generated (see Fig. 4.18). In Algorithm 4.5, finding the optimum intermolecular distances assures the minimum error (4.18) per molecule, which geometrically represents similar points. In a later chapter, the optimal number of molecules is treated.

4.4.1.2 Clustering of Compounds

In the essence of molecular clustering, the number of compounds refers to the number of components that will form a basis of the final approximation. Consider, for example, that the behavior of the network S(x) is a vector spanned by a linear combination of behaviors of compounds ψ_i ∈ Φ, which is the basis of S(x). With a slight modification of Algorithm 4.5, each compound should extract particular information. For instance, consider the decomposition of an audio signal. In that sense, artificial hydrocarbon networks might be used for representing the fundamental signal and the harmonics by compounds.

4.4.1.3 Stoichiometric Coefficients

Since the set of compounds is the basis of the AHN approximation, the stoichiometric coefficients are the coordinates of the behavior of the AHN. In fact, each coefficient can be interpreted as the total weight each compound contributes to the final approximation of the model. This is easy to check because finding the stoichiometric coefficients corresponds to solving a multilinear regression.

4.4.1.4 Hydrogen Parameters

Indeed, hydrogen parameters are highly important in the interpretation of knowledge captured by artificial hydrocarbon networks. It is easy to check from Proposition 4.2 that hydrogen parameters are the roots (or zeros) of the approximated molecular behavior. Moreover, since hydrogen parameters are complex numbers, it is evident that real-valued hydrogen parameters refer to molecular behaviors that actually cross at y_a = 0, y_a ∈ y. In addition, from the fundamental theorem of algebra, the number of real-valued hydrogen parameters represents the number of times the approximated output y crosses zero. For example, in control theory, the number of poles (roots in the denominator) in a transfer function is closely related to the stability of the physical system. In that sense, the transfer function refers to the model obtained from any physical system. Moreover, if AHNs are applied for modeling the given system, applying the proper transformation in order to capture the frequency domain of the system, hydrogen parameters would be very useful in the analysis of the given system. Another geometrical interpretation of hydrogen parameters is that, since h ∈ C can be decomposed into a rectangular representation h = σ + ωi, ∀σ, ω ∈ R, the real part σ gives an insight about when the molecular behavior changes its concavity but not necessarily crosses y_a = 0.

On the other hand, if hydrogen parameters are read using the intermolecular distances as the interval domain of the molecular behavior, then hydrogen parameters inside that interval will say how far or close the similar points (x, y) clustered by the molecular behavior are.

In addition, all data captured by any molecular behavior might be reproduced using hydrogen parameters and Propositions 4.1 or 4.2. This direct application of hydrogen parameters can be used for learning purposes when the AHN-structure turns into an inference system.
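As a concrete illustration of this root-based reading, the short Python sketch below (an assumption-based example, not the book's code) rebuilds a molecular behavior in the product form of Proposition 4.2 from its hydrogen parameters and a carbon gain, using values of the kind reported later in (6.9):

    # Rebuilding a CH-molecule behavior from hydrogen parameters (its roots),
    # following the product form of Proposition 4.2: phi(x) = vC * prod(x - h_i).
    import numpy as np

    def molecule_behavior(vC, hydrogens):
        def phi(x):
            out = vC * np.ones_like(x, dtype=complex)
            for h in hydrogens:
                out *= (x - h)
            return out.real  # conjugate hydrogen pairs yield a real-valued output
        return phi

    # First molecule of (6.9): vC = 7.17, roots -0.50 and -0.31 +/- 0.80i.
    phi = molecule_behavior(7.17, [-0.50, -0.31 + 0.80j, -0.31 - 0.80j])
    print(phi(np.linspace(-1.0, -0.69, 5)))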

4.4.1.5 Intermolecular Distances

Other important parameters are the intermolecular distances r_ij, which measure the optimal length between two molecules i and j. In fact, these distances provide the interval of action in molecular behaviors. Since these distances implicitly measure similarity in points (x, y), they can be used for clustering. It is remarkable to say that unlikeness in clusters might refer to possible discontinuities or other interesting points in the behavior of the modeled system. At last, intermolecular distances can capture these interesting points.

4.4.2 Practical Issues in Partial Knowledge Extraction

As described above, partial knowledge extraction is obtained from different instances


of the artificial hydrocarbon network. Some characteristics were highlighted, but

those can be used practically when considering AHNs as a learning algorithm for modeling problems.

For instance, the intermolecular distances can be extracted from the AHN-structure and then used for further processing, e.g. extracting dissimilar information or analyzing critical points in the system.
Since these intermolecular distances and molecules perform clustering while modeling any given system, these molecular units might be used for classification. Again, as said above, one direct application of hydrogen parameters is generalization of data in an inference system, i.e. learning procedures with AHNs.

Notice that molecular units and compounds encapsulate information as characteristics of systems. Thus, a previously trained molecule or compound can be used in future systems when similar characteristics are expected. Hence, molecules and compounds inherit information, and reusability might be considered.

References

1. Ponce H, Ponce P (2012) Artificial hydrocarbon networks: a new algorithm bio-inspired on organic chemistry. Int J Artif Intell Comput Res 4(1):39–51
2. Ponce H, Ponce P, Molina A (2013) A new training algorithm for artificial hydrocarbon networks using an energy model of covalent bonds. In: Proceedings of the 7th IFAC conference on manufacturing modelling, management, and control. International Federation of Automatic Control, Saint Petersburg, Russia, pp 602–608
3. Ponce H, Ponce P, Molina A (2012) A novel adaptive filtering for audio signals using artificial hydrocarbon networks. In: Proceedings of IEEE 9th international conference on electrical engineering, computing science and automatic control. Mexico, Mexico, pp 277–282
4. Schoukens J, Rolain Y, Gustafsson F, Pintelon R (1998) Fast calculation of least-squares esti-
mates for system identification. In: Proceedings of the 37th IEEE conference on decision and
control, vol 3. Tampa, Florida, pp 3408–3410
5. Nesterov Y (2004) Introductory lectures on convex optimization. Kluwer Academic Publishers,
Dordrecht
6. Boyd S, Vandenberghe L (2009) Convex optimization. Cambridge University Press, Cambridge
7. Bousquet O, Eliseeff A (2002) Stability and generalization. J Mach Learn Res 2:499–526
Chapter 5
Enhancements of Artificial Hydrocarbon Networks

The artificial hydrocarbon networks (AHN) algorithm builds and trains a model for any given system. However, that model considers a single-input-and-single-output (SISO) system and a fixed number of molecules. These assumptions limit the performance of the obtained model. For example, systems that are not SISO cannot be handled easily with artificial hydrocarbon networks, or the number of molecules is difficult to determine, except from experience tuning or trial-and-error.

Thus, this chapter presents three enhancements of the AHN-algorithm: an optimization in the number of molecules, an extension to model multiple-inputs-and-multiple-outputs (MIMO) systems, and a generalization to create recurrent carbon-based networks inspired by chemical aromatic compounds.

5.1 Optimization of the Number of Molecules

As noted in the simple and modified artificial hydrocarbon networks algorithms, the topology depends on the number of molecules n and the number of compounds c. However, these values have to be selected from previous experience or subjected to the system Σ. In this section, an improvement of the number of molecules is presented. Functionally, it optimizes the structure of artificial hydrocarbon networks to the minimum artificial hydrocarbon compound that approximates Σ with some accuracy ε > 0.

5.1.1 The Hess’ Law

In chemistry, enthalpy ΔH° refers to the energy transfer in a thermodynamic system (see Sect. 2.4.2). In particular, enthalpy is the measure of heat interchange in a chemical reaction at constant pressure. Closely to molecules, it is the energy required to form or break a chemical bond. Thus, enthalpy is the chemical energy



in a reaction at constant pressure. In order to compute the enthalpy ΔHreaction in a
chemical reaction, it is helpful to understand the Hess’s law. Roughly, it states that
the enthalpy of the reaction is independent of the pathway between the initial and
final states [1], as shown in Fig. 5.1. The process begins with an initial set of mole-
cules that chemically react forming new compounds. Those react again and again
several times, e.g. a chain of chemical reactions or paths, until the process stops with
a final set of molecules. Then, the Hess’ law assures that the initial set of molecules
(reactants) has enthalpy equal to the enthalpy of the set of final molecules (products).
Let Ωreactans be the set of initial molecules called reactants, and Ωproducts be the
set of final molecules called products. Then, a chemical reaction is a process to
transform Ωreactants in Ωproducts and is expressed as (5.1) with enthalpy of reaction

ΔHreaction
Ωreactants ≥ Ωproducts (5.1)

Moreover, each molecule Mi ⊂ Ωreactants has enthalpy ΔHi∈ in the set of reactant

enthalpies ΔHreactants and each molecule Mj ⊂ Ωproducts has enthalpy ΔHj∈ in the set

of product enthalpies ΔHproducts ∈
. Then, the enthalpy of reaction ΔHreaction is equal
to (5.2):  
∈ ∈ ∈
ΔHreaction = ΔHproducts − ΔHreactants (5.2)

5.1.2 Boiling and Melting Points in Hydrocarbons

In organic chemistry, it is known that hydrocarbon compounds have higher boiling and melting points as the number of carbon atoms increases [2]. Figure 5.2 shows that relationship. Notice that the boiling point in hydrocarbons increases monotonically, while the melting point increases between two bounds when the number of carbon atoms is small. In both cases, this tendency occurs similarly in alkanes, alkenes

Fig. 5.1 Enthalpy of reaction in a chain of chemical reactions (reactants A + B + C are transformed into the product ABC along different paths)



Fig. 5.2 Estimation of the boiling point (K) with respect to the number of carbon atoms (n) in hydrocarbon compounds (temperature curves for the boiling point and the melting point)

and alkynes. In addition, branched hydrocarbon compounds contribute to decreasing boiling and melting points. However, from the point of view of artificial hydrocarbon networks, branching is out of scope. At last, only boiling points will be considered next, because at these temperatures chemical bonds are prone to be formed or broken.

5.1.3 Enthalpy in Artificial Hydrocarbon Networks

In fact, enthalpy ΔH° is proportional to the temperature of a given compound, as written in (5.3); where, ΔC_p is the change of specific heat, T is the temperature of the compound in K and ΔH°_0K is the enthalpy of the compound at temperature T = 0 K.

ΔH° = ΔC_p T + ΔH°_0K    (5.3)

Thus, if a molecule M is subjected to a chemical reaction, the enthalpy of reaction ΔH°_M in M can be derived from (5.3) with temperature T equal to the boiling point T_b, as expressed in (5.4).

ΔH°_M = ΔC_p T_b + ΔH°_0K    (5.4)

From (5.4) and the discussion of boiling points in Sect. 5.1.2, the following observations hold:
• The greater the boiling point T_b is in a molecule M, the larger the number of carbon atoms n present in M.
• The greater the boiling point T_b is in a molecule M, the larger the value of enthalpy of reaction ΔH°_M in M.

Concluding, the larger the number of carbon atoms n is in a molecule M, the larger the value of enthalpy of reaction ΔH°_M in M, and vice versa. Actually, this conclusion is

Fig. 5.3 Structure of the artificial hydrocarbon network AHN that models Σ

used for optimizing the number of CH-primitive molecules in artificial hydrocarbon compounds.

Let Σ be any given system and AHN be an artificial hydrocarbon network that models Σ. Also, let c = 1 be the number of compounds in AHN. Then, the artificial hydrocarbon network AHN only has one artificial hydrocarbon compound C made of n ≥ 2 CH-primitive molecules; thus, the stoichiometric coefficient is α_1 = 1, as shown in Fig. 5.3. The objective is to find the optimal number of molecules n* such that AHN models Σ closely.
Let ΔH°_Σ be the enthalpy of the system, i.e. the energy E of the system calculated as (4.12). In addition, let ΔH°_M be the enthalpy of the compound C in AHN. Also, suppose that Γ_p is the set of all possible optimal compounds of the form as (5.5); where Γ is the set of all compounds in an artificial hydrocarbon network.

Γ_p = {C_i ∈ Γ | ΔH°_{C_i} = ΔH°_M}    (5.5)

Then, the compound C that models Σ must have an enthalpy equal to the enthalpy of the system like (5.6):

ΔH°_M = ΔH°_Σ    (5.6)

Since a CH-primitive molecule has exactly one carbon atom, the number of CH-primitive molecules n_i is equal to the number of carbon atoms in a compound C_i ∈ Γ_p. In addition, it is easy to check that it does not matter how compounds in Γ_p were formed because, by Hess's law, these compounds have the same value of enthalpy as the system has, if there is no loss of energy in the chemical reaction, i.e. ΔH°_reaction = 0. In that sense, the enthalpy of a compound C_i ∈ Γ_p is equal to the sum of enthalpies of the n_i CH-primitive molecules that conform C_i as (5.7); where, ΔH°_k represents the enthalpy of the k-th CH-primitive molecule in C_i.


Σ_{k=1}^{n_i} ΔH°_k = ΔH°_{C_i}    (5.7)

Notice that the enthalpy of a compound C_i ∈ Γ_p in (5.7) depends on the number of molecules n_i and the enthalpies ΔH°_k. However, the number of molecules does not have an upper bound, i.e. n_i ≥ 2 and n_i ∈ [2, ∞). Additionally, enthalpies ΔH°_k vary continuously in the interval [0, ∞). Thus, finding the optimum compound C in Γ_p is an NP-problem. Then, a chemical rule based on the enthalpy of hydrocarbon compounds is designed as follows.

Suppose that enthalpies ΔH°_k are fixed; then, (5.6) is difficult to reach varying only the number of molecules n_i, leading to ΔH°_{C_i} ≤ ΔH°_Σ. Thus, there exists a positive number m such that (5.8) holds:

m ΔH°_{C_i} = ΔH°_Σ    (5.8)

From (5.8) and using the conclusion about the relationship between the number of carbon atoms and the enthalpy of a molecule (n_i ∝ ΔH°_{C_i}), the heuristic (5.9) is proposed:

m = ΔH°_Σ / ΔH°_{C_i} ∝ ΔH°_Σ / n_i    (5.9)

Finally, (5.9) is useful to announce the following rules, assuming that the initial value of n_i = 2:
• If m > 1, then increase n_i by one, and recalculate m.
• If m ≤ 1, then leave n_i unchanged, and stop the process.

Algorithm 5.1 ENTHALPY-RULE(Σ, C, n): Updating the number of molecules in compound C using an enthalpy-based rule.
Input: the system Σ = (x, y), the current compound C and the current number of molecules n in C.
Output: the updated number of molecules n in C and value m.

ΔH°_Σ ← using (4.12)
ΔH°_C ← using (4.12)
m ← using ΔH°_Σ and ΔH°_C in (5.9)
if m > 1
  n = n + 1
end-if
return n and m

At last, the optimum number of CH-primitive molecules n* of the optimum compound C ∈ Γ_p is equal to the unchanged value n_i coming from the above rules. To this end, Algorithm 5.1 presents the procedure to optimize the number of molecules in a compound and Proposition 5.1 formalizes it.

Proposition 5.1 (enthalpy rule) Let Σ be any given system, C be a compound of an artificial hydrocarbon network that models Σ and n be the number of molecules in C. Then, using Algorithm 5.1 with inputs Σ, C and current n, the number of molecules in C will be updated such that C will tend to be the optimum compound with the minimum number of molecules n, if and only if the initial value of n = 2.
Example 5.1 Consider that any given system Σ has energy E = 300 and it has to be modeled with an artificial hydrocarbon network of one compound C. Table 5.1 shows the enthalpy values of the compound C with respect to different numbers of molecules n. Using the enthalpy rule of Algorithm 5.1, determine the optimum number of molecules in C.

Solution 5.1 Initialize the number of molecules n = 2, and the enthalpy of the system using the energy signal E such that ΔH°_Σ = 300. Then, compute the heuristic rule (5.9):

m(n = 2) = 300/170 = 1.76

Since m > 1, then n = 3. Recalculating:

m(n = 3) = 300/210 = 1.43

Since m > 1, then n = 4. Recalculating:

m(n = 4) = 300/230 = 1.30

Since m > 1, then n = 5. Recalculating:

m(n = 5) = 300/297 = 1.01

Since m > 1, then n = 6. Recalculating:

Table 5.1 Different enthalpy values of C with respect to the number of molecules n of Example 5.1

n    ΔH°_C
2    170
3    210
4    230
5    297
6    299
7    300
8    300

Fig. 5.4 Example of a non-monotonic increasing function of enthalpy with respect to the number of molecules in a compound of AHNs

m(n = 6) = 300/299 ≈ 1.003

Since m > 1, then n = 7. Recalculating:

m(n = 7) = 300/300 = 1.00
Since m = 1, stop the process and return the optimal value n = 7. Notice that in this example it is easy to know that the optimum number of molecules is n = 7 by examining Table 5.1 directly, because at this value the enthalpy of the compound ΔH°_C is equal to the energy of the system E. However, in real-life applications, that kind of table is unknown. Then, Algorithm 5.1 is useful for optimizing the number of molecules in a compound. Also note that the enthalpy is a monotonically increasing function with respect to the number of molecules. But this is not the general case in artificial hydrocarbon networks, because in some cases the approximation of molecules may compute higher values of enthalpies than the expected ones, as shown in Fig. 5.4.
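For clarity, the following minimal Python sketch (illustrative; in a real run the enthalpy lookup would come from (4.12)) replays the enthalpy rule of Algorithm 5.1 over the tabulated values of Example 5.1:

    # Enthalpy rule of Algorithm 5.1 replayed over Table 5.1 (illustrative).
    def enthalpy_rule(dH_system, dH_compound, n_max=8):
        n = 2                               # initial number of molecules
        while n <= n_max:
            m = dH_system / dH_compound[n]  # heuristic (5.9)
            if m <= 1.0:
                break                       # optimum reached: leave n unchanged
            n += 1                          # m > 1: add one molecule and retry
        return n

    dH_compound = {2: 170, 3: 210, 4: 230, 5: 297, 6: 299, 7: 300, 8: 300}
    print(enthalpy_rule(300, dH_compound))  # -> 7, as found in Example 5.1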
Using Proposition 5.1, the artificial hydrocarbon networks algorithm depicted in
Algorithm 4.5 can be modified in order to optimize the number of molecules in a
compound, as shown in Algorithm 5.2.

5.2 Extension to the Multidimensional Case

Learning algorithms that deal with several attribute and target variables are highly
important in machine learning. Thus, an extension of artificial hydrocarbon networks
to the multidimensional case is presented in this section.

Algorithm 5.2 OPTIMIZED-AHN(Σ, nmax, cmax, ε, η): Modified algorithm of artificial hydrocarbon networks with optimum number of molecules.
Input: the system Σ = (x, y), the maximum number of molecules nmax ≥ 2, the maximum number of compounds cmax > 0, the small tolerance value ε > 0 and the learning rate η.
Output: the structure of mixture Γ, the set of hydrogen values H, the set of stoichiometric coefficients Λ and the set of intermolecular distances Π.

i = 0
R = Σ
while (R > ε) and (i < cmax) do
  i = i + 1
  n = 2
  m ← ∞
  while m > 1 and n ≤ nmax do
    Ci = CREATE-COMPOUND(R, n)
    [Hi, Πi] = OPTIMUM-COMPOUND(R, Ci, n, η)
    [n, m] = ENTHALPY-RULE(R, Ci, n)
  end-while
  R ← using (4.15)
end-while
Γ ← {C1, ..., Ci}
H ← {H1, ..., Hi}
Λ ← minimizing (4.19) using LSE
Π = {Π1, ..., Πi}
return Γ, H, Λ and Π

5.2.1 Components and Interactions

Propositions for multidimensional CH-molecules, artificial hydrocarbon compounds, mixtures and nonpolar covalent bonds are introduced. Recall that H and C represent a hydrogen atom and a carbon atom, respectively. Refer to Sect. 4.2 for detailed information about definitions of components and interactions in artificial hydrocarbon networks.
Proposition 5.2 (first model of multidimensional CH-molecules) Let M be a CH-molecule with a_c = C. Also, let ϕ be the behavior of molecule M due to an input signal x = (x_1, ..., x_n) in R^n with norm ‖x‖ < 1. Then, the behavior ϕ = (ϕ_1, ..., ϕ_j, ..., ϕ_m) in R^m holds:

v_{H_i} = h_i, h_i ∈ C^n    (5.10)

ϕ_j(x) = Σ_{r=1}^{n} Σ_{i=1}^{d ≤ e_C} h_{ir} · x_r^i    (5.11)

Where, v_{H_i} represents a set of hydrogen values of the H_i atom, h_i = (h_{i1}, ..., h_{in}) is the vector of constant values of H_i, d is the degree of freedom of the C atom, and e_C is the number of valence electrons of C.

Example 5.2 Consider an input signal x = (x, y) that excites a CH-molecule. Determine the shape of the behavior ϕ = ϕ_1 using Proposition 5.2 with degree of freedom d = 3.

Solution 5.2 Using (5.11), define the behavior ϕ_1:

ϕ_1(x) = h_11 x_1^1 + h_21 x_1^2 + h_31 x_1^3 + h_12 x_2^1 + h_22 x_2^2 + h_32 x_2^3    (5.12)

ϕ_1(x) = h_11 x + h_21 x^2 + h_31 x^3 + h_12 y + h_22 y^2 + h_32 y^3    (5.13)

Then,

h_1 = (h_11, h_12)    (5.14)
h_2 = (h_21, h_22)    (5.15)
h_3 = (h_31, h_32)    (5.16)

Figure 5.5 shows the representation of this CH-molecule. Notice that the position of each h_i follows the counterclockwise convention proposed in Sect. 4.2.1.2.

Remark 5.1 Let ϕ_r^1D be a molecular behavior of the form as (4.2). Then, (5.11) can be rewritten as (5.17) for all x_r ∈ x:

ϕ_j(x) = Σ_{r=1}^{n} ϕ_r^1D(x_r)    (5.17)

Equation (5.17) allows to define the following proposition as a sum of product forms.
Proposition 5.3 (second model of multidimensional CH-molecules) Let M be a CH-molecule with a_c = C. Also, let ϕ be the behavior of molecule M due to an input signal x = (x_1, ..., x_n) in R^n with norm ‖x‖ < 1. Then, the behavior ϕ = (ϕ_1, ..., ϕ_j, ..., ϕ_m) in R^m holds:

Fig. 5.5 Representation of the CH-molecule of Example 5.2 using the counterclockwise convention

Fig. 5.6 Representation of the CH-molecule of Example 5.3 using Proposition 5.3

v_{H_i} = h_i, h_i ∈ C^n    (5.18)

ϕ_j(x) = Σ_{r=1}^{n} Π_{i=1}^{d ≤ e_C} (x_r − h_{ir})    (5.19)

Where, v_{H_i} represents a set of hydrogen values of the H_i atom, h_i = (h_{i1}, ..., h_{in}) is the vector of constant values of H_i, d is the degree of freedom of the C atom, and e_C is the number of valence electrons of C.
Notice that (5.19) comes directly from (5.17), allowing to position h_i in any location around the carbon atom C, as shown in Fig. 5.6.
Example 5.3 Consider an input signal x = (x, y) that excites a CH-molecule. Determine the shape of the behavior ϕ = ϕ_1 using Proposition 5.3 with degree of freedom d = 3.

Solution 5.3 Using (5.19), define the behavior ϕ_1:

ϕ_1(x) = (x_1 − h_11)(x_1 − h_21)(x_1 − h_31) + (x_2 − h_12)(x_2 − h_22)(x_2 − h_32)    (5.20)

ϕ_1(x) = (x − h_11)(x − h_21)(x − h_31) + (y − h_12)(y − h_22)(y − h_32)    (5.21)

Then,

h_1 = (h_11, h_12)    (5.22)
h_2 = (h_21, h_22)    (5.23)
h_3 = (h_31, h_32)    (5.24)

Figure 5.6 shows the representation of this CH-molecule. Notice that it does not matter where h_i are located around the C atom.
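Both molecular models can be compared directly in code. The following minimal Python sketch (with made-up hydrogen values) evaluates the two-input behaviors (5.13) and (5.21) of Examples 5.2 and 5.3:

    # Two-input CH-molecule behaviors with d = 3, cf. (5.13) and (5.21).
    # H[i][r] holds h_{(i+1)(r+1)}; the values are illustrative placeholders.
    import numpy as np

    H = np.array([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]])

    def phi_sum_of_powers(x):    # first model, Proposition 5.2 / (5.11)
        return sum(H[i, r] * x[r] ** (i + 1) for r in range(2) for i in range(3))

    def phi_sum_of_products(x):  # second model, Proposition 5.3 / (5.19)
        return sum(np.prod([x[r] - H[i, r] for i in range(3)]) for r in range(2))

    x = np.array([0.3, 0.7])
    print(phi_sum_of_powers(x), phi_sum_of_products(x))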
It is important to say that Proposition 5.2, and more exactly (5.11), is a simplification of a multivariate polynomial of degree d in order to compute the least squares estimates easily between any given system Σ in a multidimensional space and the behavior of a CH-molecule. Moreover, Proposition 5.3, and more exactly (5.19), extends the notion of polynomial roots. However, h_i does not represent a root of a multivariate polynomial; but it does represent the roots of the projection of the multivariate polynomial onto a specific dimension r.
Proposition 5.4 (multidimensional hydrocarbon compounds) Let C = (Ω, B_N) be a compound consisting of primitive molecules Ω linked with a set of nonpolar covalent bonds B_N. Then, C is said to be a multidimensional hydrocarbon compound if Ω = {M_1, ..., M_k} is a set of CH-primitive molecules, spanned from the CH functional group, with molecular behaviors ϕ_1, ..., ϕ_k each one in R^m. Moreover, ψ = (ψ_1, ..., ψ_m) in R^m is the behavior of C due to any input signal x ∈ R^n, such that, ψ : ϕ_1 × · · · × ϕ_k → R^m. Actually, ψ : R^{k×m} → R^m.
Proposition 5.5 (model of multidimensional nonpolar covalent bonds) Let b_k^{ij} ∈ B be a nonpolar covalent bond of a pair of molecules M_i, M_j with behaviors ϕ_i, ϕ_j each one in R^m due to an input x ∈ R^n. Let Δ = ⟨δ_1, δ_2, δ_3⟩ be a tuple of properties of b_k^{ij}. Then, the behavior of nonpolar covalent bonds π holds:

π(ϕ_i, ϕ_j, x) = δ_3 (1 − (δ_1 δ_2)²) exp(−(δ_1 δ_2)²/2), π ≥ 0    (5.25)

With,

δ_2 : θ_i × θ_j → R    (5.26)

Where, δ_1 ∈ {1, 2, 3} is called the order of bond; δ_2 ∈ R with δ_2 ≥ 0 represents the length of bond and it is a metric on the parameters θ_i, θ_j each one in R^p that characterize ϕ_i, ϕ_j; and, δ_3 ≥ 0 represents the minimum energy of bond. Moreover, the behavior ψ of an artificial hydrocarbon compound C consisting of molecules M_i, M_j is equal to the composite molecular behaviors, such that, ψ : ϕ_i(θ_i) × ϕ_j(θ_j) → R^m.
For instance, consider the definition of artificial hydrocarbon compounds in (4.16). Then, using Proposition 5.4, the latter can be extended to the multidimensional case as expressed in (5.27) in the input domain x ∈ [a, b] with a, b ∈ R^n:

ψ_j(ϕ_{1j}, ..., ϕ_{kj}, x) = { ϕ_{1j}(x),  r_0 ≤ x < r_1
                               ...
                               ϕ_{kj}(x),  r_{k−1} ≤ x < r_k },  ∀j = 1, ..., m    (5.27)

Where, ψ = (ψ_1, ..., ψ_j, ..., ψ_m) is the behavior of a compound due to an input signal x ∈ R^n that depends on the set of molecular behaviors Φ = {ϕ_1, ..., ϕ_u, ..., ϕ_k} each one in R^m such that ϕ_u = (ϕ_{u1}, ..., ϕ_{um}), and the set of parameters r_i for all i = 0, ..., k that characterize molecular behaviors stands for the i-th bound in R^n of the input domain in molecular behaviors with lower bound r_0 = a and upper bound r_k = b.

Proposition 5.6 (mixture of multidimensional molecules) Let Γ = {M_1, ..., M_k} be a set of molecules with a set of behaviors Φ = {ϕ_1, ..., ϕ_u, ..., ϕ_k} each one in R^m such that ϕ_u = (ϕ_{u1}, ..., ϕ_{um}). Then, the mixture of multidimensional molecules S = (S_1, ..., S_j, ..., S_m) in R^m is a linear combination of behaviors of molecules in Φ such that there exists a set of coefficients Λ = {α_1, ..., α_j, ..., α_m} with α_j = {α_{1j}, ..., α_{kj}} of real values, called the stoichiometric coefficients. Hence,

S_j(x) = Σ_{i=1}^{k} α_{ij} ϕ_{ij}    (5.28)

Moreover, Φ is the basis of the mixture of multidimensional molecules, Γ is the structure of the mixture of multidimensional molecules, and S(x) is the behavior of the mixture of multidimensional molecules.
Example 5.4 Using (5.28), determine the behavior of the mixture of 3 CH-molecules in R² if the set of behaviors of those molecules Φ is defined as (5.29) and the set of stoichiometric coefficients Λ is defined as (5.30).

Φ = [2.5  4.0  7.6
     3.1  8.3  9.2]    (5.29)

Λ = [0.5  0.8
     0.3  0.2
     0.1  0.6]    (5.30)

Solution 5.4 In matrix notation, the set of equations in (5.28) can be rewritten as (5.31):

S(x) = diag(Λ^T Φ^T)    (5.31)

Where, diag(A) represents the vector of elements in the diagonal of a square matrix A and A^T represents the transpose matrix of A. Then, the behavior of the mixture of multidimensional molecules is easy to compute using (5.31):

S(x) = diag([3.21  4.96
             7.36  9.66]) = (3.21, 9.66)    (5.32)

Then, the behavior of the mixture of 3 CH-molecules is S(x) ∈ R².
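The matrix form (5.31) is straightforward to check numerically; the short numpy sketch below (illustrative) reproduces Solution 5.4:

    # Mixture behavior S(x) = diag(Lambda^T Phi^T), cf. (5.31)-(5.32).
    import numpy as np

    Phi = np.array([[2.5, 4.0, 7.6],
                    [3.1, 8.3, 9.2]])   # molecular behaviors per output dimension
    Lam = np.array([[0.5, 0.8],
                    [0.3, 0.2],
                    [0.1, 0.6]])        # stoichiometric coefficients

    print(np.diag(Lam.T @ Phi.T))       # -> [3.21, 9.66]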

5.2.2 Multidimensional AHN-Algorithm

Using the propositions of Sect. 5.2.1, Algorithm 5.2 can be modified to implement artificial hydrocarbon networks for modeling multiple-inputs-and-multiple-outputs

systems Σ = (x, y) with x ∈ R^n and y ∈ R^m, as shown in Algorithm 5.3. Notice that k refers to the number of molecules, kmax is the maximum number of molecules in a compound and μ is the heuristic (5.9). In addition, several practical implementations in Algorithm 5.3 are described as follows.

Algorithm 5.3 MULTIDIMENSIONAL-AHN(Σ, kmax, cmax, ε, η): Algorithm of multidimensional artificial hydrocarbon networks.
Input: the system Σ = (x, y), the maximum number of molecules kmax ≥ 2, the maximum number of compounds cmax > 0, the small tolerance value ε > 0 and the learning rate η.
Output: the structure of mixture Γ, the set of hydrogen values H, the set of stoichiometric coefficients Λ and the set of intermolecular distances Π.

i = 0
R = Σ
while (R > ε) and (i < cmax) do
  i = i + 1
  k = 2
  μ ← ∞
  while μ > 1 and k ≤ kmax do
    Ci = CREATE-COMPOUND(R, k)
    [Hi, Πi] = OPTIMUM-COMPOUND(R, Ci, k, η)
    [k, μ] = ENTHALPY-RULE(R, Ci, k)
  end-while
  R ← using (4.15)
end-while
Γ ← {C1, ..., Ci}
H ← {H1, ..., Hi}
Λ ← minimizing (4.19) using LSE
Π = {Π1, ..., Πi}
return Γ, H, Λ and Π

5.2.2.1 Creating Compounds

The multidimensional artificial hydrocarbon networks algorithm uses the CREATE-COMPOUND(R, k) Algorithm (4.1). In particular, that algorithm has to calculate the energy of a signal partition Σ_j = (x_j, y_j). In practice, this energy signal E_j can be computed using (5.33); where, [r_{j−1}, r_j] stands for the interval in R^n of Σ_j, x_q ∈ x is an input sample, y_q ∈ y is an output sample and ‖g‖ stands for the norm of g:

E_j = Σ_q ‖y_q‖², ∀x_q ∈ [r_{j−1}, r_j]    (5.33)

5.2.2.2 Optimizing Compounds

On the other hand, Algorithm 5.3 uses the OPTIMUM-COMPOUND(R, Ci, k, η) Algorithm (4.4). First, it requires to initialize the values of intermolecular distances in Π; then, (5.34) can be used to determine the initial values of intermolecular distances r_{jk} ∈ Π, if [a, b] ⊂ R^n is the interval of the input domain.

r_j − r_{j−1} = (b − a)/k, j = 1, ..., k    (5.34)

Next, Algorithm (4.4) uses the least squares estimates for finding the best values of hydrogen atoms. In that sense, the objective function E_j for multivariate polynomial functions in (5.35) is used; where, ϕ_j ∈ R^m is the j-th molecular behavior, q_j is the number of samples in the partition Σ_j, x_{jk} ∈ x_j is the k-th sample of x_j for all k = 1, ..., q_j and y_{jk} ∈ y_j is the k-th sample of y_j for all k = 1, ..., q_j.

E_j = (1/2) Σ_{k=1}^{q_j} ‖y_{jk} − ϕ_j(x_{jk})‖²    (5.35)

Finally, the intermolecular distances r_{jk} ∈ Π have to be updated using the multidimensional rule expressed in (5.36) and (5.37); where, r_{t+1} is the future value of r_{jk}, r_t is the current value of r_{jk}, Δr_t is the change in length of bond r_{jk}, I is the identity matrix of dimension n, diag(A) is the diagonal matrix of A, 0 < η < 1 is the step size, and E_{1t}, E_{2t} are the current error functions of two adjacent molecules M_j, M_k calculated as (5.35).

r_{t+1} = r_t + diag(Δr_t I)    (5.36)

Δr_t = −η (E_{1t} − E_{2t})    (5.37)
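A compact Python sketch of this update (with placeholder energy values standing in for (5.35); everything here is illustrative) could look as follows:

    # One update of an intermolecular distance between two adjacent molecules,
    # cf. (5.36)-(5.37). E1_t and E2_t are placeholder energies for (5.35).
    import numpy as np

    def update_distance(r_t, E1_t, E2_t, eta=0.01):
        n = r_t.shape[0]
        delta = -eta * (E1_t - E2_t)              # change in bond length (5.37)
        return r_t + np.diag(delta * np.eye(n))   # applied per dimension (5.36)

    r_t = np.array([1.0, 1.0])                    # current bound in R^2
    print(update_distance(r_t, E1_t=0.8, E2_t=0.3))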

5.2.2.3 Calculating the Enthalpy Rule

Also, Algorithm 5.3 uses the ENTHALPY-RULE(R, Ci, k) Algorithm (5.1) to optimize the number of molecules in a compound. In fact, it calculates the enthalpy of the system ΔH°_Σ and also the enthalpy of the compound ΔH°_C. This rule can be derived by applying (5.33) for both enthalpies in the interval [a, b] ⊂ R^n.
Example 5.5 Using Algorithm 5.3, build an artificial hydrocarbon network with kmax = 2 and cmax = 1 of the system (5.38). Consider a step size of η = 0.01 and a tolerance value of ε = 0.1.

y_1 = x_1 x_2, x_1 ∈ [0, 2], x_2 ∈ [0, 2]    (5.38)



Solution 5.5 Initialize i = 0 and the residue R = Σ.

For i = 0:
  i = 1.
  k = 2.
  μ ← ∞.
  For j = 0:
    C1 = ({CH3, CH3}, {1}) using Algorithm 4.1.
    H1 = {h_11, h_12, h_13, h_21, h_22, h_23} with:
      h_11 = (1.50, 0.00)
      h_12 = (0.44, 0.23 + 0.05i)
      h_13 = (0.27, 0.23 − 0.05i)
      h_21 = (0.48, 0.00)
      h_22 = (0.16 + 0.17i, 0.11 + 0.13i)
      h_23 = (0.16 − 0.17i, 0.11 + 0.13i) using Algorithm 4.4 with (5.35), and
    Π1 = (0.9923, 0.9844) using (5.36) and (5.37).
    k = 3 and m = 338.1953/336.1423 = 1.0061 using Algorithm 5.1.
    μ > 1 and k > kmax, this ends the loop.
  R = 18.33 using (4.15).
  R > ε and i = cmax, this ends the loop.

Finally,
  Γ = (C1)
  H = (H1)
  Λ = (1.0) because there is one compound with one output.
  Π = (Π1)
Actually, the overall response of the artificial hydrocarbon network is shown in
Fig. 5.7 and the AHN-structure is depicted in Fig. 5.8.

5.3 Recursive Networks Using Aromatic Compounds

Until now, artificial hydrocarbon networks have been structured in linear chains of CH-primitive molecules. However, these topologies do not present recursive properties. In that sense, a cyclic network topology is proposed to handle dynamic systems. In fact, these recursive networks are inspired by chemical aromatic compounds, i.e. arenes. Figure 5.9 shows the proposed topology. It is remarkable to say that the recursive CH molecule is inspired by arenes because it is the most stable structure of cyclic compounds in nature.

As noted in Fig. 5.9, the recursive AHN-structure can be treated as another CH-primitive molecule, as the following Proposition 5.7 describes:

Proposition 5.7 (recursive CH molecules) Let M be a CH-molecule of the form as Fig. 5.9 with a_c = C_1. Also, let ϕ be the behavior of molecule M due to an input signal x with |x| < 1. Then, the behavior of ϕ holds:

Fig. 5.7 Model obtained from Algorithm 5.3 in Example 5.5

Fig. 5.8 AHN-structure of Example 5.5 (a two-molecule compound C-C)

v_{H_i} = h_i, h_i ∈ C    (5.39)

ϕ(x_n) = Σ_{i=1}^{N−1} h_i · x_{n−i}    (5.40)

Where, N is the number of carbon atoms in M, and n is the current state.


In this case, the parameters in the recursive CH molecule do not mean any roots
in a polynomial. In contrast, these hydrogen parameters are subjected to compute the
lastest value of themodeled signal in terms of some values locally stored in carbon

Fig. 5.9 Topology of a recursive artificial hydrocarbon network inspired by arenes (a ring of carbon atoms C1–C6 with attached hydrogen atoms)

atoms. For instance, it can be seen as a difference equation with coefficients equal
to hydrogen parameters.
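Seen as a difference equation, (5.40) is easy to simulate; the minimal Python sketch below (with illustrative hydrogen values for N = 4 carbon atoms) makes the point:

    # Recursive CH-molecule as a difference equation, cf. (5.40):
    # phi(x_n) = sum_{i=1}^{N-1} h_i * x_{n-i}; all values are illustrative.
    def recursive_molecule(h, past):
        # past = [x_{n-1}, x_{n-2}, ..., x_{n-(N-1)}]
        return sum(hi * xi for hi, xi in zip(h, past))

    h = [0.5, -0.2, 0.1]                        # N - 1 = 3 hydrogen parameters
    signal = [0.0, 0.1, 0.3, 0.2, 0.5]
    for n in range(3, len(signal)):
        past = list(reversed(signal[n - 3:n]))  # [x_{n-1}, x_{n-2}, x_{n-3}]
        print(n, recursive_molecule(h, past))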

References

1. Ganguly J (2009) Thermodynamics in earth and planetary sciences. Springer, Berlin


2. Li H, Higashi H, Tamura K (2006) Estimation of boiling and melting points of light, heavy and complex hydrocarbons by means of a modified group vector space method. Fluid Phase Equilib 239(2):213–222
Chapter 6
Notes on Modeling Problems Using Artificial Hydrocarbon Networks

This chapter introduces several notes on using artificial hydrocarbon networks (AHNs) for modeling problems. In particular, it discusses some aspects of modeling univariate and multivariate systems, and of designing linear and nonlinear classifiers using the AHN-algorithm. In addition, a few inference and clustering applications are described. Finally, a review of the most important characteristics of artificial hydrocarbon networks in real-world applications is covered, like how to inherit information with molecules, how to use information of parameters in AHN-structures and how to improve the training process of artificial hydrocarbon networks implementing a catalog of artificial hydrocarbon compounds.

6.1 Approximation Problems

This section covers the usage of artificial hydrocarbon networks to approximate univariate and multivariate systems. Without loss of generalization, let Σ be an unknown system represented as a set of multivariate functions f_j of the form as (6.1); where, {x_1, ..., x_i, ..., x_n} is a set of attribute variables and {y_1, ..., y_j, ..., y_m} is a set of target variables.

y_j = f_j(x_1, ..., x_n), ∀j = 1, ..., m    (6.1)

In compact notation, let x = (x1 , ..., xn ) be the row vector of attribute variables
and y = (y1 , ..., ym ) be the row vector of target variables such that (6.1) can be
rewritten as (6.2):
y = F (x) (6.2)

With F as a row vector of functions f_j like (6.3); where, A^T represents the transpose of A.

transpose of A.

H. Ponce-Espinosa et al., Artificial Organic Networks, 131


Studies in Computational Intelligence 521, DOI: 10.1007/978-3-319-02472-1_6,
© Springer International Publishing Switzerland 2014
132 6 Notes on Modeling Problems Using Artificial Hydrocarbon Networks

 T
f 1 (x)
F(x) =  · · ·  (6.3)
f m (x)

Moreover, suppose that there are q samples of inputs and outputs. Each attribute
variable is a column vector of samples xi = (x1i , ..., xqi ) and each target variable is a
column vector of samples y j = (y1 j , ..., yq j ). Then, let X be a matrix q × n of input
samples and Y be a matrix q × m of output samples as (6.4) and (6.5), respectively.
 
X = [x_11 · · · x_1n
     ...  ...  ...
     x_q1 · · · x_qn]    (6.4)

Y = [y_11 · · · y_1m
     ...  ...  ...
     y_q1 · · · y_qm]    (6.5)

Let us say that a system Σ = (X, Y) is equivalent to (6.2), i.e. Σ(X, Y) = F. Then, Σ can be represented with a model F̂ of the form as (6.6); where, AHN is an artificial hydrocarbon network, ω > 0 is a small real number representing the maximum error and ‖g‖ stands for the norm of g.

‖Y − F̂‖ = ‖Y − AHN‖ ≤ ω    (6.6)

Following, some notes on modeling univariate and multivariate systems, i.e. univariate and multivariate sets of functions F, with artificial hydrocarbon networks are discussed.

6.1.1 Approximation of Univariate Functions

For instance, suppose that there is an unknown single-input-and-single-output system Σ = (X, Y) with a column vector of input samples X = x_1 = (x_11, ..., x_q1) and a column vector of output samples Y = y_1 = (y_11, ..., y_q1). Then, (6.2) is reduced to (6.7) with an unknown function f.

y_1 = f(x_1)    (6.7)

Thus, an artificial hydrocarbon network AHN will be used as the model f̂ of f. Using Algorithm 4.5 and the X and Y vector samples, an artificial hydrocarbon network AHN is trained and then used as an inference system like (6.7).

Example 6.1 Let f be the exact behavior of Σ as expressed in (6.8). Find a model f̂ of f using artificial hydrocarbon networks in the input domain x ∈ [−1, 2).

f(x) = x^5 − 4x^2 + 1, x ∈ [−1, 2)    (6.8)

Solution 6.1 3000-sample pairs (x_1, y_1) were uniformly obtained in the interval x ∈ [−1, 2). Then, running Algorithm 4.5, a model f̂ is obtained using the 3000-sample pairs, a maximum number of molecules n_max = 6, a maximum number of compounds c_max = 1, a tolerance ω = 1 × 10^−4, and a step size φ = 0.01. Figure 6.1 shows a comparison between function f and the model f̂ with dashed lines representing the bounds between molecules in the AHN-structure. In particular, this model has an absolute squared error value of 1.3452.
Notice that the algorithm found 4 molecules within a compound, as depicted in Fig. 6.2. As shown in the latter figure, hydrogen values represent the roots of molecular behaviors, and the superscript numbers of carbon atoms represent a gain in the molecular behavior, the so-called carbon atom value v_C. For example, this simple AHN-model can be written mathematically as (6.9). Since there is one compound, there is one stoichiometric coefficient η_1 = 1; thus, the final model f̂ is expressed as (6.10).

Fig. 6.1 Comparison of function f in (6.8) and the response of its AHN-model

Fig. 6.2 Artificial hydrocarbon network structure of model f̂ of function f in (6.8)



τ(x) = { 7.17 (x + 0.50)(x + 0.31 − 0.80i)(x + 0.31 + 0.80i),   −1 ≤ x < −0.69
         −4.08 (x − 0.51)(x + 0.49),                            −0.69 ≤ x < 0.60
         3.58 (x − 2.22)(x − 0.55),                             0.60 ≤ x < 1.19
         25.67 (x − 1.53)(x − 0.89 − 0.42i)(x − 0.89 + 0.42i),  1.19 ≤ x < 2 }    (6.9)

f̂(x) = η_1 τ(x), x ∈ [−1, 2)    (6.10)

It is important to highlight that in Example 6.1 Algorithm 4.5 uses the second
model of CH-molecules (Proposition 4.2) that defines a molecular behavior in a
product form, and the first chemical rule (Proposition 4.6) that designs a saturated
linear chain of CH-primitive molecules.
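Since (6.9) and (6.10) fully specify the trained model, it can be re-evaluated outside the training algorithm. The following minimal Python sketch (an illustration built from the printed coefficients) implements f̂ of Example 6.1:

    # Piecewise AHN-model of Example 6.1, transcribed from (6.9)-(6.10).
    import numpy as np

    def f_hat(x):
        if -1.0 <= x < -0.69:
            return (7.17 * (x + 0.50) * (x + 0.31 - 0.80j) * (x + 0.31 + 0.80j)).real
        if -0.69 <= x < 0.60:
            return -4.08 * (x - 0.51) * (x + 0.49)
        if 0.60 <= x < 1.19:
            return 3.58 * (x - 2.22) * (x - 0.55)
        if 1.19 <= x < 2.0:
            return (25.67 * (x - 1.53) * (x - 0.89 - 0.42j) * (x - 0.89 + 0.42j)).real
        raise ValueError("x outside the model domain [-1, 2)")

    print([round(f_hat(x), 3) for x in np.linspace(-1.0, 1.9, 5)])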
Example 6.2 Let f be the behavior of Σ as expressed in (6.11); where, g_N stands for a normalized random function. Find a model f̂ of f using artificial hydrocarbon networks in the input domain x ∈ [−1, 2).

f(x) = x^5 − 4x^2 + 1 + 2g_N, x ∈ [−1, 2)    (6.11)

Solution 6.2 1000-sample pairs (x_1, y_1) were uniformly obtained in the interval x ∈ [−1, 2). Then, running Algorithm 4.5, a model f̂ is obtained using the 1000-sample pairs, a maximum number of molecules n_max = 6, a maximum number of compounds c_max = 1, a tolerance ω = 1 × 10^−4, and a step size φ = 0.001. Figure 6.3 shows a comparison between function f and the model f̂ with dashed lines representing the bounds between molecules in the AHN-structure. In particular, this model has an absolute squared error value of 334.05 due to the noise of the system.

Notice that the algorithm found 4 molecules within a compound, as depicted in Fig. 6.4. This result shows that artificial hydrocarbon networks act as filters over the data. However, the profile of molecules slightly changed in terms of intermolecular distances with respect to that obtained in Example 6.1. Again, the algorithm computes a

Fig. 6.3 Comparison of function f in (6.11) and the response of its AHN-model

Fig. 6.4 Artificial hydrocarbon network structure of model f̂ of function f in (6.11)

saturated linear chain of CH-primitive molecules with product-form molecular behaviors.

An important property of the artificial hydrocarbon networks algorithm is its ability to find discontinuities in the system using intermolecular distances, as shown in the next example.
Example 6.3 Let f be the behavior of Σ as expressed in (6.12). Find a model f̂ of f using artificial hydrocarbon networks in the input domain x ∈ [−1, 1].

f(x) = { arctan(πx),  −1.0 ≤ x < 0.1
         sin(πx),     0.1 ≤ x < 0.9
         cos(πx),     0.9 ≤ x ≤ 1.0 }    (6.12)

Solution 6.3 200-sample pairs (x_1, y_1) were uniformly obtained in the interval x ∈ [−1, 1]. Then, running Algorithm 4.5, a model f̂ is obtained using the 200-sample pairs, a maximum number of molecules n_max = 6, a maximum number of compounds c_max = 1, a tolerance ω = 1 × 10^−4, and a step size φ = 0.01. Figure 6.5 shows a comparison between function f and the model f̂ with dashed lines representing
Fig. 6.5 Comparison of function f in (6.12) and the response of its AHN-model

Fig. 6.6 Artificial hydrocarbon network structure of model f̂ of function f in (6.12)

the bounds between molecules in the AHN-structure. In particular, this model has an absolute squared error value of 1.03 × 10^−2.

Three carbon atoms in the artificial hydrocarbon compound were optimally found, as shown in Fig. 6.6. Moreover, the bounds of intermolecular distances captured the discontinuities of f in (6.12). For instance, the first discontinuity occurs at x = 0.1 and the first bound is located at x_L = 0.06. The second discontinuity occurs at x = 0.9 and the second bound is located at x_L = 0.89.

Example 6.4 Suppose that f is a periodic function representing the behavior of Σ as written in (6.13). Find a model f̂ of f using artificial hydrocarbon networks in the input domain x ∈ [0, 5].

f(x) = sin(πx), x ∈ [0, 5]    (6.13)

Solution 6.4 500-sample pairs (x_1, y_1) were uniformly obtained in the interval x ∈ [0, 5]. Then, running Algorithm 4.5, a model f̂ is obtained using the 500-sample pairs, a maximum number of molecules n_max = 6, a maximum number of compounds c_max = 1, a tolerance ω = 1 × 10^−4, and a step size φ = 0.001. Figure 6.7 shows
Fig. 6.7 Comparison of function f in (6.13) and the response of its AHN-model

Fig. 6.8 Artificial hydrocarbon network structure of model f̂ of function f in (6.13)

a comparison between function f and the model fˆ with dashed lines representing
the bounds between molecules in the AHN-structure. In particular, this model has
an absolute squared error value of 1.7978.
Notice that three CH-primitive molecules can model f as depicted in Fig. 6.8. At
last, this example demonstrates that artificial hydrocarbon networks can also model
periodic functions easily.

6.1.2 Approximation of Multivariate Functions

Until now, artificial hydrocarbon networks have been used for univariate systems. In this section, let us suppose that there exists an unknown multiple-inputs-and-multiple-outputs system Σ = (X, Y) expressed as (6.2). Thus, an artificial hydrocarbon network AHN will be used as the model F̂ of F, using Algorithm 5.3 and the X and Y matrix samples.
Example 6.5 Suppose that F is a single function representing the behavior of Σ as written in (6.14). Find a model F̂ of F using artificial hydrocarbon networks in the input domain x_1 ∈ [0, 3] and x_2 ∈ [0, 3].

F = y_1 = sin(πx_1)/(πx_1) + sin(πx_2)/(πx_2)    (6.14)

Solution 6.5 1000-sample tuples (x_1, x_2, y_1) were randomly obtained in the proper input space. Then, running Algorithm 5.3, a model F̂ is obtained using the 1000-sample tuples, a maximum number of molecules n_max = 6, a maximum number of compounds c_max = 1, a tolerance ω = 1 × 10^−4, and a step size φ = 0.01. Figure 6.9 shows a comparison between function F and the model F̂. In particular, this model has an absolute squared norm error value of 503.89.

Three multidimensional CH-primitive molecules can model F as depicted in Fig. 6.10. As noted, both surfaces F and F̂ are similar, proving that AHNs can model multivariate systems. It is remarkable to say that the way systems are split, i.e.

System
2 AHN
1.5

1
y1

0.5

−0.5
0
0.5 0
1
1.5 1
x2 2
2 x1
2.5
3 3

Fig. 6.9 Comparison of function F in (6.14) and the response of its AHN-model

Fig. 6.10 Artificial hydrocarbon network structure of model F̂ of function F in (6.14)

the meaning of intermolecular distances, gives different responses of how molecules capture the information.

In this particular example, the bounds of molecules in 1-dimension are extended to hyperplanes of the form as (6.15); where, L_i ∈ R^n represents the i-th bound of molecules and {x_1, ..., x_n} is the set of attribute variables. Then, any given point x_p = (x_p1, ..., x_pn) is part of a molecule M_k if (6.16) holds, for all positive values of L_(k−1) ∈ R^n and L_k ∈ R^n:

x_1/L_1 + · · · + x_n/L_n = 1    (6.15)

x_p1/L_(k−1)1 + · · · + x_pn/L_(k−1)n ≤ 1 ≤ x_p1/L_k1 + · · · + x_pn/L_kn    (6.16)
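A direct way to read (6.16) is as a membership test between two hyperplanes; the minimal Python sketch below (with illustrative bound vectors) checks whether a point belongs to molecule M_k:

    # Membership test of a point between hyperplane bounds, cf. (6.15)-(6.16).
    import numpy as np

    def in_molecule(x_p, L_prev, L_k):
        s_prev = np.sum(x_p / L_prev)   # position relative to bound k-1
        s_k = np.sum(x_p / L_k)         # position relative to bound k
        return s_prev <= 1.0 <= s_k

    x_p = np.array([0.4, 0.5])
    print(in_molecule(x_p, L_prev=np.array([2.0, 2.0]), L_k=np.array([0.8, 0.8])))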

Example 6.6 Suppose that F is a single function representing the nonlinear behavior of Σ like (6.17). Find a model F̂ of F using artificial hydrocarbon networks in the input domain (x_1, x_2) ∈ R².

F : x_1 = cos(t), x_2 = t, y_1 = sin(t), t ∈ [0, 15]    (6.17)

Solution 6.6 1500-sample tuples (x_1, x_2, y_1) were uniformly obtained in the proper input space. Then, running Algorithm 5.3, a model F̂ is obtained using the 1500-sample tuples, a maximum number of molecules n_max = 6, a maximum number of compounds c_max = 1, a tolerance ω = 1 × 10^−4, and a step size φ = 0.01. Figure 6.11 shows a comparison between function F and the model F̂. In particular, this model has an absolute squared norm error value of 1.1 × 10^−2.

Five multidimensional CH-primitive molecules model F as depicted in Fig. 6.12. Moreover, artificial hydrocarbon networks can capture nonlinear and coupled attribute variables in systems.

In comparison with Example 6.5, the splitting procedure defines a center of molecule C_Mi ∈ R^n as the i-th midpoint between two adjacent bounds of molecules L_(i−1), L_i as (6.18). In addition, a sample value (x_k, y_k) ∈ R^n × R^m is part of a molecule M_i if (6.19) holds; where, Σ_i is the set of observation tuples in molecule M_i:
C_Mi = (L_(i−1) + L_i) / 2    (6.18)

Fig. 6.11 Comparison of function F in (6.17) and the response of its AHN-model

Fig. 6.12 Artificial hydrocarbon network structure of model F̂ of function F in (6.17)

Σi = { (xk, yk) ∈ Rn × Rm | i = arg min_j ||xk − CMj||, ∀ j = 1, ..., n }    (6.19)
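A minimal sketch of this nearest-center partitioning, assuming the ordered bounds L (and hence the centers in (6.18)) are already known; the function name and the array layout are illustrative:

import numpy as np

def partition_samples(X, bounds):
    """Assign each sample in X (shape: samples x n) to the molecule with
    the nearest center, following (6.18) and (6.19). bounds has shape
    (m+1, n): the ordered bounds L_0, ..., L_m of m molecules."""
    centers = (bounds[:-1] + bounds[1:]) / 2.0  # midpoints of adjacent bounds, (6.18)
    # index of the closest center for each sample, (6.19)
    distances = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.argmin(distances, axis=1)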

Currently, artificial hydrocarbon networks have been used for single-input-and-
single-output (SISO) and for multiple-input-and-single-output (MISO) systems. In
the following example, a demonstration of the effectiveness of artificial hydrocarbon
networks on predicting multiple-input-and-multiple-output (MIMO) systems is
presented.
Example 6.7 Suppose that F is a vector of functions representing the nonlinear
behavior of Σ like (6.20). Find a model F̂ = (f̂1, f̂2) of F using artificial hydro-
carbon networks in the input domain (x1, x2) ∈ R².

F = [ f1(x) = cos(x1) cos(x2),  f2(x) = (x1 x2)² ]^T    (6.20)

Solution 6.7 1000-sample tuples (x1 , x2 , y1 , y2 ) were randomly obtained in the


proper input space. Then, running Algorithm 5.3 a model F̂ is obtained using the
1000-sample tuples, a maximum number of molecules n max = 6, a maximum num-
ber of compounds cmax = 1, a tolerance ω = 1 × 10−4 , and a step size φ = 0.01.
Figure 6.13 shows a comparison between function f1 and the model f̂1 and Fig. 6.14
shows a comparison between function f2 and the model f̂2. In particular, the overall
model has an absolute squared norm error value of 309.94.
Five multidimensional CH-primitive molecules model F as depicted in Fig. 6.15.
Moreover, artificial hydrocarbon networks can capture nonlinear and coupled attribute
variables in systems. Again, (6.19) was used for partitioning the system. As shown
in this example, training an artificial hydrocarbon network for multivariate systems
is still simple. Notice that multidimensional artificial hydrocarbon networks can be
decomposed into m AHN-structures; where, m denotes the dimensionality of tar-
get variables. However, the structure of these m artificial hydrocarbon networks are
attached by multidimensional molecules; thus, it does not imply that m MISO sys-
tems can be used for training m different AHN-structures because in some cases, the

Fig. 6.13 Comparison of function f 1 in (6.20) and the response of its AHN-model fˆ1

Fig. 6.14 Comparison of function f 2 in (6.20) and the response of its AHN-model fˆ2

m AHN models might not have the same number of molecules and/or noncovalent
bonds (i.e. the order and length of bondings).
In contrast, the same example was solved using artificial neural networks (ANN).
In that case, a feedforward multilayer neural network was used with 6 hidden neu-
rons with hyperbolic-tangent-sigmoid activation functions like (6.21) and 2 output

Fig. 6.15 Artificial hydrocarbon network structure of model F̂ of function F in (6.20)

neurons with linear activation functions like (6.22); where, x represents an input of
the activation function and e stands for the exponential function.

f(x) = 2/(1 + e^(−2x)) − 1    (6.21)

f(x) = x    (6.22)

The backpropagation algorithm was used to train it. After 193 epochs, the response
of the artificial neural network was obtained, as shown in Figs. 6.16 and 6.17 with an
absolute squared norm error value of 115.75. The same attribute variables and target
variables from the example fed the neural network. Notice that in both ANN and
AHN models, the result is very similar. However, the weights of the neural network
do not represent any information about the system, while intermolecular distances
and hydrogen values give a partial understanding of the system. For example, the
latter values can be used as metadata for generating other processes (refer to Chap. 7).
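For reference, a minimal numpy sketch of the forward pass of such a network (6 tansig hidden neurons, 2 linear outputs) is shown below; the weights here are random placeholders standing in for values found by backpropagation, not the ones actually trained in this example:

import numpy as np

def tansig(x):
    """Hyperbolic-tangent-sigmoid activation, (6.21)."""
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0

# placeholder weights: 2 inputs -> 6 hidden (tansig) -> 2 outputs (linear)
W1, b1 = np.random.randn(6, 2), np.zeros(6)
W2, b2 = np.random.randn(2, 6), np.zeros(2)

def forward(x):
    """Forward pass of the comparison network."""
    h = tansig(W1 @ x + b1)   # hidden layer, (6.21)
    return W2 @ h + b2        # linear output layer, (6.22)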

6.2 Clustering Problems

The clustering problem can be easily treated with artificial hydrocarbon networks
by using a target variable y1 as a categorical variable for labeling attribute vari-
ables x = (x1, ..., xn). In that sense, this section introduces the design of linear and
nonlinear classifiers.

6.2.1 Linear Classifiers

Example 6.8 Suppose that f represents a classification process of Σ as expressed
in (6.23). Find a classifier f̂ of f using artificial hydrocarbon networks in the input
domain (x1, x2) ∈ R².

f = { 1  if x1/0.7 + x2/0.6 < 1;  0  otherwise }    (6.23)

Fig. 6.16 Comparison between the model fˆ1 from an artificial neural network (ANN) and an
artificial hydrocarbon network (AHN)

Solution 6.8 Firstly, note that (6.23) classifies data over the line (6.24):

x2 = −(0.6/0.7) x1 + 0.6    (6.24)
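A quick sketch of how labeled training samples for this classifier can be generated (illustrative only; the experiment may have generated them differently):

import numpy as np

def make_samples(n=1000, seed=0):
    """Generate n random samples in [0, 1]^2 labeled with rule (6.23)."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, 2))
    y = (X[:, 0] / 0.7 + X[:, 1] / 0.6 < 1.0).astype(int)  # rule (6.23)
    return X, y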

Then, a linear artificial hydrocarbon network classifier fˆ was trained with 1000
random samples (x1 , x2 , y1 ); where, y1 is the label of a given attribute variable, and

Fig. 6.17 Comparison between the model fˆ2 from an artificial neural network (ANN) and an
artificial hydrocarbon network (AHN)

x1 and x2 are in the interval [0, 1]. Then, running Algorithm 5.3 a model fˆ was
obtained using the training data with a maximum number of molecules n max = 4, a
maximum number of compounds cmax = 1, a tolerance ω = 1 × 10−4 , and a step
size φ = 0.001. Figure 6.18 shows the response of the linear AHN-classifier fˆ and
the line (6.24). In particular, the classifier has a misclassification rate of 0 %.
The linear AHN-classifier is depicted in Fig. 6.19. As noted, two CH-primitive
molecules cluster the data as in (6.23). Actually, the bounds of molecules L are used as

Fig. 6.18 Response of the linear AHN-classifier f̂ of the classifier f over the black line (6.24)

Fig. 6.19 Artificial hydrocarbon network structure of linear classifier f̂ of function f in (6.23)

centers of molecules and partitions are calculated as (6.19). Notice that these centers
are separated by intermolecular bonds r.

6.2.2 Nonlinear Classifiers

In order to demonstrate nonlinear artificial hydrocarbon network classifiers, two
examples are presented. The first example discusses a nonlinear classifier for the
XOR operator and the second example uses the well-known Fisher's Iris dataset.
Example 6.9 Consider the XOR boolean operator f between two attribute variables
x1, x2 as written in (6.25); where, a · b stands for the AND operator between a and b,
and ā stands for the NOT operator of a. Also, suppose that the logical value false
corresponds to values in the interval [0.0, 0.3] and the logical value true to values
in the interval [0.7, 1.0]. Then, build a nonlinear classifier f̂ of f using artificial
hydrocarbon networks.

f = x1 · x̄2 + x̄1 · x2    (6.25)

Solution 6.9 A nonlinear artificial hydrocarbon network classifier fˆ was built using
200 random samples of the form (x1 , x2 , y1 ); where, y1 is the label of the logical
value f . Then, running Algorithm 5.3 a model fˆ was obtained using the training data
with a maximum number of molecules n max = 5, a maximum number of compounds
cmax = 1, a tolerance ω = 1 × 10−4 , and a step size φ = 0.001. Figure 6.20 shows
the classification of 300 random samples as a testing set using the obtained nonlinear
AHN-classifier fˆ. In particular, the classifier has a misclassification rate of 4.3 %.
The nonlinear AHN-classifier is depicted in Fig. 6.21. As noted, three CH-
primitive molecules cluster the dataset. These molecules act over three spaces delim-
ited by the bounds of molecules L. Then, the dataset is clustered using (6.16). Notice
that these bounds are separated by intermolecular bonds r.
In contrast, a self-organizing map (SOM) was trained for the same example in order
to compare it against the AHN-model. In that way, the SOM was designed with an
output layer of 20 × 20 neurons using the same 200 samples (x1, x2, y1) as the input
vector of the network. After 200 epochs, the response of the self-organizing map
was obtained, as shown in Fig. 6.22. It is remarkable to say that Fig. 6.22 represents
white regions as the logical value false and black regions as the logical value true;
thus, these regions are not the current positions of the 400 neurons. Moreover, the
classification process obtained a misclassification rate of 9.5 %.

Fig. 6.20 Response of the nonlinear AHN-classifier fˆ of Example 6.9



Fig. 6.21 Artificial hydrocarbon network structure of the nonlinear classifier fˆ of the XOR-
classifier

Fig. 6.22 Response of the nonlinear self-organizing map of Example 6.9

Notice in Fig. 6.22 that some white regions are located inside black regions due
to the topology of the SOM. In contrast, Fig. 6.20 reveals that the AHN-model can
classify the XOR problem slightly better than the SOM net, i.e. misclassification rates
of 4.3 and 9.5 %, respectively. To this end, it is important to highlight that an artificial
hydrocarbon network classifier is a supervised algorithm while a self-organizing map
classifier is an unsupervised algorithm.
Now, consider more attribute variables in a dataset like the Fisher's Iris dataset
[1, 2]. It is a 150-sample dataset that comes from three different species of Iris flowers
on which four variables were measured: sepal length (x1), sepal width (x2), petal length
(x3), and petal width (x4). All 150 samples have numeric labels for the three
different species of flowers: Iris setosa (1), Iris versicolor (2), and Iris virginica (3).

Example 6.10 Build a nonlinear classifier fˆ of the Iris dataset using artificial hydro-
carbon networks.

Solution 6.10 A nonlinear artificial hydrocarbon network classifier f̂ was built
using 75 % of the dataset, with randomly chosen samples of the form (x1, x2, x3, x4, y1);
where, y1 is the label of Iris flowers. Then, running Algorithm 5.3 a model fˆ was
obtained using the training data with a maximum number of molecules n max = 3, a
maximum number of compounds cmax = 1, a tolerance ω = 1 × 10−4 , and a step
size φ = 0.095. Figures 6.23 and 6.24 show the response of the nonlinear AHN-
classifier f̂. In particular, the classifier has a misclassification rate of 10.7 % using
all 150 samples as testing samples.
The nonlinear AHN-classifier is depicted in Fig. 6.25. As noted, three CH-
primitive molecules cluster the dataset. In particular, each CH-primitive molecule

50
sepal width

40

30

20
40 50 60 70 80
sepal length

30
petal width

20

10

0
40 50 60 70 80
sepal length

30
petal width

20

10

0
20 25 30 35 40 45
sepal width

Fig. 6.23 (First part) Response of the nonlinear AHN-classifier fˆ over the Iris dataset: molecule-
1 (◦) represents the Iris setosa, molecule-2 () represents the Iris versicolor and molecule-3 (β)
represents the Iris virginica

Fig. 6.24 (Second part) Response of the nonlinear AHN-classifier fˆ over the Iris dataset: molecule-
1 (◦) represents the Iris setosa, molecule-2 () represents the Iris versicolor and molecule-3 (β)
represents the Iris virginica

corresponds to a different class of Iris flowers. Actually, the bounds of molecules L


are used as centers of molecules and partitions are calculated as (6.19). Notice that
these centers are separated by intermolecular bonds r.

6.3 Guidelines for Real-World Applications

In the previous sections, some notes on modeling problems using artificial hydrocar-
bon networks were presented. As noted, built models are subject to the structure
of artificial hydrocarbon networks, and these structures with their parameters can
be useful to train other artificial hydrocarbon networks or to analyze systems. Since
molecules can be treated as packages of information, inheritance of data would be

Fig. 6.25 Artificial hydrocarbon network structure of the nonlinear classifier fˆ of the Fisher’s Iris
dataset

useful to other AHNs or they can be stored in a catalog for future training models.
In addition, hydrogen values, stoichiometric coefficients, bounds of molecules and
intermolecular distances would be treated as metadata to analyze modeled systems
or to perform new actions in an overall intelligent system. Then, three post-processes
of artificial hydrocarbon networks are identified and discussed, as follows.

6.3.1 Inheritance of Information

Molecules in artificial hydrocarbon networks package information that can be inher-
ited by other AHN-structures to preserve that information. For instance, consider the
two artificial hydrocarbon compounds depicted in Fig. 6.26. As can be seen, the
first molecule in each structure is completely different while the other two molecules
in both structures are similar. This suggests that the two systems from which the
AHN-structures were built are similar. In fact, the first AHN-structure in Fig. 6.26
models the system f depicted in (6.12) and the second AHN-structure models the
system f in (6.26).
f = { x + 0.2  if −1 ≤ x < 0.1;  sin(αx)  if 0.1 ≤ x < 0.9;  cos(αx)  if 0.9 ≤ x ≤ 1 }    (6.26)

Moreover, combinations of molecules can be used to create new artificial hydro-
carbon networks that exhibit behaviors as combinations of behaviors of these mole-
cules. For example, Fig. 6.27 shows two different artificial hydrocarbon compounds
made of three molecules. As a result, another artificial hydrocarbon compound

Fig. 6.26 Two AHN-structures that reveal similar behaviors because they model two similar
systems

Fig. 6.27 Three artificial hydrocarbon compounds. The first two compounds are made of molecules
that are combined to make the resultant artificial compound shown at last

(see the last compound in Fig. 6.27) can be made with a combination of molecules
in the other compounds.

6.3.2 Catalog Based on Artificial Compounds

If molecules inherit information, then they can be stored and used to create new
compounds. As an example of this application, consider the training process
of an artificial hydrocarbon network. At some point, one can assume that an unknown
system Σ presents a particular behavior that can be associated to a molecule Mp
stored previously. Then, one can feed the training process with Mp. If the behavior
of Mp is similar to the unknown behavior of the system, then Mp will be assimilated
rapidly. Otherwise, Mp will vanish from the training process. For example,
(6.26) was modeled using the second molecule shown in Fig. 6.26 as Mp, and the
process finished in 5 iterations, while using the molecule of Fig. 6.28 as Mp the
process finished in 380 iterations, vanishing Mp.
But the most prominent application of storing molecules is that they can be
analyzed in order to form a catalog of molecules with descriptions. Then, the catalog
of compounds can be implemented to inherit information, to create new compounds
or to analyze if a system assimilates some previously recorded behaviors. In addition,
the catalog can be implemented for large sets of observations in systems in order to
reduce the time of training artificial hydrocarbon networks.

6.3.3 Using Metadata

As said before, different parameters in artificial hydrocarbon networks can be used
for analyzing systems. Throughout this chapter, hydrogen values have been used for
packaging information and bounds of molecules have been used for explaining the
limits of molecular behaviors. These and other characteristics were described in
Sect. 4.4.
For instance, let us assume that an intelligent system has the objective to identify
whether two functions f1 and f2 are similar or not (they can represent the characteristic
vectors of two images, the target image and the testing image). One can prove that
they are similar if f1 is used as a training function and f2 as the testing function of
the model of f1. Maybe in some regions they would be similar and in other input
intervals they would not. Then, two artificial hydrocarbon networks can be used for
modeling f1 and f2, say, f̂1 and f̂2 as shown in Fig. 6.29.

Fig. 6.28 Molecule used in the training process to show the impact of stored molecules

Fig. 6.29 Two artificial hydrocarbon networks modeling two different functions. Molecular infor-
mation is compared to find similarities between them, because implicitly, it means that the two
functions are similar

Since two molecules are similar due to their parameters, similarities in para-
meters mean similarities in modeled systems. Using the latter criterion, parameters
can be compared in Fig. 6.29 and if these parameters are statistically equivalent,
then both systems (or parts of them) are similar. Otherwise, the systems are not similar.
Finally, from Fig. 6.29, one can conclude that f1 and f2 are similar functions (e.g.
matching of two images) because all hydrogen parameters and intermolecular dis-
tances reveal that there is a maximum absolute error of 0.5 between them. To this
end, parameters in artificial hydrocarbon networks can be treated as metadata, giving
the opportunity to analyze systems or to perform new actions in an overall intelligent
system.
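A minimal sketch of this parameter-comparison criterion, assuming each molecule is summarized by a flat vector of its hydrogen values and intermolecular distances; the function and the representation are illustrative, with the 0.5 threshold taken from the example above:

import numpy as np

def molecules_similar(params_a, params_b, tol=0.5):
    """Compare two molecules through their parameter vectors (hydrogen
    values and intermolecular distances). The molecules, and hence the
    modeled systems, are taken as similar when the maximum absolute
    error between parameters is within tol."""
    diff = np.abs(np.asarray(params_a) - np.asarray(params_b))
    return np.max(diff) <= tol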

References

1. Bache K, Lichman M (2013) UCI machine learning repository. http://archive.ics.uci.edu/ml


2. Wong WC, Cho SY, Quek C (2009) R-POPTVR: a novel reinforcement-based POPTVR fuzzy
neural network for pattern classification. IEEE Trans Neural Netw 20(11):1740–1755
Chapter 7
Applications of Artificial Hydrocarbon Networks

Artificial hydrocarbon networks (AHNs) present several characteristics that are
useful for learning, classifying, predicting, analyzing, filtering and controlling tasks,
as described in previous chapters. Precisely, CH-molecules in their structures allow
capturing and clustering information about systems that can be exploited to solve
engineering problems. In that sense, artificial hydrocarbon networks can be applied
successfully in many real-world engineering applications.
Thus, this chapter presents three different applications in which artificial hydro-
carbon networks have been implemented: the design of adaptive filters for
noisy audio signals, the design of position controllers of direct current motors using
the so-called AHN-fuzzy inference systems, and the design of a facial recognition sys-
tem. Examples of program codes of these applications can be found in Appendix C.

7.1 Filtering Process in Audio Signals

Audio signal applications present problems like the addition of noise signals that
interfere with the original ones, resulting in poor audio performance. For signal
analysis, this addition of noise causes imprecise information that hides
important features in the time and frequency domains, while for human ears, it causes
non-intelligibility of audio. Nowadays, the huge industry of mobile applications
allows searching for an unknown segment of music by recording it and sending it over
the web to query the basic information about the track. In fact, the main process of these
applications is to filter the recorded signal in order to recover the original segment of
music because it is corrupted with environmental and hardware noise.
In that sense, artificial hydrocarbon networks were tested in different scenarios in
order to achieve noise reduction in audio signals for the music search problem.
Three metrics were used: a direct comparison with a classical finite impulse response
(FIR) filter, the short-time objective intelligibility (STOI) value, and the signal-to-
noise ratio (SNR).


7.1.1 Background and Problem Statement

In general, audio filters are analog or digital. In the following section, consider
digital audio filters, e.g. filters implemented in software. In digital signal processing,
audio filters can be classified into finite impulse response (FIR) filters and infinite
impulse response (IIR) filters. The first one uses a filter kernel that reaches a finite
zero frequency response while the second one uses a filter kernel that responds
in frequency with infinite exponential decaying sinusoidal functions. In practice,
convolution implements FIR filters and recursion implements IIR filters. More precisely,
FIR filters use convolution of the input signal only, while IIR filters use convolution of
both the input and the output signals [13].
Several audio filtering applications consider two interests: improving either time
domain response or frequency domain response. Smoothing, noise reduction and
direct current (DC) removal are typical problems in time domain while separating
frequencies is typical in frequency domain. However, it is very difficult to improve
time and frequency responses with the same filter. Thus other techniques have been
proposed. For example, the moving average and windowed-sinc filters are widely
used in time domain. In contrast, the single 2-pole filter and Chebyshev’s filter are
used in frequency domain. The above examples fall into linear filters. These kinds
of filters have a linear response from the input signal. However, there are other
prominent filters that respond in a nonlinear way from the input signal. Classical
nonlinear filters are based on the time-varying Wiener’s filter, probabilistic filters
and homomorphic filters [13].
Consider a special kind of noise, the so-called white or uniform noise. In particular,
white noise cannot be reduced easily in audio signals because it contains
similar low-magnitude components over the whole spectrum of frequencies, which
typically overlap and interfere with voice signals and musical signals. In that sense,
linear filters cannot clean the signal efficiently and nonlinear filters for these purposes
have been proposed. In fact, recent tendencies on filtering signals fall into adaptive
nonlinear filters. Those are primarily nonlinear filters that react to segments of audio
signals. Depending on the time and/or frequency responses in these intervals, the
filter provides different actions. In particular, most adaptive filters are covered by
artificial intelligence techniques like artificial neural networks or fuzzy systems [1].
Furthermore, artificial hydrocarbon networks have a filtering property that in
conjunction with molecular units can be adapted to audio signals in order to filter
noisy signals. For instance, consider a segment of an audio signal corrupted with
white noise. Then, an artificial hydrocarbon network can act as filter of this audio
signal using the above properties, as stated in Problem 7.1.
Problem 7.1 Let s be a corrupted audio signal of the form as (7.1); where, f is the
original audio signal, ρ is a white noise signal and t is the time. Then, a filter F is
required in order to recover f as much as possible, as expressed in (7.2).

s(t) = f (t) + ρ(t) (7.1)


f(t) ≈ F(s(t))    (7.2)

On the other hand, since artificial hydrocarbon networks are filtering structures
and they can adapt to different scenarios using CH-molecules, the problem of finding a
filter F is reduced to finding an artificial hydrocarbon network structure AHN given s,
such that (7.3) holds:

f(t) ≈ AHN(s(t))    (7.3)

In the following, a methodology for using artificial hydrocarbon networks for noise
reduction is presented, as well as some experimental results over three different
environments: filtering the segment of an audio track corrupted with white noise
in software, filtering the segment of an audio signal recorded with a microphone and
corrupted in software, and filtering the segment of an audio track recorded with a
mobile phone and corrupted in a non-ideal environment.

7.1.2 Methodology

First, an audio signal s is divided into batches, so-called windows. Each window that
starts at time t0 has a finite size of time tW measured in seconds, e.g. milliseconds.
Then, each window is initially divided into two partitions with time length tP^(i) of
the i-th partition, as shown in Fig. 7.1.
In order to determine the best set of time lengths {tP^(i)} for all partitions, an
artificial hydrocarbon network AHN implements its intermolecular distances rjk,
between two adjacent molecules Mj and Mk, in the time domain such that each time
length tP^(i) is equal to the interval between two intermolecular distances as (7.4);
where, r01 = t0 and rn,n+1 = t0 + tW.

Fig. 7.1 Segment of audio signal for training an artificial hydrocarbon network structure

Algorithm 7.1 ADAPTIVE-AHN-FILTER(s, tW, n_max, ε, η): Adaptive filtering process to
reduce noise in audio signals using artificial hydrocarbon networks.
Input: the corrupted audio signal s = (t, s(t)), window time tW, the maximum number of molecules
n_max ≥ 2, the small tolerance value ε > 0 and the learning rate η.
Output: the filtered audio signal f̂.

c_max = 1
f̂ = ∅
Split s into m equal windows s_i using tW.
for each s_i do
  [Γ, H, Λ, Π] = SIMPLE-AHN(s_i, n_max, c_max, ε, η)
  f̂_i ← using [Γ, H, Λ, Π] in the i-th input domain
  f̂ = f̂ + f̂_i
end-for
return f̂

tP^(i) = r_{i,i+1} − r_{i−1,i},  ∀ i = 1, . . . , n    (7.4)

Thus, the structure of the AHN adapts to each window of time tW using one
compound, i.e. c_max = 1 in order to minimize computational time, and a maximum
of n_max molecules, previously set. It is remarkable to say that the stoichiometric
coefficient α1 does not have to be computed since for one compound α1 = 1. For each
window, an artificial hydrocarbon network AHN is performed, as summarized in
Algorithm 7.1. At last, this process, called the adaptive AHN-filter, will reduce noise
in signal s. As noted, this is a simple and transparent way to perform a filter using
artificial hydrocarbon networks.
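A minimal Python sketch of this windowed training loop is shown below; it assumes a function simple_ahn that trains one artificial hydrocarbon compound on a window and returns a callable model. The name and signature are illustrative stand-ins for SIMPLE-AHN, not an existing library:

import numpy as np

def adaptive_ahn_filter(t, s, t_w, simple_ahn, n_max=10, tol=1e-4, eta=0.01):
    """Filter signal s(t) by training one AHN compound per window of
    length t_w seconds, following Algorithm 7.1."""
    f_hat = np.zeros(len(s))
    n_win = int(np.ceil((t[-1] - t[0]) / t_w))
    for i in range(n_win):
        mask = (t >= t[0] + i * t_w) & (t < t[0] + (i + 1) * t_w)
        if not mask.any():
            continue
        # one compound per window (c_max = 1)
        model = simple_ahn(t[mask], s[mask], n_max, 1, tol, eta)
        f_hat[mask] = model(t[mask])  # response of the trained compound
    return f_hat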
On the other hand, three evaluations are implemented to measure the performance
of the adaptive AHN-filter. The first one considers a comparison between the adaptive
AHN-filter and the classical FIR-filter. For instance, the FIR-filter [1, 13] is expressed
as a finite weighted sum as (7.5); where, x[t] is a finite sequence of data representing
the input signal in t, {bk } is a set of filter coefficients found by designing methods
like windowing, y[t] is the finite sequence representing the output signal in t, and N
represents the order of the FIR-filter.

y[t] = Σ_{k=0}^{N} bk · x[t − k]    (7.5)
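As a reference point, (7.5) is a plain convolution; a minimal numpy sketch, where the coefficients b would come from a design method such as windowing (here a simple moving average as a placeholder):

import numpy as np

def fir_filter(x, b):
    """Apply the FIR filter (7.5): y[t] = sum_k b[k] * x[t - k]."""
    return np.convolve(x, b)[:len(x)]

# placeholder coefficients: moving average of order N = 30
b = np.ones(31) / 31.0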

The second evaluation considers the short-time objective intelligibility (STOI)
value, which represents a monotonic relation with the average intelligibility of
filtered signals. In fact, the STOI metric offers an objective, non-qualitative way to
measure the intelligibility of filtered signals with respect to original signals, in the
range of 0.0–1.0. Thus, a higher value of STOI stands for a better performance of
filters [15].
The last evaluation considers the well-known signal-to-noise ratio (SNR), which
measures the ratio between the average power of the signal of interest P(s) and the

Fig. 7.2 Audio signal of experiment 1, (black) original audio segment, (gray) noisy signal

average power of the noise signal P(ρ) as written in (7.6). If the SNR-value is greater
than 1, then the signal contains much more meaningful information than noise [14].

SNR = P(s)/P(ρ)    (7.6)
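A small sketch of this metric, assuming the clean signal and the noise (or the filtering residual) are available separately:

import numpy as np

def snr(signal, noise):
    """Signal-to-noise ratio (7.6): ratio of average powers."""
    p_signal = np.mean(signal ** 2)   # average power of the signal of interest
    p_noise = np.mean(noise ** 2)     # average power of the noise
    return p_signal / p_noise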

7.1.3 Results and Discussion

In order to prove the performance of the adaptive AHN-filter, three experiments were
designed. The first experiment uses an audio signal digitally corrupted with white
noise; the second experiment was run on a digital signal processing (DSP) hardware;
and the third experiment considers a naturally corrupted audio signal coming from
a recording using a mobile phone.

7.1.3.1 Filtering Audio Signal Digitally Corrupted with White Noise

The experiment considers a five-second mono-channel audio signal segment sampled


at 44.10 kHz of a popular rock band. In addition, it was normalized and corrupted with
white noise of 15 % SNR. Figure 7.2 shows the original and the corrupted signals.
In particular, the adaptive AHN-filter was built using n max = 10 and tW = 10 ms.
Figure 7.3 (top) presents the response of the adaptive AHN-filter in time domain.
In addition, a classical 30-th order FIR-filter was implemented and its response is
presented in Fig. 7.3 (bottom).
To better analyze the performance of the adaptive AHN-filter, Fig. 7.4 depicts a
comparison in frequency domain of the original signal (Fig. 7.4 (top)), the response
of adaptive AHN-filter (Fig. 7.4 (middle)) and the response of the FIR-filter (Fig. 7.4
(bottom)). Notice that the cut-off frequency is approximately at 5000 Hz in the AHN-
filter response and 7000 Hz in the FIR-filter response, qualitatively demonstrating
that adaptive AHN-filters can achieve lowpass responses. It is important to remark

Fig. 7.3 Audio segment filtered of experiment 1. (top) Response of the adaptive AHN-filter,
(bottom) Response of the FIR-filter

that the cut-off frequency of the FIR-filter was set at 5000 Hz because important data
of the original audio signal is removed at the same cut-off frequency by the adaptive
AHN-filter. In addition, the STOI-value was measured in both responses, obtaining
0.6628 for the adaptive AHN-filter and 0.8083 for the FIR-filter. Thus, the FIR-filter
is 18 % more intelligible than the adaptive AHN-filter. Finally, the SNR-value of the
filtered signal when applying the adaptive AHN-filter is 2.3318 and 0.8609 when
applying the FIR-filter, proving that adaptive AHN-filters can clean signals in a better
way.
From the above experiment, a filter based on artificial hydrocarbon networks can
considerably reduce the noise in audio signals. However, the STOI-value suggests
that the filtered signal using the adaptive AHN-filter is less intelligible, but
in terms of noise reduction, the SNR-value reveals that the AHN-filter has better
performance than the classical FIR-filter. In fact, Fig. 7.4 shows that a non-adaptive
filter, i.e. the FIR-filter, has poor performance when filtering white noise, while the
adaptive AHN-filter can reach a better performance.

7.1.3.2 Filtering Audio Signal on DSP Hardware

This experiment considers the implementation of the adaptive AHN-filter on the DSP
NI-Speedy 33 hardware, an educational board for signal processing that can be

Fig. 7.4 Analysis in frequency domain of experiment 1. (top) black: noisy signal, gray: original
signal, (middle) Response of the adaptive AHN-filter, (bottom) Response of the FIR-filter

easily programmed using the LabVIEW platform. In that case, the audio signal was
obtained via a microphone connected to the analog input of the hardware. Then, the
audio signal was corrupted programmatically with white noise of 20 % SNR. The
response of the adaptive AHN-filter is then obtained and finally the filtered signal
is reproduced on headphones connected to the analog output port of the hardware.
Figure 7.5 shows the block diagram of the program implementing this experiment in
the DSP NI-Speedy 33 using LabVIEW.
Figure 7.6 (top) shows a 3-second audio signal collected from the hardware sam-
pled at 44.10 kHz. Figure 7.6 (middle) shows the response of the adaptive AHN-filter
and Fig. 7.6 (bottom) shows the response of the FIR-filter, in time domain. The AHN-
filter was built using n max = 10 and tW = 10 ms. Notice that offline performance

Fig. 7.5 Block diagram of the program implemented on DSP NI-Speedy 33 using LabVIEW.
a Block diagram for recording the input signal coming from a microphone. b Block diagram for
reproducing the response of the AHN-filter. c Block diagram of the X SubVI in b. d Block diagram
of the AHN SubVI in b. e Block diagram of the simplified CH molecule C(x) SubVI in d

Fig. 7.6 Audio segment of experiment 2. (top) black: noisy signal, gray: original signal, (middle)
Response of the adaptive AHN-filter, (bottom) Response of the FIR-filter

is needed for calculating all parameters in the adaptive AHN-structure. If online


filtering is required, real-time techniques may be used; but presetting parameters is
mandatory.
It can be seen in Fig. 7.6 (bottom) that the classical FIR-filter achieves lowpass fil-
tering but does not attenuate the noisy signal during voiceless segments, in comparison
with the response of the adaptive AHN-filter (Fig. 7.6 (middle)).
In addition, Fig. 7.7 shows the comparison of the adaptive AHN-filter and the FIR-
filter in frequency domain. Figure 7.7 (middle) reveals that the adaptive AHN-filter
can eliminate most of the white noise in the corrupted signal, but it does not cancel
all white noise after 5000 Hz. Actually, the FIR-filter can deal with high frequencies,

Fig. 7.7 Analysis in frequency domain of experiment 2. (top) black: noisy signal, gray: original
signal, (middle) Response of the adaptive AHN-filter, (bottom) Response of the FIR-filter

e.g. after 5000 Hz; but it cannot filter white noise in low frequencies, as revealed in
Fig. 7.7 (bottom) in the interval from 3000 to 5000 Hz. The qualitative comparison is
also complemented with the quantitative STOI metric: the response of the
adaptive AHN-filter has a STOI value of 0.6404, while the filtered audio signal using
the FIR-filter obtains a STOI value of 0.6648. Thus, the FIR-filter is relatively 3.7 %
more efficient than the adaptive AHN-filter, in terms of intelligibility. Finally, applying
the SNR metric, the response of the adaptive AHN-filter obtained 8.3917 and the
FIR-filter 1.4556, showing that the adaptive AHN-filter reduces white noise better
than the FIR-filter.

7.1.3.3 Filtering Audio Signal Naturally Corrupted with White Noise

The last experiment considers an audio segment corrupted with noise from the envi-
ronment. In that case, a mobile phone was used for recording the audio signal from
an audio CD track. In fact, this is a real-world application in mobile devices to
recognize audio segments. Actually, in order to compute the results of the artifi-
cial hydrocarbon network based filter and the FIR filter, the same segment of audio
signal was extracted from the audio CD. At last, a five-second mono-channel audio
segment sampled at 44.10 kHz was used in the experiment as shown in Fig. 7.8 (top).
Analyzing Fig. 7.8 (top), the recorded audio signal is corrupted with 15.1 % SNR.
Figure 7.8 (middle) shows the response of the adaptive AHN-filter with n_max = 10
and tW = 10 ms, and Fig. 7.8 (bottom) shows the response of a 30-th order FIR-filter,
both in time domain.
In addition, Fig. 7.9 shows a comparison between the responses of both filters in
frequency domain. The cut-off frequency of the adaptive AHN-filter is approximately
4500 Hz, and 5000 Hz in the FIR-filter. In fact, Fig. 7.9 (bottom) reveals that some of
the white noise could not be removed by the FIR-filter in low frequencies, although
it is very efficient in high frequencies. In contrast, the AHN-filter in Fig. 7.9 (middle)
has a better performance in low frequencies eliminating white noise. In terms of
intelligibility, the STOI-value of the adaptive AHN-filter is 0.3514 and 0.4918 for
the FIR-filter. At last, the SNR-values for the AHN-filter and FIR-filter are 6.1314
and 1.9239, respectively. From the above experiments, adaptive AHN-filters can
achieve the music search task because they can attenuate noise significantly;
however, a tradeoff between the STOI and the SNR has to be considered.

7.2 Position Control of DC Motor Using AHN-Fuzzy Inference Systems

Controlling direct current (DC) motors is an important task in industrial applications


because they are used in computer numeric control (CNC) machines, robotic sys-
tems, domotics, and so forth. Nowadays, fuzzy control systems have been widely
used to perform these tasks. In particular, type-1 and type-2 fuzzy systems have been
implemented; however, the first type of systems does not deal appropriately with the
presence of noise while the second type does. But the latter is
much more complicated to compute because a large number of operational compu-
tations are required. Thus, in practice, type-1 fuzzy systems are implemented where
a system is not subjected to large noise or when hardware does not allow too many
computations; in contrast, type-2 fuzzy systems are implemented where a system
is subjected to large noise and where the software and hardware are appropriate to
perform all calculations [6].
In that sense, a new fuzzy inference system based on artificial hydrocarbon net-
works is presented in order to improve the design of both types of fuzzy systems.

Fig. 7.8 Audio segment of experiment 3. (top) black: mobile phone audio signal, gray: CD track
audio signal, (middle) Response of the adaptive AHN-filter, (bottom) Response of the FIR-filter

Particularly, the proposed inference system based on AHNs was implemented on the
design of a position controller of DC motors in a real-world application, demonstrat-
ing accuracy in both types with minimal modifications.

7.2.1 Background and Problem Statement

In control theory, a control system can be considered as an autonomous system


that regulates the behavior of other systems. For instance, regulating the temperature

Fig. 7.9 Analysis in frequency domain of experiment 3. (top) black: mobile phone audio signal,
gray: CD track audio signal, (middle) Response of the adaptive AHN-filter, (bottom) Response of
the FIR-filter

inside a room or controlling the velocity of a robotic car are examples of the response
of control systems.
For instance, consider the following system, as shown in Fig. 7.10. A trainer
hardware module is prepared for sending a reference signal r (t) from a knob and
a feedback signal y(t) (i.e. the current position of a DC motor) to a host in which
a control law is running. The correction signal u(t) computed is sent back to the
trainer module in order to feed a DC motor. In particular to this case study, a NI
CompactRIO reconfigurable and embedded system based on field programmable gate
arrays (FPGA) is used as the host. In addition, LabVIEW is used for programming

Fig. 7.10 Overall system of the case study. The trainer module interacts with the NI CompactRIO
(host) as a control law and it is monitored with LabVIEW

Fig. 7.11 Block diagram of the PD position control of a DC motor

the control law on the NI CompactRIO and for monitoring the performance of the
fuzzy-molecular control system.
On one hand, both the reference signal r (t) that comes from a knob and the
position signal y(t) are in the voltage range [0.0, 5.0] V; where, 0.0 V represents an
angle of 0◦ and 5.0 V represents an angle of 180◦ . On the other hand, the correction
signal u(t) is the input voltage of the DC motor in the range [0.0, 5.0] V; where, 0.0 V
represents the maximum angular velocity of the motor to rotate counterclockwise,
5.0 V represents the maximum angular velocity of the motor to rotate clockwise and
2.5 V means no rotation (halt). It is remarkable to say that the position of the DC
motor increases in counterclockwise direction and decreases in clockwise direction.
In addition, consider the control system of the case study depicted in Fig. 7.11 in
which a position control of the DC motor is implemented. It consists of a control
law that regulates the behavior of the overall system in order to reach a specific goal
(i.e. control the position of a DC motor) using two inputs—the error signal e(t) and
the change of the position signal ẏ(t)—and one output—the input voltage of the DC
motor u(t); and the DC motor, referred as the plant.

The objective of a control system like Fig. 7.11 is to track a reference signal r (t)
with the position of the DC motor y(t). In that sense, classical linear and nonlinear
control system have been widely studied to this purpose. The most prominent control
law is called proportional-integral-derivative (PID) controller [11] as expressed in
(7.7); where, P is the proportional constant, I is the integral constant, D is the
derivative constant, e(t) stands for the error between a reference signal r (t) and the
current value of the controlled variable y(t) as in (7.8), ė(t) stands for the derivative
of error, and u(t) is the correction signal.

u(t) = P e(t) + I ∫ e(t) dt + D ė(t)    (7.7)

e(t) = r (t) − y(t) (7.8)

If I = 0, then the control law is considered a PD-controller, and if additionally D = 0
then the resultant control law is called a P-controller [11]. As noted, the PID-controller
is based on the error signal e(t); however, other control laws can act over different
variables.
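For illustration, a minimal discrete-time implementation of (7.7) and (7.8) might look as follows; it is a sketch, with gains and the sample time dt as placeholders to be tuned:

class PID:
    """Discrete PID controller implementing (7.7) with e(t) = r(t) - y(t)."""
    def __init__(self, P, I, D, dt):
        self.P, self.I, self.D, self.dt = P, I, D, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, r, y):
        e = r - y                                   # error, (7.8)
        self.integral += e * self.dt                # approximates the integral term
        derivative = (e - self.prev_error) / self.dt
        self.prev_error = e
        return self.P * e + self.I * self.integral + self.D * derivative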
In most of the cases, a model of the plant is required, e.g. the model of the DC
motor. Formally, the DC motor can be represented in the state space as (7.9) [5];
where, Ia and Ua are the flow current and the applied voltage in the rotor, Ra and
L a are the resistance and the inductance values of the rotor, I f and L f are the flow
current and the inductance value of the stator, ω is the angular velocity of the rotor,
k is the construction constant of the motor related to the flux generation, J is the
moment of inertia of the rotor, β is the viscous friction coefficient, Tl is the load
torque; and, ȧ stands for the first derivative of a. Figure 7.12 shows the scheme of a
DC motor.
[İa]   [ −Ra/La   −Lf·If/La ] [Ia]   [ 1/La    0    ] [Ua]
[ω̇ ] = [  k/J      −β/J     ] [ω ] + [  0     −1/J  ] [Tl]    (7.9)
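A minimal forward-Euler simulation of (7.9) can be sketched as follows; the parameter values are placeholders, not those of the actual trainer module:

import numpy as np

def simulate_dc_motor(u_a, t_l, dt, Ra=1.0, La=0.5, Lf=0.5, If=1.0,
                      k=0.01, J=0.01, beta=0.1):
    """Forward-Euler integration of the DC motor state space (7.9).
    u_a, t_l: arrays of applied voltage and load torque per time step."""
    A = np.array([[-Ra / La, -Lf * If / La],
                  [k / J,    -beta / J]])
    B = np.array([[1.0 / La, 0.0],
                  [0.0,     -1.0 / J]])
    x = np.zeros(2)                      # state: [Ia, omega]
    states = []
    for ua, tl in zip(u_a, t_l):
        x = x + dt * (A @ x + B @ np.array([ua, tl]))
        states.append(x.copy())
    return np.array(states)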

However, there are other controllers like the so-called fuzzy controllers that can
be implemented without knowing the model of the plant, e.g. (7.9). In addition, those
are very important in real-world applications because they can deal with uncertain
and imprecise information, mapping nonlinearities from the input domain (attribute
variables) to a fuzzy space and transforming the latter into the output domain (target
variables). Focusing on fuzzy control systems, three models can be found in the literature:
Takagi-Sugeno inference systems [16], Mamdani’s fuzzy control systems [4, 7] and
Tsukamoto’s inference models [17].
In a nutshell, Takagi-Sugeno inference systems apply polynomial functions to
construct the consequent values using pairs of input-output data of a given system
to model [16]. Mamdani’s fuzzy control systems refer to control laws that apply
fuzzy inference models with fuzzy sets in the defuzzification phase [7]. In contrast,
Tsukamoto’s inference models implement monotonical membership functions [17].

Fig. 7.12 Scheme of a DC motor

In any case, the basic fuzzy inference model, known as type-1 fuzzy system,
consists of three steps, as shown in Fig. 7.13. The first step is the fuzzification process
in which the input domain is mapped into a fuzzy space using fuzzy sets that assign
a membership value of any element in the input domain to be part of the fuzzy
set. Once all membership values are computed, the inference step is performed using
propositional rules and computed consequent membership values, and the third step is
the defuzzification process in which membership values obtained from the inference
process are transformed into an element of the output domain.
As said before, fuzzy inference models use fuzzy partitions, so-called fuzzy sets,
and they are of two types: the type-1 fuzzy sets (Fig. 7.14 (top)), which assign a
membership value of any element to be part of the fuzzy set; and the type-2 fuzzy
sets (Fig. 7.14 (bottom)), which assign two membership values, i.e. primary and
secondary memberships, in order to model uncertainties on the boundaries of type-1
fuzzy sets. In particular, the region inside these two membership functions is called
the footprint of uncertainty [6, 9].
The generalized fuzzy system is the so-called type-2, as shown in Fig. 7.15. In fact,
type-1 is a particular case of type-2 fuzzy systems because the footprint of uncertainty
in type-2 systems embeds type-1 fuzzy systems, e.g. a type-2 fuzzy system can be
reduced to a type-1 fuzzy system if the secondary membership function is equivalent
to the primary membership function.
Then, type-2 fuzzy systems consist of three steps. The first two steps are fuzzifi-
cation and inference, while the third step is known as output processing. The latter

Fig. 7.13 Block diagram of type-1 fuzzy inference systems



Fig. 7.14 Types of fuzzy sets. (top) Type-1 fuzzy set representation, and (bottom) type-2 fuzzy set
representation

Fig. 7.15 Block diagram of type-2 fuzzy inference systems

is subdivided into type-reducer and defuzzification steps. In that sense, the type-reducer
step computes the centroids of all embedded type-1 fuzzy systems in a footprint of
uncertainty. As a result, these centroids are used for calculating the output value in
the defuzzification step.
In particular, consider Mamdani’s fuzzy control systems [4]. Roughly speaking,
type-1 Mamdani’s fuzzy control systems are important because fuzzy sets can deal
with uncertain data, but when the input domain is corrupted with noise the response
of type-1 is significantly poor. In addition, type-2 Mamdani’s fuzzy control systems
overcome the latter problem; but the implementation of type-2 systems is highly
complex in comparison with type-1 in terms of the number of operational computa-
tions and the ideally unlimited fuzzy sets to evaluate under the footprint of uncertainty

region [6, 10]. Then, applications based on Mamdani’s fuzzy control systems have
to make a tradeoff between using type-1 or type-2 fuzzy systems.
In that sense, a fuzzy inference system based on artificial hydrocarbon networks
has been proposed in order to exploit both types of fuzzy systems when a tradeoff
between them is required. Formally, it is stated in Problem 7.2.
Problem 7.2 Let F be the fuzzy control law in a position controller of a DC motor
like Fig. 7.11, such that, (7.10) holds; where, e(t) and ẏ(t) are the inputs, i.e. the
error signal and the derivative of position, respectively; and u(t) is the output, i.e.
the input voltage of the DC motor.

u(t) = F(e, ẏ, t) (7.10)

Moreover, suppose that F has to be implemented in both type-1 and type-2 fuzzy
systems with minimal modifications. Then, design a fuzzy control law F such that
a position controller of a DC motor can be performed with accuracy in both fuzzy
types with minimal modifications.

7.2.2 Methodology

The application consists of two parts. First, a fuzzy inference system based on
artificial hydrocarbon networks, named the fuzzy-molecular inference (FMI) model, is
proposed. Then, the design of the fuzzy controller F is described using the FMI-
model.

7.2.2.1 Fuzzy-Molecular Inference Model

Consider a type-1 fuzzy inference system like Fig. 7.16. The fuzzy-molecular infer-
ence (FMI) model can be viewed as a block with inputs that are mapped to a fuzzy
space in which processes are run and then resulting values are returned as outputs.
In order to do so, the fuzzy-molecular inference model considers three steps: fuzzifi-
cation, fuzzy inference engine and defuzzification. In addition, there exists a knowledge
base in which information is stored about a given specific problem.
Fuzzification is the step of the FMI-model in which an input x, known as a
linguistic variable, is mapped to a fuzzy value in the range [0, 1]. Then, let A be a
fuzzy set and μA(x) the membership function of A for all x ∈ X; where X is the input
domain space. Moreover, μ A (x) is a real value in the interval [0, 1] representing the
degree of belonging x to the fuzzy set A as shown in (7.11).

μA : x ↦ [0, 1]    (7.11)

Fig. 7.16 Block diagram of the fuzzy-molecular inference model

In fact, this linguistic variable is partitioned into k different fuzzy sets Ai , for all
i = 1, . . . , k. As an example, consider a linguistic variable x partitioned into fuzzy
sets like: “small”, “medium” and “large”. Then, its membership value is calculated
over the set of membership functions μ Ai (x) for all i = 1, . . . , k. The shape of
these membership functions depends on the purpose of the problem domain. Typical
membership functions are the triangular function (7.12), the S-function (7.13) and
the Z-function (7.14); where, a, b and c are real value parameters for each function
as shown in Fig. 7.17.
μA(x) = { (x−a)/(b−a)  if a ≤ x ≤ b;  (c−x)/(c−b)  if b ≤ x ≤ c;  0  otherwise }    (7.12)

μA(x) = { 0  if x < a;  (x−a)/(b−a)  if a ≤ x ≤ b;  1  if x > b }    (7.13)

μA(x) = { 1  if x < a;  (b−x)/(b−a)  if a ≤ x ≤ b;  0  if x > b }    (7.14)
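These three shapes translate directly into code; a minimal sketch:

def triangular(x, a, b, c):
    """Triangular membership function, (7.12)."""
    if a <= x <= b:
        return (x - a) / (b - a)
    if b < x <= c:
        return (c - x) / (c - b)
    return 0.0

def s_shape(x, a, b):
    """S-shaped membership function, (7.13)."""
    if x < a:
        return 0.0
    return (x - a) / (b - a) if x <= b else 1.0

def z_shape(x, a, b):
    """Z-shaped membership function, (7.14)."""
    if x < a:
        return 1.0
    return (b - x) / (b - a) if x <= b else 0.0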

The next step in the FMI-model is the fuzzy inference engine. This is a mechanism
that computes the consequent value yl of a set of fuzzy rules that accept membership
values of linguistic variables. Let Rl be the l-th fuzzy rule of the form (7.15);
where, {x1, . . . , xk} is the set of linguistic variables in the antecedent, {A1, . . . , Ak}
is the set of fuzzy partitions of the input space X, yl is the variable of the consequent
in the l-th rule, Mj is the j-th CH-primitive molecule of an artificial hydrocar-
bon network excited with the resulting fuzzy implication value μΔ(x1, . . . , xk) (see
below), and Δ is any T-norm function.

Rl : if Δ(x1 is A1, . . . , xk is Ak), then yl is Mj    (7.15)



Fig. 7.17 Common membership functions in fuzzy inference systems

Assuming that μΔ(x1, . . . , xk) is the result of any T-norm function, (7.15)
can be rewritten as (7.16); where, ϕj is the molecular behavior of Mj.

Rl : if Δ(x1 is A1, . . . , xk is Ak), then yl = ϕj(μΔ(x1, . . . , xk))    (7.16)

For simplicity to this case study, let μΔ (x1 , . . . , xk ) be the min function (7.17):

μΔ (x1 , . . . , xk ) = min{μ A1 (x1 ), . . . , μ Ak (xk )} (7.17)

The last step in the FMI-model is the defuzzification. It calculates the crisp output
value y as (7.18) using all fuzzy rules of the form (7.16); where, yl is the l-th
consequent value and μΔl(x1, . . . , xk) is the fuzzy evaluation of the antecedents, for
all fuzzy rules.

y = [ Σ_l μΔl(x1, . . . , xk) · yl ] / [ Σ_l μΔl(x1, . . . , xk) ]    (7.18)
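A compact sketch of steps (7.16)-(7.18), assuming each rule is represented as a pair (list of membership functions, molecular behavior ϕj); the names are illustrative:

def fmi_infer(rules, x):
    """Evaluate FMI rules on input vector x and defuzzify as in (7.18).
    Each rule is (memberships, phi): memberships is a list of functions
    mu_Ai, one per input, and phi is the molecular behavior of M_j."""
    num, den = 0.0, 0.0
    for memberships, phi in rules:
        mu = min(m(xi) for m, xi in zip(memberships, x))  # T-norm, (7.17)
        y_l = phi(mu)          # consequent from the excited molecule, (7.16)
        num += mu * y_l        # weighted sum, numerator of (7.18)
        den += mu
    return num / den if den > 0 else 0.0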

Since the fuzzy-molecular inference model has a generic fuzzy inference engine,
proper knowledge of a specific problem domain can be enclosed into a knowledge
base. It is a matrix that summarizes all fuzzy rules of the form (7.16) in the
following way:
• For all input variables x1 , . . . , xk , represent all possible combinations of them
using the label of each set in the fuzzy partition of inputs, such that all antecedents
in the fuzzy rules will be covered.
• For each combination (summary of antecedents), assign the corresponding label
of molecule M j that will act when the fuzzy rule is fired.
At last, the fuzzy-molecular inference model combines interesting properties from
both fuzzy logic and artificial hydrocarbon networks. Advantages of the FMI-model
are the following:
• Fuzzy partitions in the output domain might be seen as linguistic units, e.g. “small”,
“large”.
• Fuzzy partitions have a degree of understanding (parameters are metadata).
• Molecular units deal with noise and uncertainties.
It is remarkable to say that molecules are excited by consequent values; thus,
molecules do not model a given system, but transfer information from a fuzzy
subspace to a crisp set. Moreover, molecular units have the property of filtering
noise and uncertainties, especially important in real-world control applications.

7.2.2.2 Design of Control Law Based on the FMI-Model

Consider the control law depicted in Fig. 7.11. It is based on the fuzzy-molecular
inference model to implement a position controller of a DC motor.
For instance, the fuzzification step considers the two input variables (error signal e(t) and first derivative of the position signal ẏ(t)) in the partitioned space with three type-2 fuzzy sets: “negative” (N), “zero” (Z) and “positive” (P). Figure 7.18 shows the fuzzy sets for input e(t) and Fig. 7.19 shows the fuzzy sets for input ẏ(t). All parameters were tuned manually. Notice that type-2 fuzzy sets are determined using the primary membership function μ_A^U(x) and the secondary membership function μ_A^L(x), the latter adding a value of uncertainty. As a matter of fact, the region enclosed by both membership functions is called the footprint of uncertainty (7.19) of a fuzzy set A.

FOU(A) = ⋃_{x ∈ X} [ μ_A^L(x), μ_A^U(x) ]   (7.19)

It is remarkable to say that when the secondary membership function is equal to the primary membership function, the type-2 fuzzy inference system is reduced to a type-1 fuzzy inference system.
Then, the fuzzy inference engine calculates the consequent values of both primary and secondary membership values μ_A^U(x), μ_A^L(x) using the knowledge base shown in Table 7.1.

Fig. 7.18 Fuzzy sets of the input error signal (N, Z and P over [−1, 1]): (solid line) primary membership function, (dashed line) secondary membership function

Fig. 7.19 Fuzzy sets of the input first derivative position signal (N, Z and P over [−1, 1]): (solid line) primary membership function, (dashed line) secondary membership function

Consequent values y^L and y^U are similarly obtained using (7.18) with the secondary and primary membership values, respectively.
As noted in Table 7.1, the output signal was partitioned into three CH-molecules M_j, j = 1, …, 3, that represent the action to be held. In particular, the output signal was partitioned into the following molecules: “clockwise” (CW), “halt” (H) and “counterclockwise” (CCW). Figure 7.20 shows the artificial hydrocarbon compound used for this position controller. Its parameters were found using Algorithm 4.5.
At last, the Nie-Tan method was used for computing the final value of the output variable u(t) of the type-2 fuzzy controller because of its simplicity of computation; other methods like Karnik-Mendel, Greenfield-Chiclana or Wu-Mendel can also be considered for type reduction. In fact, the Nie-Tan method generates a type reduction of the form (7.20); where y is the crisp output value u(t).

Table 7.1 Knowledge base of the FMI-model based fuzzy controller

e(t) \ ẏ(t)      N        Z        P
N                M_CW     M_CW     M_CW
Z                M_CW     M_H      M_CCW
P                M_CCW    M_CCW    M_CCW

Fig. 7.20 Artificial hydrocarbon network used in the FMI-model based position controller (a three-molecule compound C_−1–C_0–C_1 with its hydrogen values)

y = (y^L + y^U) / 2   (7.20)
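A minimal MATLAB sketch of this type reduction follows, assuming muL and muU hold the fired values of all rules computed with the secondary and primary membership grades, respectively, and yl holds the consequent values (illustrative names):

    yL = sum(muL.*yl) / sum(muL);   % crisp output from secondary grades, eq. (7.18)
    yU = sum(muU.*yl) / sum(muU);   % crisp output from primary grades, eq. (7.18)
    u  = (yL + yU) / 2;             % Nie-Tan type reduction, eq. (7.20)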

7.2.3 Results and Discussion

In order to demonstrate that the fuzzy-molecular inference model for fuzzy control systems can be used as an alternative to type-2 fuzzy control systems, an experiment was run over the system of Fig. 7.10. The experiment considers a comparison between the FMI-model based fuzzy control system designed previously and a Mamdani fuzzy controller designed with the same parameters, for both type-1 and type-2 fuzzy systems. Actually, for the Mamdani fuzzy controller the output variable was partitioned into three type-2 fuzzy sets: “clockwise” (CW), “halt” (H) and “counterclockwise” (CCW). Figure 7.21 shows this partition for the output variable u(t).

7.2.3.1 Type-1 Fuzzy System Comparison

For this experiment, the fuzzy-molecular position controller for the DC motor was reduced to a type-1 fuzzy system by considering only the primary membership functions in the fuzzification step; the same reduction was applied to the Mamdani controller.

Fig. 7.21 Fuzzy sets of the defuzzification step in the Mamdani fuzzy control system (CW, H and CCW over the correction signal [0, 5]): (solid line) primary membership function, (dashed line) secondary membership function

The system was subjected to a step function without noise, as shown in Fig. 7.22. Results of the FMI controller determine that it had a step response with 0 % maximum overshoot, a rise time of 1.0 s and a maximum steady-state error of 2.5°. On the other hand, the system was subjected to a step function with 35 % noise, as shown in Fig. 7.23. Results of the FMI controller report 0 % maximum overshoot, a rise time of 1.1 s and a maximum steady-state error of 5.8° measured from the position 180°. For contrast, Table 7.2 summarizes the overall results of the FMI and Mamdani fuzzy controllers.
Notice in Fig. 7.22 that the response of the FMI controller is 50 % faster than the response of the Mamdani controller and has a smaller maximum steady-state error than the Mamdani controller. In comparison, Fig. 7.23 shows that both fuzzy controllers remain stable as measured (5.8° and 5.5° of maximum steady-state error). However, the FMI controller is still faster (1.1 s of rise time) than the Mamdani controller (2.5 s of rise time). As noted, the FMI controller has a better response to dynamic uncertainties than the Mamdani controller.
Also, the system was subjected to a ramp function without noise, as shown in Fig. 7.24. Results determine that the FMI controller has a maximum steady-state error of 3.6° while the Mamdani controller has 6.7°. On the other hand, the system was subjected to a ramp function with 35 % noise, as shown in Fig. 7.25. The FMI controller reports 11.0° of maximum steady-state error and the Mamdani controller reports 12.3°. Again, Table 7.2 summarizes the overall results of this experiment with respect to the responses of the FMI and Mamdani fuzzy controllers.
It is evident from Table 7.2 that both fuzzy controllers decrease their performance in the presence of noise. However, the FMI controller can track the reference signal better than the Mamdani controller, as shown by the steady-state error. In addition, note that the FMI controller is slightly faster than the Mamdani controller.

Fig. 7.22 Comparison of the step response without noise of the FMI and Mamdani type-1 controllers (position (°) versus time (s))

Fig. 7.23 Comparison of the step response with 35 % noise of the FMI and Mamdani type-1 controllers (position (°) versus time (s))

Table 7.2 Experimental results of type-1 fuzzy controllers

Fuzzy controller   Noise (%)   Rise time (s)   Steady-state error (°)
Step response
FMI                0           1.0             2.5
FMI                35          2.0             5.8
Mamdani            0           1.1             4.7
Mamdani            35          2.5             5.5
Ramp response
FMI                0           –               3.6
FMI                35          –               11.0
Mamdani            0           –               6.7
Mamdani            35          –               12.3

7.2.3.2 Type-2 Fuzzy System Comparison

The type-2 fuzzy-molecular position controller for the DC motor described previously was implemented, as well as the type-2 Mamdani controller. Again, the system was subjected to a step function without noise and with 35 % noise, as shown in Figs. 7.26 and 7.27, respectively. The same process was done with a ramp function, and the responses of both controllers are shown in Figs. 7.28 and 7.29, respectively. The overall results are summarized in Table 7.3.
As noted from Tables 7.2 and 7.3, the step responses of both the FMI and Mamdani type-2 fuzzy controllers remain similar to those of the type-1 controllers, as expected. Thus, the type-1 and type-2 FMI fuzzy controllers are roughly equivalent with or without perturbations.
From Figs. 7.28 and 7.29, it can be seen that the responses of the type-2 fuzzy controllers are slightly better than those of the type-1 controllers, as expected.

Fig. 7.24 Comparison of the ramp response without noise of the FMI and Mamdani type-1 controllers (position (°) versus time (s))

Fig. 7.25 Comparison of the ramp response with 35 % noise of the FMI and Mamdani type-1 controllers (position (°) versus time (s))

From the point of view of ramp response, the FMI controller presents performance similar to the Mamdani controller without noise (3.8° and 3.7° maximum steady-state errors, respectively). Again, both controllers present the same tendency when they are exposed to noise, and in comparison with type-1 controllers, type-2 fuzzy controllers act slightly better, as found in Tables 7.2 and 7.3 (FMI: 17.2 % better, Mamdani: 1.7 % better).
On one hand, from the above results, fuzzy-molecular inference models can achieve fuzzy control applications. Moreover, these FMI-model based controllers can be used as an alternative to type-2 fuzzy control systems. This statement comes from the evaluation and comparison of the step and ramp responses of the FMI controller and the Mamdani fuzzy controller, both models subjected to static and dynamic uncertainties.

Fig. 7.26 Comparison of the step response without noise of the FMI and Mamdani type-2 controllers (position (°) versus time (s))

Fig. 7.27 Comparison of the step response with 35 % noise of the FMI and Mamdani type-2 controllers (position (°) versus time (s))

In this case study, a Mamdani fuzzy control system was used because it is the fuzzy inference system most widely implemented in industry.
On the other hand, it is important to distinguish the fuzzy-molecular inference model from other fuzzy inference models like Takagi-Sugeno inference systems or Mamdani fuzzy control systems. For instance, the defuzzification process in each fuzzy inference model is different. As the FMI-model uses artificial hydrocarbon networks, each molecule represents a linguistic partition of the output variable. In the above results simple CH-molecules were implemented, but complex molecules can also be used. Thus, defuzzification can realize complex nonlinear mappings in the FMI-model. In contrast, the Takagi-Sugeno model uses polynomial functions and the Mamdani model represents linguistic partitions with membership functions associated to fuzzy sets.

Fig. 7.28 Comparison of the ramp response without noise of the FMI and Mamdani type-2 controllers (position (°) versus time (s))

Fig. 7.29 Comparison of the ramp response with 35 % noise of the FMI and Mamdani type-2 controllers (position (°) versus time (s))

The parameters inside artificial hydrocarbon networks are hydrogen and carbon values; they are polynomial coefficients in the Takagi-Sugeno model, and parameters of membership functions in the Mamdani model. In addition, molecules in the FMI-model map membership (truth) values to output values, also dealing with uncertainties. This is remarkable because the Takagi-Sugeno model maps input values to output values, and fuzzy inference values act linearly on the final output value. At last, the Mamdani model maps membership values to output values. In fact, the fuzzy-molecular inference model combines linguistic partitions of output variables with molecular structures.

Table 7.3 Experimental results of type-2 fuzzy controllers

Fuzzy controller   Noise (%)   Rise time (s)   Steady-state error (°)
Step response
FMI                0           1.0             2.5
FMI                35          1.0             5.0
Mamdani            0           2.4             4.7
Mamdani            35          2.6             5.5
Ramp response
FMI                0           –               3.8
FMI                35          –               9.1
Mamdani            0           –               3.7
Mamdani            35          –               12.1

7.3 Facial Recognition Based on Signal Identification Using AHNs

Nowadays, facial recognition systems are widely used for identifying, verifying or tracking people from images or video frames in security, government activities, entertainment and so forth. In security, facial recognition can be considered a biometric system that analyzes, extracts and classifies features, landmarks or any other information within images. In that context, the following case study applies artificial hydrocarbon networks with the discrete cosine transform to easily implement a facial recognition system.

7.3.1 Background and Problem Statement

Facial recognition is a challenging problem in computer vision because it is computationally complex: position, illumination, environmental and instrumental noise, pose, and other factors affect the performance of the algorithms [8]. In that sense, many robust facial recognition algorithms have been developed [3, 8]. However, undesirable effects arise when they are applied in real-world applications. Several algorithms are widely used, for example [8]: background removal, illumination normalization, artificial neural networks, discrete cosine transform, and hybrid algorithms. In that context, the purpose of this case study is to propose a novel facial recognition system based on metadata of artificial hydrocarbon networks in order to minimize the timing of feature matching.
For instance, consider the facial recognition system of Fig. 7.30. As noted, it consists of the following steps: pre-processing, feature extraction and classification. The first block applies a transformation to the input images in order to obtain suitable images for a specific problem. Several pre-processing techniques typically used in facial recognition systems are scaling, edge detection, histogram equalization and color-to-grayscale conversion [8].

Fig. 7.30 Block diagram of a facial recognition system (pre-processing, feature extraction and classification, supported by an image database, ending with image identification)

Then, feature extraction refers to the process of representing an image by a small set of relevant parameters. One of the most prominent feature extraction techniques in imaging is the so-called discrete cosine transform (DCT) [3, 12]. It is a separable linear transformation that compacts the energy of an image in frequency. For a two-dimensional image, the DCT is expressed as (7.21); where I(x, y) is the intensity of the pixel at (x, y) coordinates, N and M are two integer values related to the size of the image M × N, u and v vary from 0, …, N − 1 and 0, …, M − 1, respectively, and α(u), α(v) hold (7.22).


F(u, v) = α(u) α(v) Σ_{n=0}^{N−1} Σ_{m=0}^{M−1} I(n, m) cos[ π u(2n + 1) / (2N) ] cos[ π v(2m + 1) / (2M) ]   (7.21)

α(u) = √(1/N) if u = 0,  √(2/N) if u ≠ 0
α(v) = √(1/M) if v = 0,  √(2/M) if v ≠ 0   (7.22)
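As a reference, a direct (although slow) evaluation of (7.21)–(7.22) can be sketched in MATLAB as follows; in practice, the dct2 function of the Image Processing Toolbox computes the same transform much faster. Here I is assumed to be an N × M grayscale image of class double:

    [N, M] = size(I);
    F = zeros(N, M);
    for u = 0:N-1
        au = sqrt((1 + (u~=0))/N);           % alpha(u): sqrt(1/N) or sqrt(2/N)
        for v = 0:M-1
            av = sqrt((1 + (v~=0))/M);       % alpha(v): sqrt(1/M) or sqrt(2/M)
            s = 0;
            for n = 0:N-1
                for m = 0:M-1
                    s = s + I(n+1,m+1) * cos(pi*u*(2*n+1)/(2*N)) ...
                                       * cos(pi*v*(2*m+1)/(2*M));
                end
            end
            F(u+1,v+1) = au*av*s;
        end
    end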

Moreover, the low frequencies of the DCT contain the average intensity of an image, which is very useful in facial recognition systems. The high frequencies of the DCT contain other components of an image like noise and illumination variations [8]. Since only the first values of the DCT are required, a zigzag ordering (raster scan order) is computed to reorder the values of the DCT into a vector, placing the most important values first. This technique avoids some problems of pose and position.
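A common way to program the zigzag ordering scans the matrix along its anti-diagonals; the sketch below is one such implementation (the exact scan order may vary slightly among implementations, but the low frequencies always come first):

    function v = zigzag(F, K)
    % Raster the 2-D DCT matrix F in zigzag order and keep the first
    % K coefficients (lowest frequencies first).
    [M, N] = size(F);
    v = zeros(1, M*N); idx = 1;
    for d = 0:(M+N-2)                          % anti-diagonal: row + col = d
        if mod(d,2) == 0                       % even diagonals: bottom-up
            rows = min(d,M-1):-1:max(0,d-N+1);
        else                                   % odd diagonals: top-down
            rows = max(0,d-N+1):min(d,M-1);
        end
        for r = rows
            v(idx) = F(r+1, d-r+1); idx = idx + 1;
        end
    end
    v = v(1:K);                                % e.g., K = 50 in this case study
    end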
Finally, a classification process is done using the resulting vector from the feature extraction analysis, obtaining a matching image previously recorded in a database. The most common technique is the Euclidean classifier (7.23) [8]; where D is the distance between two image vectors, p_i is the i-th element of the target vector (i.e. from the repository) and q_i is the i-th element of the testing image vector. To this end, the minimum distance D associated to a testing image means that it matches a target image.

D = √( Σ_{i=1}^{N} (p_i − q_i)² )   (7.23)

As noted, (7.23) is computed k times for the k target images in the repository. Moreover, if the database of target images is too large, then the computation of the set of (7.23) will take a huge amount of time, and other techniques have to be proposed.
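For completeness, the matching over a gallery of k target vectors can be sketched in a vectorized way, assuming P is a k × N matrix with one target vector per row and q is the 1 × N testing vector (illustrative names):

    D = sqrt(sum((P - repmat(q, size(P,1), 1)).^2, 2));  % eq. (7.23), one distance per target
    [~, C] = min(D);                                     % index of the matching image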
Problem 7.3 Consider the above facial recognition system. The objective is to
design an easier facial recognition system under variant pose using artificial hydro-
carbon networks.

7.3.2 Methodology

The proposed facial recognition system is shown in Fig. 7.31. As reviewed earlier, the facial recognition system considers three main components: pre-processing, feature extraction and classification.
For this application, consider an input red-green-blue (RGB) image I(x, y) with M × N pixels. Initially, the pre-processing step computes an RGB-to-grayscale transformation of the image I(x, y), giving a grayscale image G(x, y).
Once the latter is obtained, a feature extraction step over G is computed. The two-dimensional discrete cosine transform is applied to G(x, y), obtaining a transformed image F(u, v) in the frequency domain. Later, a zigzag ordering is applied to F, giving a vector v with the low-frequency elements first. In particular to this application, the first 50 elements of v are considered. Furthermore, v is highly dynamic, but for a set of face images with different poses the corresponding vectors v tend to be equal.

Fig. 7.31 Facial recognition system based on artificial hydrocarbon networks (pre-processing: RGB image to grayscale; feature extraction: discrete cosine transform and zigzag ordering; classification: AHNs and Euclidean classifier against an image database, ending with image identification)



Taking advantage of the latter, v is approximated with an artificial hydrocarbon network AHN with c_max = 1 and a fixed number of CH-primitive molecules n = 6. Recall that an AHN-model filters a system but also finds interesting points.
For example, consider a facial image like the one in Fig. 7.32. Following the above procedure, the image is transformed to a grayscale image and the DCT is computed. Later on, the zigzag ordering rearranges the DCT frequency image into a vector of features truncated to its first 50 elements. Finally, an approximation of that vector is obtained with an artificial hydrocarbon network. Notice that the transitions between CH-primitive molecules capture the tendency of the elements in the vector. Then, intermolecular distances (or alternatively bounds of molecules) can be used for capturing the information in the vector of DCT elements of an image.
Finally, a feature vector f_T with the intermolecular distances (or alternatively bounds of molecules) of a target image I_T is stored in the database. Then, k different feature vectors f_T will be stored in the image gallery. In order to compute the feature vector f_T for a set of similar target images, the set of DCT ordering vectors {v} of the sample images is used for training the AHN-structure associated to f_T.
On the other hand, the facial recognition system calculates the feature vector f_S of a testing image I_S. Thus, the classification step uses the Euclidean classifier (7.23) with p_i as the i-th element of a target vector f_T and q_i as the i-th element of a testing vector f_S. The minimum distance D_T associated to a target vector f_T will determine the matching of I_S as (7.24).

Fig. 7.32 An example of using AHN-metadata for capturing features of facial images (the zigzag-DCT feature vector of 50 elements together with the bounds of molecules)

To this end, C is the matching index of the two images I_S and I_T.

C = arg min_T {D_T}   (7.24)

7.3.3 Results and Discussion

Fifty images from the public head pose image database of Gournier, Hall and Crowley [2] were used. The selected images come from 10 different people with 5 different poses each. Figure 7.33 shows an example of the pose variations, ranging from +30° to −30° in pan angle with 0° tilt angle, in the set of selected images. In that sense, the training set consists of 60 % of the latter set, i.e. 3 random poses per person. Figure 7.34 shows one pose of the 10 people in the training sample.
The above facial recognition system based on artificial hydrocarbon networks was implemented. Ten different AHN-structures were obtained, one for each target person. These structures were trained with a fixed number of molecules n = 6, one compound c = 1 and a step size of η = 0.05. At last, one feature vector f_T with 7 elements (bounds of molecules) was found for each person. Table 7.4 shows the resultant set of feature vectors for each person.
Then, the 50 selected images were used as the testing set. The facial recognition system based on artificial hydrocarbon networks has a misclassification rate of 10 %. In fact, the system can order the facial identification from the most probable person to the least probable one. Actually, the system identifies 64 % of the images as its first choice, 18 % as its second choice, and 8 % as its third choice.

Fig. 7.33 Collection of images from person 1 in the set of images in the case study. All faces have 0° tilt angle, and the pan angles are +30°, +15°, 0°, −15° and −30° from left to right

Fig. 7.34 Image samples, (top) person 1 to person 5, (bottom) person 6 to person 10

Figure 7.35 shows the recognition rate for each person versus the choice position.
As shown in Fig. 7.35, the system can easily recognize images from people 1, 3, 6, 7, 9 and 10, while people 2, 5 and 8 are the most difficult ones to recognize. It is evident from Fig. 7.34 that person 2 and person 8 are visually similar, explaining the difficulty of recognition.
Concluding, the above experimental results demonstrate that a facial recognition system based on artificial hydrocarbon networks can be easily implemented for variant poses. Interestingly, this approach uses metadata of AHN-structures, in contrast to other applications.

Table 7.4 Set of feature vectors for each person in the set of images in the case study
Person L0 L1 L2 L3 L4 L5 L6
1 1 0 0 6 10 12 50
2 1 5 9 13 25 38 50
3 1 5 10 21 27 38 50
4 1 5 11 17 27 37 50
5 1 4 8 12 19 33 50
6 1 5 10 15 24 33 50
7 1 5 10 16 30 39 50
8 1 4 9 17 26 38 50
9 1 5 10 18 29 39 50
10 1 1 4 5 10 20 50

Fig. 7.35 Variation of the recognition rate (%) with the option number (first, second and third options) for each person. It represents the cumulative probability of image I_S to be correctly classified

References

1. Giura MD, Serina N, Rizzotto G (1997) Adaptive fuzzy filtering for audio applications using
a neuro-fuzzy modelization. In: Proceedings of the international joint conference on neural
networks, vol 4. MD, USA, pp 2162–2166
2. Gournier N, Hall D, Crowley JL (2004) Estimating face orientation from robust detection of
salient facial features. In: Proceedings of pointing, ICPR, international workshop on visual
observation of deictic gestures, Cambridge, UK, pp 17–25
3. Hafed ZM, Levine MD (2001) Face recognition using the discrete cosine transform. Int J
Comput Vision 43(3):167–188
4. Iancu I (2012) A Mamdani type fuzzy logic controller. In: Dadios EP (ed) Fuzzy logic: controls,
concepts, theories and applications. InTech Croatia, Rijeka, pp 325–350
5. Jablonski R, Turkowski M, Szewczyk R (eds) (2007) Recent advances in mechatronics.
Springer, Berlin
6. Linda O, Manic M (2010) Comparative analysis of type-1 and type-2 fuzzy control in context of
learning behaviors for mobile robotics. In: IECON—36th annual conference on IEEE industrial
electronics society, pp 1092–1098
7. Mamdani HE (1974) Application of fuzzy algorithms for control of simple dynamic plant. In:
IEEE proceedings of the institution of electrical engineers, vol 121, pp 1585–1588
8. Manikantan K, Govindarajan V, Kiran VVSS, Ramachandran S (2012) Face recognition using
block-based DCT feature extraction. J Adv Comput Sci Technol 1(4):266–283
9. Mendel JM, John RIB (2002) Type-2 fuzzy sets made simple. IEEE Trans Fuzzy Syst
10(2):117–127
10. Musikasuwan S, Garibaldi JM (2006) On relationships between primary membership functions
and output uncertainties in interval type-2 and non-stationary fuzzy sets. In: IEEE international
conference on fuzzy systems, pp 1433–1440
11. Ogata K (2002) Modern control engineering. Prentice Hall, Upper Saddle River
12. Rao KR, Yip P (eds) (2010) The transform and data compression handbook. Taylor & Francis,
London
13. Smith J (2010) Introduction to digital filters with audio applications. Stanford University Press,
Stanford
14. Smith SW (1997) The scientist and engineers guide to digital signal processing. California
Technical Publishing, San Diego
15. Taal C, Hendriks R, Heusdeus R, Jensen J (2011) An algorithm of intelligibility prediction of
time-frequency weighted noisy speech. IEEE Trans Audio Speech Lang Process 19(7):2125–2136
16. Takagi T, Sugeno M (1985) Fuzzy identification of systems and its applications to modeling
and control. IEEE Trans Syst Man Cybern 15(1):116–132
17. Tsukamoto Y (1979) An approach to fuzzy reasoning method. In: Gupta M, Ragade R, Yager
R (eds) Advances in fuzzy set theory and applications. Elsevier, Amsterdam, pp 137–149
Appendix A
Brief Review of Graph Theory

Artificial organic networks as well as artificial hydrocarbon networks involve key concepts of graph theory. In this context, relevant definitions of graphs are summarized in the following.
Definition A.1 (general graph) Let G = (V, E, Σ) be an ordered triple, where V ≠ ∅, V ∩ E = ∅ and Σ : E → P(V) is a map from the set E to the power set of V, such that |Σ(e)| ∈ {1, 2} for each e ∈ E. Then, G is called a general graph, V is a set of vertices of G, E is a set of edges of G, and Σ is an edgemap with Σ(e) called endvertices.
Figure A.1 shows different general graphs. Notice that edgemaps ensure that each edge has up to two associated vertices, i.e. |Σ(e)| = 1 if the edge loops at the same vertex and |Σ(e)| = 2 if the edge connects two different vertices.
Definition A.2 (simple graph) Let G = (V, E) be an ordered pair, where V ≠ ∅ and E is a set of 2-element subsets of V, such that E ⊆ {{u, v} | u, v ∈ V, u ≠ v}. Then, G is called a simple graph.
As noted, a simple graph is a particular case of a general graph. In fact, simple graphs do not accept edges connecting a vertex to itself, as shown in Fig. A.2. Actually, artificial organic networks and artificial hydrocarbon networks use the notion of simple graphs to define molecular units.
Definition A.3 (degree of vertex) Let G = (V, E) be a graph and u ∈ V be a vertex of G. The degree of u, denoted by d_G(u), is defined as:

d_G(u) = |{ e ∈ E : e = {u, u_i}, u_i ∈ V, u_i ≠ u }|   (A.1)

Where |S| stands for the cardinality of the set S.
Figure A.3 shows a simple graph with the degree of its vertices, denoting the number of edges (relationships) that each vertex has.
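As a small illustration of Definition A.3, the degree of a vertex can be computed from an edge list; the graph below is only an example:

    E = [1 2; 1 3; 2 3; 3 4];               % edges {u, v} of a simple graph
    u = 3;
    dG = sum(E(:,1) == u | E(:,2) == u);    % degree of u as in (A.1); here dG = 3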


Fig. A.1 Examples of general graphs

Fig. A.2 Examples of simple graphs

Fig. A.3 Degree of vertices in a simple graph

Definition A.4 (subgraph) Let G = (V, E) and G′ = (V′, E′) be two graphs. Then, G′ is a subgraph of G, denoted G′ ⊆ G, if V′ ⊆ V and E′ ⊆ E.
From Definition A.4, it can be concluded that G is induced by G′. This means that G is an extension of G′, or that it is spanned from G′. Thus, artificial organic networks and artificial hydrocarbon networks use the terms induced and spanned indistinctly. Figure A.4 shows some examples of subgraphs contained in a simple graph.
Fig. A.4 Examples of subgraphs in a simple graph


Appendix B
Experiment of Signal-Molecule Correlation

The following experiment was run in order to find any correlation between the output response of a system and the type of CH-primitive molecules that form an artificial hydrocarbon compound, using the energy information of the system. The methodology and results of the experiment are presented below.

B.1 Overview of the Experiment

Let ω be a system with an input signal x and an output signal y such that the behavior of the system relates the input and the output as ω(x) = y. Also, suppose that there is no other information about the system. Then, assume that an artificial hydrocarbon network AHN = ⟨φ_f, η, τ, x⟩ with behavior S(x) has to approximate ω. Hence, the build topological structure problem (BTSP) claims for finding the molecules and compounds, φ_f, of an artificial hydrocarbon network using an algorithm f, such that the AHN can approximate ω. In that sense, an experiment was designed and run in order to find any correlation between ω and φ_f.
For instance, consider the energy E of the signal y in the interval x ∈ [a, b], calculated as (B.1); where ‖g‖ stands for the norm of g.

E(y) = ∫_a^b ‖y‖² dx,  x ∈ [a, b]   (B.1)

Thus, using (B.1), the energy E_i of a partition ω_i is expressed as (B.2); where x_i is the input and y_i is the output signal of ω_i in the interval x_i ∈ [r_1, r_2].

E_i = E(y_i) = ∫_{r_1}^{r_2} ‖y_i‖² dx,  x_i ∈ [r_1, r_2]   (B.2)
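Numerically, (B.1)–(B.2) reduce to integrating the squared samples of a signal; a minimal MATLAB sketch with the trapezoidal rule is:

    x = linspace(0, 5, 1000);          % input domain, as in the experiment
    y = sin(2*pi*x).*exp(-x);          % an illustrative energy signal
    E = trapz(x, y.^2);                % E(y), eq. (B.1)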


From signal theory, an energy signal is said to be any signal s such that 0 < E(s) < ∞. Thus, y is an energy signal, and so is S(x). In that sense, the output response y might be related to the behavior S(x).

B.2 Design of the Experiment

The experiment aims to determine the relationship between the energy signal of the
system and the structure of an artificial hydrocarbon compound. Initially, a set of
energy signals was defined and a prototype of an artificial hydrocarbon compound
was designed.

B.2.1 Sample Definition

The sample of the experiment consists of a subset S_T of test signals s_k ∈ S_T of the infinite set of all possible causal, aperiodic, energy signals in the closed interval x ∈ [a, b] such that their energy is 0 < E(s_k) < ∞. From signal theory, the subset of test signals was randomly constructed with composites of fundamental signals (unit step, rectangular pulse, signum function, ramp, sine, sinc, exponential, and unit impulse signals). In practice, the input domain was arbitrarily chosen to be x ∈ [0, 5] and the experiment required a 99 % confidence. Thus, a total of 22,312 test signals were chosen randomly.

B.2.2 Design of Artificial Hydrocarbon Compound Prototypes

A set of artificial hydrocarbon compounds S_C was designed using n = 2 CH-primitive molecules. In order to create the most stable compounds with two CH-primitive molecules, three compounds C_i ∈ S_C for i = 1, 2, 3 were proposed: a simple bond based molecule C_1 = (CH_3 − CH_3), a double bond based molecule C_2 = (CH_2 = CH_2) and a triple bond based molecule C_3 = (CH ≡ CH). Actually, each molecule has behavior α_j for j = 1, 2, and each compound C_i has behavior β_i expressed as a piecewise function (B.3); where r_1 is the middle position of the interval, i.e. r_1 = ½(a + b).

β_i(x) = { α_1(x) if a ≤ x < r_1;  α_2(x) if r_1 ≤ x ≤ b }   (B.3)

Algorithm B.1 Methodology of the signal-molecule correlation experiment.

for each test signal s_k ∈ S_T do
    Measure E(s_k) using (B.2)
    for each i = 1 : 3 do
        Approximate s_k with behavior β_i of C_i using least squares estimates
        Measure E(β_i) using (B.2)
    end-for
    Classify s_k into the group of Class_i
end-for
Get statistics of each class

B.2.3 Methodology

The experiment consists of classifying each test signal s_k ∈ S_T depending on how close the energy of the signal E(s_k) is to the energy of the compound behavior E(β_i) of one of the three compounds C_i ∈ S_C. Three groups for classification are possible:
• Class_1: the best approximation via simple bond based molecules.
• Class_2: the best approximation via double bond based molecules.
• Class_3: the best approximation via triple bond based molecules.
The methodology adopted for the experiment is depicted in Algorithm B.1.
In order to classify each test signal, the criterion is based on the minimum distance e_i between the energy of the test signal E(s_k) and the energy of each compound behavior E(β_i) of the compounds C_i ∈ S_C for i = 1, 2, 3. This criterion is synthesized in (B.4) with the distance (B.5); where |g| stands for the absolute value of g.

s_k ∈ Class_m,  m = arg min_i (e_i)   (B.4)

e_i = |E(s_k) − E(β_i)|,  i = 1, 2, 3   (B.5)
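A compact sketch of the criterion (B.4)–(B.5), assuming Ebeta is a 1 × 3 vector with the energies E(β_i) of the three compound behaviors (illustrative name):

    Es = trapz(x, sk.^2);              % energy of the test signal, eq. (B.2)
    e  = abs(Es - Ebeta);              % distances to each compound, eq. (B.5)
    [~, m] = min(e);                   % s_k belongs to Class_m, eq. (B.4)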

B.3 Results and Discussion

After the experiment was run, the distribution of each class was obtained, as shown in Fig. B.1. Qualitatively, it can be seen that Class_1 has the greatest number of samples, in comparison with Class_3, which has the lowest number of test signals. Table B.1 summarizes the statistics from Fig. B.1. In fact, from Table B.1, Class_1 has 96.4 % of the test signals, Class_2 has 2.8 % of the test signals, and Class_3 only has 0.8 % of the test signals. In that sense, it shows a strong relationship to real organic tendencies, in which most bonds formed in nature are simple bonds, double bonds appear less often, and triple bonds appear last.

Fig. B.1 Distribution of classes. a Class_1: the best approximation via simple bond molecules, b Class_2: the best approximation via double bond molecules, and c Class_3: the best approximation via triple bond molecules

Table B.1 Statistics of classes

Type of class   Mean energy   Standard deviation   Size
1               152.74        341.64               13,577
2               132.21        488.04               388
3               48.18         108.27               110

Table B.1 also reports the mean energy. Roughly, it can be interpreted as a measurement of the dynamics of the signals: the greater the mean energy is, the more dynamic the signal is. Actually, the dynamics of signals is due to oscillations and large amplitude values, as shown in Figs. B.2, B.3 and B.4. Interestingly, all signals in each class are very similar.

Fig. B.2 Class_1: highly dynamic, unstable signals

Fig. B.3 Class_2: medium dynamic, quadratic-based signals

Fig. B.4 Class_3: low dynamic, linear and/or constant quasi-stable and stable signals

For instance, Fig. B.2 shows the tendency of profiles in Class_1, where signals are irregular and with some peaks, a few more are periodic (several presenting phase offsets), and they may have discontinuities. Class_2 presents smooth signals with concavities shaping quadratic functions, as observed in Fig. B.3. Finally, Class_3, depicted in Fig. B.4, presents constant and quasi-constant signals, and some others that are linear. Thus, a relationship between the energy of signals and compound behaviors (explained by the dynamics of signals) was revealed. In nature, the most flexible organic compounds present large saturated, linear chains and branched chains, while the most rigid and stable organic compounds are made of double or triple bonds (or of symmetric geometries, out of scope here). This notion is shown in artificial hydrocarbon networks by the above discussion, in which signals with simple bonds (Class_1) are not too stable, but dynamic; signals with double bonds (Class_2) are less dynamic; and signals with triple bonds (Class_3) are the most stable ones. In fact, another interpretation of the experimental results deals with the notion of flexibility: structures of artificial hydrocarbon networks with lower order of bonds are more flexible than structures with higher order of bonds. Finally, Table B.1 can be used for selecting the best pair of molecules when the energy of the output signal of a given system can be measured.
Appendix C
Practical Implementation of Artificial Hydrocarbon Networks

This appendix provides example codes of the programs employed in the real-world applications depicted in Chap. 7. For detailed information about the background and description of these applications, refer to that chapter. In addition, a quick guide for implementing artificial hydrocarbon networks is presented at the end of the appendix. If required, Appendix D summarizes the documentation of the Artificial Organic Networks Toolkit using LabVIEW™.
C.1 Program Codes for Audio Filtering

The adaptive audio filter based on artificial hydrocarbon networks of Algorithm 7.1 is implemented in LabVIEW™ as shown in Fig. C.1. First, the original signal is obtained from the WAV SubVI (see Fig. C.2). As shown in Fig. C.2, the audio signal is extracted from a *.wav file using the first channel (monochannel). The signal is transformed into an (X, Y) system. Then, this signal is corrupted with white noise using the Uniform White Noise VI (Fig. C.1).
The corrupted signal, also expressed as an (X, Y) system, enters a While Loop in which the signal is split into windows (batches) using the BATCH SubVI depicted in Fig. C.3. This batch code receives an initial time value (initial) and the length in time (WindowTime). The outputs of the BATCH SubVI are two arrays containing the portion of the system (X, Y) in the interval [initial, initial + WindowTime].
Once a segment of the corrupted signal is obtained, it is introduced to the ModifiedSimpleAHN VI (see Fig. C.1), which computes the modified artificial hydrocarbon networks algorithm. In fact, it receives its training parameters (Training Properties): the step size (StepSize), the maximum number of molecules in the compound (NumberOfMolecules), the maximum number of compounds (fixed to one in this application), and the tolerance value (Tolerance). As a result, this VI gives the response (MixtureResponse) of the trained artificial hydrocarbon network. At last, both the corrupted signal and the adaptive AHN-filter response are shown in a graph (XY Graph).
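A rough MATLAB analogue of this batch loop is sketched below; trainAHN is a hypothetical stand-in for the ModifiedSimpleAHN VI and is not part of the toolkit:

    windowTime = 0.1;                              % assumed window length (s)
    t0 = X(1);
    while t0 < X(end)
        idx = (X >= t0) & (X < t0 + windowTime);   % BATCH: one window of (X, Y)
        model = trainAHN(X(idx), Y(idx));          % hypothetical training call
        t0 = t0 + windowTime;                      % slide to the next batch
    end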


Fig. C.1 Block diagram of the simple adaptive AHN-filter

Fig. C.2 Block diagram of the WAV SubVI in Fig. C.1

Fig. C.3 Block diagram of the BATCH SubVI in Fig. C.1

For this example code, the audio signal lasted 5 s. Thus, the stop criterion of the While Loop compares the current time of the signal against the value 4.999, stopping when the time is greater than or equal to it. The front panel of the code is shown in Fig. C.4.

Fig. C.4 Front panel of the simple adaptive AHN-filter

C.2 Program Codes for Position Control of DC Motors

The position control of the direct-current (DC) motor designed in Chap. 7 requires three steps: fuzzification, fuzzy inference engine and defuzzification. Since the first two steps can be implemented with any of the fuzzification and fuzzy inference engine procedures detailed in the proper literature, this appendix only shows the defuzzification program code.
The defuzzification step of the fuzzy-molecular inference (FMI) controller written in LabVIEW™ is shown in the block diagram of Fig. C.5. In that case, the fuzzy membership values of the antecedents of the rules (FuzzyEvaluationValues) are collected in an array. This array enters a For Loop in order to evaluate the consequent values. The evaluation is done via an artificial hydrocarbon compound implemented in the Compound VI, which receives the Molecular Parameters (i.e. type of molecule, number of molecules, order of bonding and hydrogen values), the Compound Parameters (i.e. type of behavior, bounds and optional properties of covalent bonds) and the fuzzy membership values of the antecedents.
Notice that the consequent values are multiplied by the fuzzy membership values of the antecedents. Then, these products are summed, as well as the fuzzy membership values of the antecedents, in order to compute the Crisp Output Value. It is remarkable to say that the VI shown in Fig. C.5 can be used directly for type-1 fuzzy systems; reproducing it a second time, it can be used for type-2 fuzzy systems.

Fig. C.5 Block diagram of the defuzzification step in the FMI controller

C.3 Program Codes for Facial Recognition

The facial recognition system using artificial hydrocarbon networks was implemented in MATLAB®. The main script for training an artificial hydrocarbon network and for extracting the feature vector made of bounds of molecules is shown below:
% Expected inputs in the workspace: X and Y (zigzag-DCT feature data),
% n (number of molecules) and StepSize (learning rate).
close all;
warning off all;

% Lower and upper bounds of input domain
a = min(X);
b = max(X);

% Uniformly initialize intermolecular distances over the input domain
r = a:(b-a)/(n-2):b;

% Start the gradient descent method for bond lengths
t = 1;
while (t <= 300)

    % Calculate bounds of molecules
    Intervals = zeros(1,n);
    Intervals(1) = r(1) + a;
    SizeR = size(r,2);
    for k = 2:SizeR
        temp = r(k) + Intervals(k-1);
        if (temp <= b)
            Intervals(k) = temp;
        else
            Intervals(k) = b;
        end
    end
    Intervals(n) = b;

    % Split the system into one partition per molecule
    counter = 1;
    SizeX = size(X,2);
    SizeY = size(Y,2);
    s = struct('X',[],'Y',[]);
    for k = 1:n
        i = 1;
        while ((X(counter) < Intervals(k)) && (counter < SizeX))
            s(k).X(i) = X(counter);
            s(k).Y(i) = Y(counter);
            i = i + 1;
            counter = counter + 1;
        end
    end
    s(n).X(i) = X(SizeX);
    s(n).Y(i) = Y(SizeY);

    % Create compound using a saturated linear chain
    ord = 2 * ones(1,n);
    ord(1) = 3;
    ord(end) = 3;

    % Calculate hydrogen values via least squares (polynomial fitting)
    AHNApproximation = [];
    EMolecule = zeros(1,n);
    HTemp = struct('H',[]);
    HydrogenValues = struct('H',[],'A',[]);
    for k = 1:n
        HTemp(k).H = polyfit(s(k).X,s(k).Y,ord(k));
        HydrogenValues(k).H = roots(HTemp(k).H);
        HydrogenValues(k).A = HTemp(k).H(1);
        f = polyval(HTemp(k).H,s(k).X);
        EMolecule(k) = sum((s(k).Y - f).^2);   % energy of the k-th molecule
        s(k).Y = f;
        AHNApproximation = [AHNApproximation f];
    end

    % Update all intermolecular distances with the energy gradient
    for k = 1:SizeR
        r(k) = r(k) - StepSize * (EMolecule(k) - EMolecule(k+1));
    end
    t = t + 1;

end
Intervals = [a Intervals];

As shown above, the inputs of the script are: an array X with the element numbers of the input vector coming from the discrete cosine transform (DCT) rastered in zigzag; an array Y with the feature vector coming from the DCT rastered in zigzag; the number of molecules n in the artificial hydrocarbon compound; and the step size StepSize. The resultant feature vector using artificial hydrocarbon networks is obtained from the Intervals variable.

C.4 Quick Guide for Implementing Artificial Hydrocarbon Networks

This section presents a quick guide for using artificial hydrocarbon networks to model systems.
Observations of chemical organic compounds reveal enough information to derive the artificial organic networks technique. From studies of organic chemistry, organic compounds are the most stable ones in nature. In addition, molecules can be seen as units of packaging information; thus, complex molecules and their combinations can determine a nonlinear interaction of information. Moreover, molecules can be used for encapsulation and potential inheritance of information. Thus, artificial organic networks take advantage of this knowledge, inspiring the artificial hydrocarbon networks algorithm that infers and classifies information based on stability and on chemical rules that allow the formation of molecules.
In that sense, artificial organic networks define four components, i.e. atoms, molecules, compounds and mixtures, and two basic interactions among components, i.e. covalent bonds and the chemical balance interaction. In order to obey chemical rules, the following definitions of artificial hydrocarbon networks hold:
• Atoms. They are the basic units with structure. No information is stored in them. In addition, when two atoms have the same number of degrees of freedom they are called similar atoms, and different atoms otherwise. The degree of freedom is the number of valence electrons that allow atoms to be linked with others. Only hydrogen atoms and carbon atoms can be found.
• Molecules. They are the interactions of two or more atoms made of covalent bonds. These components have structural and behavioral properties. Structurally, they conform the basis of an organized structure, while behaviorally they can contain information. Thus, molecules are known as the basic units of information. If a molecule has filled out all of the valence electrons in its atoms, it is stable; but if a molecule has at least one valence electron without filling, it is considered unstable. Two behaviors of molecular units can be found in artificial hydrocarbon networks: the sum form (5.10)–(5.11) and the product form (5.18)–(5.19).
• Compounds. Structurally, they are two or more molecules interacting with each other, linked with covalent bonds. Their behaviors are mappings from the set of molecular behaviors to real values.
• Mixtures. They are the interaction of two or more molecules and/or compounds without physical bonds. Mixtures are linear combinations of molecules and/or compounds like (5.28), forming a basis of molecules with weights, the so-called stoichiometric coefficients.
• Covalent bonds. They are of two types: polar covalent bonds refer to the interaction of two similar atoms, while nonpolar covalent bonds refer to the interaction of two different atoms. Three models of covalent bond behaviors can be identified: the weighted sum interaction (4.5)–(4.7), the exponential interaction (4.8) and the piecewise interaction (4.16). For implementation purposes, the latter is the easiest covalent bond behavior.
• Chemical balance interaction. It refers to finding the proper values of the stoichiometric coefficients in mixtures in order to satisfy constraints in artificial hydrocarbon networks.
On the other hand, the artificial hydrocarbon networks algorithm is divided into two steps: the training process and the inference process (refer to Sect. 3.4.3 for further details). Both processes are summarized below in order to implement them easily.

C.4.1 Training Process

This process will build and train an artificial hydrocarbon network AHN for modeling a given system ω. In order to do this, compute the following steps:
1. Collect q samples of the attribute X and target Y variables from the system and represent them as a pair of variables ω = (X, Y).
2. Determine the maximum number of compounds c_max > 0 that the artificial hydrocarbon network AHN can have.
3. Determine the maximum number of molecules k_max ≥ 2 that each compound can have.
4. Fix the learning rate value 0 < η < 1 that is used for training the artificial hydrocarbon network AHN. Small values of η work well.
5. Determine the tolerance value ε > 0 (error between the system and the model) expected in the artificial hydrocarbon network AHN.
6. Use Algorithm 5.3 for training multidimensional artificial hydrocarbon networks.
At last, Algorithm 5.3 will return the artificial hydrocarbon network AHN in terms of its structure φ (the total number of compounds used, the total number of molecules per compound and the links between molecules), and all parameters required by that structure (the set of hydrogen values H, the stoichiometric coefficients τ and the intermolecular distances Θ).
In order to implement Algorithm 5.3, consider the flow diagram shown in Fig. C.6. As noted, it consists of two main procedures: generate a set of compounds (build and train a mixture of compounds) and mix them up to find the optimal stoichiometric coefficients.
For instance, consider Algorithm 5.3. In order to generate the set of compounds, it is necessary to generate one compound at a time (see Fig. C.6).
To generate a compound, it is required to initialize the number of the current compound i and to set the minimal number of molecules k = 2 in the compound. Once it is done, the CREATE-COMPOUND(R, k) procedure (Algorithm 4.1) is called to build the structure of the compound; where R stands for the residual system (initially set to be the system ω) and k is the number of molecules in the compound. In particular, this procedure has to split the system ω = (X, Y) into equal partitions. Thus, it requires initializing the set of bounds of molecules L using (4.24) and (4.25). Three different splitting processes were presented throughout the book. For example, to split multivariate systems, (6.16) can be applied to use a hyperplane criterion, or (6.19) to use a centers-of-molecules criterion. If the system is univariate, (4.26) can be applied. After that, it is required to measure the energy of the partitions using (5.33) and to compare this energy with Table 4.2.
Then, that compound is optimized and trained using the OPTIMUM-COMPOUND(R, C_i, k, η) procedure (Algorithm 4.4); where C_i stands for the current compound in the set of all compounds. This procedure initializes the set of intermolecular distances Θ_i and recalculates the set of bounds of molecules L using (4.25). Then, it has to compute the least squares estimates using (5.35) and finally it has to update the intermolecular distances using (5.36) and (5.37).

Fig. C.6 Flow diagram of the artificial hydrocarbon networks algorithm (initialize a minimal compound → build the structure of the compound → optimize and train the compound → check the optimal number of molecules → check the optimal number of compounds → create the mixture of compounds)

Finally, it is required to determine the optimality of the number of molecules that conform the compound. In that sense, the ENTHALPY-RULE(R, C_i, k) procedure (Algorithm 5.1) is called. It returns the updated number of molecules k and the heuristic value m. This finishes the process of generating a compound.
To determine if the number of compounds is enough to reach a tolerance value ε between the system and the model, (4.15) is computed and R is updated. This finishes the process of generating the set of compounds in the artificial hydrocarbon network.

The last step of Algorithm 5.3 mixes the set of compounds using the least squares estimates (4.19) in order to obtain the set of stoichiometric coefficients τ, which terminates the algorithm.

C.4.2 Inference Process

This process can be used for making a query x to the artificial hydrocarbon network AHN and obtaining a value response y (prediction). In order to do this, compute the following steps:
1. Build the structure of the AHN using φ, H, τ and Θ.
2. Feed the AHN with a query x.
3. Calculate the value response y.
4. Repeat steps 2 and 3 until all queries are responded.
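For a single compound with the piecewise behavior, the inference process reduces to locating the active molecule and evaluating its polynomial. A minimal MATLAB sketch with the variables produced by the script of Sect. C.3 (Intervals and HTemp) is:

    k = find(x >= Intervals(1:end-1) & x <= Intervals(2:end), 1);  % active molecule
    y = polyval(HTemp(k).H, x);                                    % value response y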
Appendix D
Artificial Organic Networks Toolkit Using LabVIEW™

This appendix presents a quick-start guide for designing applications with the Artificial Organic Networks Toolkit using LabVIEW™. First, an installation guide is presented. Then, the functions palette is described. Finally, a complete list of function blocks and their descriptions is summarized.

D.1 Installation Guide

Refer to the online reference of the book in order to download the release version of the Artificial Organic Networks Toolkit using LabVIEW™. Then, follow these instructions in order to install it:
1. Close the LabVIEW™ software if it is running.
2. Unzip the AONToolkit_Package.zip file.
3. Locate the AON_Toolkit folder and the dir.mnu file and put them inside the user.lib folder following the path: Program Files → National Instruments → LabVIEW → user.lib. It is remarkable to say that the dir.mnu file already present in the system will be replaced.
4. Open the LabVIEW™ software and verify that the Artificial Organic Networks Toolkit (AON) appears under the User Libraries in the functions palette.

D.2 Functions Palette

When the Artificial Organic Networks Toolkit is installed, it appears under the User Libraries in the functions palette. Figure D.1 shows the whole toolkit, which is classified into four categories: Components, Interactions, Chemical Rules and Training.


Fig. D.1 Artificial Organic Networks Toolkit

Fig. D.2 Components palette of the Artificial Organic Networks Toolkit

The Components palette (Fig. D.2) contains function blocks to implement the basic components of artificial hydrocarbon networks like molecules, compounds and mixtures.
The Interactions palette (Fig. D.3) contains function blocks to implement covalent bonds between components of artificial hydrocarbon networks. In fact, these virtual instrument (VI) functions implement three models of covalent bonding behaviors.
The Chemical Rules palette (Fig. D.4) contains function blocks to implement rules to create compounds, to split systems, to calculate intermolecular distances, and to measure energy in components and systems.
Finally, the Training palette (Fig. D.5) contains function blocks to implement training algorithms of artificial hydrocarbon networks. In particular, both the simple-AHN and the modified-AHN algorithms are programmed.
Refer to the next section for general information about each function block in the toolkit.

Fig. D.3 Interactions palette of the Artificial Organic Networks Toolkit

Fig. D.4 Chemical Rules palette of the Artificial Organic Networks Toolkit

Fig. D.5 Training palette of the Artificial Organic Networks Toolkit



D.3 Function Blocks and Descriptions

This section summarizes the function blocks of the Artificial Organic Networks Toolkit using LabVIEW™.

D.3.1 Functions of Components Palette

CH Molecule Sum:

It computes the Molecular Behavior of a CH-primitive molecule, in its sum


form, due to an Input Signal. In that case, Atomic Values represents a cluster of
both Hydrogen Values of size equals to the order of bond (Degree Of Freedom)
and the carbon value (Offset) of the molecule.
CH Molecule Product:

It computes the Molecular Behavior of a CH-primitive molecule, in its product


form, due to an Input Signal. In that case, Atomic Values represents a cluster of
both Hydrogen Values of size equals to the order of bond (Degree Of Freedom)
and the carbon value (Offset) of the molecule.
CH Molecule Selector:

This VI computes the Molecular Behavior of a CH-primitive molecule due to an


Input Signal using a specific form determined manually in the Type Of Molecule
(i.e. Product Form, or Sum Form). Again, Atomic Values represents a cluster of both
Hydrogen Values of size equals to the order of bond (Degree Of Freedom) and the
carbon value (Offset) of the molecule.
CH Molecule (Polymorphic):
Appendix D: Artificial Organic Networks Toolkit Using LabVIEWTM 215

This is a polymorphic VI that computes the Molecular Behavior of a CH-


primitive molecule due to an Input Signal depending on the Atomic Values input
(Product Form if Hydrogen Values are in complex representation and Sum Form if
they are in double representation). In that case, Atomic Values represents a cluster
of both Hydrogen Values of size equals to the order of bond (Degree Of Freedom)
and the carbon value (Offset) of the molecule.
Compound:

It computes the Compound Behavior due to an Input Signal. It requires the set
of Molecular Parameters (Type of Molecule, Number of Molecules, Degrees Of
Freedom and Hydrogen Values) and the Compound Parameters (Type Of Com-
pound Behavior, Bounds and optional Properties of the Weighted Sum behavior).
Compound N Points:

It computes an array of Compound Behaviors due to an array of values in the


Input Signal. It requires the set of Molecular Parameters (Type of Molecule, Num-
ber of Molecules, Degrees Of Freedom and Hydrogen Values) and the Compound
Parameters (Type Of Compound Behavior, Bounds and optional Properties of
the Weighted Sum behavior).
Compound Piecewise Behavior:

It computes the Compound Behavior, using the piecewise behavior, due to an Input
Signal. It requires the Bounds parameters to define the piecewise molecular actions
and an array of Molecular Behaviors.
Compound Weighted Or Exponential Behavior:

It computes the Compound Behavior, using the weighted or exponential behavior,
due to an Input Signal. It requires manually setting the Type Of Compound Behavior
(Weighted Sum or Exponential) and, if necessary, the Properties of the Weighted
Sum. Also, it requires an array of Molecular Behaviors.

Mixture:

It computes the Mixture Behavior of an artificial hydrocarbon network due to an
Input Signal. It requires the Parameters Of Compounds (Molecular Parameters and
Compound Parameters, as in the Compound VI), the Number of Compounds and
an array of Stoichiometric Coefficients.
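
As a reference for how these components compose, the following Python sketch
(same assumed molecular forms as above) shows a piecewise compound and a
mixture taken as the stoichiometric weighted sum of compound behaviors. The
weighted-sum mixture model is an assumption, but it reproduces the mixture value
S(0.8) = 0.056 of Example E.3 in Appendix E.

    def compound_piecewise(x, molecules, bounds):
        # molecules: list of callables f(x); bounds: increasing vector of
        # length len(molecules) + 1 partitioning the input domain.
        for i, behavior in enumerate(molecules):
            if bounds[i] <= x <= bounds[i + 1]:
                return behavior(x)
        return 0.0  # x falls outside the input domain

    def mixture(x, compounds, stoichiometric):
        # Assumed mixture model: weighted sum of compound behaviors,
        # S(x) = sum_j tau_j * psi_j(x).
        return sum(t * psi(x) for t, psi in zip(stoichiometric, compounds))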

D.3.2 Functions of Interactions Palette

Covalent Bonds Exponential:

It computes the covalent Bond Behavior between two Molecular Behaviors using
the exponential interaction.
Covalent Bonds Weighted Sum:

It computes the covalent Bond Behavior between two Molecular Behaviors using
the weighted sum interaction. It requires the Input Signal and the Properties of
the bond.
Covalent Bonds Weighted Sum Weights:

This VI calculates the Weights of two molecules. It requires an Input Signal and
the Properties of bond interactions.
Covalent Bonds (Polymorphic):

This is a polymorphic VI that computes the covalent Bond Behavior between two
Molecular Behaviors. The polymorphic instance can be selected from the Select
Type item of the contextual menu of this VI. The Weighted Sum selection requires
an Input Signal and the Properties of bond interactions.

D.3.3 Functions of Chemical Rules Palette

Bond Energy:

It creates a linear chain of CH-primitive molecules using the bond energy criterion.
This VI requires the System to model and the Number Of Molecules in the
compound, and returns the order of bonds (Bond Orders) between molecules and
the Degrees Of Freedom of each molecule.
Saturated Linear Chain:

It creates a saturated linear chain of CH-primitive molecules. This VI requires the
Number Of Molecules in the compound, and returns the order of bonds (Bond
Orders) between molecules and the Degrees Of Freedom of each molecule.
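
For illustration, here is a hedged Python sketch of the structure this VI could
return, assuming the usual saturated-hydrocarbon topology CnH2n+2 (single bonds
throughout, CH3-like molecules at the ends and CH2-like molecules in between);
this topology is an assumption, not a verified description of the VI's output.

    def saturated_linear_chain(n_molecules):
        # Assumed CnH2n+2 topology: every bond order equals 1 (single bonds).
        bond_orders = [1] * (n_molecules - 1)
        if n_molecules == 1:
            degrees_of_freedom = [4]  # a lone CH4-like molecule
        else:
            # CH3-like molecules at both ends, CH2-like molecules in between
            degrees_of_freedom = [3] + [2] * (n_molecules - 2) + [3]
        return bond_orders, degrees_of_freedom

    # saturated_linear_chain(5) -> ([1, 1, 1, 1], [3, 2, 2, 2, 3])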
Chemical Rule Selector:

It creates a linear chain of CH-primitive molecules to form a compound using one
of the Type Of Rules (Simple Bond or Bond Energy). It requires the Number Of
Molecules in the compound and, if required, the System to model. This VI returns
the order of bonds (Bond Orders) between molecules and the Degrees Of Freedom
for each molecule. The Simple Bond type returns a saturated linear chain, and the
Bond Energy type returns a linear chain based on the bond energy criterion.
Linear Chain (Polymorphic):

This is a polymorphic VI that creates a linear chain of CH-primitive molecules to
form a compound, using either: 1. the Saturated Linear Chain VI, or 2. the Bond
Energy Criterion VI.
Intermolecular Distances Bounds:

It determines the Bounds of action of CH-primitive molecules in a compound. It
requires the interval of the input domain [Lower Bound, Upper Bound] and the
Intermolecular Distances that represent the lengths of bonds between molecules.
Intermolecular Distances Equal:

This VI determines a set of Intermolecular Distances between a Number Of
Molecules with equal lengths distributed in the interval [Lower Bound, Upper
Bound].
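
A minimal Python sketch of these two geometric helpers, under the assumption
that equal distances simply partition [Lower Bound, Upper Bound] uniformly and
that bounds are the cumulative sums of the distances; this assumption matches the
bounds vector (−1, 0, 1) used for two molecules over [−1, 1] in Example E.2 of
Appendix E.

    def equal_intermolecular_distances(n_molecules, lower, upper):
        # Assumed: the n molecules share the interval in equal parts.
        return [(upper - lower) / n_molecules] * n_molecules

    def bounds_from_distances(lower, distances):
        # Assumed: bounds accumulate the bond lengths from the lower bound.
        bounds = [lower]
        for r in distances:
            bounds.append(bounds[-1] + r)
        return bounds

    # Two molecules over [-1, 1] give distances [1.0, 1.0] and
    # bounds [-1, 0.0, 1.0], matching the vector used in Example E.2.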
Energy System:

It calculates the Energy of a System using the energy signal theory.
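
Assuming the standard discrete-signal energy definition, E = Σk |yk|², a one-line
Python sketch is (the exact definition used by the toolkit is an assumption):

    def signal_energy(samples):
        # Standard discrete-signal energy (assumed): E = sum |y_k|^2.
        return sum(abs(y) ** 2 for y in samples)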


Interval:

It determines the Lower Bound and the Upper Bound in the input domain of the
System.
Molecules Selector:

It determines the Degrees Of Freedom for each molecule in a compound using the
order of bonds (Bond Orders Extended) between molecules.
Look Up Table:

This is the look-up table that helps to find the best order of bond (Bond Energy)
between two CH-primitive molecules.
Split System:

This VI splits the System into the specified Number Of Molecules, using the set of
Bounds for each partition. It returns the set of partitions of the system (Partitions
System).

D.3.4 Functions of Training Palette

Find Hydrogen Values:

This VI determines the best Hydrogen Values of each CH-primitive molecule in a
compound. It requires the Degrees Of Freedom of each molecule, the Type Of
Molecules used and the partitions of the system (Partitions System) to model. It
also returns the Error between the Molecular Behaviors and the partitions of the
system (Partitions System), and the Molecular Behaviors.
Find Stoichiometric Coefficients:

It calculates the Stoichiometric Coefficients and the Mixture Response of the
artificial hydrocarbon network, using the System to model and the Compound
Behaviors.
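
Since the Mixture Behavior is linear in the stoichiometric coefficients, one
plausible sketch of this calculation is an ordinary least-squares fit. The use of
least squares here is an assumption about the method, illustrated with NumPy:

    import numpy as np

    def find_stoichiometric_coefficients(system_output, compound_behaviors):
        # compound_behaviors: one sequence of samples per compound.
        # Assumed method: least squares, tau = argmin || y - Psi tau ||^2.
        psi = np.column_stack([np.asarray(c) for c in compound_behaviors])
        tau, *_ = np.linalg.lstsq(psi, np.asarray(system_output), rcond=None)
        return tau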
Optimum Compound:

This VI builds an artificial hydrocarbon compound using the Optimum Compound
Algorithm. It requires the Range of the input domain, the Type Of Molecule, the
System to model, the Intermolecular Distances, the Number Of Molecules, the
Degrees Of Freedom and the Optimum Parameters (Step Size and Max Iterations).
Optimum Lengths Delta Rule:

It calculates the change of intermolecular distances (Delta Length Of Bond) using
the error functions Energy 1 and Energy 2, with a real-valued Step Size between
0 and 1.

Optimum Lengths Updated Lengths:

It calculates the Updated Intermolecular Distance between two molecules using
the Current Intermolecular Distance and the error functions Energy 1 and Energy 2,
with a real-valued Step Size.
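
One hedged reading of these two VIs, written as a Python sketch; both the sign
convention and the exact form of the update are assumptions, not the toolkit's
verified formula:

    def delta_length(energy_1, energy_2, step_size):
        # Assumed gradient-like rule: the bond length moves in proportion
        # to the difference of the neighboring error energies
        # (sign convention is an assumption).
        return step_size * (energy_1 - energy_2)

    def updated_length(current_length, energy_1, energy_2, step_size):
        return current_length + delta_length(energy_1, energy_2, step_size)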
Optimum Lengths Updated Lengths N Points:

It calculates the Updated Intermolecular Distances of N-molecules using the
Current Intermolecular Distance, the Error functions and the Step Size.
Simple AHN:

This VI trains an artificial hydrocarbon network using the Simple AHN Algo-
rithm. It requires the Training Properties, the System to model, the Number Of
Molecules, the Number of Compounds and the Tolerance value. It outputs the
Mixture Structure, the Hydrogen Values, the Stoichiometric Coefficients and the
Mixture Response.
Modified Simple AHN:

This VI trains an artificial hydrocarbon network using the modified version of the
Simple AHN Algorithm. It requires the Training Properties, the System to model,
the Number Of Molecules, the Number of Compounds and the Tolerance value. It
outputs the Mixture Structure, the Hydrogen Values, the Stoichiometric
Coefficients and the Mixture Response.
Appendix E
Examples of Artificial Hydrocarbon Networks in
LabVIEWTM

The Artificial Organic Networks Toolkit using LabVIEWTM is available for designing,
training and implementing artificial hydrocarbon networks (AHNs). The following
series of examples shows how the components of AHNs can be used; then, training
virtual instrument (VI) nodes are described.

E.1 Using Components

Example E.1 Consider a CH-primitive molecule with three hydrogen atoms h1 =
0.2, h2 = 0.6, h3 = 0.4 and a carbon value vC = 1. Build a LabVIEWTM application
for drawing the behavior of that molecule in the input domain x ∈ [−1, 1]. In
particular:
1. Use the first model of CH-molecules (sum form) of Proposition 4.1.
2. Use the second model of CH-molecules (product form) of Proposition 4.2.

Solution E.1 Create a new blank VI and select the CHMoleculeSelector function
from the following path: AHN ∼ Components. This VI allows selecting the type of
CH model to use (TypeOfMolecule), the degree of freedom (DegreeOfFreedom),
the hydrogen values and carbon value (AtomicValues) and the input (InputSignal).
Then, it computes the behavior of the molecule (MolecularBehavior). In this case,
select Sum Form for TypeOfMolecule, 3 for DegreeOfFreedom, and the hydrogen
values h1, h2, h3 and carbon value vC for AtomicValues.
However, this VI only calculates the behavior of the molecule for one data point of
the input domain. Then, place it inside a For Loop and create an array of values from
x = −1 to x = 1, considering a step value between elements, e.g. 0.01. Figure E.1
shows the global block diagram of this example. As noted, a graph (XY Graph) was
connected to the output terminal of the CHMoleculeSelector VI. Figure E.2 shows
the front panel of this application.


Fig. E.1 Block diagram of a CH-primitive molecule (sum form) implementation

Fig. E.2 Front panel of Example E.1 (sum form)

In addition, the CHMoleculeSelector VI can compute the product form by selecting
Product Form in the TypeOfMolecule selector. Figure E.3 shows the block diagram
for this implementation and Fig. E.4 shows the front panel. Notice that the
molecular behavior depends on the type of CH model selected, as well as on the
order of the hydrogen values (only in the sum form).
Optional: Create a SubVI node for the input signal in order to use it in the following
examples.
Example E.2 Consider an artificial hydrocarbon compound made of two CH-primitive
molecules M1, M2 with hydrogen values h11 = 0.3, h12 = 0.6, h21 = 0.4 and
h22 = 0.7, and carbon values vC1 = vC2 = 1. Both molecules are modeled with the
product form. In addition, consider a piecewise interaction between molecules as
in (E.1). Build a LabVIEWTM application to calculate the value of the compound
behavior at x = 0.45.

        β(x) = { M1(x),  −1 ≤ x < 0        (E.1)
               { M2(x),   0 ≤ x ≤ 1

Fig. E.3 Block diagram of a CH-primitive molecule (product form) implementation

Fig. E.4 Front panel of Example E.1 (product form)

Solution E.2 Create a new blank VI and select the Compound function from the
following path: AHN ∼ Components. This VI requires three inputs. The first input
terminal (MolecularParameters) is a cluster of the molecular parameters, including:
the type of CH model (Product Form), the number of molecules in the compound (2),
the degrees of freedom of all molecules in the compound expressed as a vector (2, 2),
and an array of atomic parameters (hydrogen parameters and the carbon value). The
second input refers to the input value (InputSignal) of the compound (x = 0.45),
and the third input refers to the parameters of the compound (CompoundParameters),
including: the type of compound interaction (Piecewise in this case), the bounds of
molecules as a vector (−1, 0, 1) and an optional vector of additional parameters
for the Weighted Sum interaction. The Exponential interaction among molecules is
also available. The result of the VI is the compound behavior (CompoundBehavior).
Figure E.5 shows the global block diagram of this example. Run the application
and verify that the compound behavior at x = 0.45 is β(x) = −0.0125.
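The result can also be cross-checked outside LabVIEWTM with a few lines of
Python; the product form vC·Π(x − hi) is an assumption about the model, but it
reproduces the value quoted above:

    def molecule_product(x, hydrogens, carbon=1.0):
        # Assumed product form: vC * (x - h1) * (x - h2) * ...
        value = carbon
        for h in hydrogens:
            value *= (x - h)
        return value

    x = 0.45
    # x = 0.45 lies in [0, 1], so molecule M2 (h21 = 0.4, h22 = 0.7) acts:
    print(molecule_product(x, [0.4, 0.7]))  # approx. -0.0125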

Fig. E.5 Block diagram of the artificial hydrocarbon compound implementation

Table E.1 Parameters of the mixture in Example E.3

                            Compound 1        Compound 2
    Type of interaction     Piecewise         Piecewise
    Bounds of molecules     (−1, 0.5, 1)      (−1, 0, 1)
    Number of molecules     2                 2
    Molecule 1
      Type of CH model      Product form      Product form
      Hydrogen values       (0.6, 0.7)        (0.4, 0.9)
      Carbon value          1                 1
    Molecule 2
      Type of CH model      Product form      Product form
      Hydrogen values       (0.3, 0.8)        (0.1, 0.6)
      Carbon value          1                 1

Example E.3 Consider a mixture of two compounds with parameters shown in
Table E.1. Also, consider a set of stoichiometric coefficients τ = (0.9, 0.4). Build a
LabVIEWTM application to calculate the value of the mixture at x = 0.8.
Solution E.3 Create a new blank VI and select the Mixture function from the
following path: AHN ∼ Components. This VI requires four inputs. The first input
terminal (ParametersOfCompounds) is an array of two clusters representing the
parameters of molecules of each compound (i.e. MolecularParameters and
CompoundParameters, as in Example E.2). The second input is the number of
compounds (NumberCompounds) in the mixture. The third input refers to the input
value (InputSignal) of the compound (x = 0.8).

Fig. E.6 Block diagram of the mixture implementation

Finally, the fourth input is a vector of stoichiometric coefficients
(StoichiometricCoefficients).
Figure E.6 shows the global block diagram of this example. Run the application
and verify that the behavior of the mixture at x = 0.8 is S(x) = 0.056.
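As before, the mixture value can be cross-checked with a short, self-contained
Python sketch; product-form molecules, piecewise compounds and a stoichiometric
weighted sum are assumptions, yet they reproduce the value quoted above:

    def molecule_product(x, hydrogens, carbon=1.0):
        value = carbon
        for h in hydrogens:
            value *= (x - h)
        return value

    def compound_piecewise(x, molecules, bounds):
        # Pick the molecule whose sub-interval of the bounds contains x.
        for i, hydrogens in enumerate(molecules):
            if bounds[i] <= x <= bounds[i + 1]:
                return molecule_product(x, hydrogens)
        return 0.0

    x, tau = 0.8, (0.9, 0.4)
    c1 = compound_piecewise(x, [(0.6, 0.7), (0.3, 0.8)], (-1, 0.5, 1))  # 0.0
    c2 = compound_piecewise(x, [(0.4, 0.9), (0.1, 0.6)], (-1, 0, 1))    # 0.14
    print(tau[0] * c1 + tau[1] * c2)  # approx. 0.056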
Example E.4 Consider the same mixture as in Example E.3. Improve the LabVIEWTM
application to draw the behavior of the mixture in the input domain x ∈ [−1, 1].
Solution E.4 Open the application of Example E.3 and place it inside a For Loop.
In order to reduce the size of the ParametersOfCompounds array, right-click on the
outer cluster inside the array and select the View Cluster As Icon option in the
context menu. Then, create an array of values from x = −1 to x = 1 with a step
of 0.01, or place the input signal VI created in Example E.1. Replace the constant
input signal (InputSignal) with the latter array of inputs, and relink the output of
the Mixture VI to a graph control (XY Graph). Figure E.7 shows the overall block
diagram and Fig. E.8 shows the front panel with the behavior of the mixture.

E.2 Training Artificial Hydrocarbon Networks

Example E.5 Consider the function f as written in (E.2). Build a LabVIEWTM
application to train an artificial hydrocarbon network for this function in the input
domain x ∈ [0, 5]. Use 5 CH-primitive molecules in each compound, a maximum
number of compounds cmax = 2, a step size of σ = 0.05, a tolerance value of
Δ = 0.001 and a maximum number of iterations tmax = 100.

        f(x) = sin(θx)        (E.2)

Fig. E.7 Block diagram of the mixture implementation

Fig. E.8 Front panel of Example E.4


Solution E.5 Create a new blank VI and select the ModifiedSimpleAHN function
from the following path: AHN ∼ Training. This VI implements Algorithm 4.5 for
training an artificial hydrocarbon network. Five inputs are required. The first input
corresponds to some parameters (TrainingProperties) that determine how linear
chains of molecules are created (Simple Bond Based or Bond Energy Based), the
type of CH molecules in compounds (Sum Form or Product Form), the type of
interactions between molecules (Piecewise, Weighted Sum or Exponential), optional
properties for the Weighted Sum interactions, and a cluster for the step size and the
maximum number of iterations. In this case, the Simple Bond Based algorithm, the
Product Form type, the Piecewise type, σ = 0.05 and tmax = 100 were used.
The second input requires a cluster (System) with two arrays containing the input
signal and the output signal, respectively. In this example, an array of inputs was
implemented (using the VI created in Example E.1) with lower bound 0, upper
bound 5 and a step of 0.01. Function (E.2) was implemented with the Sine node
inside a For Loop, as depicted in Fig. E.9. Both arrays are clustered with the Bundle
node.

Fig. E.9 Block diagram for training an artificial hydrocarbon network

Fig. E.10 Response of the trained artificial hydrocarbon network of Example E.5

The third input refers to the number of molecules (NumberMolecules) in the
AHN-structure, while the next input refers to the maximum number of compounds
in the mixture (NumberCompounds), and the fifth input is the tolerance value
(Tolerance). For this example, the corresponding values were 5, 2 and 0.001, respec-
tively.
Finally, the response of the trained artificial hydrocarbon network was plotted in
a graph (XY Graph). In order to graph both the original function and the response
(i.e. the output terminal MixtureResponse), the latter is clustered with the input
signal using the Bundle node. Then, the original function and the response clusters
are used for creating a 2D-array with the Build Array node. Figure E.9 shows the
overall block diagram of the application and Fig. E.10 shows the response of the
trained AHN-structure in comparison with the original function.
trained AHN-structure in comparison with the original function.
If the AHN-structure and all trained parameters are required for a reasoning
process (predicting, forecasting, etc.), the ModifiedSimpleAHN function has three
other outputs: the structure of compounds in the mixture (MixtureStructure), the
hydrogen values with carbon values (HydrogenValues) and the set of stoichiometric
coefficients (StoichiometricCoefficients).
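
As an illustration of such a reasoning step, the following hedged Python sketch
evaluates a trained mixture at a new input under the same assumed product-form,
piecewise and weighted-sum model used throughout this appendix; the packaging
of the structure argument is hypothetical and does not match the toolkit's exact
output format.

    def predict(x, structure, stoichiometric):
        # structure: per compound, a pair (molecules, bounds), where
        # molecules is a list of (hydrogens, carbon) tuples; this
        # packaging is hypothetical, not the toolkit's exact format.
        total = 0.0
        for tau, (molecules, bounds) in zip(stoichiometric, structure):
            for i, (hydrogens, carbon) in enumerate(molecules):
                if bounds[i] <= x <= bounds[i + 1]:
                    value = carbon
                    for h in hydrogens:
                        value *= (x - h)
                    total += tau * value
                    break
        return total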
