Artificial Intelligence:
LNAI 8722
Methodology, Systems,
and Applications
16th International Conference, AIMSA 2014
Varna, Bulgaria, September 11–13, 2014
Proceedings
Lecture Notes in Artificial Intelligence 8722
Volume Editors
Gennady Agre
Bulgarian Academy of Sciences
Institute of Information and Communication Technologies
Sofia, Bulgaria
E-mail: agre@iinf.bas.bg
Pascal Hitzler
Wright State University
Dayton, OH, USA
E-mail: pascal.hitzler@wright.edu
Adila A. Krisnadhi
Wright State University, Dayton, OH, USA
and
University of Indonesia, Depok, Indonesia
E-mail: krisnadhi@gmail.com
Sergei O. Kuznetsov
National Research University
Higher School of Economics
Moscow, Russia
E-mail: skuznetsov@hse.ru
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of
the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation,
broadcasting, reproduction on microfilms or in any other physical way, and transmission or information
storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology
now known or hereafter developed. Exempted from this legal reservation are brief excerpts in connection
with reviews or scholarly analysis or material supplied specifically for the purpose of being entered and
executed on a computer system, for exclusive use by the purchaser of the work. Duplication of this publication
or parts thereof is permitted only under the provisions of the Copyright Law of the Publisher’s location,
in its current version, and permission for use must always be obtained from Springer. Permissions for use
may be obtained through RightsLink at the Copyright Clearance Center. Violations are liable to prosecution
under the respective Copyright Law.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
While the advice and information in this book are believed to be true and accurate at the date of publication,
neither the authors nor the editors nor the publisher can accept any legal responsibility for any errors or
omissions that may be made. The publisher makes no warranty, express or implied, with respect to the
material contained herein.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface
AIMSA 2014 was the 16th in a biennial series of AI conferences that have been
held in Bulgaria since 1984. The series began as a forum for scientists from East-
ern Europe to exchange ideas with researchers from other parts of the world, at
a time when such meetings were difficult to arrange and attend. The conference
has thrived for 30 years, and now functions as a place where AI researchers from
all over the world can meet and present their research.
AIMSA continues to attract interest from all over the world; this year there were
submissions from 27 countries. The range of topics is almost equally broad, from
traditional areas such as computer vision and natural language processing to
emerging areas such as mining the behavior of Web-based communities. It is
good to know that the discipline is still broadening the range of areas that it
includes at the same time as cementing the work that has already been done in
its various established subfields.
The Program Committee selected just over 30% of the submissions as long
papers, and further accepted 15 short papers for presentation at the confer-
ence. We are extremely grateful to the Program Committee and the additional
reviewers, who reviewed the submissions thoroughly, fairly and very quickly.
Special thanks go to our invited speakers, Bernhard Ganter (TU Dresden),
Boris G. Mirkin (Higher School of Economics, Moscow) and Diego Calvanese
(Free University of Bozen-Bolzano). The invited talks were grouped around on-
tology design and application, whether using clustering and biclustering ap-
proaches (B.G. Mirkin), formal concept analysis, a branch of applied lattice
theory (B. Ganter), or being concerned with ontology-based data access (D.
Calvanese).
Finally, special thanks go to the AComIn project (Advanced Computing for
Innovation, FP7 Capacity grant 316087) for the generous support for AIMSA
2014, as well as to the Bulgarian Artificial Intelligence Association (BAIA) and the
Institute of Information and Communication Technologies at the Bulgarian Academy
of Sciences (IICT-BAS) as sponsoring institutions of AIMSA 2014.
Program Committee
Gennady Agre Institute of Information Technologies,
Bulgarian Academy of Sciences, Bulgaria
Galia Angelova Institute for Parallel Processing, Bulgarian
Academy of Sciences, Bulgaria
Grigoris Antoniou University of Huddersfield, UK
Sören Auer Universität Leipzig, Germany
Sebastian Bader MMIS, Computer Science, Rostock University,
Germany
Roman Bartak Charles University in Prague, Czech Republic
Christoph Beierle University of Hagen, Germany
Meghyn Bienvenu CNRS, Université Paris-Sud, France
Diego Calvanese KRDB Research Centre, Free University
of Bozen-Bolzano, Italy
Virginio Cantoni Università di Pavia, Italy
Stefano A. Cerri LIRMM: University of Montpellier and CNRS,
France
Michelle Cheatham Wright State University, USA
Davide Ciucci University of Milan, Italy
Chris Cornelis Ghent University, Belgium
Madalina Croitoru LIRMM, University Montpellier II, France
Isabel Cruz University of Illinois at Chicago, USA
Claudia D’Amato Università di Bari, Italy
Artur D’Avila Garcez City University London, UK
Darina Dicheva Winston-Salem State University, USA
Ying Ding Indiana University, USA
Danail Dochev Institute of Information Technologies,
Bulgarian Academy of Sciences, Bulgaria
Stefan Edelkamp University of Bremen, Germany
Esra Erdem Sabanci University, Turkey
Floriana Esposito Università di Bari, Italy
William Michael Fitzgerald EMC Information Systems International,
Ireland
Miguel A. Gutiérrez-Naranjo University of Sevilla, Spain
Barbara Hammer Institute of Computer Science, Clausthal
University of Technology, Germany
Pascal Hitzler Wright State University, USA
Dmitry Ignatov Higher School of Economics, Moscow, Russia
Additional Reviewers
Batsakis, Sotiris
Beek, Wouter
Bellodi, Elena
Borgo, Stefano
Cordero, Pablo
Fanizzi, Nicola
Gavanelli, Marco
Gluhchev, Georgi
Hu, Yingjie
Huan, Gao
Kashnitsky, Yury
Kriegel, Francesco
Meriçli, Tekin
Minervini, Pasquale
Mutharaju, Raghava
Nakov, Preslav
Osenova, Petya
Papantoniou, Agissilaos
Redavid, Domenico
Rizzo, Giuseppe
Schwarzentruber, François
van Delden, André
Sponsoring Institutions
Bulgarian Artificial Intelligence Association (BAIA)
Institute of Information and Communication Technologies
at Bulgarian Academy of Sciences (IICT-BAS)
Keynote Presentation Abstracts
Scalable End-User Access to Big Data
Diego Calvanese
Keynote Abstract
Ontologies allow one to describe complex domains at a high level of abstraction,
providing end-users with an integrated coherent view over data sources that
maintain the information of interest. In addition, ontologies provide mechanisms
for performing automated inference over data taking into account domain knowl-
edge, thus supporting a variety of data management tasks. Ontology-based Data
Access (OBDA) is a recent paradigm concerned with providing access to data
sources through a mediating ontology, which has gained increased attention both
from the knowledge representation and from the database communities. OBDA
poses significant challenges in the context of accessing large volumes of data
with a complex structure and high dynamicity. It thus requires not only carefully
tailored languages for expressing the ontology and the mapping to the data,
but also suitably optimized algorithms for efficiently processing queries over the
ontology by accessing the underlying data sources. In this talk we present the
foundations of OBDA relying on lightweight ontology languages, and discuss
novel theoretical and practical results for OBDA that are currently under de-
velopment in the context of the FP7 IP project Optique. These results make it
possible to scale the approach so as to cope with the challenges that arise in
real world scenarios, e.g., those of two large European companies that provide
use-cases for the Optique project.
Bernhard Ganter
Keynote Abstract
Formal Concept Analysis has an elaborate and deep mathematical foundation,
which does not rely on numerical data. It is, so to speak, fiercely qualitative
mathematics, building on the algebraic theory of lattices and ordered sets. Since
its emergence in the 1980s, not only has the mathematical theory matured, but so
have a variety of algorithms and of practical applications in different areas.
Conceptual hierarchies play a role e.g., in classification, in reasoning about on-
tologies, in knowledge acquisition and the theory of learning. Formal Concept
Analysis provides not only a solid mathematical theory and effective algorithms;
it also offers expressive graphics, which can support the communication of com-
plex issues.
In our lecture we give an introduction to the basic ideas and recent devel-
opments of Formal Concept Analysis, a mathematical theory of concepts and
concept hierarchies and then demonstrate the potential benefits and applica-
tions of this method with examples. We will also review some recent application
methods that are currently being worked out. In particular we will present results
on a “methodology of learning assignments” and on “conceptual exploration”.
Boris G. Mirkin
Keynote Abstract
To begin, I will briefly outline the current period of development in artificial
intelligence research: a period of synthesis, in contrast to the sequence of
previous periods (romanticism, deduction, and induction).
Three more or less matured ontologies, and their use, will be reviewed: ACM
CCS, SNOMED CT and GO. The popular strategy of interpretation of sets of
finer granularity via the so-called overrepresented concepts will be mentioned. A
method for generalization and interpretation of fuzzy/crisp query sets by par-
simoniously lifting them to higher ranks of the hierarchy will be presented. Its
current and potential applications will be discussed.
Long Papers
Learning Probabilistic Semantic Network of Object-Oriented Action
and Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Masayasu Atsumi
Short Papers
Differentiation of the Script Using Adjacent Local Binary Patterns . . . . . 162
Darko Brodić, Čedomir A. Maluckov, Zoran N. Milivojević, and
Ivo R. Draganov
New Technology Trends Watch: An Approach and Case Study . . . . . . . . . 170
Irina V. Efimenko and Vladimir F. Khoroshevsky
Optimization of Polytopic System Eigenvalues by Swarm of Particles . . . 178
Jacek Kabziński and Jaroslaw Kacerka
Back-Propagation Learning of Partial Functional Differential Equation
with Discrete Time Delay . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Tibor Kmet and Maria Kmetova
Dynamic Sound Fields Clusterization Using Neuro-Fuzzy Approach . . . . 194
Petia Koprinkova-Hristova and Kiril Alexiev
Neural Classification for Interval Information . . . . . . . . . . . . . . . . . . . . . . . . 206
Piotr A. Kowalski and Piotr Kulczycki
FCA Analyst Session and Data Access Tools in FCART . . . . . . . . . . . . . . 214
Alexey Neznanov and Andrew Parinov
Voice Control Framework for Form Based Applications . . . . . . . . . . . . . . . 222
Ionut Cristian Paraschiv, Mihai Dascalu, and Ştefan Trăuşan-Matu
Towards Management of OWL-S Effects by Means of a DL Action
Formalism Combined with OWL Contexts . . . . . . . . . . . . . . . . . . . . . . . . . . 228
Domenico Redavid, Stefano Ferilli, and Floriana Esposito
Computational Experience with Pseudoinversion-Based Training of
Neural Networks Using Random Projection Matrices . . . . . . . . . . . . . . . . . 236
Luca Rubini, Rossella Cancelliere, Patrick Gallinari,
Andrea Grosso, and Antonino Raiti
Test Case Prioritization for NUnit Based Test Plans in Agile
Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
Sohail Sarwar, Yasir Mahmood, Zia Ul Qayyum, and Imran Shafi
Masayasu Atsumi
1 Introduction
It is necessary for a human support robot to understand what a person is doing
in everyday living environments. Everyday human motion includes a great deal of
motion that interacts with objects, which is referred to as an "object-oriented
motion" in this research. The meaning of an object-oriented motion is determined
not only by the motion itself but also by the object with which the motion
interacts; this view corresponds to an affordance, in which motion depends on
object perception. In addition, each motion is performed in a context which
is defined by a sequence of motions and a certain motion frequently occurs in
some context and rarely occurs in other contexts. For example, a motion using
a fork is frequently observed in a context of eating meals. In this research, each
object-oriented motion is referred to as an action and a sequence of actions is
referred to as an activity and it is assumed that an activity gives a context of
an action and promotes action recognition. This assumption is consistent with
findings that context improves category recognition of ambiguous objects in a
scene [1] and requires an extension to action recognition of several methods [2,3]
which incorporate context into object categorization. Though object-oriented
actions and activities can be clustered into motion classes according to visual
motion features and their semantic features can be labeled by using motion
G. Agre et al. (Eds.): AIMSA 2014, LNAI 8722, pp. 1–12, 2014.
© Springer International Publishing Switzerland 2014
labels and their target labels, motion classes and their labels do not have one-to-
one correspondence. Therefore, in this research, motion classes are labeled with
target synsets and motion synsets of case triplets in a form of “target synset,
case, motion synset” and motion classes and synsets are probabilistically linked
in probabilistic semantic networks of actions and activities, where a synset is
a synonymous set of the WordNet [4] which represents a primitive semantic
feature. In addition, contextual relationship between actions and activities is
acquired as co-occurrence between them.
This paper proposes a method of learning probabilistic semantic networks which
represent visual features and semantic features of object-oriented actions and their
contextual activities. It also provides a probabilistic recognition and inference
method of actions and activities based on probabilistic semantic networks. The
main characteristics of the proposed method are the following: (1) visual motion
feature classes of actions and activities are learned by an unsupervised “Incre-
mental Probabilistic Latent Component Analysis (I-PLCA)” [5], (2) visual feature
classes of motion and synsets of case triplets are integrated into probabilistic se-
mantic networks to visually recognize and verbally infer actions and activities, and
(3) actions are inferred in the context of activities through acquired co-occurrence
between them.
As for related work, Kitani et al. [6] proposed a categorization method of
primitive actions in video by leveraging related objects and relevant background
features as context. Yao et al. [7] proposed a mutual context model to jointly
recognize objects and human poses in still images of human-object interaction.
Also in [8], they proposed a model to identify different object functionalities by
estimating human poses and detecting objects in still images of different types of
human-object interaction. One difference between our method and these existing
methods is that our method not only uses an action and its target object as
mutual context but also uses activities as context of actions. Another difference is
that our method probabilistically infers actions and activities by using different
semantic features linked with different visual motion features in probabilistic
semantic networks.
2 Proposed Method
2.1 Overview
Human motion is captured as a temporal sequence of three-dimensional joint
coordinates of a human skeleton, which can be captured with the Microsoft
Kinect sensor. Since this research focuses on object-oriented motions of hands, a
temporal sequence of three-dimensional coordinates of both hand joints relative
to the shoulder center joint is computed from the human skeleton data.
A motion feature of both hands is computed from a temporal sequence of
relative three-dimensional coordinates of both hand joints by the following pro-
cedure. First, relative three-dimensional coordinates of both hand joints are
spatially-quantized at a certain interval and a temporal sequence of quantized co-
ordinates and their displacement are obtained as a motion representation. Next,
[Figure content omitted: a probabilistic semantic network of activities (motion
class c0 linking target synset sn0[meal] and motion synset sv0[eat]) and of
actions (motion classes c1–c3 linking target synsets sn1[fork], sn2[teacup] to
motion synsets sv1[eat], sv2[drink], sv3[take]), connected by co-occurrence
links ω(sni, svj, sn0, sv0).]
Fig. 1. An example of an ACTNET (Symbols in the figure are explained in the text.)
Let p^l = (p^l_x, p^l_y, p^l_z) and p^r = (p^r_x, p^r_y, p^r_z) be the relative
quantized three-dimensional coordinates of the left and right hands, and let
d^l = (d^l_x, d^l_y, d^l_z) and d^r = (d^r_x, d^r_y, d^r_z) be their displacements,
respectively. Here, l denotes the left hand, r denotes the right hand, and the
displacement is given by the difference of quantized coordinates between two
successive frames. Let ⟨s_n[w_n], r, s_v[w_v]⟩ be a case triplet used to annotate
a temporal sequence of quantized coordinates and their displacement of an action
or an activity, where w_n is a noun representing the target of motion and s_n is
its synset, w_v is a verb representing the motion and s_v is its synset, and r is
a case notation. Here, a synset is given by a synonymous set of the WordNet [4],
and a case is currently one of the objective case (O), the instrumental case (I),
and the locative case (L[at | inside | around | above | below | beyond | from | to]).
For a temporal sequence of quantized coordinates and their displacement of an
action, a case triplet of the activity that includes the action is also given in
addition to the case triplet of the action. Then, for a motion
m = {((p^l, d^l), (p^r, d^r))_t}, a motion histogram is constructed to represent
a motion feature of both hands around the shoulder center as follows. Let B be
a set of modestly sized regions obtained by dividing the space around the
shoulder center, and let |B| be the number of regions. For each region b ∈ B,
a motion sub-histogram is computed for the set of coordinate and displacement
data {(p^l, d^l)} and {(p^r, d^r)} whose coordinate p^l or p^r is located in the
region b. A motion sub-histogram has 27 bins, each of which corresponds to
whether the displacement is positive, zero, or negative along the x-, y-, and
z-axes, and is counted up according to the values of the displacements d^l and
d^r. A motion histogram h(m) is constructed as a |B|-plex histogram by combining
these sub-histograms into one histogram, so that the size of h(m) is 27 × |B|.
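The histogram construction described above can be sketched in Python. This is an illustration, not the authors' implementation; the `region_of` function stands in for the paper's partition of the space around the shoulder center into the region set B.

```python
import numpy as np

def sign_bin(d):
    """Map a displacement vector to one of 27 bins: each of the x-, y-, z-
    components is classified as negative (0), zero (1), or positive (2)."""
    s = np.sign(d).astype(int) + 1        # each component in {0, 1, 2}
    return s[0] * 9 + s[1] * 3 + s[2]     # base-3 encoding -> 0..26

def motion_histogram(frames, region_of, num_regions):
    """Build the |B|-plex motion histogram h(m) of size 27 * |B|.

    frames    : iterable of ((p_l, d_l), (p_r, d_r)) per time step, where
                p_* are quantized 3-D coordinates and d_* their displacements
    region_of : hypothetical helper mapping a quantized coordinate to a
                region index in B
    """
    h = np.zeros(27 * num_regions)
    for (p_l, d_l), (p_r, d_r) in frames:
        for p, d in ((p_l, d_l), (p_r, d_r)):
            b = region_of(np.asarray(p))          # which region the hand is in
            h[b * 27 + sign_bin(np.asarray(d))] += 1
    return h
```

Each hand contributes one count per frame to the sub-histogram of the region its coordinate falls into, consistent with the sub-histogram description above.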
The problem to be solved for generating motion classes is estimating the
probabilities

p(m_a, f_n) = Σ_c p(c) p(m_a|c) p(f_n|c),

namely, a class probability distribution {p(c) | c ∈ C}, instance probability
distributions {p(m_a|c) | m_a ∈ M × A, c ∈ C}, probability distributions of class
features {p(f_n|c) | f_n ∈ F, c ∈ C}, and the number of classes |C| that maximize
the following log-likelihood

L = Σ_{m_a} Σ_{f_n} h_{m_a}(f_n) log p(m_a, f_n)     (1)
for a set of motion histograms H = {h(m_a)}, where C is a set of motion classes,
M is a set of motions, A is a set of case triplets and ma is a motion m with a
case triplet a, that is, an instance of an action or an activity. These probabil-
ity distributions and the number of classes are estimated by the tempered EM
algorithm with subsequent class division. The process starts with one or a few
classes, pauses at every certain number of EM iterations less than an upper limit
and calculates the following dispersion index
δc = p(fn |c) − hma (fn ) × p(ma |c) (2)
hm (f )
m a fn fn a n
for ∀c ∈ C. Then a class whose dispersion index takes the maximum value
among all classes is divided into two classes. Let c∗ be a source class to be
divided and let c1 and c2 be target classes after division. Then, for a motion
m*_a = arg max_{m_a} {p(m_a|c*)}, which has the maximum instance probability, and
its motion histogram h(m*_a) = [h_{m*_a}(f_1), ..., h_{m*_a}(f_{|F|})], one class c_1 is set by
specifying a probability distribution of a class feature, an instance probability
distribution and a class probability as
p(f_n|c_1) = (h_{m*_a}(f_n) + κ) / Σ_{f_n} (h_{m*_a}(f_n) + κ),  ∀f_n ∈ F     (3)

p(c_1) = p(c*) / 2     (5)
respectively, where κ is a positive correction coefficient. The other class c_2 is
set by specifying a probability distribution of a class feature
{p(f_n|c_2) | f_n ∈ F} at random, an instance probability distribution
{p(m_a|c_2)} as 0 for m*_a and an equal probability 1/(|M| − 1) for all
m_a ≠ m*_a, and a class probability as p(c_2) = p(c*)/2.
This class division process is continued until dispersion indexes or class prob-
abilities of all the classes become less than given thresholds. The temperature
coefficient of the tempered EM is set to 1.0 until the number of classes is fixed
and after that it is gradually decreased according to a given schedule until the
EM algorithm converges and all the probability distributions are determined.
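The class-division step described above might be sketched as follows. This is a hedged illustration, not the authors' code: the array layouts (H[a, n] = h_{m_a}(f_n), P_f_c[n, c] = p(f_n|c), P_m_c[a, c] = p(m_a|c), p_c[c] = p(c)) and the exact form of the dispersion index follow one plausible reading of the formulas in the text.

```python
import numpy as np

def dispersion_index(H, P_f_c, P_m_c, c):
    """Weighted L1 distance between the class feature distribution and each
    instance's normalized histogram (one reading of the dispersion index)."""
    Hn = H / H.sum(axis=1, keepdims=True)           # normalized histograms
    return np.sum(np.abs(P_f_c[:, c][None, :] - Hn) * P_m_c[:, c][:, None])

def split_class(H, P_m_c, p_c, c_star, kappa=1.0, rng=None):
    """Divide class c_star: c1 is seeded by the most probable instance m*_a
    (smoothed by kappa), c2 gets a random feature distribution; each target
    class receives half of p(c*)."""
    rng = rng or np.random.default_rng(0)
    a_star = int(np.argmax(P_m_c[:, c_star]))       # m*_a
    f_c1 = (H[a_star] + kappa) / (H[a_star] + kappa).sum()
    f_c2 = rng.dirichlet(np.ones(H.shape[1]))       # random distribution
    m_c2 = np.full(H.shape[0], 1.0 / (H.shape[0] - 1))
    m_c2[a_star] = 0.0                              # zero for m*_a
    return f_c1, f_c2, m_c2, p_c[c_star] / 2.0
```

In an outer loop one would repeatedly pick the class with the largest dispersion index, split it, and resume the tempered EM iterations until all dispersion indexes or class probabilities fall below their thresholds.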
where ∗ represents any word or any case. The network link between two nodes
of a motion class c and a target synset s_n has a joint probability p(s_n, c).
The network link between two nodes of a motion class c and a motion synset s_v
has a joint probability p(s_v, c). The network link between two nodes of a target
synset s_n and a motion synset s_v has a joint probability p(s_n, s_v). The
network nodes of a target synset s_n and a motion synset s_v have probabilities
p(s_n) and p(s_v), respectively. These probabilities are computed by the
expressions (7):

p(s_n, c) = Σ_{s_v} p(s_n, c, s_v),   p(s_v, c) = Σ_{s_n} p(s_n, c, s_v),
p(s_n, s_v) = Σ_c p(s_n, c, s_v),     (7)
p(s_n) = Σ_c p(s_n, c),   p(s_v) = Σ_c p(s_v, c)
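The marginalizations in (7) reduce to axis sums when the full joint p(s_n, c, s_v) is stored as a 3-D array. A minimal sketch with illustrative values (axes: target synset, motion class, motion synset):

```python
import numpy as np

# Illustrative joint distribution p(s_n, c, s_v); shape (|S_n|, |C|, |S_v|),
# entries sum to 1. Values are made up for the example.
J = np.array([[[0.2, 0.1],
               [0.0, 0.1]],
              [[0.1, 0.2],
               [0.2, 0.1]]])

p_sn_c  = J.sum(axis=2)     # p(s_n, c)   = sum over s_v of p(s_n, c, s_v)
p_sv_c  = J.sum(axis=0).T   # p(s_v, c)   = sum over s_n of p(s_n, c, s_v)
p_sn_sv = J.sum(axis=1)     # p(s_n, s_v) = sum over c   of p(s_n, c, s_v)
p_sn    = p_sn_c.sum(axis=1)   # p(s_n) = sum over c of p(s_n, c)
p_sv    = p_sv_c.sum(axis=1)   # p(s_v) = sum over c of p(s_v, c)
```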
In addition, a noun wn of a target synset sn and a verb wv of a motion synset
sv are set to the target synset node and the motion synset node respectively.
Co-occurrence between actions and activities is computed between the pair of a
target synset s_n and a motion synset s_v of an action and the pair of a target
synset s′_n and a motion synset s′_v of an activity when the action has a case
triplet ⟨s_n[w_n], r, s_v[w_v]⟩ and is included in the activity with a case
triplet ⟨s′_n[w′_n], r′, s′_v[w′_v]⟩. Let p(s_n, s_v) be the joint probability of
a target synset s_n and a motion synset s_v of an action, and let p(s′_n, s′_v)
be the joint probability of a target synset s′_n and a motion synset s′_v of an
activity. Then, the co-occurrence between them is defined by the expression (8)

ω(s_n, s_v, s′_n, s′_v) = log [ p(s_n, s_v, s′_n, s′_v) / (p(s_n, s_v) p(s′_n, s′_v)) ]     (8)

where the joint probability p(s_n, s_v, s′_n, s′_v) is calculated from action
instances according to the expression (9)

p(s_n, s_v, s′_n, s′_v) = Σ_c Σ_{a = ⟨s_n[∗],∗,s_v[∗]⟩@⟨s′_n[∗],∗,s′_v[∗]⟩} p(c) × p(m_a|c)     (9)

where a = ⟨s_n[∗], ∗, s_v[∗]⟩@⟨s′_n[∗], ∗, s′_v[∗]⟩ means that an action m_a has a
case triplet matching the pattern ⟨s_n[∗], ∗, s_v[∗]⟩ and its contextual activity
has a case triplet matching the pattern ⟨s′_n[∗], ∗, s′_v[∗]⟩.
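Expression (8) has the form of pointwise mutual information between an action's synset pair and its enclosing activity's synset pair. A minimal sketch estimating it from observed (action pair, activity pair) co-occurrence counts; the names and the count-based estimation are illustrative, not the paper's exact procedure:

```python
import math
from collections import Counter

def cooccurrence(pairs):
    """pairs: list of ((s_n, s_v), (s_n', s_v')) observations, one per action
    instance together with its contextual activity. Returns the log-ratio
    omega for every observed combination."""
    joint = Counter(pairs)
    act = Counter(a for a, _ in pairs)   # marginal counts of action pairs
    ctx = Counter(b for _, b in pairs)   # marginal counts of activity pairs
    n = len(pairs)
    return {
        (a, b): math.log((c / n) / ((act[a] / n) * (ctx[b] / n)))
        for (a, b), c in joint.items()
    }
```

A positive value indicates that the action occurs in that activity context more often than independence would predict; a negative value indicates the opposite.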
β(s_n, s_v, s′_n, s′_v | c, c′) = p(s_n, s_v|c) × p(s′_n, s′_v|c′) × (β + β′)/2 + λ × ω(s_n, s_v, s′_n, s′_v)     (11)

where λ is a co-occurrence coefficient. When the pair of synsets (s′_n, s′_v) of
the activity is fixed to (s∗_n, s∗_v) by additional information, the pair of
synsets (s_n, s_v) of the action is inferred with the degree of confidence
β(s_n, s_v, s∗_n, s∗_v | c, c′).
Table 1. Case triplets for activities and actions in a small data set

Activity: <07573696-n[meal],O,01166351-v[eat]>
Actions:
  <03383948-n[fork],O,01216670-v[take]>
  <03383948-n[fork],I,01166351-v[eat]>
  <03383948-n[fork],O,01494310-v[put]>
  <04398044-n[teapot],O,01216670-v[take]>
  <04398044-n[teapot],I,02070296-v[pour]>
  <04398044-n[teapot],O,01494310-v[put]>
  <04397452-n[teacup],O,01216670-v[take]>
  <04397452-n[teacup],I,01170052-v[drink]>
  <04397452-n[teacup],O,01494310-v[put]>

Activity: <03561345-n[illustration],O,01684663-v[paint]>
Actions:
  <06415419-n[notebook],O,02311387-v[take-out]>
  <06415419-n[notebook],O,01346003-v[open]>
  <06415419-n[notebook],O,01291941-v[close]>
  <06415419-n[notebook],O,01308381-v[put-back]>
  <03908204-n[pencil],O,01216670-v[take]>
  <03908204-n[pencil],I,01684663-v[paint]>
  <03908204-n[pencil],O,01494310-v[put]>
3 Experiments
was 100%. The classification accuracy of actions was 81.3% when activities were
used as context, whereas it was 75.0% when activities were not used as context.
When additional information about object labels was given, the classification
accuracy of actions was increased up to 93.8%.
In the experiment 2 using a large data set, 4 video clips were prepared for each
of 4 activities and recognition and inference were evaluated for action sequences
through 4-fold cross-validation. The right two rows of Table 3 show the composition
of ACTNETs, and the right row of Table 4 shows the results of recognition
and inference of actions and activities. The classification accuracy of activities
was 93.8%. The classification accuracy of actions was 62.5% when activities were
used as context, whereas it was 53.3% when activities were not used as context.
When additional information about object labels was given, the classification
accuracy of actions without and with contextual activities was respectively in-
creased up to 75.8% and 83.4%. The percentage values in parentheses in Table 4
are the classification accuracy of the top two guesses of actions and activities.
[Figure content omitted: an example of a learned ACTNET, with a synset layer
(⟨07573696-n[meal], O, 01166351-v[eat]⟩ and ⟨03561345-n[illustration], O,
01684663-v[paint]⟩, each with probability about 0.5), a motion class layer
(c1 with p(c1) = 0.39 and c2 with p(c2) = 0.61, joint probabilities linking
classes to synset pairs), and a motion instance layer of motion sequences and
their histograms.]
References
1. Bar, M.: Visual objects in context. Nature Reviews Neuroscience 5, 617–629 (2004)
2. Rabinovich, A., Vedaldi, A., Galleguillos, C., Wiewiora, E., Belongie, S.: Objects in
context. In: Proc. of IEEE Int. Conf. on Computer Vision (2007)
3. Atsumi, M.: Object categorization in context based on probabilistic learning of
classification tree with boosted features and co-occurrence structure. In: Bebis, G.,
et al. (eds.) ISVC 2013, Part I. LNCS, vol. 8033, pp. 416–426. Springer, Heidelberg
(2013)
4. Isahara, H., Bond, F., Uchimoto, K., Utiyama, M., Kanzaki, K.: Development of
Japanese WordNet. In: Proc. of the 6th Int. Conf. on Language Resources and
Evaluation, pp. 2420–2423 (2008)
5. Atsumi, M.: Learning visual categories based on probabilistic latent component
models with semi-supervised labeling. GSTF Int. Journal on Computing 2(1), 88–
93 (2012)
6. Kitani, K., Okabe, T., Sato, Y.: Discovering primitive action categories by leveraging
relevant visual context. In: Proc. of the IEEE Int. WS on Visual Surveillance (2008)
7. Yao, B., Fei-Fei, L.: Recognizing human-object interactions in still images by mod-
eling the mutual context of objects and human poses. IEEE Trans. on Pattern
Analysis and Machine Intelligence 34(9), 1691–1703 (2012)
8. Yao, B., Ma, J., Fei-Fei, L.: Discovering object functionality. In: Proc. of Int. Conf.
on Computer Vision (2013)
Semantic-Aware Expert Partitioning
1 Introduction
Expertise retrieval is not something new in the area of information retrieval.
Finding the right person in an organization with the appropriate skills and
knowledge is often crucial to the success of projects being undertaken [31]. Ex-
pert finders are usually integrated into organizational information systems, such
as knowledge management systems, recommender systems, and computer sup-
ported collaborative work systems, to support collaborations on complex tasks
[16]. Initial approaches propose tools that rely on people to self-assess their
skills against a predefined set of keywords, and often employ heuristics gener-
ated manually based on current working practice [13,36]. Later approaches try
to find expertise in specific types of documents, such as e-mails [9,11] or source
code [31]. Instead of focusing only on specific document types, systems that index
and mine published intranet documents as sources of expertise evidence are
discussed in [17]. In recent years, research on identifying experts from online
data sources has been gradually gaining interest [4,19,23,37,40,43]. For instance,
Tsiporkova and Tourwé propose a prototype of a software tool implementing
an entity resolution method for topic-centered expert identification based on
bottom-up mining of online sources [40]. The tool extracts information from
online sources in order to build a repository of expert profiles to be used for
technology scouting purposes.
G. Agre et al. (Eds.): AIMSA 2014, LNAI 8722, pp. 13–24, 2014.
© Springer International Publishing Switzerland 2014
14 V. Boeva, L. Boneva, and E. Tsiporkova
Many scientists who work on the expertise retrieval problem distinguish two
information retrieval tasks: expert finding and expert profiling, where expert
finding is the task of finding experts given a topic describing the required ex-
pertise [10], and expert profiling is the task of returning a list of topics that
a person is knowledgeable about [3]. For instance, in [5,10] expertise retrieval
is approached as an association finding task between topics and people. In Ba-
log’s PhD thesis, the expert finding and profiling tasks are addressed by the
application of probabilistic generative models, specifically statistical language
models [5].
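The generative language-modeling approach to expert finding mentioned above can be illustrated with a small sketch: each candidate is represented by the text of their associated documents, and candidates are ranked by the smoothed probability of their model generating the query terms. The function names, toy corpus, and smoothing parameter below are illustrative assumptions, not Balog's actual formulation.

```python
import math
from collections import Counter

def rank_experts(query_terms, candidate_docs, mu=0.5):
    """Rank candidates by log p(query | candidate) under a smoothed
    unigram language model built from each candidate's documents."""
    # Collection-wide term statistics, used for smoothing
    all_terms = [t for doc in candidate_docs.values() for t in doc]
    coll = Counter(all_terms)
    coll_total = len(all_terms)

    scores = {}
    for cand, terms in candidate_docs.items():
        tf = Counter(terms)
        total = len(terms)
        score = 0.0
        for q in query_terms:
            p_cand = tf[q] / total if total else 0.0
            p_coll = coll[q] / coll_total
            # Jelinek-Mercer smoothing of the candidate model
            # with the collection model
            score += math.log((1 - mu) * p_cand + mu * p_coll + 1e-12)
        scores[cand] = score
    return sorted(scores, key=scores.get, reverse=True)

docs = {
    "alice": "semantic web ontology reasoning ontology".split(),
    "bob": "database indexing query optimization".split(),
}
print(rank_experts(["ontology", "reasoning"], docs))  # → ['alice', 'bob']
```

The same machinery serves both tasks: expert finding scores candidates for a fixed query, while expert profiling scores topics for a fixed candidate.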
Document clustering is a widely studied problem with many applications, such
as document organization, browsing, summarization, and classification [1,28].
Clustering analysis is a process that partitions a set of objects into groups, or
clusters, in such a way that objects from the same cluster are similar and objects from
different clusters are dissimilar. A text document can be represented in the
form of binary data, where the presence or absence of each word in the
document defines a binary vector. A more refined representation weights the
individual words by their frequencies in the document, e.g., using TF-IDF
weighting [35]. However, the sparse and high-dimensional representation of
documents necessitates the design of text-specific algorithms for document
representation and processing.
Many techniques have been proposed to optimize document representation for
improving the accuracy of matching a document with a query in the information
retrieval domain [2,35]. Most of these techniques can also be used to improve
document representation for clustering. Moreover, researchers have applied topic
models to cluster documents. For example, clustering performance of probabilis-
tic latent semantic analysis (PLSA) and Latent Dirichlet Allocation (LDA) has
been investigated in [28]. LDA and PLSA are used to model the corpus and
each topic is treated as a cluster, and documents are clustered by examining
their topic proportion vectors.
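A minimal sketch of the two document representations discussed above — binary presence/absence vectors and TF-IDF weighting — using only the standard library. The toy corpus and the particular IDF formula (log of inverse document frequency, without smoothing) are illustrative choices.

```python
import math
from collections import Counter

def vectorize(docs):
    """Build binary and TF-IDF vectors over a shared vocabulary.

    docs: list of token lists. Returns (vocab, binary, tfidf)."""
    vocab = sorted({t for d in docs for t in d})
    n = len(docs)
    # Document frequency: in how many documents each term occurs
    df = {t: sum(1 for d in docs if t in d) for t in vocab}

    binary, tfidf = [], []
    for d in docs:
        tf = Counter(d)
        # Binary representation: presence/absence of each word
        binary.append([1 if t in tf else 0 for t in vocab])
        # TF-IDF: term frequency scaled by inverse document frequency
        tfidf.append([tf[t] * math.log(n / df[t]) for t in vocab])
    return vocab, binary, tfidf

docs = [
    "expert finding expert profiling".split(),
    "document clustering topic models".split(),
]
vocab, b, w = vectorize(docs)
```

Terms occurring in every document receive an IDF of zero and thus carry no weight, which is exactly why TF-IDF suits clustering better than raw counts. For the topic-model variant, one would instead fit PLSA or LDA and assign each document to the cluster given by the largest entry of its topic proportion vector.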
In this work, we are concerned with the problem of how to cluster experts into
groups according to the degree of their expertise similarity. The cluster hypoth-
esis for document retrieval states that similar documents tend to be relevant to
the same request [21]. In the context of expertise retrieval this can be re-stated
as follows: similar people tend to be experts on the same topics. Traditional clustering
approaches assume that data objects to be clustered are independent and of iden-
tical class, and are often modelled by a fixed-length vector of feature/attribute
values. The similarities among objects are assessed based on the attribute values
of involved objects. However, the calculation of expertise similarity is a
complicated task, since expertise profiles usually consist of domain-specific
keywords that describe an expert's area of competence, without any information
about the best correspondence between the keywords of two compared profiles.
Therefore Boeva et al. propose to measure the similarity between two expertise
profiles as the strength of the relations between the semantic concepts associated
with the keywords of the two compared profiles [7]. In addition, they introduce
the concept of expert’s expertise sphere and show how the subject in question
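The profile-similarity idea of Boeva et al. can be sketched roughly as follows. We assume a relatedness function over semantic concepts — here a hypothetical lookup table standing in for an ontology-based measure — and score two keyword profiles by matching each keyword of one profile to its most related keyword in the other, symmetrized over both directions. This is an illustrative approximation, not the authors' exact formulation in [7].

```python
# Hypothetical concept-relatedness scores (a stand-in for an
# ontology-based semantic similarity measure).
RELATEDNESS = {
    ("clustering", "data mining"): 0.8,
    ("clustering", "databases"): 0.3,
    ("ontologies", "semantic web"): 0.9,
}

def rel(a, b):
    """Symmetric relatedness lookup; identical concepts score 1.0."""
    if a == b:
        return 1.0
    return RELATEDNESS.get((a, b), RELATEDNESS.get((b, a), 0.0))

def profile_similarity(p1, p2):
    """Average best-match relatedness between two keyword profiles,
    symmetrized over both matching directions."""
    def directed(src, dst):
        return sum(max(rel(k, d) for d in dst) for k in src) / len(src)
    return (directed(p1, p2) + directed(p2, p1)) / 2

expert_a = ["clustering", "ontologies"]
expert_b = ["data mining", "semantic web"]
print(round(profile_similarity(expert_a, expert_b), 2))  # → 0.85
```

With such a pairwise similarity in hand, any standard clustering algorithm that accepts a similarity (or distance) matrix can partition the experts into groups of comparable expertise.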