
International Journal of Computer Applications (0975 – 8887)

Volume 94 – No 2, May 2014

Improving Statistical Multimedia Information Retrieval (MIR) Model by using Ontology

Gagandeep Singh Narula
B.Tech, Guru Tegh Bahadur Institute of Technology, GGS Indraprastha University, Delhi

Vishal Jain
Research Scholar, Computer Science and Engineering Department, Lingaya's University, Faridabad

ABSTRACT
The process of retrieving relevant information from a massive collection of documents, whether multimedia or text, is still a cumbersome task. Multimedia documents include elements of different data types: visible and audible data types (text, images and video), structural elements, and interactive elements. In this paper, we propose a statistical high-level multimedia IR model that avoids the shortcomings of the classical statistical model. It uses ontology together with different statistical IR approaches (Extended Boolean approach, Bayesian network model, etc.) to represent the extracted text-image terms and phrases.

A typical IR system that stores and delivers information is affected by the problem of matching between the user query and the content available on the web. Ontology represents the extracted terms in the form of a network graph consisting of nodes, edges, index terms, etc. The above-mentioned IR approaches provide relevance, thus satisfying the user's query.

The paper also emphasizes analyzing multimedia documents and performing calculations for the extracted terms using different statistical formulas. The proposed model reduces the semantic gap and satisfies user needs efficiently.

Index Terms
Information Retrieval (IR), OWL, Statistical Approaches (BI model, Extended Boolean Approach, Bayesian Network Model), Query Expansion and Refinement.

State of Art
Research on multimedia information retrieval is a gargantuan and challenging task. Its areas are so diversified that it has led to independent research on each of its components. At first there were human-centered systems that focus on users' behavior and needs, and various experiments and studies were conducted around these systems. Users were asked to present a set of things they value in daily life, in order to compare similarity across users. Some of the choices were the same while others differed; a few users preferred images to text captions.

In further experiments, it was noticed that new users were taking feedback from previous users, which led to the relevance feedback module in information models. In the early years, most research was done on content-based image retrieval. The existing models differ in level and scope, and they are semantically unambiguous. For example, the IPTC model [1] uses location fields that focus on the location of data, but it failed due to the lack of a statistical approach. Another metadata model, EXIF [2], was developed to support features of images, but it said nothing about the relationships and associations between the different contents of an image, and it too was in vain. The third model was Dublin Core [3], which deals with the semantic as well as the structural content of image and text but fails to depict the relationship between text and image.

With advances in technology, some probabilistic and futuristic models were also developed. In the following paper, a statistical multimedia IR model is proposed and compared with the classical multimedia IR model.

1. INTRODUCTION
Human knowledge is the richest multimedia storage system. There are various mechanisms, such as vision and language, that express knowledge, and the information obtained from them must be processed efficiently by the system. Systems must be designed that interpret and process human queries, thus producing relevant results. Users often get baffled while searching for the results of their queries. The reasons behind this are:

• The content of the information is unclear and needs the user to refine it.
• The data stored on systems may or may not be updated regularly.
• There is a low level of interaction between the user request and the information stored on systems. These low-level links are called the Semantic Gap.

Statistical approaches retrieve documents that match the query closely in terms of statistics, i.e. they rest on a statistical model, calculations and analysis. These approaches break a given query into TERMS. Terms are words that occur in the collection of documents and are extracted automatically. To reduce inconsistencies and the semantic gap in multimedia information, it is necessary to remove different forms of the same word, because they confuse the user in choosing the specific terms that lie closest to the query (see the sketch below). Some IR systems extract phrases from documents; a phrase is a combination of two or more words found in a document.
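To illustrate the conflation of word forms mentioned above, here is a crude, purely illustrative Python sketch; a real IR system would use a proper stemmer such as the Porter algorithm, and the suffix list here is an assumption for demonstration only.

```python
# Naive illustration of conflating word forms before indexing.
# Assumption: a real system would use a proper stemmer (e.g. Porter).
SUFFIXES = ("ing", "ies", "es", "s", "ed")

def crude_stem(word):
    """Strip the first matching suffix, keeping at least a short stem."""
    for suf in SUFFIXES:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[: -len(suf)]
    return word

terms = ["retrieve", "retrieves", "retrieved", "retrieving"]
print({crude_stem(t) for t in terms})
# Three inflected forms collapse to 'retriev' while the base form stays
# distinct -- showing why a principled stemmer is needed in practice.
```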
We have used approaches like the extended Boolean approach and the network model, which perform structural analysis for retrieving text or image pairs. They also assign a weight to each term; the weight is defined as a measure of the effectiveness of a given term in distinguishing one document from other documents.

The paper has the following sections: Section 2 describes the architecture of the classical multimedia model. Section 3 takes the reader through the proposed IR model, which is implemented with statistical approaches and the use of ontology; it also requires conversion of low-level features to high-level features using multimedia analysis. Section 4 deals with the experimental analysis and calculations depicting the relevance of the proposed model. Finally, Section 5 concludes the paper.


2. CONCEPT OF MULTIMEDIA IR SYSTEM
The classical multimedia IR system has not proven effective in extracting relevant terms from document collections. Traditional IR systems are not intelligent enough to produce accurate results. These systems use human perception to process a query and return results. The results may be relevant or non-relevant, because the systems simply match the query against the information stored in the information database.

The syntax of a multimedia document is different from that of a text document. Multimedia documents do not contain information symbols or keywords that help express the information. They consist of:

• Visible and audible data types: text, images, graphs, videos and audio.
• Structural elements: they are not visible; they describe the organization of the other data types.

The salient features of multimedia information [4] are given below:

• The information stored in the document to be searched can be audio, visual, video, etc. These media communicate a variety of messages and emotions that help understanding.
• Structure information gives organization and usability in performing communications.
• There is a communication gap between user and system: some systems are fast in processing calculations whereas humans are not, and this leads to a communication gap.

2.1 Layout of Classical Multimedia IR Model
Multimedia documents do not contain keywords or symbols that facilitate an easy search through the document. Keeping this in mind, the classical model includes a Query Processing Module that translates the multimedia information tokens into symbols/keywords that are easily understood by the system. The model has the following modules (a minimal code sketch of the pipeline follows Figure 1):

• Analysis Module: the IR system first analyzes the multimedia documents and extracts features from them, including low-level as well as high-level features.
• Indexing Module: the module that stores the features or terms retrieved from the multimedia documents.
• Query Processing Module: translates multimedia information tokens such as audio, text pairs and videos into information symbols that the system understands.
• Retrieval Module: ranks the stored documents on the basis of terms similar to those used in the query; after ranking, the results satisfying the query are presented to the user.

Figure 1: A Classical Multimedia IR Model [5] (the user's query passes through query processing to the retrieval module; multimedia documents pass through multimedia analysis to the indexer, whose index the retrieval module searches before returning results to the application)
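To make the data flow of Figure 1 concrete, the following minimal Python sketch composes the four modules into one pipeline. All names and the toy token-overlap scoring are our own assumptions for illustration, not part of the model in [5].

```python
# Hypothetical sketch of the classical multimedia IR pipeline (Figure 1).
# Each function/class mirrors one module of the model described above.

def analyze(document):
    """Analysis Module: extract features (here: naive lowercase tokens)."""
    return set(document.lower().split())

class Indexer:
    """Indexing Module: stores extracted features as an inverted index."""
    def __init__(self):
        self.index = {}                      # term -> set of document ids

    def add(self, doc_id, features):
        for term in features:
            self.index.setdefault(term, set()).add(doc_id)

def process_query(query):
    """Query Processing Module: translate the query into information symbols."""
    return set(query.lower().split())

def retrieve(indexer, query_terms):
    """Retrieval Module: rank documents by the number of shared query terms."""
    scores = {}
    for term in query_terms:
        for doc_id in indexer.index.get(term, set()):
            scores[doc_id] = scores.get(doc_id, 0) + 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

docs = {1: "sunset image over sea", 2: "text caption of a sunset video"}
idx = Indexer()
for doc_id, doc in docs.items():
    idx.add(doc_id, analyze(doc))
print(retrieve(idx, process_query("sunset video")))   # [(2, 2), (1, 1)]
```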

2.2 Shortcomings of Classical Multimedia IR Model
They are explained below:

• The classical model deals with terms or information symbols instead of maintaining relationships between them. It gives no information about the concepts behind the extracted terms or image pairs.
• It creates a semantic gap [6] between user and system, due to the irrelevant and superfluous information terms stored in the information database of the IR system.
• It does not involve ontology or semantic associations for representing the concepts associated with the terms in a document.
• The terms that are relevant and similar to each other are identified only at the end of the process, by the Retrieval Module. A good model is one that can distinguish relevant from non-relevant terms in the middle of the process, in order to prevent confusion.
• The model has no concept of re-use of queries. Once a query has been expanded, it is not stored in the system for future use; the system must again analyze a large collection of documents and retrieve terms from them.

• It does not employ any statistical or probabilistic approaches for determining the relevance of the IR system.

3. PROPOSED HIGH LEVEL MULTIMEDIA IR MODEL
A model is designed that employs statistical IR approaches for extracting terms from multimedia documents. An Ontology Module is introduced that represents the concepts and relationships among the retrieved terms. To overcome the problems above, the model includes only those approaches that extract terms such as images, video and text from multimedia documents as well as from text documents. The block diagram of the proposed model, containing several modules, is shown below:

Figure 2(a): Proposed High-level Statistical Multimedia IR Model. Recoverable structure: multimedia documents from the web (images, videos, text) feed IR systems (SMART, INQUERY) that perform structure analysis of text-image pairs; the extracted terms and information symbols (phrases, video, audio) go to an indexer that stores the information symbols (INDEXING). From this large collection, which may contain relevant or irrelevant information, relevant terms/phrases/concepts are extracted using a statistical approach on low-level features (the Extended Boolean approach with the p-norm model, which yields n relevant terms in less time, and the Bayesian inference network and BI models, where the Bayesian network model takes multiple queries at the same time and creates a graph of True/False nodes connected by edges) together with a semantic approach on high-level features (NLP approaches and knowledge discovery); the results flow into the Ontology Module.


Figure 2(b): Proposed High-level Statistical Multimedia IR Model (continued). Recoverable structure: the Ontology Module represents the extracted concepts/terms in a hierarchical manner; it creates an ontology for a given document Di, gathers information about the relevant extracted terms, and an Ontology Builder (classification algorithm) generates classes from the inference network graph using OWL and XML classes, while an Ontology Phrase Extractor extracts new relevant terms, maintains semantic associations and creates new documents (INDEXING). The Query Processing Module performs query processing, query expansion (rules 1…, transformed queries 1…) using methods such as sketch retrieval, search by keyword, search by example, adaptive retrieval and Local Context Analysis (LCA), and query refinement via calculation of new and old weights over a dummy document; the queries that are captured are stored in a query database for future re-use. The Retrieval Module ranks documents according to similarity metrics, and the results are presented to the user through the application.

The model has the following aspects:

• Improves user expressiveness: it analyzes terms that are close in meaning to the user's query, and expressive results are presented to the user.
• Supports different modules: several modules such as the Ontology Module, the Extraction Module, and Query Expansion and Refinement have been introduced in the proposed model.
• Low computation and cost: the approaches used to extract terms from documents take into account only relevant terms and discard non-relevant ones. Only relevant terms are expanded, which saves time and work.
• Good retrieval accuracy: the model retrieves from documents only those terms that satisfy the user's information needs.
• Pipelining facility: pipelining means dividing a complex task into a number of independent sub-tasks that run in parallel. It helps in the extraction of text-image documents by dividing them into smaller segments, each holding some information. As soon as each part is analyzed, the terms from the different segments are retrieved and combined to reproduce the full document.

Figure 3: Extraction of Image Terms [7] (an original image is split into segments, and image terms are extracted from each segment in parallel)

3.1 Multimedia Document Analysis Module
There is a large number of multimedia documents, consisting of video and text collections, on the web. The IR systems used in the model perform structural analysis of the documents and extract text-image terms from them [8]. At this stage it is not possible to fully determine whether randomly chosen documents are relevant or non-relevant. The classical model works on low-level multimedia analysis. The proposed multimedia model works on a high-level multimedia analysis algorithm rather than low-level analysis, for the following reasons:

Table 1: Features of High-Level Multimedia Analysis

Low-level multimedia analysis:
1. Produces low-level features such as text and image terms.
2. Extracts relevant terms only from the information stored in the system's information database.
3. Uses information symbols to build the index of multimedia documents; these symbols may or may not specify the underlying concepts.

High-level multimedia analysis:
1. Produces high-level features, i.e. it describes the concepts associated with the extracted low-level text terms.
2. Extracts terms from derived documents even if the information is not stored on the system.
3. Uses keywords related to the document that always indicate the presence of the concepts described by the terms in the given document.

3.2 Indexing Module
The terms and information symbols were extracted in the previous module. The storage location of these extracted terms (relevant or irrelevant) is decided by the Indexing Module, with the help of an Indexer that stores the generated terms. This module can store high-dimensional information, i.e. it can handle structured indexes or trees along with plain information symbols.

3.3 Extraction Module
This module uses two or more statistical approaches to extract relevant terms or phrases from the retrieved documents. It is the module that determines the relevance of the IR system: it can distinguish relevant from non-relevant terms on the basis of the results produced by the statistical approaches. The approaches are discussed in the following subsections.

3.3.1 Extended Boolean Approach
Problem: the classical Boolean condition (True/False) produces both relevant and non-relevant results, because it supplies its verdict for the document as a whole. If some text or image terms in a document are relevant and some are not, the Boolean condition leads to irrelevant results.

Solution: the Extended Boolean approach.

Analysis: a number of extended Boolean models have been developed to provide a ranked output of results, i.e. the documents that satisfy the user's query. These models use extended Boolean operators, called soft Boolean operators, for finding relevant text-image pairs. The approach assigns different weights to different terms and computes relevance.

The classical Boolean operators differ from the soft operators as follows:

39
International Journal of Computer Applications (0975 – 8887)
Volume 94 – No 2, May 2014

Table 2: Classical Operators vs. Soft Operators

Classical Boolean operator: evaluates its terms to return only two values, True or False, represented by 1 and 0 respectively; it can be represented graphically by truth tables.

Extended (soft) operators: evaluate their items to a number according to the degree to which the condition matches the document. If the condition fully matches the document the value is 1, if it fails entirely the value is 0, and if some part satisfies the condition while another part does not, the value is a fraction. Soft operators therefore do not dismiss a partially matching document as irrelevant.
An example of the Extended Boolean approach is the p-norm model.

P-Norm Model: the model performs its evaluation if and only if the terms satisfy the user's query in accordance with the user's views. The model uses two functions, AND and OR, for finding similar documents and terms. Consider a query with n terms q1, q2, …, qn-1, qn and corresponding weights wq1, wq2, …, wqn-1, wqn in a given document Di. The document terms are likewise assigned weights wd1, wd2, …, wdn-1, wdn.

First, the extended Boolean AND function finds similar documents by combining (ANDing) the query terms together; terms are then retrieved from those documents that satisfy the user's needs. The AND function requires all components to be present in order to return a relevant (non-zero) value; if any component is absent, it yields zero:

(1) S_AND(d; (q1, wq1) AND … AND (qn, wqn)) = 1 − [ Σi (1 − wdi)^p (wqi)^p / Σi (wqi)^p ]^(1/p)

where 1 ≤ p ≤ ∞ and S_AND is the similarity of documents retrieved using the AND function.

The extended Boolean OR function finds similar documents by adding (ORing) the query terms together:

(2) S_OR(d; (q1, wq1) OR … OR (qn, wqn)) = [ Σi (wdi)^p (wqi)^p / Σi (wqi)^p ]^(1/p)

where 1 ≤ p ≤ ∞ and S_OR is the similarity of documents retrieved using the OR function.

So we conclude that the p-norm model returns n relevant multimedia terms instead of binary verdicts. It reduces system time and increases performance.
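Equations (1) and (2) translate directly into code. The following Python sketch is a direct transcription; the variable names and sample weights are ours.

```python
# Sketch of the p-norm similarity functions from equations (1) and (2).
# wd: document-term weights wdi, wq: query-term weights wqi, p >= 1.

def sim_and(wd, wq, p=2.0):
    num = sum((1 - d) ** p * q ** p for d, q in zip(wd, wq))
    den = sum(q ** p for q in wq)
    return 1 - (num / den) ** (1 / p)

def sim_or(wd, wq, p=2.0):
    num = sum(d ** p * q ** p for d, q in zip(wd, wq))
    den = sum(q ** p for q in wq)
    return (num / den) ** (1 / p)

wd = [0.8, 0.4, 0.0]     # weights of the query terms in document Di
wq = [1.0, 1.0, 0.5]     # weights of the terms in the query
print(sim_and(wd, wq, p=2), sim_or(wd, wq, p=2))
# As p grows, the functions approach strict Boolean AND/OR behaviour.
```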
Drawbacks: the Extended Boolean approach fails in selecting the relevant terms from the given n terms. The p-norm model assigns weights to query terms as well as document terms, but all terms are treated equally because the p-norm functions evaluate all term weights in the same way; the model cannot distinguish between relevant and non-relevant terms. The solution to this problem lies in the probabilistic statistical IR approaches.

3.3.2 Bayesian Probability Models / Conditional Probability Models
Bayesian models relate the probability of randomly selected documents to the probability that a given document is relevant. In this case we know the features of the document (image terms, text, statistics, phrases, etc.) and then calculate its probability. The features of probabilistic models are:

• They are related to prior and posterior probabilities. Prior means finding the probability as early as possible, without knowing the features of the document; posterior means finding the probability after examining the features of the document. Prior probability + posterior probability = 1.
• Conditional models are also called probability kinematics models, defined as the flow of probability from relevant terms to non-relevant terms across the whole document.
• They use the concept of Inverse Document Frequency (idf) for determining the number of relevant terms, using the formula idf = ln(N/n), where N = the total number of documents and n = the number of relevant documents.

Probabilistic models help in achieving relevance on the basis of the values estimated for different documents. The statistical probabilistic models [9] are categorized into two parts:

(a) Binary Independence Model (BI): the model in which each text-image term (relevant or irrelevant) is independent of the other text-image pairs in the collection of documents. The probability of any relevant or irrelevant term is thus independent of the probability of any other term in the documents.

The BI model is also called Relevance Weighting Theory. It says that each term is given a weight that is used to rank documents by relevance, thus extracting the relevant terms. Over a random collection of documents, weights are assigned as the product of Term Frequency and Inverse Document Frequency (tf * idf). Term frequency (tf) is the number of times a term occurs in a document, so tf varies from one document to another, whereas inverse document frequency (idf) measures in how many documents of the collection the given term occurs; it gives the probability of the term occurring in a document.

Consider a finite number of terms tk in a document di. Each term is assigned a weight Wk calculated by the following formula (for a given set of data terms):

Wk = log [Pk (1 − Uk) / (Uk (1 − Pk))]

where
Pk = the probability of term tk occurring in relevant documents,
Uk = the probability of term tk occurring in non-relevant documents,
Wk = the weight of the term, defined as a measure of its power to distinguish relevant from non-relevant terms; it is also called the Term Relevance Weight or Log Odds Function.

The odds ratio is calculated from the likelihood of the term in relevant as well as non-relevant documents. Let the likelihood of the term in relevant documents be X = Pk / (1 − Pk) and in non-relevant documents Y = Uk / (1 − Uk). Then Wk is given by X / Y: Wk is zero if Pk = Uk, and Wk > 0 if Pk > Uk.

The model concludes that a term that occurs many times in a single document is relevant, but if the same term occurs in a large number of documents, it is not. A weight function is therefore developed that interpolates between the idf and Wk formulas.

Limitation of this model: it is not able to distinguish between low-frequency and high-frequency terms when assigning weights; it gives low-frequency terms the same weight as high-frequency ones. It is also unable to extract terms from multiple queries. To overcome these problems, we have used the Inference Network Model.
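The BI term weight Wk above can be computed as follows. In this Python sketch the add-0.5 smoothing is a standard guard against zero or one probabilities; it is our addition, not part of the paper's formula.

```python
import math

def bi_term_weight(r, R, nr, NR, eps=0.5):
    """Relevance weight Wk = log[Pk(1-Uk) / (Uk(1-Pk))].
    r/R:  relevant docs containing the term / all relevant docs.
    nr/NR: non-relevant docs containing the term / all non-relevant docs.
    eps applies simple add-0.5 smoothing so the log odds stay finite."""
    Pk = (r + eps) / (R + 2 * eps)
    Uk = (nr + eps) / (NR + 2 * eps)
    return math.log((Pk * (1 - Uk)) / (Uk * (1 - Pk)))

# A term in 4 of 10 relevant and 5 of 15 non-relevant documents:
print(bi_term_weight(4, 10, 5, 15))   # > 0, so the term favours relevance
```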

(b) Bayesian Inference Network Model
This is a statistical approach for extracting terms from multimedia documents by constructing a graph called an inference network graph. Besides computing probabilities for the different nodes, the model also determines the concepts linking the various retrieved terms. It provides assurance that user needs are fulfilled, because it combines multiple sources of evidence regarding the relevance of a document to the user query.

Graph structure: an inference network is a graph of nodes connected by edges. Nodes represent true/false statements describing whether a term is relevant or not. The graph has the following elements:

• Document nodes (Dn): the root nodes.
• Text nodes (Tn): child nodes of the document nodes. They may include audio nodes, video nodes, text-image nodes, etc., so the child nodes give multiple representations of a document.
• Concept representation nodes (CRn): children of the text nodes. The concepts used in the terms held by the text nodes are represented by CR nodes. These nodes are index terms or keywords that are matched against the document to retrieve relevant terms.
• Document network: the network consisting of document nodes, text nodes and CR nodes. It is not a tree, as it has multiple roots and shared nodes; the document network is a directed acyclic graph (DAG), since it has no loop. The document network for documents D1 to Dn is shown below:

Figure 4: Document Network (documents D1 … Dn link to text nodes T1 … Tn, which link to concept representation nodes CR1 … CRn; it describes the concepts used in terms from different documents)

• Query network: since the concepts have been extracted in the document network, different query nodes may use the same concepts, or different concepts may appear in different nodes. The concepts that describe relevant terms are shown in the form of results and presented to the user. The query network for query nodes Q1 to Qn is shown below:

Figure 5: Query Network (concept representation nodes CR1 … CRn feed query nodes Q1 … Qn, which generate the results as leaf nodes r1 … rn)


When we combine the document network and the query network, we get the inference graph. This graph computes the probabilities of the terms contained in the child nodes of the document nodes, and so on, by means of a LINK MATRIX. Each node's weight is given in a row of the matrix, and the columns represent the possible combinations the node's parents can take.

In a link matrix the number of columns is 2^n, where n is the number of parents; if a node has 3 parents, there are 8 columns. Probabilities and the weight function are then computed for all 8 columns of the matrix; each combination's probability is multiplied by its weight, and the eight products are added to obtain the total probability given the respective parent nodes. Consider the combination 110 (1 stands for True, 0 for False). The probability of this combination is P1 * P2 * (1 − P3), its weight function is (W1 + W2) / (W1 + W2 + W3), and its total contribution is therefore P1 * P2 * (1 − P3) * (W1 + W2) / (W1 + W2 + W3).
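The link-matrix evaluation just described can be sketched as follows; enumerating all 2^n parent combinations and the sample probabilities and weights are our own illustration.

```python
from itertools import product

def weighted_sum_belief(parent_probs, weights):
    """Belief of a child node under a weighted-sum link matrix.
    For each of the 2^n true/false parent combinations, the combination's
    probability is multiplied by (sum of weights of the true parents) /
    (sum of all weights), and all contributions are added up."""
    total_w = sum(weights)
    belief = 0.0
    for combo in product([1, 0], repeat=len(parent_probs)):
        p_combo = 1.0
        for bit, p in zip(combo, parent_probs):
            p_combo *= p if bit else (1 - p)
        belief += p_combo * sum(w for bit, w in zip(combo, weights) if bit) / total_w
    return belief

# Three parents: the combination 110 from the text contributes
# P1*P2*(1-P3) * (W1+W2)/(W1+W2+W3); the function sums all 8 combinations.
print(weighted_sum_belief([0.9, 0.6, 0.3], [2.0, 1.0, 1.0]))
```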

3.4 Ontology Module
This module represents the concepts, and the conceptual relationships among the nodes described by the inference network graph of the previous module, using ontology. Ontology is defined as a formal, explicit and shared conceptualization of concepts, organizing them in a hierarchical fashion [10]. The phases of the ontology module are described below:

(a) Creation of ontology (ontology representation): the inference graph consists of document nodes (root nodes). Each document node has concept nodes that are treated as vertices, and an edge from one node to another represents a relationship among the concepts: a document node Di has a text node Ti and concept nodes CRi, with edges representing the relationships between them.

(b) Ontology building: an algorithm develops the ontology for the inference graph. It requires OWL (Web Ontology Language), which is used for writing the ontology and creating objects of each class:

BEGIN
For each vertex V of inference graph G
    Class C = new (owl:Class)
    C.Id = C.label    // each concept has its unique identification and name
    DatatypeProperty DP = new (owl:DatatypeProperty)    // DatatypeProperty of the parent node
    DP.Id = DP.Name, DP.Value;
    DP.AddDomain (C);    // adds the values of the child nodes to the given concept node C
    For each edge E of graph G
        DP.AddDomain (B.getClass ())    // getClass is used to show the relationship between concepts
    End for
End begin

(c) Generation of OWL classes:

Class Result = new (owl:Class)    // Result represents the leaf nodes
Result.Id = Result.Name
DatatypeProperty ResultDP = new (owl:DatatypeProperty)    // to show the value of the leaf nodes
ResultDP.Id = Result.Name, Result.Value;    // leaf nodes have a name and a value
Result.AddDomain (Result)
For each edge E of graph G
    Class Relationship = new (owl:Class)
    Relationship.Id = " "
    For each vertex of graph
        Relationship.Id = Relationship.Id + C.label;
    End for
ResultDP.AddDomain (Relationship)
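The pseudocode above could be realized with an RDF library; the following is a minimal sketch using Python's rdflib. The namespace URI and the toy inference-graph edges are hypothetical, and encoding edges as subclass relations is one possible design choice, not the paper's prescription.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/mir#")   # hypothetical ontology namespace
g = Graph()
g.bind("ex", EX)

# Vertices of the inference graph become OWL classes ...
edges = [("D1", "T1"), ("T1", "CR1"), ("T1", "CR2")]   # toy inference graph
for parent, child in edges:
    for name in (parent, child):
        g.add((EX[name], RDF.type, OWL.Class))
        g.add((EX[name], RDFS.label, Literal(name)))   # unique id and name
    # ... and edges become subclass-style relationships between concepts.
    g.add((EX[child], RDFS.subClassOf, EX[parent]))

print(g.serialize(format="turtle"))
```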
3.5 Query Processing Module
A query expresses an information need; the final result should consist of optimal and effective terms. This module deals with the expansion and refinement of the query, either automatically or manually with user interaction. It analyzes the query according to the query language, extracts information symbols from it, and passes them to the Retrieval Module for searching the index terms.

Query expansion through manual methods includes:

• Sketch retrieval: one of the methods to query a multimedia database. The user's query is a visual sketch drawn by the user; the system processes this drawing to extract its features and searches the index for similar images.
• Search by example: the user gives as a query an example of the image he intends to find, and the low-level features are extracted from this query.
• Search by keyword: the most popular method. The user describes the information with a set of relevant terms and the system searches for them in the documents.

Query expansion through an automatic method uses the Local Context Analysis (LCA) approach, one of the best methods for automatic query expansion. It expands the terms of the query and ranks and weights them with a fixed formula:

LCA = local feedback analysis + global analysis

It is local because concept-relevant terms are retrieved only from the globally retrieved documents. It is global because the documents related to the given query topic are selected from the huge collection of documents present on the web (as when we select three documents related to the semantic web from the web): when we put a query into Google and press ENTER, the query is executed and retrieves some documents, which is a global activity. LCA is a concept-based fixed-length scheme: it expands the user query and retrieves the top n relevant terms that most closely satisfy the query, returning only a fixed number of terms. The retrieved terms are ranked, with the product running over the query terms ta, as:

Belief (Q, C) = ∏(ta ∈ Q) [δ + log (af(c, ta)) · idfc / log (n)] ^ idfa


where
C = the concepts related to query Q,
Belief (Q, C) = the ranking function,
ta = a query term,
af(c, ta) = f(ta, d1) · f(c, d1) + f(ta, d2) · f(c, d2) + … + f(ta, dn) · f(c, dn), i.e.

af(c, ta) = Σ(d = 1 … n) ftad · fcd

where
d = the documents from 1 to n,
ftad = the frequency of occurrence of query term ta in document d,
fcd = the frequency of the concept c (terms related to the query) in document d,
idfc = the importance of the concepts related to the query terms, i.e. how many times the same concept is used in the documents,
idfa = the importance of the query term ta,
δ = a constant used for distinguishing between relevant and non-relevant terms; non-relevant terms contribute only this constant.
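Putting the Belief formula and the af(c, ta) sum together, a Python sketch might look like this. The toy documents and idf values are assumptions, and δ = 0.1 is the value commonly used for LCA in the literature, not a figure from this paper.

```python
import math

def lca_belief(query_terms, concept, docs, idf_c, idf_a, delta=0.1):
    """Belief(Q, c) = prod over query terms ta of
       (delta + log(af(c, ta)) * idf_c / log(n)) ** idf_a[ta],
    where af(c, ta) = sum over the n top-ranked documents of
    freq(ta, d) * freq(c, d)."""
    n = len(docs)
    belief = 1.0
    for ta in query_terms:
        af = sum(d.count(ta) * d.count(concept) for d in docs)
        score = delta + math.log(af) * idf_c / math.log(n) if af > 0 else delta
        belief *= score ** idf_a.get(ta, 1.0)
    return belief

docs = [["semantic", "web", "ontology", "web"],
        ["ontology", "retrieval", "web"],
        ["image", "retrieval", "semantic"]]
print(lca_belief(["web"], "ontology", docs, idf_c=1.2, idf_a={"web": 0.8}))
```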
3.5.1 Query Refinement
A term can have different weights in each relevant document, so the query needs to be refined. Query refinement means calculating the old weights of the expanded query terms in order to produce new weights for the same terms. These query terms are transformed into a dummy document that is used for indexing.

The formula that calculates the new weights of the query terms and produces optimal results by discarding non-relevant terms is the Rocchio formula. Its aim is to increase the weights of terms that occur in relevant documents and decrease the weights of terms occurring in non-relevant documents:

Qa (new) = x · Qa (old) + y · (1/|RD|) · Σ wta,RD − z · (1/|NRD|) · Σ wta,NRD

where
Qa (new) = the new weight of query term a,
Qa (old) = the old weight of the term,
RD = the relevant documents judged by the user,
NRD = the non-relevant documents judged by the user,
wta,RD = the weights of the term in the relevant documents,
wta,NRD = the weights of the term in the non-relevant documents,
Σ wta,RD = the sum of all the term's weights over RD,
Σ wta,NRD = the sum of all the term's weights over NRD,
y = a constant giving the average of the weights of the terms in RD,
z = a constant giving the average of the weights of the terms in NRD.

The result is that non-relevant terms end up with negative weights, and they are discarded automatically.
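A sketch of the Rocchio update as a Python function follows. Clamping negative results to zero implements the automatic discarding of non-relevant terms described above, and the example constants y = 0.75, z = 0.15 are the usual textbook choices, not the paper's values.

```python
def rocchio_new_weight(q_old, sum_w_rel, n_rel, sum_w_nonrel, n_nonrel,
                       x=1.0, y=1.0, z=1.0):
    """Qa(new) = x*Qa(old) + y*(1/|RD|)*sum(w in RD) - z*(1/|NRD|)*sum(w in NRD).
    Negative results are clamped to zero, i.e. the term is discarded,
    as the text above prescribes for negative weights."""
    q_new = (x * q_old
             + y * sum_w_rel / n_rel
             - z * sum_w_nonrel / n_nonrel)
    return max(q_new, 0.0)

# Commonly used Rocchio constants (an assumption, not the paper's values):
print(rocchio_new_weight(1.0, 10.0, 10, 15.0, 15, y=0.75, z=0.15))   # 1.6
```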
3.6 Retrieval Module
This module retrieves the final results/optimal queries that have been extracted after going through the various phases. It ranks documents according to similar queries and maintains the index according to the information symbols contained in each query.

3.6.1 Re-Use of Queries
Need for re-use of queries: the queries that were already expanded and refined according to the user's requirements are optimized and stored somewhere. If the user needs the same information in future, what is the way to retrieve the documents that satisfy the query?

Solution: re-use of queries.

Analysis: the expanded and refined queries are stored in a database called the query database. The query base contains queries related to previously retrieved documents; these queries are called persistent queries.

How to use persistent queries with a new query?
(a) If a new query is somewhat similar to a persistent query, then the result of the new query is related to that persistent query.
(b) If the user's new query is not similar to any persistent query, then the system has to find the persistent query in the database that satisfies the new query to some extent.

How to check for similar queries? Using the concept of a solution region: when the search for an optimal query begins, the system retrieves a number of queries instead of only one. All those queries are described in a query space, and the region containing that query space is called the solution region. We check similarity by comparing new queries with the queries in the solution region; if they match, the two queries are said to be similar. One possible realization is sketched below.
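The paper does not fix a similarity measure for comparing a new query against the solution region; one plausible realization is cosine similarity over term-weight vectors, sketched here with hypothetical persistent queries and an assumed matching threshold.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def best_persistent_query(new_q, solution_region, threshold=0.6):
    """Return the most similar persistent query, or None if nothing in
    the solution region matches the new query closely enough."""
    best = max(solution_region, key=lambda q: cosine(new_q, q["vector"]))
    return best if cosine(new_q, best["vector"]) >= threshold else None

region = [{"id": "pq1", "vector": {"semantic": 0.9, "web": 0.7}},
          {"id": "pq2", "vector": {"image": 0.8, "retrieval": 0.6}}]
print(best_persistent_query({"semantic": 1.0, "ontology": 0.5}, region))
```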

4. EXPERIMENTAL ANALYSIS AND CALCULATIONS
Consider the given set of data. We have to compute the probabilities of relevant and non-relevant terms and hence calculate the weight function for each term.

Given data:
Total number of relevant documents (R) = 10; total number of non-relevant documents (N − R) = 15.
Relevant documents with term tk (r) = 4; relevant documents without term tk (R − r) = 6.
Non-relevant documents with term tk (n − r) = 5; non-relevant documents without term tk = 10.
Total number of documents without term tk = 6 + 10 = 16; total number of documents having term tk: n = 4 + 5 = 9.


According to the BI model:
Total number of documents N = 25
Total number of documents with term tk: n = 9
Total number of relevant documents: R = 10
Total number of relevant documents with term tk: r = 4

From the above data:
Pk = probability of term tk occurring in relevant documents = 4/10 = 2/5
Uk = probability of term tk occurring in non-relevant documents = 5/15 = 1/3
X = Pk / (1 − Pk) = (2/5) / (3/5) = 2/3
Y = Uk / (1 − Uk) = (1/3) / (2/3) = 1/2
Odds ratio (weighting function) Wk = X/Y = 4/3
Ranking function W = log (X/Y) = log10 (4/3) ≈ 0.125
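The computation can be checked mechanically (the logarithm is taken to base 10 here):

```python
import math

N, n, R, r = 25, 9, 10, 4        # totals from the given data
Pk = r / R                        # 4/10 = 0.4
Uk = (n - r) / (N - R)            # 5/15 = 0.333...
X = Pk / (1 - Pk)                 # 2/3
Y = Uk / (1 - Uk)                 # 1/2
Wk = X / Y                        # 4/3
print(Wk, math.log10(X / Y))      # 1.333...  0.1249...
```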

Figure 6: Computation of the probabilities of terms, shown graphically (a plot of the 10 relevant documents against the 15 non-relevant documents for the BI model)


On the basis of the above graph and probability values, we can find the new weight function for the terms from the old weight function by using the Rocchio formula:

Qa (new) = x · Qa (old) + y · (1/|RD|) · Σ wta,RD − z · (1/|NRD|) · Σ wta,NRD

Here Qa (old) = 4/3, the relevant documents |RD| = 10, the non-relevant documents |NRD| = 15, Σ wta,RD = 4 + 6 = 10, Σ wta,NRD = 5 + 10 = 15, and x = 1, y = (4 + 6)/2 = 5, z = (5 + 10)/2 = 7.5.

So, Qa (new) = 1 · (4/3) + 5 · (1/10) · 10 − 7.5 · (1/15) · 15 = 4/3 + 5 − 7.5 ≈ −1.17

Since the new weight function is negative, it is discarded and the old function is kept as the relevance function.
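Again, the arithmetic is easy to verify:

```python
q_old, x, y, z = 4 / 3, 1.0, 5.0, 7.5
sum_w_rd, n_rd = 10.0, 10        # relevant documents
sum_w_nrd, n_nrd = 15.0, 15      # non-relevant documents
q_new = x * q_old + y * sum_w_rd / n_rd - z * sum_w_nrd / n_nrd
print(q_new)   # -1.1666..., negative, so the term is discarded
```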
Catchy concept: the proposed high-level statistical multimedia IR model deals with queries that have been expanded and refined according to the user's requirements, so the queries can be reused. This works well when the given queries are short. How can the model become suitable for long queries as well?

Catchy answer: through the use of random variables, which may be continuous as well as discrete. The terms found in multimedia text documents can be treated as variables. If the terms are short or finite, the problem is solved using discrete random variables: we simply add the products of the probabilities of the various terms/queries used in the document. In this way the expected value E of the term function is calculated and its relevance can be determined.

For long queries, continuous random variables can be used. Long queries may be bounded or infinite. For bounded queries, approximation is used: the terms are integrated over a particular interval, producing results close to the user's requirements. For infinitely long queries, various methods of calculating the expected value E, such as the Poisson and binomial distributions, are employed.

In this way, both short queries and long queries can be reused and expanded.

5. CONCLUSION
The paper illustrates the working of the proposed high-level multimedia IR model, consisting of various modules, each described separately. The model provides extraction of relevant terms from huge collections of multimedia documents. Since multimedia documents produce information tokens different from text tokens, the paper presents the statistical approaches that analyze a multimedia document and retrieve multimedia terms (text, images and videos) from it.

The new model can replace the ambiguities of the traditional multimedia IR model, which deals with information symbols only instead of maintaining relationships between them. It is beneficial in various respects: a module is introduced for maintaining conceptual relationships between the extracted terms and representing them using ontology, and the model uses probabilistic approaches for ranking documents and retrieving optimal queries. The results are then presented to the user.

6. REFERENCES
[1] International Press Telecommunications Council: "IPTC Core" Schema for XMP Version 1.0, Specification document (2005).

[2] Technical Standardization Committee on AV & IT Storage Systems and Equipment: Exchangeable image file format for digital still cameras: Exif Version 2.2, Technical Report JEITA CP-3451 (April 2002).

[3] Borgo, S., Masolo, C.: Foundational choices in DOLCE. In: Handbook on Ontologies, 2nd edn., Springer (2009).

[4] Joao Miguel Costa Magalhaes: 'Statistical Models for Semantic-Multimedia Information Retrieval', September 2008.


[5] Meghini, C., Sebastiani, F., and Straccia, U.: 'A model of multimedia information retrieval', Journal of ACM (JACM), 48(5), pages 909-970, 2001.

[6] Grosky, W.I., Zhao, R.: 'Negotiating the semantic gap: From feature maps to semantic landscape', Lecture Notes in Computer Science 2234 (2001).

[7] Adams, W. H., Iyengar, G., Lin, C. Y., Naphade, M. R., Neti, C., Nock, H. J., and Smith, J.: 'Semantic indexing of multimedia content using visual, audio and text cues', EURASIP Journal on Applied Signal Processing 2003 (2), pages 170-185.

[8] Datta, R., Joshi, D., Li, J., and Wang, J. Z.: 'Image retrieval: ideas, influences, and trends of the new age', ACM Computing Surveys, 2008.

[9] Hofmann, T., and Puzicha, J.: 'Statistical models for co-occurrence data', Technical Report, Massachusetts Institute of Technology, 1998.

[10] M. Preethi, J. Akilandeswari: 'Combining Retrieval with Ontology Browsing', International Journal of Internet Computing, Vol. 1, Issue 1, 2011.

[11] Croft, W. B., Turtle, H. R., and Lewis, D. D.: 'The use of phrases and structured queries in information retrieval', In ACM SIGIR Conf. on Research and Development in Information Retrieval, Chicago, Illinois, United States, 2004.

[12] Rifat Ozcan, Y. Alp: 'Concept Based Information Access using Ontologies and Latent Semantic Analysis', Technical Report, 2004-08.

[13] F. Crestani, M. Lalmas, C.J. van Rijsbergen, and I. Campbell: 'Is this document relevant? ... Probably: A survey of probabilistic models in information retrieval', ACM Computing Surveys, 30(4), pages 528-552, December 1998.

[14] Manning, C.D., Raghavan, P., and Schütze, H.: 'An Introduction to Information Retrieval', Cambridge University Press, Cambridge, 2007.

[15] Cai, D., Yu, S., Wen, J.-R., and Ma, W.-Y.: 'Extracting content structure for Web pages based on visual representation', In Asia Pacific Web Conference, 2003.

[16] Metzler, D., Manmatha, R.: 'An inference network approach to image retrieval', In Enser, P.G.B., Kompatsiaris, Y., O'Connor, N.E., Smeaton, A.F., Smeulders, A.W.M., eds.: CIVR, Volume 3115 of Lecture Notes in Computer Science, Springer (2004), pages 42-50.

[17] Faloutsos, C., Barber, R., Flickner, M., Hafner, J., and Niblack, W.: 'Efficient and effective querying by image content', J. Intell. Inform. Syst., 3:231-262, 1994.

[18] Ed Greengrass: 'Information Retrieval: A Survey', November 2000.

[19] O.S. Al-Kadi: 'Combined statistical and model based texture features for improved image classification', 4th IET International Conference on Advances in Medical, Signal and Information Processing (MEDSIP 2008), January 2008, page 314.

[20] S. Vigneshwari, M. Aramudhan: 'An Ontological Approach for effective knowledge engineering', International Conference on Software Engineering and Mobile Application Modeling and Development (ICSEMA 2012), January 2012, page 5.

[21] M.A. Moraga, C. Calero, and M.F. Bertoa: 'Improving interpretation of component-based systems quality through visualization techniques', IET Software, Volume 4, Issue 1, February 2010, pages 79-90, DOI: 10.1049/iet-sen.2008.0056, Print ISSN 1751-8806, Online ISSN 1751-8814.

[22] Michael S. Lew, Nicu Sebe, Chabane Djeraba and Ramesh Jain: 'Content-based Multimedia Information Retrieval: State of the Art and Challenges', In ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), February 2006.

[23] Alberto Del Bimbo, Pietro Pala: 'Content-based retrieval of 3D Models', In ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), Vol. 2, Issue 1, February 2006, pages 20-43.

[24] Carlo Meghini, Fabrizio Sebastiani and Umberto Straccia: 'A model of multimedia information retrieval', Journal of ACM (JACM), Vol. 48, Issue 5, September 2001, pages 909-970.

[25] Simone Santini: 'Efficient Computation of queries on feature streams', In ACM Transactions on Multimedia Computing, Communications, and Applications (TOMCCAP), Vol. 7, Issue 4, November 2011, Article No. 38.

[26] Graham Bennett, Falk Scholer and Alexandra: 'A comparative study of probabilistic and language models for information retrieval', In Proceedings of the nineteenth conference on Australasian database (ADC'08), Vol. 75, ISBN: 978-1-920682-56-9, pages 65-74.

ABOUT THE AUTHORS

Gagandeep Singh has completed his B.Tech (CSE) from GTBIT, affiliated to Guru Gobind Singh Indraprastha University, Delhi. His research areas include Semantic Web, Information Retrieval, Data Mining, Remote Sensing (GIS) and Knowledge Engineering.

Vishal Jain has completed his M.Tech (CSE) from USIT, Guru Gobind Singh Indraprastha University, Delhi, and is doing a PhD in the Computer Science and Engineering Department, Lingaya's University, Faridabad. Presently, he is working as Assistant Professor in Bharati Vidyapeeth's Institute of Computer Applications and Management (BVICAM), New Delhi. His research areas include Web Technology, Semantic Web and Information Retrieval. He is also associated with CSI and ISTE.
