
Data Mining in Search Engine Analytics

Power websites with Data Mining tools

ABSTRACT
Search engine analytics involves continuously monitoring vast volumes of data drawn
from internet usage statistics, keyword usage statistics, and many other parameters.
Data mining tools can keep track of this data by efficiently storing it, analysing it, and
producing outputs as and when necessary.

Many websites provide easy-to-use tools for analysing your articles and content so that
search engines can feature them on their first results pages, yet we seldom realise that
these tools are data mining applications on a smaller scale.

Many questions arise: how do these tools mine the data, where is that data stored, how
is the processing done, and many more. The most important requirement, real-time
analysis, makes the use of data mining methods an obvious choice.

In the digital age, the vast amount of information generated on the internet has given
rise to the need for sophisticated tools and techniques to extract meaningful insights.
Data mining, a crucial analytics component, is pivotal in unravelling patterns and trends
within large datasets. One domain where data mining is extensively employed is in
search engine analytics. Search engines, the gatekeepers to the vast digital landscape,
rely on data mining to enhance user experience, improve search relevance, and
optimize advertising strategies.
In conclusion, data mining in search engine analytics is a multifaceted process that
contributes significantly to the effectiveness and efficiency of search engine operations.
From refining search algorithms to predicting user behaviour and enhancing advertising
strategies, the application of data mining techniques continues to evolve, shaping the
landscape of digital search. As search engines strive to deliver more personalized and
relevant results, the role of data mining in search engine analytics will only become
more critical in the dynamic and ever-expanding digital ecosystem.

Signature of the Guide Name of the Student

(K. VISWANATH)
Data Mining Seminar Report

Abstract
Data mining is the process of extracting valuable insights and knowledge from large
volumes of data. It involves applying various techniques and algorithms to identify
patterns, relationships, and trends that can be used to make informed business
decisions, improve processes, and gain a competitive edge. Data mining encompasses
many tasks, including preprocessing, exploratory data analysis, pattern discovery, and
predictive modelling.
This abstract provides an overview of the data mining process, highlighting its
significance, essential techniques, and real-world applications. It emphasizes the
importance of data quality, appropriate data selection, and the need for ethical
considerations in data mining. Furthermore, it discusses the challenges and future
directions of data mining, including privacy concerns, scalability issues, and the
integration of artificial intelligence and machine learning techniques. Overall, data
mining offers immense potential for organizations to unlock valuable insights from their
data, enabling data-driven decision-making and fostering innovation in various domains.



Data Mining Techniques: An Introduction to Data Mining
Data mining is the process of extracting patterns from data. Data mining is becoming
increasingly essential to transforming this data into information. It is commonly used in
many profiling practices, such as marketing, surveillance, fraud detection and scientific
discovery.
Data mining can uncover patterns in data but is often carried out only on samples of
the data. The mining process will be ineffective if the samples are not a good
representation of the larger body of data. Data mining cannot discover patterns that may
be present in the larger body of data if those patterns are not present in the sample
being “mined”. This inability to find patterns can cause disputes between customers and
service providers. Data mining is therefore not foolproof, but it can be helpful if
sufficiently representative data samples are collected. The discovery of a particular
pattern in a particular data set does not necessarily mean that the pattern holds in the
larger body of data from which that sample was drawn. An essential part of the process
is the verification and validation of patterns on other samples of data.
The related terms data dredging, data fishing, and data snooping refer to the use of
data mining techniques on samples that are (or may be) too small for statistical
inferences to be made about the validity of any patterns discovered (see also data-
snooping bias). Data dredging may, however, be used to develop new hypotheses,
which must then be validated against sufficiently large sample sets, as in the sketch below.
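
To make that validation step concrete, here is a minimal sketch of checking a dredged pattern against a larger, held-out sample. The counts and the 0.01 threshold are hypothetical; the test is SciPy's chi-square test of independence.

```python
# A minimal sketch of validating a dredged pattern on a larger, held-out
# sample. The counts and the 0.01 threshold are hypothetical; the test is
# SciPy's chi-square test of independence.
from scipy.stats import chi2_contingency

# Hypothetical contingency table from the held-out validation sample:
# rows = bought diapers (no / yes), columns = bought beer (no / yes).
validation_counts = [
    [400, 100],  # no diapers: 400 without beer, 100 with beer
    [120, 180],  # diapers:    120 without beer, 180 with beer
]

chi2, p_value, dof, expected = chi2_contingency(validation_counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")

# Treat the pattern as validated only if it holds on the independent,
# sufficiently large sample, not just on the sample that suggested it.
if p_value < 0.01:
    print("Association holds on the held-out sample.")
else:
    print("Likely a data-dredging artifact; do not act on it.")
```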

Data Mining: An Overview


Generally, data mining (sometimes called data or knowledge discovery) is the process
of analyzing data from different perspectives and summarizing it into useful information
– information that can be used to increase revenue, cut costs, or both. Data mining
software is one of several analytical tools for analyzing data. It allows users to analyze
data from many dimensions or angles, categorize it, and summarize the identified
relationships. Technically, data mining is the process of finding correlations or patterns
among dozens of fields in large relational databases.

Continuous Innovation
Although data mining is a relatively new term, the technology is not. For years,
companies have used powerful computers to sift through volumes of supermarket
scanner data and analyse market research reports. However, continuous innovations in
computer processing power, disk storage, and statistical software have dramatically
increased analysis accuracy while reducing cost.
Real-world Examples of Data Mining Technology
For example, one Midwest grocery chain used the data mining capacity of Oracle
software to analyze local buying patterns. They discovered that when men bought
diapers on Thursdays and Saturdays, they also tended to buy beer. Further analysis
showed that these shoppers typically grocery shop on Saturdays. On Thursdays,
however, they only bought a few items. The retailer concluded that they purchased the
beer to have it available for the upcoming weekend. The grocery chain could use this
newly discovered information to increase revenue. For example, they could move the
beer display closer to the diaper display. And they could ensure beer and diapers were
sold at full price on Thursdays.
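
The beer-and-diapers finding is a classic association pattern, and the arithmetic behind it is simple. Below is a minimal sketch, over hypothetical shopping baskets, of the support and confidence measures an association miner would compute for the rule diapers → beer.

```python
# A minimal sketch of the support and confidence behind an association rule,
# computed over hypothetical shopping baskets for the rule diapers -> beer.
baskets = [
    {"diapers", "beer", "milk"},
    {"diapers", "beer"},
    {"diapers", "bread"},
    {"beer", "chips"},
    {"milk", "bread"},
]

n = len(baskets)
diaper_baskets = sum(1 for b in baskets if "diapers" in b)
both = sum(1 for b in baskets if {"diapers", "beer"} <= b)

support = both / n                  # fraction of all baskets with both items
confidence = both / diaper_baskets  # of diaper baskets, fraction with beer
print(f"support(diapers, beer) = {support:.2f}")
print(f"confidence(diapers -> beer) = {confidence:.2f}")
```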
Data, Information, and Knowledge
Data
Data is any facts, numbers, or text a computer can process. Today, organizations are
accumulating vast and growing amounts of data in different formats and databases.
This includes:
- operational or transactional data, such as sales, cost, inventory, payroll, and accounting
- nonoperational data, such as industry sales, forecast data, and macroeconomic data
- metadata, or data about the data itself, such as logical database design or data dictionary definitions
Information
The patterns, associations, or relationships among all this data can provide information.
For example, retail point-of-sale transaction data analysis can yield information on
which products are selling and when.
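
As a sketch of that step, the snippet below summarizes a few hypothetical point-of-sale transactions with pandas to show which products sell on which days; the column names and values are illustrative, not from any real system.

```python
# A minimal sketch of turning raw point-of-sale data into information with
# pandas: which products sell, and on which days. All values are hypothetical.
import pandas as pd

pos = pd.DataFrame({
    "product":  ["beer", "diapers", "beer", "milk", "diapers", "beer"],
    "weekday":  ["Thu", "Thu", "Sat", "Mon", "Sat", "Sat"],
    "quantity": [2, 1, 6, 1, 2, 4],
})

# Summarize individual transactions into sales by product and day of week.
summary = pos.groupby(["product", "weekday"])["quantity"].sum().unstack(fill_value=0)
print(summary)
```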
Knowledge
Information can be converted into knowledge about historical patterns and future trends.
For example, summary information on retail supermarket sales can be analyzed in light
of promotional efforts to understand consumer buying behaviour. Thus, a manufacturer
or retailer could determine which items are most susceptible to promotional efforts.
Data Warehouses
Dramatic advances in data capture, processing power, data transmission, and storage
capabilities enable organizations to integrate their various databases into data
warehouses. Data warehousing is defined as a process of centralized data
management and retrieval. Data warehousing, like data mining, is a relatively new term,
although the concept has been around for years. Data warehousing represents an ideal
vision of maintaining a central repository of all organizational data. Centralization of data
is needed to maximize user access and analysis. Dramatic technological advances are
making this vision a reality for many companies. And equally dramatic advances in data
analysis software allow users to access this data freely. The data analysis software is
what supports data mining.
What can data mining do?
Data mining is primarily used today by companies with a strong consumer focus – retail,
financial, communication, and marketing organizations. It enables these companies to
determine relationships among “internal” factors such as price, product positioning, or
staff skills and “external” factors such as economic indicators, competition, and
customer demographics. And it enables them to determine the impact on sales,
customer satisfaction, and corporate profits. Finally, it enables them to “drill down” into
summary information to view detailed transactional data.
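
A minimal sketch of that drill-down idea, using hypothetical retail data with pandas: first the summary by region, then the detailed transactions behind one summary row.

```python
# A minimal sketch of drilling down from summary information to the detailed
# transactions behind it, using hypothetical retail data.
import pandas as pd

tx = pd.DataFrame({
    "region": ["East", "East", "West", "West", "East"],
    "store":  ["E1", "E2", "W1", "W1", "E1"],
    "sales":  [120.0, 80.0, 200.0, 150.0, 60.0],
})

# Summary level: total sales by region.
print(tx.groupby("region")["sales"].sum())

# Drill down: the individual transactions behind the East total.
print(tx[tx["region"] == "East"])
```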
With data mining, a retailer could use point-of-sale records of customer purchases to
send targeted promotions based on an individual’s purchase history. By mining
demographic data from comments or warranty cards, retailers could develop products
and promotions to appeal to specific customer segments.
For example, Blockbuster Entertainment mines its video rental history database to
recommend rentals to individual customers. American Express can suggest products to
its cardholders based on an analysis of their monthly expenditures.
Walmart is pioneering massive data mining to transform its supplier relationships.
Walmart captures point-of-sale transactions from over 2,900 stores in 6 countries and
transmits this data to its massive 7.5 terabyte Teradata data warehouse. Walmart
allows more than 3,500 suppliers to access data on their products and perform data
analyses. These suppliers use this data to identify customer buying patterns at the store
display level. They use this information to manage local store inventory and identify new
merchandising opportunities. In 1995, Walmart computers processed over 1 million
complex data queries.
The National Basketball Association (NBA) is exploring a data mining application that
can be used in conjunction with image recordings of basketball games. The Advanced
Scout software analyzes players’ movements to help coaches orchestrate plays and
strategies. For example, an analysis of the play-by-play sheet of the game played
between the New York Knicks and the Cleveland Cavaliers on January 6, 1995, reveals
that when Mark Price played the Guard position, John Williams attempted four jump
shots and made each one! Advanced Scout not only finds this pattern but explains that
it is interesting because it differs considerably from the average shooting percentage of
49.30% for the Cavaliers during that game.
Using the NBA universal clock, a coach can automatically bring up the video clips
showing each jump shot attempted by Williams with Price on the floor without needing
to comb through hours of video footage. Those clips show a very successful pick-and-
roll play in which Price draws the Knicks’ defence and finds Williams for an open jump
shot.

How does data mining work?


While large-scale information technology has been evolving separate transaction and
analytical systems, data mining provides the link between the two. Data mining software
analyzes relationships and patterns in stored transaction data based on open-ended
user queries. Several types of analytical software are available: statistical, machine
learning, and neural networks. Generally, any of four types of relationships are sought:
Classes: Stored data is used to locate data in predetermined groups. For example, a
restaurant chain could mine customer purchase data to determine when customers visit
and what they typically order. This information could be used to increase traffic by
having daily specials.
Clusters: Data items are grouped according to logical relationships or consumer
preferences. For example, data can be mined to identify market segments or consumer
affinities; a minimal clustering sketch follows this list.
Associations: Data can be mined to identify associations. The beer-diaper example is
an example of associative mining.
Sequential patterns: Data is mined to anticipate behaviour patterns and trends. For
example, an outdoor equipment retailer could predict the likelihood of a consumer
purchasing a backpack based on their purchase of sleeping bags and hiking shoes.
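
As referenced in the Clusters entry above, here is a minimal clustering sketch using scikit-learn's KMeans on hypothetical customer data (monthly spend and visit frequency); the two discovered groups stand in for market segments.

```python
# A minimal clustering sketch: scikit-learn's KMeans groups hypothetical
# customers by monthly spend and visit frequency into two market segments.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [average monthly spend, visits per month] (hypothetical data).
customers = np.array([
    [20, 1], [25, 2], [22, 1],       # low-spend, infrequent shoppers
    [300, 8], [280, 10], [310, 9],   # high-spend, frequent shoppers
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
print(model.labels_)           # cluster assignment for each customer
print(model.cluster_centers_)  # the profile of each segment
```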

Data mining consists of five major elements


 1. Extract, transform, and load transaction data onto the warehouse system.
 2. Store and manage the data in a multidimensional database system.
 3. Provide data access to business analysts and information technology professionals.
 4. Analyze the data using application software.
 5. Present the data in a useful format, such as a graph or table.
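
As a rough illustration of element 1, the sketch below extracts hypothetical transaction data from a CSV file, applies simple transformations, and loads it into a SQLite table standing in for the warehouse; the file name, column names, and table name are assumptions.

```python
# A minimal sketch of element 1 (extract, transform, load), assuming a
# hypothetical transactions.csv file with product and quantity columns and
# a SQLite database standing in for the warehouse.
import sqlite3
import pandas as pd

# Extract: read the raw transaction data.
raw = pd.read_csv("transactions.csv")

# Transform: drop incomplete rows and normalize types.
raw = raw.dropna(subset=["product", "quantity"])
raw["quantity"] = raw["quantity"].astype(int)

# Load: append the cleaned rows into the warehouse's fact table.
conn = sqlite3.connect("warehouse.db")
raw.to_sql("fact_sales", conn, if_exists="append", index=False)
conn.close()
```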

Different levels of analysis are available:


Artificial neural networks: Non-linear predictive models that learn through training and
resemble biological neural networks in structure.
Genetic algorithms: Optimization techniques that use processes such as genetic
combination, mutation, and natural selection in a design based on the concepts of
natural evolution.
Decision trees: Tree-shaped structures that represent sets of decisions. These
decisions generate rules for the classification of a dataset. Specific decision tree
methods include Classification and Regression Trees (CART) and Chi-Square
Automatic Interaction Detection (CHAID). CART and CHAID are decision tree
techniques used to classify a dataset. They provide a set of rules you can apply to a
new (unclassified) dataset to predict which records will have a given outcome. CART
segments a dataset by creating 2-way splits, while CHAID segments a dataset using
chi-square tests to create multi-way splits. CART typically requires less data preparation
than CHAID. A minimal decision-tree sketch follows this list.
Nearest neighbour method: A technique that classifies each record in a dataset based
on a combination of the classes of the k record(s) most similar to it in a historical
dataset (where k ≥ 1). Sometimes called the k-nearest neighbour technique.
Rule induction: Extracting applicable if-then rules from data based on statistical
significance.
Data visualization: The visual interpretation of complex relationships in
multidimensional data. Graphics tools are used to illustrate data relationships.
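
As referenced in the decision-tree entry above, here is a minimal sketch using scikit-learn's DecisionTreeClassifier, which builds CART-style 2-way splits; the customer features and labels are hypothetical.

```python
# A minimal decision-tree sketch with scikit-learn, which builds CART-style
# 2-way splits. The customer features and labels are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [age, prior purchases]; label: 1 = responded to a promotion.
X = [[25, 0], [32, 1], [47, 5], [51, 6], [62, 2], [23, 0]]
y = [0, 0, 1, 1, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "prior_purchases"]))

# Apply the induced rules to a new (unclassified) record.
print(tree.predict([[40, 4]]))
```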
What technological infrastructure is required?
Today, data mining applications are available on systems of all sizes, from mainframe
and client/server platforms to PCs. System prices range from several thousand dollars
for the smallest applications to $1 million per terabyte for the largest. Enterprise-wide
applications generally range in size from 10 gigabytes to over 11 terabytes. NCR can
deliver applications exceeding 100 terabytes. There are two critical technological
drivers:
Size of the database: the more data being processed and maintained, the more
powerful the system required.
Query complexity: the more complex the queries and the greater the number of queries
being processed, the more powerful the system required.
Relational database storage and management technology is adequate for many data
mining applications of less than 50 gigabytes. However, this infrastructure needs to be
significantly enhanced to support larger applications. Some vendors have added
extensive indexing capabilities to improve query performance. Others use new
hardware architectures such as Massively Parallel Processors (MPP) to achieve order-
of-magnitude improvements in query time. For example, MPP systems from NCR link
hundreds of high-speed Pentium processors to achieve performance levels exceeding
those of the largest supercomputers.
