
BCOM304

Management Information System


Unit-4

Multimedia Approach to Information Processing

The multimedia approach enables the student to represent information using several different media. It can stimulate the interest of the student and leave vivid impressions. Multimedia can cater to diverse learning styles: some students learn by interpreting text, while others require more graphical representations.

It can develop a positive attitude among students towards the teaching-learning process. The multimedia approach allows self-pacing, and the technique of simulation can be applied effectively through it.

It helps develop higher-order thinking skills and gives the student the flexibility of 'anywhere, anytime' learning.

It also helps in developing group and interpersonal skills. Effective remediation programmes can be implemented through the multimedia approach, which can also bridge language barriers, since sound is not the only means of communication.

The word multimedia is made up of two Latin roots: "multi", which means many, and "media", the substance through which something is transmitted. In this case, "multi" refers to the multiple data types such as voice, video, image, animation and text, and "media" is the computer environment used to transmit information composed of these multiple data types. Multimedia data imposes new requirements on computer networks due to the large volumes involved. Beyond the huge volumes, the way we look at multimedia information is also different.

A multimedia information system aims at integrating the various tools needed for the acquisition, management, processing and dissemination of multimedia information related to its environment.

It provides systems designers with a 'generic information system' in the form of a toolbox that can be used to implement their own information systems.

The variety of media types is an important feature of modern information systems. In order to
deal with the variety, integration is a critical concern. Therefore,

A Multimedia Information System (MIS) is then one which allows end-users to share, communicate and process a variety of forms of information in an integrated manner.

MIS attempts to solve the problems of information management by integrating the various forms of media into the computer/communications infrastructure.
Benefits of achieving this level of integration:

 The computer can help in the task of managing and processing the information;
 Information users only have to deal with one integrated environment rather than a
number of separate information systems.

Introduction

The concept of Data Warehousing and Data Mining is becoming increasingly popular as a business information management tool, where it is expected to disclose knowledge structures that can guide decisions in conditions of limited certainty. A data warehouse supports [1] business analysis and decision-making by creating an enterprise-wide integrated database of summarized, historical information. It integrates data from multiple, incompatible sources. By transforming data into meaningful information, a data warehouse allows the manager to perform more substantive, accurate and consistent analysis.

The data warehouse is not a normal database, as we usually understand the term "database". The main difference is that traditional databases hold operational, most often transactional, data, and that many decision-support applications put too much strain on these databases, interfering with day-to-day operation (the operational database). A data warehouse is of course a database, but one that contains summarized information and is maintained separately from an organization's operational databases. A warehouse holds read-only data. Data mining, also called Knowledge Discovery in Databases (KDD), is the extraction of hidden predictive information from large databases; it is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Data mining tools predict future trends and behaviors, allowing businesses to make proactive, knowledge-driven decisions. They can answer business questions that traditionally were too time-consuming to resolve, scouring databases for hidden patterns and finding predictive information that experts may miss because it lies outside their expectations.

DATA WAREHOUSING
A data warehouse is a collection of integrated databases designed to support a DSS. It
is a collection of integrated, subject-oriented databases designed to support the DSS
function, where each unit of data is non-volatile and relevant to some moment in time.
Numerous roles and responsibilities will need to be filled in order to make data warehouse efforts successful and generate a return on investment. For the technical [6] personnel (application programmer, system administrator, database administrator, data administrator), it is recommended that the following roles be performed full-time by dedicated personnel as much as possible, and that each responsible person receive specific data warehouse training. The data warehouse team needs to lead the organization in assuming these roles, thereby building a partnership with the business. Management also needs to turn these directives into actionable plans and make sure the staff executes them. Following are the team members and their responsibilities for making the data warehouse effective and helpful to the user and the organization [2-4]:

Member and Role

Manager/Director: The data warehouse manager or director ensures support for the data warehouse program at the highest levels of the organization and understands the high-level requirements of the business. The manager staffs the team and ensures adherence to a set of guiding principles for data warehousing.

Project Manager: The project manager delivers commitments on time, maintains a highly detailed plan and tracks progress against it, and matches tasks to team members' skills when issuing task lists to them.

Chief Architect: The Manager/Director of the data warehouse will need to rely on a Chief Architect, as one of his/her direct reports, to work on complex issues of architecture, modeling and tools. The Chief Architect has significant interface with the internal clients, increases their confidence in the data warehouse organization, and should have a thorough knowledge of the business.

End User: The data warehouse is built to meet end users' requirements. Data warehousing is used to answer end users' queries and to generate reports. End users receive an ID and password on the data warehouse system and provide feedback to the data warehouse team on performance, functionality, data quality, metadata quality and completeness.

Database Administrator: A key question for the data warehouse group is the placement of the database administration function and the division of roles and responsibilities between the support group and the user community. The database administrator has many responsibilities, including database maintenance, backup and recovery, data replication, performance monitoring and summary table creation.

Application Programmer/Specialist: The Data Warehouse Application Programmer is responsible for applying transformation rules as necessary to keep the data clean and consistent. Responsibilities include sourcing the data from operational systems and applying the business transformation rules.

System Administrator: The system administrator is responsible for installing and maintaining the database management system, monitoring performance, and architecting the data warehouse. The Data Warehouse System Administrator is also responsible for the performance of data transfers, whether in response to a query or as part of a data replication or synchronization effort.

A data warehouse has the following characteristics:

Subject Oriented: Data are organized based on how the users refer to them.
Integrated: All inconsistencies regarding naming conventions and value representations are removed.
Nonvolatile: Data are stored in read-only format and do not change over time.
Time Variant: Data are not current but normally time series.
Summarized: Operational data are mapped into a decision-usable format.
Large Volume: Time series data sets are normally quite large.
Not Normalized: DW data can be, and often are, redundant.
Metadata: Data about data are stored.
Data Sources: Data come from internal and external unintegrated operational systems.
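The contrast between operational, transaction-level data and the summarized, time-variant format listed above can be sketched in a few lines of Python. All records and names here are illustrative, not drawn from any real system:

```python
# A minimal sketch: operational (transactional) rows are mapped into a
# summarized time series keyed by (month, product) - the decision-usable,
# read-only shape a warehouse typically holds. Data are hypothetical.
from collections import defaultdict

# Operational records: one row per sale
transactions = [
    {"date": "2023-01-15", "product": "A", "amount": 120.0},
    {"date": "2023-01-28", "product": "A", "amount": 80.0},
    {"date": "2023-02-03", "product": "B", "amount": 200.0},
]

def summarize_by_month(rows):
    """Map operational rows into a summarized time series."""
    totals = defaultdict(float)
    for row in rows:
        month = row["date"][:7]  # e.g. "2023-01"
        totals[(month, row["product"])] += row["amount"]
    return dict(totals)

summary = summarize_by_month(transactions)
# {('2023-01', 'A'): 200.0, ('2023-02', 'B'): 200.0}
```

The summary is both time variant (keyed by month, not current state) and summarized (individual transactions are no longer visible), matching the characteristics above.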

ARCHITECTURE AND WORKING OF DATA WAREHOUSING

A data warehouse is a database used for reporting and analysis. It is a place where data is stored by integrating different databases, and it can hold both current and historical data. With the help of the historical and current data, new predictions can be drawn. The following diagram [2-3] shows the different components of a data warehouse.

Data warehouse is a database used for reporting and data analysis. It is a central source
of data which is created by integrating data from one or more different sources. The data
stored in the warehouse are received from the operational systems.
The staging layer stores raw data collected from each of the different source data
systems. The integration layer integrates the disparate data sets by transforming the
data from the staging layer often storing this transformed data in an operational data
store database.
The overall flow is: Raw Data → Integrated Data Source → Data Warehousing → Report → Decision Making.
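The staging and integration layers described above can be sketched roughly as follows. The two source systems and their field names are hypothetical, chosen only to show how naming and representation inconsistencies are removed during integration:

```python
# A hypothetical sketch of the staging and integration layers: raw rows
# from two incompatible sources are staged as-is, then transformed into
# one consistent format. All system and field names are illustrative.

# Staging layer: raw data from two source systems with differing conventions
source_crm   = [{"cust_name": "ACME", "rev": "1000"}]     # revenue as text
source_sales = [{"customer": "Beta Ltd", "revenue": 2500}]

def integrate(crm_rows, sales_rows):
    """Integration layer: unify field names and value representations."""
    unified = []
    for row in crm_rows:
        unified.append({"customer": row["cust_name"],
                        "revenue": float(row["rev"])})
    for row in sales_rows:
        unified.append({"customer": row["customer"],
                        "revenue": float(row["revenue"])})
    return unified

warehouse = integrate(source_crm, source_sales)
```

After integration, every row shares one schema, so reports and queries no longer need to know which operational system a record came from.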
A data mart is a small data warehouse concentrated on a specific area of interest.
Data warehouses can be subdivided into data marts for improved performance in use.
A company can have one or more data marts as building blocks towards a larger, more complex enterprise data warehouse.
A data warehouse saves business users' time and helps generate reports quickly. Business users can access these reports in one place and take decisions quickly, rather than wasting their precious time collecting data from multiple sources. With the help of data warehousing, the business can query the data itself, saving money and time.

DATA MINING
Data mining applications are available on systems of all sizes, from mainframe and client/server to PC platforms. Database mining, or data mining, is a process that aims to use existing data to discover new facts and to uncover new relationships.

Data mining includes several steps: problem analysis, data extraction, data cleansing, rule development, output analysis and review. Data mining sources are typically flat files extracted from on-line sets of files, from data warehouses or from other data sources; data may, however, be derived from almost any source.

Whatever the source of the data, data mining will often be an iterative process involving these steps. The steps [3-8] of data mining are:

1. Identification of the Objective -- Before you begin, be clear on what you hope to accomplish with your analysis. Know in advance the business goal of the data mining, and establish whether or not the goal is measurable.

2. Choice of the Data -- Once you have defined your goal, your next step is to select the data to meet this goal. This may be a subset of your data warehouse or a data mart that contains specific product information, or it may be your customer information file. Narrow the scope of the data to be mined as much as possible. Key issues include: 1. How current and relevant are the data to the business goal? 2. Are the data stable—will the mined attributes be the same after the analysis?

3. Compilation of the Data -- Once you've assembled the data, you must decide which attributes to convert into usable formats. Consider the input of domain experts/creators and users of the data. Establish strategies for handling missing data, extraneous noise and outliers. Decide on a log or square transformation, if necessary, and determine the distribution frequencies of the data.

4. Evaluation of the Data -- Evaluate the structure of your data. What is the nature and structure of the database? What is the overall condition and distribution of the dataset?

5. Choice of Appropriate Tools -- Two important factors in the selection of the appropriate data-mining tool are the business objectives and the data structure. Both should guide you to the same tool. No single tool is preferred for answering every query.

6. Preparation of the intended Solution -- Find answers to questions like: What are the available format options? What is the goal of the solution? What do the end-users need: graphs, reports, code?

7. Preparation of the desired Model -- Now the data mining process begins. The user splits the data into sets, then constructs and evaluates the model. The generation of classification rules, decision trees, clustering sub-groups, scores, code, weights and evaluation data/error rates takes place at this stage.

8. Checking and Validation of the Findings -- Share and discuss the results of the analysis with the business client or domain expert. Ensure that the findings are correct and appropriate to the business objectives, and answer queries such as: do the findings make sense?

9. Reporting of the Findings -- Prepare a final report for the business unit or client. The report should document the entire data mining process, including data preparation, tools used, test results, source code and rules. This report helps in decision making and plays an important role in the growth of the organization.

10. Combining Components to Integrate the Solution -- Share the findings with all interested end-users. You might wind up incorporating the results of the analysis into the company's business procedures. Although data mining tools automate database analysis, they can lead to faulty findings and erroneous conclusions if you're not careful.
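As a rough illustration of several of the steps above (choice of data, handling missing values, building a model, and computing an error rate) compressed onto toy data, consider this sketch. The records and the single-threshold "model" are deliberately simplistic stand-ins for what a real mining tool would do:

```python
# An illustrative sketch of a tiny mining pipeline on hypothetical data:
# select records, cleanse missing values, fit a one-threshold rule, and
# report its error rate. Not a real mining tool - just the idea.
records = [
    {"income": 50, "bought": True},
    {"income": 20, "bought": False},
    {"income": 80, "bought": True},
    {"income": None, "bought": False},  # missing value
    {"income": 30, "bought": False},
]

# Compilation step: the chosen strategy for missing data is to drop rows
clean = [r for r in records if r["income"] is not None]

def best_threshold(rows):
    """Model step: pick the income cutoff with the lowest error rate."""
    best_t, best_err = None, len(rows) + 1
    for t in sorted({r["income"] for r in rows}):
        errors = sum((r["income"] >= t) != r["bought"] for r in rows)
        if errors < best_err:
            best_t, best_err = t, errors
    return best_t, best_err / len(rows)

threshold, error_rate = best_threshold(clean)
```

The validation step would then ask whether a rule like "customers with income above the threshold tend to buy" makes sense to the domain expert before it is reported or integrated into business procedures.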

A centralized database is stored at a single location, such as a mainframe computer. It is maintained and modified from that location only, and is usually accessed over a network connection such as a LAN or WAN. The centralized database is used by organisations such as colleges, companies, banks etc.

As can be seen from the above diagram, all the information for the organisation is stored in a single database. This database is known as the centralized database.

Advantages
Some advantages of Centralized Database Management System are −

 The data integrity is maximised as the whole database is stored at a single physical
location. This means that it is easier to coordinate the data and it is as accurate and
consistent as possible.
 The data redundancy is minimal in the centralised database. All the data is stored
together and not scattered across different locations. So, it is easier to make sure there
is no redundant data available.
 Since all the data is in one place, there can be stronger security measures around it.
So, the centralised database is much more secure.
 Data is easily portable because it is stored at the same place.
 The centralized database is cheaper than other types of databases as it requires less
power and maintenance.
 All the information in the centralized database can be easily accessed from the same
location and at the same time.
Disadvantages
Some disadvantages of Centralized Database Management System are −

 Since all the data is at one location, it takes more time to search and access it. If the
network is slow, this process takes even more time.
 There is a lot of data access traffic for the centralized database. This may create a
bottleneck situation.
 Since all the data is at the same location, if multiple users try to access it
simultaneously it creates a problem. This may reduce the efficiency of the system.
 If there are no database recovery measures in place and a system failure occurs, then
all the data in the database will be destroyed.
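The single-location idea can be sketched with Python's built-in sqlite3 module: every reader and writer goes through one database, so there is exactly one copy of the data to keep consistent. The table and values are illustrative:

```python
# A minimal sketch of a centralized store using the standard sqlite3
# module. In practice the connect() argument would be one shared file
# path (the single physical location); ":memory:" keeps this runnable.
import sqlite3

conn = sqlite3.connect(":memory:")  # the one central store
conn.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100.0)")
conn.commit()

# Any client querying the central database sees the same single copy,
# which is why integrity is easy but the store can become a bottleneck.
balance = conn.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'"
).fetchone()[0]
conn.close()
```

The same design property drives both columns above: one copy means maximal integrity and minimal redundancy, but also a single point of contention and failure.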

Distributed Processing
Distributed data processing is a computer-networking method in which multiple computers across
different locations share computer-processing capability. This is in contrast to a single, centralized
server managing and providing processing capability to all connected systems. Computers that
comprise the distributed data-processing network are located at different locations but
interconnected by means of wireless or satellite links.

Lower Cost
Larger organizations invest in expensive mainframes and supercomputers to function as centralized servers. Each mainframe machine, for example, costs several hundred thousand dollars, versus several thousand dollars for a few minicomputers, according to the University of New Mexico. Distributed data processing considerably lowers the cost of data sharing and networking across an organization by using several minicomputers that cost significantly less than mainframe machines.

Reliable
Hardware glitches and software anomalies can cause single-server processing to
malfunction and fail, resulting in a complete system breakdown. Distributed data
processing is more reliable, since multiple control centers are spread across different
machines. A glitch in any one machine does not impact the network, since another machine
takes over its processing capability. Faulty machines are quickly isolated and repaired. This
makes distributed data processing more reliable than single-server processing systems.

Improved Performance and Reduced Processing Time


Single computers are limited in their performance and efficiency. An easy way to increase performance is to add another computer to the network; adding yet another computer will further augment performance, and so on. Distributed data processing works on this principle and holds that a job gets done faster if multiple machines handle it in parallel. Complicated statistical problems, for example, are broken into modules and allocated to different machines where they are processed simultaneously. This significantly reduces processing time and improves performance.
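The principle of splitting a job into modules processed in parallel can be sketched with Python's standard library. Here local threads stand in for the separate machines of a real distributed network, and the "complicated statistical problem" is reduced to a simple sum of squares:

```python
# An illustrative sketch: a large job is split into modules, each handled
# by a separate worker, and the partial results are combined. Local
# threads stand in for the distributed machines of a real network.
from concurrent.futures import ThreadPoolExecutor

def process_module(chunk):
    """One 'machine' processes its allocated slice of the problem."""
    return sum(x * x for x in chunk)

data = list(range(1, 101))
# Break the problem into 4 modules of 25 items each
chunks = [data[i:i + 25] for i in range(0, len(data), 25)]

with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(process_module, chunks))

total = sum(partials)  # same answer as one machine, work done in parallel
```

If one worker failed, only its chunk would need to be reassigned, which mirrors the reliability argument above: a faulty node can be isolated without losing the whole computation.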

Flexible
Individual computers that comprise a distributed network are present at different geographical locations. For example, an organizational distributed network comprising three computers can have each machine in a different branch. The three machines are interconnected via the Internet and are able to process data in parallel, even while at different locations. This makes distributed data-processing networks more flexible. The system is also flexible in terms of increasing or decreasing processing power: adding more nodes or computers to the network increases processing power and overall system capability, while removing computers from the network decreases processing power.
