
Cognos Interview Questions Given by SatyaNarayan


1. Data module
2. Data set
3. Difference between a data set and a data module.

Data modules
Data modules are source objects that contain data from data servers, uploaded
files, or other data modules, and are saved in My content or Team content.
They are a feature introduced in Cognos 11 that offers a web-based data
modeling and blending experience. Data can be pulled from a large number of
databases as well as Excel files and Cognos packages. Key features include
automatic relative time (YTD, MTD, etc.), table split/merge, custom grouping,
data cleansing, and multi-fact and even multi-database query aggregation.
Data sets
Data sets contain extracted data from a package or a data module, and are
saved in My content or Team content.
Simply put, a data set is a data source type in Cognos Analytics that contains
data extracted from one or more sources and stored within the Cognos system
itself as an Apache Parquet file. The Parquet file is then loaded into
application server memory at run time on an as-needed basis. This (usually)
greatly enhances interactive performance for end users while reducing load on
source databases. When combined with data modules, data sets offer strong
out-of-the-box capabilities such as automatic relative time, easy data prep,
and custom table creation.

4. In a data module I have two CSV files that I want to join with a package;
how do I achieve that?

5. Data server connection.


6. How to create a drill path for unrelated dimensions in Cognos Analytics?
7. How to refresh a dashboard automatically.
8. Why is my report showing old data even though the data has changed in the DB?
9. How to create a drill-through in a dashboard.
10. Want to send the report output by email, displaying the report run date in
the subject and the body.
11. During the cube build process I am getting an error that the cube got
locked; what could be the cause?
12. When we add a new table or new object to the cube, the default date is
missing; what could be the reason?

13. Difference between MDL, PYJ, and MDC files.

Cognos model file extensions - pyg, pyh, pyi, mdl, pyj
The PYH and PYI models are compiled to a binary format and are specific to a
Cognos Series 7 version. IBM Cognos 8 uses the PYJ file format. Models stored
in the binary format are generally quicker to open and refresh.
The MDL (Model Definition Language) format is a model saved in an ASCII file
(its structure can be understood fairly easily). It is compatible between
different versions of Transformer and can be edited in any text editor.
Working with MDL files takes more time because, whenever one is opened,
Transformer compiles it in the background anyway.

The biggest difference between the PYI/PYJ and MDL files is that the binary
PYI file stores the passwords to the data sources (usually database
connections), whereas the MDL file contains only a user ID, so the password
needs to be provided every time the cube is refreshed.

The difference between these models is that:

1. *.pyi is a binary model and is compatible only with the version of the
software where you created it.
2. *.mdl is a text model and is also compatible with previous versions
(for example, Cognos Series 6 and 7).

Also, keep in mind that a PYI model may become corrupt at some point. If you
only have the PYI, you will lose the complete model. It is best to save your
PYI regularly as an MDL to keep as a backup. An MDL file will not become
corrupt as quickly as a PYI.

14. What is the maximum size of a cube, and if it exceeds the limit, how do
you partition it?
15. What are the possible sources of a Transformer model?
16. How many levels does Transformer support? What is an alternate hierarchy?
How do you define drill-through in a cube?
17. How to apply object-level security in a report, e.g. a user from a
particular region can see only a list and another user can see only a crosstab
when they log in.
Ans: Put the user identity function inside a case expression. When importing
the table, enable the object-level security option and write a case expression
there: if the user identity is USA, show 1; if it is Australia, show 2, in a
new column. Publish the package. Then go to Report Studio and use a render
variable: if the value is 1, render the list; if it is 2, render the crosstab.

18. Difference between DQM and CQM


"DQM" stands for "Dynamic Query Mode", and "CQE" stands for "Cognos Query Engine".

CQE (sometimes called "CQM" or "Compatible Query Mode") is the older query
engine that has its roots in Cognos Series 7 products and was the only query engine
in Cognos ReportNet and Cognos 8. It is 32-bit and generally relies on having a full
(thick) database client installed to be able to run queries.

DQM was introduced a few years ago in Cognos 10. It is a 64-bit query engine and is
generally faster and more efficient. DQM also has more accessible methods of
analyzing your queries' structures and performance using a tool called "Dynamic
Query Analyzer". DQM also uses JDBC drivers to connect to databases.

Some points of note about CQM and DQM:

- The 64-bit build of Cognos can run in either 32-bit or 64-bit mode.

- 32-bit builds of Cognos (or a 64-bit server running in 32-bit mode) can process both
CQM and DQM queries, using a 32-bit variant of DQM that may be a bit slower than the
64-bit version of DQM.

- I recommend updating all models that use CQM by changing them to use DQM, as
CQM is likely going to be deprecated by IBM Cognos in the next couple of years. Only
use CQM if you are using a datasource that cannot be queried in DQM at this time.

19. What is the difference between a 'macro' and a 'prompt'?


20. What are Parameter Maps?
21. Where exactly determinants are used in cognos framework manager?
22. Stitched query
23. How to cast dates to integer or number in report studio? For example, 2014-03-09 would be
changed to 41707. Any solutions?
Cast(‘date’, int)
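The target value 41707 is the date's serial day number counted from the epoch 1899-12-30 (the convention Excel uses), so the conversion is just a day difference. A minimal Python sketch of the arithmetic; the exact Cognos expression to use (e.g. a days-between function against that epoch) depends on your data source:

```python
from datetime import date

# Excel-style serial numbers count days from the epoch 1899-12-30,
# so converting a date is a simple day difference.
EPOCH = date(1899, 12, 30)

def to_serial(d: date) -> int:
    """Return the Excel-style serial day number for a date."""
    return (d - EPOCH).days

print(to_serial(date(2014, 3, 9)))  # -> 41707
```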
24. How to replace the brackets '[ ]' with a blank space ' ', then do a trim,
and then do a substring (data item, etc.).

User identity function in a case expression


Model
Dashboard: multiple pages, multiple tabs
Complex calculations
YTD
MTD
Week to date
FTD
QTD
134 drill-throughs on one metric (calculation)
Joins taken across 15-16 queries
No data warehousing concept
Select picks the data up from the cache

1. What is a stitch query?


2. What is a role-playing dimension?
A table with multiple valid relationships between itself and another table is known
as a role-playing dimension. This is most commonly seen in dimensions such as
Time and Customer.
For example, the Sales fact has multiple relationships to the Time query subject
on the keys Order Day, Ship Day, and Close Day.
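The pattern above can be illustrated with plain SQL: the fact table joins the same time dimension multiple times under different aliases, one per role. A minimal sketch using SQLite; the table and column names are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One time dimension that plays three roles against the sales fact.
cur.executescript("""
CREATE TABLE dim_time (day_key INTEGER PRIMARY KEY, day_date TEXT);
CREATE TABLE fact_sales (
    order_day_key INTEGER, ship_day_key INTEGER, close_day_key INTEGER,
    amount REAL
);
INSERT INTO dim_time VALUES (1, '2020-01-01'), (2, '2020-01-03'), (3, '2020-01-07');
INSERT INTO fact_sales VALUES (1, 2, 3, 100.0);
""")

# Each alias of dim_time is one role: order date, ship date, close date.
row = cur.execute("""
SELECT o.day_date, s.day_date, c.day_date, f.amount
FROM fact_sales f
JOIN dim_time o ON f.order_day_key = o.day_key
JOIN dim_time s ON f.ship_day_key  = s.day_key
JOIN dim_time c ON f.close_day_key = c.day_key
""").fetchone()

print(row)  # -> ('2020-01-01', '2020-01-03', '2020-01-07', 100.0)
```

In Framework Manager the same effect is achieved with aliases or shortcuts of the query subject, one per relationship.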
3. How to improve the performance of a Report Studio report if the data volume is very large?
I would create a database view with filter restrictions on the table.
4. What is determinant?
5. What is the most complex/challenging report you have worked on in your career?
6. How do you create the model in FM, and why do we require 3 layers in FM?
7. How do you create a DMR model in FM?
8. Write the SQL query for the last-but-one record out of millions of records.
9. What errors can you get while generating cubes?
10. How do you create partitions in cubes, and what is the maximum cube limit?
11. How do you get the Q3 June data for the years 2018 and 2019 using MDX
functions, if this year (2020) is Q3 June?
12. What is junk dimension?

Junk Dimension

In data warehouse design, we frequently run into a situation where there are
yes/no indicator fields in the source system. Through business analysis, we
know it is necessary to keep such information in the fact table. However, if
we keep all those indicator fields in the fact table, not only do we need to
build many small dimension tables, but the amount of information stored in the
fact table also increases tremendously, leading to possible performance and
management issues.

A junk dimension is the way to solve this problem. In a junk dimension, we
combine these indicator fields into a single dimension. This way, we only
need to build a single dimension table, and the number of fields in the fact
table, as well as the size of the fact table, can be decreased. The content of
the junk dimension table is the combination of all possible values of the
individual indicator fields.

Let's look at an example. Assuming that we have the following fact table:
In this example, TXN_CODE, COUPON_IND, and PREPAY_IND are all
indicator fields. In this existing format, each one of them is a dimension. Using
the junk dimension principle, we can combine them into a single junk
dimension, resulting in the following fact table:

Note that now the number of dimensions in the fact table went from 7 to 5.

The content of the junk dimension table would look like the following:
In this case, we have 3 possible values for the TXN_CODE field, 2 possible
values for the COUPON_IND field, and 2 possible values for the
PREPAY_IND field. This results in a total of 3 x 2 x 2 = 12 rows for the junk
dimension table.

By using a junk dimension to replace the 3 indicator fields, we have
decreased the number of dimensions by 2 and also decreased the number of
fields in the fact table by 2. This results in a data warehousing environment
that offers better performance as well as being easier to manage.
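The cross-product construction described above is mechanical: the junk dimension's rows are all combinations of the indicator values, each under a surrogate key. A small Python sketch using the example's cardinalities (the specific codes are invented for illustration):

```python
from itertools import product

# Possible values of each indicator field (codes are illustrative).
txn_codes  = ["PUR", "RET", "ADJ"]   # 3 values
coupon_ind = ["Y", "N"]              # 2 values
prepay_ind = ["Y", "N"]              # 2 values

# The junk dimension is the cartesian product of the indicators,
# with a surrogate key assigned to each combination.
junk_dim = [
    {"junk_key": k, "txn_code": t, "coupon_ind": c, "prepay_ind": p}
    for k, (t, c, p) in enumerate(product(txn_codes, coupon_ind, prepay_ind),
                                  start=1)
]

print(len(junk_dim))  # -> 3 * 2 * 2 = 12 rows
```

The fact table then carries only `junk_key` in place of the three indicator columns.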

Slowly Changing Dimensions

The "Slowly Changing Dimension" problem is a common one particular to
data warehousing. In a nutshell, this applies to cases where the attribute for
a record varies over time. We give an example below:

Christina is a customer with ABC Inc. She first lived in Chicago, Illinois. So,
the original entry in the customer lookup table has the following record:

Customer Key   Name        State
1001           Christina   Illinois

At a later date, in January 2003, she moved to Los Angeles, California. How
should ABC Inc. now modify its customer table to reflect this change? This is
the "Slowly Changing Dimension" problem.
There are in general three ways to solve this type of problem, and they are
categorized as follows:

Type 1: The new record replaces the original record. No trace of the old
record exists.

Type 2: A new record is added into the customer dimension table. Therefore,
the customer is treated essentially as two people.

Type 3: The original record is modified to reflect the change.

Type 1 Slowly Changing Dimension

In Type 1 Slowly Changing Dimension, the new information simply overwrites
the original information. In other words, no history is kept.

In our example, recall we originally have the following table:

Customer Key   Name        State
1001           Christina   Illinois

After Christina moved from Illinois to California, the new information replaces
the original record, and we have the following table:

Customer Key   Name        State
1001           Christina   California

Advantages:

- This is the easiest way to handle the Slowly Changing Dimension problem,
since there is no need to keep track of the old information.

Disadvantages:

- All history is lost. By applying this methodology, it is not possible to
trace back in history. For example, in this case, the company would not be
able to know that Christina lived in Illinois before.

Usage:

About 50% of the time.


When to use Type 1:

Type 1 slowly changing dimension should be used when it is not necessary
for the data warehouse to keep track of historical changes.
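Since a Type 1 change is a plain overwrite keyed on the surrogate key, it can be sketched in a few lines (the in-memory dict stands in for the dimension table):

```python
# Customer dimension keyed by surrogate key; Type 1 simply overwrites.
customers = {1001: {"name": "Christina", "state": "Illinois"}}

def scd_type1_update(dim: dict, key: int, **changes) -> None:
    """Overwrite attributes in place; no history is kept."""
    dim[key].update(changes)

scd_type1_update(customers, 1001, state="California")
print(customers[1001]["state"])  # -> California
```

In SQL terms this is a simple `UPDATE ... WHERE customer_key = 1001`; the Illinois value is gone.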

Type 2 Slowly Changing Dimension

In Type 2 Slowly Changing Dimension, a new record is added to the table to
represent the new information. Therefore, both the original and the new record
will be present. The new record gets its own primary key.

In our example, recall we originally have the following table:

Customer Key   Name        State
1001           Christina   Illinois

After Christina moved from Illinois to California, we add the new information
as a new row into the table:

Customer Key   Name        State
1001           Christina   Illinois
1005           Christina   California

Advantages:

- This allows us to accurately keep all historical information.

Disadvantages:

- This will cause the size of the table to grow fast. In cases where the number
of rows for the table is very high to start with, storage and performance can
become a concern.

- This necessarily complicates the ETL process.

Usage:

About 50% of the time.

When to use Type 2:

Type 2 slowly changing dimension should be used when it is necessary for
the data warehouse to track historical changes.
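A Type 2 change inserts a new row under a fresh surrogate key and leaves the old row untouched, as a quick sketch (surrogate-key allocation is simplified to max+1 for illustration; real warehouses use a sequence or identity column):

```python
# Each row carries its own surrogate key; old rows are never modified.
customers = [
    {"key": 1001, "name": "Christina", "state": "Illinois"},
]

def scd_type2_insert(dim: list, name: str, **attrs) -> int:
    """Add a new version of the member under a new surrogate key."""
    new_key = max(row["key"] for row in dim) + 1  # simplified key allocation
    dim.append({"key": new_key, "name": name, **attrs})
    return new_key

scd_type2_insert(customers, "Christina", state="California")
print(len(customers))  # -> 2: both the Illinois and California rows remain
```

Facts loaded while Christina lived in Illinois keep pointing at the old key, which is exactly how the history is preserved.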
Type 3 Slowly Changing Dimension

In Type 3 Slowly Changing Dimension, there will be two columns to indicate
the particular attribute of interest, one indicating the original value, and
one indicating the current value. There will also be a column that indicates
when the current value becomes active.

In our example, recall we originally have the following table:

Customer Key   Name        State
1001           Christina   Illinois

To accommodate Type 3 Slowly Changing Dimension, we will now have the
following columns:

- Customer Key
- Name
- Original State
- Current State
- Effective Date

After Christina moved from Illinois to California, the original information gets
updated, and we have the following table (assuming the effective date of
change is January 15, 2003):

Customer Key   Name        Original State   Current State   Effective Date
1001           Christina   Illinois         California      15-JAN-2003

Advantages:

- This does not increase the size of the table, since the existing record is
updated in place.

- This allows us to keep some part of history.

Disadvantages:

- Type 3 will not be able to keep all history where an attribute is changed more
than once. For example, if Christina later moves to Texas on December 15,
2003, the California information will be lost.

Usage:
Type 3 is rarely used in actual practice.

When to use Type 3:

Type 3 slowly changing dimension should only be used when it is necessary
for the data warehouse to track historical changes, and when such changes
will only occur a finite number of times.
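A Type 3 change updates the same row, keeping the prior value in a dedicated column, per the column list above:

```python
from datetime import date

# One row per customer; history is limited to the original_state column.
customers = {
    1001: {"name": "Christina", "original_state": "Illinois",
           "current_state": "Illinois", "effective_date": None},
}

def scd_type3_update(dim: dict, key: int, new_state: str,
                     effective: date) -> None:
    """Only current_state and effective_date change; original_state stays
    fixed, so a second change would lose the intermediate value."""
    dim[key]["current_state"] = new_state
    dim[key]["effective_date"] = effective

scd_type3_update(customers, 1001, "California", date(2003, 1, 15))
print(customers[1001]["original_state"], customers[1001]["current_state"])
# -> Illinois California
```

The docstring spells out the disadvantage noted above: a later move to Texas would overwrite California, and only the Illinois/Texas pair would survive.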

13. We have two options, an FM model and a Transformer model; which option
would you suggest to the user?
14. What is the purpose of a shortcut and an alias in an FM model?

1. What is the difference between Cognos 10 and Cognos Analytics?


There are a lot of brand new, never-before-seen features in v11, such as
data modules for self-service modeling and a new dashboarding capability.

2. How do you create a PDF setting if the columns are breaking onto the next page?
3. What are data modules and data sets?
4. Is there a way to do external file mapping in 11? If yes, how?
5. Explain the different types of charts you have worked on.

6. Can Cognos Analytics 11 use data from two packages in the same report?
Yes

7. How do you schedule a report in Cognos Analytics?

8. What is the difference between a notification and a subscription?
9. As a Cognos developer, what roles and responsibilities did you perform in a previous project?
10. Have you worked on Cognos Analytics 11? Share your experience.
11. In Cognos FM, how do we resolve stitch query issues?
12. What are determinants, and what is their usage?
13. Conditional blocks, conditional rendering, and what is the difference?
14. Cognos FM - types of Cognos security and their implementation?
15. What challenges have you faced while working with various stakeholders and multi-site teams?
16. How do you migrate reports from Dev to QA and UAT? Tell me at least 2 ways.
17. Difference between Local Cache processing and Database Only. When do we use these options?
18. What are governor settings? If we set "do not use cross join" at the FM
level, can we override it at the report level? If yes, how?
19. How do you tune the performance of a report at the report level / FM level / database level?
20. What is the most challenging report you worked on, and how did you find the solution?

Data module: where we define the join conditions.

Data set: where we bring in imported files. The advantage is that the data is
picked up from cache memory, so performance is faster. After bringing the
files in via a data set, we define the joins.

There is an option called "update object"; that is what updates the data in FM.
