
Quiz 2 - Statistics Coursera


Lesson 2 • Graded Quiz • 25 min
Due Jan 11, 1:59 AM CST

Congratulations! You passed!

GRADE: 75%
TO PASS: 75% or higher (we keep your highest score)
LATEST SUBMISSION GRADE: 75%
DUE DATE: Jan 11, 1:59 AM CST
ATTEMPTS: 4 every 8 hours
1. Which of the following is one major difference between the frequentist and Bayesian approach to modeling data? 1 / 1 point

Frequentist models are deterministic (don't use probability) while Bayesian models are stochastic (based on probability).

Frequentist models require a guess of parameter values to initialize models while Bayesian models require initial distributions for the parameters.

The frequentist paradigm treats the data as fixed while the Bayesian paradigm considers data to be random.

Frequentists treat the unknown parameters as fixed (constant) while Bayesians treat unknown parameters as random variables.

Correct

The only random variables in frequentist models are the data. The Bayesian paradigm also uses probability to describe one's uncertainty about unknown model parameters.

2. Suppose we have a statistical model with unknown parameter θ, and we assume a normal prior θ ∼ N(μ₀, σ₀²), where μ₀ is the prior mean and σ₀² is the prior variance. What does increasing σ₀² say about our prior beliefs about θ? 0 / 1 point

Increasing the variance of the prior narrows the range of what we think θ might be, indicating less confidence in our prior mean guess μ₀.

Increasing the variance of the prior narrows the range of what we think θ might be, indicating greater confidence in our prior mean guess μ₀.

Increasing the variance of the prior widens the range of what we think θ might be, indicating less confidence in our prior mean guess μ₀.

Increasing the variance of the prior widens the range of what we think θ might be, indicating greater confidence in our prior mean guess μ₀.

Incorrect

It is true that increasing the variance of the prior widens the range for likely θ values, but accepting a wider range of values as plausible does not mean we are more confident in the prior mean.
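The effect described in the feedback can be checked numerically. The sketch below (not part of the quiz; the value of μ₀ and the chosen standard deviations are illustrative) computes the width of the central 95% interval of a N(μ₀, σ₀²) prior for two values of σ₀ and shows that a larger prior variance widens the interval, i.e. expresses less confidence in the prior mean guess μ₀:

```python
# Illustrative sketch: how the prior variance controls the width of the
# plausible range for theta under a normal prior N(mu0, sigma0^2).
from statistics import NormalDist

mu0 = 0.0  # prior mean guess (arbitrary for this illustration)

def central_95_width(sigma0: float) -> float:
    """Width of the central 95% interval of N(mu0, sigma0^2)."""
    prior = NormalDist(mu=mu0, sigma=sigma0)
    return prior.inv_cdf(0.975) - prior.inv_cdf(0.025)

w_narrow = central_95_width(1.0)  # sigma0^2 = 1, width ~3.92
w_wide = central_95_width(3.0)    # sigma0^2 = 9, width ~11.76
print(w_narrow, w_wide)
```

Tripling σ₀ triples the interval width: the larger-variance prior treats a much wider range of θ values as plausible.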

3. In the lesson, we presented Bayes' theorem for the case where parameters are continuous. What is the correct expression for the posterior distribution of θ if it is discrete (takes on only specific values)? 1 / 1 point

p(θ ∣ y) = p(y ∣ θ) ⋅ p(θ) / [∫ p(y ∣ θ) ⋅ p(θ) dθ]

p(θⱼ ∣ y) = p(y ∣ θⱼ) ⋅ p(θⱼ) / [Σⱼ p(y ∣ θⱼ) ⋅ p(θⱼ)]

p(θ) = Σⱼ p(θ ∣ yⱼ) ⋅ p(yⱼ)

p(θ) = ∫ p(θ ∣ y) ⋅ p(y) dy

Correct

This is the discrete version of Bayes' theorem.
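The discrete form marked correct above is easy to compute directly: evaluate the numerator p(y ∣ θⱼ) ⋅ p(θⱼ) for each candidate θⱼ, then normalize by the sum over j. The sketch below uses an invented example (three candidate values of θ, a uniform prior, and a single observation y = 1 with p(y = 1 ∣ θ) = θ), not data from the course:

```python
# Discrete Bayes' theorem: posterior over a finite set of theta values.
thetas = [0.25, 0.50, 0.75]       # candidate values theta_j (illustrative)
prior = [1 / 3, 1 / 3, 1 / 3]     # p(theta_j): uniform prior
likelihood = list(thetas)         # p(y = 1 | theta_j) = theta_j here

# Numerator p(y | theta_j) * p(theta_j), then normalize by the sum over j.
unnorm = [likelihood[j] * prior[j] for j in range(len(thetas))]
posterior = [u / sum(unnorm) for u in unnorm]

print(posterior)  # candidates with higher likelihood get more posterior mass
```

With a uniform prior the posterior is proportional to the likelihood, so θ = 0.75 ends up with the largest posterior probability.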

4. For Questions 4 and 5, refer to the following scenario. 1 / 1 point

In the quiz for Lesson 1, we described Xie's model for predicting demand for bread at his bakery. During the lunch hour on a given day, the number of orders (the response variable) follows a Poisson distribution. All days have the same mean (expected number of orders). Xie is a Bayesian, so he selects a conjugate gamma prior for the mean with shape 3 and rate 1/15. He collects data on Monday through Friday for two weeks.

Which of the following hierarchical models represents this scenario?

yᵢ ∣ λ ∼ Pois(λ), iid for i = 1, …, 10,
λ ∼ Gamma(3, 1/15)

yᵢ ∣ λᵢ ∼ Pois(λᵢ), independently for i = 1, …, 10,
λᵢ ∣ α ∼ Gamma(α, 1/15),
α ∼ Gamma(3.0, 1.0)

yᵢ ∣ λ ∼ Pois(λ), iid for i = 1, …, 10,
λ ∣ μ ∼ Gamma(μ, 1/15),
μ ∼ N(3, 1.0²)

yᵢ ∣ μ ∼ N(μ, 1.0²), iid for i = 1, …, 10,
μ ∼ N(3, 15²)

Correct

The likelihood is Poisson with the same mean for all observations, called λ here. The mean λ has a gamma prior.
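One reason this gamma prior is called conjugate: the posterior is again a gamma distribution, with a closed-form update. With a Gamma(α, β) prior on λ (β a rate) and y₁, …, yₙ iid Pois(λ), the posterior is Gamma(α + Σyᵢ, β + n). The sketch below applies that update with Xie's prior (shape 3, rate 1/15); the ten daily order counts are made up for illustration:

```python
# Conjugate gamma-Poisson update for Xie's model (data below are hypothetical).
alpha0, beta0 = 3.0, 1 / 15  # prior: Gamma(shape=3, rate=1/15)

# Ten lunch hours (Mon-Fri, two weeks) of order counts -- invented numbers.
y = [45, 52, 48, 50, 47, 55, 49, 51, 46, 53]
n = len(y)

# Posterior: Gamma(alpha0 + sum(y), beta0 + n), with beta still a rate.
alpha_post = alpha0 + sum(y)
beta_post = beta0 + n

post_mean = alpha_post / beta_post  # posterior mean of lambda
print(alpha_post, beta_post, post_mean)
```

Note the prior mean is α/β = 3 / (1/15) = 45 orders; after ten days of data near 50, the posterior mean sits close to the sample average, since n = 10 dominates the prior rate of 1/15.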

5. Which of the following graphical depictions represents the model from Xie's scenario? 1 / 1 point

a) [figure not shown]

b) [figure not shown]

c) [figure not shown]

d) [figure not shown]

Correct

The observed data variables each depend on the mean demand.

6. Graphical representations of models generally do not identify the distributions of the variables (nodes), but they do reveal the structure of dependence among the variables. 0 / 1 point

Identify which of the following hierarchical models is depicted in the graphical representation below.

[graphical representation and answer options not shown]