Anonymous

# Statistics: Type I and Type II errors?

You want to see if a redesign of the cover of a mail-order catalog will increase sales. A very large number of customers will receive the original catalog, and a random sample of customers will receive the one with the new cover. For planning purposes, you are willing to assume that the sales from the new catalog will be approximately normal with σ = 60 dollars and that the mean for the original catalog will be μ = 40 dollars. You decide to use a sample size of n = 1000. You wish to test

H0:μ=40

Ha:μ>40

You decide to reject H0 if VARx>43.12 and to accept H0 otherwise.

(a) Find the probability of a Type I error, that is, the probability that your test rejects H0 when in fact μ=40 dollars.

(b) Find the probability of a Type II error when μ=45 dollars. This is the probability that your test accepts H0 when in fact μ=45.

(c) Find the probability of a Type II error when μ=50.

Update:

I think VARx is the sample mean...

Update 2:

(d) The distribution of sales is not normal, because many customers buy nothing. Why is it nonetheless reasonable in this circumstance to assume that the mean will be approximately normal?


First off, VARx > 43.12 reads as "reject if the sample variance is greater than 43.12," which would make no sense here, since the test is about the mean. If VARx actually denotes the sample mean, we should use a less confusing notation, such as Xbar.

(a)

Let α be the significance level of the test

consider the following table

|             | Reject H0    | Fail to Reject H0 |
|-------------|--------------|-------------------|
| H0 is true  | Type I error | ☺                 |
| H0 is false | ☺            | Type II error     |

So a Type I error is rejecting H0 when H0 is true, like sending an innocent person to prison.

A Type II error is letting a guilty person go free after the trial.

P(Type I Error) ≤ α

P(Type II Error) = β

We generally don't work with Type II errors and instead talk about Power

Power = 1 - P(Type II Error) = 1 - β

In developing tests we try to maximize the power and minimize α.

In this case the

P(Type I Error)

= P( Z > (43.12 - 40) / (60 / sqrt(1000)) )

= P( Z > 1.644384)

= 0.05004841
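As a check, the calculation above can be reproduced numerically. This is a sketch assuming SciPy is available; the variable names are my own:

```python
from math import sqrt

from scipy.stats import norm

# Setup from the problem: reject H0 when the sample mean exceeds 43.12.
mu0 = 40        # mean under H0
sigma = 60      # assumed population standard deviation
n = 1000        # sample size
cutoff = 43.12  # rejection cutoff for the sample mean

se = sigma / sqrt(n)      # standard error of the sample mean, ~1.8974
z = (cutoff - mu0) / se   # standardized cutoff, ~1.6444
alpha = norm.sf(z)        # P(Z > z) = P(Type I error), ~0.0500
```

`norm.sf` is the survival function, 1 − Φ(z), which avoids the round-off of computing `1 - norm.cdf(z)` directly.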

(b)

I think finding the power is easier to explain. Find the probability of being in the rejection region when H0 is false.

P( Xbar > 43.12)

P( Z > (43.12 - 45) / (60 / sqrt(1000)) )

P( Z > -0.990847)

= 0.8391199

this is the power.

Power = 1 - P(Type II error)

P(Type II Error) = 1 - Power

P(Type II Error) = 1 - 0.8391199

P(Type II Error) = 0.1608801

(c)

power = P(Z > (43.12 - 50) / (60 / sqrt(1000)) )

power = P(Z > -3.626078)

power = 0.9998561

P(Type II Error) = 1 - 0.9998561 = 0.0001439
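And the corresponding check for part (c), with μ = 50 (again assuming SciPy):

```python
from math import sqrt

from scipy.stats import norm

se = 60 / sqrt(1000)                # standard error of the sample mean
power = norm.sf((43.12 - 50) / se)  # P(Z > -3.626), ~0.99986
beta = 1 - power                    # P(Type II error), ~0.000144
```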

(d) The distribution of sales is not normal, because many customers buy nothing. Why is it nonetheless reasonable in this circumstance to assume that the mean will be approximately normal?

It's okay because the central limit theorem tells us that no matter what distribution the data come from, if the sample size is large enough the sample mean is approximately normally distributed. With a sample size of 1000 it is very plausible that the sample mean follows a normal distribution. Since the test is based on the sample mean and its distribution, the normality assumption is reasonable here.
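A quick simulation illustrates the CLT argument. The individual-sale distribution below is made up for illustration (most customers buy nothing, the rest buy an exponentially distributed amount), so it is highly non-normal, yet the sample means for n = 1000 cluster tightly and symmetrically around the true mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sales(n):
    """Hypothetical sales: 70% of customers buy nothing, the rest buy
    an exponential amount. Very non-normal at the individual level."""
    buys = rng.random(n) < 0.3
    return np.where(buys, rng.exponential(scale=140, size=n), 0.0)

true_mean = 0.3 * 140  # 42 dollars per customer on average

# Draw many samples of size n = 1000 and record each sample mean.
means = np.array([simulate_sales(1000).mean() for _ in range(2000)])

# The sample means center on the true mean and look roughly symmetric,
# even though each individual sale is drawn from a spike-at-zero mixture.
print(means.mean())                   # close to 42
print(np.median(means) - means.mean())  # close to 0
```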

• a) With usual notations,

z= (43.12 - 40) / [60/sqrt(1000)]

when in fact μ=40 dollars, z= (43.12-40) / 1.8974

P( z > 1.644) = 0.0505 (prob of Type I error)

(This is the conventional method)

There seems to be a semantic confusion here. If VARx means the variance of x, then VARx = 60 × 60 = 3,600 and you'd always reject H0.

Is VARx the sample mean?

b) We reject H0 whenever the z value (using the sample mean) exceeds 1.644.

P(Type II error) = P(accept H0 | H1 is true)

= P( z < (43.12 - 45) / 1.8974) = P( z < -0.9908) = 0.1611

c) P( z < (43.12 - 50) / 1.8974) = P( z < -3.63) ≈ 0.0001