
Question: Suppose you have some money to invest, for simplicity $1, and you are planning to put a fraction w into a stock market mutual fund and the rest, 1 - w, into a bond mutual fund. Suppose that $1 invested in the stock fund yields Rs after one year and that $1 invested in the bond fund yields Rb. Suppose that Rs is random with mean 0.06 and standard deviation 0.09, and suppose that Rb is random with mean 0.04 and standard deviation 0.05. The correlation between Rs and Rb is 0.3. If you place a fraction w of your money in the stock fund and the rest, 1 - w, in the bond fund, then the return on your investment is R = wRs + (1 - w)Rb.
a. Suppose that w = 0.2. Compute the mean and standard deviation of R.
b. Suppose that w = 0.8. Compute the mean and standard deviation of R.
c. What value of w makes the mean of R as large as possible? What is the standard deviation of R for this value of w?
d. (Harder) What is the value of w that minimizes the standard deviation of R? (Show using a graph, algebra, or calculus.)
Answer

The mean and variance of R are given by

E(R) = w E(Rs) + (1 - w) E(Rb) = 0.06w + 0.04(1 - w) = 0.04 + 0.02w

var(R) = w^2 var(Rs) + (1 - w)^2 var(Rb) + 2w(1 - w) cov(Rs, Rb),

where cov(Rs, Rb) = corr(Rs, Rb) x SD(Rs) x SD(Rb) = 0.3 x 0.09 x 0.05 = 0.00135, which follows from the definition of the correlation between Rs and Rb.

(a) With w = 0.2: E(R) = 0.2 x 0.06 + 0.8 x 0.04 = 0.044, and var(R) = 0.2^2 x 0.09^2 + 0.8^2 x 0.05^2 + 2 x 0.2 x 0.8 x 0.00135 = 0.002356, so SD(R) = sqrt(0.002356) = 0.049 (approximately).

(b) With w = 0.8: E(R) = 0.8 x 0.06 + 0.2 x 0.04 = 0.056, and var(R) = 0.8^2 x 0.09^2 + 0.2^2 x 0.05^2 + 2 x 0.8 x 0.2 x 0.00135 = 0.005716, so SD(R) = sqrt(0.005716) = 0.076 (approximately).

(c) Because E(R) = 0.04 + 0.02w is increasing in w, the mean of R is maximized at w = 1, giving E(R) = 0.06. For this value of w, R = Rs, so SD(R) = 0.09.

(d) The derivative of the variance of R with respect to w is

d var(R)/dw = 2w var(Rs) - 2(1 - w) var(Rb) + (2 - 4w) cov(Rs, Rb) = 0.0158w - 0.0023.

Setting this equal to 0 and solving for w yields w = 0.0023/0.0158 = 0.146 (approximately). With w = 0.146, SD(R) = 0.048 (approximately). Since the second derivative, 0.0158, is positive, this w minimizes the standard deviation of R.
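The calculations in parts (a)-(d) can be checked with a short script. This is an illustrative sketch, not part of the original answer; the variable names (mu_s, sd_b, w_min, and so on) are ours:

```python
import math

# Parameters given in the question
mu_s, sd_s = 0.06, 0.09     # stock fund: mean and standard deviation of Rs
mu_b, sd_b = 0.04, 0.05     # bond fund: mean and standard deviation of Rb
rho = 0.3                   # corr(Rs, Rb)
cov_sb = rho * sd_s * sd_b  # cov(Rs, Rb) = 0.00135

def mean_R(w):
    """Mean of R = w*Rs + (1 - w)*Rb."""
    return w * mu_s + (1 - w) * mu_b

def sd_R(w):
    """SD of R from var(R) = w^2 var(Rs) + (1-w)^2 var(Rb) + 2w(1-w) cov(Rs, Rb)."""
    var = w**2 * sd_s**2 + (1 - w)**2 * sd_b**2 + 2 * w * (1 - w) * cov_sb
    return math.sqrt(var)

# (a) and (b)
print(mean_R(0.2), sd_R(0.2))   # 0.044 and about 0.0485
print(mean_R(0.8), sd_R(0.8))   # 0.056 and about 0.0756

# (d) Setting the derivative of var(R) to zero gives the closed form
#     w = (var(Rb) - cov) / (var(Rs) + var(Rb) - 2 cov)
w_min = (sd_b**2 - cov_sb) / (sd_s**2 + sd_b**2 - 2 * cov_sb)
print(w_min, sd_R(w_min))       # about 0.146 and about 0.048
```

The closed-form w_min is algebraically the same as solving 0.0158w - 0.0023 = 0 in part (d).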


