**You may discuss homework problems with other students, but you have to prepare the written assignments yourself.**

**Please combine all your answers, the computer code and the figures into one file, and submit a copy in your dropbox on coursework.**

**Due date: February 3, 2017.**

**Grading scheme: 10 points per question, total of 50.**

In a recent, exciting, but also controversial Science article, Tomasetti and Vogelstein attempt to explain why cancer incidence varies drastically across tissues (e.g. why one is much more likely to develop lung cancer than pelvic bone cancer). The authors show that a higher average lifetime risk of cancer in a given tissue correlates with the rate of replication of stem cells in that tissue. The main inferential tool for their statistical analysis was a simple linear regression, which we will replicate here.

You can download the dataset as follows:


```
tomasetti = read.csv("https://stats191.stanford.edu/data/Tomasetti.csv")
```

The dataset contains information about 31 tumour types. The `Lscd` (lifetime stem cell divisions) column refers to the total number of stem cell divisions during the average lifetime, while `Risk` refers to the lifetime risk of cancer for that tissue type.

Fit a simple linear regression model to the data with `log(Risk)` as the dependent variable and `log(Lscd)` as the independent variable.

1. Plot the estimated regression line.

2. Add upper and lower 95% prediction bands for the regression line on the plot, using `predict`. That is, produce one line for the upper limit of each interval over a sequence of `Lscd` values, and one line for the lower limits of the intervals. Interpret these bands at an `Lscd` of $10^{10}$.

3. Add upper and lower 95% confidence bands for the regression line on the plot, again using `predict`: one line for the upper limits of the intervals and one line for the lower limits. Interpret these bands at an `Lscd` of $10^{10}$.

4. Test whether the slope in this regression is equal to 0 at level $\alpha=0.05$. State the null hypothesis, the alternative, the conclusion and the $p$-value.

5. Give a 95% confidence interval for the slope of the regression line. Interpret your interval.

6. Report the $R^2$ and the adjusted $R^2$ of the model, as well as an estimate of the variance of the errors in the model.

7. Provide an interpretation of the $R^2$ you calculated above. According to a Reuters article, "Plain old bad luck plays a major role in determining who gets cancer and who does not, according to researchers who found that two-thirds of cancer incidence of various types can be blamed on random mutations and not heredity or risky habits like smoking." Is this interpretation of $R^2$ correct?
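A possible workflow for these parts is sketched below on simulated stand-in data with the same column names (the coefficients generating the fake data are invented for illustration; for the assignment, use the `tomasetti` data frame downloaded above instead):

```r
set.seed(42)
# Simulated stand-in for the downloaded data; replace with the real `tomasetti`.
tomasetti <- data.frame(Lscd = 10^runif(31, 5, 13))
tomasetti$Risk <- exp(-18 + 0.5 * log(tomasetti$Lscd) + rnorm(31))

fit <- lm(log(Risk) ~ log(Lscd), data = tomasetti)

# Bands are drawn over a grid of Lscd values
grid <- data.frame(Lscd = 10^seq(5, 13, length.out = 100))
pred_band <- predict(fit, newdata = grid, interval = "prediction", level = 0.95)
conf_band <- predict(fit, newdata = grid, interval = "confidence", level = 0.95)

plot(log(Risk) ~ log(Lscd), data = tomasetti)
abline(fit)
lines(log(grid$Lscd), pred_band[, "lwr"], lty = 2)  # lower 95% prediction band
lines(log(grid$Lscd), pred_band[, "upr"], lty = 2)  # upper 95% prediction band
lines(log(grid$Lscd), conf_band[, "lwr"], lty = 3)  # lower 95% confidence band
lines(log(grid$Lscd), conf_band[, "upr"], lty = 3)  # upper 95% confidence band

summary(fit)$coefficients       # slope estimate, t statistic, p-value
confint(fit, level = 0.95)      # 95% CI for intercept and slope
summary(fit)$r.squared          # R^2
summary(fit)$adj.r.squared      # adjusted R^2
summary(fit)$sigma^2            # estimate of the error variance
```

Note that the prediction bands account for the noise in a single new observation, so they are always wider than the confidence bands, which quantify uncertainty in the fitted mean line only.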

Let $Y$ and $X$ denote variables in a simple linear regression of median home prices versus median income by state in the US. Suppose that the model $$ Y = \beta_0 + \beta_1 X + \epsilon $$ satisfies the usual regression assumptions.

The table below is similar to the output of `anova` when passed a simple linear regression model.

```
Response: Y
          Df Sum Sq Mean Sq F value Pr(>F)
X          1     NA    5291      NA     NA
Residuals 48 181289      NA
```

Compute the missing values in the above table.

Test the null hypothesis $H_0 : \beta_1 = 0$ at level $\alpha = 0.05$ using the above table. Can you test the hypothesis $H_0 : \beta_1 < 0$ using the above table?

Compute the $R^2$ for this simple linear regression.

If $Y$ and $X$ were reversed in the above regression, what would you expect $R^2$ to be?
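Assuming the layout above (one regression degree of freedom for `X`, 48 residual degrees of freedom), the blanks follow from the standard ANOVA identities; a sketch of the arithmetic in R:

```r
df_x   <- 1
df_res <- 48
ms_x   <- 5291      # given: Mean Sq for X
ss_res <- 181289    # given: residual Sum Sq

ss_x    <- ms_x * df_x                # missing Sum Sq for X
ms_res  <- ss_res / df_res            # missing residual Mean Sq
f_value <- ms_x / ms_res              # missing F value
p_value <- pf(f_value, df_x, df_res, lower.tail = FALSE)

r_squared <- ss_x / (ss_x + ss_res)   # R^2 = SSreg / SStotal

c(SSx = ss_x, MSres = ms_res, F = f_value, p = p_value, R2 = r_squared)
```

Since $F = t^2$ for the slope, the $F$ statistic carries no sign information, which is worth keeping in mind for the one-sided question. Also recall that swapping $Y$ and $X$ leaves the sample correlation, and hence $R^2$, unchanged.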

Power is an important quantity in many applications of statistics. This question investigates the power of a test in simple linear regression. In a simple linear regression setting, suppose the true slope of the regression line is $\beta_1$ and the true intercept is $\beta_0$. If we assume $\sigma$ is known, then we can test $H_0: \beta_1 =0$ using $$ Z = \frac{\hat{\beta}_1 - 0}{SD(\hat{\beta}_1)} $$ where $SD(\hat{\beta}_1)$ is the standard deviation of our estimator $\hat{\beta}_1$.

The power of this test is a function of the true value $\beta_1$ as well as the accuracy of our estimate $SD(\hat{\beta}_1)$. The power is defined as $$ P_{(\beta_0,\beta_1)}(\text{$H_0$ is rejected}). $$ That is, the probability we reject the null hypothesis as a function of $(\beta_0, \beta_1)$. Actually, the power will generally not depend on $\beta_0$ in this model, so it is really a function of $\beta_1$ (and $SD(\hat{\beta}_1)$).

As we change the true $\beta_1$, the probability we reject $H_0$ changes: if the true value of $\beta_1$ is much larger than 0 relative to $SD(\hat{\beta}_1)$ then we are very likely to reject $H_0$.

What rule would you use to determine whether or not you reject $H_0$ at level $\alpha=0.1$?

What is the distribution of our test statistic $Z$? Show that the distribution depends only on the value $\beta_1 / SD(\hat{\beta}_1)$. We call this quantity the non-centrality parameter or signal to noise ratio (SNR).

Plot the power of your test as a function of the SNR.

Roughly how large does the non-centrality parameter have to be in order to achieve power of 80%?
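Under these assumptions $Z \sim N(\mathrm{SNR}, 1)$, so the power of the two-sided level-$0.1$ test has a closed form; a sketch of the power curve:

```r
alpha  <- 0.1
z_crit <- qnorm(1 - alpha / 2)   # reject H0 when |Z| > z_crit

# Power as a function of the SNR (non-centrality) beta1 / SD(beta1_hat):
# P(Z < -z_crit) + P(Z > z_crit) when Z ~ N(snr, 1)
power <- function(snr) {
  pnorm(-z_crit - snr) + pnorm(snr - z_crit)
}

snr <- seq(-5, 5, length.out = 200)
plot(snr, power(snr), type = "l", xlab = "SNR", ylab = "Power")
abline(h = 0.8, lty = 2)

# Smallest positive SNR achieving 80% power
snr_80 <- uniroot(function(s) power(s) - 0.8, c(0, 5))$root
snr_80
```

A quick sanity check on the formula: at $\mathrm{SNR} = 0$ the null holds and the power reduces to the level $\alpha = 0.1$.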

The tables below show the regression output of a multiple regression model relating `Salary`, the beginning salaries in dollars of employees in a given company, to the following predictor variables: `Education`, `Experience` (both measured in years) and a variable `STEM` indicating whether or not the employee has an undergraduate degree in a STEM field.

```
ANOVA table:

Response: Salary
           Df  Sum Sq Mean Sq F value Pr(>F)
Regression NA 2216338      NA      NA     NA
Residuals  62 8913083      NA

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)    3226.4      937.7      NA       NA
Education       850.0         NA   3.646       NA
Experience      923.4      260.1      NA       NA
STEM               NA      330.1   1.675       NA
```

Below, specify the null and alternative hypotheses, the test used, and your conclusion using $\alpha=0.05$ throughout. You may not necessarily be able to compute everything, but be as explicit as possible.

Fill in the missing values in the above table.
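Most blanks follow from $t = \text{estimate} / \text{SE}$ and the ANOVA identities; with three predictors the regression has 3 degrees of freedom, and 62 residual degrees of freedom. A sketch of the arithmetic (assuming `STEM` enters as a single 0/1 indicator):

```r
df_res <- 62

# Coefficients table: t = Estimate / Std. Error, two-sided p from t with 62 df
t_intercept  <- 3226.4 / 937.7
se_education <- 850.0 / 3.646    # missing Std. Error for Education
t_experience <- 923.4 / 260.1
est_stem     <- 1.675 * 330.1    # missing Estimate for STEM

p_two_sided <- function(t) 2 * pt(abs(t), df_res, lower.tail = FALSE)

# ANOVA table: overall F test of the model against the intercept-only model
df_reg  <- 3
ms_reg  <- 2216338 / df_reg      # missing Mean Sq for Regression
ms_res  <- 8913083 / df_res      # missing residual Mean Sq
f_value <- ms_reg / ms_res
p_f     <- pf(f_value, df_reg, df_res, lower.tail = FALSE)

c(F = f_value, p = p_f, p_intercept = p_two_sided(t_intercept))
```

A point forecast for, say, 10 years of education, 5 years of experience and `STEM = 1` is then `3226.4 + 850.0 * 10 + 923.4 * 5 + est_stem`; the width of the accompanying interval depends on the design matrix, which the tables alone do not provide.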

1. Test whether the linear regression model explains significantly more variability in `Salary` than a model with no explanatory variables. What assumptions are you making?

2. Is there a positive linear relationship between `Salary` and `Experience`, after accounting for the effect of the variables `STEM` and `Education`? (Hint: one-sided test.)

3. What salary interval would you forecast for an electrical engineer with 10 years of education and 5 years working in a related field?

4. What salary interval would you forecast, on average, for English majors with 10 years of education and 6 years in a related field?

A national insurance organization wanted to study the consumption pattern of cigarettes in all 50 states and the District of Columbia. The variables chosen for the study are:

- `Age`: Median age of a person living in a state.
- `HS`: Percentage of people over 25 years of age in a state who had completed high school.
- `Income`: Per capita personal income for a state (in dollars).
- `Black`: Percentage of blacks living in a state.
- `Female`: Percentage of females living in a state.
- `Price`: Weighted average price (in cents) of a pack of cigarettes in a state.
- `Sales`: Number of packs of cigarettes sold in a state on a per capita basis.

The data can be found at http://www1.aucegypt.edu/faculty/hadi/RABE5/Data5/P088.txt.

Below, specify the null and alternative hypotheses, the test used, and your conclusion using a 5% level of significance.

1. Test the hypothesis that the variable `Female` is not needed in the regression equation relating `Sales` to the six predictor variables.

2. Test the hypothesis that the variables `Female` and `HS` are not needed in the above regression equation.

3. Compute a 95% confidence interval for the true regression coefficient of the variable `Income`.

4. What percentage of the variation in `Sales` can be accounted for when `Income` is removed from the above regression equation? Which model did you use?