Feb 17th, 2021
Know how to compute probabilities when a phenomenon is modeled as multiple continuous random variables.
Q: What about Chris story time?
A1: I think the poem is standing in for Chris Story Time today. I’ll remind him about Chris Story Time for Friday’s lecture. :)
Q: Is chess really zero sum if there are ties? What does zero sum mean for our purposes?
A1: It’s still considered a zero-sum game. In the case of win-loss, that would be catalogued as +1 versus -1. In the case of a tie, it would be catalogued as 0 and 0. Sums are zero in both cases.
Q: Why is the variance (-1)^2 again?
A1: Var(aX) = a^2 Var(X), so Var(-X) = (-1)^2 Var(X) = Var(X). This is true for any constant a, even negative ones.
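A minimal simulation sketch of that identity, assuming NumPy is available (the particular mu, sigma, and a below are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(5, 2, 200_000)   # samples of an X with Var(X) = 4

a = -3
# Var(aX) = a^2 * Var(X) holds for the sample variance too:
print(np.var(a * x) / np.var(x))   # ~ a^2 = 9
# And with a = -1, the variance is unchanged: Var(-X) = (-1)^2 Var(X) = Var(X)
print(np.var(-x), np.var(x))
```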
Q: Hi, I got delayed by 5 mins in submitting the assignment due to network errors. Would that incur a penalty? Thanks!
A1: By default it will be marked late, but you can email me directly and I’ll ensure it’s marked as on time.
Q: Many thanks!
Q: If we sample from X twice and then add the two, would the variance be 4 * sigma^2?
A1: Careful: if the two samples are independent, Var(X_1 + X_2) = 2 * sigma^2. It’s 2X, i.e., scaling a single sample by 2, that has variance 4 * sigma^2. Sampling twice is the data science equivalent of adding two independent RVs, not of doubling one.
Q: So if X and Y are not independent what would the general form for the variance be?
A1: In that case, the mean is still the sum of the two means, but Var(X + Y) = Var(X) + Var(Y) + 2Cov(X, Y). It’s the 2Cov(X,Y) term that goes to 0 in the situation where X and Y are independent.
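A quick sketch of the general identity on dependent samples (the 0.5 coupling below is a made-up dependence, purely for illustration; assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 300_000)
y = 0.5 * x + rng.normal(0, 1, 300_000)   # Y depends on X, so Cov(X, Y) != 0

cov_xy = np.cov(x, y, bias=True)[0, 1]    # bias=True matches np.var's 1/n
lhs = np.var(x + y)
rhs = np.var(x) + np.var(y) + 2 * cov_xy
print(lhs, rhs)   # the two sides agree up to floating-point error
```

With independent X and Y, the Cov term would come out near 0 and the formula would collapse to Var(X) + Var(Y).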
Q: If P(E) = 0, does that mean that E is impossible or something like that?
A1: For a discrete sample space, yes, P(E) = 0 means E contains no outcomes. For continuous random variables, though, an event can have probability 0 and still be possible: P(X = 0.5) = 0 for X ~ Uni(0, 1), even though 0.5 is a valid outcome.
Q: how did you make these cool 3d graphs?
A1: The multicolor one is a snapshot from the Ross textbook, but python has some pretty good plotting libraries as well.
Q: When Chris says area under the curve, is that the volume under the curve?
A1: yep, that’s what he meant… I noticed the same thing.
Q: Just to confirm, if we sample from X_1 ~ N(mu, sigma^2) and then sample from X_2 ~ N(mu, sigma^2), we will have X_1 + X_2 ~ N(2*mu, 2*sigma^2) since X_1 and X_2 are independent (even though they have the same mu and sigma). However, if we only sample from X_1 ~ N(mu, sigma^2), then 2 * X_1 ~ N(2*mu, 4*sigma^2)?
A1: that’s correct, because X_1 and X_2 are independent. X_1 is not independent of itself, though, so the variance quadruples for 2X_1.
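A simulation sketch of that distinction (mu = 3 and sigma = 2 are made-up values; assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, n = 3.0, 2.0, 500_000

x1 = rng.normal(mu, sigma, n)
x2 = rng.normal(mu, sigma, n)   # a second, independent sample

print(np.var(x1 + x2))   # X1 + X2 ~ N(2*mu, 2*sigma^2): variance near 8
print(np.var(2 * x1))    # 2*X1   ~ N(2*mu, 4*sigma^2): variance near 16
```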
Q: How can two RVs (on one hand X1+X1, and on the other hand X1+X2) have different distributions when X1 and X2 have the exact same distribution (given that X1 and X2 are independent)? How does this make sense / what’s the intuition behind this?
A1: Sampling isn’t the best analogy here, though, because you only have one random variable, and therefore you only have one dimension, not two. You don’t get two dimensions by considering X + X.
A2: Because X is independent of Y, a high value of X doesn’t inform how Y behaves; that is, Y isn’t more likely to be high or low just because X was. When we’re speaking of 2X, we’re really only doing one sample that gets scaled by a factor of two. If you really want to think of X + X as two sampled distributions, a high value from the first X is matched with the exact same high value from the second X. You can’t say the second sample is independent of the first. That’s maximum dependence.
Q: Why can’t we write it as 2P(X-Y<-10), and use the distribution of X-Y~N to solve the problem?
A1: Nancy, I messed up and thought you were asking about the case where X and Y are Normal RVs. :) The correct answer is that the sum of two Uniforms isn’t itself a Uniform (and isn’t Normal either), so X − Y doesn’t have the Normal distribution you’d need.
A2: You can, but the normal that results isn’t the one we identified earlier in the lecture, because X and Y aren’t independent.
Q: I might have missed this, but where does the (1/30)^2 come from? Thanks!
A1: Because X and Y are each Uni(0, 30), the probability density of each is 1/30 everywhere between 0 and 30, inclusive (so the area of each rectangle, height 1/30 and width 30, is 1). Since X and Y are independent, the joint density is the product: (1/30)(1/30) = (1/30)^2.
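A Monte Carlo sketch of integrating that (1/30)^2 joint density over a region. The exact event from lecture isn’t quoted in the thread, so this borrows P(X − Y < −10) from Nancy’s question as the example (assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
x = rng.uniform(0, 30, n)
y = rng.uniform(0, 30, n)

mc_estimate = np.mean(x - y < -10)
# Analytically: the region {y > x + 10} inside the 30-by-30 square is a
# triangle with legs of length 20, so P = (1/2 * 20 * 20) * (1/30)^2 = 2/9.
analytic = 0.5 * 20 * 20 * (1 / 30) ** 2
print(mc_estimate, analytic)   # both near 0.222
```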
Q: So if we wanted to use the method Nancy stated we would have to know the covariance of X, Y?
A1: I misinterpreted her question and just added a follow-up. The sum of two Uniforms isn’t even a Uniform. Whatever that distribution is, though, it comes with a variance of Var(X) + Var(Y) + 2Cov(X, Y).
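A sketch of both claims for independent Uni(0, 30)s: the variance of the sum is Var(X) + Var(Y) = 30^2/12 + 30^2/12 = 150 (the Cov term is 0), and the sum is visibly not uniform (assumes NumPy; the window widths are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 1_000_000
s = rng.uniform(0, 30, n) + rng.uniform(0, 30, n)

print(np.var(s))   # near 150 = 75 + 75

# Not uniform: mass piles up near the center (30) rather than spreading evenly.
p_mid = np.mean((s > 25) & (s < 35))   # 10-wide window around the center
p_low = np.mean(s < 10)                # 10-wide window at the low end
print(p_mid, p_low)   # p_mid is several times larger than p_low
```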
Q: Thanks! I’m still confused on how independence plays into distribution. I thought that an RV X with a normal distribution with a set mean and a set variance will have a set distribution. If X and Y are both RVs with the same distributions, then why is the distribution of their sum different depending on whether X and Y are dependent or independent?
A1: The fact that two RVs happen to have the same mean and variance tells you nothing about whether they’re dependent or independent; it could be purely incidental. In the case of X versus X, it isn’t incidental: they’re trivially required to be the same, since it’s the same RV.
Q: just to clarify: F represents a CDF, while f represents a PDF?
A1: 100% correct
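A small sketch of the F / f relationship for a standard normal: since f is the derivative of F, a numerical derivative of F should match f (uses only the standard library; the evaluation point 0.7 is arbitrary):

```python
import math

def F(x):   # CDF of the standard normal, via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def f(x):   # PDF of the standard normal
    return math.exp(-x ** 2 / 2) / math.sqrt(2 * math.pi)

x0, h = 0.7, 1e-5
numeric_deriv = (F(x0 + h) - F(x0 - h)) / (2 * h)
print(numeric_deriv, f(x0))   # the two agree
```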
Q: Can you factorize constants however you want?
A1: For the purpose of proving independence? Yes. But if you want g(x) and h(y) to be valid marginal PDFs, then there’s only one way to split the constant between the two.
Q: Is it possible for the standard deviation to be negative? (What would that mean?)
A1: Naw, the stdev is always defined to be the positive square root of the variance.
Q: Is there no concept check for this lecture?
A1: oh, not yet :) there will be one in 10 minutes. :)
Q: do all the X’s have to be normal for the bivariate normal?
A1: yes :), but they can have nonzero covariance
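A sketch of a bivariate normal with nonzero covariance (the mean vector and covariance matrix below are made up; assumes NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
mean = [0.0, 0.0]
cov = [[1.0, 0.6],    # Var(X1) = 1, Cov(X1, X2) = 0.6
       [0.6, 2.0]]    # Var(X2) = 2

samples = rng.multivariate_normal(mean, cov, size=400_000)
x1, x2 = samples[:, 0], samples[:, 1]

# Each marginal is (univariate) normal, and the covariance is preserved:
print(np.var(x1), np.var(x2), np.cov(x1, x2)[0, 1])   # near 1, 2, and 0.6
```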