### Learning Goals

Know how and when to use a Gaussian distribution!

### Concept Check

Q: Could the slides be posted?

A1:  http://web.stanford.edu/class/cs109/lectures/10-Gaussian/

Q: is bullet 3 the central limit theorem?

A1:  yes, assuming that the variables are IID! We will cover the CLT in more detail later in the course
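A quick way to see the CLT in action before we cover it formally: sum many IID variables and look at the statistics of the sums. This sketch (assuming numpy is available) uses Uniform(0, 1) draws:

```python
import numpy as np

# Sketch: sums of many IID Uniform(0, 1) variables are approximately normal.
rng = np.random.default_rng(0)   # fixed seed so the numbers are reproducible
n = 100                          # number of IID terms in each sum
sums = rng.random((10_000, n)).sum(axis=1)

# The CLT predicts mean n * 1/2 = 50 and variance n * 1/12 for each sum.
print(sums.mean())               # close to 50
print(sums.std())                # close to sqrt(100/12), about 2.89
```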

Q: When you said integration is impossible, is it literally impossible or just beyond the scope of the class? (My b if I misheard)

A1:  it's actually impossible to get an exact closed-form answer, because the Gaussian density has no elementary antiderivative (though perhaps one day math will evolve…). The best anyone can do is use approximation methods!

Q: Are there proofs saying you can only approximate it or has no one found a method yet?

A1:  There is in fact a proof, due to Liouville. Here is a resource! http://math.stanford.edu/~conrad/papers/elemint.pdf

Q: Usually (I think) ‘f’ is used to define a PDF and ‘F’ is used to define a CDF. On slide 12, does 2. mean to say the CDF is symmetric about the mean, or does it mean to use ‘f’ instead of ‘F’ to represent the PDF? Or neither?

A1:  deep question. Both are correct statements, though they take slightly different forms. I most often see the symmetry stated in terms of the PDF (f), but an analogous statement holds for the CDF (F).

Q: I see, that's kinda wild given its ubiquity - thank you!

A1:  no worries :)

Q: Per my other question about slide 12, to clarify, it is then true that the symmetry property works for both the PDF and CDF? So CDF: F(u - x) = 1 - F(u + x) and PDF: f(u - x) = 1 - f(u + x) ?

A1:  The CDF version you wrote is correct, but the PDF version is f(u - x) = f(u + x), with no "1 -" term: the density itself is mirrored about the mean.
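If it helps, you can check the PDF symmetry numerically with scipy (the mean 3 and standard deviation 4 below are just illustrative):

```python
import math
from scipy import stats

# Check f(mu - x) == f(mu + x) for a normal with mu = 3, sigma = 4.
A = stats.norm(3, 4)
for x in [0.5, 1.0, 2.7]:
    assert math.isclose(A.pdf(3 - x), A.pdf(3 + x))
print("symmetric about the mean")
```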

Q: So in general the PDF is zero at any given point, and the CDF is just the integral of the PDF over some range (can be non-zero)?

A1:  Almost. The PDF itself isn't zero; rather, the probability of the random variable taking on any single exact value is zero. The PDF is the "derivative" of probability (a density), which can be non-zero!

Q: wait, how do you use the table?

A1:  take your input to phi and look up the value that corresponds to it. Recall that the input to phi should be (x - mu)/sigma. Another option is to just use Python :)

```python
import math
from scipy import stats

A = stats.norm(3, math.sqrt(16))  # Declare A to be a normal random variable: mean 3, variance 16
print(A.pdf(4))   # f(4), the probability density at 4
print(A.cdf(2))   # F(2), which is also P(A < 2)
print(A.rvs())    # Get a random sample from A
```
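To connect the table lookup to scipy: standardizing by hand and looking up phi gives the same answer as calling the cdf directly (this reuses the mean-3, variance-16 example):

```python
import math
from scipy import stats

mu, sigma = 3, math.sqrt(16)
x = 2
z = (x - mu) / sigma                        # z = -0.25, the table input
table_answer = stats.norm(0, 1).cdf(z)      # what a phi table lookup gives
direct_answer = stats.norm(mu, sigma).cdf(x)
print(table_answer, direct_answer)          # the two agree
```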

Q: Is the probability of a given point zero because you’d just end up integrating from (a) to (a) and get 0?

A1:  that is correct! there’s zero area under a single point, because it has no width.
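You can confirm the zero-width intuition with numerical integration (a quick sketch using scipy):

```python
from scipy import stats
from scipy.integrate import quad

# Integrating the density over the zero-width interval [a, a] gives 0.
A = stats.norm(0, 1)
area, _ = quad(A.pdf, 2, 2)
print(area)  # 0.0
```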

Q: I don’t think that the python program is in the slides on the website

Q: can you go back to the slide with the python syntax? I don’t see it in the lecture slides from the website.

Q: So the CDF is really just integrating over the PDF from negative infinity to some value of x and that translates to the probability of getting less than x?

A1:  exactly!
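As a sanity check, numerically integrating the PDF from negative infinity up to x matches the CDF (mean 3, standard deviation 4 chosen arbitrarily):

```python
import math
from scipy import stats
from scipy.integrate import quad

A = stats.norm(3, 4)
x = 5
area, _ = quad(A.pdf, -math.inf, x)   # integral of f from -infinity to x
print(area, A.cdf(x))                 # the two agree
```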

Q: Don’t some tables have the numerical values for values less than 0 instead of inverting?

A1:  I've only seen ones with positive values (saves paper?), since negative inputs follow from symmetry: phi(-z) = 1 - phi(z). But certainly the scipy cdf function accepts negative values :)
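The symmetry trick that a positive-only table relies on is easy to verify (quick sketch):

```python
import math
from scipy import stats

# A positive-only table covers negative inputs via phi(-z) = 1 - phi(z).
phi = stats.norm(0, 1).cdf
z = 1.25
print(phi(-z), 1 - phi(z))  # equal
```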

Q: could you explain again why r = 0.5 instead of 0?

Q: Why don’t we calculate a z for the bit sending question?

A1:  Yes, you would calculate a z there too; that's the full process. I think that step just wasn't shown.

Q: So in basketball the best teams have higher winning percentages than in baseball, would this imply that either there is more variance in the curves or that the ELO scores have bigger differences in their means?

A1:  great question. it would imply that the variance is lower for basketball, and that's exactly the case. Now, modern sports analytics predicts winners with something more complex than Elo, but Elo was used for years and is still used in chess!
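For reference, the classic Elo expected-score formula is a logistic curve in the rating difference (the ratings below are made up for illustration):

```python
# Classic Elo expected score for player A against player B.
def elo_expected(r_a, r_b):
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

print(elo_expected(1600, 1400))  # stronger player favored, about 0.76
print(elo_expected(1500, 1500))  # evenly matched: 0.5
```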

Q: So I'm guessing chess websites can use Elo scores to try to optimize matchups so that most people usually play others they have a fair chance of beating (along with internet speeds and maybe other factors)

A1:  absolutely!

Q: to solve analytically we would just use the noisy wires method?

A1:  no, because there are two random variables! So we need to reason about the two random variables jointly

Q: do people continuously update elo scores of teams/players? if so how does that change the probabilities?

A1:  yes! Elo also devised a method for updating ratings given the result of a chess game
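Elo's update rule moves a rating toward the observed result; a minimal sketch (K = 32 is a common choice, assumed here for illustration):

```python
# Elo rating update: new rating = old + K * (actual score - expected score).
def elo_update(r_a, r_b, score_a, k=32):
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    return r_a + k * (score_a - expected_a)

# An upset: a 1500 player beats a 1700 player (score 1 = win), so the
# rating jumps by nearly the full K.
print(elo_update(1500, 1700, 1))
```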

Q: so noisy wires worked because there was one random var (the noise) and wouldnt work here because the two random vars are the teams

A1:  exactly
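One simple way to formalize the two-random-variable view: model each team's performance as an independent normal, so that P(A beats B) = P(A - B > 0), and the difference of independent normals is itself normal (the team parameters below are made up for illustration):

```python
import math
from scipy import stats

# Performance of each team, modeled as independent normals.
mu_a, var_a = 1550, 100 ** 2
mu_b, var_b = 1500, 100 ** 2

# A - B is normal with mean mu_a - mu_b and variance var_a + var_b.
diff = stats.norm(mu_a - mu_b, math.sqrt(var_a + var_b))
p_a_wins = 1 - diff.cdf(0)   # P(A - B > 0)
print(p_a_wins)              # about 0.64
```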