Lecture 7 Zoom Q&A


Q: How do we sign up for interactive Grading?

A1: Your section leader will email you with instructions; stay tuned for that email

A2: Your section leader will send instructions once they’re ready to host IGs (it will be done through Paperless)


Q: Will our section leaders send us an email with when the interactive grading session is?

A1: They will send you an email with instructions on how to sign up for an appointment time

A2: Yes, they will send you a list of times to sign up


Q: Do you only want to use a reference when you’re going to change the entity you’re referencing, or does it not matter? For instance, in this example you’re referencing the vector but it’s not changing in the function

A1: We will also generally pass collections by reference even when we are not changing them. Pass by reference avoids making a “deep” copy which can be expensive for a large collection


Q: If I say something like var = someFunction()

A1: The assignment and the function call are each one time step, but there are also all the steps that happen inside someFunction to accumulate into the total


Q: Indexing into other data structures like stack or queue would take more time steps right?

A1: Yes, depending on which element you are trying to access

A2: Neither stack nor queue allow direct indexing

A3: ^To clarify, I meant taking the steps to pop elements off until you get to the relevant element in the respective data structure. That isn’t indexing, exactly, but more so accessing the element you wish to find


Q: how many steps is that?

A1: See my earlier answer


Q: where does the 2* come from?

A1: i++ is doing an addition and an assignment, each counted as one step


Q: Are time complexities always additive like this? E.g. if function a is O(n^2) and b is O(n) then is a + b = O(n^2)

A1: Correct, the leading term dominates


Q: Is counting primitive operations the only way to determine complexity by looking at code (not doing time tests)?

A1: We primarily analyze code as opposed to using time tests to determine complexity

A2: Sort of. You don’t need to account for the exact operation count (i.e., whether a loop body is 5 operations versus 6), but you do need to work out whether the loop executes N times or sqrt(N) times, etc.

A3: We generally use the timing data to confirm our mathematical analysis


Q: is the python code available for us to use?

A1: yes, the script is in the zip of lecture code, check it out!


Q: wait why do we ignore the if statement in the for loop? it gets run as often as the for loop, right?

A1: yes, but the time it takes to run is negligible


Q: So if we just had a for loop that did nothing it would have the same computational complexity

A1: live answered


Q: Following the same logic, would a for-each loop have a lower complexity?

A1: A for-each loop executes N times where N is the number of elements in the collection, so O(N) also


Q: If you had for(int i = 0; i < (n/2); i++), would this still be O(N) complexity or O(logN)?

A1: O(N)

A2: This would be O(N), since you do N/2 operations, but the constant factor of 1/2 gets dropped, so it’s O(N) overall


Q: Using this Big O notation, could we also say that n is O(n^2) (since there would be a real constant c>0 and n_0 > 1 such that n <= c*n^2 for every integer n >= n_0)?

A1: Yes, there are quadratic runtimes in Big O analysis

A2: It is correct to say so as a formality, but we generally look for the tightest bound.


Q: Do CS103 or 107 go more in depth into the math behind Big-O?

A1: CS161 would actually be your best bet


Q: I am not sure the exact relation but I remember that the P vs NP problem is about polynomial vs non polynomial type functions. Is that problem talking about big O analysis of different types of programs?

A1: Yes, it is. We will very lightly touch on this when we get to recursion, but for more, take CS103 or CS161


Q: Does n represent the actual number of data points being sent or the factor by which the amount of data points changes?

A1: It is the size of the input, i.e. number of elements in vector in case of vectorMax


Q: Is there ever a time you use a O(n^2) program for small inputs because for a brief period n > n^2

A1: Note that n > n^2 only holds for n < 1, so this is only relevant for the smallest values of N, for which it would not matter which algorithm you chose


Q: If you had 50000000000n versus just n^2, is 50000000000n still preferred because it is a linear big O notation as opposed to quadratic? What if it is unlikely you are working with large values of n? (I know 50000000000n is unrealistic, but speaking hypothetically…)

A1: The linear function will “win out” in the limit, but if, practically speaking, you knew that you would also be dealing with much smaller values of N, you might use the low-constant N^2 instead


Q: is the if statement itself also considered in runtime or is it generally negligible?

A1: Negligible


Q: for exponential functions does the base matter (for example, is O(2^n) = O(4^n))?

A1: O(2^n) is not the same as O(4^n), the bases do matter


Q: Would something like smartersum from Assignment 1 be O(sqrt(N))? He said we won’t care about other types in this class, but that seems relevant

A1: Correct! It is O(sqrt(N)), which grows more slowly than O(N)


Q: is the chart organized based on fastest to slowest?

A1: live answered


Q: so to determine what matters, we just have to look at what depends on the size, n?

A1: Correct


Q: What if the 7 in front of 7nlog(n) was a fraction? Would that change our O?

A1: No, we discard those constants


Q: For recursive functions, would the call back to the original function be counted as 1 ops or is the complexity higher?

A1: The cost of a recursive call is 1 op for the call, but then you also have to count all the work done in the call itself (which could involve further recursive calls)


Q: Is it still n^2 if you’re looping through a matrix with n rows and m columns?

A1: It would be O(n*m) but if n == m then yes it would be O(n^2)

A2: No, in this case it would be O(nm). So defining your variables and terminology is important.


Q: Does Big-O always depend on the worst case scenario?

A1: Yes

A2: We can separate into best/worst/average cases if we need to be more precise


Q: can the same thing happen with cascading if statements?

A1: Typically, we consider the run time of if statements to be negligible


Q: would it have the same big-o if the inner for loop executed for j < i?

A1: Yes, but the analysis is a little trickier (Gaussian sum, there is a nice explanation in the textbook with a good visual of why)


Q: To clarify, is n the amount of data or is it the amount of operations? Similarly, is the result we get from O(n^2) the amount of operations or the amount of time it takes to run?

A1: n is the size of the input; e.g., for vectorMax it was the number of elements in the vector


Q: What’s the notation to assign a function a Big O “value”?

A1: There is no real notation. When you discuss it, you will just usually say “this function runs in O of [whatever] time”

A2: it is also helpful to let the person that you’re talking to know what value you are assigning N to be


Q: So if you have a function that only runs certain for loops sometimes, you would have to consider the worst-case scenario that those for loops run EVERY time when deciding which big O it is?

A1: You can specify whether your analysis is for worst-case, best-case, or average-case.


Q: would the results/ effects change if you were to use a while loop as opposed to a for loop

A1: No


Q: Would it be correct to say that the slower the function representing the algorithm grows, the more efficient the algorithm is?

A1: Correct

A2: Correct, an algorithm that grows more steeply doesn’t scale as well as one that grows more shallowly


Q: Is that why v.add() adds to the end of the vector? Because it is O(1) instead of O(n)

A1: Yes, exactly!


Q: if you KNOW your n is going to be really small wouldn’t some of those “slower” patterns be better? Like at the beginning the run time for logarithmic functions would grow faster than that of linear functions right

A1: When n is that small, the choice of algorithm makes a negligible difference


Q: So for Big O we need to primarily be careful with loops right

A1: Yes


Q: So there’s kind of a tradeoff: adding elements to a stack is fast, but then it is harder to access elements in the stack?

A1: Correct. A compelling reason to use a stack is when you don’t need to access any element other than the top, so you don’t mind that other accesses are not efficient


Q: Would vec.insert(0, n) alone be O(n) or O(n^2)?

A1: It’ll be O(n) since you need to shift all n elements over to make space at index 0


Q: Are we going to speak about Omega today or on Wednesday?

A1: We’ll only focus on Big-O in this class and won’t talk about big omega or big theta

A2: No, we leave the more formal Big-O analysis to CS103 and CS161


Q: Are there any actually-used algorithms for which the only possible implementation is exponential?

A1: Yes, these come from problems called NP (which stands for “nondeterministic polynomial”, not “non-polynomial”), for which no efficient approach is known


Q: how did you derive the other orders given log n?

A1: live answered


Q: were we expected to do this kind of calculation for the first assignment?

A1: live answered

A2: Nope, the first assignment was just meant to build intuition!


Q: Earlier you said we need to arrange meetings with our section leader to grade Assignment 1. Where do we do that?

A1: live answered

A2: They will send you an email with information when it’s time


Q: Do we have to remember the number that replaces n in the for loops?

A1: live answered


Q: In terms of time

A1: live answered


Q: for looping through grids, should we avoid nested for loops then?

A1: Not really any way to avoid it! Sometimes there are tasks where we just have to deal with the consequences


Q: What kind of combination of primitive operations would give you an O(log n) or O(n log n) behavior

A1: live answered


Q: Did you use mathematical calculations? Or did you just run a different type of code to find the different time of the different orders?

A1: live answered


Q: So in our assignments do you want us to try to use O(1) statements as much as possible as opposed to other slower statements?

A1: live answered


Q: Does P = NP or does P ≠ NP? :)

A1: P = NP when N = 1

A2: The whole modern world depends on them not being equal :-)


Q: Log is assumed base 2 right?

A1: yes

A2: Yes, although we generally don’t care about the base of the logarithm (they’re all the same when it comes to big O, similar to other constant factors that are dropped)


Q: Will we ever have to calculate the time complexity of recursive functions in this class?

A1: live answered

A2: Yes, we will talk about that when we get to recursion next week.