CS246

Mining Massive Data Sets

Winter 2016

Tuesday & Thursday 9AM - 10:20AM in NVIDIA Auditorium, Jen-Hsun Huang Engineering Center.

In the first two weeks of the class, we will also hold two recitation sessions that will serve as refreshers on important course material:

- Review of basic probability. Location: Gates B01, Time: Friday, January 8 from 4:30pm to 6pm
- Review of basic linear algebra. Location: Gates B01, Time: Friday, January 15 from 4:30pm to 6pm

The course will discuss data mining and machine learning algorithms for analyzing very large amounts of data. The emphasis will be on MapReduce as a tool for creating parallel algorithms that can process massive data sets.

**Topics include:** Frequent itemsets and Association rules, Near Neighbor Search in High Dimensional Data, Locality Sensitive Hashing (LSH), Dimensionality reduction, Recommendation Systems, Clustering, Link Analysis, Large scale supervised machine learning, Data streams, Mining the Web for Structured Data, Web Advertising.

CS246 is the first part of the two-part sequence CS246--CS341. CS246 will discuss methods and algorithms for mining massive data sets, while CS341: Project in Mining Massive Data Sets will be a project-focused advanced class with unlimited access to a large MapReduce cluster.

For students who want to learn more about Hadoop we are also offering CS246H: Mining Massive Data Sets: Hadoop Labs. In CS246H Hadoop will be covered in depth to give students a more complete understanding of the platform and its role in data mining.

Tentative list of topics to be covered (these may change as the quarter progresses):

- Introduction and MapReduce
- Association Rules: Frequent itemsets and Association rules
- Near Neighbor Search in High Dimensional Data
- Locality Sensitive Hashing (LSH)
- Dimensionality reduction: SVD and CUR
- Recommendation Systems
- Clustering
- Link Analysis: Personalized PageRank, Hubs and Authorities
- Web spam and TrustRank
- Proximity search on Graphs: Random Walks with Restarts
- Large scale supervised machine learning (1): k-nearest neighbor, Perceptron
- Large scale supervised machine learning (2): Classification and regression trees
- Large scale supervised machine learning (3): Support Vector Machines
- Mining data streams
- Web Advertising

See FAQ for information on how to submit assignments and other work.

Gradiance quizzes are usually released on Tuesdays and due 9 days later, on the Thursday of the following week. **Note that we cannot under any circumstances extend the quiz deadline**. Once the deadline has passed, students will not be able to submit their quizzes. The table below will be updated with deadlines as quizzes go live.

Students are expected to have the following background:

- Knowledge of basic computer science principles and skills, at a level sufficient to write a reasonably non-trivial computer program (e.g., CS107 or CS145 or equivalent are recommended).
- Good knowledge of Java will be extremely helpful since most assignments will require the use of Hadoop which is written in Java.
- Familiarity with basic probability theory (CS109 or Stat116 or equivalent is sufficient but not necessary).
- Familiarity with writing rigorous proofs (at a minimum, at the level of CS 103).
- Familiarity with basic linear algebra (e.g., any of Math 51, Math 103, Math 113, CS 205, or EE 263 would be much more than necessary).
- Familiarity with algorithmic analysis (e.g., CS 161 would be much more than necessary).

The recitation sessions in the first weeks of the class will give an overview of the expected background.

Lecture notes and slides will be posted online. Readings are drawn from the book Mining of Massive Datasets by Jure Leskovec, Anand Rajaraman, and Jeff Ullman.

**Automated Quizzes**: We will be using Gradiance. Everyone should create an account there
(passwords are at least 10 letters and digits with at least one of each) and enter the class code 62B99A55. Please use your real first and last name, with the standard capitalization, e.g., "Jeffrey Ullman" so we can match your Gradiance score report to
other class grades.

**Books**: Leskovec-Rajaraman-Ullman: Mining of Massive Datasets can be downloaded for free. It can be purchased from Cambridge University Press, but you are not required to do so.

**MOOC**: There is a Coursera MOOC that is similar to this course. You may find
it useful to view some of the videos there.

**Piazza**: Piazza Discussion Group for this class (access code "mmds").

**Course handouts**: Available here.

The coursework for the course will consist of:

- Gradiance quizzes: Short weekly Gradiance quizzes. 20% of the final grade.
- Homeworks: Four biweekly homeworks that include programming. 40% of the final grade.
- Final exam: 40% of the final grade.

Here are the instructions for the weekly quizzes on Gradiance:

- Go to http://www.gradiance.com/services/
- Register and use the class token 62B99A55
- Make sure you register using your Stanford email (SUNet ID) so we can match enrollment records
- Please use your real first and last name, with the standard capitalization, e.g., "Jeffrey Ullman" so we can match your Gradiance score report to other class grades.
- You will have exactly 9 days to complete the quiz.
**(There are no late days!)**

You can attempt each quiz as many times as you like, and we hope everyone will eventually get 100%. The secret is that each question is based on a "long-answer" problem, which you should work through in full. Each time you open the quiz, the Gradiance system presents a random selection of right and wrong answer choices, and thus samples your knowledge of the full problem. While there are ways to game the system, we group several questions at a time, so it is hard to get 100% without actually working the problems. Also note that you have to wait 10 minutes between openings, so brute-force random guessing will not work.

Solutions appear after the problem set is due. However, you must submit at least once in order to see your most recent submission displayed with the solutions embedded.

Gradiance quizzes are generally out on Tuesdays and due on Thursdays, 9 days later. (Thursday 11:59pm Pacific time). Note that we cannot under any circumstances extend the quiz deadline. Once the deadline has passed students will not be able to submit their quizzes.

There will be four biweekly homeworks that involve programming (including work with Hadoop) as well as regular numerical/algebraic theory problems.

**Questions:** We try very hard to make questions unambiguous, but some ambiguities may remain. If you are confused, ask (i.e., post a question on Piazza) or state your assumptions explicitly. Reasonable assumptions will be accepted in the case of ambiguous questions.

**Honor code:** We strongly encourage students to form study groups. Students may discuss and work on homework problems in groups. However, each student must write down the code and solutions independently, without referring to written notes from the joint session. In other words, each student must understand the solution well enough to reconstruct it by him/herself. In addition, each student should list on the problem set the people with whom he/she collaborated.

Since we occasionally reuse problem set questions from previous years, we expect students not to copy, refer to, or look at the solutions in preparing their answers. It is an **honor code violation to intentionally refer to a previous year's solutions**. This applies both to the official solutions and to solutions that you or someone else may have written up in a previous year.

Finally, we consider it an Honor Code violation to post your homework solutions anywhere it is easy for other students to access them. This includes uploading your solutions to publicly viewable repositories such as GitHub.

The standard penalty for a first offense includes a one-quarter suspension from the University and 40 hours of community service. The standard penalty for multiple violations (e.g., cheating more than once in the same course) is a three-quarter suspension and 40 or more hours of community service. The Stanford Office of Community Standards has more information.

**Late assignments:** Each student will have a total of

**Assignment submission:**
All students (SCPD and non-SCPD) submit their assignments via Gradescope. You can typeset or scan your assignment. Make sure that you start the answer to each question on a new page.

To register for Gradescope:

- Create an account on Gradescope if you don't have one already.
- Join the CS246 course using entry code 92B7E9.

Students also need to upload their code at http://snap.stanford.edu/submit. Put all the code for a single question into a single file and upload it.

**Regrade policy:** We take great care to ensure that grading is fair and consistent. Since we will always use the same grading procedure, any grades you receive are unlikely to change significantly. However, if you feel that your work deserves a regrade, please submit a written request within a week of receiving your grade. In your request, indicate which components of your submission you would like regraded, and prepare a clear and concise argument why you feel we should regrade those components.

Regrade requests should be submitted through Gradescope.

However, note that we reserve the right to regrade the entire assignment. Moreover, if the regrade request is unjustified and thus not honored, then every future unsuccessful regrade request will be penalized 5 points.

Most assignments will require some level of programming in Hadoop. Hadoop is an open-source implementation of the MapReduce distributed data-processing framework for mining large data sets across clusters of computers.

You will be running Hadoop jobs on your local laptop/desktop. However, since installing and setting up Hadoop is non-trivial, we have prepared a Linux virtual machine with Hadoop already installed. We will post the VM and instructions soon.
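To give a feel for the programming model before the Hadoop VM is available, the MapReduce pattern can be sketched in a few lines of plain Python: a map phase emits key-value pairs, a shuffle groups values by key, and a reduce phase aggregates each group. This is only an illustration of the model, not Hadoop's actual API; the function names below (`map_phase`, `shuffle`, `reduce_phase`, `word_count`) are made up for this sketch, and real Hadoop jobs implement Mapper/Reducer classes in Java and run distributed across a cluster.

```python
from collections import defaultdict
from itertools import chain

# Illustrative local sketch of MapReduce (word count).
# Hypothetical helper names; not the Hadoop API.

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in a document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key, as the framework
    does between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Reduce: sum the counts for one word."""
    return key, sum(values)

def word_count(documents):
    pairs = chain.from_iterable(map_phase(d) for d in documents)
    return dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())

print(word_count(["big data big ideas", "data mining"]))
# {'big': 2, 'data': 2, 'ideas': 1, 'mining': 1}
```

In a real Hadoop job the map and reduce functions run in parallel on different machines, and the shuffle is handled by the framework over the network; the logic per key, however, is exactly as above.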

Two recitation sessions will be held:

- Linear Algebra Review: properties such as rank and nullspace, operations such as inverse and trace, quadratic forms, eigendecomposition.
- Probability Review: random variables, moments, basic limits and bounds, maximum likelihood, basic optimization algorithms like gradient descent.

The recitation sessions are only intended to be refreshers; it is expected that you have already taken courses that include this material.

The previous version of the course is CS345A: Data Mining, which also included a course project. CS345A has now been split into two courses: CS246 (Winter, 3 units, homeworks, final, no project) and CS341 (Spring, 3 units, project-focused).

You can access class notes and slides from previous versions of the course here. General course questions should be posted on Piazza.