CS 124 / LING 180 From Languages to Information,
Dan Jurafsky, Winter 2018
Week 3: Group Exercises on Text Cat/NB/Sentiment
Jan 30, 2018
Part 1: Group Exercise
We want to build a Naive Bayes sentiment classifier using add-1 smoothing,
as described in lecture (regular multinomial Naive Bayes, not binary Naive Bayes). Here is our training corpus:
Training Set:
− just plain boring
− entirely predictable and lacks energy
− no surprises and very few laughs
+ very powerful
+ the most fun film of the summer
Test Set:
predictable with no originality
1. Compute the prior for the two classes + and −, and the likelihoods for each word given the class
(leave them in the form of fractions).
2. Compute whether the sentence in the test set is of class positive or negative
(you may need a computer for this final computation; a runnable sketch appears after these questions).
3. Would using binary multinomial Naive Bayes change anything?
4. Why do we add V to the denominator in add-1 smoothing, instead of just using the count of words in one class?
5. What would the answer to question 2 be without add-1 smoothing?
6. Naive Bayes treats words as if they are independent conditioned on the class (that is why we multiply the individual probabilities). What other features could you add to Naive Bayes in order to predict sentiment that still (roughly) satisfy this independence assumption?
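
To check your hand computation, here is a minimal sketch in Python (our own illustration, not part of the assignment) of multinomial Naive Bayes with add-1 smoothing on this exact corpus. It keeps probabilities as exact fractions, and it simply skips test words that never appear in training ("with" and "originality"), as described in lecture. Running it prints the unnormalized score for each class and the predicted label.

    from collections import Counter
    from fractions import Fraction

    # Toy corpus from Part 1; labels are "+" (positive) and "-" (negative).
    train = [
        ("-", "just plain boring"),
        ("-", "entirely predictable and lacks energy"),
        ("-", "no surprises and very few laughs"),
        ("+", "very powerful"),
        ("+", "the most fun film of the summer"),
    ]
    test = "predictable with no originality"

    # Per-class word counts, document counts, and the shared vocabulary V.
    word_counts = {"+": Counter(), "-": Counter()}
    doc_counts = Counter()
    for label, doc in train:
        doc_counts[label] += 1
        word_counts[label].update(doc.split())
    vocab = set(word_counts["+"]) | set(word_counts["-"])

    def likelihood(word, label):
        """Add-1 smoothed P(word | label), as an exact fraction."""
        return Fraction(word_counts[label][word] + 1,
                        sum(word_counts[label].values()) + len(vocab))

    def score(doc, label):
        """Unnormalized P(label) * product of P(word | label)."""
        p = Fraction(doc_counts[label], sum(doc_counts.values()))  # the prior
        for word in doc.split():
            if word in vocab:  # words unseen in training are skipped
                p *= likelihood(word, label)
        return p

    for label in ("-", "+"):
        print(label, score(test, label))
    print("predicted:", max(("-", "+"), key=lambda c: score(test, c)))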
Part 2: Challenge Problems

1. Go to the Sentiment demo at
http://nlp.stanford.edu:8080/sentiment/rntnDemo.html.
Come up with 5 sentences that the classifier gets wrong.
Can you figure out what is causing the errors?

2. Binary multinomial NB seems to work better on some problems than full-count NB,
but full-count NB works better on others.
For what kinds of problems might binary NB be better, and why?
(There is no known right answer to this question, but
I'd like you to think about the possibilities; a short sketch contrasting the two counting schemes follows below.)
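
To make the contrast concrete, here is a tiny illustration (our own, using a made-up document) of the one mechanical difference between the two variants: binary NB clips each word's count to at most one per document before the per-class counts are accumulated, so a word repeated for emphasis within a single review carries no extra weight.

    from collections import Counter

    # Hypothetical single document, just to show the counting difference.
    doc = "fun fun fun and very very boring".split()

    full_counts = Counter(doc)         # full-count NB: every token counts
    binary_counts = Counter(set(doc))  # binary NB: at most one per document

    print(full_counts)    # Counter({'fun': 3, 'very': 2, 'and': 1, 'boring': 1})
    print(binary_counts)  # every word has count 1 (set order may vary)

Everything else in training and classification (the priors, the smoothing, the decision rule) is unchanged between the two variants.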