This page answers frequently asked questions (FAQs) for CS224N / Ling 284.

5/5/10 I don't understand this whole feature index thing in PA3.
4/28/09 Distortion Parameters for Model 2
4/28/09 What should getAlignmentProb() return?
4/28/09 For PA2 Part I, why is the test data part of the training data?
4/03/09 checkModel(): How to Implement
4/01/09 Increasing perplexity with more training data?
4/01/09 Do we smooth at train time or test time?
4/01/09 How do I use stop tokens in my n-gram model?
4/01/09 Does PA1 require a strict proof?
4/01/09 Smoothing implementation details
4/01/09 Smoothing and conditional probabilities
4/01/09 Smoothing and unknown words
4/01/09 Do I have to do my final project in Java?
4/01/09 Where do I hand in my report late?


I don't understand this whole feature index thing in PA3.

May 5, 2010

The use of IndexLinearizer in assignment 3 may create some confusion. Consider the miniTest example. There are five Φ features (fuzzy, claws, small, big, medium) and two labels (cat and bear). The IndexLinearizer will assign indices i to the <feature, class> pairs as follows:

              cat: 0    bear: 1
  fuzzy: 0    i = 0     i = 1
  claws: 1    i = 2     i = 3
  small: 2    i = 4     i = 5
  big: 3      i = 6     i = 7
  medium: 4   i = 8     i = 9

The i values are the indices into the weights vector (the λs). Where the handout refers to fi, it means this same index. So what might be a little confusing is that a given fi (such as f2) corresponds not to a single observed feature (like claws) but to a <feature, class> pair (like <claws, cat>).

So, for example, in the formulas on pp. 3-4 of the handout, you see things like fi(c', d). How do you evaluate this? Well, suppose i is 2 and c' is cat. f2 is supposed to be "on" whenever the class is cat and the feature claws is present, so f2(cat, d) will be on just in case the datum d has the feature claws.

Now what if c' is bear? In this case, f2(bear, d) will be off, regardless of whether the datum d has the feature claws or not.

People may be confused about the choice of how features are represented in our EncodedDatum class. Basically, there are two different ways to think about the relationship between features, labels, and weights. The first is to think of features as functions of both the observations and the labels (e.g., "word=The & label=protein") and then learn a weight for each such feature. These are our fi features. The alternative is to build features over only the observations (e.g., "word=The") - these are our Φj features - and then to learn a weight for each (feature, label) pair (e.g., ("word=The", protein)).

The first version is technically more expressive, but in practice people usually only use features that factorize in this way, as the conjunction of a data pattern and a check on the class value. We opted for the latter representation because we felt it would be simpler for you, it generally makes it harder to accidentally "cheat" when building your features, and it makes the calculations more efficient (the set of Φ features doesn't vary with the class being considered for a particular observed datum). But this choice requires the IndexLinearizer class, which might be a bit confusing to the uninitiated.

To help, here's a little bit of code which iterates through each datum and each possible label, and gets the weight for that (feature, label) pair. Hopefully this will help clear up some confusion:

        for (EncodedDatum datum : data) {
          for (int label = 0; label < encoding.getNumLabels(); label++) {
            int numFeats = datum.getNumActiveFeatures();
            for (int i = 0; i < numFeats; i++) {
              // the Φ feature index (e.g. claws = 1)
              int feat = datum.getFeatureIndex(i);
              // the linearized <feature, label> index into the weights vector
              int index = indexLinearizer.getLinearIndex(feat, label);
              // the value of the Φ feature for this datum
              double val = datum.getFeatureCount(i);
              // the λ weight for this <feature, label> pair
              double weight = weights[index];
            }
          }
        }
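
In a log-linear (maximum entropy) model, each such pair would typically contribute val * weight to the unnormalized log score of that label for that datum; summing those contributions over the active features, exponentiating, and normalizing across labels gives the class probabilities.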
     



Distortion Parameters for Model 2

28 April 2009

There have been a lot of questions about the distortion parameters used for Model 2.

"For each bucket j, you should have a parameter d(j) to indicate the probability of that distortion. These parameters should be learned during the EM process. (It turns out that choosing reasonable functions for d, instead of trying to learn parameters, can work pretty well. We'd like you to attempt learning distortion parameters with the EM algorithm, but dealing with distortions in some other sensible way could also warrant credit.)"

Note that since d is over buckets determined by the indices of the English/French words and the lengths of the English/French sentences, it is indeed represented by a one-dimensional table of floating-point values, as mentioned in Knight's tutorial.
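
To make this concrete, here's a rough sketch in Java of such a one-dimensional table. The bucket count and the bucketing function (which discretizes the difference in relative sentence position) are assumptions for illustration only, not part of the starter code:

        // Sketch of a bucketed distortion table for Model 2. NUM_BUCKETS and the
        // bucketing function are arbitrary illustrative choices.
        public class DistortionTable {
          private static final int NUM_BUCKETS = 21;
          private final double[] d = new double[NUM_BUCKETS];  // d(j) for each bucket j

          public DistortionTable() {
            java.util.Arrays.fill(d, 1.0 / NUM_BUCKETS);       // start uniform
          }

          // Map (French position i, English position j, French length m, English
          // length l) to a bucket by discretizing the difference in relative position.
          public int bucket(int i, int j, int m, int l) {
            double relDiff = (double) j / l - (double) i / m;  // roughly in [-1, 1]
            int b = (int) Math.round(relDiff * (NUM_BUCKETS - 1) / 2.0) + NUM_BUCKETS / 2;
            return Math.max(0, Math.min(NUM_BUCKETS - 1, b));
          }

          public double prob(int i, int j, int m, int l) {
            return d[bucket(i, j, m, l)];
          }

          // In the M-step of EM, each d[b] would be re-estimated as its expected
          // count divided by the total expected count, just like the translation
          // parameters.
        }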



What should getAlignmentProb() return?

28 April 2009

Say f is the source sentence and e is the target sentence (as in all our examples). Then getAlignmentProb() should return p(a, f | e).
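
For reference, under Model 1 this quantity is just the product of t(f | e) over the aligned word pairs, times a uniform alignment term of 1/(l+1)^m (where l is the English length and m is the French length). Here's a rough sketch with a simplified signature; translationProb, NULL_POSITION, and NULL_WORD are hypothetical names, and the constant ε is ignored:

        // Sketch of p(a, f | e) under Model 1. The alignment array a gives, for each
        // French position j, the index of the English word it aligns to (or
        // NULL_POSITION for the NULL word). translationProb is a hypothetical
        // t(f | e) lookup; for Model 2 you'd replace the uniform term with your
        // distortion parameter d() for the appropriate bucket.
        double getAlignmentProb(List<String> frenchWords, List<String> englishWords, int[] a) {
          int m = frenchWords.size();
          int l = englishWords.size();
          double logProb = -m * Math.log(l + 1);   // uniform choice among l words + NULL
          for (int j = 0; j < m; j++) {
            String e = (a[j] == NULL_POSITION) ? NULL_WORD : englishWords.get(a[j]);
            logProb += Math.log(translationProb(frenchWords.get(j), e));
          }
          return Math.exp(logProb);
        }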



For PA2 Part I, why is the test data part of the training data?

28 April 2009

If you look at the starter code for PA2, you'll notice there's a suspicious line:

trainingSentencePairs.addAll(testSentencePairs);

The reason we include the test data is just to avoid the unseen-word problem and to simplify the assignment. And since we're doing unsupervised learning, the EM process doesn't make use of the annotated alignments in the test data, so we can say it's not cheating.



checkModel(): How to Implement

03 April 2009

Your checkModel() function should sum the probabilities over the entire vocabulary, plus the mass you have allocated to unknown words, and return that sum, which should be 1. The assignment says that for higher-order n-grams (anything over unigrams), you should just choose a few words to condition on (the w1 in P(w2 | w1)) and check that the distribution conditioned on each of those words sums to 1.

One easy way to do this is to choose 20 random words for w1, sum P(w2 | w1) over the whole vocabulary for each of them, and return the total of those 20 sums divided by 20. It should be 1 if you've properly created conditional probabilities.
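
For example, a sketch for the unigram and bigram cases might look like this (getVocabulary, getWordProbability, getBigramProbability, and getUnknownWordProbability are hypothetical names; use whatever your model actually provides):

        // Unigram case: sum over every word seen in training, plus the mass
        // reserved for unknown words. The result should be (very close to) 1.0.
        double checkModel() {
          double sum = 0.0;
          for (String word : getVocabulary()) {
            sum += getWordProbability(word);
          }
          sum += getUnknownWordProbability();
          return sum;
        }

        // Bigram case: pick 20 random context words w1, check that each conditional
        // distribution P(. | w1) sums to 1, and return the average.
        double checkBigramModel(Random rng) {
          List<String> vocab = new ArrayList<String>(getVocabulary());
          double total = 0.0;
          int numContexts = 20;
          for (int k = 0; k < numContexts; k++) {
            String w1 = vocab.get(rng.nextInt(vocab.size()));
            double sum = 0.0;
            for (String w2 : vocab) {
              sum += getBigramProbability(w1, w2);   // P(w2 | w1)
            }
            sum += getUnknownWordProbability(w1);    // mass for unknown w2 given w1
            total += sum;                            // each sum should be close to 1.0
          }
          return total / numContexts;
        }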

Increasing perplexity with more training data?

1 April 2009

Some students who have been trying to investigate learning curves have reported seeing test-set perplexity increase as the amount of training data grows. This is counter-intuitive: shouldn't more training data yield a better model, which is therefore able to attain lower perplexity? Chris came up with a possible explanation involving the handling of the <UNK> token. Remember that the <UNK> token actually represents an equivalence class of tokens. As more training data is added, this equivalence class shrinks. Because the meaning of the <UNK> token is changing, model perplexities are not directly comparable. Especially when the amount of training data is small, adding more data will rapidly lower the model probability of <UNK>, causing the entropy and perplexity of the model distribution to grow.

If you've been looking at learning curves, an interesting investigation — not specifically required for the assignment — would be to measure the learning curve while holding the definition of <UNK> constant. This would mean allowing the models trained on small data sets to "know about" all the words in the largest training set. All known words in this sense would get explicit counts, which could be 0, and then you'd still have an <UNK> token representing all words which did not appear in even the largest training set.
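
A rough sketch of the idea (the Counter class and the variable names largestTrainingSet and trainingSubset are just stand-ins for whatever you're actually using):

        // Build the vocabulary once, from the largest training set, so that the
        // definition of <UNK> stays fixed across training-set sizes.
        Set<String> fixedVocab = new HashSet<String>();
        for (List<String> sentence : largestTrainingSet) {
          fixedVocab.addAll(sentence);
        }

        // When counting on any smaller training subset (and again at test time),
        // map out-of-vocabulary words to <UNK>. Words in fixedVocab that never
        // occur in the subset keep an explicit count of 0 and get their probability
        // from smoothing alone.
        for (List<String> sentence : trainingSubset) {
          for (String word : sentence) {
            String w = fixedVocab.contains(word) ? word : "<UNK>";
            counts.incrementCount(w, 1.0);
          }
        }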


Do we smooth at train time or test time?

1 April 2009

Generally speaking, smoothing should be the last step of training a model: first you collect counts from your training data, and then you compute a smoothed model distribution which can be applied to (that is, used to make predictions about) any test data.

In principle, it would be possible to postpone the computation of a smoothed probability until test time. But (a) it's not very efficient, because most smoothing algorithms require iterating through all the training data, which you shouldn't have to do more than once, and (b) if you want to do this because your smoothing computation depends upon something in the test data, then you're doing things wrong. (For example, model probabilities should not depend on how many unknown words appear in the test data.)


How do I use stop tokens in my n-gram model?

1 April 2009

Real sentences are not infinite; they begin and end. To capture this in your n-gram model, you'll want to use so-called "stop" tokens, which are just arbitrary markers indicating the beginning and end of the sentence.

It's typically done as follows. Let <s> and </s> (or whatever) be arbitrary tokens indicating the start and end of a sentence, respectively. During training, wrap these tokens around each sentence before counting n-grams. So, if you're building a bigram model, and the sentence is

I like fish tacos

you'll change this to

<s> I like fish tacos </s>

and you'll collect counts for 5 bigrams, starting with (<s>, I) and ending with (tacos, </s>). If you encountered the same sentence during testing, you'd predict its probability as follows:

P(<s> I like fish tacos </s>) = P(<s>) · P(I | <s>) · ... · P(tacos | fish) · P(</s> | tacos)

where P(<s>) = 1. (After all, the sentence must begin.)
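
As a sketch, the training-time counting might look like this (CounterMap is just a stand-in for whatever structure you use to hold bigram counts):

        static final String START = "<s>";
        static final String STOP = "</s>";

        // Wrap each training sentence in <s> ... </s> before counting bigrams.
        void countSentence(List<String> sentence, CounterMap<String, String> bigramCounts) {
          List<String> padded = new ArrayList<String>();
          padded.add(START);
          padded.addAll(sentence);
          padded.add(STOP);
          for (int i = 1; i < padded.size(); i++) {
            // the first bigram counted is (<s>, w1); the last is (wn, </s>)
            bigramCounts.incrementCount(padded.get(i - 1), padded.get(i), 1.0);
          }
        }

At test time, score the same padded sequence: multiply P(wi | wi-1) over all positions, taking P(<s>) itself to be 1.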


Does PA1 require a strict proof?

1 April 2009

Q. Is it necessary to give a strict mathematical proof that the smoothing we've done yields a proper probability distribution? Or is it enough to just give a brief explanation?

A. You should give a concise, rigorous proof. No hand-waving. I'll show an example on Friday. Note that it's important that your proof applies to your actual implementation, not some ideal abstraction.


Smoothing implementation details

1 April 2009

Do you have questions regarding details of various smoothing methods? (For example, maybe you're wondering how to compute those alphas for Katz back-off smoothing.)

You might benefit from looking at a smoothing tutorial Bill put together last year.

For greater detail, an excellent source is the Chen & Goodman paper, An empirical study of smoothing techniques for language modeling.


Smoothing and conditional probabilities

1 April 2009

Some people have the wrong idea about how to combine smoothing with conditional probability distributions. You know that a conditional distribution can be computed as the ratio of a joint distribution and a marginal distribution:

P(x | y) = P(x, y) / P(y)

What if you want to use smoothing? The wrong way to compute the smoothed conditional probability distribution P(x | y) would be:

  1. From the joint P(x, y), compute a smoothed joint P'(x, y).
  2. Separately, from the marginal P(y), compute a smoothed marginal P''(y).
  3. Divide them: let P'''(x | y) = P'(x, y) / P''(y).

The problem is that steps 1 and 2 do smoothing separately, so it makes no sense to divide the results. (In fact, doing this might even yield "probabilities" greater than 1.) The right way to compute the smoothed conditional probability distribution P(x | y) is:

  1. From the joint P(x, y), compute a smoothed joint P'(x, y).
  2. From the smoothed joint P'(x, y), compute a smoothed marginal P'(y).
  3. Divide them: let P'(x | y) = P'(x, y) / P'(y).

Here, there is only one smoothing operation. We compute a smoothed joint distribution, and compute everything else from that.

If there's interest, I can show a worked-out example of this in Friday's section.
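
In the meantime, here's a small sketch of the right way, using add-one smoothing of the joint counts purely as an example (CounterMap and the argument names are stand-ins; any smoothing method applied to the joint works the same way):

        // Compute a smoothed conditional P'(x | y) from a smoothed joint.
        // jointCounts.getCount(x, y) holds the raw count c(x, y); X and Y are the
        // sets of possible values; totalCount is the sum of all joint counts.
        double smoothedConditional(String x, String y,
                                   CounterMap<String, String> jointCounts,
                                   Set<String> X, Set<String> Y, double totalCount) {
          double denom = totalCount + X.size() * Y.size();
          // smoothed joint P'(x, y) under add-one smoothing
          double pJoint = (jointCounts.getCount(x, y) + 1.0) / denom;
          // smoothed marginal P'(y), obtained by summing the *smoothed* joint over x
          double pMarginal = 0.0;
          for (String xPrime : X) {
            pMarginal += (jointCounts.getCount(xPrime, y) + 1.0) / denom;
          }
          return pJoint / pMarginal;   // P'(x | y) = P'(x, y) / P'(y)
        }

Because the marginal in the denominator is computed from the already-smoothed joint, the resulting conditional is guaranteed to sum to 1 over x.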

(It would also be correct to compute all the conditional distributions before doing any smoothing, and then to smooth each conditional distribution separately. This is a valid alternative to smoothing the joint distribution, and because it's simple to implement, this is often the approach used in practice. However, the results might not be as good, because less information is used in computing each smoothing function. This would be an interesting question to investigate in your PA1 submission.)


Smoothing and unknown words

1 April 2009

A few people have inquired about smoothing and unknown words (or more generally, n-grams). The basic idea of smoothing is to take some probability mass from the words seen during training and reallocate it to words not seen during training. Assume we have decided how much probability mass to reallocate, according to some smoothing scheme. The question is, how do we decide how to allocate this probability mass among unknown words, when we don't even know how many unknown words there are? (No fair peeking at the test data!)

There are multiple approaches, but no perfect solution. (This is an opportunity for you to experiment and innovate.) A straightforward and widely-used approach is to assume a special token <UNK> which represents (an equivalence class of) all unknown words. All of the reallocated probability mass is assigned to this special token, and any unknown word encountered during testing is treated as an instance of this token.

Another approach is to make the (completely unwarranted) assumption that there is some total vocabulary of fixed size B from which all data (training and test) has been drawn. Assuming a fixed value for B allows you to fix a value for N0, the number of unknown words, and the reallocated probability mass can then be divided equally (or according to some other scheme) among the N0 unknown words. The question then arises: how do you choose B (or equivalently, N0)? There is no principled way to do it, but you might think of B as a hyperparameter to be tuned using the validation data (see M&S p. 207).
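
To make that concrete with made-up numbers: if your training vocabulary contains 40,000 word types and you assume B = 100,000, then N0 = 60,000. If your smoothing scheme reallocates a total probability mass of 0.01 to unseen words, dividing it equally gives each unknown word a probability of 0.01 / 60,000 ≈ 1.7 × 10^-7.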

Both of these approaches have the shortcoming that they treat all unknown words alike, that is, they will assign the same probability to any unknown word. You might think that it's possible to do better than this. Here are two unknown words you might encounter, say, on the internet: "flavodoxin" and "B000EQHXQY". Intuitively, which should be considered more probable? What kind of knowledge are you applying? Do you think a machine could make the same judgment?

A paper by Efron & Thisted, Estimating the number of unseen species: How many words did Shakespeare know?, addresses related issues.


Do I have to do my final project in Java?

1 April 2009

No. You can use Perl, C, C++, or any other widely used programming language. Extra credit if you design a Turing machine to compute your final project. Double extra credit if you build a diesel-powered mechanical computer to compute your final project. Triple extra credit if you build a human-level AI capable of autonomously conceiving, executing, and presenting your final project.


Where do I hand in my report late?

19 January 2011

There is a hand-in box in the basement of Gates, near the bottom of the A-wing stairwell. You can find directions to it here. To get into the basement after the building is locked, slide your SUID card in the card reader by the main basement entrance. For code submitted late, please write the date and time of submission on your report and sign it before placing it in the box.


 
