The purpose of this assignment is to gain familiarity with a variety of ADTs such as Vectors, Stacks, Queues, Maps, Sets, Lexicons and Grids. This is an individual assignment. You should write your own solution and not work in a pair on this program.

This assignment consists of three parts: Word Ladder, where you find ways to morph one word into another; Random Writer, where your program randomly generates text based on a provided input text; and Maze Generator, where your program randomly generates solvable mazes of different sizes. Each part can be programmed separately, but they should be submitted together. Your output should match the sample output as closely as possible (e.g. randomization in Random Writer is acceptable, as are different word ladders of the same length as the sample output, assuming they are valid). The starter code for this project is available as a ZIP archive (this contains 3 separate Qt projects, one for each part of this assignment).

Due Date: October 12 at 11:00am.
Submit: You can submit multiple times. We will grade your latest submission. Submit via Paperless here.

Important Files and Links

To run the demo, unzip the Demo folder and right click on the contained JAR files to run. Your program's output should match the demo's as closely as possible.

Turn in only the following files:

  1. wordladder.cpp, the C++ code for the Word Ladder program (including a main function)
  2. randomwriter.cpp, the C++ code for the Random Writer program (including a main function)
  3. mazegenerator.cpp, the C++ code for the Maze Generator program (excluding a main function)
  4. debugging.txt, a file detailing a bug you encountered in this assignment and how you approached debugging it

Development Strategy and Hints

This program can be tricky if you don't develop and debug it step-by-step. Don't try to write everything all at once. Make sure to test each part of the algorithm before you move on.

  1. Think about exactly what types of collections to use for each part. Are duplicates allowed? Does order matter? Do you need random access? Where will you add/remove elements? Etc. Note that some parts of each program require you to make compound collections, that is, a collection of collections.
  2. Refer to the Stanford Library Documentation! This page documents all methods on Stanford Library collections.
  3. Test each function with a very small input first. For example, use input file tiny.txt with a small number of words; this will let you print your entire map and examine its contents for debugging purposes.
  4. Recall that you can print the contents of any collection to cout and examine its contents for debugging.
  5. Remember that when you assign one collection to another using the = operator, it makes a full copy of the entire contents of the collection. This could be useful if you want to copy a collection.
  6. You can loop over the elements of a vector or set using a for-each loop. A for-each also works on a map, iterating over the keys in the map. You can look up each associated value based on the key in the loop.
  7. Don't forget to test your program on unusual inputs, like large and small values of N, large and small # of words to generate, large and small input files, and so on. It's hard to verify random output, but you can look in smallish input files to verify that a given word really does follow a given prefix from your map.


As in other assignments, you should follow our Style Guide for information about expected coding style. You are also expected to follow all of the general style constraints emphasized in the Homework 1 specs, such as the ones about good problem decomposition, parameters, using proper C++ idioms, and commenting. The following are additional points of emphasis and style constraints specific to this problem:

Algorithms: You should follow the general algorithms as described in this document and should not substitute a very different algorithm. You should not write a recursive algorithm for any of these problems.

Collections: You are expected to make intelligent decisions about what kind of collections from the Stanford C++ library to use at each step of your algorithm, as well as using those collections properly. Limit yourself to collections that have been taught in class; for example, do not use the Graph or BasicGraph collections. As much as possible, pass collections by reference, because passing them by value makes an expensive copy of the collection. Do not use pointers, arrays, or STL containers on this program.

Redundancy: You should avoid expensive operations that would cause you to reconstruct bulky collections multiple times unnecessarily. For example, you should not make a deep copy of all of the walls or cells in the maze unless absolutely necessary.

Word Ladder

A word ladder is a bridge from one word to another, formed by changing one letter at a time, with the constraint that at each step the sequence of letters still forms a valid word. For example, here is a word ladder connecting the word "code" to the word "data", changing one letter at each step:

code → cade → cate → date → data

There are many other word ladders that connect these two words, but this one is the shortest path between them. That is, there might be others of the same length, but none with fewer steps than this one.

In the first part of this assignment, write a program that repeatedly prompts the user for two words and finds a minimum-length ladder linking the words. You must use the Stack and Queue collections from Chapter 5 and follow the particular provided algorithm to find the shortest word ladder.

Example Run:

Here is an example log of interaction between your program and the user (with console input in red):

Welcome to CS 106B/X Word Ladder!
Please give me two English words, and I will convert the
first into the second by modifying one letter at a time.

Dictionary file name: dictionary.txt

Word 1 (or Enter to quit): code 
Word 2 (or Enter to quit): data 
A ladder from data back to code:
data date cate cade code

Word 1 (or Enter to quit): 
Have a nice day.

Notice that the word ladder prints out in reverse order, from the second word back to the first. If there are multiple valid word ladders of the same length between a given starting and ending word, your program can choose any of them. In this example, it would be fine for your program to choose any ladder of length 5, as that is the minimum length in this case.

Your code should ignore case; in other words, the user should be able to type uppercase, lowercase, mixed case, etc. words and the ladders should still be found and displayed in lowercase. You should also check for several kinds of user input errors, and not assume that the user will type valid input. Specifically, you should check that both words typed by the user are valid words found in the dictionary, that they are the same length, and that they are not the same word. If you receive invalid input, your program should print an error message and re-prompt the user. See the example output for examples.

Before the program can actually calculate a word ladder, it needs to first read in a dictionary of all English words. At the start, your program should prompt the user to enter a dictionary file name to use as the source of the English words. If the user types a filename that does not exist, re-prompt them; see the example output for examples. We provide a file dictionary.txt for you to use, with one word per line (there are also smaller dictionary files included to help with testing). Read the dictionary file just once at the start of your program into a Lexicon. A Lexicon is another Stanford Library collection that can very efficiently store dictionaries of words (the runtime of many common single-word operations is O(W), where W is the length of the word). Please see the Lexicon class reference for details. Once you read in the dictionary of words into the Lexicon, you can use the contains() method to check if a word is in the dictionary. You should not loop over the dictionary as part of solving this problem.


Finding a word ladder is a specific instance of what is known as a shortest-path problem: a problem in which we try to find the shortest possible route from a given start to a given end point. Shortest-path problems come up in routing Internet packets, comparing gene mutations, Google Maps, and many other domains. The strategy we will use for finding the shortest path between our start and end words is called breadth-first search ("BFS"). This is a search process that gradually expands the set of paths among which we search “outwards:” BFS first considers all possible paths that are one step away from the starting point, then all possible paths two steps away, and so on, until a path is found connecting the start and end point. A step can be understood as one unit of measurement—depending on the problem, this could be a millisecond, a minute, a mile, a subway stop, and so on. By exploring all possible paths of a given length before incrementing to the next length, BFS guarantees that the first solution you find will be the shortest possible.

For word ladders, start by examining ladders that contain only words that are one “step” away from the starting word—i.e., words in which only one letter has been changed. If you find your ending word among these one-step-away ladders, congratulations—you’re done! If not, look for your target word in all ladders containing words that are two steps away, i.e., ladders in which two letters have been changed. Then check three letters, four, etc., until your target word is located. We implement the breadth-first algorithm using a Queue that stores partial ladders, each of which represents a possibility that we will examine in turn to see if it is a complete path between our start and end words. Each partial ladder is represented as a Stack, which means that your overall collection will be a Queue of Stacks.

Here is a partial pseudocode description of the algorithm to solve the word-ladder problem. Note: some students complain at this point that we have given you too much information and that they want to figure the problem out on their own — if so, great! Don’t look at the pseudocode below if you want to try it on your own!

Finding a word ladder between words w1 and w2:

Create an empty queue of stacks.
Create/add a stack containing {w1} to the queue.
While the queue is not empty:
    Dequeue the partial-ladder stack from the front of the queue.
    For each valid English "neighbor" word (differs by 1 letter)
    of the word on top of the stack:
        If that neighbor word has not already been used
        in a ladder before:
            If the neighbor word is w2:
                Hooray! We have found a solution (and it is the stack
                you are working on in the queue).
            Otherwise:
                Create a copy of the current partial-ladder stack.
                Put the neighbor word on top of the copy stack.
                Add the copy stack to the end of the queue.

Some of the pseudo-code corresponds almost one-to-one with actual C++ code. Other parts are more abstract, such as the instruction to examine each "neighbor" of a given word. A neighbor of a given word w is a word of the same length as w that differs by exactly 1 letter from w. For example, “date” and “data” are neighbors; “dog” and “bog” are neighbors.

Your solution is not allowed to look for neighbors by looping over the dictionary every time; this is much too inefficient. To find all neighbors of a given word, use two nested loops: one that goes through each character index in the word, and one that loops through the letters of the alphabet from a-z, replacing the character in that index position with each of the 26 letters in turn. For example, when examining neighbors of “date”, you'd try:

  • aate, bate, cate, ..., zate: all possible neighbors where only the 1st letter is changed.
  • date, dbte, dcte, ..., dzte: all possible neighbors where only the 2nd letter is changed.
  • ...
  • data, datb, datc, ..., datz: all possible neighbors where only the 4th letter is changed.

Note that many of the possible letter combinations along the way (aate, dbte, datz, etc.) are not valid English words. Your algorithm has access to an English dictionary, and each time you generate a word using this looping process, you should look it up in the dictionary to make sure that it is actually a legal English word. Only valid English words should be included in your group of neighbors.

Another way of visualizing the search for neighboring words is to think of each letter index in the word as being a "spinner" that you can spin up and down to try all values A-Z for that letter. The diagram below tries to depict this:

Another subtle issue is that you should not reuse words that have been included in a previous, shorter ladder. For example, suppose that you have the partial ladder cat → cot → cog in your queue. Later on, if your code is processing the ladder cat → cot → con, one neighbor of con is cog, so you might want to examine cat → cot → con → cog. But doing so is unnecessary. If there is a word ladder that begins with these four words, then there must be a shorter one that, in effect, cuts out the middleman by eliminating the unnecessary word con. As soon as you've enqueued a ladder ending with a specific word, you've found a minimum-length path from the starting word to the end word in that ladder, so you never have to enqueue that end word again.

To implement this strategy, keep track of the set of words that have already been used in any ladder. Ignore those words if they come up again. Keeping track of which words you've used also eliminates the possibility of getting trapped in an infinite loop by building a circular ladder, such as cat → cot → cog → bog → bag → bat → cat.

A final tip: it may be helpful to test your program on smaller dictionary files first to find bugs or issues related to your dictionary or to word searching.

Random Writer

In the second part of this assignment, you will write a program that takes in a text file and then (randomly) generates new text in the same style. In effect, your program will automate an exercise that writers have used for centuries to improve their style: imitating another writer. Benjamin Franklin, for example, wrote in his autobiography that he taught himself to write elegantly by trying to re-write passages from The Spectator (a popular 18th-century magazine) in the same style. In the 20th century, the writer Raymond Queneau featured the practice of trying to imitate the voice of other writers prominently among his famous "Exercises in Style". A more recent computational style imitator is the postmodern essay generator, created in 1996; hit refresh a few times to see what it can do.

Your program will accomplish the task of imitating a text’s style by building and relying on a large data structure of word groups called "N-grams." A collection of N-grams will serve as the basis for your program to randomly generate new text that sounds like it came from the same author as your input file.

Below is an example log of interaction between your program and the user (console input in red).

Welcome to CS 106B/X Random Writer ('N-Grams')!
This program generates random text based on a document.
Give me an input file and an 'N' value for groups
of words, and I'll create random text for you.

Input file name? hamlet.txt 
Value of N? 3

# of random words to generate (0 to quit)? 40
... chapel. Ham. Do not believe his tenders, as you 
go to this fellow. Whose grave's this, sirrah? 
Clown. Mine, sir. [Sings] O, a pit of clay for to 
the King that's dead. Mar. Thou art a scholar; speak 
to it. ...

# of random words to generate (0 to quit)? 20
... a foul disease, To keep itself from noyance; but 
much more handsome than fine. One speech in't I 
chiefly lov'd. ...

# of random words to generate (0 to quit)? 0 

Key Idea: N-Grams

But what, you may ask, is an N-gram? The "Infinite Monkey Theorem" states that an infinite number of monkeys typing random keys forever would eventually produce the works of William Shakespeare. (See here.) That's silly, but could a monkey randomly produce a new work that sounded like Shakespeare's works, with similar vocabulary, sentence structure, punctuation, etc.? What if we chose words at random, instead of individual letters? Suppose that rather than each word having an equal probability of being chosen, we weighted the probability based on how often that word appeared in Shakespeare's works?

Picking random words would likely produce gibberish, but let's look at chains of two words in a row. For example, perhaps Shakespeare uses the word "to" 10 times total, and 7 of those occurrences are followed by "be", 1 time by "go", and 2 times by "eat". We can use those ratios when choosing the next word. If the last word we chose is "to", randomly choose "be" next 7/10 of the time, "go" 1/10 of the time, and "eat" 2/10 of the time. We never choose any other word to follow "to". We call a chain of two words like this, such as "to be", a 2-gram. Here’s an example sentence produced by linking together a sequence of 2-grams:

Go, get you have seen, and now he makes as itself? (2-gram)

A sentence of 2-grams tends to not make any sense (or even be grammatically correct), so let’s consider chains of 3 words (3-grams). If we chose the words "to be", what word should follow? If we had a collection of all sequences of 3 words-in-a-row, together with their probabilities, we could make a weighted random choice. If Shakespeare uses "to be" 22 times and follows them with "or" 5 times, "in" 3 times, "with" 10 times, and "alone" 4 times, we could use these weights to randomly choose the next word. So now the algorithm would pick the third word based on the first two, and the fourth based on the second+third, and so on. Here’s a sample sentence produced by linking 3-grams together:

One woe doth tread upon another's heel, so fast they follow. (3-gram)

We can generalize this idea from 2- or 3-grams to N-grams for any integer N. If you make a collection of all groups of N-1 words along with their possible following words, you can use this to select an Nth word given the preceding N-1 words. The higher your choice of N, the more similar the new random text will be to the original data source. Here is a random sentence generated from 5-grams of Hamlet:

I cannot live to hear the news from England, But I do prophesy th'election lights on Fortinbras. (5-gram)

As you can see, the text is now starting to sound a lot like the original. Each particular piece of text randomly generated in this way is also called a Markov chain. Markov chains are very useful in computer science and elsewhere, such as artificial intelligence, machine learning, economics, and statistics.

See the sample output files for more examples.

Step 1: Building a Map of N-Grams

Your program will read the input file one word at a time and build a specific kind of compound collection, namely, a map from each (N-1)-gram in your input text to all of the words that occur directly after it in the input. For example, if you are building sentences out of 3-grams, that is, N-grams for N=3, then your code should examine each sequence of 2 words and keep track of which third word directly follows it each time it occurs. So if you are using 3-grams and the pair of words "to be" is followed by "or" twice and "just" once, your collection should map the key {to, be} to the value {or, just, or}. We might think of these (N-1)-grams as “prefixes” and the words that follow them as various possible “suffixes.” Using this terminology, your map should be built so that it connects a collection of each prefix sequence (of N-1 words) to another collection of all possible suffixes (all Nth words that follow the prefix in the original text).

The following figure illustrates the map you should build from the input file. As this figure shows, when reading the input file, you should keep track of a window of N-1 words at all times. As you read each word from the file, you should insert the previous window as a key in your collection of prefixes, with the newly-read word, the current suffix, as one of its corresponding values. (Remember that your collection of suffixes must be able to contain multiple occurrences of the same word—such as "or" occurring twice after the prefix {to, be}—otherwise, your collection will not have the word-frequency information needed to weight your probabilities.) Then, discard the first word from the window, append the new word to the window, and repeat. The map is thus built over time as you read each new word:

File location (| marks the reading position), with the resulting map and window:
to be | or not to be just ...
map    = {}
window = {to, be}
to be or | not to be just ...
map    = { {to, be} : {or} }
window = {be, or}
to be or not | to be just ...
map    = { {to, be} : {or},
   {be, or} : {not} }
window = {or, not}
to be or not to | be just ...
map    = { {to, be} : {or},
   {be, or} : {not},
   {or, not} : {to} }
window = {not, to}
to be or not to be | just ...
map    = { {to, be} : {or},
   {be, or} : {not},
   {or, not} : {to},
   {not, to} : {be} }
window = {to, be}
to be or not to be just | ...
map    = { {to, be} : {or, just},
   {be, or} : {not},
   {or, not} : {to},
   {not, to} : {be} }
window = {be, just}
to be or not to be just
be who you want to be
or not okay you want okay|
map    = { {to, be} : {or, just, or},
   {be, or} : {not, not}, 
   {or, not} : {to, okay},
   {not, to} : {be}, 
   {be, just} : {be}, 
   {just, be} : {who}, 
   {be, who} : {you}, 
   {who, you} : {want}, 
   {you, want} : {to, okay}, 
   {want, to} : {be}, 
   {not, okay} : {you}, 
   {okay, you} : {want}, 
   {want, okay} : {to},
   {okay, to} : {be} }

Note that the order of sequences matters: for example, the prefix {you, are} is different from the prefix {are, you}. Also notice that the map wraps around. For example, if you are computing 3-grams like in the above example, perform 2 more iterations to connect the last 2 prefixes at the end of the file to the first 2 words at the start of the file. In our example above, this leads to {want, okay} connecting to "to" and {okay, to} connecting to "be". If we were doing 5-grams, we would perform 4 more iterations and connect the last 4 prefixes to the first 4 words in the file, and so on. This will be very useful for your algorithm later on in the program.

Note that you should not change case or strip punctuation of words as you read them. The casing and punctuation turns out to help the sentences start and end in a more authentic way. Just store the words in the map as you read them.

To read in a file word by word (instead of line by line, as getline does), you can use the following syntax:

ifstream stream;
... // initialize stream somehow
string word;
while (stream >> word) {
    // each time through this loop, word is the next word in the file
}
You can also read in a single word using the following syntax:

ifstream stream;
... // initialize stream somehow
string word;
stream >> word;

Step 2: Generating Random Text

To generate random text from your map of N-grams, first choose a random starting point for the document. To do this, randomly choose a key from your map; you can use the map's keys() member function, which returns a Vector containing all of the keys in the map. (For randomness in general, #include "random.h" and call the global function randomInteger(min, max), where min and max are inclusive.) Remember that each key is a prefix, a collection of N-1 words. Those N-1 words will form the start of your random text, and will be the first sequence in your sliding "window" as you create your text.

For all subsequent words, use your map to look up all possible suffixes that can follow the current N-1 words, and randomly choose one according to their appropriately-weighted probabilities. If you have built your map the way we described above, as a map from {prefix} → {suffixes}, this simply amounts to choosing one of the possible suffix words at random. Once you have chosen your random suffix word, slide your current window of N-1 words forward by discarding the first word in the window and appending the new suffix, and repeat the process.

Since your random text is unlikely to start and end at the proper beginning or end of a sentence, just preface and conclude your random text with "..." to indicate this. Here is another partial log of execution:

Input file? tiny.txt
Value of N? 3
# of random words to generate (0 to quit)? 16
... who you want okay to be who you want to be or not to be or ...

Your code should check for several kinds of user input errors, and not assume that the user will type valid input. Specifically, re-prompt the user if they type the name of a file that does not exist. Also re-prompt the user if they type a value for N that is not an integer, or is an integer less than 2 (a 2-gram has prefixes of length 1; but a 1-gram is essentially just choosing random words from the file and is uninteresting for our purposes). When prompting the user for the number of words to randomly generate, re-prompt them if the number of random words to generate is not at least N. You may assume that the value the user types for N is not greater than the number of words found in the file. See the sample output files for examples of proper program output for various cases.

Maze Generator

In this problem you will write code to display the construction of a two-dimensional maze. Here's an example of one your program might generate:

The goal is to create an n by n maze so that there is exactly one path from the upper left corner to the lower right one. Our algorithm for generating such mazes is fairly straightforward to describe and can be implemented using the services of the Stanford collections library. (By the way, it's not really our algorithm. It's a simplified version of Kruskal's algorithm for minimum spanning trees, which we'll discuss more formally later on when we talk about graphs.)

The basic idea has you start with a maze where all possible walls are present, as in the figure shown below: the grid is divided into 10 x 10 = 100 squares that we'll call locations or cells. (Our example here happens to involve a 10 x 10 grid, but it generalizes to any dimension.)

For this program we will talk about "chambers," meaning groups of one or more locations that can reach each other because there is a path between them that is not blocked by walls. For example, if you were to knock down the walls separating three location cells L1, L2, and L3, we would now say that those three cells are part of the same chamber because they can all reach each other. With each iteration, your algorithm considers a randomly selected wall that has not been considered before, and removes that wall if and only if it separates two cells that currently cannot reach each other; that is, two cells that are part of two different chambers. The first randomly selected wall can always be removed, because each location cell initially forms its own one-cell chamber.

The series of figures below illustrates the algorithm in action. The leftmost figure below shows a randomly chosen first wall being removed. After several more iterations, you'll see that more randomly selected walls have been removed, as shown in the middle figure. All walls are considered exactly once in some random order until all location cells are part of one large connected chamber together. Initially there are 100 one-cell chambers in the 10 x 10 maze, so eventually 99 walls are erased to yield the completed maze shown in the rightmost figure below.

a single wall removed

some walls removed

99 walls removed (done)

Your algorithm should not remove any more walls than necessary to complete the maze. This means that a wall must NOT be removed if the location cells on each side of that wall are already reachable from each other; that is, if they are already part of the same connected chamber. For example, the figure below is the same as the middle figure above, except with some walls circled. The circled walls in that figure should NOT be removed by the algorithm if it randomly checks them, because the location cells on the two sides of those walls are already part of the same chamber and can already reach each other.

some walls that should not be removed

Since the mazes you generate are random, it's almost impossible for you to match our output and mazes exactly. You can verify your program manually by inspecting the maze to ensure that all location cells are connected into a single chamber and the right number of walls have been removed.

Implementation Details

void generateMaze() {
    // TODO: implement this function
}
For this program you will write a function generateMaze that performs the work to generate suitable mazes by removing walls. (Of course you should decompose the problem into smaller functions as you write it.) Here is a high-level pseudocode description of the algorithm you should use to generate your maze:

how to generate a maze:
    generate a collection of chambers, where each chamber initially includes one unique cell.
    repeat until all chambers are connected:
        randomly choose a wall W from the maze,
                where W separates two location cells that we'll call L1 and L2.
        if L1 and L2 cannot reach each other (if they are in different chambers):
            remove wall W from the maze.
            merge the chambers of L1 and L2.

A large part of the challenge of this problem lies in figuring out what collections are best for representing the various data. Notice that your code needs to be able to perform operations like the following efficiently:

  • Randomly pick out a wall from the maze;
  • Figure out which chamber a given location cell is in;
  • Figure out whether two chambers are the same;
  • Merge two chambers.

Part of your grade comes from choosing a good collection(s) for storing the data needed. You will probably need to use a nested collection (a.k.a. compound collection) to solve the problem. Hint: Though it's a two-dimensional maze on the screen, a Grid is a poor choice for solving this problem.

Efficiency: We discussed efficiency in terms of Big-Oh notation in lecture. While your algorithms are not restricted to any particular Big-Oh complexity class, you should verify that your algorithm removes the right number of walls and runs efficiently. The main way you should ensure your efficiency is in your choice of appropriate collections to store the data. Our solution can finish generating a 20x20 maze in under 1 second, a 30x30 maze in 3-4 seconds, and a 40x40 maze in around 7-8 seconds (with the animation turned off), though this varies heavily from machine to machine. If your solution requires many times longer to finish (e.g. one minute or more for large mazes), it is likely implemented inefficiently and should be optimized further.

Your code will interact with some files provided by the instructor. You should include the header file "mazetypes.h" in your program to allow your code to access our provided types. The mazetypes.h file defines two object types named Cell and Wall that you should use in your algorithm. We have not talked much about using objects in the course so far, but these two are simple ones. A Cell object has only two public data fields inside it: two integers named row and col. A Wall object has only two public data fields inside it: two Cells named one and two, representing the two location cells that are separated by this wall. Below are examples of using Cell and Wall objects:

// using Cell objects
Cell mycell;
mycell.row = 4;
mycell.col = 17;

// declare and initialize
Cell mycell2 {9, 22};

// store Cells in a collection
Vector<Cell> v;
v.add(mycell);

// print a Cell: {r09 c22}
Set<Cell> set;
for (const Cell& cell : set) {
    cout << cell << endl;
}

// using Wall objects
Cell wcell1 {4, 17};
Cell wcell2 {22, 9};
Wall wall {wcell1, wcell2};
cout << wall.one << endl;   // {r04 c17}
cout << wall.two << endl;   // {r22 c09}

// store Walls in a collection
Vector<Wall> walls;
walls.add(wall);

Unlike most of the programs you have written so far in this course, in this problem you will not write the main function, nor will you perform any direct user interaction with cout or user input functions. Instead you should include the header file "mazegui.h" in your file to allow your code to utilize an instructor-provided graphical user interface (GUI). The mazegui.h file has the program's main function and handles the overall user interaction. The graphical UI has only a few functions that you will need to use in your code. They are listed in the table below.

Command Description
MazeGUI::getAllCells() Returns all location cells in the maze as a Vector of Cell objects.
MazeGUI::getAllWalls() Returns all walls between cells in the maze as a Vector of Wall objects. The vector has already been shuffled into a random order.
MazeGUI::removeWall(wall) Removes the given wall (represented as a Wall object) from the maze.

Here is a screenshot of the GUI in action. You can use the GUI to create a maze of a given size (containing walls between all neighboring locations), then press the Generate button to run your code, which removes walls as appropriate. To speed up the program, turn off the "Animation" checkbox before generating.

The instructor-provided GUI

Debugging: Though the program has a graphical UI, you can still use cout to print statements to the console to help you debug your program. Such messages will appear in the bottom console pane of Qt Creator. Remember to remove any such debugging output from your code before you turn it in. Also see the debugging handout for more tips on how to debug your programs.

Possible Extra Features

Consider adding extra features to these programs if you'd like. Note that since we use automated testing for part of our grading process, it is important that you submit a program that conforms to the preceding assignment spec, even if you do extra features. If you do extensions, you should submit two versions of your program: one without any extra features added, and a second one with extra features. Please distinguish them by explaining which is which in the comment header. In the comment header, please also list all extra features that you worked on and where in the code they can be found (what functions, lines, etc.) so that the grader can find your code easily.

Here are some example ideas for extra features that you could add to your program.

  1. Allow word ladders between words of different length: The typical solution forbids word ladders between words that are not the same length. But as an extra feature, you could make it legal to add or remove a single letter from your string at each hop along the way. This would make it possible to, for example, generate a word ladder from "car" to "cheat": car, cat, chat, cheat.
  2. Allow word ladder end-points to be outside the dictionary: Generally we want our word ladders to consist of words that are valid English words found in the dictionary. But it can be fun to allow only the start and end words to be non-dictionary words. For example, "Marty" is not an English word, but if you did this extra feature, you could produce a word ladder from "Marty" to "curls" as: marty, party, parts, carts, cards, curds, curls.
  3. Make the N-grams random text form complete sentences: The assignment indicates that you should start and end your input with "..." since it will likely not begin with the start of a sentence nor end with the end of a sentence from the original input. As an extra feature, extend your program so that when you create your map of (N-1)-gram prefixes, you also keep track of which prefixes are at the start of a sentence (i.e., prefixes whose first word begins with an uppercase letter) and which words are at the end of a sentence (i.e., words that end with a period, question mark, or exclamation mark). Use this extra data to begin your randomly generated text with a random sentence start, rather than any random prefix. And instead of generating exactly the number of words requested by the user, keep going until you reach the end of a sentence. That is, if the user requests 100 words, after generating those 100 words, if you aren't at the end of a sentence, keep going until you end it.
  4. Other: If you have your own creative idea for an extra feature, go for it! Feel free to ask your SL and/or the instructor about it.