# Assignment 6: ThreadPool
The Internet Archive maintains one of the largest archives of websites, books, movies, and more. Their Wayback Machine allows you to enter a website's address and see how it looked at various points in the past. (See how Stanford's website looked in 1996.)

In this assignment, you'll build the ThreadPool class, which will be used by a simplified version of the Internet Archive. A thread pool is a set of threads that are created when a ThreadPool is instantiated in a program, and these threads are then used by the program to perform work in parallel. The benefit of using a thread pool over creating threads on-the-fly (as you did for the RSS reader from assignment 5) is to save thread creation time while your program runs. In other words, the threads are created once and can be re-used without the need to .join or recreate them. For the RSS reader, a thread pool could have been used: you would only need (for example) a pool of eight threads, regardless of the number of feeds. Instead of creating a thread for every feed, those eight threads would be shared, with the thread pool managing the waiting necessary when the pool ran out of available threads.

Because of the time constraints for this assignment, we have written the "crawl the web pages from the network" part of the assignment for you, and you are only responsible for writing the ThreadPool class itself. We have compiled the Internet Archive part of the assignment into a static library, and the resulting binary expects a working version of the ThreadPool class. If you have time, you are welcome to implement the web-crawling function as well, but it is not necessary, nor will it be graded (see archive.cc for details of how to use your version instead of ours).

Here is how the program works: given a website to archive, the program crawls through the network of pages it links to and resources it uses, archiving them to disk. Each page is scheduled to download via a thread from your thread pool.

Due date: Friday, November 15th, 2019 at 11:59pm

## What the finished product will do

The program builds an archive executable. Given a seed website, this will begin downloading that website, any websites it links to, any websites those link to, and so on. Because downloading this network of interconnected websites could end up downloading a fair portion of the internet (depending on your seed website), you are restricting your downloads to a whitelist, so we only download from particular websites of interest.

To download Stanford's website, we can run the following. (The seed website is https://www.stanford.edu, and we are restricting to sites on the domain www.stanford.edu. You could use a wildcard and whitelist *.stanford.edu, but this ends up indexing a huge number of pages, and it takes a very long time to download. You can whitelist multiple different domains by using multiple -w flags.)

```
./archive -w www.stanford.edu -d https://www.stanford.edu
```

This produces the following output:

```
[24-07-2018 08:07:00] Beginning download of https://www.stanford.edu
[24-07-2018 08:07:01] End download of https://www.stanford.edu (1.104319 seconds)
[24-07-2018 08:07:01] Skipping download of https://fonts.googleapis.com (not whitelisted, or blocked by robots.txt)
[24-07-2018 08:07:01] Skipping download of https://s.w.org (not whitelisted, or blocked by robots.txt)
[24-07-2018 08:07:01] Beginning download of https://www.stanford.edu/wp-json/
[24-07-2018 08:07:01] Beginning download of https://www.stanford.edu/wp-includes/wlwmanifest.xml
[24-07-2018 08:07:01] Beginning download of https://www.stanford.edu/xmlrpc.php?rsd
[24-07-2018 08:07:01] Beginning download of https://www.stanford.edu/wp-content/plugins/awesome-weather-pro/awesome-weather.css?ver=4.9.7
[24-07-2018 08:07:01] Skipping download of https://fonts.googleapis.com/css?family=Open+Sans%3A400%2C300&ver=4.9.7 (not whitelisted, or blocked by robots.txt)
[24-07-2018 08:07:13] Skipping download of https://www.stanford.edu/wp-content/plugins/awesome-weather-pro/js/js-cookie.js?ver=1.1 (already downloaded)
<many lines omitted...>
[24-07-2018 08:07:13] End download of https://www.stanford.edu/list/admin/#admin-finance (0.323541 seconds)
[24-07-2018 08:07:13] End download of https://www.stanford.edu/list/admin/#admin-research (0.318452 seconds)
[24-07-2018 08:07:13] End download of https://www.stanford.edu/list/admin/#admin-staff (0.314381 seconds)
[24-07-2018 08:07:13] End download of https://www.stanford.edu/list/admin/#admin-students (0.317646 seconds)
myth66.stanford.edu listening on port 9979...
```

After crawling a ton of www.stanford.edu pages, this launches a server; the last line tells you where to connect. (It will be different for you.) If I connect to http://myth66.stanford.edu:9979 while leaving archive running, I see the following:
I can enter https://www.stanford.edu, click “Go!”, and see Stanford's homepage, in all its glory:
I can click around on the links, and everything works, as long as I only click links pointing to www.stanford.edu sites (the domain I whitelisted). Even if Stanford's website goes offline, this archive can still continue serving it as if nothing had ever happened.

## Program usage
You may want to play around with ./samples/archive_soln before you begin, just to get a sense of how to test the program.
- The -d flag specifies the seed page that you'd like to begin crawling from. You should specify a full URL, including http/https. For example: ./archive -d https://web.stanford.edu/class/cs110/summer-2018/
- You can only specify one -d flag, but if you want to index several pages, you can run archive several times. (The files it downloads are persisted in the indexed-documents/ directory.)
- Add -w flags to whitelist domains for our crawl. You can specify multiple -w flags, and you can use wildcards if you'd like (though that may include many more pages than you want). For example: ./archive -w "*.stanford.edu" -w fonts.googleapis.com -d https://www.stanford.edu
- If you want to run archive without the server, you can add the -n or --no-serve flag to disable it. (Your program will exit after downloading the pages.)
- archive starts the server with a port number generated from your sunet ID. It's unlikely anyone else will conflict with your port. However, if you want to run the server on a different port, add the -p or --port flag: ./archive -p 12345
- Important note: archive saves downloaded files in the indexed-documents/ directory. Running archive several times will add to this database without clearing it. This allows you to crawl several different websites and have them all be accessible from your archive web server, but it might not be what you want in testing. If you want to start fresh, run make filefree to clear the indexed-documents directory. You can also run archive with the -m flag to make it memory-only (it won't read from disk or persist downloads on disk).

## Instructions for off-campus students
As shown above, this assignment features a web server that shows the pages your program has archived. We're running this server on ports 2000-65535, but unfortunately, for security reasons, the myth machines only allow access to these ports from on-campus computers. You have several options:

- You can run archive with the web server disabled. When you run archive, add the -n flag; archive won't run the web server, but will spit out a list of downloaded content that you can check for correctness. This is the easiest option of this list, but you miss out on the cool factor of being able to use your archive from your browser.
- Use an SSH proxy. SSH has a feature that allows us to send traffic to an SSH server, and it will forward that traffic to a web server. If we SSH into a Stanford computer, we can then use that computer to forward web requests to your archive server. Run archive; let's say my server is listening on myth55 port 9979. Then run ssh -L 9979:myth55.stanford.edu:9979 rice.stanford.edu and leave this running on the side. (Make sure this is running in a separate terminal window from the one you're using to connect to myth, and make sure it continues running while you try to use archive with your browser.) You should replace myth55 with whatever server archive is running on. Then, in your browser, go to http://localhost:9979. This method might be a little annoying if you have a bad network or frequently sleep your computer (logging into rice requires two-factor authentication), but it is probably the easiest way to be able to use your browser with archive. By the way, you may be wondering: why not SSH into myth and have myth forward the traffic to archive – why use rice? That would definitely be more familiar and straightforward, but unfortunately, myth has SSH tunneling disabled.
- Connect to the campus network using a VPN; instructions are here. If you feel like taking the time to install the VPN client and get everything set up, then this will probably be the easiest option in the long run. (I haven't taken the effort to set it up, though, so I can't really tell you what it's like.)
- Use the terminal-based lynx browser. Open up another terminal window, SSH into myth, and then run something like this: lynx http://myth66.stanford.edu:9979/https://www.stanford.edu
- If you don't care much for the interactivity of lynx, you can ask your server for a single page by using curl: curl http://myth66.stanford.edu:9979/https://www.stanford.edu. This simply downloads the page from myth and prints it out on your terminal window. If you get anything back, your archive is probably working correctly. (Errors in the page output probably come from the crawling framework that I've written for you.)

## A note on robots.txt
If you look at the sample output of archive that I posted above, you'll notice that some downloads were skipped. A download from fonts.googleapis.com was skipped – this makes sense, because that wasn't in our whitelist. A download from www.stanford.edu was also skipped – this makes less sense.

If you try downloading a website, and you notice that everything is being skipped even though you've whitelisted it, it's possible this is happening because of robots.txt. This file is used by website administrators to tell crawlers (like us) not to download particular parts of the website. You can see it by going to /robots.txt on any website. For example, I tried using archive to download Linux man pages, but I ran into issues. When I checked https://linux.die.net/robots.txt, I saw that everything was disallowed for our crawler.

## Getting started
Clone the repository that we've set up for you by typing:

```
git clone /usr/class/cs110/repos/assign6/$USER assign6
```

Compile often, test incrementally (almost as often as you compile), run ./tools/sanitycheck a bunch to test your work, and run ./tools/submit when you're done. As you've seen in past assignments, you can play with a sample application by invoking ./samples/archive_soln.

Because you are building the ThreadPool class and need to debug it, you should make judicious use of the tpcustomtest.cc tests. You can add to these tests yourself to test the functionality of your ThreadPool class. There are four tests in the program already; to add another test, you update the buildMap function with a flag describing the test and pointing to a function that runs the test. To run, make the assignment and type (for example):

```
$ ./tpcustomtest --single-thread-no-wait
This is a test.
```
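If you'd like a starting point for an extra test, here is a sketch (not part of the starter code) of a stress test you might add to tpcustomtest.cc. It relies only on the ThreadPool interface described later in this handout; the exact way you register it in buildMap should mirror the four tests already in the file, so the registration note at the bottom is only illustrative.

```cpp
#include <atomic>
#include <cassert>
#include <iostream>
#include "thread-pool.h"
using namespace std;

// Hypothetical extra test: schedule many tiny thunks from the main thread
// and confirm that wait() doesn't return until every one of them has run.
static void manyTinyThunksTest() {
  const size_t kNumThunks = 1000;
  atomic<size_t> counter(0);
  ThreadPool pool(8);
  for (size_t i = 0; i < kNumThunks; i++) {
    pool.schedule([&counter] { counter++; });
  }
  pool.wait();
  assert(counter == kNumThunks);
  cout << "All " << kNumThunks << " thunks executed." << endl;
}

// Registration is only sketched here: add an entry to buildMap that maps a
// flag such as "--many-tiny-thunks" to manyTinyThunksTest, following the
// pattern of the four existing tests.
```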
## Files

You can ignore the first five files in this list – they are used to build the downloading feature.
- archive.h/cc: This would be the main function for the program, but it is not used when doing a normal make; instead, a library is used in its place. A skeleton is written for you if you'd like to complete it (compile with make myarchive), and you would need to implement Archive::download to crawl the internet starting from the seed URL. Again, you do not need to work on this file.
- document.h/cc: This is another file you can ignore, but feel free to look at it. It contains the Document class, used to represent web pages that you encounter. The program creates a Document for every URL it considers downloading. It calls Document::download to download a web page and parse it for any links to other web pages, and calls Document::getLinks to get those outgoing links.
- index.h/cc: Again, ignore this file unless you are working on archive.cc or just want to see the details. It contains the Index class, used to store all the Documents that get downloaded, and it is used to implement the archive web server that the program uses to access downloaded web pages.
- whitelist.h/cc: This class implements access control to the URLs you might try to access while crawling the web. Every whitelisted URL (specified by -w command-line arguments) should be added to the whitelist by calling addHost. Before you download a URL, you should call Whitelist::canAccess to check whether you're allowed to access the URL. (This checks both the whitelist from the -w flags, and it does some much more complicated checking of robots.txt files to see if we're allowed to access this specific URL – see "A note on robots.txt" above.)
- log.h/cc: This contains a Log class that can be used to produce the logging messages you see in the sample output. If you were writing the archive functionality, you could use this class for your own logging – it's easy to use and may save you time, so you should read the interface in log.h. It is not necessary to use or understand this file for the assignment in general.

### Start paying attention here

- thread-pool.h/cc: This is where you'll implement the ThreadPool class.
- tptest.cc: This is a trivial program that verifies basic ThreadPool functionality. We call this from sanitycheck to see if your ThreadPool is working. Don't change this file.
- tpcustomtest.cc: This contains a collection of functions you can use to test your ThreadPool yourself. This is for you to play with and add as many additional tests as you can think of to make sure your ThreadPool is working brilliantly, as described above.

## Assignment roadmap
You have one thing to do for this assignment: create a rock-solid ThreadPool class. Think back to farm from Assignment 3. You created several worker processes, then distributed work to each worker as it finished its previous work. There were only eight child processes ever created, but each child factored several numbers when there was a lot of work to be done.

An easier approach would have been as follows: every time you had a number to factor, you could fork to launch a worker to factor that number. You could have implemented it such that a maximum of 8 workers were running at a time (on an 8-core machine), and some might argue that this would be just as good as your implementation, since this avoids contention for hardware resources just like your implementation did. However, this approach is definitely worse, because even if it has the same number of processes running at a time, it creates many more processes over the entire execution of the program. Creating processes is relatively expensive, so if we can reuse processes to do work, we should do so.

Because the archive program downloads many files (thousands, for big websites), the code has the potential to use many, many threads. We don't want to create thousands of individual threads, especially because we are going to limit the number of threads hammering the website we are trying to download. So, instead of creating those threads, we are going to have a fixed number of threads and then just re-use them with the thread pool.

The archive program limits itself to sixteen threads by setting up a sixteen-thread thread pool:

```cpp
ThreadPool tp(16);
```

It does this at the beginning of the program. Then, work is added to the pool (using the tp.schedule() function) and one of the threads wakes up to do the work. Concretely, we can add functions to the thread pool's queue, and one of the threads will wake up, execute the function we added, and then go back to sleep. (Note: specifically, thunks are added to the queue. Thunks are just functions that take no parameters and return no values.)
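To make the thunk idea concrete, here is a small illustration (not from the starter code) of how a function that does take parameters can still be handed to a thread pool: wrap the call in a lambda that captures the arguments, producing a function<void(void)>. The downloadPage function below is just a stand-in for real work.

```cpp
#include <functional>
#include <iostream>
#include <string>
using namespace std;

// downloadPage is a hypothetical stand-in for work that needs an argument.
static void downloadPage(const string& url) {
  cout << "pretend we downloaded " << url << endl;
}

int main() {
  // The lambda captures url by value, so the resulting thunk takes no
  // parameters and returns nothing, which is exactly what schedule() expects.
  string url = "https://www.stanford.edu";
  function<void(void)> thunk = [url] { downloadPage(url); };
  thunk(); // a ThreadPool would invoke this on one of its worker threads
  return 0;
}
```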
## How ThreadPool is used

The thread pool concept is practically inescapable in software engineering; you'll see it in database libraries, web servers, some graphics rendering engines, and more. The code we're implementing is quite general-purpose, and the concepts involved will serve you well.

The ThreadPool class has the following interface:

```cpp
class ThreadPool {
 public:
  ThreadPool(size_t numThreads);
  void schedule(const std::function<void(void)>& thunk);
  void wait();
  ~ThreadPool();
};
```

A simple program can use this pool to execute 10 function calls across 4 threads:

```cpp
static const size_t kNumThreads = 4;
static const size_t kNumFunctions = 10;

int main(int argc, char *argv[]) {
  ThreadPool pool(kNumThreads);
  for (size_t id = 0; id < kNumFunctions; id++) {
    pool.schedule([id] {
      cout << oslock << "Thread (ID: " << id << ") has started." << endl << osunlock;
      size_t sleepTime = (id % 3) * 10;
      sleep_for(sleepTime);
      cout << oslock << "Thread (ID: " << id << ") has finished." << endl << osunlock;
    });
  }
  pool.wait();
  cout << "All done!" << endl;
  return 0;
}
```

## Implementation
Your constructor should do the following:

- Launch a single dispatcher thread, which pulls work off the queue, wakes up a particular worker, and hands the function to be executed to that worker. Assume dt is a private ThreadPool member of type thread: dt = thread([this]() { dispatcher(); });
- Launch a specific number of worker threads (assume wts is a private ThreadPool member of type vector<thread>): wts[workerID] = thread([this, workerID]() { worker(workerID); });

Your schedule function should accept a thunk (a function that takes no parameters and returns no value – expressed as type function<void(void)>) and append it to the end of a queue of such functions. Each time a function is added, the dispatcher thread should be notified. Once the dispatcher is notified, schedule should return right away so that more functions can be scheduled. schedule should be thread-safe (i.e. if your program has more threads that are running outside of the ThreadPool, it should be possible to call schedule from multiple different threads and not have any chance of encountering race conditions).

The dispatcher thread should loop almost interminably. In each iteration, it should sleep until schedule tells it that something has been added to the queue. It should then wait for a worker to become available, select the worker, mark it as unavailable, dequeue a function, put a copy of that function in a place where the worker can access it, and then signal the worker to execute it.

The worker threads should also loop repeatedly, blocking within each iteration until the dispatcher thread signals them to execute an assigned function. Once signaled, a worker should invoke the function, wait for it to finish executing, then mark itself as available so that it can be discovered and selected again by the dispatcher.

The wait function should wait until the ThreadPool is completely idle. It should be possible to call wait multiple times, and wait should be thread-safe.

The ThreadPool destructor should wait until all functions have been executed (it's fine to call wait), then dispose of any ThreadPool resources. Our solution doesn't use any dynamic memory allocation, but if you use it, then be sure to free those resources.

You can test your ThreadPool using tptest.cc and tpcustomtest.cc (which compile to tptest and tpcustomtest). If you use dynamic memory allocation, make sure that you do not leak any memory. (You shouldn't need to.)
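The handout doesn't mandate particular synchronization primitives, but the schedule/dispatcher handshake described above is a classic producer/consumer pattern. Here is a minimal sketch of that pattern in isolation, using std::mutex and std::condition_variable; your ThreadPool may use different primitives, and the ThunkQueue name and its methods are purely illustrative, not part of the starter code.

```cpp
#include <condition_variable>
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
using namespace std;

class ThunkQueue {
 public:
  // Producer side (what schedule might do): append a thunk, notify the consumer.
  void enqueue(const function<void(void)>& thunk) {
    lock_guard<mutex> lg(m);
    thunks.push(thunk);
    cv.notify_one();
  }

  // Consumer side (what the dispatcher might do): block until a thunk
  // is available, then remove and return it.
  function<void(void)> dequeueBlocking() {
    unique_lock<mutex> ul(m);
    cv.wait(ul, [this] { return !thunks.empty(); });
    function<void(void)> thunk = thunks.front();
    thunks.pop();
    return thunk;
  }

 private:
  queue<function<void(void)>> thunks;
  mutex m;
  condition_variable cv;
};

int main() {
  ThunkQueue q;
  // A dispatcher-like consumer pulls and runs three thunks, one at a time.
  thread consumer([&q] {
    for (int i = 0; i < 3; i++) q.dequeueBlocking()();
  });
  for (int i = 0; i < 3; i++) {
    q.enqueue([i] { cout << "thunk " << i << " executed" << endl; });
  }
  consumer.join();
  return 0;
}
```

Notice that the condition variable's predicate re-checks the queue after waking, which is what protects against spurious wakeups; a similar discipline is useful for the dispatcher-to-worker signaling and for wait.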
## Can I implement ThreadPool without a dispatcher thread?

Yes, it's quite possible to implement a scheme where workers are notified of incoming work, and then they pull work off the queue without the dispatcher specifically handing the work to them. However, we want you to implement ThreadPool with a dispatcher thread. It's better practice with thread communication/synchronization, and the dispatcher thread is essential to implementing more capable ThreadPools (such as a ThreadPool with lazy initialization, where the worker threads aren't created unless they're actually needed).

The ThreadPool part of this assignment was written by Jerry Cain. The Internet Archive part of the assignment was written by Ryan Eberhardt.