Requirements

1. Java 1.4.x or later. Set NUTCH_JAVA_HOME to the root of your JVM installation.
2. Apache Tomcat 4.x, for the search web application.
3. On Win32, cygwin, for shell support.
4. Up to a gigabyte of free disk space, a high-speed internet connection, and an hour or so.
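For example, on Linux you might point NUTCH_JAVA_HOME at your JVM before running any of the commands below. The path here is illustrative; substitute your own installation directory:

  # Illustrative path -- adjust to wherever your JVM is installed.
  export NUTCH_JAVA_HOME=/usr/java/j2sdk1.4.2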
Getting Started

First, you need to get a copy of the Nutch code. You can download a release from http://www.nutch.org/release/. Unpack the release and connect to its top-level directory. Or, check out the latest source code from Subversion and build it with Ant (a sketch appears at the end of this section).

Try the following command:

  bin/nutch

This will display the documentation for the Nutch command script.

Now we're ready to crawl. There are two approaches to crawling:

1. Intranet crawling, with the crawl command.
2. Whole-web crawling, with much greater control, using the lower-level inject, generate, fetch, and updatedb commands.
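As mentioned above, if you go the source route the checkout and build might look like the following. The repository URL below is an assumption; check the Nutch web site for the current location:

  # Hypothetical repository URL -- confirm on the Nutch site.
  svn checkout http://svn.apache.org/repos/asf/nutch/trunk nutch
  cd nutch
  ant           # build the Nutch classes and command script
  ant war       # build the web application used later for searching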
Intranet Crawling

Intranet crawling is more appropriate when you intend to crawl up to around one million pages on a handful of web servers.

Intranet: Configuration

To configure things for intranet crawling you must:

1. Create a directory with a flat file of root URLs. For example, to crawl the nutch.org site you might start with a file named urls/nutch containing just the URL of the Nutch home page. All other Nutch pages should be reachable from this page.
2. Edit the file conf/crawl-urlfilter.txt and replace MY.DOMAIN.NAME with the name of the domain you wish to crawl. For example, to limit the crawl to the nutch.org domain, the relevant line should read:

  +^http://([a-z0-9]*\.)*nutch.org/

This will include any URL in the domain nutch.org. A sketch of both steps appears below.
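A minimal sketch of the two configuration steps, assuming you want to crawl the nutch.org domain (substitute your own seed URL and domain):

  # Step 1: create the seed directory and a flat file of root URLs.
  mkdir urls
  echo 'http://www.nutch.org/' > urls/nutch

  # Step 2: restrict the crawl to the chosen domain by replacing the
  # MY.DOMAIN.NAME placeholder. GNU sed shown; if your sed lacks -i,
  # edit conf/crawl-urlfilter.txt by hand instead.
  sed -i 's/MY.DOMAIN.NAME/nutch.org/' conf/crawl-urlfilter.txt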
Intranet: Running the Crawl

Once things are configured, running the crawl is easy. Just use the crawl command. Its options include:

  -dir dir          names the directory to put the crawl in
  -depth depth      indicates the link depth from the root page that should be crawled
  -delay delay      determines the number of seconds between accesses to each host
  -threads threads  determines the number of threads that will fetch in parallel

For example, a typical call might be:
  bin/nutch crawl urls -dir crawl.test -depth 3 >& crawl.log

Typically one starts by testing the configuration with a shallow crawl like this, watching the output to check that the desired pages are found. Once you are confident of the configuration, a depth of around 10 is appropriate for a full crawl. When crawling has completed, skip to the Searching section below.

Whole-web Crawling

Whole-web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines.

Whole-web: Concepts

Nutch data is of two types:

1. The web database. This contains every page and link that Nutch knows about, whether or not it has been fetched, and, if fetched, when.
2. A set of segments. Each segment is a set of pages that are fetched and indexed as a unit. Segment data consists of a fetchlist (the set of pages to be fetched), the fetcher output (the fetched pages themselves), and the index (a Lucene-format index of the fetcher output).

In the following examples we'll keep our web database in a directory named db and our segments in a directory named segments:
  mkdir db
  mkdir segments

Whole-web: Bootstrapping the Web Database

The admin tool is used to create a new, empty database:

  bin/nutch admin db -create

The injector adds URLs to the database. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+MB file, so this will take a few minutes.)

  wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
  gunzip content.rdf.u8.gz

Next we inject a random subset of these pages into the web database. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We inject one out of every 3000, so that we end up with around 1000 URLs:

  bin/nutch inject db -dmozfile content.rdf.u8 -subset 3000

This also takes a few minutes, as it must parse the full file. Now we have a web database with around 1000 as-yet unfetched URLs in it.

Whole-web: Fetching

To fetch, we first generate a fetchlist from the database:

  bin/nutch generate db segments

This generates a fetchlist for all of the pages due to be fetched. The fetchlist is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:

  s1=`ls -d segments/2* | tail -1`
  echo $s1

Now we run the fetcher on this segment:

  bin/nutch fetch $s1

When this is complete, we update the database with the results of the fetch:

  bin/nutch updatedb db $s1

Now the database has entries for all of the pages referenced by the initial set. Next we run five iterations of link analysis on the database in order to prioritize which pages to fetch next:

  bin/nutch analyze db 5

Now we fetch a new segment with the top-scoring 1000 pages:

  bin/nutch generate db segments -topN 1000
  s2=`ls -d segments/2* | tail -1`
  echo $s2
  bin/nutch fetch $s2
  bin/nutch updatedb db $s2
  bin/nutch analyze db 2

Let's fetch one more round:

  bin/nutch generate db segments -topN 1000
  s3=`ls -d segments/2* | tail -1`
  echo $s3
  bin/nutch fetch $s3
  bin/nutch updatedb db $s3
  bin/nutch analyze db 2

By this point we've fetched a few thousand pages. Further rounds follow the same generate/fetch/updatedb/analyze pattern; a scripted version of one round appears at the end of this tutorial. Let's index what we have!

Whole-web: Indexing

To index each segment we use the index command, as follows:

  bin/nutch index $s1
  bin/nutch index $s2
  bin/nutch index $s3

Then, before we can search a set of segments, we need to delete duplicate pages. This is done with:

  bin/nutch dedup segments dedup.tmp

Now we're ready to search!

Searching

To search you need to put the Nutch war file into your servlet container. (If, instead of downloading a Nutch release, you checked the sources out of Subversion, then you'll first need to build the war file with the command ant war.) Assuming you've unpacked Tomcat as ~/local/tomcat, the Nutch war file may be installed with the commands:

  rm -rf ~/local/tomcat/webapps/ROOT*
  cp nutch*.war ~/local/tomcat/webapps/ROOT.war

The webapp finds its indexes in ./segments, relative to where you start Tomcat. So, if you've done intranet crawling, connect to your crawl directory; if you've done whole-web crawling, don't change directories. Then give the command:

  ~/local/tomcat/bin/catalina.sh start

Then visit http://localhost:8080/ and have fun!
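Once Tomcat is up, you can sanity-check the deployment from the shell. This is a sketch only: it assumes the webapp serves a search.jsp page taking a query parameter, which may differ across Nutch versions, so check the search form on the front page for the actual names:

  # Fetch the front page, then run an illustrative query.
  curl http://localhost:8080/
  curl 'http://localhost:8080/search.jsp?query=nutch'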
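Finally, here is the scripted whole-web fetch round promised above. It is a minimal sketch using only the commands already introduced in this tutorial; run it from the directory containing db and segments:

  #!/bin/sh
  # One whole-web fetch round: generate a top-N fetchlist, fetch it,
  # fold the results back into the database, re-run link analysis,
  # and index the new segment.
  bin/nutch generate db segments -topN 1000
  s=`ls -d segments/2* | tail -1`
  echo "fetching segment $s"
  bin/nutch fetch $s
  bin/nutch updatedb db $s
  bin/nutch analyze db 2
  bin/nutch index $s

Remember to re-run bin/nutch dedup segments dedup.tmp after indexing new segments, as described in the Indexing section.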