Improving the performance of focused web crawlers

Authors:

Highlights:

Abstract

This work addresses issues in the design and implementation of focused crawlers. Several variants of state-of-the-art crawlers are proposed, each relying on web page content and link information to estimate the relevance of web pages to a given topic. Particular emphasis is given to crawlers capable of learning not only the content of relevant pages (as classic crawlers do) but also the paths leading to relevant pages. A novel learning crawler inspired by a previously proposed Hidden Markov Model (HMM) crawler is also described. All crawlers share the same baseline implementation (only the priority assignment function differs in each crawler), providing an unbiased framework for a comparative analysis of their performance. All crawlers achieve their maximum performance when a combination of web page content and (link) anchor text is used to assign download priorities to web pages. Furthermore, the new HMM crawler improves on the performance of the original HMM crawler and also outperforms classic focused crawlers when searching for specialized topics.
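To make the shared-baseline idea concrete, the sketch below shows a priority-queue frontier in which only the priority function would differ between crawlers, combining page-content relevance and anchor-text relevance via cosine similarity. This is a minimal illustration under assumed weights and a bag-of-words relevance measure; the class and function names, weights, and example URLs are hypothetical and do not reproduce the paper's actual implementation.

```python
# Minimal sketch of a focused-crawler frontier with a pluggable priority
# function (content + anchor-text relevance). Weights and names are
# illustrative assumptions, not the paper's method.
import heapq
import itertools
import re
from collections import Counter
from math import sqrt

def tokens(text):
    """Lowercased word tokens; a stand-in for real text preprocessing."""
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    num = sum(a[t] * b[t] for t in common)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

class Frontier:
    """Priority queue of URLs; higher-scoring URLs are downloaded first."""
    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal scores
        self._seen = set()

    def push(self, url, score):
        if url not in self._seen:
            self._seen.add(url)
            heapq.heappush(self._heap, (-score, next(self._counter), url))

    def pop(self):
        neg_score, _, url = heapq.heappop(self._heap)
        return url, -neg_score

def priority(topic, parent_page_text, anchor_text, w_content=0.5, w_anchor=0.5):
    """Combine parent-page content and anchor-text relevance (assumed weights)."""
    t = Counter(tokens(topic))
    return (w_content * cosine(t, Counter(tokens(parent_page_text)))
            + w_anchor * cosine(t, Counter(tokens(anchor_text))))

if __name__ == "__main__":
    topic = "hidden markov model focused web crawler"
    frontier = Frontier()
    # Hypothetical outlinks extracted from an already-downloaded page.
    page_text = "This survey discusses focused crawlers and link analysis."
    for url, anchor in [("http://example.org/hmm-crawler", "hidden markov model crawler"),
                        ("http://example.org/recipes", "chocolate cake recipes")]:
        frontier.push(url, priority(topic, page_text, anchor))
    print(frontier.pop())  # the HMM-related link is dequeued first
```

In this reading, swapping in a different `priority` function (e.g. one learned from paths to relevant pages) changes the crawler's behaviour while the rest of the machinery stays fixed, which is the evaluation setup the abstract describes.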

Keywords: Focused crawler, Learning crawler, Hidden Markov Model (HMM) crawler, World Wide Web

Article history: Received 23 April 2008, Revised 6 April 2009, Accepted 7 April 2009, Available online 21 April 2009.

Article link: https://doi.org/10.1016/j.datak.2009.04.002