Commercial Internet filters: Perils and opportunities

Authors:

Abstract

Organizations are becoming increasingly aware of Internet abuse in the workplace. Such abuse results in loss of workers' productivity, network congestion, security risks, and legal liabilities. To address this problem, organizations have started to adopt Internet usage policies, management training, and filtering software. Several commercial Internet filters are experiencing an increasing number of organizational adoptions. These products mainly rely on black lists, white lists, and keyword/profile matching to filter out undesired web pages. In this paper, we describe three top-ranked commercial Internet filters – CYBERSitter, Net Nanny, and CyberPatrol – and evaluate their performance in the context of an Internet abuse problem. We then propose a text mining approach to address the problem and evaluate its performance using six different classification algorithms: naïve Bayes, multinomial naïve Bayes, support vector machine, decision tree, k-nearest neighbor, and neural network. The evaluation results point to the perils of using commercial Internet filters on the one hand, and to the prospects of using text mining on the other. The proposed text mining approach outperforms the commercial filters. We discuss the possible reasons for the relatively poor performance of the filters and the steps that could be taken to improve their performance.
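To illustrate the contrast the abstract draws, the sketch below shows a toy keyword-blacklist filter (the mechanism the commercial products rely on) next to a minimal multinomial naïve Bayes text classifier, one of the six algorithms evaluated in the paper. All keywords, training snippets, and class names here are invented for illustration; the paper's actual data sets, feature engineering, and classifier implementations are not reproduced.

```python
import math
from collections import Counter

# Hypothetical blacklist; commercial filters maintain far larger,
# proprietary lists that must be updated continually.
BLACKLIST = {"casino", "jackpot"}

def keyword_filter(page_text):
    """Block a page if any blacklisted keyword appears in it."""
    words = set(page_text.lower().split())
    return bool(words & BLACKLIST)

class MultinomialNB:
    """Minimal multinomial naive Bayes text classifier with Laplace smoothing."""

    def fit(self, docs, labels):
        self.classes = set(labels)
        # Log prior of each class from its frequency in the training labels.
        self.priors = {c: math.log(labels.count(c) / len(labels))
                       for c in self.classes}
        # Per-class word counts.
        self.counts = {c: Counter() for c in self.classes}
        for doc, c in zip(docs, labels):
            self.counts[c].update(doc.lower().split())
        self.vocab = {w for cnt in self.counts.values() for w in cnt}
        self.totals = {c: sum(self.counts[c].values()) for c in self.classes}

    def predict(self, doc):
        def log_posterior(c):
            s = self.priors[c]
            for w in doc.lower().split():
                # Add-one (Laplace) smoothing handles unseen words.
                s += math.log((self.counts[c][w] + 1) /
                              (self.totals[c] + len(self.vocab)))
            return s
        return max(self.classes, key=log_posterior)

# Toy training data (invented): pages labeled "block" or "allow".
train_docs = ["win big jackpot casino bonus",
              "free slots casino win",
              "quarterly sales report meeting",
              "project schedule budget review"]
train_labels = ["block", "block", "allow", "allow"]

clf = MultinomialNB()
clf.fit(train_docs, train_labels)
```

Unlike the static blacklist, the classifier generalizes from labeled examples, which is the core advantage the paper attributes to the text mining approach; the trade-off is that it needs a representative labeled training corpus.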

Keywords: Internet abuse, Internet filters, Text mining, Text classification

Article history: Received 11 August 2008, Revised 1 October 2009, Accepted 8 November 2009, Available online 14 November 2009.

DOI: https://doi.org/10.1016/j.dss.2009.11.002