File name: SubjectSpider_ByKelvenJU
- Category:
- JSP source code / Java
- Resource attributes:
- [Java] [Source code]
- Upload date:
- 2012-11-26
- File size:
- 1.82 MB
- Download count:
- 0
- Uploader:
- Related links:
- None
- Download notes:
- Do not download with Xunlei (Thunder); if the download fails, download again. Re-downloads cost no points!
Description: all downloadable content comes from the internet; please study and use it at your own discretion.
1. Locks the crawl to a specific topic;
2. Produces a plain-text log file in the format: timestamp, URL;
3. Allows at most 2 simultaneous connections when fetching any one URL (note: the number of local page-parsing threads is unlimited);
4. Follows polite-spider rules: it must check the robots.txt file and meta tags for restrictions, and a thread must sleep for 2 seconds after it finishes fetching a page (see the first sketch after this list);
5. Parses HTML pages to extract link URLs, can tell whether an extracted URL has already been processed, and never re-parses a page that has already been crawled (see the second sketch after this list);
6. Basic spider/crawler parameters are configurable, including crawl depth and seed URLs;
7. Identifies itself to servers with a User-agent header;
8. Produces crawl statistics, including crawl speed, total time to complete the crawl, and total number of pages fetched; important variables and all classes and methods are commented;
9. Follows coding conventions, e.g. naming conventions for classes, methods, and files;
10. Optional: a GUI or web interface for managing the spider/crawler, including start/stop and adding/removing URLs.
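As a rough illustration of items 4 and 7, here is a minimal Java sketch of a polite fetch step: it reads a site's robots.txt, collects the Disallow prefixes that apply to all user agents, and identifies itself with a User-agent header. This is not code from the archive; the class name PoliteFetcher and the agent string "SubjectSpider/1.0" are invented for the sketch, and a real robots.txt parser handles many more cases.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class PoliteFetcher {
    // Assumed agent name; the archive's actual User-agent string is unknown.
    static final String USER_AGENT = "SubjectSpider/1.0";

    // Tiny robots.txt reader: collects Disallow prefixes under "User-agent: *".
    static List<String> disallowedPaths(String host) {
        List<String> rules = new ArrayList<>();
        try {
            URL robots = new URL("http://" + host + "/robots.txt");
            HttpURLConnection conn = (HttpURLConnection) robots.openConnection();
            conn.setRequestProperty("User-Agent", USER_AGENT); // item 7
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()))) {
                boolean applies = false;
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.trim();
                    if (line.toLowerCase().startsWith("user-agent:")) {
                        applies = line.substring(11).trim().equals("*");
                    } else if (applies && line.toLowerCase().startsWith("disallow:")) {
                        String path = line.substring(9).trim();
                        if (!path.isEmpty()) rules.add(path);
                    }
                }
            }
        } catch (Exception e) {
            // No robots.txt, or it is unreachable: treat as unrestricted.
        }
        return rules;
    }

    // A URL is allowed if its path matches no collected Disallow prefix.
    static boolean allowed(URL url, List<String> rules) {
        for (String prefix : rules)
            if (url.getPath().startsWith(prefix)) return false;
        return true;
    }
}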
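A second sketch for items 2, 5, and 6: a breadth-first crawl loop that logs timestamp/URL pairs, skips already-crawled URLs via a HashSet, honors a configurable depth limit, and sleeps 2 seconds per page (item 4). The regex-based link extraction is a stand-in for real parsing, which the archive's own HTMLParse.java presumably performs; all names here are hypothetical, and fetching is elided.

import java.util.ArrayDeque;
import java.util.HashSet;
import java.util.Queue;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CrawlLoop {
    // Crude href extractor; a stand-in for a real HTML parser.
    static final Pattern HREF =
            Pattern.compile("href=[\"']?([^\"' >]+)", Pattern.CASE_INSENSITIVE);

    record Task(String url, int depth) {}  // Java 16+; use a small class on older JDKs

    static void crawl(String seed, int maxDepth) throws InterruptedException {
        Set<String> visited = new HashSet<>();    // item 5: never re-parse a crawled page
        Queue<Task> frontier = new ArrayDeque<>();
        frontier.add(new Task(seed, 0));

        while (!frontier.isEmpty()) {
            Task t = frontier.poll();
            // Skip if too deep, or if add() reports the URL was already seen.
            if (t.depth() > maxDepth || !visited.add(t.url())) continue;

            String html = fetch(t.url());         // fetching elided; see sketch above
            System.out.println(System.currentTimeMillis() + "\t" + t.url()); // item 2 log line

            Matcher m = HREF.matcher(html);
            while (m.find())
                frontier.add(new Task(m.group(1), t.depth() + 1));

            Thread.sleep(2000);                   // item 4: 2-second pause per page
        }
    }

    static String fetch(String url) { return ""; } // placeholder for brevity
}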
Related searches: crawler, spider, Java, agent, web crawling, html, parsing, gui, web page, SubjectSpider_ByKelvenJU
(Automatically generated by the system; you can review the contents before downloading)
Download file list
bothlexu8.txt
CheckLinks.java
data
....\sforeign_u8.txt
....\snotname_u8.txt
....\snumbers_u8.txt
....\ssurname_u8.txt
....\tforeign_u8.txt
....\tnotname_u8.txt
....\tnumbers_u8.txt
....\tsurname_u8.txt
HTMLParse.java
ISpiderReportable.java
segmenter.java
simplexu8.txt
Spider.java
tf.java
tradlexu8.txt
祝庆荣-主题网络蜘蛛程序设计及JAVA实现.doc