43-Basic usage of Scrapy crawlers
Requires Python 3. Install with: pip install Scrapy. Note: on Windows, Scrapy also depends on pywin32, so run pip install pypiwin32 as well.
Create a project from the IDE command line (Terminal): scrapy startproject yixiu (yixiu is the project name; choose your own)
Microsoft Windows [版本 10.0.18362.592]
(c)2019Microsoft Corporation。保留所有权利。
(venv) C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫\第04天>cd ..
(venv) C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫>scrapy startproject yixiu
New Scrapy project 'yixiu', using template directory 'c:\users\xiongjiawei\pycharmprojects\spider\venv\lib\site-packages\scrapy\templates\project', created in:
C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫\yixiu
You can start your first spider with:
cd yixiu
scrapy genspider example example.com
(venv) C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫>
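For reference (not shown in the original notes), scrapy startproject generates a project skeleton roughly like the following; the exact file list may vary slightly between Scrapy versions:
yixiu/
    scrapy.cfg            # deploy configuration file
    yixiu/                # the project's Python package
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider / downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # put your spiders here
            __init__.py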
Create a spider (ali is the spider name; choose your own): scrapy genspider ali sogo.com
(venv) C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫>cd yixiu
(venv) C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫\yixiu>scrapy genspider ali sogo.com
Created spider 'ali' using template 'basic' in module:
yixiu.spiders.ali
(venv) C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫\yixiu>
Open the newly created project in the IDE: File → Open..., locate the project directory, and open it.
Run the Scrapy project (method 1): in the IDE Terminal, execute: scrapy crawl ali
Output:
Microsoft Windows [版本 10.0.18362.592]
(c)2019Microsoft Corporation。保留所有权利。
(venv) C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫\yixiu>scrapy crawl ali
2020-02-12 10:35:52 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: yixiu)
2020-02-12 10:35:52 [scrapy.utils.log] INFO: Versions: lxml 4.4.2.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37) [MSC v.1900 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019), cryptography 2.8, Platform Windows-10.0.18362
2020-02-12 10:35:52 [scrapy.crawler] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'yixiu.spiders', 'SPIDER_MODULES': ['yixiu.spiders'], 'BOT_NAME': 'yixiu', 'ROBOTSTXT_OBEY': True}
2020-02-12 10:35:52 [scrapy.extensions.telnet] INFO: Telnet Password: 27e35f760e10cc19
2020-02-12 10:35:52 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.logstats.LogStats']
2020-02-12 10:35:53 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-12 10:35:53 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-12 10:35:53 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-02-12 10:35:53 [scrapy.core.engine] INFO: Spider opened
2020-02-12 10:35:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-12 10:35:53 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-12 10:35:53 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to from
2020-02-12 10:35:53 [scrapy.core.engine] DEBUG: Crawled (200) (referer: None)
2020-02-12 10:35:53 [scrapy.downloadermiddlewares.robotstxt] DEBUG: Forbidden by robots.txt:
2020-02-12 10:35:53 [scrapy.core.engine] INFO: Closing spider (finished)
2020-02-12 10:35:53 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/scrapy.exceptions.IgnoreRequest': 1,
 'downloader/request_bytes': 438,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 1472,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/301': 1,
 'elapsed_time_seconds': 0.656264,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 2, 12, 2, 35, 53, 923514),
 'log_count/DEBUG': 3,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'robotstxt/forbidden': 1,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 2, 12, 2, 35, 53, 267250)}
2020-02-12 10:35:53 [scrapy.core.engine] INFO: Spider closed (finished)
(venv) C:\Users\xiongjiawei\PycharmProjects\Spider\13天搞定Python分布式爬虫\yixiu>
Run the Scrapy project (method 2):
Create a launcher file main.py (any name works, e.g. start.py):
from scrapy.cmdline import execute

execute('scrapy crawl ali'.split())   # style 1
# execute(['scrapy', 'crawl', 'ali']) # style 2
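As an aside (not from the original notes), the same launcher can be written with Scrapy's CrawlerProcess API instead of scrapy.cmdline.execute; a minimal sketch, assuming it is run from the project directory so settings.py can be found:
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())  # load the project's settings.py
process.crawl('ali')                              # spider name, same as in "scrapy crawl ali"
process.start()                                   # blocks until the crawl finishes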
Edit the spider file ali.py and add some print output:
# -*- coding: utf-8 -*-
import scrapy


class AliSpider(scrapy.Spider):
    name = 'ali'
    allowed_domains = ['sogo.com']
    start_urls = ['http://sogo.com/']

    def parse(self, response):
        print('=================================================================')
        print(response.text)
        print('=================================================================')
Edit settings.py:
# Obey robots.txt rules
ROBOTSTXT_OBEY = False  # Note: if this is left as True, the print(response.text) output will not appear
Right-click main.py and choose Run 'main' to start the spider.
44-Data extraction: extract(), extract_first(), re(), xpath(), css()
# -*- coding: utf-8 -*-
import scrapy


class QidianSpider(scrapy.Spider):
    name = 'qidian'
    allowed_domains = ['qidian.com']
    start_urls = ['https://www.qidian.com/rank/yuepiao']

    def parse(self, response):
        names = response.xpath('//h4/a/text()').extract()
        authors = response.xpath('//p[@class="author"]/a[1]/text()').extract()
        # print(names)
        # print(authors)
        book = []
        for name, author in zip(names, authors):
            book.append({'name': name, 'author': author})
        return book
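The section heading also lists extract_first(), css(), and re(); a minimal sketch of those helpers inside parse(), reusing the XPath expressions above (the extra selectors are illustrative, not from the original notes):
    def parse(self, response):
        # extract_first() returns the first match (or None / a default value) instead of a list
        first_name = response.xpath('//h4/a/text()').extract_first()
        first_author = response.xpath('//p[@class="author"]/a[1]/text()').extract_first('unknown')
        # css() does the same job with CSS selectors instead of XPath
        authors = response.css('p.author a::text').extract()
        # re() applies a regular expression to the selected nodes and returns the matching strings
        numbers = response.xpath('//h4/a/@href').re(r'\d+')
        return {'first_name': first_name, 'first_author': first_author,
                'authors': authors, 'numbers': numbers}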
Run a spider and save the results to a file: scrapy crawl <spider name> -o <filename.extension> -t <file type>
scrapy crawl qidian -o book.json -t json    (Note: -t json specifies the output format; it can be omitted because the .json extension of book.json already implies it)
scrapy crawl qidian -o book.xml
scrapy crawl qidian -o book.csv
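One extra note (not in the original): when the scraped fields contain Chinese text, Scrapy's JSON exporter escapes non-ASCII characters by default; the standard FEED_EXPORT_ENCODING setting keeps book.json human-readable:
# settings.py
FEED_EXPORT_ENCODING = 'utf-8'  # write Chinese characters as-is instead of \uXXXX escapes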
45-Using pipelines in Scrapy
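This section of the notes is only a heading. As a rough illustration (a minimal sketch, not the original author's code; the pipeline class and output filename are hypothetical), an item pipeline receives every item a spider yields in process_item() and must be enabled in settings.py:
# pipelines.py (sketch)
import json

class YixiuPipeline(object):
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open('books.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        # called for every item the spider yields/returns
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + '\n')
        return item

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()

# settings.py: enable the pipeline (lower number = runs earlier)
# ITEM_PIPELINES = {'yixiu.pipelines.YixiuPipeline': 300}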
46-Configuring settings in Scrapy
Add LOG_ENABLED = False to settings.py to stop Scrapy from printing log output to the console.
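A short settings.py sketch; LOG_LEVEL is a standard alternative if you only want to hide the DEBUG/INFO noise rather than disable logging entirely:
# settings.py
LOG_ENABLED = False      # disable Scrapy logging completely
# LOG_LEVEL = 'WARNING'  # alternative: keep logging, but only warnings and errors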
47-Miscellaneous Scrapy details
48-Crawling a novel site with Scrapy
49-Using CrawlSpider in Scrapy
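Sections 47-49 are only headings in these notes. For section 49, a minimal CrawlSpider sketch (the spider name, domain, and URL pattern are hypothetical) showing the Rule/LinkExtractor pattern:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class BookSpider(CrawlSpider):
    name = 'book'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/']

    # Rules tell the spider which links to follow and which callback handles them
    rules = (
        Rule(LinkExtractor(allow=r'/chapter/\d+'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        yield {'title': response.xpath('//h1/text()').extract_first()}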
……