scrapy + redis crawler fetches no data [bounty can be increased]

The distributed crawler keeps showing Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

The original GitHub repository is https://github.com/CUHKSZ-TQL/WeiboSpider_SentimentAnalysis

After setting up the environment and modifying the code, my version is here:

Link: https://pan.baidu.com/s/1jHbz7ak8VqO-MMHeGj9_UA

Extraction code: iecl

The output of running the third script is:

= RESTART: C:\Users\ap645\Desktop\WeiboSpider_SentimentAnalysis-master\WeiboSpider\sina\spiders\weibo_spider.py

2020-04-16 11:04:10 [scrapy.utils.log] INFO: Scrapy 2.0.1 started (bot: sina)

2020-04-16 11:04:10 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 20.3.0, Python 3.8.1 (tags/v3.8.1:1b293b6, Dec 18 2019, 23:11:46) [MSC v.1916 64 bit (AMD64)], pyOpenSSL 19.1.0 (OpenSSL 1.1.1f 31 Mar 2020), cryptography 2.9, Platform Windows-10-10.0.18362-SP0

2020-04-16 11:04:10 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor

2020-04-16 11:04:10 [scrapy.crawler] INFO: Overridden settings:

{'BOT_NAME': 'sina',

 'DOWNLOAD_DELAY': 2,

 'DUPEFILTER_CLASS': 'scrapy_redis_bloomfilter.dupefilter.RFPDupeFilter',

 'NEWSPIDER_MODULE': 'sina.spiders',

 'SCHEDULER': 'scrapy_redis_bloomfilter.scheduler.Scheduler',

 'SPIDER_MODULES': ['sina.spiders']}

2020-04-16 11:04:10 [scrapy.extensions.telnet] INFO: Telnet Password: 3c9f648b6ca7a947

2020-04-16 11:04:10 [scrapy.middleware] INFO: Enabled extensions:

['scrapy.extensions.corestats.CoreStats',

 'scrapy.extensions.telnet.TelnetConsole',

 'scrapy.extensions.logstats.LogStats']

2020-04-16 11:04:10 [weibo_spider] INFO: Reading start URLs from redis key 'weibo_spider:start_urls' (batch size: 16, encoding: utf-8)

2020-04-16 11:04:12 [scrapy.middleware] INFO: Enabled downloader middlewares:

['sina.middlewares.RedirectMiddleware',

 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',

 'sina.middlewares.CookieMiddleware',

 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',

 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',

 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',

 'scrapy.downloadermiddlewares.retry.RetryMiddleware',

 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',

 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',

 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',

 'scrapy.downloadermiddlewares.stats.DownloaderStats']

2020-04-16 11:04:12 [scrapy.middleware] INFO: Enabled spider middlewares:

['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',

 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',

 'scrapy.spidermiddlewares.referer.RefererMiddleware',

 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',

 'scrapy.spidermiddlewares.depth.DepthMiddleware']

2020-04-16 11:04:12 [scrapy.middleware] INFO: Enabled item pipelines:

['sina.pipelines.MongoDBPipeline']

2020-04-16 11:04:12 [scrapy.core.engine] INFO: Spider opened

2020-04-16 11:04:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

2020-04-16 11:04:12 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023

2020-04-16 11:05:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

(the same INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min) line keeps repeating every minute)

It looks hung. Set the request timeout, request interval, and retry count, and check your UA and proxies.


The UA is set, and it has been 0 from the very start. The request timeout and request interval should be fine; maybe I should try a proxy.

The proxy should be fine too, because my other crawlers can fetch pages with it.

I'll try slowing down.


RETRY_ENABLED = True  # retry failed requests

RETRY_TIMES = 3  # maximum number of retries

DOWNLOAD_DELAY = 2  # download delay (seconds)

CONCURRENT_REQUESTS = 3  # concurrency limit

DOWNLOAD_TIMEOUT = 120  # request timeout (seconds)

Sorry, I tried that approach and it still doesn't work. I also found a possible solution online; could you take a look and tell me what it means and whether it would help?


Problem: the distributed crawler keeps showing Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min).

Solution: you are almost certainly using the Redis build for Windows, and when you lpush you are pushing into that local Windows instance, so the redis_key you insert cannot be found from your crawler server.

Connect to the right database with redis-cli -h <redis ip> -p 6379, and lpush the value there instead.
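If that diagnosis is right, the fix is to push the start URL into the Redis instance the spider actually reads from. A minimal sketch, assuming the key name from the log above (weibo_spider:start_urls); the host 127.0.0.1 and the URL are placeholders to replace with your crawler server's Redis IP and the real Weibo entry page:

```shell
# Push a start URL onto the key the spider is listening on
# (key name taken from the "Reading start URLs from redis key" log line).
redis-cli -h 127.0.0.1 -p 6379 lpush weibo_spider:start_urls "https://weibo.cn/..."

# Confirm the value landed in the instance/database the spider reads:
redis-cli -h 127.0.0.1 -p 6379 llen weibo_spider:start_urls
```

If llen returns 0 from the crawler machine, the lpush went to a different Redis instance, which would produce exactly the endless "Crawled 0 pages" output.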

————————————————

Copyright notice: this is an original article by CSDN blogger 「清风怜明月」, released under the CC 4.0 BY-SA license; when reposting, please include a link to the original and this notice.

Original link: https://blog.csdn.net/qq_43671933/article/details/103378201

It's probably unrelated to that, but it can't hurt to try. First I'll drop redis and try the normal scrapy crawling mechanism instead.
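Falling back to the normal (non-Redis) scrapy mechanism can be sketched as below. This is an illustrative skeleton, not the repo's exact code: turn the spider back into a plain scrapy.Spider with a hard-coded start URL (the URL here is a placeholder), and comment out the SCHEDULER and DUPEFILTER_CLASS lines in settings.py so the default scheduler is used:

```python
import scrapy


class WeiboSpider(scrapy.Spider):  # was a scrapy_redis RedisSpider subclass
    name = 'weibo_spider'
    # Hard-coded start URL replaces reading from the redis_key;
    # placeholder, substitute the real Weibo entry page.
    start_urls = ['https://weibo.cn/...']

    def parse(self, response):
        # If this line ever logs, the fetch path (UA, cookies, proxy) works,
        # and the original problem is on the Redis side: wrong host/db,
        # or an empty weibo_spider:start_urls key.
        self.logger.info('fetched %s (status %d)', response.url, response.status)
```

If the plain spider crawls pages while the Redis version stays at 0, that isolates the fault to how the start URLs are pushed into Redis rather than to the spider or the site.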