What exactly is the reason Scrapy finishes right after it starts?

Console output:

2021-04-19 11:53:58 [scrapy.utils.log] INFO: Scrapy 2.4.1 started (bot: qianchengwuyou)
2021-04-19 11:53:58 [scrapy.utils.log] INFO: Versions: lxml 4.6.3.0, libxml2 2.9.5, cssselect 1.1.0, parsel 1.6.0, w3lib 1.22.0, Twisted 21.2.0, Python 3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 13:44:55) [MSC v.1928 64 bit (AMD64)], pyOpenSSL 20.0.1 (OpenSSL 1.1.1k  25 Mar 2021), cryptography 3.4.7, Platform Windows-10-10.0.17763-SP0
2021-04-19 11:53:58 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.selectreactor.SelectReactor
2021-04-19 11:53:58 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'qianchengwuyou',
 'DOWNLOAD_DELAY': 1,
 'NEWSPIDER_MODULE': 'qianchengwuyou.spiders',
 'SPIDER_MODULES': ['qianchengwuyou.spiders']}
2021-04-19 11:53:58 [scrapy.extensions.telnet] INFO: Telnet Password: 9134d1ae4fd36b14
2021-04-19 11:53:58 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2021-04-19 11:53:59 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2021-04-19 11:53:59 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2021-04-19 11:53:59 [scrapy.middleware] INFO: Enabled item pipelines:
['qianchengwuyou.pipelines.QianchengwuyouPipeline']
2021-04-19 11:53:59 [scrapy.core.engine] INFO: Spider opened
2021-04-19 11:53:59 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2021-04-19 11:53:59 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6024
2021-04-19 11:53:59 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://search.51job.com/list/030200,000000,0000,00,9,99,%25E8%25B4%25A7%25E8%25BF%2590%25E6%2593%258D%25E4%25BD%259C,2,1.html%20> (referer: None)
2021-04-19 11:53:59 [scrapy.core.engine] INFO: Closing spider (finished)
2021-04-19 11:53:59 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 401,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 21386,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'elapsed_time_seconds': 0.453,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2021, 4, 19, 3, 53, 59, 656927),
 'log_count/DEBUG': 1,
 'log_count/INFO': 10,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2021, 4, 19, 3, 53, 59, 203927)}
2021-04-19 11:53:59 [scrapy.core.engine] INFO: Spider closed (finished)
 

Spider code:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders.crawl import CrawlSpider, Rule
import re
from ..items import QianchengwuyouItem

class ZhaopinSpider(CrawlSpider):
    name = 'zhaopin'
    allowed_domains = ['51job.com']
    # note: the original URL string ended with a space, which is why the request
    # in the log above was crawled with a trailing "%20"
    start_urls = ['https://search.51job.com/list/030200,000000,0000,00,9,99,%25E8%25B4%25A7%25E8%25BF%2590%25E6%2593%258D%25E4%25BD%259C,2,1.html']

    rules = (
        #https://jobs.51job.com/guangzhou-yxq/127969863.html?s=sou_sou_soulb&t=0
        Rule(LinkExtractor(allow=r'https://jobs\.51job\.com/guangzhou-\w+/\d+\.html.*',
                           restrict_xpaths=["//div[@class='e']//a"]),
             callback='parse_job', follow=False),

    )

    def parse_job(self, response):
        # job title
        title = response.css(".tHjob h1::text").get()
        # company
        company = response.css(".catn::text").get()
        # salary
        salary = response.css(".tHjob .cn strong::text").get()
        # work location
        city_list = response.css(".ltype::text").get()
        city = re.sub(r"\xa0", "", city_list)
        # benefits
        fuli_list = response.css(".t1 ::text").getall()
        fuli = "".join(fuli_list).strip()
        # job description
        desc_list = response.xpath("//div[@class='bmsg job_msg inbox']/p/text()").getall()
        desc = "".join(desc_list).strip()
        desc = re.sub(r"    、", "", desc)
        # pay and benefits
        desc_list_1 = response.xpath("//div[@class='bmsg job_msg inbox']//li/text()").getall()
        desc_1 = "".join(desc_list_1).strip()
        # work address
        work_address = response.css(".fp::text").get()

        item = QianchengwuyouItem(
            title=title, company=company, salary=salary, city=city,
            fuli=fuli, desc=desc, desc_1=desc_1, work_address=work_address,
        )
        yield item
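
A quick way to see why the run stops after a single request: the log above shows only the start URL was downloaded (response_received_count: 1, scheduler/enqueued: 1) and zero items were scraped, so the Rule's LinkExtractor evidently extracted nothing it could schedule from that page. You can check this by hand in scrapy shell (a minimal sketch, assuming it is run from the project directory so the project settings, including DEFAULT_REQUEST_HEADERS, are applied):

scrapy shell "https://search.51job.com/list/030200,000000,0000,00,9,99,%25E8%25B4%25A7%25E8%25BF%2590%25E6%2593%258D%25E4%25BD%259C,2,1.html"

>>> from scrapy.linkextractors import LinkExtractor
>>> le = LinkExtractor(allow=r'https://jobs\.51job\.com/guangzhou-\w+/\d+\.html.*',
...                    restrict_xpaths=["//div[@class='e']//a"])
>>> le.extract_links(response)                                # [] means the Rule never fires
>>> response.xpath("//div[@class='e']//a/@href").getall()     # what restrict_xpaths sees in the downloaded HTML

If both return empty lists even though the job links are visible in a browser, the listing page most likely fills in its results with JavaScript after loading, so the HTML that Scrapy downloads does not contain the <div class="e"> anchors at all, and the CrawlSpider has nothing left to request.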
pipelines:
import json

class QianchengwuyouPipeline(object):
    def __init__(self):
        self.fp = open("jobs.txt","w",encoding='utf-8')

    def process_item(self, item, spider):
        self.fp.write(json.dumps(dict(item),ensure_ascii=False) + "\n")
        return item

    def close_spider(self,spider):
        self.fp.close()

 

items:

import scrapy


class QianchengwuyouItem(scrapy.Item):
    title = scrapy.Field()
    company = scrapy.Field()
    city  = scrapy.Field()
    salary = scrapy.Field()
    fuli = scrapy.Field()
    desc = scrapy.Field()
    desc_1 = scrapy.Field()
    work_address = scrapy.Field()

 

settings:
 

BOT_NAME = 'qianchengwuyou'

SPIDER_MODULES = ['qianchengwuyou.spiders']
NEWSPIDER_MODULE = 'qianchengwuyou.spiders'


ROBOTSTXT_OBEY = False

DOWNLOAD_DELAY = 1

DEFAULT_REQUEST_HEADERS = {
  'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
  'Accept-Language': 'en',
  'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36'
}


ITEM_PIPELINES = {
   'qianchengwuyou.pipelines.QianchengwuyouPipeline': 300,
}

 


Answer: the code itself raises no errors and the spider exits normally. If the result is not what you expected, then one of your conditions is not matching what you think it does.
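
To see which condition fails from the spider's own log, one option is to override parse_start_url (the CrawlSpider hook that receives the responses for start_urls) and report how many links each rule would extract. This is a minimal sketch; it reads the internal _rules attribute that CrawlSpider builds from rules, so treat it as a debugging aid only:

    # add to ZhaopinSpider for debugging: logs how many links each Rule extracts
    # from the start page; 0 means nothing gets scheduled and the spider closes
    def parse_start_url(self, response):
        for i, rule in enumerate(self._rules):
            links = rule.link_extractor.extract_links(response)
            self.logger.info("rule %d extracted %d links from %s", i, len(links), response.url)
        return []

With that in place, a count of 0 in the log confirms that the problem is the start page itself (the links are not present in the downloaded HTML), not the pipeline, the items, or the settings.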