Error after converting a Python Scrapy project to scrapy-redis

Problem description and background

An error is raised when running the project's spider.py with scrapy-redis.

Relevant code (please don't paste screenshots)

Relevant settings.py configuration

# Configure the item pipeline: push scraped items into Redis
ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 301,
}

# Use the scrapy-redis scheduler
SCHEDULER_ORDER = 'BFO'
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'
# All spiders share a single dedupe filter through Redis
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
# Don't clear the Redis queues, so a crawl can be paused and resumed
SCHEDULER_PERSIST = True
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'  # default queue: priority queue


# Redis host, port, password
# REDIS_HOST = '192.168.30.66'
REDIS_HOST = '127.0.0.1'
REDIS_HOST = 6379
REDIS_ENCODING = 'utf-8'

Run result and error output
D:\codespace\scrapy\zxxk\zxxk\spiders>scrapy runspider xkw.py
Unhandled error in Deferred:
2022-07-31 20:35:03 [twisted] CRITICAL: Unhandled error in Deferred:

Traceback (most recent call last):
  File "D:\Programs\Python\Python310\lib\site-packages\scrapy\crawler.py", line 205, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "D:\Programs\Python\Python310\lib\site-packages\scrapy\crawler.py", line 209, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "D:\Programs\Python\Python310\lib\site-packages\twisted\internet\defer.py", line 1905, in unwindGenerator
    return _cancellableInlineCallbacks(gen)
  File "D:\Programs\Python\Python310\lib\site-packages\twisted\internet\defer.py", line 1815, in _cancellableInlineCallbacks
    _inlineCallbacks(None, gen, status)
---  ---
  File "D:\Programs\Python\Python310\lib\site-packages\twisted\internet\defer.py", line 1660, in _inlineCallbacks
    result = current_context.run(gen.send, result)
  File "D:\Programs\Python\Python310\lib\site-packages\scrapy\crawler.py", line 103, in crawl
    yield self.engine.open_spider(self.spider, start_requests)
builtins.TypeError: getaddrinfo() argument 1 must be string or None

2022-07-31 20:35:03 [twisted] CRITICAL:
Traceback (most recent call last):
  File "D:\Programs\Python\Python310\lib\site-packages\twisted\internet\defer.py", line 1660, in _inlineCallbacks
    result = current_context.run(gen.send, result)
  File "D:\Programs\Python\Python310\lib\site-packages\scrapy\crawler.py", line 103, in crawl
    yield self.engine.open_spider(self.spider, start_requests)
TypeError: getaddrinfo() argument 1 must be string or None
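
For reference, this TypeError comes from Python's built-in socket.getaddrinfo, whose first argument (the host) must be a string or None; passing any other type, such as an integer, raises exactly this message. A minimal reproduction outside Scrapy:

import socket

# A non-string host reproduces the exact error from the traceback above
try:
    socket.getaddrinfo(6379, 80)
except TypeError as e:
    print(e)  # getaddrinfo() argument 1 must be string or None

# With the host given as a string, the lookup succeeds
socket.getaddrinfo('127.0.0.1', 6379)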
My approach and what I tried

I tried downgrading Scrapy and upgrading Python; the error still occurred in both cases.


Closing this question. The cause has been found: an extra comma in the pipeline configuration in settings.

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300
}

There was an extra "," after the 300.
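
For reference, a minimal scrapy-redis settings.py sketch with the Redis connection values spelled out; the host must be a string, and the port goes in a separate REDIS_PORT setting (the values below assume a default local Redis):

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
}
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'
SCHEDULER_PERSIST = True

REDIS_HOST = '127.0.0.1'  # host must be a string
REDIS_PORT = 6379         # port is its own setting, not part of REDIS_HOST
REDIS_ENCODING = 'utf-8'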