Scrapy spider logging DEBUG: Crawled (404)

Why does this page-parsing error occur?

2019-04-17 16:14:46 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.xxx.xx/xxgk/xwzx/zwdt/None> (referer: http://www.xxx.xx/xxgk/xwzx/zwdt/index_1.htm)
2019-04-17 16:14:46 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.xxx.xx/xxgk/xwzx/zwdt/None> (referer: http://www.xxx.xx/xxgk/xwzx/zwdt/index_1.htm)
2019-04-17 16:14:46 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.xxx.xx/xxgk/xwzx/zwdt/None> (referer: http://www.xxx.xx/xxgk/xwzx/zwdt/index_1.htm)
2019-04-17 16:14:46 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.xxx.xx/xxgk/xwzx/zwdt/None> (referer: http://www.xxx.xx/xxgk/xwzx/zwdt/index_1.htm)
2019-04-17 16:14:46 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.xxx.xx/xxgk/xwzx/zwdt/None> (referer: http://www.xxx.xx/xxgk/xwzx/zwdt/index_1.htm)

The code is above. Why does parsing a page produce a request for .../None? Has the site detected my crawler? Where in Scrapy should the headers be changed? I tried changing them in the main spider and then yielding the request, but that failed.

Thanks, everyone.
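Judging from the log alone, the trailing /None in the requested URL usually means the next-page link extraction returned None and was interpolated straight into the URL, rather than the site detecting a crawler. A minimal sketch of the usual guard; the spider name and selector below are hypothetical, since the original code isn't shown:

```python
import scrapy

class ZwdtSpider(scrapy.Spider):
    # Hypothetical spider skeleton; the original code was not posted.
    name = "zwdt"
    start_urls = ["http://www.xxx.xx/xxgk/xwzx/zwdt/index_1.htm"]

    def parse(self, response):
        # Hypothetical selector. If it matches nothing, .get() returns None,
        # and formatting None into a URL string produces the ".../None"
        # requests seen in the log above.
        next_page = response.xpath('//a[contains(text(), "下一页")]/@href').get()
        if next_page is not None:  # guard: only follow a real link
            yield response.follow(next_page, callback=self.parse)
```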

If OP is crawling with the Scrapy framework, you can set the User-Agent in settings.py; that should fix it.
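For reference, a minimal settings.py sketch; the User-Agent string below is only an example:

```python
# settings.py -- applies to every request the spider makes
USER_AGENT = (
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
    "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36"
)

# Scrapy also supports default headers for every request:
DEFAULT_REQUEST_HEADERS = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "zh-CN,zh;q=0.9",
}
```

Headers can also be passed per request with scrapy.Request(url, headers={...}), which may be what OP attempted in the spider; that works as well, as long as the Request is actually yielded from a callback.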