Scrapy meta value-passing problem

When crawling a site, why don't my meta values get passed through to the callback?

yield scrapy.Request(url=self.url.format(uid=uid, page=1), callback=self.parse, headers=headers,
                     cookies=cookies, meta={"page": copy.deepcopy('1'), "uid": copy.deepcopy(uid)})

def parse(self, response):
    print(response.meta.keys())
    print('values:', response.meta.values())
    page = response.meta['page'] + 1

Error output:
dict_keys(['download_timeout', 'download_slot', 'download_latency'])
values: dict_values([180.0, 'm.weibo.cn', 0.29690074920654297])
2021-12-14 20:14:05 [scrapy.core.scraper] ERROR: Spider error processing <GET https://******> (referer: None)
Traceback (most recent call last):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\twisted\internet\defer.py", line 858, in runCallbacks
current.result = callback( # type: ignore[misc]
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python310\lib\site-packages\scrapy\spiders_init
.py", line 90, in _parse
return self.parse(response, kwargs)
File "
****", line 102, in parse
page=response.meta['page']+1
KeyError: 'page'

Could anyone take a look and help? I'm new to Scrapy and don't really understand where this went wrong.
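For reference (not an official answer): the traceback passes through Scrapy's internal _parse wrapper, which is only used when a request has no explicit callback, so the response reaching parse here most likely came from start_urls rather than from the Request that carries the meta dict; responses from those seed URLs only hold downloader-added keys such as download_timeout, download_slot, and download_latency, which matches the dict_keys output above. Below is a minimal sketch of how meta is normally passed from a Request into its callback. The spider name, URL template, uid value, and callback name are placeholders made up for illustration, and "page" is kept as an int so that + 1 works.

import scrapy


class UidDemoSpider(scrapy.Spider):
    # Hypothetical spider; name and url template are placeholders.
    name = "uid_demo"
    url = "https://example.com/user/{uid}?page={page}"

    def start_requests(self):
        uid = "123456"  # placeholder uid
        yield scrapy.Request(
            url=self.url.format(uid=uid, page=1),
            callback=self.parse_page,        # explicit callback, separate from the default parse
            meta={"page": 1, "uid": uid},    # plain values are enough; deepcopy is unnecessary here
        )

    def parse_page(self, response):
        # meta set on the Request is available on the Response it produced
        page = response.meta["page"] + 1     # int, so + 1 works without a TypeError
        uid = response.meta["uid"]
        self.logger.info("uid=%s, next page=%s", uid, page)

Also note that even once meta arrives, the original code stores the page as the string '1', so response.meta['page'] + 1 would raise a TypeError; passing an int (or converting with int() in the callback) avoids that.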
