scrapy.Request: unexpected keyword argument 'callback'

While writing a Scrapy spider, I want to crawl data across multiple pages, so I used `yield Request(next_url, callback=self.parse)`,
but it keeps failing with `TypeError: __init__() got an unexpected keyword argument 'callback'`.


```python
# Note: the original post does not show where `Request` is imported from;
# the error suggests it is not scrapy.Request (see the answer below).
import scrapy

from firstspider.items import FirstspiderItem  # adjust to your project's items module


class TestspiderSpider(scrapy.Spider):
    name = "testspider"
    allowed_domains = ["www.zzairport.com"]
    start_urls = ["http://www.zzairport.com/s/index.php?a=flights&airCode=01&threeCharCode=&corpTwoCharCode=&flightno=&flightdybegin=00:00&flightdyend=&flightdyend&pageid=1"]
    pageid = 1

    def parse(self, response):
        item = FirstspiderItem()
        item["plan_time"] = []
        item["number"] = []
        item["company"] = []
        item["des"] = []
        item["leave"] = []
        item["state"] = []

        rows = len(response.xpath("//tbody//tr[@class='ligang']//td[1]/text()").extract())
        for i in range(rows):
            item["plan_time"].append(response.xpath("//tbody//tr[@class='ligang'][%d]//td[1]/text()" % (i + 1)).extract_first())
            item["number"].append(response.xpath("//tbody//tr[@class='ligang'][%d]//td[2]/text()" % (i + 1)).extract_first())
            item["company"].append(response.xpath("//tbody//tr[@class='ligang'][%d]//td[3]/text()" % (i + 1)).extract_first())
            item["des"].append(response.xpath("//tbody//tr[@class='ligang'][%d]//td[4]/text()" % (i + 1)).extract_first())
            temp = response.xpath("//tbody//tr[@class='ligang'][%d]//td[7]/text()" % (i + 1)).extract_first()
            item["leave"].append(temp if temp is not None else "")
            temp = response.xpath("//tbody//tr[@class='ligang'][%d]//td[8]/text()" % (i + 1)).extract_first()
            item["state"].append(temp if temp is not None else "")
        yield item

        self.pageid = self.pageid + 1
        if self.pageid <= 3:
            next_url = "http://www.zzairport.com/s/index.php?a=flights&airCode=01&threeCharCode=&corpTwoCharCode=&flightno=&flightdybegin=00:00&flightdyend=&flightdyend&pageid=%d" % self.pageid
            yield Request(next_url, callback=self.parse)  # this line raises the TypeError
```

If you want one item per table row, put `yield item` inside the loop; otherwise the whole page comes back as a single item. As for the error: `scrapy.Request` *does* accept a `callback` keyword, so the `TypeError` means the name `Request` in your module is bound to some other class, probably shadowed by another import (e.g. `from urllib.request import Request`). Use the fully qualified form:

yield scrapy.Request(url, callback=self.parse)

or import it explicitly with `from scrapy import Request` at the top of the file.
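The mix-up is easy to reproduce with the standard library: `urllib.request.Request` (a frequent accidental import under the name `Request`) takes no `callback` parameter, so calling it the Scrapy way raises exactly this `TypeError`. A quick check (the URL is just a placeholder):

```python
import urllib.request

# urllib's Request class has no `callback` parameter, so passing one
# the way Scrapy expects reproduces the TypeError from the question.
try:
    urllib.request.Request("http://www.zzairport.com/", callback=None)
except TypeError as exc:
    print(exc)  # message names the unexpected keyword 'callback'
```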

Hope you'll accept this answer!