Why does names only contain the last page's content? Where did my loop go wrong?

from selenium import webdriver
from lxml import etree
from bs4 import BeautifulSoup
import time
import pandas as pd

urls = []
for i in range(1, 4):
    if i == 1:
        url = "https://www.coinbase.com/price/s/listed"
        urls.append(url)

    else:
        url = "https://www.coinbase.com/price/s/listed" + f"?page={i}"
        urls.append(url)
print(urls)

for url in urls:
    wd = webdriver.Chrome()
    wd.get(url)

    time.sleep(30)
    resp = wd.page_source
    html = BeautifulSoup(resp, "lxml")
    tr = html.find_all("tr",class_="AssetTableRowDense__Row-sc-14h1499-1 lfkMjy")
    print(len(tr))
    names = []
    for i in tr:
        name1 = i.find("span",class_="TextElement__Spacer-hxkcw5-0 cicsNy Header__StyledHeader-sc-1xiyexz-0 kwgTEs AssetTableRowDense__StyledHeader-sc-14h1499-14 AssetTableRowDense__StyledHeaderDark-sc-14h1499-17 cWTMKR").text
        name2 = i.find("span",class_="TextElement__Spacer-hxkcw5-0 cicsNy Header__StyledHeader-sc-1xiyexz-0 bjBkPh AssetTableRowDense__StyledHeader-sc-14h1499-14 AssetTableRowDense__StyledHeaderLight-sc-14h1499-15 AssetTableRowDense__TickerText-sc-14h1499-16 cdqGcC").text
        names.append([name1,name2])

ns = pd.DataFrame(names)
date = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
path = "/Users/paul/jpn traffic/coinbase/coinbase"
ns.to_csv(path + date + date + '.csv', index=None)

My intention was to scrape data from every page in urls (three URLs in total), but when I checked the CSV output, it contained only the last page's data.
So where did my loop go wrong?

Note: the two print calls are there to check intermediate results, one verifying the URLs and one verifying the elements to scrape on each page. Both are fine and return the following:

['https://www.coinbase.com/price/s/listed', 'https://www.coinbase.com/price/s/listed?page=2', 'https://www.coinbase.com/price/s/listed?page=3']
26
30
16

Found the problem: names should be initialized outside the loop.
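
A minimal sketch of that fix, keeping the original selectors and output path. names is created once before the loop, so every page appends to the same list. Two small additions of my own: wd.quit() so the Chrome window gets closed, and a single date in the filename, since the doubled date in path+date+date+'.csv' looks like a typo.

from selenium import webdriver
from bs4 import BeautifulSoup
import time
import pandas as pd

BASE = "https://www.coinbase.com/price/s/listed"
ROW_CLASS = "AssetTableRowDense__Row-sc-14h1499-1 lfkMjy"
NAME_CLASS = ("TextElement__Spacer-hxkcw5-0 cicsNy Header__StyledHeader-sc-1xiyexz-0 "
              "kwgTEs AssetTableRowDense__StyledHeader-sc-14h1499-14 "
              "AssetTableRowDense__StyledHeaderDark-sc-14h1499-17 cWTMKR")
TICKER_CLASS = ("TextElement__Spacer-hxkcw5-0 cicsNy Header__StyledHeader-sc-1xiyexz-0 "
                "bjBkPh AssetTableRowDense__StyledHeader-sc-14h1499-14 "
                "AssetTableRowDense__StyledHeaderLight-sc-14h1499-15 "
                "AssetTableRowDense__TickerText-sc-14h1499-16 cdqGcC")

urls = [BASE] + [f"{BASE}?page={i}" for i in range(2, 4)]

names = []                   # initialized ONCE, before the loop
wd = webdriver.Chrome()      # one browser instance reused for all pages
for url in urls:
    wd.get(url)
    time.sleep(30)           # wait for the JS-rendered table to load
    html = BeautifulSoup(wd.page_source, "lxml")
    for tr in html.find_all("tr", class_=ROW_CLASS):
        name1 = tr.find("span", class_=NAME_CLASS).text
        name2 = tr.find("span", class_=TICKER_CLASS).text
        names.append([name1, name2])
wd.quit()

ns = pd.DataFrame(names)     # 26 + 30 + 16 = 72 rows from all three pages
date = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
path = "/Users/paul/jpn traffic/coinbase/coinbase"
ns.to_csv(path + date + '.csv', index=False)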

The link https://www.coinbase.com/price/s/listed?page=1 also works, so the if i == 1 special case when building urls isn't needed.
Alternatively, starting from line 33, add 4 spaces of indentation so the last 4 statements run inside the for url in urls: loop body.
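
For reference, a sketch of that variant, reusing the urls list and the *_CLASS constants from the sketch above. With the last four statements indented into the loop, names can stay inside the loop, and each page is written to its own timestamped CSV instead of one combined file:

for url in urls:
    wd = webdriver.Chrome()
    wd.get(url)
    time.sleep(30)
    html = BeautifulSoup(wd.page_source, "lxml")
    wd.quit()
    names = []               # a per-page list is fine here, since we also save per page
    for tr in html.find_all("tr", class_=ROW_CLASS):
        names.append([tr.find("span", class_=NAME_CLASS).text,
                      tr.find("span", class_=TICKER_CLASS).text])
    # the four statements below are now inside the loop body,
    # so each page is saved before names is rebuilt
    ns = pd.DataFrame(names)
    date = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
    path = "/Users/paul/jpn traffic/coinbase/coinbase"
    ns.to_csv(path + date + '.csv', index=False)

Because the timestamp is taken inside the loop (and each page waits 30 seconds), the three filenames come out distinct; still, the single combined file from the first fix is usually easier to work with downstream.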