Why does the connection succeed, yet no data comes back?

import requests
from bs4 import BeautifulSoup

hd = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36"
}
resp = requests.get("https://panteracapital.com/portfolio", headers=hd)
print(resp.status_code)

# Parse the static HTML that the server returned
html = BeautifulSoup(resp.text, "lxml")
temp = html.find_all("div", class_="cell card")
print(temp)

The output is:

200
[]

Process finished with exit code 0

Very strange. Where exactly is the mistake?

The data on a page is not necessarily present in the HTML source; it is often generated dynamically by JavaScript after the page loads, so a plain requests fetch never sees it.
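A minimal offline sketch of the symptom (the markup below is invented for illustration; the real page's structure may differ): the server happily returns 200 with a valid page, but the cards only exist after a script runs, so there is nothing in the static source for find_all to match.

```python
from bs4 import BeautifulSoup

# What requests receives: the static shell, before any JavaScript runs.
# (Hypothetical markup for illustration only.)
static_html = """
<html><body>
  <div id="portfolio"><!-- cards injected here by JS --></div>
  <script>/* would create <div class="cell card">...</div> at runtime */</script>
</body></html>
"""

soup = BeautifulSoup(static_html, "html.parser")
# The divs do not exist yet, so the result is an empty list.
print(soup.find_all("div", class_="cell card"))  # []
```

This is exactly the 200-but-empty pattern in the question: the request itself is fine, the parser is fine, the target elements simply are not in the response body.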

Print resp.text first and check whether it actually contains the content you are after.

Check whether cell card really appears in it. I looked, and cell auto does have content.

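Since the cards are created by JavaScript, one fix is to let a real browser render the page and then parse driver.page_source; the question already imports selenium. A sketch, assuming Chrome and a matching ChromeDriver are installed (the wait condition div.cell.card is taken from the question's target class, which may need adjusting for the live page):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # run without a visible window

driver = webdriver.Chrome(options=options)  # assumes ChromeDriver is on PATH
try:
    driver.get("https://panteracapital.com/portfolio")
    # Wait (up to 15 s) until the JS has actually injected the cards.
    WebDriverWait(driver, 15).until(
        EC.presence_of_element_located((By.CSS_SELECTOR, "div.cell.card"))
    )
    # Now page_source contains the rendered DOM, not just the static shell.
    soup = BeautifulSoup(driver.page_source, "lxml")
    cards = soup.find_all("div", class_="cell card")
    print(len(cards))
finally:
    driver.quit()
```

The explicit WebDriverWait matters: grabbing page_source immediately after get() can still race the script that injects the cards.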