Crawl all 100 poster images from the Maoyan Top 100 movie board and save them to a local images folder — could someone help? 🙏🙏
First, based on how the page URL changes from one page to the next, build the list of page URLs:
urls = [f'https://maoyan.com/board/4?offset={i}' for i in range(0, 100, 10)]
When calling requests.get, add your cookies to the headers, and also pass the parameter verify=False.
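For reference, the headers could be built roughly like this — both values below are placeholders, so copy your own User-Agent and cookie string from your browser's developer tools:

headers = {
    'User-Agent': 'Mozilla/5.0',           # placeholder: replace with your browser's User-Agent
    'Cookie': 'paste_your_cookie_here',    # placeholder: replace with the cookie copied from your browser
}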
The parsing code is below; all that is missing is the part that writes the image files (see the download sketch after the code):
import requests
import time
from bs4 import BeautifulSoup as bs

for url in urls[:1]:                      # remove the [:1] slice to crawl all 10 pages
    # verify=False skips SSL verification, as suggested above (urllib3 will print a warning)
    res = requests.get(url, headers=headers, verify=False)
    res.encoding = res.apparent_encoding
    soup = bs(res.text, 'lxml')
    # with open('t1024.html', 'w', encoding='utf-8') as f:
    #     f.write(str(soup))
    # movie titles
    t = [a['title'] for a in soup.select('dl.board-wrapper dd a:nth-child(1)')]
    # poster URLs: the images are lazy-loaded, so the real address sits in data-src
    s = ['https:' + img['data-src'] for img in soup.select('img.board-img')]
    time.sleep(1)                         # pause between pages to avoid hammering the site
    print(t, s)
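To actually save the posters, something like the following could go inside the for url in urls loop, right after s is built. This is only a minimal sketch: the images folder name comes from the question, while using the movie title as the file name and the .jpg extension are my own assumptions.

import os

os.makedirs('images', exist_ok=True)              # create the target folder if it does not exist
for title, img_url in zip(t, s):                  # t: titles, s: poster URLs from the same page
    resp = requests.get(img_url, headers=headers, verify=False)
    # file name is the movie title; .jpg is an assumption, adjust if the URL shows another extension
    with open(os.path.join('images', f'{title}.jpg'), 'wb') as f:
        f.write(resp.content)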
If this helped, please click the button to accept this answer.