Why does my crawler get a 404 when scraping a site?

import requests

headers = {
    # 'Referer': 'http://www.yhdm.so/'
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.36 Edg/92.0.902.73'
    # ,'Host': 'p.hnnzwhyy.com'
    # ,'Accept-Encoding': 'gzip, deflate'
}
f = open('pre_urlss', 'r')
url = f.readline()  # reads the first line of the file
print(url)
print(requests.get(url, headers=headers))  # prints the Response object, e.g. <Response [404]>
f.close()

The printed URL looks fine and opens in a browser, so why does the crawler get a 404? I've tried request headers, and cookies too, but it still fails.
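One likely cause, given that the URL comes from `f.readline()`: `readline()` keeps the trailing newline, and `requests` will percent-encode it into the request path, which servers typically answer with 404. A minimal sketch of the difference (the example URL is hypothetical, standing in for a line of `pre_urlss`):

```python
# readline() keeps the trailing '\n'; if it stays attached to the URL,
# requests encodes it into the path and the server may return 404.
line = "http://example.com/page\n"  # hypothetical value returned by readline()

url_raw = line            # buggy: newline still attached
url_clean = line.strip()  # fix: strip surrounding whitespace before requesting

print(repr(url_raw))    # the trailing \n is visible only with repr()
print(repr(url_clean))
```

Note that `print(url)` hides the problem because the newline is invisible in normal output; printing `repr(url)` makes it show up immediately.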

So could you share the URL?

Try adding the header "X-Requested-With": "XMLHttpRequest".