Keep getting an SSL error; none of the fixes I found online work

This is a pixiv crawler: you enter an artist ID and a save directory, and it downloads the images into that folder. But every run fails with an SSL error.


# -*- coding: utf-8 -*-
import re, os, time
import requests, datetime
import urllib.request   # urlretrieve() lives in urllib.request; a bare "import urllib" is not enough
from urllib.parse import quote

name = input('输入画师的ID: ')
text = quote(name, encoding='utf-8')   # the second positional argument of quote() is "safe", so pass the encoding by keyword
while True:
    o_path = input('输入文件地址: ')
    if not os.path.exists(o_path):
        print('路径不存在')
    else:
        file_path = os.path.join(o_path, name)
        os.makedirs(file_path)
        break

start = datetime.datetime.now()
headers = {
    'cookie':'Pycharm - 9b90c350 = 4fbc349d - 5094 - 4ce2 - 93a3 - b3e8124dd155;_gcl_au = 1.1.878710507.1629369399',
    'user-agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
    'referer':'https://pixiv.com/ajax/search/artworks/' + text + '?word=' + text + '&order=date_d&mode=all&p=1&s_mode=s_tag&type=all&lang=zh'
}
url = 'https://pixiv.com/ajax/search/artworks/' + text + '?word=' + text + '&order=date_d&mode=all&p=1&s_mode=s_tag&type=all&lang=zh'
resp = requests.get(url, headers=headers)   # this is a Response object, not a Session
time.sleep(3)
jsonfile = resp.json()

i = 0
while i < len(jsonfile['body']['illustManga']['data']):
    id = jsonfile['body']['illustManga']['data'][i]['id']
    userID = jsonfile['body']['illustManga']['data'][i]['userID']
    title = jsonfile['body']['illustManga']['data'][i]['illustTitle']
    userName = jsonfile['body']['illustManga']['data'][i]['userName']
    url = jsonfile['body']['illustManga']['data'][i]['url']
    o_url = re.sub('/c/250x250_80_a2','',url)   # strip the thumbnail size segment to get the larger image URL
    print('作品名称: ' + title, '作品ID: ' + str(id),'作者姓名和ID: ' + userName, str(userID))
    print(o_url,'\tDownloading...\n')
    image_url = o_url
    # ':' is not allowed in Windows file names, so build a single string joined with '_' instead of a tuple
    file_name = '作品名称_' + title + '_作品ID_' + str(id)
    file_suffix = os.path.splitext(image_url)[1]
    filename = '{}{}{}{}'.format(file_path, os.sep, file_name, file_suffix)
    print(filename)
    urllib.request.urlretrieve(image_url,filename=filename)
    time.sleep(4)
    i += 1

    end = datetime.datetime.now()
    print('用时 :{}'.format(end-start))

requests.exceptions.SSLError: HTTPSConnectionPool(host='pixiv.com', port=443): Max retries exceeded with url: /ajax/search/artworks/1113943?word=1113943&order=date_d&mode=all&p=1&s_mode=s_tag&type=all&lang=zh (Caused by SSLError(SSLCertVerificationError("hostname 'pixiv.com' doesn't match either of '*.pixiv.net', 'pixiv.me', 'public-api.secure.pixiv.net', 'oauth.secure.pixiv.net', 'www.pixivision.net', 'fanbox.cc', '*.fanbox.cc', 'pixiv.net'")))


Take a look at the cookie value in the headers of your code: there should not be spaces between the hyphens "-" and the surrounding characters, so remove them. Better still, open DevTools (F12), check the request headers, request method and request parameters that the browser actually sends to the endpoint, and copy them exactly.
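For example, the cookie from the code in the question, rewritten without the extra spaces (the value itself is just the one posted above and should be replaced with whatever your own browser sends):

headers = {
    # no spaces around '-' or '=' inside the cookie string
    'cookie': 'Pycharm-9b90c350=4fbc349d-5094-4ce2-93a3-b3e8124dd155; _gcl_au=1.1.878710507.1629369399',
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36',
}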

For reference: "python3 错误 Max retries exceeded with url 解决方法" (whatday's CSDN blog) covers the same "Max retries exceeded with url" family of errors seen while crawling: https://blog.csdn.net/whatday/article/details/106960653
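Also, the traceback itself points at the host: the certificate served for pixiv.com only covers *.pixiv.net, pixiv.net and a few related names, which is exactly what the SSLCertVerificationError says. Below is a minimal sketch with the host changed to www.pixiv.net, keeping the path and query string from the question; whether the endpoint then returns the JSON you expect still depends on the cookies/headers pixiv wants, so treat it as a starting point rather than a guaranteed fix.

import requests
from urllib.parse import quote

# the search term below is just the ID from the question's traceback, used as an example
text = quote('1113943', encoding='utf-8')
headers = {
    # replace with the user-agent / cookie your own browser sends
    'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
}
# the certificate covers *.pixiv.net and pixiv.net, so request www.pixiv.net instead of pixiv.com
url = ('https://www.pixiv.net/ajax/search/artworks/' + text
       + '?word=' + text + '&order=date_d&mode=all&p=1&s_mode=s_tag&type=all&lang=zh')
resp = requests.get(url, headers=headers)
resp.raise_for_status()
jsonfile = resp.json()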