Python crawler throws errors when scraping through a VPN


```python
import requests
from bs4 import BeautifulSoup

# Placeholder headers; the real script defines its own User-Agent etc.
headers = {'User-Agent': 'Mozilla/5.0'}

def get_url():
    url_list = []
    for a in range(1, 3):  # page numbers
        url = 'xxxxx/page/{}/'.format(a)
        html = requests.get(url, headers=headers).text
        soup = BeautifulSoup(html, 'lxml')
        # narrow the soup down to the post-grid container
        soup = soup.find_all('div', class_='gridhub-posts gridhub-posts-grid')[0]
        for i in range(0, 10):
            data = soup.find_all('div', class_='gridhub-grid-post gridhub-5-col')[i]
            url = data.find_all('a', class_='gridhub-grid-post-thumbnail-link')[0].get('href')
            url = url + '1/'
            # keep only non-trivial, previously unseen links
            if len(url) > 6 and url not in url_list:
                url_list.append(url)
    print('Finished collecting page data')
    return url_list
```

Runtime result and error messages

The following errors are reported:

```text
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xx', port=443): Max retries exceeded with url: xxx (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1124)')))

requests.exceptions.SSLError: HTTPSConnectionPool(host='xxx', port=443): Max retries exceeded with url: xxx (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1124)')))
```
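For reference, SSLEOFError means the TLS connection was cut off mid-handshake (the tunnel or the remote end closed it unexpectedly), so this is likely a connection-level problem rather than a certificate one, which would explain why reinstalling cryptography/pyOpenSSL/certifi changes nothing. One connection-level mitigation worth trying is letting requests retry transient failures with backoff. The sketch below is only an illustration, assuming a placeholder BASE_URL, headers, and retry/timeout values that are not from the original script:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Placeholder values: BASE_URL and headers stand in for the real site and request headers.
BASE_URL = 'xxxxx/page/{}/'
headers = {'User-Agent': 'Mozilla/5.0'}

def make_session():
    """Build a requests.Session that retries transient connection/TLS failures."""
    retry = Retry(
        total=5,                                    # up to 5 retry attempts overall
        backoff_factor=1,                           # exponential backoff between retries
        status_forcelist=[429, 500, 502, 503, 504]  # also retry these HTTP status codes
    )
    session = requests.Session()
    session.mount('https://', HTTPAdapter(max_retries=retry))
    session.mount('http://', HTTPAdapter(max_retries=retry))
    return session

session = make_session()
html = session.get(BASE_URL.format(1), headers=headers, timeout=15).text
```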
My approach and what I have already tried

pip install cryptography
pip install pyOpenSSL
pip install certifi
(none of these helped)

The result I want to achieve

With the VPN turned off, crawling other (domestic) sites produces no errors.
How can I fix the errors that appear when the crawler runs together with the VPN? The solution I found via Baidu (quoted below) did not work.

This is a VPN proxy issue. You can set the VPN to proxy only part of your traffic (rule-based proxying) instead of proxying your entire network connection.
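Following that suggestion, one way to keep the proxying under the crawler's own control is to pass the VPN client's local proxy endpoint to requests explicitly, instead of relying on the system-wide proxy. A minimal sketch; 127.0.0.1:7890 is only a common default port for local proxy clients and the URL is a placeholder, so substitute your own values:

```python
import requests

# Hypothetical values: replace with the local port your VPN/proxy client actually listens on.
proxies = {
    'http': 'http://127.0.0.1:7890',
    'https': 'http://127.0.0.1:7890',
}
headers = {'User-Agent': 'Mozilla/5.0'}

# Route only this request through the proxy, regardless of system proxy settings.
resp = requests.get('xxxxx/page/1/', headers=headers, proxies=proxies, timeout=15)
print(resp.status_code)
```

Conversely, if the crawler's traffic should bypass the VPN entirely, creating a `requests.Session` and setting `session.trust_env = False` makes requests ignore the environment/system proxy settings for that session.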