Python scraper error: Max retries exceeded with url

Problem symptoms and background

In the tutorial video I was following, this code runs successfully, but it fails when I run it myself:

import requests
from bs4 import BeautifulSoup as bs

# Load the webpage content
r = requests.get("https://keithgalli.github.io/web-scraping/example.html")

# Convert to a beautiful soup object
soup = bs(r.content, "html.parser")

# Print out our html
print(soup)

Runtime output and error message

HTTPSConnectionPool(host='keithgalli.github.io', port=443): Max retries exceeded with url: /web-scraping/example.html (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x...>: Failed to establish a new connection: [Errno 11001] getaddrinfo failed'))

Desired result

For the code to run successfully.

Possible solutions

Handle the exception with try/except and retry the request in a loop; see the sketch below. Note that [Errno 11001] getaddrinfo failed means DNS lookup of the hostname failed, so check your network and DNS settings first; retrying only helps if the failure is transient.
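
A minimal retry sketch, assuming the failure is transient; the retry count of 3 and the 2-second delay are arbitrary example values:

import time
import requests
from bs4 import BeautifulSoup as bs

url = "https://keithgalli.github.io/web-scraping/example.html"
soup = None
for attempt in range(3):  # retry up to 3 times
    try:
        r = requests.get(url, timeout=10)
        r.raise_for_status()  # raise on 4xx/5xx responses
        soup = bs(r.content, "html.parser")
        break  # success, stop retrying
    except requests.exceptions.RequestException as e:
        print(f"Attempt {attempt + 1} failed: {e}")
        time.sleep(2)  # brief pause before the next attempt

if soup is not None:
    print(soup)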

It may also be that the site has blocked you for requesting too frequently; in that case, scrape through a proxy, as sketched below.
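
A minimal sketch of passing proxies to requests.get; the address 127.0.0.1:7890 is a placeholder, not a working proxy:

import requests

# Placeholder address -- replace with a proxy you can actually reach
proxies = {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}

r = requests.get(
    "https://keithgalli.github.io/web-scraping/example.html",
    proxies=proxies,
    timeout=10,
)
print(r.status_code)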

Or upgrade requests to the latest version:
pip install --upgrade requests
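
Related to both suggestions above: recent versions of requests can also retry automatically at the transport layer via urllib3's Retry helper. A minimal sketch; the retry count, backoff factor, and status codes are example values:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
# Retry up to 3 times with exponential backoff on connection errors
# and on the listed HTTP status codes (example values)
retries = Retry(total=3, backoff_factor=1, status_forcelist=[500, 502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retries))

r = session.get("https://keithgalli.github.io/web-scraping/example.html", timeout=10)
print(r.status_code)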