Downloading a large file (>800 MB) fails with an exception

Problem description and background

When downloading a large file (over 800 MB), the program aborts with an exception.

Program code
import time
import requests
url = "https://api.github.com/repos/specterbike/aosp.frameworks.base/zipball/refs/tags/android-cts-5.1_r15"
s_time = time.time()
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36",
    "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9",
    "accept-encoding": "gzip, deflate, br"
}
res = requests.get(url, headers=headers, stream=True)
code = res.status_code
res.encoding = "utf-8"
file_name = "android-cts-5.1_r15.zip"
if code == 200:
    with open(file_name, "wb") as f:
        for text in res.iter_content(chunk_size=1024 * 4):
            if text:
                f.write(text)

print(f"用时:{time.time() - s_time}")
Output and error traceback
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 685, in _update_chunk_length
    self.chunk_left = int(line, 16)
ValueError: invalid literal for int() with base 16: b''

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 425, in _error_catcher
    yield
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 752, in read_chunked
    self._update_chunk_length()
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 689, in _update_chunk_length
    raise httplib.IncompleteRead(line)
http.client.IncompleteRead: IncompleteRead(0 bytes read)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 753, in generate
    for chunk in self.raw.stream(chunk_size, decode_content=True):
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 560, in stream
    for line in self.read_chunked(amt, decode_content=decode_content):
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 781, in read_chunked
    self._original_response.close()
  File "/usr/lib/python3.8/contextlib.py", line 131, in __exit__
    self.gen.throw(type, value, traceback)
  File "/usr/lib/python3/dist-packages/urllib3/response.py", line 443, in _error_catcher
    raise ProtocolError("Connection broken: %r" % e, e)
urllib3.exceptions.ProtocolError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "download_.py", line 32, in <module>
    for text in res.iter_content(chunk_size=1024 * 4):
  File "/usr/local/lib/python3.8/dist-packages/requests/models.py", line 756, in generate
    raise ChunkedEncodingError(e)
requests.exceptions.ChunkedEncodingError: ('Connection broken: IncompleteRead(0 bytes read)', IncompleteRead(0 bytes read))
What I tried
- Used stream=True with iter_content to download the file as a byte stream and keep memory usage low

It may be a restriction on GitHub's side rather than a problem in your program.
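
If you want to check whether GitHub is actually throttling you, the API exposes its current rate-limit state. A minimal sketch against the real /rate_limit endpoint (the fields printed are the standard ones api.github.com returns):

import requests

# query the current rate-limit window for unauthenticated requests
res = requests.get("https://api.github.com/rate_limit", timeout=30)
core = res.json()["resources"]["core"]
print("limit:", core["limit"])          # requests allowed per window
print("remaining:", core["remaining"])  # requests left in the window
print("reset:", core["reset"])          # epoch seconds when the window resets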

Your connection got broken mid-transfer:

Connection broken: IncompleteRead(0 bytes read)

Downloading anything from GitHub inside mainland China is slow, and large files in particular tend to stall partway through until the speed drops to zero and the connection times out and is dropped. That is what happened here.

Fix: go through a proxy, or download from somewhere else (a mirror). You can also give requests a longer timeout:

res = requests.get(url, headers=headers, stream=True, timeout=120)
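
If you go through a proxy, requests takes a proxies dict. A minimal sketch, assuming a local proxy listening on 127.0.0.1:7890 (a placeholder; substitute whatever your own proxy actually listens on):

import requests

url = "https://api.github.com/repos/specterbike/aosp.frameworks.base/zipball/refs/tags/android-cts-5.1_r15"
# placeholder address: point this at the proxy you actually run
proxies = {
    "http": "http://127.0.0.1:7890",
    "https": "http://127.0.0.1:7890",
}
res = requests.get(url, stream=True, timeout=120, proxies=proxies)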

Here is a download script for you:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Sat Oct 23 13:54:39 2021
@author: huyi
"""

import os
import sys

import requests


def download(url, file_path):
    # retry counter
    count = 0
    # the first request only reads the total file size from the headers;
    # this requires the server to send Content-Length
    # (verify=False skips TLS certificate checks)
    r1 = requests.get(url, stream=True, verify=False)
    total_size = int(r1.headers["Content-Length"])
    r1.close()

    # if a partial file already exists locally, read how much was downloaded
    if os.path.exists(file_path):
        temp_size = os.path.getsize(file_path)  # bytes already downloaded
    else:
        temp_size = 0

    # compare: is the download still incomplete?
    print(temp_size)
    print(total_size)

    # start downloading, retrying up to 10 times
    while count < 10:
        if count != 0:
            temp_size = os.path.getsize(file_path)
        # sizes match, the file is complete
        if temp_size >= total_size:
            break
        count += 1
        print(
            "Attempt [{}]: downloaded [{}] of [{}] bytes".format(
                count, temp_size, total_size))
        # re-request the URL with a Range header: this is the core part,
        # asking the server to resume after the bytes we already have
        # (the server must honor Range and reply with HTTP 206, otherwise
        # the full body would get appended again)
        headers = {"Range": f"bytes={temp_size}-"}
        r = requests.get(url, stream=True, verify=False, headers=headers)

        # "ab" opens the file in append mode, so every write goes to the end
        with open(file_path, "ab") as f:
            for chunk in r.iter_content(chunk_size=1024 * 64):
                if chunk:
                    temp_size += len(chunk)
                    f.write(chunk)
                    f.flush()
                    # progress bar
                    done = int(50 * temp_size / total_size)
                    sys.stdout.write("\r[%s%s] %d%%" % (
                        '█' * done, ' ' * (50 - done), 100 * temp_size / total_size))
                    sys.stdout.flush()
        print("\n")

    return file_path
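
A usage sketch with the URL from the question. One caveat: this resume approach assumes the server reports Content-Length and honors Range requests; if the GitHub zipball endpoint streams with chunked encoding instead, the first request will find no Content-Length header and you would need a plain streaming fallback:

download(
    "https://api.github.com/repos/specterbike/aosp.frameworks.base/zipball/refs/tags/android-cts-5.1_r15",
    "android-cts-5.1_r15.zip",
)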

Use a download manager such as Xunlei (迅雷).

If large downloads keep coming out corrupted, it can also be caused by a poor-quality network card or a failing hard disk. Check that hardware, repair your system and disk, and then see whether things improve.