What does this error mean when my Python crawler runs?

While running a script to batch-save Baidu images I got the error below. How do I fix it? Any help would be appreciated.

Where is the screenshot? I don't see any image.

  • You can refer to the answers to this question and see whether they help, link: https://ask.csdn.net/questions/7701640
  • I also found a very good blog post for you; see whether it helps, link: No other programming language can match Python for image processing! Why do I say that? Read this post and you will see!
  • You can also consult the manual section python - Generic Operating System Services
  • Besides that, the section "In short, the crawling process adds the steps of downloading the font file and decoding it" from the blog post "Font anti-scraping in Python crawlers" may also solve your problem; you can read the content below or jump to the source blog:
  • The code is attached below:

    #!/usr/bin/env python3
    # -*- coding: utf-8 -*-
    # Author: fcj
    # Time: 2019-04-29
    # Description: font anti-scraping
    
    
    from fontTools.ttLib import TTFont
    import requests
    from lxml import etree
    import re
    
    
    def get_html(html_url):
        headers = {
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8'
                      ',application/signed-exchange;v=b3',
            'Accept-Encoding': 'gzip, deflate',
            'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
            'Connection': 'keep-alive',
            'Cookie': '',  # cookie value omitted in the original post
            'DNT': '1',
            'Host': '',  # host omitted in the original post
            'Referer': '',  # referer omitted in the original post
            'Upgrade-Insecure-Requests': '1',
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
                          '(KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'
        }
        content, wof_url = get_content(get_url=html_url, headers=headers)
    
        file_path = get_file(wf_url=wof_url)  # download the obfuscated web font
        online_fonts = TTFont(file_path)
        base_fonts = TTFont('D:/3yl1dB.woff')  # local baseline font with a known glyph-to-digit mapping
        tmp = exec_wof(online=online_fonts, local=base_fonts)  # build the glyph -> digit lookup table
        nums = content.xpath('//div[@class="col-md-1"]//text()')
        an = get_ans(nums=nums, tmp=tmp)
        return an
    
    
    def exec_wof(online, local):  # map each online glyph name to its real digit
        num_list = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
        eng_list = ['zero', 'four', 'eight', 'nine', 'five', 'six', 'two', 'seven', 'three', 'one']
        uni_list = online.getGlyphNames()[1:]
        uni_list.remove('glyph00011')
        tmp = {}
        for i in range(10):
            online_glyph = online['glyf'][uni_list[i]]  # glyph outline object for this glyph name
            for j in range(10):
                base_glyph = local['glyf'][eng_list[j]]
                if online_glyph == base_glyph:
                    tmp[uni_list[i]] = num_list[j]
        return tmp
    
    
    def get_wof(content):  # extract the font file url from the page
        url = re.findall(r'\("https.*?"\)', content)
        url = str(''.join(url)).replace('"', '').replace(')', '').replace('(', '')
        return url
    
    
    def get_file(wf_url):  # download and save the font file
        data = requests.get(url=wf_url,  verify=False)
        r = data.content
        name = wf_url.split('/')[-1]
        path = 'D:/jiao cheng/'+name
        with open(path, 'wb') as f:
            f.write(r)
        return path
    
    
    def get_ans(nums, tmp):  # sum up the decoded numbers
        an = 0
        for num in nums:
            num = re.findall('[0-9]', ''.join(num))
            real_list = []
            for n in num:
                real_list.append(str(translate_num(str(n), tmp)))
            real_num = int(''.join(real_list))
            an += real_num
        return an
    
    
    def get_content(get_url, headers):  # download and parse the page
        data = requests.get(url=get_url, headers=headers, verify=False)
        r = data.content
        content = str(r, encoding='utf-8', errors='ignore')
        wof_url = get_wof(content=content)
        s = etree.HTML(content.replace('<br>', '').replace('\n', '').replace('</br>', '').replace('\t', '').
                       replace('\xa0', '').replace('\u3000', '').replace('\r', ''))
        return s, wof_url
    
    
    def translate_num(number, tmp):  # map a displayed digit to its real value
        eng_list = ['zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine']
        number = eng_list[int(number) - 1]  # look up the glyph name for the displayed digit (site-specific offset)
        return tmp[number]
                
    
    if __name__ == '__main__':
        ul = ''  # base page URL omitted in the original post
        asn = 0
        for index in range(1, 1001):
            hl = ul + str(index)
            asn += get_html(html_url=hl)
        print(asn)
    
    
    
  • You can also watch the "Basic programming: building stock quotes" section of instructor jeevan's course Python Quantitative Trading, Master Trader Quantitative Investment series (Internal Skills volume), to consolidate the relevant knowledge.
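  • Since the error screenshot from the original question is not visible, here is a minimal sketch of the batch-save step itself, which is a common failure point; the URL list, folder name, and .jpg extension below are placeholder assumptions rather than anything from your script, so adjust them to match whatever your Baidu crawler actually extracts:

    # Minimal batch image-save sketch (not the asker's original code).
    import os
    import requests


    def save_images(image_urls, save_dir='downloads'):
        os.makedirs(save_dir, exist_ok=True)  # create the folder if it does not exist
        headers = {'User-Agent': 'Mozilla/5.0'}  # many image hosts reject requests without a UA
        for index, url in enumerate(image_urls):
            try:
                resp = requests.get(url, headers=headers, timeout=10)
                resp.raise_for_status()  # surface 4xx/5xx instead of silently saving an error page
            except requests.RequestException as exc:
                print('skip %s: %s' % (url, exc))  # keep going when a single download fails
                continue
            path = os.path.join(save_dir, str(index) + '.jpg')
            with open(path, 'wb') as f:  # 'wb' is required for binary image data
                f.write(resp.content)


    if __name__ == '__main__':
        save_images(['https://example.com/a.jpg'])  # placeholder URL, replace with your own list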