How to extract a direct download link from Baidu Netdisk (paid share)

Is there a way to extract the direct link? I need to copy that link and hand it to my download box to do the downloading.

Combining an image and an mp3 into a video can be done with an ffmpeg command, for example:

ffmpeg -framerate 0.05 -f image2 -loop 1 -y -i example.jpg -i example.mp3 -s 1080x1920 -r 25 -t 100 output.mp4

-framerate  input frame rate; the smaller the value, the longer each image stays on screen
-loop  loop the input image (1 = loop)
-i  image path (a printf-style sequence pattern such as img%03d.jpg also works)
-i  audio path
-y  overwrite the output file if it already exists
-s  output resolution (width x height)
-r  output frame rate
-t  total duration in seconds
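The options above can also be assembled programmatically. Below is a minimal Python sketch; the helper name build_ffmpeg_cmd and its defaults are made up for illustration, it simply mirrors the command shown above and hands it to subprocess:

```python
import subprocess

def build_ffmpeg_cmd(image, audio, output,
                     framerate=0.05, size="1080x1920", fps=25, duration=100):
    """Build the ffmpeg argument list for turning one still image plus
    one audio track into a video, mirroring the command above."""
    return [
        "ffmpeg",
        "-y",                          # overwrite output without asking
        "-framerate", str(framerate),  # input frame rate (lower = image shown longer)
        "-f", "image2",
        "-loop", "1",                  # keep looping the single image
        "-i", image,
        "-i", audio,
        "-s", size,                    # output resolution, WxH
        "-r", str(fps),                # output frame rate
        "-t", str(duration),           # total duration in seconds
        output,
    ]

cmd = build_ffmpeg_cmd("example.jpg", "example.mp3", "output.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # uncomment to run, requires ffmpeg on PATH
```

The actual ffmpeg invocation is left commented out so the sketch runs even where ffmpeg is not installed.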

Then call the system command from PHP, via passthru or system, to execute the ffmpeg command above.

I need to run an ffmpeg command in PHP.
But PHP-FFMpeg no longer supports the latest version and is outdated.
Is there an alternative way to run ffmpeg commands from a web file (PHP, JavaScript, jQuery)?
I tried exec() and shell_exec() in a PHP file but got blank output:
echo shell_exec("/usr/local/bin/ffmpeg -i test.mp3 -codec:a libmp3lame -b:a 128k out.mp3");

echo shell_exec("ffmpeg -i test.mp3 -codec:a libmp3lame -b:a 128k out.mp3");
Solution
ffmpeg writes its output to stderr, so you need to redirect it. Add 2>&1 to the command line:
echo shell_exec("/usr/local/bin/ffmpeg -i test.mp3 -codec:a libmp3lame -b:a 128k out.mp3 2>&1");
Then you will see the output.
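The same pitfall exists outside PHP: ffmpeg reports everything on stderr, so any wrapper that captures only stdout sees an empty string. A small Python sketch demonstrating the effect, using a trivial write-to-stderr command as a stand-in for ffmpeg:

```python
import subprocess
import sys

# Stand-in for ffmpeg: a child process that writes only to stderr.
cmd = [sys.executable, "-c", "import sys; sys.stderr.write('log line\\n')"]

# Capturing stdout alone yields an empty string...
only_stdout = subprocess.run(cmd, stdout=subprocess.PIPE).stdout.decode()

# ...while merging stderr into stdout (the shell's 2>&1) captures the text.
merged = subprocess.run(cmd, stdout=subprocess.PIPE,
                        stderr=subprocess.STDOUT).stdout.decode()

print(repr(only_stdout))  # ''
print(repr(merged))       # 'log line\n'
```

subprocess.STDOUT plays the same role here that appending 2>&1 plays in the shell_exec() fix above.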

If you installed it following the second method, that part is fine. The problem is that in the second method you didn't delete the exec function from the disabled-functions list (deleting it there is what re-enables it). Otherwise, the image-merging part of your code may have an error; try the video-concatenation approach from the code you downloaded.


import requests
from bs4 import BeautifulSoup
import json
import pyperclip

def get_list(link, pwd):
    print('Fetching file info...')
    data = {"link": link, "pwd": pwd}  # parameters for the file-info request (share link and access code)
    data = json.dumps(data)  # serialize to JSON
    url = 'http://www.a*****.com/pan/list'  # file-info endpoint
    header={"Accept":"*/*","Accept-Encoding":"gzip, deflate","Accept-Language":"zh-CN,zh;q=0.9","Connection":"keep-alive","Content-Length":"71","Content-Type":"application/json; charset=UTF-8","Cookie":"__51vcke__JQGRABELTIK919aI=f5234120-4649-56e4-9626-da06ddcd4e8c; __51vuft__JQGRABELTIK919aI=1657722727458; __51uvsct__JQGRABELTIK919aI=4; __vtins__JQGRABELTIK919aI=%7B%22sid%22%3A%20%229c805819-ae46-5d39-93fb-025a7c8d0772%22%2C%20%22vd%22%3A%202%2C%20%22stt%22%3A%206946%2C%20%22dr%22%3A%206946%2C%20%22expires%22%3A%201657789254600%2C%20%22ct%22%3A%201657787454600%7D","Host":"www.a*****.com","Origin":"http://www.a*****.com","Referer":"http://www.a*****.com/","User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36","X-Requested-With":"XMLHttpRequest",}
    re = requests.post(url, headers=header, data=data)  # request the file info
    if "null" in re.text:
        print('Check that the share link and access code are correct and not expired')
        exit()
    else:
        soup = BeautifulSoup(re.content, "lxml")
        return eval(soup.p.text)  # return the file-info dict (assumes the response body is a dict literal)
def get_super_link(surl, shareId, shareUk, bdclnd, fsid):
    print('Requesting high-speed link...')
    data = {"surl": surl, "shareId": shareId, "shareUk": shareUk, "bdclnd": bdclnd, "fsid": fsid}  # parameters for the high-speed-link request (taken from the file info)
    data = json.dumps(data)  # serialize to JSON
    url = 'http://www.a*****.com/pan/getsuperlink'  # high-speed-link endpoint
    header={"Accept":"*/*","Accept-Encoding":"gzip, deflate","Accept-Language":"zh-CN,zh;q=0.9","Connection":"keep-alive","Content-Length":"1911","Content-Type":"application/json; charset=UTF-8","Cookie":"__51vcke__JQGRABELTIK919aI=f5234120-4649-56e4-9626-da06ddcd4e8c; __51vuft__JQGRABELTIK919aI=1657722727458; __51uvsct__JQGRABELTIK919aI=4; __vtins__JQGRABELTIK919aI=%7B%22sid%22%3A%20%229c805819-ae46-5d39-93fb-025a7c8d0772%22%2C%20%22vd%22%3A%202%2C%20%22stt%22%3A%206946%2C%20%22dr%22%3A%206946%2C%20%22expires%22%3A%201657789254600%2C%20%22ct%22%3A%201657787454600%7D","Host":"www.a*****.com","Origin":"http://www.a*****.com","Referer":"http://www.a*****.com/","User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36","X-Requested-With":"XMLHttpRequest",}
    re = requests.post(url, headers=header, data=data)  # request the high-speed link
    soup = BeautifulSoup(re.content, "lxml")
    if soup.p is None:
        print("Only single-file shares are supported for now; please create a new share link")
        exit()
    dicts = eval(soup.p.text)  # dict containing the high-speed link
    url = dicts['url']  # extract the high-speed link
    return url  # return the high-speed link

info = get_list('https://pan.baidu.com/s/1TV8eyxBO21xVQ4hd_QVOuw', 'j6a')  # pass a valid Baidu Netdisk share link and access code to fetch the file info
# print(info)
link = get_super_link(info['surl'], info['shareId'], info['shareUk'], info['bdclnd'], info['data'][0]['fs_id'])  # pull the required parameters from the file info and request the high-speed link
pyperclip.copy(link)  # copy the high-speed link to the clipboard
print(link)
print('High-speed link copied to the clipboard')
print("File name: " + info['data'][0]['server_filename'])  # print the file name
print("File size: " + str(info['data'][0]['size']))  # print the file size
print("Note: 1. You must download with a download manager such as IDM; a browser will not work")

Not sure whether this works.
See whether this article helps: https://developer.aliyun.com/ask/230887