I need a PHP script for a resumable file download from a URL to my server. It should start the download, and when the connection drops (after anywhere from 30 seconds to 5 minutes) resume it, and keep doing so until the whole file is complete.
There is something similar in Perl at http://curl.haxx.se/programs/download.txt , but I want to do it in PHP; I don't know Perl.
I think I can use CURLOPT_RANGE
to download chunks, and fopen($fileName, "a")
to append them to the file on the server.
Here is my attempt:
<?php
session_start(); // $_SESSION['url'] and $_SESSION['filename'] must be set before this runs

function run()
{
    while (1) {
        get_chunk($_SESSION['url'], $_SESSION['filename']);
        sleep(5);
        flush();
    }
}

function get_chunk($url, $fileName)
{
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $url);
    curl_setopt($ch, CURLOPT_BINARYTRANSFER, 1);

    if (file_exists($fileName)) {
        // Resume from the current size of the partial file.
        $from = filesize($fileName);
        curl_setopt($ch, CURLOPT_RANGE, $from . "-"); // maybe $from . "-" . ($from + 1048575) for 1MB chunks
    }

    $fp = fopen($fileName, "a"); // append to the partial file
    if (!$fp) {
        exit;
    }
    curl_setopt($ch, CURLOPT_FILE, $fp); // write the response body straight to the file

    $result = curl_exec($ch);
    curl_close($ch);
    fclose($fp);
}
?>
If your intent is to download a file over a flaky connection, the curl command-line tool has a --retry flag to automatically retry the download in case of error and continue where it left off. Unfortunately the PHP extension is missing that option, because libcurl itself is missing it (retries are implemented in the command-line tool, not the library).
Normally I recommend using a library rather than an external command, but rather than rolling your own retry loop it may be simpler in this case to just invoke curl --retry or curl -C - on the command line. wget -c is another option.
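For example, a minimal sketch of shelling out to the curl binary from PHP (this assumes curl is installed on the server; the URL, destination path, and retry counts are just placeholders):

<?php
// Sketch: let the curl binary handle retries and resuming.
// --retry 10      : retry up to 10 times on transient errors
// --retry-delay 5 : wait 5 seconds between retries
// -C -            : continue/resume from where the previous attempt left off
// -L              : follow redirects
$url  = 'http://example.com/big.file';   // hypothetical URL
$dest = '/path/to/big.file';             // hypothetical destination

$cmd = sprintf(
    'curl --retry 10 --retry-delay 5 -C - -L -o %s %s',
    escapeshellarg($dest),
    escapeshellarg($url)
);

exec($cmd, $output, $exitCode);

if ($exitCode !== 0) {
    // Non-zero exit code: the download still failed after all retries.
    error_log("curl exited with code $exitCode");
}
?>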
Otherwise I don't see the need to always fetch the data in fixed-size chunks. Download as much as you can, and if there's an error, resume using CURLOPT_RANGE and the file size, as you are doing now.
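A rough sketch of that approach, staying in PHP with the curl extension (the function name, retry limit, and sleep interval are my own choices, not anything from your code):

<?php
// Sketch: download as much as possible in one go; on failure, resume
// from the current file size using CURLOPT_RANGE.
function download_with_resume($url, $fileName, $maxAttempts = 20)
{
    for ($attempt = 0; $attempt < $maxAttempts; $attempt++) {
        $fp = fopen($fileName, 'a'); // append to whatever we already have
        if (!$fp) {
            return false;
        }

        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_FILE, $fp);
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);

        $have = filesize($fileName);
        if ($have > 0) {
            // Ask the server for everything after the bytes we already have.
            // (The server must support range requests for this to work.)
            curl_setopt($ch, CURLOPT_RANGE, $have . '-');
        }

        $ok  = curl_exec($ch);
        $err = curl_errno($ch);
        curl_close($ch);
        fclose($fp);

        if ($ok && $err === CURLE_OK) {
            return true;            // transfer finished cleanly
        }

        sleep(5);                   // wait a bit before resuming
        clearstatcache();           // so filesize() sees the new size next time
    }
    return false;
}
?>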