Please look at the PHP code below. It is from a download script:
while (ob_get_level() > 0) {
    ob_end_clean();             // drop any output buffering so chunks go straight out
}
set_time_limit(0);              // the download may take a long time
ignore_user_abort(true);        // keep running so we can detect the abort ourselves
$file = fopen(MATIN_FILE_PATH, "rb"); // the main file
$chunksize = 2 * 1024 * 1024;   // 2 MB per chunk
while (!feof($file)) {
    echo @fread($file, $chunksize);
    flush();
    if (connection_status() == 1) { // CONNECTION_ABORTED: the client gave up
        @fclose($file);
        exit;
    }
}
@fclose($file);
exit;
In this code, you can see that I send 2 MB per chunk. Now imagine a client with a download speed of 100 KB/s.

After a lot of debugging, I found out that each time the client finishes downloading a 2 MB chunk, the write to the client (the echo/flush) completes and the while loop moves on to the next iteration. So what is PHP doing during that time? Is it waiting for the user to finish downloading the 2 MB completely before it sends the next 2 MB? And if so, wouldn't it be better to send 10 MB or 50 MB per chunk?
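For what it's worth, here is a minimal sketch of how the per-chunk timing could be checked (it reuses the same $file and $chunksize as above; the log message is just illustrative):

while (!feof($file)) {
    $start = microtime(true);
    echo @fread($file, $chunksize);
    flush();
    // with a slow client, most of this time is spent writing to the socket
    error_log(sprintf("chunk sent in %.2f s", microtime(true) - $start));
}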
Thanks for any detailed guidance.
Imagine you have 10 simultaneous requests for a file served by this script, with the chunk size set to 50 MB. For each request a new PHP process is spawned, and each of them needs about 50 MB of your server's memory just to hold the result of fread($file, 50*1024*1024). So you will have 500 MB of memory consumed.
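You can see this for yourself by checking the peak memory after a single fread() (a minimal sketch; the file path is a placeholder):

$file = fopen('/path/to/bigfile.bin', 'rb'); // placeholder path
$chunk = fread($file, 50 * 1024 * 1024);     // allocates the whole 50 MB as one string
fclose($file);
echo memory_get_peak_usage(true), " bytes peak\n"; // roughly the chunk size plus PHP overhead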
If, as you said, a client's speed is 100 KB/s, then each download takes a long time, so the probability of having 100 simultaneous connections is not that low, and 100 concurrent requests would already be 5 GB of RAM. Do you have that much, and do you really need to spend it all on this?
You cannot make the user download the file faster than their actual connection speed, so the chunk size does not significantly matter here. Nor will reducing the number of loop iterations help to speed things up: I have not benchmarked it, but the loop itself almost certainly runs far faster than the I/O to the remote client. So the only thing you should really be concerned about is making your server work reliably.
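In practice that means a much smaller chunk is the safer choice. Here is a minimal sketch of the same loop with a 64 KB chunk (MATIN_FILE_PATH is assumed to be defined elsewhere, as in your script); since the client's connection is the bottleneck, the download is just as fast, but each request now holds only tens of KB in memory instead of tens of MB:

while (ob_get_level() > 0) {
    ob_end_clean();
}
set_time_limit(0);
ignore_user_abort(true);

$file = fopen(MATIN_FILE_PATH, 'rb');
if ($file === false) {
    http_response_code(404); // bail out cleanly if the file cannot be opened
    exit;
}

$chunksize = 64 * 1024; // 64 KB: enough to keep the socket buffer fed
while (!feof($file)) {
    echo fread($file, $chunksize);
    flush();
    if (connection_aborted()) { // client went away: stop wasting resources
        break;
    }
}
fclose($file);

With 100 concurrent clients, this variant needs megabytes of RAM in total instead of gigabytes.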