Download specific files from a URL using PHP / Python

I previously used wget -r in the Linux terminal to download files with certain extensions:

wget -r -A Ext URL

Now my lecturer has assigned me to do the same thing using PHP or Python. Can anyone help?

You can use the PHP function file_get_contents() to retrieve the contents of a document. Its first argument is a filename, which can be either a local path or a URL. If you want to save the result to disk instead of printing it, pass it on to file_put_contents().
See the example from the PHP docs:

<?php
    $homepage = file_get_contents('http://www.example.com/');
    echo $homepage;
?>

I guess urllib will work pretty well for you:

import urllib.request

# Python 3: urlretrieve now lives in urllib.request
urllib.request.urlretrieve(url, filename)

(In Python 2 the same call was urllib.urlretrieve(url, filename).)
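
If you also want the extension filtering of wget -A, a minimal standard-library sketch might look like the following. It fetches a single page, collects its links, and downloads those that match; the start URL and extension are placeholders, and unlike wget -r it does not recurse.

import os
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen, urlretrieve

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag on the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

start_url = "http://www.example.com/"  # placeholder start page
extension = ".pdf"                     # placeholder extension to keep

parser = LinkCollector()
parser.feed(urlopen(start_url).read().decode("utf-8", errors="replace"))

for href in parser.links:
    url = urljoin(start_url, href)     # resolve relative links
    if url.endswith(extension):
        # Naive choice of local filename; fine for a sketch, not for production
        urlretrieve(url, os.path.basename(url))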

Alternatively, you can use Requests, a popular third-party HTTP library. As its own docs put it, Requests is "the only Non-GMO HTTP library for Python, safe for human consumption."

Example (from the doc):

>>> import requests
>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
>>> r.status_code
200
>>> r.headers['content-type']
'application/json; charset=utf8'
>>> r.encoding
'utf-8'
>>> r.text
'{"type":"User"...'
>>> r.json()
{'private_gists': 419, 'total_private_repos': 77, ...}
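
That example returns JSON; for the original task of saving a file to disk, a minimal Requests sketch might look like this (the URL and output filename are placeholders):

import requests

url = 'http://www.example.com/files/report.pdf'  # placeholder URL
response = requests.get(url, stream=True)  # stream to avoid holding the whole file in memory
response.raise_for_status()                # raise an error on a bad HTTP status

with open('report.pdf', 'wb') as f:
    for chunk in response.iter_content(chunk_size=8192):
        f.write(chunk)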

For Python, use a web-crawler library such as Scrapy.

Its spider classes do most of the work once you pass them arguments similar to those you would put on the wget command line.

You can use Scrapy pipelines to filter out unwanted downloads and to post-process the ones you keep, for example by generating thumbnails; a rough spider sketch follows.
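
As a sketch only, not a drop-in solution: the minimal spider below recursively follows links from a placeholder start page and hands every URL ending in a chosen extension to Scrapy's built-in FilesPipeline, which performs the actual downloads. The domain, extension, and storage directory are all assumptions to adapt.

import scrapy

class ExtensionSpider(scrapy.Spider):
    name = "extension_spider"
    allowed_domains = ["example.com"]          # placeholder: stay on one site, like wget -r
    start_urls = ["http://www.example.com/"]   # placeholder start page

    # Enable the stock FilesPipeline and tell it where to store downloads
    custom_settings = {
        "ITEM_PIPELINES": {"scrapy.pipelines.files.FilesPipeline": 1},
        "FILES_STORE": "downloads",
    }

    def parse(self, response):
        # Binary responses (images, PDFs reached directly, ...) have no links to extract
        if not response.headers.get("Content-Type", b"").startswith(b"text/html"):
            return
        for href in response.css("a::attr(href)").getall():
            url = response.urljoin(href)
            if url.endswith(".pdf"):               # placeholder extension, like wget -A
                yield {"file_urls": [url]}         # FilesPipeline downloads these URLs
            else:
                yield response.follow(href, self.parse)  # recurse, like wget -r

You can run it without creating a full project via scrapy runspider myspider.py.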