How to write a curl GET request in Python to decrease parsing time

I have a basic curl GET request that works with a site API in PHP:

$headers = array(
  "Content-type: text/xml;charset=\"windows-1251\"",
  "Host:api.content.com",
  "Accept:*/*",
  "Authorization:qwerty"
);

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL,"https://api.content.com/v1.xml");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_TIMEOUT, 60);
curl_setopt($ch, CURLOPT_AUTOREFERER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, false);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);

$data = curl_exec($ch);
$f = new SimpleXMLElement($data);

#echo $data;
$total = $f['total'];
curl_close($ch);

What is the best way to write this in Python, given that the request will be run in separate subprocesses to decrease parsing time?

You can use requests; from the requests documentation:

>>> import json
>>> import requests
>>> url = 'https://api.github.com/some/endpoint'
>>> payload = {'some': 'data'}
>>> headers = {'content-type': 'application/json'}

>>> r = requests.post(url, data=json.dumps(payload), headers=headers)
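For your case a GET is enough. Here is a minimal sketch of the PHP snippet translated to requests, reusing the URL and headers from your question (`fetch_total` and `parse_total` are just illustrative names; `verify=False` mirrors the disabled `CURLOPT_SSL_VERIFY*` options, and the root element's `total` attribute is read the same way `$f['total']` does with SimpleXMLElement):

```python
import xml.etree.ElementTree as ET

import requests

# headers taken from the PHP snippet in the question
HEADERS = {
    "Content-Type": 'text/xml;charset="windows-1251"',
    "Accept": "*/*",
    "Authorization": "qwerty",
}

def parse_total(xml_text):
    # SimpleXMLElement's $f['total'] reads the "total" attribute
    # of the root element; ElementTree's .get() does the same
    root = ET.fromstring(xml_text)
    return root.get("total")

def fetch_total(url):
    # timeout=60 matches CURLOPT_TIMEOUT; verify=False matches the
    # disabled SSL peer/host verification in the PHP code
    r = requests.get(url, headers=HEADERS, timeout=60, verify=False)
    return parse_total(r.text)
```

Calling `fetch_total("https://api.content.com/v1.xml")` then returns the `total` attribute as a string, or `None` if the root element has no such attribute.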

You can use either of the following modules:

  • urllib2 (in the Python 2 standard library; urllib.request in Python 3)
  • requests (third-party, you need to install it)

Example:

>>> import requests
>>> r = requests.get('http://example.com/')
>>> print(r.text)
>>> import urllib2
>>> response = urllib2.urlopen('http://example.com')
>>> print(response.info())
>>> html = response.read()
>>> print(html)