Proxymillion IPs with cURL

I am using proxymillion to scrape data from Google. I am using cURL, but instead of a result I get the error "Error 405 (Method Not Allowed)!!1".

My code:

$proxies[] = 'username:password@IP:port';  // Some proxies require user, password, IP and port number
$proxies[] = 'username:password@IP:port';
$proxies[] = 'username:password@IP:port';
$proxies[] = 'username:password@IP:port';


if (isset($proxies)) {  // If the $proxies array contains items, then
    $proxy = $proxies[array_rand($proxies)];    // Select a random proxy from the array and assign to $proxy variable
}
$ch = curl_init();
if (isset($proxy)) {    // If the $proxy variable is set, then
    curl_setopt($ch, CURLOPT_PROXY, $proxy);    // Set CURLOPT_PROXY with proxy in $proxy variable
}
$url="https://www.google.com.pk/?gws_rd=cr,ssl&ei=8kXQWNChIsTSvgSZ3J24DA#q=pakistan&*";


curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_HEADER, 0);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
// curl_setopt($ch, CURLOPT_COOKIEJAR, "cookies.txt");
// curl_setopt($ch, CURLOPT_COOKIEFILE, "cookies.txt");
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "PUT");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.3) Gecko/20070309 Firefox/2.0.0.3");
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
$page = curl_exec($ch);
curl_close($ch);
include 'simple_html_dom.php';  // PHP Simple HTML DOM Parser must be included for this class
$dom = new simple_html_dom();
$html = $dom->load($page);

$title=$html->find("title",0);
echo $title->innertext;

If I guess right, you are looking for a budget solution for scraping Google; is that why you switched to the proxymillion provider in the sample code you linked in the comments?

You cannot scrape with massively shared proxies (which is what that provider sells); Google will spot them, either immediately or within a few pages, and block them.
Also, keeping "&ei=8kXQWNChIsTSvgSZ3J24DA" in the URL is not a good idea: that is not a default entry point into Google, and the parameter will probably link your scrape requests to the browser session you originally copied it from.
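Independent of the proxy question, the 405 itself comes from the request method: the code sets CURLOPT_POST and then overrides it with CURLOPT_CUSTOMREQUEST, "PUT", and Google serves that URL only for GET (and HEAD) requests. You can reproduce this from the command line without any proxy:

```shell
# A PUT request to Google is refused with the same 405 error page,
# while an ordinary GET is answered normally.
curl -s -o /dev/null -w "PUT -> %{http_code}\n" -X PUT "https://www.google.com/"
curl -s -o /dev/null -w "GET -> %{http_code}\n" "https://www.google.com/"
```

Removing both the CURLOPT_POST and the CURLOPT_CUSTOMREQUEST lines from the PHP code makes cURL fall back to a plain GET.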

If you are looking for a budget solution, you can consider using a scraping service (PHP source code here: http://scraping.services/?api&chapter=Source%20Code ); that is cheaper than private proxies in most cases and lets you scrape tens of thousands of keywords for a few USD.

Alternatively, if you want to continue down that route, I would suggest testing your proxymillion performance with a simple bash script.
Use curl or lynx in a bash script (on Linux; on Windows you can do the same with MinGW/msys) and just make them access Google through the proxies. See whether it works at all, or whether you get blocked within a few pages.
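A minimal version of such a check could look like the following (a sketch; the proxy entries are placeholders you would replace with your actual proxymillion credentials):

```shell
#!/bin/sh
# Fetch a Google search result page through each proxy and print the HTTP status.
# 200 means the proxy got through; 403/429 (or a captcha page) means Google blocked it;
# 000 means the proxy itself did not respond.
for proxy in "username:password@1.2.3.4:8080" "username:password@5.6.7.8:8080"; do
    code=$(curl -s -o /dev/null --max-time 10 \
        --proxy "http://$proxy" \
        -A "Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0" \
        -w "%{http_code}" \
        "https://www.google.com/search?q=pakistan")
    echo "$proxy -> $code"
done
```

Run it a few dozen times in a row; a proxy pool that starts returning captchas or 403s after a handful of requests will not survive real scraping.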
But even if you succeed: any shared proxy provider will be unreliable performance-wise.