Python can't fetch web pages on Ubuntu

I wrote a simple Python test script on Ubuntu 12.04:

# -*- coding: UTF-8 -*-

import time,urllib2,urllib,StringIO,sys,os,multiprocessing,sqlite3
if __name__ == '__main__':
    stockUrl = "http://www.baidu.com"
    stockWeb = urllib.urlopen(stockUrl).read()
    print stockWeb

But it produces the following error:
Traceback (most recent call last):
  File "test.py", line 6, in <module>
    stockWeb = urllib.urlopen(stockUrl).read()
  File "/usr/lib/python2.7/urllib.py", line 86, in urlopen
    return opener.open(url)
  File "/usr/lib/python2.7/urllib.py", line 207, in open
    return getattr(self, name)(url)
  File "/usr/lib/python2.7/urllib.py", line 344, in open_http
    h.endheaders(data)
  File "/usr/lib/python2.7/httplib.py", line 954, in endheaders
    self._send_output(message_body)
  File "/usr/lib/python2.7/httplib.py", line 814, in _send_output
    self.send(msg)
  File "/usr/lib/python2.7/httplib.py", line 776, in send
    self.connect()
  File "/usr/lib/python2.7/httplib.py", line 757, in connect
    self.timeout, self.source_address)
  File "/usr/lib/python2.7/socket.py", line 553, in create_connection
    for res in getaddrinfo(host, port, 0, SOCK_STREAM):
IOError: [Errno socket error] [Errno -2] Name or service not known

I googled around but couldn't find a solution. What is the problem?
Also, the same code runs fine on another machine with the same environment.

First try pinging that domain to see whether it resolves at all.

This looks like a DNS resolution failure; I suspect you can't even ping the host from that machine.
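To confirm that it really is DNS, you can call the resolver directly from Python; socket.getaddrinfo is the same call that fails at the bottom of your traceback. A minimal sketch, assuming the same Python 2.7 environment:

# -*- coding: UTF-8 -*-
# Minimal DNS check: if getaddrinfo raises here, name resolution itself
# is broken on this machine and urllib/urllib2 cannot work either.
import socket

try:
    for res in socket.getaddrinfo("www.baidu.com", 80, 0, socket.SOCK_STREAM):
        print res[4]  # each resolved (ip, port) pair
except socket.gaierror as e:
    print "DNS lookup failed:", e

If this fails too, the problem is the machine's resolver configuration (for example /etc/resolv.conf), not the script.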

Check first whether you can reach that address at all.

I just hit the same problem. Capturing packets with tcpdump showed DNS lookup packets being sent over and over, i.e. Python could not resolve the site's hostname to an IP address. My system could ping www.baidu.com directly, yet connecting from Python still failed. Then I used Baidu's IP address directly, like this:
response = urllib2.urlopen("http://180.97.33.107"), and I could fetch the Baidu homepage normally.
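A small extension of that workaround, in case the server hosts several sites on one IP, is to connect to the IP but keep the real hostname in the Host header. A sketch, assuming the 180.97.33.107 address above still points at Baidu:

# -*- coding: UTF-8 -*-
# Workaround sketch: bypass DNS by connecting to the IP directly, but
# still send the original hostname in the Host header so virtual-hosted
# servers return the right page.
import urllib2

req = urllib2.Request("http://180.97.33.107/",
                      headers={"Host": "www.baidu.com"})
response = urllib2.urlopen(req)
print response.read()[:200]  # first 200 bytes of the page

Hard-coding an IP is only a stopgap, though, since the address can change; fixing the machine's DNS configuration is the real solution.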