Does urllib2.urlopen() actually fetch the page?

I was wondering: when I use urllib2.urlopen(), does it just read the headers, or does it actually bring back the entire webpage?

I.e., does the HTML page actually get fetched on the urlopen() call or on the read() call?

handle = urllib2.urlopen(url)
html = handle.read()

The reason I ask is for this workflow...

  • I have a list of URLs (some of them from URL-shortening services)
  • I only want to read the webpage if I haven't seen that URL before
  • I need to call urlopen() and use geturl() to get the final page the link goes to (after the 302 redirects), so I know whether I've crawled it yet or not.
  • I don't want to incur the overhead of grabbing the HTML if I've already parsed that page.
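To make the question concrete, here's a rough sketch of that workflow; the crawl() and is_new() names are made up for illustration, and it assumes urlopen() does not fetch the whole body up front (which is exactly what I'm asking):

```python
# Rough sketch of the dedup workflow (hypothetical names; assumes urlopen()
# does not pull the entire body before read() is called).
try:
    import urllib2                        # Python 2
except ImportError:
    import urllib.request as urllib2      # the same API lives here on Python 3

seen = set()

def is_new(final_url):
    # Dedup on the post-redirect URL, since shorteners hide the real target.
    if final_url in seen:
        return False
    seen.add(final_url)
    return True

def crawl(url):
    handle = urllib2.urlopen(url)         # follows any 302 redirects
    if not is_new(handle.geturl()):       # final URL after redirects
        handle.close()                    # never call read(): skip the HTML
        return None
    return handle.read()                  # fetch the page only if unseen
```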



I just ran a test with Wireshark. When I called urllib2.urlopen('url-for-a-700mbyte-file'), only the headers and a few packets of the body were retrieved immediately. It wasn't until I called read() that the majority of the body came across the network. This matches what I see when reading the source code of the httplib module.

So, to answer the original question, urlopen() does not fetch the whole body over the network. It fetches the headers and usually some of the body. The rest of the body is fetched when you call read().

The partial body fetch is to be expected, because:

  1. Unless you read an HTTP response one byte at a time, there is no way to know exactly how long the incoming headers will be, and therefore no way to know how many bytes to read before the body starts.

  2. An HTTP client has no control over how many bytes a server bundles into each TCP segment of a response.

In practice, since some of the body is usually fetched along with the headers, you may find that small bodies (e.g., small HTML pages) are fetched entirely on the urlopen() call.

urllib2 always uses HTTP method GET (or POST) and therefore inevitably gets the full page. To use HTTP method HEAD instead (which only gets the headers -- which are enough to follow redirects!), I think you just need to subclass urllib2.Request with your own class and override one short method:

class MyRequest(urllib2.Request):

    def get_method(self):
        # urlopen() asks the Request object for the method, so this sends HEAD
        return "HEAD"

and pass a suitably initialized instance of MyRequest to urllib2.urlopen.
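For instance, a usage sketch (the URL is a placeholder, and the snippet is shimmed so it also runs on Python 3, where urllib2 became urllib.request):

```python
# Self-contained usage sketch of the HEAD-request subclass described above.
try:
    import urllib2                        # Python 2
except ImportError:
    import urllib.request as urllib2      # Python 3 name for the same API

class MyRequest(urllib2.Request):
    def get_method(self):
        return "HEAD"

req = MyRequest("http://example.com/some-short-link")   # placeholder URL
assert req.get_method() == "HEAD"    # urlopen() will now send HEAD, not GET

# Over the network, this resolves redirects without transferring a body:
# handle = urllib2.urlopen(req)
# final = handle.geturl()            # the URL after any 302 redirects
```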

Testing with a local web server: urllib2.urlopen(url) fires the HTTP request, and .read() does not fire another one; it just drains the response from the already-open connection.
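That test is easy to reproduce with a throwaway local server that counts the requests it receives (stdlib only; the import shims cover both the Python 2 and Python 3 module names):

```python
# Count incoming HTTP requests: urlopen() should trigger exactly one,
# and the subsequent read() should not trigger another.
try:
    import urllib2                        # Python 2
except ImportError:
    import urllib.request as urllib2      # Python 3 name for the same API
try:
    from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler
except ImportError:
    from http.server import HTTPServer, BaseHTTPRequestHandler
import threading

hits = []

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        hits.append(self.path)            # record each request that arrives
        body = b"x" * 10000
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):         # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)   # port 0 = pick a free port
thread = threading.Thread(target=server.serve_forever)
thread.daemon = True
thread.start()

url = "http://127.0.0.1:%d/" % server.server_port
resp = urllib2.urlopen(url)               # the HTTP request is fired here...
assert len(hits) == 1
body = resp.read()                        # ...read() only drains the response
assert len(hits) == 1 and len(body) == 10000

server.shutdown()
```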

On a side note, if you use Scrapy, it does HEAD intelligently for you. There's no point in rolling your own solution when this is already done so well elsewhere.

You can choose to read part of the response with something like...

response = urllib2.urlopen(urllib2.Request(url, None, requestHeaders))
chunk = response.read(CHUNKSIZE)

This only reads CHUNKSIZE bytes back from the server; I've just checked.

From looking at the docs and the source, I'm pretty sure it gets the contents of the page: the returned object contains the page.
