This module provides a high-level interface for fetching data across the World Wide Web. In particular, the urlopen() function is similar to the built-in function open(), but accepts Universal Resource Locators (URLs) instead of filenames. Some restrictions apply -- it can only open URLs for reading, and no seek operations are available.
It defines the following public functions:
urlopen(url[, data[, proxies]])
Open a network object denoted by a URL for reading. If the connection cannot be made, or if the server returns an error code, the IOError exception is raised. If all went well, a file-like object is returned; it supports the following methods: read(), readline(), readlines(), fileno(), close(), info() and geturl().
Except for the info() and geturl() methods, these methods have the same interface as for file objects -- see section 3.9 in this manual. (It is not a built-in file object, however, so it can't be used at those few places where a true built-in file object is required.)
The info() method returns an instance of the class mimetools.Message containing meta-information associated with the URL. When the method is HTTP, these headers are those returned by the server at the head of the retrieved HTML page (including Content-Length and Content-Type). When the method is FTP, a Content-Length header will be present if (as is now usual) the server passed back a file length in response to the FTP retrieval request. A Content-Type header will be present if the MIME type can be guessed. When the method is local-file, returned headers will include a Date representing the file's last-modified time, a Content-Length giving file size, and a Content-Type containing a guess at the file's type. See also the description of the mimetools module.
The geturl() method returns the real URL of the page. In some cases, the HTTP server redirects a client to another URL. The urlopen() function handles this transparently, but in some cases the caller needs to know which URL the client was redirected to. The geturl() method can be used to get at this redirected URL.
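For example, the following sketch (the URL is purely illustrative) reads a page and examines the information returned by info() and geturl():
import urllib

f = urllib.urlopen('http://www.example.com/index.html')  # illustrative URL
print f.geturl()                  # final URL, after any redirection
print f.info()['Content-Type']    # headers as a mimetools.Message
data = f.read()                   # read the body as from a file object
f.close()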
If the url uses the http: scheme identifier, the optional data argument may be given to specify a POST request (normally the request type is GET). The data argument must be in standard application/x-www-form-urlencoded format; see the urlencode() function below.
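For example, a form could be submitted with a POST request as follows (the URL and field names are illustrative only):
import urllib

params = urllib.urlencode({'spam': 1, 'eggs': 2, 'bacon': 0})
f = urllib.urlopen('http://www.example.com/cgi-bin/query', params)  # POST request
print f.read()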
The urlopen() function works transparently with proxies which do not require authentication. In a Unix or Windows environment, set the http_proxy, ftp_proxy or gopher_proxy environment variables to a URL that identifies the proxy server before starting the Python interpreter. For example (the "%" is the command prompt):
% http_proxy="http://www.someproxy.com:3128"
% export http_proxy
% python
...
In a Windows environment, if no proxy environment variables are set, proxy settings are obtained from the registry's Internet Settings section.
In a Macintosh environment, urlopen() will retrieve proxy information from Internet Config.
Alternatively, the optional proxies argument may be used to explicitly specify proxies. It must be a dictionary mapping scheme names to proxy URLs, where an empty dictionary causes no proxies to be used, and None (the default value) causes environmental proxy settings to be used as discussed above. For example:
# Use http://www.someproxy.com:3128 for http proxying
proxies = {'http': 'http://www.someproxy.com:3128'}
filehandle = urllib.urlopen(some_url, proxies=proxies)

# Don't use any proxies
filehandle = urllib.urlopen(some_url, proxies={})

# Use proxies from environment - both versions are equivalent
filehandle = urllib.urlopen(some_url, proxies=None)
filehandle = urllib.urlopen(some_url)
In Python versions before 2.3, the urlopen() function did not support explicit proxy specification; overriding the environmental proxy settings required using URLopener, or a subclass such as FancyURLopener.
Proxies which require authentication for use are not currently supported; this is considered an implementation limitation.
Changed in version 2.3: Added the proxies support.
urlretrieve(url[, filename[, reporthook[, data]]])
Copy a network object denoted by a URL to a local file, if necessary. Returns a tuple (filename, headers) where filename is the local file name under which the object can be found, and headers is whatever the info() method of the object returned by urlopen() returned (for a remote object, possibly cached).
Exceptions are the same as for urlopen().
The second argument, if present, specifies the file location to copy
to (if absent, the location will be a tempfile with a generated name).
The third argument, if present, is a hook function that will be called once on establishment of the network connection and once after each block read thereafter. The hook will be passed three arguments: a count of blocks transferred so far, a block size in bytes, and the total size of the file. The total size may be -1 on older FTP servers which do not return a file size in response to a retrieval request.
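The following sketch shows one possible reporthook (the URL and file name are illustrative only):
import urllib

def report(blocks, block_size, total_size):
    # total_size may be -1 when the server does not report a length
    if total_size > 0:
        print '%d of %d bytes' % (min(blocks * block_size, total_size), total_size)
    else:
        print '%d blocks read (total size unknown)' % blocks

filename, headers = urllib.urlretrieve('http://www.example.com/big.zip',
                                       'big.zip', reporthook=report)
print 'saved as', filename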
If the url uses the http: scheme identifier, the optional data argument may be given to specify a POST request (normally the request type is GET). The data argument must be in standard application/x-www-form-urlencoded format; see the urlencode() function below.
Changed in version 2.5:
urlretrieve() will raise ContentTooShortError
when it detects that the amount of data available
was less than the expected amount (which is the size reported by a
Content-Length header). This can occur, for example, when the
download is interrupted.
The Content-Length is treated as a lower bound: if there's more data
to read, urlretrieve reads more data, but if less data is available,
it raises the exception.
You can still retrieve the downloaded data in this case, it is stored
in the content attribute of the exception instance.
If no Content-Length header was supplied, urlretrieve cannot check the size of the data it has downloaded, and just returns it. In this case you simply have to assume that the download was successful.
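For example, a caller could handle an interrupted download as follows (the URL is illustrative only):
import urllib

try:
    filename, headers = urllib.urlretrieve('http://www.example.com/large.zip')
except urllib.ContentTooShortError, e:
    print 'download interrupted after %d bytes' % len(e.content)
    # the partial data is still available as e.content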
The public functions urlopen() and urlretrieve() create an instance of the FancyURLopener class and use it to perform their requested actions. To override this functionality, programmers can create a subclass of URLopener or FancyURLopener, then assign an instance of that class to the urllib._urlopener variable before calling the desired function.
For example, applications may want to specify a different
User-Agent: header than URLopener defines. This
can be accomplished with the following code:
import urllib

class AppURLopener(urllib.FancyURLopener):
    version = "App/1.7"

urllib._urlopener = AppURLopener()
urlcleanup()
Clear the cache that may have been built up by previous calls to urlretrieve().
quote(string[, safe])
Replace special characters in string using the "%xx" escape. Letters, digits, and the characters "_.-" are never quoted. The optional safe parameter specifies additional characters that should not be quoted -- its default value is '/'.
Example: quote('/~connolly/') yields '/%7econnolly/'.
quote_plus(string[, safe])
Like quote(), but also replaces spaces by plus signs, as required for quoting HTML form values. Plus signs in the original string are escaped unless they are included in safe; unlike quote(), its safe parameter does not default to '/'.
unquote(string)
Replace "%xx" escapes by their single-character equivalent.
Example: unquote('/%7Econnolly/') yields '/~connolly/'.
unquote_plus(string)
Like unquote(), but also replaces plus signs by spaces, as required for unquoting HTML form values.
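For example (the strings are illustrative only):
import urllib

print urllib.quote_plus('a b & c')       # prints 'a+b+%26+c'
print urllib.unquote_plus('a+b+%26+c')   # prints 'a b & c'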
urlencode(query[, doseq])
Convert a mapping object or a sequence of two-element tuples to a "url-encoded" string, suitable to pass to urlopen() above as the optional data argument. This is useful to pass a dictionary of form fields to a POST request. The resulting string is a series of key=value pairs separated by "&" characters, where both key and value are quoted using quote_plus() above. If the optional parameter doseq is present and evaluates to true, individual key=value pairs are generated for each element of the value sequence for that key.
When a sequence of two-element tuples is used as the query argument,
the first element of each tuple is a key and the second is a value. The
order of parameters in the encoded string will match the order of parameter
tuples in the sequence.
The cgi module provides the functions
parse_qs() and parse_qsl() which are used to
parse query strings into Python data structures.
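For example (the field names are illustrative only):
import urllib

print urllib.urlencode([('tag', 'web'), ('tag', 'python')])
# prints 'tag=web&tag=python'; tuple order is preserved

print urllib.urlencode({'tag': ['web', 'python']}, doseq=1)
# with doseq true, each element of the sequence becomes its own key=value pair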
pathname2url(path)
Convert the pathname path from the local syntax for a path to the form used in the path component of a URL. This does not produce a complete URL. The return value will already be quoted using the quote() function.
url2pathname(path)
Convert the path component path from an encoded URL to the local syntax for a path. This does not accept a complete URL. This function uses unquote() to decode path.
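For example, on a Unix system (the path is illustrative; results differ between platforms):
import urllib

print urllib.pathname2url('/home/user/file name.txt')    # '/home/user/file%20name.txt'
print urllib.url2pathname('/home/user/file%20name.txt')  # '/home/user/file name.txt'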
class URLopener([proxies[, **x509]])
Base class for opening and reading URLs. Unless you need to support opening objects using schemes other than http:, ftp:, gopher: or file:, you probably want to use FancyURLopener.
By default, the URLopener class sends a User-Agent: header of "urllib/VVV", where VVV is the urllib version number. Applications can define their own User-Agent: header by subclassing URLopener or FancyURLopener and setting the class attribute version to an appropriate string value in the subclass definition.
The optional proxies parameter should be a dictionary mapping scheme names to proxy URLs, where an empty dictionary turns proxies off completely. Its default value is None, in which case environmental proxy settings will be used if present, as discussed in the definition of urlopen(), above.
Additional keyword parameters, collected in x509, may be used for authentication of the client when using the https: scheme. The keywords key_file and cert_file are supported to provide an SSL key and certificate; both are needed to support client authentication.
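For example, a client certificate might be supplied as follows (a sketch only; the paths and URL are hypothetical, and Python must be built with SSL support):
import urllib

# key_file and cert_file are passed through the **x509 keyword parameters
opener = urllib.URLopener(key_file='/path/to/client.key',
                          cert_file='/path/to/client.crt')
f = opener.open('https://www.example.com/protected/')   # hypothetical URL
print f.read()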
URLopener objects will raise an IOError exception if the server returns an error code.
class FancyURLopener(...)
FancyURLopener subclasses URLopener providing default handling for the following HTTP response codes: 301, 302, 303, 307 and 401. For the 30x response codes listed above, the Location header is used to fetch the actual URL. For 401 response codes (authentication required), basic HTTP authentication is performed. For all other response codes, the method http_error_default() is called which you can override in subclasses to handle the error appropriately.
Note: According to the letter of RFC 2616, 301 and 302 responses to POST requests must not be automatically redirected without confirmation by the user. In reality, browsers do allow automatic redirection of these responses, changing the POST to a GET, and urllib reproduces this behaviour.
The parameters to the constructor are the same as those for URLopener.
Note: When performing basic authentication, a FancyURLopener instance calls its prompt_user_passwd() method. The default implementation asks the user for the required information on the controlling terminal. A subclass may override this method to support more appropriate behavior if needed.
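For example, a subclass might supply credentials programmatically instead of prompting (a sketch only; the URL and credentials are placeholders):
import urllib

class MyOpener(urllib.FancyURLopener):
    def prompt_user_passwd(self, host, realm):
        # return a (user, password) tuple instead of reading from the terminal
        return ('user', 'secret')

f = MyOpener().open('http://www.example.com/private/')   # hypothetical URL
print f.read()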
exception ContentTooShortError(msg[, content])
This exception is raised when the urlretrieve() function detects that the amount of the downloaded data is less than the expected amount (given by the Content-Length header). The content attribute stores the downloaded (and supposedly truncated) data.
Restrictions:
The code handling the FTP protocol cannot differentiate between a file and a directory. This can lead to unexpected behavior when attempting to read a URL that points to a file that is not accessible. If the URL ends in a /, it is assumed to refer to
a directory and will be handled accordingly. But if an attempt to
read a file leads to a 550 error (meaning the URL cannot be found or
is not accessible, often for permission reasons), then the path is
treated as a directory in order to handle the case when a directory is
specified by a URL but the trailing /
has been left off. This can
cause misleading results when you try to fetch a file whose read
permissions make it inaccessible; the FTP code will try to read it,
fail with a 550 error, and then perform a directory listing for the
unreadable file. If fine-grained control is needed, consider using the
ftplib module, subclassing FancyURLopener, or changing
_urlopener to meet your needs.
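For example, ftplib makes the failure explicit instead of falling back to a directory listing (a sketch only; the host, credentials and path are hypothetical):
import ftplib

ftp = ftplib.FTP('ftp.example.com')
ftp.login('anonymous', 'user@example.com')
chunks = []
# a 550 error raises ftplib.error_perm here instead of silently listing the directory
ftp.retrbinary('RETR /pub/somefile.txt', chunks.append)
ftp.quit()
data = ''.join(chunks)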