On Jan 15, 4:46 pm, r0g wrote:
João wrote:
On Jan 15, 2:38 pm, r0g wrote:
João wrote:
On Jan 14, 5:58 pm, r0g wrote:
João wrote:
On Jan 12, 10:07 pm, r0g wrote:
João wrote:
for the following data,
authentication = "UID=somestring&"
message = 'PROBLEM severity High: OperatorX Plat1(locationY) global Succ. : 94.470000%'
dest_number = 'XXXXXXXXXXX'
url_values = urlencode({'M':message})
enc_data = authentication + url_values + dest_number
I'm getting null for
full_url = Request(url, enc_data, headers)
and thus,
response = urlopen(full_url).read()
TypeError: <exceptions.TypeError instance at 0x2b4d88ec6440>
Are you sure it's returning a null and not just some other unexpected value?
I think your problem may be that you are passing a urllib2 class to
urllib(1)'s urlopen. Try using urllib2's urlopen instead e.g.
import urllib2
request_object = urllib2.Request('http://www.example.com')
response = urllib2.urlopen(request_object)
the_page = response.read()
Thanks Roger.
I think it's a null because I did a print(full_url) right after the assignment.
I tried
request_object = urllib2.Request('http://www.example.com')
but when printing I get: <urllib2.Request instance at 0x2afaa2fe3f80>
Hi João,
That's exactly what you want, an object that is an instance of the
Request class. That object doesn't do anything by itself, you still need
to a) Connect to the server and request that URL and b) Read the data
from the server.
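As a quick illustration of that point, the Request object only stores what you gave it; no connection is made until urlopen() is called. A minimal sketch (the try/except import just lets the same code run under Python 3, where the module is urllib.request):

```python
try:
    import urllib2  # Python 2
except ImportError:
    import urllib.request as urllib2  # same API in Python 3

# Build the request object. Nothing goes over the network here.
request_object = urllib2.Request('http://www.example.com')

# The object simply holds the URL (and any data/headers) for later use.
print(request_object.get_full_url())
```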
a) To connect to the web server and initialize the request you need to
call urllib2.urlopen() with the Request object you just created and
assign the result to a name e.g.
response = urllib2.urlopen(request_object)
That will give you a response object whose .read() method returns the web page data.
the_page = response.read()
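Putting steps a) and b) together, a minimal sketch wrapped in a function (the try/except import covers Python 3, where the same classes live in urllib.request):

```python
try:
    import urllib2  # Python 2
except ImportError:
    import urllib.request as urllib2  # same API in Python 3

def fetch(url):
    """Fetch url and return the response body."""
    request_object = urllib2.Request(url)       # build the request
    response = urllib2.urlopen(request_object)  # a) connect and send it
    return response.read()                      # b) read the page data
```

Under Python 3 you can even exercise it without a network connection, e.g. fetch('data:text/plain,hello').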
If that doesn't make sense or seem to work for you then please try
reading the following website from top to bottom before taking any
further steps...
I've read about Python 2.4 not playing well with proxies even with no
proxy activated.
Any suggestions?
I doubt any language can play well with proxies if there are none so I
doubt it's a factor ;)
Good luck,
I expressed myself poorly.
I meant I'd read about some issues getting Request + urlopen working
when there's a proxy involved (as in my case),
even when activating a no-proxy configuration, something like,
proxy_support = urllib.ProxyHandler({})
opener = urllib.build_opener(proxy_support)
But I don't know how to use it :(
That is how you use it IIRC, except you also need to install the opener
(install_opener); subsequent urlopen() calls will then use the custom handler.

From what I can tell, you should be using urllib2 though, not urllib.

Let's take a step back. You had the following line...

request_object = urllib2.Request('http://www.example.com')

...You printed it and it showed that you had created a Request object,
right? Now what happens when you type...

response = urllib2.urlopen(request_object)
print response


Thanks for the patience Roger.
Your explanation opened my eyes.

I finally got it to work, and it turned out I didn't have to include
any custom proxy handler to avoid our proxy.
It ended up as such a small and simple block of code after putting the
pieces together.

#!/usr/bin/env python

import sys
from urllib2 import Request, urlopen
from urllib import urlencode

authentication = "UID4675&PW=gdberishyb8LcIANtvT89QVQ==&"
url = ''
encoded_data = urlencode({'M': sys.argv[2]})

# Was having a problem with this one: I shouldn't have tried to pass the
# authentication as an urlencode parameter, because it was breaking the
# password string!
sent_data = authentication + encoded_data

full_url = Request(url,sent_data)
response = urlopen(full_url).read()
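The comment above is worth demonstrating: if the already-formed authentication string is passed through urlencode(), its '=' and '&' characters get percent-escaped and the credentials break. A small sketch with made-up credentials and a hypothetical 'auth' key (the try/except import covers Python 3, where urlencode lives in urllib.parse):

```python
try:
    from urllib import urlencode  # Python 2
except ImportError:
    from urllib.parse import urlencode  # Python 3

authentication = "UID=4675&PW=secret==&"  # made-up credentials
message = "PROBLEM severity High"

# Wrong: urlencode escapes the '=' and '&' inside the credential string.
broken = urlencode({'auth': authentication, 'M': message})

# Right: encode only the free-text field, then concatenate.
working = authentication + urlencode({'M': message})

print(broken)   # auth=UID%3D4675%26PW%3Dsecret%3D%3D%26&M=...
print(working)  # UID=4675&PW=secret==&M=PROBLEM+severity+High
```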

Discussion Overview
group: python-list
posted: Jan 11, '10 at 2:00p
active: Jan 18, '10 at 8:50p

2 users in discussion: João (9 posts), r0g (5 posts)


