FAQ
How can I parse an HTML file and collect only the A tags? I have a
start for the code but am unable to figure out how to finish it.
HTML_parse gets the data from the URL document. Thanks for the help.

def HTML_parse(data):
    from HTMLParser import HTMLParser
    parser = MyHTMLParser()
    parser.feed(data)

class MyHTMLParser(HTMLParser):

    def handle_starttag(self, tag, attrs):
        pass

    def handle_endtag(self, tag):
        pass

def read_page(url):
    "this function returns the entire content of the specified URL document"
    import urllib
    connect = urllib.urlopen(url)
    data = connect.read()
    connect.close()
    return data
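For reference, one way the question's parser could be completed is sketched below. It is not the poster's code: the class and list names are illustrative, and it uses the Python 3 spelling of the stdlib module (`html.parser`; in the Python 2 of this thread it was `HTMLParser`).

```python
from html.parser import HTMLParser  # Python 2: from HTMLParser import HTMLParser

class AnchorParser(HTMLParser):
    """Collect the href attribute of every <a> tag fed to the parser."""
    def __init__(self):
        HTMLParser.__init__(self)
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        if tag == 'a':
            for name, value in attrs:
                if name == 'href':
                    self.links.append(value)

def HTML_parse(data):
    parser = AnchorParser()
    parser.feed(data)
    return parser.links

print(HTML_parse('<p><a href="/one">one</a> and <a href="/two">two</a></p>'))
```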

  • Beza1e1 at Sep 24, 2005 at 6:03 pm
    I do not really know what you want to do. Getting the urls from the a
    tags of an html file? I think the easiest method would be a regular
    expression.
    import urllib, sre
    html = urllib.urlopen("http://www.google.com").read()
    sre.findall('href="([^>]+)"', html)
    ['/imghp?hl=de&tab=wi&ie=UTF-8',
    'http://groups.google.de/grphp?hl=de&tab=wg&ie=UTF-8',
    '/dirhp?hl=de&tab=wd&ie=UTF-8',
    'http://news.google.de/nwshp?hl=de&tab=wn&ie=UTF-8',
    'http://froogle.google.de/frghp?hl=de&tab=wf&ie=UTF-8',
    '/intl/de/options/']
    sre.findall('href=[^>]+>([^<]+)</a>', html)
    ['Bilder', 'Groups', 'Verzeichnis', 'News', 'Froogle',
    'Mehr&nbsp;&raquo;', 'Erweiterte Suche', 'Einstellungen',
    'Sprachtools', 'Werbung', 'Unternehmensangebote', 'Alles \xfcber
    Google', 'Google.com in English']

    Google has some strange html, href without quotation marks: <a
    href=http://www.google.com/ncr>Google.com in English</a>
  • Mike Meyer at Sep 24, 2005 at 6:34 pm

    "beza1e1" <andreas.zwinkau at googlemail.com> writes:

    I do not really know what you want to do. Getting the urls from the a
    tags of an html file? I think the easiest method would be a regular
    expression.
    I think this ranks as #2 on the list of "difficult one-day
    hacks". Yeah, it's simple to write an RE that works most of the
    time. It's a major PITA to write one that works in all the legal
    cases. Getting one that also handles all the cases seen in the wild is
    damn near impossible.
    import urllib, sre
    html = urllib.urlopen("http://www.google.com").read()
    sre.findall('href="([^>]+)"', html)
    This fails in a number of cases: whitespace around the "=" sign for
    attributes; quotes around other attributes in the tag (required by
    XHTML); '>' in the URL (legal, but discouraged); attributes quoted
    with single quotes instead of double quotes, or just unquoted. It
    misses IMG SRC attributes. It hands back relative URLs as such,
    instead of resolving them to the absolute URL (which requires checking
    for the base URL in the HEAD), which may or may not be acceptable.
    Google has some strange html, href without quotation marks: <a
    href=http://www.google.com/ncr>Google.com in English</a>
    That's not strange. That's just a bit unusual. Perfectly legal, though
    - any browser (or other html processor) that fails to handle it is
    broken.
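Mike's failure cases are easy to reproduce. A small sketch (Python 3 syntax, illustrative test data) showing the naive pattern missing a legal tag that the stdlib parser handles:

```python
import re
from html.parser import HTMLParser

# Whitespace around '=', single quotes, and a '>' inside the quoted URL:
# all legal, all missed by the naive double-quote pattern.
html = "<a href = '/search?q=a>b'>link</a>"

print(re.findall(r'href="([^>]+)"', html))  # finds nothing

class Hrefs(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.found = []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.found.extend(v for k, v in attrs if k == 'href')

p = Hrefs()
p.feed(html)
print(p.found)  # the parser recovers the full href value
```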

    <mike
    --
    Mike Meyer <mwm at mired.org> http://www.mired.org/home/mwm/
    Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
  • Beza1e1 at Sep 24, 2005 at 6:47 pm
    I think for a quick hack, this is as good as a parser. A simple parser
    would miss some cases as well. REs are hardly extensible though, so
    your criticism is valid.

    The point is what George wants to do. A mixture would be possible as
    well:
    Getting all <a ...> by a RE and then extracting the url with something
    like a parser.
  • Mike Meyer at Sep 24, 2005 at 7:47 pm

    "beza1e1" <andreas.zwinkau at googlemail.com> writes:

    I think for a quick hack, this is as good as a parser. A simple parser
    would miss some cases as well. REs are hardly extensible though, so
    your criticism is valid.
    Pretty much any first attempt is going to miss some cases. There are
    libraries available that have stood the test of time. Simply
    using one of those is the right solution.
    The point is what George wants to do. A mixture would be possible as
    well:
    Getting all <a ...> by a RE and then extracting the url with something
    like a parser.
    I thought the point was to extract all URLs? Those appear in
    attributes of tags other than A tags. While that's a meta-problem that
    requires properly configuring the parser to deal with, it's something
    that's *much* simpler to do if you've got a parser that understands
    the structure of HTML - you should be able to specify tag/attribute
    pairs to look for - than with something that is treating it as
    unstructured text.

    <mike

    --
    Mike Meyer <mwm at mired.org> http://www.mired.org/home/mwm/
    Independent WWW/Perforce/FreeBSD/Unix consultant, email for more information.
  • Stephen Prinster at Sep 24, 2005 at 6:12 pm

    George wrote:
    How can I parse an HTML file and collect only the A tags? I have a
    start for the code but am unable to figure out how to finish it.
    HTML_parse gets the data from the URL document. Thanks for the help.
    Have you tried using Beautiful Soup?

    http://www.crummy.com/software/BeautifulSoup/
  • George Sakkis at Sep 24, 2005 at 10:29 pm

    "Stephen Prinster" wrote:
    George wrote:
    How can I parse an HTML file and collect only the A tags? I have a
    start for the code but am unable to figure out how to finish it.
    HTML_parse gets the data from the URL document. Thanks for the help.
    Have you tried using Beautiful Soup?

    http://www.crummy.com/software/BeautifulSoup/
    I agree; you can do what you want in two lines:

    from BeautifulSoup import BeautifulSoup
    hrefs = [link['href'] for link in BeautifulSoup(urllib.urlopen(url)).fetch('a')]
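The `fetch` method belongs to the BeautifulSoup 2.x of this thread. With today's third-party beautifulsoup4 package (an assumption about the reader's environment, not part of the thread), the equivalent of the two-liner would look like:

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

html = '<a href="/a">a</a><a href="/b">b</a>'
# find_all('a') replaces the old fetch('a'); link['href'] is unchanged
hrefs = [link['href'] for link in BeautifulSoup(html, 'html.parser').find_all('a')]
print(hrefs)
```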

    George
  • Leo Jay at Sep 24, 2005 at 7:53 pm
    You may define a start_a method in MyHTMLParser.

    e.g.
    import htmllib
    import formatter

    class HTML_Parser(htmllib.HTMLParser):
        def __init__(self):
            htmllib.HTMLParser.__init__(self,
                formatter.AbstractFormatter(formatter.NullWriter()))

        def start_a(self, args):
            for key, value in args:
                if key.lower() == 'href':
                    print value

    html = HTML_Parser()
    html.feed(open(r'a.htm', 'r').read())
    html.close()

    On 24 Sep 2005 10:13:30 -0700, George wrote:
    How can I parse an HTML file and collect only the A tags? I have a
    start for the code but am unable to figure out how to finish it.
    HTML_parse gets the data from the URL document. Thanks for the help.

    def HTML_parse(data):
        from HTMLParser import HTMLParser
        parser = MyHTMLParser()
        parser.feed(data)

    class MyHTMLParser(HTMLParser):

        def handle_starttag(self, tag, attrs):
            pass

        def handle_endtag(self, tag):
            pass

    def read_page(url):
        "this function returns the entire content of the specified URL document"
        import urllib
        connect = urllib.urlopen(url)
        data = connect.read()
        connect.close()
        return data

    --
    http://mail.python.org/mailman/listinfo/python-list

    --
    Best Regards,
    Leo Jay
  • Thorsten Kampe at Sep 24, 2005 at 10:31 pm
    * George (2005-09-24 18:13 +0100)
    How can I parse an HTML file and collect only that the A tags.
    import formatter, \
           htmllib, \
           urllib

    url = 'http://python.org'

    htmlp = htmllib.HTMLParser(formatter.NullFormatter())
    htmlp.feed(urllib.urlopen(url).read())
    htmlp.close()

    print htmlp.anchorlist
  • George at Sep 25, 2005 at 12:16 am
    I'm very new to python and I have tried to read the tutorials but I am
    unable to understand exactly how I must do this problem.

    Specifically, the showIPnums function takes a URL as input, calls the
    read_page(url) function to obtain the entire page for that URL, and
    then lists, in sorted order, the IP addresses implied in the "<A
    HREF=? ? ?>" tags within that page.


    """
    Module to print IP addresses of tags in web file containing HTML
    showIPnums('http://22c118.cs.uiowa.edu/uploads/easy.html')
    ['0.0.0.0', '128.255.44.134', '128.255.45.54']
    showIPnums('http://22c118.cs.uiowa.edu/uploads/pytorg.html')
    ['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11',
    '128.255.34.132', '128.255.44.51', '128.255.45.53',
    '128.255.45.54', '129.255.241.42', '64.202.167.129']

    """

    def read_page(url):
    import formatter
    import htmllib
    import urllib

    htmlp = htmllib.HTMLParser(formatter.NullFormatter())
    htmlp.feed(urllib.urlopen(url).read())
    htmlp.close()

    def showIPnums(URL):
    page=read_page(URL)

    if __name__ == '__main__':
    import doctest, sys
    doctest.testmod(sys.modules[__name__])
  • George Sakkis at Sep 25, 2005 at 1:06 am

    "George" wrote:

    I'm very new to python and I have tried to read the tutorials but I am
    unable to understand exactly how I must do this problem.

    Specifically, the showIPnums function takes a URL as input, calls the
    read_page(url) function to obtain the entire page for that URL, and
    then lists, in sorted order, the IP addresses implied in the "<A
    HREF=? ? ?>" tags within that page.


    """
    Module to print IP addresses of tags in web file containing HTML
    showIPnums('http://22c118.cs.uiowa.edu/uploads/easy.html')
    ['0.0.0.0', '128.255.44.134', '128.255.45.54']
    showIPnums('http://22c118.cs.uiowa.edu/uploads/pytorg.html')
    ['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11',
    '128.255.34.132', '128.255.44.51', '128.255.45.53',
    '128.255.45.54', '129.255.241.42', '64.202.167.129']

    """

    def read_page(url):
    import formatter
    import htmllib
    import urllib

    htmlp = htmllib.HTMLParser(formatter.NullFormatter())
    htmlp.feed(urllib.urlopen(url).read())
    htmlp.close()

    def showIPnums(URL):
    page=read_page(URL)

    if __name__ == '__main__':
    import doctest, sys
    doctest.testmod(sys.modules[__name__])

    You forgot to mention that you don't want duplicates in the result. Here's a function that passes
    the doctest:

    from urllib import urlopen
    from urlparse import urlsplit
    from socket import gethostbyname
    from BeautifulSoup import BeautifulSoup

    def showIPnums(url):
        """Return the unique IPs found in the anchors of the webpage at the
        given url.

        >>> showIPnums('http://22c118.cs.uiowa.edu/uploads/easy.html')
        ['0.0.0.0', '128.255.44.134', '128.255.45.54']
        >>> showIPnums('http://22c118.cs.uiowa.edu/uploads/pytorg.html')
        ['0.0.0.0', '128.255.135.49', '128.255.244.57', '128.255.30.11', '128.255.34.132',
        '128.255.44.51', '128.255.45.53', '128.255.45.54', '129.255.241.42', '64.202.167.129']
        """
        hrefs = set()
        for link in BeautifulSoup(urlopen(url)).fetch('a'):
            try:
                hrefs.add(gethostbyname(urlsplit(link["href"])[1]))
            except:
                pass
        return sorted(hrefs)


    HTH,
    George
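For readers puzzled by the `urlsplit(...)[1]` indexing in the solution above: index 1 of the split result is the network-location (host) part of the URL, which is what `gethostbyname` needs. In Python 3 the function lives in `urllib.parse` rather than `urlparse`:

```python
from urllib.parse import urlsplit  # Python 2: from urlparse import urlsplit

parts = urlsplit('http://22c118.cs.uiowa.edu/uploads/easy.html')
print(parts[1])      # the netloc component: '22c118.cs.uiowa.edu'
print(parts.netloc)  # same component, accessed by attribute name
```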

Discussion Overview
group: python-list @ python.org
category: python
posted: Sep 24, '05 at 5:13p
active: Sep 25, '05 at 1:06a
posts: 11
users: 7
