FAQ

[P5P] git strangeness

Demerphq
Aug 21, 2010 at 11:54 am
Today I was trying to pull some updates over my wlan connection at the
hotel I'm in right now.

For some reason it repeatedly hung. I tried using the git protocol,
and using ssh, each time it hung at the same point (object transfer -
and after the same number of objects).

Eventually I opened a tunnel, with control master enabled to camel
(obviously not everybody can do this), and then tried to pull using
the established tunnel. At which point it pulled just fine - and damn
fast.

Anybody else experienced strangeness like this? Could we have a glitch
somewhere?

Also, I noticed that git-web, or perhaps our config of it, has a
glitch when using pick-axe. It seems to die in mid processing
(probably a timeout) and thus returns broken XML/HTML to the browser,
which in turn inconveniently means that firefox shows an XML error and
doesn't show the results that it /has/ found. Im wondering if there is
anything we should do about this?

Cheers,
yves

--
perl -Mre=debug -e "/just|another|perl|hacker/"
reply

Search Discussions

16 responses

  • Joshua Juran at Aug 21, 2010 at 7:32 pm

    On Aug 21, 2010, at 4:54 AM, demerphq wrote:

    Also, I noticed that git-web, or perhaps our config of it, has a
    glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the browser,
    which in turn inconveniently means that firefox shows an XML error and
    doesn't show the results that it /has/ found. Im wondering if there is
    anything we should do about this?
    Return text/html content? Unless you have clients that *require* XML,
    using XHTML entails risks but provides no benefit.

    Josh
  • Jan Dubois at Aug 21, 2010 at 9:32 pm

    On Sat, 21 Aug 2010, demerphq wrote:

    Today I was trying to pull some updates over my wlan connection at the
    hotel I'm in right now.

    For some reason it repeatedly hung. I tried using the git protocol,
    and using ssh, each time it hung at the same point (object transfer -
    and after the same number of objects).

    Eventually I opened a tunnel, with control master enabled to camel
    (obviously not everybody can do this), and then tried to pull using
    the established tunnel. At which point it pulled just fine - and damn
    fast.

    Anybody else experienced strangeness like this? Could we have a glitch
    somewhere?
    I had something similar happen when I did a `git pull` inside a VM with
    little real memory and little swap space. Process memory was growing for
    a minute or so and then it just stopped, with some error message that I
    forgot by now. The failure was repeatable and could be fixed by
    increasing the memory allocated for that VM.

    Cheers,
    -Jan
  • H.Merijn Brand at Aug 21, 2010 at 10:03 pm

    On Sat, 21 Aug 2010 13:54:28 +0200, demerphq wrote:

    Today I was trying to pull some updates over my wlan connection at the
    hotel I'm in right now.

    For some reason it repeatedly hung. I tried using the git protocol,
    and using ssh, each time it hung at the same point (object transfer -
    and after the same number of objects).

    Eventually I opened a tunnel, with control master enabled to camel
    (obviously not everybody can do this), and then tried to pull using
    the established tunnel. At which point it pulled just fine - and damn
    fast.

    Anybody else experienced strangeness like this? Could we have a glitch
    somewhere?

    Also, I noticed that git-web, or perhaps our config of it, has a
    glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the browser,
    which in turn inconveniently means that firefox shows an XML error and
    doesn't show the results that it /has/ found. Im wondering if there is
    anything we should do about this?
    Now that others said they hit some odd-ness, I'll have to say I just
    not to recently started from scratch on one of my boxes, as 'git pull
    --all' refused completely.

    I took a fresh clone, copied the important parts from my
    old .git/config over to the new one, checked out new branches
    for blead and maint and went on. No problems since. And the git
    config diff is minute:

    % diff -w perl-git{xx,}/.git/config
    15,17d14
    < [branch "maint-5.8-dor"]
    < remote = origin
    < merge = refs/heads/maint-5.8-dor
    21,23d17
    < [branch "maint-5.10"]
    < remote = origin
    < merge = refs/heads/maint-5.10

    --
    H.Merijn Brand http://tux.nl Perl Monger http://amsterdam.pm.org/
    using 5.00307 through 5.12 and porting perl5.13.x on HP-UX 10.20, 11.00,
    11.11, 11.23, and 11.31, OpenSuSE 10.3, 11.0, and 11.1, AIX 5.2 and 5.3.
    http://mirrors.develooper.com/hpux/ http://www.test-smoke.org/
    http://qa.perl.org http://www.goldmark.org/jeff/stupid-disclaimers/
  • Ævar Arnfjörð Bjarmason at Aug 23, 2010 at 5:59 pm

    On Sat, Aug 21, 2010 at 11:54, demerphq wrote:
    Today I was trying to pull some updates over my wlan connection at the
    hotel I'm in right now.

    For some reason it repeatedly hung. I tried using the git protocol,
    and using ssh, each time it hung at the same point (object transfer -
    and after the same number of objects).

    Eventually I opened a tunnel, with control master enabled to camel
    (obviously not everybody can do this), and then tried to pull using
    the established tunnel. At which point it pulled just fine - and damn
    fast.

    Anybody else experienced strangeness like this? Could we have a glitch
    somewhere?
    It would help to clarify what the strangeness is, but obviously you
    can't debug it *now*.

    If you have issues like this one useful thing is to try to use the
    plumbing tools to see if you can reproduce the issue. E.g. use
    git-fetch, and stuff like git-receive-pack / git-send-pack if you can.
    Also, I noticed that git-web, or perhaps our config of it, has a
    glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the browser,
    which in turn inconveniently means that firefox shows an XML error and
    doesn't show the results that it /has/ found. Im wondering if there is
    anything we should do about this?
    What were you looking at when you got the XML error? There was a
    recent report about this to the git list and it's been solved upstream
    IIRC. It was a simple matter of a missing escape_binary_crap()
    somewhere.
  • Demerphq at Aug 23, 2010 at 7:33 pm

    On 23 August 2010 19:59, Ævar Arnfjörð Bjarmason wrote:
    On Sat, Aug 21, 2010 at 11:54, demerphq wrote:
    Today I was trying to pull some updates over my wlan connection at the
    hotel I'm in right now.

    For some reason it repeatedly hung. I tried using the git protocol,
    and using ssh, each time it hung at the same point (object transfer -
    and after the same number of objects).

    Eventually I opened a tunnel, with control master enabled to camel
    (obviously not everybody can do this), and then tried to pull using
    the established tunnel. At which point it pulled just fine - and damn
    fast.

    Anybody else experienced strangeness like this? Could we have a glitch
    somewhere?
    It would help to clarify what the strangeness is, but obviously you
    can't debug it *now*.

    If you have issues like this one useful thing is to try to use the
    plumbing tools to see if you can reproduce the issue. E.g. use
    git-fetch, and stuff like git-receive-pack / git-send-pack if you can.
    I actually did use git-fetch. Same thing. It was weird. I had about
    1200 objects to transfer, after, i think, 345 objects it just hung.
    For minutes, after which i killed it. I tried again, and it hung
    again, etc, and like I said until I had opened a tunnel to camel and
    switched to ssh it huing every time, with ssh as the protocol and with
    git as the protocol.

    I actually still have the repo in unpulled form, so ill try again,
    what exactly should I do to obtain better diagnostics?
    Also, I noticed that git-web, or perhaps our config of it, has a
    glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the browser,
    which in turn inconveniently means that firefox shows an XML error and
    doesn't show the results that it /has/ found. Im wondering if there is
    anything we should do about this?
    What were you looking at when you got the XML error? There was a
    recent report about this to the git list and it's been solved upstream
    IIRC. It was a simple matter of a missing escape_binary_crap()
    somewhere.
    I was doing a pick-axe search for PERL_STRING_ROUNDUP (however it is
    actually spelled), after about 5 minutes the connection terminated and
    resulted in broken output...

    Yves



    --
    perl -Mre=debug -e "/just|another|perl|hacker/"
  • Ævar Arnfjörð Bjarmason at Aug 23, 2010 at 7:43 pm

    On Mon, Aug 23, 2010 at 19:33, demerphq wrote:
    On 23 August 2010 19:59, Ævar Arnfjörð Bjarmason wrote:
    On Sat, Aug 21, 2010 at 11:54, demerphq wrote:
    Today I was trying to pull some updates over my wlan connection at the
    hotel I'm in right now.

    For some reason it repeatedly hung. I tried using the git protocol,
    and using ssh, each time it hung at the same point (object transfer -
    and after the same number of objects).

    Eventually I opened a tunnel, with control master enabled to camel
    (obviously not everybody can do this), and then tried to pull using
    the established tunnel. At which point it pulled just fine - and damn
    fast.

    Anybody else experienced strangeness like this? Could we have a glitch
    somewhere?
    It would help to clarify what the strangeness is, but obviously you
    can't debug it *now*.

    If you have issues like this one useful thing is to try to use the
    plumbing tools to see if you can reproduce the issue. E.g. use
    git-fetch, and stuff like git-receive-pack / git-send-pack if you can.
    I actually did use git-fetch. Same thing. It was weird. I had about
    1200 objects to transfer, after, i think, 345 objects it just hung.
    For minutes, after which i killed it. I tried again, and it hung
    again, etc, and like I said until I had opened a tunnel to camel and
    switched to ssh it huing every time, with ssh as the protocol and with
    git as the protocol.

    I actually still have the repo in unpulled form, so ill try again,
    what exactly should I do to obtain better diagnostics?
    To start with, add the Git mailing list to the CC-list, which I've
    just done.

    I don't know what you should do exactly, but...:

    * If you rsync the perl.git repository from camel to somewhere else
    and use ssh+git to *there* does it still hang? Maybe you can make
    both copies of perl.git available online for others to try?

    * How does it hang? Run it with GIT_TRACE=1 <your commands>, What
    process hangs exactly? Is it using lots of CPU or memory in top?
    How about if you strace it, is it hanging on something there?

    * Does this all go away if you you upgrade git (e.g. build from
    master git.git) on either the client or server?

    * If not, maybe run it under gdb with tracing and see where it hangs?

    ..would seem like good places to start.
    Also, I noticed that git-web, or perhaps our config of it, has a
    glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the browser,
    which in turn inconveniently means that firefox shows an XML error and
    doesn't show the results that it /has/ found. Im wondering if there is
    anything we should do about this?
    What were you looking at when you got the XML error? There was a
    recent report about this to the git list and it's been solved upstream
    IIRC. It was a simple matter of a missing escape_binary_crap()
    somewhere.
    I was doing a pick-axe search for PERL_STRING_ROUNDUP (however it is
    actually spelled), after about 5 minutes the connection terminated and
    resulted in broken output...
    What's the gitweb link for that? I'm not familiar with how to make it
    do a blame search.
  • Demerphq at Aug 23, 2010 at 7:58 pm

    On 23 August 2010 21:43, Ævar Arnfjörð Bjarmason wrote:
    On Mon, Aug 23, 2010 at 19:33, demerphq wrote:
    On 23 August 2010 19:59, Ævar Arnfjörð Bjarmason wrote:
    On Sat, Aug 21, 2010 at 11:54, demerphq wrote:
    Today I was trying to pull some updates over my wlan connection at the
    hotel I'm in right now.

    For some reason it repeatedly hung. I tried using the git protocol,
    and using ssh, each time it hung at the same point (object transfer -
    and after the same number of objects).

    Eventually I opened a tunnel, with control master enabled to camel
    (obviously not everybody can do this), and then tried to pull using
    the established tunnel. At which point it pulled just fine - and damn
    fast.

    Anybody else experienced strangeness like this? Could we have a glitch
    somewhere?
    It would help to clarify what the strangeness is, but obviously you
    can't debug it *now*.

    If you have issues like this one useful thing is to try to use the
    plumbing tools to see if you can reproduce the issue. E.g. use
    git-fetch, and stuff like git-receive-pack / git-send-pack if you can.
    I actually did use git-fetch. Same thing. It was weird. I had about
    1200 objects to transfer, after, i think, 345 objects it just hung.
    For minutes, after which i killed it. I tried again, and it hung
    again, etc, and like I said until I had opened a tunnel to camel and
    switched to ssh it huing every time, with ssh as the protocol and with
    git as the protocol.

    I actually still have the repo in unpulled form, so ill try again,
    what exactly should I do to obtain better diagnostics?
    To start with, add the Git mailing list to the CC-list, which I've
    just done.

    I don't know what you should do exactly, but...:

    * If you rsync the perl.git repository from camel to somewhere else
    and use ssh+git to *there* does it still hang? Maybe you can make
    both copies of perl.git available online for others to try?

    * How does it hang? Run it with GIT_TRACE=1 <your commands>, What
    process hangs exactly? Is it using lots of CPU or memory in top?
    How about if you strace it, is it hanging on something there?

    * Does this all go away if you you upgrade git (e.g. build from
    master git.git) on either the client or server?

    * If not, maybe run it under gdb with tracing and see where it hangs?

    ..would seem like good places to start.
    Ill try some of the above and follow up... Well, as soon as i find the
    usb stick with the unpulled repo copy. :-)
    Also, I noticed that git-web, or perhaps our config of it, has a
    glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the browser,
    which in turn inconveniently means that firefox shows an XML error and
    doesn't show the results that it /has/ found. Im wondering if there is
    anything we should do about this?
    What were you looking at when you got the XML error? There was a
    recent report about this to the git list and it's been solved upstream
    IIRC. It was a simple matter of a missing escape_binary_crap()
    somewhere.
    I was doing a pick-axe search for PERL_STRING_ROUNDUP (however it is
    actually spelled), after about 5 minutes the connection terminated and
    resulted in broken output...
    What's the gitweb link for that? I'm not familiar with how to make it
    do a blame search.
    Select "pickaxe" in the drop down on the perl5 gitweb, and then search
    for PERL_STRLEN_ROUNDUP

    The url generated is:

    http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP

    Currently its running for me, and obviously wed prefer that we dont
    have N-gazillion people doing the search at once....

    Ah, it just finished... Same problem. I get the error:

    XML Parsing Error: no element found
    Location: http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP
    Line Number 81, Column 1:

    And the last couple of lines of the HTML are:

    </td>
    <td class="link"><a
    href="/perl.git/commit/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">commit</a>
    <a href="/perl.git/tree/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">tree</a></td>
    </tr>
    <tr class="light">

    seems to me like it timed out while searching....

    Makes me think the search logic would work better as an incremental
    asynchronous fetch....

    Yves
    --
    perl -Mre=debug -e "/just|another|perl|hacker/"
  • Ævar Arnfjörð Bjarmason at Aug 23, 2010 at 8:16 pm

    On Mon, Aug 23, 2010 at 19:58, demerphq wrote:
    On 23 August 2010 21:43, Ævar Arnfjörð Bjarmason wrote:
    On Mon, Aug 23, 2010 at 19:33, demerphq wrote:
    On 23 August 2010 19:59, Ævar Arnfjörð Bjarmason wrote:
    On Sat, Aug 21, 2010 at 11:54, demerphq wrote:
    Today I was trying to pull some updates over my wlan connection at the
    hotel I'm in right now.

    For some reason it repeatedly hung. I tried using the git protocol,
    and using ssh, each time it hung at the same point (object transfer -
    and after the same number of objects).

    Eventually I opened a tunnel, with control master enabled to camel
    (obviously not everybody can do this), and then tried to pull using
    the established tunnel. At which point it pulled just fine - and damn
    fast.

    Anybody else experienced strangeness like this? Could we have a glitch
    somewhere?
    It would help to clarify what the strangeness is, but obviously you
    can't debug it *now*.

    If you have issues like this one useful thing is to try to use the
    plumbing tools to see if you can reproduce the issue. E.g. use
    git-fetch, and stuff like git-receive-pack / git-send-pack if you can.
    I actually did use git-fetch. Same thing. It was weird. I had about
    1200 objects to transfer, after, i think, 345 objects it just hung.
    For minutes, after which i killed it. I tried again, and it hung
    again, etc, and like I said until I had opened a tunnel to camel and
    switched to ssh it huing every time, with ssh as the protocol and with
    git as the protocol.

    I actually still have the repo in unpulled form, so ill try again,
    what exactly should I do to obtain better diagnostics?
    To start with, add the Git mailing list to the CC-list, which I've
    just done.

    I don't know what you should do exactly, but...:

    * If you rsync the perl.git repository from camel to somewhere else
    and use ssh+git to *there* does it still hang? Maybe you can make
    both copies of perl.git available online for others to try?

    * How does it hang? Run it with GIT_TRACE=1 <your commands>, What
    process hangs exactly? Is it using lots of CPU or memory in top?
    How about if you strace it, is it hanging on something there?

    * Does this all go away if you you upgrade git (e.g. build from
    master git.git) on either the client or server?

    * If not, maybe run it under gdb with tracing and see where it hangs?

    ..would seem like good places to start.
    Ill try some of the above and follow up... Well, as soon as i find the
    usb stick with the unpulled repo copy. :-)
    Sweet, thanks.
    Also, I noticed that git-web, or perhaps our config of it, has a
    glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the browser,
    which in turn inconveniently means that firefox shows an XML error and
    doesn't show the results that it /has/ found. Im wondering if there is
    anything we should do about this?
    What were you looking at when you got the XML error? There was a
    recent report about this to the git list and it's been solved upstream
    IIRC. It was a simple matter of a missing escape_binary_crap()
    somewhere.
    I was doing a pick-axe search for PERL_STRING_ROUNDUP (however it is
    actually spelled), after about 5 minutes the connection terminated and
    resulted in broken output...
    What's the gitweb link for that? I'm not familiar with how to make it
    do a blame search.
    Select "pickaxe" in the drop down on the perl5 gitweb, and then search
    for PERL_STRLEN_ROUNDUP

    The url generated is:

    http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP

    Currently its running for me, and obviously wed prefer that we dont
    have N-gazillion people doing the search at once....

    Ah, it just finished... Same problem. I get the error:

    XML Parsing Error: no element found
    Location: http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP
    Line Number 81, Column 1:

    And the last couple of lines of the HTML are:

    </td>
    <td class="link"><a
    href="/perl.git/commit/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">commit</a>
    <a href="/perl.git/tree/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">tree</a></td>
    </tr>
    <tr class="light">

    seems to me like it timed out while searching....

    Makes me think the search logic would work better as an incremental
    asynchronous fetch....
    Ah, sounds like it's running a really expensive operation and then
    running into the cgi time execution limit on the webserver (or maybe
    in gitweb), so when the connection closes the browser ends up with
    invalid XHTML.

    An async fetch would only make sense in that case if your gitweb and
    webserver timeouts made sense, i.e. the gitweb timeout was say 1-2 sec
    less than the webserver timeout.

    Anyway, it has nothing to do with the escaping bug I cited above.
  • Demerphq at Aug 23, 2010 at 8:19 pm

    On 23 August 2010 22:16, Ævar Arnfjörð Bjarmason wrote:
    On Mon, Aug 23, 2010 at 19:58, demerphq wrote:
    On 23 August 2010 21:43, Ævar Arnfjörð Bjarmason wrote:
    On Mon, Aug 23, 2010 at 19:33, demerphq wrote:
    On 23 August 2010 19:59, Ævar Arnfjörð Bjarmason wrote:
    On Sat, Aug 21, 2010 at 11:54, demerphq wrote:
    Today I was trying to pull some updates over my wlan connection at the
    hotel I'm in right now.

    For some reason it repeatedly hung. I tried using the git protocol,
    and using ssh, each time it hung at the same point (object transfer -
    and after the same number of objects).

    Eventually I opened a tunnel, with control master enabled to camel
    (obviously not everybody can do this), and then tried to pull using
    the established tunnel. At which point it pulled just fine - and damn
    fast.

    Anybody else experienced strangeness like this? Could we have a glitch
    somewhere?
    It would help to clarify what the strangeness is, but obviously you
    can't debug it *now*.

    If you have issues like this one useful thing is to try to use the
    plumbing tools to see if you can reproduce the issue. E.g. use
    git-fetch, and stuff like git-receive-pack / git-send-pack if you can.
    I actually did use git-fetch. Same thing. It was weird. I had about
    1200 objects to transfer, after, i think, 345 objects it just hung.
    For minutes, after which i killed it. I tried again, and it hung
    again, etc, and like I said until I had opened a tunnel to camel and
    switched to ssh it huing every time, with ssh as the protocol and with
    git as the protocol.

    I actually still have the repo in unpulled form, so ill try again,
    what exactly should I do to obtain better diagnostics?
    To start with, add the Git mailing list to the CC-list, which I've
    just done.

    I don't know what you should do exactly, but...:

    * If you rsync the perl.git repository from camel to somewhere else
    and use ssh+git to *there* does it still hang? Maybe you can make
    both copies of perl.git available online for others to try?

    * How does it hang? Run it with GIT_TRACE=1 <your commands>, What
    process hangs exactly? Is it using lots of CPU or memory in top?
    How about if you strace it, is it hanging on something there?

    * Does this all go away if you you upgrade git (e.g. build from
    master git.git) on either the client or server?

    * If not, maybe run it under gdb with tracing and see where it hangs?

    ..would seem like good places to start.
    Ill try some of the above and follow up... Well, as soon as i find the
    usb stick with the unpulled repo copy. :-)
    Sweet, thanks.
    Also, I noticed that git-web, or perhaps our config of it, has a
    glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the browser,
    which in turn inconveniently means that firefox shows an XML error and
    doesn't show the results that it /has/ found. Im wondering if there is
    anything we should do about this?
    What were you looking at when you got the XML error? There was a
    recent report about this to the git list and it's been solved upstream
    IIRC. It was a simple matter of a missing escape_binary_crap()
    somewhere.
    I was doing a pick-axe search for PERL_STRING_ROUNDUP (however it is
    actually spelled), after about 5 minutes the connection terminated and
    resulted in broken output...
    What's the gitweb link for that? I'm not familiar with how to make it
    do a blame search.
    Select "pickaxe" in the drop down on the perl5 gitweb, and then search
    for PERL_STRLEN_ROUNDUP

    The url generated is:

    http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP

    Currently its running for me, and obviously wed prefer that we dont
    have N-gazillion people doing the search at once....

    Ah, it just finished... Same problem. I get the error:

    XML Parsing Error: no element found
    Location: http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP
    Line Number 81, Column 1:

    And the last couple of lines of the HTML are:

    </td>
    <td class="link"><a
    href="/perl.git/commit/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">commit</a>
    <a href="/perl.git/tree/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">tree</a></td>
    </tr>
    <tr class="light">

    seems to me like it timed out while searching....

    Makes me think the search logic would work better as an incremental
    asynchronous fetch....
    Ah, sounds like it's running a really expensive operation and then
    running into the cgi time execution limit on the webserver (or maybe
    in gitweb), so when the connection closes the browser ends up with
    invalid XHTML.
    Yeah, exactly, thats what i meant by "timeout".
    An async fetch would only make sense in that case if your gitweb and
    webserver timeouts made sense, i.e. the gitweb timeout was say 1-2 sec
    less than the webserver timeout.
    Well i was thinking it could search for a single item, and then stop,
    and the search again from there, etc... So each search would be
    lighter weight...
    Anyway, it has nothing to do with the escaping bug I cited above.
    Nod, I suspected as much.

    Yves



    --
    perl -Mre=debug -e "/just|another|perl|hacker/"
  • Jakub Narebski at Aug 31, 2010 at 7:55 am

    Ævar Arnfjörð Bjarmason writes:
    On Mon, Aug 23, 2010 at 19:58, demerphq wrote:

    Select "pickaxe" in the drop down on the perl5 gitweb, and then search
    for PERL_STRLEN_ROUNDUP

    The url generated is:

    http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP

    Currently its running for me, and obviously wed prefer that we dont
    have N-gazillion people doing the search at once....

    Ah, it just finished... Same problem. I get the error:

    XML Parsing Error: no element found
    Location: http://perl5.git.perl.org/perl.git?a=search&h=HEAD&st=pickaxe&s=PERL_STRLEN_ROUNDUP
    Line Number 81, Column 1:

    And the last couple of lines of the HTML are:

    </td>
    <td class="link"><a
    href="/perl.git/commit/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">commit</a>
    <a href="/perl.git/tree/7a9b70e91d2c0aa19f8cec5b0f8c133492a19280">tree</a></td>
    </tr>
    <tr class="light">

    seems to me like it timed out while searching....

    Makes me think the search logic would work better as an incremental
    asynchronous fetch....
    Ah, sounds like it's running a really expensive operation and then
    running into the cgi time execution limit on the webserver (or maybe
    in gitweb), so when the connection closes the browser ends up with
    invalid XHTML.

    An async fetch would only make sense in that case if your gitweb and
    webserver timeouts made sense, i.e. the gitweb timeout was say 1-2 sec
    less than the webserver timeout.
    Ah, modern gitweb supports incremental blame, in that it seeds the
    view with file contents, then runs "git blame --incremental" in
    background on server and updates 'blame_incremental' view with
    JavaScript, but does not support incremental pickaxe. Perhaps we
    could borrow code from git-browser?

    By the way, gitweb should have caching real soon now (TM)... :-)
    --
    Jakub Narebski
    Poland
    ShadeHawk on #git
  • Aristotle Pagaltzis at Sep 3, 2010 at 3:09 am

    * demerphq [2010-08-21 13:55]:
    Also, I noticed that git-web, or perhaps our config of it, has
    a glitch when using pick-axe. It seems to die in mid processing
    (probably a timeout) and thus returns broken XML/HTML to the
    browser, which in turn inconveniently means that firefox shows
    an XML error and doesn't show the results that it /has/ found.
    Im wondering if there is anything we should do about this?
    FWIW, that’s only Mozilla brain-death. The XML spec does not
    demand that the app throw away the good part of a failed parse,
    and indeed Opera and any WebKit-based browser will show you the
    part of the page up to the first parse error – which in your case
    is the entire page anyway. (I use Chrome.)

    Regards,
    --
    Aristotle Pagaltzis // <http://plasmasturm.org/>
  • Zefram at Sep 3, 2010 at 11:35 am

    Aristotle Pagaltzis wrote:
    FWIW, that???s only Mozilla brain-death. The XML spec does not
    demand that the app throw away the good part of a failed parse,
    The XML spec does demand that the XML writer produce documents that
    conform fully to the XML syntax. The fact that parsing fails, and the app
    therefore does not have a well-formed document to work with, is in no way
    the fault of Mozilla, it is the fault of the code that generates the page.

    -zefram
  • Aristotle Pagaltzis at Sep 4, 2010 at 1:09 pm

    * Zefram [2010-09-03 13:35]:
    Aristotle Pagaltzis wrote:
    FWIW, that’s only Mozilla brain-death. The XML spec does not
    demand that the app throw away the good part of a failed
    parse,
    The XML spec does demand that the XML writer produce documents
    that conform fully to the XML syntax. The fact that parsing
    fails, and the app therefore does not have a well-formed
    document to work with, is in no way the fault of Mozilla, it is
    the fault of the code that generates the page.
    From section 1.2, Terminology:

    fatal error:

    [Definition: An error which a conforming XML processor
    MUST detect and report to the application. After
    encountering a fatal error, the processor MAY continue
    processing the data to search for further errors and MAY
    report such errors to the application. In order to
    support correction of errors, the processor MAY make
    unprocessed data from the document (with intermingled
    character data and markup) available to the application.
    Once a fatal error is detected, however, the processor
    MUST NOT continue normal processing (i.e., it MUST NOT
    continue to pass character data and information about the
    document’s logical structure to the application in the
    normal way).]

    That does not forbid the XML processor from passing data to the
    application *up to* the point of the first fatal error, and it
    makes no constraints upon the application’s use of any data
    already passed to it from the XML processor.

    In other words, if there is a well-formedness error in the middle
    of an XHTML document, a browser is entirely free to render the
    first half of that document. And the Opera and WebKit browsers
    do (along with a message to inform the user that the rest of the
    page is broken).

    There is no spec support for throwing up a YSOD that prevents the
    user from seeing even the first half of the page, as Gecko does.
    That is purely a Mozilla design decision.

    Anyway. </offtopic>

    Regards,
    --
    Aristotle Pagaltzis // <http://plasmasturm.org/>
  • Zefram at Sep 4, 2010 at 1:28 pm

    Aristotle Pagaltzis wrote:
    That does not forbid the XML processor from passing data to the
    application *up to* the point of the first fatal error,
    Indeed. It's allowed to do all sorts of things. But the document
    is still erroneous, and the app is in no way at fault in rejecting an
    erroneous document. You implied an obligation on the app to fudge the
    syntax error to extract a usable partial document, an obligation that
    of course does not exist.

    -zefram
  • Aristotle Pagaltzis at Sep 5, 2010 at 2:00 pm

    * Zefram [2010-09-04 15:30]:
    But the document is still erroneous, and the app is in no way
    at fault in rejecting an erroneous document. You implied an
    obligation on the app to fudge the syntax error to extract
    a usable partial document, an obligation that of course does
    not exist.
    WebKit and Opera do nothing special to support their behaviour.
    If you load a broken XHTML document over a very slow link, you
    will see that even Gecko actually renders the partial document,
    up until the point where the parser encounters the error and
    Gecko *hides* the already partially rendered page. The XML spec
    does not even *imply* an obligation to do anything alike. It’s
    just unnecessary/misguided user (and publisher!) hostility on
    the part of Gecko.

    Regards,
    --
    Aristotle Pagaltzis // <http://plasmasturm.org/>
  • Leon Timmermans at Sep 5, 2010 at 4:40 pm

    On Sun, Sep 5, 2010 at 3:59 PM, Aristotle Pagaltzis wrote:
    WebKit and Opera do nothing special to support their behaviour.
    If you load a broken XHTML document over a very slow link, you
    will see that even Gecko actually renders the partial document,
    up until the point where the parser encounters the error and
    Gecko *hides* the already partially rendered page. The XML spec
    does not even *imply* an obligation to do anything alike. It’s
    just unnecessary/misguided user (and publisher!) hostility on
    the part of Gecko.
    Honestly, having dealt with XML documents that were so broken that
    regexps were the only thing I could throw at it, I wish more XML
    parsers out there would refuse it instead of fixing it things up.
    Gentle doctors make stinking wounds.

    But this is way off topic so if we really want to discuss this lets
    take it off the list.

    Leon

Related Discussions