completely academic at the moment, but it would be interesting to see
the benchmark comparison thing done properly. If it were, the way
would be to specify a set of application functions, let people within
the various projects implement them as they wish, then benchmark. I
suppose ...

so what would be a decent set of tests? I'll have a stab ...


1. no db, no templating. Just have the app respond to a uri for a
random number n, and respond with the random number in a plain text
doc.

so /text_string/abcde would expect to get back the string "abcde" in a text doc

this could measure the ability of the app to parse the uri, and process it.
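
Roughly, the Catalyst side of that test should be no more than a single
action - something like this (untested sketch; the controller name is
made up):

    package MyApp::Controller::Root;
    use strict;
    use warnings;
    use base 'Catalyst::Controller';

    __PACKAGE__->config(namespace => '');

    # GET /text_string/abcde -> plain text body "abcde"
    sub text_string : Path('text_string') Args(1) {
        my ($self, $c, $str) = @_;
        $c->response->content_type('text/plain');
        $c->response->body($str);
    }

    1;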

2. same with templating. Now we could expect the string back in a
simple html template ... although that doesn't expect the template
system to do much work ... /html_string/xyz ...
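
With Template Toolkit (the usual Catalyst view) that might look roughly
like this - again only a sketch, assuming a standard TT view and an end
action that forwards to it:

    # controller action
    sub html_string : Path('html_string') Args(1) {
        my ($self, $c, $str) = @_;
        $c->stash->{string}   = $str;
        $c->stash->{template} = 'html_string.tt';
    }

    # html_string.tt
    <html><body><p>[% string %]</p></body></html>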

3. db access, no templating. The db type, config, schema and dataset
should be spec'd as part of the tests, to factor this out as far as
possible. Then we could have several tests:
- just retrieve a row and display results /db_retrieve
- same with one or more joins required /db_join
- write/update a row /db_write
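
With DBIx::Class those would probably boil down to something like the
following (schema, relationship and column names purely illustrative):

    # /db_retrieve - fetch one row by primary key
    my $row = $c->model('DB::Person')->find($id);

    # /db_join - fetch rows through a join
    my @rows = $c->model('DB::Person')->search(
        { 'orders.status' => 'open' },
        { join => 'orders' },
    )->all;

    # /db_write - update a row
    $row->update({ name => 'updated' });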

4. a random mix of all the above.

Could use siege to actually do the tests. Of course, we might just end
up proving that the db makes more difference than anything else ...
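
e.g. something along these lines (concurrency, duration and URL are just
placeholders):

    siege -c 25 -t 30S http://localhost:3000/text_string/abcde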

This is just mindblobs at the moment, but the other thread made me
think, and I wondered if something like this has been done already.
Would be interesting

D

--
Daniel McBrearty
email : danielmcbrearty at gmail.com
www.engoi.com : the multi - language vocab trainer
BTW : 0873928131

  • Robert 'phaylon' Sedlacek at Jan 15, 2007 at 11:21 am

    Daniel McBrearty wrote:

    completely academic at the moment, but it would be interesting to see
    the benchmark comparison thing done properly. If it were, the way
    would be to specify a set of application functions, let people within
    the various projects implement them as they wish, then benchmark. I
    suppose ...

    so what would be a decent set of tests? I'll have a stab ...
    I see your stab and raise by a punch.
    1. no db, no templating. Just have the app respond to a uri for a
    random number n, and respond with the random number in a plain text
    doc.

    so /text_string/abcde would expect to get back the string "abcde" in a
    text doc

    this could measure the ability of the app to parse the uri, and process it.
    I think this is a bit too simple. We should probably look at usual kinds
    of URIs used in applications here.

    /
    /foo/bar/baz
    /foo/1/bar/2/baz/3/4
    /foo?bar=baz
    ...and probably more...

    Also, there should be more than one action. I would say about 50 might
    be a good measure, though my current app has a lot more of them...
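
    For what it's worth, those shapes map onto ordinary Catalyst actions
    roughly like this (only a sketch; package and action names are made
    up):

        package MyApp::Controller::Foo;
        use strict;
        use warnings;
        use base 'Catalyst::Controller';

        # /foo?bar=baz - no path arguments, read the query string
        sub index : Path('') Args(0) {
            my ($self, $c) = @_;
            $c->response->body($c->request->params->{bar} || '');
        }

        # /foo/bar/baz - a fixed path under the controller namespace
        sub bar_baz : Path('bar/baz') Args(0) {
            my ($self, $c) = @_;
            $c->response->body('foo/bar/baz');
        }

        # /foo/1/bar/2/baz/3/4 - mixed captures via chained actions
        sub foo : Chained('/') PathPart('foo') CaptureArgs(1) { }
        sub bar : Chained('foo') PathPart('bar') CaptureArgs(1) { }
        sub baz : Chained('bar') PathPart('baz') Args(2) {
            my ($self, $c, @args) = @_;
            $c->response->body(join ' ', @{ $c->request->captures }, @args);
        }

        1;
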
    2. same with templating. Now we could expect the string back in a
    simple html template ... although that doesn't expect the template
    system to do much work ... /html_string/xyz ...

    3. db access, no templating. The db type, config, schema and dataset
    should be spec'd as part of the tests, to factor this out as far as
    possible. Then we could have several tests:
    - just retrieve a row and display results /db_retrieve
    - same with one or more joins required /db_join
    - write/update a row /db_write

    4. a random mix of all the above.
    Personally, I don't care about templating and ORM benchmarks, so I'll
    skip here :)

    --
    # Robert 'phaylon' Sedlacek
    # Perl 5/Catalyst Developer in Hamburg, Germany
    { EMail => ' rs@474.at ', Web => ' http://474.at ' }
  • Daniel McBrearty at Jan 15, 2007 at 11:34 am
    this could measure the ability of the app to parse the uri, and process it.
    I think this is a bit too simple. We should probably look at usual kinds
    of URIs used in applications here.

    /
    /foo/bar/baz
    /foo/1/bar/2/baz/3/4
    /foo?bar=baz
    ...and probably more...

    Also, there should be more than one action. I would say about 50 might
    be a good measure, though my current app has a lot more of them...
    sure. It would certainly be possible to start simple and then get more
    complicated ...

    Personally, I don't care about templating and ORM benchmarks,
    why not?


    --
    Daniel McBrearty
    email : danielmcbrearty at gmail.com
    www.engoi.com : the multi - language vocab trainer
    BTW : 0873928131
  • Robert 'phaylon' Sedlacek at Jan 15, 2007 at 12:24 pm

    Daniel McBrearty wrote:

    Personally, I don't care about templating and ORM benchmarks,
    why not?
    Well, templating benchmarks maybe, but for an ORM I just have the
    feeling the larger factor is how you use it, not which.

    --
    # Robert 'phaylon' Sedlacek
    # Perl 5/Catalyst Developer in Hamburg, Germany
    { EMail => ' rs@474.at ', Web => ' http://474.at ' }
  • Daniel McBrearty at Jan 15, 2007 at 2:16 pm
    maybe. for such an exercise though, you would have to trust the
    implementors of each submission to use the best tools available for
    that framework, and to use them well.

    I wouldn't see much point to trying to do something like this without
    having some tests that take a look at how well db access is performed.

    On 1/15/07, Robert 'phaylon' Sedlacek wrote:
    Daniel McBrearty wrote:
    Personally, I don't care about templating and ORM benchmarks,
    why not?
    Well, templating benchmarks maybe, but for an ORM I just have the
    feeling the larger factor is how you use it, not which.

    --
    # Robert 'phaylon' Sedlacek
    # Perl 5/Catalyst Developer in Hamburg, Germany
    { EMail => ' rs@474.at ', Web => ' http://474.at ' }

    --
    Daniel McBrearty
    email : danielmcbrearty at gmail.com
    www.engoi.com : the multi - language vocab trainer
    BTW : 0873928131
  • Robert 'phaylon' Sedlacek at Jan 15, 2007 at 2:52 pm

    Daniel McBrearty wrote:

    maybe. for such an exercise though, you would have to trust the
    implementors of each submission to use the best tools available for
    that framework, and to use them well.
    So, there's one best template and one best model for Catalyst? :)
    I wouldn't see much point to trying to do something like this without
    having some tests that take a look at how well db access is performed.
    I didn't say "don't do it." I said I'm personally not that interested in
    that part. I have found DBIC and TT productive enough to compensate for
    whatever slowness there might have been, so I know it works for me[tm].
    One of the biggest reasons I'd like to see a _real_ benchmark between
    Catalyst and others is to bring some balance into the current resources.

    --
    # Robert 'phaylon' Sedlacek
    # Perl 5/Catalyst Developer in Hamburg, Germany
    { EMail => ' rs@474.at ', Web => ' http://474.at ' }
  • Perrin Harkins at Jan 15, 2007 at 4:20 pm

    On Mon, 2007-01-15 at 13:24 +0100, Robert 'phaylon' Sedlacek wrote:
    Daniel McBrearty wrote:
    Personally, I don't care about templating and ORM benchmarks,
    why not?
    Well, templating benchmarks maybe, but for an ORM I just have the
    feeling the larger factor is how you use it, not which.
    This is true, but the SQL generated by an ORM can have a big effect on
    performance. I usually hand-code the parts where I need the most
    performance, but many people will just rely on their ORM and hope for
    the best, and that can vary quite a bit between implementations (e.g.
    deleting multiple rows in one statement vs. thousands).

    - Perrin
  • Robert 'phaylon' Sedlacek at Jan 16, 2007 at 10:22 am

    Perrin Harkins wrote:
    On Mon, 2007-01-15 at 13:24 +0100, Robert 'phaylon' Sedlacek wrote:

    Well, templating benchmarks maybe, but for an ORM I just have the
    feeling the larger factor is how you use it, not which.
    This is true, but the SQL generated by an ORM can have a big effect on
    performance. I usually hand-code the parts where I need the most
    performance, but many people will just rely on their ORM and hope for
    the best, and that can vary quite a bit between implementations (e.g.
    deleting multiple rows in one statement vs. thousands).
    And you can do both in DBIC. That's why benchmarking is so hard,
    TIMTOWTDI, which one will you benchmark? Same with Catalyst. For my
    apps, I use uri_for massively, some people don't, which way are we going
    to benchmark? Same with controller base classes vs action classes vs
    external modules.
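
    Just to illustrate, both of these are DBIC (result source name made up):

        # one DELETE statement for the whole set
        $c->model('DB::Item')->search({ status => 'stale' })->delete;

        # one DELETE per row (plus the SELECT that fetches them)
        $_->delete
            for $c->model('DB::Item')->search({ status => 'stale' })->all;

    and the uri_for question is the same kind of choice:

        my $cheap  = '/db_retrieve/' . $id;              # hardcoded string
        my $robust = $c->uri_for('/db_retrieve', $id);   # builds a URI object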

    IMHO you can only really benchmark developers together with their
    framework of choice.

    --
    # Robert 'phaylon' Sedlacek
    # Perl 5/Catalyst Developer in Hamburg, Germany
    { EMail => ' rs@474.at ', Web => ' http://474.at ' }
  • Daniel McBrearty at Jan 16, 2007 at 10:35 am
    so a set of benchmarks would give you the chance to show that
    TIMTOWTDI, and the trade offs that exist between them. That would be
    pretty interesting to someone trying to compare frameworks.

    That's where having simple tests that exercise one aspect of the
    framework in isolation, as far as is possible, might have advantages.
    You see the differences between techniques, you see the effect they
    have, you have the opportunity to try to extrapolate what that
    actually means in terms of your app.

    At least, that's a technique that has served well in other branches of
    engineering - you reduce complexity to its simplest cases, then work
    back .... maybe this is fundamentally different and that can't work,
    but at the moment I don't see why. Or any documented attempts to do it
    that have clearly failed ...

    On 1/16/07, Robert 'phaylon' Sedlacek wrote:
    Perrin Harkins wrote:
    On Mon, 2007-01-15 at 13:24 +0100, Robert 'phaylon' Sedlacek wrote:

    Well, templating benchmarks maybe, but for an ORM I just have the
    feeling the larger factor is how you use it, not which.
    This is true, but the SQL generated by an ORM can have a big effect on
    performance. I usually hand-code the parts where I need the most
    performance, but many people will just rely on their ORM and hope for
    the best, and that can vary quite a bit between implementations (e.g.
    deleting multiple rows in one statement vs. thousands).
    And you can do both in DBIC. That's why benchmarking is so hard,
    TIMTOWTDI, which one will you benchmark? Same with Catalyst. For my
    apps, I use uri_for massively, some people don't, which way are we going
    to benchmark? Same with controller base classes vs action classes vs
    external modules.

    IMHO you can only really benchmark developers together with their
    framework of choice.

    --
    # Robert 'phaylon' Sedlacek
    # Perl 5/Catalyst Developer in Hamburg, Germany
    { EMail => ' rs@474.at ', Web => ' http://474.at ' }

    --
    Daniel McBrearty
    email : danielmcbrearty at gmail.com
    www.engoi.com : the multi - language vocab trainer
    BTW : 0873928131
  • Robert 'phaylon' Sedlacek at Jan 16, 2007 at 10:55 am

    Daniel McBrearty wrote:

    so a set of benchmarks would give you the chance to show that
    TIMTOWTDI, and the trade offs that exist between them. That would be
    pretty interesting to someone trying to compare frameworks.
    I doubt that this is a simple list of features, but you are free to
    prove me wrong :)
    At least, that's a technique that has served well in other branches of
    engineering - you reduce complexity to its simplest cases, then work
    back .... maybe this is fundamentally different and that can't work,
    but at the moment I don't see why. Or any documented attempts to do it
    that have clearly failed ...
    Simple: I have yet to see a benchmark of applications or frameworks that
    wasn't fundamentally flawed. But as above, I would be happy if Catalyst
    were the one to make a start.

    --
    # Robert 'phaylon' Sedlacek
    # Perl 5/Catalyst Developer in Hamburg, Germany
    { EMail => ' rs@474.at ', Web => ' http://474.at ' }
