Our apps tend to use "pattern #2" as described in the Catalyst
Models Definitive Guide <http://www.perlmonks.org/?node_id=915657> -- that
is, our model is DBIC with business logic added to the resultset classes.
But much business logic is finding its way into the controllers, and the
apps tend to grow into monolithic beasts. We also need to use the
"app" for multiple purposes -- for example, we currently need to add a
custom API for a specific purpose that has to live in its own Catalyst
app.

So, I'm looking at adding a separate model layer(s) ("pattern #3" in link
above), as is commonly suggested. My plan is to have one "distribution"
that is our DBIC layer and then use that in a number of separate model
layers (split out by functionality). The goal is to allow separate teams
to work on different parts of the app, have separate unit tests, and
separate release schedules. And to thin out the controllers. Much more
manageable and scalable.

Anyone here doing something like this? As I look into this I'm coming up
with quite a few questions, of course.

This is more of a Perl question than a Catalyst one, but one question I
have is about data validation. Catalyst provides a nice defined request
structure so, for example, I have input data validation managed very
consistently (e.g. validation classes can be mapped to Catalyst actions
automatically and likewise validation errors can be added to the response
in a common way). That makes the controller code simple since when the
controller runs it knows if the data it has received is valid or not and
the controller does not worry about gathering up error messages.

So, I'm wondering how best to do that if I provide a separate model layer
that includes data validation. For example, say I have a model for user
management which includes a method for creating new users. If I have a
model method $users->add_user( \%user_data ) I would tend to have it return
the new user object or throw an exception on failure. What probably makes
sense is to use exception objects and have Catalyst catch them to render
the error in an appropriate way. Is this an approach you are using? Any
other tips on structuring the model layer that works well with both
Catalyst and non-Catalyst applications?

Looking back, I think my question isn't so much about data validation as
it is about providing a framework for model creation such that a consistent
API is provided -- making it easy to hook it into Catalyst for things like
rendering errors in a consistent way.
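(A minimal sketch of the exception-object approach described above -- all class and method names here are hypothetical, and a plain hash stands in for the database:)

```perl
use strict;
use warnings;

# Hypothetical exception class carrying validation errors.
package MyApp::Exception::Validation;
sub new    { my ($class, %args) = @_; bless { errors => $args{errors} || {} }, $class }
sub errors { $_[0]->{errors} }

# Hypothetical model class; knows nothing about Catalyst.
package MyApp::Model::Users;
sub new { bless { users => {} }, shift }

sub add_user {
    my ($self, $data) = @_;
    my %errors;
    $errors{email} = 'email is required' unless $data->{email};

    # Throw an exception object on failure; the caller decides how to
    # render it (Catalyst as an error response, a script as plain text).
    die MyApp::Exception::Validation->new( errors => \%errors ) if %errors;

    $self->{users}{ $data->{email} } = $data;
    return $data;
}

package main;

my $users    = MyApp::Model::Users->new;
my $new_user = $users->add_user({ email => 'bill@example.com' });

my $err;
eval { $users->add_user({}) } or $err = $@;
# $err now holds the exception object; a Catalyst end action could
# inspect it and render the errors consistently.
```

The point is that the model neither knows nor cares who catches the exception, so the same model works under Catalyst and in a cron script.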

Thanks for any feedback you can provide,

--
Bill Moseley
moseley@hank.org


  • Jason Galea at Jan 2, 2012 at 3:36 am
    Hi Bill,
    On Mon, Jan 2, 2012 at 11:41 AM, Bill Moseley wrote:

    So, I'm looking at adding a separate model layer(s) ("pattern #3" in link
    above), as is commonly suggested. My plan is to have one "distribution"
    that is our DBIC layer and then use that in a number of separate model
    layers (split out by functionality). The goal is to allow separate teams
    to work on different parts of the app, have separate unit tests, and
    separate release schedules. And to thin out the controllers. Much more
    manageable and scalable.
    I think I've added another layer but I'm not sure where you draw the line..
    I have a model layer over DBIC pulling together related result classes
    under a single model class. Then the app? layer uses the model layer to get
    things done. So I'd probably have one "distribution" that is our DBIC
    wrapped in a model layer layer and use that in a number of apps.. 8) Each
    app can then be used as the single model in a Catalyst app or script or
    whatever.. (I think I need more names for the parts..)

    Anyone here doing something like this? As I look into this I'm coming up
    with quite a few questions, of course.
    I've been trying to learn the steps to this little dance for a while now and
    still haven't put anything into production, but for what it's worth, here
    are some of the things I've implemented in my most recent code..

    I have "Sets" in lieu of ResultSets and "Models" for Results, although in
    most instances a Model will actually cover the usage of multiple Results.
    Each Set gets the DBIC schema object and knows its resultset name. Each
    model has a data attribute which contains a DBIC row object and "handles"
    any methods I don't need to override via the Moose "handles" attribute.
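    (A dependency-free illustration of that delegation idea -- in the real code Moose's "handles" generates these pass-through methods from the data attribute automatically; all names below are made up:)

```perl
use strict;
use warnings;

# Stand-in for a DBIC row object (hypothetical).
package My::Row::Person;
sub new       { my ($class, %f) = @_; bless {%f}, $class }
sub firstname { $_[0]->{firstname} }
sub surname   { $_[0]->{surname} }

# The model wraps the row. Moose would do the delegation below with:
#   has _record => ( is => 'ro', handles => [qw(firstname surname)] );
package My::Model::Person;
sub new { my ($class, %args) = @_; bless { _record => $args{_record} }, $class }

for my $method (qw(firstname surname)) {
    no strict 'refs';
    # Hand-written delegation: forward to the wrapped row object.
    *{"My::Model::Person::$method"} = sub { shift->{_record}->$method(@_) };
}

# A method the model adds on top of the raw row:
sub display_name { join ' ', $_[0]->firstname, $_[0]->surname }

package main;

my $person = My::Model::Person->new(
    _record => My::Row::Person->new( firstname => 'Bill', surname => 'Moseley' ),
);
```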

    Set->create($hash) creates the DBIC object, stuffs it into a model object,
    and returns that.
    Each result class that has a model class overrides its inflate_result
    method, which again stuffs the DBIC row object into the model object, so
    searches on the related DBIC resultsets return my model objects.

    Each model class has a validation class based on Validation::Class, and
    create & update run their input through that. If there are errors I stuff
    the errors into a very basic exception object and return that. This way I
    can return the same exception object no matter where the error comes from,
    e.g. a DBIC exception.
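    (A rough sketch of that normalization -- the inline checks stand in for Validation::Class, and all names are hypothetical:)

```perl
use strict;
use warnings;

# One basic exception class used for every error source.
package My::Exception;
sub new    { my ($class, %args) = @_; bless { errors => $args{errors} }, $class }
sub errors { $_[0]->{errors} }

package My::Set::Login;
sub new { bless {}, shift }

sub create {
    my ($self, $input) = @_;

    # 1. Validation (Validation::Class would handle this part).
    my @errors;
    push @errors, 'username is required' unless $input->{username};
    return My::Exception->new( errors => \@errors ) if @errors;

    # 2. Storage. A DBIC exception would die here, so trap it and
    #    normalize it into the same exception class.
    my $row = eval {
        die "duplicate username\n" if $input->{username} eq 'taken';
        +{ username => $input->{username} };   # stand-in for a created row
    };
    return My::Exception->new( errors => [$@] ) if $@;

    return $row;
}

package main;

my $set     = My::Set::Login->new;
my $invalid = $set->create({});                       # validation error
my $dup     = $set->create({ username => 'taken' });  # "storage" error
my $created = $set->create({ username => 'bill' });   # success
```

Either way the caller only ever has to check for one exception class.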

    So my app can use the Login set to create a login model which has methods
    to set/get email & username, check the password, set a temporary
    password, add to roles, and get roles by name. Beneath that are three or
    four DBIC result classes which the model class works with via custom
    methods or delegation.

    ok, sorry.. I'll stop there. This has turned into a brain dump and clarity
    has suffered badly.. I hope you got something for your trouble..

    cheers,

    J




    _______________________________________________
    List: Catalyst@lists.scsys.co.uk
    Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
    Searchable archive:
    http://www.mail-archive.com/catalyst@lists.scsys.co.uk/
    Dev site: http://dev.catalyst.perl.org/
  • Jason Galea at Jan 2, 2012 at 3:48 am
    oh, I've also started playing with Bread::Board and its looking like my
    "model" layer consisting of the DBIC Schema and all my "Sets" will be
    pulled together into a single Bread::Board container.

    J
  • Bill Moseley at Jan 9, 2012 at 5:14 am

    On Monday, January 2, 2012, Jason Galea wrote:
    I think I've added another layer but I'm not sure where you draw the
    line.. I have a model layer over DBIC pulling together related result
    classes under a single model class. Then the app? layer uses the model
    layer to get things done. So I'd probably have one "distribution" that is
    our DBIC wrapped in a model layer layer and use that in a number of apps..
    8) Each app can then be used as the single model in a Catalyst app or
    script or whatever.. (I think I need more names for the parts..)
    Yes, where to draw the line is difficult to know. I've only had a few
    hours to work on this, but already I feel like I'm reinventing Catalyst --
    mostly because my model layer is pulling in many of the components that my
    Catalyst app would normally provide -- DBIC, caching, even some concept of
    the "current user". Access control is another topic.


    I have "Sets" in lieu of ResultSets and "Models" for Results, although in
    most instances a Model will actually cover the usage of multiple Results.
    Each Set gets the DBIC schema object and knows its resultset name. Each
    model has a data attribute which contains a DBIC row object and "handles"
    any methods I don't need to override via the Moose "handles" attribute.

    Set->create($hash) creates the DBIC object, stuffs it into a model object,
    and returns that.
    So you are mirroring DBIC's class structure a bit. I need to consider that
    approach more, as currently my model layer returns the DBIC row object
    directly. So, I have something like this:

    my $user_model = Model::User->new;
    my $new_user = $user_model->new_user( $user_data );

    Not as flexible as your approach but my goal currently is to just abstract
    out the ORM so that Model::User can hide the specifics of the database.
    Actually, it's not that hard to do directly with DBIC, either.


    Each result class that has a model class overrides its inflate_result
    method, which again stuffs the DBIC row object into the model object, so
    searches on the related DBIC resultsets return my model objects.
    Can you show a code example of that? I'm not sure I'm following why you
    use that approach instead of having your layer on top of DBIC do that.


    Each model class has a validation class based on Validation::Class, and
    create & update run their input through that. If there are errors I stuff
    the errors into a very basic exception object and return that. This way I
    can return the same exception object no matter where the error comes from,
    e.g. a DBIC exception.
    Yes, I'm doing something very similar, where validation happens before the
    method in the model and on validation errors an exception is thrown (if
    you are on the Moose list you may have seen my example).
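    (A dependency-free sketch of "validation happens before the method" -- in the Moose version a "before" modifier does this wiring declaratively; all names here are invented:)

```perl
use strict;
use warnings;

package My::Model::Users;
sub new { bless {}, shift }

# The model method itself stays validation-free.
sub add_user {
    my ($self, $data) = @_;
    return { name => $data->{name} };
}

# Install validation in front of the method at load time. A Moose
# "before" modifier does the same thing declaratively:
#   before add_user => sub { ... validate ... };
{
    no warnings 'redefine';
    no strict 'refs';
    my $orig = \&My::Model::Users::add_user;
    *{'My::Model::Users::add_user'} = sub {
        my ($self, $data) = @_;
        die "validation failed: name is required\n" unless $data->{name};
        return $orig->($self, $data);
    };
}

package main;

my $users = My::Model::Users->new;
my $ok = $users->add_user({ name => 'Bill' });
my $err;
eval { $users->add_user({}) } or $err = $@;
```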

    Thanks for the feedback and the ideas,



    --
    Bill Moseley
    moseley@hank.org
  • Jason Galea at Jan 9, 2012 at 2:16 pm
    On Mon, Jan 9, 2012 at 3:14 PM, Bill Moseley wrote:
    On Monday, January 2, 2012, Jason Galea wrote:


    I think I've added another layer but I'm not sure where you draw the
    line.. I have a model layer over DBIC pulling together related result
    classes under a single model class. Then the app? layer uses the model
    layer to get things done. So I'd probably have one "distribution" that is
    our DBIC wrapped in a model layer layer and use that in a number of apps..
    8) Each app can then be used as the single model in a Catalyst app or
    script or whatever.. (I think I need more names for the parts..)
    Yes, where to draw the line is difficult to know. I've only had a few
    hours to work on this but already I feel like I'm reinventing Catalyst --
    mostly because my model layer is pulling in much of the components that my
    Catalyst app would normally do -- DBIC, caching, even some concept of the
    "current user". Access control is another topic.
    The problem parts for me are DBIC and TT. I thought I could just set up the
    components as usual, then load my app with them, but it gets tricky calling
    one component from another at setup time, although it all works fine if you
    instantiate the app per request. So now I'm connecting/creating those
    myself.

    For other things provided by plugins I'm working more with Catalyst so for
    caching I will probably have my app accept a cache object at construction
    and pass in the Catalyst cache. For Authentication I've created my own
    store and user for the Catalyst Authentication plugin and they use my app
    to do what they have to. I've also created a store for the session plugin
    which uses my app, so all-in-all my app can see/touch everything that
    Catalyst is doing, and I can still make use of all the Catalyst stuff
    available (hopefully).
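    (Roughly what "accept a cache object at construction" looks like -- any object with get/set will do, so Catalyst can hand over $c->cache while a script passes an in-memory one; all names here are made up:)

```perl
use strict;
use warnings;

# Minimal in-memory cache with the usual get/set interface.
package My::Cache::Memory;
sub new { bless { data => {} }, shift }
sub get { $_[0]->{data}{ $_[1] } }
sub set { $_[0]->{data}{ $_[1] } = $_[2] }

package My::App;

sub new {
    my ($class, %args) = @_;
    # The cache is injected, not constructed here, so the app never
    # cares whether it came from Catalyst or a cron script.
    return bless { cache => $args{cache} }, $class;
}

sub roles_for {
    my ($self, $username) = @_;
    my $roles = $self->{cache}->get("roles:$username");
    return $roles if $roles;
    $roles = ['customer'];   # stand-in for the real DBIC lookup
    $self->{cache}->set( "roles:$username" => $roles );
    return $roles;
}

package main;

# In a Catalyst model this would be My::App->new( cache => $c->cache ).
my $app    = My::App->new( cache => My::Cache::Memory->new );
my $first  = $app->roles_for('bill');   # miss: hits the "database"
my $second = $app->roles_for('bill');   # hit: comes from the cache
```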

    I have "Sets" in lieu of ResultSets and "Models" for Results, although in
    most instances a Model will actually cover the usage of multiple Results.
    Each Set gets the DBIC schema object and knows its resultset name. Each
    model has a data attribute which contains a DBIC row object and "handles"
    any methods I don't need to override via the Moose "handles" attribute.

    Set->create($hash) creates the DBIC object, stuffs it into a model object,
    and returns that.
    So you are mirroring DBIC's class structure a bit. I need to consider that
    approach more, as currently my model layer returns the DBIC row object
    directly. So, I have something like this:

    my $user_model = Model::User->new;
    my $new_user = $user_model->new_user( $user_data );
    Not as flexible as your approach but my goal currently is to just abstract
    out the ORM so that Model::User can hide the specifics of the database.
    Actually, it's not that hard to do directly with DBIC, either.
    yeh, I decided a while back that DBIx::Class is complicated enough and I'm
    too lazy to keep trying to work out complicated solutions in the DBIC
    classes to do things I know I can do quickly and easily with regular Moose
    classes.. and I like having nice clean DBIC classes..

    Each result class that has a model class overrides it's inflate_result
    method which again stuffs the dbic row object into the model object so
    searches on the related dbic resultsets return my model objects.
    Can you show a code example of that? I'm not sure I'm following why you
    use that approach instead of having your layer on top of DBIC do that.
    and the exception to the rule: I did have my Set classes (which I now
    refer to as Model Controllers) grabbing search results and looping through,
    inflating them all into my Model Instances, but then I couldn't just grab a
    resultset if I needed to limit/restrict/whatever, and any search or find
    had to be put through that wringer. With inflate_result I know that no
    matter how I get the results they'll be instances of my model. create is
    the only thing it doesn't work for, so my controller does the wrapping
    there.

    package Lecstor::Schema::Result::Person;
    use base qw/DBIx::Class/;
    __PACKAGE__->load_components(qw/ Core /);
    __PACKAGE__->table('person');
    __PACKAGE__->add_columns('id', 'firstname', 'surname');
    __PACKAGE__->set_primary_key('id');

    sub inflate_result {
        my $self = shift;
        my $ret = $self->next::method(@_);
        return unless $ret;
        return Lecstor::Model::Instance::Person->new( _record => $ret );
    }

    1;

    Each model class has a validation class based on Validation::Class, and
    create & update run their input through that. If there are errors I stuff
    the errors into a very basic exception object and return that. This way I
    can return the same exception object no matter where the error comes from,
    e.g. a DBIC exception.
    Yes, I'm doing something very similar, where validation happens before the
    method in the model and on validation errors an exception is thrown (if
    you are on the Moose list you may have seen my example).

    Thanks for the feedback and the ideas,
    no worries at all, happy to be able to provide it.

    cheers,

    J



  • Jason Galea at Jan 10, 2012 at 6:18 am
    hehe.. you want layers, I got layers..

    In addition to everything already mentioned I wanted to get Bread::Board in
    on the act..

    I've put Lecstor up on GitHub if you're interested along with a Catalyst
    app that uses it. Neither really do much but boy is there a lot of
    scaffolding! ..and all the tests pass! 8)

    The basic hook-up is Catalyst -> Catalyst Models -> Bread::Board Containers
    -> Lecstor App/Models -> DBIC and others.. decoupled like a broken bag o
    marbles..

    the Catalyst Models:
    - LecstorApp - application lifetime
    - LecstorModel - application lifetime
    - LecstorRequest - request lifetime
    - Lecstor - request lifetime

    LecstorModel & LecstorRequest return Bread::Board containers.
    LecstorApp returns a parameterized Bread::Board container.
    Lecstor grabs the first two and shoves them into the third to make another
    Bread::Board container from which I get my app..
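    (For anyone following along, the container idea in miniature -- Bread::Board does this properly, with lifecycles, dependencies, and parameterized containers; this dependency-free sketch just shows the shape, and every name in it is invented:)

```perl
use strict;
use warnings;

# A toy container: named services are closures that may resolve other
# services; each is built once and then reused (singleton lifetime).
package My::Container;

sub new {
    my ($class, %services) = @_;
    return bless { services => \%services, built => {} }, $class;
}

sub resolve {
    my ($self, $name) = @_;
    $self->{built}{$name} //= $self->{services}{$name}->($self);
    return $self->{built}{$name};
}

package main;

my $c = My::Container->new(
    dsn    => sub { 'dbi:SQLite:dbname=app.db' },
    schema => sub {
        my $c = shift;
        # Stand-in for My::Schema->connect( $c->resolve('dsn') )
        return { connected_to => $c->resolve('dsn') };
    },
);

my $schema = $c->resolve('schema');
```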

    have I gone walkabout!?

    https://github.com/lecstor/Lecstor

    https://github.com/lecstor/Lecstor-Shop-Catalyst

    comments welcome.

    cheers,

    J
  • Bill Moseley at Feb 7, 2012 at 3:26 am

    On Tue, Jan 10, 2012 at 1:18 PM, Jason Galea wrote:

    hehe.. you want layers, I got layers..

    I just got out of yet another meeting about this architecture redesign.
    (I'd like to see the graph that relates productivity to the number of
    people involved some day...)

    Jason, this is probably a question best suited to you and your experience,
    but I (ignoring that graph above) would like to hear others' opinions and
    reasoning.


    My goal was to put a layer between Catalyst and DBIC for a few reasons,
    including:

    1. To have a place to put common model code that cannot be represented
    in DBIC (e.g. data from other sources)
    2. To be able to split up the model into logical units that can be
    tested and used independently.
    3. To abstract out the physical layout of the database -- be able to
    change data layer w/o changing the API.


    My idea was that Catalyst would call a method in the new model layer and
    possibly get a DBIC object back. There is concern from some at my meeting
    that we don't want to give the Catalyst app developer a "raw" DBIC object
    and that we should wrap it (as it appears you are doing, Jason) in yet
    another object. That is, we want to allow $user->first_name, but not
    $user->search_related or $user->delete.

    That requires writing new wrapper classes for every possible result -- not
    just mirroring DBIC's result classes but possibly many more, because the new
    model might have multiple calls (with different access levels) for fetching
    user data. That is, $user->email might work on a user returned by some
    model methods but not on one returned by others.
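    (For concreteness, the kind of wrapper being debated -- a facade that exposes only a whitelist of read accessors, so $user->first_name works but $user->delete simply isn't there; everything below is hypothetical:)

```perl
use strict;
use warnings;

# Stand-in for a DBIC row, which has accessors *and* dangerous methods.
package Fake::Row::User;
sub new        { bless { first_name => 'Bill', email => 'bill@example.com' }, shift }
sub first_name { $_[0]->{first_name} }
sub email      { $_[0]->{email} }
sub delete     { die "row deleted!\n" }

# The facade: only whitelisted methods are generated; nothing else is
# forwarded to the underlying row.
package My::View::User;

sub new { my ($class, $row) = @_; bless { row => $row }, $class }

for my $method (qw(first_name email)) {
    no strict 'refs';
    *{"My::View::User::$method"} = sub { $_[0]->{row}->$method };
}

package main;

my $user    = My::View::User->new( Fake::Row::User->new );
my $name    = $user->first_name;        # allowed
my $exposed = $user->can('delete');     # undef: delete never leaks through
```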

    Frankly, to me this seems like a lot of code and work and complexity just
    to prevent another developer from doing something stupid -- which we cannot
    prevent anyway. And smart programmers can get at whatever they want,
    regardless. Seems more risky to make the code more complex and thus harder
    to understand. The cost/benefit ratio just doesn't seem that great.

    Am I missing something?


    I suppose this is not unlike the many discussions about what to pass to the
    view. Does the controller, for example, fetch a user object and pull the
    data required for the view into a hash and then pass that to the view? Or
    does the controller just fetch a user object and pass that directly to the
    view to decide what needs to display?

    I prefer just passing the object to the view. The controller code is much
    cleaner, and when the view needs to change we don't need to also change
    the controller. And when there's a different view (like an API or mobile)
    the same controller action can be used.

    Thanks,



    In addition to everything already mentioned I wanted to get Bread::Board
    in on the act..

    I've put Lecstor up on GitHub if you're interested along with a Catalyst
    app that uses it. Neither really do much but boy is there a lot of
    scaffolding! ..and all the tests pass! 8)

    The basic hook-up is Catalyst -> Catalyst Models -> Bread::Board
    Containers -> Lecstor App/Models -> DBIC and others.. decoupled like a
    broken bag o marbles..

    the Catalyst Models:
    - LecstorApp - application lifetime
    - LecstorModel - application lifetime
    - LecstorRequest - request lifetime
    - Lecstor - request lifetime

    LecstorModel & LecstorRequest return Bread::Board containers.
    LecstorApp returns a parameterized Bread::Board container.
    Lecstor grabs the first two and shoves them into the third to make another
    Bread::Board container, from which I get my app..
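For the curious, the parameterized hook-up can be sketched roughly like this (block services stand in for the real schema and session classes; every name here is invented):

```perl
use Bread::Board;

# Application-lifetime services plus a parameterized container that
# accepts a request-lifetime container and assembles the app object.
my $app_c = container 'App' => ['Request'] => as {

    service schema => (
        block     => sub { { dsn => 'dbi:SQLite:dbname=app.db' } },
        lifecycle => 'Singleton',    # application lifetime
    );

    service app => (
        block => sub {
            my $s = shift;
            return {
                schema  => $s->param('schema'),
                session => $s->param('session'),
            };
        },
        dependencies => {
            schema  => depends_on('schema'),
            session => depends_on('Request/session'),
        },
    );
};

# request-lifetime container, built fresh per request
my $request_c = container 'Request' => as {
    service session => ( block => sub { { user_id => 42 } } );
};

my $c   = $app_c->create( Request => $request_c );
my $app = $c->resolve( service => 'app' );
```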

    have I gone walkabout!?

    https://github.com/lecstor/Lecstor

    https://github.com/lecstor/Lecstor-Shop-Catalyst

    comments welcome.

    cheers,

    J

    On Tue, Jan 10, 2012 at 12:16 AM, Jason Galea wrote:


    On Mon, Jan 9, 2012 at 3:14 PM, Bill Moseley wrote:


    On Monday, January 2, 2012, Jason Galea wrote:


    I think I've added another layer but I'm not sure where you draw the
    line.. I have a model layer over DBIC pulling together related result
    classes under a single model class. Then the app? layer uses the model
    layer to get things done. So I'd probably have one "distribution" that is
    our DBIC wrapped in a model layer layer and use that in a number of apps..
    8) Each app can then be used as the single model in a Catalyst app or
    script or whatever.. (I think I need more names for the parts..)
    Yes, where to draw the line is difficult to know. I've only had a few
    hours to work on this but already I feel like I'm reinventing Catalyst --
    mostly because my model layer is pulling in many of the components that my
    Catalyst app would normally provide -- DBIC, caching, even some concept of
    the "current user". Access control is another topic.
    The problem parts for me are DBIC and TT. I thought I could just set up
    the components as usual, then load my app with them, but it gets tricky
    calling one component from another at setup time, although it all works
    fine if you instantiate the app per request. So now I'm connecting/creating
    those myself.

    For other things provided by plugins I'm working more with Catalyst so
    for caching I will probably have my app accept a cache object at
    construction and pass in the Catalyst cache. For Authentication I've
    created my own store and user for the Catalyst Authentication plugin and
    they use my app to do what they have to. I've also created a store for the
    session plugin which uses my app, so all-in-all my app can see/touch
    everything that Catalyst is doing, and I can still make use of all the
    Catalyst stuff available (hopefully).

    I have "Sets" in lieu of ResultSets and "Models" for Results, although in
    most instances a Model will actually cover the usage of multiple Results.
    Each Set gets the dbic schema object and knows its resultset name. Each
    model has a data attribute which contains a dbic row object and "handles"
    any methods I don't need to override via the Moose "handles" attribute!?

    Set->create($hash) creates the dbic object and stuffs it into a model
    class and returns that.
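In rough code, that Set pattern might look like the following sketch (class and attribute names invented):

```perl
package My::Model::Set::Person;
use Moose;

has schema         => ( is => 'ro', required => 1 );
has resultset_name => ( is => 'ro', default  => 'Person' );
has instance_class => ( is => 'ro', default  => 'My::Model::Instance::Person' );

sub resultset {
    my ($self) = @_;
    return $self->schema->resultset( $self->resultset_name );
}

# create the row via DBIC, then hand it back wrapped in the model class
sub create {
    my ( $self, $hash ) = @_;
    my $row = $self->resultset->create($hash);
    return $self->instance_class->new( _record => $row );
}

__PACKAGE__->meta->make_immutable;
1;
```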
    So you are mirroring DBIC's class structure a bit. I need to consider
    that approach more, as currently my model layer returns the DBIC row object
    directly. So, I have something like this:

    my $user_model = Model::User->new;
    my $new_user = $user_model->new_user( $user_data );
    Not as flexible as your approach but my goal currently is to just
    abstract out the ORM so that Model::User can hide the specifics of the
    database. Actually, it's not that hard to do directly with DBIC, either.
    yeh, I decided a while back that DBIx::Class is complicated enough and
    I'm too lazy to keep trying to work out complicated solutions in the DBIC
    classes to do things I know I can do quickly and easily with regular Moose
    classes.. and I like having nice clean DBIC classes..

    Each result class that has a model class overrides its inflate_result
    method, which again stuffs the dbic row object into the model object, so
    searches on the related dbic resultsets return my model objects.
    Can you show a code example of that? I'm not sure I'm following why you
    use that approach instead of having your layer on top of DBIC do that.
    and the exception to the rule.. I did have my Set classes (which I now
    refer to as Model Controllers) grabbing search results and looping through,
    inflating them all into my Model Instances, but then I couldn't just grab a
    resultset if I needed to limit/restrict/whatever, and any search or find
    had to be put through that wringer. With inflate_result I know that no
    matter how I get the results they'll be instances of my model. create is
    the only thing it doesn't work for, so my controller does the wrapping
    there.

    package Lecstor::Schema::Result::Person;
    use base qw/DBIx::Class/;

    __PACKAGE__->load_components(qw/ Core /);
    __PACKAGE__->table('person');
    __PACKAGE__->add_columns('id', 'firstname', 'surname');
    __PACKAGE__->set_primary_key('id');

    sub inflate_result {
        my $self = shift;
        my $ret = $self->next::method(@_);
        return unless $ret;
        return Lecstor::Model::Instance::Person->new( _record => $ret );
    }

    1;

    Each model class has a validation class based on Validation::Class,
    and create & update run their input through that. If there are errors I
    stuff them into a very basic exception object and return that. This
    way I can return the same exception object no matter where the error comes
    from, eg a dbic exception..
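The uniform-exception part of that can be a tiny Moose class. A sketch follows; note the validator calls in the trailing comment ($v->validate, $v->get_errors) are my reading of the Validation::Class API and should be treated as assumptions:

```perl
# One exception object for every error source: validation, DBIC, etc.
package My::Exception;
use Moose;

has errors => (
    is      => 'ro',
    isa     => 'ArrayRef[Str]',
    default => sub { [] },
);

sub error_count { scalar @{ $_[0]->errors } }

__PACKAGE__->meta->make_immutable;
1;

# in a Set's create, roughly (validator calls are assumptions):
#   my $v = My::Validator::Person->new( params => $input );  # Validation::Class
#   return My::Exception->new( errors => [ $v->get_errors ] )
#       unless $v->validate;
#   ... create the row and wrap it as usual ...
```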
    Yes, I'm doing something very similar where validation happens before
    the method in the model, and on validation errors an exception is thrown
    (if you are on the Moose list you may have seen my example).

    Thanks for the feedback and the ideas,
    no worries at all, happy to be able to provide it.

    cheers,

    J



    --
    Bill Moseley
    moseley@hank.org

    _______________________________________________
    List: Catalyst@lists.scsys.co.uk
    Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
    Searchable archive:
    http://www.mail-archive.com/catalyst@lists.scsys.co.uk/
    Dev site: http://dev.catalyst.perl.org/

    --
    Bill Moseley
    moseley@hank.org
  • Octavian Rasnita at Feb 7, 2012 at 8:00 am
    Hi Bill,

    From: Bill Moseley
    On Tue, Jan 10, 2012 at 1:18 PM, Jason Galea wrote:

    hehe.. you want layers, I got layers..


    I just got out of yet another meeting about this architecture redesign. (I'd like to see the graph that relates productivity to the number of people involved some day...)


    Jason, this is probably a question best for you and your experience, but I (ignoring that graph above) would like to hear others' opinions and reasoning.




    My goal was to put a layer between Catalyst and DBIC for a few reasons, including:
    1. To have a place to put common model code that cannot be represented in DBIC (e.g. data from other sources)
    2. To be able to split up the model into logical units that can be tested and used independently.
    3. To abstract out the physical layout of the database -- be able to change data layer w/o changing the API.



    I also needed that flexibility for exactly the same reasons, but the "bad" thing is that Catalyst does so many things automatically, and DBIC likewise, that doing those things in the app itself would imply a decrease in productivity... at least if the app is not very big and complex.

    Some actions, like getting records from a db, are surely the job of a model, but that model could use records as simple hashrefs (as returned by DBI's fetchrow_hashref), or it could use DBIC row objects, or other models could offer the data with another structure. But there is no standard structure defined for a model that should unify the data from several such models and offer it to the view. I guess it could be hard to define such a structure, because it would differ among different apps and might also imply a performance degradation.

    But some other actions are considered to be the job of the controller -- authentication/authorization, for example -- or anyway the job of the web framework. However, sometimes that authentication/authorization should be done in other ways: not over the web, but by a simple command line script, or by a GUI interface.

    I guess that to totally decouple the web interface from the app, the app should offer a certain interface compatible with Catalyst, and the developer would just need to configure the Catalyst part to handle the app foo at /foo, another app bar at /bar, and another app baz at /.
    And the interface of all those apps should accept an authorization/authentication object in a standard format, and the authentication should be done by Catalyst or the GUI app, or the CLI script... And the apps used by Catalyst could offer their own authentication/authorization, and the developer could configure Catalyst to use the authentication offered by app foo, bar, or baz, or an external authenticator that uses the database of another app; that authenticator should do the validation and offer the authentication object in the standard format accepted by the apps.

    This way would be more simple to create adapters for existing apps and combine them in a single web site, or change the authentication...

    Anyway, the question regarding a common format for the data returned by the model to the view remains, and because changing the data structures returned by the underlying modules could imply a performance degradation, it might not be a good way. I am also thinking that there are many developers who like the very limited style of other web frameworks, which accept a single ORM and a single templating system, and don't even think to decouple the app from the web framework...

    Just thoughts.... Yeah I know, patches welcome. :-)



    My idea was that Catalyst would call a method in the new model layer and possibly get a DBIC object back. There is concern from some at my meeting that we don't want to give the Catalyst app developer a "raw" DBIC object and that we should wrap it (as it appears you are doing, Jason) in yet another object. That is, we want to allow $user->first_name, but not $user->search_related or $user->delete.


    That requires writing new wrapper classes for every possible result -- not just mirroring DBIC's result classes but possibly many more because the new model might have multiple calls (with different access levels) for fetching user data. That is, $user->email might work for some model methods that return a user but not methods called on the model.


    Frankly, to me this seems like a lot of code and work and complexity just to prevent another developer from doing something stupid -- which we cannot prevent anyway. And smart programmers can get at whatever they want, regardless. Seems more risky to make the code more complex and thus harder to understand. The cost/benefit ratio just doesn't seem that great.



    **
    Yep, partly for not allowing the developer to do something stupid, but also for making the application not depend so much on the underlying model... DBIC for example.

    So if the team will decide to change DBIC with something else, they should be able to continue to use $user->email without changing the controller or the views.

    But in this model of work (using fat models and thin controllers), most of the code is in the model anyway, so no matter if the DBIC model or the business model would use the biggest part of the code, changing DBIC with something else would imply a lot of work if the new underlying module uses a totally different interface than DBIC.

    So it matters less whether the developer would also need to change a few lines of code in the controller and/or templates.

    And this is theory, but I am wondering how many times, in practice, a team has decided to swap DBIC for another ORM or another source/destination of data.
    I guess that if they decide to do that, it would be easier to rewrite the entire application.

    As I showed above, making an app with the interface totally decoupled would be wonderful, but only if there won't be much performance degradation, which I doubt. There should also be a standard interface defined for Perl programs, one that is largely accepted and that would allow the developer to choose to publish the app with Catalyst or with another web framework accepting that interface; but this will be complicated, because that interface would depend on the app, would be less flexible, and might imply performance degradation.



    Am I missing something?




    I suppose this is not unlike the many discussions about what to pass to the view. Does the controller, for example, fetch a user object and pull the data required for the view into a hash and then pass that to the view? Or does the controller just fetch a user object and pass that directly to the view to decide what needs to display?



    ***
    As its name implies, the controller should control things. So it should decide what should be presented, not the view. The view should just present the data offered by the controller.
    The view should not be able to present something which is not allowed. But if many things are allowed, then the controller could offer all those things and not restrict the user object by creating and offering another, more limited object. The controller should be in control, even if that control is very limited sometimes.



    I prefer just passing the object to the view. The controller code is much cleaner, and when the view needs to change I don't need to also change the controller. And when there's a different view (like an API or mobile) the same controller action can be used.




    ***
    Yes, I also prefer that way, because I usually don't need too many restrictions. But sometimes the view should not get too much data: the view could be, say, a WxPerl app in a remote location, which can't receive an object locally and execute methods on it, but must receive a serialized string, which shouldn't be too big, for a faster transfer; in that case the controller should choose to offer a smaller serialized object.

    Octavian
  • Dave Howorth at Feb 7, 2012 at 10:15 am

    Bill Moseley wrote:
    That is, we want to allow $user->first_name, but not
    $user->search_related or $user->delete.

    That requires writing new wrapper classes for every possible result -- not
    just mirroring DBIC's result classes but possibly many more because the new
    model might have multiple calls (with different access levels) for fetching
    user data. That is, $user->email might work for some model methods that
    return a user but not methods called on the model.

    Frankly, to me this seems like a lot of code and work and complexity just
    to prevent another developer from doing something stupid -- which we cannot
    prevent anyway. And smart programmers can get at whatever they want,
    regardless. Seems more risky to make the code more complex and thus harder
    to understand. The cost/benefit ratio just doesn't seem that great.
    You don't necessarily need new classes. You need to change existing
    behaviours, so you could override methods. To prevent programmers doing
    things, you don't need to make it impossible, just make it easy to see
    when they are naughty. A log entry, or an audit record in the database,
    or even peer review of the code, can all be sufficient to stop a
    programmer calling delete. As long as they know they're not supposed to
    call it! What's appropriate all depends on the application and your
    circumstances, of course. JMHO.
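A sketch of that idea: keep delete callable but make every call leave a trace. The base class below is a stand-in so the example is self-contained; in real code this would be the DBIC result class, and next::method would reach DBIx::Class::Row::delete:

```perl
# Stand-in for the ORM row base class, just to make the sketch runnable.
package My::Row::Base;
sub new    { my ( $class, %args ) = @_; bless {%args}, $class }
sub id     { $_[0]{id} }
sub delete { $_[0]{deleted} = 1; return $_[0] }

package MyApp::Result::User;
use mro 'c3';
our @ISA = ('My::Row::Base');

# delete still works, but every call is logged with its call site
sub delete {
    my $self = shift;
    warn sprintf "AUDIT: delete on user %s at %s line %s\n",
        $self->id, (caller)[1, 2];
    return $self->next::method(@_);
}

package main;
my $user = MyApp::Result::User->new( id => 7 );
$user->delete;    # row is deleted, but the audit trail shows who did it
```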
    I suppose this is not unlike the many discussions about what to pass to the
    view. Does the controller, for example, fetch a user object and pull the
    data required for the view into a hash and then pass that to the view? Or
    does the controller just fetch a user object and pass that directly to the
    view to decide what needs to display?

    I prefer just passing the object to the view. The controller code is much
    cleaner, and when the view needs to change I don't need to also change
    the controller. And when there's a different view (like an API or mobile)
    the same controller action can be used.
    You might (re)read what Martin Fowler has to say about facades. It may
    help you to firm up your own opinion, even if you disagree with him.

    Cheers, Dave
  • Jason Galea at Feb 7, 2012 at 2:38 pm

    On Tue, Feb 7, 2012 at 1:26 PM, Bill Moseley wrote:


    My idea was that Catalyst would call a method in the new model layer and
    possibly get a DBIC object back. There is concern from some at my meeting
    that we don't want to give the Catalyst app developer a "raw" DBIC object
    and that we should wrap it (as it appears you are doing, Jason) in yet
    another object. That is, we want to allow $user->first_name, but not
    $user->search_related or $user->delete.

    That requires writing new wrapper classes for every possible result -- not
    just mirroring DBIC's result classes but possibly many more because the new
    model might have multiple calls (with different access levels) for fetching
    user data. That is, $user->email might work for some model methods that
    return a user but not methods called on the model.

    Frankly, to me this seems like a lot of code and work and complexity just
    to prevent another developer from doing something stupid -- which we cannot
    prevent anyway. And smart programmers can get at whatever they want,
    regardless. Seems more risky to make the code more complex and thus harder
    to understand. The cost/benefit ratio just doesn't seem that great.

    Am I missing something?
    nope.. the complexity involved continues to reveal itself to me..

    - account for the fact that sometimes a DBIC relation gives us our
    instance object and sometimes a dbic result object. (to do with how it got
    there)

    my ($self) = @_;
    my $user = $self->_record->user;
    my $user_class = $self->user_instance_class;
    - return $user ?
    -     $user_class->new( '_record' => $user )
    -     : $user_class->new();
    + $user = $user_class->new( '_record' => $user )
    +     if $user && !$user->isa($user_class);
    + return $user || $user_class->new();
    }





    J


  • Tomas Doran at Jan 2, 2012 at 11:50 am

    On 2 Jan 2012, at 01:41, Bill Moseley wrote:

    So, I'm wondering how best to do that if I provide a separate model
    layer that includes data validation. For example, say I have a
    model for user management which includes a method for creating new
    users. If I have a model method $users->add_user( \%user_data ) I
    would tend to have it return the new user object or throw an
    exception on failure. What probably makes sense is using exception
    objects and have Catalyst catch those to render the error in an
    appropriate way. Is this an approach you are using? Any other
    tips on structuring the model layer that works well with both
    Catalyst and non-Catalyst applications?
    Yes, it is an approach I'm using - at least for api type applications.

    I'm doing something very like using https://metacpan.org/module/HTTP::Throwable
    , although my code doing this pre-exists that module.
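A rough sketch of that factory interface (the as_psgi call and the NotFound identifier are per my reading of the HTTP::Throwable docs; verify before relying on them):

```perl
use HTTP::Throwable::Factory qw( http_throw );

# throw an HTTP-flavoured exception from anywhere in the model...
my $ok = eval { http_throw('NotFound'); 1 };

# ...and let the web layer catch it and render it
unless ($ok) {
    my $e = $@;
    # the exception knows how to render itself as a PSGI response
    my ( $status, $headers, $body ) = @{ $e->as_psgi };
    # $status is 404 here
}
```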

    Cheers
    t0m
  • Bill Moseley at Jan 9, 2012 at 5:21 am
    On Mon, Jan 2, 2012 at 6:50 PM, Tomas Doran wrote:
    I'm doing something very like using
    https://metacpan.org/module/HTTP::Throwable, although my code doing this
    pre-exists that module.
    Are you doing that in your non-Catalyst model? I'm using Throwable, but
    HTTP::Throwable seems, well, pretty HTTP specific.

    What about access control? Hmm, I seem to be pushing much of what was in
    Catalyst down into the Model in this current exercise.



    --
    Bill Moseley
    moseley@hank.org

Discussion Overview
group: catalyst
categories: catalyst, perl
posted: Jan 2, '12 at 1:41a
active: Feb 7, '12 at 2:38p
posts: 12
users: 5
website: catalystframework.org
irc: #catalyst
