FAQ
Hey gophers! I'm looking for some wisdom! I'm currently writing a program
with an in-memory slice of pointers to structs (a "database" of sorts).
There are two things I need your help with. First, and most obvious: I have
a struct with a lot of fields, let's call it Thing, and I'm getting those
Things from a JSON file and unmarshaling them on startup. What is better,
parsing into a []Thing or a []*Thing? As far as I can see the second one
takes less memory. For the second point I'll need to explain with a little
bit of code.

type Thing struct {
     .... // Lots of fields from strings to maps to bools
}

type Things []*Thing //Slice of Pointers for now...

Then I'm writing a lot of functions to search my slice ("database") in a
web app. And most of them look like this:

func (t Things) SearchByName(name, lang string) Things {
     results := make(Things, 0, DefaultCapacity) // any idea on a good value??
     for _, thing := range t {
         // Start of function specific code
         if name.Partially.Matches.Or.Something(thing.Name) {
              results = append(results, thing)
         }
     }
     return results
}

func (t Things) SearchBySomeBool(some bool) Things {
     results := make(Things, 0, DefaultCapacity)
     for _, thing := range t {
         // Start of function specific code
         if thing.SomeBool == some {
              results = append(results, thing)
         }
     }
     return results
}

Etcetera... you get the point. Working like this is nice because I can
filter my original slice (database) and then use the same functions to
filter the resulting slice (as it is also of type Things), and by that keep
chaining searches (filters) until I get what I want. And it's working
nicely, but as you can see there is a lot of repeated code in each of those
functions. So I tried to refactor this into a single generic function
(which would receive specific functions, maybe?) to look like this:

func (t Things) Search(......) Things {
     results := make(Things, 0, DefaultCapacity)
     for _, thing := range t {
         // Start of function specific code
         // I'm thinking of receiving a function that takes a 'thing' and
         // does its stuff then returns true or false
         if function.YieldsTrue {
              results = append(results, thing)
         }
     }
     return results
}

The thing is, I could send the specific function to Search, and that
function must receive a Thing, check whether it matches the search, and
return true or false. But then I slammed into a wall!!! Each SearchBy
function receives different parameters (strings, bools, etc...) so it can
check its specific matching. And it was all fine when I could chain my
searches from the top, because I had those parameters at hand and could
send them to each function separately.

Does anyone know how to refactor this so I could use a single Search func
to DRY the code a bit and make it more idiomatic? Maybe channels?? I
dunno, every bit of help is really appreciated.

Thanks a lot.
Guillermo Estrada



--
You received this message because you are subscribed to the Google Groups "golang-nuts" group.
To unsubscribe from this group and stop receiving emails from it, send an email to golang-nuts+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


  • Egon at Sep 30, 2014 at 6:21 am

    On Tuesday, 30 September 2014 06:06:45 UTC+3, Guillermo Estrada wrote:
    Hey gophers! I'm looking for some wisdom! I'm currently writing a program
    with an in-memory slice of pointers to structs (a "database" of sorts).
    There are two things I need your help with. First, and most obvious: I have
    a struct with a lot of fields, let's call it Thing, and I'm getting those
    Things from a JSON file and unmarshaling them on startup. What is better?
    It depends, but here it seems you have a lot of fields, so duplicating
    every Thing would use memory and involve a lot of copying. Of course with a
    pointer you may accidentally modify the Thing and cause unintended changes.
    I would go with a pointer and try to be careful how I use the results.

    To parse into a []Thing or a []*Thing? As far as I can see the second
    one takes less memory. For the second point I'll need to explain with a
    little bit of code.

    type Thing struct {
    .... // Lots of fields from strings to maps to bools
    }

    type Things []*Thing //Slice of Pointers for now...

    Then I'm writing a lot of functions to search my slice ("database") in a
    web app. And most of them look like this:

    func (t Things) SearchByName(name, lang string) Things {
        results := make(Things, 0, DefaultCapacity) // any idea on a good value??
    Choose the value based on how many elements you expect to return. One
    possibility is to guess that 10% will match the query and use make(Things,
    0, len(t)/10), or that the results are very small, e.g. make(Things, 0).

    Anyway, if you don't know exactly what to put there, don't put anything
    there... start worrying about it when you have performance problems.

        for _, thing := range(t) {
            // Start of function specific code
            if name.Partially.Matches.Or.Something(thing.Name) {
                results = append(results, thing)
            }
        }
        return results
    }

    func (t Things) SearchBySomeBool(some bool) Things {
        results := make(Things, 0, DefaultCapacity)
        for _, thing := range(t) {
            // Start of function specific code
            if thing.SomeBool == some {
                results = append(results, thing)
            }
        }
        return results
    }

    Etcetera... you get the point. Working like this is nice because I can
    filter my original slice (database) and then use the same functions to
    filter the resulting slice (as it is also of type Things), and by that
    keep chaining searches (filters) until I get what I want. And it's working
    nicely, but as you can see there is a lot of repeated code in each of
    those functions. So I tried to refactor this into a single generic
    function (which would receive specific functions, maybe?) to look like
    this:

    func (t Things) Search(......) Things {
        results := make(Things, 0, DefaultCapacity)
        for _, thing := range(t) {
            // Start of function specific code
            // I'm thinking of receiving a function that takes a 'thing' and
            // does its stuff then returns true or false
            if function.YieldsTrue {
                results = append(results, thing)
            }
        }
        return results
    }
    You can do:

    func (ts Things) Search(fn func(*Thing) bool) (result Things) {
         for _, t := range ts {
             if fn(t) {
                 result = append(result, t)
             }
         }
         return result
    }

    When using it,

    query := 123
    result := things.Search(func(t *Thing) bool { return t.Value == query })

  • Guillermo Estrada at Sep 30, 2014 at 10:08 pm

    It depends, but here it seems you have a lot of fields, so duplicating
    every Thing would use memory and involve a lot of copying. Of course with a
    pointer you may accidentally modify the Thing and cause unintended changes.
    I would go with a pointer and try to be careful how I use the results.
    My slice (database) is read-only; if changes are made, a special deployment
    of the original JSON file is done and the server has to be restarted, so I
    guess sticking to pointers is a good idea as long as I keep an eye on
    their integrity. Even if I screw up in the code, restarting the server
    solves the problem.

    Choose the value based on how many elements you expect to return. One
    possibility is to guess that 10% will match the query and use make(Things,
    0, len(t)/10), or that the results are very small e.g. make(Things, 0).

    Anyway, if you don't know exactly what to put there, don't put anything
    there... start worrying about it when you have performance problems.
    I'm keeping the default capacity at 20 (an arbitrary number, I know).
    Although the database contains around 20k structs and searches can vary a
    TON, 20 looks good for the 'expected result', so I'll follow your advice
    and just keep an eye out for performance problems. I guess 20 pointers
    (those are 64-bit, right?) per request is not a bad deal even if they are
    not used, and most likely they will be. I don't know how I will detect
    performance problems though; profiling will give me a lot to work on
    beyond the in-memory database itself (changing my slice into a tree, for
    example). I'm iterating the slice on every search, and I guess that could
    have an impact eventually (under concurrency), but right now it's just
    blazing fast.

    You can do:

    func (ts Things) Search(fn func(*Thing) bool) (result Things) {
        for _, t := range ts {
            if fn(t) {
                result = append(result, t)
            }
        }
        return result
    }

    When using it,

    query := 123
    result := things.Search(func(t *Thing) bool { return t.Value == query })
    Now this is the point I wanted. That looks good! And indeed it uses
    closures, but that kinda looks awful when chaining, and the variables MUST
    be in scope of the closure (can't keep code clean like that). My search
    parameters are bound into a struct, so you gave me the idea of doing
    something like this:

    type Things []*Thing

    func (t Things) Search(s *SearchParams, fn func(*Thing, *SearchParams) bool) Things {
        results := make(Things, 0, DefaultCapacity)
        for _, thing := range t {
            if fn(thing, s) {
                results = append(results, thing)
            }
        }
        return results
    }

    func ByName(t *Thing, s *SearchParams) bool {
        return strings.Contains(t.Name, s.Name)
    }

    And call them like this.

    results := database.Search(&s, ByName).Search(&s, ByBool).Search(&s, ByWhatever) // This will filter sequentially

    Although I'm still not liking having to send the search params in every
    call... it's just a pointer, and it's better than the alternative...
    Thoughts?

  • Tamás Gulácsi at Oct 1, 2014 at 4:18 am
    You can make factories:
      ByName(*SearchParams) func(Things) Things
    And functors named And, Or... so you have to iterate only once.

    This is the same as the closure thing, but creates those closures cleanly and only once.

  • Guillermo Estrada at Oct 1, 2014 at 6:34 pm
    Factories sound nice!! Kinda like working in Ruby. Do you have any example of this (or any project on GitHub using them) to give me an idea of an idiomatic way to refactor this?

    Thanks a lot

  • Egon at Oct 1, 2014 at 6:43 pm

    On Wednesday, 1 October 2014 21:34:09 UTC+3, Guillermo Estrada wrote:
    Factories sound nice!! Kinda like working in Ruby. Do you have any
    example of this (or any project on GitHub using them) to give me an idea
    of an idiomatic way to refactor this?

    He just means a function that returns a function... e.g.

    func ByName(name string) func(*Thing) bool {
         return func(t *Thing) bool { return t.Name == name }
    }

    Also, you might be overthinking this...
    plain for loops are a simpler solution in these cases, e.g.

    for _, t := range things {
        if t.Name == "something" && t.Age > 10 {
           fmt.Println(t)
        }
    }

    + egon

  • Guillermo Estrada at Oct 3, 2014 at 5:11 pm
    Thanks everyone, I finally ended up doing something like this.

    // When I parse my SearchParams, I check which ones are present, so I
    // know which filters to apply, and append them to a slice of filters
    // on the struct (pointers to functions).

    type Filter func(*SearchParams, *Thing) bool

    func (t Things) Search(s *SearchParams) Things {
       results := make(Things, 0, DefaultCap)
       for _, thing := range t { // iterate over my things
         for _, f := range s.Filters {
           if f(s, thing) { // and over my filters
             // if any filter matches, that thing is good to ship
             results = append(results, thing)
             break
           } else {
             // if any filter does not match, then we won't check the others.
             break
           }
         }
       }
       return results
    }


    An example of a filter.

    func ByName(s *SearchParams, t *Thing) bool {
       return strings.Contains(t.Name, s.Name)
    }

    I'm liking the code right now. I tried to optimize as much as I could
    while keeping the code understandable and small. Thoughts on the
    implementation are appreciated, and thanks everyone for your comments.

  • Guillermo Estrada at Oct 3, 2014 at 5:33 pm
    Sorry, I screwed up the Search function... it should look like this (more
    readable and working code):

    // I wrote around 20 different filters for the searches.
    // When I parse the request with the SearchParams, I check which ones
    // are to be used in the search func.
    type Filter func(*SearchParams, *Thing) bool

    // SearchParams has a []Filter which contains all filters
    // that are going to be applied (previously parsed).
    func (t Things) Search(s *SearchParams) Things {

         results := make(Things, 0, CAP)

         // Things Loop
    Things:
         for _, thing := range t {
             // Filters Loop
             for _, f := range s.Filters {
                 if !f(s, thing) {
                     // If one filter does not match,
                     // we don't keep checking filters
                     // and proceed with the next thing.
                     continue Things
                 }
             }
             // All filters match; we can append this thing.
             results = append(results, thing)
         }

         return results
    }

    One thing to notice: when I parse my SearchParams (that come from the
    request) with this approach, I add the filters in ascending order of cost,
    so faster filters get executed first and slower filters ONLY get a chance
    to run if the cheaper ones match. This is far superior to a plain
    check-everything loop; text search is pretty costly, but a bool check or
    integer match is pretty fast.

  • Egon at Oct 3, 2014 at 6:39 pm
    Any particular reason you are not using a database? It's designed for such
    things.

    Also, you can still make it simpler... e.g.

    http://play.golang.org/p/f1au4XKyAq

    Of course depending where/how you actually want to use it, there may be
    better approaches.

    + egon
  • Guillermo Estrada at Oct 3, 2014 at 7:50 pm
    I really like what you did in the example; I'm going to check it thoroughly
    to see what I can implement. The And/Or options look like nice extra
    functionality, as does the way you return functions at every step of the
    implementation. I'll have to test that approach; it looks more 'idiomatic'
    of sorts, more or less what I was asking to refactor with 'closures', but
    cleaner.

    There are many reasons not to use a database. The best I can think of is
    that it's a nice learning exercise, but the real ones:

        - Service is read-only all the way. The "database", as I mentioned, is
        a big file that will change once every 2-3 months; it is loaded on
        startup and contains around 20k of the structs I need. (More will come,
        but implementing one is all we need.)
        - The in-memory cost of all the data is usually less than running my
        own DB, at least with the ones that run as servers (MySQL, Postgres) or
        in-memory databases (Memcached, Redis), which use RAM just for the
        service. And even if the DB took little or no memory, the next point
        makes up for it.
        - Service needs to run on as little hardware as possible. Though the
        in-memory database may seem like a big commitment upfront, as every
        struct returned in the response is a pointer into the original slice,
        there are no copy operations on the actual data. This is very memory
        efficient: I sieged the server with 100k queries at a concurrency level
        of 500, and no more than 5-15MB were used to fulfill all the requests,
        which is pretty good whether or not we know how many to expect.
        - Also, there is no cost for populating Go structs, and no code for
        parsing and matching the datatypes of the DB (in some cases removing
        reflection is HUGE).
        - Service needs to be FAST. An in-memory slice is the fastest I could
        think of; removing disk I/O from the equation is good, and we can
        forget about data corruption on disk, backups, latency, etc... It also
        removes the DB's copy operations and its own computation costs (which
        we cannot change).
        - Service needs to be tightly packed for deployment. That means I
        don't have to deal with a DB service/server installation and stuff:
        deploy, run, and it works, across different setups and different
        platforms, with less code to maintain, etc...
        - More granularity over the service. A few lines of code can get me
        custom data types, custom searches, saving custom searches to disk for
        download, etc...
        - And last but not least... there is no fun in DBA :P

    Thanks for all the feedback! I'll keep testing this and keep adding
    functionality. I dunno, if this turns out well it could be a nice library
    for in-memory databases with some more effort; I'll be sure to share it if
    I get there.

    Right now I'm dealing with case-insensitive text search/match. Using
    strings.ToLower and strings.Contains in each comparison is "fast" (still
    pretty slow) but uses a lot of memory; on the other hand,
    regexp.MustCompile and Regexp.Match is 3-4x slower but uses half the
    memory. I'm tempted to keep all lower-cased names in a tree and implement
    a fast search just for that, but I need a better approach for larger text
    fields :S



  • Egon at Oct 4, 2014 at 5:16 am

    On Friday, 3 October 2014 22:50:45 UTC+3, Guillermo Estrada wrote:
    [...]
    - Service needs to be FAST, in memory Slice is the fastest I could
    think of, removing disk i/o from the equation is good, and we can forget
    about data corruption in disks, backup and latency, etc... Also removing
    copy operations from DBA and their own computation costs (which we cannot
    change)
    DBs usually have indexes/structures for doing efficient queries. Going
    over a slice is not that fast when the slice is big, and the more data you
    have, the bigger the difference. But indeed the conversion has some cost;
    whether it outweighs the cost of a DB, you need to measure.

    And there are databases without any external dependencies. e.g.

    https://github.com/mattn/go-sqlite3 (http://www.sqlite.org/inmemorydb.html)
    https://github.com/boltdb/bolt


Discussion Overview
group: golang-nuts
categories: go
posted: Sep 30, '14 at 3:11a
active: Oct 4, '14 at 5:16a
posts: 11
users: 3
website: golang.org

People

Translate

site design / logo © 2021 Grokbase