I really like what you did in the example; I'm going to check it thoroughly
to see what I can implement. The And/Or options look like nice extra
functionality, as does the way you return functions at every step of the
implementation. I'll have to test that approach; it looks more 'idiomatic' of
sorts, which is more or less what I was asking to refactor with 'closures',
but cleaner.

There are many reasons not to use a database. The best I can think of is
that it's a nice learning exercise, but the real ones are:

    - The service is read-only all the way. The "database", as I mentioned, is
    a big file that will change once every 2-3 months; it is loaded on startup
    and contains around 20k of the structs I need. (More will come, but
    implementing one is all we need.)
    - The in-memory cost of all the data is usually less than running my own
    DB, at least with the ones that run as servers (MySQL, Postgres) or the
    in-memory databases (Memcached, Redis), which use RAM just for the service.
    Even in the case where the DB takes little or no memory, the next point
    makes up for it.
    - The service needs to run on as little hardware as possible. Though the
    in-memory database may seem like a big commitment upfront, every struct
    returned in the response is a pointer into the original slice, so there are
    no copy operations on the actual data; this is very memory efficient. I
    sieged the server with 100k queries at a concurrency level of 500, and no
    more than 5-15MB were used to fulfill all the requests, which is pretty
    good whether or not we know how many requests to expect.
    - Also, there is no cost for populating Go structs, and no code for
    parsing and matching the datatypes of the DB (in some cases removing
    reflection is HUGE).
    - The service needs to be FAST; an in-memory slice is the fastest thing I
    could think of. Removing disk I/O from the equation is good, and we can
    forget about data corruption on disk, backups, latency, etc. It also
    removes the DB's copy operations and its own computation costs (which we
    cannot change).
    - The service needs to be tightly packed for deployment. That means I
    don't have to deal with installing a DB service/server and all that; it
    deploys, runs, and works with different setups and on different platforms,
    with less code to maintain, etc.
    - More granularity over the service: a few lines of code get me custom
    data types, custom searches, saving custom searches to disk for download...
    - And last but not least... there is no fun in DBs :P
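The zero-copy point above can be sketched in a few lines. This is a minimal illustration, not my actual code; `Record`, `store`, and `matching` are made-up names. The idea is that results are pointers into the slice loaded at startup, so each hit costs one machine word instead of a struct copy:

```go
package main

import "fmt"

// Record stands in for the ~20k structs loaded at startup.
type Record struct {
	ID   int
	Name string
}

// store is the single in-memory "database" slice, populated once on startup.
var store = []Record{{1, "alpha"}, {2, "beta"}, {3, "gamma"}}

// matching returns pointers into store, so no Record is ever copied:
// each result just points at the original data.
func matching(pred func(*Record) bool) []*Record {
	var out []*Record
	for i := range store {
		// take the address of the slice element itself, not of a loop copy
		if pred(&store[i]) {
			out = append(out, &store[i])
		}
	}
	return out
}

func main() {
	hits := matching(func(r *Record) bool { return r.ID >= 2 })
	fmt.Println(len(hits), hits[0].Name) // prints: 2 beta
}
```

Note that indexing with `&store[i]` matters: ranging with `for _, r := range store` would yield a per-iteration copy, and taking its address would not point into the original data.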

Thanks for all the feedback! I'll keep testing this and keep adding
functionality. I don't know if it looks good yet, but with some more effort
it could be a nice library for in-memory databases; I'll be sure to share it
if I get there.

Right now I'm dealing with case-insensitive text search/match. Using
strings.ToLower and strings.Contains in each comparison is "fast" (still
pretty slow) but uses a lot of memory; on the other hand,
regexp.MustCompile with Regexp.Match is 3-4x slower but uses half the
memory. I'm tempted to keep all the lowercased names in a tree and
implement a fast search just for that, but I need a better approach for
larger text fields :S
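One common way to cut the per-query ToLower allocations is to lowercase each field once at load time and keep the derived copy next to the original. A minimal sketch of that idea, with made-up names (`lowerName`, `load`, `searchName`) rather than my real code:

```go
package main

import (
	"fmt"
	"strings"
)

// Thing carries a precomputed lowercase copy of its Name, filled in once
// at load time, so queries never call strings.ToLower on stored data.
type Thing struct {
	Name      string
	lowerName string // derived field, computed on startup
}

func load(names []string) []Thing {
	things := make([]Thing, len(names))
	for i, n := range names {
		things[i] = Thing{Name: n, lowerName: strings.ToLower(n)}
	}
	return things
}

// searchName lowercases only the query (once per request),
// not every stored value on every comparison.
func searchName(things []Thing, query string) []*Thing {
	q := strings.ToLower(query)
	var out []*Thing
	for i := range things {
		if strings.Contains(things[i].lowerName, q) {
			out = append(out, &things[i])
		}
	}
	return out
}

func main() {
	ts := load([]string{"Alpha", "ALPHABET", "Beta"})
	fmt.Println(len(searchName(ts, "alpha"))) // prints: 2
}
```

The trade-off is one extra string per field held in memory, in exchange for zero allocations in the hot comparison loop.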

On Friday, October 3, 2014 1:39:53 PM UTC-5, egon wrote:

Any particular reason you are not using a database? It's designed for such
use cases.

Also, you can still make it simpler... e.g.


Of course depending where/how you actually want to use it, there may be
better approaches.

+ egon
On Friday, 3 October 2014 20:33:09 UTC+3, Guillermo Estrada wrote:

Sorry, I screwed up the Search function... it should look like this (more
readable and working code):

// I wrote around 20 different filters for the searches.
// When I parse the request with the SearchParams, I check which ones
// are to be used in the search func.
type Filter func(*SearchParams, *Thing) bool

// SearchParams has a []Filter which contains all the filters
// that are going to be applied (previously parsed).
func (t Things) Search(s *SearchParams) Things {
	results := make(Things, 0, CAP)

	// Things loop
Things:
	for _, thing := range t {
		// Filters loop
		for _, f := range s.Filters {
			if !f(s, thing) {
				// If one filter does not match,
				// we don't keep checking filters
				// and proceed with the next thing.
				continue Things
			}
		}
		// All filters matched; we can append this thing.
		results = append(results, thing)
	}

	return results
}

One thing to notice: when I parse my SearchParams (which come from the
request), with this approach I order the filters by performance cost, so the
faster filters get executed first and the slower filters ONLY get a chance
to run if the cheaper ones match. This is far superior to a plain
check-them-all loop; text search is pretty costly, but a bool check or an
integer match is pretty fast.

Discussion Overview
group: golang-nuts
posted: Sep 30, '14 at 3:11a
active: Oct 4, '14 at 5:16a