Thanks for the input everyone.

I whipped up a lock-free, pointer-free hash table library after reviewing
Caleb's. If Caleb's wasn't safe for primetime, this is DEFINITELY not safe.
Do not use except for play. Here's the
gist: https://gist.github.com/vessenes/ce7d1254ad92c6d2a996

The package assumes you want to map a record type to an int offset, which
would presumably be used to get your data out of an array. It could be
easily reworked to store any sort of fixed length data with similar
performance characteristics. It doesn't resize the table, but it can tell
you if you're out of space, and will return an error.
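To make that shape concrete, the interface amounts to something like the following (names here are illustrative, not the gist's actual identifiers):

// Sketch only: a fixed-size table mapping a fixed-length key to an int32
// offset into a separately held record slice; Set reports an error once
// the table is full rather than resizing.
type Table interface {
    Set(key [10]byte, offset int32) error
    Get(key [10]byte) (offset int32, ok bool)
}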

The good news is that this has the expected characteristics. A hash table
sized for 1 billion entries, holding 500 million elements, can be garbage
collected in about 150ms on my beefy server. (The first GC costs around
20 seconds; subsequent ones are fast.) So, thanks for the help! That's
roughly a 40GB data structure. Lookups are in the hundreds of nanoseconds.
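(If you want to sanity-check pause times on your own hardware, forcing a collection and timing it is enough; a minimal standard-library-only sketch:)

package main

import (
    "fmt"
    "runtime"
    "time"
)

func main() {
    // A big pointer-free allocation: the GC has to account for it but
    // never scans its contents.
    table := make([]uint64, 1<<27)
    _ = table

    start := time.Now()
    runtime.GC() // force a full collection
    fmt.Println("forced GC took", time.Since(start))

    var ms runtime.MemStats
    runtime.ReadMemStats(&ms)
    fmt.Println("last recorded pause:", time.Duration(ms.PauseNs[(ms.NumGC+255)%256]))
}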

The lock-free algorithm I used is a naive implementation of
http://preshing.com/20130605/the-worlds-simplest-lock-free-hash-table/.
It seems to work pretty well; I instrumented the gist with a bunch of
command-line options to play with. A multi-core friendly hash table on my
system is a LOT faster than single core; this particular one runs about 10x
the speed using all 40 cores compared to single threaded.
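For reference, the core of that scheme is just CAS on the key word of an open-addressed array; a stripped-down Go sketch of it (not the gist itself) looks like:

package lfhash

import "sync/atomic"

// A naive rendition of Preshing's scheme: open addressing with linear
// probing, atomic CAS only on the key word, key 0 reserved for "empty",
// and no resizing. 64-bit platforms only as written.

type entry struct {
    key   uint64
    value uint64
}

type Table struct {
    entries []entry // length must be a power of two
    mask    uint64
}

func New(logSize uint) *Table {
    n := uint64(1) << logSize
    return &Table{entries: make([]entry, n), mask: n - 1}
}

// Set stores value under a nonzero key; it reports false when the table is full.
func (t *Table) Set(key, value uint64) bool {
    for i, tries := hash(key)&t.mask, uint64(0); tries <= t.mask; i, tries = (i+1)&t.mask, tries+1 {
        e := &t.entries[i]
        k := atomic.LoadUint64(&e.key)
        if k != key {
            if k != 0 {
                continue // slot owned by another key; keep probing
            }
            // Claim the empty slot; if another writer wins the race with a
            // different key, move on to the next slot.
            if !atomic.CompareAndSwapUint64(&e.key, 0, key) &&
                atomic.LoadUint64(&e.key) != key {
                continue
            }
        }
        atomic.StoreUint64(&e.value, value)
        return true
    }
    return false // out of space
}

// Get returns the value stored under key, if any.
func (t *Table) Get(key uint64) (uint64, bool) {
    for i, tries := hash(key)&t.mask, uint64(0); tries <= t.mask; i, tries = (i+1)&t.mask, tries+1 {
        k := atomic.LoadUint64(&t.entries[i].key)
        if k == key {
            return atomic.LoadUint64(&t.entries[i].value), true
        }
        if k == 0 {
            return 0, false // hit an empty slot: key is absent
        }
    }
    return 0, false
}

// hash is the 64-bit finalizer from MurmurHash3.
func hash(x uint64) uint64 {
    x ^= x >> 33
    x *= 0xff51afd7ed558ccd
    x ^= x >> 33
    x *= 0xc4ceb9fe1a85ec4a
    x ^= x >> 33
    return x
}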

Anyway, thanks guys! This is a big help, and a fun learning experience.
On Friday, January 23, 2015 at 7:42:33 PM UTC-8, Jian Zhen wrote:

Not sure if this is the right solution but this package on github seems
interesting for your problem:

https://github.com/smartystreets/mafsa

Package mafsa implements Minimal Acyclic Finite State Automata (MA-FSA)
with Minimal Perfect Hashing (MPH). Basically, it's a set of strings that
lets you test for membership, do spelling correction (fuzzy matching) and
autocomplete, but with higher memory efficiency than a regular trie.
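(From memory, basic usage is roughly the following; check the repo's README for the exact API, since I may be misremembering method names:)

package main

import (
    "fmt"

    "github.com/smartystreets/mafsa"
)

func main() {
    bt := mafsa.New()
    // Keys must be inserted in sorted order to build the minimal automaton.
    for _, s := range []string{"alpha", "beta", "gamma"} {
        bt.Insert(s)
    }
    bt.Finish()
    fmt.Println(bt.Contains("beta")) // true
}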

On Jan 23, 2015, at 3:46 PM, Peter Vessenes <pe...@coinlab.com> wrote:

Hi all,

I'm working on a program that loads up 100+GB of data into three
datastructures: two maps and an array. These are then only used for data
analysis and never changed again.

Once the data is loaded, it's used to back analytics for a web service.

The problem I'm currently facing is 13-15 second GC times, quite often.
Everything stops while we wait for the GC to run, and that's obviously too
long for almost anything, let alone a web API call.

Any help or ideas? Intuitively, it seems to me there should be some way to
mark this data "constant" or "ignore" or "locked" and have the GC just skip
it, but I can't seem to find anything in Go that would facilitate this. I
had great success running without the garbage collector until I inevitably
ran out of addressable memory; my server has 256GB, but go1.4 only allows
128GB of allocation, so that's out. (Many) more details below.
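(For reference, "running without the garbage collector" here just means turning it off outright, either with GOGC=off or with debug.SetGCPercent(-1) at startup; a minimal sketch:)

package main

import (
    "fmt"
    "runtime/debug"
)

func main() {
    // Turn the collector off entirely (equivalent to running with GOGC=off).
    // The heap then only grows, which is what eventually runs into the
    // ~128GB arena limit in go1.4.
    old := debug.SetGCPercent(-1)
    fmt.Println("previous GOGC percent:", old)

    // ... load the big data structures and serve traffic here ...
}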

Thanks,

Peter

Details:
At peak, loading the objects creates about 1 billion objects according to
gctrace data. I spent the last few days working on trimming this number
down, but I've only gotten it down to 300 million or so, and that's through
some fairly aggressive work. It hasn't changed the GC runtime very much,
which makes me think the allocations/deallocations aren't the big deal;
it's the amount of memory being used. In any event, post-loading, the
700 million objects used for loading and parsing can be collected.
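(The object counts come from running with GODEBUG=gctrace=1, which prints a line per collection on stderr; the same live-object figure is also available programmatically, e.g.:)

package main

import (
    "fmt"
    "runtime"
)

func main() {
    var ms runtime.MemStats
    runtime.ReadMemStats(&ms)
    // HeapObjects is the live-object count; HeapAlloc is bytes of live heap.
    fmt.Printf("live heap objects: %d, heap in use: %d MB\n",
        ms.HeapObjects, ms.HeapAlloc>>20)
}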

I started with the following datatypes:

type Data struct {
    Name []string
    Subs []Subdata
    Id   string
}

type Subdata struct {
    id    []string
    other int
}

data := make([]Data, 0)               // about 50 million of these
lookup_one := make(map[string]string) // relating two data items
lookup_two := make(map[string]int)    // relating data name to position in array

Post-testing, I have concluded that I need something more like:

data := make([]Datastruct2, 50000000)
l1 := make(map[[10]byte][10]byte)
l2 := make(map[[10]byte][10]byte)

with the datastructures rewritten as:

type Datastruct2 struct {
    Name [10]byte
    Subs [100]Sub2
    id   [10]byte
}

type Sub2 struct {
    ID [10]byte
    N  int
}

Of course, this comes at a cost in dev time and annoyance: there's a lot of
byte casting now, and more memory use in the typical case, which has 1 or 2
subs' worth of data rather than 100. It's a large codebase, so I'm still in
the middle of reworking it all. But as I'm doing it, I have begun to worry:
I haven't really seen anything that indicates I'm going to get down to
50-100ms GC times this way, and that's probably the maximum acceptable for a
realtime web API.
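(The byte casting is mostly small helpers along these lines; a sketch assuming 10-byte, zero-padded keys:)

package main

import "fmt"

// key10 copies a string into a fixed [10]byte key, truncating or
// zero-padding as needed; str10 reverses it, trimming the padding.
func key10(s string) (k [10]byte) {
    copy(k[:], s)
    return k
}

func str10(k [10]byte) string {
    n := 0
    for n < len(k) && k[n] != 0 {
        n++
    }
    return string(k[:n])
}

func main() {
    k := key10("hello")
    fmt.Println(str10(k)) // prints "hello"
}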

How can I achieve this with my data and use case in Go? Any tips, help, or
thoughts are appreciated.


