Hi all. I'm trying to find an efficient (GC, memory footprint and speed) way
to create a storage package that can accept different struct types, create
a bucket for each type and then recycle the memory as objects are deleted
and added (i.e. reuse slots). It's designed to use offsets and also
optionally allows linking of objects (note the bucket is not one big list,
but rather can contain many smaller lists).

The motivation was unacceptable GC pauses with the large memory footprint of
map -> pointers -> objects.

So my questions are:
1. Is the below on the right track?
2. I can't work out how to write an object at a given slice offset. The code
below does not work (see the Add() function). I get that I need to use
reflection on the interface somehow, but haven't managed to find a way.
3. From reading, many people say the reflection overhead is expensive, so
is this not worth doing?

Many thanks.
Hamish


package main

import (
	"fmt"
	"reflect"
)

type StorageEngine struct {
	Buckets     map[uint16]Bucket // Max 65,535 buckets
	NoOfBuckets uint16
}

// Bucket is the basic structure of a data bucket. Next and Prev are optional.
type Bucket struct {
	ID           uint16      // Unique ID for each bucket
	Datatype     string      // Description only
	Length       uint32      // Max number of objects: 4,294,967,295
	Offset       uint32      // Current write position. Once Offset == Length, only DeletedSlots can be used or a new bucket must be created
	Data         interface{} // Any struct can be used as the data type; once created it cannot change
	Next         []uint32    // Optional offset for the next record in a linked list
	Prev         []uint32    // Optional offset for the previous record in a linked list
	DeletedSlots chan uint32 // Channel that recycles slots when they are deleted
}

// Init initialises a new storage engine.
func (s *StorageEngine) Init() {
	s.Buckets = make(map[uint16]Bucket, 100)
	s.NoOfBuckets = 0
}

// NewBucket creates a new bucket holding elements of the same type as datastruct.
func (s *StorageEngine) NewBucket(datatype string, length uint32, next bool, prev bool, datastruct interface{}) uint16 {
	b := new(Bucket)
	b.ID = s.ReqNewBucketID()
	b.Datatype = datatype
	b.Length = length
	b.DeletedSlots = make(chan uint32, length)
	myType := reflect.TypeOf(datastruct)
	b.Data = reflect.MakeSlice(reflect.SliceOf(myType), int(length), int(length)).Interface()
	if next {
		b.Next = make([]uint32, length)
	}
	if prev {
		b.Prev = make([]uint32, length)
	}
	s.Buckets[b.ID] = *b // register the bucket so Add can look it up
	return b.ID
}

// ReqNewBucketID allocates and returns a new bucket ID.
func (s *StorageEngine) ReqNewBucketID() uint16 {
	count := s.NoOfBuckets
	s.NoOfBuckets++
	return count
}

// Add adds a new data object to a bucket.
func (s *StorageEngine) Add(id uint16, object interface{}, next uint32, prev uint32) bool {
	bucket := s.Buckets[id] // NB: this is a copy, so bucket.Offset++ below won't persist in the map
	// If there is a deleted slot, use it, otherwise write at the offset.
	var slot uint32
	select {
	case slot = <-bucket.DeletedSlots:
	default:
		slot = bucket.Offset
	}
	// This is the part that does not compile (see question 2):
	data := reflect.TypeOf(bucket.Data)
	data[slot] = object
	bucket.Offset++
	return true
}

type Cat struct {
	ID   int
	Name string
}

type Dog struct {
	ID    int
	Name  string
	Breed string
}

func main() {
	store := new(StorageEngine)
	store.Init()
	x := new(Cat)
	id := store.NewBucket("term", 100000, true, true, x)
	fmt.Println("New bucket with ID =", id)
	x.ID = 1
	x.Name = "whatever"
	store.Add(id, x, 0, 0)
}
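
For reference, a sketch of what the write in Add() would need to look like,
using reflect.ValueOf rather than reflect.TypeOf (a reflect.Value of kind
Slice can be indexed and set; a reflect.Type only describes the type):

package main

import (
	"fmt"
	"reflect"
)

type Cat struct {
	ID   int
	Name string
}

func main() {
	// A []Cat built the same way NewBucket builds Bucket.Data.
	data := reflect.MakeSlice(reflect.SliceOf(reflect.TypeOf(Cat{})), 10, 10).Interface()

	// Write a Cat into slot 3. Elements reached via Index are settable
	// because they live in the slice's shared backing array.
	v := reflect.ValueOf(data)
	v.Index(3).Set(reflect.ValueOf(Cat{ID: 1, Name: "whatever"}))

	fmt.Println(data.([]Cat)[3]) // prints {1 whatever}
}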


  • Rob Pike at Oct 17, 2013 at 6:22 pm
    I realize the desire to do this sort of thing but I'd like to discourage
    you. First, you're recreating a lot of the work the existing allocator
    already does. Second, you'll need to write this code correctly, and
    correctly in a multicore world, which is not exactly hard but non-trivial
    to do well. Third, you're fighting the language instead of using it.

    Finally, and most important, this idea is for C, not Go. In C code you
    spend all your design time thinking about memory management, creating
    custom free lists and so on. But you're also exposed to use-after-free
    bugs, stale pointers, and other issues that are completely eliminated if
    you let Go manage the memory for you. That is what Go is for: freeing you
    from low-level considerations. If you write a custom allocator like this,
    you'll be back to debugging C-like memory corruption when your program
    crashes and you'll have lost the ability to trust the program to be
    memory-safe. That can't be what you want.

    Sometimes it's necessary to do extra work to gain efficiency, I admit that.
    And Go lets you. But please don't *start* the design process by writing the
    memory allocator. That's putting the optimization cart before the
    correctness horse.

    -rob

  • Hamish Ogilvy at Oct 20, 2013 at 5:12 am
    Thanks Rob. Fair enough. Is there an idiomatic way to manage hundreds of
    millions of struct elements with a high recycling rate (and minimal GC
    pause / memory fragmentation issues)?

    We previously used a series of map[uint32]*MyStruct and let Go manage the
    adding and deleting of elements via channels, which is fine until the
    number gets large. I've also read some comments recently on how large maps
    to pointers cause CPU cache misses for the GC and hence large GC pauses
    (looking into this now). The uint32 is a unique ID, so we don't re-use the
    keys, but we should obviously be reusing the element memory allocations.
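
    For concreteness, a minimal sketch of that previous layout (MyStruct
    stands in for the real structs; the channel plumbing is omitted):

    package sketch

    // A map of never-reused uint32 IDs to pointers: every element is a
    // separate heap allocation that the GC must trace.
    type MyStruct struct{ Payload [8]int64 }

    var (
        store  = make(map[uint32]*MyStruct)
        nextID uint32
    )

    func add(m *MyStruct) uint32 {
        nextID++
        store[nextID] = m
        return nextID
    }

    func del(id uint32) {
        delete(store, id) // the allocation goes back to the GC, not to us
    }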

    I'm still getting my head around how Go handles the allocation and
    reallocation under the hood in this situation; any input is appreciated.

    H

    On Friday, 18 October 2013 05:21:46 UTC+11, Rob Pike wrote:

    [snip]
  • Rémy Oudompheng at Oct 20, 2013 at 9:12 am

    2013/10/20 Hamish Ogilvy <[email protected]>:

    We previously used a series of map[uint32]*MyStruct and let Go manage the
    adding and deleting of elements via channels, which is fine until the
    number gets large. [snip]

    Or don't use maps to pointers; use ordinary structs.
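
    For illustration, a rough sketch of the contrast (MyStruct is a
    hypothetical, pointer-free element type): a map of pointers costs one
    allocation and one pointer for the GC to trace per entry, while a slice
    of plain structs is a single allocation whose contents the GC can skip.

    package sketch

    type MyStruct struct {
        ID    uint32
        Value [4]int64
    }

    // One heap allocation per element; the GC traces every entry.
    var byPointer = make(map[uint32]*MyStruct)

    // One contiguous backing array; no interior pointers to follow,
    // as long as MyStruct itself contains no pointers or strings.
    var byValue = make([]MyStruct, 0, 1<<20)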

    Rémy.

  • Uli Kunitz at Oct 20, 2013 at 9:43 am
    May I suggest using pointers instead of the uint32 IDs? If you have to
    support different types, use an interface, or interface{} if there is no
    common functionality. There is no problem with sending pointers and
    interface values over channels; a sketch follows below.

    It may even be useful to step back and rethink the problem you are
    trying to solve. What is the actual data flow? Is a central storage
    component needed? Be radical: if the central storage component is the
    performance bottleneck, get rid of it.
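
    A minimal sketch of the recycling idea (MyStruct and the pool size are
    assumptions), using a buffered channel as a free list:

    package sketch

    type MyStruct struct {
        ID   uint32
        Name string
    }

    var pool = make(chan *MyStruct, 1024)

    func get() *MyStruct {
        select {
        case m := <-pool:
            *m = MyStruct{} // reset before reuse
            return m
        default:
            return new(MyStruct) // pool empty: allocate
        }
    }

    func put(m *MyStruct) {
        select {
        case pool <- m:
        default: // pool full: let the GC reclaim it
        }
    }
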
    On Sunday, October 20, 2013 7:12:37 AM UTC+2, Hamish Ogilvy wrote:

    [snip]
  • Hamish Ogilvy at Dec 6, 2013 at 1:08 am
    For anyone in a similar situation, cgo has worked well: so far our GC
    pause is down ~12x. We keep the data in C and do all the fancy code in Go,
    which so far has given the best of both worlds. I hear some big GC
    improvements are coming in 1.3, so I will definitely revisit this then.
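
    The general shape of the approach, as a simplified sketch (not the
    production code; Record stands in for the real element type, which must
    not contain Go pointers):

    package main

    /*
    #include <stdlib.h>
    */
    import "C"

    import "unsafe"

    type Record struct {
        ID    int64
        Score float64
    }

    func main() {
        const n = 100000000
        // C memory is invisible to the Go GC, so it adds nothing to mark
        // time -- but it must be freed manually.
        p := C.calloc(C.size_t(n), C.size_t(unsafe.Sizeof(Record{})))
        defer C.free(p)

        // View the C block as a Go slice of fixed length (unsafe).
        records := (*[n]Record)(p)[:]
        records[42] = Record{ID: 42, Score: 1.5}
    }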

    If anyone is interested, I crudely ran some simple experiments
    allocating large arrays and slices in different ways to see what would
    happen with the GC pause. Experiments and gogctrace attached. All used Go
    1.2 r5 on Linux to allocate 100,000,000 structs.

    a) keep appending 100,000,000 struct pointers to a nil slice
    b) allocate a slice of struct pointers of length 100,000,000 with make,
    append to it until full
    c) same as b), but using the actual structs instead of pointers (cc @Remy)
    d) same as c), but using a fixed-size array instead of a slice

    Results:
    a) obviously does a lot of reallocation extending the slice. Not good.
    b) fewer GCs triggered, but a longer pause than a). Not good.
    c) similar to b), but 40% less pause. Better.
    d) easily the lowest GC pause, 3x better than c), but obviously very
    inflexible.

    I'm surprised at the large difference in mark time between an array and a
    slice. I also would have thought b) would be no worse than a) in mark
    time... (A rough reconstruction of the variants is sketched below.)
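
    A rough reconstruction of the four variants (the original attachments are
    not reproduced here, and the slices are pre-allocated by capacity, per
    the correction further down; S stands in for the real struct):

    package sketch

    type S struct{ A, B, C int64 }

    const num = 100000000

    func variantA() []*S { // a) append pointers to a nil slice
        var d []*S
        for i := 0; i < num; i++ {
            d = append(d, new(S))
        }
        return d
    }

    func variantB() []*S { // b) pre-allocated slice of pointers
        d := make([]*S, 0, num)
        for i := 0; i < num; i++ {
            d = append(d, new(S))
        }
        return d
    }

    func variantC() []S { // c) pre-allocated slice of plain structs
        d := make([]S, 0, num)
        for i := 0; i < num; i++ {
            d = append(d, S{})
        }
        return d
    }

    var variantD [num]S // d) fixed-size array of plain structs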

    On Thursday, 17 October 2013 11:04:28 UTC+11, Hamish Ogilvy wrote:

    [snip]
  • Egon at Dec 6, 2013 at 4:59 am
    Your c.go is wrong: it first allocates a slice of length N and then
    appends N more elements, ending up with 2*N elements. Correct would be:

    a := T{make([]A, 0, num)}
    for i := 0; i < num; i++ {
        var d A
        a.Data = append(a.Data, d)
    }

    Alternatively you can simply do T{make([]A, num)}... the values are always
    zeroed.

    + egon
    On Friday, December 6, 2013 3:08:28 AM UTC+2, Hamish Ogilvy wrote:

    [snip]
  • Hamish Ogilvy at Dec 6, 2013 at 5:21 am
    Oops, good pick up.

    On 6 December 2013 15:59, egon wrote:

    [snip]
