Hi,

We're close to deploying Redis in production as our primary datastore.
My goal is to avoid a SPOF and have automatic failover when the
master goes down.
I'm mostly picking up from where this thread left off:
https://groups.google.com/forum/#!searchin/redis-db/redis$20ha/redis-db/GsgMXvsTkbc/discussion

From my understanding of the previous discussion, for READS this is a
solved problem. The easiest approach, in my opinion, is to put read
slaves behind a load balancer (LB) such as HAProxy, so that when one
read slave goes down the LB sends requests to the healthy instances in
the pool.

Now, when the master goes down, two things have to be done:

1) Promote a healthy slave to master
2) Update clients to point to new master

My approach is outlined below; it mostly follows the ideas Salvatore
described in the previous discussion.
I have written a Python script to be run in daemon mode on the
application servers. It takes a simple config file containing the
host:port of the master Redis server and a list of slaves, something like:
{
  "master": { "host": "127.0.0.1", "port": 6379 },
  "slaves": [
    { "host": "127.0.0.1", "port": 6380 },
    { "host": "127.0.0.1", "port": 6381 }
  ]
}

The order of slaves is important: they should be daisy-chained. The
first should be a SLAVEOF of the master, the second a slave of the first
slave, and so on. I'm doing this so that when the master goes down,
only the first slave needs to be sent SLAVEOF NO ONE and everything
else keeps functioning normally; it also avoids pointing all slaves at
the new master at once and triggering a sync from each of them. Is
there a flaw in this?
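
Here is a minimal sketch of how such a daemon could load the config
above and wire up the chain, assuming redis-py and a config file named
ha.json (the filename and library choice are my assumptions, not the
actual script):

import json
import redis

# Load the config shown above (hypothetical filename).
with open("ha.json") as f:
    conf = json.load(f)

# Daisy-chain: the first slave replicates the master, the second
# replicates the first slave, and so on down the list.
upstreams = [conf["master"]] + conf["slaves"][:-1]
for upstream, node in zip(upstreams, conf["slaves"]):
    r = redis.StrictRedis(host=node["host"], port=node["port"])
    r.slaveof(upstream["host"], upstream["port"])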

For 1), the script pings the master continuously at a pre-set interval.
If the master is down, it pings the first slave in the list; if that is
down too, it does nothing and loops again. If the slave is up, it reads
LLEN 'master-errors' from the slave; if the count is greater than a
pre-set --maxfailures, it promotes this slave to be the new master,
otherwise it runs LPUSH 'master-errors' '<uuid>:<unix time>' on all
slaves. After this step, a new master has been chosen.
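
A rough sketch of that detection loop, again assuming redis-py; the key
name 'master-errors' matches the description above, while the interval
and --maxfailures values are placeholders:

import time
import uuid
import redis

INTERVAL = 2        # seconds between checks (placeholder)
MAX_FAILURES = 3    # --maxfailures (placeholder)

def alive(node):
    try:
        return redis.StrictRedis(host=node["host"], port=node["port"]).ping()
    except redis.ConnectionError:
        return False

def monitor(master, slaves):
    while True:
        time.sleep(INTERVAL)
        if alive(master):
            continue
        first = slaves[0]
        if not alive(first):
            continue  # first slave down too: do nothing, loop again
        r = redis.StrictRedis(host=first["host"], port=first["port"])
        if r.llen("master-errors") > MAX_FAILURES:
            r.slaveof()  # no arguments = SLAVEOF NO ONE, i.e. promote it
            return first
        # Not enough recorded failures yet: record one on every reachable slave.
        entry = "%s:%d" % (uuid.uuid4(), time.time())
        for s in slaves:
            try:
                redis.StrictRedis(host=s["host"], port=s["port"]).lpush(
                    "master-errors", entry)
            except redis.ConnectionError:
                pass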

Now for 2), updating the clients to point to the new master.
I have a few ideas for this; please comment.

1) All clients read the master and slave addresses from environment
variables, and the script simply edits those. Clients should retry a
connection if they get a ConnectionError, on the assumption that the
daemon will have updated the environment variable to the correct
address. The advantage is that it doesn't require messing with the
client library. (A minimal retry sketch follows after this list.)

2) This one is mostly a cheat. All clients point to a load balancer:
one ip:port for writes and one for reads. The load balancer sends read
requests to a pool of read slaves, and writes to the master. There is
NO pool for masters; a single master instance handles ALL writes. When
a failover takes place, the daemon removes the newly elected master
from the slaves pool and puts it behind the load balancer ip:port for
writes. The advantage is that nothing needs to change on the client
side: clients keep talking to the load balancer as before. The
disadvantage is that, ideally, only the load balancer should be talking
to the Redis instances behind it, not the application servers. I don't
know how to achieve that with HAProxy and the like, but if you're on
EC2 it becomes as simple as using the boto library to reconfigure the
pools (see the EC2 sketch after this list).

3) All app servers run doozerd (https://github.com/ha/doozerd), and
clients read the master/slave addresses from doozer. Likewise, the
script edits these in case of failover. The advantage is that the
daemon script doesn't need to run on each app server (though it would
be a good idea, just to have the client's view of the world), only
doozerd does. When the script edits anything in doozer, the change is
replicated across all app servers running doozerd. Disadvantage: I
haven't studied Paxos yet :)
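
For idea 1), a minimal client-side retry sketch, assuming redis-py and
an environment variable named REDIS_MASTER (both are placeholder
choices, not part of the script described above):

import os
import time
import redis

def master_conn(retries=5, delay=1.0):
    # Re-read the address on every attempt, since the daemon may have
    # rewritten the environment variable after a failover.
    for _ in range(retries):
        host, port = os.environ.get("REDIS_MASTER", "127.0.0.1:6379").split(":")
        try:
            r = redis.StrictRedis(host=host, port=int(port))
            r.ping()
            return r
        except redis.ConnectionError:
            time.sleep(delay)
    raise redis.ConnectionError("no reachable master")

For idea 2) on EC2, a sketch of the re-pointing step using boto 2's ELB
calls; the load balancer names and instance id are made up for
illustration:

import boto.ec2.elb

elb = boto.ec2.elb.connect_to_region("us-east-1")
promoted = ["i-0abc1234"]                            # hypothetical instance id
elb.deregister_instances("redis-read-lb", promoted)  # pull it out of the read pool
elb.register_instances("redis-write-lb", promoted)   # put it behind the write ip:port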

As of now, I've implemented approach number 2 (load balancer), mainly
because it solves HA and failover for reads out of the box. It is very
crude right now and more of a PoC while I research further.

Note: none of the above takes sharding into the picture. That is a
beast I've left for another day!

Please offer your suggestions and comments.

Thanks
Gurteshwar



  • Guru singh at Apr 24, 2012 at 1:30 pm
    Another thing I forgot to add: there's no mechanism to deal with a
    master coming back up. For now, it's manual intervention, but one
    could fire an async process upon failure to check whether the old
    master comes back up and make it a SLAVEOF of the new master.
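
    A quick sketch of that rejoin step with redis-py (addresses are
    placeholders): once the old master answers PING again, demote it
    under the newly promoted master.

    import redis

    # Hypothetical addresses: 10.0.0.1 is the recovered old master,
    # 10.0.0.2 is the slave that was promoted during the failover.
    old_master = redis.StrictRedis(host="10.0.0.1", port=6379)
    old_master.slaveof("10.0.0.2", 6379)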

    gurteshwar
  • Ryan LeCompte at Apr 24, 2012 at 2:41 pm
    Hey Gurteshwar,

    You may be interested in my redis_failover library, as it attempts to solve
    some of the same problems you're working
    on: https://github.com/ryanlecompte/redis_failover

    It has more of a Ruby focus, but you may find some inspiration there.

    Ryan
  • Guru singh at Apr 24, 2012 at 6:14 pm
    Hi Ryan,

    Very interesting. The problem your library is solving is exactly what
    I'm looking to tackle. I don't know Ruby, so I'll be implementing this
    in Python. I just skimmed through the README and will take a closer
    look tomorrow. Are you using this in a production scenario?

    Thanks
    gurteshwar
  • Ryan LeCompte at Apr 24, 2012 at 8:51 pm
    Hi Gurteshwar,

    I am actively testing this in local deployments but we have not used it in
    production yet. I'm hoping that will change soon!

    Thanks,
    Ryan
