Hi.

I'm using Redis strictly as a synchronous memory cache.
Recently, we started experiencing performance problems with it. In all of
our testing and prior use, Redis was never even close to being a
bottleneck, so I'm a bit puzzled.

The main application is J2EE, using Jedis as the client library.
The "slow" symptom is a slowdown in response processing. A JVM thread dump
shows 44 threads simultaneously inside Redis request/response code. Below
are the 2 code paths that account for the overwhelming majority of these
threads. CPU consumption by the main redis process doesn't go above 25% (of
a single CPU). The Redis operations executed by these code paths, and the
redis info output, are also below.

Any clues on where I should start digging further? I don't think any of my
operations are heavy enough to warrant long execution times. As I said, we
never previously had Redis be a bottleneck; this may be due to the fact
that the server has been running for a while. Should I upgrade to 2.6.10
(my approach so far has been: if it works, don't touch it)? Does anything
jump out from a cursory look at the operations that we execute?

Redis operations:

store:
PIPELINE
if (new object) {
    SELECT 1
    INCR
    SYNC (save SYNC#1)
    PIPELINE
    ZADD
}
SELECT 0
SET
if (somelogic) {
    DEL
}
SYNC (save SYNC#2)
PIPELINE
SELECT 1
SET last_op_stamp, now
SYNC

load:
MULTI
SELECT 0
GET
if (somelogic) {
    SETNX
} else {
    EXISTS
}
EXEC
if (somelogic) {
    DEL
} else if (somelogic) {
    EXPIRE
}
PIPELINE
SELECT 1
SET last_op_stamp, now
SYNC
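
For reference, both paths end with the same pipelined stamp update. In
Jedis terms that tail looks roughly like this (a minimal sketch only,
assuming a 2013-era Jedis API; pool setup is omitted and the class and
method names are hypothetical):

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.Pipeline;

// Minimal sketch of the shared tail: SELECT 1, SET last_op_stamp, SYNC.
// "pool" is assumed to be an already-configured JedisPool.
public class StampTail {
    static void touchLastOpStamp(JedisPool pool) {
        Jedis jedis = pool.getResource();
        try {
            Pipeline p = jedis.pipelined();
            p.select(1);  // the stamp lives in db1
            p.set("last_op_stamp", String.valueOf(System.currentTimeMillis()));
            p.sync();     // one round trip; blocks until all replies arrive
        } finally {
            pool.returnResource(jedis);  // 2013-era Jedis pool return
        }
    }
}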

Out of these 44 stuck threads, the actual operations that the thread dump
caught them in are:
14 in Pipeline.sync (for last_op_stamp update, executed for all Redis
operations)
9 in Pipeline.sync (save SYNC#2)
8 in Jedis.expire (load code path)
6 in Transaction.exec (load code path)
6 in Pipeline.sync (save SYNC#1)
1 in Pipeline.sync (some other code path)

"[ACTIVE] ExecuteThread: '51' for queue:
'weblogic.kernel.Default (self-tuning)'" RUNNABLE native
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.read(SocketInputStream.java:129)
java.net.SocketInputStream.read(SocketInputStream.java:90)

redis.clients.util.RedisInputStream.fill(RedisInputStream.java:109)

redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:45)
redis.clients.jedis.Protocol.process(Protocol.java:64)
redis.clients.jedis.Protocol.read(Protocol.java:131)
redis.clients.jedis.Connection.getAll(Connection.java:225)
redis.clients.jedis.Connection.getAll(Connection.java:217)
redis.clients.jedis.Pipeline.sync(Pipeline.java:68)

com.gsm.nosql.redis.GSMJedisWrapper.execInRedis(GSMJedisWrapper.java:1119)

com.gsm.nosql.redis.GSMJedisWrapper.storeImpression(GSMJedisWrapper.java:403)

"[ACTIVE] ExecuteThread: '46' for queue:
'weblogic.kernel.Default (self-tuning)'" RUNNABLE native
java.net.SocketInputStream.socketRead0(Native Method)
java.net.SocketInputStream.read(SocketInputStream.java:129)
java.net.SocketInputStream.read(SocketInputStream.java:90)

redis.clients.util.RedisInputStream.fill(RedisInputStream.java:109)

redis.clients.util.RedisInputStream.readByte(RedisInputStream.java:45)
redis.clients.jedis.Protocol.process(Protocol.java:64)
redis.clients.jedis.Protocol.read(Protocol.java:131)
redis.clients.jedis.Connection.getAll(Connection.java:225)

redis.clients.jedis.BinaryTransaction.exec(BinaryTransaction.java:26)

com.gsm.nosql.redis.GSMJedisWrapper$10.exec(GSMJedisWrapper.java:725)

com.gsm.nosql.redis.GSMJedisWrapper$10.exec(GSMJedisWrapper.java:700)

com.gsm.nosql.redis.GSMJedisWrapper.execInRedis(GSMJedisWrapper.java:1112)

com.gsm.nosql.redis.GSMJedisWrapper.loadImpression(GSMJedisWrapper.java:799)

info:
redis 127.0.0.1:6379[1]> info
redis_version:2.4.16
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:1709
uptime_in_seconds:5184711
uptime_in_days:60
lru_clock:1843694
used_cpu_sys:13843.72
used_cpu_user:10718.59
used_cpu_sys_children:663.00
used_cpu_user_children:4780.55
connected_clients:41
connected_slaves:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:584569648
used_memory_human:557.49M
used_memory_rss:739061760
used_memory_peak:772807944
used_memory_peak_human:737.01M
mem_fragmentation_ratio:1.26
mem_allocator:jemalloc-3.0.0
loading:0
aof_enabled:0
changes_since_last_save:253570
bgsave_in_progress:0
last_save_time:1360614202
bgrewriteaof_in_progress:0
total_connections_received:114926
total_commands_processed:1193792973
expired_keys:8402
evicted_keys:0
keyspace_hits:84136809
keyspace_misses:8214268
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:11521
vm_enabled:0
role:master
slave0:209.18.95.230,6379,online
db0:keys=1254589,expires=0
db1:keys=3873,expires=3869

Thank you so much.



  • Yiftach Shoolman at Feb 11, 2013 at 9:23 pm
Can you send the output of your SlowLog <http://redis.io/commands/slowlog>?
  • Pawel at Feb 11, 2013 at 9:42 pm

    On Monday, February 11, 2013 1:23:44 PM UTC-8, Yiftach wrote:

    Can you send the output of your SlowLog <http://redis.io/commands/slowlog>?
    It's filled with those:

    redis 127.0.0.1:6379> slowlog get
     1) 1) (integer) 1274
        2) (integer) 1360618170
        3) (integer) 12458
        4) 1) "ZRANGEBYSCORE"
           2) "D8-D3-85-E1-40-3C"
           3) "0.0"
           4) "1.360614570895E12"
           5) "limit"
           6) "0"
           7) "10000"
           8) "withscores"

    and those:

    18) 1) (integer) 1263
        2) (integer) 1360617844
        3) (integer) 10113
        4)  1) "DEL"
            2) "37917061"
            3) "37917063"
            4) "37917064"
            5) "37917065"
            6) "37917068"
            7) "37917069"
            8) "37917070"
            9) "37917072"
           10) "37917073"
           11) "37917075"
           12) "37917077"
           13) "37917082"
           14) "37917084"
           15) "37917090"
           16) "37917091"
           17) "37917092"
           18) "37917095"
           19) "37917097"
           20) "37917098"
           21) "37917099"
           22) "37917103"
           23) "37917104"
           24) "37917105"
           25) "37917106"
           26) "37917107"
           27) "37917108"
           28) "37917111"
           29) "37917112"
           30) "37917113"
           31) "37917114"
           32) "... (4768 more arguments)"


    Which is fine. These operations are performed by a separate thread, on a
    timed interval, to aggregate the data. Unless they somehow interfere with
    the request-based operations? But I don't even see those code paths in the
    thread dump...
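
    If those big deletes ever did start to matter, batching would be an easy
    change; a hypothetical sketch (the real key list comes from the
    aggregation job, and the helper name is made up):

    import java.util.List;
    import redis.clients.jedis.Jedis;

    // Hypothetical sketch: delete the aggregated keys in batches of 500
    // instead of one ~4800-argument DEL, so no single command holds the
    // single-threaded server for 10+ ms.
    public class BatchedDelete {
        static void deleteInBatches(Jedis jedis, List<String> keys) {
            final int batchSize = 500;
            for (int i = 0; i < keys.size(); i += batchSize) {
                List<String> chunk =
                        keys.subList(i, Math.min(i + batchSize, keys.size()));
                jedis.del(chunk.toArray(new String[chunk.size()]));
            }
        }
    }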


  • Yiftach Shoolman at Feb 11, 2013 at 9:47 pm
    No - Redis is single-threaded, and each of these commands takes more than
    10 sec. And if you do "slowlog get N" you will probably see more of the
    slow commands.
  • CharSyam at Feb 11, 2013 at 9:59 pm
    Did you use RDB? At save time, it can slow down because of swapping.
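
    A quick way to check whether background saves line up with the stalls is
    to watch the save-related INFO fields; a hedged sketch (the helper name
    is hypothetical, only standard INFO fields are used):

    import redis.clients.jedis.Jedis;

    // Hedged sketch: pull the BGSAVE/fork fields out of INFO around a slow
    // period, to see whether background saves coincide with the stalls.
    public class ForkCheck {
        static void printSaveStats(Jedis jedis) {
            for (String line : jedis.info().split("\r\n")) {
                if (line.startsWith("bgsave_in_progress")
                        || line.startsWith("latest_fork_usec")
                        || line.startsWith("changes_since_last_save")) {
                    System.out.println(line);
                }
            }
        }
    }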


  • Pawel at Feb 11, 2013 at 10:15 pm

    On Monday, February 11, 2013 1:59:52 PM UTC-8, CharSyam wrote:

    Did you use RDB? At save time, it can slow down because of swapping.
    I'm not sure how having an RDB by itself contributes to swapping. But
    there is no memory shortage whatsoever; I have never seen any swapping
    activity on the Redis system.

    Also, if I were loading Redis with too much work, and some of that work
    was slow (like my ZRANGEBYSCORE and DEL ... commands), leaving no
    breathing room and all of the pending commands queued up, I would expect
    the main Redis process to start consuming 100% of the CPU. If that's not
    happening, there must be some other contention point...


  • Yiftach Shoolman at Feb 12, 2013 at 6:09 am
    You are right, slowlog is in microseconds, and 10-12 ms shouldn't be an
    issue.

    I would do the following:
    1. Check the entire slowlog to make sure nothing is hidden there
    2. Go over the instructions listed on the Redis latency problems
    troubleshooting <http://redis.io/topics/latency> page; if nothing is
    found, it's probably a client-related issue (a crude client-side check is
    sketched after this list)
    3. BTW - is it a dedicated Redis machine, or is it shared with other
    elements of your app?
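
    For item 2, a crude client-side sampler (a sketch only, not the redis.io
    latency tooling; class name hypothetical): if PING round trips come back
    slow while SLOWLOG stays clean, the time is being lost in the network or
    the client rather than in command execution.

    import redis.clients.jedis.Jedis;

    // Crude round-trip sampler: times 1000 PINGs and reports the worst one,
    // separating network/client latency from server-side command time.
    public class PingSampler {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("127.0.0.1", 6379);
            long worstUs = 0;
            for (int i = 0; i < 1000; i++) {
                long t0 = System.nanoTime();
                jedis.ping();
                worstUs = Math.max(worstUs, (System.nanoTime() - t0) / 1000);
            }
            System.out.println("worst of 1000 PINGs: " + worstUs + " us");
            jedis.disconnect();
        }
    }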
  • Pawel Veselov at Feb 12, 2013 at 7:04 am

    On Mon, Feb 11, 2013 at 10:09 PM, Yiftach Shoolman wrote:

    You are right, slowlog is in microseconds, and 10-12 ms shouldn't be an issue.

    I would do the following:
    1. Check the entire slowlog to make sure nothing is hidden there
    Did do that :)
    Found one instance of ZREM..BYRANK, but still within the same 10-20ms.

    2. Go over the instructions listed on the Redis latency problems
    troubleshooting <http://redis.io/topics/latency> page; if nothing is
    found, it's probably a client-related issue
    I wonder if I should reduce the pool size. Right now the client pool has
    20 connections per app server instance. Is it generally better to
    serialize Redis operations on the app server side, or to use a Redis queue?
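
    For concreteness, the kind of change I have in mind (hypothetical values;
    the method names follow the newer commons-pool-based Jedis config, older
    versions call these setMaxActive/setMaxWait):

    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;

    // Hypothetical sizing sketch: keep the pool small so excess requests
    // queue client-side instead of piling 40+ concurrent sockets onto the
    // single-threaded server.
    public class PoolSetup {
        static JedisPool build() {
            JedisPoolConfig cfg = new JedisPoolConfig();
            cfg.setMaxTotal(8);         // well below the worker thread count
            cfg.setMaxWaitMillis(200);  // bound how long a worker blocks on the pool
            return new JedisPool(cfg, "127.0.0.1", 6379);
        }
    }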

    And should I update to 2.6 before attempting anything else?

    3. BTW - is it a dedicated Redis machine, or is it shared with other
    elements of your app?
    Dedicated.

  • Yiftach Shoolman at Feb 12, 2013 at 7:16 am
Is it a connection pool, or is a new connection set up and torn down with
every request? Make sure it is a connection pool.

In general, from my experience, there should be no significant performance
difference between working with 500 connections or with 20 connections at
the Redis server, but maybe your app server has an issue with too many
connections?

Last thing to check: what is your snapshot policy?

  • Pawel at Feb 12, 2013 at 6:50 pm

    On Monday, February 11, 2013 11:16:11 PM UTC-8, Yiftach wrote:

Is it a connection pool, or is a new connection set up and torn down with
every request? Make sure it is a connection pool.
I'm pretty much using what came with the Jedis driver, and I'm relatively
sure it's not reconnecting every time.
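
For reference, the pooled pattern with Jedis looks roughly like this. This is
only a sketch against the Jedis 2.x-era API; the host name, key, and pool
size are illustrative:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

public class PooledRedis {
    public static void main(String[] args) {
        JedisPoolConfig cfg = new JedisPoolConfig();
        // Matches the 20 connections per app server instance mentioned earlier.
        cfg.setMaxActive(20);
        JedisPool pool = new JedisPool(cfg, "redis-host", 6379);

        // getResource() borrows an already-established connection from the pool.
        Jedis jedis = pool.getResource();
        try {
            jedis.set("some-key", "some-value");
        } finally {
            // Hands the connection back to the pool; no socket teardown happens.
            pool.returnResource(jedis);
        }
    }
}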

In general, from my experience, there should be no significant performance
difference between working with 500 connections or with 20 connections at
the Redis server, but maybe your app server has an issue with too many
connections?
I looked at the latency doc, and the only thing that I can imagine applies to
our case is expiration latency. Either that, or there is a problem with the
actual command sets that we are using on the request threads. Considering
that my app server, at the time of the probe, had 46 active threads, 44 of
which were waiting on a Redis operation, and all of these 44 were caught in
java.net.SocketInputStream.socketRead0(), I find it reasonable to believe
that if it is a network problem, it applies only to established connections
(eliminating the question of re-establishing connections, or any kind of
socket/connection starvation). Both the app servers and Redis are deployed on
the same network in a data center, but they are separated by a firewall. Then
again, firewalls are typically not concerned with established connections.

    I also wonder if the problem is specific to MULTI or pipelining.

Last thing to check: what is your snapshot policy?
    save 900 1
    save 300 10
    save 60 10000

So it seems to be writing every minute, since there are definitely over 10k
changes per minute.
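
For the record, each save line is independent: "save <seconds> <changes>"
triggers a background snapshot (BGSAVE) once at least <changes> writes have
accumulated within <seconds> seconds. So the policy above reads as:

save 900 1      # snapshot if at least 1 change in 15 minutes
save 300 10     # snapshot if at least 10 changes in 5 minutes
save 60 10000   # snapshot if at least 10000 changes in 1 minute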


  • Yiftach Shoolman at Feb 12, 2013 at 7:44 pm
You should definitely change your persistence storage settings; writing your
entire dataset to disk every minute can create a lot of load on your system.
If you want to maintain a high level of data durability, use AOF with an
fsync every 1 sec, as it only writes the changes.
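
A sketch of the corresponding redis.conf change (these directives exist in
the 2.4/2.6 era; whether to also drop RDB snapshots entirely is a judgment
call):

appendonly yes          # enable the append-only file
appendfsync everysec    # fsync the AOF once per second
save ""                 # optionally disable RDB snapshotting altogether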


  • Pawel Veselov at Feb 12, 2013 at 8:48 pm

    On Tue, Feb 12, 2013 at 11:44 AM, Yiftach Shoolman wrote:

You should definitely change your persistence storage settings; writing
your entire dataset to disk every minute can create a lot of load on your
system.
What would you recommend then? Not for my case specifically, but in general:
what is considered a performance-safe RDB save interval for high-throughput
Redis use?

However, IMHO, I don't see how the save process would affect operation
performance. The write lasts for about 10 seconds, but it's a multi-CPU
system, so the writing child has its own CPU to mooch off of. Writing causes
no swapping, and otherwise only needs the disk, which is not used by the main
Redis process; I don't believe that sweeping the data for RDB persistence
impacts get/set operations, at least not noticeably. There is, of course, the
infamous forking time, but it's only forking once a minute.
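
The info output above roughly bears that out: latest_fork_usec:11521 means
the last BGSAVE fork stalled the server for about 11.5 ms, and one such stall
per minute is on the order of 0.02% of wall-clock time, which by itself would
not explain the symptoms.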

If you want to maintain a high level of data durability, use AOF with an
fsync every 1 sec, as it only writes the changes.
    I don't really depend on RDB, so smaller intervals will be fine by me.


  • Yiftach Shoolman at Feb 12, 2013 at 8:59 pm
Last thing: have you tried to run redis-cli --latency -h `host` -p `port`
during the load time? If everything looks good, it is probably your client.
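
For example (the min/max/avg line is the tool's running output; the numbers
here are illustrative):

$ redis-cli --latency -h 127.0.0.1 -p 6379
min: 0, max: 12, avg: 0.21 (1523 samples)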
  • Yiftach Shoolman at Feb 12, 2013 at 9:12 pm
    One more question: what is the reason for you to use two databases on the
    same Redis process?
  • Pawel Veselov at Feb 13, 2013 at 1:12 am

    On Tue, Feb 12, 2013 at 1:12 PM, Yiftach Shoolman wrote:

    One more question: what is the reason for you to use two databases on the
    same Redis process?
    Is there any kind of performance penalty to that?
    The reason is just the key space design; the value types are different,
    and I don't want them to have any chance of clashing...


  • Yiftach Shoolman at Feb 13, 2013 at 5:59 am
    If this is just for key-space separation, that is fine. Just be aware that
    these 2 DBs share the same single-threaded process, so if one DB is busy
    the other is blocked.
  • Josiah Carlson at Feb 13, 2013 at 6:31 am

    On Tue, Feb 12, 2013 at 12:48 PM, Pawel Veselov wrote:
    On Tue, Feb 12, 2013 at 11:44 AM, Yiftach Shoolman
    wrote:
    You should definitely change your persistence storage setting; writing
    your entire dataset to disk every minute can create a lot of load on
    your system.

    What would you recommend, then? Not for my case specifically, but in
    general: what is considered a performance-safe RDB save interval for
    high-throughput Redis use?
    That depends on how much data you are okay to lose in the case of a crash.
    However, IMHO, I don't see how the save process would affect operation
    performance. The write lasts for about 10 seconds, but it's a multi-CPU
    system, so writing has its own CPU to mooch off of. Writing causes no
    swapping, and otherwise only needs disk, which is not used by the main
    Redis process; I don't believe that sweeping the data for RDB persistence
    impacts get/set operations, at least not noticeably. There is, of course,
    the infamous forking time, but it only forks once a minute.
    If you're running on certain VMs, the fork() may take a while. Looking
    at your INFO output, the fork() call only took 11 ms, so that's not
    going to be the cause of issues. On the "but snapshotting won't affect
    performance" front: it might. Multiple cores requesting regions of
    memory across the same memory controller may become (relatively)
    bandwidth starved, and shoddy hardware can cause nasty interrupts that
    can hang a machine. While neither of those is a likely cause of your
    current issue, bad networking hardware could be (the rules I've
    lived by when doing networking in the past are: check the cables, check
    the switches, check the NICs, check the drivers; in that order).

    If I were going to point fingers at anything, I'd ask myself what Jedis
    did/what it is doing as part of the sync() operation. That seems
    to be dominating your stack traces, which suggests to me that it's hung
    up on something. Is it actually waiting on a network call? Maybe it
    didn't send anything *to* Redis, so Redis doesn't know to respond? If
    you can run another service on that machine and hit it with arbitrary
    network load without issue, I'd look to Jedis.
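    For reference, the call pattern in question is roughly this -- a minimal
    sketch with made-up key names, against a recent Jedis:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Pipeline;
    import redis.clients.jedis.Response;

    public class PipelineSync {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("redis-host", 6379);

            Pipeline p = jedis.pipelined();
            p.select(1);
            Response<String> ok =
                    p.set("last_op_stamp", String.valueOf(System.currentTimeMillis()));

            // sync() flushes the buffered commands and then blocks until *every*
            // queued response has been read -- this is the socketRead0() that the
            // thread dumps are parked in. If the commands were never actually
            // written, or the server is busy with a slow command, all pipelined
            // threads wait right here.
            p.sync();

            System.out.println(ok.get()); // responses are safe to read only after sync()
            jedis.close();
        }
    }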

    Regards,
    - Josiah

  • Pawel at Feb 13, 2013 at 8:57 am

    On Tuesday, February 12, 2013 10:31:02 PM UTC-8, Josiah Carlson wrote:
    On Tue, Feb 12, 2013 at 12:48 PM, Pawel Veselov wrote:
    On Tue, Feb 12, 2013 at 11:44 AM, Yiftach Shoolman
    <yiftach....@gmail.com> wrote:
    You should definitely change your persistence storage settings; writing
    your entire dataset to disk every minute can create a lot of load on
    your system.
    What would you recommend, then? Not for my case specifically, but in
    general: what is considered a performance-safe RDB save interval for
    high-throughput Redis use?
    That depends on how much data you are okay to lose in the case of a crash.
    Safety can be traded off against performance. I'm trying to understand
    the minimal setting at which the performance effect becomes negligible.
    But considering what you said below, I can probably put that aside for
    now.
    However, IMHO, I don't see how the save process would affect operation
    performance. The write lasts about 10 seconds, but it's a multi-CPU
    system, so writing has its own CPU to mooch off of. Writing causes no
    swapping, and otherwise only needs disk, which is not used by the main
    Redis process; I don't believe that sweeping the data for RDB
    persistence impacts get/set operations, at least not noticeably. There
    is, of course, the infamous forking time, but it only forks once a
    minute.
    If you're running on certain VMs, the fork() may take a while. Looking
    at your INFO output, the fork() call only took about 11 ms, so that's
    not going to be the cause of issues. On the "but snapshotting won't
    affect performance" front: it might. Multiple cores requesting regions
    of memory across the same memory controller may become (relatively)
    bandwidth starved, and shoddy hardware can cause nasty interrupts that
    can hang a machine. While neither of those is a likely cause of your
    current issue, bad networking hardware could be (the rules I've lived
    by when doing networking in the past are: check the cables, check the
    switches, check the NICs, check the drivers; in that order).
    I ought to come up with a stress test on just the Redis server,
    replicating its use by the application. By my approximation, web
    requests were coming in at >2k/s, which translates to roughly 10 times
    as many Redis requests; stress-testing Redis use on that particular
    deployment is probably the only way I can expose any issues. The
    hardware is all in a data center, so I don't have much control over it.
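    A minimal sketch of such a stress test (a sketch under assumptions,
    not the application's actual code: thread count, key names, and
    iteration counts are hypothetical; it replays a simplified version of
    the store path through pipelined Jedis calls):

    import java.util.concurrent.*;
    import java.util.concurrent.atomic.AtomicLong;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPool;
    import redis.clients.jedis.JedisPoolConfig;
    import redis.clients.jedis.Pipeline;

    public class RedisStress {
        public static void main(String[] args) throws Exception {
            final JedisPool pool = new JedisPool(new JedisPoolConfig(), "127.0.0.1", 6379);
            final AtomicLong ops = new AtomicLong();
            ExecutorService exec = Executors.newFixedThreadPool(44); // match the observed thread count
            long start = System.currentTimeMillis();
            for (int t = 0; t < 44; t++) {
                exec.submit(new Runnable() {
                    public void run() {
                        Jedis jedis = pool.getResource();
                        try {
                            for (int n = 0; n < 100000; n++) {
                                // simplified store path: SET in db 0, stamp in db 1
                                Pipeline p = jedis.pipelined();
                                p.select(0);
                                p.set("stress:" + Thread.currentThread().getId() + ":" + n, "value");
                                p.select(1);
                                p.set("last_op_stamp", String.valueOf(System.currentTimeMillis()));
                                p.sync();
                                ops.incrementAndGet();
                            }
                        } finally {
                            pool.returnResource(jedis);
                        }
                    }
                });
            }
            exec.shutdown();
            exec.awaitTermination(1, TimeUnit.HOURS);
            long ms = System.currentTimeMillis() - start;
            System.out.println(ops.get() * 1000L / ms + " logical ops/sec");
        }
    }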

    If I were going to point fingers at anything, I'd ask myself what
    Jedis did/is doing as part of the sync() operation. That seems to be
    dominating your tracebacks, which suggests to me that it's hung up on
    something. Is it actually waiting on a network call? Maybe it didn't
    send anything *to* Redis, so Redis doesn't know to respond? If you can
    run another service on that machine and hit it with arbitrary network
    load without issue, I'd look to Jedis.
    I actually very much doubt that Jedis itself is the problem. A cursory
    check of the Jedis code doesn't show anything that could cause a
    problem, but then it wouldn't be anything obvious. However, considering
    that the threads recover, and in a very short time, it ought to mean
    that the data being waited for is eventually provided (otherwise they
    would either be stuck forever or time out with an exception); that's
    why I don't believe Jedis missed some data that should have been sent,
    or expected more data than was sent. The fact that I'm seeing these
    threads, in their quantity and at these specific stack frames, is
    simply because the load is really high, and these are just the points
    of contention.

    The idea that I do have, however, is that maybe there is a problem due
    to pipelining. Pipelining is a virtual concept; the Redis server
    doesn't care or know that incoming requests are pipelined by the
    client. Let's say that the server is under considerable load, and for
    every request made there is a wait time, and the request distribution
    is even. A logical operation (from the perspective of the application)
    shoves a number of Redis operations into the pipe, but the main queue
    evenly interleaves those requests with all other requests from all
    other logical operations in progress. If a logical operation consists
    of, say, 3 Redis operations, the logical operation will have to incur
    the wait 3 times before it can complete. If the operation were encased
    in a transaction (and therefore executed as a single element of the
    queue), it would only incur the wait time once. Basically, that's
    reordering operations so that logical operations benefit. Does that
    make sense, or am I just blowing smoke?
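    For illustration, a sketch of that idea with Jedis: wrapping one
    logical operation's commands in MULTI/EXEC so the server runs them as
    a single element of its queue (key names here are hypothetical):

    import java.util.List;
    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.Transaction;

    public class TxnSketch {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("127.0.0.1", 6379);
            // Commands queued between MULTI and EXEC execute back-to-back
            // on the server, so the logical operation waits in the command
            // queue once instead of once per command.
            Transaction t = jedis.multi();
            t.select(0);
            t.get("some:key");
            t.select(1);
            t.set("last_op_stamp", String.valueOf(System.currentTimeMillis()));
            List<Object> results = t.exec(); // all replies arrive together
            System.out.println(results);
            jedis.disconnect();
        }
    }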

    Another consideration that I have is that it seems that *all* Redis
    commands are processed through that one single queue. The use of
    SELECTs and such contributes to the queue, and even if a command's
    execution itself doesn't amount to much, the time it takes for the
    queue to advance to that command, and then past it to the next one,
    may be considerable if there is enough clutter.
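    One way to cut that clutter (a sketch, assuming the two databases can
    be served by dedicated connections): SELECT once when a connection is
    created, instead of around every operation.

    import redis.clients.jedis.Jedis;

    public class PinnedConnections {
        public static void main(String[] args) {
            // One connection pinned to db 0 and one to db 1; the SELECT
            // cost is paid once per connection, not once per operation.
            Jedis db0 = new Jedis("127.0.0.1", 6379);
            db0.select(0);
            Jedis db1 = new Jedis("127.0.0.1", 6379);
            db1.select(1);

            db0.set("some:key", "value"); // hypothetical keys
            db1.set("last_op_stamp", String.valueOf(System.currentTimeMillis()));

            db0.disconnect();
            db1.disconnect();
        }
    }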

    Thank you!

    Regards,
    - Josiah
  • Josiah Carlson at Feb 13, 2013 at 3:59 pm

    On Wed, Feb 13, 2013 at 12:57 AM, Pawel wrote:
    On Tuesday, February 12, 2013 10:31:02 PM UTC-8, Josiah Carlson wrote:
    [snip]
    That depends on how much data you are okay to lose in the case of a crash.
    Safety can be traded off against performance. I'm trying to understand
    the minimal setting at which the performance effect becomes negligible.
    But considering what you said below, I can probably put that aside for
    now.
    For most folks, Redis' AOF with fsync every second is just about
    right. Unless you're writing large volumes of data (in bytes), that
    setting usually doesn't result in substantial slowdowns.
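    For reference, a minimal sketch of applying that setting at runtime
    through Jedis (assuming your Redis version accepts these parameters
    via CONFIG SET; the equivalent appendonly/appendfsync/save directives
    can also be set in redis.conf):

    import redis.clients.jedis.Jedis;

    public class PersistenceConfig {
        public static void main(String[] args) {
            Jedis jedis = new Jedis("127.0.0.1", 6379);
            jedis.configSet("appendonly", "yes");       // enable the AOF
            jedis.configSet("appendfsync", "everysec"); // fsync once per second
            jedis.configSet("save", "");                // disable RDB snapshots
            jedis.disconnect();
        }
    }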

    [snip]
    If you're running on certain VMs, the fork() may take a while. Looking
    at your INFO output, the fork() call only took about 11 ms, so that's
    not going to be the cause of issues. On the "but snapshotting won't
    affect performance" front: it might. Multiple cores requesting regions
    of memory across the same memory controller may become (relatively)
    bandwidth starved, and shoddy hardware can cause nasty interrupts that
    can hang a machine. While neither of those is a likely cause of your
    current issue, bad networking hardware could be (the rules I've lived
    by when doing networking in the past are: check the cables, check the
    switches, check the NICs, check the drivers; in that order).
    I ought to come up with a stress test on just the Redis server,
    replicating its use by the application. By my approximation, web
    requests were coming in at >2k/s, which translates to roughly 10 times
    as many Redis requests; stress-testing Redis use on that particular
    deployment is probably the only way I can expose any issues. The
    hardware is all in a data center, so I don't have much control over it.
    ... This may lead to the actual cause of the slowdown.
    If I were going to point fingers at anything, I'd ask myself what
    Jedis did/is doing as part of the sync() operation. That seems to be
    dominating your tracebacks, which suggests to me that it's hung up on
    something. Is it actually waiting on a network call? Maybe it didn't
    send anything *to* Redis, so Redis doesn't know to respond? If you can
    run another service on that machine and hit it with arbitrary network
    load without issue, I'd look to Jedis.

    I actually very much doubt that Jedis itself is the problem. A cursory
    check of the Jedis code doesn't show anything that could cause a
    problem, but then it wouldn't be anything obvious. However, considering
    that the threads recover, and in a very short time, it ought to mean
    that the data being waited for is eventually provided (otherwise they
    would either be stuck forever or time out with an exception); that's
    why I don't believe Jedis missed some data that should have been sent,
    or expected more data than was sent. The fact that I'm seeing these
    threads, in their quantity and at these specific stack frames, is
    simply because the load is really high, and these are just the points
    of contention.

    The idea that I do have, however, is that maybe there is a problem due
    to pipelining. Pipelining is a virtual concept; the Redis server
    doesn't care or know that incoming requests are pipelined by the
    client. Let's say that the server is under considerable load, and for
    every request made there is a wait time, and the request distribution
    is even. A logical operation (from the perspective of the application)
    shoves a number of Redis operations into the pipe, but the main queue
    evenly interleaves those requests with all other requests from all
    other logical operations in progress. If a logical operation consists
    of, say, 3 Redis operations, the logical operation will have to incur
    the wait 3 times before it can complete. If the operation were encased
    in a transaction (and therefore executed as a single element of the
    queue), it would only incur the wait time once. Basically, that's
    reordering operations so that logical operations benefit. Does that
    make sense, or am I just blowing smoke?
    Earlier in the thread you offered slowlog output. What portion of the
    commands is slower than 1ms inside Redis? 1%? 5%? Because if you're
    expecting 20k requests/second to Redis, and 1% of them take 1ms inside
    Redis, with another 1% taking 5ms, you've just accounted for 700ms in
    every second. It wouldn't take significantly more commands to mean
    you're overloading Redis with (relatively) expensive requests.
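    To spell out that arithmetic: the 700 ms figure works out if the
    second slice is read as 0.5% rather than another full 1% (a full 1%
    would give 1.2 s):

        20,000 req/s x 1.0% x 1 ms = 200 ms
        20,000 req/s x 0.5% x 5 ms = 500 ms
                             total = 700 ms

    of the single Redis thread's time consumed per wall-clock second, by
    just 1.5% of the commands.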
    Another consideration that I have is that it seems that *all* Redis
    commands are processed through that one single queue. The use of
    SELECTs and such contributes to the queue, and even if a command's
    execution itself doesn't amount to much, the time it takes for the
    queue to advance to that command, and then past it to the next one,
    may be considerable if there is enough clutter.
    Well, this is an issue if you are having Redis do more work internally
    than it is capable of doing. Data structure manipulation is relatively
    inexpensive, but it's not free. And when you get large requests and
    responses coming in and out, Redis might even be spending significant
    time pushing data through its buffers. Running a quick
    "redis-benchmark -q", you can see the effect of longer LRANGE requests
    on Redis command performance. Actually, you may want to run strace
    on Redis, just to see if it might be spending a lot of time on the
    network.

    Yeah. Long and short: you may be running Redis at its limit. If you
    can shard your data, and you have spare cores on that box, sharding on
    that one machine and pinning the Redis instances to alternating cores
    using taskset may let you accomplish what you need without requiring
    more machines.
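    A minimal sketch of that layout using Jedis' client-side sharding (the
    ports, and the taskset pinning shown in the comments, are
    hypothetical):

    import java.util.Arrays;
    import java.util.List;
    import redis.clients.jedis.JedisPoolConfig;
    import redis.clients.jedis.JedisShardInfo;
    import redis.clients.jedis.ShardedJedis;
    import redis.clients.jedis.ShardedJedisPool;

    public class ShardingSketch {
        public static void main(String[] args) {
            // Two Redis instances on one box, e.g. started as:
            //   taskset -c 1 redis-server --port 6379
            //   taskset -c 2 redis-server --port 6380
            List<JedisShardInfo> shards = Arrays.asList(
                    new JedisShardInfo("127.0.0.1", 6379),
                    new JedisShardInfo("127.0.0.1", 6380));
            ShardedJedisPool pool = new ShardedJedisPool(new JedisPoolConfig(), shards);
            ShardedJedis sj = pool.getResource();
            try {
                sj.set("some:key", "value"); // each key hashes to one shard
            } finally {
                pool.returnResource(sj);
            }
            pool.destroy();
        }
    }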

    - Josiah

  • Pawel at Feb 25, 2013 at 5:59 am

    On Wednesday, February 13, 2013 7:59:44 AM UTC-8, Josiah Carlson wrote:
    On Wed, Feb 13, 2013 at 12:57 AM, Pawel <pawel....@gmail.com> wrote:
    On Tuesday, February 12, 2013 10:31:02 PM UTC-8, Josiah Carlson wrote:
    [skipped]
    If I were going to point fingers at anything, I'd ask myself what
    Jedis did/is doing as part of the sync() operation. That seems to be
    dominating your tracebacks, which suggests to me that it's hung up on
    something. Is it actually waiting on a network call? Maybe it didn't
    send anything *to* Redis, so Redis doesn't know to respond? If you can
    run another service on that machine and hit it with arbitrary network
    load without issue, I'd look to Jedis.
    I actually very much doubt that Jedis itself is the problem. A cursory
    check of the Jedis code doesn't show anything that could cause a
    problem, but then it wouldn't be anything obvious. However, considering
    that the threads recover, and in a very short time, it ought to mean
    that the data being waited for is eventually provided (otherwise they
    would either be stuck forever or time out with an exception); that's
    why I don't believe Jedis missed some data that should have been sent,
    or expected more data than was sent. The fact that I'm seeing these
    threads, in their quantity and at these specific stack frames, is
    simply because the load is really high, and these are just the points
    of contention.

    The idea that I do have, however, is that maybe there is a problem due
    to pipelining. Pipelining is a virtual concept; the Redis server
    doesn't care or know that incoming requests are pipelined by the
    client. Let's say that the server is under considerable load, and for
    every request made there is a wait time, and the request distribution
    is even. A logical operation (from the perspective of the application)
    shoves a number of Redis operations into the pipe, but the main queue
    evenly interleaves those requests with all other requests from all
    other logical operations in progress. If a logical operation consists
    of, say, 3 Redis operations, the logical operation will have to incur
    the wait 3 times before it can complete. If the operation were encased
    in a transaction (and therefore executed as a single element of the
    queue), it would only incur the wait time once. Basically, that's
    reordering operations so that logical operations benefit. Does that
    make sense, or am I just blowing smoke?
    Earlier in the thread you offered slowlog output. What portion of the
    commands is slower than 1ms inside Redis? 1%? 5%? Because if you're
    expecting 20k requests/second to Redis, and 1% of them take 1ms inside
    Redis, with another 1% taking 5ms, you've just accounted for 700ms in
    every second. It wouldn't take significantly more commands to mean
    you're overloading Redis with (relatively) expensive requests.
    These slow commands run on a separate thread, at a rate of roughly 1-2
    per second.

    But I probably mis-estimated the number of other commands that are
    executed per request/response (more below).

    Another consideration that I have is that it seems that *all* Redis
    commands are processed through that one single queue. The use of
    SELECTs and such contributes to the queue, and even if a command's
    execution itself doesn't amount to much, the time it takes for the
    queue to advance to that command, and then past it to the next one,
    may be considerable if there is enough clutter.
    Well, this is an issue if you are having Redis do more work internally
    than it is capable of doing. Data structure manipulation is relatively
    inexpensive, but it's not free. And when you get large requests and
    responses coming in and out, Redis might even be spending significant
    time pushing data through its buffers. Running a quick
    "redis-benchmark -q", you can see the effect of longer LRANGE requests
    on Redis command performance. Actually, you may want to run strace
    on Redis, just to see if it might be spending a lot of time on the
    network.
    If I don't see the Redis process consuming all available CPU (and I
    haven't, though I might've looked too late), then, yeah, the only
    other source of problems may be the network. And it may be just
    bandwidth at this point, considering the short round-trips and the
    sheer volume of them.

    Yeah. Long and short: you may be running Redis at its limit. If you
    can shard your data, and you have spare cores on that box, sharding on
    that one machine and pinning the Redis instances to alternating cores
    using taskset may let you accomplish what you need without requiring
    more machines.
    I ended up investing time into simplifying how the application
    interacts with the server, reducing the number of commands that are
    executed per logical request/response. That should help. In addition,
    I also made sure that it's possible to shard the data across multiple
    instances.

    - Josiah
    Thank you for your support.

  • Pawel at Feb 11, 2013 at 10:01 pm

    On Monday, February 11, 2013 1:47:17 PM UTC-8, Yiftach wrote:

    No - Redis is based on a single thread, and each of these commands
    takes more than 10sec.
    Aren't those microseconds (#3 in slowlog)?

    And if you do "slowlog get N" you will probably see more of the slow
    commands
    I did, but they are all the same - bulk delete, and ZRANGEBYSCORE

    On Mon, Feb 11, 2013 at 11:40 PM, Pawel <pawel....@gmail.com> wrote:
    On Monday, February 11, 2013 1:23:44 PM UTC-8, Yiftach wrote:

    Can you send the output of your SLOWLOG (http://redis.io/commands/slowlog)?
    It's filled with those:

    redis 127.0.0.1:6379> slowlog get
    1) 1) (integer) 1274
    2) (integer) 1360618170
    3) (integer) 12458
    4) 1) "ZRANGEBYSCORE"
    2) "D8-D3-85-E1-40-3C"
    3) "0.0"
    4) "1.360614570895E12"
    5) "limit"
    6) "0"
    7) "10000"
    8) "withscores"

    and those:

    18) 1) (integer) 1263
    2) (integer) 1360617844
    3) (integer) 10113
    4) 1) "DEL"
    2) "37917061"
    3) "37917063"
    4) "37917064"
    5) "37917065"
    6) "37917068"
    7) "37917069"
    8) "37917070"
    9) "37917072"
    10) "37917073"
    11) "37917075"
    12) "37917077"
    13) "37917082"
    14) "37917084"
    15) "37917090"
    16) "37917091"
    17) "37917092"
    18) "37917095"
    19) "37917097"
    20) "37917098"
    21) "37917099"
    22) "37917103"
    23) "37917104"
    24) "37917105"
    25) "37917106"
    26) "37917107"
    27) "37917108"
    28) "37917111"
    29) "37917112"
    30) "37917113"
    31) "37917114"
    32) "... (4768 more arguments)"


    Which is fine. These operations are performed by a separate thread, on
    a time interval, to aggregate the data. Unless they somehow interfere
    with the request-based operations. But I don't even see those code
    paths in the thread dump...
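    Still, a single DEL carrying ~4,800 keys holds the single-threaded
    server for ~10 ms. A hypothetical mitigation sketch for that
    aggregation thread (helper name and batch size are made up): delete in
    small batches so no one command monopolizes the queue.

    import java.util.Arrays;
    import java.util.List;
    import redis.clients.jedis.Jedis;

    public class ChunkedDelete {
        // Deletes keys in batches of `chunk` so each DEL stays short.
        static void deleteInChunks(Jedis jedis, List<String> keys, int chunk) {
            for (int i = 0; i < keys.size(); i += chunk) {
                List<String> slice = keys.subList(i, Math.min(i + chunk, keys.size()));
                jedis.del(slice.toArray(new String[slice.size()]));
            }
        }

        public static void main(String[] args) {
            Jedis jedis = new Jedis("127.0.0.1", 6379);
            deleteInChunks(jedis, Arrays.asList("k1", "k2", "k3"), 100);
            jedis.disconnect();
        }
    }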

    On Mon, Feb 11, 2013 at 11:18 PM, Pawel wrote:

    [snip]
    info:
    redis 127.0.0.1:6379[1]> info
    redis_version:2.4.16
    redis_git_sha1:00000000
    redis_git_dirty:0
    arch_bits:64
    multiplexing_api:epoll
    gcc_version:4.4.6
    process_id:1709
    uptime_in_seconds:5184711
    uptime_in_days:60
    lru_clock:1843694
    used_cpu_sys:13843.72
    used_cpu_user:10718.59
    used_cpu_sys_children:663.00
    used_cpu_user_children:4780.55
    connected_clients:41
    connected_slaves:1
    client_longest_output_list:0
    client_biggest_input_buf:0
    blocked_clients:0
    used_memory:584569648
    used_memory_human:557.49M
    used_memory_rss:739061760
    used_memory_peak:772807944
    used_memory_peak_human:737.01M
    mem_fragmentation_ratio:1.26
    mem_allocator:jemalloc-3.0.0
    loading:0
    aof_enabled:0
    changes_since_last_save:253570
    bgsave_in_progress:0
    last_save_time:1360614202
    bgrewriteaof_in_progress:0
    total_connections_received:114926
    total_commands_processed:1193792973
    expired_keys:8402
    evicted_keys:0
    keyspace_hits:84136809
    keyspace_misses:8214268
    pubsub_channels:0
    pubsub_patterns:0
    latest_fork_usec:11521
    vm_enabled:0
    role:master
    slave0:209.18.95.230,6379,online
    db0:keys=1254589,expires=0
    db1:keys=3873,expires=3869

    Thank you so much.

Discussion Overview
group: redis-db
categories: redis
posted: Feb 11, '13 at 9:18p
active: Feb 25, '13 at 5:59a
posts: 21
users: 4
website: redis.io
irc: #redis
