FAQ
Hi,

I am not really sure whether I am doing something wrong or whether what is
happening is normal.
I am attaching an image of one Redis instance's memory use.
The image shows two memory drops. Those drops coincide in time with a
"keys *". After some tests I think I can conclude the drops are caused by
the "keys" command forcing already-expired keys to be freed. If I am right,
it means a lot of keys are already expired but still using memory. Is this
normal? I think on a very busy Redis instance with big TTLs this could be a
problem. Is there maybe an option to modify this behavior?

The image shows the memory of a Redis 2.2 instance, but I tested the same
thing on 2.6.13 and the same happens.

Thanks,
Dan.

--
You received this message because you are subscribed to the Google Groups "Redis DB" group.
To unsubscribe from this group and stop receiving emails from it, send an email to redis-db+unsubscribe@googlegroups.com.
To post to this group, send email to redis-db@googlegroups.com.
Visit this group at http://groups.google.com/group/redis-db?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.


  • Josiah Carlson at May 7, 2013 at 5:50 pm
    This is normal behavior.

    By default, Redis only looks for keys to expire (along with some other
    operations) 10 times/second. You can increase this value (the configuration
    option is called 'hz'), but it will result in Redis using more processor
    time while idle. You could also perform your own RANDOMKEY calls, which will
    have much the same effect, except that they only trigger the key-expiration
    part of the normal operations that are done 10 times/second. While
    repeatedly calling RANDOMKEY is a total hack for incrementally expiring old
    data, it is easily tuned depending on your current load and how much "old"
    data you are willing to tolerate.
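
    For reference, a sketch of what raising that knob looks like in redis.conf
    (the value 50 below is just an illustration, not a recommendation):

    ```conf
    # redis.conf: frequency of Redis's background tasks, default 10.
    # Raising it makes the active expiration cycle run more often,
    # at the cost of more CPU use while the server is idle.
    hz 50
    ```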

    You seem to accumulate roughly 20-35% extra unused space daily. How many
    keys of persistent and volatile data do you have? That will tell you how
    many RANDOMKEY calls you could/should send per time period to reduce the
    waste to something you would find more reasonable.

    Regards,
      - Josiah





  • Dan C at May 8, 2013 at 8:48 am
    Hi Josiah,

    In this Redis instance I have around 5M keys, almost all of them with a 10h
    TTL.
    Let me see if I understood you properly. By default, 10 times/second (hz),
    Redis will look for keys to expire. Does this mean it will remove (and free
    the memory of) ALL the already-expired keys? Or will it just remove some of
    them?

    I did a test yesterday. I had a single instance (2.6.13) and I SET
    thousands of keys, half of them with a TTL of 30 and the other half with a
    TTL of 60.
    The first time I SET 12k keys everything seemed fine. It worked perfectly.
    Then I SET 18k keys with a TTL of 5000, and started SETing thousands of the
    30- and 60-TTL keys.
    After the last SET with TTL 60 finished, I started watching the memory and
    the dbsize. As I understand it, 60 seconds after the last SET all keys
    should have expired (except for the 18k with TTL 5000), so dbsize should be
    18k. It doesn't happen like that. After 60 seconds (and after 300 seconds)
    dbsize is not yet 18k. It is decreasing, yes, so keys are being expired,
    deleted, and memory is being freed, but much more slowly than I expected
    given the 60s TTL (which would be all of them after 60s). Stranger yet, if
    at any moment after the 60s I do a "keys *" on the instance, memory is
    freed and dbsize drops to 18k (and indeed there are 18k keys).
    So, from my point of view, this means that even if the Redis process
    expiring keys (hz) is really deleting them, it is not deleting ALL the keys
    that have already expired. And somehow, when I use "keys *", it really does
    force ALL expired keys to be deleted and their memory freed.

    Sorry for the extended explanation, but I think my first post was too
    vague, and I am not sure whether this behavior is supposed to be normal.


    Thanks a lot!

  • Jan-Erik Rediger at May 8, 2013 at 2:44 pm
    On every interval Redis expires just a few keys, not all of them. "KEYS *"
    reads ALL the keys in the database, which is why every expired key gets
    removed when you run it (and also why KEYS should only be used in
    development, never in production: it's slow).
  • Josiah Carlson at May 8, 2013 at 2:44 pm
    Replies inline.
    On Wed, May 8, 2013 at 1:48 AM, Dan C wrote:

    Hi Josiah,

    In this redis instance I have around 5M keys, almost all of them with a
    10h TTL.
    Let me see if I understood you properly. By default 10times/second (hz)
    redis will look for keys to expire. Does this mean that it will remove (and
    free memory) for ALL the already expired keys? Or it will just remove some
    of them?
    No. It picks random keys, checking whether they have expired.

    I did a test yesterday. In that test I just had an instance (2.6.13) and I
    SETed thousands of keys. Half of them with a TTL of 30 and the other half
    with a TTL of 60.
    The first time I SET 12k keys everything seemed fine. It worked perfectly.
    Then I SET 18k keys with a TTL of 5000, and start SETing thousands of the
    30 and 60 TTL.
    After the last set with TTL 60 finished I started looking the memory and
    the dbsize. As I understand it, theoretically 60 seconds after the last SET
    all keys would have to be expired (except for the 18k with TTL 5000) and so
    dbsize would have to be 18k. It doesn't happen like that. After 60 seconds
    (and after 300 seconds) dbsize is not yet 18k, it is "discounting" yes, so
    keys are being expired, deleted and memory is being freed, but much slower
    than what I was expecting knowing the 60 TTL (which would be all of them
    after 60s). The stranger thing yet is that, if in any moment after the 60s
    I do a "keys *" on the instance, memory is freed and dbsize is 18k (in fact
    there are 18k keys).
    Here's the critical fact that you are missing: Redis does not keep a list
    of the keys that should expire. It keeps a counter. So when it goes to
    expire keys, it doesn't have a list to iterate over; it performs some
    random probes in the hash table. Keys that need to be expired are expired.
    Keys that don't are skipped.

    Think of it like this. You've got some keys. Some of them have already
    expired, some haven't. If you do a random probe into the space looking for
    keys to delete due to expiration, whether you find some will depend on the
    number of probes you do, and your likelihood of finding a key to expire. As
    an example, if you have 100k keys, 75k of which are past their expiration
    times, 25k of which are not... then when Redis randomly probes for keys to
    delete, it's generally going to find a key to delete 3/4 of the time. But
    that quickly dwindles (depending on how often you are writing more keys
    with TTLs to Redis), as those old keys are deleted pretty reliably. Once
    you get to about 20-25% of your keys being ready for deletion, then only
    about 15-20% of the random checks will return keys that can be deleted.

    Then there's the other side: if Redis finds a lot of keys to expire during
    its random probing, it spends extra time in that cycle looking for more
    keys to clear out.
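
    That sampling argument can be sketched numerically. The model below is
    plain Python with no Redis involved (all names are made up for
    illustration): it probes random keys the way the active expiration cycle
    does and finds an expired key roughly in proportion to the expired
    fraction of the keyspace.

    ```python
    import random

    random.seed(7)

    # 100k keys, 75% of them already past their TTL, as in the example above.
    # True means "this key's TTL has passed".
    keys = {f"key:{i}": (i % 4 != 0) for i in range(100_000)}

    def expire_cycle(keys, probes):
        """One modeled expiration cycle: sample `probes` random keys and
        delete the ones that have expired, returning the hit count."""
        sampled = random.sample(list(keys), probes)
        hits = [k for k in sampled if keys[k]]
        for k in hits:
            del keys[k]
        return len(hits)

    found = expire_cycle(keys, 100)  # about 3/4 of probes hit an expired key
    ```

    As the expired fraction dwindles, each cycle of the same size finds fewer
    and fewer keys, which is why a residue of expired-but-present keys lingers.
    
    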

    So, from my point of view this means that still if the redis process
    expiring keys (hz) is really deleting them, it is not deleting ALL the keys
    which are already expired. And somehow when I use "keys *" it really forces
    ALL expired keys to be deleted and memory freed.
    It doesn't delete all of them with random probing. Statistically speaking,
    it is very improbable that it would actually find them all. When performing
    a KEYS call, Redis visits all of the keys, notices which ones have expired,
    and deletes them. Visiting every key is part of why KEYS is so slow
    (compared to other commands).

    Sorry for this extended explanation, but I think my first post was
    too vague, and I am not sure whether this behavior is supposed to be normal.
    Perfectly normal.

      - Josiah

  • Dan C at May 8, 2013 at 3:28 pm
    Ok! Thanks Josiah and Jan-Erik,

    I get it now.
    So, the only way to "expire more" is to increase the probes. As I
    understand it, the only way to do so is with the "hs" parameter.
    In my case, though, I will have to change something, probably the TTL
    and/or the "hs" value. Of my 1.7GB database, more than 500MB is expired
    data. I think using 500MB of RAM on already-expired keys is a big waste of
    memory!
    Anyway, is this proportion normal? I mean, it seems to me that 1/3 of the
    database already expired (as I can see after the "keys *") is quite a lot.

    Thanks a lot guys!

    Dan.

  • Josiah Carlson at May 8, 2013 at 4:26 pm
    Actually, the option is 'hz', as in 'hertz'.

    For a more in-depth look at what goes on, visit
    http://redis.io/commands/expire and read the "How Redis expires keys"
    section. If you want fewer expired keys hanging around, you could
    definitely increase the hz configuration, but that probably won't get you
    what you want, as it will basically find fewer than 25% of 100*hz keys to
    expire (so if you increase your checks to 100, which is not recommended,
    you'd expire fewer than 2500 keys every second). You could also set the
    maxmemory configuration option, which will make Redis more aggressive with
    its expiration. Or you could just have a client that repeatedly calls
    RANDOMKEY (as I've mentioned before).

    Really though, there are four things that will actually solve your
    perceived problem:
    1. Keep a list of keys that expire, and expire them manually (this used to
    be pretty common; you can use a ZSET to hold keys and their expiration
    times).
    2. Call KEYS every hour or so to expire everything (if you provide a
    pattern that doesn't match any keys, you reduce the data that Redis needs
    to send you while still getting the expiration).
    3. Get a bigger machine and accept that Redis will tend to keep up to
    about 25% of your keys around as expired.
    4. Write less data to Redis.
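
    Option 1 can be modeled in a few lines. This sketch uses plain Python
    dicts as a stand-in for the Redis structures (in real Redis the index
    would be a ZSET scored by expiry time: ZADD on write, ZRANGEBYSCORE plus
    DEL on each sweep); all names here are hypothetical:

    ```python
    data = {}       # key -> value (the actual keys)
    expire_at = {}  # key -> unix timestamp; in Redis, a ZSET member/score

    def set_with_ttl(key, value, ttl, now):
        """Write a key and record its expiry in the manual index
        (SET key value; ZADD expire_at (now + ttl) key)."""
        data[key] = value
        expire_at[key] = now + ttl

    def sweep(now):
        """Delete every key whose expiry has passed, returning the count
        (ZRANGEBYSCORE expire_at -inf now, then DEL + ZREM per key)."""
        due = [k for k, t in expire_at.items() if t <= now]
        for k in due:
            data.pop(k, None)
            del expire_at[k]
        return len(due)
    ```

    Run on a timer from a worker, a sweep like this reclaims memory at a pace
    you control instead of depending on Redis's random sampling.
    
    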

      - Josiah

    On Wed, May 8, 2013 at 8:28 AM, Dan C wrote:

    Ok! Thanks Josiah and Jan-Erik,

    I get it now.
    So, the only way to "expire more" is increasing the probes. As I
    understand it the only way to do so is with the "hs" parameter.
    In my case though I will have to change something. Probably the TTL and/or
    the "hs" value. From my 1,7GMB database more than 500MB are expired. I
    think using 500MB of RAM in already expired keys is a big waste of memory!
    Anyway, is it normal this proportion? I mean, it seems to me that 1/3 of
    the database already expired (as I can see after de "keys *") is quite a
    lot.

    Thanks a lot guys!

    Dan.

    El miércoles, 8 de mayo de 2013 16:44:28 UTC+2, Josiah Carlson escribió:
    Replies inline.
    On Wed, May 8, 2013 at 1:48 AM, Dan C wrote:

    Hi Josiah,

    In this redis instance I have around 5M keys, almost all of them with a
    10h TTL.
    Let me see if I understood you properly. By default 10times/second (hz)
    redis will look for keys to expire. Does this mean that it will remove (and
    free memory) for ALL the already expired keys? Or it will just remove some
    of them?
    No. It picks random keys, checking to see if they expire.

    I did a test yesterday. In that test I just had an instance (2.6.13) and
    I SET thousands of keys, half of them with a TTL of 30 and the other half
    with a TTL of 60.
    The first time I SET 12k keys everything seemed fine. It worked perfectly.
    Then I SET 18k keys with a TTL of 5000, and started SETting thousands of
    the 30- and 60-TTL keys.
    After the last SET with TTL 60 finished, I started watching the memory and
    the dbsize. As I understand it, theoretically 60 seconds after the last SET
    all keys would have to be expired (except for the 18k with TTL 5000), and so
    dbsize would have to be 18k. It doesn't happen like that. After 60 seconds
    (and after 300 seconds) dbsize is not yet 18k. It is decreasing, yes, so
    keys are being expired, deleted, and memory is being freed, but much more
    slowly than I was expecting given the 60 TTL (which would be all of them
    after 60s). Stranger yet, if at any moment after the 60s
    I do a "keys *" on the instance, memory is freed and dbsize is 18k (in fact
    there are 18k keys).
    Here's the critical fact that you are missing: Redis does not keep a list
    of the keys that should expire; it keeps a counter. So when it goes to
    expire keys, it doesn't have a list to iterate over. Instead, it performs
    some random probes in the hash table. Keys that need to be expired are
    expired; keys that don't are skipped.

    Think of it like this. You've got some keys. Some of them have already
    expired, some haven't. If you do a random probe into the space looking for
    keys to delete due to expiration, whether you find some will depend on the
    number of probes you do, and your likelihood of finding a key to expire. As
    an example, if you have 100k keys, 75k of which are past their expiration
    times, 25k of which are not... then when Redis randomly probes for keys to
    delete, it's generally going to find a key to delete 3/4 of the time. But
    that quickly dwindles (depending on how often you are writing more keys
    with TTLs to Redis), as those old keys are deleted pretty reliably. Once
    you get to about 20-25% of your keys being ready for deletion, then only
    about 15-20% of the random checks will return keys that can be deleted.
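    The probabilities Josiah describes can be checked with a small simulation (pure Python, and only a sketch of the idea; the real Redis sampler works in timed cycles and re-loops while it keeps finding expired keys):

    ```python
    import random

    def probe_cycle_sim(total=100_000, expired=75_000, probes=20_000, seed=42):
        """Model a keyspace as flags (True = past its TTL) and apply random
        probes: each probe checks one random key and deletes it if expired.
        Returns the fraction of surviving keys that are still expired."""
        rng = random.Random(seed)
        keys = [True] * expired + [False] * (total - expired)
        for _ in range(probes):
            i = rng.randrange(len(keys))
            if keys[i]:                # probe hit an expired key: delete it
                keys[i] = keys[-1]     # swap-remove keeps the list compact
                keys.pop()
        return sum(keys) / len(keys)
    ```

    With 75% of 100k keys expired, 20k probes still leave most of the dead keys in place: random sampling chips away at the backlog, but it never sweeps it clean the way a full KEYS scan does.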

    There's also the fact that if Redis finds a lot of keys to expire
    during its random probing, it takes extra time to look for more keys to
    clear out.

    So, from my point of view this means that even if the Redis process
    expiring keys (hz) is really deleting them, it is not deleting ALL the keys
    which are already expired. And somehow when I use "keys *" it really forces
    ALL expired keys to be deleted and the memory freed.
    It doesn't delete all of them with random probing. Statistically
    speaking, it is very improbable that it would actually find them all. When
    performing a KEYS call, Redis visits all of the keys, notices which ones
    have expired, and deletes them. Visiting all keys is part of why KEYS is
    so slow (compared to other commands).

    Sorry for the extended explanation, but I think my first post was too
    vague, and I am not sure if this behavior is supposed to be normal.
    Perfectly normal.

    - Josiah

    Thanks a lot!
    On Tuesday, May 7, 2013, at 19:50:25 UTC+2, Josiah Carlson wrote:
    This is normal behavior.

    By default, Redis only looks for keys to expire (along with some other
    operations) 10 times/second. You can increase this value (the configuration
    option is called 'hz'), but it will result in Redis using more processor
    during idle. You could also perform your own RANDOMKEY calls, which will
    have much the same effect, except that it only does the key expiration part
    of the normal operations that are done 10 times/second. While repeated
    calls to RANDOMKEY is a total hack for incrementally expiring old data, it
    is easily tuned depending on your current load and how much "old" data you
    are willing to have.

    You seem to collect roughly 20-35% extra unused space daily. How many
    keys of persistent and volatile data do you have? That will inform you of
    how many times you could/should send RANDOMKEY per time period to reduce
    that to something that you would find more reasonable.

    Regards,
    - Josiah
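    The RANDOMKEY trick Josiah suggests can be sketched in a few lines. This assumes a redis-py-style client object exposing `randomkey()`; each server-side RANDOMKEY looks up one random key and, as a side effect, lazily deletes it if its TTL has already passed:

    ```python
    import time

    def nudge_expiration(client, calls=1000, pause=0.01):
        """Repeatedly issue RANDOMKEY so the server keeps checking random
        keys' TTLs. `calls` and `pause` are tuning knobs: more calls per
        second reclaims memory faster at the cost of extra server load."""
        for _ in range(calls):
            client.randomkey()
            if pause:
                time.sleep(pause)
    ```

    Run from a cron-like job, e.g. `nudge_expiration(redis.Redis(), calls=500, pause=0.002)`, and tune the rate against your write load and how much stale data you can tolerate.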





  • Dan C at May 9, 2013 at 7:26 am
    Thank you very much Josiah!

    It couldn't be clearer. Now it's time to decide which way to go.

    Again: Thanks!

    On Wednesday, May 8, 2013, at 18:26:01 UTC+2, Josiah Carlson wrote:
    Actually, the option is 'hz', as in 'hertz'.

    For a more in-depth look at what goes on, visit
    http://redis.io/commands/expire and see the "How Redis expires keys"
    section. If you wanted to have fewer expired keys hanging around, you could
    definitely increase the hz configuration, but that probably won't get you
    what you want, as it will find fewer than 25% of 100*hz keys to expire (so
    if you increase your checks to 100, which is not recommended, you'd expire
    fewer than 2500 keys every second). You could also set the maxmemory
    configuration option, which will make Redis more aggressive with its
    expiration. Or you could just have a client that repeatedly calls RANDOMKEY
    (as I mentioned before).
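    The maxmemory route is a redis.conf fragment; the values below are placeholders for illustration, not recommendations (volatile-ttl evicts the keys with the nearest expiration first):

    ```
    # Cap the dataset; once the cap is hit, Redis evicts volatile keys,
    # reclaiming expired/expirable data far more aggressively than the
    # background sampler does.
    maxmemory 1gb
    maxmemory-policy volatile-ttl
    ```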

    Really though, there are four things that will actually solve your
    perceived problem:
    1. You keep a list of keys that expire, and you manually expire them (this
    used to be pretty common; you can use a ZSET to keep keys and their
    expiration times)
    2. Call KEYS every hour or so to expire everything (if you provide a
    pattern that doesn't match any keys, then you reduce the data that Redis
    needs to send you, while still getting expiration)
    3. Get a bigger machine and deal with the fact that Redis will tend to
    keep up to about 25% of your keys as expired.
    4. Write less data to Redis
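    Option 1 (a ZSET as an expiration index) can be modeled in pure Python. This is only a sketch of the pattern: against a real server the dict below would be a ZSET, scheduling a key is ZADD, and the sweep is ZRANGEBYSCORE followed by DEL and ZREM:

    ```python
    import time

    class ManualExpirer:
        """Track expiration times yourself and sweep them explicitly,
        instead of relying on Redis's random sampler."""

        def __init__(self):
            self.index = {}  # key -> expiration timestamp (stands in for a ZSET)
            self.data = {}   # the keys themselves

        def set(self, key, value, ttl):
            self.data[key] = value
            self.index[key] = time.time() + ttl  # ZADD expiry <ts> <key>

        def sweep(self, now=None):
            now = time.time() if now is None else now
            # ZRANGEBYSCORE expiry 0 <now>: everything whose time has come
            due = [k for k, ts in self.index.items() if ts <= now]
            for k in due:
                self.data.pop(k, None)           # DEL <key>
                del self.index[k]                # ZREM expiry <key>
            return len(due)
    ```

    The upside of this pattern is determinism: one sweep removes every key that is actually due, instead of the fraction that random probing happens to hit.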

    - Josiah


  • Salvatore Sanfilippo at May 9, 2013 at 1:27 pm
    Hello Dan,

    you may try to hack the following defines and recompile if you want
    Redis to expire more:

    #define REDIS_EXPIRELOOKUPS_PER_CRON 10 /* lookup 10 expires per loop */
    #define REDIS_EXPIRELOOKUPS_TIME_PERC 25 /* CPU max % for keys collection */

    By default Redis will never use more than 25% of CPU time for lazy
    expiring of keys, but you may want to raise this, along with the number of
    lookups performed per loop.

    I would try 50 lookups and 50% max CPU to see how this changes
    the behavior.

    Modifying the hz parameter is unlikely to result in an improvement,
    as Redis will split that 25% max CPU time into smaller parts because
    the function to expire keys is called more often.

    Thanks,
    Salvatore
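    A rough way to read those two defines (assuming Redis 2.6's serverCron, which fires hz times per second and samples REDIS_EXPIRELOOKUPS_PER_CRON keys per inner loop, re-looping while it keeps finding expired keys and the CPU cap isn't hit):

    ```python
    def min_probe_rate(hz=10, lookups_per_cron=10):
        """Guaranteed lower bound on sampled keys per second: every cron
        tick does at least one loop of `lookups_per_cron` probes, even when
        few of the sampled keys turn out to be expired."""
        return hz * lookups_per_cron
    ```

    The defaults give a floor of only 100 probes/sec over a 5M-key dataset; raising the lookups to 50, as Salvatore suggests, lifts that floor to 500/sec before the CPU cap even comes into play.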


    --
    Salvatore 'antirez' Sanfilippo
    open source developer - VMware
    http://invece.org

    Beauty is more important in computing than anywhere else in technology
    because software is so complicated. Beauty is the ultimate defence
    against complexity.
            — David Gelernter

  • Salvatore Sanfilippo at May 9, 2013 at 1:28 pm
    P.S. However, raising "hz" will make the expiring more incremental.
    On Thu, May 9, 2013 at 3:26 PM, Salvatore Sanfilippo wrote:
    Hello Dan,

    you may try to hack the following defines and recompile if you want
    Redis to expire more:

    #define REDIS_EXPIRELOOKUPS_PER_CRON 10 /* lookup 10 expires per loop */
    #define REDIS_EXPIRELOOKUPS_TIME_PERC 25 /* CPU max % for keys collection */

    By default it will never use more than 25% of CPU time for lazy
    expiring of keys, but you may want to raise this, and even the lookups
    performed per loop.

    I would try with 50 lookups and 50% max CPU to see how this changes
    the behavior.

    Modifying the hz parameter is unlikely to result into an improvement
    as Redis will split that 25% max CPU time into smaller parts because
    the function to expire keys is called more often.

    Thanks,
    Salvatore
    On Wed, May 8, 2013 at 5:28 PM, Dan C wrote:
    Ok! Thanks Josiah and Jan-Erik,

    I get it now.
    So, the only way to "expire more" is increasing the probes. As I understand
    it the only way to do so is with the "hs" parameter.
    In my case though I will have to change something. Probably the TTL and/or
    the "hs" value. From my 1,7GMB database more than 500MB are expired. I think
    using 500MB of RAM in already expired keys is a big waste of memory!
    Anyway, is it normal this proportion? I mean, it seems to me that 1/3 of the
    database already expired (as I can see after de "keys *") is quite a lot.

    Thanks a lot guys!

    Dan.

    El miércoles, 8 de mayo de 2013 16:44:28 UTC+2, Josiah Carlson escribió:
    Replies inline.
    On Wed, May 8, 2013 at 1:48 AM, Dan C wrote:

    Hi Josiah,

    In this redis instance I have around 5M keys, almost all of them with a
    10h TTL.
    Let me see if I understood you properly. By default 10times/second (hz)
    redis will look for keys to expire. Does this mean that it will remove (and
    free memory) for ALL the already expired keys? Or it will just remove some
    of them?

    No. It picks random keys, checking to see if they expire.
    I did a test yesterday. In that test I just had an instance (2.6.13) and
    I SETed thousands of keys. Half of them with a TTL of 30 and the other half
    with a TTL of 60.
    The fisrt time I SET 12k keys everything seems fine. It works perfectly.
    Then I SET 18k keys with a TTL of 5000, and start SETing thousands of the
    30 and 60 TTL.
    After the last set with TTL 60 finished I started looking the memory and
    the dbsize. As I understand it, theoretically 60 seconds after the last SET
    all keys would have to be expired (except for the 18k with TTL 5000) and so
    dbsize would have to be 18k. It doesn't happen like that. After 60 seconds
    (and after 300 seconds) dbsize is not yet 18k, it is "discounting" yes, so
    keys are being expired, deleted and memory is being freed, but much slower
    than what I was expecting knowing the 60 TTL (which would be all of them
    after 60s). The stranger thing yet is that, if in any moment after the 60s I
    do a "keys *" on the instance, memory is freed and dbsize is 18k (in fact
    there are 18k keys).

    Here's the critical fact that you are missing: Redis does not keep a list
    of the keys that should expire. It keeps a counter. So when it goes to
    expire keys, it doesn't have a list to iterate over - it performs some
    random probes in the hash table. Keys that need to be expired are expired.
    Keys that don't get skipped.

    Think of it like this. You've got some keys. Some of them have already
    expired, some haven't. If you do a random probe into the space looking for
    keys to delete due to expiration, whether you find some will depend on the
    number of probes you do, and your likelihood of finding a key to expire. As
    an example, if you have 100k keys, 75k of which are past their expiration
    times, 25k of which are not... then when Redis randomly probes for keys to
    delete, it's generally going to find a key to delete 3/4 of the time. But
    that quickly dwindles (depending on how often you are writing more keys with
    TTLs to Redis), as those old keys are deleted pretty reliably. Once you get
    to about 20-25% of your keys being ready for deletion, then only about
    15-20% of the random checks will return keys that can be deleted.

    Then there's the other part that if Redis finds a lot of keys to expire
    during its random probing, it takes extra time to look for keys to clear
    out.
    So, from my point of view this means that still if the redis process
    expiring keys (hz) is really deleting them, it is not deleting ALL the keys
    which are already expired. And somehow when I use "keys *" it really forces
    ALL expired keys to be deleted and memory freed.

    It doesn't delete all of them with random probing. Statistically speaking,
    it is very improbable that it would actually find them all. When performing
    a KEYS call, Redis visits all of the keys, notices which ones have expired,
    and deletes them. The visiting all keys is part of why KEYS is so slow
    (compared to other commands).

    Sorry for this extended explanation, but I think my first post was too
    vague and I am not sure if this is this behavior supposed to be normal.

    Perfectly normal.

    - Josiah
    Thanks a lot!



    --
    Salvatore 'antirez' Sanfilippo
    open source developer - VMware
    http://invece.org

    Beauty is more important in computing than anywhere else in technology
    because software is so complicated. Beauty is the ultimate defence
    against complexity.
    — David Gelernter


  • Dan C at May 10, 2013 at 9:31 am
    Thank you Salvatore,

    I'll try to find time to "play" with those defines.

    On Thursday, May 9, 2013 at 15:26:52 UTC+2, Salvatore Sanfilippo wrote:
    Hello Dan,

    you may try to hack the following defines and recompile if you want
    Redis to expire more:

    #define REDIS_EXPIRELOOKUPS_PER_CRON 10   /* lookup 10 expires per loop */
    #define REDIS_EXPIRELOOKUPS_TIME_PERC 25  /* CPU max % for keys collection */

    By default it will never use more than 25% of CPU time for lazy
    expiring of keys, but you may want to raise this, and even the lookups
    performed per loop.

    I would try with 50 lookups and 50% max CPU to see how this changes
    the behavior.

    Modifying the hz parameter is unlikely to result in an improvement,
    as Redis will split that 25% max CPU time into smaller parts because
    the function to expire keys is called more often.

    Thanks,
    Salvatore
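    For context, the loop those two defines control works roughly like this.
    The following is a simplified editorial Python sketch of the behavior
    Salvatore describes for 2.6, not the actual C code; the constant names
    mirror the defines above.

```python
import random
import time

LOOKUPS_PER_CRON = 10   # REDIS_EXPIRELOOKUPS_PER_CRON
TIME_PERC = 25          # REDIS_EXPIRELOOKUPS_TIME_PERC
HZ = 10                 # "hz": how many cron cycles run per second

def active_expire_cycle(expires, now):
    """One simplified active-expiry cycle.

    `expires` maps key -> absolute expiry timestamp. Each pass samples a
    few keys; the loop repeats only while many samples turn out to be
    expired, and never past this cycle's slice of the cron interval.
    """
    time_limit = (TIME_PERC / 100.0) * (1.0 / HZ)   # seconds allowed
    start = time.monotonic()
    while expires:
        sample = random.sample(list(expires), min(LOOKUPS_PER_CRON, len(expires)))
        expired = 0
        for key in sample:
            if expires[key] <= now:
                del expires[key]
                expired += 1
        # Stop when the sample suggests few expired keys remain, or when
        # this cycle's CPU budget is used up.
        if expired <= LOOKUPS_PER_CRON // 4 or time.monotonic() - start > time_limit:
            break
```

    Raising the two constants widens each pass; raising hz alone, as
    Salvatore notes, mostly slices the same 25% budget into thinner pieces.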

    On Wed, May 8, 2013 at 5:28 PM, Dan C <dco...@gmail.com> wrote:
    Ok! Thanks Josiah and Jan-Erik,

    I get it now.
    So, the only way to "expire more" is to increase the probes. As I understand
    it, the only way to do so is with the "hz" parameter.
    In my case, though, I will have to change something, probably the TTL and/or
    the "hz" value. Of my 1.7 GB database, more than 500 MB is already expired. I
    think using 500 MB of RAM on already-expired keys is a big waste of memory!
    Anyway, is this proportion normal? I mean, it seems to me that having 1/3 of
    the database already expired (as I can see after the "keys *") is quite a lot.
    Thanks a lot guys!

    Dan.
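    A quick back-of-envelope check of that proportion, using the figures in
    this message (editorial assumption: memory use scales roughly with key
    count, which need not hold exactly per key):

```python
# Figures from the thread: ~1.7 GB dataset, ~500 MB of it already expired,
# across roughly 5 million keys.
total_mb, expired_mb = 1700, 500
dead_fraction = expired_mb / total_mb            # fraction already expired
dead_keys = round(5_000_000 * dead_fraction)     # out of ~5M keys
print(f"~{dead_fraction:.0%} expired; roughly {dead_keys:,} dead keys held in RAM")
```

    That comes out to roughly 29% of the dataset, which sits inside the
    20-35% daily estimate Josiah gave earlier in the thread.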


Discussion Overview
group: redis-db
categories: redis
posted: May 7, '13 at 11:04a
active: May 10, '13 at 9:31a
posts: 11
users: 4
website: redis.io
irc: #redis
