Hi everybody,

I am using the CMS (with ParNew) GC and I am seeing very long (> 6 second) young
GC pauses.
As you can see in the log below, the old-gen heap consists of one large
free block, the new generation is 256 MB, the collector uses 13 worker threads,
and it has to copy 27505761 words (~210 MB) directly from eden to the old gen.
I have seen that this problem occurs only after about one week of
uptime, even though we run a full (compacting) GC every night.
Since real time > user time I assume it might be a synchronization
problem. Can this be true?

Do you have any ideas how I can speed up these GCs?

Please let me know if you need more information.

Thank you,
Flo


##### java -version #####
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29-b11)
Java HotSpot(TM) 64-Bit Server VM (build 20.4-b02, mixed mode)

##### The startup parameters: #####
-Xms28G -Xmx28G
-XX:+UseConcMarkSweepGC \
-XX:CMSMaxAbortablePrecleanTime=10000 \
-XX:SurvivorRatio=8 \
-XX:TargetSurvivorRatio=90 \
-XX:MaxTenuringThreshold=31 \
-XX:CMSInitiatingOccupancyFraction=80 \
-XX:NewSize=256M \

-verbose:gc \
-XX:+PrintFlagsFinal \
-XX:PrintFLSStatistics=1 \
-XX:+PrintGCDetails \
-XX:+PrintGCDateStamps \
-XX:-TraceClassUnloading \
-XX:+PrintGCApplicationConcurrentTime \
-XX:+PrintGCApplicationStoppedTime \
-XX:+PrintTenuringDistribution \
-XX:+CMSClassUnloadingEnabled \
-Dsun.rmi.dgc.server.gcInterval=9223372036854775807 \
-Dsun.rmi.dgc.client.gcInterval=9223372036854775807 \

-Djava.awt.headless=true

##### From the out-file (as of +PrintFlagsFinal): #####
ParallelGCThreads = 13

##### The gc.log-excerpt: #####
Application time: 20,0617700 seconds
2011-12-22T12:02:03.289+0100: [GC Before GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 1183290963
Max Chunk Size: 1183290963
Number of Blocks: 1
Av. Block Size: 1183290963
Tree Height: 1
Before GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
Tree Height: 0
[ParNew
Desired survivor size 25480392 bytes, new threshold 1 (max 31)
- age 1: 28260160 bytes, 28260160 total
: 249216K->27648K(249216K), 6,1808130 secs]
20061765K->20056210K(29332480K)After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 1155785202
Max Chunk Size: 1155785202
Number of Blocks: 1
Av. Block Size: 1155785202
Tree Height: 1
After GC:
Statistics for BinaryTreeDictionary:
------------------------------------
Total Free Space: 0
Max Chunk Size: 0
Number of Blocks: 0
Tree Height: 0
, 6,1809440 secs] [Times: user=3,08 sys=0,51, real=6,18 secs]
Total time for which application threads were stopped: 6,1818730 seconds
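
(As an aside, a quick way to pull every such pause out of the log -- assuming GNU
grep/awk and the comma decimal separators shown above -- is something like:

grep -o 'user=[^]]*' gc.log | tr ',' '.' | \
  awk -F'[= ]+' '$6+0 > $2+0 { print "user=" $2 "s  real=" $6 "s" }'

which lists every collection whose wall-clock time exceeds even the user CPU
time, i.e. where the GC threads were mostly waiting rather than running.)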


  • Srinivas Ramakrishna at Jan 9, 2012 at 10:40 am
    Haven't looked at any logs, but setting MaxTenuringThreshold to 31 can be
    bad. I'd dial that down to 8, or leave it at the default of 15. (Your GC
    logs, which presumably include the tenuring distribution, should tell you
    a more suitable value to use. As Kirk noted, premature promotion can be
    bad, and so can survivor space overflow, which can lead to premature
    promotion and exacerbate fragmentation.)
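
    (Purely as an illustration of the suggested change -- not a tuned value --
    the flag would then read, e.g.:

    -XX:MaxTenuringThreshold=8 \

    or be dropped entirely to fall back to the default of 15, with the printed
    tenuring distribution guiding the final choice.)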

    -- ramki
    On Mon, Jan 9, 2012 at 3:08 AM, Florian Binder wrote:
    ...
  • Kirk Pepperdine at Jan 9, 2012 at 11:06 am
    Hi Ramki,

    AFAICT given the limited GC log, the calculated tenuring threshold is always 1, which means he is always flooding the survivor spaces (i.e. suffering from premature promotion). My guess is that the tuning strategy assumes the cost of long-lived objects dominates, and so the heap is configured to minimize (survivor) copy costs. But it would appear that this strategy has backfired. Look at the young gen size: if you do the maths you can see that there is no chance of there not being premature promotion. With the 80% initiating occupancy fraction... well, that can't lead to anything good either. With the VM so misconfigured it's difficult to estimate the true live set size, which could then be used to calculate more reasonable pool sizes.

    So, with all the promotion going on, I suspect that fragmentation is making it difficult to re-allocate the objects being tenured... hence the long pause time. Would you say that with these large data structures it might be difficult for CMS to parallelize the scan for roots? The abortable preclean aborts on time, which means it's not able to clear out much, and given the apparent object life-cycle, is it worth running this phase at all? In fact, would you not guess that the parallel collector would do better in this scenario?

    -- Kirk

    ps. I'm always happy to beat you to the punch.. 'cos it's very difficult to do. ;-)
    On 2012-01-09, at 7:40 PM, Srinivas Ramakrishna wrote:

    ...
  • Florian Binder at Jan 9, 2012 at 11:18 am
    Hi Ramki,

    Yes, I agree with you. 31 is too large and I have removed the
    parameter (I am using the default now). Nevertheless, this is not the
    problem, as the maximum used age was always 1.

    Since most (more than 90%) of the newly allocated objects in our application
    live for a long time (> 1 h), we will mostly have premature promotion.
    Is there a way to optimize this?

    I have seen that most of the time, when a young GC takes very long (> 6 s),
    there is only one large free block in the old gen. If there has been a
    CMS old-gen collection and there is more than one block in the old
    generation, it is mostly (but not always) much faster (less than 200 ms).

    Is it possible that premature promotion cannot be done in parallel if
    there is only one large block in the old gen?

    In the past we had a problem with fragmentation on this server, but it has
    been gone since we increased its memory and started triggering a full
    (compacting) GC every night, as Tony advised us. With the initiating
    occupancy fraction set to 80% we get only a few (~10) old-generation
    collections (which are very fast) and heap fragmentation stays low.

    Flo


    On 09.01.2012 19:40, Srinivas Ramakrishna wrote:
    ...
  • Jon Masamitsu at Jan 9, 2012 at 11:24 am
    Florian,

    Have you even turned on

    PrintReferenceGC

    to see if you are spending a significant amount of time
    doing Reference processing?

    If you do see significant Reference processing times, you can
    try turning on ParallelRefProcEnabled.
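
    For example, added to the startup parameters above (illustration only;
    these flags are available in this HotSpot version):

    -XX:+PrintReferenceGC \
    -XX:+ParallelRefProcEnabled \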

    Jon
    On 01/09/12 03:08, Florian Binder wrote:
    ...
  • Chi Ho Kwok at Jan 9, 2012 at 11:33 am
    Just making sure the obvious case is covered: is it just me, or is 6s real >
    3.5s user+sys with 13 threads just plain weird? That means there was about
    half a thread actually running on average during that collection.
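
    (For the numbers above that is (3.08 + 0.51) / 6.18 ~= 0.58 of one CPU busy,
    averaged over the pause, even though ParallelGCThreads = 13.)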

    Do a sar -B (requires the sysstat package) and see if there were any major
    page faults (or check indirectly via cacti and other monitoring tools --
    memory usage, load average etc. -- or even via cat /proc/vmstat and
    pgmajfault). I've seen those cause these kinds of times during GC.
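
    For example (assuming sysstat is installed), something along these lines:

    sar -B 60 5                     # paging statistics, five 60-second samples
    grep pgmajfault /proc/vmstat    # cumulative major-fault counter since boot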


    Chi Ho Kwok
    On Mon, Jan 9, 2012 at 12:08 PM, Florian Binder wrote:

    ...
  • Florian Binder at Jan 9, 2012 at 11:47 am
    Yes!
    You are right!
    I have a lot of page faults when gc is taking so much time.

    For example (sar -B):
    00:00:01     pgpgin/s    pgpgout/s      fault/s    majflt/s
    00:50:01         0,01        45,18       162,29        0,00
    01:00:01         0,02        46,58       170,45        0,00
    01:10:02     25313,71     27030,39     27464,37        0,02
    01:20:02     23456,85     25371,28     13621,92        0,01
    01:30:01     22778,76     22918,60     10136,71        0,03
    01:40:11     19020,44     22723,65      8617,42        0,15
    01:50:01         5,52        44,22       147,26        0,05

    What does this mean and how can I avoid it?


    Flo



    On 09.01.2012 20:33, Chi Ho Kwok wrote:
    ...
  • Chi Ho Kwok at Jan 9, 2012 at 9:21 pm
    Hi Florian,

    Uh, you might want to try sar -r as well, which reports memory usage (see
    man sar for the other reporting options; -f /var/log/sysstat/saXX, where XX
    is the day, is useful for older data as well). Page in / out means reading
    or writing to the swap file; the usual cause here is one or more huge
    background tasks / cron jobs taking up too much memory and forcing other
    things to swap out. You can try reducing the size of the heap and see if
    it helps if you're just a little bit short, but otherwise I don't think
    you can solve this with just VM options.
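
    For example (the day number below is only an illustration):

    sar -r 60 5                        # current memory and swap usage, five 60s samples
    sar -B -f /var/log/sysstat/sa09    # paging statistics recorded on the 9th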


    Here's the relevant section from the manual:

    -B   Report paging statistics. Some of the metrics below are available
         only with post 2.5 kernels. The following values are displayed:

         pgpgin/s
             Total number of kilobytes the system paged in from disk per
             second. Note: With old kernels (2.2.x) this value is a number
             of blocks per second (and not kilobytes).

         pgpgout/s
             Total number of kilobytes the system paged out to disk per
             second. Note: With old kernels (2.2.x) this value is a number
             of blocks per second (and not kilobytes).

         fault/s
             Number of page faults (major + minor) made by the system per
             second. This is not a count of page faults that generate I/O,
             because some page faults can be resolved without I/O.

         majflt/s
             Number of major faults the system has made per second, those
             which have required loading a memory page from disk.

    I'm not sure what kernel you're on, but pgpgin / out being high is a bad
    thing. Sar seems to report that all faults are minor, but that conflicts
    with the first two columns.


    Chi Ho Kwok
    On Mon, Jan 9, 2012 at 8:47 PM, Florian Binder wrote:

    ...
  • Vitaly Davidovich at Jan 9, 2012 at 9:43 pm
    Apparently pgpgin/pgpgout may not be that accurate to determine swap file
    usage:
    http://help.lockergnome.com/linux/pgpgin-pgpgout-measure--ftopict506279.html

    May need to use vmstat and look at si/so instead.
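
    For example:

    vmstat 5    # watch the si/so columns (memory swapped in/out per second)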
    On Jan 10, 2012 12:24 AM, "Chi Ho Kwok" wrote:

    ...
  • Srinivas Ramakrishna at Jan 11, 2012 at 1:00 am

    On Mon, Jan 9, 2012 at 3:08 AM, Florian Binder wrote:

    ...
    I have seen that this problem occurs only after about one week of
    uptime, even though we run a full (compacting) GC every night.
    Since real time > user time I assume it might be a synchronization
    problem. Can this be true?
    Together with your and Chi-Ho's conclusion that this is possibly related
    to paging, a question to ponder is why this happens only after a week.
    Since your process's heap size is presumably fixed and you have seen
    multiple full GCs (from which I assume that your heap's pages have all
    been touched), have you checked whether the size of either this process
    (i.e. its native size) or of another process on the machine has grown
    during the week so that you start swapping?

    I also find it interesting that you state that whenever you see this
    problem there's always a single block in the old gen, and that the problem
    seems to go away when there is more than one block in the old gen. That
    would seem to throw out the paging theory, and point the finger of
    suspicion at some kind of bottleneck in the allocation out of a large
    block. You also state that you do a compacting collection every night,
    but the bad behaviour sets in only after a week.

    So let me ask: is the slow scavenge always the first scavenge after a
    full GC, or does the condition persist for a long time, independent of
    whether a full GC has happened recently?

    Try turning on -XX:+PrintOldPLAB to see if it sheds any light...

    -- ramki
  • Florian Binder at Jan 11, 2012 at 1:45 am
    I do not know why it has worked for a week.
    Maybe it is because this was the xmas week ;-)

    During the night there are a lot of disk operations (2 TB of data is
    written). Therefore the operating system caches a lot of files and tries
    to free memory for this, so unused pages are moved to swap space.
    I assume heap fragmentation prevents the swapping, since more pages are
    touched while the application is running. After a compacting GC there
    is one large (free) block which is not touched until a young GC copies
    objects out of the eden space. This leads the operating system to move
    the pages of this one free block to swap, and at every young GC they
    have to be read back from swap.
    After a CMS collection the following young GCs are much faster because
    the gaps in the heap are not swapped out.

    Yesterday we turned off swap on this machine, and now all young
    GCs take less than 200 ms (instead of 6 s) :-)
    Thanks again to Chi Ho Kwok for giving the key hint :-)
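
    (For anyone else hitting this: disabling swap on Linux is typically just

    swapoff -a                  # disable all swap areas (run as root)

    with lowering vm.swappiness, e.g. sysctl vm.swappiness=10, as a softer
    alternative if swap cannot be disabled entirely.)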

    Flo


    On 11.01.2012 10:00, Srinivas Ramakrishna wrote:

    ...
