I have a 3-node ActiveMQ 5.10 cluster with replicated LevelDB as the persistence store.


<persistenceAdapter>
  <replicatedLevelDB
    directory="activemq-data"
    replicas="3"
    bind="tcp://0.0.0.0:0"
    zkAddress="queue1:2181,queue2:2181,queue3:2181"
    zkPath="/activemq/leveldb-stores"
    hostname="queue3"
    sync="quorum_disk"
    />
</persistenceAdapter>


I'm stress-testing the queue with around 5000 persistent msg/s:
1 producer
2 consumers

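For reference, a load generator of the kind described might look like the minimal
sketch below. This is not the original poster's client; the failover URL, port
61616, and queue name are assumptions, and the pacing is only approximate.

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class StressTest {
    // Assumed client URL: clients fail over to whichever node is the elected master.
    static final String URL =
        "failover:(tcp://queue1:61616,tcp://queue2:61616,tcp://queue3:61616)";
    static final String QUEUE = "stress.test"; // assumed queue name

    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory(URL);
        Connection conn = cf.createConnection();
        conn.start();

        // Two consumers, as in the test described above.
        for (int i = 0; i < 2; i++) {
            Session cs = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer = cs.createConsumer(cs.createQueue(QUEUE));
            consumer.setMessageListener(m -> { /* count or discard */ });
        }

        // One producer sending persistent messages, paced to roughly 5000 msg/s.
        Session ps = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = ps.createProducer(ps.createQueue(QUEUE));
        producer.setDeliveryMode(DeliveryMode.PERSISTENT);
        long windowStart = System.currentTimeMillis();
        long sent = 0;
        while (true) {
            producer.send(ps.createTextMessage("payload"));
            if (++sent % 5000 == 0) {
                long elapsed = System.currentTimeMillis() - windowStart;
                if (elapsed < 1000) Thread.sleep(1000 - elapsed);
                windowStart = System.currentTimeMillis();
            }
        }
    }
}
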
I get some warning messages in the log:
On the master:
2014-12-16 16:25:28,375 | INFO | Slave has disconnected:
db6c9a23-7025-4384-b02f-dcda763113c3 |
org.apache.activemq.leveldb.replicated.MasterLevelDBStore |
hawtdispatch-DEFAULT-3
2014-12-16 16:25:28,761 | WARN | Unexpected session error:
java.io.FileNotFoundException:
/queue/activemq/conf/activemq-data/00000027b8d097b0.log (No such file or
directory) | org.apache.activemq.leveldb.replicated.MasterLevelDBStore |
hawtdispatch-DEFAULT-1
2014-12-16 16:25:28,761 | INFO | Slave has disconnected:
6a84579c-77fe-41eb-a728-1be472c12894 |
org.apache.activemq.leveldb.replicated.MasterLevelDBStore |
hawtdispatch-DEFAULT-2
2014-12-16 16:25:29,603 | INFO | Slave has connected:
db6c9a23-7025-4384-b02f-dcda763113c3 |
org.apache.activemq.leveldb.replicated.MasterLevelDBStore |
hawtdispatch-DEFAULT-2
2014-12-16 16:25:30,034 | INFO | Slave has connected:
6a84579c-77fe-41eb-a728-1be472c12894 |
org.apache.activemq.leveldb.replicated.MasterLevelDBStore |
hawtdispatch-DEFAULT-1
2014-12-16 16:25:31,360 | INFO | Slave has now caught up:
6a84579c-77fe-41eb-a728-1be472c12894 |
org.apache.activemq.leveldb.replicated.MasterLevelDBStore |
hawtdispatch-DEFAULT-4
2014-12-16 16:25:32,546 | INFO | Slave has now caught up:
db6c9a23-7025-4384-b02f-dcda763113c3 |
org.apache.activemq.leveldb.replicated.MasterLevelDBStore |
hawtdispatch-DEFAULT-2


On the slaves:

2014-12-16 16:30:00,287 | WARN | No reader available for position:
27fd52c562, log_infos:
{171757839210=LogInfo(/queue/activemq/conf/activemq-data/00000027fd90a36a.log,171757839210,0)}
org.apache.activemq.leveldb.RecordLog | Thread-16882
(the same WARN entry is repeated many times in quick succession)


  • Kevin Burton at Dec 18, 2014 at 8:50 pm
    Hey. I have a similar configuration and I’m getting a ton of "No reader
    available for position" messages, as well as significant data loss on AMQ
    restart.

    It literally loses about 95% of the messages I enqueued.
    On Wed, Dec 17, 2014 at 9:06 AM, Christian Grassi wrote:

    --

    Founder/CEO Spinn3r.com
    Location: *San Francisco, CA*
    blog: http://burtonator.wordpress.com
    … or check out my Google+ profile
    <https://plus.google.com/102718274791889610666/posts>
    <http://spinn3r.com>
  • Christian Grassi at Dec 23, 2014 at 12:47 pm
    Hi Kevin,
    Do you have any update on this issue?
    It looks to me that this config is not really production-ready.
    What do you think?

    Is someone from the development team having a look?

    Chris

    On Thu, Dec 18, 2014 at 21:50, Kevin Burton <burton@spinn3r.com> wrote:
  • Tim Bain at Dec 23, 2014 at 2:54 pm
    Is there a bug report in Jira for it, and has someone been able to identify
    a configuration and set of steps to reliably reproduce it? If not, the
    odds are good that no one is actively investigating it, so doing those
    things would be the first step towards getting the problem fixed.
    On Dec 23, 2014 5:47 AM, "Christian Grassi" wrote:

  • Christian Grassi at Dec 23, 2014 at 3:47 pm
    The JIRA is the following:
    https://issues.apache.org/jira/browse/AMQ-5300

    It was opened in July.
    I can always reproduce it with the configuration in my first mail.

    Christian
    On Tue, Dec 23, 2014 at 3:54 PM, Tim Bain wrote:

  • Tim Bain at Dec 23, 2014 at 4:52 pm
    I was going to tell you that since you have a configuration that reliably
    reproduces the problem, and it's not described in that JIRA (which
    describes symptoms but no means of reproducing the problem), you should
    update it to provide the configuration so that someone can investigate.
    But looking more closely at the description of AMQ-5300, it doesn't sound
    like what you're seeing; that bug is very specifically about deleting the
    index files for LevelDB and then restarting the broker, whereas you haven't
    described doing either of those things. (Or maybe you did and just left it
    out of your description?) But unless you really did delete the index, your
    issue is a separate bug (though the two might turn out to have the same
    root cause once someone investigates) and should be submitted as a new JIRA.
    On Tue, Dec 23, 2014 at 8:45 AM, Christian Grassi wrote:

  • Kevin Burton at Dec 23, 2014 at 11:14 pm
    that bug is very specifically about deleting the
    index files for LevelDB and then restarting the broker, whereas you haven't
    described doing either of those things.

    Yes, I agree as well. That’s why I didn’t follow up on that thread. I’m
    concerned about the error message, though.
    On Tue, Dec 23, 2014 at 8:50 AM, Tim Bain wrote:



    --

    Founder/CEO Spinn3r.com
    Location: *San Francisco, CA*
    blog: http://burtonator.wordpress.com
    … or check out my Google+ profile
    <https://plus.google.com/102718274791889610666/posts>
    <http://spinn3r.com>
  • Tim Bain at Dec 24, 2014 at 12:29 am
    Based on what Christian has said, it sounds like he's got a configuration
    that reliably reproduces his problem whereas you're not at that point yet,
    so let's have him create a JIRA bug for that so we don't lose the details
    about how to reproduce it. As you investigate, hopefully you'll be able to
    update the JIRA to give more details on what's actually going on, ideally
    with a proposed patch if your investigation leads you to one. (Thanks for
    investigating to that depth, BTW.)
    On Tue, Dec 23, 2014 at 4:13 PM, Kevin Burton wrote:

  • Kevin Burton at Dec 23, 2014 at 11:12 pm

    On Tue, Dec 23, 2014 at 6:54 AM, Tim Bain wrote:

    Is there a bug report in Jira for it, and has someone been able to identify
    a configuration and set of steps to reliably reproduce it?


    No, I don’t think there is, and I totally agree on a reduced test case to
    easily reproduce this problem. I think the main challenge is that I need
    to test ActiveMQ the way we run it in production rather than with the
    embedded broker. The embedded broker doesn’t properly shut itself down
    and leaves some background threads running.

    Anyway, I’m working on it, but it’s in parallel with our production load.
    We can’t have our ActiveMQ boxes crashing like this, so resolving it is
    definitely a priority for us.

    I’ll post more information as this evolves.

    --

    Founder/CEO Spinn3r.com
    Location: *San Francisco, CA*
    blog: http://burtonator.wordpress.com
    … or check out my Google+ profile
    <https://plus.google.com/102718274791889610666/posts>
    <http://spinn3r.com>
  • Kevin Burton at Dec 23, 2014 at 11:10 pm
    I’m still working on it and trying to figure it out; it might take me some
    time to reproduce it, as there seem to be a few issues that might be
    compounding the problem.

    Bumping up memory definitely seemed to help resolve the issue.

    I think what would be ideal is for me to write a producer/consumer and then
    continually restart activemq+leveldb while it’s executing to see what
    happens.

    My theory is that there’s data loss. But I’m not sure if it’s also due to
    the GC issue I was seeing before.
    On Tue, Dec 23, 2014 at 4:46 AM, Christian Grassi wrote:



    --

    Founder/CEO Spinn3r.com
    Location: *San Francisco, CA*
    blog: http://burtonator.wordpress.com
    … or check out my Google+ profile
    <https://plus.google.com/102718274791889610666/posts>
    <http://spinn3r.com>
