Grokbase Groups HBase user April 2011
Killing and restarting of master caused AlreadyBeingCreatedException from HLogs
If we kill the HMaster and then restart it, the following exceptions are
logged:



Splitting hlog 2 of 2: hdfs://10.18.52.108:9000/hbase/.logs/linux108,60020,1302346754067/linux108%3A60020.1302350355407, length=1459

2011-04-09 18:02:56,017 INFO org.apache.hadoop.hbase.util.FSUtils: Recovering file hdfs://10.18.52.108:9000/hbase/.logs/linux108,60020,1302346754067/linux108%3A60020.1302350355407

2011-04-09 18:02:56,037 ERROR com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker: Exception occured while connecting to server : /10.18.52.108:9000

org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /hbase/.logs/linux108,60020,1302346754067/linux108%3A60020.1302350355407 for DFSClient_hb_m_linux108:60000_1302352358592 on client 10.18.52.108, because this file is already being created by NN_Recovery on 10.18.52.108
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1453)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1291)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1473)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:628)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:541)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1105)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1101)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1099)

    at org.apache.hadoop.ipc.Client.call(Client.java:942)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:239)
    at $Proxy5.append(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker.invokeMethod(RPCRetryAndSwitchInvoker.java:157)
    at com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker.invokeMethod(RPCRetryAndSwitchInvoker.java:145)
    at com.huawei.isap.ump.ha.client.RPCRetryAndSwitchInvoker.invoke(RPCRetryAndSwitchInvoker.java:54)
    at $Proxy5.append(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:741)
    at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:366)
    at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:665)
    at org.apache.hadoop.hbase.util.FSUtils.recoverFileLease(FSUtils.java:634)
    at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:261)
    at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLog(HLogSplitter.java:188)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLog(MasterFileSystem.java:196)
    at org.apache.hadoop.hbase.master.MasterFileSystem.splitLogAfterStartup(MasterFileSystem.java:180)
    at org.apache.hadoop.hbase.master.HMaster.finishInitialization(HMaster.java:379)
    at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:278)



However, the HMaster does start correctly. Note that I have only 2 datanodes
here, and the replication factor is 2.
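For context on why the exception above is usually transient: when the master splits a dead server's WAL, FSUtils.recoverFileLease forces HDFS lease recovery by attempting an append on the file, and until the NameNode's own recovery (the NN_Recovery lease holder in the log) lets go of the lease, each attempt fails with AlreadyBeingCreatedException and is retried. The following is a minimal, self-contained sketch of that retry pattern only; the exception class and the append callback are stand-ins for illustration, not the real Hadoop/HBase APIs:

```java
// Hypothetical sketch (not the actual HBase code): illustrate the
// append-and-retry loop used to force HDFS lease recovery on a WAL.
public class LeaseRecoveryRetrySketch {

    // Stand-in for org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException.
    static class AlreadyBeingCreatedException extends Exception {}

    // Stand-in for "call append() on the file" against the NameNode.
    interface AppendAttempt {
        void append() throws AlreadyBeingCreatedException;
    }

    // Retry append() until the lease is released or attempts run out;
    // returns the attempt number that finally succeeded.
    static int recoverLease(AppendAttempt fs, int maxAttempts, long waitMs)
            throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                fs.append();          // succeeds once NN_Recovery releases the lease
                return attempt;
            } catch (AlreadyBeingCreatedException e) {
                Thread.sleep(waitMs); // lease still held; back off and retry
            }
        }
        throw new IllegalStateException(
                "lease not recovered after " + maxAttempts + " attempts");
    }

    public static void main(String[] args) throws Exception {
        // Simulate a NameNode that releases the lease on the third attempt.
        final int[] calls = {0};
        int attempts = recoverLease(() -> {
            if (++calls[0] < 3) throw new AlreadyBeingCreatedException();
        }, 10, 1);
        System.out.println("lease recovered after " + attempts + " append attempts");
    }
}
```

The point of the pattern is that the ERROR lines in the master log can appear several times and still end in a successful split, which matches the master starting correctly afterwards.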



Regards

Ram



***************************************************************************************
This e-mail and attachments contain confidential information from HUAWEI,
which is intended only for the person or entity whose address is listed
above. Any use of the information contained herein in any way (including,
but not limited to, total or partial disclosure, reproduction, or
dissemination) by persons other than the intended recipient(s) is
prohibited. If you receive this e-mail in error, please notify the sender by
phone or email immediately and delete it!


  • Ted Yu at Apr 9, 2011 at 2:12 pm
    Have you read the email thread entitled 'file is already being created by
    NN_Recovery' on the user mailing list?

  • Ramkrishna S Vasudevan at Apr 9, 2011 at 2:19 pm
    Hi

    Yes, I had gone through that, but those scenarios were cases where a
    region server or a datanode went down.

    Here, only the master was restarted.

    Regards
    Ram


    -----Original Message-----
    From: Ted Yu
    Sent: Saturday, April 09, 2011 7:42 PM
    To: user@hbase.apache.org; ramakrishnas@huawei.com
    Subject: Re: Killing and restarting of master caused
    AlreadyBeingCreatedException from HLogs

    Have you read the email thread entitled 'file is already being created by
    NN_Recovery' on user mailing list ?
  • Jean-Daniel Cryans at Apr 9, 2011 at 4:57 pm
    Well maybe that's what you did, but the log does say that it's splitting logs.

    J-D


  • Stack at Apr 9, 2011 at 5:00 pm
    Yeah, what J-D said. Maybe when you killed it, it was in the middle of log
    splitting, perhaps stuck at the same point? You kill the server, it comes
    back up, looks around for logs that are not yet split, and then runs into
    the same issue. As per Ted, please refer to the previous mail thread.

    St.Ack
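    The restart loop described above can be made concrete: on every master
    start, the .logs directory is re-scanned for leftover region-server
    directories, so a WAL whose split was interrupted is simply picked up
    again. The sketch below only illustrates that scan conceptually; it uses
    the local filesystem as a stand-in for HDFS and is not the actual
    MasterFileSystem.splitLogAfterStartup code:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a restarted master finds every log file under every
// server directory left behind in the .logs dir and tries to split each one.
public class SplitLogScanSketch {

    static List<Path> findLogsToSplit(Path logsDir) throws IOException {
        List<Path> toSplit = new ArrayList<>();
        if (!Files.isDirectory(logsDir)) return toSplit;   // nothing left behind
        try (DirectoryStream<Path> servers = Files.newDirectoryStream(logsDir)) {
            for (Path serverDir : servers) {               // e.g. linux108,60020,1302346754067
                if (!Files.isDirectory(serverDir)) continue;
                try (DirectoryStream<Path> logs = Files.newDirectoryStream(serverDir)) {
                    // Each WAL found here gets lease-recovered and then split.
                    for (Path hlog : logs) toSplit.add(hlog);
                }
            }
        }
        return toSplit;
    }

    public static void main(String[] args) throws IOException {
        Path logsDir = Files.createTempDirectory("hbase-logs");
        Path serverDir = Files.createDirectory(
                logsDir.resolve("linux108,60020,1302346754067"));
        Files.createFile(serverDir.resolve("linux108%3A60020.1302350355407"));
        System.out.println(findLogsToSplit(logsDir).size() + " log(s) left to split");
    }
}
```

    Because this scan runs on every start, a master killed mid-split sees the
    same unsplit WAL again; if the NameNode's earlier lease recovery is still
    in progress at that moment, the append-based lease recovery hits
    AlreadyBeingCreatedException once more, which fits the symptom in this
    thread.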



Discussion Overview
group: user
categories: hbase, hadoop
posted: Apr 9, 2011 at 2:06 PM
active: Apr 9, 2011 at 5:00 PM
posts: 5
users: 4
website: hbase.apache.org
