Hello,

I'm running CDH 4.0.1 with NameNode (and HBase Master) high availability, and
the error below shows up in the standby NameNode's logs. The standby NameNode
is not running and will not start. Permissions and everything else look fine.

One more thing: there are *no files in the /name/current directory*.

Should I copy the contents of the /current* directory from the active NameNode
to the standby NameNode's /current* directory to get rid of this error?

Please help me out! Thank you.


2012-10-05 17:19:44,386 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.FileNotFoundException: No valid image files found
        at org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.getLatestImage(FSImageTransactionalStorageInspector.java:128)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:581)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:246)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:498)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:390)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:354)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:389)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:423)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:571)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1134)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1193)
2012-10-05 17:19:44,389 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************


  • Todd Lipcon at Oct 5, 2012 at 6:03 pm
    Hi Karunakar,

    This implies that you haven't set up the standby node at all. Did you
    follow the directions in the high availability guide, including the step to
    bootstrap the standby?

    -Todd
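
    For reference, the standby is normally bootstrapped with the
    "hdfs namenode -bootstrapStandby" command. A minimal sketch, assuming the
    stock CDH4 package service name and that the commands are run on the
    standby host:

        # Make sure the standby NameNode is stopped (service name assumed from
        # the CDH4 packages; adjust if yours differs).
        service hadoop-hdfs-namenode stop

        # Copy the current fsimage and storage metadata from the active
        # NameNode into every directory listed in dfs.namenode.name.dir here.
        sudo -u hdfs hdfs namenode -bootstrapStandby

        # Start the standby NameNode again.
        service hadoop-hdfs-namenode start
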
  • Karunakar at Oct 5, 2012 at 6:17 pm
    Hi Todd,

    Thanks for the reply.
    I already did the bootstrap for the standby NameNode, but if I try to run
    the bootstrap again, it asks:

    re-format the filesystem in ///name (Y or N)?
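
    That prompt comes from the bootstrap itself: answering Y re-formats only the
    standby's local name directories before the image is downloaded, and it does
    not touch the active NameNode. Some builds also accept a flag to skip the
    prompt; a sketch, assuming -force is supported on this version (check
    "hdfs namenode -help" first):

        sudo -u hdfs hdfs namenode -bootstrapStandby -force
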
  • Todd Lipcon at Oct 5, 2012 at 6:25 pm
    Sounds like your configuration for dfs.name.dir isn't pointing to a valid
    directory (///name). Please double check that you've followed the
    directions carefully.
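
    A quick sanity check (a sketch; the file location assumes the stock CDH4
    layout under /etc/hadoop/conf) is to look at what the standby actually has
    configured for its metadata directories:

        # dfs.name.dir is the deprecated key; CDH4 reads it as dfs.namenode.name.dir.
        grep -A1 -E 'dfs\.(namenode\.)?name\.dir' /etc/hadoop/conf/hdfs-site.xml

        # If available in your build, this prints the value the daemon resolves:
        sudo -u hdfs hdfs getconf -confKey dfs.namenode.name.dir

    If the re-format prompt really shows "///name", that is worth comparing
    against the configured value.
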
  • Karunakar at Oct 5, 2012 at 6:47 pm
    The directory structure on both the active and standby NameNodes is the
    same; metadata is written to two directories:

    /var/run/hadoop-0.20/hdfs/name/current
    /hdfs/metadata/name

    The current directory (in both configured metadata directories) on the
    active NameNode looks like this:

    -rw-r--r-- 1 hdfs hdfs 23517930 Oct 5 17:13 edits_inprogress_0000000000000000001
    -rw-r--r-- 1 hdfs root 115 Sep 7 23:11 fsimage_0000000000000000000
    -rw-r--r-- 1 hdfs root 62 Sep 7 23:11 fsimage_0000000000000000000.md5
    -rw-r--r-- 1 hdfs root 2 Sep 7 23:11 seen_txid
    -rw-r--r-- 1 hdfs root 205 Sep 7 23:11 VERSION

    But the current directory (in both configured metadata directories) on the
    standby NameNode looks like this:

    -rw-r--r-- 1 hdfs hdfs 2 Sep 10 04:05 seen_txid
    -rw-r--r-- 1 hdfs hdfs 205 Sep 10 04:05 VERSION

    Also, the namespace ID is the same on both NameNodes. I can't figure this
    out.
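
    A sketch of how to compare the two sides, using the paths from this thread:

        # On each NameNode host, dump the storage metadata for both configured
        # metadata directories; namespaceID (and clusterID) should match across
        # active and standby.
        cat /var/run/hadoop-0.20/hdfs/name/current/VERSION
        cat /hdfs/metadata/name/current/VERSION

        # The standby listing above shows only seen_txid and VERSION, i.e. the
        # directory was formatted but never received an fsimage_* checkpoint.
        ls -l /var/run/hadoop-0.20/hdfs/name/current /hdfs/metadata/name/current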



  • Todd Lipcon at Oct 5, 2012 at 11:57 pm
    Can you attach the configuration for your standby node, and also paste the
    output of running the bootstrapStandby command on that node?
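
    A sketch of capturing that output, assuming the command is run on the
    standby host as the hdfs user:

        sudo -u hdfs hdfs namenode -bootstrapStandby 2>&1 | tee /tmp/bootstrapStandby.log
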
  • Karunakar at Oct 6, 2012 at 1:03 am
    Hey Todd,

    Thanks for the reply. *I figured it out [solved]!*

    1. There were no fsimage files in the metadata directories on the standby
    NameNode.
    2. I stopped the NameNode and *did the bootstrap again*. (Earlier, at the
    start of the cluster build, the first time I ran the bootstrap all of the
    information was identical in the two NameNode metadata directories, but
    somehow the standby NameNode metadata later disappeared; I can't figure
    out why.)
    3. When I ran the bootstrap, the fsimage files were generated in only one
    of the metadata directories.
    4. I copied them to the second metadata directory and VOILA!

    It's running normally again, as usual! (A rough command-level sketch of
    these steps follows below.)
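
    A rough command-level sketch of the recovery described above (service name,
    paths, and copy direction are assumptions drawn from this thread; copy from
    whichever metadata directory actually received the image):

        # 1. Stop the standby NameNode.
        service hadoop-hdfs-namenode stop

        # 2. Re-run the bootstrap so the standby downloads a current fsimage
        #    from the active NameNode.
        sudo -u hdfs hdfs namenode -bootstrapStandby

        # 3. If the fsimage_* files appear in only one of the configured
        #    metadata directories, mirror them into the other one, keeping
        #    ownership and permissions intact.
        sudo -u hdfs cp -p /var/run/hadoop-0.20/hdfs/name/current/* /hdfs/metadata/name/current/

        # 4. Start the standby NameNode again.
        service hadoop-hdfs-namenode start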



Discussion Overview
group: cdh-user
categories: hadoop
posted: Oct 5, 2012 at 6:03 PM
active: Oct 6, 2012 at 1:03 AM
posts: 7
users: 2 (Karunakar: 4 posts, Todd Lipcon: 3 posts)
website: cloudera.com
irc: #hadoop
