Hi all,

I wonder, is it enough to recover a Hadoop cluster by just copying the
metadata from the SecondaryNameNode to the new master node? Or do I
need to do anything else?
Thanks for any help.



--
Best Regards

Jeff Zhang


  • Ted Yu at May 13, 2010 at 1:20 pm
    I suggest you take a look at AvatarNode, which runs a standby avatar that
    can be activated if the primary avatar fails.

  • Allen Wittenauer at May 13, 2010 at 5:32 pm
    This code hasn't been committed to any branch yet and doesn't appear to have undergone any review outside of Facebook. :(
  • Edward Capriolo at May 13, 2010 at 6:15 pm

    Wow, that AvatarNode was really flying below the radar! First I'd heard of it.
    Awesome!
  • Jeff Zhang at May 14, 2010 at 1:40 am
    Allen,

    Do you mean HDFS-976 <https://issues.apache.org/jira/browse/HDFS-976> and
    HDFS-966 <https://issues.apache.org/jira/browse/HDFS-966> ?




    --
    Best Regards

    Jeff Zhang
  • Eric Sammer at May 13, 2010 at 3:06 pm
    You can use the copy of fsimage and the editlog from the SNN to
    recover. Remember that it will be (roughly) an hour old. The process
    for recovery is to copy the fsimage and editlog to a new machine,
    place them in the dfs.name.dir/current directory, and start all the
    daemons. It's worth practicing this type of procedure before trying it
    on a production cluster. More importantly, it's worth practicing this
    *before* you need it on a production cluster.
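    The steps above can be sketched as follows. The paths here are hypothetical
    placeholders, not part of the original procedure; substitute your real
    checkpoint location and dfs.name.dir:

    ```shell
    # Hypothetical layout: the SNN checkpoint was copied to /tmp/snn-backup,
    # and the new master's dfs.name.dir is /tmp/dfs-name. Use real paths.
    SNN_BACKUP=/tmp/snn-backup
    NAME_DIR=/tmp/dfs-name

    # For illustration only, create stand-in checkpoint files; on a real
    # SecondaryNameNode these would come from its checkpoint directory.
    mkdir -p "$SNN_BACKUP"
    touch "$SNN_BACKUP/fsimage" "$SNN_BACKUP/edits"

    # Place the fsimage and editlog where the NameNode expects them ...
    mkdir -p "$NAME_DIR/current"
    cp "$SNN_BACKUP/fsimage" "$SNN_BACKUP/edits" "$NAME_DIR/current/"

    # ... then start the daemons; the NameNode loads the fsimage and
    # replays the edit log on startup.
    # bin/start-dfs.sh
    ```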


    --
    Eric Sammer
    phone: +1-917-287-2675
    twitter: esammer
    data: www.cloudera.com
  • Allen Wittenauer at May 13, 2010 at 5:31 pm
    This is a good time to remind folks that the namenode can write to multiple directories, including one on a network filesystem or SAN, so that you always have a fresh copy. :)
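    As a sketch, that would look something like this in hdfs-site.xml; the mount
    points below are hypothetical, with the second entry standing in for an NFS
    or SAN mount on a remote filer:

    ```xml
    <!-- dfs.name.dir takes a comma-separated list; the NameNode writes its
         metadata to every directory listed, so a copy on /mnt/nfs survives
         loss of the NameNode host. Paths are placeholders. -->
    <property>
      <name>dfs.name.dir</name>
      <value>/data/dfs/name,/mnt/nfs/dfs/name</value>
    </property>
    ```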

Discussion Overview
group: common-user @
categories: hadoop
posted: May 13, '10 at 9:01a
active: May 14, '10 at 1:40a
posts: 7
users: 5
website: hadoop.apache.org...
irc: #hadoop
