FAQ
+scm-users (Cloudera Manager mailing list)

Sebastian,

The first thing I'd check is whether you've deployed the client
configurations to the machine where you are looking up the contents of
HDFS. Deploying client configurations is enabled by default as part of the
Enable HA workflow. It is also available as an HDFS "Action".
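
A quick way to check whether the deployed client configuration actually points at the HA nameservice is to look at fs.defaultFS in core-site.xml. A minimal sketch in Python (the parsing helper and the nameservice value shown are illustrative assumptions, not taken from this thread):

```python
import xml.etree.ElementTree as ET

def hadoop_property(conf_xml, name):
    """Return the value of a named property from Hadoop-style XML config text."""
    root = ET.fromstring(conf_xml)
    for prop in root.iter("property"):
        if prop.findtext("name") == name:
            return prop.findtext("value")
    return None

# A core-site.xml deployed for an HA cluster should name the logical
# nameservice (e.g. hdfs://nameservice1), not a single NameNode host:port.
sample = """<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nameservice1</value>
  </property>
</configuration>"""

print(hadoop_property(sample, "fs.defaultFS"))  # hdfs://nameservice1
```

If the value on the client host is still a single host:port, the client configurations were not redeployed after enabling HA.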

You can migrate the data of an existing NameNode by providing the name
directories of the NameNode in the configurations page of the Enable HA
workflow. See attached file: here, nightly41-1.ent.cloudera.com is the
existing NN and /dfs/nn is the path of the name directory of this NameNode.
By default, the Enable HA workflow will use the existing configuration of
nightly41-1.ent.cloudera.com. So if you had changed that to point to some
other path, a new HDFS would be created. If you want to go through the
Enable HA workflow again, you will have to disable HA first. Make sure to
clear the name directories of the Standby NameNode before trying to enable
HA again. (Make sure you're clearing the name directories of the new
Standby NameNode and not of the existing NameNode.)
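
Before re-running the workflow, it helps to confirm which name directories still hold real filesystem metadata: the existing NameNode's directory should contain an fsimage, while the new Standby's directories are the ones to clear. A rough check (a sketch only; the function name is hypothetical, and the layout assumed is the standard HDFS `<name_dir>/current/fsimage_*` convention):

```python
import os

def has_namenode_metadata(name_dir):
    """True if an HDFS name directory already contains filesystem
    metadata, i.e. an fsimage_* file under <name_dir>/current/."""
    current = os.path.join(name_dir, "current")
    if not os.path.isdir(current):
        return False
    return any(f.startswith("fsimage") for f in os.listdir(current))

# The existing NameNode's directory (e.g. /dfs/nn) should report True;
# directories reporting False on the new Standby are safe to clear.
```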

-Vinithra
On Mon, Feb 18, 2013 at 12:51 PM, Sebastian Castro wrote:

Hi all,

I'm currently running Cloudera Manager, Free Edition, version 4.1.2. I've
enabled HDFS High Availability, and the process created a new filesystem
instead of "migrating" the existing one. I have a backup of the directory
with the old metadata and logs; is there a way to "import" that data into
this new HA filesystem? If importing is not possible, would disabling HA
restore the existing filesystem?

Kind Regards,
Sebastian

--



  • Sebastian Castro at Feb 20, 2013 at 4:34 am

    On Wednesday, February 20, 2013 3:05:45 PM UTC+13, Vinithra wrote:

    > +scm-users (Cloudera Manager mailing list)
    >
    > Sebastian,

    Hi Vinithra,

    > The first thing I'd check is whether you've deployed the client
    > configurations to the machine where you are looking up the contents of
    > HDFS. Deploying client configurations is enabled by default as part of
    > the Enable HA workflow. It is also available as an HDFS "Action".

    Client configuration was deployed properly.

    > You can migrate the data of an existing NameNode by providing the name
    > directories of the NameNode in the configurations page of the Enable HA
    > workflow. See attached file: here, nightly41-1.ent.cloudera.com is the
    > existing NN and /dfs/nn is the path of the name directory of this
    > NameNode. By default, the Enable HA workflow will use the existing
    > configuration of nightly41-1.ent.cloudera.com. So if you had changed
    > that to point to some other path, a new HDFS would be created. If you
    > want to go through the Enable HA workflow again, you will have to
    > disable HA first. Make sure to clear the name directories of the
    > Standby NameNode before trying to enable HA again. (Make sure you're
    > clearing the name directories of the new Standby NameNode and not of
    > the existing NameNode.)

    In effect, the issue was that I specified new checkpoint directories for
    the NameNodes when enabling HA, in order to benefit from the new disks.
    Disabling HA restored the cluster to its old HDFS with the data I wanted.

    Thanks for your help.

    > -Vinithra
    >
    > On Mon, Feb 18, 2013 at 12:51 PM, Sebastian Castro
    > <sebastia...@gmail.com> wrote:
    >
    > Hi all,
    >
    > I'm currently running Cloudera Manager, Free Edition, version 4.1.2.
    > I've enabled HDFS High Availability, and the process created a new
    > filesystem instead of "migrating" the existing one. I have a backup of
    > the directory with the old metadata and logs; is there a way to
    > "import" that data into this new HA filesystem? If importing is not
    > possible, would disabling HA restore the existing filesystem?
    >
    > Kind Regards,
    > Sebastian


Discussion Overview
group: scm-users
categories: hadoop
posted: Feb 20, '13 at 2:06a
active: Feb 20, '13 at 4:34a
posts: 2
users: 2
website: cloudera.com
irc: #hadoop


site design / logo © 2022 Grokbase