Sebastian,
The first thing I'd check is whether you've deployed the client
configurations to the machine where you are looking up the contents of
HDFS. Deploying client configurations is enabled by default as part of the
Enable HA workflow. It is also available as an HDFS "Action".
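One quick sanity check is whether the deployed client configuration on that machine actually references the HA nameservice. This is a minimal sketch, assuming Cloudera Manager's usual client-config location of /etc/hadoop/conf; for illustration it greps a sample hdfs-site.xml generated in a temp directory rather than the live file (the nameservice name "nameservice1" is just the CM default, yours may differ):

```shell
# Stand-in for /etc/hadoop/conf so the sketch is safe to run anywhere
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/hdfs-site.xml" <<'EOF'
<configuration>
  <property>
    <name>dfs.nameservices</name>
    <value>nameservice1</value>
  </property>
</configuration>
EOF

# On a real client node you would instead run:
#   grep -A1 'dfs.nameservices' /etc/hadoop/conf/hdfs-site.xml
# If this prints nothing, the HA client configuration was not deployed there.
grep -A1 'dfs.nameservices' "$CONF_DIR/hdfs-site.xml"
```

If the property is missing, re-deploy the client configurations from the HDFS Actions menu before troubleshooting further.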
You can migrate the data of an existing NameNode by providing the name
directories of that NameNode on the configuration page of the Enable HA
workflow. See attached file: there, nightly41-1.ent.cloudera.com is the
existing NN and /dfs/nn is the path of this NameNode's name directory.
By default, the Enable HA workflow will use the existing configuration of
nightly41-1.ent.cloudera.com. So if you change that to point to some
other path, a new HDFS filesystem will be created. If you want to go through
the Enable HA workflow again, you will have to disable HA first. Make sure to
clear the name directories of the Standby NameNode before trying to enable
HA again. (Make sure you're clearing the name directories of the new
Standby NameNode and not of the existing NameNode.)
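Clearing the standby's name directory means emptying its contents, not just deleting stale files inside it. A minimal sketch, assuming /dfs/nn (the example path above) is the name directory on the standby host; it is demonstrated on a temp directory so it is safe to run anywhere:

```shell
# Stand-in for /dfs/nn on the *standby* NameNode host
NN_DIR=$(mktemp -d)
mkdir -p "$NN_DIR/current"
touch "$NN_DIR/current/VERSION"   # fake stale metadata from a previous attempt

# Remove the directory *contents*, keeping the directory itself
# (on the real host, double-check you are on the standby, not the active NN):
rm -rf "$NN_DIR"/*

ls -A "$NN_DIR"                   # prints nothing once the directory is empty
```

Running this against the active NameNode's name directory would destroy your filesystem metadata, so verify the hostname before executing it for real.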
-Vinithra
On Mon, Feb 18, 2013 at 12:51 PM, Sebastian Castro wrote:
Hi all,
I'm currently running Cloudera Manager, Free Edition, version 4.1.2. I've
enabled HDFS High Availability, and the process created a new filesystem
instead of "migrating" the existing one. I have a backup of the directory
with the old metadata and logs; is there a way to "import" that data into this
new HA filesystem? If importing is not possible, would disabling HA restore
the existing filesystem?
Kind Regards,
Sebastian
--