FAQ
Hi,

I am trying to install Cloudera Manager on a 6-node cluster using the automated
parcel-based method. I installed all the parcels successfully and assigned all
the roles to the appropriate nodes, but while starting the HDFS service I am
running into an issue. Can someone please help me?

Log details:

2:33:46.615 AM  INFO   org.apache.hadoop.metrics2.impl.MetricsConfig  loaded properties from hadoop-metrics2.properties
2:33:46.709 AM  INFO   org.apache.hadoop.metrics2.impl.MetricsSystemImpl  Scheduled snapshot period at 10 second(s).
2:33:46.709 AM  INFO   org.apache.hadoop.metrics2.impl.MetricsSystemImpl  NameNode metrics system started
2:33:46.936 AM  WARN   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2:33:46.936 AM  WARN   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of dataloss due to lack of redundant storage directories!
2:33:47.060 AM  INFO   org.apache.hadoop.util.HostsFileReader  Refreshing hosts (include/exclude) list
2:33:47.061 AM  INFO   org.apache.hadoop.util.HostsFileReader  Adding 10.8.240.34 to the list of hosts from /var/run/cloudera-scm-agent/process/107-hdfs-NAMENODE/dfs_hosts_allow.txt
2:33:47.061 AM  INFO   org.apache.hadoop.util.HostsFileReader  Adding 10.8.240.54 to the list of hosts from /var/run/cloudera-scm-agent/process/107-hdfs-NAMENODE/dfs_hosts_allow.txt
2:33:47.061 AM  INFO   org.apache.hadoop.util.HostsFileReader  Adding 10.8.240.36 to the list of hosts from /var/run/cloudera-scm-agent/process/107-hdfs-NAMENODE/dfs_hosts_allow.txt
2:33:47.061 AM  INFO   org.apache.hadoop.util.HostsFileReader  Adding 10.8.240.32 to the list of hosts from /var/run/cloudera-scm-agent/process/107-hdfs-NAMENODE/dfs_hosts_allow.txt
2:33:47.061 AM  INFO   org.apache.hadoop.util.HostsFileReader  Adding 10.8.240.62 to the list of hosts from /var/run/cloudera-scm-agent/process/107-hdfs-NAMENODE/dfs_hosts_allow.txt
2:33:47.061 AM  INFO   org.apache.hadoop.util.HostsFileReader  Adding 10.8.240.56 to the list of hosts from /var/run/cloudera-scm-agent/process/107-hdfs-NAMENODE/dfs_hosts_allow.txt
2:33:47.121 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager  dfs.block.invalidate.limit=1000
2:33:47.147 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.BlockManager  dfs.block.access.token.enable=false
2:33:47.148 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.BlockManager  defaultReplication = 3
2:33:47.148 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.BlockManager  maxReplication = 512
2:33:47.148 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.BlockManager  minReplication = 1
2:33:47.148 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.BlockManager  maxReplicationStreams = 2
2:33:47.148 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.BlockManager  shouldCheckForEnoughRacks = true
2:33:47.148 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.BlockManager  replicationRecheckInterval = 3000
2:33:47.148 AM  INFO   org.apache.hadoop.hdfs.server.blockmanagement.BlockManager  encryptDataTransfer = false
2:33:47.156 AM  INFO   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  fsOwner = hdfs (auth:SIMPLE)
2:33:47.156 AM  INFO   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  supergroup = supergroup
2:33:47.156 AM  INFO   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  isPermissionEnabled = true
2:33:47.157 AM  INFO   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  HA Enabled: false
2:33:47.164 AM  INFO   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  Append Enabled: true
2:33:47.171 AM  WARN   com.cloudera.cmf.event.publish.EventStorePublisherWithRetry  Failed to publish event: SimpleEvent{attributes={ROLE_TYPE=[NAMENODE], CATEGORY=[LOG_MESSAGE], ROLE=[hdfs1-NAMENODE-e30871fc37152f76433a15439514ba7e], SEVERITY=[IMPORTANT], SERVICE=[hdfs1], HOST_IDS=, SERVICE_TYPE=[HDFS], LOG_LEVEL=[WARN], HOSTS=host.com, EVENTCODE=[EV_LOG_EVENT]}, content=Only one image storage directory (dfs.namenode.name.dir) configured. Beware of dataloss due to lack of redundant storage directories!, timestamp=1367721226936}
2:33:47.482 AM  INFO   org.apache.hadoop.hdfs.server.namenode.NameNode  Caching file names occuring more than 10 times
2:33:47.485 AM  INFO   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2:33:47.485 AM  INFO   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  dfs.namenode.safemode.min.datanodes = 0
2:33:47.485 AM  INFO   org.apache.hadoop.hdfs.server.namenode.FSNamesystem  dfs.namenode.safemode.extension = 30000
2:33:47.526 AM  INFO   org.apache.hadoop.hdfs.server.common.Storage  Lock on /data/dfs/nn/in_use.lock acquired by nodename 3666
2:33:47.529 AM  INFO   org.apache.hadoop.metrics2.impl.MetricsSystemImpl  Stopping NameNode metrics system...
2:33:47.530 AM  INFO   org.apache.hadoop.metrics2.impl.MetricsSystemImpl  NameNode metrics system stopped.
2:33:47.530 AM  INFO   org.apache.hadoop.metrics2.impl.MetricsSystemImpl  NameNode metrics system shutdown complete.
2:33:47.531 AM  FATAL  org.apache.hadoop.hdfs.server.namenode.NameNode  Exception in namenode join
java.io.IOException: NameNode is not formatted.
  at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:592)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:435)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:397)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:399)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:433)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:609)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:590)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1141)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1205)

2:33:47.534 AM  INFO   org.apache.hadoop.util.ExitUtil  Exiting with status 1
2:33:47.536 AM  INFO   org.apache.hadoop.hdfs.server.namenode.NameNode  SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode
************************************************************/


Thanks,
Sam

  • Philip Zeyliger at May 5, 2013 at 6:45 pm
    This:

    java.io.IOException: NameNode is not formatted.


    Suggests that your NameNode is not formatted. There's an option to format
    HDFS in the "Actions" drop-down in the HDFS service. Formatting normally
    happens in the wizard during the first installation; clearly that step
    either didn't run or failed earlier, which might warrant some more
    investigation.

    Cheers,

    -- Philip
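
    For reference, a hedged command-line sketch of the same format operation,
    in case the action is hard to find in the UI. Formatting erases any
    existing HDFS metadata, and the --config path below is the per-process
    directory from the log in the question (it changes on every role start),
    so the Cloudera Manager action remains the preferred route:

        # Run on the NameNode host with the NameNode role stopped; assumes the
        # CDH parcel's hdfs binary is on the PATH and the generated config is
        # still present in the process directory shown in the log.
        sudo -u hdfs hdfs --config /var/run/cloudera-scm-agent/process/107-hdfs-NAMENODE namenode -format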

  • Philip Zeyliger at May 6, 2013 at 5:56 pm
    Please include scm-users when you respond.

    Looks like your namenode is not starting. Read the namenode logs.
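
    A hedged pointer, assuming Cloudera Manager defaults: the role's full log
    normally lives under /var/log/hadoop-hdfs on the NameNode host, and the
    stderr of the most recent start attempt is kept in the newest
    /var/run/cloudera-scm-agent/process/*-hdfs-NAMENODE directory.

        # Assumed default locations; adjust if the log directory was changed in CM.
        ls -lt /var/log/hadoop-hdfs/
        ls -dt /var/run/cloudera-scm-agent/process/*-hdfs-NAMENODE | head -1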

    On Sun, May 5, 2013 at 8:46 PM, sharat ph wrote:

    Hi Avinash and philip,

    I tried to start the service from the Actions drop-down, but I still have
    the same issue.

    Philip: I don't see any format option in the HDFS "Actions" drop-down.

    log:

    Supervisor returned FATAL: + perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE#g' /var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE/log4j.properties
    ++ find /var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE -maxdepth 1 -name '*.py'
    + OUTPUT='/var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE/topology.py
    /var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE/cloudera_manager_agent_fencer.py'
    + '[' '/var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE/topology.py
    /var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE/cloudera_manager_agent_fencer.py' '!=' '' ']'
    + chmod +x /var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE/cloudera_manager_agent_fencer.py /var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE/topology.py
    + export HADOOP_IDENT_STRING=hdfs
    + HADOOP_IDENT_STRING=hdfs
    + '[' -n '' ']'
    + acquire_kerberos_tgt hdfs.keytab
    + '[' -z hdfs.keytab ']'
    + '[' -n '' ']'
    + '[' validate-writable-empty-dirs = namenode ']'
    + '[' file-operation = namenode ']'
    + '[' bootstrap = namenode ']'
    + '[' failover = namenode ']'
    + '[' transition-to-active = namenode ']'
    + '[' initializeSharedEdits = namenode ']'
    + '[' initialize-znode = namenode ']'
    + '[' format-namenode = namenode ']'
    + '[' monitor-decommission = namenode ']'
    + '[' jnSyncWait = namenode ']'
    + '[' nnRpcWait = namenode ']'
    + '[' monitor-upgrade = namenode ']'
    + '[' finalize-upgrade = namenode ']'
    + '[' mkdir = namenode ']'
    + '[' namenode = namenode -o secondarynamenode = namenode -o datanode = namenode ']'
    + HADOOP_OPTS='-Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
    + export 'HADOOP_OPTS=-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
    + HADOOP_OPTS='-Dhdfs.audit.logger=INFO,RFAAUDIT -Dsecurity.audit.logger=INFO,RFAS -Djava.net.preferIPv4Stack=true '
    + exec /opt/cloudera/parcels/CDH-4.2.1-1.cdh4.2.1.p0.5/lib/hadoop-hdfs/bin/hdfs --config /var/run/cloudera-scm-agent/process/35-hdfs-NAMENODE namenode


    Thanks,
    Sam
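
    A quick, hedged check of the name directory from the log (/data/dfs/nn):
    a formatted NameNode keeps a current/ subdirectory containing a VERSION
    file and fsimage files; if that directory is missing or empty, the
    "NameNode is not formatted" error above is the expected outcome.

        # Run on the NameNode host; /data/dfs/nn is the path from the log.
        sudo ls -l /data/dfs/nn/current/
        # A formatted directory contains VERSION plus fsimage_* and edits_* files.
        sudo cat /data/dfs/nn/current/VERSION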

