Hi,

I have lately been running into problems since I started running Hadoop
on a cluster:

The setup is the following:
1 Computer is NameNode and JobTracker
1 Computer is SecondaryNameNode
2 Computers are TaskTracker and DataNode

I ran into problems running the wordcount example: the NameNode and
JobTracker do not start properly, both having connection problems of some
kind.
This happens even though SSH is configured so that no password prompt
appears when I connect from any node in the cluster to any other.

Is there any reason why this happens?
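
For reference, the address and port each daemon tries to bind come from
fs.default.name in conf/core-site.xml (NameNode) and mapred.job.tracker in
conf/mapred-site.xml (JobTracker). A minimal way to double-check those values
and the passwordless SSH setup from the master, assuming the standard conf/
layout of the 1.0.2 tarball and that HADOOP_HOME points at the install, is:

    # configured RPC addresses/ports of the two daemons
    grep -A1 "fs.default.name" $HADOOP_HOME/conf/core-site.xml
    grep -A1 "mapred.job.tracker" $HADOOP_HOME/conf/mapred-site.xml
    # passwordless SSH: should print the slave's hostname without any prompt
    ssh its-cs102.its.uni-kassel.de hostname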

The logs look like the following:
\________ JOBTRACKER__________________________________________________
2012-07-18 16:08:05,808 INFO org.apache.hadoop.mapred.JobTracker:
STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting JobTracker
STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.2
STARTUP_MSG: build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
************************************************************/
2012-07-18 16:08:06,479 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2012-07-18 16:08:06,534 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2012-07-18 16:08:06,554 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2012-07-18 16:08:06,554 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics
system started
2012-07-18 16:08:07,157 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
QueueMetrics,q=default registered.
2012-07-18 16:08:10,395 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
ugi registered.
2012-07-18 16:08:10,417 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Updating the current master key for generating delegation tokens
2012-07-18 16:08:10,436 INFO org.apache.hadoop.mapred.JobTracker:
Scheduler configured with (memSizeForMapSlotOnJT,
memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2012-07-18 16:08:10,438 INFO org.apache.hadoop.util.HostsFileReader:
Refreshing hosts (include/exclude) list
2012-07-18 16:08:10,440 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Starting expired delegation token remover thread,
tokenRemoverScanInterval=60 min(s)
2012-07-18 16:08:10,465 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Updating the current master key for generating delegation tokens
2012-07-18 16:08:10,510 INFO org.apache.hadoop.mapred.JobTracker:
Starting jobtracker with owner as bmacek
2012-07-18 16:08:10,620 WARN org.apache.hadoop.mapred.JobTracker: Error
starting tracker: java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:225)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

2012-07-18 16:08:13,861 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
QueueMetrics,q=default already exists!
2012-07-18 16:08:13,885 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
already exists!
2012-07-18 16:08:13,885 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Updating the current master key for generating delegation tokens
2012-07-18 16:08:13,910 INFO org.apache.hadoop.mapred.JobTracker:
Scheduler configured with (memSizeForMapSlotOnJT,
memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2012-07-18 16:08:13,911 INFO org.apache.hadoop.util.HostsFileReader:
Refreshing hosts (include/exclude) list
2012-07-18 16:08:13,911 INFO org.apache.hadoop.mapred.JobTracker:
Starting jobtracker with owner as bmacek
2012-07-18 16:08:13,912 WARN org.apache.hadoop.mapred.JobTracker: Error
starting tracker: java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:225)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

2012-07-18 16:08:13,912 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Starting expired delegation token remover thread,
tokenRemoverScanInterval=60 min(s)
2012-07-18 16:08:13,913 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Updating the current master key for generating delegation tokens
2012-07-18 16:08:21,348 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
QueueMetrics,q=default already exists!
2012-07-18 16:08:21,390 WARN
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
already exists!
2012-07-18 16:08:21,390 INFO
org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
Updating the current master key for generating delegation tokens
2012-07-18 16:08:21,426 INFO org.apache.hadoop.mapred.JobTracker:
Scheduler configured with (memSizeForMapSlotOnJT,
memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
limitMaxMemForReduceTasks) (-1, -1, -1, -1)
2012-07-18 16:08:21,427 INFO org.apache.hadoop.util.HostsFileReader:
Refreshing hosts (include/exclude) list
2012-07-18 16:08:21,427 INFO org.apache.hadoop.mapred.JobTracker:
Starting jobtracker with owner as bmacek
2012-07-18 16:08:21,428 WARN org.apache.hadoop.mapred.JobTracker: Error
starting tracker: java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:225)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)


\________ NAMENODE__________________________________________________
2012-07-18 16:07:58,759 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.0.2
STARTUP_MSG: build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
************************************************************/
2012-07-18 16:07:59,738 INFO
org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
hadoop-metrics2.properties
2012-07-18 16:07:59,790 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2012-07-18 16:07:59,807 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2012-07-18 16:07:59,807 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
system started
2012-07-18 16:08:00,382 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
ugi registered.
2012-07-18 16:08:00,454 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
jvm registered.
2012-07-18 16:08:00,456 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
NameNode registered.
2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: VM
type = 64-bit
2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
memory = 17.77875 MB
2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
capacity = 2^21 = 2097152 entries
2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
recommended=2097152, actual=2097152
2012-07-18 16:08:00,812 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
2012-07-18 16:08:00,812 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-07-18 16:08:00,824 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isPermissionEnabled=true
2012-07-18 16:08:00,846 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
dfs.block.invalidate.limit=100
2012-07-18 16:08:00,846 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
accessTokenLifetime=0 min(s)
2012-07-18 16:08:02,746 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
FSNamesystemStateMBean and NameNodeMXBean
2012-07-18 16:08:02,868 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
occuring more than 10 times
2012-07-18 16:08:02,932 INFO
org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
2012-07-18 16:08:02,963 INFO
org.apache.hadoop.hdfs.server.common.Storage: Number of files under
construction = 0
2012-07-18 16:08:02,966 INFO
org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112
loaded in 0 seconds.
2012-07-18 16:08:02,975 INFO
org.apache.hadoop.hdfs.server.common.Storage: Edits file
/home/work/bmacek/hadoop/master/current/edits of size 4 edits # 0 loaded
in 0 seconds.
2012-07-18 16:08:02,977 INFO
org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112
saved in 0 seconds.
2012-07-18 16:08:03,191 INFO
org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112
saved in 0 seconds.
2012-07-18 16:08:03,334 INFO
org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
entries 0 lookups
2012-07-18 16:08:03,334 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
FSImage in 2567 msecs
2012-07-18 16:08:03,401 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
blocks = 0
2012-07-18 16:08:03,401 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
blocks = 0
2012-07-18 16:08:03,401 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
under-replicated blocks = 0
2012-07-18 16:08:03,401 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
over-replicated blocks = 0
2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Safe mode termination scan for invalid, over- and under-replicated
blocks completed in 61 msec
2012-07-18 16:08:03,402 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Leaving safe mode after 2 secs.
2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange: STATE*
Network topology has 0 racks and 0 datanodes
2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange: STATE*
UnderReplicatedBlocks has 0 blocks
2012-07-18 16:08:03,472 INFO org.apache.hadoop.util.HostsFileReader:
Refreshing hosts (include/exclude) list
2012-07-18 16:08:03,488 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
2012-07-18 16:08:03,490 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
processing time, 1 msec clock time, 1 cycles
2012-07-18 16:08:03,490 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
2012-07-18 16:08:03,490 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
processing time, 0 msec clock time, 1 cycles
2012-07-18 16:08:03,495 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
FSNamesystemMetrics registered.
2012-07-18 16:08:03,553 WARN
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
thread received InterruptedException.java.lang.InterruptedException:
sleep interrupted
2012-07-18 16:08:03,555 INFO
org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
Monitor
java.lang.InterruptedException: sleep interrupted
at java.lang.Thread.sleep(Native Method)
at
org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
at java.lang.Thread.run(Thread.java:619)
2012-07-18 16:08:03,556 INFO
org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
transactions: 0 Total time for transactions(ms): 0Number of transactions
batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
2012-07-18 16:08:03,594 ERROR
org.apache.hadoop.hdfs.server.namenode.NameNode:
java.net.SocketException: Permission denied
at sun.nio.ch.Net.bind(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
at org.apache.hadoop.ipc.Server.bind(Server.java:225)
at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

2012-07-18 16:08:03,627 INFO
org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at
its-cs100.its.uni-kassel.de/141.51.205.10
************************************************************/


  • Suresh Srinivas at Jul 18, 2012 at 5:48 pm
    Can you share information on the Java version that you are using?
    - Is it as obvious as some previous processes still running, so that the
    new processes cannot bind to the port?
    - Another pointer:
    http://stackoverflow.com/questions/8360913/weird-java-net-socketexception-permission-denied-connect-error-when-running-groo
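
    A quick way to check for a leftover process still holding the port, on the
    master node (with <rpc-port> standing for whichever port is configured for
    the NameNode or JobTracker), might be:

        jps                              # lists the running Hadoop JVMs
        lsof -i :<rpc-port>              # any process already bound to the port?
        netstat -tlnp | grep <rpc-port>  # alternative if lsof shows nothing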


    --
    http://hortonworks.com/download/
  • Björn-Elmar Macek at Jul 20, 2012 at 2:16 pm
    Hi Srinivas,

    Thanks for your reply! I have been following your link and idea and
    playing around a lot, but I still have problems with the connection
    (though they are different now):

    \_______ JAVA VERSION_________
    "which java" tells me it is 1.6.0_01. If i got it right version 1.7 got
    problems with ssh.
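
    Note that the JVM the daemons actually run on is whatever JAVA_HOME points
    to in conf/hadoop-env.sh, which is not necessarily the binary "which java"
    finds on the PATH. A minimal check, assuming the standard conf/ layout, is:

        java -version                                   # JVM on the PATH
        grep JAVA_HOME $HADOOP_HOME/conf/hadoop-env.sh  # JVM the Hadoop scripts use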

    \_______MY TESTS_____________
    Following your suggestion to look for processes running on that port,
    I changed ports a lot: when I wrote the first post of this thread, I was
    using port 999 for the NameNode and port 1000 for the JobTracker.
    For some reason, commands like "lsof -i" do not give me any output when
    used in the cluster environment, so I started looking for ports that are
    generally unused by other programs.
    When I changed the ports to 9004 and 9005, I got different errors which
    look very much like the ones you posted at the beginning of this year in
    the Lucene section (
    http://lucene.472066.n3.nabble.com/Unable-to-start-hadoop-0-20-2-but-able-to-start-hadoop-0-20-203-cluster-td2991350.html
    ).
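
    One thing worth noting about the old ports: on Linux, only root may bind
    TCP ports below 1024, so a non-root user binding 999 or 1000 gets exactly
    a "Permission denied" SocketException, while 9004/9005 are unprivileged.
    If lsof prints nothing on the cluster, a rough alternative check for
    listeners on the new ports (port numbers taken from above) is:

        netstat -tln | grep -E ':(9004|9005) '   # any listener on the new ports?
        ss -tln | grep -E ':(9004|9005) '        # same check via ss, if netstat is missing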

    It seems as if a DataNode can not communicate with the NameNode.
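
    For the "Datanode denied communication with namenode" error in the logs
    below: in Hadoop 1.x this usually means the DataNode's hostname is not
    accepted by the dfs.hosts include file (if one is configured), is listed
    in dfs.hosts.exclude, or does not resolve consistently on the NameNode.
    A rough way to check, assuming those properties would live in
    hdfs-site.xml, is:

        grep -A1 'dfs.hosts' $HADOOP_HOME/conf/hdfs-site.xml   # include/exclude file paths, if any
        getent hosts its-cs102.its.uni-kassel.de               # does the slave name resolve on the master?
        getent hosts 141.51.205.12                             # and does the address map back?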

    The logs look like the following:

    \_______TEST RESULTS__________
    ########## A DataNode #############
    2012-07-20 14:47:59,536 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG: host = its-cs102.its.uni-kassel.de/141.51.205.12
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:59,824 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    hadoop-metrics2.properties
    2012-07-20 14:47:59,841 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,843 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
    period at 10 second(s).
    2012-07-20 14:47:59,844 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
    system started
    2012-07-20 14:47:59,969 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    ugi registered.
    2012-07-20 14:48:26,792 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: its-cs100/141.51.205.10:9005. Already tried 0 time(s).
    2012-07-20 14:48:26,889 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
    FSDatasetStatusMBean
    2012-07-20 14:48:26,934 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
    2012-07-20 14:48:26,949 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
    1048576 bytes/s
    2012-07-20 14:48:27,014 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:27,147 INFO org.apache.hadoop.http.HttpServer: Added
    global filtersafety
    (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:27,160 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open() is
    -1. Opening the listener on 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50075
    webServer.getConnectors()[0].getLocalPort() returned 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Jetty
    bound to port 50075
    2012-07-20 14:48:27,160 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:27,805 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50075
    2012-07-20 14:48:27,811 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    jvm registered.
    2012-07-20 14:48:27,813 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    DataNode registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort50020 registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort50020 registered.
    2012-07-20 14:48:28,487 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
    DatanodeRegistration(its-cs102.its.uni-kassel.de:50010, storageID=,
    infoPort=50075, ipcPort=50020)
    2012-07-20 14:48:28,489 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:38,706 ERROR
    org.apache.hadoop.hdfs.server.datanode.DataNode:
    org.apache.hadoop.ipc.RemoteException:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.register(Unknown Source)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.register(DataNode.java:673)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.runDatanodeDaemon(DataNode.java:1480)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1540)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

    2012-07-20 14:48:38,712 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at
    its-cs102.its.uni-kassel.de/141.51.205.12
    ************************************************************/


    ####### NameNode ##########################
    CAUTION: Please recognize, that the file mentioned in the first error
    log message
    (/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info) does
    not exist on the NameNode, when i checked for it.
    The only path that has a simiar name is:
    /home/work/bmacek/hadoop/hdfs/slave/tmp (containing no further
    subfolders or files)
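
    As for the repeated "could only be replicated to 0 nodes, instead of 1"
    errors in the log below: that message generally means the NameNode has no
    live DataNodes registered at that moment (the same log also reports
    "Network topology has 0 racks and 0 datanodes"), not that the path itself
    is wrong. A quick sanity check from the master, assuming the hadoop script
    is on the PATH, is:

        hadoop dfsadmin -report       # how many DataNodes the NameNode currently sees
        hadoop fsck / -files -blocks  # HDFS health once at least one DataNode is up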



    2012-07-20 14:47:58,033 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:58,985 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    hadoop-metrics2.properties
    2012-07-20 14:47:59,037 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
    period at 10 second(s).
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
    system started
    2012-07-20 14:47:59,622 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    ugi registered.
    2012-07-20 14:47:59,685 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    jvm registered.
    2012-07-20 14:47:59,703 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    NameNode registered.
    2012-07-20 14:47:59,896 INFO org.apache.hadoop.hdfs.util.GSet: VM
    type = 64-bit
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
    memory = 17.77875 MB
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet:
    capacity = 2^21 = 2097152 entries
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-20 14:48:00,083 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-20 14:48:00,084 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-20 14:48:01,573 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-20 14:48:01,643 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
    occuring more than 10 times
    2012-07-20 14:48:01,686 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
    2012-07-20 14:48:01,712 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Number of files under
    construction = 0
    2012-07-20 14:48:01,713 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112
    loaded in 0 seconds.
    2012-07-20 14:48:01,796 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode,
    reached end of edit log Number of transactions found 53
    2012-07-20 14:48:01,796 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Edits file
    /home/work/bmacek/hadoop/master/current/edits of size 1049092 edits # 53
    loaded in 0 seconds.
    2012-07-20 14:48:01,797 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size 861
    saved in 0 seconds.
    2012-07-20 14:48:02,003 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size 861
    saved in 0 seconds.
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 1
    entries 11 lookups
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
    FSImage in 2135 msecs
    2012-07-20 14:48:02,203 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
    blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
    blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Safe mode termination scan for invalid, over- and under-replicated
    blocks completed in 44 msec
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Leaving safe mode after 2 secs.
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Network topology has 0 racks and 0 datanodes
    2012-07-20 14:48:02,205 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    UnderReplicatedBlocks has 0 blocks
    2012-07-20 14:48:02,265 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-20 14:48:02,275 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
    processing time, 1 msec clock time, 1 cycles
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
    processing time, 0 msec clock time, 1 cycles
    2012-07-20 14:48:02,281 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    FSNamesystemMetrics registered.
    2012-07-20 14:48:02,336 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort9005 registered.
    2012-07-20 14:48:02,337 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort9005 registered.
    2012-07-20 14:48:02,341 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:02,356 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
    its-cs100.its.uni-kassel.de/141.51.205.10:9005
    2012-07-20 14:48:02,878 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:03,312 INFO org.apache.hadoop.http.HttpServer: Added
    global filtersafety
    (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:03,426 INFO org.apache.hadoop.http.HttpServer:
    dfs.webhdfs.enabled = false
    2012-07-20 14:48:03,465 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open() is
    -1. Opening the listener on 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50070
    webServer.getConnectors()[0].getLocalPort() returned 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer: Jetty
    bound to port 50070
    2012-07-20 14:48:03,511 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:06,528 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50070
    2012-07-20 14:48:06,528 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
    0.0.0.0:50070
    2012-07-20 14:48:06,561 INFO org.apache.hadoop.ipc.Server: IPC Server
    Responder: starting
    2012-07-20 14:48:06,593 INFO org.apache.hadoop.ipc.Server: IPC Server
    listener on 9005: starting
    2012-07-20 14:48:06,656 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005: starting
    2012-07-20 14:48:06,685 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005: starting
    2012-07-20 14:48:06,731 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005: starting
    2012-07-20 14:48:06,759 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 3 on 9005: starting
    2012-07-20 14:48:06,791 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005: starting
    2012-07-20 14:48:06,849 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005: starting
    2012-07-20 14:48:06,874 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 6 on 9005: starting
    2012-07-20 14:48:06,898 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005: starting
    2012-07-20 14:48:06,921 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005: starting
    2012-07-20 14:48:06,974 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005: starting
    2012-07-20 14:48:27,222 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:27,224 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56513: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:38,701 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    2012-07-20 14:48:38,701 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    register(DatanodeRegistration(its-cs102.its.uni-kassel.de:50010,
    storageID=DS-1791721778-141.51.205.12-50010-1342788518692,
    infoPort=50075, ipcPort=50020)) from 141.51.205.12:33789: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:54,331 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:54,331 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56514: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:18,079 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 13 Total time for transactions(ms): 22Number of
    transactions batched in Syncs: 0 Number of syncs: 9 SyncTimes(ms): 111
    2012-07-20 14:49:18,151 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:18,151 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56515: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:41,419 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:41,419 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56516: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:04,474 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:04,474 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56517: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:26,299 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 25 Total time for transactions(ms): 22Number of
    transactions batched in Syncs: 0 Number of syncs: 18 SyncTimes(ms): 170
    2012-07-20 14:50:26,359 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:26,359 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56518: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:47,243 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:47,243 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56519: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:06,865 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs103.its.uni-kassel.de:50010
    2012-07-20 14:51:06,865 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005, call
    register(DatanodeRegistration(its-cs103.its.uni-kassel.de:50010,
    storageID=DS-1725464844-141.51.205.13-50010-1342788666863,
    infoPort=50075, ipcPort=50020)) from 141.51.205.13:48227: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs103.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs103.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:08,305 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:08,305 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56520: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:34,855 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 37 Total time for transactions(ms): 22Number of
    transactions batched in Syncs: 0 Number of syncs: 27 SyncTimes(ms): 256
    2012-07-20 14:51:34,932 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:34,932 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56521: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:57,128 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:57,128 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56522: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:21,974 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:21,976 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56523: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:43,473 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 49 Total time for transactions(ms): 22Number of
    transactions batched in Syncs: 0 Number of syncs: 36 SyncTimes(ms): 341
    2012-07-20 14:52:43,570 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:43,570 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info, DFSClient_-1997886712,
    null) from 141.51.205.10:56524: error: java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)





    I am not

    On 18.07.2012 at 19:47, Suresh Srinivas wrote:
    Can you share information on the Java version that you are using?
    - Is it as obvious as some previous processes still running, so that
    new processes cannot bind to the port?
    - Another pointer -
    http://stackoverflow.com/questions/8360913/weird-java-net-socketexception-permission-denied-connect-error-when-running-groo

    On Wed, Jul 18, 2012 at 7:29 AM, Björn-Elmar Macek
    wrote:

    Hi,

    i have lately been running into problems since i started running
    hadoop on a cluster:

    The setup is the following:
    1 Computer is NameNode and Jobtracker
    1 Computer is SecondaryNameNode
    2 Computers are TaskTracker and DataNode

    I ran into problems with running the wordcount example: NameNode
    and Jobtracker do not start properly both having connection
    problems of some kind.
    And this is although ssh is configured that way, that no prompt
    happens when i connect from any node in the cluster to any other.

    Is there any reason why this happens?

    The logs look like the following:
    \________ JOBTRACKER__________________________________________________
    2012-07-18 16:08:05,808 INFO org.apache.hadoop.mapred.JobTracker:
    STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting JobTracker
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2
    -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-18 16:08:06,479 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
    from hadoop-metrics2.properties
    2012-07-18 16:08:06,534 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source MetricsSystem,sub=Stats registered.
    2012-07-18 16:08:06,554 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled
    snapshot period at 10 second(s).
    2012-07-18 16:08:06,554 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker
    metrics system started
    2012-07-18 16:08:07,157 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source QueueMetrics,q=default registered.
    2012-07-18 16:08:10,395 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source ugi registered.
    2012-07-18 16:08:10,417 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:10,436 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
    limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:10,438 INFO
    org.apache.hadoop.util.HostsFileReader: Refreshing hosts
    (include/exclude) list
    2012-07-18 16:08:10,440 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Starting expired delegation token remover thread,
    tokenRemoverScanInterval=60 min(s)
    2012-07-18 16:08:10,465 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:10,510 INFO org.apache.hadoop.mapred.JobTracker:
    Starting jobtracker with owner as bmacek
    2012-07-18 16:08:10,620 WARN org.apache.hadoop.mapred.JobTracker:
    Error starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

    2012-07-18 16:08:13,861 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    QueueMetrics,q=default already exists!
    2012-07-18 16:08:13,885 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
    already exists!
    2012-07-18 16:08:13,885 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:13,910 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
    limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:13,911 INFO
    org.apache.hadoop.util.HostsFileReader: Refreshing hosts
    (include/exclude) list
    2012-07-18 16:08:13,911 INFO org.apache.hadoop.mapred.JobTracker:
    Starting jobtracker with owner as bmacek
    2012-07-18 16:08:13,912 WARN org.apache.hadoop.mapred.JobTracker:
    Error starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

    2012-07-18 16:08:13,912 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Starting expired delegation token remover thread,
    tokenRemoverScanInterval=60 min(s)
    2012-07-18 16:08:13,913 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:21,348 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    QueueMetrics,q=default already exists!
    2012-07-18 16:08:21,390 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
    already exists!
    2012-07-18 16:08:21,390 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:21,426 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
    limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:21,427 INFO
    org.apache.hadoop.util.HostsFileReader: Refreshing hosts
    (include/exclude) list
    2012-07-18 16:08:21,427 INFO org.apache.hadoop.mapred.JobTracker:
    Starting jobtracker with owner as bmacek
    2012-07-18 16:08:21,428 WARN org.apache.hadoop.mapred.JobTracker:
    Error starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)


    \________ DATANODE__________________________________________________
    2012-07-18 16:07:58,759 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2
    -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-18 16:07:59,738 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
    from hadoop-metrics2.properties
    2012-07-18 16:07:59,790 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source MetricsSystem,sub=Stats registered.
    2012-07-18 16:07:59,807 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled
    snapshot period at 10 second(s).
    2012-07-18 16:07:59,807 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode
    metrics system started
    2012-07-18 16:08:00,382 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source ugi registered.
    2012-07-18 16:08:00,454 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source jvm registered.
    2012-07-18 16:08:00,456 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source NameNode registered.
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: VM
    type = 64-bit
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: 2%
    max memory = 17.77875 MB
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
    capacity = 2^21 = 2097152 entries
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-18 16:08:00,812 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-18 16:08:00,812 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    supergroup=supergroup
    2012-07-18 16:08:00,824 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-18 16:08:00,846 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-18 16:08:00,846 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-18 16:08:02,746 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-18 16:08:02,868 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file
    names occuring more than 10 times
    2012-07-18 16:08:02,932 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
    2012-07-18 16:08:02,963 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Number of files
    under construction = 0
    2012-07-18 16:08:02,966 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size
    112 loaded in 0 seconds.
    2012-07-18 16:08:02,975 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Edits file
    /home/work/bmacek/hadoop/master/current/edits of size 4 edits # 0
    loaded in 0 seconds.
    2012-07-18 16:08:02,977 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size
    112 saved in 0 seconds.
    2012-07-18 16:08:03,191 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size
    112 saved in 0 seconds.
    2012-07-18 16:08:03,334 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with
    0 entries 0 lookups
    2012-07-18 16:08:03,334 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished
    loading FSImage in 2567 msecs
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number
    of blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    invalid blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Safe mode termination scan for invalid, over- and
    under-replicated blocks completed in 61 msec
    2012-07-18 16:08:03,402 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Leaving safe mode after 2 secs.
    2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Network topology has 0 racks and 0 datanodes
    2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* UnderReplicatedBlocks has 0 blocks
    2012-07-18 16:08:03,472 INFO
    org.apache.hadoop.util.HostsFileReader: Refreshing hosts
    (include/exclude) list
    2012-07-18 16:08:03,488 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    ReplicateQueue QueueProcessingStatistics: First cycle completed 0
    blocks in 1 msec
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    ReplicateQueue QueueProcessingStatistics: Queue flush completed 0
    blocks in 1 msec processing time, 1 msec clock time, 1 cycles
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    InvalidateQueue QueueProcessingStatistics: First cycle completed 0
    blocks in 0 msec
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    InvalidateQueue QueueProcessingStatistics: Queue flush completed 0
    blocks in 0 msec processing time, 0 msec clock time, 1 cycles
    2012-07-18 16:08:03,495 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source FSNamesystemMetrics registered.
    2012-07-18 16:08:03,553 WARN
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    ReplicationMonitor thread received
    InterruptedException.java.lang.InterruptedException: sleep interrupted
    2012-07-18 16:08:03,555 INFO
    org.apache.hadoop.hdfs.server.namenode.DecommissionManager:
    Interrupted Monitor
    java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at
    org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:619)
    2012-07-18 16:08:03,556 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 0 Total time for transactions(ms): 0Number of
    transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
    2012-07-18 16:08:03,594 ERROR
    org.apache.hadoop.hdfs.server.namenode.NameNode:
    java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

    2012-07-18 16:08:03,627 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at
    its-cs100.its.uni-kassel.de/141.51.205.10
    ************************************************************/




    --
    http://hortonworks.com/download/
  • Björn-Elmar Macek at Jul 20, 2012 at 2:54 pm
    Hi all,

    well, I just stumbled upon this post:
    http://ankitasblogger.blogspot.de/2012/01/error-that-occured-in-hadoop-and-its.html

    And it says:
    "Problem: Hadoop-datanode job failed or datanode not running:
    java.io.IOException: File ../mapred/system/jobtracker.info could only be
    replicated to 0 nodes, instead of 1.
    ...
    Cause: You may also get this message due to permissions. May be
    JobTracker can not create jobtracker.info on startup."

    Since the file does not exist, I think this might be a probable reason
    for my errors. But why should the JobTracker not be able to create that
    file? It created several other directories on this node with ease via
    the slave.sh script, which I started with the very same user that calls
    start-all.sh.
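
    If it is really about the JobTracker failing to write jobtracker.info
    into HDFS, the same kind of write can be tried by hand. The following is
    only a sketch (it assumes this cluster's core-site.xml/hdfs-site.xml are
    on the classpath; the test path and class name are made up): with no
    DataNode registered at the NameNode it should fail with the very same
    "could only be replicated to 0 nodes, instead of 1" exception.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch only: repeat by hand the kind of HDFS write the JobTracker does
    // for jobtracker.info. Assumes the cluster's core-site.xml/hdfs-site.xml
    // are on the classpath; the test path and class name are made up.
    public class HdfsWriteCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();    // reads core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);        // client for fs.default.name (HDFS)
            Path p = new Path("/tmp/hdfs-write-check");  // hypothetical test file
            FSDataOutputStream out = fs.create(p, true); // open for write, overwrite if present
            try {
                out.writeUTF("hello");
            } finally {
                // closing flushes the data; with no live DataNode this is where the
                // "could only be replicated to 0 nodes, instead of 1" IOException shows up
                out.close();
            }
            System.out.println("write OK, replication = "
                    + fs.getFileStatus(p).getReplication());
            fs.delete(p, true);                          // clean up the test file
        }
    }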

    Any help would be really appreciated.


    On 20.07.2012 at 16:15, Björn-Elmar Macek wrote:
    Hi Srinivas,

    thanks for your reply! I have been following your link and idea and
    have been playing around a lot, but I still have problems with the
    connection (though they are different now):

    \_______ JAVA VERSION_________
    "which java" tells me it is 1.6.0_01. If I got it right, version 1.7
    has problems with ssh.

    \_______MY TESTS_____________
    Following your suggestion to look for processes running on that port,
    I changed ports a lot:
    When I wrote the first post of this thread, I was using port 999 for
    the NameNode and port 1000 for the JobTracker.
    For some reason, commands like "lsof -i" etc. don't give me any output
    when used in the cluster environment, so I started looking for ports
    that are in general unused by programs.
    When I changed the ports to 9004 and 9005, I got different errors which
    look very much like the ones you posted at the beginning of this year
    in the Lucene section (
    http://lucene.472066.n3.nabble.com/Unable-to-start-hadoop-0-20-2-but-able-to-start-hadoop-0-20-203-cluster-td2991350.html
    ).
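
    As to the original "Permission denied" bind errors: ports 999 and 1000
    are below 1024 and therefore privileged on Linux, so a non-root user
    cannot bind them, while 9004/9005 can be bound by any user. That would
    explain why the bind errors disappeared after the port change. A tiny
    stand-alone check (my own sketch, not part of Hadoop; class name made up):

    import java.io.IOException;
    import java.net.ServerSocket;

    // Stand-alone check: try to bind each port as the current user. On Linux,
    // ports below 1024 are privileged, so 999/1000 should fail with
    // "Permission denied" for a normal user, while 9004/9005 should bind fine.
    public class PortBindCheck {
        public static void main(String[] args) {
            int[] ports = {999, 1000, 9004, 9005}; // the ports discussed in this thread
            for (int port : ports) {
                try {
                    ServerSocket s = new ServerSocket(port); // plain java.net bind
                    s.close();
                    System.out.println("port " + port + ": bind OK");
                } catch (IOException e) {
                    System.out.println("port " + port + ": " + e);
                }
            }
        }
    }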

    It seems as if a DataNode cannot communicate with the NameNode.

    The logs look like the following:

    \_______TEST RESULTS__________
    ########## A DataNode #############
    2012-07-20 14:47:59,536 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG: host = its-cs102.its.uni-kassel.de/141.51.205.12
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2
    -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:59,824 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    hadoop-metrics2.properties
    2012-07-20 14:47:59,841 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,843 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
    period at 10 second(s).
    2012-07-20 14:47:59,844 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
    system started
    2012-07-20 14:47:59,969 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    ugi registered.
    2012-07-20 14:48:26,792 INFO org.apache.hadoop.ipc.Client: Retrying
    connect to server: its-cs100/141.51.205.10:9005. Already tried 0 time(s).
    2012-07-20 14:48:26,889 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
    FSDatasetStatusMBean
    2012-07-20 14:48:26,934 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at
    50010
    2012-07-20 14:48:26,949 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
    1048576 bytes/s
    2012-07-20 14:48:27,014 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:27,147 INFO org.apache.hadoop.http.HttpServer: Added
    global filtersafety
    (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:27,160 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled =
    false
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open()
    is -1. Opening the listener on 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50075
    webServer.getConnectors()[0].getLocalPort() returned 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Jetty
    bound to port 50075
    2012-07-20 14:48:27,160 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:27,805 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50075
    2012-07-20 14:48:27,811 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    jvm registered.
    2012-07-20 14:48:27,813 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    DataNode registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort50020 registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort50020 registered.
    2012-07-20 14:48:28,487 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
    DatanodeRegistration(its-cs102.its.uni-kassel.de:50010, storageID=,
    infoPort=50075, ipcPort=50020)
    2012-07-20 14:48:28,489 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:38,706 ERROR
    org.apache.hadoop.hdfs.server.datanode.DataNode:
    org.apache.hadoop.ipc.RemoteException:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.register(Unknown Source)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.register(DataNode.java:673)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.runDatanodeDaemon(DataNode.java:1480)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1540)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

    2012-07-20 14:48:38,712 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at
    its-cs102.its.uni-kassel.de/141.51.205.12
    ************************************************************/


    ####### NameNode ##########################
    CAUTION: Please note that the file mentioned in the first error log
    message
    (/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info) did
    not exist on the NameNode when I checked for it.
    The only path that has a similar name is:
    /home/work/bmacek/hadoop/hdfs/slave/tmp (containing no further
    subfolders or files)
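
    As far as I can tell, jobtracker.info is written into HDFS (the mapred
    system directory is a path in the default filesystem), not onto the
    NameNode's local disk, which would explain why looking for it on the
    NameNode host finds nothing. A quick way to check both places (again
    only a sketch; assumes the cluster configuration is on the classpath,
    class name made up, path copied from the log above):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    // Sketch only: check the path through the HDFS client and on the local disk.
    public class HdfsPathCheck {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem hdfs = FileSystem.get(conf);        // default filesystem (HDFS)
            FileSystem local = FileSystem.getLocal(conf);  // this machine's local filesystem
            Path p = new Path(
                "/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info");
            System.out.println("exists in HDFS:     " + hdfs.exists(p));
            System.out.println("exists on local FS: " + local.exists(p));
        }
    }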



    2012-07-20 14:47:58,033 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2
    -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:58,985 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    hadoop-metrics2.properties
    2012-07-20 14:47:59,037 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
    period at 10 second(s).
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
    system started
    2012-07-20 14:47:59,622 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    ugi registered.
    2012-07-20 14:47:59,685 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    jvm registered.
    2012-07-20 14:47:59,703 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    NameNode registered.
    2012-07-20 14:47:59,896 INFO org.apache.hadoop.hdfs.util.GSet: VM
    type = 64-bit
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
    memory = 17.77875 MB
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet:
    capacity = 2^21 = 2097152 entries
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-20 14:48:00,083 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-20 14:48:00,084 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-20 14:48:01,573 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-20 14:48:01,643 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
    occuring more than 10 times
    2012-07-20 14:48:01,686 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
    2012-07-20 14:48:01,712 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Number of files under
    construction = 0
    2012-07-20 14:48:01,713 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112
    loaded in 0 seconds.
    2012-07-20 14:48:01,796 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode,
    reached end of edit log Number of transactions found 53
    2012-07-20 14:48:01,796 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Edits file
    /home/work/bmacek/hadoop/master/current/edits of size 1049092 edits #
    53 loaded in 0 seconds.
    2012-07-20 14:48:01,797 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size 861
    saved in 0 seconds.
    2012-07-20 14:48:02,003 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size 861
    saved in 0 seconds.
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 1
    entries 11 lookups
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
    FSImage in 2135 msecs
    2012-07-20 14:48:02,203 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
    blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
    blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Safe mode termination scan for invalid, over- and
    under-replicated blocks completed in 44 msec
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Leaving safe mode after 2 secs.
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Network topology has 0 racks and 0 datanodes
    2012-07-20 14:48:02,205 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* UnderReplicatedBlocks has 0 blocks
    2012-07-20 14:48:02,265 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-20 14:48:02,275 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
    processing time, 1 msec clock time, 1 cycles
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
    processing time, 0 msec clock time, 1 cycles
    2012-07-20 14:48:02,281 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    FSNamesystemMetrics registered.
    2012-07-20 14:48:02,336 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort9005 registered.
    2012-07-20 14:48:02,337 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort9005 registered.
    2012-07-20 14:48:02,341 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:02,356 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
    its-cs100.its.uni-kassel.de/141.51.205.10:9005
    2012-07-20 14:48:02,878 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:03,312 INFO org.apache.hadoop.http.HttpServer: Added
    global filtersafety
    (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:03,426 INFO org.apache.hadoop.http.HttpServer:
    dfs.webhdfs.enabled = false
    2012-07-20 14:48:03,465 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open()
    is -1. Opening the listener on 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50070
    webServer.getConnectors()[0].getLocalPort() returned 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer: Jetty
    bound to port 50070
    2012-07-20 14:48:03,511 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:06,528 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50070
    2012-07-20 14:48:06,528 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
    0.0.0.0:50070
    2012-07-20 14:48:06,561 INFO org.apache.hadoop.ipc.Server: IPC Server
    Responder: starting
    2012-07-20 14:48:06,593 INFO org.apache.hadoop.ipc.Server: IPC Server
    listener on 9005: starting
    2012-07-20 14:48:06,656 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005: starting
    2012-07-20 14:48:06,685 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005: starting
    2012-07-20 14:48:06,731 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005: starting
    2012-07-20 14:48:06,759 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 3 on 9005: starting
    2012-07-20 14:48:06,791 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005: starting
    2012-07-20 14:48:06,849 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005: starting
    2012-07-20 14:48:06,874 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 6 on 9005: starting
    2012-07-20 14:48:06,898 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005: starting
    2012-07-20 14:48:06,921 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005: starting
    2012-07-20 14:48:06,974 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005: starting
    2012-07-20 14:48:27,222 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:27,224 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56513: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:38,701 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    2012-07-20 14:48:38,701 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    register(DatanodeRegistration(its-cs102.its.uni-kassel.de:50010,
    storageID=DS-1791721778-141.51.205.12-50010-1342788518692,
    infoPort=50075, ipcPort=50020)) from 141.51.205.12:33789: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:54,331 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:54,331 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56514: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:18,079 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 13 Total time for transactions(ms): 22Number of
    transactions batched in Syncs: 0 Number of syncs: 9 SyncTimes(ms): 111
    2012-07-20 14:49:18,151 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:18,151 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56515: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:41,419 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:41,419 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56516: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:04,474 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:04,474 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56517: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:26,299 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 25 Total time for transactions(ms): 22Number of
    transactions batched in Syncs: 0 Number of syncs: 18 SyncTimes(ms): 170
    2012-07-20 14:50:26,359 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:26,359 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56518: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:47,243 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:47,243 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56519: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:06,865 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs103.its.uni-kassel.de:50010
    2012-07-20 14:51:06,865 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005, call
    register(DatanodeRegistration(its-cs103.its.uni-kassel.de:50010,
    storageID=DS-1725464844-141.51.205.13-50010-1342788666863,
    infoPort=50075, ipcPort=50020)) from 141.51.205.13:48227: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs103.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs103.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:08,305 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:08,305 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56520: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:34,855 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 37 Total time for transactions(ms): 22Number of
    transactions batched in Syncs: 0 Number of syncs: 27 SyncTimes(ms): 256
    2012-07-20 14:51:34,932 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:34,932 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56521: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:57,128 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:57,128 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56522: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:21,974 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:21,976 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56523: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:43,473 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 49 Total time for transactions(ms): 22Number of
    transactions batched in Syncs: 0 Number of syncs: 36 SyncTimes(ms): 341
    2012-07-20 14:52:43,570 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:43,570 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56524: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)





    I am not

    On 18.07.2012 19:47, Suresh Srinivas wrote:
    Can you share information on the Java version that you are using?
    - Is it as obvious as some previous processes still running and new
    processes cannot bind to the port?
    - Another pointer -
    http://stackoverflow.com/questions/8360913/weird-java-net-socketexception-permission-denied-connect-error-when-running-groo
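
    A quick way to check whether an old process is still holding the RPC port
    (the point raised above) is sketched below. This is only a sketch: port 9005
    is the one that comes up later in this thread, and lsof/netstat may sit in
    /usr/sbin or be restricted on a managed cluster. Note also that on Linux only
    root can normally bind ports below 1024, so a regular user binding such a
    port gets exactly a "Permission denied" from bind().

    # run on the NameNode/JobTracker host; substitute the RPC port you actually configured
    jps -l                         # Hadoop JVMs (NameNode, JobTracker, ...) still running for this user
    lsof -i :9005                  # the process, if any, currently bound to the port
    netstat -tlnp | grep ':9005'   # alternative if lsof prints nothing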

    On Wed, Jul 18, 2012 at 7:29 AM, Björn-Elmar Macek
    wrote:

    Hi,

    i have lately been running into problems since i started running
    hadoop on a cluster:

    The setup is the following:
    1 Computer is NameNode and Jobtracker
    1 Computer is SecondaryNameNode
    2 Computers are TaskTracker and DataNode

    I ran into problems with running the wordcount example: NameNode
    and Jobtracker do not start properly both having connection
    problems of some kind.
    And this is although ssh is configured that way, that no prompt
    happens when i connect from any node in the cluster to any other.

    Is there any reason why this happens?

    The logs look like the following:
    \________
    JOBTRACKER__________________________________________________
    2012-07-18 16:08:05,808 INFO org.apache.hadoop.mapred.JobTracker:
    STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting JobTracker
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2
    -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-18 16:08:06,479 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
    from hadoop-metrics2.properties
    2012-07-18 16:08:06,534 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source MetricsSystem,sub=Stats registered.
    2012-07-18 16:08:06,554 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled
    snapshot period at 10 second(s).
    2012-07-18 16:08:06,554 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker
    metrics system started
    2012-07-18 16:08:07,157 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source QueueMetrics,q=default registered.
    2012-07-18 16:08:10,395 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source ugi registered.
    2012-07-18 16:08:10,417 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:10,436 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
    limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:10,438 INFO
    org.apache.hadoop.util.HostsFileReader: Refreshing hosts
    (include/exclude) list
    2012-07-18 16:08:10,440 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Starting expired delegation token remover thread,
    tokenRemoverScanInterval=60 min(s)
    2012-07-18 16:08:10,465 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:10,510 INFO org.apache.hadoop.mapred.JobTracker:
    Starting jobtracker with owner as bmacek
    2012-07-18 16:08:10,620 WARN org.apache.hadoop.mapred.JobTracker:
    Error starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

    2012-07-18 16:08:13,861 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    QueueMetrics,q=default already exists!
    2012-07-18 16:08:13,885 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    ugi already exists!
    2012-07-18 16:08:13,885 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:13,910 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
    limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:13,911 INFO
    org.apache.hadoop.util.HostsFileReader: Refreshing hosts
    (include/exclude) list
    2012-07-18 16:08:13,911 INFO org.apache.hadoop.mapred.JobTracker:
    Starting jobtracker with owner as bmacek
    2012-07-18 16:08:13,912 WARN org.apache.hadoop.mapred.JobTracker:
    Error starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

    2012-07-18 16:08:13,912 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Starting expired delegation token remover thread,
    tokenRemoverScanInterval=60 min(s)
    2012-07-18 16:08:13,913 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:21,348 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    QueueMetrics,q=default already exists!
    2012-07-18 16:08:21,390 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    ugi already exists!
    2012-07-18 16:08:21,390 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:21,426 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT, limitMaxMemForMapTasks,
    limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:21,427 INFO
    org.apache.hadoop.util.HostsFileReader: Refreshing hosts
    (include/exclude) list
    2012-07-18 16:08:21,427 INFO org.apache.hadoop.mapred.JobTracker:
    Starting jobtracker with owner as bmacek
    2012-07-18 16:08:21,428 WARN org.apache.hadoop.mapred.JobTracker:
    Error starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at
    org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)


    \________ DATANODE__________________________________________________
    2012-07-18 16:07:58,759 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2
    -r 1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-18 16:07:59,738 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties
    from hadoop-metrics2.properties
    2012-07-18 16:07:59,790 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source MetricsSystem,sub=Stats registered.
    2012-07-18 16:07:59,807 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled
    snapshot period at 10 second(s).
    2012-07-18 16:07:59,807 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode
    metrics system started
    2012-07-18 16:08:00,382 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source ugi registered.
    2012-07-18 16:08:00,454 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source jvm registered.
    2012-07-18 16:08:00,456 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source NameNode registered.
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: VM
    type = 64-bit
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: 2%
    max memory = 17.77875 MB
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
    capacity = 2^21 = 2097152 entries
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-18 16:08:00,812 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-18 16:08:00,812 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    supergroup=supergroup
    2012-07-18 16:08:00,824 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-18 16:08:00,846 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-18 16:08:00,846 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-18 16:08:02,746 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-18 16:08:02,868 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file
    names occuring more than 10 times
    2012-07-18 16:08:02,932 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
    2012-07-18 16:08:02,963 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Number of files
    under construction = 0
    2012-07-18 16:08:02,966 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size
    112 loaded in 0 seconds.
    2012-07-18 16:08:02,975 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Edits file
    /home/work/bmacek/hadoop/master/current/edits of size 4 edits # 0
    loaded in 0 seconds.
    2012-07-18 16:08:02,977 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size
    112 saved in 0 seconds.
    2012-07-18 16:08:03,191 INFO
    org.apache.hadoop.hdfs.server.common.Storage: Image file of size
    112 saved in 0 seconds.
    2012-07-18 16:08:03,334 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized
    with 0 entries 0 lookups
    2012-07-18 16:08:03,334 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished
    loading FSImage in 2567 msecs
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number
    of blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    invalid blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Safe mode termination scan for invalid, over- and
    under-replicated blocks completed in 61 msec
    2012-07-18 16:08:03,402 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Leaving safe mode after 2 secs.
    2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* Network topology has 0 racks and 0 datanodes
    2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange:
    STATE* UnderReplicatedBlocks has 0 blocks
    2012-07-18 16:08:03,472 INFO
    org.apache.hadoop.util.HostsFileReader: Refreshing hosts
    (include/exclude) list
    2012-07-18 16:08:03,488 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    ReplicateQueue QueueProcessingStatistics: First cycle completed 0
    blocks in 1 msec
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    ReplicateQueue QueueProcessingStatistics: Queue flush completed 0
    blocks in 1 msec processing time, 1 msec clock time, 1 cycles
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    InvalidateQueue QueueProcessingStatistics: First cycle completed
    0 blocks in 0 msec
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    InvalidateQueue QueueProcessingStatistics: Queue flush completed
    0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
    2012-07-18 16:08:03,495 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for
    source FSNamesystemMetrics registered.
    2012-07-18 16:08:03,553 WARN
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    ReplicationMonitor thread received
    InterruptedException.java.lang.InterruptedException: sleep
    interrupted
    2012-07-18 16:08:03,555 INFO
    org.apache.hadoop.hdfs.server.namenode.DecommissionManager:
    Interrupted Monitor
    java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at
    org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:619)
    2012-07-18 16:08:03,556 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 0 Total time for transactions(ms): 0Number of
    transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
    2012-07-18 16:08:03,594 ERROR
    org.apache.hadoop.hdfs.server.namenode.NameNode:
    java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

    2012-07-18 16:08:03,627 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at
    its-cs100.its.uni-kassel.de/141.51.205.10
    ************************************************************/




    --
    http://hortonworks.com/download/
  • Mohammad Tariq at Jul 20, 2012 at 2:59 pm
    Hello sir,

    If possible, could you please paste your config files?

    Regards,
    Mohammad Tariq
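
    For a Hadoop 1.0.2 setup like the one in this thread, the files usually meant
    by such a request are the ones under the conf/ directory of the Hadoop
    installation. A minimal way to collect them, assuming the standard Hadoop 1.x
    file names and that HADOOP_HOME points at the installation, might be:

    # dump the cluster-relevant configuration files in one go
    for f in core-site.xml hdfs-site.xml mapred-site.xml masters slaves; do
        echo "===== conf/$f ====="
        cat "$HADOOP_HOME/conf/$f"
    done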


    On Fri, Jul 20, 2012 at 8:24 PM, Björn-Elmar Macek
    wrote:
    Hi all,

    I just stumbled upon this post:
    http://ankitasblogger.blogspot.de/2012/01/error-that-occured-in-hadoop-and-its.html

    And it says:
    "Problem: Hadoop-datanode job failed or datanode not running:
    java.io.IOException: File ../mapred/system/jobtracker.info could only be
    replicated to 0 nodes, instead of 1.
    ...
    Cause: You may also get this message due to permissions. May be JobTracker
    can not create jobtracker.info on startup."

    Since the file does not exist, I think this might be a probable reason for my
    errors. But why should the JobTracker not be able to create that file? It
    created several other directories on this node with ease via the slave.sh
    script that I started with the very same user that calls start-all.sh.
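
    For what it is worth, "could only be replicated to 0 nodes, instead of 1"
    usually means the NameNode had no live DataNodes to place the block on when
    the JobTracker tried to write jobtracker.info, which would fit the
    DisallowedDatanodeException entries in the same log. A few checks, as a
    sketch only (the path is the one from the log; the commands are standard
    Hadoop 1.x shell commands):

    # on the NameNode host, as the user that runs the cluster
    hadoop dfsadmin -report            # how many DataNodes does the NameNode consider live?
    hadoop fs -ls /home/work/bmacek/hadoop/hdfs/tmp/mapred/system   # does the system dir exist in HDFS, and who owns it?
    hadoop fs -touchz /tmp/hdfs-write-test && hadoop fs -rm /tmp/hdfs-write-test   # can this user write to HDFS at all?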

    Any help would be really appreciated.


    On 20.07.2012 16:15, Björn-Elmar Macek wrote:

    Hi Srinivas,

    Thanks for your reply! I have been following your link and idea and have been
    playing around a lot, but I still have problems with the connection (though
    they are different now):

    \_______ JAVA VERSION_________
    "which java" tells me it is 1.6.0_01. If I got it right, version 1.7 has
    problems with ssh.

    \_______MY TESTS_____________
    Following your suggestion to look for processes running on that port, I
    changed ports a lot:
    When I wrote the first post of this thread, I was using port 999 for the
    namenode and port 1000 for the jobtracker.
    For some reason, commands like "lsof -i" etc. don't give me any output when
    used in the cluster environment, so I started looking for ports that are in
    general unused by programs.
    When I changed the ports to 9004 and 9005, I got different errors which look
    very much like the ones you posted at the beginning of this year in the
    Lucene section (
    http://lucene.472066.n3.nabble.com/Unable-to-start-hadoop-0-20-2-but-able-to-start-hadoop-0-20-203-cluster-td2991350.html
    ).

    It seems as if a DataNode cannot communicate with the NameNode.
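
    The DisallowedDatanodeException in these logs is thrown when the NameNode
    refuses a DataNode's registration, typically because a dfs.hosts include file
    (or dfs.hosts.exclude) is in effect and does not allow the DataNode's
    resolved name, or because hostname/IP resolution differs between the nodes.
    A minimal sketch of what one might check, assuming the usual Hadoop 1.x
    conf/ layout:

    # on the NameNode host
    grep -A2 'dfs.hosts' "$HADOOP_HOME/conf/hdfs-site.xml"   # is an include/exclude file configured at all?
    cat "$HADOOP_HOME/conf/slaves"                           # are its-cs102 and its-cs103 listed as workers?
    getent hosts its-cs102.its.uni-kassel.de                 # does the DataNode's name resolve to the expected address?
    # on each DataNode host
    hostname -f                                              # should match the name the NameNode sees in the registration request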

    The logs look like the following:

    \_______TEST RESULTS__________
    ########## A DataNode #############
    2012-07-20 14:47:59,536 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG: host = its-cs102.its.uni-kassel.de/141.51.205.12
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:59,824 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
    loaded properties from hadoop-metrics2.properties
    2012-07-20 14:47:59,841 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,843 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
    at 10 second(s).
    2012-07-20 14:47:59,844 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
    started
    2012-07-20 14:47:59,969 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
    registered.
    2012-07-20 14:48:26,792 INFO org.apache.hadoop.ipc.Client: Retrying connect
    to server: its-cs100/141.51.205.10:9005. Already tried 0 time(s).
    2012-07-20 14:48:26,889 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
    FSDatasetStatusMBean
    2012-07-20 14:48:26,934 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
    2012-07-20 14:48:26,949 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
    1048576 bytes/s
    2012-07-20 14:48:27,014 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:27,147 INFO org.apache.hadoop.http.HttpServer: Added global
    filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:27,160 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
    Opening the listener on 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50075
    webServer.getConnectors()[0].getLocalPort() returned 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Jetty bound
    to port 50075
    2012-07-20 14:48:27,160 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:27,805 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50075
    2012-07-20 14:48:27,811 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
    registered.
    2012-07-20 14:48:27,813 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    DataNode registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort50020 registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort50020 registered.
    2012-07-20 14:48:28,487 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
    DatanodeRegistration(its-cs102.its.uni-kassel.de:50010, storageID=,
    infoPort=50075, ipcPort=50020)
    2012-07-20 14:48:28,489 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:38,706 ERROR
    org.apache.hadoop.hdfs.server.datanode.DataNode:
    org.apache.hadoop.ipc.RemoteException:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.register(Unknown Source)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.register(DataNode.java:673)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.runDatanodeDaemon(DataNode.java:1480)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1540)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

    2012-07-20 14:48:38,712 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at
    its-cs102.its.uni-kassel.de/141.51.205.12
    ************************************************************/


    ####### NameNode ##########################
    CAUTION: Please note that the file mentioned in the first error log message
    (/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info) does not
    exist on the NameNode; I checked for it.
    The only path that has a similar name is:
    /home/work/bmacek/hadoop/hdfs/slave/tmp (containing no further subfolders or
    files)
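
    Note that the path in the addBlock() error is a path inside HDFS, not a
    directory on the NameNode's local disk, so it is not expected to exist as a
    local file with that name even when everything works. To look for it in the
    right place (sketch only; the path is taken verbatim from the log):

    # query HDFS rather than the local filesystem
    hadoop fs -ls /home/work/bmacek/hadoop/hdfs/tmp/mapred/system
    hadoop fs -ls /home/work/bmacek/hadoop/hdfs/tmp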



    2012-07-20 14:47:58,033 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:58,985 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
    loaded properties from hadoop-metrics2.properties
    2012-07-20 14:47:59,037 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
    at 10 second(s).
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
    started
    2012-07-20 14:47:59,622 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
    registered.
    2012-07-20 14:47:59,685 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
    registered.
    2012-07-20 14:47:59,703 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    NameNode registered.
    2012-07-20 14:47:59,896 INFO org.apache.hadoop.hdfs.util.GSet: VM type
    = 64-bit
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory
    = 17.77875 MB
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet: capacity
    = 2^21 = 2097152 entries
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-20 14:48:00,083 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-20 14:48:00,084 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-20 14:48:01,573 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-20 14:48:01,643 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring
    more than 10 times
    2012-07-20 14:48:01,686 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Number of files = 1
    2012-07-20 14:48:01,712 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Number of files under construction = 0
    2012-07-20 14:48:01,713 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 loaded in 0 seconds.
    2012-07-20 14:48:01,796 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode, reached
    end of edit log Number of transactions found 53
    2012-07-20 14:48:01,796 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Edits file /home/work/bmacek/hadoop/master/current/edits of size 1049092
    edits # 53 loaded in 0 seconds.
    2012-07-20 14:48:01,797 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 861 saved in 0 seconds.
    2012-07-20 14:48:02,003 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 861 saved in 0 seconds.
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 1 entries
    11 lookups
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
    FSImage in 2135 msecs
    2012-07-20 14:48:02,203 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
    = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
    blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe
    mode termination scan for invalid, over- and under-replicated blocks
    completed in 44 msec
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Leaving safe mode after 2 secs.
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Network topology has 0 racks and 0 datanodes
    2012-07-20 14:48:02,205 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    UnderReplicatedBlocks has 0 blocks
    2012-07-20 14:48:02,265 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-20 14:48:02,275 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
    processing time, 1 msec clock time, 1 cycles
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
    processing time, 0 msec clock time, 1 cycles
    2012-07-20 14:48:02,281 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    FSNamesystemMetrics registered.
    2012-07-20 14:48:02,336 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort9005 registered.
    2012-07-20 14:48:02,337 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort9005 registered.
    2012-07-20 14:48:02,341 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:02,356 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
    its-cs100.its.uni-kassel.de/141.51.205.10:9005
    2012-07-20 14:48:02,878 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:03,312 INFO org.apache.hadoop.http.HttpServer: Added global
    filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:03,426 INFO org.apache.hadoop.http.HttpServer:
    dfs.webhdfs.enabled = false
    2012-07-20 14:48:03,465 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
    Opening the listener on 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50070
    webServer.getConnectors()[0].getLocalPort() returned 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer: Jetty bound
    to port 50070
    2012-07-20 14:48:03,511 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:06,528 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50070
    2012-07-20 14:48:06,528 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
    0.0.0.0:50070
    2012-07-20 14:48:06,561 INFO org.apache.hadoop.ipc.Server: IPC Server
    Responder: starting
    2012-07-20 14:48:06,593 INFO org.apache.hadoop.ipc.Server: IPC Server
    listener on 9005: starting
    2012-07-20 14:48:06,656 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005: starting
    2012-07-20 14:48:06,685 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005: starting
    2012-07-20 14:48:06,731 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005: starting
    2012-07-20 14:48:06,759 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 3 on 9005: starting
    2012-07-20 14:48:06,791 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005: starting
    2012-07-20 14:48:06,849 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005: starting
    2012-07-20 14:48:06,874 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 6 on 9005: starting
    2012-07-20 14:48:06,898 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005: starting
    2012-07-20 14:48:06,921 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005: starting
    2012-07-20 14:48:06,974 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005: starting
    2012-07-20 14:48:27,222 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:27,224 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56513: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:38,701 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    2012-07-20 14:48:38,701 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    register(DatanodeRegistration(its-cs102.its.uni-kassel.de:50010,
    storageID=DS-1791721778-141.51.205.12-50010-1342788518692, infoPort=50075,
    ipcPort=50020)) from 141.51.205.12:33789: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:54,331 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:54,331 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56514: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:18,079 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    13 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 9 SyncTimes(ms): 111
    2012-07-20 14:49:18,151 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:18,151 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56515: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:41,419 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:41,419 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56516: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:04,474 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:04,474 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56517: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:26,299 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    25 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 18 SyncTimes(ms): 170
    2012-07-20 14:50:26,359 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:26,359 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56518: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:47,243 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:47,243 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56519: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:06,865 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs103.its.uni-kassel.de:50010
    2012-07-20 14:51:06,865 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005, call
    register(DatanodeRegistration(its-cs103.its.uni-kassel.de:50010,
    storageID=DS-1725464844-141.51.205.13-50010-1342788666863, infoPort=50075,
    ipcPort=50020)) from 141.51.205.13:48227: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs103.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs103.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:08,305 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:08,305 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56520: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:34,855 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    37 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 27 SyncTimes(ms): 256
    2012-07-20 14:51:34,932 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:34,932 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56521: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:57,128 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:57,128 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56522: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:21,974 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:21,976 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56523: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:43,473 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    49 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 36 SyncTimes(ms): 341
    2012-07-20 14:52:43,570 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:43,570 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56524: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)





    I am not

    On 18.07.2012 19:47, Suresh Srinivas wrote:

    Can you share information on the Java version that you are using?
    - Is it as obvious as some previous processes still running, so that new
    processes cannot bind to the port?
    - Another pointer -
    http://stackoverflow.com/questions/8360913/weird-java-net-socketexception-permission-denied-connect-error-when-running-groo



  • Björn-Elmar Macek at Jul 20, 2012 at 3:38 pm
    Hi Mohammad,

    Thanks for your fast reply. Here they are:

    \_____________hadoop-env.sh___
    I added those 2 lines:

    # The java implementation to use. Required.
    export JAVA_HOME=/opt/jdk1.6.0_01/
    export JAVA_OPTS="-Djava.net.preferIPv4Stack=true $JAVA_OPTS"
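
    A hedged side note on the lines above: as far as I can tell, the Hadoop 1.x
    scripts build each daemon's command line from HADOOP_OPTS (plus the
    per-daemon HADOOP_*_OPTS variables), not from JAVA_OPTS, so the IPv4 flag
    may never reach the JVMs this way. A minimal sketch of the commonly
    suggested variant, assuming the stock bin/hadoop script is used:

    # hadoop-env.sh -- sketch, not the file as it currently is
    export JAVA_HOME=/opt/jdk1.6.0_01/
    # bin/hadoop appends HADOOP_OPTS to every daemon's java command line
    export HADOOP_OPTS="-Djava.net.preferIPv4Stack=true $HADOOP_OPTS"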


    \_____________core-site.xml_____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
      <property>
        <name>fs.default.name</name>
        <value>hdfs://its-cs100:9005</value>
      </property>
    </configuration>


    \_____________hdfs-site.xml____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- configure data paths for masters and slaves -->

    <configuration>
      <property>
        <name>dfs.name.dir</name>
        <value>/home/work/bmacek/hadoop/master</value>
      </property>
      <!-- maybe one cannot configure masters and slaves with the same file -->
      <property>
        <name>dfs.data.dir</name>
        <value>/home/work/bmacek/hadoop/hdfs/slave</value>
      </property>
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <value>1</value>
      </property>
    </configuration>


    \_______mapred-site.xml____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
      <!-- master -->
      <property>
        <name>mapred.job.tracker</name>
        <value>its-cs100:9004</value>
      </property>
      <!-- datanode -->
      <property>
        <name>dfs.hosts</name>
        <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
      </property>
      <property>
        <name>mapred.hosts</name>
        <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
      </property>
    </configuration>

    \_______masters____
    its-cs101

    \_______slaves______
    its-cs102
    its-cs103


    That's about it, I think. I hope I didn't forget anything.

    Regards,
    Björn-Elmar
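
    A note on dfs.hosts and mapred.hosts above: when dfs.hosts is set, the
    NameNode only accepts registrations from datanodes whose resolved hostnames
    appear in that include file, which is exactly the failure mode behind the
    "DisallowedDatanodeException: Datanode denied communication with namenode"
    entries in the log. What /home/fb16/bmacek/hadoop-1.0.2/conf/hosts actually
    contains is an assumption here, but a minimal sketch, one hostname per line
    and covering both the short and the fully qualified names the datanodes
    report, would be:

    its-cs102
    its-cs102.its.uni-kassel.de
    its-cs103
    its-cs103.its.uni-kassel.de

    (As far as I know, dfs.hosts is an HDFS setting read by the NameNode and is
    normally placed in hdfs-site.xml rather than mapred-site.xml; mapred.hosts
    is the MapReduce counterpart for tasktrackers.)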

    On 20.07.2012 16:58, Mohammad Tariq wrote:
    Hello sir,

    If possible, could you please paste your config files?

    Regards,
    Mohammad Tariq


    On Fri, Jul 20, 2012 at 8:24 PM, Björn-Elmar Macek
    wrote:
    Hi all,

    I just stumbled upon this post:
    http://ankitasblogger.blogspot.de/2012/01/error-that-occured-in-hadoop-and-its.html

    And it says:
    "Problem: Hadoop-datanode job failed or datanode not running:
    java.io.IOException: File ../mapred/system/jobtracker.info could only be
    replicated to 0 nodes, instead of 1.
    ...
    Cause: You may also get this message due to permissions. May be JobTracker
    can not create jobtracker.info on startup."

    Since the file does not exist, I think this might be a probable reason for
    my errors. But why should the JobTracker not be able to create that file? It
    created several other directories on this node with ease via the slave.sh
    script that I started with the very same user that calls start-all.sh.

    Any help would be really appreciated.
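
    Since "could only be replicated to 0 nodes" generally means the NameNode
    does not see a single live DataNode, one quick check (a sketch, run on the
    NameNode host as the same user) would be:

    # How many datanodes does the namenode currently see? If this says 0,
    # the jobtracker.info error above is just a symptom of that.
    bin/hadoop dfsadmin -report | grep -i "datanodes available"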


    On 20.07.2012 16:15, Björn-Elmar Macek wrote:

    Hi Srinivas,

    thanks for your reply! I have been following your link and idea and have
    been playing around a lot, but I still have problems with the connection
    (though they are different now):

    \_______ JAVA VERSION_________
    "which java" tells me it is 1.6.0_01. If i got it right version 1.7 got
    problems with ssh.

    \_______MY TESTS_____________
    Following your suggestion to look for processes running on that port, I
    changed ports a lot:
    When I posted the first post of this thread, I was using port 999 for the
    namenode and 1000 for the jobtracker.
    For some reason, commands like "lsof -i" etc. don't give me any output when
    used in the cluster environment, so I started looking for ports that are in
    general unused by programs.
    When I changed the ports to 9004 and 9005, I got different errors, which
    look very much like the ones you posted at the beginning of this year in
    the Lucene section (
    http://lucene.472066.n3.nabble.com/Unable-to-start-hadoop-0-20-2-but-able-to-start-hadoop-0-20-203-cluster-td2991350.html
    ).
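
    As a side note on the first ports: "java.net.SocketException: Permission
    denied" is what one would expect on Linux for ports 999 and 1000, since
    ports below 1024 can only be bound by root. A quick sketch for checking a
    candidate port before putting it into the config (assuming standard Linux
    tools):

    id -u                        # anything other than 0 means not root,
                                 # so ports < 1024 cannot be bound
    netstat -tln | grep ':9005 ' # no output means nothing is listening there yet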

    It seems as if a DataNode cannot communicate with the NameNode.

    The logs look like the following:

    \_______TEST RESULTS__________
    ########## A DataNode #############
    2012-07-20 14:47:59,536 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG: host = its-cs102.its.uni-kassel.de/141.51.205.12
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:59,824 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
    loaded properties from hadoop-metrics2.properties
    2012-07-20 14:47:59,841 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,843 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
    at 10 second(s).
    2012-07-20 14:47:59,844 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
    started
    2012-07-20 14:47:59,969 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
    registered.
    2012-07-20 14:48:26,792 INFO org.apache.hadoop.ipc.Client: Retrying connect
    to server: its-cs100/141.51.205.10:9005. Already tried 0 time(s).
    2012-07-20 14:48:26,889 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
    FSDatasetStatusMBean
    2012-07-20 14:48:26,934 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
    2012-07-20 14:48:26,949 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
    1048576 bytes/s
    2012-07-20 14:48:27,014 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:27,147 INFO org.apache.hadoop.http.HttpServer: Added global
    filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:27,160 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
    Opening the listener on 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50075
    webServer.getConnectors()[0].getLocalPort() returned 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Jetty bound
    to port 50075
    2012-07-20 14:48:27,160 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:27,805 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50075
    2012-07-20 14:48:27,811 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
    registered.
    2012-07-20 14:48:27,813 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    DataNode registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort50020 registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort50020 registered.
    2012-07-20 14:48:28,487 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
    DatanodeRegistration(its-cs102.its.uni-kassel.de:50010, storageID=,
    infoPort=50075, ipcPort=50020)
    2012-07-20 14:48:28,489 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:38,706 ERROR
    org.apache.hadoop.hdfs.server.datanode.DataNode:
    org.apache.hadoop.ipc.RemoteException:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.register(Unknown Source)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.register(DataNode.java:673)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.runDatanodeDaemon(DataNode.java:1480)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1540)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

    2012-07-20 14:48:38,712 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at
    its-cs102.its.uni-kassel.de/141.51.205.12
    ************************************************************/


    ####### NameNode ##########################
    CAUTION: Please note that the file mentioned in the first error log
    message (/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info)
    did not exist on the NameNode when I checked for it.
    The only path with a similar name is:
    /home/work/bmacek/hadoop/hdfs/slave/tmp (containing no further subfolders or
    files)
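    If that path is actually an HDFS path rather than a local one
    (mapred.system.dir defaults to ${hadoop.tmp.dir}/mapred/system on the
    default filesystem), it would not show up on the local disk anyway; a quick
    check in HDFS, just as a sketch, would be:

    bin/hadoop fs -ls /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/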



    2012-07-20 14:47:58,033 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:58,985 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
    loaded properties from hadoop-metrics2.properties
    2012-07-20 14:47:59,037 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
    at 10 second(s).
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
    started
    2012-07-20 14:47:59,622 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
    registered.
    2012-07-20 14:47:59,685 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
    registered.
    2012-07-20 14:47:59,703 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    NameNode registered.
    2012-07-20 14:47:59,896 INFO org.apache.hadoop.hdfs.util.GSet: VM type
    = 64-bit
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory
    = 17.77875 MB
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet: capacity
    = 2^21 = 2097152 entries
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-20 14:48:00,083 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-20 14:48:00,084 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-20 14:48:01,573 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-20 14:48:01,643 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring
    more than 10 times
    2012-07-20 14:48:01,686 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Number of files = 1
    2012-07-20 14:48:01,712 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Number of files under construction = 0
    2012-07-20 14:48:01,713 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 loaded in 0 seconds.
    2012-07-20 14:48:01,796 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode, reached
    end of edit log Number of transactions found 53
    2012-07-20 14:48:01,796 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Edits file /home/work/bmacek/hadoop/master/current/edits of size 1049092
    edits # 53 loaded in 0 seconds.
    2012-07-20 14:48:01,797 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 861 saved in 0 seconds.
    2012-07-20 14:48:02,003 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 861 saved in 0 seconds.
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 1 entries
    11 lookups
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
    FSImage in 2135 msecs
    2012-07-20 14:48:02,203 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
    = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
    blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE* Safe
    mode termination scan for invalid, over- and under-replicated blocks
    completed in 44 msec
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Leaving safe mode after 2 secs.
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Network topology has 0 racks and 0 datanodes
    2012-07-20 14:48:02,205 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    UnderReplicatedBlocks has 0 blocks
    2012-07-20 14:48:02,265 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-20 14:48:02,275 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
    processing time, 1 msec clock time, 1 cycles
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
    processing time, 0 msec clock time, 1 cycles
    2012-07-20 14:48:02,281 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    FSNamesystemMetrics registered.
    2012-07-20 14:48:02,336 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort9005 registered.
    2012-07-20 14:48:02,337 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort9005 registered.
    2012-07-20 14:48:02,341 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:02,356 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
    its-cs100.its.uni-kassel.de/141.51.205.10:9005
    2012-07-20 14:48:02,878 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:03,312 INFO org.apache.hadoop.http.HttpServer: Added global
    filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:03,426 INFO org.apache.hadoop.http.HttpServer:
    dfs.webhdfs.enabled = false
    2012-07-20 14:48:03,465 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
    Opening the listener on 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50070
    webServer.getConnectors()[0].getLocalPort() returned 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer: Jetty bound
    to port 50070
    2012-07-20 14:48:03,511 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:06,528 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50070
    2012-07-20 14:48:06,528 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
    0.0.0.0:50070
    2012-07-20 14:48:06,561 INFO org.apache.hadoop.ipc.Server: IPC Server
    Responder: starting
    2012-07-20 14:48:06,593 INFO org.apache.hadoop.ipc.Server: IPC Server
    listener on 9005: starting
    2012-07-20 14:48:06,656 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005: starting
    2012-07-20 14:48:06,685 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005: starting
    2012-07-20 14:48:06,731 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005: starting
    2012-07-20 14:48:06,759 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 3 on 9005: starting
    2012-07-20 14:48:06,791 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005: starting
    2012-07-20 14:48:06,849 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005: starting
    2012-07-20 14:48:06,874 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 6 on 9005: starting
    2012-07-20 14:48:06,898 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005: starting
    2012-07-20 14:48:06,921 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005: starting
    2012-07-20 14:48:06,974 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005: starting
    2012-07-20 14:48:27,222 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:27,224 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56513: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:38,701 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    2012-07-20 14:48:38,701 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    register(DatanodeRegistration(its-cs102.its.uni-kassel.de:50010,
    storageID=DS-1791721778-141.51.205.12-50010-1342788518692, infoPort=50075,
    ipcPort=50020)) from 141.51.205.12:33789: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:54,331 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:54,331 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56514: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:18,079 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    13 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 9 SyncTimes(ms): 111
    2012-07-20 14:49:18,151 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:18,151 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56515: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:41,419 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:41,419 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56516: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:04,474 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:04,474 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56517: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:26,299 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    25 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 18 SyncTimes(ms): 170
    2012-07-20 14:50:26,359 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:26,359 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56518: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:47,243 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:47,243 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56519: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:06,865 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs103.its.uni-kassel.de:50010
    2012-07-20 14:51:06,865 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005, call
    register(DatanodeRegistration(its-cs103.its.uni-kassel.de:50010,
    storageID=DS-1725464844-141.51.205.13-50010-1342788666863, infoPort=50075,
    ipcPort=50020)) from 141.51.205.13:48227: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs103.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException: Datanode
    denied communication with namenode: its-cs103.its.uni-kassel.de:50010
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:08,305 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:08,305 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56520: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:34,855 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    37 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 27 SyncTimes(ms): 256
    2012-07-20 14:51:34,932 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:34,932 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56521: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:57,128 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:57,128 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56522: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:21,974 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:21,976 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56523: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:43,473 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    49 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 36 SyncTimes(ms): 341
    2012-07-20 14:52:43,570 ERROR
    org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:43,570 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56524: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could only
    be replicated to 0 nodes, instead of 1
    at
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)





    I am not

    On 18.07.2012 19:47, Suresh Srinivas wrote:

    Can you share information on the Java version that you are using?
    - Is it as obvious as some previous processes still running, so that new
    processes cannot bind to the port?
    - Another pointer -
    http://stackoverflow.com/questions/8360913/weird-java-net-socketexception-permission-denied-connect-error-when-running-groo

    On Wed, Jul 18, 2012 at 7:29 AM, Björn-Elmar Macek <macek@cs.uni-kassel.de>
    wrote:
    Hi,

    i have lately been running into problems since i started running hadoop on
    a cluster:

    The setup is the following:
    1 Computer is NameNode and Jobtracker
    1 Computer is SecondaryNameNode
    2 Computers are TaskTracker and DataNode

    I ran into problems with running the wordcount example: NameNode and
    Jobtracker do not start properly both having connection problems of some
    kind.
    And this is although ssh is configured that way, that no prompt happens
    when i connect from any node in the cluster to any other.

    Is there any reason why this happens?

    The logs look like the following:
    \________ JOBTRACKER__________________________________________________
    2012-07-18 16:08:05,808 INFO org.apache.hadoop.mapred.JobTracker:
    STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting JobTracker
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-18 16:08:06,479 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    hadoop-metrics2.properties
    2012-07-18 16:08:06,534 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-18 16:08:06,554 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
    at 10 second(s).
    2012-07-18 16:08:06,554 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics system
    started
    2012-07-18 16:08:07,157 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    QueueMetrics,q=default registered.
    2012-07-18 16:08:10,395 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
    registered.
    2012-07-18 16:08:10,417 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:10,436 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
    limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:10,438 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-18 16:08:10,440 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Starting expired delegation token remover thread,
    tokenRemoverScanInterval=60 min(s)
    2012-07-18 16:08:10,465 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:10,510 INFO org.apache.hadoop.mapred.JobTracker: Starting
    jobtracker with owner as bmacek
    2012-07-18 16:08:10,620 WARN org.apache.hadoop.mapred.JobTracker: Error
    starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at
    sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

    2012-07-18 16:08:13,861 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    QueueMetrics,q=default already exists!
    2012-07-18 16:08:13,885 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
    exists!
    2012-07-18 16:08:13,885 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:13,910 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
    limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:13,911 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-18 16:08:13,911 INFO org.apache.hadoop.mapred.JobTracker: Starting
    jobtracker with owner as bmacek
    2012-07-18 16:08:13,912 WARN org.apache.hadoop.mapred.JobTracker: Error
    starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at
    sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

    2012-07-18 16:08:13,912 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Starting expired delegation token remover thread,
    tokenRemoverScanInterval=60 min(s)
    2012-07-18 16:08:13,913 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:21,348 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    QueueMetrics,q=default already exists!
    2012-07-18 16:08:21,390 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi already
    exists!
    2012-07-18 16:08:21,390 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:21,426 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT, memSizeForReduceSlotOnJT,
    limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:21,427 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-18 16:08:21,427 INFO org.apache.hadoop.mapred.JobTracker: Starting
    jobtracker with owner as bmacek
    2012-07-18 16:08:21,428 WARN org.apache.hadoop.mapred.JobTracker: Error
    starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at
    sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)


    \________ DATANODE__________________________________________________
    2012-07-18 16:07:58,759 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-18 16:07:59,738 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    hadoop-metrics2.properties
    2012-07-18 16:07:59,790 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-18 16:07:59,807 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
    at 10 second(s).
    2012-07-18 16:07:59,807 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system
    started
    2012-07-18 16:08:00,382 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
    registered.
    2012-07-18 16:08:00,454 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
    registered.
    2012-07-18 16:08:00,456 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    NameNode registered.
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: VM type
    = 64-bit
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
    memory = 17.77875 MB
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: capacity
    = 2^21 = 2097152 entries
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-18 16:08:00,812 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-18 16:08:00,812 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
    2012-07-18 16:08:00,824 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-18 16:08:00,846 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-18 16:08:00,846 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-18 16:08:02,746 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-18 16:08:02,868 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring
    more than 10 times
    2012-07-18 16:08:02,932 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Number of files = 1
    2012-07-18 16:08:02,963 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Number of files under construction = 0
    2012-07-18 16:08:02,966 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 loaded in 0 seconds.
    2012-07-18 16:08:02,975 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Edits file /home/work/bmacek/hadoop/master/current/edits of size 4 edits # 0
    loaded in 0 seconds.
    2012-07-18 16:08:02,977 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 saved in 0 seconds.
    2012-07-18 16:08:03,191 INFO org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 saved in 0 seconds.
    2012-07-18 16:08:03,334 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0 entries
    0 lookups
    2012-07-18 16:08:03,334 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
    FSImage in 2567 msecs
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks
    = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
    blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Safe mode termination scan for invalid, over- and under-replicated blocks
    completed in 61 msec
    2012-07-18 16:08:03,402 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Leaving safe mode after 2 secs.
    2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Network topology has 0 racks and 0 datanodes
    2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    UnderReplicatedBlocks has 0 blocks
    2012-07-18 16:08:03,472 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-18 16:08:03,488 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
    processing time, 1 msec clock time, 1 cycles
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
    processing time, 0 msec clock time, 1 cycles
    2012-07-18 16:08:03,495 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    FSNamesystemMetrics registered.
    2012-07-18 16:08:03,553 WARN
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
    thread received InterruptedException.java.lang.InterruptedException: sleep
    interrupted
    2012-07-18 16:08:03,555 INFO
    org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
    Monitor
    java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at
    org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:619)
    2012-07-18 16:08:03,556 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions:
    0 Total time for transactions(ms): 0Number of transactions batched in Syncs:
    0 Number of syncs: 0 SyncTimes(ms): 0
    2012-07-18 16:08:03,594 ERROR
    org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.SocketException:
    Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at
    sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at
    org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

    2012-07-18 16:08:03,627 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at
    its-cs100.its.uni-kassel.de/141.51.205.10
    ************************************************************/


    --
    http://hortonworks.com/download/



  • Mohammad Tariq at Jul 20, 2012 at 3:44 pm
    Hi Macek,

    hadoop.tmp.dir actually belongs in core-site.xml, so it would be better
    to move it there.
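    For example (just a sketch, reusing the values from the config you pasted),
    core-site.xml would then look roughly like this:

    <configuration>
    <property>
    <name>fs.default.name</name>
    <value>hdfs://its-cs100:9005</value>
    </property>
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
    </property>
    </configuration>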
    On Friday, July 20, 2012, Björn-Elmar Macek wrote:
    Hi Mohammad,

    Thanks for your fast reply. Here they are:

    \_____________hadoop-env.sh___
    I added those 2 lines:

    # The java implementation to use. Required.
    export JAVA_HOME=/opt/jdk1.6.0_01/
    export JAVA_OPTS="-Djava.net.preferIPv4Stack=true $JAVA_OPTS"


    \_____________core-site.xml_____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
    <property>
    <name>fs.default.name</name>
    <value>hdfs://its-cs100:9005</value>
    </property>
    </configuration>


    \_____________hdfs-site.xml____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- configure data paths for masters and slaves -->

    <configuration>
    <property>
    <name>dfs.name.dir</name>
    <value>/home/work/bmacek/hadoop/master</value>
    </property>
    <!-- maybe one cannot configure masters and slaves with the same file -->
    <property>
    <name>dfs.data.dir</name>
    <value>/home/work/bmacek/hadoop/hdfs/slave</value>
    </property>
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
    </property>

    <property>
    <name>dfs.replication</name>
    <value>1</value>
    </property>
    </configuration>


    \_______mapred-site.xml____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
    <!-- master -->
    <property>
    <name>mapred.job.tracker</name>
    <value>its-cs100:9004</value>
    </property>
    <!-- datanode -->
    <property>
    <name>dfs.hosts</name>
    <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
    </property>

    <property>
    <name>mapred.hosts</name>
    <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
    </property>
    </configuration>

    \_______masters____
    its-cs101

    \_______slaves______
    its-cs102
    its-cs103


    That's about it, I think. I hope I didn't forget anything.

    Regards,
    Björn-Elmar

    On 20.07.2012 16:58, Mohammad Tariq wrote:

    Hello sir,

    If possible, could you please paste your config files??

    Regards,
    Mohammad Tariq


    On Fri, Jul 20, 2012 at 8:24 PM, Björn-Elmar Macek
    wrote:

    Hi all,

    well, I just stumbled upon this post:
    http://ankitasblogger.blogspot.de/2012/01/error-that-occured-in-hadoop-and-its.html
    And it says:
    "Problem: Hadoop-datanode job failed or datanode not running:
    java.io.IOException: File ../mapred/system/jobtracker.info could only be
    replicated to 0 nodes, instead of 1.
    ...
    Cause: You may also get this message due to permissions. May be JobTracker
    can not create jobtracker.info on startup."

    Since the file does not exist, I think this might be a probable reason for
    my errors. But why should the JobTracker not be able to create that file?
    It created several other directories on this node with ease via the slave.sh
    script, which I started with the very same user that calls start-all.sh.
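    For what it is worth, "could only be replicated to 0 nodes" also shows up
    when the NameNode simply sees no live DataNodes at all, which would fit the
    DisallowedDatanodeException in the NameNode log. A quick way to check how
    many DataNodes have actually registered (just a sketch, run from the
    NameNode host):

    bin/hadoop dfsadmin -report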

    Any help would be really appreciated.


    On 20.07.2012 16:15, Björn-Elmar Macek wrote:

    Hi Srinivas,

thanks for your reply! I have been following your link and idea and have been
playing around a lot, but I still have problems with the connection (though
they are different now):

    \_______ JAVA VERSION_________
    "which java" tells me it is 1.6.0_01. If i got it right version 1.7 got
    problems with ssh.

    \_______MY TESTS_____________
Following your suggestion to look for processes running on that port, I
changed ports a lot:
When I posted the first post of this thread, I was using port 999 for the
namenode and 1000 for the jobtracker.
For some reason, commands like "lsof -i" etc. don't give me any
output when used in the cluster environment, so I started looking for ports
that are in general unused by other programs.
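
For reference, two alternative ways to check whether a port is in use are
sketched here (using the namenode port 9005 from the core-site.xml above as an
example, and assuming net-tools and bash are available on the nodes):

# list listening TCP sockets together with the owning process
netstat -tlnp | grep ':9005'
# pure-bash reachability test against the namenode port (no extra tools needed)
bash -c 'echo > /dev/tcp/its-cs100/9005' && echo "open" || echo "closed"
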
When I changed the ports to 9004 and 9005, I got different errors which look
very much like the ones you posted at the beginning of this year in the
Lucene section (
http://lucene.472066.n3.nabble.com/Unable-to-start-hadoop-0-20-2-but-able-to-start-hadoop-0-20-203-cluster-td2991350.html
).

    It seems as if a DataNode can not communicate with the NameNode.

    The logs look like the following:

    \_______TEST RESULTS__________
    ########## A DataNode #############
    2012-07-20 14:47:59,536 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG: host = its-cs102.its.uni-kassel.de/141.51.205.12
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:59,824 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig:
    loaded properties from hadoop-metrics2.properties
    2012-07-20 14:47:59,841 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,843 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period
    at 10 second(s).
    2012-07-20 14:47:59,844 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
    started
    2012-07-20 14:47:59,969 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
    registered.
    2012-07-20 14:48:26,792 INFO org.apache.hadoop.ipc.Client: Retrying connect
    to server: its-cs100/141.51.205.10:9005. Already tried 0 time(s).
    2012-07-20 14:48:26,889 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
    FSDatasetStatusMBean
    2012-07-20 14:48:26,934 I
    --
    Regards,
    Mohammad Tariq
  • Harsh J at Jul 20, 2012 at 4:01 pm
    Hi,

    <property>
    <name>dfs.hosts</name>
    <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
    </property>

    This one is probably the cause of all your trouble. It makes the
    "hosts" file a white-list of allowed nodes. Ensure, hence, that
    "its-cs103.its.uni-kassel.de" is in this file for sure.

    Also, dfs.hosts must be in hdfs-site.xml, and mapred.hosts in
    mapred-site.xml, but you've got both of them in the latter. You should
    fix this up as well.

Or, if you do not need such a white-listing feature, just remove both
properties and restart.
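
Concretely, the corrected placement could look roughly like this (a sketch
based on the files quoted below; whether short names or fully qualified names
are needed depends on how the DataNodes register themselves, so listing the
FQDNs is the safer assumption here):

\_______hdfs-site.xml (add)____
<property>
<name>dfs.hosts</name>
<value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
</property>

\_______mapred-site.xml (keep only this one)____
<property>
<name>mapred.hosts</name>
<value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
</property>

\_______conf/hosts____
its-cs102.its.uni-kassel.de
its-cs103.its.uni-kassel.de
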

    On Fri, Jul 20, 2012 at 9:08 PM, Björn-Elmar Macek
    wrote:
    Hi Mohammad,

    Thanks for your fast reply. Here they are:

    \_____________hadoop-env.sh___
    I added those 2 lines:

    # The java implementation to use. Required.
    export JAVA_HOME=/opt/jdk1.6.0_01/
    export JAVA_OPTS="-Djava.net.preferIPv4Stack=true $JAVA_OPTS"


    \_____________core-site.xml_____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
    <property>
    <name>fs.default.name</name>
    <value>hdfs://its-cs100:9005</value>
    </property>
    </configuration>


    \_____________hdfs-site.xml____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- configure data paths for masters and slaves -->

    <configuration>
    <property>
    <name>dfs.name.dir</name>
    <value>/home/work/bmacek/hadoop/master</value>
    </property>
<!-- maybe one cannot configure masters and slaves with the same file -->
    <property>
    <name>dfs.data.dir</name>
    <value>/home/work/bmacek/hadoop/hdfs/slave</value>
    </property>
    <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/work/bmacek/hadoop/hdfs/tmp</value>
    </property>

    <property>
    <name>dfs.replication</name>
    <value>1</value>
    </property>
    </configuration>


    \_______mapred-site.xml____
    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

    <!-- Put site-specific property overrides in this file. -->

    <configuration>
    <!-- master -->
    <property>
    <name>mapred.job.tracker</name>
    <value>its-cs100:9004</value>
    </property>
    <!-- datanode -->
    <property>
    <name>dfs.hosts</name>
    <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
    </property>

    <property>
    <name>mapred.hosts</name>
    <value>/home/fb16/bmacek/hadoop-1.0.2/conf/hosts</value>
    </property>
    </configuration>

    \_______masters____
    its-cs101

    \_______slaves______
    its-cs102
    its-cs103


That's about it, I think. I hope I didn't forget anything.

    Regards,
    Björn-Elmar

On 20.07.2012 16:58, Mohammad Tariq wrote:
    Hello sir,

    If possible, could you please paste your config files??

    Regards,
    Mohammad Tariq


    On Fri, Jul 20, 2012 at 8:24 PM, Björn-Elmar Macek
    wrote:
    Hi together,

    well just stumbled upon this post:

    http://ankitasblogger.blogspot.de/2012/01/error-that-occured-in-hadoop-and-its.html

    And it says:
    "Problem: Hadoop-datanode job failed or datanode not running:
    java.io.IOException: File ../mapred/system/jobtracker.info could only be
    replicated to 0 nodes, instead of 1.
    ...
    Cause: You may also get this message due to permissions. May be
    JobTracker
    can not create jobtracker.info on startup."

Since the file does not exist, I think this might be a likely reason for
my errors. But why should the JobTracker not be able to create that file? It
created several other directories on this node with ease via the slave.sh
script, which I started with the very same user that calls start-all.sh.

    Any help would be really appreciated.


On 20.07.2012 16:15, Björn-Elmar Macek wrote:

    Hi Srinivas,

thanks for your reply! I have been following your link and idea and have been
playing around a lot, but I still have problems with the connection (though
they are different now):

    \_______ JAVA VERSION_________
    "which java" tells me it is 1.6.0_01. If i got it right version 1.7 got
    problems with ssh.

    \_______MY TESTS_____________
Following your suggestion to look for processes running on that port, I
changed ports a lot:
When I posted the first post of this thread, I was using port 999 for the
namenode and 1000 for the jobtracker.
For some reason, commands like "lsof -i" etc. don't give me any
output when used in the cluster environment, so I started looking for ports
that are in general unused by other programs.
When I changed the ports to 9004 and 9005, I got different errors which look
very much like the ones you posted at the beginning of this year in the
Lucene section (
http://lucene.472066.n3.nabble.com/Unable-to-start-hadoop-0-20-2-but-able-to-start-hadoop-0-20-203-cluster-td2991350.html
).

    It seems as if a DataNode can not communicate with the NameNode.

    The logs look like the following:

    \_______TEST RESULTS__________
    ########## A DataNode #############
    2012-07-20 14:47:59,536 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting DataNode
    STARTUP_MSG: host = its-cs102.its.uni-kassel.de/141.51.205.12
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:59,824 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig:
    loaded properties from hadoop-metrics2.properties
    2012-07-20 14:47:59,841 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,843 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
    period
    at 10 second(s).
    2012-07-20 14:47:59,844 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics
    system
    started
    2012-07-20 14:47:59,969 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    ugi
    registered.
    2012-07-20 14:48:26,792 INFO org.apache.hadoop.ipc.Client: Retrying
    connect
    to server: its-cs100/141.51.205.10:9005. Already tried 0 time(s).
    2012-07-20 14:48:26,889 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
    FSDatasetStatusMBean
    2012-07-20 14:48:26,934 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at
    50010
    2012-07-20 14:48:26,949 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
    1048576 bytes/s
    2012-07-20 14:48:27,014 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:27,147 INFO org.apache.hadoop.http.HttpServer: Added
    global
    filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:27,160 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled =
    false
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open() is
    -1.
    Opening the listener on 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50075
    webServer.getConnectors()[0].getLocalPort() returned 50075
    2012-07-20 14:48:27,160 INFO org.apache.hadoop.http.HttpServer: Jetty
    bound
    to port 50075
    2012-07-20 14:48:27,160 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:27,805 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50075
    2012-07-20 14:48:27,811 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    jvm
    registered.
    2012-07-20 14:48:27,813 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    DataNode registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort50020 registered.
    2012-07-20 14:48:28,484 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort50020 registered.
    2012-07-20 14:48:28,487 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
    DatanodeRegistration(its-cs102.its.uni-kassel.de:50010, storageID=,
    infoPort=50075, ipcPort=50020)
    2012-07-20 14:48:28,489 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:38,706 ERROR
    org.apache.hadoop.hdfs.server.datanode.DataNode:
    org.apache.hadoop.ipc.RemoteException:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)

    at org.apache.hadoop.ipc.Client.call(Client.java:1066)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:225)
    at $Proxy5.register(Unknown Source)
    at

    org.apache.hadoop.hdfs.server.datanode.DataNode.register(DataNode.java:673)
    at

    org.apache.hadoop.hdfs.server.datanode.DataNode.runDatanodeDaemon(DataNode.java:1480)
    at

    org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1540)
    at

    org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1665)
    at
    org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)

    2012-07-20 14:48:38,712 INFO
    org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down DataNode at
    its-cs102.its.uni-kassel.de/141.51.205.12
    ************************************************************/


    ####### NameNode ##########################
CAUTION: Please note that the file mentioned in the first error log
message (/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info)
did not exist on the NameNode when I checked for it.
The only path that has a similar name is:
/home/work/bmacek/hadoop/hdfs/slave/tmp (containing no further subfolders
or files)



    2012-07-20 14:47:58,033 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-20 14:47:58,985 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig:
    loaded properties from hadoop-metrics2.properties
    2012-07-20 14:47:59,037 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
    period
    at 10 second(s).
    2012-07-20 14:47:59,052 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
    system
    started
    2012-07-20 14:47:59,622 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    ugi
    registered.
    2012-07-20 14:47:59,685 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    jvm
    registered.
    2012-07-20 14:47:59,703 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    NameNode registered.
    2012-07-20 14:47:59,896 INFO org.apache.hadoop.hdfs.util.GSet: VM type
    = 64-bit
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
    memory
    = 17.77875 MB
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet: capacity
    = 2^21 = 2097152 entries
    2012-07-20 14:47:59,897 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    supergroup=supergroup
    2012-07-20 14:48:00,067 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-20 14:48:00,083 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-20 14:48:00,084 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-20 14:48:01,573 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-20 14:48:01,643 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
    occuring
    more than 10 times
    2012-07-20 14:48:01,686 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Number of files = 1
    2012-07-20 14:48:01,712 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Number of files under construction = 0
    2012-07-20 14:48:01,713 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 loaded in 0 seconds.
    2012-07-20 14:48:01,796 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Invalid opcode,
    reached
    end of edit log Number of transactions found 53
    2012-07-20 14:48:01,796 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Edits file /home/work/bmacek/hadoop/master/current/edits of size 1049092
    edits # 53 loaded in 0 seconds.
    2012-07-20 14:48:01,797 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 861 saved in 0 seconds.
    2012-07-20 14:48:02,003 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 861 saved in 0 seconds.
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 1
    entries
    11 lookups
    2012-07-20 14:48:02,159 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
    FSImage in 2135 msecs
    2012-07-20 14:48:02,203 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
    blocks
    = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
    blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Safe
    mode termination scan for invalid, over- and under-replicated blocks
    completed in 44 msec
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Leaving safe mode after 2 secs.
    2012-07-20 14:48:02,204 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Network topology has 0 racks and 0 datanodes
    2012-07-20 14:48:02,205 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    UnderReplicatedBlocks has 0 blocks
    2012-07-20 14:48:02,265 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-20 14:48:02,275 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
    processing time, 1 msec clock time, 1 cycles
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
    2012-07-20 14:48:02,277 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
    processing time, 0 msec clock time, 1 cycles
    2012-07-20 14:48:02,281 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    FSNamesystemMetrics registered.
    2012-07-20 14:48:02,336 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcDetailedActivityForPort9005 registered.
    2012-07-20 14:48:02,337 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    RpcActivityForPort9005 registered.
    2012-07-20 14:48:02,341 INFO org.apache.hadoop.ipc.Server: Starting
    SocketReader
    2012-07-20 14:48:02,356 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Namenode up at:
    its-cs100.its.uni-kassel.de/141.51.205.10:9005
    2012-07-20 14:48:02,878 INFO org.mortbay.log: Logging to
    org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
    org.mortbay.log.Slf4jLog
    2012-07-20 14:48:03,312 INFO org.apache.hadoop.http.HttpServer: Added
    global
    filtersafety (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
    2012-07-20 14:48:03,426 INFO org.apache.hadoop.http.HttpServer:
    dfs.webhdfs.enabled = false
    2012-07-20 14:48:03,465 INFO org.apache.hadoop.http.HttpServer: Port
    returned by webServer.getConnectors()[0].getLocalPort() before open() is
    -1.
    Opening the listener on 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer:
    listener.getLocalPort() returned 50070
    webServer.getConnectors()[0].getLocalPort() returned 50070
    2012-07-20 14:48:03,511 INFO org.apache.hadoop.http.HttpServer: Jetty
    bound
    to port 50070
    2012-07-20 14:48:03,511 INFO org.mortbay.log: jetty-6.1.26
    2012-07-20 14:48:06,528 INFO org.mortbay.log: Started
    SelectChannelConnector@0.0.0.0:50070
    2012-07-20 14:48:06,528 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Web-server up at:
    0.0.0.0:50070
    2012-07-20 14:48:06,561 INFO org.apache.hadoop.ipc.Server: IPC Server
    Responder: starting
    2012-07-20 14:48:06,593 INFO org.apache.hadoop.ipc.Server: IPC Server
    listener on 9005: starting
    2012-07-20 14:48:06,656 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005: starting
    2012-07-20 14:48:06,685 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005: starting
    2012-07-20 14:48:06,731 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005: starting
    2012-07-20 14:48:06,759 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 3 on 9005: starting
    2012-07-20 14:48:06,791 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005: starting
    2012-07-20 14:48:06,849 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005: starting
    2012-07-20 14:48:06,874 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 6 on 9005: starting
    2012-07-20 14:48:06,898 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005: starting
    2012-07-20 14:48:06,921 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005: starting
    2012-07-20 14:48:06,974 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005: starting
    2012-07-20 14:48:27,222 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:27,224 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56513: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:38,701 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs102.its.uni-kassel.de:50010
    2012-07-20 14:48:38,701 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    register(DatanodeRegistration(its-cs102.its.uni-kassel.de:50010,
    storageID=DS-1791721778-141.51.205.12-50010-1342788518692,
    infoPort=50075,
    ipcPort=50020)) from 141.51.205.12:33789: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode
    denied communication with namenode: its-cs102.its.uni-kassel.de:50010
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:48:54,331 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:48:54,331 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56514: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:18,079 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions:
    13 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 9 SyncTimes(ms): 111
    2012-07-20 14:49:18,151 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:18,151 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56515: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:49:41,419 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:49:41,419 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56516: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:04,474 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:04,474 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 2 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56517: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:26,299 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions:
    25 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 18 SyncTimes(ms): 170
    2012-07-20 14:50:26,359 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:26,359 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 9 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56518: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:50:47,243 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:50:47,243 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 7 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56519: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:06,865 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek
    cause:org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode denied communication with namenode:
    its-cs103.its.uni-kassel.de:50010
    2012-07-20 14:51:06,865 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 1 on 9005, call
    register(DatanodeRegistration(its-cs103.its.uni-kassel.de:50010,
    storageID=DS-1725464844-141.51.205.13-50010-1342788666863,
    infoPort=50075,
    ipcPort=50020)) from 141.51.205.13:48227: error:
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode
    denied communication with namenode: its-cs103.its.uni-kassel.de:50010
    org.apache.hadoop.hdfs.server.protocol.DisallowedDatanodeException:
    Datanode
    denied communication with namenode: its-cs103.its.uni-kassel.de:50010
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.registerDatanode(FSNamesystem.java:2391)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.register(NameNode.java:973)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:08,305 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:08,305 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 5 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56520: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:34,855 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions:
    37 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 27 SyncTimes(ms): 256
    2012-07-20 14:51:34,932 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:34,932 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56521: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:51:57,128 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:51:57,128 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 8 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56522: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:21,974 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:21,976 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 4 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56523: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)
    2012-07-20 14:52:43,473 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions:
    49 Total time for transactions(ms): 22Number of transactions batched in
    Syncs: 0 Number of syncs: 36 SyncTimes(ms): 341
    2012-07-20 14:52:43,570 ERROR
    org.apache.hadoop.security.UserGroupInformation:
    PriviledgedActionException
    as:bmacek cause:java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    2012-07-20 14:52:43,570 INFO org.apache.hadoop.ipc.Server: IPC Server
    handler 0 on 9005, call
    addBlock(/home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info,
    DFSClient_-1997886712, null) from 141.51.205.10:56524: error:
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    java.io.IOException: File
    /home/work/bmacek/hadoop/hdfs/tmp/mapred/system/jobtracker.info could
    only
    be replicated to 0 nodes, instead of 1
    at

    org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1558)
    at

    org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:696)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at

    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at

    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:563)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1388)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1384)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at

    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1093)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1382)





    I am not

On 18.07.2012 19:47, Suresh Srinivas wrote:

Can you share information on the Java version that you are using?
- Is it as obvious as some previous processes still running, so that new
processes cannot bind to the port?
    - Another pointer -

    http://stackoverflow.com/questions/8360913/weird-java-net-socketexception-permission-denied-connect-error-when-running-groo
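
A quick way to check for such leftover daemons on the affected machine is
sketched below (assuming the JDK's jps tool is on the PATH):

# list running Java processes; stale NameNode/JobTracker instances show up here
jps -l
# fall back to ps if jps is not available
ps aux | grep -iE 'jobtracker|namenode' | grep -v grep
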

    On Wed, Jul 18, 2012 at 7:29 AM, Björn-Elmar Macek
    <macek@cs.uni-kassel.de>
    wrote:
    Hi,

    i have lately been running into problems since i started running hadoop
    on
    a cluster:

    The setup is the following:
    1 Computer is NameNode and Jobtracker
    1 Computer is SecondaryNameNode
    2 Computers are TaskTracker and DataNode

    I ran into problems with running the wordcount example: NameNode and
    Jobtracker do not start properly both having connection problems of some
    kind.
    And this is although ssh is configured that way, that no prompt happens
    when i connect from any node in the cluster to any other.

    Is there any reason why this happens?

    The logs look like the following:
    \________ JOBTRACKER__________________________________________________
    2012-07-18 16:08:05,808 INFO org.apache.hadoop.mapred.JobTracker:
    STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting JobTracker
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-18 16:08:06,479 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    hadoop-metrics2.properties
    2012-07-18 16:08:06,534 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-18 16:08:06,554 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
    period
    at 10 second(s).
    2012-07-18 16:08:06,554 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: JobTracker metrics
    system
    started
    2012-07-18 16:08:07,157 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    QueueMetrics,q=default registered.
    2012-07-18 16:08:10,395 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    ugi
    registered.
    2012-07-18 16:08:10,417 INFO

    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:10,436 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT,
    limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:10,438 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-18 16:08:10,440 INFO

    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Starting expired delegation token remover thread,
    tokenRemoverScanInterval=60 min(s)
    2012-07-18 16:08:10,465 INFO

    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:10,510 INFO org.apache.hadoop.mapred.JobTracker:
    Starting
    jobtracker with owner as bmacek
    2012-07-18 16:08:10,620 WARN org.apache.hadoop.mapred.JobTracker: Error
    starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at

    sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at
    org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

    2012-07-18 16:08:13,861 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    QueueMetrics,q=default already exists!
    2012-07-18 16:08:13,885 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
    already
    exists!
    2012-07-18 16:08:13,885 INFO

    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:13,910 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT,
    limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:13,911 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-18 16:08:13,911 INFO org.apache.hadoop.mapred.JobTracker:
    Starting
    jobtracker with owner as bmacek
    2012-07-18 16:08:13,912 WARN org.apache.hadoop.mapred.JobTracker: Error
    starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at

    sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)

    2012-07-18 16:08:13,912 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Starting expired delegation token remover thread,
    tokenRemoverScanInterval=60 min(s)
    2012-07-18 16:08:13,913 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:21,348 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name
    QueueMetrics,q=default already exists!
    2012-07-18 16:08:21,390 WARN
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Source name ugi
    already exists!
    2012-07-18 16:08:21,390 INFO
    org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager:
    Updating the current master key for generating delegation tokens
    2012-07-18 16:08:21,426 INFO org.apache.hadoop.mapred.JobTracker:
    Scheduler configured with (memSizeForMapSlotOnJT,
    memSizeForReduceSlotOnJT,
    limitMaxMemForMapTasks, limitMaxMemForReduceTasks) (-1, -1, -1, -1)
    2012-07-18 16:08:21,427 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-18 16:08:21,427 INFO org.apache.hadoop.mapred.JobTracker:
    Starting jobtracker with owner as bmacek
    2012-07-18 16:08:21,428 WARN org.apache.hadoop.mapred.JobTracker: Error
    starting tracker: java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2306)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2192)
    at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2186)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:300)
    at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:291)
    at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4978)


    \________ NAMENODE__________________________________________________
    2012-07-18 16:07:58,759 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
    /************************************************************
    STARTUP_MSG: Starting NameNode
    STARTUP_MSG: host = its-cs100.its.uni-kassel.de/141.51.205.10
    STARTUP_MSG: args = []
    STARTUP_MSG: version = 1.0.2
    STARTUP_MSG: build =
    https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0.2 -r
    1304954; compiled by 'hortonfo' on Sat Mar 24 23:58:21 UTC 2012
    ************************************************************/
    2012-07-18 16:07:59,738 INFO
    org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
    hadoop-metrics2.properties
    2012-07-18 16:07:59,790 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    MetricsSystem,sub=Stats registered.
    2012-07-18 16:07:59,807 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
    period at 10 second(s).
    2012-07-18 16:07:59,807 INFO
    org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics
    system started
    2012-07-18 16:08:00,382 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    ugi registered.
    2012-07-18 16:08:00,454 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    jvm registered.
    2012-07-18 16:08:00,456 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    NameNode registered.
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: VM type
    = 64-bit
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
    memory = 17.77875 MB
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet: capacity
    = 2^21 = 2097152 entries
    2012-07-18 16:08:00,645 INFO org.apache.hadoop.hdfs.util.GSet:
    recommended=2097152, actual=2097152
    2012-07-18 16:08:00,812 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=bmacek
    2012-07-18 16:08:00,812 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    supergroup=supergroup
    2012-07-18 16:08:00,824 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isPermissionEnabled=true
    2012-07-18 16:08:00,846 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    dfs.block.invalidate.limit=100
    2012-07-18 16:08:00,846 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
    isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
    accessTokenLifetime=0 min(s)
    2012-07-18 16:08:02,746 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
    FSNamesystemStateMBean and NameNodeMXBean
    2012-07-18 16:08:02,868 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
    occuring more than 10 times
    2012-07-18 16:08:02,932 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Number of files = 1
    2012-07-18 16:08:02,963 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Number of files under construction = 0
    2012-07-18 16:08:02,966 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 loaded in 0 seconds.
    2012-07-18 16:08:02,975 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Edits file /home/work/bmacek/hadoop/master/current/edits of size 4 edits
    # 0 loaded in 0 seconds.
    2012-07-18 16:08:02,977 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 saved in 0 seconds.
    2012-07-18 16:08:03,191 INFO
    org.apache.hadoop.hdfs.server.common.Storage:
    Image file of size 112 saved in 0 seconds.
    2012-07-18 16:08:03,334 INFO
    org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
    entries 0 lookups
    2012-07-18 16:08:03,334 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
    FSImage in 2567 msecs
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of
    blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
    blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    under-replicated blocks = 0
    2012-07-18 16:08:03,401 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    over-replicated blocks = 0
    2012-07-18 16:08:03,401 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Safe mode termination scan for invalid, over- and under-replicated
    blocks completed in 61 msec
    2012-07-18 16:08:03,402 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Leaving safe mode after 2 secs.
    2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    Network topology has 0 racks and 0 datanodes
    2012-07-18 16:08:03,412 INFO org.apache.hadoop.hdfs.StateChange: STATE*
    UnderReplicatedBlocks has 0 blocks
    2012-07-18 16:08:03,472 INFO org.apache.hadoop.util.HostsFileReader:
    Refreshing hosts (include/exclude) list
    2012-07-18 16:08:03,488 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 1 msec
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 1 msec
    processing time, 1 msec clock time, 1 cycles
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
    2012-07-18 16:08:03,490 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
    QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec
    processing time, 0 msec clock time, 1 cycles
    2012-07-18 16:08:03,495 INFO
    org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
    FSNamesystemMetrics registered.
    2012-07-18 16:08:03,553 WARN
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor
    thread received InterruptedException.java.lang.InterruptedException:
    sleep interrupted
    2012-07-18 16:08:03,555 INFO
    org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
    Monitor
    java.lang.InterruptedException: sleep interrupted
    at java.lang.Thread.sleep(Native Method)
    at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
    at java.lang.Thread.run(Thread.java:619)
    2012-07-18 16:08:03,556 INFO
    org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
    transactions: 0 Total time for transactions(ms): 0Number of transactions
    batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
    2012-07-18 16:08:03,594 ERROR
    org.apache.hadoop.hdfs.server.namenode.NameNode:
    java.net.SocketException: Permission denied
    at sun.nio.ch.Net.bind(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:119)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
    at org.apache.hadoop.ipc.Server.bind(Server.java:225)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
    at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)

    2012-07-18 16:08:03,627 INFO
    org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
    /************************************************************
    SHUTDOWN_MSG: Shutting down NameNode at
    its-cs100.its.uni-kassel.de/141.51.205.10
    ************************************************************/



    --
    http://hortonworks.com/download/

    --
    Harsh J
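
Both stack traces above fail at the same call, sun.nio.ch.Net.bind, with
java.net.SocketException: Permission denied (EACCES). Outside of Hadoop, that
exception is what an unprivileged process gets when it tries to bind a
listening socket to a privileged port (below 1024) or to an address/port the
operating system will not let it use. The snippet below is only an
illustrative sketch of that failure mode, assuming a non-root user on Linux;
the class name BindProbe and port 1023 are invented for the example and are
not taken from the configuration in this thread:

    import java.net.InetSocketAddress;
    import java.nio.channels.ServerSocketChannel;

    // Illustrative sketch only (not part of the original thread): binding a
    // listening socket to a privileged port as a non-root user fails inside
    // sun.nio.ch.Net.bind with "Permission denied", which is the same point
    // of failure shown in the JobTracker and NameNode logs above.
    public class BindProbe {
        public static void main(String[] args) throws Exception {
            ServerSocketChannel server = ServerSocketChannel.open();
            try {
                // Port 1023 is chosen arbitrarily as a privileged port.
                server.socket().bind(new InetSocketAddress("0.0.0.0", 1023));
                System.out.println("bound to "
                        + server.socket().getLocalSocketAddress());
            } finally {
                server.close();
            }
        }
    }

If a probe like this succeeds on an unprivileged port (for example 9000) but
fails on a low or restricted one, a common remedy is to make sure
fs.default.name and mapred.job.tracker point at an unprivileged port on an
address the node actually owns.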
