FAQ
I got the following error while running the example sort program (Hadoop
0.20) on a brand-new Hadoop cluster (using the Cloudera distro). The job
seems to have recovered, but is this normal, or should I be checking for
something?



attempt_200910051513_0009_r_000005_0: 09/10/15 09:53:52 INFO hdfs.DFSClient:
Exception in createBlockOutputStream java.io.IOException: Bad connect ack
with firstBadLink 10.10.10.52:50010

attempt_200910051513_0009_r_000005_0: 09/10/15 09:53:52 INFO hdfs.DFSClient:
Abandoning block blk_-7778196938228518311_13172

attempt_200910051513_0009_r_000005_0: 09/10/15 09:53:52 INFO hdfs.DFSClient:
Waiting to find target node: 10.10.10.56:50010

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:01 INFO hdfs.DFSClient:
Exception in createBlockOutputStream java.io.IOException: Bad connect ack
with firstBadLink 10.10.10.55:50010

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:01 INFO hdfs.DFSClient:
Abandoning block blk_-7309503129247220072_13172

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:09 INFO hdfs.DFSClient:
Exception in createBlockOutputStream java.io.IOException: Bad connect ack
with firstBadLink 10.10.10.52:50010

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:09 INFO hdfs.DFSClient:
Abandoning block blk_3948102363851753370_13172

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:15 INFO hdfs.DFSClient:
Exception in createBlockOutputStream java.io.IOException: Bad connect ack
with firstBadLink 10.10.10.55:50010

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:15 INFO hdfs.DFSClient:
Abandoning block blk_-9105283762069697302_13172

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:21 WARN hdfs.DFSClient:
DataStreamer Exception: java.io.IOException: Unable to create new block.

attempt_200910051513_0009_r_000005_0: at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2813)

attempt_200910051513_0009_r_000005_0: at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2077)

attempt_200910051513_0009_r_000005_0: at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2263)

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:21 WARN hdfs.DFSClient:
Error Recovery for block blk_-9105283762069697302_13172 bad datanode[1]
nodes == null

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:21 WARN hdfs.DFSClient:
Could not get block locations. Source file
"/user/hadoop/sort_out/_temporary/_attempt_200910051513_0009_r_000005_0/part-00005"
- Aborting...

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:21 WARN
mapred.TaskTracker: Error running child

attempt_200910051513_0009_r_000005_0: java.io.IOException: Bad connect ack
with firstBadLink 10.10.10.55:50010

attempt_200910051513_0009_r_000005_0: at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2871)

attempt_200910051513_0009_r_000005_0: at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2794)

attempt_200910051513_0009_r_000005_0: at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2077)

attempt_200910051513_0009_r_000005_0: at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2263)

attempt_200910051513_0009_r_000005_0: 09/10/15 09:54:21 INFO
mapred.TaskRunner: Runnning cleanup for the task

09/10/15 09:54:37 INFO mapred.JobClient: map 100% reduce 82%

09/10/15 09:54:41 INFO mapred.JobClient: map 100% reduce 83%

09/10/15 09:54:43 INFO mapred.JobClient: Task Id :
attempt_200910051513_0009_r_000017_0, Status : FAILED

java.io.IOException: Bad connect ack with firstBadLink 10.10.10.56:50010

at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2871)

at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2794)

at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2077)

at
org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2263)


attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 WARN
conf.Configuration:
/h3/hadoop/mapred/hadoop/taskTracker/jobcache/job_200910051513_0009/attempt_200910051513_0009_r_000017_0/job.xml:a
attempt to override final parameter: hadoop.tmp.dir; Ignoring.

[identical "attempt to override final parameter ...; Ignoring" warnings
follow for: hadoop.rpc.socket.factory.class.default,
tasktracker.http.threads, mapred.child.ulimit, fs.checkpoint.dir,
mapred.job.tracker.handler.count, mapred.local.dir, fs.trash.interval,
mapred.tasktracker.reduce.tasks.maximum,
mapred.tasktracker.map.tasks.maximum]

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO jvm.JvmMetrics:
Initializing JVM Metrics with processName=SHUFFLE, sessionId=

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 WARN
conf.Configuration:
/h3/hadoop/mapred/hadoop/taskTracker/jobcache/job_200910051513_0009/attempt_200910051513_0009_r_000017_0/job.xml:a
attempt to override final parameter: hadoop.tmp.dir; Ignoring.

[identical "attempt to override final parameter ...; Ignoring" warnings
follow for: dfs.namenode.handler.count,
hadoop.rpc.socket.factory.class.default, tasktracker.http.threads,
dfs.block.size, dfs.permissions, mapred.child.ulimit, fs.checkpoint.dir,
mapred.job.tracker.handler.count, dfs.datanode.du.reserved, dfs.data.dir,
mapred.local.dir, fs.trash.interval, dfs.name.dir,
mapred.tasktracker.reduce.tasks.maximum, dfs.datanode.handler.count,
mapred.tasktracker.map.tasks.maximum]

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO
mapred.ReduceTask: ShuffleRamManager: MemoryLimit=762078784,
MaxSingleShuffleLimit=190519696

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO
mapred.ReduceTask: attempt_200910051513_0009_r_000017_0 Thread started:
Thread for merging on-disk files

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO
mapred.ReduceTask: attempt_200910051513_0009_r_000017_0 Thread waiting:
Thread for merging on-disk files

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO
mapred.ReduceTask: attempt_200910051513_0009_r_000017_0 Thread started:
Thread for merging in memory files

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO
mapred.ReduceTask: attempt_200910051513_0009_r_000017_0 Need another 560 map
output(s) where 0 is already in progress

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO
mapred.ReduceTask: attempt_200910051513_0009_r_000017_0 Thread started:
Thread for polling Map Completion Events

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO
mapred.ReduceTask: attempt_200910051513_0009_r_000017_0 Scheduled 0 outputs
(0 slow hosts and0 dup hosts)

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:48 INFO
mapred.ReduceTask: attempt_200910051513_0009_r_000017_0: Got 79 new
map-outputs

attempt_200910051513_0009_r_000017_0: 09/10/15 09:47:51 INFO
mapred.ReduceTask: attempt_200910051513_0009_r_000017_0: Got 6 new
map-outputs
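Editor's note: no reply appears in this thread, but the log itself names the suspect DataNodes in its "firstBadLink" lines, and "Bad connect ack" generally means the client could not complete the write pipeline to one of them. A minimal triage sketch follows; it is an editorial suggestion, not from the original post. The embedded excerpt stands in for a hypothetical `task.log` (point the grep at the real task log instead), and `nc` is assumed to be installed:

```shell
#!/bin/sh
# Save a DFSClient excerpt for parsing; in practice, use the real task log.
cat > task.log <<'EOF'
Exception in createBlockOutputStream java.io.IOException: Bad connect ack
with firstBadLink 10.10.10.52:50010
Exception in createBlockOutputStream java.io.IOException: Bad connect ack
with firstBadLink 10.10.10.55:50010
java.io.IOException: Bad connect ack with firstBadLink 10.10.10.56:50010
EOF

# Unique DataNodes flagged as firstBadLink.
nodes=$(grep -o 'firstBadLink [0-9.]*:[0-9]*' task.log | awk '{print $2}' | sort -u)
echo "$nodes"

# Probe each node's data-transfer port (50010 by default).
for node in $nodes; do
  host=${node%:*}
  port=${node#*:}
  if nc -z -w 2 "$host" "$port" 2>/dev/null; then
    echo "$node reachable"
  else
    echo "$node NOT reachable -- check firewall rules and the DataNode daemon on $host"
  fi
done
```

Run against the excerpt above, the grep stage lists 10.10.10.52:50010, 10.10.10.55:50010, and 10.10.10.56:50010. A node that is repeatedly flagged and unreachable usually points at a firewall rule, a stopped DataNode, or an overloaded one (worth checking that node's DataNode log and its open-file/transceiver limits).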

Discussion overview: posted by Patrick Angeles (1 post, 1 user in
discussion) to the common-user Hadoop list (hadoop.apache.org,
irc #hadoop) on Oct 15, '09 at 2:24p.