Hi!

I'm having some trouble with Map/Reduce jobs failing due to HDFS
errors. I've been digging around the logs trying to figure out what's
happening, and I see the following in the datanode logs:

2010-11-19 10:27:01,059 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
IOException in BlockReceiver.lastNodeRun: java.io.IOException: No temporary
file /opera/log4/hadoop/dfs/data/tmp/blk_-8143694940938019938 for block
blk_-8143694940938019938_6144372
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.finalizeBlock(FSDataset.java:1240)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.lastDataNodeRun(BlockReceiver.java:809)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:859)
        at java.lang.Thread.run(Thread.java:619)

2010-11-19 10:27:09,170 WARN org.apache.hadoop.hdfs.server.datanode.DataNode:
checkDiskError: exception: java.io.IOException: No temporary
file /opera/log4/hadoop/dfs/data/tmp/blk_-8143694940938019938 for block
blk_-8143694940938019938_6144372
        at org.apache.hadoop.hdfs.server.datanode.FSDataset.finalizeBlock(FSDataset.java:1240)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.lastDataNodeRun(BlockReceiver.java:809)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:859)
        at java.lang.Thread.run(Thread.java:619)
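In case it helps anyone digging through similar logs: a minimal Python sketch that pulls the timestamp, tmp-file path, and block ID out of these "No temporary file" warnings, so repeated failures can be correlated by block. The log format is assumed from the lines above; `failed_blocks` is a hypothetical helper name, not part of Hadoop.

```python
import re

# Sample datanode log lines, in the format shown above (unwrapped).
LOG = """2010-11-19 10:27:01,059 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.lastNodeRun: java.io.IOException: No temporary file /opera/log4/hadoop/dfs/data/tmp/blk_-8143694940938019938 for block blk_-8143694940938019938_6144372
2010-11-19 10:27:09,170 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception: java.io.IOException: No temporary file /opera/log4/hadoop/dfs/data/tmp/blk_-8143694940938019938 for block blk_-8143694940938019938_6144372"""

# Capture: (1) timestamp, (2) tmp file path, (3) block id with generation stamp.
PATTERN = re.compile(
    r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}) WARN .*"
    r"No temporary file (\S+) for block (blk_-?\d+_\d+)"
)

def failed_blocks(log_text):
    """Return a list of (timestamp, tmp_file, block_id) tuples for each
    'No temporary file' warning found in log_text."""
    return [m.groups() for line in log_text.splitlines()
            if (m := PATTERN.match(line))]

for ts, tmp, blk in failed_blocks(LOG):
    print(ts, blk, tmp)
```

Run against the real datanode logs, this makes it easy to see whether the same block (or the same data directory) shows up repeatedly, which would point at a specific volume.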

What would be the possible causes of such exceptions?

(This is on Hadoop 0.20.1)

Regards,
\EF
--
Erik Forsberg <forsberg@opera.com>
Developer, Opera Software - http://www.opera.com/

Discussion Overview
group: hdfs-user
categories: hadoop
posted: Nov 24, '10 at 9:30a
active: Dec 3, '10 at 7:40a
posts: 2
users: 1
website: hadoop.apache.org...
irc: #hadoop

1 user in discussion
Erik Forsberg: 2 posts
