FAQ
Hi,

What could be the possible reasons for getting so many checksum
exceptions? I am getting this kind of exception quite frequently, and the
whole job eventually fails:

org.apache.hadoop.fs.ChecksumException: Checksum error:
/blk_8186355706212889850:of:/tmp/Webevent_07_05_2010.dat at 4075520
at org.apache.hadoop.fs.FSInputChecker.verifySum(FSInputChecker.java:277)
at org.apache.hadoop.fs.FSInputChecker.readChecksumChunk(FSInputChecker.java:241)
at org.apache.hadoop.fs.FSInputChecker.read1(FSInputChecker.java:189)
at org.apache.hadoop.fs.FSInputChecker.read(FSInputChecker.java:158)
at org.apache.hadoop.hdfs.DFSClient$BlockReader.read(DFSClient.java:1158)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.readBuffer(DFSClient.java:1718)
at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1770)
at java.io.DataInputStream.read(DataInputStream.java:83)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:134)
at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(LineRecordReader.java:97)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:423)
at org.apache.hadoop.mapreduce.MapContext.nextKeyValue(MapContext.java:67)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:143)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:621)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
at org.apache.hadoop.mapred.Child.main(Child.java:170)


Thanks,
Hari


  • Steve Loughran at Nov 23, 2010 at 10:56 am

    On 22/11/10 11:02, Hari Sreekumar wrote:
    Hi,

    What could be the possible reasons for getting too many checksum
    exceptions? [quoted message and stack trace trimmed]
    Looks like a warning sign of disk failure - are there other disk health
    checks you could run?
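
    Following up on that suggestion, a minimal sketch of such checks: locate the
    replicas behind the failing file, re-read it through the HDFS checksum layer,
    and check SMART health on the reported datanodes. The device name and the
    assumption of a typical Hadoop 0.20-era deployment are illustrative, not
    taken from the thread; adjust paths and devices for your cluster.

    ```shell
    # 1. Ask the NameNode which blocks and datanodes back the problem file;
    #    corrupt or missing replicas show up in the fsck report.
    hadoop fsck /tmp/Webevent_07_05_2010.dat -files -blocks -locations

    # 2. Re-read the whole file through the checksum layer; a ChecksumException
    #    here, outside MapReduce, confirms the stored data (not the job) is bad.
    hadoop fs -copyToLocal /tmp/Webevent_07_05_2010.dat /tmp/check_copy.dat

    # 3. On the datanode(s) reported in step 1, check SMART health of the drive
    #    holding the dfs.data.dir volume (/dev/sda is an example device).
    smartctl -H /dev/sda
    ```

    If fsck pins the bad replica to one datanode and smartctl reports a failing
    drive there, decommissioning that node and letting HDFS re-replicate is the
    usual fix.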


Discussion Overview
group: common-user @
category: hadoop
posted: Nov 22, '10 at 11:02a
active: Nov 23, '10 at 10:56a
posts: 2
users: 2
website: hadoop.apache.org...
irc: #hadoop

2 users in discussion: Steve Loughran (1 post), Hari Sreekumar (1 post)
