FAQ
Hi,

My map tasks run properly, but the reduce task just halts at 4%. In one
of the stderr outputs of the reduce task I saw the following stack
trace.

=================================================
log4j:ERROR Could not find value for key log4j.appender.INFO
log4j:ERROR Could not instantiate appender named "INFO".
log4j:WARN No appenders could be found for logger
(org.apache.hadoop.mapred.TaskLog).
log4j:WARN Please initialize the log4j system properly.
Exception in thread "main" java.lang.StackOverflowError
    at java.nio.Buffer.limit(Buffer.java:248)
    at java.nio.Buffer.<init>(ByteBuffer.java:259)
    at java.nio.HeapByteBuffer.<init>(ByteBuffer.java:350)
    at java.nio.ByteBuffer.wrap(ByteBuffer.java:373)
    at java.lang.StringCoding$StringEncoder.encode(StringCoding.java:237)
    at java.lang.StringCoding.encode(StringCoding.java:272)
    at java.lang.StringCoding.encode(StringCoding.java:284)
    at java.lang.String.getBytes(String.java:987)
    at org.apache.hadoop.mapred.TaskLogAppender.append(TaskLogAppender.java:51)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:230)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:65)
    at org.apache.log4j.Category.callAppenders(Category.java:203)
    at org.apache.log4j.Category.forcedLog(Category.java:388)
    at org.apache.log4j.Category.log(Category.java:853)
    at org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:110)
    at org.apache.hadoop.mapred.TaskLog$Writer.write(TaskLog.java:205)
    at org.apache.hadoop.mapred.TaskLogAppender.append(TaskLogAppender.java:52)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:230)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:65)
    at org.apache.log4j.Category.callAppenders(Category.java:203)
    at org.apache.log4j.Category.forcedLog(Category.java:388)
    at org.apache.log4j.Category.log(Category.java:853)
    at org.apache.commons.logging.impl.Log4JLogger.debug(Log4JLogger.java:110)
    at org.apache.hadoop.mapred.TaskLog$Writer.write(TaskLog.java:205)
    at org.apache.hadoop.mapred.TaskLogAppender.append(TaskLogAppender.java:52)
    at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:230)
    at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:65)
    at org.apache.log4j.Category.callAppenders(Category.java:203)
    at org.apache.log4j.Category.forcedLog(Category.java:388)

....
=================================================

Clearly this is a recursive call that results in a stack overflow. I
commented out the following line in
org.apache.hadoop.mapred.TaskLog$Writer.write():

    LOG.debug("Total no. of bytes written to split#" + noSplits +
        " -> " + splitLength);

And the problem was resolved. I am not sure why this problem didn't show
up earlier, but looking at the stack trace, I think this is what is
happening:

The Hadoop logging mechanism checks in the write() method whether it
should rotate the log. It concludes that it should, and before calling
logRotate() it adds a DEBUG entry to the log. That call to add the log
entry results in another check of whether the log should be rotated, and
so on..
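The cycle the trace implies can be boiled down to a few lines. This is a
hypothetical, self-contained sketch (the class and method names are
stand-ins, not Hadoop's actual code): the appender delegates to a
write() that logs a DEBUG entry, which re-enters the same appender.

```java
// Minimal stand-alone model of the recursive logging cycle described above.
// All names are stand-ins for the frames in the stack trace, not real code.
public class RecursiveAppenderDemo {
    static int frames = 0; // how many times write() ran before the overflow

    // Stand-in for TaskLog$Writer.write(): logs about the write before doing it.
    static void write(String msg) {
        frames++;
        debug("Total no. of bytes written to split# ..."); // re-enters append()
    }

    // Stand-in for Log4JLogger.debug()/Category.log(): routes to the appender.
    static void debug(String entry) {
        append(entry);
    }

    // Stand-in for TaskLogAppender.append(): delegates to write(), closing the loop.
    static void append(String entry) {
        write(entry);
    }

    public static void main(String[] args) {
        try {
            write("some task output");
        } catch (StackOverflowError e) {
            System.out.println("StackOverflowError after " + frames + " calls to write()");
        }
    }
}
```

Running it overflows the stack the same way: no single frame is wrong,
but append() and write() each invoke the other unconditionally.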
Is this a bug? (Apart from commenting out the line above, I haven't
changed any other Hadoop code.)
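Besides deleting the debug line, a common way to break this kind of
cycle is a reentrancy guard in the appender: drop any log entry that is
generated while the appender is already appending. The sketch below is
hypothetical (not Hadoop's or log4j's actual code) and just illustrates
the technique with a per-thread flag.

```java
// Hypothetical sketch of a reentrancy guard that breaks the logging cycle
// without removing the DEBUG line. Names are illustrative stand-ins.
public class GuardedAppenderDemo {
    // True while the current thread is already inside append().
    private static final ThreadLocal<Boolean> IN_APPEND =
            ThreadLocal.withInitial(() -> Boolean.FALSE);

    static int appended = 0; // entries actually accepted by the appender

    // Stand-in for TaskLog$Writer.write(): still logs, but now safely.
    static void write(String msg) {
        debug("bytes written for " + msg); // dropped when re-entrant
    }

    static void debug(String entry) {
        append(entry);
    }

    // Stand-in for TaskLogAppender.append() with the guard added.
    static void append(String entry) {
        if (IN_APPEND.get()) {
            return; // entry generated by our own write(): drop it
        }
        IN_APPEND.set(Boolean.TRUE);
        try {
            appended++;
            write(entry);
        } finally {
            IN_APPEND.set(Boolean.FALSE);
        }
    }

    public static void main(String[] args) {
        append("first entry");
        System.out.println("appended=" + appended + ", no overflow");
    }
}
```

With the guard, the nested append() call returns immediately, so one
external log entry produces exactly one appended entry instead of
unbounded recursion.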

~ Neeraj

Discussion Overview
group: common-user
categories: hadoop
posted: Jun 27, '07 at 1:32a
active: Jun 27, '07 at 1:32a
posts: 1
users: 1
website: hadoop.apache.org...
irc: #hadoop

1 user in discussion: Mahajan, Neeraj (1 post)
