[MapReduce-user] log files not found
Apr 1, 2010 at 6:33 am
welman Lu replied:

Try looking at the JobTracker's web UI. The default address should be:
Raghava Mutharaju wrote:

Hi all, I am running a series of jobs one after another. While executing the 4th job, the job fails in the reducer: the progress shows map 100%, reduce 99%. It gives out the following message:

10/04/01 01:04:15 INFO mapred.JobClient: Task Id : attempt_201003240138_0110_r_000018_1, Status : FAILED
Task attempt_201003240138_0110_r_000018_1 failed to report status for 602 seconds. Killing!

It makes several more attempts to execute the task but fails with a similar message.
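The kill message indicates the task produced no progress updates for longer than mapred.task.timeout (600000 ms by default, hence the ~602 seconds). The usual fix is to report progress from inside long-running reduce work. Below is a minimal sketch of that pattern; the `Progressable` interface here is a stand-in (mirroring `org.apache.hadoop.util.Progressable`) so the loop can run outside a cluster. In a real reducer the ping would be `reporter.progress()` in the old API or `context.progress()` in the new one.

```java
// Sketch only: Progressable stands in for org.apache.hadoop.util.Progressable
// so the pattern can be demonstrated without a cluster.
interface Progressable {
    void progress();
}

class LongRunningReduce {
    // Records to process between liveness pings (a tuning assumption).
    static final int REPORT_EVERY = 1000;

    // Simulates the body of a long reduce() call: do expensive work per
    // record, but ping the framework often enough that the TaskTracker
    // never sees mapred.task.timeout (default 600 s) of silence.
    static void process(int numRecords, Progressable reporter) {
        for (int i = 0; i < numRecords; i++) {
            // ... expensive per-record work would happen here ...
            if (i % REPORT_EVERY == 0) {
                reporter.progress(); // resets the task's timeout clock
            }
        }
    }
}
```

Updating a Counter or the task status string also counts as progress, so incrementing a counter inside the values loop achieves the same effect.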
Raghava Mutharaju replied:

Hi Lu, thank you for the reply. The web UI is not configured on that machine :). Is there any other way to check it? Regards, Raghava.
Raghava Mutharaju wrote:

Hi all, I have found the log files on the DataNodes. I have checked the userlogs, but they do not contain any exception related to the error I mentioned in the previous email (repeated here):

10/04/01 01:04:15 INFO mapred.JobClient: Task Id : attempt_201003240138_0110_r_000018_1, Status : FAILED
Task attempt_201003240138_0110_r_000018_1 failed to report status for 602 seconds. Killing!

I have also done some tests by changing the order of the jobs. After the 3rd job, any job
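If the reduce work is legitimately slow and cannot report progress more often, the other common workaround is to raise the timeout. A sketch of the setting in mapred-site.xml, assuming Hadoop 0.20's property name (later versions renamed it mapreduce.task.timeout); the value shown is an illustrative choice:

```xml
<!-- mapred-site.xml (a per-job override via JobConf also works) -->
<property>
  <name>mapred.task.timeout</name>
  <!-- Milliseconds of silence before the TaskTracker kills the attempt;
       the default is 600000 (10 minutes). 0 disables the timeout. -->
  <value>1800000</value>
</property>
```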
Participants: Raghava Mutharaju (3 messages), welman Lu (1 message)