Hi,

I recently installed Cloudera Manager 4.1 Free Edition and used it to
install CDH4 on 4 nodes successfully.

I can run MapReduce jobs from the command line, but when I try to run a
Hive query through the Hue server:


SELECT * FROM jamesjoyce
WHERE count > 100
SORT BY count ASC
LIMIT 10;


I get the following:


Driver returned: 2. Errors: Hive history file=/tmp/hue/hive_job_log_hue_201212010833_733610235.txt
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
Starting Job = job_201211271304_0003, Tracking URL = http://cdhnode3.aar.cisco.com:50030/jobdetails.jsp?jobid=job_201211271304_0003
Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=cdhnode3.aar.cisco.com:8021 -kill job_201211271304_0003
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2012-12-01 08:34:06,554 Stage-1 map = 0%, reduce = 0%
2012-12-01 08:34:28,968 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201211271304_0003 with errors
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 1 Reduce: 1 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec


Looking at the TaskTracker log file, I see this error:

java.io.IOException: Could not create job user log directory:
file:/var/log/hadoop-0.20-mapreduce/userlogs/job_201211271304_0001
         at org.apache.hadoop.mapred.JobLocalizer.initializeJobLogDir(JobLocalizer.java:241)
         at org.apache.hadoop.mapred.DefaultTaskController.initializeJob(DefaultTaskController.java:225)
         at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1415)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.Subject.doAs(Subject.java:396)
         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
         at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1390)
         at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1305)
         at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2722)
         at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2686)

I guess it is a permission problem.
I created an hdfs user in Hue as a superuser.

Any ideas on how to fix this?

Thank you
Rui Vaz


  • bc Wong at Dec 3, 2012 at 7:52 am

    On Sat, Dec 1, 2012 at 1:59 PM, RuiVaz wrote:

    [original message quoted in full; snipped]

    Any ideas how to help?
    Can you try `ls -ld /var/log/hadoop-0.20-mapreduce/userlogs/` on the TT
    host? It should be owned by mapred:mapred. (Did you change the user of your
    MapReduce daemons in CM?)
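
    The check above can be scripted; here is a minimal sketch, assuming a Linux
    TT host with GNU coreutils and the default CDH4 MRv1 log path (the
    `chown` fix in the comment is a likely remedy, not confirmed in this thread):

```shell
# Sketch: report owner:group of the TaskTracker userlogs directory.
# Assumes GNU stat and the CDH4 MRv1 default path.
USERLOGS=${USERLOGS:-/var/log/hadoop-0.20-mapreduce/userlogs}

if [ -d "$USERLOGS" ]; then
  OWNERSHIP=$(stat -c '%U:%G' "$USERLOGS")
  echo "$USERLOGS is owned by $OWNERSHIP"
  # A healthy MRv1 node should report mapred:mapred; if not, a likely fix is:
  #   sudo chown -R mapred:mapred "$USERLOGS"
else
  echo "$USERLOGS does not exist on this host"
fi
```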

    Cheers,
    bc
  • Serega Sheypak at Jun 3, 2013 at 8:39 am
    I've got the same exception.
    The problem was free space: one of our applications had badly configured
    log4j properties, and its logs occupied all the free disk space. The root
    cause was not access rights but insufficient disk space.

    Free space should also be checked.
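
    That check can be sketched as a small script; the log path and the 90%
    threshold below are illustrative assumptions, not values from this thread:

```shell
# Sketch: warn when the filesystem holding the MRv1 log directory is nearly full.
# LOGDIR and THRESHOLD are arbitrary illustrative defaults.
LOGDIR=${LOGDIR:-/var/log}
THRESHOLD=${THRESHOLD:-90}

# df -P emits POSIX one-line-per-filesystem output; field 5 is "Use%".
USED=$(df -P "$LOGDIR" | awk 'NR==2 { sub(/%/, "", $5); print $5 }')

if [ "$USED" -ge "$THRESHOLD" ]; then
  echo "WARNING: filesystem for $LOGDIR is ${USED}% full"
else
  echo "OK: filesystem for $LOGDIR is ${USED}% full"
fi
```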

    On Sunday, December 2, 2012, 1:59:32 UTC+4, RuiVaz wrote:
    [original message quoted in full; snipped]

Discussion Overview

group: cm-users
categories: hadoop
posted: Dec 1, '12 at 9:59p
active: Jun 3, '13 at 8:39a
posts: 3
users: 3
website: cloudera.com
irc: #hadoop

3 users in discussion

RuiVaz: 1 post; bc Wong: 1 post; Serega Sheypak: 1 post
