FAQ
I ran yum update on all the servers and it upgraded a bunch of stuff from
the Cloudera repositories. I'd have to assume that either the MapReduce job
doesn't have the proper hadoop.jar in its classpath or something is wrong
with the dependencies.

Either way, this sounds like a Cloudera packaging issue, as I haven't
changed anything here except the /etc/hive/conf/hive-env.sh file to include
the jars for the hive-hbase-handler stuff.
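
For reference, the hive-env.sh change is just an aux-jars line along these
lines (the jar names and version suffixes below are illustrative, not copied
from my box):

# /etc/hive/conf/hive-env.sh: point Hive at the HBase handler jars
# (paths and versions shown here are examples only)
export HIVE_AUX_JARS_PATH=/usr/lib/hive/lib/hive-hbase-handler-0.9.0-cdh4.1.2.jar,\
/usr/lib/hbase/hbase.jar,\
/usr/lib/zookeeper/zookeeper.jar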

Anything else I can do to chase this down? Which jar holds the
org.apache.hadoop.mapred.Child class?
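
The only concrete idea I have so far is to grep the jars directly for the
class, something like this (the directories are guesses; adjust to wherever
CDH drops its jars on your hosts):

# find which jar (if any) contains the missing class
find /usr/lib/hadoop* /usr/lib/hive/lib -name '*.jar' 2>/dev/null | while read j; do
  unzip -l "$j" 2>/dev/null | grep -q 'org/apache/hadoop/mapred/Child.class' && echo "$j"
done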

Thanks,
Pat

On Mon, Dec 3, 2012 at 5:04 PM, Vinithra Varadharajan wrote:


On Mon, Dec 3, 2012 at 11:54 AM, Patrick Ethier wrote:

After I ran a yum update on all my servers, which effectively updated
CDH4.1.1 to CDH4.1.2, I'm now having problems running any select statements
in Hive.

I saw that the version number for the hive-hbase-handler had changed, so
I changed my AUX_JAR_PATH to reflect this. I am now getting the error
below. I'm assuming there's a ClassNotFound exception somewhere, but I
can't seem to find the log that would tell me what's missing and where.

Error during job, obtaining debugging information...
Examining task ID: task_201211261321_0039_m_000025 (and more) from job job_201211261321_0039
Exception in thread "Thread-30" java.lang.NullPointerException
  at org.apache.hadoop.hive.shims.Hadoop23Shims.getTaskAttemptLogUrl(Hadoop23Shims.java:44)
  at org.apache.hadoop.hive.ql.exec.JobDebugger$TaskInfoGrabber.getTaskInfos(JobDebugger.java:186)
  at org.apache.hadoop.hive.ql.exec.JobDebugger$TaskInfoGrabber.run(JobDebugger.java:142)
  at java.lang.Thread.run(Thread.java:662)
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.MapRedTask
MapReduce Jobs Launched:
Job 0: Map: 24 HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec


It seems like the mapreduce logs give the following as the root cause:

[user@cluster01 attempt_201211261321_0039_m_000025_0]# tail stderr
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/mapred/Child
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.mapred.Child
  at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
  at java.security.AccessController.doPrivileged(Native Method)
  at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
  at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
  at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
Could not find the main class: org.apache.hadoop.mapred.Child. Program will exit.

Patrick, try setting hive.exec.show.job.failure.debug.info=false in your
Hive configs. This is a known issue with Hive and CDH4 MR1.
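
If it's easier, you can try it as a one-off from the shell instead of
editing hive-site.xml (the query and table name below are just placeholders):

# one-off override for a single query; 'mytable' is a placeholder
hive --hiveconf hive.exec.show.job.failure.debug.info=false \
  -e 'SELECT COUNT(*) FROM mytable;'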

Something is wrong here, because I thought org.apache.hadoop.mapred was
deprecated in favor of org.apache.hadoop.mapreduce.

Could I be dealing with a version mismatch here? If so, isn't Cloudera
Manager supposed to take care of upgrading all the components to
interoperable versions?

This shouldn't be the result of a version mismatch. If you want to make
sure, go to the Hosts tab in CM and start the "Host Inspector". The results
should show the versions of CDH on the hosts. CM itself does not upgrade
components - that is taken care of when you do the yum update on each
server.
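
If you want to double-check by hand rather than through the Host Inspector,
something like this on each host will list what actually got installed (the
package name patterns are approximate):

# show installed CDH package versions on this host
rpm -qa | grep -iE 'hadoop|hive|hbase|zookeeper' | sort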

Hope this helps.

-Vinithra
Pat
