FAQ
CMF 4.1 cannot be installed; I have retried many times.

I have proceeded like this:

1. Download the newest 4.1 installer file: cloudera-manager-installer.bin
2. Delete the current /etc/yum.repos.d/cloudera-manager.repo file
3. Then reinstall. The cloudera-manager-installer.bin file installs the
.repo file.

But it looks like the content of the .repo file is incorrect: the installer
is for CMF 4.1, yet the log shows that it cannot find the .rpm file for
cloudera-manager-server-4.0.3.

Alternatively, the directory in the following URL may be incorrect:
http://archive.cloudera.com/cm4/redhat/6/x86_64/cm/4/RPMS/x86_64/


The detailed error log is as follows:

Transaction Summary
================================================================================
Install 2 Package(s)

Total download size: 97 M
Installed size: 115 M
Downloading Packages:
http://archive.cloudera.com/cm4/redhat/6/x86_64/cm/4/RPMS/x86_64/cloudera-manager-daemons-4.0.3-1.cm403.p0.50.x86_64.rpm:
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404"
Trying other mirror.
http://archive.cloudera.com/cm4/redhat/6/x86_64/cm/4/RPMS/x86_64/cloudera-manager-server-4.0.3-1.cm403.p0.50.x86_64.rpm:
[Errno 14] PYCURL ERROR 22 - "The requested URL returned error: 404"
Trying other mirror.


Error Downloading Packages:
   cloudera-manager-daemons-4.0.3-1.cm403.p0.50.x86_64: failure:
RPMS/x86_64/cloudera-manager-daemons-4.0.3-1.cm403.p0.50.x86_64.rpm from
cloudera-manager: [Errno 256] No more mirrors to try.
   cloudera-manager-server-4.0.3-1.cm403.p0.50.x86_64: failure:
RPMS/x86_64/cloudera-manager-server-4.0.3-1.cm403.p0.50.x86_64.rpm from
cloudera-manager: [Errno 256] No more mirrors to try.


  • Philip Zeyliger at Nov 20, 2012 at 5:21 am
    Hi James,

    Could you try doing "rm -rf /var/cache/yum/cloudera*" and/or "yum clean
    all" on all the relevant machines? What's going on is that despite your
    update to the .repo file, the yum cache is looking at old data.

    Cheers,

    -- Philip
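Philip's cache-clearing advice as a runnable sketch (assumes the default RHEL/CentOS 6 yum layout; run on every host that hit the 404, and drop `sudo` if already root):

```shell
# Force yum to forget its cached Cloudera repo metadata so the
# updated /etc/yum.repos.d/cloudera-manager.repo is actually re-read.
sudo rm -rf /var/cache/yum/cloudera*   # drop cached Cloudera repo data
sudo yum clean all                     # flush all cached metadata and packages
```

Without this, yum keeps serving the old 4.0.3 package list from its cache even though the .repo file now points at 4.1.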
  • James at Nov 20, 2012 at 8:16 am
    Hi Philip,

    Thanks very much for your solution.

I tried the following:
rm -rf /var/cache/yum/hadoop*
and
yum clean all

but the problem was not resolved.
Could there be a small bug in CMF 4.1's config file?



  • James at Nov 20, 2012 at 8:26 am
Dear Philip,

One more thing: I installed CMF 4.0.3 on this machine before, and it was
already uninstalled.

I look forward to resolving the problem.

Thanks!
  • James at Nov 21, 2012 at 2:26 am
    Dear Philip,

I think I have found the cause of the problem.
Following your kind suggestion, I proceeded as follows, and CMF 4.1 installed
correctly:

    rm -rf /var/cache/yum/hadoop*
    yum clean all
    yum remove postgresql
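After cleaning the cache, the repo file and the packages yum now resolves can be sanity-checked (a hedged sketch using standard yum options; the repo id `cloudera-manager` is the one shown in the error log above):

```shell
# The baseurl should point at the 4.1 tree, not 4.0.3.
grep -i baseurl /etc/yum.repos.d/cloudera-manager.repo

# List what yum can now see from the Cloudera repo only.
yum --disablerepo='*' --enablerepo=cloudera-manager \
    list available 'cloudera-manager-*'
```

If the listed versions still show 4.0.3 after a `yum clean all`, the .repo file itself (not the cache) is the problem.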


Thanks again, Philip, and
best wishes!

James, Beijing, China
  • Weibeauty at Dec 29, 2012 at 8:27 am
    Dear Cloudera Team,

Does anyone know the plan or schedule for the next CDH version that will contain HBase 0.94?
We look forward to it. ^_^


    thanks,
    Best wishes!

    James
    12/29/2012
  • Harsh J at Dec 29, 2012 at 9:14 am
    CDH4 4.2.0+ will include HBase 0.94, plus stability and feature
    backports. It is currently scheduled for a mid-Q1 release.


    --
    Harsh J
  • Weibeauty at Dec 29, 2012 at 9:50 am
    Dear Harsh,

    It is worth the wait.

    Happy new year,
    from Beijing, China. ^_^



    thanks,
    Best wishes!

    James
    12/29/2012
    Beijing, China



  • Weibeauty at Feb 18, 2013 at 9:58 am
    Hi all,

My Hadoop cluster had been running well for some days. Suddenly, the Hadoop Map/Reduce Administration web UI became inaccessible at http://localhost:50030, giving the message below.

    HTTP ERROR 404
    Problem accessing /jobtracker.jsp. Reason:
    /jobtracker.jsp
    ------------------------------
    Powered by Jetty://

Some more information:
1. My Hadoop version: CDH4, Cloudera Manager Free 4
2. I restarted the JobTracker via CMF, and the JobTracker is running fine.
3. The HDFS NameNode Administration web UI is running fine: http://localhost:50070/dfshealth.jsp
4. There are no ERROR entries in the JobTracker's log file: hadoop-cmf-mapreduce1-JOBTRACKER-inner-1.log.out
5. Port 50030 is listening:
[[email protected] logs]# lsof -i:50030
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 16095 mapred 277u IPv4 113625545 0t0 TCP *:50030 (LISTEN)

Has anyone else encountered this?
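For anyone hitting the same 404, two quick checks (a sketch; the URL and port are the ones from the post above):

```shell
# 1) Does the servlet answer? 200 = healthy; 404 = Jetty is up but the
#    jobtracker.jsp webapp is not being served.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:50030/jobtracker.jsp

# 2) Is anything listening on the UI port, and which process is it?
sudo lsof -i :50030
```

A 404 while the port is still listening usually means the embedded Jetty server started but the web application failed to deploy, so the JobTracker log and a full restart of the MapReduce service are the next things to try.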



    thanks,
    Best wishes!

    James
    02/19/2013
  • Weibeauty at Mar 11, 2013 at 7:55 am
    Hi all,

Installing CMF 4.5 failed on a clean cluster when using custom databases. The error message is: "JDBC driver cannot be found. Unable to find the JDBC database jar on host: test163."

I have the MySQL Java connector installed under /usr/share/java and /usr/share/cmf/lib, and I have restarted the CMF Server, but the error is still there.
Where should the MySQL Java connector be installed?

Has anyone else encountered this?




    2013-03-11



    thanks,
    Best wishes!

    James
    Beijing, China



  • bc Wong at Mar 11, 2013 at 8:16 am
    By default, it'll look for `/usr/share/java/mysql-connector-java.jar'. Does
    that file exist on host `test163'?

    Cheers,
    bc
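bc's pointer can be checked mechanically (a sketch; the path is the default one Cloudera Manager looks for, per the reply above, and the versioned jar name is hypothetical):

```shell
# Cloudera Manager expects this exact filename (or a symlink to a
# versioned jar) on EVERY host it manages, not only the CM server host.
JAR=/usr/share/java/mysql-connector-java.jar

if [ -e "$JAR" ]; then
  ls -l "$JAR"
else
  echo "connector missing; place or symlink it, e.g.:"
  echo "  sudo ln -s /usr/share/java/mysql-connector-java-<version>.jar $JAR"
fi
```

Run it on each host (e.g. test161, test162, test163 from this thread) rather than only on the machine running the CM server.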
  • Weibeauty at Mar 11, 2013 at 9:00 am
    Hi bc,

Thanks for your response!
Got it; connecting to MySQL now succeeds.
I had installed the JDBC driver on the master host test161 but forgot to install it on the slave hosts test162 and test163.
hehe...


    thanks,
    Best wishes!
    2013-03-11
    James
    Beijing, China



    发件人:bc Wong
    发送时间:2013-03-11 16:16
    主题:Re: Install CMF4.5 failed when use custom databases: JDBC driver cannot be found. Unable to find the JDBC database jar on host : tongjitest163.
    收件人:"weibeauty"<[email protected]>
    抄送:"scm-users"<[email protected]>

    By default, it'll look for `/usr/share/java/mysql-connector-java.jar'. Does that file exist on host `test163'?


    Cheers,
    bc


    On Mon, Mar 11, 2013 at 12:56 AM, weibeauty wrote:

    Hi all,

    Install CMF4.5 failed on a clean cluster, when use custom databases. There is an error information: JDBC driver cannot be found. Unable to find the JDBC database jar on host : test163.

    I have the MySQL java connector installed under /usr/share/java, /usr/share/cmf/lib, and restarted the CMF Server, but the error is still there.
    Where should the MySQL java connector be installed?

    Does anyone meet it?




    2013-03-11



    thanks,
    Best wishes!

    James
    Beijing, China



    发件人:weibeauty
    发送时间:2013-02-18 17:51
    主题:Hadoop Map/Reduce Administration Web UI is not accessible: Version CDH4, CMF4
    收件人:"scm-users"<[email protected]>
    抄送:

    Hi all,

    My hadoop is running well for some days. Suddenly, the Hadoop Map/Reduce Administration Web UI is not accessible: http://localhost:50030. Give such message like below.

    HTTP ERROR 404
    Problem accessing /jobtracker.jsp. Reason:
    /jobtracker.jsp
    ------------------------------
    Powered by Jetty://

    Give such more information:
    1. My hadoop Version: CDH4, Cloudera Manager Free 4
    2. I reboot JobTracker task with CMF, and the JobTracker is running fine.
    3. The HDFS NameNode Administration Web UI is running fine: http://localhost:50070/dfshealth.jsp
    4. There is no ERROR log in the JobTracker's log file: hadoop-cmf-mapreduce1-JOBTRACKER-inner-1.log.out
    5. The port 50030 is running fine:
    [[email protected] logs]# lsof -i:50030
    COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
    java 16095 mapred 277u IPv4 113625545 0t0 TCP *:50030 (LISTEN)

    Does anyone meet it?



    thanks,
    Best wishes!

    James
    02/19/2013
  • Weibeauty at Dec 11, 2013 at 9:45 am
    Dear Cloudera Team,

I ran a Hive task, and it threw an exception.
The problem has confused us for some days, and it appears almost every week.
Could anyone kindly explain why this error occurs, and how to fix it?

    Thank you in advance.


    The cluster environment:
    CDH 4.2.1
    hive-0.10.0-cdh4.2.1
    NameNode HA: tongjihadoop1(Active NameNode), tongjihadoop11(Standby NameNode)


    1. HiveServer error logs:

    HiveServerException(errorCode=40000, message='Query returned non-zero code: 40000, cause: FAILED: RuntimeException java.io.IOException: Failed on local exception: java.nio.channels.ClosedByInterruptException; Host Details : local host is: "tongjihadoop28/10.32.21.28"; destination host is: "tongjihadoop11":8020; ', SQLState='42000')

    HiveServerException(errorCode=40000, message='Query returned non-zero code: 40000, cause: FAILED: RuntimeException java.io.IOException: Failed on local exception: java.nio.channels.ClosedByInterruptException; Host Details : local host is: "tongjihadoop28/10.32.21.28"; destination host is: "tongjihadoop1":8020; ', SQLState='42000')

    2013-12-11 03:46:19,704 WARN ipc.Client (Client.java:call(1203)) - interrupted waiting to send params to server
    java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:913)
    at org.apache.hadoop.ipc.Client.call(Client.java:1198)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy15.delete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:407)
    at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy16.delete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1487)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:356)
    at org.apache.hadoop.hive.ql.Context.removeScratchDir(Context.java:236)
    at org.apache.hadoop.hive.ql.Context.clear(Context.java:377)
    at org.apache.hadoop.hive.ql.Driver.close(Driver.java:1476)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:186)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    2013-12-11 03:46:19,725 WARN retry.RetryInvocationHandler (RetryInvocationHandler.java:invoke(94)) - Exception while invoking class org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete. Not retrying because the invoked method is not idempotent, and unable to determine whether it was invoked
    java.io.IOException: java.lang.InterruptedException
    at org.apache.hadoop.ipc.Client.call(Client.java:1204)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy15.delete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:407)
    at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy16.delete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1487)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:356)
    at org.apache.hadoop.hive.ql.Context.removeScratchDir(Context.java:236)
    at org.apache.hadoop.hive.ql.Context.clear(Context.java:377)
    at org.apache.hadoop.hive.ql.Driver.close(Driver.java:1476)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:186)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:913)
    at org.apache.hadoop.ipc.Client.call(Client.java:1198)
    ... 23 more
    2013-12-11 03:46:19,726 WARN ql.Context (Context.java:removeScratchDir(238)) - Error Removing Scratch: java.io.IOException: java.lang.InterruptedException
    at org.apache.hadoop.ipc.Client.call(Client.java:1204)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy15.delete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:407)
    at sun.reflect.GeneratedMethodAccessor78.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy16.delete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1487)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:356)
    at org.apache.hadoop.hive.ql.Context.removeScratchDir(Context.java:236)
    at org.apache.hadoop.hive.ql.Context.clear(Context.java:377)
    at org.apache.hadoop.hive.ql.Driver.close(Driver.java:1476)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:186)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:913)
    at org.apache.hadoop.ipc.Client.call(Client.java:1198)
    ... 23 more

    2013-12-11 03:46:19,807 WARN ipc.Client (Client.java:call(1203)) - interrupted waiting to send params to server
    java.lang.InterruptedException
    at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1279)
    at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:218)
    at java.util.concurrent.FutureTask.get(FutureTask.java:83)
    at org.apache.hadoop.ipc.Client$Connection.sendParam(Client.java:913)
    at org.apache.hadoop.ipc.Client.call(Client.java:1198)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy15.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:425)
    at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy16.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2121)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2092)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:546)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1902)
    at org.apache.hadoop.hive.ql.Context.getScratchDir(Context.java:164)
    at org.apache.hadoop.hive.ql.Context.getExternalScratchDir(Context.java:225)
    at org.apache.hadoop.hive.ql.Context.getExternalTmpFileURI(Context.java:318)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genFileSinkPlan(SemanticAnalyzer.java:4648)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:6811)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:6721)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:7454)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8131)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:443)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:347)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:908)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:198)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    2013-12-11 03:46:19,809 WARN retry.RetryInvocationHandler (RetryInvocationHandler.java:invoke(117)) - Exception while invoking mkdirs of class ClientNamenodeProtocolTranslatorPB. Trying to fail over immediately.
    2013-12-11 03:46:19,811 WARN retry.RetryInvocationHandler (RetryInvocationHandler.java:invoke(117)) - Exception while invoking mkdirs of class ClientNamenodeProtocolTranslatorPB after 1 fail over attempts. Trying to fail over immediately.

    java.io.IOException: Failed on local exception: java.nio.channels.ClosedByInterruptException; Host Details : local host is: "tongjihadoop10/10.32.21.10"; destination host is: "tongjihadoop11":8020;
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:760)
    at org.apache.hadoop.ipc.Client.call(Client.java:1229)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy15.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:425)
    at sun.reflect.GeneratedMethodAccessor46.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy16.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2121)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2092)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:546)
    at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1902)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.createTmpDirs(ExecDriver.java:223)
    at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:445)
    at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:138)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:138)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1352)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1138)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:951)
    at org.apache.hadoop.hive.service.HiveServer$HiveServerHandler.execute(HiveServer.java:198)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:644)
    at org.apache.hadoop.hive.service.ThriftHive$Processor$execute.getResult(ThriftHive.java:628)
    at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
    at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:39)
    at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:206)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
    Caused by: java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:184)
    at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:511)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:193)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:525)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:489)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:499)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:593)
    at org.apache.hadoop.ipc.Client$Connection.access$2000(Client.java:241)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1278)
    at org.apache.hadoop.ipc.Client.call(Client.java:1196)
    ... 30 more
    Job Submission failed with exception 'java.io.IOException(Failed on local exception: java.nio.channels.ClosedByInterruptException; Host Details : local host is: "tongjihadoop10/10.32.21.10"; destination host is: "tongjihadoop11":8020; )'
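For what it's worth, `ClosedByInterruptException` is what Java NIO throws when a thread is interrupted in the middle of a blocking channel operation; in the trace above it surfaces during the IPC connect to the NameNode, which usually means the Hive task thread was interrupted (e.g. the job was killed or cancelled) rather than that the NameNode itself failed. A minimal, self-contained sketch (not Hive or Hadoop code) showing the same pattern:

```java
import java.nio.ByteBuffer;
import java.nio.channels.ClosedByInterruptException;
import java.nio.channels.Pipe;

public class InterruptDemo {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();
        Thread worker = new Thread(() -> {
            try {
                // Blocks forever: nothing is ever written to the pipe,
                // standing in for the blocked NameNode RPC connect.
                pipe.source().read(ByteBuffer.allocate(1));
            } catch (ClosedByInterruptException e) {
                System.out.println("caught ClosedByInterruptException");
            } catch (Exception e) {
                System.out.println("other: " + e);
            }
        });
        worker.start();
        Thread.sleep(300);   // let the read block
        worker.interrupt();  // mimics the Hive job being killed mid-RPC
        worker.join();
    }
}
```

So the `ClosedByInterruptException` is likely a symptom of the job being torn down, and the real cause is whatever triggered the interrupt.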


    2. NameNode error logs:

    2013-12-11 03:43:10,730 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-12-11_03-39-52_021_2074940431677688659/_task_tmp.-ext-10002/_tmp.000003_0 File does not exist. Holder DFSClient_NONMAPREDUCE_1022005432_1 does not have any open files.
    2013-12-11 03:43:10,730 INFO org.apache.hadoop.ipc.Server: IPC Server handler 578 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.complete from 10.32.21.25:40530: error: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-12-11_03-39-52_021_2074940431677688659/_task_tmp.-ext-10002/_tmp.000003_0 File does not exist. Holder DFSClient_NONMAPREDUCE_1022005432_1 does not have any open files.
    org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-12-11_03-39-52_021_2074940431677688659/_task_tmp.-ext-10002/_tmp.000003_0 File does not exist. Holder DFSClient_NONMAPREDUCE_1022005432_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2419)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2410)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2478)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2455)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:535)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:335)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44084)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
    2013-12-11 03:43:10,737 INFO BlockStateChange: BLOCK* addToInvalidates: blk_-5428338368977879777_18430422 10.32.21.28:50010 10.32.21.25:50010 10.32.21.41:50010 10.32.21.21:50010 10.32.21.12:50010 10.32.21.32:50010 10.32.21.33:50010 10.32.21.29:50010 10.32.21.37:50010 10.32.21.43:50010
    2013-12-11 03:43:10,737 INFO BlockStateChange: BLOCK* addToInvalidates: blk_-8989026595130721770_18430424 10.32.21.28:50010 10.32.21.12:50010 10.32.21.16:50010 10.32.21.22:50010 10.32.21.21:50010 10.32.21.37:50010 10.32.21.30:50010 10.32.21.32:50010 10.32.21.14:50010 10.32.21.9:50010
    2013-12-11 03:43:10,737 INFO BlockStateChange: BLOCK* addToInvalidates: blk_4688728492859260894_18430426 10.32.21.28:50010 10.32.21.26:50010 10.32.21.39:50010
    2013-12-11 03:43:10,737 INFO BlockStateChange: BLOCK* addToInvalidates: blk_6666893292808536198_18430428 10.32.21.28:50010 10.32.21.37:50010 10.32.21.40:50010
    2013-12-11 03:43:10,737 INFO BlockStateChange: BLOCK* addToInvalidates: blk_-3847187288385873412_18430418 10.32.21.28:50010 10.32.21.8:50010 10.32.21.41:50010 10.32.21.13:50010 10.32.21.42:50010 10.32.21.21:50010 10.32.21.35:50010 10.32.21.32:50010 10.32.21.14:50010 10.32.21.26:50010
    2013-12-11 03:43:10,737 INFO BlockStateChange: BLOCK* addToInvalidates: blk_7477012993476457763_18430420 10.32.21.28:50010 10.32.21.36:50010 10.32.21.18:50010 10.32.21.11:50010 10.32.21.46:50010 10.32.21.39:50010 10.32.21.31:50010 10.32.21.38:50010 10.32.21.9:50010 10.32.21.12:50010
    2013-12-11 03:43:10,762 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hive-hadoop/hive_2013-12-11_03-39-52_021_2074940431677688659/_task_tmp.-ext-10002/_tmp.000039_0. BP-1616094619-10.32.21.1-1369297210855 blk_-2415267581369766488_18430652{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[10.32.21.33:50010|RBW], ReplicaUnderConstruction[10.32.21.42:50010|RBW], ReplicaUnderConstruction[10.32.21.22:50010|RBW]]}
    2013-12-11 03:43:10,796 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocateBlock: /tmp/hive-hadoop/hive_2013-12-11_03-39-52_021_2074940431677688659/_task_tmp.-ext-10002/_tmp.000022_0. BP-1616094619-10.32.21.1-1369297210855 blk_-7115905787522100678_18430653{blockUCState=UNDER_CONSTRUCTION, primaryNodeIndex=-1, replicas=[ReplicaUnderConstruction[10.32.21.41:50010|RBW], ReplicaUnderConstruction[10.32.21.36:50010|RBW], ReplicaUnderConstruction[10.32.21.8:50010|RBW]]}
    2013-12-11 03:43:10,827 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop (auth:SIMPLE) cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-12-11_03-39-52_021_2074940431677688659/_task_tmp.-ext-10002/_tmp.000066_0 File does not exist. Holder DFSClient_NONMAPREDUCE_1576277036_1 does not have any open files.
    2013-12-11 03:43:10,827 INFO org.apache.hadoop.ipc.Server: IPC Server handler 433 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 10.32.21.32:7479: error: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-12-11_03-39-52_021_2074940431677688659/_task_tmp.-ext-10002/_tmp.000066_0 File does not exist. Holder DFSClient_NONMAPREDUCE_1576277036_1 does not have any open files.
    org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /tmp/hive-hadoop/hive_2013-12-11_03-39-52_021_2074940431677688659/_task_tmp.-ext-10002/_tmp.000066_0 File does not exist. Holder DFSClient_NONMAPREDUCE_1576277036_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2419)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2410)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2203)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:480)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:297)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44080)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1695)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1691)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1689)
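A common trigger for `LeaseExpiredException: No lease on /tmp/hive-.../_tmp...` in this era of Hive is duplicate task attempts (speculative execution or task retries) racing on the same scratch files: one attempt completes and the temp path is cleaned up while another attempt still holds the lease. A frequently suggested mitigation, hedged here as a sketch rather than a confirmed fix for this cluster, is to disable speculative execution for the failing query:

```sql
-- Possible mitigation (session-level, old MRv1 property names as in CDH4):
SET mapred.map.tasks.speculative.execution=false;
SET mapred.reduce.tasks.speculative.execution=false;
```

If the errors stop with speculation off, that points at duplicate attempts rather than an HDFS problem.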




    thanks,
    Best wishes!

    James, Beijing, China
    2013-12-11


Discussion Overview
group: cm-users
category: hadoop
posted: Nov 20, '12 at 3:54a
active: Dec 11, '13 at 9:45a
posts: 15
users: 4
website: cloudera.com
irc: #hadoop
