Hello all,

This morning we upgraded our cluster from Hive 0.5.0 to Hive 0.6.0. I ran
the Derby DB migration script before upgrading.

All of our queries run fine; however, our INSERT OVERWRITE queries fail
immediately, and I am not sure what the problem is.

When we submit an INSERT OVERWRITE query, it fails with the map tasks stuck
in "Pending". Here is the failure from the web interface:

HTTP ERROR 500

java.lang.ArrayIndexOutOfBoundsException: 0
at org.apache.hadoop.mapred.JobInProgress.getTaskInProgress(JobInProgress.java:2523)
at org.apache.hadoop.mapred.taskdetails_jsp._jspService(taskdetails_jsp.java:115)
at org.apache.jasper.runtime.HttpJspBase.service(HttpJspBase.java:97)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:363)
at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:417)
at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:324)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:864)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:533)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:207)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:403)
at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:522)


Does anyone have any ideas?


Thanks,

Ryan


  • Ryan LeCompte at Dec 7, 2010 at 5:19 pm
    I just put the Hive log4j config file on DEBUG, and here is the error that
    I'm seeing:

    2010-12-07 12:16:50,281 WARN mapred.JobClient (JobClient.java:configureCommandLineOptions(539)) - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    2010-12-07 12:17:26,399 DEBUG conf.Configuration (Configuration.java:<init>(210)) - java.io.IOException: config()
    at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:210)
    at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:197)
    at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:429)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

    2010-12-07 12:17:26,405 DEBUG conf.Configuration (Configuration.java:<init>(210)) - java.io.IOException: config()
    at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:210)
    at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:197)
    at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:172)
    at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:449)
    at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:430)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
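
    For reference, turning Hive's logging up to DEBUG as described above is a one-line change in conf/hive-log4j.properties (property name taken from the stock Hive log4j config; your install may differ):

    ```properties
    # conf/hive-log4j.properties -- raise the default logger from WARN/INFO to DEBUG
    hive.root.logger=DEBUG,DRFA
    ```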


  • Ryan LeCompte at Dec 7, 2010 at 5:21 pm
    Digging even further, here's what I see:

    NOTE: We have a table in Hive called "test_table", but Hive seems to be
    looking for "default.test_table"?

    2010-12-07 00:52:24,600 ERROR metadata.Hive (Hive.java:getTable(357)) - NoSuchObjectException(message:default.test_table table not found)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:399)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:432)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:354)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:333)
    at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:1088)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:129)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:99)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:64)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:582)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:462)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:324)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)



  • Ryan LeCompte at Dec 7, 2010 at 6:22 pm
    Sorry for all of the messages! Stupid gmail client.

    It turns out that I was able to resolve the failures by setting
    "hive.merge.mapfiles=false" (by default it is true).

    I think this is because I'm running Hadoop version 0.20.1, r810220.

    This thread is what led me to try this:
    http://www.mail-archive.com/hive-user@hadoop.apache.org/msg03955.html
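
    For anyone hitting the same thing, the workaround can be applied per-session with "SET hive.merge.mapfiles=false;" in the Hive CLI, or cluster-wide in hive-site.xml (a sketch; as noted above, the property defaults to true):

    ```xml
    <!-- hive-site.xml: disable the post-job merge of small map-output files -->
    <property>
      <name>hive.merge.mapfiles</name>
      <value>false</value>
    </property>
    ```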


  • Ning Zhang at Dec 7, 2010 at 6:41 pm
    Ryan, I wonder why setting 'hive.merge.mapfiles=false' would solve this. The issue seems to be metastore-related (DROP TABLE could not find default.test_table), probably due to the database support newly introduced in 0.6 (see JIRA HIVE-675).
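
    Since HIVE-675, every table belongs to a database and unqualified names resolve against the current one (initially "default"), so a quick sanity check from the CLI might look like this (a sketch, using the table name from the thread):

    ```sql
    SHOW DATABASES;   -- "default" should be listed in Hive 0.6
    USE default;
    SHOW TABLES;      -- confirm test_table is visible here
    ```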

    On Dec 7, 2010, at 10:21 AM, Ryan LeCompte wrote:

    Sorry for all of the messages! Stupid gmail client.

    It turns out that I was able to resolve the failures by setting "hive.merge.mapfiles=false" (by default it is true).

    I think this is because I'm running Hadoop version 0.20.1, r810220.

    This thread is what lead me to try this: http://www.mail-archive.com/hive-user@hadoop.apache.org/msg03955.html


    On Tue, Dec 7, 2010 at 12:26 PM, Ryan LeCompte wrote:
    Digging even further, here's what I see:

    NOTE: We have a table in Hive called "test_table" but this seems to look for "default.test_table" ? )

    2010-12-07 00:52:24,600 ERROR metadata.Hive (Hive.java:getTable(357)) - NoSuchObjectException(message:default.test_table table not found)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:399)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:432)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:354)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:333)
    at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:1088)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:129)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:99)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:64)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:582)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:462)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:324)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)



    On Tue, Dec 7, 2010 at 12:19 PM, Ryan LeCompte wrote:
    I just put the Hive log4j config file on DEBUG, and here is the error that I'm seeing:

    2010-12-07 12:16:50,281 WARN mapred.JobClient (JobClient.java:configureCommandLineOptions(539)) - Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    2010-12-07 12:17:26,399 DEBUG conf.Configuration (Configuration.java:<init>(210)) - java.io.IOException: config()
    at org.apache.hadoop.conf.Configuration.(Configuration.java:197)
    at org.apache.hadoop.hive.conf.HiveConf.(CliDriver.java:230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)

    2010-12-07 12:17:26,405 DEBUG conf.Configuration (Configuration.java:<init>(210)) - java.io.IOException: config()
    at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:210)
    at org.apache.hadoop.conf.Configuration.<init>(Configuration.java:197)
    at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:172)
    at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:449)
    at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:430)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)



  • Ryan LeCompte at Dec 8, 2010 at 1:27 pm
    Yes, I also find it very strange.

    Unfortunately, it's not an ideal workaround, since hive.merge.mapfiles=false
    means we end up with a lot of output files that each hold only a small amount
    of data (much less than the configured HDFS block size).

    Can you think of any other workaround? Should this be resolved if we upgrade
    to the latest version of Hadoop?
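    To put rough numbers on the small-files tradeoff described above, here is a
    minimal sketch. All figures are illustrative assumptions, not from this
    thread: a 64 MB HDFS block size (the usual Hadoop 0.20 default) and 200
    mappers each writing about 4 MB.

    ```python
    import math

    # Sketch: estimate HDFS block usage with and without map-output merging
    # (hive.merge.mapfiles). Sizes below are hypothetical, for illustration.

    BLOCK = 64 * 1024 * 1024  # assumed dfs.block.size (64 MB)

    def blocks_used(file_sizes, block_size=BLOCK):
        # Each file occupies ceil(size / block_size) blocks, minimum one.
        return sum(max(1, math.ceil(s / block_size)) for s in file_sizes)

    # Merging disabled: each of 200 mappers writes its own ~4 MB file.
    unmerged = [4 * 1024 * 1024] * 200

    # Merging enabled: the same ~800 MB is packed into block-sized files.
    total = sum(unmerged)
    merged = [BLOCK] * (total // BLOCK) + ([total % BLOCK] if total % BLOCK else [])

    print(blocks_used(unmerged))  # 200 blocks for ~800 MB of data
    print(blocks_used(merged))    # 13 blocks for the same data
    ```

    The same data ties up far fewer blocks (and NameNode file entries) when
    merged, which is why disabling the merge step is costly on a busy cluster.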
    On Tue, Dec 7, 2010 at 1:40 PM, Ning Zhang wrote:

    Ryan, I wonder why setting 'hive.merge.mapfiles=false' could solve the
    issue. The issue seems to be metastore-related (DROP TABLE could not find
    default.test_table). This is probably due to the database support newly
    introduced in 0.6 (see JIRA HIVE-675).
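    A toy sketch of the name resolution Ning refers to (the qualify helper is
    hypothetical, for illustration only): since HIVE-675, the metastore keys
    tables by database as well as table name, so an unqualified test_table is
    looked up as default.test_table.

    ```python
    # Hypothetical illustration: Hive 0.6's metastore resolves tables
    # per-database, so unqualified names pick up the current database.
    def qualify(table, current_db="default"):
        return table if "." in table else current_db + "." + table

    print(qualify("test_table"))   # default.test_table
    print(qualify("logs.events"))  # logs.events
    ```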

    On Dec 7, 2010, at 10:21 AM, Ryan LeCompte wrote:

    Sorry for all of the messages! Stupid gmail client.

    It turns out that I was able to resolve the failures by setting
    "hive.merge.mapfiles=false" (by default it is true).

    I think this is because I'm running Hadoop version 0.20.1, r810220.

    This thread is what led me to try this:
    http://www.mail-archive.com/hive-user@hadoop.apache.org/msg03955.html

    On Tue, Dec 7, 2010 at 12:26 PM, Ryan LeCompte wrote:

    Digging even further, here's what I see:

    NOTE: We have a table in Hive called "test_table", but this seems to look
    for "default.test_table"?

    2010-12-07 00:52:24,600 ERROR metadata.Hive (Hive.java:getTable(357)) - NoSuchObjectException(message:default.test_table table not found)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:399)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:432)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:354)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:333)
    at org.apache.hadoop.hive.ql.exec.DDLTask.dropTable(DDLTask.java:1088)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:129)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:99)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:64)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:582)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:462)
    at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:324)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:312)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)



  • Ryan LeCompte at Dec 8, 2010 at 2:55 pm
    After upgrading from Hadoop 0.20.1 to Hadoop 0.20.2, our problem has gone
    away. My recommendation: If you are using Hive 0.6.0, upgrade to Hadoop
    0.20.2.

Discussion Overview
group: user @
categories: hive, hadoop
posted: Dec 7, 2010 at 5:13 PM
active: Dec 8, 2010 at 2:55 PM
posts: 9
users: 2
website: hive.apache.org

2 users in discussion: Ryan LeCompte (8 posts), Ning Zhang (1 post)
