Hi,

I'm running an Impala script (release 1.1.1) that creates a partitioned table
and then performs an INSERT OVERWRITE. The table is created successfully and
the data is inserted; however, I get the following error:
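
For context, the failing pattern can be sketched in Impala SQL like this. This is a minimal sketch, not the actual script: the table and column names here are hypothetical placeholders; only STORED AS PARQUETFILE and the partition keys (strategy, symbol, expiry_date_id) are taken from the paths and DDL fragments in the logs below.

```sql
-- Hypothetical stand-in for the real script: create a partitioned
-- Parquet table, then overwrite a single partition.
CREATE TABLE viper_points_demo (
  price DOUBLE
)
PARTITIONED BY (strategy STRING, symbol STRING, expiry_date_id INT)
STORED AS PARQUETFILE;

-- The INSERT OVERWRITE step that triggers the metastore error.
INSERT OVERWRITE TABLE viper_points_demo
PARTITION (strategy = 'X', symbol = 'X', expiry_date_id = 20140620)
SELECT price FROM some_source_table;
```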

Query failed

ERROR: InternalException: Error updating metastore
CAUSED BY: MetaException: java.lang.NullPointerException
Could not execute command: insert OVERWRITE


Also, on checking the logs, I find this:

Failed to close file
/user/hive/warehouse/20131002.db/viper_points/.-4518591338098003330--5446485784476593744_162434857_dir/strategy=X/symbol=X/expiry_date_id=20140620/-4518591338098003330--5446485784476593744_1212166427_data.0

Java exception follows:

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/hive/warehouse/20131002.db/viper_points/.-4518591338098003330--5446485784476593744_162434857_dir/strategy=X/symbol=X/expiry_date_id=20140620/-4518591338098003330--5446485784476593744_1212166427_data.0: File does not exist. [Lease. Holder: DFSClient_NONMAPREDUCE_-578515034_1, pendingcreates: 98]
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2445)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.analyzeFileState(FSNamesystem.java:2262)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2175)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:501)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:299)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44954)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1751)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1747)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1745)

        at org.apache.hadoop.ipc.Client.call(Client.java:1225)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
        at $Proxy9.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:291)
        at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
        at $Proxy10.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1176)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1029)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:487)

To unsubscribe from this group and stop receiving emails from it, send an email to impala-user+unsubscribe@cloudera.org.


  • Alan Choi at Oct 4, 2013 at 6:28 pm
    Hi Andrew,

    The log should contain the exception stack of "MetaException". If you can
    share it with us, we can take a look. Thanks!

    Alan

    On Thu, Oct 3, 2013 at 11:40 PM, Andrew Stevenson wrote:

  • Andrew Stevenson at Oct 8, 2013 at 8:22 am
    Today I have more errors; it looks like an issue with the MySQL metastore.
    Could a transaction deadlock be the cause? The tables are still created and
    the data inserted.

    Query failed
    Backend 0: Failed to write row (length: 5551) to Hdfs file:
    hdfs://amshadoop/user/hive/warehouse/viper.db/fmp/.3984243974868064858-3883040089855219847_1450782212_dir/strategy=X/symbol=X/expiry_date_id=99991231/3984243974868064858-3883040089855219847_1385144053_data.0
    Error(255): Unknown error 255

    In the hive metastore I see

    2013-10-08 06:16:11,657 WARN DataNucleus.Datastore.Persist: Insert of
    object "org.apache.hadoop.hive.metastore.model.MPartition@78948cdd" using
    statement "INSERT INTO `PARTITIONS`
    (`PART_ID`,`TBL_ID`,`SD_ID`,`LAST_ACCESS_TIME`,`PART_NAME`,`CREATE_TIME`)
    VALUES (?,?,?,?,?,?)" failed : Deadlock found when trying to get lock; try
    restarting transaction
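
That DataNucleus warning points at InnoDB lock contention on the PARTITIONS table when partitions are added concurrently. A way to confirm this (assuming the metastore database is MySQL/InnoDB, as the error text suggests) is to inspect the most recent deadlock on the metastore database:

```sql
-- Run against the Hive metastore's MySQL instance. The output's
-- "LATEST DETECTED DEADLOCK" section shows the two transactions
-- involved and the statements that collided.
SHOW ENGINE INNODB STATUS;
```

If deadlocks are confirmed, raising the metastore handler's retry settings in hive-site.xml (hive.hmshandler.retry.attempts and hive.hmshandler.retry.interval) is one common mitigation, since MySQL's advice here is simply to retry the transaction.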


    In the impalad log:

    expiry_date_id INT) STORED AS PARQUETFILE
    I1008 06:16:12.549499 26813 impala-beeswax-server.cc:301] close():
    query_id=e1428dc7ee6254be:f0eb839f517f3698
    I1008 06:16:12.588481 26811 impala-server.cc:994] UnregisterQuery():
    query_id=6344a002fcd39387:e323144943730cbb
    I1008 06:16:12.673985 31386 frontend.cc:113]
    com.cloudera.impala.common.InternalException: Error updating metastore
             at com.cloudera.impala.service.Frontend.updateMetastore(Frontend.java:541)
             at com.cloudera.impala.service.JniFrontend.updateMetastore(JniFrontend.java:271)
    Caused by: MetaException(message:java.lang.NullPointerException)
    --
       09: allow_unsupported_formats (bool) = false,
       10: default_order_by_limit (i64) = 50000,
       11: debug_action (string) = "",
       12: mem_limit (i64) = 0,
       13: abort_on_default_limit_exceeded (bool) = false,
       14: parquet_compression_codec (i32) = 5,
       15: hbase_caching (i32) = 0,
       16: hbase_cache_blocks (bool) = false,
    }
    I1008 06:16:12.704635 31386 status.cc:44] InternalException: Error updating
    metastore
    CAUSED BY: MetaException: java.lang.NullPointerException
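
Since the tables and data are actually written despite the failed metastore update, Impala's view of the catalog can be brought back in sync manually after such a failure. A sketch (exact syntax varies by Impala version; `fmp` is taken from the HDFS path in the log above):

```sql
-- Reload table metadata from the Hive metastore after an
-- out-of-band or failed metastore update.
INVALIDATE METADATA;
REFRESH fmp;
```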




  • Ishaan Joshi at Oct 10, 2013 at 6:40 am
    Andrew,

        Your second set of errors looks like a Hive metastore issue; the Hive
    mailing list may give you a better answer. For the first issue, could you
    send over the impalad logs so we can better diagnose it?

    Thanks,

    Ishaan

    On Tue, Oct 8, 2013 at 1:22 AM, Andrew Stevenson wrote:


Discussion Overview
group: impala-user
categories: hadoop
posted: Oct 4, '13 at 1:19p
active: Oct 10, '13 at 6:40a
posts: 4
users: 3
website: cloudera.com
irc: #hadoop
