Deepesh Khandelwal created HIVE-8239:
----------------------------------------

              Summary: MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables
                  Key: HIVE-8239
                  URL: https://issues.apache.org/jira/browse/HIVE-8239
              Project: Hive
           Issue Type: Bug
           Components: Database/Schema
     Affects Versions: 0.13.0
             Reporter: Deepesh Khandelwal
             Assignee: Deepesh Khandelwal


In the transaction-related tables, columns that hold Java long values are mapped to the SQL Server int type, which causes failures such as:
{noformat}
2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update <insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1')>
2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback
2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int.
         at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
         at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246)
         at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83)
         at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488)
         at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775)
         at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676)
         at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615)
         at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
         at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179)
         at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154)
         at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633)
         at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497)
         at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244)
         at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403)
         at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255)
         ...
{noformat}
In this query the HL_LAST_HEARTBEAT column, defined as int in HIVE_LOCKS, is given a long value (1411495679547, a millisecond timestamp) that exceeds the int range, which raises the error. The column type should be bigint instead.
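The overflow is easy to verify: SQL Server's int is a 32-bit signed integer, so any epoch timestamp in milliseconds is far out of range, while bigint (64-bit signed) holds it comfortably. A quick check (illustrative only, not part of the patch):

```python
# SQL Server's int is a 32-bit signed integer.
SQLSERVER_INT_MAX = 2**31 - 1      # 2147483647
SQLSERVER_BIGINT_MAX = 2**63 - 1   # 9223372036854775807

# The heartbeat value from the failing insert: an epoch timestamp
# in milliseconds, which stopped fitting in 32 bits in early 1970.
hl_last_heartbeat = 1411495679547

print(hl_last_heartbeat > SQLSERVER_INT_MAX)      # True: overflows int
print(hl_last_heartbeat <= SQLSERVER_BIGINT_MAX)  # True: fits in bigint
```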



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

  • Deepesh Khandelwal (JIRA) at Sep 23, 2014 at 10:03 pm
    [ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Deepesh Khandelwal updated HIVE-8239:
    -------------------------------------
         Attachment: HIVE-8239.1.patch

    Attaching a patch for review that changes the affected column datatypes from int to bigint.
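The patch itself is not reproduced in this thread; a change of this kind would look roughly like the following T-SQL. The exact set of tables and columns touched by HIVE-8239.1.patch is an assumption here, based on the HIVE_LOCKS example in the description:

```sql
-- Hypothetical sketch: widen a long-valued column from int to bigint.
-- The real patch changes several such columns across the transaction tables.
ALTER TABLE HIVE_LOCKS ALTER COLUMN HL_LAST_HEARTBEAT bigint NOT NULL;
```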
  • Deepesh Khandelwal (JIRA) at Sep 23, 2014 at 10:03 pm
    [ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Deepesh Khandelwal updated HIVE-8239:
    -------------------------------------
     Description: appended "NO PRECOMMIT TESTS" to the issue description (body otherwise unchanged).
  • Deepesh Khandelwal (JIRA) at Sep 23, 2014 at 10:04 pm
    [ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Deepesh Khandelwal updated HIVE-8239:
    -------------------------------------
         Fix Version/s: 0.14.0
                Status: Patch Available (was: Open)
  • Alan Gates (JIRA) at Sep 23, 2014 at 11:43 pm
    [ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145624#comment-14145624 ]

    Alan Gates commented on HIVE-8239:
    ----------------------------------

    +1
  • Alan Gates (JIRA) at Sep 23, 2014 at 11:48 pm
    [ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145636#comment-14145636 ]

    Alan Gates commented on HIVE-8239:
    ----------------------------------

One missing piece: we should make the same changes to hive-txn-schema-0.13 for completeness. I can do that when I check in the patch.
  • Deepesh Khandelwal (JIRA) at Sep 23, 2014 at 11:54 pm
    [ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14145649#comment-14145649 ]

    Deepesh Khandelwal commented on HIVE-8239:
    ------------------------------------------

I left that out because the composite hive-schema-0.14.0.mssql.sql already includes those tables.
  • Alan Gates (JIRA) at Sep 24, 2014 at 11:42 pm
    [ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Alan Gates updated HIVE-8239:
    -----------------------------
         Resolution: Fixed
             Status: Resolved (was: Patch Available)

    Patch checked in. I did not change hive-txn-schema-0.13.
    MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables
    -------------------------------------------------------------------------------------------------------------

    Key: HIVE-8239
    URL: https://issues.apache.org/jira/browse/HIVE-8239
    Project: Hive
    Issue Type: Bug
    Components: Database/Schema
    Affects Versions: 0.13.0
    Reporter: Deepesh Khandelwal
    Assignee: Deepesh Khandelwal
    Fix For: 0.14.0

    Attachments: HIVE-8239.1.patch


    In Transaction related tables, Java long column fields are mapped to int which results in failure as shown:
    {noformat}
    2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update <insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1')>
    2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback
    2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int.
    at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197)
    at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246)
    at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676)
    at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615)
    at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154)
    at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633)
    at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497)
    at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244)
    at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255)
    ...
    {noformat}
    In this query, the column HL_LAST_HEARTBEAT, defined with the int datatype in HIVE_LOCKS, is being assigned a long value (1411495679547) and throws the error. We should use bigint as the column type instead.
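    The attached patch is not reproduced here, but a minimal sketch of the kind of change needed (using the column name from the log above; the actual patch may widen additional columns and tables) would be:
    {noformat}
    -- SQL Server's int is a signed 32-bit type (max 2,147,483,647), so an
    -- epoch-millisecond value such as 1411495679547 overflows it.
    -- Widen the Java-long-backed column to bigint (signed 64-bit):
    ALTER TABLE HIVE_LOCKS ALTER COLUMN HL_LAST_HEARTBEAT bigint;
    {noformat}
    Any other column that stores a Java long (transaction ids, lock ids, timestamps) needs the same treatment in both the install and upgrade scripts.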
    NO PRECOMMIT TESTS


    --
    This message was sent by Atlassian JIRA
    (v6.3.4#6332)
  • Deepesh Khandelwal (JIRA) at Sep 24, 2014 at 11:47 pm
    [ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14147112#comment-14147112 ]

    Deepesh Khandelwal commented on HIVE-8239:
    ------------------------------------------

    Thanks [~alangates] for the review and commit!
