Grokbase Groups Hive user April 2011
We have a Hadoop/Hive cluster running Cloudera's distribution. The metastore
is stored in MySQL, and all the relevant drivers are on the classpath and
referenced in the conf files. While running queries on Hive I am getting these errors:

$ hive -hiveconf hive.root.logger=INFO,console
Hive history file=/tmp/vipul/hive_job_log_vipul_201104271409_1310845475.txt
11/04/27 14:09:01 INFO exec.HiveHistory: Hive history
file=/tmp/vipul/hive_job_log_vipul_201104271409_1310845475.txt
hive> select time from requests_stat_min;
11/04/27 14:15:38 INFO parse.ParseDriver: Parsing command: select time from
requests_stat_min
11/04/27 14:15:38 INFO parse.ParseDriver: Parse Completed
11/04/27 14:15:39 INFO parse.SemanticAnalyzer: Starting Semantic Analysis
11/04/27 14:15:39 INFO parse.SemanticAnalyzer: Completed phase 1 of Semantic
Analysis
11/04/27 14:15:39 INFO parse.SemanticAnalyzer: Get metadata for source
tables
11/04/27 14:15:39 INFO metastore.HiveMetaStore: 0: Opening raw store with
implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
11/04/27 14:15:39 INFO metastore.ObjectStore: ObjectStore, initialize called
11/04/27 14:15:39 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core"
requires "org.eclipse.core.resources" but it cannot be resolved.
11/04/27 14:15:39 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core"
requires "org.eclipse.core.runtime" but it cannot be resolved.
11/04/27 14:15:39 ERROR DataNucleus.Plugin: Bundle "org.eclipse.jdt.core"
requires "org.eclipse.text" but it cannot be resolved.
11/04/27 14:15:40 INFO metastore.ObjectStore: Initialized ObjectStore
11/04/27 14:15:40 INFO metastore.HiveMetaStore: 0: get_table : db=default
tbl=requests_stat_min
11/04/27 14:15:41 INFO hive.log: DDL: struct requests_stat_min { string
time, i64 request_count, i64 get_count, i64 post_count, i64 unique_orders,
i64 unique_events, i64 start_orders, i32 max_response_time, i32
min_response_time, i32 avg_response_time, i32 max_response_size, i32
min_response_size, i32 avg_response_size, i64 unique_referrers}
11/04/27 14:15:41 INFO parse.SemanticAnalyzer: Get metadata for subqueries
11/04/27 14:15:41 INFO parse.SemanticAnalyzer: Get metadata for destination
tables
11/04/27 14:15:41 INFO parse.SemanticAnalyzer: Completed getting MetaData in
Semantic Analysis
11/04/27 14:15:41 INFO ppd.OpProcFactory: Processing for FS(2)
11/04/27 14:15:41 INFO ppd.OpProcFactory: Processing for SEL(1)
11/04/27 14:15:41 INFO ppd.OpProcFactory: Processing for TS(0)
11/04/27 14:15:41 INFO metastore.HiveMetaStore: 0: get_partition_names :
db=default tbl=requests_stat_min
11/04/27 14:15:41 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:41 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO metastore.HiveMetaStore: 0: get_partition :
db=default tbl=requests_stat_min
11/04/27 14:15:42 INFO hive.log: DDL: struct requests_stat_min { string
time, i64 request_count, i64 get_count, i64 post_count, i64 unique_orders,
i64 unique_events, i64 start_orders, i32 max_response_time, i32
min_response_time, i32 avg_response_time, i32 max_response_size, i32
min_response_size, i32 avg_response_size, i64 unique_referrers}
11/04/27 14:15:42 INFO hive.log: DDL: struct requests_stat_min { string
time, i64 request_count, i64 get_count, i64 post_count, i64 unique_orders,
i64 unique_events, i64 start_orders, i32 max_response_time, i32
min_response_time, i32
11/04/27 14:15:42 INFO parse.SemanticAnalyzer: Completed plan generation
11/04/27 14:15:42 INFO ql.Driver: Semantic Analysis Completed
11/04/27 14:15:42 INFO ql.Driver: Starting command: select time from
requests_stat_min
Total MapReduce jobs = 1
11/04/27 14:15:42 INFO ql.Driver: Total MapReduce jobs = 1
Launching Job 1 out of 1
11/04/27 14:15:42 INFO ql.Driver: Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
11/04/27 14:15:42 INFO exec.ExecDriver: Number of reduce tasks is set to 0
since there's no reduce operator
FAILED: Unknown exception : null
11/04/27 14:15:42 ERROR ql.Driver: FAILED: Unknown exception : null
java.lang.NullPointerException
at java.util.Hashtable.put(Hashtable.java:394)
at java.util.Properties.setProperty(Properties.java:143)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:460)
at org.apache.hadoop.hive.conf.HiveConf.setVar(HiveConf.java:293)
at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:505)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:100)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:64)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:572)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:452)
at org.apache.hadoop.hive.ql.Driver.runCommand(Driver.java:314)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:302)
at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:123)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:181)
at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:287)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:186)


I am assuming this has to do with the metastore, but I can't figure out
what's wrong. We run queries via a remote client. Any help is greatly
appreciated!
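For context on the trace above: it bottoms out in java.util.Hashtable.put, which (unlike HashMap) rejects null keys and values. Configuration.set stores into a Properties object, which is backed by a Hashtable, so a single null variable value reaching HiveConf.setVar is enough to produce exactly this NullPointerException. A minimal, self-contained sketch of the failure mode (the property names here are illustrative, not the actual variable involved):

```java
import java.util.Properties;

public class NullConfDemo {
    public static void main(String[] args) {
        // java.util.Properties extends java.util.Hashtable
        Properties props = new Properties();
        props.setProperty("hive.exec.scratchdir", "/tmp/hive"); // non-null value: fine
        try {
            // A null value (e.g. a config variable an old client never populated)
            // makes the underlying Hashtable.put throw, matching the trace above.
            props.setProperty("hive.hypothetical.var", null);
        } catch (NullPointerException e) {
            System.out.println("NullPointerException: Hashtable rejects null values");
        }
    }
}
```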

--
Vipul Sharma
sharmavipul AT gmail DOT com


  • Vipul sharma at Apr 27, 2011 at 11:55 pm
    This got resolved. Some of the clients had not been upgraded to CDH3 and
    were still running an old version.
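    When a mixed-version cluster like this is suspected, comparing the Hive
    jar versions on each host can confirm the mismatch before upgrading. A
    rough sketch, assuming a Cloudera package layout under /usr/lib/hive
    (paths and version strings below are illustrative, not taken from this
    cluster):

```shell
# List the Hive jars on this host; the version is embedded in the file name.
# Run on both the client and the metastore/server hosts, then compare.
ls /usr/lib/hive/lib/hive-exec-*.jar 2>/dev/null

# Comparing two version strings captured from each host (values illustrative):
server_ver="hive-exec-0.7.0-cdh3u0.jar"
client_ver="hive-exec-0.5.0.jar"
if [ "$server_ver" != "$client_ver" ]; then
    echo "version mismatch: server=$server_ver client=$client_ver"
fi
```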

Discussion Overview
group: user
categories: hive, hadoop
posted: Apr 27, '11 at 9:27p
active: Apr 27, '11 at 11:55p
posts: 2
users: 1
website: hive.apache.org

1 user in discussion: Vipul sharma (2 posts)
