Manisha,

Are you able to read the data via Hive on the same table?

Thanks,
Udai

On Thu, Jan 24, 2013 at 11:10 PM, Manisha Agrawal wrote:

Hi Udai,

Thanks.

I resolved the root-user issue by adding root to dfs.block.local-path-access.user
in hdfs-site.xml and also changing the file permissions to 777.
Now I am starting all the services, as well as impala-shell, as root.
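
For reference, that change presumably amounts to something like the sketch below; the exact property value and the recursive chmod are assumptions based on this thread, not copied from the cluster:

<property>
  <name>dfs.block.local-path-access.user</name>
  <value>impala,hdfs,mapred,root</value>
</property>

sudo -u hdfs hadoop fs -chmod -R 777 /user/impala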

It worked once, but now the metadata is unreadable with the root user.

The file format being queried is a row-delimited file.
Are there any checks that need to be made before starting Impala?
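
As a rough pre-flight sketch, two sanity checks might be whether the impala user can read the data file and whether Hive still sees the table; the paths come from this thread, and the exact commands are an assumption:

sudo -u impala hadoop fs -cat /user/impala/test_ext.out
hive -e "show tables;"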

Please help.
Thanks
Manisha Agrawal

On Friday, January 25, 2013 8:15:36 AM UTC+5:30, Udai wrote:

Hello Manisha,

Appreciate the Configs. What is the file format that you are querying?

Thanks,
Udai

On Wed, Jan 23, 2013 at 9:22 PM, Manisha Agrawal wrote:

Hi all,

I have CDH 4.1.2 running in pseudo-distributed mode on a 64-bit RHEL machine.
MySQL is configured as Hive's remote datastore.
Hive successfully reads the metadata and fetches correct results.

Impala is running on top of it, but I am unable to execute queries in Impala.

I have an external table named sample, and the permissions on its file are as
follows:
[[email protected] conf]# sudo -u hdfs hadoop fs -ls -R /user/impala
-rw-r--r-- 2 impala hdfs 31 2013-01-23 23:10 /user/impala/test_ext.out
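
For context, a table like this would typically have been created in Hive with a statement along the lines of the sketch below; the column definition and delimiter are hypothetical, only the table name and location come from this setup:

CREATE EXTERNAL TABLE sample (line STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/impala';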


This is how I start the statestored and impalad services:

GLOG_v=1 /usr/bin/statestored -state_store_port=24000 &
GLOG_v=1 /usr/bin/impalad -state_store_host=localhost -nn=localhost -nn_port=8020 -hostname=localhost -ipaddress=10.100.255.207 &
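
Once both daemons are up, the shell is pointed at the local impalad, roughly as below; the -i flag and the default impalad port 21000 are assumptions from the beta shell, not something configured above:

impala-shell -i localhost:21000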

When I start the services as the *root user*, HDFS is unreadable and
impala-shell throws the error below:

readDirect: FSDataInputStream#read error:
org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block:
BP-344648192-10.100.255.207-1358752982703:blk_183363845701608725_1522
file=/user/impala/test_ext.out
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:734)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:448)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:645)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:696)
at org.apache.hadoop.fs.FSDataInputStream.read(FSDataInputStream.java:123)
hdfsOpenFile(hdfs://localhost:8020/user/impala/test_ext.out): WARN:
Unexpected error 255 when testing for direct read compatibility
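
When a BlockMissingException like this comes up, one standard way to confirm the block is actually present and readable is an HDFS fsck; the path is the one from this thread:

sudo -u hdfs hadoop fsck /user/impala/test_ext.out -files -blocks -locations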


On starting the services as the *impala user*, the metadata is unreadable;
the error in impala-shell is as below:

Query: select * from sample
ERROR: com.cloudera.impala.common.AnalysisException: Analysis exception (in select * from sample)
at com.cloudera.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:133)
at com.cloudera.impala.service.Frontend.createExecRequest(Frontend.java:216)
at com.cloudera.impala.service.JniFrontend.createExecRequest(JniFrontend.java:86)
Caused by: com.cloudera.impala.common.AnalysisException: Unknown table: 'sample'
at com.cloudera.impala.analysis.Analyzer.registerBaseTableRef(Analyzer.java:178)
at com.cloudera.impala.analysis.BaseTableRef.analyze(BaseTableRef.java:51)
at com.cloudera.impala.analysis.SelectStmt.analyze(SelectStmt.java:115)
at com.cloudera.impala.analysis.AnalysisContext.analyze(AnalysisContext.java:130)
... 2 more
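
An "Unknown table" error from the Impala frontend generally means impalad has not loaded the metadata that Hive wrote. A minimal sketch of the usual check from inside impala-shell, assuming the beta shell's refresh command is available:

refresh;
select * from sample;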

Please provide suggestions to resolve the issue.

Thanks in advance
Manisha
============================================================
*core-site.xml*

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <!-- OOZIE proxy user setting -->
  <property>
    <name>hadoop.proxyuser.oozie.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.oozie.groups</name>
    <value>*</value>
  </property>
  <!-- HTTPFS proxy user setting -->
  <property>
    <name>hadoop.proxyuser.httpfs.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.httpfs.groups</name>
    <value>*</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>
</configuration>

============================================================

*hdfs-site.xml*

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- Immediately exit safemode as soon as one DataNode checks in.
       On a multi-node cluster, these configurations must be removed. -->
  <property>
    <name>dfs.safemode.extension</name>
    <value>0</value>
  </property>
  <property>
    <name>dfs.safemode.min.datanodes</name>
    <value>1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop-hdfs/cache/${user.name}</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/name</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/var/lib/hadoop-hdfs/cache/${user.name}/dfs/data</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>750</value>
  </property>
  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>impala,hdfs,mapred</value>
  </property>
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
</configuration>
============================================================

*hive-site.xml*

<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost/metastore</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
  </property>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
</configuration>
============================================================



