FAQ
When I run a SELECT query from the impala-shell, I get the following
exception (Hive queries work fine on this table):

ERROR: com.google.common.util.concurrent.UncheckedExecutionException: java.lang.IllegalArgumentException: Wrong FS: hdfs://hadoop0.local:8020/path/to/file/part-0000, expected: hdfs://hadoop0.ourdomain.com:8020
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2234)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
at com.cloudera.impala.catalog.HdfsTable.getBlockMetadata(HdfsTable.java:629)
at com.cloudera.impala.planner.HdfsScanNode.getScanRangeLocations(HdfsScanNode.java:127)
at com.cloudera.impala.service.Frontend.createExecRequest(Frontend.java:285)
at com.cloudera.impala.service.JniFrontend.createExecRequest(JniFrontend.java:91)

Impala has been installed from Cloudera Manager Free Edition.

Here's some configuration:

cat /var/run/cloudera-scm-agent/process/439-impala-IMPALAD/impala-conf/impalad_flags
-fe_port=21000
-be_port=22000
-enable_webserver=true
-webserver_port=25000
-state_store_subscriber_port=23000
-default_query_options
-log_filename=impalad
-ipaddress=xxx.xxx.xxx.xxx
-hostname=hadoop0.ourdomain.com
-state_store_host=hadoop0.ourdomain.com
-state_store_port=24000
-nn=hadoop0.ourdomain.com
-nn_port=8020

cat /var/run/cloudera-scm-agent/process/439-impala-IMPALAD/hadoop-conf/core-site.xml
<?xml version="1.0" encoding="UTF-8"?>

<!--Autogenerated by Cloudera CM on 2013-03-16T00:13:43.590Z-->
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop0.ourdomain.com:8020</value>
</property>
<property>
<name>hadoop.security.authentication</name>
<value>simple</value>
</property>
<property>
<name>hadoop.rpc.protection</name>
<value>authentication</value>
</property>
<property>
<name>hadoop.security.auth_to_local</name>
<value>DEFAULT</value>
</property>
</configuration>


  • Henry Robinson at Mar 18, 2013 at 6:15 pm
    Hi Jason -

    This error is most likely because the Hive metadata for the table has a
    location field pointing to "hdfs://hadoop0.local:8020/<etc>", which may
    have happened because the table was created in Hive with a different
    configuration from the one used to start Impala and your Hadoop cluster.
    You can confirm this by running:

    SHOW TABLE EXTENDED LIKE <table name>;

    in Hive, and checking the 'location' field in the output. It has to match
    the URI that HDFS believes it is serving on.

    You can update the metadata by issuing the following command in Hive:

    ALTER TABLE <table name> SET LOCATION "hdfs://hadoop0.ourdomain.com:8020/<path to table directory>";

    You'll then need to refresh Impala's metadata cache by typing 'refresh' in
    the shell.
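    The check behind that error is just a comparison of the location URI's
    scheme and authority against fs.defaultFS. A minimal Python sketch of the
    same comparison (the wrong_fs helper is illustrative, not part of Impala
    or Hadoop; the URIs are the ones from the error message):

    ```python
    from urllib.parse import urlparse

    def wrong_fs(location: str, default_fs: str) -> bool:
        """Return True if `location` points at a different filesystem than
        `default_fs` (scheme or host:port differ) -- the condition behind
        Hadoop's "Wrong FS" IllegalArgumentException."""
        loc, fs = urlparse(location), urlparse(default_fs)
        return (loc.scheme, loc.netloc) != (fs.scheme, fs.netloc)

    # The stale table location vs. the cluster's fs.defaultFS: mismatch.
    print(wrong_fs("hdfs://hadoop0.local:8020/path/to/file/part-0000",
                   "hdfs://hadoop0.ourdomain.com:8020"))  # True

    # After ALTER TABLE ... SET LOCATION, the authorities agree.
    print(wrong_fs("hdfs://hadoop0.ourdomain.com:8020/path/to/file/part-0000",
                   "hdfs://hadoop0.ourdomain.com:8020"))  # False
    ```
    
    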

    Let me know if that works for you,

    Henry
    On 15 March 2013 17:56, Jason M wrote:



    --
    Henry Robinson
    Software Engineer
    Cloudera
    415-994-6679
  • Henry Robinson at Mar 18, 2013 at 11:21 pm
    Glad to hear it. Let us know if there's anything further we can help with.

    Henry
    On 18 March 2013 16:16, Jason Mason wrote:

    Thanks Henry, that resolved the issue.

    I also needed to modify each partition, with ALTER TABLE <table name>
    PARTITION(dt='YYYYMMDD') SET LOCATION 'hdfs://..../path/to/partition';
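    For a table with many date partitions, those per-partition statements can
    be generated rather than typed by hand. A minimal Python sketch (the
    table name, partition values, and warehouse paths are placeholders, not
    taken from the thread):

    ```python
    def partition_fix_statements(table, partitions, new_authority):
        """Generate one ALTER TABLE ... PARTITION ... SET LOCATION statement
        per partition, pointing each at the corrected HDFS authority.
        `partitions` is a list of (dt_value, hdfs_path) pairs."""
        return [
            "ALTER TABLE {t} PARTITION(dt='{d}') SET LOCATION '{a}{p}';".format(
                t=table, d=dt, a=new_authority, p=path)
            for dt, path in partitions
        ]

    for stmt in partition_fix_statements(
            "my_table",
            [("20130301", "/user/hive/warehouse/my_table/dt=20130301"),
             ("20130302", "/user/hive/warehouse/my_table/dt=20130302")],
            "hdfs://hadoop0.ourdomain.com:8020"):
        print(stmt)
    ```

    The generated statements can then be fed to the Hive shell in one batch.
    
    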

    On Mon, Mar 18, 2013 at 11:14 AM, Henry Robinson wrote:


    --
    Henry Robinson
    Software Engineer
    Cloudera
    415-994-6679

Discussion Overview
group: impala-user @ hadoop
posted: Mar 16, 2013 at 12:56 AM
active: Mar 18, 2013 at 11:21 PM
posts: 3
users: 2 (Henry Robinson: 2 posts, Jason M: 1 post)
website: cloudera.com
irc: #hadoop
