Looks like the root cause of the problem was:
https://groups.google.com/a/cloudera.org/forum/?fromgroups=#!topic/scm-users/mepf96MbQSI
Some strange things happened during the CM 4.1.3 -> CM 4.5 upgrade.
Now the problem looks different. Impala is started, but daemons have "bad"
status:
Impala Daemon: 10 Started, 10 Bad
Impala StateStore Daemon: Started, Good
The log from one of the daemons:
10:55:04.895 INFO daemon.cc:45
Mem Info: 126.00 GB
10:55:07.871 INFO impala-server.cc:1809
Default query options: TQueryOptions {
01: abort_on_error (bool) = false,
02: max_errors (i32) = 0,
03: disable_codegen (bool) = false,
04: batch_size (i32) = 0,
05: num_nodes (i32) = 0,
06: max_scan_range_length (i64) = 0,
07: num_scanner_threads (i32) = 0,
08: max_io_buffers (i32) = 0,
09: allow_unsupported_formats (bool) = false,
10: default_order_by_limit (i64) = -1,
11: debug_action (string) = "",
12: mem_limit (i64) = 0,
13: abort_on_default_limit_exceeded (bool) = false,
}
10:55:07.992 WARN org.apache.hadoop.conf.Configuration
mapred.max.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.maxsize
10:55:07.993 WARN org.apache.hadoop.conf.Configuration
mapred.min.split.size is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize
10:55:07.994 WARN org.apache.hadoop.conf.Configuration
mapred.min.split.size.per.rack is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.rack
10:55:07.994 WARN org.apache.hadoop.conf.Configuration
mapred.min.split.size.per.node is deprecated. Instead, use mapreduce.input.fileinputformat.split.minsize.per.node
10:55:07.994 WARN org.apache.hadoop.conf.Configuration
mapred.reduce.tasks is deprecated. Instead, use mapreduce.job.reduces
10:55:07.994 WARN org.apache.hadoop.conf.Configuration
mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
10:55:08.281 WARN org.apache.hadoop.conf.Configuration
[email protected]: an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
10:55:08.319 WARN org.apache.hadoop.conf.Configuration
[email protected]: an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
10:55:08.325 WARN org.apache.hadoop.hive.conf.HiveConf
DEPRECATED: Configuration property hive.metastore.local no longer has any effect. Make sure to provide a valid value for hive.metastore.uris if you are connecting to a remote metastore.
10:55:08.368 INFO hive.metastore
Trying to connect to metastore with URI thrift://uat-scm.lol.ru:9083
10:55:08.496 INFO hive.metastore
Waiting 1 seconds before next connection attempt.
10:55:09.497 INFO hive.metastore
Connected to metastore.
10:55:09.498 INFO hive.metastore
Trying to connect to metastore with URI thrift://uat-scm.lol.ru:9083
10:55:09.500 INFO hive.metastore
Waiting 1 seconds before next connection attempt.
10:55:10.500 INFO hive.metastore
Connected to metastore.
10:55:10.501 INFO hive.metastore
Trying to connect to metastore with URI thrift://uat-scm.lol.ru:9083
10:55:10.502 INFO hive.metastore
Waiting 1 seconds before next connection attempt.
10:55:11.503 INFO hive.metastore
Connected to metastore.
10:55:11.503 INFO hive.metastore
Trying to connect to metastore with URI thrift://uat-scm.lol.ru:9083
10:55:11.505 INFO hive.metastore
Waiting 1 seconds before next connection attempt.
10:55:12.505 INFO hive.metastore
Connected to metastore.
10:55:12.506 INFO hive.metastore
Trying to connect to metastore with URI thrift://uat-scm.lol.ru:9083
10:55:12.508 INFO hive.metastore
Waiting 1 seconds before next connection attempt.
10:55:13.508 INFO hive.metastore
Connected to metastore.
10:55:15.812 INFO status.cc:42
ERROR: short-circuit local reads is disabled because
- dfs.client.use.legacy.blockreader.local is not enabled.
@ 0x829f2d (unknown)
@ 0x6a1549 (unknown)
@ 0x6b1c66 (unknown)
@ 0x6b1e61 (unknown)
@ 0x69a11e (unknown)
@ 0x35a641ecdd (unknown)
@ 0x699de9 (unknown)
10:55:15.812 ERROR impala-server.cc:648
ERROR: short-circuit local reads is disabled because
- dfs.client.use.legacy.blockreader.local is not enabled.
10:55:15.813 ERROR impala-server.cc:650
Impala is aborted due to improper configurations.
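For reference, a minimal way to check whether the flag the error names is actually present in the client configuration this impalad was started with. The hadoop-conf path below is taken from the startup trace further down in this thread; the process directory number differs per role instance, so treat the exact path as an assumption:

# look for the legacy short-circuit read flag in the hdfs-site.xml generated for this role
grep -A 1 'dfs.client.use.legacy.blockreader.local' \
  /var/run/cloudera-scm-agent/process/839-impala-IMPALAD/hadoop-conf/hdfs-site.xml
# the error suggests the <value> for that property should be true, or short-circuit
# reads should be turned off for the Impala service altogether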
On Wednesday, May 8, 2013, at 1:10:41 AM UTC+4, Hari Sekhon wrote:
Is this only happening on one node or on all nodes? Is this production? Are you adding to an existing working cluster or setting up a new one?
If this is just a new test setup that isn't in production yet, it'll probably be simplest to redeploy the node(s) (agents + software) using Cloudera Manager to fix your basic setup before trying to get Impala working.
On 7 May 2013 22:00, Vikas Singh <[email protected]> wrote:
Hi Serega,
Did you configure it to be some other location, then (in Admin->Properties->Parcel->Local Parcel Repository Path)? Also, what are the repo URLs that show up in "Remote Parcel Repository URLs"? Is the host you have part of the cluster where you are deploying the parcel?
Something basic is going wrong with the setup/install in general, and I think you should fix that before moving on to the Impala startup step. Is there any message/error in the CM log file (in /var/log/cloudera-scm-server)?
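A minimal sketch of that check from the CM host, assuming the default server log file name under that directory:

# scan the most recent Cloudera Manager server log entries for errors
tail -n 500 /var/log/cloudera-scm-server/cloudera-scm-server.log | grep -iE 'error|exception'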
CC'ing scm-users and bcc: impala-users.
Vikas
On Tue, May 7, 2013 at 1:40 PM, Serega Sheypak <[email protected]> wrote:
There is no /opt/cloudera/parcels
I've tried to activate->deactivate->activate before.
Didn't help.
2013/5/8 Vikas Singh <[email protected]>
Hi Serega,
So you are using Parcels for deployment? The startup script is running as if there were a package/yum-based deployment and is failing to find the Impala installation.
Can you look at the contents of /opt/cloudera/parcels on one of the nodes where you see the failure and check whether the Impala parcel exists there? If you don't see it there, try deactivating and activating the Impala parcel again.
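A quick sketch of that check on an affected node; the expected directory names are inferred from the Parcels page quoted later in this thread, so treat them as assumptions:

# list the parcels that have actually been unpacked on this node
ls -l /opt/cloudera/parcels/
# for the versions in this thread you would expect entries along the lines of
#   CDH-4.2.1-1.cdh4.2.1.p0.5  and  IMPALA-1.0-1.p0.371
# plus CDH and IMPALA symlinks pointing at them once the parcels are activated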
Vikas
On Mon, May 6, 2013 at 11:24 PM, Serega Sheypak <[email protected]> wrote:
Sorry, that was a misprint.
*My Cloudera manager is:*
*Version*: 4.5.2 (#327 built by jenkins on 20130429-1453 git:
16cab2c7b76194b7877d64a4215494daa387a266)
*Server Time*: 07.05.2013 10:19:38, Moscow Standard Time (MSK)
*Parcels page:*
CDH 4.2.1-1.cdh4.2.1.p0.5 (Activated)
IMPALA 1.0-1.p0.371 (Activated)
*Random host from cluster:*
[[email protected] ~]$ rpm -qa | grep cloud
cloudera-manager-agent-4.5.2-1.cm452.p0.327.x86_64
cloudera-manager-daemons-4.5.2-1.cm452.p0.327.x86_64
*VM where CM is installed:*
[[email protected] ~]$ rpm -qa | grep cloud
cloudera-manager-server-db-4.5.2-1.cm452.p0.327.x86_64
cloudera-manager-server-4.5.2-1.cm452.p0.327.x86_64
cloudera-manager-agent-4.5.2-1.cm452.p0.327.x86_64
cloudera-manager-daemons-4.5.2-1.cm452.p0.327.x86_64
cloudera-manager-repository-4.0-1.noarch
2013/5/6 Vikas Singh <[email protected]>
CM 4.5.1 will not install Impala 1.0 (Impala 1.0 was released on
archive.cloudera.com whereas CM 4.5.1 only knows about Impala on
beta.cloudera.com). You may need CM 4.5.2, which was released to support Impala 1.0.
How did you install Impala 1.0? The error message means that Impala
is not installed on the node where you are trying to start it.
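A one-liner to confirm which CM release is actually installed on the server host; the package name is taken from the rpm output quoted earlier in this thread:

# confirm the installed Cloudera Manager server version
rpm -q cloudera-manager-server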
Vikas
On Mon, May 6, 2013 at 1:37 AM, Serega Sheypak <[email protected]> wrote:
Hi, sorry for the dumb question, but I can't start Impala 1.0 on my CDH 4.2.1 under CM 4.5.1.
The error is:
++ dirname /usr/lib64/cmf/service/impala/impala.sh
+ cloudera_config=/usr/lib64/cmf/service/impala
++ cd /usr/lib64/cmf/service/impala/../common
++ pwd
+ cloudera_config=/usr/lib64/cmf/service/common
+ . /usr/lib64/cmf/service/common/cloudera-config.sh
++ set -x
+ source_parcel_environment
+ '[' '!' -z '' ']'
+ export IMPALA_HOME=/usr/lib/impala
+ IMPALA_HOME=/usr/lib/impala
+ export IMPALA_CONF_DIR=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/impala-conf
+ IMPALA_CONF_DIR=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/impala-conf
+ export HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/hadoop-conf
+ HADOOP_CONF_DIR=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/hadoop-conf
+ export HIVE_CONF_DIR=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/hive-conf
+ HIVE_CONF_DIR=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/hive-conf
+ export HBASE_CONF_DIR=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/hbase-conf
+ HBASE_CONF_DIR=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/hbase-conf
+ JDBC_JARS=/usr/share/java/mysql-connector-java.jar::
+ [[ -z '' ]]
+ export AUX_CLASSPATH=/usr/share/java/mysql-connector-java.jar::
+ AUX_CLASSPATH=/usr/share/java/mysql-connector-java.jar::
+ [[ -z '' ]]
+ export CLASSPATH=/usr/share/java/mysql-connector-java.jar::
+ CLASSPATH=/usr/share/java/mysql-connector-java.jar::
+ FLAG_FILE=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/impala-conf/impalad_flags
+ USE_DEBUG_BUILD=false
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/839-impala-IMPALAD#g' /var/run/cloudera-scm-agent/process/839-impala-IMPALAD/impala-conf/impalad_flags
+ false
+ export IMPALA_BIN=/usr/lib/impala/sbin-retail
+ IMPALA_BIN=/usr/lib/impala/sbin-retail
+ '[' impalad = impalad ']'
+ exec /usr/lib/impala/../../bin/impalad --flagfile=/var/run/cloudera-scm-agent/process/839-impala-IMPALAD/impala-conf/impalad_flags
/usr/lib64/cmf/service/impala/impala.sh: line 57: /usr/lib/impala/../../bin/impalad: No such file or directory
What am I doing wrong?
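For anyone hitting the same failure, a minimal sketch contrasting the package layout the script fell back to with the parcel layout it should have used. The parcel directory name is an assumption based on the "IMPALA 1.0-1.p0.371" parcel listed above, and the sbin-retail layout simply mirrors the path the trace shows for a package install:

# what the startup script tried (package/yum layout); missing on a parcel-only node
ls -l /usr/lib/impala/../../bin/impalad
# where an activated Impala parcel would keep the same binaries
ls -l /opt/cloudera/parcels/IMPALA-1.0-1.p0.371/lib/impala/sbin-retail/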