FAQ
Cluster Information:
  Total of 5 nodes in the cluster, with CDH 4.2 installed from RPMs and Impala
0.7 beta (latest).
  One node is the NameNode; the other 4 nodes are DataNodes and TaskTrackers.
Running on Red Hat Linux version 8 HP blades with 48 GB of memory on each
blade.
Internal disks are used for the HDFS filesystem.
I can see that all four nodes are in the Impala cluster using the following
(a quick curl check is sketched after this list):

  1) http://chadvt3endc02.ops.tiaa-cref.org:25000/backends
  2) At the bottom I have the varz information
  3) At the bottom I have the query profile information from Impala
  4
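
For reference, a minimal way to double-check the backend list from a shell,
assuming curl is available on the node (the host and port are the ones used
above):

  curl -s http://chadvt3endc02.ops.tiaa-cref.org:25000/backends

Each impalad that has registered with the statestore should be listed there,
so for this cluster there should be four entries.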

Impala select statement takes: 111.35s
Hive select statement takes: 27 seconds

I really do not know what I am doing wrong ... please help me.

Hive - Stats:

Hive history
file=/tmp/nathamu/hive_job_log_nathamu_201304170927_428112587.txt
hive> select count(*) from security_report;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
   set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
   set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
   set mapred.reduce.tasks=<number>
Starting Job = job_201304150921_0005, Tracking URL =
http://chadvt3endc01:50030/jobdetails.jsp?jobid=job_201304150921_0005
Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201304150921_0005
Hadoop job information for Stage-1: number of mappers: 52; number of
reducers: 1
2013-04-17 09:27:38,293 Stage-1 map = 0%, reduce = 0%
2013-04-17 09:27:43,426 Stage-1 map = 10%, reduce = 0%, Cumulative CPU 16.8 sec
2013-04-17 09:27:44,449 Stage-1 map = 37%, reduce = 0%, Cumulative CPU 60.81 sec
2013-04-17 09:27:45,472 Stage-1 map = 62%, reduce = 0%, Cumulative CPU 108.77 sec
2013-04-17 09:27:46,491 Stage-1 map = 62%, reduce = 0%, Cumulative CPU 108.77 sec
2013-04-17 09:27:47,514 Stage-1 map = 62%, reduce = 0%, Cumulative CPU 108.77 sec
2013-04-17 09:27:48,538 Stage-1 map = 77%, reduce = 0%, Cumulative CPU 131.68 sec
2013-04-17 09:27:49,553 Stage-1 map = 94%, reduce = 0%, Cumulative CPU 160.57 sec
2013-04-17 09:27:50,568 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 172.5 sec
2013-04-17 09:27:51,585 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 172.5 sec
2013-04-17 09:27:52,600 Stage-1 map = 100%, reduce = 67%, Cumulative CPU 172.5 sec
2013-04-17 09:27:53,615 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 176.36 sec
2013-04-17 09:27:54,635 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 176.36 sec
2013-04-17 09:27:55,650 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 176.36 sec
MapReduce Total cumulative CPU time: 2 minutes 56 seconds 360 msec
Ended Job = job_201304150921_0005
MapReduce Jobs Launched:
Job 0: Map: 52 Reduce: 1 Cumulative CPU: 176.36 sec HDFS Read:
13554671046 HDFS Write: 8 SUCCESS
Total MapReduce CPU Time Spent: 2 minutes 56 seconds 360 msec
OK
1645957
Time taken: 26.526 seconds
hive>
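
The reducer settings that Hive prints above can also be passed inline when the
query is launched from the shell; a minimal sketch, assuming the hive CLI on
this node and the table used in this thread:

  hive -e "set mapred.reduce.tasks=1; select count(*) from security_report;"

For a plain count(*) Hive already chooses a single reducer at compile time, so
this only matters if you want to pin the value explicitly.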

Impala select statement:
[nathamu@chadvt3endc02 ~]$ impala-shell
Connected to chadvt3endc02.ops.tiaa-cref.org:21000
Welcome to the Impala shell. Press TAB twice to see a list of available
commands.

Copyright (c) 2012 Cloudera, Inc. All rights reserved.

(Build version: Impala v0.7 (62a2db9) built on Mon Apr 15 08:02:38 PDT 2013)
[chadvt3endc02.ops.tiaa-cref.org:21000] > select count(*) from
security_report;
Query: select count(*) from security_report
Query finished, fetching results ...
+----------+
| count(*) |
+----------+
| 1645956  |
+----------+
Returned 1 row(s) in 111.35s
[chadvt3endc02.ops.tiaa-cref.org:21000] >









varz - information

  Impala <http://chadvt3endc02.ops.tiaa-cref.org:25000/>


   Hadoop Configuration: core-default.xml, core-site.xml, mapred-default.xml,
mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml,
hdfs-site.xml

   Selected key/value pairs from the /varz dump (the page lists several
hundred entries; only the ones most relevant to this thread are reproduced
here, the remainder are omitted):

     fs.defaultFS                               hdfs://chadvt3endc01:8020
     mapreduce.jobtracker.address               chadvt3endc01:54311
     dfs.datanode.data.dir                      /test01
     mapreduce.cluster.local.dir                /test02/mapred/local
     dfs.blocksize                              67108864
     dfs.replication                            1
     dfs.client.read.shortcircuit               true
     dfs.domain.socket.path                     /var/run/hadoop-hdfs/dn._PORT
     dfs.datanode.hdfs-blocks-metadata.enabled  true
     dfs.datanode.readahead.bytes               4193404
     yarn.nodemanager.resource.memory-mb        8192
     mapred.child.java.opts                     -Xmx200m

Command-line Flags

--dump_ir=false
--module_output=
--be_port=22000
--hostname=chadvt3endc02
--keytab_file=
--mem_limit=80%
--planservice_host=localhost
--planservice_port=20000
--principal=
--exchg_node_buffer_size_bytes=10485760
--max_row_batches=0
--randomize_splits=false
--num_disks=0
--num_threads_per_disk=1
--read_size=8388608
--enable_webserver=true
--state_store_host=chadvt3endc02.ops.tiaa-cref.org
--state_store_subscriber_port=23000
--use_statestore=true
--nn=chadvt3endc01
--nn_port=8020
--serialize_batch=false
--status_report_interval=5
--compress_rowbatches=true
--abort_on_config_error=true
--be_service_threads=64
--beeswax_port=21000
--default_query_options=
--fe_service_threads=64
--heap_profile_dir=
--hs2_port=21050
--load_catalog_at_startup=false
--log_mem_usage_interval=0
--log_query_to_file=true
--query_log_size=25
--use_planservice=false
--statestore_subscriber_timeout_seconds=10
--state_store_port=24000
--statestore_max_missed_heartbeats=5
--statestore_num_heartbeat_threads=10
--statestore_suspect_heartbeats=2
--kerberos_reinit_interval=60
--sasl_path=/usr/lib/sasl2:/usr/lib64/sasl2:/usr/local/lib/sasl2:/usr/lib/x86_64-linux-gnu/sasl2
--web_log_bytes=1048576
--log_filename=impalad
--periodic_counter_update_period_ms=500
--rpc_cnxn_attempts=10
--rpc_cnxn_retry_interval_ms=2000
--enable_webserver_doc_root=true
--webserver_doc_root=/usr/lib/impala
--webserver_interface=
--webserver_port=25000
--flagfile=
--fromenv=
--tryfromenv=
--undefok=
--tab_completion_columns=80
--tab_completion_word=
--help=false
--helpfull=false
--helpmatch=
--helpon=
--helppackage=false
--helpshort=false
--helpxml=false
--version=false
--alsologtoemail=
--alsologtostderr=false
--drop_log_memory=true
--log_backtrace_at=
--log_dir=/var/log/impala
--log_link=
--log_prefix=true
--logbuflevel=0
--logbufsecs=30
--logemaillevel=999
--logmailer=/bin/mail
--logtostderr=false
--max_log_size=1800
--minloglevel=0
--stderrthreshold=2
--stop_logging_if_full_disk=false
--symbolize_stacktrace=true
--v=0
--vmodule=

query profile:

  Impala <http://chadvt3endc02.ops.tiaa-cref.org:25000/>


   Query (id=9148ac87180b4fed:b92602810a654bb1):
    - PlanningTime: 15.457ms
   Summary:
     Default Db: default
     End Time: 2013-04-17 09:58:50
     Impala Version: impalad version 0.7 RELEASE (build 62a2db93eb04c36e5becab5fdcaf06b53a839238)
Built on Mon, 15 Apr 2013 08:27:38 PST
     Plan:
----------------
Plan Fragment 0
   UNPARTITIONED
   AGGREGATE
   OUTPUT: SUM()
   GROUP BY:
   TUPLE IDS: 1
     EXCHANGE (2)
       TUPLE IDS: 1

Plan Fragment 1
   RANDOM
   STREAM DATA SINK
     EXCHANGE ID: 2
     UNPARTITIONED

   AGGREGATE
   OUTPUT: COUNT(*)
   GROUP BY:
   TUPLE IDS: 1
     SCAN HDFS table=default.security_report #partitions=1 size=12.62GB (0)
       TUPLE IDS: 0
----------------
     Query State: FINISHED
     Query Type: QUERY
     Sql Statement: select count(*) from security_report
     Start Time: 2013-04-17 09:56:59
     User: nathamu
   Query 9148ac87180b4fed:b92602810a654bb1:(1m50s 0.00%)
     Aggregate Profile:
        - FinalizationTimer: 0ns
     Coordinator Fragment:(1m50s 0.00%)
        - RowsProduced: 1
       CodeGen:
          - CodegenTime: 465.0us
          - CompileTime: 109.684ms
          - LoadTime: 9.50ms
          - ModuleFileSize: 70.02 KB
       AGGREGATION_NODE (id=3):(1m50s 0.00%)
          - BuildBuckets: 1.02K (1024)
          - BuildTime: 6.0us
          - GetResultsTime: 3.0us
          - LoadFactor: 0.00
          - MemoryUsed: 32.01 KB
          - RowsReturned: 1
          - RowsReturnedRate: 0
       EXCHANGE_NODE (id=2):(1m50s 100.00%)
          - BytesReceived: 64.00 B
          - ConvertRowBatchTime: 7.0us
          - DataArrivalWaitTime: 1m50s
          - DeserializeRowBatchTimer: 22.0us
          - FirstBatchArrivalWaitTime: 0ns
          - MemoryUsed: 0.00
          - RowsReturned: 4
          - RowsReturnedRate: 0
          - SendersBlockedTotalTimer: 0ns
          - SendersBlockedWallTimer: 0ns
     Averaged Fragment 1:(1m47s 0.00%)
       completion times: min:1m43s max:1m50s mean: 1m47s stddev:2s582ms
       execution rates: min:27.82 MB/sec max:32.43 MB/sec mean:29.95 MB/sec stddev:1.95 MB/sec
       num instances: 4
       split sizes: min: 2.81 GB, max: 3.50 GB, avg: 3.16 GB, stddev: 273.91 MB
        - RowsProduced: 1
       CodeGen:
          - CodegenTime: 742.0us
          - CompileTime: 114.504ms
          - LoadTime: 8.748ms
          - ModuleFileSize: 70.02 KB
       DataStreamSender (dst_id=2):(292.250us 0.00%)
          - BytesSent: 16.00 B
          - NetworkThroughput: 83.56 KB/sec
          - OverallThroughput: 53.97 KB/sec
          - SerializeBatchTime: 7.500us
          - ThriftTransmitTime: 204.0us
          - UncompressedRowBatchSize: 16.00 B
       AGGREGATION_NODE (id=1):(1m47s 0.01%)
          - BuildBuckets: 1.02K (1024)
          - BuildTime: 3.6ms
          - GetResultsTime: 3.250us
          - LoadFactor: 0.00
          - MemoryUsed: 32.01 KB
          - RowsReturned: 1
          - RowsReturnedRate: 0
       HDFS_SCAN_NODE (id=0):(1m47s 99.99%)
          - AverageHdfsReadThreadConcurrency: 0.06
            - HdfsReadThreadConcurrencyCountPercentage=0: 93.50
            - HdfsReadThreadConcurrencyCountPercentage=1: 6.50
            - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=36: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=37: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=38: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=39: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=40: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=41: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
            - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
          - AverageScannerThreadConcurrency: 0.02
          - BytesRead: 3.16 GB
          - MemoryUsed: 0.00
          - NumDisksAccessed: 1
          - PerReadThreadRawHdfsThroughput: 464.42 MB/sec
          - RowsReturned: 411.49K (411489)
          - RowsReturnedRate: 3.82 K/sec
          - ScanRangesComplete: 50
          - ScannerThreadsInvoluntaryContextSwitches: 4
          - ScannerThreadsTotalWallClockTime: 49m5s
            - DelimiterParseTime: 1s340ms
            - MaterializeTupleTime: 331.500us
            - ScannerThreadsSysTime: 6.744ms
            - ScannerThreadsUserTime: 1s365ms
          - ScannerThreadsVoluntaryContextSwitches: 1.27K (1273)
          - TotalRawHdfsReadTime: 6s973ms
          - TotalReadThroughput: 29.97 MB/sec
     Fragment 1:
       Instance 9148ac87180b4fed:b92602810a654bb3 (host=chas2t3endc02:22000):(1m50s 0.00%)
          Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:56/3.50 GB
          - RowsProduced: 1
         CodeGen:
            - CodegenTime: 780.0us
            - CompileTime: 113.908ms
            - LoadTime: 8.810ms
            - ModuleFileSize: 70.02 KB
         DataStreamSender (dst_id=2):(311.0us 0.00%)
            - BytesSent: 16.00 B
            - NetworkThroughput: 61.04 KB/sec
            - OverallThroughput: 50.24 KB/sec
            - SerializeBatchTime: 6.0us
            - ThriftTransmitTime: 256.0us
            - UncompressedRowBatchSize: 16.00 B
         AGGREGATION_NODE (id=1):(1m50s 0.01%)
            - BuildBuckets: 1.02K (1024)
            - BuildTime: 3.256ms
            - GetResultsTime: 3.0us
            - LoadFactor: 0.00
            - MemoryUsed: 32.01 KB
            - RowsReturned: 1
            - RowsReturnedRate: 0
         HDFS_SCAN_NODE (id=0):(1m50s 99.99%)
           File Formats: TEXT/NONE:56
            Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:56/3.50 GB
            - AverageHdfsReadThreadConcurrency: 0.07
              - HdfsReadThreadConcurrencyCountPercentage=0: 93.21
              - HdfsReadThreadConcurrencyCountPercentage=1: 6.79
              - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
            - AverageScannerThreadConcurrency: 0.01
            - BytesRead: 3.50 GB
            - MemoryUsed: 0.00
            - NumDisksAccessed: 1
            - PerReadThreadRawHdfsThroughput: 451.90 MB/sec
            - RowsReturned: 434.37K (434373)
            - RowsReturnedRate: 3.93 K/sec
            - ScanRangesComplete: 56
            - ScannerThreadsInvoluntaryContextSwitches: 2
            - ScannerThreadsTotalWallClockTime: 59m13s
              - DelimiterParseTime: 1s502ms
              - MaterializeTupleTime: 334.0us
              - ScannerThreadsSysTime: 5.994ms
              - ScannerThreadsUserTime: 1s525ms
            - ScannerThreadsVoluntaryContextSwitches: 1.36K (1365)
            - TotalRawHdfsReadTime: 7s931ms
            - TotalReadThroughput: 32.43 MB/sec
       Instance 9148ac87180b4fed:b92602810a654bb4 (host=chadvt3endc02:22000):(1m48s 0.00%)
          Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:48/3.00 GB
          - RowsProduced: 1
         CodeGen:
            - CodegenTime: 724.0us
            - CompileTime: 113.126ms
            - LoadTime: 8.573ms
            - ModuleFileSize: 70.02 KB
         DataStreamSender (dst_id=2):(247.0us 0.00%)
            - BytesSent: 16.00 B
            - NetworkThroughput: 131.30 KB/sec
            - OverallThroughput: 63.26 KB/sec
            - SerializeBatchTime: 8.0us
            - ThriftTransmitTime: 119.0us
            - UncompressedRowBatchSize: 16.00 B
         AGGREGATION_NODE (id=1):(1m48s 0.01%)
            - BuildBuckets: 1.02K (1024)
            - BuildTime: 2.511ms
            - GetResultsTime: 4.0us
            - LoadFactor: 0.00
            - MemoryUsed: 32.01 KB
            - RowsReturned: 1
            - RowsReturnedRate: 0
         HDFS_SCAN_NODE (id=0):(1m48s 99.99%)
           File Formats: TEXT/NONE:48
            Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:48/3.00 GB
            - AverageHdfsReadThreadConcurrency: 0.07
              - HdfsReadThreadConcurrencyCountPercentage=0: 92.59
              - HdfsReadThreadConcurrencyCountPercentage=1: 7.41
              - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
            - AverageScannerThreadConcurrency: 0.02
            - BytesRead: 3.00 GB
            - MemoryUsed: 0.00
            - NumDisksAccessed: 1
            - PerReadThreadRawHdfsThroughput: 447.17 MB/sec
            - RowsReturned: 321.75K (321753)
            - RowsReturnedRate: 2.97 K/sec
            - ScanRangesComplete: 48
            - ScannerThreadsInvoluntaryContextSwitches: 4
            - ScannerThreadsTotalWallClockTime: 49m39s
              - DelimiterParseTime: 1s252ms
              - MaterializeTupleTime: 312.0us
              - ScannerThreadsSysTime: 5.995ms
              - ScannerThreadsUserTime: 1s277ms
            - ScannerThreadsVoluntaryContextSwitches: 1.27K (1268)
            - TotalRawHdfsReadTime: 6s862ms
            - TotalReadThroughput: 28.40 MB/sec
       Instance 9148ac87180b4fed:b92602810a654bb5 (host=chas2t3endc01:22000):(1m43s 0.00%)
          Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:45/2.81 GB
          - RowsProduced: 1
         CodeGen:
            - CodegenTime: 731.0us
            - CompileTime: 113.26ms
            - LoadTime: 8.870ms
            - ModuleFileSize: 70.02 KB
         DataStreamSender (dst_id=2):(315.0us 0.00%)
            - BytesSent: 16.00 B
            - NetworkThroughput: 73.36 KB/sec
            - OverallThroughput: 49.60 KB/sec
            - SerializeBatchTime: 8.0us
            - ThriftTransmitTime: 213.0us
            - UncompressedRowBatchSize: 16.00 B
         AGGREGATION_NODE (id=1):(1m43s 0.01%)
            - BuildBuckets: 1.02K (1024)
            - BuildTime: 3.123ms
            - GetResultsTime: 3.0us
            - LoadFactor: 0.00
            - MemoryUsed: 32.01 KB
            - RowsReturned: 1
            - RowsReturnedRate: 0
         HDFS_SCAN_NODE (id=0):(1m43s 99.99%)
           File Formats: TEXT/NONE:45
            Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:45/2.81 GB
            - AverageHdfsReadThreadConcurrency: 0.05
              - HdfsReadThreadConcurrencyCountPercentage=0: 95.15
              - HdfsReadThreadConcurrencyCountPercentage=1: 4.85
              - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=36: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=37: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=38: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=39: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=40: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=41: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
            - AverageScannerThreadConcurrency: 0.01
            - BytesRead: 2.81 GB
            - MemoryUsed: 0.00
            - NumDisksAccessed: 1
            - PerReadThreadRawHdfsThroughput: 485.12 MB/sec
            - RowsReturned: 428.45K (428449)
            - RowsReturnedRate: 4.14 K/sec
            - ScanRangesComplete: 45
            - ScannerThreadsInvoluntaryContextSwitches: 3
            - ScannerThreadsTotalWallClockTime: 37m58s
              - DelimiterParseTime: 1s197ms
              - MaterializeTupleTime: 311.0us
              - ScannerThreadsSysTime: 6.994ms
              - ScannerThreadsUserTime: 1s219ms
            - ScannerThreadsVoluntaryContextSwitches: 1.17K (1173)
            - TotalRawHdfsReadTime: 5s936ms
            - TotalReadThroughput: 27.82 MB/sec
       Instance 9148ac87180b4fed:b92602810a654bb6 (host=chas2t3endc03:22000):(1m48s 0.00%)
          Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:53/3.31 GB
          - RowsProduced: 1
         CodeGen:
            - CodegenTime: 733.0us
            - CompileTime: 117.958ms
            - LoadTime: 8.740ms
            - ModuleFileSize: 70.02 KB
         DataStreamSender (dst_id=2):(296.0us 0.00%)
            - BytesSent: 16.00 B
            - NetworkThroughput: 68.53 KB/sec
            - OverallThroughput: 52.79 KB/sec
            - SerializeBatchTime: 8.0us
            - ThriftTransmitTime: 228.0us
            - UncompressedRowBatchSize: 16.00 B
         AGGREGATION_NODE (id=1):(1m48s 0.01%)
            - BuildBuckets: 1.02K (1024)
            - BuildTime: 3.137ms
            - GetResultsTime: 3.0us
            - LoadFactor: 0.00
            - MemoryUsed: 32.01 KB
            - RowsReturned: 1
            - RowsReturnedRate: 0
         HDFS_SCAN_NODE (id=0):(1m48s 99.99%)
           File Formats: TEXT/NONE:53
            Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:53/3.31 GB
            - AverageHdfsReadThreadConcurrency: 0.07
              - HdfsReadThreadConcurrencyCountPercentage=0: 93.06
              - HdfsReadThreadConcurrencyCountPercentage=1: 6.94
              - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
              - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
            - AverageScannerThreadConcurrency: 0.01
            - BytesRead: 3.31 GB
            - MemoryUsed: 0.00
            - NumDisksAccessed: 1
            - PerReadThreadRawHdfsThroughput: 473.49 MB/sec
            - RowsReturned: 461.38K (461381)
            - RowsReturnedRate: 4.25 K/sec
            - ScanRangesComplete: 53
            - ScannerThreadsInvoluntaryContextSwitches: 7
            - ScannerThreadsTotalWallClockTime: 49m30s
              - DelimiterParseTime: 1s407ms
              - MaterializeTupleTime: 369.0us
              - ScannerThreadsSysTime: 7.993ms
              - ScannerThreadsUserTime: 1s436ms
            - ScannerThreadsVoluntaryContextSwitches: 1.29K (1286)
            - TotalRawHdfsReadTime: 7s163ms
            - TotalReadThroughput: 31.25 MB/sec

Search Discussions

  • Ishaan Joshi at Apr 17, 2013 at 2:31 pm
    Hi,

       To better diagnose the problem, could you send us the query profile and
    the logs from the impalad you connected to? The query profile is available
    on the impalad debug webpage - click on /queries, and the profile link
    should be right next to the query you ran. The log can be retrieved from
    /logs.

       Additionally, more information about the table would be helpful; the
    output of a describe command is ideal.
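
        A minimal way to gather all three from a shell, assuming curl is
    available on the node and that impala-shell's -i/-q options behave here
    as in later releases (the host and table names are the ones from this
    thread):

      # recent queries (with profile links) and the impalad log, via the debug webserver
      curl -s http://chadvt3endc02.ops.tiaa-cref.org:25000/queries
      curl -s http://chadvt3endc02.ops.tiaa-cref.org:25000/logs

      # table layout, via impala-shell
      impala-shell -i chadvt3endc02.ops.tiaa-cref.org -q 'describe security_report'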

    Thanks,

    -- Ishaan



    dfs.image.transfer.timeout600000dfs.namenode.name.dir
    file://${hadoop.tmp.dir}/dfs/nameyarn.app.mapreduce.am.staging-dir
    /tmp/hadoop-yarn/stagingfs.AbstractFileSystem.file.impl
    org.apache.hadoop.fs.local.LocalFsyarn.nodemanager.env-whitelist
    JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME
    dfs.image.compression.codecorg.apache.hadoop.io.compress.DefaultCodec
    mapreduce.job.reduces1mapreduce.job.complete.cancel.delegation.tokenstrue
    hadoop.security.group.mapping.ldap.search.filter.user
    (&(objectClass=user)(sAMAccountName={0}))
    yarn.nodemanager.sleep-delay-before-sigkill.ms250
    mapreduce.tasktracker.healthchecker.interval60000
    mapreduce.jobtracker.heartbeats.in.second100kfs.bytes-per-checksum512
    mapreduce.jobtracker.persist.jobstatus.dir/jobtracker/jobsInfo
    dfs.namenode.backup.http-address0.0.0.0:50105hadoop.rpc.protection
    authenticationdfs.namenode.https-address0.0.0.0:50470
    ftp.stream-buffer-size4096dfs.ha.log-roll.period120
    yarn.resourcemanager.admin.client.thread-count1
    file.client-write-packet-size65536
    hadoop.http.authentication.simple.anonymous.allowedtrue
    yarn.nodemanager.log.retain-seconds10800
    dfs.datanode.drop.cache.behind.readsfalse
    dfs.image.transfer.bandwidthPerSec0
    ha.failover-controller.cli-check.rpc-timeout.ms20000
    mapreduce.tasktracker.instrumentation
    org.apache.hadoop.mapred.TaskTrackerMetricsInstio.mapfile.bloom.size
    1048576dfs.ha.fencing.ssh.connect-timeout30000s3.bytes-per-checksum512
    fs.automatic.closetruefs.trash.interval0hadoop.security.authentication
    simplefs.defaultFShdfs://chadvt3endc01:8020hadoop.ssl.server.conf
    ssl-server.xmlipc.client.connect.max.retries10
    yarn.resourcemanager.delayed.delegation-token.removal-interval-ms30000
    dfs.journalnode.http-address0.0.0.0:8480mapreduce.jobtracker.taskscheduler
    org.apache.hadoop.mapred.JobQueueTaskScheduler
    mapreduce.job.speculative.speculativecap0.1
    yarn.am.liveness-monitor.expiry-interval-ms600000
    mapreduce.output.fileoutputformat.compressfalse
    net.topology.node.switch.mapping.impl
    org.apache.hadoop.net.ScriptBasedMapping
    dfs.namenode.replication.considerLoadtruedfs.namenode.audit.loggersdefault
    mapreduce.job.counters.max120yarn.resourcemanager.address0.0.0.0:8032
    dfs.client.block.write.retries3
    yarn.resourcemanager.nm.liveness-monitor.interval-ms1000
    io.map.index.interval128mapred.child.java.opts-Xmx200m
    mapreduce.tasktracker.local.dir.minspacestart0
    mapreduce.client.progressmonitor.pollinterval1000
    dfs.client.https.keystore.resourcessl-client.xml
    rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB
    org.apache.hadoop.ipc.ProtobufRpcEngine
    mapreduce.jobtracker.tasktracker.maxblacklists4mapreduce.job.queuename
    defaultyarn.nodemanager.localizer.address0.0.0.0:8040
    io.mapfile.bloom.error.rate0.005mapreduce.job.split.metainfo.maxsize
    10000000yarn.nodemanager.delete.thread-count4ipc.client.tcpnodelayfalse
    yarn.app.mapreduce.am.resource.mb1536dfs.datanode.dns.nameserverdefault
    mapreduce.map.output.compress.codec
    org.apache.hadoop.io.compress.DefaultCodec
    dfs.namenode.accesstime.precision3600000mapreduce.map.log.levelINFO
    io.seqfile.compress.blocksize1000000mapreduce.tasktracker.taskcontroller
    org.apache.hadoop.mapred.DefaultTaskController
    hadoop.security.groups.cache.secs300
    mapreduce.job.end-notification.max.attempts5
    yarn.nodemanager.webapp.address0.0.0.0:8042
    mapreduce.jobtracker.expire.trackers.interval600000
    yarn.resourcemanager.webapp.address0.0.0.0:8088
    yarn.nodemanager.health-checker.interval-ms600000
    hadoop.security.authorizationfalsemapreduce.job.map.output.collector.class
    org.apache.hadoop.mapred.MapTask$MapOutputBufferfs.ftp.host0.0.0.0
    yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms1000
    mapreduce.ifile.readaheadtrueha.zookeeper.session-timeout.ms5000
    mapreduce.tasktracker.taskmemorymanager.monitoringinterval5000
    mapreduce.reduce.shuffle.parallelcopies5mapreduce.map.skip.maxrecords0
    dfs.https.enablefalsemapreduce.reduce.shuffle.read.timeout180000
    mapreduce.output.fileoutputformat.compress.codec
    org.apache.hadoop.io.compress.DefaultCodec
    mapreduce.jobtracker.instrumentation
    org.apache.hadoop.mapred.JobTrackerMetricsInst
    yarn.nodemanager.remote-app-log-dir-suffixlogsdfs.blockreport.intervalMsec
    21600000mapreduce.reduce.speculativetruemapreduce.jobhistory.keytab
    /etc/security/keytab/jhs.service.keytab
    dfs.datanode.balance.bandwidthPerSec1048576file.blocksize67108864
    yarn.resourcemanager.admin.address0.0.0.0:8033
    yarn.resourcemanager.resource-tracker.address0.0.0.0:8031
    mapreduce.tasktracker.local.dir.minspacekill0
    mapreduce.jobtracker.staging.root.dir${hadoop.tmp.dir}/mapred/staging
    mapreduce.jobtracker.retiredjobs.cache.size1000
    ipc.client.connect.max.retries.on.timeouts45ha.zookeeper.acl
    world:anyone:rwcdayarn.nodemanager.local-dirs
    ${hadoop.tmp.dir}/nm-local-dirmapreduce.reduce.shuffle.connect.timeout
    180000dfs.block.access.key.update.interval600
    dfs.block.access.token.lifetime600
    mapreduce.job.end-notification.retry.attempts5
    mapreduce.jobtracker.system.dir${hadoop.tmp.dir}/mapred/system
    yarn.nodemanager.admin-envMALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
    yarn.log-aggregation.retain-seconds-1
    mapreduce.jobtracker.jobhistory.block.size3145728
    mapreduce.tasktracker.indexcache.mb10dfs.namenode.checkpoint.check.period
    60dfs.client.block.write.replace-datanode-on-failure.enabletrue
    dfs.datanode.directoryscan.interval21600
    yarn.nodemanager.container-monitor.interval-ms3000
    dfs.default.chunk.view.size32768
    mapreduce.job.speculative.slownodethreshold1.0
    mapreduce.job.reduce.slowstart.completedmaps0.05
    hadoop.security.instrumentation.requires.adminfalse
    dfs.namenode.safemode.min.datanodes0
    hadoop.http.authentication.signature.secret.file
    ${user.home}/hadoop-http-auth-signature-secretmapreduce.reduce.maxattempts
    4yarn.nodemanager.localizer.cache.target-size-mb10240s3native.replication3
    dfs.datanode.https.address0.0.0.0:50475
    mapreduce.reduce.skip.proc.count.autoincrtruefile.replication1
    hadoop.hdfs.configuration.version1ipc.client.idlethreshold4000
    hadoop.tmp.dir/tmp/hadoop-${user.name}mapreduce.jobhistory.address
    0.0.0.0:10020mapreduce.jobtracker.restart.recoverfalse
    mapreduce.cluster.local.dir/test02/mapred/localyarn.ipc.serializer.type
    protocolbuffersdfs.namenode.decommission.nodes.per.interval5
    dfs.namenode.delegation.key.update-interval86400000fs.s3.buffer.dir
    ${hadoop.tmp.dir}/s3dfs.namenode.support.allow.formattrue
    yarn.nodemanager.remote-app-log-dir/tmp/logs
    hadoop.work.around.non.threadsafe.getpwuidfalse
    dfs.ha.automatic-failover.enabledfalse
    mapreduce.jobtracker.persist.jobstatus.activetrue
    dfs.namenode.logging.levelinfoyarn.nodemanager.log-dirs
    ${yarn.log.dir}/userlogsha.health-monitor.sleep-after-disconnect.ms1000
    dfs.namenode.checkpoint.edits.dir${dfs.namenode.checkpoint.dir}
    hadoop.rpc.socket.factory.class.default
    org.apache.hadoop.net.StandardSocketFactoryyarn.resourcemanager.keytab
    /etc/krb5.keytabdfs.datanode.http.address0.0.0.0:50075
    mapreduce.task.profilefalsedfs.namenode.edits.dir${dfs.namenode.name.dir}
    hadoop.fuse.timer.period5mapreduce.map.skip.proc.count.autoincrtrue
    fs.AbstractFileSystem.viewfs.implorg.apache.hadoop.fs.viewfs.ViewFs
    mapreduce.job.speculative.slowtaskthreshold1.0s3native.stream-buffer-size
    4096yarn.nodemanager.delete.debug-delay-sec0
    dfs.secondary.namenode.kerberos.internal.spnego.principal
    ${dfs.web.authentication.kerberos.principal}
    dfs.namenode.safemode.threshold-pct0.999fmapreduce.ifile.readahead.bytes
    4194304yarn.scheduler.maximum-allocation-mb8192s3native.bytes-per-checksum
    512mapreduce.job.committer.setup.cleanup.neededtruekfs.replication3
    yarn.nodemanager.log-aggregation.compression-typenone
    hadoop.http.authentication.typesimpledfs.client.failover.sleep.base.millis
    500yarn.nodemanager.heartbeat.interval-ms1000
    hadoop.jetty.logs.serve.aliasestrue
    ha.failover-controller.graceful-fence.rpc-timeout.ms5000
    mapreduce.reduce.shuffle.input.buffer.percent0.70
    dfs.datanode.max.transfer.threads4096mapreduce.task.io.sort.mb100
    mapreduce.reduce.merge.inmem.threshold1000dfs.namenode.handler.count10
    hadoop.ssl.client.confssl-client.xml
    yarn.resourcemanager.container.liveness-monitor.interval-ms600000
    mapreduce.client.completion.pollinterval5000
    yarn.nodemanager.vmem-pmem-ratio2.1yarn.app.mapreduce.client.max-retries3
    hadoop.ssl.enabledfalsefs.AbstractFileSystem.hdfs.impl
    org.apache.hadoop.fs.Hdfsmapreduce.reduce.java.opts-Xmx1024M
    mapreduce.tasktracker.reduce.tasks.maximum4mapreduce.map.java.opts
    -Xmx1024Mmapreduce.reduce.input.buffer.percent0.0kfs.stream-buffer-size
    4096dfs.namenode.invalidate.work.pct.per.iteration0.32f
    yarn.app.mapreduce.am.command-opts-Xmx1024mdfs.bytes-per-checksum512
    dfs.replication1mapreduce.shuffle.ssl.file.buffer.size65536
    dfs.permissions.enabledtruemapreduce.jobtracker.maxtasks.perjob-1
    dfs.datanode.use.datanode.hostnamefalsemapreduce.task.userlog.limit.kb0
    dfs.namenode.fs-limits.max-directory-items0s3.client-write-packet-size
    65536dfs.client.failover.sleep.max.millis15000mapreduce.job.maps2
    dfs.namenode.fs-limits.max-component-length0mapreduce.map.output.compress
    falses3.blocksize67108864dfs.namenode.edits.journal-plugin.qjournal
    org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManagerkfs.blocksize
    67108864dfs.client.https.need-authfalse
    yarn.scheduler.minimum-allocation-mb1024ftp.replication3
    mapreduce.input.fileinputformat.split.minsize0fs.s3n.block.size67108864
    yarn.ipc.rpc.classorg.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
    dfs.namenode.num.extra.edits.retained1000000hadoop.http.staticuser.user
    dr.whoyarn.nodemanager.localizer.cache.cleanup.interval-ms600000
    mapreduce.job.jvm.numtasks1mapreduce.task.profile.maps0-2
    mapreduce.shuffle.port8080mapreduce.reduce.shuffle.merge.percent0.66
    mapreduce.jobtracker.http.address0.0.0.0:50030
    mapreduce.task.skip.start.attempts2mapreduce.task.io.sort.factor10
    dfs.namenode.checkpoint.dirfile://${hadoop.tmp.dir}/dfs/namesecondary
    tfile.fs.input.buffer.size262144tfile.io.chunk.size1048576fs.s3.block.size
    67108864io.serializations
    org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
    yarn.resourcemanager.max-completed-applications10000
    mapreduce.jobhistory.principaljhs/_HOST@REALM.TLD
    mapreduce.job.end-notification.retry.interval1dfs.namenode.backup.address
    0.0.0.0:50100dfs.block.access.token.enablefalse
    io.seqfile.sorter.recordlimit1000000s3native.client-write-packet-size65536
    ftp.bytes-per-checksum512hadoop.security.group.mapping
    org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
    dfs.client.file-block-storage-locations.timeout3000
    mapreduce.job.end-notification.max.retry.interval5yarn.acl.enabletrue
    yarn.nm.liveness-monitor.expiry-interval-ms600000
    mapreduce.tasktracker.map.tasks.maximum8dfs.namenode.max.objects0
    dfs.namenode.delegation.token.max-lifetime604800000
    mapreduce.job.hdfs-servers${fs.defaultFS}yarn.application.classpath
    $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*
    mapreduce.tasktracker.dns.nameserverdefault
    dfs.datanode.hdfs-blocks-metadata.enabledtrue
    yarn.nodemanager.aux-services.mapreduce.shuffle.class
    org.apache.hadoop.mapred.ShuffleHandlerdfs.datanode.readahead.bytes4193404
    mapreduce.job.ubertask.maxreduces1dfs.image.compressfalse
    mapreduce.shuffle.ssl.enabledfalseyarn.log-aggregation-enablefalse
    mapreduce.tasktracker.report.address127.0.0.1:0
    mapreduce.tasktracker.http.threads40dfs.stream-buffer-size4096
    tfile.fs.output.buffer.size262144fs.permissions.umask-mode022
    yarn.resourcemanager.am.max-retries1
    ha.failover-controller.graceful-fence.connection.retries1
    dfs.datanode.drop.cache.behind.writesfalsemapreduce.job.ubertask.enable
    falsehadoop.common.configuration.version0.23.0
    dfs.namenode.replication.work.multiplier.per.iteration2
    mapreduce.job.acl-modify-job
    io.seqfile.local.dir${hadoop.tmp.dir}/io/localfs.s3.sleepTimeSeconds10
    mapreduce.client.output.filter FAILED

    Command-line Flags

    --dump_ir=false
    --module_output=
    --be_port=22000
    --hostname=chadvt3endc02
    --keytab_file=
    --mem_limit=80%
    --planservice_host=localhost
    --planservice_port=20000
    --principal=
    --exchg_node_buffer_size_bytes=10485760
    --max_row_batches=0
    --randomize_splits=false
    --num_disks=0
    --num_threads_per_disk=1
    --read_size=8388608
    --enable_webserver=true
    --state_store_host=chadvt3endc02.ops.tiaa-cref.org
    --state_store_subscriber_port=23000
    --use_statestore=true
    --nn=chadvt3endc01
    --nn_port=8020
    --serialize_batch=false
    --status_report_interval=5
    --compress_rowbatches=true
    --abort_on_config_error=true
    --be_service_threads=64
    --beeswax_port=21000
    --default_query_options=
    --fe_service_threads=64
    --heap_profile_dir=
    --hs2_port=21050
    --load_catalog_at_startup=false
    --log_mem_usage_interval=0
    --log_query_to_file=true
    --query_log_size=25
    --use_planservice=false
    --statestore_subscriber_timeout_seconds=10
    --state_store_port=24000
    --statestore_max_missed_heartbeats=5
    --statestore_num_heartbeat_threads=10
    --statestore_suspect_heartbeats=2
    --kerberos_reinit_interval=60
    --sasl_path=/usr/lib/sasl2:/usr/lib64/sasl2:/usr/local/lib/sasl2:/usr/lib/x86_64-linux-gnu/sasl2
    --web_log_bytes=1048576
    --log_filename=impalad
    --periodic_counter_update_period_ms=500
    --rpc_cnxn_attempts=10
    --rpc_cnxn_retry_interval_ms=2000
    --enable_webserver_doc_root=true
    --webserver_doc_root=/usr/lib/impala
    --webserver_interface=
    --webserver_port=25000
    --flagfile=
    --fromenv=
    --tryfromenv=
    --undefok=
    --tab_completion_columns=80
    --tab_completion_word=
    --help=false
    --helpfull=false
    --helpmatch=
    --helpon=
    --helppackage=false
    --helpshort=false
    --helpxml=false
    --version=false
    --alsologtoemail=
    --alsologtostderr=false
    --drop_log_memory=true
    --log_backtrace_at=
    --log_dir=/var/log/impala
    --log_link=
    --log_prefix=true
    --logbuflevel=0
    --logbufsecs=30
    --logemaillevel=999
    --logmailer=/bin/mail
    --logtostderr=false
    --max_log_size=1800
    --minloglevel=0
    --stderrthreshold=2
    --stop_logging_if_full_disk=false
    --symbolize_stacktrace=true
    --v=0
    --vmodule=
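
    These are the gflags the impalad process started with; the empty --flagfile
    entry above means none of them came from a flag file. As a hedged sketch
    only (the path below is hypothetical, not something from this cluster), a
    flag file collecting a few of the switches shown in this dump would simply
    list one flag per line and be passed at startup:

        # /etc/impala/impalad.flags  (hypothetical path)
        --mem_limit=80%
        --log_dir=/var/log/impala
        --state_store_host=chadvt3endc02.ops.tiaa-cref.org
        --state_store_port=24000
        --nn=chadvt3endc01
        --nn_port=8020

        # started, for example, as:
        impalad --flagfile=/etc/impala/impalad.flags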

    query profile:

    Query (id=9148ac87180b4fed:b92602810a654bb1):
    - PlanningTime: 15.457ms
    Summary:
    Default Db: default
    End Time: 2013-04-17 09:58:50
    Impala Version: impalad version 0.7 RELEASE (build 62a2db93eb04c36e5becab5fdcaf06b53a839238)
    Built on Mon, 15 Apr 2013 08:27:38 PST
    Plan:
    ----------------
    Plan Fragment 0
    UNPARTITIONED
    AGGREGATE
    OUTPUT: SUM()
    GROUP BY:
    TUPLE IDS: 1
    EXCHANGE (2)
    TUPLE IDS: 1

    Plan Fragment 1
    RANDOM
    STREAM DATA SINK
    EXCHANGE ID: 2
    UNPARTITIONED

    AGGREGATE
    OUTPUT: COUNT(*)
    GROUP BY:
    TUPLE IDS: 1
    SCAN HDFS table=default.security_report #partitions=1 size=12.62GB (0)
    TUPLE IDS: 0
    ----------------
    Query State: FINISHED
    Query Type: QUERY
    Sql Statement: select count(*) from security_report
    Start Time: 2013-04-17 09:56:59
    User: nathamu
    Query 9148ac87180b4fed:b92602810a654bb1:(1m50s 0.00%)
    Aggregate Profile:
    - FinalizationTimer: 0ns
    Coordinator Fragment:(1m50s 0.00%)
    - RowsProduced: 1
    CodeGen:
    - CodegenTime: 465.0us
    - CompileTime: 109.684ms
    - LoadTime: 9.50ms
    - ModuleFileSize: 70.02 KB
    AGGREGATION_NODE (id=3):(1m50s 0.00%)
    - BuildBuckets: 1.02K (1024)
    - BuildTime: 6.0us
    - GetResultsTime: 3.0us
    - LoadFactor: 0.00
    - MemoryUsed: 32.01 KB
    - RowsReturned: 1
    - RowsReturnedRate: 0
    EXCHANGE_NODE (id=2):(1m50s 100.00%)
    - BytesReceived: 64.00 B
    - ConvertRowBatchTime: 7.0us
    - DataArrivalWaitTime: 1m50s
    - DeserializeRowBatchTimer: 22.0us
    - FirstBatchArrivalWaitTime: 0ns
    - MemoryUsed: 0.00
    - RowsReturned: 4
    - RowsReturnedRate: 0
    - SendersBlockedTotalTimer: 0ns
    - SendersBlockedWallTimer: 0ns
    Averaged Fragment 1:(1m47s 0.00%)
    completion times: min:1m43s max:1m50s mean: 1m47s stddev:2s582ms
    execution rates: min:27.82 MB/sec max:32.43 MB/sec mean:29.95 MB/sec stddev:1.95 MB/sec
    num instances: 4
    split sizes: min: 2.81 GB, max: 3.50 GB, avg: 3.16 GB, stddev: 273.91 MB
    - RowsProduced: 1
    CodeGen:
    - CodegenTime: 742.0us
    - CompileTime: 114.504ms
    - LoadTime: 8.748ms
    - ModuleFileSize: 70.02 KB
    DataStreamSender (dst_id=2):(292.250us 0.00%)
    - BytesSent: 16.00 B
    - NetworkThroughput: 83.56 KB/sec
    - OverallThroughput: 53.97 KB/sec
    - SerializeBatchTime: 7.500us
    - ThriftTransmitTime: 204.0us
    - UncompressedRowBatchSize: 16.00 B
    AGGREGATION_NODE (id=1):(1m47s 0.01%)
    - BuildBuckets: 1.02K (1024)
    - BuildTime: 3.6ms
    - GetResultsTime: 3.250us
    - LoadFactor: 0.00
    - MemoryUsed: 32.01 KB
    - RowsReturned: 1
    - RowsReturnedRate: 0
    HDFS_SCAN_NODE (id=0):(1m47s 99.99%)
    - AverageHdfsReadThreadConcurrency: 0.06
    - HdfsReadThreadConcurrencyCountPercentage=0: 93.50
    - HdfsReadThreadConcurrencyCountPercentage=1: 6.50
    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=36: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=37: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=38: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=39: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=40: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=41: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
    - AverageScannerThreadConcurrency: 0.02
    - BytesRead: 3.16 GB
    - MemoryUsed: 0.00
    - NumDisksAccessed: 1
    - PerReadThreadRawHdfsThroughput: 464.42 MB/sec
    - RowsReturned: 411.49K (411489)
    - RowsReturnedRate: 3.82 K/sec
    - ScanRangesComplete: 50
    - ScannerThreadsInvoluntaryContextSwitches: 4
    - ScannerThreadsTotalWallClockTime: 49m5s
    - DelimiterParseTime: 1s340ms
    - MaterializeTupleTime: 331.500us
    - ScannerThreadsSysTime: 6.744ms
    - ScannerThreadsUserTime: 1s365ms
    - ScannerThreadsVoluntaryContextSwitches: 1.27K (1273)
    - TotalRawHdfsReadTime: 6s973ms
    - TotalReadThroughput: 29.97 MB/sec
    Fragment 1:
    Instance 9148ac87180b4fed:b92602810a654bb3 (host=chas2t3endc02:22000):(1m50s 0.00%)
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:56/3.50 GB
    - RowsProduced: 1
    CodeGen:
    - CodegenTime: 780.0us
    - CompileTime: 113.908ms
    - LoadTime: 8.810ms
    - ModuleFileSize: 70.02 KB
    DataStreamSender (dst_id=2):(311.0us 0.00%)
    - BytesSent: 16.00 B
    - NetworkThroughput: 61.04 KB/sec
    - OverallThroughput: 50.24 KB/sec
    - SerializeBatchTime: 6.0us
    - ThriftTransmitTime: 256.0us
    - UncompressedRowBatchSize: 16.00 B
    AGGREGATION_NODE (id=1):(1m50s 0.01%)
    - BuildBuckets: 1.02K (1024)
    - BuildTime: 3.256ms
    - GetResultsTime: 3.0us
    - LoadFactor: 0.00
    - MemoryUsed: 32.01 KB
    - RowsReturned: 1
    - RowsReturnedRate: 0
    HDFS_SCAN_NODE (id=0):(1m50s 99.99%)
    File Formats: TEXT/NONE:56
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:56/3.50 GB
    - AverageHdfsReadThreadConcurrency: 0.07
    - HdfsReadThreadConcurrencyCountPercentage=0: 93.21
    - HdfsReadThreadConcurrencyCountPercentage=1: 6.79
    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
    - AverageScannerThreadConcurrency: 0.01
    - BytesRead: 3.50 GB
    - MemoryUsed: 0.00
    - NumDisksAccessed: 1
    - PerReadThreadRawHdfsThroughput: 451.90 MB/sec
    - RowsReturned: 434.37K (434373)
    - RowsReturnedRate: 3.93 K/sec
    - ScanRangesComplete: 56
    - ScannerThreadsInvoluntaryContextSwitches: 2
    - ScannerThreadsTotalWallClockTime: 59m13s
    - DelimiterParseTime: 1s502ms
    - MaterializeTupleTime: 334.0us
    - ScannerThreadsSysTime: 5.994ms
    - ScannerThreadsUserTime: 1s525ms
    - ScannerThreadsVoluntaryContextSwitches: 1.36K (1365)
    - TotalRawHdfsReadTime: 7s931ms
    - TotalReadThroughput: 32.43 MB/sec
    Instance 9148ac87180b4fed:b92602810a654bb4 (host=chadvt3endc02:22000):(1m48s 0.00%)
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:48/3.00 GB
    - RowsProduced: 1
    CodeGen:
    - CodegenTime: 724.0us
    - CompileTime: 113.126ms
    - LoadTime: 8.573ms
    - ModuleFileSize: 70.02 KB
    DataStreamSender (dst_id=2):(247.0us 0.00%)
    - BytesSent: 16.00 B
    - NetworkThroughput: 131.30 KB/sec
    - OverallThroughput: 63.26 KB/sec
    - SerializeBatchTime: 8.0us
    - ThriftTransmitTime: 119.0us
    - UncompressedRowBatchSize: 16.00 B
    AGGREGATION_NODE (id=1):(1m48s 0.01%)
    - BuildBuckets: 1.02K (1024)
    - BuildTime: 2.511ms
    - GetResultsTime: 4.0us
    - LoadFactor: 0.00
    - MemoryUsed: 32.01 KB
    - RowsReturned: 1
    - RowsReturnedRate: 0
    HDFS_SCAN_NODE (id=0):(1m48s 99.99%)
    File Formats: TEXT/NONE:48
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:48/3.00 GB
    - AverageHdfsReadThreadConcurrency: 0.07
    - HdfsReadThreadConcurrencyCountPercentage=0: 92.59
    - HdfsReadThreadConcurrencyCountPercentage=1: 7.41
    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
    - AverageScannerThreadConcurrency: 0.02
    - BytesRead: 3.00 GB
    - MemoryUsed: 0.00
    - NumDisksAccessed: 1
    - PerReadThreadRawHdfsThroughput: 447.17 MB/sec
    - RowsReturned: 321.75K (321753)
    - RowsReturnedRate: 2.97 K/sec
    - ScanRangesComplete: 48
    - ScannerThreadsInvoluntaryContextSwitches: 4
    - ScannerThreadsTotalWallClockTime: 49m39s
    - DelimiterParseTime: 1s252ms
    - MaterializeTupleTime: 312.0us
    - ScannerThreadsSysTime: 5.995ms
    - ScannerThreadsUserTime: 1s277ms
    - ScannerThreadsVoluntaryContextSwitches: 1.27K (1268)
    - TotalRawHdfsReadTime: 6s862ms
    - TotalReadThroughput: 28.40 MB/sec
    Instance 9148ac87180b4fed:b92602810a654bb5 (host=chas2t3endc01:22000):(1m43s 0.00%)
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:45/2.81 GB
    - RowsProduced: 1
    CodeGen:
    - CodegenTime: 731.0us
    - CompileTime: 113.26ms
    - LoadTime: 8.870ms
    - ModuleFileSize: 70.02 KB
    DataStreamSender (dst_id=2):(315.0us 0.00%)
    - BytesSent: 16.00 B
    - NetworkThroughput: 73.36 KB/sec
    - OverallThroughput: 49.60 KB/sec
    - SerializeBatchTime: 8.0us
    - ThriftTransmitTime: 213.0us
    - UncompressedRowBatchSize: 16.00 B
    AGGREGATION_NODE (id=1):(1m43s 0.01%)
    - BuildBuckets: 1.02K (1024)
    - BuildTime: 3.123ms
    - GetResultsTime: 3.0us
    - LoadFactor: 0.00
    - MemoryUsed: 32.01 KB
    - RowsReturned: 1
    - RowsReturnedRate: 0
    HDFS_SCAN_NODE (id=0):(1m43s 99.99%)
    File Formats: TEXT/NONE:45
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:45/2.81 GB
    - AverageHdfsReadThreadConcurrency: 0.05
    - HdfsReadThreadConcurrencyCountPercentage=0: 95.15
    - HdfsReadThreadConcurrencyCountPercentage=1: 4.85
    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=36: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=37: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=38: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=39: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=40: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=41: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
    - AverageScannerThreadConcurrency: 0.01
    - BytesRead: 2.81 GB
    - MemoryUsed: 0.00
    - NumDisksAccessed: 1
    - PerReadThreadRawHdfsThroughput: 485.12 MB/sec
    - RowsReturned: 428.45K (428449)
    - RowsReturnedRate: 4.14 K/sec
    - ScanRangesComplete: 45
    - ScannerThreadsInvoluntaryContextSwitches: 3
    - ScannerThreadsTotalWallClockTime: 37m58s
    - DelimiterParseTime: 1s197ms
    - MaterializeTupleTime: 311.0us
    - ScannerThreadsSysTime: 6.994ms
    - ScannerThreadsUserTime: 1s219ms
    - ScannerThreadsVoluntaryContextSwitches: 1.17K (1173)
    - TotalRawHdfsReadTime: 5s936ms
    - TotalReadThroughput: 27.82 MB/sec
    Instance 9148ac87180b4fed:b92602810a654bb6 (host=chas2t3endc03:22000):(1m48s 0.00%)
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:53/3.31 GB
    - RowsProduced: 1
    CodeGen:
    - CodegenTime: 733.0us
    - CompileTime: 117.958ms
    - LoadTime: 8.740ms
    - ModuleFileSize: 70.02 KB
    DataStreamSender (dst_id=2):(296.0us 0.00%)
    - BytesSent: 16.00 B
    - NetworkThroughput: 68.53 KB/sec
    - OverallThroughput: 52.79 KB/sec
    - SerializeBatchTime: 8.0us
    - ThriftTransmitTime: 228.0us
    - UncompressedRowBatchSize: 16.00 B
    AGGREGATION_NODE (id=1):(1m48s 0.01%)
    - BuildBuckets: 1.02K (1024)
    - BuildTime: 3.137ms
    - GetResultsTime: 3.0us
    - LoadFactor: 0.00
    - MemoryUsed: 32.01 KB
    - RowsReturned: 1
    - RowsReturnedRate: 0
    HDFS_SCAN_NODE (id=0):(1m48s 99.99%)
    File Formats: TEXT/NONE:53
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:53/3.31 GB
    - AverageHdfsReadThreadConcurrency: 0.07
    - HdfsReadThreadConcurrencyCountPercentage=0: 93.06
    - HdfsReadThreadConcurrencyCountPercentage=1: 6.94
    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
    - AverageScannerThreadConcurrency: 0.01
    - BytesRead: 3.31 GB
    - MemoryUsed: 0.00
    - NumDisksAccessed: 1
    - PerReadThreadRawHdfsThroughput: 473.49 MB/sec
    - RowsReturned: 461.38K (461381)
    - RowsReturnedRate: 4.25 K/sec
    - ScanRangesComplete: 53
    - ScannerThreadsInvoluntaryContextSwitches: 7
    - ScannerThreadsTotalWallClockTime: 49m30s
    - DelimiterParseTime: 1s407ms
    - MaterializeTupleTime: 369.0us
    - ScannerThreadsSysTime: 7.993ms
    - ScannerThreadsUserTime: 1s436ms
    - ScannerThreadsVoluntaryContextSwitches: 1.29K (1286)
    - TotalRawHdfsReadTime: 7s163ms
    - TotalReadThroughput: 31.25 MB/sec
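
    Reading the averaged Fragment 1 counters above side by side (this simply
    restates the numbers already in the profile, it is not a diagnosis):

        3.16 GB per node / 29.97 MB/sec  (TotalReadThroughput)             ~ 108 s  -> matches the 1m47s mean completion time
        3.16 GB per node / 464.42 MB/sec (PerReadThreadRawHdfsThroughput)  ~   7 s  -> matches TotalRawHdfsReadTime of 6s973ms

    In other words, the raw HDFS reads account for only about 7 of the ~108
    seconds per node, and with AverageScannerThreadConcurrency at 0.02 the scan
    node spends almost all of its wall time with no read in flight.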




  • Ishaan Joshi at Apr 17, 2013 at 2:57 pm
    Ramanujam,

       My apologies, you'd already sent most of the information I asked you for
    in your first email. We're looking at it and will get back to you soon.

    Thanks,

    .. Ishaan

    On Wed, Apr 17, 2013 at 7:51 AM, Nathamuni, Ramanujam wrote:

    Hi Ishaan, please find the describe table output, query profile, and logs
    for impalad below.

    [chadvt3endc02.ops.tiaa-cref.org:21000] > describe security_report;
    Query: describe security_report
    Query finished, fetching results ...
    +---------------------------------------------+--------+---------+
    | name | type | comment |
    +---------------------------------------------+--------+---------+
    | asset_ipv4_address | string | |
    | asset_ipv6_address | string | |
    | asset_ip_address | string | |
    | asset_mac_addresses | string | |
    | asset_names | string | |
    | asset_os_family | string | |
    | asset_os_name | string | |
    | asset_os_version | string | |
    | asset_risk_score | string | |
    | asset_exploit_count | string | |
    | asset_exploit_minimum_skill | string | |
    | asset_exploit_urls | string | |
    | asset_malware_kit_count | string | |
    | asset_malware_kit_names | string | |
    | asset_scan_id | string | |
    | asset_scan_template_name | string | |
    | asset_service_name | string | |
    | asset_service_port | string | |
    | asset_service_product | string | |
    | asset_service_protocol | string | |
    | asset_site_importance | string | |
    | asset_site_name | string | |
    | asset_vulnerability_additional_urls | string | |
    | asset_vulnerability_age | string | |
    | asset_vulnerability_cve_ids | string | |
    | asset_vulnerability_cve_urls | string | |
    | asset_vulnerability_cvss_score | string | |
    | asset_vulnerability_cvss_vector | string | |
    | asset_vulnerability_description | string | |
    | asset_vulnerability_id | string | |
    | asset_vulnerability_pci_compliance_status | string | |
    | asset_vulnerability_proof | string | |
    | asset_vulnerability_published_date | string | |
    | asset_vulnerability_reference_ids | string | |
    | asset_vulnerability_reference_urls | string | |
    | asset_vulnerability_risk_score | string | |
    | asset_vulnerability_severity_level | string | |
    | asset_vulnerability_solution | string | |
    | asset_vulnerability_tags | string | |
    | asset_vulnerability_test_date | string | |
    | asset_vulnerability_test_result_code | string | |
    | asset_culnerability_test_result_description | string | |
    | asset_vulnerability_title | string | |
    | asset_vulnerable_since | string | |
    +---------------------------------------------+--------+---------+
    Returned 44 row(s) in 0.03s
    [chadvt3endc02.ops.tiaa-cref.org:21000] >

    ** **

    ** **

    **1.) ***Below is Query profile:*

    ** **

    Impala****

    ** **

    /****

    /backends****

    /catalog****

    /logs****

    /memz****

    /metrics****

    /queries****

    /sessions****

    /varz****

    ** **

    Query (id=9148ac87180b4fed:b92602810a654bb1):****

    - PlanningTime: 15.457ms****

    Summary:****

    Default Db: default****

    End Time: 2013-04-17 09:58:50****

    Impala Version: impalad version 0.7 RELEASE (build
    62a2db93eb04c36e5becab5fdcaf06b53a839238)****

    Built on Mon, 15 Apr 2013 08:27:38 PST****

    Plan: ****

    ----------------****

    Plan Fragment 0****

    UNPARTITIONED****

    AGGREGATE****

    OUTPUT: SUM()****

    GROUP BY: ****

    TUPLE IDS: 1 ****

    EXCHANGE (2)****

    TUPLE IDS: 1 ****

    ** **

    Plan Fragment 1****

    RANDOM****

    STREAM DATA SINK****

    EXCHANGE ID: 2****

    UNPARTITIONED****

    ** **

    AGGREGATE****

    OUTPUT: COUNT(*)****

    GROUP BY: ****

    TUPLE IDS: 1 ****

    SCAN HDFS table=default.security_report #partitions=1 size=12.62GB (0)
    ****

    TUPLE IDS: 0 ****

    ----------------****

    Query State: FINISHED****

    Query Type: QUERY****

    Sql Statement: select count(*) from security_report****

    Start Time: 2013-04-17 09:56:59****

    User: nathamu****

    Query 9148ac87180b4fed:b92602810a654bb1:(1m50s 0.00%)****

    Aggregate Profile:****

    - FinalizationTimer: 0ns****

    Coordinator Fragment:(1m50s 0.00%)****

    - RowsProduced: 1****

    CodeGen:****

    - CodegenTime: 465.0us****

    - CompileTime: 109.684ms****

    - LoadTime: 9.50ms****

    - ModuleFileSize: 70.02 KB****

    AGGREGATION_NODE (id=3):(1m50s 0.00%)****

    - BuildBuckets: 1.02K (1024)****

    - BuildTime: 6.0us****

    - GetResultsTime: 3.0us****

    - LoadFactor: 0.00 ****

    - MemoryUsed: 32.01 KB****

    - RowsReturned: 1****

    - RowsReturnedRate: 0****

    EXCHANGE_NODE (id=2):(1m50s 100.00%)****

    - BytesReceived: 64.00 B****

    - ConvertRowBatchTime: 7.0us****

    - DataArrivalWaitTime: 1m50s****

    - DeserializeRowBatchTimer: 22.0us****

    - FirstBatchArrivalWaitTime: 0ns****

    - MemoryUsed: 0.00 ****

    - RowsReturned: 4****

    - RowsReturnedRate: 0****

    - SendersBlockedTotalTimer: 0ns****

    - SendersBlockedWallTimer: 0ns****

    Averaged Fragment 1:(1m47s 0.00%)****

    completion times: min:1m43s max:1m50s mean: 1m47s stddev:2s582ms*
    ***

    execution rates: min:27.82 MB/sec max:32.43 MB/sec mean:29.95
    MB/sec stddev:1.95 MB/sec****

    num instances: 4****

    split sizes: min: 2.81 GB, max: 3.50 GB, avg: 3.16 GB, stddev:
    273.91 MB****

    - RowsProduced: 1****

    CodeGen:****

    - CodegenTime: 742.0us****

    - CompileTime: 114.504ms****

    - LoadTime: 8.748ms****

    - ModuleFileSize: 70.02 KB****

    DataStreamSender (dst_id=2):(292.250us 0.00%)****

    - BytesSent: 16.00 B****

    - NetworkThroughput: 83.56 KB/sec****

    - OverallThroughput: 53.97 KB/sec****

    - SerializeBatchTime: 7.500us****

    - ThriftTransmitTime: 204.0us****

    - UncompressedRowBatchSize: 16.00 B****

    AGGREGATION_NODE (id=1):(1m47s 0.01%)****

    - BuildBuckets: 1.02K (1024)****

    - BuildTime: 3.6ms****

    - GetResultsTime: 3.250us****

    - LoadFactor: 0.00 ****

    - MemoryUsed: 32.01 KB****

    - RowsReturned: 1****

    - RowsReturnedRate: 0****

    HDFS_SCAN_NODE (id=0):(1m47s 99.99%)****

    - AverageHdfsReadThreadConcurrency: 0.06 ****

    - HdfsReadThreadConcurrencyCountPercentage=0: 93.50 ****

    - HdfsReadThreadConcurrencyCountPercentage=1: 6.50 ****

    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=20: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=21: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=22: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=23: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=24: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=25: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=26: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=27: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=28: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=29: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=30: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=31: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=32: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=33: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=34: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=35: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=36: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=37: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=38: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=39: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=40: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=41: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00 ****

    - AverageScannerThreadConcurrency: 0.02 ****

    - BytesRead: 3.16 GB****

    - MemoryUsed: 0.00 ****

    - NumDisksAccessed: 1****

    - PerReadThreadRawHdfsThroughput: 464.42 MB/sec****

    - RowsReturned: 411.49K (411489)****

    - RowsReturnedRate: 3.82 K/sec****

    - ScanRangesComplete: 50****

    - ScannerThreadsInvoluntaryContextSwitches: 4****

    - ScannerThreadsTotalWallClockTime: 49m5s****

    - DelimiterParseTime: 1s340ms****

    - MaterializeTupleTime: 331.500us****

    - ScannerThreadsSysTime: 6.744ms****

    - ScannerThreadsUserTime: 1s365ms****

    - ScannerThreadsVoluntaryContextSwitches: 1.27K (1273)****

    - TotalRawHdfsReadTime: 6s973ms****

    - TotalReadThroughput: 29.97 MB/sec****

    Fragment 1:****

    Instance 9148ac87180b4fed:b92602810a654bb3
    (host=chas2t3endc02:22000):(1m50s 0.00%)****

    Hdfs split stats (:<# splits>/): 0:56/3.50 GB ****

    - RowsProduced: 1****

    CodeGen:****

    - CodegenTime: 780.0us****

    - CompileTime: 113.908ms****

    - LoadTime: 8.810ms****

    - ModuleFileSize: 70.02 KB****

    DataStreamSender (dst_id=2):(311.0us 0.00%)****

    - BytesSent: 16.00 B****

    - NetworkThroughput: 61.04 KB/sec****

    - OverallThroughput: 50.24 KB/sec****

    - SerializeBatchTime: 6.0us****

    - ThriftTransmitTime: 256.0us****

    - UncompressedRowBatchSize: 16.00 B****

    AGGREGATION_NODE (id=1):(1m50s 0.01%)****

    - BuildBuckets: 1.02K (1024)****

    - BuildTime: 3.256ms****

    - GetResultsTime: 3.0us****

    - LoadFactor: 0.00 ****

    - MemoryUsed: 32.01 KB****

    - RowsReturned: 1****

    - RowsReturnedRate: 0****

    HDFS_SCAN_NODE (id=0):(1m50s 99.99%)****

    File Formats: TEXT/NONE:56 ****

    Hdfs split stats (:<# splits>/): 0:56/3.50 GB ****

    - AverageHdfsReadThreadConcurrency: 0.07 ****

    - HdfsReadThreadConcurrencyCountPercentage=0: 93.21 ****

    - HdfsReadThreadConcurrencyCountPercentage=1: 6.79 ****

    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=20: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=21: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=22: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=23: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=24: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=25: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=26: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=27: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=28: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=29: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=30: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=31: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=32: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=33: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=34: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=35: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00 ****

    - AverageScannerThreadConcurrency: 0.01 ****

    - BytesRead: 3.50 GB****

    - MemoryUsed: 0.00 ****

    - NumDisksAccessed: 1****

    - PerReadThreadRawHdfsThroughput: 451.90 MB/sec****

    - RowsReturned: 434.37K (434373)****

    - RowsReturnedRate: 3.93 K/sec****

    - ScanRangesComplete: 56****

    - ScannerThreadsInvoluntaryContextSwitches: 2****

    - ScannerThreadsTotalWallClockTime: 59m13s****

    - DelimiterParseTime: 1s502ms****

    - MaterializeTupleTime: 334.0us****

    - ScannerThreadsSysTime: 5.994ms****

    - ScannerThreadsUserTime: 1s525ms****

    - ScannerThreadsVoluntaryContextSwitches: 1.36K (1365)****

    - TotalRawHdfsReadTime: 7s931ms****

    - TotalReadThroughput: 32.43 MB/sec****

    Instance 9148ac87180b4fed:b92602810a654bb4
    (host=chadvt3endc02:22000):(1m48s 0.00%)****

    Hdfs split stats (:<# splits>/): 0:48/3.00 GB ****

    - RowsProduced: 1****

    CodeGen:****

    - CodegenTime: 724.0us****

    - CompileTime: 113.126ms****

    - LoadTime: 8.573ms****

    - ModuleFileSize: 70.02 KB****

    DataStreamSender (dst_id=2):(247.0us 0.00%)****

    - BytesSent: 16.00 B****

    - NetworkThroughput: 131.30 KB/sec****

    - OverallThroughput: 63.26 KB/sec****

    - SerializeBatchTime: 8.0us****

    - ThriftTransmitTime: 119.0us****

    - UncompressedRowBatchSize: 16.00 B****

    AGGREGATION_NODE (id=1):(1m48s 0.01%)****

    - BuildBuckets: 1.02K (1024)****

    - BuildTime: 2.511ms****

    - GetResultsTime: 4.0us****

    - LoadFactor: 0.00 ****

    - MemoryUsed: 32.01 KB****

    - RowsReturned: 1****

    - RowsReturnedRate: 0****

    HDFS_SCAN_NODE (id=0):(1m48s 99.99%)****

    File Formats: TEXT/NONE:48 ****

    Hdfs split stats (:<# splits>/): 0:48/3.00 GB ****

    - AverageHdfsReadThreadConcurrency: 0.07 ****

    - HdfsReadThreadConcurrencyCountPercentage=0: 92.59 ****

    - HdfsReadThreadConcurrencyCountPercentage=1: 7.41 ****

    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00 ****

    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00 ****

    - AverageScannerThreadConcurrency: 0.02 ****

    - BytesRead: 3.00 GB****

    - MemoryUsed: 0.00 ****

    - NumDisksAccessed: 1****

    - PerReadThreadRawHdfsThroughput: 447.17 MB/sec****

    - RowsReturned: 321.75K (321753)****

    - RowsReturnedRate: 2.97 K/sec****

    - ScanRangesComplete: 48****

    - ScannerThreadsInvoluntaryContextSwitches: 4****

    - ScannerThreadsTotalWallClockTime: 49m39s****

    - DelimiterParseTime: 1s252ms****

    - MaterializeTupleTime: 312.0us****

    - ScannerThreadsSysTime: 5.995ms****

    - ScannerThreadsUserTime: 1s277ms****

    - ScannerThreadsVoluntaryContextSwitches: 1.27K (1268)****

    - TotalRawHdfsReadTime: 6s862ms****

    - TotalReadThroughput: 28.40 MB/sec****

Instance 9148ac87180b4fed:b92602810a654bb5 (host=chas2t3endc01:22000):(1m43s 0.00%)
  Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:45/2.81 GB
  - RowsProduced: 1
  CodeGen:
    - CodegenTime: 731.0us
    - CompileTime: 113.26ms
    - LoadTime: 8.870ms
    - ModuleFileSize: 70.02 KB
  DataStreamSender (dst_id=2):(315.0us 0.00%)
    - BytesSent: 16.00 B
    - NetworkThroughput: 73.36 KB/sec
    - OverallThroughput: 49.60 KB/sec
    - SerializeBatchTime: 8.0us
    - ThriftTransmitTime: 213.0us
    - UncompressedRowBatchSize: 16.00 B
  AGGREGATION_NODE (id=1):(1m43s 0.01%)
    - BuildBuckets: 1.02K (1024)
    - BuildTime: 3.123ms
    - GetResultsTime: 3.0us
    - LoadFactor: 0.00
    - MemoryUsed: 32.01 KB
    - RowsReturned: 1
    - RowsReturnedRate: 0
  HDFS_SCAN_NODE (id=0):(1m43s 99.99%)
    File Formats: TEXT/NONE:45
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:45/2.81 GB
    - AverageHdfsReadThreadConcurrency: 0.05
    - HdfsReadThreadConcurrencyCountPercentage=0: 95.15
    - HdfsReadThreadConcurrencyCountPercentage=1: 4.85
    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=36: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=37: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=38: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=39: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=40: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=41: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
    - AverageScannerThreadConcurrency: 0.01
    - BytesRead: 2.81 GB
    - MemoryUsed: 0.00
    - NumDisksAccessed: 1
    - PerReadThreadRawHdfsThroughput: 485.12 MB/sec
    - RowsReturned: 428.45K (428449)
    - RowsReturnedRate: 4.14 K/sec
    - ScanRangesComplete: 45
    - ScannerThreadsInvoluntaryContextSwitches: 3
    - ScannerThreadsTotalWallClockTime: 37m58s
    - DelimiterParseTime: 1s197ms
    - MaterializeTupleTime: 311.0us
    - ScannerThreadsSysTime: 6.994ms
    - ScannerThreadsUserTime: 1s219ms
    - ScannerThreadsVoluntaryContextSwitches: 1.17K (1173)
    - TotalRawHdfsReadTime: 5s936ms
    - TotalReadThroughput: 27.82 MB/sec

Instance 9148ac87180b4fed:b92602810a654bb6 (host=chas2t3endc03:22000):(1m48s 0.00%)
  Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:53/3.31 GB
  - RowsProduced: 1
  CodeGen:
    - CodegenTime: 733.0us
    - CompileTime: 117.958ms
    - LoadTime: 8.740ms
    - ModuleFileSize: 70.02 KB
  DataStreamSender (dst_id=2):(296.0us 0.00%)
    - BytesSent: 16.00 B
    - NetworkThroughput: 68.53 KB/sec
    - OverallThroughput: 52.79 KB/sec
    - SerializeBatchTime: 8.0us
    - ThriftTransmitTime: 228.0us
    - UncompressedRowBatchSize: 16.00 B
  AGGREGATION_NODE (id=1):(1m48s 0.01%)
    - BuildBuckets: 1.02K (1024)
    - BuildTime: 3.137ms
    - GetResultsTime: 3.0us
    - LoadFactor: 0.00
    - MemoryUsed: 32.01 KB
    - RowsReturned: 1
    - RowsReturnedRate: 0
  HDFS_SCAN_NODE (id=0):(1m48s 99.99%)
    File Formats: TEXT/NONE:53
    Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:53/3.31 GB
    - AverageHdfsReadThreadConcurrency: 0.07
    - HdfsReadThreadConcurrencyCountPercentage=0: 93.06
    - HdfsReadThreadConcurrencyCountPercentage=1: 6.94
    - HdfsReadThreadConcurrencyCountPercentage=10: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=11: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=12: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=13: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=14: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=15: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=16: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=17: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=18: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=19: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=2: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=20: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=21: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=22: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=23: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=24: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=25: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=26: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=27: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=28: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=29: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=3: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=30: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=31: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=32: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=33: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=34: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=35: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=4: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=5: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=6: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=7: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=8: 0.00
    - HdfsReadThreadConcurrencyCountPercentage=9: 0.00
    - AverageScannerThreadConcurrency: 0.01
    - BytesRead: 3.31 GB
    - MemoryUsed: 0.00
    - NumDisksAccessed: 1
    - PerReadThreadRawHdfsThroughput: 473.49 MB/sec
    - RowsReturned: 461.38K (461381)
    - RowsReturnedRate: 4.25 K/sec
    - ScanRangesComplete: 53
    - ScannerThreadsInvoluntaryContextSwitches: 7
    - ScannerThreadsTotalWallClockTime: 49m30s
    - DelimiterParseTime: 1s407ms
    - MaterializeTupleTime: 369.0us
    - ScannerThreadsSysTime: 7.993ms
    - ScannerThreadsUserTime: 1s436ms
    - ScannerThreadsVoluntaryContextSwitches: 1.29K (1286)
    - TotalRawHdfsReadTime: 7s163ms
    - TotalReadThroughput: 31.25 MB/sec
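A quick back-of-the-envelope check of the figures above (values copied from the b92602810a654bb6 instance; this is only a sketch, not anything Impala itself prints):

    # Effective scan rate for the last instance: BytesRead divided by the
    # ~1m48s fragment wall-clock time, both taken from the profile above.
    awk 'BEGIN { printf "%.1f MB/sec\n", 3.31 * 1024 / 108 }'
    # -> ~31.4 MB/sec, in line with TotalReadThroughput: 31.25 MB/sec

Roughly the same figure falls out of PerReadThreadRawHdfsThroughput (473.49 MB/sec) multiplied by AverageHdfsReadThreadConcurrency (0.07); per the concurrency histogram, the scan spends about 93% of its wall-clock time with no HDFS read thread active.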

2.) LOGS

Impala

/
/backends
/catalog
/logs
/memz
/metrics
/queries
/sessions
/varz
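These pages can also be fetched from the command line if a browser is not handy (hostname and port are the ones used elsewhere in this thread; just a sketch):

    # Pull the same debug pages over HTTP (25000 is the impalad webserver port).
    curl http://chadvt3endc02.ops.tiaa-cref.org:25000/backends
    curl http://chadvt3endc02.ops.tiaa-cref.org:25000/queries
    curl http://chadvt3endc02.ops.tiaa-cref.org:25000/varz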

INFO logs
Log path is: /var/log/impala/impalad.INFO
Showing last 1048576 bytes of log
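The same log is also on disk at the path shown above, so it can be followed directly on the node instead of reloading /logs (a sketch, assuming shell access to chadvt3endc02):

    # Follow the impalad INFO log as the daemon writes it.
    tail -n 200 -f /var/log/impala/impalad.INFO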

Log file created at: 2013/04/16 21:38:25
Running on machine: chadvt3endc02
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0416 21:38:25.142798 12921 daemon.cc:34] impalad version 0.7 RELEASE (build 62a2db93eb04c36e5becab5fdcaf06b53a839238)
Built on Mon, 15 Apr 2013 08:27:38 PST
I0416 21:38:25.143051 12921 daemon.cc:35] Using hostname: chadvt3endc02
I0416 21:38:25.143482 12921 logging.cc:76] Flags (see also /varz are on debug webserver):

--dump_ir=false
--module_output=
--be_port=22000
--hostname=chadvt3endc02
--keytab_file=
--mem_limit=80%
--planservice_host=localhost
--planservice_port=20000
--principal=
--exchg_node_buffer_size_bytes=10485760
--max_row_batches=0
--randomize_splits=false
--num_disks=0
--num_threads_per_disk=1
--read_size=8388608
--enable_webserver=true
--state_store_host=chadvt3endc02.ops.tiaa-cref.org
--state_store_subscriber_port=23000
--use_statestore=true
--nn=
--nn_port=0
--serialize_batch=false
--status_report_interval=5
--compress_rowbatches=true
--abort_on_config_error=true
--be_service_threads=64
--beeswax_port=21000
--default_query_options=
--fe_service_threads=64
--heap_profile_dir=
--hs2_port=21050
--load_catalog_at_startup=false
--log_mem_usage_interval=0
--log_query_to_file=true
--query_log_size=25
--use_planservice=false
--statestore_subscriber_timeout_seconds=10
--state_store_port=24000
--statestore_max_missed_heartbeats=5
--statestore_num_heartbeat_threads=10
--statestore_suspect_heartbeats=2
--kerberos_reinit_interval=60
--sasl_path=/usr/lib/sasl2:/usr/lib64/sasl2:/usr/local/lib/sasl2:/usr/lib/x86_64-linux-gnu/sasl2
--web_log_bytes=1048576
--log_filename=impalad
--periodic_counter_update_period_ms=500
--rpc_cnxn_attempts=10
--rpc_cnxn_retry_interval_ms=2000
--enable_webserver_doc_root=true
--webserver_doc_root=/usr/lib/impala
--webserver_interface=
--webserver_port=25000
--flagfile=
--fromenv=
--tryfromenv=
--undefok=
--tab_completion_columns=80
--tab_completion_word=
--help=false
--helpfull=false
--helpmatch=
--helpon=
--helppackage=false
--helpshort=false
--helpxml=false
--version=false
--alsologtoemail=
--alsologtostderr=false
--drop_log_memory=true
--log_backtrace_at=
--log_dir=/var/log/impala
--log_link=
--log_prefix=true
--logbuflevel=0
--logbufsecs=30
--logemaillevel=999
--logmailer=/bin/mail
--logtostderr=false
--max_log_size=1800
--minloglevel=0
--stderrthreshold=2
--stop_logging_if_full_disk=false
--symbolize_stacktrace=true
--v=0
--vmodule=

I0416 21:38:25.144712 12921 mem-info.cc:66] Physical Memory: 47.04 GB
I0416 21:38:25.144736 12921 daemon.cc:43] Cpu Info:
  Model: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
  Cores: 16
  L1 Cache: 0.00
  L2 Cache: 0.00
  L3 Cache: 0.00
  Hardware Supports:
    ssse3
    sse4_1
    sse4_2
I0416 21:38:25.144747 12921 daemon.cc:44] Disk Info:
  Num disks 19: cciss/c0d, cciss/c0d0p, sda, sdb, sdc, sdd, sde, sdf, sdg, sdh, sdi, sdj, sdk, sdl, sdm, sdn, sdo, sdp, dm-
I0416 21:38:25.144757 12921 daemon.cc:45] Mem Info: 47.04 GB

I0416 21:38:26.578546 12921 impala-server.cc:1740] Default query options:TQueryOptions {
  01: abort_on_error (bool) = false,
  02: max_errors (i32) = 0,
  03: disable_codegen (bool) = false,
  04: batch_size (i32) = 0,
  05: num_nodes (i32) = 0,
  06: max_scan_range_length (i64) = 0,
  07: num_scanner_threads (i32) = 0,
  08: max_io_buffers (i32) = 0,
  09: allow_unsupported_formats (bool) = false,
  10: default_order_by_limit (i64) = -1,
  11: debug_action (string) = "",
  12: mem_limit (i64) = 0,
  13: abort_on_default_limit_exceeded (bool) = false,
}
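These defaults can be changed when impalad starts. As a sketch only: the --default_query_options and --flagfile flags appear in the startup flags above, but the file path and the particular option value below are assumptions, not something taken from this cluster:

    # Hypothetical flagfile pinning a default query option; impalad would be
    # started with --flagfile=/etc/impala/impalad.flags (path is an assumption).
    cat > /etc/impala/impalad.flags <<'EOF'
    --default_query_options=num_scanner_threads=4
    EOF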

I0416 21:38:32.458487 12921 impala-server.cc:1960] Read fs.defaultFS from Hadoop config: hdfs://chadvt3endc01:8020
I0416 21:38:32.458521 12921 impala-server.cc:1972] Setting default name (-nn): chadvt3endc01
I0416 21:38:32.458528 12921 impala-server.cc:1974] Setting default port (-nn_port): 8020
I0416 21:38:32.458569 12921 impala-server.cc:2003] Impala Beeswax Service listening on 21000
I0416 21:38:32.458595 12921 impala-server.cc:2014] Impala HiveServer2 Service listening on 21050
I0416 21:38:32.458608 12921 impala-server.cc:2022] ImpalaInternalService listening on 22000
I0416 21:38:32.458814 12921 thrift-server.cc:365] ThriftServer 'backend' started on port: 22000
I0416 21:38:32.458827 12921 exec-env.cc:143] Starting global services
I0416 21:38:32.458853 12921 exec-env.cc:164] Using global memory limit: 37.63 GB
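That limit follows from the --mem_limit=80% flag listed earlier and the 47.04 GB of physical memory reported at startup:

    awk 'BEGIN { printf "%.2f GB\n", 0.80 * 47.04 }'   # -> 37.63 GB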

I0416 21:38:32.458865 12921 webserver.cc:118] Starting webserver on all interfaces, port 25000
I0416 21:38:32.458916 12921 webserver.cc:128] Document root: /usr/lib/impala
I0416 21:38:32.459671 12921 webserver.cc:167] Webserver started
I0416 21:38:32.459707 12921 simple-scheduler.cc:98] Starting simple scheduler
I0416 21:38:32.459724 12921 state-store-subscriber.cc:124] Starting subscriber
I0416 21:38:32.459841 12921 thrift-server.cc:365] ThriftServer 'StateStoreSubscriber' started on port: 23000
I0416 21:38:32.462204 12921 thrift-server.cc:365] ThriftServer 'beeswax-frontend' started on port: 21000
I0416 21:38:32.463726 12921 thrift-server.cc:365] ThriftServer 'hiveServer2-frontend' started on port: 21050
I0416 21:38:32.463737 12921 impalad-main.cc:101] Impala has started.

    I0416 21:38:42.876473 13074 impala-server.cc:1027] Query
    44db0e0b3310432f:8af86dc350e9f19d finished (1/1)
    eJyTlJHQCSxNLapU0MhMsTUxSUkySDVIMjY2NDAxNkqzskhMszBLSTY2NUi1TDO0TNEUZZDUkeAJyEnMy8vMSw/JzE0V5RLb0Pkjj0GCMyS/JDEHKsTAIMYoKM0gzdjFgKqcgQEADpwe0g==
    ****

    I0416 21:38:50.420109 13074 exchange-node.cc:49] Exch id=2****

    input_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    ** **

    output_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    I0416 21:38:50.644784 13217 aggregation-node.cc:170]
    AggregationNode(node_id=3) using llvm codegend functions.****

    I0416 21:38:50.644783 13215 aggregation-node.cc:170]
    AggregationNode(node_id=1) using llvm codegend functions.****

    I0416 21:38:50.843297 13074 impala-server.cc:1027] Query
    3c363b79ce9a4bc0:b876f5f4ac7ed12a finished (1/1)
    eJztWE1oI9cdHzvejVfr3a7XljNmS/tIoJETr3dGsvwh1gZZkm3RtaxoRtkEAsvTzJM82flQZp6865CDe0nNUlhT6KGUQnrLIVBDDwlhD4HQHtpLEwqFnpZeWnoK9BJKCP2/mZH1XfvRJCf55Pef9//++L2/Zr+aEudfaRD3AMUMfS2hJZYSleVVjazixYompSory0vVZHURa8tEl+N4Ljo2Oy9OFE1s24ZdUw2LRCMzJ3/9y+MLgnhJdSg2Q5ogzIxM3hBujDwSOu8L4rNKw7KwexAVZr/fj+nSUSRLqrhhUpStPKsH/47nbB2xa1NxSU7clBZvyksoLqcSK6mkdDVv1bGJ0avE9QzH9gz/qKP94IykhWVUyt3JpZUcilUahqmjpTiO65XVBKlIi+A1SVaIhivJqq7hqrRUSSbwSmI1nliZi2zAfYpAyo5jzyM5idJ1FzEjkLSSii+DBaioqGPMx3efidzs+oswOtp0cc0iNkVSBKFyoZguqXk1v1vIZeGc3toq5bbSag7+3y2rxbKaQkp5J3bbMx3gWJ8D+lZpt1xEG6+nEBzUcvFODuWzSgrJ7IxQ7rXMdrqwBd7F53xC96UuM2S4VEoXsrs78I+ilnLpHZRNq2mk5As/7pSYz6ZQ3Cd12j3A8MxuuaDGXjqHzUomXUDb2U0FUVwxyVqY6AVq1dELdexSg7JcrsnIM94mawl5A8WkPt5JqCfol4OSViimZHwzX8gr22BxQFQP6uTCK+Vc6fUryltmcIfFRPSISTSKNKdhU3AAVV3HQmBMBK64dEDtLa6OlT3ijrmOQ28IgvhioOTMRoqOd9f+J+8e/2s0bBoQNJmu1VxSA+NQ0XWqhkmgXebFyU3DxqbxNmaxYZyu3zaDe6+XQRCnM47j6kCnjntaE0Fnl5wHHujTGxrRo8LMaJfkp5/96fFIe2e334fOzjg62SI2mPqGeJkdasQOWQ+/VgRGs+rgS0h77/Fn24I4fsfBejMKP//juCBe3QGJJtmEmwrkPnpx5uTp+GAnt9tVtas4ldwlUBCfa9YuFPO9wm42F8y/uejo7M/ECdby+kZDu0+oB1E4vAa6fVqo+4sFsHGL0BLxoGK9NmqEadzEGkQ2OsESE9khluMeQJHo4Mbx4VgYtBKhDdduBvlaO6kESY+Ozkx0OXz88W+OIfYXQ6cfdljZMq/LrjaD2kzpMKFHOVRIs/1bsYnPQVafiFc2DiiB2xox9n2XkCBOZRwbRi0FORuYanvNXK7DpyymOO26xj4272KDNkP1+y8fjQiimCXQPAarT9LOzIr6uCCIs5uG61Gf3CujJ7gCT2hFhdg6wMOG6UAA9dNAh/30XOfnu9g0B3Xbydf/AV9mxq4HeXm/M0L9YtMvKANDMTgG587nQF8HeQl5S4PRuEb0NsyIXjzHhHjvH78NJ8TY0TUNWtEkbPIgCp+9VcuwU3JSXkgmli0PIQs/7DgSbAM8tAge1XWyn5Js73vkIdEaviQXPPK2mKS4tLggSWjjlke0QFgXhcnrJDUlMhKjXLEbFjJsj2JbI96IfNmrmwb1EcdbQUwJSsg+/7yvoHXC+7W2UygX+YJ5R+PTz+/2jMaTJ/9+s3M0Hj7524XvYjTeYNWpUJdgKygPFNM9eq85At6BUcgKXGGY4bf/ZIHQB457X91znUZtr96g0fGZDz6CJpvcZVUERdX+5fM/wDi9rjSLvX1ifPE+lNN1uG1Uqepi27NOu/34F1PdWPS7q9DGZZsVmUs86IJm34SBQa0w7LVs7rW218o+1vWxaqDuQfAinxNeTqx+8MLG6bcIL4cnH7LGHf0u4CXKnn332AOwFRyJFddXL4oonDzbehW4oEb3oBJ1mKJaw3WJrR0Ejv8wvKZosFoQd8CtS+Esxiw6EL/rWWIakD7iFrHrNXuNAfdLA/Vl2IOwSFwNigcUrkmB6PMzyAHDy+dnkLg5+HXEuTkS3ByL3BxJbo4lbo5lbo4Vbo5V3iKJ8zIkeBkWeRmSvAxLvAzLvAwrvAxhHqZ3YO6EI11t1E9Rtt8b8lqhYWUN776X1jR/tAcz9AcgtaWyhB8wGzqA7ekHfabvdP/pezTCMBBmVwnbNeJlgocSCVTFOoaal7f3HRPcwi64Z1PykCoPDIAawsADDI52XlcOvJZ3qPObP/DZMy/DXnxNaP3JT58XxJnOm2yrbcLRr/8Ojv2o8/urg00CD6Z9RWGQ/KiFmP1LeNRMBR+DWLbC1/2u/vOnv2LvaoG9q595JPzzTGA4CxJaYNCW855sn5Hn/w1tfVJ67mSemapzZqBv7PvF/MyAfjnNgYoceMiDhDwYyIN+PLjHg3g8WMeDcjz4xoNsPJjGgWYcOMaBYBzYxYFaHHjFgVQcGHVm52/3ebf2h7S+UDBgtAP2tS33o/1/jPd/jMzlww35zB82NRTbczy6pu1hfZ8mYH/UpHgqHpckeN/z/X4wcrTKQojCjZxi6qHYbTY9LYIMfT11+4Xgm7d+63ZwySR2je5563MTUkq+FS7nw3V8uI4P1/HhOj5cx4fr+HAdH67jw3X8/1zHR48m2PMEbTquBY+yCTX3mnqrsFvIpWT0TT3Zhgv/cOEfLvzDhX+48H+TC7/wX0AoX0g=
    ****

    I0416 21:39:06.510437 13074 exchange-node.cc:49] Exch id=1****

    input_desc=Tuple(id=0 size=712 slots=[Slot(id=0 type=STRING col=0 offset=8
    null=(offset=0 mask=1)), Slot(id=1 type=STRING col=1 offset=24
    null=(offset=0 mask=2)), Slot(id=2 type=STRING col=2 offset=40
    null=(offset=0 mask=4)), Slot(id=3 type=STRING col=3 offset=56
    null=(offset=0 mask=8)), Slot(id=4 type=STRING col=4 offset=72
    null=(offset=0 mask=10)), Slot(id=5 type=STRING col=5 offset=88
    null=(offset=0 mask=20)), Slot(id=6 type=STRING col=6 offset=104
    null=(offset=0 mask=40)), Slot(id=7 type=STRING col=7 offset=120
    null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=136
    null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=152
    null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=168
    null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=184
    null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=200
    null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=216
    null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=232
    null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=248
    null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=264
    null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=280
    null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=296
    null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=312
    null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=328
    null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=344
    null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=360
    null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=376
    null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=392
    null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=408
    null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=424
    null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=440
    null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=456
    null=(offset=3 mask=10)), Slot(id=29 type=STRING col=29 offset=472
    null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=488
    null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=504
    null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=520
    null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=536
    null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=552
    null=(offset=4 mask=4)), Slot(id=35 type=STRING col=35 offset=568
    null=(offset=4 mask=8)), Slot(id=36 type=STRING col=36 offset=584
    null=(offset=4 mask=10)), Slot(id=37 type=STRING col=37 offset=600
    null=(offset=4 mask=20)), Slot(id=38 type=STRING col=38 offset=616
    null=(offset=4 mask=40)), Slot(id=39 type=STRING col=39 offset=632
    null=(offset=4 mask=80)), Slot(id=40 type=STRING col=40 offset=648
    null=(offset=5 mask=1)), Slot(id=41 type=STRING col=41 offset=664
    null=(offset=5 mask=2)), Slot(id=42 type=STRING col=42 offset=680
    null=(offset=5 mask=4)), Slot(id=43 type=STRING col=43 offset=696
    null=(offset=5 mask=8))])****

    ** **

    output_desc=Tuple(id=0 size=712 slots=[Slot(id=0 type=STRING col=0
    offset=8 null=(offset=0 mask=1)), Slot(id=1 type=STRING col=1 offset=24
    null=(offset=0 mask=2)), Slot(id=2 type=STRING col=2 offset=40
    null=(offset=0 mask=4)), Slot(id=3 type=STRING col=3 offset=56
    null=(offset=0 mask=8)), Slot(id=4 type=STRING col=4 offset=72
    null=(offset=0 mask=10)), Slot(id=5 type=STRING col=5 offset=88
    null=(offset=0 mask=20)), Slot(id=6 type=STRING col=6 offset=104
    null=(offset=0 mask=40)), Slot(id=7 type=STRING col=7 offset=120
    null=(offset=0 mask=80)), Slot(id=8 type=STRING col=8 offset=136
    null=(offset=1 mask=1)), Slot(id=9 type=STRING col=9 offset=152
    null=(offset=1 mask=2)), Slot(id=10 type=STRING col=10 offset=168
    null=(offset=1 mask=4)), Slot(id=11 type=STRING col=11 offset=184
    null=(offset=1 mask=8)), Slot(id=12 type=STRING col=12 offset=200
    null=(offset=1 mask=10)), Slot(id=13 type=STRING col=13 offset=216
    null=(offset=1 mask=20)), Slot(id=14 type=STRING col=14 offset=232
    null=(offset=1 mask=40)), Slot(id=15 type=STRING col=15 offset=248
    null=(offset=1 mask=80)), Slot(id=16 type=STRING col=16 offset=264
    null=(offset=2 mask=1)), Slot(id=17 type=STRING col=17 offset=280
    null=(offset=2 mask=2)), Slot(id=18 type=STRING col=18 offset=296
    null=(offset=2 mask=4)), Slot(id=19 type=STRING col=19 offset=312
    null=(offset=2 mask=8)), Slot(id=20 type=STRING col=20 offset=328
    null=(offset=2 mask=10)), Slot(id=21 type=STRING col=21 offset=344
    null=(offset=2 mask=20)), Slot(id=22 type=STRING col=22 offset=360
    null=(offset=2 mask=40)), Slot(id=23 type=STRING col=23 offset=376
    null=(offset=2 mask=80)), Slot(id=24 type=STRING col=24 offset=392
    null=(offset=3 mask=1)), Slot(id=25 type=STRING col=25 offset=408
    null=(offset=3 mask=2)), Slot(id=26 type=STRING col=26 offset=424
    null=(offset=3 mask=4)), Slot(id=27 type=STRING col=27 offset=440
    null=(offset=3 mask=8)), Slot(id=28 type=STRING col=28 offset=456
    null=(offset=3 mask=10)), Slot(id=29 type=STRING col=29 offset=472
    null=(offset=3 mask=20)), Slot(id=30 type=STRING col=30 offset=488
    null=(offset=3 mask=40)), Slot(id=31 type=STRING col=31 offset=504
    null=(offset=3 mask=80)), Slot(id=32 type=STRING col=32 offset=520
    null=(offset=4 mask=1)), Slot(id=33 type=STRING col=33 offset=536
    null=(offset=4 mask=2)), Slot(id=34 type=STRING col=34 offset=552
    null=(offset=4 mask=4)), Slot(id=35 type=STRING col=35 offset=568
    null=(offset=4 mask=8)), Slot(id=36 type=STRING col=36 offset=584
    null=(offset=4 mask=10)), Slot(id=37 type=STRING col=37 offset=600
    null=(offset=4 mask=20)), Slot(id=38 type=STRING col=38 offset=616
    null=(offset=4 mask=40)), Slot(id=39 type=STRING col=39 offset=632
    null=(offset=4 mask=80)), Slot(id=40 type=STRING col=40 offset=648
    null=(offset=5 mask=1)), Slot(id=41 type=STRING col=41 offset=664
    null=(offset=5 mask=2)), Slot(id=42 type=STRING col=42 offset=680
    null=(offset=5 mask=4)), Slot(id=43 type=STRING col=43 offset=696
    null=(offset=5 mask=8))])****

    I0416 21:43:16.357435 13074 impala-server.cc:1027] Query
    7d859f8ee5f74152:8c8bc6400df2f9fc finished (1/1)
    eJztWF1oHNcVHsmqLK1jR3a823Fd0guBRHJkeXZWu1ptLcNKu/pprbWys3JaWjB3Z+6uBs/ObGfuylYoVNSm/qEPaimmlIS6JS56CQ20NC2Y1gmFQqiDS6GUUKgLhfghtH7Ig0ny0HNnZn81G/mC6dPqaefe85177nfPPd89OvLpiDj+Uo3YG2hU12amtGR8upQkJF6amozG5VRSTRbVxKQkaSW5NF1Sx8IDR8bFp1YMbJq6WS7oFRIORe796r27g4I4XLAoNvwxQYj0HTwqHO27LrTbC+JepVapYHsjLBz5YhBo+GooQ0q4ZlCUKe7VvJ9DWVNDzOwZWYrGjkuTx6MJJEdTk7FUNHFgqVLFBkZnie3oluno7qeG1r1vJE1MoXz2dDatZNFosaYbGkrIWNaK0zFSlCbVWILEi0TFxXhJU3FJShTjMZyMTcux5FhoFuwpAi/LljmOonGUrtqIBYGkZEqeSsWSaEUpDLA9/qA/dLzjL8TG0byNyxViUiSFEFrNraTzhaXC0plcNgPf2a/NLaZzCxBadAw+ESqsrpzOoqWMkkISCnV4iIJJPp3LnFmGH0ohn00vo0y6kEbKUu6rLrzhbymTcs07l2TAuXQOLWbmFURx0SAzPssTDlFrtk43ztmkatkUPVfFNtUpI3Umihz9FTITlScS8sIsGpWCou3c/z4vuxSKKRmGyLIrLIiQN1rYqJLPvbSazX99v/ItwzNim3zWIQZRKTqGUMm2KqgjqhAYQmwB2RCbTknxgVWH2AO2ZdGjgiC+4C21a2qHhzqzcfPDB//d46cxODqYLpdtUoYQ0YptlXSDQAKPiwfndRMb+iuYkcSQtpvI3W/DToAgHp6zLFuDcWrZjaP27lreuuDAelpNJVpYiGyOdLi+/6e/vNrXetlaAXDZ5iyNLBATYv2muI99lInZiIqNVKqwlbqz238DzNBpC2v1kUvfGxLEA8vgzyDzYKlADoQHI2/eH+q+x8XWhVqXaHjucAgM1LP2XO5MJutWo+gYBH1b3D+7QYmTJyrR14GCwcijd94PCeIzc5YJ95vCdmcxVdf8KG7e2ANzGUxx2rb1dWy8jHXqzz18/S5QJYoZAgmiszMgrWh2cDd/+O5XBPHIvG471J3Y6QVYCy2TimVvQJ6xeASf9DyhNdtsnNJI61ge0ibcH/k1huUVYmpQmGYNSz1PtAaFbPnNv/7n1T8cEMTPt9u8jI2GycPLnkkb9w9/7CZBv8//djtnQVQFUdSVmO58tDDRxsGOzXfddbetwimmIWhcJlpL8QsPBt2Jrf7OS/HBzcsDbj4OXB1RIf8Mwm4bojDtHKvoZmqyMu0gVMEX678INlPI++1QTSPrKcl0niYXofS4UBs24SwwaDw6kZTQ8uwJKEuei44R5qp9qO5xQpIQG9lv1ipINx2KTZU4fdF9TtXQqVtfnS8jtghyyyxamB13V2j5xOvl1k/fNXJ981aArUe/O7SjCDz84KO7fe1V4M37N/4vVeAoy0qF2gRXvLRAo5pDz9WLwbfFYTexFVYdByNXP3ljH9TTHKEXLPt8Yc22auW1ao2GhyLfv3RvHabOsAyChGqd+uhnt14UxENKPdNbq8fWh3++Arl0CAB6iRZsbDqVxsW/88n2Nzp2e+ffr33MKsqqyZLMJg7cg/rN8Sl6/7v/ijc5WWtuYGfcO8MNiDIgtq7LC2KYafw5pvbNuioxKj99QUT+/VrUSnBV4UTWgHcNagWIrU1MdSP8VOT137594ye3r29/RxC/5JsrKjzmiB1k/fG1d19769a1P24CJ8N+CcKsRG5ubj74AvCaIYYOQRN7BdtOPd/u/OPnlyCNjnWNY86qmXSF2CqwBgHMSCywvTyIKCAmBPHFxwewNQQuRJQbIXMjYtyISW5EnBuR4EZMcSOS3IhpD/H4SSLzAmK8gEleQJwXkOAFTPECkrwA/xwOL4OA+sWsUKs21GbrvX9e6Qt4Uo3kapWM7px30qrqFjYQehh+Fjw3l83jCyyOtvr+o7eubPcHPMnYIyHoTbZ9by+TA6hqeWyWiTPnPReIt95oW7lzlsx1y4D9QfsKW6bkIlUu6FBtieOZh9vNlQ2n+WpE7XOujrDnzhx7+TTNIu1mrJlpTj7fPnn2M4M57C7hc+SS5vfrP/3FL8/C+8qb9shs8vf7W5c7n5dbv/n7tX5wyURsz3Xhwa7KsZtWNMWh5dx3nPguZ/3ZD86AE33ss9z1pB7zGAIPIIj2XQl9dJhDHjl0kUcRebSQRwV59I9H+Xg0j0fteHSOR+F4tI1D1Tj0jEPJODSMQ704dItDsTi0atebvxjwhg2WtkAl6FLcQf9aetz+4P+Luv+Fyi75XeOu/9EiaHTNcuiMuoa1dRqDdkqV5JQsSxI0ALxtdN/VaUYi8vtUiqmDRk+y+lkhSNdOpU4+5805p06c9IwMYpbpmnNq7GkpJUvyiXrL2mtSe01qr0ntNam9JrXXpPaa1F6T+uSb1Cf6WOn1u71+t9fv9vrdXr/7JPtd4X+XTIoQ
    ****

    E0416 21:43:16.370897 13213 impala-server.cc:1612] unknown query id:
    7d859f8ee5f74152:8c8bc6400df2f9fc****

    E0416 21:43:16.372153 13708 data-stream-mgr.cc:236] unknown row batch
    destination: fragment_id=7d859f8ee5f74152:8c8bc6400df2f9fd node_id=1****

    E0416 21:43:25.412171 13647 data-stream-sender.cc:236] channel send
    status: unknown row batch destination:
    fragment_id=7d859f8ee5f74152:8c8bc6400df2f9fd node_id=1****

    I0416 21:44:13.661774 18043 aggregation-node.cc:170]
    AggregationNode(node_id=1) using llvm codegend functions.****

    I0416 21:46:41.111171 20660 aggregation-node.cc:170]
    AggregationNode(node_id=1) using llvm codegend functions.****

    I0416 21:50:16.563940 13072 exchange-node.cc:49] Exch id=2****

    input_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    ** **

    output_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    I0416 21:50:16.801233 24079 aggregation-node.cc:170]
    AggregationNode(node_id=1) using llvm codegend functions.****

    I0416 21:50:16.803128 24081 aggregation-node.cc:170]
    AggregationNode(node_id=3) using llvm codegend functions.****

    I0416 21:52:11.887372 13072 impala-server.cc:1027] Query
    c5698a75db49466e:b5eb520b6e717d9d finished (1/1)
    eJztnX14U9Udx0OppbTgEElNxel51Ge2Wkpyb5KmGeVZ2qalz2xak9SXvbHb3NM2I7mp994U6rM9ZtpJwcIijpcxlAqCijjKy4MVUQuPgOgeFpmgDzCtyNxExSLMoTKenZsXmpd7m5wS7YqXhz9yzz2vv/M753zv55wD+edvVBXd4YVsOyhw0mUOnb7UQJXo6EZtqVavh8ZGHWzUEepGPSzRlNCldKEyM79INaHeRTGMk2m2O91QmZPXs+r4ZIVqvN3DU65wkEKRN+aqqYqpYxYqYqMrVONsXrebYtuVivzrxBKN78yphE2U18WDysZxdOhntpmhgRDtakKtIaeptdM0ekBojDrCqNFcWeNupVwUuBOynNPDcM7gIw3aQs9AXVwCrObbzSabGRQ0ep0uGugJiqAbS0nYqNY6SD3UNUIH1ahroh1Uk1rfqCMpA1lKkIbCnHIUnwcol1oPUwQ0OmBqZYFQCaA2GIkSI2kA9TZ7ptDGZWNzpsX9yRHCQRVLNbshwwN1DgANlnqT1V5jr6mzmCvRs6m62mquNtnN6Hddg72+wW4Etobaghmcy4NSzCxE4dXWuoZ6UH6PEaAHe0P97WZQU2kzAo3wDID57opZJks1ah1RGAyIjxRXDQ2KZDVZKutq0Q+b3Wo21YJKk90EbDWWH8fmWFNpBEQwKLbeEhWvqGuw2AtuTaHOtgqTBcyqrLIBnmp0wbJwRxdz0OFlnXz7bBa2elge3NRKsbyTF/q1TAM4532wTEMU64nqclCgFmmtGiR0Qm7Iw208xcPsqhpLjW0WakEo0N7eCq+4o8FsvWei7V5XKI5go5s56IIOHjg8XoZHDQJNrMcN4iqXg6KjKor5pdqo0Wc2cJAdx1B8C+X2TlUoVLeEykw6zJTZ8UPD//Xv3ns7KzyoUE5XmZqbWdiMKgvqWU+T0wXRcCpSXVXlZCiX8z5KsJeQlA0OK+mxmZhAoZpS4fGwNArnPexFnwkNfKtnLofKo70OSCsVeRlxOXc/tfjsoazooR+dAA39Cg8NqyGD6vozVa7w0AyZcNq+VTqFEOZuRY2JtHr51y0KVfbtHooOhwQ+fC1bobqyFuXoglUopg05hDIrr6c/W7qVs6KLii7iYs5xGSpU10ScG3n7bEtdpTk4P5KFyoz8LtUEYU6gy72OOZDnkBl8k1DZwbBIW2aiOlZD3go55NJcVGiOUGIV5UCmVU4QeianFro9bDvyFBo1w+/LDBvNCnkvy0SsPCk6yIp6XZmR0K3+za8GjZ8VbvW8mGoO1i+uYlE1iqpLTB0SSkc+EpkgBo1DFKJu3amaWN7OQxTbAZ1twTb5xihUV1d4GDQb8yijcop3tESqbEGvKimeMrGss41y3UU5+fCr/pOvnEatUakqIRpETsFHYXRqwbEDC5Bp8qucLMcHwxNzSTCwItG82ZLmVdkgQ6M1pNzlQTakLxo7PKiuiX19F+VySQ05//49QmvyMieH+ubpWCuJmUfMLpLGkDZCyn0q2VipZqKuM6FKU82QjlpZlFkpzBM9L6zt6gvNE5mdkxxoQLqgMAEBHr3nNG4nY9S4tSUcAG5qHvqp0wo/IcWg1cOt06AHjqdp2GYkOH2pzs19D85DE3MwBxY1hasRciD0xQYdqC2fjibtUEakppgoGQwR8iMMxaXqi0HhXDXFhki0iYzXDZwMx1OMA3JjtLlcq8vJB5chbgYQigFEsUEDqsuLgkUAslinDj5Rbc3CE1oOhKdwzoAoIYtLNShz3Onx7Ec/TZgeX/3tzjmx0+OJ7Tu+lelxquCcNp6FlDvkHaCA5vjZkVng12g6FPzbJiwcWXkArTMWyM/1sHPsLazH29zS6uWV2XmL1+agN3WCEyGfin7TtRTVd7It4uvRc8aT96I3KLKzibezFMO5Lw72ztevj2vkO8e+j0ZxAyN4GAs5NAYioyZsFzBohZbBKidWNrGSIpUTqZVk2VIrjCbVFWZ7IENsjTlBfKNrTO+/VwUHbsa3scYoBXU4W9CJg/ZRC+51vkwFwlPPLLoJpUJe2oJ8kUbTKNJnLGQc7ajlvRt3zd/0xAMb71eobghHtznQlwhkxWKvWfnV3qP/XLPkfkXYeYVskbm2nF2yOh+5XCV0OVGvQraeYrnICDz3xNNLkA/fKlmNCkE+1kPWgdwKlV+mRiW9Pv/0C5+dDrwtLIypp9SglMce+uz9zuOnlwopb0s9pTrkBhgpNNgpCOwUJHYKLXYKHXYKPXaKEuwUBuwUpaEUqXsLdncQ2E5CYDsJgV8rbCchsJ2EwHYSAttJCGwnIbCdhMB2EmzjkthOQmI7CYntJCR+O7CdhMR2EhLbSUhsJyGxnYTEdhJsU2mxnUSrwa2UDjeBHjdBCW4CA26CcE9MqUViJywl7d7WQfaxq1jk43WSxeuudHJzOJPDERSVIe12Pcp3sFArNVeoRYyivvDhlt1jE3Xf7scIceV3oFQQ4EguWSmmGXIVoW80iJKgD5WCGB3F1TBtHhdqHMWiRjI8nMfb5jqR0IWCdEViUxkb3dYeUat/WfDHXIUKxL4O6k3hK7NC+OCMCM9luzb1bEayJy82skDYwlHOv9b1Z6TFfhAb4U7pml24Glk/WFrYYEELhjLb9vgbe3XoCzf0OmTZQWMe8525Lk4Zd/j9DwnKWCF84Y9dqPhXUoWaTJMOqtAoL0jo/yQ9P7TGFunglLs2aa+l2BGiPSBm96QGPXcrhg7HEN44ShtHY+OoaxxdjaOocbQ0jorG0c84yhlHM2OoZRydjKOQcbQxjirG0cM4ShhHA+OoXxzdi6N4MbQujsrF0bc4yhZH0+KoWRwdi6NgcbQrjmrF0asYShVHo+KoUwxdiqFIMbQohgrF0J9JV/BZItBLXK6KCjwJqYZUbdR2QcJOp2Jwk9NcEwbvSXdMm0BBi4fjyxwtFEfwJGRoh5owEoRarS5MaUfCf6g3snM5prNUsCEIk36e4jlQMEOQQW4InPRM44ybQu+4mdNnhCK5INPMt3AzCyeqjTr99PAeAMBl/D2HExl/YOdTcYw/0HNkFDH+gf9cIcH4d64fL8X4A11jpCB/4A8F8ZvOb+ZfppA/sPVjUcj/DW8k+xZtD46FEYb8JSlB/v7/dh77/a6vN6cG+Vc89vDftm7esSwe8vvQn+ukIL//4PYlubiQX6jUI72rhwH5X+44tQ+1apUM+WXIn0T/ypBfhvwy5E875MeG4zK4jgLX3Qs06QTXu3de6BUB1wsf1ImrmVU/lALXrXjgOnsIcD2w5dmJqYHrAf8Hh32HhwTXPWtPbs3FAdfrldLgum/Dkb0zJMH1hTXrbohTe4GPO04dCoHrjM4JgvwHVR7Wjb56JtrNd9unW+osZvRRA9L2USTD8RGB4zfLcFyG4zIcl+G4DMel9R+G8pOhcHqhcMqwl1IPwl66bTiwt/v0lwv2pAX2ag1I16iHBXu7l9+ZCHt7D8bBXv+Gs+NGD+xd5lNKwN4tZ7KkYK/vN1Ks1/9cblwjB/bceJmyXt8zy0RZ70DxN8t6v/wkOBRGmPX
ekhLrXbL/SeHvM6mx3lPbj36K/iaw3gNvvnH+GinWG9i3663xuKw3VNDxYbDeUIsOPiqzXpn1JpFzw+R+Msu6FJYVeN+QTpb1+vn920RY1pa1t4jP1u8WSbGsX6bvEObAi+uuTI1lBR7+wrd+65Asq/uzFe+Nx2FZR4Y4hOnz+TdpJVnWuwc3T42/V7hzUceeZCxLa7g0lhWt+WSWNSIsa4rMsmSWNSyWhbF+yixgFLAAjcjBLw0OCxjY2L1udXpYgG56+Co4Pgs4/4sEFtB9IBDHAvqW7h9FB79WHMyWYAEnnxwnxQL6fy7FArrfiF/s+46Cy5QF9HU8PxIsoLt3eXAojIrL3Rd2v7Pmo3OPrE2NBYRjPyhy7qsvT4oFdG/5tAObBby7bP6pTzZ1voPPAg4/vfLlEx1nF8ksQGYBKbEA+dyXfO5LPveVxnNfOFdj5MvdqaaQL3eLXO4+flM6ueLnh95+S+xy93qtuPLbNlOKK/4EjytmDMEV+069n5kaV+x/YN/xM51DckXfo/61WFzxwGRpruj3rzgxTZIrfnjwg2vjbwcdfW716qRcUXeJXDHq+1HmivIFcpkrjiKuKJ+Rk8/IfUfOyOGoZBx9jKOMcTSxfIH8O7w/QIjsD5BYZwU/37Fob3ouhpPTyWJyWPsD/S8l7g8Edq+L2x8YOPnmKNofWLlb6mL4kaWSF8OH2B/4023xot137WW6PzBwrnPsCOwP9C1/ITgURsW98K4NX/Sc/Menm1LbH3j5zKEFf126MuEffxX2B66V2h/of/HFv2P/46+9qxbv++B07zDOCn4eSrpC3h+Q9weSyFp5f0DeH5D3B9K+PyCfpb0U5t33fFE6mfdXXc/uE7sX/oxBXM0s/JEU8/5V+pi3f//mCakx774dJx58KslZ2guPP4B1L3zjlCHO0r6yoa9Eknlv2/FI/P+d4HvJN39v0nvh5CXeC4/6JpKZt3wvXGbeMvOWmbfMvP/fmLd8FnykWK/ifwuc0i8=
    ****

    I0417 08:03:51.055438 13073 impala-server.cc:1411] Refreshing catalog****

    I0417 08:04:04.572739 13073 impala-server.cc:1027] Query
    4a0337d0e1b6464c:b0dc047bf47c9456 finished (1/1)
    eJyTlJHQCSxNLapU0MhMsTVJNDA2Nk8xSDVMMjMxM0m2SjJISTYwMU9KMzFPtjQxNdMUZZDUkeAJyEnMy8vMSw/JzE0V5RJ78KyKQYIzJL8kMQcqwsAgxigozSDN2MWAqpqBAQDc1R5B
    ****

    I0417 08:04:22.523432 13073 exchange-node.cc:49] Exch id=2****

    input_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    ** **

    output_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    I0417 08:04:22.722268 8734 aggregation-node.cc:170]
    AggregationNode(node_id=1) using llvm codegend functions.****

    I0417 08:04:22.748064 8737 aggregation-node.cc:170]
    AggregationNode(node_id=3) using llvm codegend functions.****

    I0417 08:06:12.807008 13073 impala-server.cc:1027] Query
    149f2b5ee7e427e:a0c5637a7528bf8d finished (1/1)
    eJztnA9wU0UexwPUUlrwUElND092lNEWsSTvJU2bocykTVo6Z9PapP4bZ7jXZNvmSF56770U6tycPUHbq/Wo/FFUkAgHcoIaQdDRgkHkRFQsHnfeKef1gOMczqv1H6L2ztvNH5o/7zXZEq3F10xn8vbt7tv97W93v++zu8kbukp13Y1eyLWBfKejVKMtaaQadBDqoZbSQwOjtuuKaD2j11HFDY3FjgJlRt5c1dRaF8OyTrbJ5nRDZXZuoHtgb6ZCNcXmERhXOEyhyJ1wyUzFzAlditj4CtVkq9ftZrg2pSLvCrFEUzqzTbCR8boEYGqY7Ah9zTKzDoCjXUapNfT1au31Gj1QFxvURQYNdXGVu4VxMeAmyPFOD8s7g5cO0Bq6BupCPagz32A2Ws0gv8HrdDlAEcVQjoYSGjaotXa6COoaoJ1p0DU67EyjuqhBRzPFdAlFFxdkl6H4AkC5VHvYuUCjA8YWDuBC4MdTegNdDGqttgxcxwcmZV8f95eNw0EFxzS5ISsAdTYA9ZZaY52tylZVYzGb0LWxsrLOXGm0mdH3mnpbbb3NAKz11fnzeZcHpVhQgMIr62rqa0HZrQaALmz1tTeYQZXJagAafA2A+ZbyhUZLJaodVRAMiI8UVwwNilRntJhqqtEXq63ObKwGJqPNCKxVlp/G5lhlMgAqGBRbbomCl9fUW2z5c1Ios7XcaAELTRVWIDANLlgabuhCHtq9nFNoW8TBFg8ngKtbGE5wCrhdSzWAd94BSzVUYRFVWQby1SK1VYOERsgJebhVYASYVVFlqbIuRDUIBdraWuBFN9ab626dZv2FKxQH22g2D13QLgC7x8sKqEKgkfO4QVzhslF0VEQxv9QaKE1GPQ+5ySwjNDNu70yFQnVN6JnJupkyK75n+AMPDO3MDPcplNElxqYmDjahsoJaztPodEHUm+aqLqlwsozLeQeDzYWTcsFeJd01ExMoVDPKPR7OgcIFD3fOZUIdv86zhEfPc3jt0KFU5E6Myzmw5t7HdmRG9/zoBKjnl3scsBKyqKy3q3LwRRNkI2nfvlqBw9wtqDLhsL69/71docq6wcM4wiE9L7x+kUJ1cTXK0QUrUEwr8gdlZq6/P0u6lgujHxX9iHM5x2WoUF0e8W3k7IssNSZzcHikC5QT87pVU/GQ4Cjz2hdDgUdmaJ+Onh0Mi9RlASpjJRTqII88mo8KzcZPrGDsyLTKqbhlsquh28O1IUdxoGr0tGeEjVYHBS/HRqw8PTqoDrW6cmJCs/YfeDpo/MxwrZfGFHO4fHEFiypRVFliypDwdOQjkfFh2DhUAWrWXtW0sjYBoth26GwN1ql9gkJ1WbmHRYOxgDIqYwR7c8Sr3eiWiREYI8c5WxnXzYxTiBjr9MENqDYqlQmiPuTEPgqjU2PHDuyapFDlVTg5XgiGJ+aSYGBFonmzJM2rskLWgaaQMpcH2dBxztjhTnV57O2bGZdLqsv5O97CtcnNuDTUNltjrSRmHjG7SBpD2ggpt6lkZaWqiZrOiArNNEFH1MSizExhnPA/tOf4+tA4kdE53Y46pAviAQgI6D6vcTtZg8atpXgA3MxS/LUEf4UMiyYPt1aPLnjB4YCtBorXqUvc/I/gUjQuB3PgUFX4KpwDpS8s0YPqsnlozA5lRKNpgx4OwfnR6kKN9lxQOFfNcMJprNcNnCwvMKwd8hO0OXyLyykEZyF+PsCPAVRhsQZUls0NPgLQhTp18IppbcJXmqLgVThnQOnpwhINypx0eNzecVvC8Niz+u7G2OFxcHtv1ncxPM7EzmkVOMi4Q94B8h28sCgyCvwSDYfYv6144sjMBWiesUBhiYdbbGvmPN6m5havoMzKffjoFHSnBjsR8qnoO3vvRuW91Brx9egxY60X3UGRnY2CjWNY3n2us6/7RhVXycEjV6JeXM9iD+Mgj/pApNeE7QKGrdA8XOTEwiYWUqRwIqWSfLbUDKNJcYZ59uuTE8XmmMHCb3WO6d6zK9hxJ34Xc4wSi8NFWCYO20eN3WuoVAXCQ89CRyNKhby0GfmiAw2jSJ5xkLW3oZo/u2LzmpXrV/vvVKhmhaNb7ehFBHJisU+dvG/lvoEXVt+pCDsvzhaZa8dnKzfkIZczQZcTtSrkahmOj/TAz75+6iPkw3Mki1GO1WMt5OzIrdDzS9XoSf/76PNjq5btfgdPjKmn1KCUXe0bn3nlzQ0P4ZTXpZ5SHXIDghQa4hQUcQqaOIWWOIWOOEURcQo9cYpi4hQloRSpewtxc1DETkIROwlFXipiJ6GInYQidhKK2EkoYiehiJ2EInYSYuPSxE5CEzsJTewkNHk9iJ2EJnYSmthJaGInoYmdhCZ2EmJTaYmdRKshLZSONEERaQI9aYJi0gThlphRjcROWEravC3nxP2H7+eLvLxOt3jdJie/mDfa7UFRGdJuV6J8hx9axyzBpYhR1Hed6Tw6KVH3vbyeEld+XfOxAEdyqY5hmyBfHnpHgygJelHJj9FRfBXb6nGhyjEcqiQrwKWCdYkTCV2IpSt6h1fGRre2RdTqZ93HshUqEHs7qDfxW2Y5fuGMqP7dmw6vfALJntzYyBiwRd6Qlt3XhXK7JjbCTdIlO30Zsn7waWGDBS0Yymz50Xv20+gNN3Q7ZNlhY3bfve7KOGXcu9l3DCtjBX7Dn9Sl+CCpQk2mSYdVaJQXJLR/kpYfWWOLNHDKTZu01VJsCNEWELN7UoOenUOgwwmEN4nSJtHYJOqaRFeTKGoSLU2iokn0M4lyJtHMBGqZRCeTKGQSbUyiikn0MIkSJtHAJOqXRPeSKF4CrUuickn0LYmyJdG0JGqWRMeSKFgS7UqiWkn0KoFSJdGoJOqUQJcSKFICLUqgQgn0Z9IZfKEI9BKXq6ICT0KqIVUbtVyQsNKpGF7kNFWFwXuyBdNGkN/s4YVSezPDUwINWYddTRkoSq1WF6S0IDHouyuycDmhswSbEIRBv8AIPMifj1WQGwKnY4Fh/tWhe/yCefNDkVyQbRKa+QUF09QGXdG88BIAIEX8/ftvT0D8gb4nXbGIv+/AH8cR4u/dIoX4Ty2XRPx4WVAc8fuO5MW33NGfXKCIv2f3N2OB+PtWDfl2jD3i16eE+P0Pdp656x9PpYj4A8t++3z/rzvWxSP+dvR3hRTi9/+7c2MOKeI/9lDvI/uf2zkKxP/Kyd6Db3S8JiN+GfEnU78y4pcRv4z40474idG4jK2jsHXPvsJ0Yuu/vLf8kAi27lqmE1czp0qlsHVL+rB1/579U1LD1u0dJ9/a+eaI2Np/4sNXc0iw9UalNLb2PfHIFoMktu585z0Qr/Ye2/3wjhC2ntg5Fat/UOHh3OilZ5rNfIttnqXGYkbvNCBt70QyGh8TND5bRuMyGpfRuIzGZTQurf8IlJ+MhNOLhFNFvSXqYdTraB0N6vWdGFi1KS2oV1uMZI16VKi3J3BLAur1n9hmi9vNvX7fOEK9XZtzJFDvPe2TyVFvzwplwiGKqy5Q1Ot7ea0o6v2WTwz1fXE82BXGGPVemxLqXf32V4+i/6dSQ72
+P9yzCv/Ho97DR14fulwK9fYMPO8n3s29d1/wMwrUu/aDB/FnrYx6ZdSbRM2NEvvJKOt8UFZwvkkfytrywr29Iihrx6ZrxUfrvkIplPUzMpSVMQLK6gk8l5MayvJ1vLT71e0jo6wvB3qnkKCsd0fYgdn++N/3SO/APPPmK1fErwT/bfmKTclQlrb4/FBWtOaTUdaYoKwZMsqSUdaoUBbB/CmjgO8/CtCI7PrSkKCAwNfPbP7morSgAN288DFw8l1fb9ya+LsXvncXx/3uxRvj6WB3+3GpXV/9r2WSo4D2XfEHu/t8sy9QFDD4p9OiKMA369tFAZ/+LtgVxsXB7k/+0/Pl7oO921JDAZ98ETj7om9ovciur0CuFAro33W4gxgFvH3m0NYXNx8dBQoYWHMEp10jowAZBaSEAuRdX/KuL3nXVxp3fZEci5EPdqeaQj7YLbJD7s/XphMrDna/f0LsYPcWrbjyazdKYcXb0ocVfaffmpwaVhw88HHgpWUjY8XHz24hwoqPXiqNFf17Bp6cK4kVD+3tiseK7Qc/fBQr45Gxou48sWLU+6OMFeXD4zJWHEdYUd4hJ++Q+4HskCNRyST6mEQZk2hi+fD4D3d5gBJZHqCJDoX/88jTG9NzKJyeRxfSaVseaB/qj1seGFzmG0fLA1s3ZUksD2x/NVtqeaCve4LU+kDgX/Give9Q7gW6PuDfcP+kMTgVHvg81BfGxanw8Oa/nURbBT/9lcj6wI8lT4X3bztBvD6wcuB+/BnF+sDx9aGPvD4grw8kkbXy+oC8PiCvD6R9fUDeSns+zLv/2Jx0Mm/frl2nxE6F/75YXM2cLZNi3j8nY97ZIzDvwOubL06Nefct//LjFdtGZN6+3/QPETHvlTOkmXdg3zNfSW+lHfAfmBV/0ubwX7duTHoqnD7PU+FRL0Uy85ZPhcvMW2beMvOWmff3jXnLW8HHivUq/g/ei83G
    ****

    I0417 08:07:43.675503 13073 exchange-node.cc:49] Exch id=2****

    input_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    ** **

    output_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    I0417 08:07:43.913043 12163 aggregation-node.cc:170]
    AggregationNode(node_id=1) using llvm codegend functions.****

    I0417 08:07:43.914149 12165 aggregation-node.cc:170]
    AggregationNode(node_id=3) using llvm codegend functions.****

    I0417 08:09:35.016392 13073 impala-server.cc:1027] Query
    bb79198e8f464029:add536cd855fa589 finished (1/1)
    eJztnH14U9UdxwNUKAUcIqlB2DjPdLNltST3Jm0aKc/SNi19ZtOapL5te9ht7mmbkdzUe28K9dlmlIqlgHb4ApsgpVYLjwqhD1JQhA4Zo0NLkT2+TbBzPFKmPnQKPgz74M5NUpqXe5ucEq3F279yzz2vv/M753zv55zTWQM/VmXc4YFsHUhz0LkVFdk5mhw91Fdqs7RqIsdA0bSOzLLTep2uktLpc9KVSbMyVFPLnBTDOJgqm8MFlSmpvovPXatQTba5ecoZDFIoUsddN1sxe1yjIjy6QjXJ6nG5KLZOqZg1RyzR5IaUAlhJeZw8KKiYRAd+JpsYGgjRrifUGvJWtfZWTTZQ6w3qHAOpu7bYVUM5KXAnZDmHm+Ec/kca1AaegTozG1hMt5uMVhNIq/A4nDTIIiiCrsghYYVaayezoK4C2qkKXSVtpyrVWRU6ktKTOQSpT0/JQ/F5gHIpcTMZQKMDxhoWCJUQiieyDaQelFltSUIbn5qQcmvEX4oQDgpZqsoFGR6oUwAoN5cZLbZiW3Gp2VSAno1FRRZTkdFmQr9Ly21l5TYDsJaXpC3gnG6UYmE6Ci+ylJaXgbx7DAA92MrLbjeB4gKrAWiEZwBMd+cvMpqLUOuIdH9AZKSIamhQJIvRXFBagn5YbRaTsQQUGG1GYC02/yI8x+ICAyD8QeH1lqh4fmm52ZY2L446W/ONZrCooNAKeKrCCXODHZ3JQbuHdfB1i1lY42Z5cFMNxfIOXujXXA3gHPfDXA2RmUUU5YE0tUhr1SCqE6YEPNzKUzxMLiw2F1sXoRYEAm11NfCaO8pNlnumWe9zBuIINrqZg05o54Hd7WF41CBQybpdIKJyKSg6qqKYX2YbtGRSOQfZSQzFV1Muz2yFQnVLoMyYw0yZHDk0epc3nds9MTioUE7XGauqWFiFKgvKWHelwwnRcMpQXVfoYCin435KsJeQlPUPK+mxGZ1AoZqZ73azNArn3exlnwkMfIt7KYfKoz12SCsVqeMjcvau3buyY2Lo0A9NgIZ+vpuGRZBBdf2VaorwUAWZYNr+5XqFEOaqQY0ZDDvxXrVClXy7m6IHS3jw1WSF6toSlKMTFqKYVuQQyompvt5k6VYuCi0qtIjLOUdkqFDdMOjcyNsXm0sLTP75kUxXjp+1WjVVmBPoPI99CeQ5ZAbvdFS2PyxYdpMZ1bEI8hbIIZfmQkJThBILKTsyrXKq0DMpJdDlZuuQp9CoGU3epKDRLJD3sMyglaeHBllQryvHR3Vr71/O+40/MdjqZWHVHKpfRMVCahRSl7A6RJWOfGRwghgyDpGOunWvalpeHQ9RbDt01Prb5B2nUF2f72bQbMyjjPIo3l49WOVfo1cFFE8ZWdZRSznvohz84Kun/7UCtUalKoBoEDkEH4WhqQXH7t+CTDOr0MFyvD88OpcoAyuizZssaV6VFTI0WkPynG5kQ/qysYOD6obw13dRTqfUkOs81Se0JjVpRqBvtoRbScw8YnaRNIa0EeLuU8nGSjUTdZ0RVZqqgnTIyqKcGMc80bfp3KqWwDyR1DDdjgakEwoTEODRe07jcjAGjUtLcgC4qGXop04t/IQUg1YPlzYbPXA8TcNaA8FlkToX9wO4DE3M/hxY1BSuWMiByM7UZ4OSvPlo0g5kRBKZWu1QiJAfkZOZo78cFMxVk5kzGG0a43EBB8PxFGOH3DjtFK7G6eD9yxC3AAjFACJTrwFFeRn+IgCZqVP7n6jaKuFJk+V/CuYMiGwyM0eDMsedHg//456o6bH38xed4dPj7v3bvpXpcbbgnFaehZQr4B0gjeb4xYOzwO/QdCj4t1VYOCamArTOmCG/1M0usVWzbk9VdY2HVyanfn50CnpTKjgR8qnQN+u70dCfYR309dA5o/U+9AZFdlTyNpZiONflwf7po7MiGrn34s1oFJczgoexkENjYHDUBO0ChqxQPVTl6MpGV1KkciK1kixbaoXRxLnCXPjfJ+PF1pj+zG90jenpPu0fuOO/jTVGKajDxYJOHLKPWnCvgVwVCE49i+hKlAp5aTXyRRpNo0ifsZCx16GW//3t7efXNB3a8YBCNTcY3WpHXyKQFYu9b81Lb3Y/1/nwA4qg8wrZInO1n1u7CfnVjALodKBehWwZxXKDI3DgSN9jKQrVPMlq5AvysQyyduRWqPxcNSppxaf1b+063vqOsDDGn1KDUrbub//wya6XnxZS/iz+lOqAG2Ck0GCnILBTkNgptNgpdNgpsrBTZGOn0GOnyAmkiN9bsLuDwHYSAttJCPxaYTsJge0kBLaTENhOQmA7CYHtJAS2k2Abl8R2EhLbSUhsJyHx24HtJCS2k5DYTkJiOwmJ7SQktpNgm0qL7SRaDW6ldLgJsnATZOMm0OMmCPbEzBIkdoJS0uapuSzudx5MF/l4nW72uAoc3BLOaLf7RWVAu/0I5TtUqIVaKtQiTFF3bjjz5oRo3XdgIyGu/D66TRDgSC5ZKKYKcvmBbzSIkqAPlbQwHcUVM7VuJ2ocxaJGMjxcxluXOpDQhYJ0nYGEXHh0a92gWr3w9CaknkD4a7/eFL4y84UPzmDM9f/tOvvMC0j2pIZHFghbMMqp41tbUW4/DY9wp3TNPr0eWd9fWtBgfgsGMvugee8zWvSFG3gdsOyQMbsfv/DDCGXc+oTvYUEZK4Qv/AmNir6YCjWWJh1SoSFeENX/MXp+eI0t0sFxd23MXouzI0R7QMzuMQ16YR6GDscQ3jhKG0dj46hrHF2No6hxtDSOisbRzzjKGUczY6hlHJ2Mo5BxtDGOKsbRwzhKGEcD46hfHN2Lo3gxtC6OysXRtzjKFkfT4qhZHB2Lo2BxtCuOasXRqxhKFUej4qhTDF2KoUgxtCiGCsXQnzFX8EUi0EtcrooKPAmphlRtyHZB1E6nYmiT01QcBO8xd0wrQFq1m+Nz7dUUR/AkZGi7mjAQhFqtTo9rR6L51dWNwZ3LcQ05gg1BkPTzFM+BtAWCDHJB4KAXGhbcFHjHLZy/IBDJCZkqvppbmD5NbdBlzQ/uAQDsLdB1v4xi/D27no9g/P3Ht48hxj/QPkmC8a/YKsn4vb+XYvzejSCikU0f/OQqZfzecxdHg/F7D13y7yOPMuPPjovxrz7R99dt51o64mP875/1fdbx7perIhm/F/3NkWL83p6vdk/BZfznWta9cmzVibfxGf9Jf4PWtciMX2b8MeSvzPhlxi8z/oQzfmw2LnPrEG7d/9K8RHLrhwa6Dopw68blOnE1sztXilvX4HHrqcNwa++J1yfHx617n9/asK97WG7ds7vrzSk43PrPSmlu7duzwXubJLfecunlKPl8/m/LOwLcenzDVEH9g0I360IfPdNsprtt882lZhP6pgEJ+yaS2fiosPGbZTYus3GZjctsXGbj0voPQ/nJTDixTDh+1msfYr107YhY7wsD3W0JYb1aPdI16hGxXt+Hd0ex3uY/PhrBenveaBtDrPfwnlQJ1tu4R5L1+lxSrNd3NCWikb5LV+t57qaDbaKst3nuN3tn6PGz/qEwyqz3lrh
Yb8OBpvrNDQd88bHebQ99UL97Zf/nf4hgvd3HjgzcIMV6fe+1HJmMy3oP7Vj97/WHdozgPPeG03u6v9hw+k8y65VZbww5N0LuJ7OsK2FZ3q8Tegaz78uTe0VYVvuzt4jP1r5MKZb1GzyWpR2GZTV1vBbnGcze4xv3v/rS8Cyr7czHk3FYVvcwZzA79zXuJCVZVsO7LXMiV7OO84fbYrEsrf7KWFao5pNZ1qiwrJkyy5JZ1ohYFsb6KbOAMcACaJFzXxocFuDt6jj7YGJYgG5+8CY4NgtoXnVvFAtoaj+zJIIFbNo1hljAqYvJEiygb9s1+CygufvGyH+p0JFxlbKA3l31E0bh3Ffn663+oTAm7na/dXDTsyv3H2qPjwXseuSNTx5d/84akXNfnalSLKDnoy0rsFnAxy8HisJnAc1Ce9Ze2iCzAJkFxMUC5HNf8rkv+dxXAs994dyMke92x5tCvtsdzRV7GjISyRWfOvnFR2J3u5/Xiiu/Uz+X4or34nHFpGG4oq9vx9T4uGJ/W+sj2+uH5Yr9r5x5Aosrts8YhiuuOiZ8O0hwxbcf65sd+V+Puo71PRiTK+qukCuGfD/KXFG+Py5zxTHEFeUzcvIZue/JGTkclYyjj3GUMY4mlu+Pf4/3B6DI/gCJsz/Qe7TpdGLOCurI+WQmOaL9gd6j0WcFvceedEXsD7z24hjaH2h+V+pe+H+EI4+4+wO93rkRjex/Z85Vuj/Qf6FBdH+gc+E3uj/QvHH16e/AWcH47oXXf3ak57WV77fFtz8gRF23+auHxPYHbpTaH+h/9p87sf/365fP9Z2s/+yTEewPrN/81dqd2w+slfcH5P2BGLJW3h+Q9wfk/YGE7w/IZ2mvhHn7/09PAu+Fb244LnYvfKteXM3sy5Ni3r9NHPNu/qIrOT7m7WvZe/atF4dl3t6djYex/p/p+pnDMO+vL6zJkmTeO/s2Rcrnnvd39MY8S6sjr/BeeMg3kcy85XvhMvOWmbfMvGXm/V1j3vJZ8NFivYr/AzN50i4=
    ****

    I0417 09:56:59.495132 13075 exchange-node.cc:49] Exch id=2****

    input_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    ** **

    output_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0
    null=(offset=0 mask=0))])****

    I0417 09:56:59.731639 12418 aggregation-node.cc:170]
    AggregationNode(node_id=1) using llvm codegend functions.****

    I0417 09:56:59.736656 12423 aggregation-node.cc:170]
    AggregationNode(node_id=3) using llvm codegend functions.****

    I0417 09:58:50.823942 13075 impala-server.cc:1027] Query
    9148ac87180b4fed:b92602810a654bb1 finished (1/1)
    eJztnH14E0UexwPUUsrLoZIaxNN5Tk9bD0qym7RppDyXNmnpczatSerbc/dw22Ta5kg2vd1NoT53jxFUKiBW5PWRakWOBwE1QNWqFSve46mARE/Rq/dgUTjq6xVQjkcL3GyyJW+7TaYUa3H7V3Z2Xn/zm9nvfGamU/t+pZp+iw8yjSDb5Sws0Gj1lEOfr9Grq7U10GmoLiDy1IReo6bydNrqak2OMm3qdNWESjdF0y661u7yQGVmVvDLg5MUqnF2L0e5hSCFImvUpdMU00YtUcRGV6jG2nweD8U0KhVTrxJLNK4p0wRrKJ+bA6bqsc7wzwwz7QR8tMsJtYacodbO0OQDdYFBpzfo1JPKPPWUmwK3QoZ1eWnWFXp0gobwM1Dn5gOr+Waz0WYG2dU+l9sJ8giKcFYXkLBarXWQeVBXDR1Uta7G6aBq1HnVOpLSkwUEqc/JLELxOYByKffS04FGB4z1DOArAdR6A5FvIPWg0mZP49u4ekzmjLi/TD4clDBUrQfSHFBnAlBlqTRa7WX2sgqL2YSejaWlVnOp0W5Gvyuq7JVVdgOwVZVnz2LdXpRidg4KL7VWVFWCojsMAD3YqypvNoMyk80ANPwzAObbi+cYLaWodUROKCA+Ulw1NCiS1WgxVZSjHza71WwsByaj3QhsZZbfxeZYZjIAIhQUW2+JihdXVFns2TemUGdbsdEC5phKbICjqt2wUOjoXBY6fIyLa5zLwHovw4Fr6ymGc3F8vxZqAOu6CxZqiNw8orQIZKtFWqsGCZ0wPuzhNo7iYEZJmaXMNge1IBxob6yHl9xSZbbeMdH2Z3c4Dm+j61johg4OOLw+mkMNAjWM1wPiKpeJoqMqivllnkFXkFbFQmYsTXF1lMc3TaFQ3RAuM+kwU2bED43mFSeefDFdGFQop0uNtbUMrEWVBZWMt8blhmg4TVddWuKiKbfrLoq3F5+UCQ0r6bGZmEChmlLs9TJOFM55mXM+Ex74Vu98FpXn9DmgU6nIGh2Xc3DPxudfSI8e+tEJ0NAv9jphKaRRXX+vGs8/1EK6P+0hvYIP89SjxghhnSv31SlUGTd7KacQ0nowmKFQTSpHObphCYppQw6hTM8KdGdIt3JOdFHRRZzLOS5DheqKfudG3j7XUmEyh+ZHMkc5euoy1QR+TnAW+RzzIMciM/gno7JDYULZ3X9AdSyFnBWyyKVZIbQ3V6HK5EssoRzItMoJfM9klkOPl2lEnuJEzWj2pwlGs0LOx9D9Vp4cHWRFva4cndCt3T0HQ8ZPF1q9IKaakfrFVSyqRlF1ialDQunIR/oniIhxiBzUrR2qiUWNHESxHdDVEGqTf5RCdXmxl0azMYcyKqI4R51Q5YAHvTJRHGVkGFcD5b6NcnH9Xb/49E7UGpXKBNEgcvE+CqNT847d/SEyzdQSF8NyofDEXBIMrEg0b4akeVU2SDvRN6TI7UU2dJ4ztjCoroh9fRvldksNudZ997XxfZN2WbhvNsdaScw8YnaRNIa0EVLuU8nGSjUTdZ0RVZqqhc6oL4syPYV5Yn3H9i83hOeJtKbJDjQg3ZCfgACH3rMaj4s2aDxakgXAQy1AP3Vq/iekaPT18Gjz0QPLOZ2wwUCwOj3hYX8BF6CJOZQDg5rClvE5EPm5egKUF81Ek3Y4I5LI1ZKRED4/oiC3QHcuSMhVEwmbSPs8wEWzHEU7IDtKO56td7u40GeInQX4YgCRq9eA0qLpoSIAmatTh56ohlr+SZMXehJyBkQ+mVugQZnjTo/db92ZMD327F3niZ0e205s/VGmx2m8c9o4BlKesHeAbCfLze2fBf6CpkPev238hyM9C6DvjAVy873MPHsd4/XV1tX7OGVG1vH2TPSmgnci5FPRb974BA39y2z9vh49Z6z1oTcosquGszMUzXoiU8YJVVwj2w5ci0ZxFc17GANZNAb6R41gFxCxQl2kyomVTaykSOVEaiVZttQXRpPiF6bj3v+OFvvGHCYu6Dem/em/hQbu6B/jG6Pk1eFcXidG7KPm3auvUAWEqWeOswalQl5ah3zRiaZRpM8YSDsaUcvf6Hrm44d3vBu4W6G6Rohuc6CVCGREY28NvHzw2NnldysE5+WzReba8e2Kx6cilzNBtwv1KmQqKYbtH4F9z5w5PU6hulGyGsW8fKyEjAO5FSq/UI1KWnnqrQ377g98yH8YU0+pQSk7Tncf2Lr67Do+5W9ST6kOuwFGCg12CgI7BYmdQoudQoedIg87RT52Cj12ioJwitS9Bbs7CGwnIbCdhMCvFbaTENhOQmA7CYHtJAS2kxDYTkJgOwm2cUlsJyGxnYTEdhISvx3YTkJiOwmJ7SQktpOQ2E5CYjsJtqm02E6i1eBWSoebIA83QT5uAj1uAqEnppQjsSNISbuv/py4/+jlbJHF62SLz2NysfNYo8MREpVh7XY1yjdSqJWaz9ciRlFvePu54JhE3be7hRBXfl038QIcySUrRddCtji8RoMoCVqoZMfoKLaMbvC6UeMoBjWS5uACzjbfhYQuZMPLa2VsdFtjv1r1P84v8UHs65De5FeZxfyCU4h5quWpz9u3ItmTFRuZJ2xClKbHDq9C64nrYyPcKl2z45cj64dKEwwWsmA4s21rd35PohVu+HXYshFjHujq+2WcMg4sPHKYV8YKfoU/ZomiJ6lCTaZJIyo0ygsS+j9Jzw+ssUU6OOWuTdprKXaEaA+I2T2pQU/diKHDMYQ3jtLG0dg46hpHV+MoahwtjaOicfQzjnLG0cwYahlHJ+MoZBxtjKOKcfQwjhLG0cA46hdH9+IoXgyti6NycfQtjrLF0bQ4ahZHx+IoWBztiqNacfQqhlLF0ag46hRDl2IoUgwtiqFCMfRn0i/4HBHoJS5XRQWehFRDqjZquyBhp1MR2eQ0lwngPemOKQmy67wsV+ioo1iCIyHtdKgJA0Go1eqclHYkmvtOvCjsXI5qKuBtCATSz1EcC7Jn8TLIA4HLOdsw69rwO3b2zFnhSG5I13J17OyciWqDLm+msAcAcBl/56NzE7dADwfdcVug7TtHEON/MzhWgvF/9aQk4+f3T8UZv7/1mnjh/cOvL1LG73/5wTHDsI8cPHokNBSGmfHnp8T4N6y8Z/WGBwLbU2P87a09HYt2/2dpPOP3o7+rpBh/50e9a8fjMv7dx75auezopkEw/gNHt+94Zfkjj8qMX2b8SeSvzPhlxi8z/iFn/NhsXObWUdy6+9Mh5dYbe46/LsKtlyzSiauZtkIpbl2Px63TBuDWrR99fElq3Dr40sk1n78zILcOnFz87Hgcbr1FKc2te8/sXXqTJLc+88LzIOFkWhN/zo7n1qObJvDqH5R4GQ9a9Ey0m2+3z7RUWMxoTQOGbE0ks/FhYePXyWxcZuMyG5fZuMzGpfUfhvKTmfDQMuHUWa82wnqdDYNhva1rO1o2Dwnr1eqRrlEPjvUGbk9gvd1PHZsXy3qbX1s9gljvF22TJVjvim/HSrFe/1+lWG/g9fhbhYFHrr5IWW9g9w7R89ydsy/snaFv2kJDYZhZ7w0
psd7gKw90rXnihx2psV4h9op41vvOu3v6rpBivb2PdW3BPs+9K1zQIFjvpp6D9369J9gis16Z9SaRc4PkfjLLOh+W5V96/VCyrC3PrXpVhGXtePIG8dl6W64Uy/rj0J3B7D2aKsvq7eg7+/zTA7Is/6t7do/DYVk9A5zB9L+3vVn6DObCZ3deFb+J3/7PNZuTsSyt/vxYVrTmk1nWsLCsKTLLklnWoFgWxvdTZgEjgAXoRM59aXBYQO/ZnRsXDQ0L0M0UboJjs4DeljsSWEDrQ0fiWED3oZdGEAvoWjVOggW8t07y3Jc0C2j2Xxnfcy9ef5GygN71p0VZwAU+9+U/szE0FEbE3e77933z2umnv96cGgtoOvnCc58FP39Q5NxXZ5YUC/DvXXwyA5cFHNz29/2d7x8eBAtY2/bSYlTNR2QWILOAlFiAfO5LPvcln/sawnNfODdj5LvdqaaQ73YncsXQnYWh44oLtz10ROxu9yatuPI78FsprngnHldMH+iM3KEPUrzb3fnd/t799w7IFYOvbVyKxRW3XCbNFZsXvt8yXZIrLv+qZ1r8jYjju1sWJeWKuvPkilHrR5kryvfHZa44griifEZOPiP3Mzkjh6OScfQxjjLG0cTy/fGf8f5Ansj+AImzPxD896HtQ3NWUEfOJHPJQe0PNL+SuD/QverV+tj9gc7jW0bQ/sCuNzMk9gf27h3E/kDnF/Gi3f/wdRfp/kDwf2eGY38gcOiD7T+Bs4Kp3Qs/fKx9/fJ/fJbqvXA+7rH2ZWL7A1dK3gs/tWlXJu7+wPFQUU8MYn9AaJF8L1zeH0gma+X9AXl/QN4fGPL9Afks7fkw7+B9M4aSeXd8sutfYvfCn9KLq5n2Iinm/Sc85j1pAOYd7No/NjXm3b3k0/Untg3IvFvvWdeF9f9Ml02RZt6B5d8/q5Nk3m+vSfi3Sp0Pf7cl6VlaHXme98Kj1kQy85bvhcvMW2beMvOWmfdPjXnLZ8GHi/Uq/g+xDdSQ
    ****

From: Ishaan Joshi
Sent: Wednesday, April 17, 2013 10:32 AM
To: impala-user@cloudera.org
Subject: Re: impala select count(*) from table_name - is way too slow compare with hive select statement

Hi,

To better diagnose the problem, could you send us the query profile and the logs from the impalad you connected to? The query profile is available on the impalad debug webpage - click on /queries, and the profile link should be right next to the query you ran. The log can be retrieved from /logs.

Additionally, more information about the table would be helpful; the output of a describe command is ideal.
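As a sketch, the describe output being asked for here can be produced like this (table and host names are the ones used elsewhere in this thread):

    # Describe the table from impala-shell (-q runs a single statement) or Hive.
    impala-shell -i chadvt3endc02.ops.tiaa-cref.org:21000 -q "describe security_report"
    hive -e "describe security_report;"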

Thanks,

-- Ishaan

varz - information

Impala <http://chadvt3endc02.ops.tiaa-cref.org:25000/>

- / <http://chadvt3endc02.ops.tiaa-cref.org:25000/>
- /backends <http://chadvt3endc02.ops.tiaa-cref.org:25000/backends>
- /catalog <http://chadvt3endc02.ops.tiaa-cref.org:25000/catalog>
- /logs <http://chadvt3endc02.ops.tiaa-cref.org:25000/logs>
- /memz <http://chadvt3endc02.ops.tiaa-cref.org:25000/memz>
- /metrics <http://chadvt3endc02.ops.tiaa-cref.org:25000/metrics>
- /queries <http://chadvt3endc02.ops.tiaa-cref.org:25000/queries>
- /sessions <http://chadvt3endc02.ops.tiaa-cref.org:25000/sessions>
- /varz <http://chadvt3endc02.ops.tiaa-cref.org:25000/varz>

Hadoop Configuration

Configuration: core-default.xml, core-site.xml, mapred-default.xml, mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml, hdfs-site.xml
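Any individual value below can be cross-checked against the client configuration on disk, for example (the /etc/hadoop/conf location is an assumption for a CDH RPM install):

    # Sketch: confirm a single key straight from the site file.
    grep -A1 "dfs.client.read.shortcircuit" /etc/hadoop/conf/hdfs-site.xml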

    *Key*

    *Value*

dfs.datanode.data.dir = /test01
dfs.namenode.checkpoint.txns = 40000
s3.replication = 3
mapreduce.output.fileoutputformat.compress.type = RECORD
mapreduce.jobtracker.jobhistory.lru.cache.size = 5
dfs.datanode.failed.volumes.tolerated = 0
hadoop.http.filter.initializers = org.apache.hadoop.http.lib.StaticUserWebFilter
mapreduce.cluster.temp.dir = ${hadoop.tmp.dir}/mapred/temp
mapreduce.reduce.shuffle.memory.limit.percent = 0.25
yarn.nodemanager.keytab = /etc/krb5.keytab
dfs.https.server.keystore.resource = ssl-server.xml
mapreduce.reduce.skip.maxgroups = 0
dfs.domain.socket.path = /var/run/hadoop-hdfs/dn._PORT
hadoop.http.authentication.kerberos.keytab = ${user.home}/hadoop.keytab
yarn.nodemanager.localizer.client.thread-count = 5
ha.failover-controller.new-active.rpc-timeout.ms = 60000
mapreduce.framework.name = local
ha.health-monitor.check-interval.ms = 1000
io.file.buffer.size = 4096
dfs.namenode.checkpoint.period = 3600
mapreduce.task.tmp.dir = ./tmp
ipc.client.kill.max = 10
yarn.resourcemanager.scheduler.class = org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler
mapreduce.jobtracker.taskcache.levels = 2
s3.stream-buffer-size = 4096
dfs.namenode.secondary.http-address = 0.0.0.0:50090
dfs.namenode.decommission.interval = 30
dfs.namenode.http-address = 0.0.0.0:50070
mapreduce.task.files.preserve.failedtasks = false
dfs.encrypt.data.transfer = false
dfs.datanode.address = 0.0.0.0:50010
hadoop.http.authentication.token.validity = 36000
hadoop.security.group.mapping.ldap.search.filter.group = (objectClass=group)
dfs.client.failover.max.attempts = 15
kfs.client-write-packet-size = 65536
yarn.admin.acl = *
yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs = 86400
dfs.client.failover.connection.retries.on.timeouts = 0
mapreduce.map.sort.spill.percent = 0.80
file.stream-buffer-size = 4096
dfs.webhdfs.enabled = false
ipc.client.connection.maxidletime = 10000
mapreduce.jobtracker.persist.jobstatus.hours = 1
dfs.datanode.ipc.address = 0.0.0.0:50020
yarn.nodemanager.address = 0.0.0.0:0
yarn.app.mapreduce.am.job.task.listener.thread-count = 30
dfs.client.read.shortcircuit = true
dfs.namenode.safemode.extension = 30000
ha.zookeeper.parent-znode = /hadoop-ha
yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
io.skip.checksum.errors = false
yarn.resourcemanager.scheduler.client.thread-count = 50
hadoop.http.authentication.kerberos.principal = HTTP/_HOST@LOCALHOST
mapreduce.reduce.log.level = INFO
fs.s3.maxRetries = 4
hadoop.kerberos.kinit.command = kinit
yarn.nodemanager.process-kill-wait.ms = 2000
dfs.namenode.name.dir.restore = false
mapreduce.jobtracker.handler.count = 10
yarn.app.mapreduce.client-am.ipc.max-retries = 1
dfs.client.use.datanode.hostname = false
hadoop.util.hash.type = murmur
io.seqfile.lazydecompress = true
dfs.datanode.dns.interface = default
yarn.nodemanager.disk-health-checker.min-healthy-disks = 0.25
mapreduce.job.maxtaskfailures.per.tracker = 3
mapreduce.tasktracker.healthchecker.script.timeout = 600000
hadoop.security.group.mapping.ldap.search.attr.group.name = cn
fs.df.interval = 60000
dfs.namenode.kerberos.internal.spnego.principal = ${dfs.web.authentication.kerberos.principal}
mapreduce.job.reduce.shuffle.consumer.plugin.class = org.apache.hadoop.mapreduce.task.reduce.Shuffle
mapreduce.jobtracker.address = chadvt3endc01:54311
mapreduce.tasktracker.tasks.sleeptimebeforesigkill = 5000
dfs.journalnode.rpc-address = 0.0.0.0:8485
mapreduce.job.acl-view-job =
dfs.client.block.write.replace-datanode-on-failure.policy = DEFAULT
dfs.namenode.replication.interval = 3
dfs.namenode.num.checkpoints.retained = 2
mapreduce.tasktracker.http.address = 0.0.0.0:50060
yarn.resourcemanager.scheduler.address = 0.0.0.0:8030
dfs.datanode.directoryscan.threads = 1
hadoop.security.group.mapping.ldap.ssl = false
mapreduce.task.merge.progress.records = 10000
dfs.heartbeat.interval = 3
net.topology.script.number.args = 100
mapreduce.local.clientfactory.class.name = org.apache.hadoop.mapred.LocalClientFactory
dfs.client-write-packet-size = 65536
io.native.lib.available = true
dfs.client.failover.connection.retries = 0
yarn.nodemanager.disk-health-checker.interval-ms = 120000
dfs.blocksize = 67108864
yarn.resourcemanager.container-tokens.master-key-rolling-interval-secs = 86400
mapreduce.jobhistory.webapp.address = 0.0.0.0:19888
yarn.resourcemanager.resource-tracker.client.thread-count = 50
dfs.blockreport.initialDelay = 0
ha.health-monitor.rpc-timeout.ms = 45000
mapreduce.reduce.markreset.buffer.percent = 0.0
dfs.ha.tail-edits.period = 60
mapreduce.admin.user.env = LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native
yarn.resourcemanager.client.thread-count = 50
yarn.nodemanager.health-checker.script.timeout-ms = 1200000
file.bytes-per-checksum = 512
dfs.replication.max = 512
dfs.namenode.max.extra.edits.segments.retained = 10000
io.map.index.skip = 0
mapreduce.task.timeout = 600000
dfs.datanode.du.reserved = 0
dfs.support.append = true
ftp.blocksize = 67108864
dfs.client.file-block-storage-locations.num-threads = 10
yarn.nodemanager.container-manager.thread-count = 20
ipc.server.listen.queue.size = 128
yarn.resourcemanager.amliveliness-monitor.interval-ms = 1000
hadoop.ssl.hostname.verifier = DEFAULT
mapreduce.tasktracker.dns.interface = default
hadoop.security.group.mapping.ldap.search.attr.member = member
mapreduce.tasktracker.outofband.heartbeat = false
mapreduce.job.userlog.retain.hours = 24
yarn.nodemanager.resource.memory-mb = 8192
dfs.namenode.delegation.token.renew-interval = 86400000
hadoop.ssl.keystores.factory.class = org.apache.hadoop.security.ssl.FileBasedKeyStoresFactory
dfs.datanode.sync.behind.writes = false
mapreduce.map.maxattempts = 4
dfs.datanode.handler.count = 10
hadoop.ssl.require.client.cert = false
ftp.client-write-packet-size = 65536
dfs.client.write.exclude.nodes.cache.expiry.interval.millis = 600000
ipc.server.tcpnodelay = false
mapreduce.reduce.shuffle.retry-delay.max.ms = 60000
mapreduce.task.profile.reduces = 0-2
ha.health-monitor.connect-retry-interval.ms = 1000
hadoop.fuse.connection.timeout = 300
dfs.permissions.superusergroup = hadoop
mapreduce.jobtracker.jobhistory.task.numberprogresssplits = 12
fs.ftp.host.port = 21
mapreduce.map.speculative = true
mapreduce.client.submit.file.replication = 10
dfs.datanode.data.dir.perm = 700
s3native.blocksize = 67108864
mapreduce.job.ubertask.maxmaps = 9
dfs.namenode.replication.min = 1
mapreduce.cluster.acls.enabled = false
hadoop.security.uid.cache.secs = 14400
yarn.nodemanager.localizer.fetch.thread-count = 4
map.sort.class = org.apache.hadoop.util.QuickSort
fs.trash.checkpoint.interval = 0
dfs.image.transfer.timeout = 600000
dfs.namenode.name.dir = file://${hadoop.tmp.dir}/dfs/name
yarn.app.mapreduce.am.staging-dir = /tmp/hadoop-yarn/staging
fs.AbstractFileSystem.file.impl = org.apache.hadoop.fs.local.LocalFs
yarn.nodemanager.env-whitelist = JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,YARN_HOME
dfs.image.compression.codec = org.apache.hadoop.io.compress.DefaultCodec
mapreduce.job.reduces = 1
mapreduce.job.complete.cancel.delegation.tokens = true
hadoop.security.group.mapping.ldap.search.filter.user = (&(objectClass=user)(sAMAccountName={0}))
yarn.nodemanager.sleep-delay-before-sigkill.ms = 250
mapreduce.tasktracker.healthchecker.interval = 60000
mapreduce.jobtracker.heartbeats.in.second = 100
kfs.bytes-per-checksum = 512
mapreduce.jobtracker.persist.jobstatus.dir = /jobtracker/jobsInfo
dfs.namenode.backup.http-address = 0.0.0.0:50105
hadoop.rpc.protection = authentication
dfs.namenode.https-address = 0.0.0.0:50470
ftp.stream-buffer-size = 4096
dfs.ha.log-roll.period = 120
yarn.resourcemanager.admin.client.thread-count = 1
file.client-write-packet-size = 65536
hadoop.http.authentication.simple.anonymous.allowed = true
yarn.nodemanager.log.retain-seconds = 10800
dfs.datanode.drop.cache.behind.reads = false
dfs.image.transfer.bandwidthPerSec = 0
ha.failover-controller.cli-check.rpc-timeout.ms = 20000
mapreduce.tasktracker.instrumentation = org.apache.hadoop.mapred.TaskTrackerMetricsInst
io.mapfile.bloom.size = 1048576
dfs.ha.fencing.ssh.connect-timeout = 30000
s3.bytes-per-checksum = 512
fs.automatic.close = true
fs.trash.interval = 0
hadoop.security.authentication = simple
fs.defaultFS = hdfs://chadvt3endc01:8020
hadoop.ssl.server.conf = ssl-server.xml
ipc.client.connect.max.retries = 10
yarn.resourcemanager.delayed.delegation-token.removal-interval-ms = 30000
dfs.journalnode.http-address = 0.0.0.0:8480
mapreduce.jobtracker.taskscheduler = org.apache.hadoop.mapred.JobQueueTaskScheduler
mapreduce.job.speculative.speculativecap = 0.1
yarn.am.liveness-monitor.expiry-interval-ms = 600000
mapreduce.output.fileoutputformat.compress = false
net.topology.node.switch.mapping.impl = org.apache.hadoop.net.ScriptBasedMapping
dfs.namenode.replication.considerLoad = true
dfs.namenode.audit.loggers = default
mapreduce.job.counters.max = 120
yarn.resourcemanager.address = 0.0.0.0:8032
dfs.client.block.write.retries = 3
yarn.resourcemanager.nm.liveness-monitor.interval-ms = 1000
io.map.index.interval = 128
mapred.child.java.opts = -Xmx200m
mapreduce.tasktracker.local.dir.minspacestart = 0
mapreduce.client.progressmonitor.pollinterval = 1000
dfs.client.https.keystore.resource = ssl-client.xml
rpc.engine.org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolPB = org.apache.hadoop.ipc.ProtobufRpcEngine
mapreduce.jobtracker.tasktracker.maxblacklists = 4
mapreduce.job.queuename = default
yarn.nodemanager.localizer.address = 0.0.0.0:8040
io.mapfile.bloom.error.rate = 0.005
mapreduce.job.split.metainfo.maxsize = 10000000
yarn.nodemanager.delete.thread-count = 4
ipc.client.tcpnodelay = false
yarn.app.mapreduce.am.resource.mb = 1536
dfs.datanode.dns.nameserver = default
mapreduce.map.output.compress.codec = org.apache.hadoop.io.compress.DefaultCodec
dfs.namenode.accesstime.precision = 3600000
mapreduce.map.log.level = INFO
io.seqfile.compress.blocksize = 1000000
mapreduce.tasktracker.taskcontroller = org.apache.hadoop.mapred.DefaultTaskController
hadoop.security.groups.cache.secs = 300
mapreduce.job.end-notification.max.attempts = 5
yarn.nodemanager.webapp.address = 0.0.0.0:8042
mapreduce.jobtracker.expire.trackers.interval = 600000
yarn.resourcemanager.webapp.address = 0.0.0.0:8088
yarn.nodemanager.health-checker.interval-ms = 600000
hadoop.security.authorization = false
mapreduce.job.map.output.collector.class = org.apache.hadoop.mapred.MapTask$MapOutputBuffer
fs.ftp.host = 0.0.0.0
yarn.app.mapreduce.am.scheduler.heartbeat.interval-ms = 1000
mapreduce.ifile.readahead = true
ha.zookeeper.session-timeout.ms = 5000
mapreduce.tasktracker.taskmemorymanager.monitoringinterval = 5000
mapreduce.reduce.shuffle.parallelcopies = 5
mapreduce.map.skip.maxrecords = 0
dfs.https.enable = false
mapreduce.reduce.shuffle.read.timeout = 180000
mapreduce.output.fileoutputformat.compress.codec = org.apache.hadoop.io.compress.DefaultCodec
mapreduce.jobtracker.instrumentation = org.apache.hadoop.mapred.JobTrackerMetricsInst
yarn.nodemanager.remote-app-log-dir-suffix = logs
dfs.blockreport.intervalMsec = 21600000
mapreduce.reduce.speculative = true
mapreduce.jobhistory.keytab = /etc/security/keytab/jhs.service.keytab
dfs.datanode.balance.bandwidthPerSec = 1048576
file.blocksize = 67108864
yarn.resourcemanager.admin.address = 0.0.0.0:8033
yarn.resourcemanager.resource-tracker.address = 0.0.0.0:8031
mapreduce.tasktracker.local.dir.minspacekill = 0
mapreduce.jobtracker.staging.root.dir = ${hadoop.tmp.dir}/mapred/staging
mapreduce.jobtracker.retiredjobs.cache.size = 1000
ipc.client.connect.max.retries.on.timeouts = 45
ha.zookeeper.acl = world:anyone:rwcda
yarn.nodemanager.local-dirs = ${hadoop.tmp.dir}/nm-local-dir
mapreduce.reduce.shuffle.connect.timeout = 180000
dfs.block.access.key.update.interval = 600
dfs.block.access.token.lifetime = 600
mapreduce.job.end-notification.retry.attempts = 5
mapreduce.jobtracker.system.dir = ${hadoop.tmp.dir}/mapred/system
yarn.nodemanager.admin-env = MALLOC_ARENA_MAX=$MALLOC_ARENA_MAX
yarn.log-aggregation.retain-seconds = -1
mapreduce.jobtracker.jobhistory.block.size = 3145728
mapreduce.tasktracker.indexcache.mb = 10
dfs.namenode.checkpoint.check.period = 60
dfs.client.block.write.replace-datanode-on-failure.enable = true
dfs.datanode.directoryscan.interval = 21600
yarn.nodemanager.container-monitor.interval-ms = 3000
dfs.default.chunk.view.size = 32768
mapreduce.job.speculative.slownodethreshold = 1.0
mapreduce.job.reduce.slowstart.completedmaps = 0.05
hadoop.security.instrumentation.requires.admin = false
dfs.namenode.safemode.min.datanodes = 0
hadoop.http.authentication.signature.secret.file = ${user.home}/hadoop-http-auth-signature-secret
mapreduce.reduce.maxattempts = 4
yarn.nodemanager.localizer.cache.target-size-mb = 10240
s3native.replication = 3
dfs.datanode.https.address = 0.0.0.0:50475
mapreduce.reduce.skip.proc.count.autoincr = true
file.replication = 1
hadoop.hdfs.configuration.version = 1
ipc.client.idlethreshold = 4000
hadoop.tmp.dir = /tmp/hadoop-${user.name}
mapreduce.jobhistory.address = 0.0.0.0:10020
mapreduce.jobtracker.restart.recover = false
mapreduce.cluster.local.dir = /test02/mapred/local
yarn.ipc.serializer.type = protocolbuffers
dfs.namenode.decommission.nodes.per.interval = 5
dfs.namenode.delegation.key.update-interval = 86400000
fs.s3.buffer.dir = ${hadoop.tmp.dir}/s3
dfs.namenode.support.allow.format = true
yarn.nodemanager.remote-app-log-dir = /tmp/logs
hadoop.work.around.non.threadsafe.getpwuid = false
dfs.ha.automatic-failover.enabled = false
mapreduce.jobtracker.persist.jobstatus.active = true
dfs.namenode.logging.level = info
yarn.nodemanager.log-dirs = ${yarn.log.dir}/userlogs
ha.health-monitor.sleep-after-disconnect.ms = 1000
dfs.namenode.checkpoint.edits.dir = ${dfs.namenode.checkpoint.dir}
hadoop.rpc.socket.factory.class.default = org.apache.hadoop.net.StandardSocketFactory
yarn.resourcemanager.keytab = /etc/krb5.keytab
dfs.datanode.http.address = 0.0.0.0:50075
mapreduce.task.profile = false
dfs.namenode.edits.dir = ${dfs.namenode.name.dir}
hadoop.fuse.timer.period = 5
mapreduce.map.skip.proc.count.autoincr = true
fs.AbstractFileSystem.viewfs.impl = org.apache.hadoop.fs.viewfs.ViewFs
mapreduce.job.speculative.slowtaskthreshold = 1.0
s3native.stream-buffer-size = 4096
yarn.nodemanager.delete.debug-delay-sec = 0
dfs.secondary.namenode.kerberos.internal.spnego.principal = ${dfs.web.authentication.kerberos.principal}
dfs.namenode.safemode.threshold-pct = 0.999f
mapreduce.ifile.readahead.bytes = 4194304
yarn.scheduler.maximum-allocation-mb = 8192
s3native.bytes-per-checksum = 512
mapreduce.job.committer.setup.cleanup.needed = true
kfs.replication = 3
yarn.nodemanager.log-aggregation.compression-type = none
hadoop.http.authentication.type = simple
dfs.client.failover.sleep.base.millis = 500
yarn.nodemanager.heartbeat.interval-ms = 1000
hadoop.jetty.logs.serve.aliases = true
ha.failover-controller.graceful-fence.rpc-timeout.ms = 5000
mapreduce.reduce.shuffle.input.buffer.percent = 0.70
dfs.datanode.max.transfer.threads = 4096
mapreduce.task.io.sort.mb = 100
mapreduce.reduce.merge.inmem.threshold = 1000
dfs.namenode.handler.count = 10
hadoop.ssl.client.conf = ssl-client.xml
yarn.resourcemanager.container.liveness-monitor.interval-ms = 600000
mapreduce.client.completion.pollinterval = 5000
yarn.nodemanager.vmem-pmem-ratio = 2.1
yarn.app.mapreduce.client.max-retries = 3
hadoop.ssl.enabled = false
fs.AbstractFileSystem.hdfs.impl = org.apache.hadoop.fs.Hdfs
mapreduce.reduce.java.opts = -Xmx1024M
mapreduce.tasktracker.reduce.tasks.maximum = 4
mapreduce.map.java.opts = -Xmx1024M
mapreduce.reduce.input.buffer.percent = 0.0
kfs.stream-buffer-size = 4096
dfs.namenode.invalidate.work.pct.per.iteration = 0.32f
yarn.app.mapreduce.am.command-opts = -Xmx1024m
dfs.bytes-per-checksum = 512
dfs.replication = 1
mapreduce.shuffle.ssl.file.buffer.size = 65536
dfs.permissions.enabled = true
mapreduce.jobtracker.maxtasks.perjob = -1
dfs.datanode.use.datanode.hostname = false
mapreduce.task.userlog.limit.kb = 0
dfs.namenode.fs-limits.max-directory-items = 0
s3.client-write-packet-size = 65536
dfs.client.failover.sleep.max.millis = 15000
mapreduce.job.maps = 2
dfs.namenode.fs-limits.max-component-length = 0
mapreduce.map.output.compress = false
s3.blocksize = 67108864
dfs.namenode.edits.journal-plugin.qjournal = org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager
kfs.blocksize = 67108864
dfs.client.https.need-auth = false
yarn.scheduler.minimum-allocation-mb = 1024
ftp.replication = 3
mapreduce.input.fileinputformat.split.minsize = 0
fs.s3n.block.size = 67108864
yarn.ipc.rpc.class = org.apache.hadoop.yarn.ipc.HadoopYarnProtoRPC
dfs.namenode.num.extra.edits.retained = 1000000
hadoop.http.staticuser.user = dr.who
yarn.nodemanager.localizer.cache.cleanup.interval-ms = 600000
mapreduce.job.jvm.numtasks = 1
mapreduce.task.profile.maps = 0-2
mapreduce.shuffle.port = 8080
mapreduce.reduce.shuffle.merge.percent = 0.66
mapreduce.jobtracker.http.address = 0.0.0.0:50030
mapreduce.task.skip.start.attempts = 2
mapreduce.task.io.sort.factor = 10
dfs.namenode.checkpoint.dir = file://${hadoop.tmp.dir}/dfs/namesecondary
tfile.fs.input.buffer.size = 262144
tfile.io.chunk.size = 1048576
fs.s3.block.size = 67108864
io.serializations = org.apache.hadoop.io.serializer.WritableSerialization,org.apache.hadoop.io.serializer.avro.AvroSpecificSerialization,org.apache.hadoop.io.serializer.avro.AvroReflectSerialization
yarn.resourcemanager.max-completed-applications = 10000
mapreduce.jobhistory.principal = jhs/_HOST@REALM.TLD
mapreduce.job.end-notification.retry.interval = 1
dfs.namenode.backup.address = 0.0.0.0:50100
dfs.block.access.token.enable = false
io.seqfile.sorter.recordlimit = 1000000
s3native.client-write-packet-size = 65536
ftp.bytes-per-checksum = 512
hadoop.security.group.mapping = org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback
dfs.client.file-block-storage-locations.timeout = 3000
mapreduce.job.end-notification.max.retry.interval = 5
yarn.acl.enable = true
yarn.nm.liveness-monitor.expiry-interval-ms = 600000
mapreduce.tasktracker.map.tasks.maximum = 8
dfs.namenode.max.objects = 0
dfs.namenode.delegation.token.max-lifetime = 604800000
mapreduce.job.hdfs-servers = ${fs.defaultFS}
yarn.application.classpath = $HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$YARN_HOME/share/hadoop/yarn/*,$YARN_HOME/share/hadoop/yarn/lib/*,$YARN_HOME/share/hadoop/mapreduce/*,$YARN_HOME/share/hadoop/mapreduce/lib/*
mapreduce.tasktracker.dns.nameserver = default
dfs.datanode.hdfs-blocks-metadata.enabled = true
yarn.nodemanager.aux-services.mapreduce.shuffle.class = org.apache.hadoop.mapred.ShuffleHandler
dfs.datanode.readahead.bytes = 4193404
mapreduce.job.ubertask.maxreduces = 1
dfs.image.compress = false
mapreduce.shuffle.ssl.enabled = false
yarn.log-aggregation-enable = false
mapreduce.tasktracker.report.address = 127.0.0.1:0
mapreduce.tasktracker.http.threads = 40
dfs.stream-buffer-size = 4096
tfile.fs.output.buffer.size = 262144
fs.permissions.umask-mode = 022
yarn.resourcemanager.am.max-retries = 1
ha.failover-controller.graceful-fence.connection.retries = 1
dfs.datanode.drop.cache.behind.writes = false
mapreduce.job.ubertask.enable = false
hadoop.common.configuration.version = 0.23.0
dfs.namenode.replication.work.multiplier.per.iteration = 2
mapreduce.job.acl-modify-job =
io.seqfile.local.dir = ${hadoop.tmp.dir}/io/local
fs.s3.sleepTimeSeconds = 10
mapreduce.client.output.filter = FAILED

Command-line Flags

--dump_ir=false
--module_output=
--be_port=22000
--hostname=chadvt3endc02
--keytab_file=
--mem_limit=80%
--planservice_host=localhost
--planservice_port=20000
--principal=
--exchg_node_buffer_size_bytes=10485760
--max_row_batches=0
--randomize_splits=false
--num_disks=0
--num_threads_per_disk=1
--read_size=8388608
--enable_webserver=true
--state_store_host=chadvt3endc02.ops.tiaa-cref.org
--state_store_subscriber_port=23000
--use_statestore=true
--nn=chadvt3endc01
--nn_port=8020
--serialize_batch=false
--status_report_interval=5
--compress_rowbatches=true
--abort_on_config_error=true
--be_service_threads=64
--beeswax_port=21000
--default_query_options=
--fe_service_threads=64
--heap_profile_dir=
--hs2_port=21050
--load_catalog_at_startup=false
--log_mem_usage_interval=0
--log_query_to_file=true
--query_log_size=25
--use_planservice=false
--statestore_subscriber_timeout_seconds=10
--state_store_port=24000
--statestore_max_missed_heartbeats=5
--statestore_num_heartbeat_threads=10
--statestore_suspect_heartbeats=2
--kerberos_reinit_interval=60
--sasl_path=/usr/lib/sasl2:/usr/lib64/sasl2:/usr/local/lib/sasl2:/usr/lib/x86_64-linux-gnu/sasl2
--web_log_bytes=1048576
--log_filename=impalad
--periodic_counter_update_period_ms=500
--rpc_cnxn_attempts=10
--rpc_cnxn_retry_interval_ms=2000
--enable_webserver_doc_root=true
--webserver_doc_root=/usr/lib/impala
--webserver_interface=
--webserver_port=25000
--flagfile=
--fromenv=
--tryfromenv=
--undefok=
--tab_completion_columns=80
--tab_completion_word=
--help=false
--helpfull=false
--helpmatch=
--helpon=
--helppackage=false
--helpshort=false
--helpxml=false
--version=false
--alsologtoemail=
--alsologtostderr=false
--drop_log_memory=true
--log_backtrace_at=
--log_dir=/var/log/impala
--log_link=
--log_prefix=true
--logbuflevel=0
--logbufsecs=30
--logemaillevel=999
--logmailer=/bin/mail
--logtostderr=false
--max_log_size=1800
--minloglevel=0
--stderrthreshold=2
--stop_logging_if_full_disk=false
--symbolize_stacktrace=true
--v=0
--vmodule=



query profile:


Query (id=9148ac87180b4fed:b92602810a654bb1):
  - PlanningTime: 15.457ms
  Summary:
    Default Db: default
    End Time: 2013-04-17 09:58:50
    Impala Version: impalad version 0.7 RELEASE (build 62a2db93eb04c36e5becab5fdcaf06b53a839238)
    Built on Mon, 15 Apr 2013 08:27:38 PST
    Plan:
    ----------------
    Plan Fragment 0
      UNPARTITIONED
      AGGREGATE
        OUTPUT: SUM()
        GROUP BY:
        TUPLE IDS: 1
      EXCHANGE (2)
        TUPLE IDS: 1

    Plan Fragment 1
      RANDOM
      STREAM DATA SINK
        EXCHANGE ID: 2
        UNPARTITIONED

      AGGREGATE
        OUTPUT: COUNT(*)
        GROUP BY:
        TUPLE IDS: 1
      SCAN HDFS table=default.security_report #partitions=1 size=12.62GB (0)
        TUPLE IDS: 0
    ----------------
    Query State: FINISHED
    Query Type: QUERY
    Sql Statement: select count(*) from security_report
    Start Time: 2013-04-17 09:56:59
    User: nathamu

  Query 9148ac87180b4fed:b92602810a654bb1:(1m50s 0.00%)
    Aggregate Profile:
      - FinalizationTimer: 0ns
    Coordinator Fragment:(1m50s 0.00%)
      - RowsProduced: 1
      CodeGen:
        - CodegenTime: 465.0us
        - CompileTime: 109.684ms
        - LoadTime: 9.50ms
        - ModuleFileSize: 70.02 KB
      AGGREGATION_NODE (id=3):(1m50s 0.00%)
        - BuildBuckets: 1.02K (1024)
        - BuildTime: 6.0us
        - GetResultsTime: 3.0us
        - LoadFactor: 0.00
        - MemoryUsed: 32.01 KB
        - RowsReturned: 1
        - RowsReturnedRate: 0
      EXCHANGE_NODE (id=2):(1m50s 100.00%)
        - BytesReceived: 64.00 B
        - ConvertRowBatchTime: 7.0us
        - DataArrivalWaitTime: 1m50s
        - DeserializeRowBatchTimer: 22.0us
        - FirstBatchArrivalWaitTime: 0ns
        - MemoryUsed: 0.00
        - RowsReturned: 4
        - RowsReturnedRate: 0
        - SendersBlockedTotalTimer: 0ns
        - SendersBlockedWallTimer: 0ns
    Averaged Fragment 1:(1m47s 0.00%)
      completion times: min:1m43s max:1m50s mean: 1m47s stddev:2s582ms
      execution rates: min:27.82 MB/sec max:32.43 MB/sec mean:29.95 MB/sec stddev:1.95 MB/sec
      num instances: 4
      split sizes: min: 2.81 GB, max: 3.50 GB, avg: 3.16 GB, stddev: 273.91 MB
      - RowsProduced: 1
      CodeGen:
        - CodegenTime: 742.0us
        - CompileTime: 114.504ms
        - LoadTime: 8.748ms
        - ModuleFileSize: 70.02 KB
      DataStreamSender (dst_id=2):(292.250us 0.00%)
        - BytesSent: 16.00 B
        - NetworkThroughput: 83.56 KB/sec
        - OverallThroughput: 53.97 KB/sec
        - SerializeBatchTime: 7.500us
        - ThriftTransmitTime: 204.0us
        - UncompressedRowBatchSize: 16.00 B
      AGGREGATION_NODE (id=1):(1m47s 0.01%)
        - BuildBuckets: 1.02K (1024)
        - BuildTime: 3.6ms
        - GetResultsTime: 3.250us
        - LoadFactor: 0.00
        - MemoryUsed: 32.01 KB
        - RowsReturned: 1
        - RowsReturnedRate: 0
      HDFS_SCAN_NODE (id=0):(1m47s 99.99%)
        - AverageHdfsReadThreadConcurrency: 0.06
        - HdfsReadThreadConcurrencyCountPercentage=0: 93.50
        - HdfsReadThreadConcurrencyCountPercentage=1: 6.50
        - HdfsReadThreadConcurrencyCountPercentage=2 through =41: 0.00
        - AverageScannerThreadConcurrency: 0.02
        - BytesRead: 3.16 GB
        - MemoryUsed: 0.00
        - NumDisksAccessed: 1
        - PerReadThreadRawHdfsThroughput: 464.42 MB/sec
        - RowsReturned: 411.49K (411489)
        - RowsReturnedRate: 3.82 K/sec
        - ScanRangesComplete: 50
        - ScannerThreadsInvoluntaryContextSwitches: 4
        - ScannerThreadsTotalWallClockTime: 49m5s
          - DelimiterParseTime: 1s340ms
          - MaterializeTupleTime: 331.500us
          - ScannerThreadsSysTime: 6.744ms
          - ScannerThreadsUserTime: 1s365ms
        - ScannerThreadsVoluntaryContextSwitches: 1.27K (1273)
        - TotalRawHdfsReadTime: 6s973ms
        - TotalReadThroughput: 29.97 MB/sec
    Fragment 1:
      Instance 9148ac87180b4fed:b92602810a654bb3 (host=chas2t3endc02:22000):(1m50s 0.00%)
        Hdfs split stats (:<# splits>/): 0:56/3.50 GB
        - RowsProduced: 1
        CodeGen:
          - CodegenTime: 780.0us
          - CompileTime: 113.908ms
          - LoadTime: 8.810ms
          - ModuleFileSize: 70.02 KB
        DataStreamSender (dst_id=2):(311.0us 0.00%)
          - BytesSent: 16.00 B
          - NetworkThroughput: 61.04 KB/sec
          - OverallThroughput: 50.24 KB/sec
          - SerializeBatchTime: 6.0us
          - ThriftTransmitTime: 256.0us
          - UncompressedRowBatchSize: 16.00 B
        AGGREGATION_NODE (id=1):(1m50s 0.01%)
          - BuildBuckets: 1.02K (1024)
          - BuildTime: 3.256ms
          - GetResultsTime: 3.0us
          - LoadFactor: 0.00
          - MemoryUsed: 32.01 KB
          - RowsReturned: 1
          - RowsReturnedRate: 0
        HDFS_SCAN_NODE (id=0):(1m50s 99.99%)
          File Formats: TEXT/NONE:56
          Hdfs split stats (:<# splits>/): 0:56/3.50 GB
          - AverageHdfsReadThreadConcurrency: 0.07
          - HdfsReadThreadConcurrencyCountPercentage=0: 93.21
          - HdfsReadThreadConcurrencyCountPercentage=1: 6.79
          - HdfsReadThreadConcurrencyCountPercentage=2 through =35: 0.00
          - AverageScannerThreadConcurrency: 0.01
          - BytesRead: 3.50 GB
          - MemoryUsed: 0.00
          - NumDisksAccessed: 1
          - PerReadThreadRawHdfsThroughput: 451.90 MB/sec
          - RowsReturned: 434.37K (434373)
          - RowsReturnedRate: 3.93 K/sec
          - ScanRangesComplete: 56
          - ScannerThreadsInvoluntaryContextSwitches: 2
          - ScannerThreadsTotalWallClockTime: 59m13s
            - DelimiterParseTime: 1s502ms
            - MaterializeTupleTime: 334.0us
            - ScannerThreadsSysTime: 5.994ms
            - ScannerThreadsUserTime: 1s525ms
          - ScannerThreadsVoluntaryContextSwitches: 1.36K (1365)
          - TotalRawHdfsReadTime: 7s931ms
          - TotalReadThroughput: 32.43 MB/sec
      Instance 9148ac87180b4fed:b92602810a654bb4 (host=chadvt3endc02:22000):(1m48s 0.00%)
        Hdfs split stats (:<# splits>/): 0:48/3.00 GB
        - RowsProduced: 1
        CodeGen:
          - CodegenTime: 724.0us
          - CompileTime: 113.126ms
          - LoadTime: 8.573ms
          - ModuleFileSize: 70.02 KB
        DataStreamSender (dst_id=2):(247.0us 0.00%)
          - BytesSent: 16.00 B
          - NetworkThroughput: 131.30 KB/sec
          - OverallThroughput: 63.26 KB/sec
          - SerializeBatchTime: 8.0us
          - ThriftTransmitTime: 119.0us
          - UncompressedRowBatchSize: 16.00 B
        AGGREGATION_NODE (id=1):(1m48s 0.01%)
          - BuildBuckets: 1.02K (1024)
          - BuildTime: 2.511ms
          - GetResultsTime: 4.0us
          - LoadFactor: 0.00
          - MemoryUsed: 32.01 KB
          - RowsReturned: 1
          - RowsReturnedRate: 0
        HDFS_SCAN_NODE (id=0):(1m48s 99.99%)
          File Formats: TEXT/NONE:48
          Hdfs split stats (:<# splits>/): 0:48/3.00 GB
          - AverageHdfsReadThreadConcurrency: 0.07
          - HdfsReadThreadConcurrencyCountPercentage=0: 92.59
          - HdfsReadThreadConcurrencyCountPercentage=1: 7.41
          - HdfsReadThreadConcurrencyCountPercentage=2 through =19: 0.00
          - AverageScannerThreadConcurrency: 0.02
          - BytesRead: 3.00 GB
          - MemoryUsed: 0.00
          - NumDisksAccessed: 1
          - PerReadThreadRawHdfsThroughput: 447.17 MB/sec
          - RowsReturned: 321.75K (321753)
          - RowsReturnedRate: 2.97 K/sec
          - ScanRangesComplete: 48
          - ScannerThreadsInvoluntaryContextSwitches: 4
          - ScannerThreadsTotalWallClockTime: 49m39s
            - DelimiterParseTime: 1s252ms
            - MaterializeTupleTime: 312.0us
            - ScannerThreadsSysTime: 5.995ms
            - ScannerThreadsUserTime: 1s277ms
          - ScannerThreadsVoluntaryContextSwitches: 1.27K (1268)
          - TotalRawHdfsReadTime: 6s862ms
          - TotalReadThroughput: 28.40 MB/sec
      Instance 9148ac87180b4fed:b92602810a654bb5 (host=chas2t3endc01:22000):(1m43s 0.00%)
        Hdfs split stats (:<# splits>/): 0:45/2.81 GB
        - RowsProduced: 1
        CodeGen:
          - CodegenTime: 731.0us
          - CompileTime: 113.26ms
          - LoadTime: 8.870ms
          - ModuleFileSize: 70.02 KB
        DataStreamSender (dst_id=2):(315.0us 0.00%)
          - BytesSent: 16.00 B
          - NetworkThroughput: 73.36 KB/sec
          - OverallThroughput: 49.60 KB/sec
          - SerializeBatchTime: 8.0us
          - ThriftTransmitTime: 213.0us
          - UncompressedRowBatchSize: 16.00 B
        AGGREGATION_NODE (id=1):(1m43s 0.01%)
          - BuildBuckets: 1.02K (1024)
          - BuildTime: 3.123ms
          - GetResultsTime: 3.0us
          - LoadFactor: 0.00
          - MemoryUsed: 32.01 KB
          - RowsReturned: 1
          - RowsReturnedRate: 0
        HDFS_SCAN_NODE (id=0):(1m43s 99.99%)
          File Formats: TEXT/NONE:45
          Hdfs split stats (:<# splits>/): 0:45/2.81 GB
          - AverageHdfsReadThreadConcurrency: 0.05
          - HdfsReadThreadConcurrencyCountPercentage=0: 95.15
          - HdfsReadThreadConcurrencyCountPercentage=1: 4.85
          - HdfsReadThreadConcurrencyCountPercentage=2 through =41: 0.00
          - AverageScannerThreadConcurrency: 0.01
          - BytesRead: 2.81 GB
          - MemoryUsed: 0.00
          - NumDisksAccessed: 1
          - PerReadThreadRawHdfsThroughput: 485.12 MB/sec
          - RowsReturned: 428.45K (428449)
          - RowsReturnedRate: 4.14 K/sec
          - ScanRangesComplete: 45
          - ScannerThreadsInvoluntaryContextSwitches: 3
          - ScannerThreadsTotalWallClockTime: 37m58s
            - DelimiterParseTime: 1s197ms
            - MaterializeTupleTime: 311.0us
            - ScannerThreadsSysTime: 6.994ms
            - ScannerThreadsUserTime: 1s219ms
          - ScannerThreadsVoluntaryContextSwitches: 1.17K (1173)
          - TotalRawHdfsReadTime: 5s936ms
          - TotalReadThroughput: 27.82 MB/sec
      Instance 9148ac87180b4fed:b92602810a654bb6 (host=chas2t3endc03:22000):(1m48s 0.00%)
        Hdfs split stats (:<# splits>/): 0:53/3.31 GB
        - RowsProduced: 1
        CodeGen:
          - CodegenTime: 733.0us
          - CompileTime: 117.958ms
          - LoadTime: 8.740ms
          - ModuleFileSize: 70.02 KB
        DataStreamSender (dst_id=2):(296.0us 0.00%)
          - BytesSent: 16.00 B
          - NetworkThroughput: 68.53 KB/sec
          - OverallThroughput: 52.79 KB/sec
          - SerializeBatchTime: 8.0us
          - ThriftTransmitTime: 228.0us
          - UncompressedRowBatchSize: 16.00 B
        AGGREGATION_NODE (id=1):(1m48s 0.01%)
          - BuildBuckets: 1.02K (1024)
          - BuildTime: 3.137ms
          - GetResultsTime: 3.0us
          - LoadFactor: 0.00
          - MemoryUsed: 32.01 KB
          - RowsReturned: 1
          - RowsReturnedRate: 0
        HDFS_SCAN_NODE (id=0):(1m48s 99.99%)
          File Formats: TEXT/NONE:53
          Hdfs split stats (:<# splits>/): 0:53/3.31 GB
          - AverageHdfsReadThreadConcurrency: 0.07
          - HdfsReadThreadConcurrencyCountPercentage=0: 93.06
          - HdfsReadThreadConcurrencyCountPercentage=1: 6.94
          - HdfsReadThreadConcurrencyCountPercentage=2 through =35: 0.00
          - AverageScannerThreadConcurrency: 0.01
          - BytesRead: 3.31 GB
          - MemoryUsed: 0.00
          - NumDisksAccessed: 1
          - PerReadThreadRawHdfsThroughput: 473.49 MB/sec
          - RowsReturned: 461.38K (461381)
          - RowsReturnedRate: 4.25 K/sec
          - ScanRangesComplete: 53
          - ScannerThreadsInvoluntaryContextSwitches: 7
          - ScannerThreadsTotalWallClockTime: 49m30s
            - DelimiterParseTime: 1s407ms
            - MaterializeTupleTime: 369.0us
            - ScannerThreadsSysTime: 7.993ms
            - ScannerThreadsUserTime: 1s436ms
          - ScannerThreadsVoluntaryContextSwitches: 1.29K (1286)
          - TotalRawHdfsReadTime: 7s163ms
          - TotalReadThroughput: 31.25 MB/sec



  • Ramanujam at Apr 19, 2013 at 3:50 pm
    Hello All,

    Could you please let me know if there is any performance tuning or
    configuration I can play with to resolve this issue?

    Thanks,
    Ramanujam
  • Marcel Kornacker at Apr 19, 2013 at 8:03 pm
    Hi Ramanujam, sorry for the delay, but we're still investigating internally.
    On Fri, Apr 19, 2013 at 8:50 AM, Ramanujam wrote:

    Hello All,

    Could you please let me know if there is any performance tuning or
    configuration I can play with to resolve this issue?

    Thanks,
    Ramanujam

  • Nong Li at Apr 19, 2013 at 8:42 pm
    Hi Ramanujam,

    I looked at the runtime profile and here's the issue:

             HDFS_SCAN_NODE (id=0):(1m50s 99.99%)
               File Formats: TEXT/NONE:56
Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:56/3.50 GB

The block id metadata is incorrect here (I assume). We think all the
blocks are on one disk, and we also seem to think your system has a very
large number of disks. Did you set up the cluster with CM (Cloudera Manager)?


    Thanks

    Nong
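
A quick way to sanity-check this from a shell is sketched below; the warehouse
path is only an assumption, so substitute the real HDFS location of the
security_report table:

# 1. How many data directories does each datanode actually use?  The /varz
#    dump above shows only a single directory, /test01.
hdfs getconf -confKey dfs.datanode.data.dir

# 2. Which datanodes hold the table's blocks?  (The path is hypothetical.)
hdfs fsck /user/hive/warehouse/security_report -files -blocks -locations

With a single dfs.datanode.data.dir per node, every local block sits under one
volume, which would be consistent with the profile assigning all 56 splits to
volume id 0.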


  • Ramanujam at Apr 22, 2013 at 12:43 pm
    Hi Nong,

The cluster is a test cluster and I installed it using the RPM method. If I
need to reformat HDFS, I can do that; please let me know.

    Thanks,
    Ram
  • Alan Choi at Apr 23, 2013 at 5:26 pm
    Hi Ramanujam,

    We probably need more logging to diagnose the issue. Can you increase the
    logging level to 2? You can just pick one host and

    1. export GLOG_v=2
    2. restart impalad

    http://www.cloudera.com/content/cloudera-content/cloudera-docs/ImpalaBeta/0.7/Installing-and-Using-Impala/ciiu_topic_9_2.html
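
As a concrete sketch, on one of the RPM-installed nodes the two steps might
look like this (the /etc/default/impala file and the impala-server service
name are assumptions about the packaging, not something verified on this
cluster):

# Raise glog verbosity for impalad on this node only, then restart it.
echo 'export GLOG_v=2' | sudo tee -a /etc/default/impala
sudo service impala-server restart

# The verbose log is written to the usual location:
ls -lh /var/log/impala/impalad.INFO

If impalad is started by hand instead, prefixing the command with GLOG_v=2 has
the same effect.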

    If you can send us the new logs from that impalad, that would be great.

    Thanks,
    Alan

  • Ramanujam at Apr 23, 2013 at 7:18 pm
Hi Alan, here is the log file with log level 2:



       INFO logs Log path is: /var/log/impala/impalad.INFO
    Showing last 1048576 bytes of log

    Log file created at: 2013/04/23 15:09:23
    Running on machine: chadvt3endc02
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0423 15:09:23.836171 7124 daemon.cc:34] impalad version 0.7 RELEASE (build 62a2db93eb04c36e5becab5fdcaf06b53a839238)
    Built on Mon, 15 Apr 2013 08:27:38 PST
    I0423 15:09:23.836442 7124 daemon.cc:35] Using hostname: chadvt3endc02
    I0423 15:09:23.836776 7124 logging.cc:76] Flags (see also /varz are on debug webserver):
    --dump_ir=false
    --module_output=
    --be_port=22000
    --hostname=chadvt3endc02
    --keytab_file=
    --mem_limit=80%
    --planservice_host=localhost
    --planservice_port=20000
    --principal=
    --exchg_node_buffer_size_bytes=10485760
    --max_row_batches=0
    --randomize_splits=false
    --num_disks=0
    --num_threads_per_disk=1
    --read_size=8388608
    --enable_webserver=true
    --state_store_host=chadvt3endc02.ops.tiaa-cref.org
    --state_store_subscriber_port=23000
    --use_statestore=true
    --nn=
    --nn_port=0
    --serialize_batch=false
    --status_report_interval=5
    --compress_rowbatches=true
    --abort_on_config_error=true
    --be_service_threads=64
    --beeswax_port=21000
    --default_query_options=
    --fe_service_threads=64
    --heap_profile_dir=
    --hs2_port=21050
    --load_catalog_at_startup=false
    --log_mem_usage_interval=0
    --log_query_to_file=true
    --query_log_size=25
    --use_planservice=false
    --statestore_subscriber_timeout_seconds=10
    --state_store_port=24000
    --statestore_max_missed_heartbeats=5
    --statestore_num_heartbeat_threads=10
    --statestore_suspect_heartbeats=2
    --kerberos_reinit_interval=60
    --sasl_path=/usr/lib/sasl2:/usr/lib64/sasl2:/usr/local/lib/sasl2:/usr/lib/x86_64-linux-gnu/sasl2
    --web_log_bytes=1048576
    --log_filename=impalad
    --periodic_counter_update_period_ms=500
    --rpc_cnxn_attempts=10
    --rpc_cnxn_retry_interval_ms=2000
    --enable_webserver_doc_root=true
    --webserver_doc_root=/usr/lib/impala
    --webserver_interface=
    --webserver_port=25000
    --flagfile=
    --fromenv=
    --tryfromenv=
    --undefok=
    --tab_completion_columns=80
    --tab_completion_word=
    --help=false
    --helpfull=false
    --helpmatch=
    --helpon=
    --helppackage=false
    --helpshort=false
    --helpxml=false
    --version=false
    --alsologtoemail=
    --alsologtostderr=false
    --drop_log_memory=true
    --log_backtrace_at=
    --log_dir=/var/log/impala
    --log_link=
    --log_prefix=true
    --logbuflevel=0
    --logbufsecs=30
    --logemaillevel=999
    --logmailer=/bin/mail
    --logtostderr=false
    --max_log_size=1800
    --minloglevel=0
    --stderrthreshold=2
    --stop_logging_if_full_disk=false
    --symbolize_stacktrace=true
    --v=0
    --vmodule=
    I0423 15:09:23.837723 7124 mem-info.cc:66] Physical Memory: 47.04 GB
    I0423 15:09:23.837739 7124 daemon.cc:43] Cpu Info:
       Model: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
       Cores: 16
       L1 Cache: 0.00
       L2 Cache: 0.00
       L3 Cache: 0.00
       Hardware Supports:
         ssse3
         sse4_1
         sse4_2
    I0423 15:09:23.837746 7124 daemon.cc:44] Disk Info:
       Num disks 19: cciss/c0d, cciss/c0d0p, sda, sdb, sdc, sdd, sde, sdf, sdg, sdh, sdi, sdj, sdk, sdl, sdm, sdn, sdo, sdp, dm-
    I0423 15:09:23.837752 7124 daemon.cc:45] Mem Info: 47.04 GB
    I0423 15:09:26.968709 7124 impala-server.cc:1740] Default query options:TQueryOptions {
       01: abort_on_error (bool) = false,
       02: max_errors (i32) = 0,
       03: disable_codegen (bool) = false,
       04: batch_size (i32) = 0,
       05: num_nodes (i32) = 0,
       06: max_scan_range_length (i64) = 0,
       07: num_scanner_threads (i32) = 0,
       08: max_io_buffers (i32) = 0,
       09: allow_unsupported_formats (bool) = false,
       10: default_order_by_limit (i64) = -1,
       11: debug_action (string) = "",
       12: mem_limit (i64) = 0,
       13: abort_on_default_limit_exceeded (bool) = false,
    }
    I0423 15:09:35.170954 7124 impala-server.cc:1960] Read fs.defaultFS from Hadoop config: hdfs://chadvt3endc01:8020
    I0423 15:09:35.170986 7124 impala-server.cc:1972] Setting default name (-nn): chadvt3endc01
    I0423 15:09:35.170992 7124 impala-server.cc:1974] Setting default port (-nn_port): 8020
    I0423 15:09:35.171031 7124 impala-server.cc:2003] Impala Beeswax Service listening on 21000
    I0423 15:09:35.171058 7124 impala-server.cc:2014] Impala HiveServer2 Service listening on 21050
    I0423 15:09:35.171072 7124 impala-server.cc:2022] ImpalaInternalService listening on 22000
    I0423 15:09:35.171372 7124 thrift-server.cc:365] ThriftServer 'backend' started on port: 22000
    I0423 15:09:35.171383 7124 exec-env.cc:143] Starting global services
    I0423 15:09:35.171406 7124 exec-env.cc:164] Using global memory limit: 37.63 GB
    I0423 15:09:35.171421 7124 webserver.cc:118] Starting webserver on all interfaces, port 25000
    I0423 15:09:35.171468 7124 webserver.cc:128] Document root: /usr/lib/impala
    I0423 15:09:35.172153 7124 webserver.cc:167] Webserver started
    I0423 15:09:35.172184 7124 simple-scheduler.cc:98] Starting simple scheduler
    I0423 15:09:35.172198 7124 state-store-subscriber.cc:124] Starting subscriber
    I0423 15:09:35.172338 7124 thrift-server.cc:365] ThriftServer 'StateStoreSubscriber' started on port: 23000
    I0423 15:09:35.174425 7124 thrift-server.cc:365] ThriftServer 'beeswax-frontend' started on port: 21000
    I0423 15:09:35.175701 7124 thrift-server.cc:365] ThriftServer 'hiveServer2-frontend' started on port: 21050
    I0423 15:09:35.175710 7124 impalad-main.cc:101] Impala has started.
    I0423 15:10:15.216495 7617 exchange-node.cc:49] Exch id=2
    input_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0 null=(offset=0 mask=0))])

    output_desc=Tuple(id=1 size=8 slots=[Slot(id=0 type=BIGINT col=-1 offset=0 null=(offset=0 mask=0))])
    I0423 15:10:15.485615 8302 aggregation-node.cc:170] AggregationNode(node_id=1) using llvm codegend functions.
    I0423 15:10:15.491222 8303 aggregation-node.cc:170] AggregationNode(node_id=3) using llvm codegend functions.
I0423 15:13:25.593000 7617 impala-server.cc:1027] Query 6bfc9ede5ee64f69:a9fbbc0143815368 finished (1/1)












    On Friday, April 19, 2013 4:03:14 PM UTC-4, Marcel Kornacker wrote:

    Hi Ramanujam, sorry for the delay, but we're still investigating
    internally.

    On Fri, Apr 19, 2013 at 8:50 AM, Ramanujam <ranat...@tiaa-cref.org<javascript:>
    wrote:
    Hello All,

    Could you please let me know if there is any performance tuning or
    configuration I can play with to resolve this issue?

    Thanks,
    Ramanujam

    On Wednesday, April 17, 2013 10:05:52 AM UTC-4, Ramanujam wrote:

    Cluster Information:
    Total of 5 Nodes in the cluster - with CDH42 installed by RPM and
    impala beta .7 (latest)
    One node is namenode and another 4 node is datanode and TT
    Running on Redhat Linux version 8 HP blades with 48GB memory on each
    blade.
    Used internal disk for hdfs filesystem
    I can see that - all the four nodes all in the impala cluster using the
    following commands

    1.) http://chadvt3endc02.ops.tiaa-**cref.org:25000/backends<http://chadvt3endc02.ops.tiaa-cref.org:25000/backends>
    2.) At the bottom I have varz information
    3.) At the bottom I have query profile information from imapla
    4

    impala select statement takes: *111.35s
    Hive Select statement takes: 27 seconds

    I really do not know what I am doing wrong .....please help me
    *

    Hive - Stats:

    Hive history file=/tmp/nathamu/hive_job_**log_nathamu_201304170927_**
    428112587.txt
    hive> select count(*) from security_report;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
    set hive.exec.reducers.bytes.per.**reducer=<number>
    In order to limit the maximum number of reducers:
    set hive.exec.reducers.max=<**number>
    In order to set a constant number of reducers:
    set mapred.reduce.tasks=<number>
    Starting Job = job_201304150921_0005, Tracking URL =
    http://chadvt3endc01:50030/**jobdetails.jsp?jobid=job_**
    201304150921_0005<http://chadvt3endc01:50030/jobdetails.jsp?jobid=job_201304150921_0005>
    Kill Command = /usr/lib/hadoop/bin/hadoop job -kill
    job_201304150921_0005
    Hadoop job information for Stage-1: number of mappers: 52; number of
    reducers: 1
    2013-04-17 09:27:38,293 Stage-1 map = 0%, reduce = 0%
    2013-04-17 09:27:43,426 Stage-1 map = 10%, reduce = 0%, Cumulative CPU
    16.8 sec
    2013-04-17 09:27:44,449 Stage-1 map = 37%, reduce = 0%, Cumulative CPU
    60.81 sec
    2013-04-17 09:27:45,472 Stage-1 map = 62%, reduce = 0%, Cumulative CPU
    108.77 sec
    2013-04-17 09:27:46,491 Stage-1 map = 62%, reduce = 0%, Cumulative CPU
    108.77 sec
    2013-04-17 09:27:47,514 Stage-1 map = 62%, reduce = 0%, Cumulative CPU
    108.77 sec
    2013-04-17 09:27:48,538 Stage-1 map = 77%, reduce = 0%, Cumulative CPU
    131.68 sec
    2013-04-17 09:27:49,553 Stage-1 map = 94%, reduce = 0%, Cumulative CPU
    160.57 sec
    2013-04-17 09:27:50,568 Stage-1 map = 100%, reduce = 0%, Cumulative CPU
    172.5 sec
    2013-04-17 09:27:51,585 Stage-1 map = 100%, reduce = 0%, Cumulative CPU
    172.5 sec
    2013-04-17 09:27:52,600 Stage-1 map = 100%, reduce = 67%, Cumulative
    CPU 172.5 sec
    2013-04-17 09:27:53,615 Stage-1 map = 100%, reduce = 100%, Cumulative
    CPU 176.36 sec
    2013-04-17 09:27:54,635 Stage-1 map = 100%, reduce = 100%, Cumulative
    CPU 176.36 sec
    2013-04-17 09:27:55,650 Stage-1 map = 100%, reduce = 100%, Cumulative
    CPU 176.36 sec
    MapReduce Total cumulative CPU time: 2 minutes 56 seconds 360 msec
    Ended Job = job_201304150921_0005
    MapReduce Jobs Launched:
    Job 0: Map: 52 Reduce: 1 Cumulative CPU: 176.36 sec HDFS Read:
    13554671046 HDFS Write: 8 SUCCESS
    Total MapReduce CPU Time Spent: 2 minutes 56 seconds 360 msec
    OK
    *1645957
    Time taken: 26.526 seconds*
    hive>

    I*MPALA select statement:*
    [nathamu@chadvt3endc02 ~]$ impala-shell
    Connected to chadvt3endc02.ops.tiaa-cref.**org:21000<http://chadvt3endc02.ops.tiaa-cref.org:21000>
    Welcome to the Impala shell. Press TAB twice to see a list of available
    commands.

    Copyright (c) 2012 Cloudera, Inc. All rights reserved.

    (Build version: Impala v0.7 (62a2db9) built on Mon Apr 15 08:02:38 PDT
    2013)
    [chadvt3endc02.ops.tiaa-cref.**org:21000<http://chadvt3endc02.ops.tiaa-cref.org:21000>]
    select count(*) from security_report;
    Query: select count(*) from security_report
    Query finished, fetching results ...
    +----------+
    count(*) |
    +----------+
    1645956 |
    +-*---------+
    Returned 1 row(s) in 111.35s*
    [chadvt3endc02.ops.tiaa-cref.**org:21000<http://chadvt3endc02.ops.tiaa-cref.org:21000>]








    varz - information

    Impala <http://chadvt3endc02.ops.tiaa-cref.org:25000/>

    - / <http://chadvt3endc02.ops.tiaa-cref.org:25000/>
    - /backends <http://chadvt3endc02.ops.tiaa-cref.org:25000/backends>
    - /catalog <http://chadvt3endc02.ops.tiaa-cref.org:25000/catalog>
    - /logs <http://chadvt3endc02.ops.tiaa-cref.org:25000/logs>
    - /memz <http://chadvt3endc02.ops.tiaa-cref.org:25000/memz>
    - /metrics <http://chadvt3endc02.ops.tiaa-cref.org:25000/metrics>
    - /queries <http://chadvt3endc02.ops.tiaa-cref.org:25000/queries>
    - /sessions <http://chadvt3endc02.ops.tiaa-cref.org:25000/sessions>
    - /varz <http://chadvt3endc02.ops.tiaa-cref.org:25000/varz>

    Hadoop Configuration (core-default.xml, core-site.xml, mapred-default.xml,
    mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml,
    hdfs-site.xml):

    dfs.datanode.data.dir = /test01
    dfs.namenode.checkpoint.txns = 40000
    s3.replication = 3
    mapreduce.output.fileoutputformat.compress.type = RECORD
    mapreduce.jobtracker.jobhistory.lru.cache.size = 5
    dfs.datanode.failed.volumes.tolerated = 0
    hadoop.http.filter.initializers = org.apache.hadoop.http.lib.StaticUserWebFilter
    mapreduce.cluster.temp.dir = ${hadoop.tmp.dir}/mapred/temp
    mapreduce.reduce.shuffle.memory.limit.percent = 0.25
    yarn.nodemanager.keytab = /etc/krb5.keytab
    dfs.https.server.keystore.resource = ssl-server.xml
    mapreduce.reduce.skip.maxgroups = 0
    dfs.domain.socket.path = /var/run/hadoop-hdfs/dn._PORT
    hadoop.http.authentication.kerberos.keytab = ${user.home}/hadoop.keytab
    yarn.nodemanager.localizer.client.thread-count = 5
    ha.failover-controller.new-active.rpc-timeout.ms = 60000
    mapreduce.framework.name = local
    ha.health-monitor.check-interval.ms = 1000
    io.file.buffer.size = 4096
    dfs.namenode.checkpoint.period = 3600
    mapreduce.task.tmp.dir = ./tmp
    ipc.client.kill.max = 10
    yarn.resourcemanager.scheduler.class = org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler
    mapreduce.jobtracker.taskcache.levels = 2
    s3.stream-buffer-size = 4096
    dfs.namenode.secondary.http-address = 0.0.0.0:50090
    dfs.namenode.decommission.interval = 30
    dfs.namenode.http-address = 0.0.0.0:50070
    mapreduce.task.files.preserve.failedtasks = false
    dfs.encrypt.data.transfer = false
    dfs.datanode.address = 0.0.0.0:50010
    hadoop.http.authentication.token.validity = 36000
    hadoop.security.group.mapping.ldap.search.filter.group = (objectClass=group)
    dfs.client.failover.max.attempts = 15
    kfs.client-write-packet-size = 65536
    yarn.admin.acl = *
    yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs = 86400
    dfs.client.failover.connection.retries.on.timeouts = 0
    mapreduce.map.sort.spill.percent = 0.80
    file.stream-buffer-size = 4096
    dfs.webhdfs.enabled = false
    ipc.client.connection.maxidletime = 10000
    mapreduce.jobtracker.persist.jobstatus.hours = 1
    dfs.datanode.ipc.address = 0.0.0.0:50020
    yarn.nodemanager.address = 0.0.0.0:0
    yarn.app.mapreduce.am.job.task.listener.thread-count = 30
    dfs.client.read.shortcircuit = true
    dfs.namenode.safemode.extension = 30000
    ha.zookeeper.parent-znode = /hadoop-ha
    yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
    io.skip.checksum.errors = false
    yarn.resourcemanager.scheduler.client.thread-count = 50
    hadoop.http.authentication.kerberos.principal = HTTP/_HOST@LOCALHOST
    mapreduce.reduce.log.level = INFO
    fs.s3.maxRetries = 4
    hadoop.kerberos.kinit.command = kinit
    yarn.nodemanager.process-kill-wait.ms = 2000
    dfs.namenode.name.dir.restore = false
    mapreduce.jobtracker.handler.count = 10
    yarn.app.mapreduce.client-am.ipc.max-retries = 1
    dfs.client.use.datanode.hostname = false
    hadoop.util.hash.type = murmur
    io.seqfile.lazydecompress = true
    dfs.datanode.dns.interface = default
    yarn.nodemanager.disk-health-checker.min-healthy-disks = 0.25
    mapreduce.job.maxtaskfailures.per.tracker = 3
    mapreduce.tasktracker.healthchecker.script.timeout = 600000
    hadoop.security.group.mapping.ldap.search.attr.group.name = cn
    fs.df.interval = 60000
    dfs.namenode.kerberos.internal.spnego.principal = ${dfs.web.authentication.kerberos.principal}
    mapreduce.job.reduce.shuffle.consumer.plugin.class = org.apache.hadoop.mapreduce.task.reduce.Shuffle
    mapreduce.jobtracker.address = chadvt3endc01:54311
    mapreduce.tasktrac...
  • Ramanujam at Apr 24, 2013 at 2:27 pm
    Hi Mike,

    I tried count(1); the issue is still there. Please see below, using both
    Impala and Hive.


    [chadvt3endc02.ops.tiaa-cref.org:21000] > select count(1) from
    security_report;
    Query: select count(1) from security_report
    Query finished, fetching results ...
    +----------+
    | count(1) |
    +----------+
    | 1645956  |
    +----------+
    Returned 1 row(s) in 111.35s


    [nathamu@chadvt3endc02 ~]$ hive
    Logging initialized using configuration in
    jar:file:/usr/lib/hive/lib/hive-common-0.10.0-cdh4.2.0.jar!/hive-log4j.properties
    Hive history
    file=/tmp/nathamu/hive_job_log_nathamu_201304241024_544799096.txt
    hive>
    select count(1) from security_report;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
       set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
       set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
       set mapred.reduce.tasks=<number>
    Starting Job = job_201304191250_0015, Tracking URL =
    http://chadvt3endc01:50030/jobdetails.jsp?jobid=job_201304191250_0015
    Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201304191250_0015
    Hadoop job information for Stage-1: number of mappers: 53; number of
    reducers: 1
    2013-04-24 10:24:52,047 Stage-1 map = 0%, reduce = 0%
    2013-04-24 10:24:59,183 Stage-1 map = 43%, reduce = 0%, Cumulative CPU
    99.06 sec
    2013-04-24 10:25:00,202 Stage-1 map = 45%, reduce = 0%, Cumulative CPU
    103.78 sec
    2013-04-24 10:25:01,214 Stage-1 map = 45%, reduce = 0%, Cumulative CPU
    103.78 sec
    2013-04-24 10:25:02,292 Stage-1 map = 45%, reduce = 0%, Cumulative CPU
    103.78 sec
    2013-04-24 10:25:03,305 Stage-1 map = 60%, reduce = 0%, Cumulative CPU
    138.05 sec
    2013-04-24 10:25:04,317 Stage-1 map = 66%, reduce = 0%, Cumulative CPU
    147.73 sec
    2013-04-24 10:25:05,331 Stage-1 map = 85%, reduce = 0%, Cumulative CPU
    188.17 sec
    2013-04-24 10:25:06,345 Stage-1 map = 98%, reduce = 0%, Cumulative CPU
    222.11 sec
    2013-04-24 10:25:07,361 Stage-1 map = 98%, reduce = 0%, Cumulative CPU
    222.11 sec
    2013-04-24 10:25:08,375 Stage-1 map = 100%, reduce = 33%, Cumulative CPU
    226.13 sec
    2013-04-24 10:25:09,389 Stage-1 map = 100%, reduce = 33%, Cumulative CPU
    226.13 sec
    2013-04-24 10:25:10,403 Stage-1 map = 100%, reduce = 100%, Cumulative CPU
    230.05 sec
    2013-04-24 10:25:11,417 Stage-1 map = 100%, reduce = 100%, Cumulative CPU
    230.05 sec
    2013-04-24 10:25:12,431 Stage-1 map = 100%, reduce = 100%, Cumulative CPU
    230.05 sec
    2013-04-24 10:25:13,445 Stage-1 map = 100%, reduce = 100%, Cumulative CPU
    230.05 sec
    MapReduce Total cumulative CPU time: 3 minutes 50 seconds 50 msec
    Ended Job = job_201304191250_0015
    MapReduce Jobs Launched:
    Job 0: Map: 53 Reduce: 1 Cumulative CPU: 230.05 sec HDFS Read:
    13554671183 HDFS Write: 8 SUCCESS
    Total MapReduce CPU Time Spent: 3 minutes 50 seconds 50 msec
    OK
    1645957
    Time taken: 39.418 seconds
    hive>
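    For reference, the reducer settings that the Hive log prints can be applied from
    the hive> prompt before re-running the query. A minimal sketch with purely
    illustrative values (only the property names come from the log above):

        set hive.exec.reducers.bytes.per.reducer=1000000000;
        set hive.exec.reducers.max=4;
        set mapred.reduce.tasks=1;
        select count(1) from security_report;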





    On Wednesday, April 24, 2013 9:30:57 AM UTC-4, Michael Naumov wrote:

    Hi,
    I think you can solve your issues by running
    select count(1) from security_report

    instead of count(*)

    Let me know if it's ok
    Thanks
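    The two timings above can also be reproduced non-interactively, which makes the
    count(*) vs. count(1) comparison easy to repeat. A minimal sketch using the
    standard -q and -e flags of impala-shell and hive (table name as in the thread):

        time impala-shell -q 'select count(1) from security_report'
        time hive -e 'select count(1) from security_report'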
  • Ramanujam at May 20, 2013 at 7:57 pm
    On Wednesday, April 17, 2013 10:05:52 AM UTC-4, Ramanujam wrote:
    Cluster Information:
    Total of 5 Nodes in the cluster - with CDH42 installed by RPM and impala
    beta .7 (latest)
    One node is namenode and another 4 node is datanode and TT
    Running on Redhat Linux version 8 HP blades with 48GB memory on each
    blade.
    Used internal disk for hdfs filesystem
    I can see that - all the four nodes all in the impala cluster using the
    following commands

    1.) http://chadvt3endc02.ops.tiaa-cref.org:25000/backends
    2.) At the bottom I have varz information
    3.) At the bottom I have query profile information from imapla
    4

    impala select statement takes: 111.35s
    Hive Select statement takes: 27 seconds

    I really do not know what I am doing wrong .....please help me

    Hive - Stats:

    Hive history
    file=/tmp/nathamu/hive_job_log_nathamu_201304170927_428112587.txt
    hive> select count(*) from security_report;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    Number of reduce tasks determined at compile time: 1
    In order to change the average load for a reducer (in bytes):
    set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
    set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
    set mapred.reduce.tasks=<number>
    Starting Job = job_201304150921_0005, Tracking URL =
    http://chadvt3endc01:50030/jobdetails.jsp?jobid=job_201304150921_0005
    Kill Command = /usr/lib/hadoop/bin/hadoop job -kill job_201304150921_0005
    Hadoop job information for Stage-1: number of mappers: 52; number of
    reducers: 1
    2013-04-17 09:27:38,293 Stage-1 map = 0%, reduce = 0%
    2013-04-17 09:27:43,426 Stage-1 map = 10%, reduce = 0%, Cumulative CPU
    16.8 sec
    2013-04-17 09:27:44,449 Stage-1 map = 37%, reduce = 0%, Cumulative CPU
    60.81 sec
    2013-04-17 09:27:45,472 Stage-1 map = 62%, reduce = 0%, Cumulative CPU
    108.77 sec
    2013-04-17 09:27:46,491 Stage-1 map = 62%, reduce = 0%, Cumulative CPU
    108.77 sec
    2013-04-17 09:27:47,514 Stage-1 map = 62%, reduce = 0%, Cumulative CPU
    108.77 sec
    2013-04-17 09:27:48,538 Stage-1 map = 77%, reduce = 0%, Cumulative CPU
    131.68 sec
    2013-04-17 09:27:49,553 Stage-1 map = 94%, reduce = 0%, Cumulative CPU
    160.57 sec
    2013-04-17 09:27:50,568 Stage-1 map = 100%, reduce = 0%, Cumulative CPU
    172.5 sec
    2013-04-17 09:27:51,585 Stage-1 map = 100%, reduce = 0%, Cumulative CPU
    172.5 sec
    2013-04-17 09:27:52,600 Stage-1 map = 100%, reduce = 67%, Cumulative CPU
    172.5 sec
    2013-04-17 09:27:53,615 Stage-1 map = 100%, reduce = 100%, Cumulative CPU
    176.36 sec
    2013-04-17 09:27:54,635 Stage-1 map = 100%, reduce = 100%, Cumulative CPU
    176.36 sec
    2013-04-17 09:27:55,650 Stage-1 map = 100%, reduce = 100%, Cumulative CPU
    176.36 sec
    MapReduce Total cumulative CPU time: 2 minutes 56 seconds 360 msec
    Ended Job = job_201304150921_0005
    MapReduce Jobs Launched:
    Job 0: Map: 52 Reduce: 1 Cumulative CPU: 176.36 sec HDFS Read:
    13554671046 HDFS Write: 8 SUCCESS
    Total MapReduce CPU Time Spent: 2 minutes 56 seconds 360 msec
    OK
    1645957
    Time taken: 26.526 seconds
    hive>

    IMPALA select statement:
    [nathamu@chadvt3endc02 ~]$ impala-shell
    Connected to chadvt3endc02.ops.tiaa-cref.org:21000
    Welcome to the Impala shell. Press TAB twice to see a list of available
    commands.

    Copyright (c) 2012 Cloudera, Inc. All rights reserved.

    (Build version: Impala v0.7 (62a2db9) built on Mon Apr 15 08:02:38 PDT
    2013)
    [chadvt3endc02.ops.tiaa-cref.org:21000] > select count(*) from
    security_report;
    Query: select count(*) from security_report
    Query finished, fetching results ...
    +----------+
    | count(*) |
    +----------+
    | 1645956  |
    +----------+
    Returned 1 row(s) in 111.35s
    [chadvt3endc02.ops.tiaa-cref.org:21000] >









    varz - information

    Impala <http://chadvt3endc02.ops.tiaa-cref.org:25000/>

    - / <http://chadvt3endc02.ops.tiaa-cref.org:25000/>
    - /backends <http://chadvt3endc02.ops.tiaa-cref.org:25000/backends>
    - /catalog <http://chadvt3endc02.ops.tiaa-cref.org:25000/catalog>
    - /logs <http://chadvt3endc02.ops.tiaa-cref.org:25000/logs>
    - /memz <http://chadvt3endc02.ops.tiaa-cref.org:25000/memz>
    - /metrics <http://chadvt3endc02.ops.tiaa-cref.org:25000/metrics>
    - /queries <http://chadvt3endc02.ops.tiaa-cref.org:25000/queries>
    - /sessions <http://chadvt3endc02.ops.tiaa-cref.org:25000/sessions>
    - /varz <http://chadvt3endc02.ops.tiaa-cref.org:25000/varz>

    Hadoop Configuration (core-default.xml, core-site.xml, mapred-default.xml,
    mapred-site.xml, yarn-default.xml, yarn-site.xml, hdfs-default.xml,
    hdfs-site.xml):

    dfs.datanode.data.dir = /test01
    dfs.namenode.checkpoint.txns = 40000
    s3.replication = 3
    mapreduce.output.fileoutputformat.compress.type = RECORD
    mapreduce.jobtracker.jobhistory.lru.cache.size = 5
    dfs.datanode.failed.volumes.tolerated = 0
    hadoop.http.filter.initializers = org.apache.hadoop.http.lib.StaticUserWebFilter
    mapreduce.cluster.temp.dir = ${hadoop.tmp.dir}/mapred/temp
    mapreduce.reduce.shuffle.memory.limit.percent = 0.25
    yarn.nodemanager.keytab = /etc/krb5.keytab
    dfs.https.server.keystore.resource = ssl-server.xml
    mapreduce.reduce.skip.maxgroups = 0
    dfs.domain.socket.path = /var/run/hadoop-hdfs/dn._PORT
    hadoop.http.authentication.kerberos.keytab = ${user.home}/hadoop.keytab
    yarn.nodemanager.localizer.client.thread-count = 5
    ha.failover-controller.new-active.rpc-timeout.ms = 60000
    mapreduce.framework.name = local
    ha.health-monitor.check-interval.ms = 1000
    io.file.buffer.size = 4096
    dfs.namenode.checkpoint.period = 3600
    mapreduce.task.tmp.dir = ./tmp
    ipc.client.kill.max = 10
    yarn.resourcemanager.scheduler.class = org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler
    mapreduce.jobtracker.taskcache.levels = 2
    s3.stream-buffer-size = 4096
    dfs.namenode.secondary.http-address = 0.0.0.0:50090
    dfs.namenode.decommission.interval = 30
    dfs.namenode.http-address = 0.0.0.0:50070
    mapreduce.task.files.preserve.failedtasks = false
    dfs.encrypt.data.transfer = false
    dfs.datanode.address = 0.0.0.0:50010
    hadoop.http.authentication.token.validity = 36000
    hadoop.security.group.mapping.ldap.search.filter.group = (objectClass=group)
    dfs.client.failover.max.attempts = 15
    kfs.client-write-packet-size = 65536
    yarn.admin.acl = *
    yarn.resourcemanager.application-tokens.master-key-rolling-interval-secs = 86400
    dfs.client.failover.connection.retries.on.timeouts = 0
    mapreduce.map.sort.spill.percent = 0.80
    file.stream-buffer-size = 4096
    dfs.webhdfs.enabled = false
    ipc.client.connection.maxidletime = 10000
    mapreduce.jobtracker.persist.jobstatus.hours = 1
    dfs.datanode.ipc.address = 0.0.0.0:50020
    yarn.nodemanager.address = 0.0.0.0:0
    yarn.app.mapreduce.am.job.task.listener.thread-count = 30
    dfs.client.read.shortcircuit = true
    dfs.namenode.safemode.extension = 30000
    ha.zookeeper.parent-znode = /hadoop-ha
    yarn.nodemanager.container-executor.class = org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor
    io.skip.checksum.errors = false
    yarn.resourcemanager.scheduler.client.thread-count = 50
    hadoop.http.authentication.kerberos.principal = HTTP/_HOST@LOCALHOST
    mapreduce.reduce.log.level = INFO
    fs.s3.maxRetries = 4
    hadoop.kerberos.kinit.command = kinit
    yarn.nodemanager.process-kill-wait.ms = 2000
    dfs.namenode.name.dir.restore = false
    mapreduce.jobtracker.handler.count = 10
    yarn.app.mapreduce.client-am.ipc.max-retries = 1
    dfs.client.use.datanode.hostname = false
    hadoop.util.hash.type = murmur
    io.seqfile.lazydecompress = true
    dfs.datanode.dns.interface = default
    yarn.nodemanager.disk-health-checker.min-healthy-disks = 0.25
    mapreduce.job.maxtaskfailures.per.tracker = 3
    mapreduce.tasktracker.healthchecker.script.timeout = 600000
    hadoop.security.group.mapping.ldap.search.attr.group.name = cn
    fs.df.interval = 60000
    dfs.namenode.kerberos.internal.spnego.principal = ${dfs.web.authentication.kerberos.principal}
    mapreduce.job.reduce.shuffle.consumer.plugin.class = org.apache.hadoop.mapreduce.task.reduce.Shuffle
    mapreduce.jobtracker.address = chadvt3endc01:54311
    mapreduce.tasktrac...
