+scm-users

Did you let the wizard create your cluster for you and do all the necessary
configuration steps? It will have created /tmp with 1777 permissions, which is
necessary for applications like MapReduce to use it properly. You could try
running the "Create Tmp Dir" command provided by HDFS, and you can also look
at the permissions on /tmp and fix them if they are incorrect.
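
For example, something like the following (an untested sketch; it assumes the
'hdfs' user is your HDFS superuser, which is the usual arrangement) would
check /tmp and fix it up from a shell:

sudo -u hdfs hadoop fs -ls /             # check whether /tmp exists and what mode it has
sudo -u hdfs hadoop fs -mkdir /tmp       # only needed if /tmp is missing
sudo -u hdfs hadoop fs -chmod 1777 /tmp  # world-writable with the sticky bit (drwxrwxrwt)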

--phil


On 1 July 2012 20:17, Rupesh Chakrala wrote:

I also noticed the following in
/var/log/hadoop-0.20-mapreduce/hadoop-cmf-mapreduce1-JOBTRACKER-hadoopjt.log.out:



2012-07-01 19:00:52,306 INFO org.apache.hadoop.mapred.JobTracker: Starting jobtracker with owner as mapred
2012-07-01 19:00:52,326 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 8021
2012-07-01 19:00:52,342 WARN org.apache.hadoop.ipc.RPC: Interface interface org.apache.hadoop.mapred.TaskTrackerManager ignored because it does not extend VersionedProtocol
2012-07-01 19:00:52,342 WARN org.apache.hadoop.ipc.RPC: Interface interface org.apache.hadoop.security.authorize.RefreshAuthorizationPolicyProtocol ignored because it does not extend VersionedProtocol
2012-07-01 19:00:52,368 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2012-07-01 19:00:52,401 INFO org.apache.hadoop.http.HttpServer: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2012-07-01 19:00:52,402 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context WepAppsContext
2012-07-01 19:00:52,402 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2012-07-01 19:00:52,402 INFO org.apache.hadoop.http.HttpServer: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2012-07-01 19:00:52,469 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50030
2012-07-01 19:00:52,469 INFO org.mortbay.log: jetty-6.1.26.cloudera.1
2012-07-01 19:00:52,636 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50030
2012-07-01 19:00:52,656 WARN org.apache.hadoop.conf.Configuration: session.id is deprecated. Instead, use dfs.metrics.session-id
2012-07-01 19:00:52,657 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
2012-07-01 19:00:52,662 INFO org.apache.hadoop.mapred.JobTracker: JobTracker up at: 8021
2012-07-01 19:00:52,662 INFO org.apache.hadoop.mapred.JobTracker: JobTracker webserver: 50030
2012-07-01 19:00:52,869 INFO org.apache.hadoop.mapred.JobTracker: Creating the system directory
2012-07-01 19:00:52,884 WARN org.apache.hadoop.mapred.JobTracker: Failed to operate on mapred.system.dir (hdfs://hadoopnn:8020/tmp/mapred/system) because of permissions.
2012-07-01 19:00:52,884 WARN org.apache.hadoop.mapred.JobTracker: This directory should be owned by the user 'mapred (auth:SIMPLE)'
2012-07-01 19:00:52,884 WARN org.apache.hadoop.mapred.JobTracker: Bailing out ...
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:205)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:186)
        at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:135)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:4265)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:4236)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:2628)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:2592)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:638)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:412)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:42618)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:427)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:916)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1692)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1688)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1232)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1686)

        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
        at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:90)
        at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:57)
        at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:1730)
        at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:482)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:1731)
        at org.apache.hadoop.fs.FileSystem.mkdirs(FileSystem.java:503)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2284)
        at org.apache.hadoop.mapred.JobTracker.<init>(JobTracker.java:2053)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:294)
        at org.apache.hadoop.mapred.JobTracker.startTracker(JobTracker.java:286)
        at org.apache.hadoop.mapred.JobTracker.main(JobTracker.java:4799)
Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x


Could you please let me know how to fix this permissions issue? I am not able
to start the JobTracker because of it.


Thanks,
Rupesh



On Sun, Jul 1, 2012 at 7:07 PM, Rupesh Chakrala wrote:

Hello,

I have just installed Cloudera Hadoop, and after running a job the map phase
gets stuck at 0%. When I tried to restart the services from Cloudera Manager,
I was unable to start the JobTracker.

Here is what I get when I try to start JobTracker service:

Supervisor returned FATAL:

++ find /usr/share/cmf/lib/plugins -name 'event-publish-*.jar'
++ tr -d '\n'
+ ADD_TO_CP=/usr/share/cmf/lib/plugins/event-publish-4.0.2-shaded.jar
+ eval 'OLD_VALUE=$HADOOP_CLASSPATH'
++ OLD_VALUE='/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*::'
+ '[' -z '/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*::' ']'
+ export 'HADOOP_CLASSPATH=/usr/share/cmf/lib/plugins/event-publish-4.0.2-shaded.jar:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*::'
+ HADOOP_CLASSPATH='/usr/share/cmf/lib/plugins/event-publish-4.0.2-shaded.jar:/usr/lib/hadoop-hdfs/lib/*:/usr/lib/hadoop-hdfs/*:/usr/lib/hadoop/lib/*:/usr/lib/hadoop/*::'
+ set -x
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER#g' /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/capacity-scheduler.xml /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/core-site.xml /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/fair-scheduler.xml /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/hdfs-site.xml /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/mapred-queue-acls.xml /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/mapred-site.xml
+ '[' -e /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/topology.py ']'
+ perl -pi -e 's#{{CMF_CONF_DIR}}#/var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER#g' /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/topology.py
+ chmod +x /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER/topology.py
+ export 'HADOOP_OPTS=-Djava.net.preferIPv4Stack=true '
+ HADOOP_OPTS='-Djava.net.preferIPv4Stack=true '
+ acquire_kerberos_tgt mapred.keytab
+ '[' -z mapred.keytab ']'
+ '[' -n '' ']'
+ '[' jobtracker = jobtracker ']'
+ '[' z = ztrue ']'
+ exec /usr/lib/hadoop-0.20-mapreduce/bin/hadoop --config /var/run/cloudera-scm-agent/process/63-mapreduce-JOBTRACKER jobtracker



Any idea as to what is wrong, or where I am going wrong?


Thank you!

Regards,
Rupesh


  • Rupesh Chakrala at Jul 2, 2012 at 9:31 pm
    Hi Phil,

    This has been fixed. For some reason /tmp was not created, so I followed
    the link below, created the /tmp directory, and assigned the proper
    permissions. It is now working fine.

    https://ccp.cloudera.com/display/CDHDOC/CDH3+Deployment+on+a+Cluster#CDH3DeploymentonaCluster-Step3%3ACreateandConfigurethe%7B%7Bmapred.system.dir%7D%7DDirectoryinHDFS
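
    For anyone else hitting this, the commands that step describes amount to
    something like the following (an untested sketch; it assumes the 'hdfs'
    user is the HDFS superuser and that mapred.system.dir is
    /tmp/mapred/system, as in the log above):

    sudo -u hdfs hadoop fs -mkdir /tmp                        # only needed if /tmp is missing
    sudo -u hdfs hadoop fs -chmod 1777 /tmp                   # world-writable with the sticky bit
    sudo -u hdfs hadoop fs -mkdir /tmp/mapred/system          # the mapred.system.dir itself
    sudo -u hdfs hadoop fs -chown mapred /tmp/mapred/system   # must be owned by the 'mapred' user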

    However, I now have an issue with the Beeswax service, which fails to
    start. I see the following in the log file, /var/log/hue/beeswax_server.out:
    Caused by: java.sql.SQLException: Failed to create database '/var/lib/hive/metastore/metastore_db'

    Whereas ls -l shows the following:
    [root@hadoope1 ~]# ls -l /var/lib/hive
    total 8
    drwxrwxrwt 3 hue hue 4096 Jul 1 04:10 hue_beeswax_metastore
    drwxr-xr-x 2 hive hive 4096 Jun 4 20:53 metastore

    So should I change the metastore directory to be owned by hue instead of
    hive? Will this fix my issue without creating any other Hive issues? :)
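
    One alternative I am considering (untested) is to keep hive as the owner
    and give hue write access through the group instead, along these lines:

    usermod -a -G hive hue                 # add the hue user to the hive group
    chmod -R g+w /var/lib/hive/metastore   # make the Derby metastore directory group-writable
    # then restart the Beeswax service so the new group membership takes effect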

    Thanks in advance!

    Regards,
    Rupesh
    On Mon, Jul 2, 2012 at 12:45 PM, Philip Langdale wrote:

    [quoted text of the earlier messages snipped]
