FAQ
1. I have enabled Kerberos security in Cloudera Manager using the steps
mentioned at
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/Configuring-Hadoop-Security-with-Cloudera-Manager.html

2. Deployed the client configuration

3. Added missing entries in *-site.xml files

4. Also created hdfs.keytab, mapred.keytab, hive.keytab, and cmf.keytab and
copied them to the respective locations in /etc/hadoop/conf, /etc/hive/conf, and
/etc/cloudera-scm-server/


5. When I tried to start the services, the HDFS service would not start.

6. Does Cloudera Manager generate keytabs at run time? Who generates
/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab?

7. I have seen a number of keytabs generated in the /var/run folder

         at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:825)
         at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:279)
         at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:243)
         at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1726)
         at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
         at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
         at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:135)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:188)
Caused by: javax.security.auth.login.LoginException: Checksum failed
         at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:763)
         at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:606)
         at javax.security.auth.login.LoginContext.invoke(LoginContext.java:784)
         at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
         at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
         at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
         at java.security.AccessController.doPrivileged(Native Method)
         at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:718)
         at javax.security.auth.login.LoginContext.login(LoginContext.java:590)
         at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:816)
         ... 11 more
Caused by: KrbException: Checksum failed
         at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:102)
         at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:94)
         at sun.security.krb5.EncryptedData.decrypt(EncryptedData.java:177)
         at sun.security.krb5.KrbAsRep.decrypt(KrbAsRep.java:149)
         at sun.security.krb5.KrbAsRep.decryptUsingKeyTab(KrbAsRep.java:121)
         at sun.security.krb5.KrbAsReqBuilder.resolve(KrbAsReqBuilder.java:288)
         at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:364)
         at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:735)
         ... 24 more
Caused by: java.security.GeneralSecurityException: Checksum failed
         at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptCTS(AesDkCrypto.java:451)
         at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decrypt(AesDkCrypto.java:272)
         at sun.security.krb5.internal.crypto.Aes128.decrypt(Aes128.java:76)
         at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:100)
         ... 31 more



+ export KRB5CCNAME=/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105
+ KRB5CCNAME=/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105
+ echo 'using hdfs/blr11.ips.com@IPS.COM as Kerberos principal'
+ echo 'using /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105 as Kerberos ticket cache'
+ kinit -c /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105 -kt /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab hdfs/blr11.ips.com@IPS.COM
kinit(v5): Password incorrect while getting initial credentials
+ '[' 1 -ne 0 ']'
+ echo 'kinit was not successful.'
+ exit 1


[hdfs@blr11 var]$ klist -e -k -t /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab
Keytab name: FILE:/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
   3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
   5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
   5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
[hdfs@blr11 var]$ klist -e -k -t /etc/hadoop/conf/hdfs.keytab
Keytab name: FILE:/etc/hadoop/conf/hdfs.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
   3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
   3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
   3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
   5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
   5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
   5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
   5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
   5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
   5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
   5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
   5 12/18/13 00:34:54 HTTP/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
   5 12/18/13 00:34:54 HTTP/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
[hdfs@blr11 var]$

To unsubscribe from this group and stop receiving emails from it, send an email to scm-users+unsubscribe@cloudera.org.


  • Harsh J at Dec 18, 2013 at 11:36 am
    Your issue is not CDH-related; it is a Kerberos misconfiguration.

    It is likely that you've missed this specific step:
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cmchs_JCE_policy_s4.html
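
    The "Checksum failed" on an AES etype is the classic symptom of the JDK
    lacking the unlimited-strength JCE policy files that the step above
    installs, while the KDC issued AES-256 keys. As a quick sketch (class and
    variable names here are my own, not from the thread), you can probe a
    given JDK with the standard `Cipher.getMaxAllowedKeyLength` call:

    ```java
    import javax.crypto.Cipher;

    // With only the default (limited) JCE policy, AES is capped at 128 bits;
    // with the unlimited-strength policy installed, getMaxAllowedKeyLength
    // reports Integer.MAX_VALUE. AES-256 keytab entries decrypt only in the
    // latter case.
    public class JcePolicyCheck {
        public static void main(String[] args) throws Exception {
            int max = Cipher.getMaxAllowedKeyLength("AES");
            System.out.println("max AES key length: " + max);
            System.out.println(max >= 256
                ? "AES-256 available (unlimited policy installed)"
                : "AES-256 NOT available (limited policy)");
        }
    }
    ```

    Run this with the exact JDK the Hadoop daemons use; a result below 256 on
    that JDK would explain the login failure.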

    On Wed, Dec 18, 2013 at 1:23 PM, Suresh Tirumalasetti wrote:
    [quoted original message elided]


    --
    Harsh J

  • Harsh J at Dec 18, 2013 at 11:37 am
    Also, as to 'who' generates the keytabs: Cloudera Manager does, after
    you give it an administrative keytab and principal.

    On Wed, Dec 18, 2013 at 5:06 PM, Harsh J wrote:
    [quoted reply elided]


    --
    Harsh J

  • Harsh J at Dec 19, 2013 at 4:24 am
    +scm-users@ (sorry for posting to the wrong list)

    On Thu, Dec 19, 2013 at 9:19 AM, Harsh J wrote:

    Please follow the security guide carefully. It lists all the required
    configuration and caveats, including the min.user.id issue you are
    facing here:
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_prep_for_users_s17.html

    Note that it is also atypical, and not recommended, to run jobs (or
    allow jobs to run) as the 'hdfs' or 'mapred' users, since these are
    administrative superusers in the environment. In secure mode you should
    not use them unless you are performing a clearly administrative command.
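
    The "Requested user hdfs has id 105" error quoted below comes from the
    LinuxTaskController refusing to launch tasks for system accounts. As an
    illustrative sketch only (the values shown are common defaults, not
    taken from this cluster), the threshold is the min.user.id key in the
    taskcontroller.cfg that the log itself names; the preferred fix is to
    submit jobs as a regular user whose uid is >= 1000 rather than to lower
    the threshold:

    ```
    # /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg (illustrative)
    # Tasks are refused for any user whose numeric uid is below this value.
    min.user.id=1000
    # Users that may never run tasks, regardless of uid.
    banned.users=mapred,hdfs,bin
    ```

    Under Cloudera Manager this file is regenerated by the agent, so the
    value should be changed through the corresponding TaskTracker setting
    in CM rather than by hand-editing the file.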


    On Thu, Dec 19, 2013 at 8:09 AM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    Any idea how to avoid the following errors/exceptions?

    [root@ipshyd84 run]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 10000
    Number of Maps = 10
    Samples per Map = 10000
    Wrote input for Map #0
    Wrote input for Map #1
    Wrote input for Map #2
    Wrote input for Map #3
    Wrote input for Map #4
    Wrote input for Map #5
    Wrote input for Map #6
    Wrote input for Map #7
    Wrote input for Map #8
    Wrote input for Map #9
    Starting Job
    13/12/19 08:10:26 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    13/12/19 08:10:26 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 3 for hdfs on 9.184.184.94:8020
    13/12/19 08:10:26 INFO security.TokenCache: Got dt for hdfs://ipshyd84.in.ibm.com:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 9.184.184.94:8020, Ident: (HDFS_DELEGATION_TOKEN token 3 for hdfs)
    13/12/19 08:10:26 INFO mapred.FileInputFormat: Total input paths to process : 10
    13/12/19 08:10:28 INFO mapred.JobClient: Running job: job_201312190800_0001
    13/12/19 08:10:29 INFO mapred.JobClient: map 0% reduce 0%
    13/12/19 08:10:31 INFO mapred.JobClient: Task Id : attempt_201312190800_0001_m_000011_0, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000011_0:
    java.io.IOException: Job initialization failed (255) with output: Reading task controller config from /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:31 WARN mapred.JobClient: Error reading task outputhttp://ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_0&filter=stdout
    13/12/19 08:10:31 WARN mapred.JobClient: Error reading task outputhttp://ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_0&filter=stderr
    13/12/19 08:10:32 INFO mapred.JobClient: Task Id :
    attempt_201312190800_0001_m_000011_1, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000011_1:
    java.io.IOException: Job initialization failed (255) with output: Reading
    task controller config from
    /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at
    org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at
    org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at
    org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at
    org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at
    org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at
    org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:32 WARN mapred.JobClient: Error reading task outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_1&filter=stdout
    13/12/19 08:10:32 WARN mapred.JobClient: Error reading task outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_1&filter=stderr
    13/12/19 08:10:32 INFO mapred.JobClient: Task Id :
    attempt_201312190800_0001_m_000011_2, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000011_2:
    java.io.IOException: Job initialization failed (255) with output: Reading
    task controller config from
    /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at
    org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at
    org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at
    org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at
    org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at
    org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at
    org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:32 WARN mapred.JobClient: Error reading task outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_2&filter=stdout
    13/12/19 08:10:32 WARN mapred.JobClient: Error reading task outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_2&filter=stderr
    [attempts attempt_201312190800_0001_m_000010_0, _1 and _2 each failed with the identical "Requested user hdfs has id 105, which is below the minimum allowed 1000" error, stack trace and "Error reading task output" warnings; repeated output snipped]
    13/12/19 08:10:33 INFO mapred.JobClient: Job complete: job_201312190800_0001
    13/12/19 08:10:33 INFO mapred.JobClient: Counters: 4
    13/12/19 08:10:33 INFO mapred.JobClient: Job Counters
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Job Failed: NA
    java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1372)
    at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
    at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
    [root@ipshyd84 run]#



    On Thu, Dec 19, 2013 at 8:00 AM, Suresh Tirumalasetti <suresh.tirumalasetti@gmail.com> wrote:
    Hi Harsh, thanks.
    After regenerating, I am able to start all services.

    Do we need to generate hive.keytab and copy it to /etc/hive/conf?
    Also, do I need to add any entries to the hive-site.xml file?


    On Thu, Dec 19, 2013 at 7:48 AM, Suresh Tirumalasetti <suresh.tirumalasetti@gmail.com> wrote:
    [root@ipshyd84 run]# ls -alt `find . -name hdfs.keytab`
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48 ./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48 ./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48 ./cloudera-scm-agent/process/887-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55 ./cloudera-scm-agent/process/881-hdfs-NAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55 ./cloudera-scm-agent/process/882-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55 ./cloudera-scm-agent/process/883-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56 ./cloudera-scm-agent/process/875-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56 ./cloudera-scm-agent/process/876-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56 ./cloudera-scm-agent/process/874-hdfs-NAMENODE/hdfs.keytab
    [root@ipshyd84 run]# su hdfs
    [hdfs@ipshyd84 run]$ klist -e -k -t ./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab
    Keytab name: FILE:./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab

    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@ipshyd84 run]$ klist -e -k -t ./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab
    Keytab name: FILE:./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab

    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@ipshyd84 run]$


    On Thu, Dec 19, 2013 at 7:39 AM, Harsh J wrote:

    Your keytabs are still carrying it, so it appears that you disabled it in the
    wrong way, or disabled it after the keytabs were already generated. Please
    ensure the KDC config is also proper, then hit Regenerate under CM,
    Administration, Kerberos tab.
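    [Editor's aside: the "Checksum failed" login error earlier in this thread typically means the keytab's keys do not match what the KDC issued, and a common culprit is leftover AES-256 keys on a JVM without the JCE unlimited-strength policy files. The sketch below only illustrates the check; it greps a captured `klist -e` listing (sample lines borrowed from later in this thread) rather than a live keytab, so on a real cluster substitute the output of `klist -e -k -t <keytab>`.]

```shell
# Scan klist output for AES-256 key entries. The sample text stands in for
# real output of: klist -e -k -t /path/to/hdfs.keytab
sample_klist_output='
   3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
   3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
'
if printf '%s\n' "$sample_klist_output" | grep -q 'AES-256'; then
  result="AES-256 keys present: install the JCE policy files or regenerate the credentials without aes256"
else
  result="no AES-256 keys in this keytab"
fi
echo "$result"
```

    If AES-256 entries show up but the JCE policy files are not installed on every node's JVM, logins from that keytab will keep failing until one or the other is fixed.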
    On Dec 19, 2013 7:28 AM, "Suresh Tirumalasetti" <suresh.tirumalasetti@gmail.com> wrote:
    I am not using AES-256 and have removed this entry from the supported
    encryption types:

    *krb5.conf*

    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    default_realm = IPS.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 2d
    renew_lifetime = 2w
    max_line=10h
    kdc_timeout = 10s
    forwardable = true
    allow_weak_crypto = true

    [realms]
    IPS.COM = {
    kdc = islftp032.in.ibm.com:88
    admin_server = islftp032.in.ibm.com:749
    default_domain = in.ibm.com
    }

    [domain_realm]
    .in.ibm.com = IPS.COM
    in.ibm.com = IPS.COM

    *kdc.conf*

    [kdcdefaults]
    kdc_ports = 88
    kdc_tcp_ports = 88

    [realms]
    IPS.COM = {
    #master_key_type = aes128-cts
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
    supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
    }
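    [Editor's aside, an assumption about MIT Kerberos behavior rather than something stated in the thread: editing supported_enctypes only governs keys generated from that point on; principals keyed earlier keep their old key set, which is why the keytabs still carried the old enctypes until they were regenerated. A quick sanity check that the quoted realm block no longer lists aes256, sketched against the config text above:]

```shell
# Grep the supported_enctypes value for aes256. The variable holds the line
# quoted in this thread; on a real KDC, read /var/kerberos/krb5kdc/kdc.conf.
supported_enctypes='aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal'
if printf '%s\n' "$supported_enctypes" | grep -q 'aes256'; then
  check="aes256 still enabled: newly generated keys will need the JCE policy files"
else
  check="aes256 not in supported_enctypes"
fi
echo "$check"
```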



    On Wed, Dec 18, 2013 at 5:07 PM, Harsh J wrote:

    Also, as to 'who' generates the keytabs: Cloudera Manager does (after you
    give it an administrative keytab and principal).

    On Wed, Dec 18, 2013 at 5:06 PM, Harsh J wrote:

    Your issue is not CDH-related, but rather a Kerberos misconfiguration.

    It is likely that you've missed this specific step:
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cmchs_JCE_policy_s4.html


    On Wed, Dec 18, 2013 at 1:23 PM, Suresh Tirumalasetti <suresh.tirumalasetti@gmail.com> wrote:
    1. I have enabled Kerberos security in Cloudera Manager using the
    steps mentioned at
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/Configuring-Hadoop-Security-with-Cloudera-Manager.html

    2. Deployed Client configuration

    3. Added missing entries in *-site.xml files

    4. Also created hdfs.keytab, mapred.keytab, hive.keytab and cmf.keytab,
    and copied them to their respective locations in /etc/hadoop/conf,
    /etc/hive/conf and /etc/cloudera-scm-server/


    5. When I tried to start services, the HDFS service is not starting.

    6. Does Cloudera Manager generate keytabs at run time? Who generates
    /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab?

    7. I have seen a number of keytabs generated in the /var/run folder

    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:825)
    at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:279)
    at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:243)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1726)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
    at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:135)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:188)
    Caused by: javax.security.auth.login.LoginException: Checksum failed
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:763)
    at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:784)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
    at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
    at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:718)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:590)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:816)
    ... 11 more
    Caused by: KrbException: Checksum failed
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:102)
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:94)
    at sun.security.krb5.EncryptedData.decrypt(EncryptedData.java:177)
    at sun.security.krb5.KrbAsRep.decrypt(KrbAsRep.java:149)
    at sun.security.krb5.KrbAsRep.decryptUsingKeyTab(KrbAsRep.java:121)
    at sun.security.krb5.KrbAsReqBuilder.resolve(KrbAsReqBuilder.java:288)
    at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:364)
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:735)
    ... 24 more
    Caused by: java.security.GeneralSecurityException: Checksum failed
    at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptCTS(AesDkCrypto.java:451)
    at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decrypt(AesDkCrypto.java:272)
    at sun.security.krb5.internal.crypto.Aes128.decrypt(Aes128.java:76)
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:100)
    ... 31 more



    + export KRB5CCNAME=/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105
    + KRB5CCNAME=/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105
    + echo 'using hdfs/blr11.ips.com@IPS.COM as Kerberos principal'
    + echo 'using /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105 as Kerberos ticket cache'
    + kinit -c /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105 -kt /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab hdfs/blr11.ips.com@ips.com
    kinit(v5): Password incorrect while getting initial credentials
    + '[' 1 -ne 0 ']'
    + echo 'kinit was not successful.'
    + exit 1


    [hdfs@blr11 var]$ klist -e -k -t /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab
    Keytab name: FILE:/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab
    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
    [hdfs@blr11 var]$ klist -e -k -t /etc/hadoop/conf/hdfs.keytab
    Keytab name: FILE:/etc/hadoop/conf/hdfs.keytab
    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/18/13 00:34:54 HTTP/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       5 12/18/13 00:34:54 HTTP/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@blr11 var]$

    To unsubscribe from this group and stop receiving emails from it,
    send an email to scm-users+unsubscribe@cloudera.org.


    --
    Harsh J




    --
    Thanks
    - Suresh Tirumalasetti










  • Harsh J at Dec 19, 2013 at 5:41 am
    Your SQL syntax is incorrect in the context of Beeline. You will need to
    terminate each statement with a semicolon in order to fire it.

    P.S. Please reply to the list (or ensure you keep the list in the To: field
    when replying) and not to a poster directly, so that we keep the
    discussion on the forum.
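    [Editor's aside, to illustrate the point: in the session quoted below, the statements would each fire if terminated with a semicolon, e.g. (table name tab2 taken from that session and assumed to exist):]

```
0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def> use default;
0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def> show databases;
0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def> select * from tab2;
```

    Without the trailing semicolon, Beeline keeps buffering input on continuation lines, so the later "use default;" actually fired the whole accumulated buffer as one malformed statement, producing the ParseException shown.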

    On Thu, Dec 19, 2013 at 10:22 AM, Suresh Tirumalasetti wrote:

    What could be wrong with Beeline not giving any output for Hive SQL
    commands?

    [hive@ipshyd84 var]$ beeline
    Beeline version 0.10.0-cdh4.4.0 by Apache Hive
    beeline> !connect jdbc:hive2://ipshyd84.in.ibm.com:10000/default;principal=hive/ipshyd84.in.ibm.com@IPS.COM
    scan complete in 7ms
    Connecting to jdbc:hive2://ipshyd84.in.ibm.com:10000/default;principal=hive/ipshyd84.in.ibm.com@IPS.COM
    Enter username for jdbc:hive2://ipshyd84.in.ibm.com:10000/default;principal=hive/ipshyd84.in.ibm.com@IPS.COM:
    Enter password for jdbc:hive2://ipshyd84.in.ibm.com:10000/default;principal=hive/ipshyd84.in.ibm.com@IPS.COM:
    Connected to: Hive (version 0.10.0)
    Driver: Hive (version 0.10.0-cdh4.4.0)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def> show databases
    . . . . . . . . . . . . . . . . . . . . . . .> select * from tab2
    . . . . . . . . . . . . . . . . . . . . . . .> use default
    . . . . . . . . . . . . . . . . . . . . . . .> use default;
    Error: Error while processing statement: FAILED: ParseException line 1:15
    missing EOF at 'select' near 'databases' (state=42000,code=40000)
    0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def>

    On Thu, Dec 19, 2013 at 9:54 AM, Harsh J wrote:

    +scm-users@ (sorry for posting to wrong list instead)

    On Thu, Dec 19, 2013 at 9:19 AM, Harsh J wrote:

    Please follow the security guide carefully. It lists all the required
    configuration and caveats, including the min.user.id issue you face
    here:
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_prep_for_users_s17.html

    Note that it is also atypical and not recommended to run jobs, or allow them
    to run, as the 'hdfs' or 'mapred' users, as these are administrative superusers
    in the environment. In secure mode you do not want to use these unless
    you're performing a clearly administrative command.
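    [Editor's aside, to make the min.user.id mechanics concrete: the LinuxTaskController reads /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg and rejects any job-submitting user whose UID is below min.user.id, which is exactly why 'hdfs' (UID 105) is refused in the log below. A sketch of the relevant fragment; the values shown are common defaults, not necessarily this cluster's, and under Cloudera Manager the setting should be changed via the TaskTracker configuration rather than by hand-editing the file:]

```
# taskcontroller.cfg (excerpt, illustrative values)
min.user.id=1000        # lowest UID allowed to run tasks
banned.users=mapred,hdfs,bin
```

    The cleaner fix, per the guide linked above, is to submit jobs as a regular user with a UID of 1000 or more rather than lowering min.user.id to admit superusers.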


    On Thu, Dec 19, 2013 at 8:09 AM, Suresh Tirumalasetti <suresh.tirumalasetti@gmail.com> wrote:
    Any idea on how to avoid the following errors/exceptions ?

    [root@ipshyd84 run]# hadoop jar
    /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi
    10 10000
    Number of Maps = 10
    Samples per Map = 10000
    Wrote input for Map #0
    Wrote input for Map #1
    Wrote input for Map #2
    Wrote input for Map #3
    Wrote input for Map #4
    Wrote input for Map #5
    Wrote input for Map #6
    Wrote input for Map #7
    Wrote input for Map #8
    Wrote input for Map #9
    Starting Job
    13/12/19 08:10:26 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    13/12/19 08:10:26 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 3 for hdfs on 9.184.184.94:8020
    13/12/19 08:10:26 INFO security.TokenCache: Got dt for hdfs://ipshyd84.in.ibm.com:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 9.184.184.94:8020, Ident: (HDFS_DELEGATION_TOKEN token 3 for hdfs)
    13/12/19 08:10:26 INFO mapred.FileInputFormat: Total input paths to process : 10
    13/12/19 08:10:28 INFO mapred.JobClient: Running job: job_201312190800_0001
    13/12/19 08:10:29 INFO mapred.JobClient: map 0% reduce 0%
    13/12/19 08:10:31 INFO mapred.JobClient: Task Id : attempt_201312190800_0001_m_000011_0, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000011_0:
    java.io.IOException: Job initialization failed (255) with output: Reading task controller config from /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:31 WARN mapred.JobClient: Error reading task output http://ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_0&filter=stdout
    13/12/19 08:10:31 WARN mapred.JobClient: Error reading task output http://ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_0&filter=stderr
    [attempts attempt_201312190800_0001_m_000011_1, _000011_2, _000010_0, _000010_1 and _000010_2 each failed with the identical "Requested user hdfs has id 105, which is below the minimum allowed 1000" error, stack trace and "Error reading task output" warnings; repeated output snipped]
    13/12/19 08:10:33 INFO mapred.JobClient: Job complete:
    job_201312190800_0001
    13/12/19 08:10:33 INFO mapred.JobClient: Counters: 4
    13/12/19 08:10:33 INFO mapred.JobClient: Job Counters
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all
    maps in occupied slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all
    reduces in occupied slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all
    maps waiting after reserving slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all
    reduces waiting after reserving slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Job Failed: NA
    java.io.IOException: Job failed!
    at
    org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1372)
    at
    org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
    at
    org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at
    org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at
    org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    at
    org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
    at
    org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at
    sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at
    sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
    [root@ipshyd84 run]#



    On Thu, Dec 19, 2013 at 8:00 AM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    Hi Harsh, thanks.
    After regenerating, I am able to start all services.

    Do we need to generate hive.keytab and copy it to /etc/hive/conf ?
    Also do I need to add any entries to hive-site.xml file ?


    On Thu, Dec 19, 2013 at 7:48 AM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    [root@ipshyd84 run]# ls -alt `find . -name hdfs.keytab`
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48 ./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48 ./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48 ./cloudera-scm-agent/process/887-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55 ./cloudera-scm-agent/process/881-hdfs-NAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55 ./cloudera-scm-agent/process/882-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55 ./cloudera-scm-agent/process/883-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56 ./cloudera-scm-agent/process/875-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56 ./cloudera-scm-agent/process/876-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56 ./cloudera-scm-agent/process/874-hdfs-NAMENODE/hdfs.keytab
    [root@ipshyd84 run]# su hdfs
    [hdfs@ipshyd84 run]$ klist -e -k -t ./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab
    Keytab name: FILE:./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab

    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@ipshyd84 run]$ klist -e -k -t ./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab
    Keytab name: FILE:./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab

    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@ipshyd84 run]$


    On Thu, Dec 19, 2013 at 7:39 AM, Harsh J wrote:

    Your keytabs are still carrying it, so it appears that you either disabled
    it in the wrong way or disabled it after the keytabs were already generated.
    Please ensure the KDC config is also correct, then hit Regenerate under CM >
    Administration > Kerberos tab.
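    A quick way to confirm what Harsh describes here is to look for AES-256
    entries in a keytab listing. A minimal sketch, using sample lines copied
    from the klist output later in this thread (on a real cluster you would
    pipe `klist -e -k -t hdfs.keytab` into the grep instead of using this
    inlined sample):

```shell
# Sample `klist -e` lines (taken from this thread); on a cluster, replace
# this variable with the real output of: klist -e -k -t hdfs.keytab
klist_output='3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)'

# If AES-256 keys are present, either install the JCE unlimited-strength
# policy files on every host, or drop aes256 from supported_enctypes and
# regenerate the credentials from Cloudera Manager.
if printf '%s\n' "$klist_output" | grep -q 'AES-256'; then
  echo 'keytab still carries AES-256 keys'
else
  echo 'no AES-256 keys found'
fi
```

    If the check prints the first message but the JCE policy files are not
    installed, the `Checksum failed` login errors seen earlier in this thread
    are the expected symptom.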
    On Dec 19, 2013 7:28 AM, "Suresh Tirumalasetti" <
    suresh.tirumalasetti@gmail.com> wrote:
    I am not using AES-256 and have removed that entry from the supported
    encryption types:

    *krb5.conf*

    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    default_realm = IPS.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 2d
    renew_lifetime = 2w
    max_line=10h
    kdc_timeout = 10s
    forwardable = true
    allow_weak_crypto = true

    [realms]
    IPS.COM = {
    kdc = islftp032.in.ibm.com:88
    admin_server = islftp032.in.ibm.com:749
    default_domain = in.ibm.com
    }

    [domain_realm]
    .in.ibm.com = IPS.COM
    in.ibm.com = IPS.COM

    *kdc.conf*

    [kdcdefaults]
    kdc_ports = 88
    kdc_tcp_ports = 88

    [realms]
    IPS.COM = {
    #master_key_type = aes128-cts
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
    supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
    }



    On Wed, Dec 18, 2013 at 5:07 PM, Harsh J wrote:

    Also, as to 'who' generates the keytabs: the answer is Cloudera Manager
    (after you give it an administrative keytab and principal).

    On Wed, Dec 18, 2013 at 5:06 PM, Harsh J wrote:

    Your issue is not CDH-related, but rather a Kerberos misconfiguration.

    It is likely that you've missed this specific step:
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cmchs_JCE_policy_s4.html


    On Wed, Dec 18, 2013 at 1:23 PM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    1. I have enabled Kerberos security in Cloudera Manager using
    the steps mentioned at
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/Configuring-Hadoop-Security-with-Cloudera-Manager.html

    2. Deployed Client configuration

    3. Added missing entries in *-site.xml files

    4. Also created hdfs.keytab, mapred.keytab, hive.keytab and
    cmf.keytab, and copied them to the respective locations in
    /etc/hadoop/conf, /etc/hive/conf and /etc/cloudera-scm-server/


    5. When I tried to start services, the HDFS service did not start.

    6. Does Cloudera Manager generate keytabs at run time? Who generates
    /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab?

    7. I have seen a number of keytabs generated under the /var/run folder

    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:825)
    at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:279)
    at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:243)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1726)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
    at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:135)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:188)
    Caused by: javax.security.auth.login.LoginException: Checksum failed
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:763)
    at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:784)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
    at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
    at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:718)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:590)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:816)
    ... 11 more
    Caused by: KrbException: Checksum failed
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:102)
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:94)
    at sun.security.krb5.EncryptedData.decrypt(EncryptedData.java:177)
    at sun.security.krb5.KrbAsRep.decrypt(KrbAsRep.java:149)
    at sun.security.krb5.KrbAsRep.decryptUsingKeyTab(KrbAsRep.java:121)
    at sun.security.krb5.KrbAsReqBuilder.resolve(KrbAsReqBuilder.java:288)
    at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:364)
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:735)
    ... 24 more
    Caused by: java.security.GeneralSecurityException: Checksum failed
    at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptCTS(AesDkCrypto.java:451)
    at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decrypt(AesDkCrypto.java:272)
    at sun.security.krb5.internal.crypto.Aes128.decrypt(Aes128.java:76)
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:100)
    ... 31 more



    + export KRB5CCNAME=/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105
    + KRB5CCNAME=/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105
    + echo 'using hdfs/blr11.ips.com@IPS.COM as Kerberos principal'
    + echo 'using /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105 as Kerberos ticket cache'
    + kinit -c /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105 -kt /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab hdfs/blr11.ips.com@ips.com
    kinit(v5): Password incorrect while getting initial credentials
    + '[' 1 -ne 0 ']'
    + echo 'kinit was not successful.'
    + exit 1


    [hdfs@blr11 var]$ klist -e -k -t /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab
    Keytab name: FILE:/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab
    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
    [hdfs@blr11 var]$ klist -e -k -t /etc/hadoop/conf/hdfs.keytab
    Keytab name: FILE:/etc/hadoop/conf/hdfs.keytab
    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/18/13 00:34:54 HTTP/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       5 12/18/13 00:34:54 HTTP/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@blr11 var]$

    To unsubscribe from this group and stop receiving emails from
    it, send an email to scm-users+unsubscribe@cloudera.org.


    --
    Harsh J


    --
    Thanks
    - Suresh Tirumalasetti

  • Suresh Tirumalasetti at Dec 20, 2013 at 3:43 pm
    Hi Harsh,

    Thank you very much for all your prompt responses. My Cloudera Manager
    environment is up and running as expected.

    Thank you
    Suresh

    On Thu, Dec 19, 2013 at 11:10 AM, Harsh J wrote:

    Your SQL syntax is incorrect in the context of Beeline. You will need to
    terminate each statement with a semicolon in order to fire it.

    P.S. Please reply to the list (or ensure the list stays in the To: field
    when replying) rather than to a poster directly, so that we keep the
    discussion on the forum.
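    For instance, the statements from the session quoted below each need their
    own terminating semicolon before Beeline fires them. A hypothetical
    corrected version of that session (same statements, just terminated):

```
0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def> show databases;
0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def> use default;
0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def> select * from tab2;
```

    Without the semicolon, Beeline keeps accumulating lines into one statement,
    which is why the parser later complains about 'select' appearing after
    'databases'.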


    On Thu, Dec 19, 2013 at 10:22 AM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    What could be wrong with Beeline not giving any output for Hive SQL
    commands?

    [hive@ipshyd84 var]$ beeline
    Beeline version 0.10.0-cdh4.4.0 by Apache Hive
    beeline> !connect jdbc:hive2://ipshyd84.in.ibm.com:10000/default;principal=hive/ipshyd84.in.ibm.com@IPS.COM
    scan complete in 7ms
    Connecting to jdbc:hive2://ipshyd84.in.ibm.com:10000/default;principal=hive/ipshyd84.in.ibm.com@IPS.COM
    Enter username for jdbc:hive2://ipshyd84.in.ibm.com:10000/default;principal=hive/ipshyd84.in.ibm.com@IPS.COM:
    Enter password for jdbc:hive2://ipshyd84.in.ibm.com:10000/default;principal=hive/ipshyd84.in.ibm.com@IPS.COM:
    Connected to: Hive (version 0.10.0)
    Driver: Hive (version 0.10.0-cdh4.4.0)
    Transaction isolation: TRANSACTION_REPEATABLE_READ
    0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def> show databases
    . . . . . . . . . . . . . . . . . . . . . . .> select * from tab2
    . . . . . . . . . . . . . . . . . . . . . . .> use default
    . . . . . . . . . . . . . . . . . . . . . . .> use default;
    Error: Error while processing statement: FAILED: ParseException line 1:15 missing EOF at 'select' near 'databases' (state=42000,code=40000)
    0: jdbc:hive2://ipshyd84.in.ibm.com:10000/def>

    On Thu, Dec 19, 2013 at 9:54 AM, Harsh J wrote:

    +scm-users@ (sorry for posting to wrong list instead)

    On Thu, Dec 19, 2013 at 9:19 AM, Harsh J wrote:

    Please follow the security guide carefully. It lists all the required
    configuration and caveats, including the min.user.id issue you are facing
    here:
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cm5chs_prep_for_users_s17.html

    Note that it's also atypical, and not recommended, to run or allow jobs to
    run as the 'hdfs' or 'mapred' users, as these are administrative superusers
    in the environment. In secure mode you do not want to use these unless
    you're performing a clearly administrative command.
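    The min.user.id gate from the linked page can be checked up front before
    submitting a job as a given user. A minimal sketch (the 1000 threshold and
    the uid 105 come from the error in this thread; `check_uid` is a
    hypothetical helper, not part of any Hadoop tooling):

```shell
# taskcontroller.cfg rejects job initialization for any submitting user
# whose uid is below min.user.id (1000 here, per the error in this thread).
MIN_USER_ID=1000

# Print whether a uid clears the gate; returns non-zero if it does not.
# On a cluster you would call it as: check_uid "$(id -u someuser)"
check_uid() {
  if [ "$1" -lt "$MIN_USER_ID" ]; then
    echo "uid $1 is below min.user.id $MIN_USER_ID: job init will fail with (255)"
    return 1
  fi
  echo "uid $1 is OK"
}

check_uid 105 || true   # the 'hdfs' system user from this thread
check_uid 1001          # a regular (non-system) end-user account
```

    System accounts like 'hdfs' and 'mapred' typically get uids below 1000 at
    install time, which is why the guide recommends submitting jobs as regular
    end users rather than lowering min.user.id.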


    On Thu, Dec 19, 2013 at 8:09 AM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    Any idea on how to avoid the following errors/exceptions ?

    [root@ipshyd84 run]# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-0.20-mapreduce/hadoop-examples.jar pi 10 10000
    Number of Maps = 10
    Samples per Map = 10000
    Wrote input for Map #0
    Wrote input for Map #1
    Wrote input for Map #2
    Wrote input for Map #3
    Wrote input for Map #4
    Wrote input for Map #5
    Wrote input for Map #6
    Wrote input for Map #7
    Wrote input for Map #8
    Wrote input for Map #9
    Starting Job
    13/12/19 08:10:26 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
    13/12/19 08:10:26 INFO hdfs.DFSClient: Created HDFS_DELEGATION_TOKEN token 3 for hdfs on 9.184.184.94:8020
    13/12/19 08:10:26 INFO security.TokenCache: Got dt for hdfs://ipshyd84.in.ibm.com:8020; Kind: HDFS_DELEGATION_TOKEN, Service: 9.184.184.94:8020, Ident: (HDFS_DELEGATION_TOKEN token 3 for hdfs)
    13/12/19 08:10:26 INFO mapred.FileInputFormat: Total input paths to process : 10
    13/12/19 08:10:28 INFO mapred.JobClient: Running job: job_201312190800_0001
    13/12/19 08:10:29 INFO mapred.JobClient: map 0% reduce 0%
    13/12/19 08:10:31 INFO mapred.JobClient: Task Id :
    attempt_201312190800_0001_m_000011_0, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000011_0:
    java.io.IOException: Job initialization failed (255) with output:
    Reading task controller config from
    /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at
    org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at
    org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at
    org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at
    org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at
    org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at
    org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:31 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_0&filter=stdout
    13/12/19 08:10:31 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_0&filter=stderr
    13/12/19 08:10:32 INFO mapred.JobClient: Task Id :
    attempt_201312190800_0001_m_000011_1, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000011_1:
    java.io.IOException: Job initialization failed (255) with output:
    Reading task controller config from
    /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at
    org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at
    org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at
    org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at
    org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at
    org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at
    org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:32 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_1&filter=stdout
    13/12/19 08:10:32 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_1&filter=stderr
    13/12/19 08:10:32 INFO mapred.JobClient: Task Id :
    attempt_201312190800_0001_m_000011_2, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000011_2:
    java.io.IOException: Job initialization failed (255) with output:
    Reading task controller config from
    /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at
    org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at
    org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at
    org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at
    org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at
    org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at
    org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:32 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_2&filter=stdout
    13/12/19 08:10:32 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000011_2&filter=stderr
    13/12/19 08:10:33 INFO mapred.JobClient: Task Id :
    attempt_201312190800_0001_m_000010_0, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000010_0:
    java.io.IOException: Job initialization failed (255) with output:
    Reading task controller config from
    /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at
    org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at
    org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at
    org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at
    org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at
    org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at
    org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:33 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000010_0&filter=stdout
    13/12/19 08:10:33 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000010_0&filter=stderr
    13/12/19 08:10:33 INFO mapred.JobClient: Task Id :
    attempt_201312190800_0001_m_000010_1, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000010_1:
    java.io.IOException: Job initialization failed (255) with output:
    Reading task controller config from
    /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at
    org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at
    org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at
    org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at
    org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at
    org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at
    org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:33 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000010_1&filter=stdout
    13/12/19 08:10:33 WARN mapred.JobClient: Error reading task
    outputhttp://
    ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000010_1&filter=stderr
    13/12/19 08:10:33 INFO mapred.JobClient: Task Id :
    attempt_201312190800_0001_m_000010_2, Status : FAILED
    Error initializing attempt_201312190800_0001_m_000010_2:
    java.io.IOException: Job initialization failed (255) with output:
    Reading task controller config from
    /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg
    Requested user hdfs has id 105, which is below the minimum allowed 1000

    at
    org.apache.hadoop.mapred.LinuxTaskController.initializeJob(LinuxTaskController.java:194)
    at
    org.apache.hadoop.mapred.TaskTracker$4.run(TaskTracker.java:1470)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at
    org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at
    org.apache.hadoop.mapred.TaskTracker.initializeJob(TaskTracker.java:1445)
    at
    org.apache.hadoop.mapred.TaskTracker.localizeJob(TaskTracker.java:1360)
    at
    org.apache.hadoop.mapred.TaskTracker.startNewTask(TaskTracker.java:2786)
    at
    org.apache.hadoop.mapred.TaskTracker$TaskLauncher.run(TaskTracker.java:2750)
    Caused by: org.apache.hadoop.util.Shell$ExitCodeExcept
    13/12/19 08:10:33 WARN mapred.JobClient: Error reading task outputhttp://ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000010_2&filter=stdout
    13/12/19 08:10:33 WARN mapred.JobClient: Error reading task outputhttp://ipshyd84.in.ibm.com:50060/tasklog?plaintext=true&attemptid=attempt_201312190800_0001_m_000010_2&filter=stderr
    13/12/19 08:10:33 INFO mapred.JobClient: Job complete: job_201312190800_0001
    13/12/19 08:10:33 INFO mapred.JobClient: Counters: 4
    13/12/19 08:10:33 INFO mapred.JobClient: Job Counters
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all maps in occupied slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all reduces in occupied slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
    13/12/19 08:10:33 INFO mapred.JobClient: Job Failed: NA
    java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1372)
    at org.apache.hadoop.examples.PiEstimator.estimate(PiEstimator.java:297)
    at org.apache.hadoop.examples.PiEstimator.run(PiEstimator.java:342)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.PiEstimator.main(PiEstimator.java:351)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:72)
    at org.apache.hadoop.util.ProgramDriver.driver(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:64)

    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
    [root@ipshyd84 run]#
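    The root cause in the log above is the line "Requested user hdfs has id 105, which is below the minimum allowed 1000". The LinuxTaskController refuses to launch tasks for users whose uid falls below the min.user.id setting in taskcontroller.cfg, precisely to stop jobs from running as system accounts such as hdfs. The usual fix is to submit jobs as a regular user (uid >= 1000) rather than to lower the threshold. A minimal sketch of the check it performs, with a temp file standing in for the real /etc/hadoop/conf.cloudera.mapreduce1/taskcontroller.cfg:

    ```shell
    # Reproduce the check the LinuxTaskController performs: compare the
    # submitting user's uid against min.user.id from taskcontroller.cfg.
    # (Sample config values; only min.user.id matters for this check.)
    cfg=$(mktemp)
    cat > "$cfg" <<'EOF'
    mapred.local.dir=/mapred/local
    banned.users=mapred,bin
    min.user.id=1000
    EOF

    min_uid=$(sed -n 's/^min\.user\.id=//p' "$cfg")
    job_uid=105   # uid of the 'hdfs' account, from the error message above
    if [ "$job_uid" -lt "$min_uid" ]; then
      echo "REJECTED: uid $job_uid is below min.user.id $min_uid"
    fi
    rm -f "$cfg"
    ```

    Running the pi example as a regular user (for instance via `sudo -u <someuser> hadoop jar ...`) avoids the rejection without weakening the safety check.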



    On Thu, Dec 19, 2013 at 8:00 AM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    Hi Harsh, thanks.
    After regenerating, I am able to start all services.

    Do we need to generate hive.keytab and copy it to /etc/hive/conf?
    Also, do I need to add any entries to the hive-site.xml file?


    On Thu, Dec 19, 2013 at 7:48 AM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    [root@ipshyd84 run]# ls -alt `find . -name hdfs.keytab`
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48
    ./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48
    ./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 19 07:48
    ./cloudera-scm-agent/process/887-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55
    ./cloudera-scm-agent/process/881-hdfs-NAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55
    ./cloudera-scm-agent/process/882-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 22:55
    ./cloudera-scm-agent/process/883-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56
    ./cloudera-scm-agent/process/875-hdfs-DATANODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56
    ./cloudera-scm-agent/process/876-hdfs-SECONDARYNAMENODE/hdfs.keytab
    -rw------- 1 hdfs hdfs 696 Dec 18 21:56
    ./cloudera-scm-agent/process/874-hdfs-NAMENODE/hdfs.keytab
    [root@ipshyd84 run]# su hdfs
    [hdfs@ipshyd84 run]$ klist -e -k -t
    ./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab
    Keytab name: FILE:./cloudera-scm-agent/process/885-hdfs-NAMENODE/hdfs.keytab

    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@ipshyd84 run]$ klist -e -k -t
    ./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab
    Keytab name: FILE:./cloudera-scm-agent/process/886-hdfs-DATANODE/hdfs.keytab

    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
       7 12/18/13 19:49:24 hdfs/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (ArcFour with HMAC/md5)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES with HMAC/sha1)
      13 12/18/13 19:49:24 HTTP/ipshyd84.in.ibm.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@ipshyd84 run]$


    On Thu, Dec 19, 2013 at 7:39 AM, Harsh J wrote:

    Your keytabs are carrying it, so it appears that you either disabled it
    in the wrong way or disabled it after the keytabs had already been
    generated. Please also make sure the KDC config is correct, then hit
    Regenerate under CM > Administration > Kerberos tab.
    On Dec 19, 2013 7:28 AM, "Suresh Tirumalasetti" <
    suresh.tirumalasetti@gmail.com> wrote:
    I am not using AES-256 and have removed that entry from the supported
    encryption types:

    *krb5.conf*

    [logging]
    default = FILE:/var/log/krb5libs.log
    kdc = FILE:/var/log/krb5kdc.log
    admin_server = FILE:/var/log/kadmind.log

    [libdefaults]
    default_realm = IPS.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false
    ticket_lifetime = 2d
    renew_lifetime = 2w
    max_life = 10h
    kdc_timeout = 10s
    forwardable = true
    allow_weak_crypto = true

    [realms]
    IPS.COM = {
    kdc = islftp032.in.ibm.com:88
    admin_server = islftp032.in.ibm.com:749
    default_domain = in.ibm.com
    }

    [domain_realm]
    .in.ibm.com = IPS.COM
    in.ibm.com = IPS.COM

    *kdc.conf*

    [kdcdefaults]
    kdc_ports = 88
    kdc_tcp_ports = 88

    [realms]
    IPS.COM = {
    #master_key_type = aes128-cts
    acl_file = /var/kerberos/krb5kdc/kadm5.acl
    dict_file = /usr/share/dict/words
    admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
    supported_enctypes = aes128-cts:normal des3-hmac-sha1:normal
    arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal
    des-cbc-crc:normal
    }
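    One caveat with the kdc.conf above: changing supported_enctypes only affects keys created afterwards. Every existing principal keeps its old keys (including any AES-256 ones) until it is re-keyed, and re-keying bumps the principal's key version number (KVNO), which invalidates previously exported keytabs. A sketch of that bookkeeping, with the real kadmin.local command shown only in a comment and the starting KVNO taken from the hdfs principal seen elsewhere in this thread:

    ```shell
    # Re-keying a principal picks up the current supported_enctypes and
    # increments its KVNO. Real command (assumption: run on the KDC host):
    #   kadmin.local -q 'change_password -randkey hdfs/ipshyd84.in.ibm.com@IPS.COM'
    kvno=7              # KVNO before re-keying
    kvno=$((kvno + 1))  # the KDC increments it on re-key
    echo "KVNO after re-key: $kvno"
    # Any keytab exported before the re-key still holds KVNO 7 keys and
    # will fail to authenticate; regenerate/redeploy keytabs afterwards.
    ```

    This is why Cloudera Manager's Regenerate action both re-keys the principals and distributes fresh keytabs to the process directories.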



    On Wed, Dec 18, 2013 at 5:07 PM, Harsh J wrote:

    Also, as to 'who' generates the keytabs: yes, Cloudera Manager does
    (after you give it an administrative keytab and principal).

    On Wed, Dec 18, 2013 at 5:06 PM, Harsh J wrote:

    Your issue is not CDH related but rather that of a kerberos
    misconfiguration.

    It is likely that you've missed this specific step:
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/cmchs_JCE_policy_s4.html
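    For context: the "Checksum failed" trace later in this thread is the classic symptom of that missed step. The KDC issued an aes256-cts-encrypted reply, but a JVM without the JCE Unlimited Strength policy files caps AES at 128 bits and cannot decrypt it. Besides installing the policy files, the other workaround is to keep AES-256 out of the picture entirely by pinning the enctypes in krb5.conf. A sketch of such a [libdefaults] fragment, written to a temp file here so the sanity check is self-contained:

    ```shell
    # Pin Kerberos enctypes to ones a default-policy JVM can handle
    # (no aes256-cts). Assumption: on a real cluster this fragment would
    # live in /etc/krb5.conf on every host.
    conf=$(mktemp)
    cat > "$conf" <<'EOF'
    [libdefaults]
    default_realm = IPS.COM
    default_tkt_enctypes = aes128-cts des3-hmac-sha1 arcfour-hmac
    default_tgs_enctypes = aes128-cts des3-hmac-sha1 arcfour-hmac
    permitted_enctypes = aes128-cts des3-hmac-sha1 arcfour-hmac
    EOF

    # Sanity check: aes128-cts appears in all three settings, aes256 in none.
    hits=$(grep -c 'aes128-cts' "$conf")
    echo "aes128-cts lines: $hits"
    grep -q 'aes256' "$conf" || echo "no aes256 enctypes"
    rm -f "$conf"
    ```

    Note that principals keyed before this change may still carry AES-256 keys and need to be regenerated.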


    On Wed, Dec 18, 2013 at 1:23 PM, Suresh Tirumalasetti <
    suresh.tirumalasetti@gmail.com> wrote:
    1. I have enabled Kerberos security in Cloudera Manager using
    the steps mentioned at
    http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM4Ent/latest/Configuring-Hadoop-Security-with-Cloudera-Manager/Configuring-Hadoop-Security-with-Cloudera-Manager.html

    2. Deployed Client configuration

    3. Added missing entries in *-site.xml files

    4. Also created hdfs.keytab, mapred.keytab, hive.keytab and
    cmf.keytab and copied at respective locations in /etc/hadoop/conf and
    /etc/hive/conf and

    /etc/cloudera-scm-server/


    5. When I tried to start services, the HDFS service did not start.

    6. Does Cloudera Manager generate keytabs at run time? Who generates
    /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab?

    7. I have seen a number of keytabs generated in the /var/run folder

    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:825)
    at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:279)
    at org.apache.hadoop.security.SecurityUtil.login(SecurityUtil.java:243)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1726)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1751)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1904)
    at org.apache.hadoop.hdfs.server.datanode.SecureDataNodeStarter.start(SecureDataNodeStarter.java:135)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:188)
    Caused by: javax.security.auth.login.LoginException: Checksum failed
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:763)
    at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:584)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at javax.security.auth.login.LoginContext.invoke(LoginContext.java:784)
    at javax.security.auth.login.LoginContext.access$000(LoginContext.java:203)
    at javax.security.auth.login.LoginContext$5.run(LoginContext.java:721)
    at javax.security.auth.login.LoginContext$5.run(LoginContext.java:719)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.login.LoginContext.invokeCreatorPriv(LoginContext.java:718)
    at javax.security.auth.login.LoginContext.login(LoginContext.java:590)
    at org.apache.hadoop.security.UserGroupInformation.loginUserFromKeytab(UserGroupInformation.java:816)
    ... 11 more
    Caused by: KrbException: Checksum failed
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:102)
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:94)
    at sun.security.krb5.EncryptedData.decrypt(EncryptedData.java:177)
    at sun.security.krb5.KrbAsRep.decrypt(KrbAsRep.java:149)
    at sun.security.krb5.KrbAsRep.decryptUsingKeyTab(KrbAsRep.java:121)
    at sun.security.krb5.KrbAsReqBuilder.resolve(KrbAsReqBuilder.java:288)
    at sun.security.krb5.KrbAsReqBuilder.action(KrbAsReqBuilder.java:364)
    at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:735)
    ... 24 more
    Caused by: java.security.GeneralSecurityException: Checksum failed
    at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptCTS(AesDkCrypto.java:451)
    at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decrypt(AesDkCrypto.java:272)
    at sun.security.krb5.internal.crypto.Aes128.decrypt(Aes128.java:76)
    at sun.security.krb5.internal.crypto.Aes128CtsHmacSha1EType.decrypt(Aes128CtsHmacSha1EType.java:100)
    ... 31 more



    + export KRB5CCNAME=/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105
    + KRB5CCNAME=/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105
    + echo 'using hdfs/blr11.ips.com@IPS.COM as Kerberos principal'
    + echo 'using /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105 as Kerberos ticket cache'
    + kinit -c /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/krb5cc_105 -kt /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab hdfs/blr11.ips.com@ips.com
    kinit(v5): Password incorrect while getting initial credentials
    + '[' 1 -ne 0 ']'
    + echo 'kinit was not successful.'
    + exit 1
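    "kinit(v5): Password incorrect" with a keytab usually means the keys in the keytab no longer match what the KDC holds: either the enctype cannot be used (the AES-256/JCE problem above) or the keytab holds a stale key version because the principal was re-keyed afterwards. A quick way to tell is to compare the keytab's KVNO against the KDC's. Sketched here with hard-coded values; on a live system they would come from `klist -k -t <keytab>` and `kvno <principal>`:

    ```shell
    # KVNO comparison for hdfs/blr11.ips.com@IPS.COM (illustrative values;
    # a mismatch means the keytab predates a re-key of the principal).
    keytab_kvno=3   # from: klist -k -t .../756-hdfs-NAMENODE/hdfs.keytab
    kdc_kvno=4      # hypothetical output of: kvno hdfs/blr11.ips.com@IPS.COM
    if [ "$keytab_kvno" -ne "$kdc_kvno" ]; then
      echo "stale keytab (KVNO $keytab_kvno vs $kdc_kvno): regenerate credentials in Cloudera Manager"
    fi
    ```

    In this thread the klist output below shows exactly that kind of drift: the process-directory keytab and the /etc/hadoop/conf keytab were exported at different times.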


    [hdfs@blr11 var]$ klist -e -k -t
    /var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab
    Keytab name: FILE:/var/run/cloudera-scm-agent/process/756-hdfs-NAMENODE/hdfs.keytab
    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       3 12/17/13 19:16:41 hdfs/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (AES-256 CTS mode with 96-bit SHA-1 HMAC)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/17/13 19:16:41 HTTP/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
    [hdfs@blr11 var]$ klist -e -k -t /etc/hadoop/conf/hdfs.keytab
    Keytab name: FILE:/etc/hadoop/conf/hdfs.keytab
    KVNO Timestamp         Principal
    ---- ----------------- --------------------------------------------------------
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       3 12/18/13 00:34:53 hdfs/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       5 12/18/13 00:34:53 host/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (AES-128 CTS mode with 96-bit SHA-1 HMAC)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (Triple DES cbc mode with HMAC/sha1)
       5 12/18/13 00:34:53 HTTP/blr11.ips.com@IPS.COM (ArcFour with HMAC/md5)
       5 12/18/13 00:34:54 HTTP/blr11.ips.com@IPS.COM (DES with HMAC/sha1)
       5 12/18/13 00:34:54 HTTP/blr11.ips.com@IPS.COM (DES cbc mode with RSA-MD5)
    [hdfs@blr11 var]$

    To unsubscribe from this group and stop receiving emails from
    it, send an email to scm-users+unsubscribe@cloudera.org.


    --
    Harsh J




    --
    Thanks
    - Suresh Tirumalasetti


Discussion Overview
group: scm-users
category: hadoop
posted: Dec 18, '13 at 7:53a
active: Dec 20, '13 at 3:43p
posts: 6
users: 2
website: cloudera.com
irc: #hadoop
Users in discussion: Harsh J (4 posts), Suresh Tirumalasetti (2 posts)
