Hi,

My CDH4 cluster is enabled with Kerberos authentication, and I have tested
this successfully. But when I try to run a third-party application (a Spark
job) which uses HDFS for input/output, I get the following error:

org.apache.hadoop.security.AccessControlException: Authorization
(hadoop.security.authorization) is enabled but authentication
(hadoop.security.authentication) is configured as simple. Please configure
another method like kerberos or digest.

hadoop.security.authentication is configured as kerberos. The user trying
to submit the Spark job has Kerberos credentials, and I have verified this
by running a Hadoop job as the same user against the same input and output
directories.

Do we need any additional settings to run third-party applications on top
of Hadoop when Kerberos is enabled? How do I resolve this problem?

Thanks,
Gaurav

  • Harsh J at Apr 2, 2013 at 8:48 am
    For any program that uses the Hadoop client API to communicate with a
    secure cluster, the configuration options indicating that must be
    present in full. That is, your Spark job is missing
    "hadoop.security.authentication" set to kerberos (and possibly other
    requisite client properties) when it attempts to talk to the HDFS
    instance.

    Usually the trouble is that, although Spark does look for core-site.xml
    and hdfs-site.xml, the directory they reside under may not be on the
    runtime classpath of your Spark job invocation. Placing that directory
    (such as /etc/hadoop/conf/) on the classpath should immediately resolve
    the issue, unless something within Spark itself prevents this on
    purpose. (A small sketch illustrating the client-side requirement
    follows this reply.)

    --
    Harsh J
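
    For reference, a minimal sketch (not part of the original mails) of what
    the client side needs to see: load the cluster's client configs
    explicitly and confirm the Kerberos setting is actually picked up before
    touching HDFS. It assumes CDH4-era Hadoop client jars and the standard
    /etc/hadoop/conf location; adjust paths for your install.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.security.UserGroupInformation;

        public class SecureHdfsCheck {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // If /etc/hadoop/conf is not on the classpath, pull the
                // client configs in explicitly from the local filesystem.
                conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
                conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

                // Prints "kerberos" on a secured cluster; "simple" means the
                // client configs were not found, which reproduces the error
                // quoted above.
                System.out.println(conf.get("hadoop.security.authentication"));

                // Tell the Hadoop security layer to use this configuration
                // before any HDFS access happens.
                UserGroupInformation.setConfiguration(conf);

                // Uses the Kerberos ticket cache created by a prior kinit.
                FileSystem fs = FileSystem.get(conf);
                System.out.println(fs.exists(new Path("/")));
            }
        }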
  • Gaurav Dasgupta at Apr 2, 2013 at 9:49 am
    Hi Harsh,

    Thanks for the reply. Adding "/etc/hadoop/conf" to the classpath has
    resolved that issue, but now I am getting the following error:

    Loss was due to java.io.IOException: Failed on local exception:
    java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed
    [Caused by GSSException: No valid credentials provided (Mechanism level:
    Failed to find any Kerberos tgt)]; Host Details : local host is:
    "babar8.musigma.com/192.168.200.18"; destination host is: "br10":8020;
    [duplicate 1]
    Loss was due to java.io.IOException: Failed on local exception:
    java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed
    [Caused by GSSException: No valid credentials provided (Mechanism level:
    Failed to find any Kerberos tgt)]; Host Details : local host is:
    "babar8.musigma.com/192.168.200.18"; destination host is: "br10":8020;
    [duplicate 2]
    Loss was due to java.io.IOException: Failed on local exception:
    java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed
    [Caused by GSSException: No valid credentials provided (Mechanism level:
    Failed to find any Kerberos tgt)]; Host Details : local host is:
    "babar3.musigma.com/192.168.200.13"; destination host is: "br10":8020;
    [duplicate 16]

    Kerberos credentials are generated and renewed by Cloudera Manager. Any
    idea what to do next?

    Thanks,

  • Harsh J at Apr 2, 2013 at 9:55 am
    Is this error on the services, or seen in your job? If the latter, you
    need to ensure you've done a local kinit before running your job. (A
    sketch for verifying this from the client JVM follows this reply.)

    --
    Harsh J
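
    A minimal sketch (not part of the original mails) for checking, from
    inside the client JVM, whether the Hadoop security layer actually sees
    the kinit'ed credentials. It assumes the Hadoop client jars and
    /etc/hadoop/conf are already on the classpath.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.security.UserGroupInformation;

        public class WhoAmI {
            public static void main(String[] args) throws Exception {
                // Picks up core-site.xml / hdfs-site.xml from the classpath.
                Configuration conf = new Configuration();
                // Switch the security layer to whatever the configs specify.
                UserGroupInformation.setConfiguration(conf);
                UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
                // After a successful kinit this prints the Kerberos principal
                // and "true"; "false" matches the "Failed to find any
                // Kerberos tgt" error above.
                System.out.println(ugi.getUserName()
                        + " kerberos=" + ugi.hasKerberosCredentials());
            }
        }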
  • Gaurav Dasgupta at Apr 2, 2013 at 10:12 am
    I did a local kinit for the user before submitting the job, and I did
    this on all the machines as well. These error messages appear on the
    terminal (job logs) after submitting the job.

    Thanks,
    Gaurav

  • Harsh J at Apr 2, 2013 at 10:16 am
    OK, that should have worked. Is your credential's REALM different from
    the one the cluster uses? Does your local node also have a proper
    krb5.conf that matches the cluster's? Also, is the ticket you received
    renewable?

    You can also find additional help on these errors generally at
    https://ccp.cloudera.com/display/CDH4DOC/Appendix+A+-+Troubleshooting

    --
    Harsh J
  • Gaurav Dasgupta at Apr 2, 2013 at 10:26 am
    There is only a single REALM in the cluster, and it is configured as the
    default REALM. krb5.conf on all the machines points to that default
    realm, and kdc and admin_server are also set properly. Tickets have a
    7-day renewal life.

    Normal Hadoop jobs run fine with the Kerberos credentials; the problem
    appears only when a third-party application like Spark tries to access
    HDFS.

    Thanks,
    Gaurav

  • Gaurav Dasgupta at Apr 3, 2013 at 8:54 am
    Solved. I had to do a local kinit for the host principals.
    Thanks for all your replies; they were useful. (A programmatic
    keytab-based login sketch follows below for reference.)

    Thanks,
    Gaurav
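
    For completeness, a sketch (not part of the original mails) of doing the
    same kind of login programmatically from a keytab instead of an
    interactive kinit. The principal name and keytab path below are
    placeholders, not values from this thread; substitute the ones issued by
    your KDC or Cloudera Manager.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.security.UserGroupInformation;

        public class KeytabLogin {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                UserGroupInformation.setConfiguration(conf);
                // Hypothetical principal and keytab path -- substitute the
                // ones issued for your user/host by your KDC.
                UserGroupInformation.loginUserFromKeytab(
                        "someuser/somehost.example.com@EXAMPLE.COM",
                        "/etc/security/keytabs/someuser.keytab");
                System.out.println("Logged in as: "
                        + UserGroupInformation.getLoginUser());
            }
        }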


Discussion Overview
group: scm-users
category: hadoop
posted: Apr 2, 2013 at 8:39 AM
active: Apr 3, 2013 at 8:54 AM
posts: 8
users: 2 (Gaurav Dasgupta: 5 posts, Harsh J: 3 posts)
website: cloudera.com
irc: #hadoop
