FAQ
When I click on Job Browser, Hue gives the following error:

Server Error (500)
Sorry, there's been an error. An email was sent to your administrators.
Thank you for your patience.

Here are the relevant parts of the log:

[13/Mar/2013 07:49:49 +0000] http_client DEBUG GET http://localhost:8088/ws/v1/cluster/apps?user=xxxxxx
[13/Mar/2013 07:49:49 +0000] middleware INFO Processing exception: <urlopen error [Errno 111] ECONNREFUSED>: Traceback (most recent call last):
  File "/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py", line 100, in get_response
    response = callback(request, *callback_args, **callback_kwargs)
  File "/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/apps/jobbrowser/src/jobbrowser/views.py", line 72, in jobs
    jobs = get_api(request.user, request.jt).get_jobs(user=request.user, username=user, state=state, text=text, retired=retired)
  File "/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/apps/jobbrowser/src/jobbrowser/api.py", line 175, in get_jobs
    json = self.resource_manager_api.apps(**filters)
  File "/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/desktop/libs/hadoop/src/hadoop/yarn/resource_manager_api.py", line 69, in apps
    return self._root.get('cluster/apps', params=kwargs, headers={'Accept': _JSON_CONTENT_TYPE})
  File "/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/desktop/core/src/desktop/lib/rest/resource.py", line 91, in get
    return self.invoke("GET", relpath, params, headers=headers)
  File "/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/desktop/core/src/desktop/lib/rest/resource.py", line 58, in invoke
    headers=headers)
  File "/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/desktop/core/src/desktop/lib/rest/http_client.py", line 175, in execute
    raise self._exc_class(ex)
RestException: <urlopen error [Errno 111] ECONNREFUSED>



Version: Hue 2.2.0, with CDH4

Everything else works fine in Hue.

I am not sure what configurations/settings/logs I should check further. Any
help is greatly appreciated.


Thanks,
Gaurav


  • Romain Rigaux at Mar 13, 2013 at 3:29 pm
    http://localhost:8088/ws/v1/cluster/apps?user=xxxxxx

    It seems the YARN ResourceManager API is not reachable at
    http://localhost:8088. Can you check whether the settings below are
    correct in /etc/hue/hue.ini?

    Romain

       # Configuration for YARN (MR2)
       # ------------------------------------------------------------------------
       [[yarn_clusters]]

         [[[default]]]
           # Enter the host on which you are running the ResourceManager
           resourcemanager_host=localhost
           # The port where the ResourceManager IPC listens on
           resourcemanager_port=8032
           # Whether to submit jobs to this cluster
           submit_to=True

           # URL of the ResourceManager API
           ## resourcemanager_api_url=http://localhost:8088

           # URL of the ProxyServer API
           ## proxy_api_url=http://localhost:8088

           # URL of the HistoryServer API
           history_server_api_url=http://localhost:19888
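
    A quick way to confirm whether the ResourceManager REST API is actually
    listening is to request the same endpoint Hue calls. This is only a
    sketch (written in modern Python 3 for convenience, not the Python 2.6
    that ships with CDH4); the URL is the default from the config above.

```python
import json
import urllib.error
import urllib.request

def rm_api_reachable(base_url, timeout=3):
    """Return True if the YARN ResourceManager REST API answers at base_url."""
    try:
        with urllib.request.urlopen(base_url + "/ws/v1/cluster/apps",
                                    timeout=timeout) as resp:
            json.load(resp)  # a healthy RM returns a JSON apps listing
        return True
    except (urllib.error.URLError, ValueError, OSError):
        # ECONNREFUSED, as in the traceback above, lands here
        return False

print(rm_api_reachable("http://localhost:8088"))
```

    If this prints False on the Hue host, the problem is the ResourceManager
    address in hue.ini (or the ResourceManager itself), not Hue.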

  • Gaurav at Mar 13, 2013 at 4:44 pm
    Hi Romain,

    Thanks for the quick reply!

    From what you wrote, I checked hue.ini (via CM -> Hue Server, as I am not
    sure exactly where it is located on the cluster), and it seems Hue is
    configured for YARN MapReduce, while we have disabled YARN on our cluster
    and are using MR1. In other words, hue.ini has no [[mapred_clusters]]
    section with a [[[default]]] entry, but it does have details for the
    [[yarn_clusters]] section.

    I assume we will have to (re)configure Hue to use MR1 instead of YARN,
    and then the problem should go away. Let me figure that out.

    Thanks for the help!

    - Gaurav
  • Romain Rigaux at Mar 13, 2013 at 5:20 pm
    I could not see a menu for configuring it, but if you click on 'Hue
    Service', then Configure, then 'Service Wide', then 'Hue Service
    Configuration Safety Valve for hue_safety_valve.ini', you can enter
    something like the snippet below (updating the properties). What matters
    most is:

           # Enter the host on which you are running the Hadoop JobTracker
           jobtracker_host=localhost
           # Whether to submit jobs to this cluster
           submit_to=True




    To insert:

    [hadoop]

       # Configuration for MapReduce 0.20 JobTracker (MR1)
       # ------------------------------------------------------------------------
       [[mapred_clusters]]

         [[[default]]]
           # Enter the host on which you are running the Hadoop JobTracker
           jobtracker_host=localhost
           # The port where the JobTracker IPC listens on
           jobtracker_port=8021
           # Thrift plug-in port for the JobTracker
           ## thrift_port=9290
           # Whether to submit jobs to this cluster
           submit_to=True

           # Change this if your MapReduce cluster is Kerberos-secured
           ## security_enabled=false

           # Settings about this MR1 cluster. If you install MR1 in a
           # different location, you need to set the following.

           # Defaults to $HADOOP_MR1_HOME or /usr/lib/hadoop-0.20-mapreduce
           ## hadoop_mapred_home=/usr/lib/hadoop-0.20-mapreduce

           # Defaults to $HADOOP_BIN or /usr/bin/hadoop
           ## hadoop_bin=/usr/bin/hadoop

           # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
           ## hadoop_conf_dir=/etc/hadoop/conf
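
    Before restarting Hue with this config, it can help to verify that the
    JobTracker IPC port is actually open from the Hue host. A minimal sketch
    (Python 3 for convenience; the host and port are the defaults from the
    snippet above, so adjust them to your cluster):

```python
import socket

def port_open(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        sock = socket.create_connection((host, port), timeout=timeout)
        sock.close()
        return True
    except OSError:  # connection refused, timed out, or host unresolvable
        return False

print(port_open("localhost", 8021))  # JobTracker IPC port from the config
```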


    Romain
  • Gaurav at Mar 14, 2013 at 7:00 pm
    Thanks again Romain. It's working fine.

    Just to reiterate, we followed your instructions and added the following in
    'Hue Service' -> 'Service Wide' -> 'Hue Service Configuration Safety Valve
    for hue_safety_valve.ini':

    # Enter the host on which you are running the Hadoop JobTracker
    jobtracker_host=xxxxx.xxx.xxx.xxxx.com
    # Whether to submit jobs to this cluster
    submit_to=True

    We did not add anything else though, and I guess it will be picked up from
    hue.ini itself.

    This got us going - Job Browser does not give the error anymore.

    I am still a bit unclear on how the safety valve works - specifically,
    whether CM modifies an actual file when the above changes are applied. I
    was expecting to find a hue_safety_valve.ini somewhere on the file system
    on the cluster, but I can't find one. However, going through Service ->
    Hue -> Hue Server -> Show (Configuration Files/Environment) ->
    hue_safety_valve.ini shows the changes we put in through CM. So for now
    it's all good - I just wish I understood a bit better what goes on behind
    the curtain.
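
    For what it's worth, the layering behaves like ordinary .ini overrides:
    Hue reads its base config first and the safety-valve content afterwards,
    with later values winning per section. The sketch below illustrates the
    idea with a deliberately tiny hand-rolled parser; the section contents and
    host name are made up for illustration, not taken from a real cluster.

```python
def parse_ini(text):
    """Tiny parser: key=value lines grouped under [bracketed] section headers."""
    conf, section = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("["):
            section = line.strip("[]")
            conf.setdefault(section, {})
        elif "=" in line and section is not None:
            key, _, value = line.partition("=")
            conf[section][key.strip()] = value.strip()
    return conf

def merge(base, override):
    """Later config wins: override's values replace base's, per section."""
    merged = {name: dict(kv) for name, kv in base.items()}
    for name, kv in override.items():
        merged.setdefault(name, {}).update(kv)
    return merged

base_hue_ini = """
[[mapred_clusters]]
jobtracker_host=localhost
submit_to=False
"""

safety_valve = """
[[mapred_clusters]]
jobtracker_host=jt01.example.com
submit_to=True
"""

effective = merge(parse_ini(base_hue_ini), parse_ini(safety_valve))
print(effective["mapred_clusters"]["jobtracker_host"])  # -> jt01.example.com
```

    The real parser handles nested [[[default]]] subsections, but the
    override behavior is the same: whatever you put in the safety valve
    shadows the corresponding key in the generated hue.ini.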


    Thanks for your help.

    - Gaurav

  • Romain Rigaux at Mar 14, 2013 at 7:47 pm
    Glad to hear!

    About CM, feel free to ask for clarification on
    https://groups.google.com/a/cloudera.org/group/scm-users/
    and these posts may describe it in more detail:
    http://blog.cloudera.com/blog/category/clouderas-service-and-configuration-manager/

    Romain

  • Gaurav Pandit at Mar 25, 2013 at 10:15 pm
    Yes, I just re-read the CM documentation to understand the configuration
    part a bit better. The documentation (
    https://ccp.cloudera.com/display/FREE45DOC/Modifying+Service+Configurations)
    makes it clear how to manage such changes.

    Thanks,
    Gaurav
    On Thu, Mar 14, 2013 at 3:47 PM, Romain Rigaux wrote:

    Glad to hear!

    About CM, I guess feel free to ask for some clarifications on
    https://groups.google.com/a/cloudera.org/group/scm-users/
    and maybe these posts describe it more somewhere:

    http://blog.cloudera.com/blog/category/clouderas-service-and-configuration-manager/

    Romain


    On Thu, Mar 14, 2013 at 12:00 PM, Gaurav wrote:

    Thanks again Romain. It's working fine.

    Just to reiterate, we followed your instructions and added the following
    in 'Hue Service' -> 'Service Wide' -> 'Hue Service Configuration Safety
    Valve for hue_safety_valve.ini':

    # Enter the host on which you are running the Hadoop JobTracker
    jobtracker_host=xxxxx.xxx.xxx.xxxx.com
    # Whether to submit jobs to this cluster
    submit_to=True

    We did not add anything else though, and I guess it will be picked up
    from hue.ini itself.

    This got us going - job browser does not give the error anymore.

    I am still a bit unclear on how the safety-valve ini works: specifically,
    whether CM modifies any actual file when the above changes are applied. I
    was expecting to find a hue_safety_valve.ini somewhere on the file system
    on the cluster, but I can't find one. However, going through Service ->
    Hue -> Hue Server -> Show (Configuration Files/Environment) ->
    hue_safety_valve.ini does show the changes we put in through CM. So for
    now it's all good; I just wish I understood what goes on behind the
    curtain a bit more.
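    For what it's worth, the safety valve is not a standalone file you edit by
    hand: CM typically generates a fresh configuration directory per process
    (under /var/run/cloudera-scm-agent/process/ in parcel installs) and appends
    the safety-valve snippet after the hue.ini it emits there, so values set
    later override earlier ones. Below is a toy sketch of that last-wins
    override idea, assuming flat key=value lines; it is not CM's actual merge
    logic, which is section-aware.

    ```python
    def effective_value(base_ini, safety_valve_ini, key):
        """Toy last-wins lookup: the safety-valve text is scanned after the
        base config, so its value for a key overrides the base value."""
        merged = base_ini + "\n" + safety_valve_ini
        value = None
        for raw in merged.splitlines():
            line = raw.strip()
            if line.startswith("#") or "=" not in line:
                continue  # skip comments and non-assignment lines
            k, _, v = line.partition("=")
            if k.strip() == key:
                value = v.strip()  # later occurrences win
        return value

    base = "jobtracker_host=localhost\nsubmit_to=False"
    valve = "jobtracker_host=xxxxx.example.com\nsubmit_to=True"
    print(effective_value(base, valve, "submit_to"))  # prints: True
    ```

    This matches what the CM UI shows: the generated hue.ini plus the safety
    valve, viewed together, are what the Hue process actually reads.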


    Thanks for your help.

    - Gaurav

    On Wednesday, March 13, 2013 1:20:30 PM UTC-4, Romain Rigaux wrote:

    I could not see a menu for configuring it, but by clicking on 'Hue
    Service', then Configure, then 'Service Wide', then 'Hue Service
    Configuration Safety Valve for hue_safety_valve.ini', you can enter
    something like the block below (updating the properties). What matters
    most is:

    # Enter the host on which you are running the Hadoop JobTracker
    jobtracker_host=localhost
    # Whether to submit jobs to this cluster
    submit_to=True




    To insert:

    [hadoop]

    # Configuration for MapReduce 0.20 JobTracker (MR1)
    # ------------------------------------------------------------------------
    [[mapred_clusters]]

    [[[default]]]
    # Enter the host on which you are running the Hadoop JobTracker
    jobtracker_host=localhost
    # The port where the JobTracker IPC listens on
    jobtracker_port=8021
    # Thrift plug-in port for the JobTracker
    ## thrift_port=9290
    # Whether to submit jobs to this cluster
    submit_to=True

    # Change this if your MapReduce cluster is Kerberos-secured
    ## security_enabled=false

    # Settings about this MR1 cluster. If you install MR1 in a
    # different location, you need to set the following.

    # Defaults to $HADOOP_MR1_HOME or /usr/lib/hadoop-0.20-mapreduce
    ## hadoop_mapred_home=/usr/lib/hadoop-0.20-mapreduce

    # Defaults to $HADOOP_BIN or /usr/bin/hadoop
    ## hadoop_bin=/usr/bin/hadoop

    # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
    ## hadoop_conf_dir=/etc/hadoop/conf
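    If Hue still cannot reach the cluster after a change like this, a quick way
    to separate a configuration problem from a connectivity problem is to probe
    the JobTracker IPC port directly. A minimal sketch, assuming the host and
    port values from the block above (adjust for your cluster):

    ```python
    import socket

    def port_open(host, port, timeout=2.0):
        """Return True if a TCP connection to host:port succeeds.
        [Errno 111] ECONNREFUSED means nothing is listening there."""
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            return sock.connect_ex((host, port)) == 0

    # Probe the JobTracker IPC port from the [[mapred_clusters]] section.
    print(port_open("localhost", 8021))
    ```

    If this prints False on the Hue host, the jobtracker_host/jobtracker_port
    values point at the wrong place, or the JobTracker is not running.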


    Romain
    On Wed, Mar 13, 2013 at 9:44 AM, Gaurav wrote:

    Hi Romain,

    Thanks for the quick reply!

    From what you wrote, I checked the hue.ini (from CM -> Hue Server, as I
    am not sure where exactly it's located on the cluster), and it seems Hue is
    looking for a YARN mapreduce, while we have disabled it on our cluster and
    are using MR1. In other words, the hue.ini does not have any details
    for [[mapred_clusters]] sub-section under [[[default]]], but has details
    for [[yarn_clusters]] section.

    I assume we will have to (re)configure Hue to use MR1 instead of Yarn,
    and the problem should go away. Let me figure that out.

    Thanks for the help!

    - Gaurav
    On Wednesday, March 13, 2013 11:29:32 AM UTC-4, Romain Rigaux wrote:

    http://localhost:8088/ws/v1/cluster/apps?user=xxxxxx

    It seems that the YARN ResourceManager API is not reachable at
    http://localhost:8088. Can you check whether the settings below are
    correct in /etc/hue/hue.ini?

    Romain

    # Configuration for YARN (MR2)
    # ------------------------------------------------------------------------
    [[yarn_clusters]]

    [[[default]]]
    # Enter the host on which you are running the ResourceManager
    resourcemanager_host=localhost
    # The port where the ResourceManager IPC listens on
    resourcemanager_port=8032
    # Whether to submit jobs to this cluster
    submit_to=True

    # URL of the ResourceManager API
    ## resourcemanager_api_url=http://localhost:8088

    # URL of the ProxyServer API
    ## proxy_api_url=http://localhost:8088

    # URL of the HistoryServer API
    history_server_api_url=http://localhost:19888
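    The ECONNREFUSED in the traceback means nothing answered at that address at
    all, so it helps to probe the ResourceManager REST API independently of
    Hue. A minimal standard-library sketch; /ws/v1/cluster/info is a standard
    YARN REST endpoint, and the base URL should match your
    resourcemanager_api_url:

    ```python
    from urllib.request import urlopen
    from urllib.error import URLError

    def rm_api_reachable(base_url="http://localhost:8088"):
        """Return True if the YARN ResourceManager REST API answers.
        A refused connection ([Errno 111]) surfaces as URLError."""
        try:
            with urlopen(base_url + "/ws/v1/cluster/info", timeout=5) as resp:
                return resp.getcode() == 200
        except URLError:
            return False

    print(rm_api_reachable())
    ```

    If this returns False from the Hue host, the problem is the address or the
    ResourceManager itself (or, as below, Hue is pointed at YARN when the
    cluster actually runs MR1).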

    On Wed, Mar 13, 2013 at 7:58 AM, Gaurav wrote:


  • Eric Rva at Jul 3, 2013 at 6:21 pm
    Hello Romain and Gaurav,

    I'm having the same problem, but I couldn't solve it with these
    instructions. Can you help me?
    The relevant defaults in my hue.ini are the following:

      [[mapred_clusters]]
         # HA support by specifying multiple configs

         [[[default]]]
           # Enter the host on which you are running the Hadoop JobTracker
           jobtracker_host=localhost
           # The port where the JobTracker IPC listens on
           jobtracker_port=8021
           # Thrift plug-in port for the JobTracker
           ## thrift_port=9290
           # Whether to submit jobs to this cluster
           ## submit_to=True

           # Change this if your MapReduce cluster is Kerberos-secured
           ## security_enabled=false

           # Settings about this MR1 cluster. If you install MR1 in a
           # different location, you need to set the following.

           # Defaults to $HADOOP_MR1_HOME or /usr/lib/hadoop-0.20-mapreduce
           ## hadoop_mapred_home=/usr/lib/hadoop-0.20-mapreduce

           # Defaults to $HADOOP_BIN or /usr/bin/hadoop
           ## hadoop_bin=/usr/bin/hadoop

           # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
           ## hadoop_conf_dir=/etc/hadoop/conf

       # Configuration for YARN (MR2)
       # ------------------------------------------------------------------------
       [[yarn_clusters]]

         [[[default]]]
           # Enter the host on which you are running the ResourceManager
           resourcemanager_host=localhost
           # The port where the ResourceManager IPC listens on
           resourcemanager_port=8032
           # Whether to submit jobs to this cluster
           ## submit_to=False

           # Change this if your YARN cluster is Kerberos-secured
           ## security_enabled=false

           # Settings about this MR2 cluster. If you install MR2 in a
           # different location, you need to set the following.

           # Defaults to $HADOOP_MR2_HOME or /usr/lib/hadoop-mapreduce
           ## hadoop_mapred_home=/usr/lib/hadoop-mapreduce

           # Defaults to $HADOOP_BIN or /usr/bin/hadoop
           ## hadoop_bin=/usr/bin/hadoop

           # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
           ## hadoop_conf_dir=/etc/hadoop/conf

           # URL of the ResourceManager API
           ## resourcemanager_api_url=http://localhost:8088

           # URL of the ProxyServer API
           ## proxy_api_url=http://localhost:8088

           # URL of the HistoryServer API
           history_server_api_url=http://localhost:19888

           # URL of the NodeManager API
           node_manager_api_url=http://localhost:8042

    On Wednesday, March 13, 2013, 12:29:32 PM UTC-3, Romain Rigaux wrote:
  • Romain Rigaux at Jul 3, 2013 at 6:42 pm
    Are you using MR1 or YARN?

    Accordingly, one of the two clusters in hue.ini should have:
    submit_to=True
    not commented out.

    Romain
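    To see at a glance which cluster Hue would submit to, you can scan hue.ini
    for uncommented submit_to lines. A minimal sketch of such a scan over the
    [[...]] groups; this is not Hue's real configobj-based parser:

    ```python
    import re

    def submit_targets(ini_text):
        """List the [[...]] cluster groups (e.g. mapred_clusters) that
        contain an uncommented submit_to=True line."""
        targets, group = [], None
        for raw in ini_text.splitlines():
            line = raw.strip()
            section = re.match(r"\[\[([^\[\]]+)\]\]$", line)
            if section:  # a [[group]] header; [[[default]]] does not match
                group = section.group(1)
                continue
            if line.startswith("#") or "=" not in line:
                continue  # skip comments (## submit_to=... is commented out)
            key, _, value = line.partition("=")
            if key.strip() == "submit_to" and value.strip().lower() == "true":
                targets.append(group)
        return targets

    sample = """
    [[mapred_clusters]]
      [[[default]]]
        submit_to=True
    [[yarn_clusters]]
      [[[default]]]
        ## submit_to=False
    """
    print(submit_targets(sample))  # prints: ['mapred_clusters']
    ```

    An empty result means neither cluster is enabled; two results mean both
    are, and Hue has to pick one.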

    On Wed, Jul 3, 2013 at 11:21 AM, wrote:

  • Eric almeida at Jul 5, 2013 at 11:47 am
    MR1


    2013/7/3 Romain Rigaux <romain@cloudera.com>
  • Romain Rigaux at Jul 8, 2013 at 4:17 pm
    OK, so did you do what I said above? Could you share your new ini file?

    Since you specified two clusters, MR1 and MR2, Hue is picking up MR2 by
    default and so failing.

    Romain

    On Fri, Jul 5, 2013 at 4:47 AM, eric almeida wrote:

  • Kongzhenzhen at Jun 19, 2013 at 6:33 am
    Hi Romain Rigaux and Gaurav,
    I think I have the same problem. I checked hue.ini and added the
    mapred_clusters section, but it didn't work.

    the hue.ini:
    [hadoop]

      # Configuration for HDFS NameNode
      # ------------------------------------------------------------------------
      [[hdfs_clusters]]

        [[[default]]]
          # Enter the filesystem uri
          fs_defaultfs=hdfs://167.52.1.42:8020

          # Change this if your HDFS cluster is Kerberos-secured
          security_enabled=true

          # Use WebHdfs/HttpFs as the communication mechanism.
          # This should be the web service root URL, such as
          # http://namenode:50070/webhdfs/v1
          webhdfs_url=http://167.52.1.42:25000/webhdfs/v1

          # Settings about this HDFS cluster. If you install HDFS in a
          # different location, you need to set the following.

          # Defaults to $HADOOP_HDFS_HOME or /usr/lib/hadoop-hdfs
          hadoop_hdfs_home=/opt/huawei/Bigdata/hadoop-2.0.1.tar/hadoop-2.0.1

          # Defaults to $HADOOP_BIN or /usr/bin/hadoop
          hadoop_bin=/opt/huawei/Bigdata/hadoop-2.0.1.tar/hadoop-2.0.1/bin/hadoop

          # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
          hadoop_conf_dir=/opt/huawei/Bigdata/etc/1_7_NameNode/

      # Configuration for MapReduce 0.20 JobTracker (MR1)
      # ------------------------------------------------------------------------
      [[mapred_clusters]]

        [[[default]]]
          # Enter the host on which you are running the Hadoop JobTracker
          jobtracker_host=167.52.1.44
          # The port where the JobTracker IPC listens on
          jobtracker_port=8021
          # Thrift plug-in port for the JobTracker
          ## thrift_port=9290

          # Whether to submit jobs to this cluster
          submit_to=true

          # Change this if your MapReduce cluster is Kerberos-secured
          security_enabled=true

          # Settings about this MR1 cluster. If you install MR1 in a
          # different location, you need to set the following.

          # Defaults to $HADOOP_MR1_HOME or /usr/lib/hadoop-0.20-mapreduce
          hadoop_mapred_home=/opt/huawei/Bigdata/hadoop-2.0.1.tar/hadoop-2.0.1

          # Defaults to $HADOOP_BIN or /usr/bin/hadoop
          hadoop_bin=/opt/huawei/Bigdata/hadoop-2.0.1.tar/hadoop-2.0.1/bin/hadoop

          # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
          hadoop_conf_dir=/opt/huawei/Bigdata/etc/1_8_ResourceManager

      # Configuration for YARN (MR2)
      # ------------------------------------------------------------------------
      [[yarn_clusters]]

        [[[default]]]
          # Enter the host on which you are running the ResourceManager
          resourcemanager_host=167.52.1.44
          # The port where the ResourceManager IPC listens on
          resourcemanager_port=8032
          # Whether to submit jobs to this cluster
          submit_to=true

          # Change this if your YARN cluster is Kerberos-secured
          security_enabled=true

          # Settings about this MR2 cluster. If you install MR2 in a
          # different location, you need to set the following.

          # Defaults to $HADOOP_MR2_HOME or /usr/lib/hadoop-mapreduce
          hadoop_mapred_home=/opt/huawei/Bigdata/hadoop-2.0.1.tar/hadoop-2.0.1

          # Defaults to $HADOOP_BIN or /usr/bin/hadoop
          hadoop_bin=/opt/huawei/Bigdata/hadoop-2.0.1.tar/hadoop-2.0.1/bin/hadoop

          # Defaults to $HADOOP_CONF_DIR or /etc/hadoop/conf
          hadoop_conf_dir=/opt/huawei/Bigdata/etc/1_8_ResourceManager

          # URL of the ResourceManager API
          resourcemanager_api_url=http://167.52.1.44:8088

          # URL of the ProxyServer API
          proxy_api_url=http://167.52.1.44:1111

          # URL of the HistoryServer
          history_server_api_url=http://167.52.1.44:19888

          # URL of the NodeManager API
          node_manager_api_url=http://167.52.1.44:8042



    the error log from runcpserver.log:

      (error 401): Traceback (most recent call last):
        File "/opt/hue/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py", line 100, in get_response
          response = callback(request, *callback_args, **callback_kwargs)
        File "/opt/hue/hue/apps/jobbrowser/src/jobbrowser/views.py", line 72, in jobs
          jobs = get_api(request.user, request.jt).get_jobs(user=request.user, username=user, state=state, text=text, retired=retired)
        File "/opt/hue/hue/apps/jobbrowser/src/jobbrowser/api.py", line 180, in get_jobs
          json = self.resource_manager_api.apps(**filters)
        File "/opt/hue/hue/desktop/libs/hadoop/src/hadoop/yarn/resource_manager_api.py", line 69, in apps
          return self._root.get('cluster/apps', params=kwargs, headers={'Accept': _JSON_CONTENT_TYPE})
        File "/opt/hue/hue/desktop/core/src/desktop/lib/rest/resource.py", line 91, in get
          return self.invoke("GET", relpath, params, headers=headers)
        File "/opt/hue/hue/desktop/core/src/desktop/lib/rest/resource.py", line 58, in invoke
          headers=headers)
        File "/opt/hue/hue/desktop/core/src/desktop/lib/rest/http_client.py", line 176, in execute
          raise self._exc_class(ex)
    RestException: <html>
    <head>

    Thanks,
    K

  • Romain Rigaux at Jun 19, 2013 at 4:12 pm
    You need to pick either YARN or MapReduce. Which one are you using? Please
    have submit_to=true in only one of [[yarn_clusters]] or
    [[mapred_clusters]].
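    [Editor's note: Romain's point can be checked mechanically. Below is a small sketch (a hypothetical helper, not part of Hue) that lists which [[...]] sections of a hue.ini carry submit_to=true; exactly one section should appear in the result:]

```python
import re

def submit_to_sections(ini_text):
    """Return the names of [[...]] sections that contain submit_to=true."""
    current = None
    enabled = []
    for raw in ini_text.splitlines():
        line = raw.strip()
        # A [[section]] header (two brackets, not the [[[default]]] level).
        m = re.match(r"\[\[([a-z_]+)\]\]$", line)
        if m:
            current = m.group(1)
        elif line.replace(" ", "") == "submit_to=true" and current:
            enabled.append(current)
    return enabled

conf = """
[[mapred_clusters]]
  [[[default]]]
    submit_to=true
[[yarn_clusters]]
  [[[default]]]
    submit_to=true
"""
print(submit_to_sections(conf))  # -> ['mapred_clusters', 'yarn_clusters']
```

    [The config posted above would print both section names, which is the misconfiguration Romain describes; set submit_to=false in the unused one.]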

    Romain

  • Kongzhenzhen at Jun 21, 2013 at 12:53 am
    Hi Romain,
    I checked and confirmed that we are using yarn_clusters. I'm trying to
    install the newest Hue 2.4.0.


Discussion Overview
group: hue-user
categories: hadoop
posted: Mar 13, '13 at 2:58p
active: Jul 8, '13 at 4:17p
posts: 14
users: 4
website: cloudera.com
irc: #hadoop
