Hi,

I've suddenly started getting errors when trying to view task logs through the Hue Job Browser, e.g. at:
http://node11.lol.ru:8888/jobbrowser/jobs/job_201303201339_0025/single_logs

Here is the relevant excerpt from the Hue log, ending in the stack trace:
[01/Apr/2013 07:05:40 +0000] access INFO 10.66.49.134 hdfs - "GET
/debug/check_config_ajax HTTP/1.0"
[01/Apr/2013 07:05:43 +0000] access INFO 10.66.49.134 hdfs - "GET
/jobbrowser/jobs/job_201303201339_0025/tasks/task_201303201339_0025_r_000164
HTTP/1.0"
[01/Apr/2013 07:05:43 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString=u'job_201303201339_0025',
jobTrackerID=u'201303201339', jobID=25)), kwargs={})
[01/Apr/2013 07:05:43 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob returned in 53ms:
ThriftJobInProgress(profile=ThriftJobProfile(jobFile='hdfs://prod-node015.lol.ru:8020/user/hdfs/.staging/job_201303201339_0025/job.xml',
queueName='default', user='hdfs',
name='oozie:action:T=map-reduce:W=Url-rating-subworkflow:A=Url-rating-subworkflow-run:ID=0000021-130320135309911-oozie-oozi-W',
jobID=ThriftJobID(asString='job_201303201339_0025',
jobTrackerID='201303201339', jobID=25)),
status=ThriftJobStatus(cleanupProgress=1.0, reduceProgress=1.0, runState=2,
jobID=ThriftJobID(asString='job_201303201339_0025',
jobTrackerID='201303201339', jobID=25), priority=2, user='hdfs',
startTime=1364825038848, setupProgress=1.0, mapProgress=1.0,
schedulingInfo='NA'), tasks=ThriftTaskInProgressList(numTotalTasks=205,
tasks=[ThriftTaskInProgress(runningAttempts=[],
taskStatuses={'attempt_201303201339_0025_m_000035_0':
ThriftTaskStatus(finishTime=1364825088030, stateString='cleanup',
startTime=1364825085984, sortFinishTime=0,
taskTracker='tracker_prod-node014.lol.ru:localhost/127....
[01/Apr/2013 07:05:43 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getTask(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftTaskID(asString=None, taskType=1, taskID=164,
jobID=ThriftJobID(asString=None, jobTrackerID=u'201303201339', jobID=25))),
kwargs={})
[01/Apr/2013 07:05:43 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getTask returned in 6ms:
ThriftTaskInProgress(runningAttempts=[],
taskStatuses={'attempt_201303201339_0025_r_000164_0':
ThriftTaskStatus(finishTime=1364825078834, stateString='reduce > reduce',
startTime=1364825069066, sortFinishTime=1364825077527,
taskTracker='tracker_prod-node029.lol.ru:localhost/127.0.0.1:43079',
state=1, shuffleFinishTime=1364825076674, mapFinishTime=0,
taskID=ThriftTaskAttemptID(asString='attempt_201303201339_0025_r_000164_0',
attemptID=0,
taskID=ThriftTaskID(asString='task_201303201339_0025_r_000164', taskType=1,
taskID=164, jobID=ThriftJobID(asString='job_201303201339_0025',
jobTrackerID='201303201339', jobID=25))), diagnosticInfo='', phase=4,
progress=1.0, outputSize=-1,
counters=ThriftGroupList(groups=[ThriftCounterGroup(displayName='File
System Counters', name='org.apache.hadoop.mapreduce.FileSystemCounter',
counters={'FILE: Number of bytes read': ThriftCounter(displayName='FILE:
Number of bytes read', name='FILE_BYTES_READ', value=20), 'HDFS: Number of
write operations': Thr...
[01/Apr/2013 07:05:44 +0000] access INFO 10.66.49.134 hdfs - "GET
/debug/check_config_ajax HTTP/1.0"
[01/Apr/2013 07:05:54 +0000] middleware DEBUG No desktop_app known for
request.
[01/Apr/2013 07:05:54 +0000] access INFO 10.66.49.134 hdfs - "GET
/jobbrowser/ HTTP/1.0"
[01/Apr/2013 07:05:54 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getAllJobs(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}),), kwargs={})
[01/Apr/2013 07:05:54 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getAllJobs returned in 6ms:
ThriftJobList(jobs=[ThriftJobInProgress(profile=ThriftJobProfile(jobFile='hdfs://prod-node015.lol.ru:8020/user/devops/.staging/job_201303201339_0002/job.xml',
queueName='default', user='devops',
name='oozie:action:T=map-reduce:W=Url-rating-subworkflow:A=Url-rating-subworkflow-run:ID=0000006-130320135309911-oozie-oozi-W',
jobID=ThriftJobID(asString='job_201303201339_0002',
jobTrackerID='201303201339', jobID=2)),
status=ThriftJobStatus(cleanupProgress=1.0, reduceProgress=1.0, runState=3,
jobID=ThriftJobID(asString='job_201303201339_0002',
jobTrackerID='201303201339', jobID=2), priority=2, user='devops',
startTime=1364819345925, setupProgress=1.0, mapProgress=1.0,
schedulingInfo='NA'), tasks=None, desiredMaps=18, desiredReduces=168,
finishedMaps=0, finishedReduces=0,
jobID=ThriftJobID(asString='job_201303201339_0002',
jobTrackerID='201303201339', jobID=2), priority=2,
launchTime=1364819346297, startTime=1364819345925,
finishTime=1364819398372), ThriftJobInProgress(profile=ThriftJo...
[01/Apr/2013 07:05:55 +0000] access INFO 10.66.49.134 hdfs - "GET
/debug/check_config_ajax HTTP/1.0"
[01/Apr/2013 07:05:55 +0000] access DEBUG 10.66.49.134 hdfs - "GET
/static/art/datatables/sort_desc.png HTTP/1.0"
[01/Apr/2013 07:06:06 +0000] access INFO 10.66.49.134 hdfs - "GET
/jobbrowser/jobs/job_201303201339_0022 HTTP/1.0"
[01/Apr/2013 07:06:06 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString=u'job_201303201339_0022',
jobTrackerID=u'201303201339', jobID=22)), kwargs={})
[01/Apr/2013 07:06:06 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob returned in 32ms:
ThriftJobInProgress(profile=ThriftJobProfile(jobFile='hdfs://prod-node015.lol.ru:8020/user/hdfs/.staging/job_201303201339_0022/job.xml',
queueName='default', user='hdfs',
name='oozie:action:T=map-reduce:W=Url-rating-subworkflow:A=Url-rating-subworkflow-run:ID=0000020-130320135309911-oozie-oozi-W',
jobID=ThriftJobID(asString='job_201303201339_0022',
jobTrackerID='201303201339', jobID=22)),
status=ThriftJobStatus(cleanupProgress=1.0, reduceProgress=1.0, runState=2,
jobID=ThriftJobID(asString='job_201303201339_0022',
jobTrackerID='201303201339', jobID=22), priority=2, user='hdfs',
startTime=1364824738956, setupProgress=1.0, mapProgress=1.0,
schedulingInfo='NA'), tasks=ThriftTaskInProgressList(numTotalTasks=188,
tasks=[ThriftTaskInProgress(runningAttempts=[],
taskStatuses={'attempt_201303201339_0022_m_000018_0':
ThriftTaskStatus(finishTime=1364824794989, stateString='cleanup',
startTime=1364824793165, sortFinishTime=0,
taskTracker='tracker_prod-node034.lol.ru:localhost/127....
[01/Apr/2013 07:06:06 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJobConfXML(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString='job_201303201339_0022',
jobTrackerID='201303201339', jobID=22)), kwargs={})
[01/Apr/2013 07:06:06 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJobConfXML returned in 4ms:
'<?xml version="1.0" encoding="UTF-8"
standalone="no"?><configuration>\n<property><name>mapred.job.restart.recover</name><value>true</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201303201339_0022.xml</source></property>\n<property><name>job.end.retry.interval</name><value>30000</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201303201339_0022.xml</source></property>\n<property><name>mapred.job.tracker.retiredjobs.cache.size</name><value>1000</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201303201339_0022.xml</source></property>\n<property><name>mapred.queue.default.acl-administer-jobs</name><value>*</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201303201339_0022.xml</source></property>\n<property><name>dfs.image.transfer.bandwidthPerSec</name><value>0</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201...
[01/Apr/2013 07:06:06 +0000] http_client DEBUG GET
http://prod-node015.lol.ru:50070/webhdfs/v1/staging/landing/source/protei/http/2013/03/27/01?op=GETFILESTATUS&user.name=hue&doas=hdfs
[01/Apr/2013 07:06:06 +0000] resource DEBUG GET Got response:
{"FileStatus":{"accessTime":0,"b...
[01/Apr/2013 07:06:06 +0000] http_client DEBUG GET
http://prod-node015.lol.ru:50070/webhdfs/v1/masterdata/source/protei/http/archive/2013/03/27/01?op=GETFILESTATUS&user.name=hue&doas=hdfs
[01/Apr/2013 07:06:06 +0000] resource DEBUG GET Got response:
{"FileStatus":{"accessTime":0,"b...
[01/Apr/2013 07:06:06 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJobCounterRollups(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString='job_201303201339_0022',
jobTrackerID='201303201339', jobID=22)), kwargs={})
[01/Apr/2013 07:06:06 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJobCounterRollups returned in
18ms:
ThriftJobCounterRollups(reduceCounters=ThriftGroupList(groups=[ThriftCounterGroup(displayName='File
System Counters', name='org.apache.hadoop.mapreduce.FileSystemCounter',
counters={'FILE: Number of bytes read': ThriftCounter(displayName='FILE:
Number of bytes read', name='FILE_BYTES_READ', value=3360), 'HDFS: Number
of write operations': ThriftCounter(displayName='HDFS: Number of write
operations', name='HDFS_WRITE_OPS', value=168), 'FILE: Number of read
operations': ThriftCounter(displayName='FILE: Number of read operations',
name='FILE_READ_OPS', value=0), 'HDFS: Number of bytes read':
ThriftCounter(displayName='HDFS: Number of bytes read',
name='HDFS_BYTES_READ', value=0), 'HDFS: Number of read operations':
ThriftCounter(displayName='HDFS: Number of read operations',
name='HDFS_READ_OPS', value=21), 'FILE: Number of bytes written':
ThriftCounter(displayName='FILE: Number of bytes written',
name='FILE_BYTES_WRITTEN', value=29347957), 'HDFS: Number of large read
operations': ThriftCo...
[01/Apr/2013 07:06:07 +0000] access INFO 10.66.49.134 hdfs - "GET
/debug/check_config_ajax HTTP/1.0"
[01/Apr/2013 07:06:16 +0000] access INFO 10.66.49.134 hdfs - "GET
/jobbrowser/jobs/job_201303201339_0022/tasks/task_201303201339_0022_r_000167/attempts/attempt_201303201339_0022_r_000167_0/logs
HTTP/1.0"
[01/Apr/2013 07:06:16 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString=u'job_201303201339_0022',
jobTrackerID=u'201303201339', jobID=22)), kwargs={})
[01/Apr/2013 07:06:16 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob returned in 53ms:
ThriftJobInProgress(profile=ThriftJobProfile(jobFile='hdfs://prod-node015.lol.ru:8020/user/hdfs/.staging/job_201303201339_0022/job.xml',
queueName='default', user='hdfs',
name='oozie:action:T=map-reduce:W=Url-rating-subworkflow:A=Url-rating-subworkflow-run:ID=0000020-130320135309911-oozie-oozi-W',
jobID=ThriftJobID(asString='job_201303201339_0022',
jobTrackerID='201303201339', jobID=22)),
status=ThriftJobStatus(cleanupProgress=1.0, reduceProgress=1.0, runState=2,
jobID=ThriftJobID(asString='job_201303201339_0022',
jobTrackerID='201303201339', jobID=22), priority=2, user='hdfs',
startTime=1364824738956, setupProgress=1.0, mapProgress=1.0,
schedulingInfo='NA'), tasks=ThriftTaskInProgressList(numTotalTasks=188,
tasks=[ThriftTaskInProgress(runningAttempts=[],
taskStatuses={'attempt_201303201339_0022_m_000018_0':
ThriftTaskStatus(finishTime=1364824794989, stateString='cleanup',
startTime=1364824793165, sortFinishTime=0,
taskTracker='tracker_prod-node034.lol.ru:localhost/127....
[01/Apr/2013 07:06:16 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getTask(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftTaskID(asString=None, taskType=1, taskID=167,
jobID=ThriftJobID(asString=None, jobTrackerID=u'201303201339', jobID=22))),
kwargs={})
[01/Apr/2013 07:06:16 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getTask returned in 6ms:
ThriftTaskInProgress(runningAttempts=[],
taskStatuses={'attempt_201303201339_0022_r_000167_0':
ThriftTaskStatus(finishTime=1364824779167, stateString='reduce > reduce',
startTime=1364824766546, sortFinishTime=1364824777715,
taskTracker='tracker_prod-node014.lol.ru:localhost/127.0.0.1:47833',
state=1, shuffleFinishTime=1364824777120, mapFinishTime=0,
taskID=ThriftTaskAttemptID(asString='attempt_201303201339_0022_r_000167_0',
attemptID=0,
taskID=ThriftTaskID(asString='task_201303201339_0022_r_000167', taskType=1,
taskID=167, jobID=ThriftJobID(asString='job_201303201339_0022',
jobTrackerID='201303201339', jobID=22))), diagnosticInfo='', phase=4,
progress=1.0, outputSize=-1,
counters=ThriftGroupList(groups=[ThriftCounterGroup(displayName='File
System Counters', name='org.apache.hadoop.mapreduce.FileSystemCounter',
counters={'FILE: Number of bytes read': ThriftCounter(displayName='FILE:
Number of bytes read', name='FILE_BYTES_READ', value=20), 'HDFS: Number of
write operations': Thr...
[01/Apr/2013 07:06:16 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getTracker(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), 'tracker_prod-node014.lol.ru:localhost/127.0.0.1:47833'),
kwargs={})
[01/Apr/2013 07:06:16 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getTracker returned in 1ms:
ThriftTaskTrackerStatus(taskReports=None, availableSpace=2041132032000,
totalVirtualMemory=139483734016, failureCount=32, httpPort=50060,
host='prod-node014.lol.ru', totalPhysicalMemory=135290486784,
reduceCount=0, lastSeen=1364825174380,
trackerName='tracker_prod-node014.lol.ru:localhost/127.0.0.1:47833',
mapCount=0, maxReduceTasks=16, maxMapTasks=32)
[01/Apr/2013 07:06:16 +0000] models INFO Retrieving
http://prod-node014.lol.ru:50060/tasklog?attemptid=attempt_201303201339_0022_r_000167_0
[01/Apr/2013 07:06:16 +0000] middleware INFO Processing exception:
Unexpected end tag : td, line 7, column 12: Traceback (most recent call
last):
File
"/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/build/env/lib/python2.6/site-packages/Django-1.2.3-py2.6.egg/django/core/handlers/base.py",
line 100, in get_response
response = callback(request, *callback_args, **callback_kwargs)
File
"/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/apps/jobbrowser/src/jobbrowser/views.py",
line 62, in decorate
return view_func(request, *args, **kwargs)
File
"/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/apps/jobbrowser/src/jobbrowser/views.py",
line 290, in single_task_attempt_logs
logs += [ section.strip() for section in attempt.get_task_log() ]
File
"/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/apps/jobbrowser/src/jobbrowser/models.py",
line 451, in get_task_log
et = lxml.html.parse(data)
File
"/opt/cloudera/parcels/CDH-4.2.0-1.cdh4.2.0.p0.10/share/hue/build/env/lib/python2.6/site-packages/lxml-2.2.2-py2.6-linux-x86_64.egg/lxml/html/__init__.py",
line 661, in parse
return etree.parse(filename_or_url, parser, base_url=base_url, **kw)
File "lxml.etree.pyx", line 2698, in lxml.etree.parse
(src/lxml/lxml.etree.c:49590)
File "parser.pxi", line 1513, in lxml.etree._parseDocument
(src/lxml/lxml.etree.c:71423)
File "parser.pxi", line 1543, in lxml.etree._parseFilelikeDocument
(src/lxml/lxml.etree.c:71733)
File "parser.pxi", line 1426, in lxml.etree._parseDocFromFilelike
(src/lxml/lxml.etree.c:70648)
File "parser.pxi", line 997, in
lxml.etree._BaseParser._parseDocFromFilelike (src/lxml/lxml.etree.c:67944)
File "parser.pxi", line 539, in
lxml.etree._ParserContext._handleParseResultDoc
(src/lxml/lxml.etree.c:63820)
File "parser.pxi", line 625, in lxml.etree._handleParseResult
(src/lxml/lxml.etree.c:64741)
File "parser.pxi", line 565, in lxml.etree._raiseParseError
(src/lxml/lxml.etree.c:64084)
XMLSyntaxError: Unexpected end tag : td, line 7, column 12
[01/Apr/2013 07:06:16 +0000] access INFO 10.66.49.134 hdfs - "GET
/debug/check_config_ajax HTTP/1.0"
[01/Apr/2013 07:06:20 +0000] access WARNING 10.66.49.134 hdfs - "GET
/logs HTTP/1.0"
[01/Apr/2013 07:06:31 +0000] access INFO 10.66.49.134 hdfs - "GET
/debug/check_config_ajax HTTP/1.0"
[01/Apr/2013 07:14:18 +0000] access INFO 10.66.49.134 hdfs - "GET
/jobbrowser/jobs/job_201303201339_0025 HTTP/1.0"
[01/Apr/2013 07:14:18 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString=u'job_201303201339_0025',
jobTrackerID=u'201303201339', jobID=25)), kwargs={})
[01/Apr/2013 07:14:18 +0000] thrift_util INFO Thrift exception;
retrying: None
[01/Apr/2013 07:14:18 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString=u'job_201303201339_0025',
jobTrackerID=u'201303201339', jobID=25)), kwargs={})
[01/Apr/2013 07:14:19 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJob returned in 57ms:
ThriftJobInProgress(profile=ThriftJobProfile(jobFile='hdfs://prod-node015.lol.ru:8020/user/hdfs/.staging/job_201303201339_0025/job.xml',
queueName='default', user='hdfs',
name='oozie:action:T=map-reduce:W=Url-rating-subworkflow:A=Url-rating-subworkflow-run:ID=0000021-130320135309911-oozie-oozi-W',
jobID=ThriftJobID(asString='job_201303201339_0025',
jobTrackerID='201303201339', jobID=25)),
status=ThriftJobStatus(cleanupProgress=1.0, reduceProgress=1.0, runState=2,
jobID=ThriftJobID(asString='job_201303201339_0025',
jobTrackerID='201303201339', jobID=25), priority=2, user='hdfs',
startTime=1364825038848, setupProgress=1.0, mapProgress=1.0,
schedulingInfo='NA'), tasks=ThriftTaskInProgressList(numTotalTasks=205,
tasks=[ThriftTaskInProgress(runningAttempts=[],
taskStatuses={'attempt_201303201339_0025_m_000035_0':
ThriftTaskStatus(finishTime=1364825088030, stateString='cleanup',
startTime=1364825085984, sortFinishTime=0,
taskTracker='tracker_prod-node014.lol.ru:localhost/127....
[01/Apr/2013 07:14:19 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJobConfXML(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString='job_201303201339_0025',
jobTrackerID='201303201339', jobID=25)), kwargs={})
[01/Apr/2013 07:14:19 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJobConfXML returned in 4ms:
'<?xml version="1.0" encoding="UTF-8"
standalone="no"?><configuration>\n<property><name>mapred.job.restart.recover</name><value>true</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201303201339_0025.xml</source></property>\n<property><name>job.end.retry.interval</name><value>30000</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201303201339_0025.xml</source></property>\n<property><name>mapred.job.tracker.retiredjobs.cache.size</name><value>1000</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201303201339_0025.xml</source></property>\n<property><name>mapred.queue.default.acl-administer-jobs</name><value>*</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201303201339_0025.xml</source></property>\n<property><name>dfs.image.transfer.bandwidthPerSec</name><value>0</value><source>programatically</source><source>/data/disk0/mapred/jt/jobTracker/job_201...
[01/Apr/2013 07:14:19 +0000] http_client DEBUG GET
http://prod-node015.lol.ru:50070/webhdfs/v1/staging/landing/source/protei/http/2013/03/27/02?op=GETFILESTATUS&user.name=hue&doas=hdfs
[01/Apr/2013 07:14:19 +0000] resource DEBUG GET Got response:
{"FileStatus":{"accessTime":0,"b...
[01/Apr/2013 07:14:19 +0000] http_client DEBUG GET
http://prod-node015.lol.ru:50070/webhdfs/v1/masterdata/source/protei/http/archive/2013/03/27/02?op=GETFILESTATUS&user.name=hue&doas=hdfs
[01/Apr/2013 07:14:19 +0000] resource DEBUG GET Got response:
{"FileStatus":{"accessTime":0,"b...
[01/Apr/2013 07:14:19 +0000] thrift_util DEBUG Thrift call: <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJobCounterRollups(args=(RequestContext(confOptions={'effective_user':
u'hdfs'}), ThriftJobID(asString='job_201303201339_0025',
jobTrackerID='201303201339', jobID=25)), kwargs={})
[01/Apr/2013 07:14:19 +0000] thrift_util DEBUG Thrift call <class
'hadoop.api.jobtracker.Jobtracker.Client'>.getJobCounterRollups returned in
19ms:
ThriftJobCounterRollups(reduceCounters=ThriftGroupList(groups=[ThriftCounterGroup(displayName='File
System Counters', name='org.apache.hadoop.mapreduce.FileSystemCounter',
counters={'FILE: Number of bytes read': ThriftCounter(displayName='FILE:
Number of bytes read', name='FILE_BYTES_READ', value=3360), 'HDFS: Number
of write operations': ThriftCounter(displayName='HDFS: Number of write
operations', name='HDFS_WRITE_OPS', value=168), 'FILE: Number of read
operations': ThriftCounter(displayName='FILE: Number of read operations',
name='FILE_READ_OPS', value=0), 'HDFS: Number of bytes read':
ThriftCounter(displayName='HDFS: Number of bytes read',
name='HDFS_BYTES_READ', value=0), 'HDFS: Number of read operations':
ThriftCounter(displayName='HDFS: Number of read operations',
name='HDFS_READ_OPS', value=103), 'FILE: Number of bytes written':
ThriftCounter(displayName='FILE: Number of bytes written',
name='FILE_BYTES_WRITTEN', value=29347909), 'HDFS: Number of large read
operations': ThriftC...
[01/Apr/2013 07:14:19 +0000] access INFO 10.66.49.134 hdfs - "GET
/debug/check_config_ajax HTTP/1.0"
[01/Apr/2013 07:15:25 +0000] access WARNING 10.66.49.134 hdfs - "GET
/logs HTTP/1.0"
[01/Apr/2013 07:15:33 +0000] access WARNING 10.66.49.134 hdfs - "GET
/download_logs HTTP/1.0"
What does this error mean? Until a few hours ago I could view MapReduce task logs through the Hue interface without any problem.
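For context on what is failing: the traceback shows Hue fetching the TaskTracker's /tasklog HTML page and parsing it with lxml.html.parse() in jobbrowser/models.py, which raises XMLSyntaxError on a stray </td>, so the page the TaskTracker returned is apparently malformed or truncated. A quick way to inspect what came back is to fetch the same URL from the log (http://prod-node014.lol.ru:50060/tasklog?attemptid=attempt_201303201339_0022_r_000167_0) and pull out the <pre> sections with a lenient parser. This is only a diagnostic sketch, not Hue's actual code: the sample HTML, the PreExtractor class, and extract_pre_sections() are my own illustrative names, and it uses modern Python's stdlib html.parser rather than the lxml version bundled with CDH.

```python
from html.parser import HTMLParser


class PreExtractor(HTMLParser):
    """Collect the text inside <pre> blocks, tolerating malformed HTML
    (e.g. the kind of stray '</td>' end tag that made lxml raise above)."""

    def __init__(self):
        super().__init__()
        self.in_pre = False
        self.sections = []
        self._buf = []

    def handle_starttag(self, tag, attrs):
        if tag == "pre":
            self.in_pre = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "pre" and self.in_pre:
            self.in_pre = False
            self.sections.append("".join(self._buf))

    def handle_data(self, data):
        if self.in_pre:
            self._buf.append(data)


def extract_pre_sections(html_text):
    """Return the text of every <pre> block, even from broken markup."""
    parser = PreExtractor()
    parser.feed(html_text)
    return parser.sections


# Illustrative (made-up) malformed tasklog page: the stray </td> on the
# second line is the sort of markup that breaks a strict XML/HTML parse.
sample = """<html><body>
<b>stdout</b></td>
<pre>2013-04-01 07:06:00 INFO reduce task started</pre>
<b>stderr</b>
<pre></pre>
</body></html>"""

print(extract_pre_sections(sample))
# → ['2013-04-01 07:06:00 INFO reduce task started', '']
```

In practice you would feed it the body of the tasklog URL from the log above; if the extracted sections are empty or the raw page is visibly truncated, the problem is on the TaskTracker side (the log page it serves), not in Hue itself.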