Have some log messages designed for machine parsing, either real-time or post-mortem
------------------------------------------------------------------------------------

Key: HADOOP-6107
URL: https://issues.apache.org/jira/browse/HADOOP-6107
Project: Hadoop Common
Issue Type: Improvement
Affects Versions: 0.21.0
Reporter: Steve Loughran


Many programs take the log output of bits of Hadoop and try to parse it. Some may also put their own back end behind commons-logging, to capture the output without going via Log4J and so keep it more machine-readable.

These programs need log messages that
# are easy to parse by a regexp or other simple string parse (consider quoting values, etc)
# push out the full exception chain rather than stringify() bits of it
# stay stable across versions
# log the things the tools need to analyse: events, data volumes, errors

For these logging tools, ease of parsing, retention of data and stability over time take the edge over readability. In HADOOP-5073, Jiaqi Tan proposed marking some of the existing log events as evolving towards stability. As someone who regularly patches log messages to improve diagnostics, I see a conflict of interest here. For me, good logs are ones that help people debug their problems without anyone else helping, and if that means improving the text, so be it. Tools like Chukwa have a different need.

What to do? Some options:
# Have some messages that are designed purely for other programs to handle
# Have some logs specifically for machines, to which we log alongside the human-centric messages
# Fix many of the common messages, then leave them alone.
# Mark log messages to be left alone (somehow)
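As an illustration of the first two options, a machine-oriented message could quote every value so a simple regexp can split fields even when they contain spaces. This is only a sketch; the class and method names are hypothetical, not Hadoop APIs:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MachineLog {
    // Builds a line like: event="task.failed" task="attempt_..." reason="..."
    static String format(String event, Map<String, String> fields) {
        StringBuilder sb = new StringBuilder();
        sb.append("event=\"").append(escape(event)).append('"');
        for (Map.Entry<String, String> e : fields.entrySet()) {
            sb.append(' ').append(e.getKey()).append("=\"")
              .append(escape(e.getValue())).append('"');
        }
        return sb.toString();
    }

    // Escape backslashes and quotes so the quoting stays unambiguous.
    static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    public static void main(String[] args) {
        Map<String, String> f = new LinkedHashMap<>();
        f.put("task", "attempt_200906251314_0002_r_000001_0");
        f.put("reason", "Shuffle Error: exceeded limit");
        System.out.println(format("task.failed", f));
    }
}
```

A parser then only needs one pattern (`(\w+)="((?:[^"\\]|\\.)*)"`), and the human-facing message can keep evolving separately.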

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

Search Discussions

  • Steve Loughran (JIRA) at Jun 25, 2009 at 1:01 pm
    [ https://issues.apache.org/jira/browse/HADOOP-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12724076#action_12724076 ]

    Steve Loughran commented on HADOOP-6107:
    ----------------------------------------

As examples of the problem, here are some client-side logs:

{code}
    [java] 09/06/25 13:41:07 WARN mapred.JobClient: Error reading task outputConnection refused
    [java] 09/06/25 13:41:07 WARN mapred.JobClient: Error reading task outputConnection refused
    [java] 09/06/25 13:41:10 INFO mapred.JobClient: Task Id : attempt_200906251314_0002_r_000001_0, Status : FAILED
    [java] Shuffle Error: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
    [java] 09/06/25 13:41:10 WARN mapred.JobClient: Error reading task outputConnection refused
    [java] 09/06/25 13:41:10 WARN mapred.JobClient: Error reading task outputConnection refused
    [java] 09/06/25 13:44:07 INFO mapred.JobClient: Task Id : attempt_200906251314_0002_m_000004_0, Status : FAILED
    [java] Too many fetch-failures
    [java] 09/06/25 13:44:07 WARN mapred.JobClient: Error reading task outputConnection refused
    [java] 09/06/25 13:44:07 WARN mapred.JobClient: Error reading task outputConnection refused
    [java] 09/06/25 13:44:11 INFO mapred.JobClient: map 83% reduce 0%
    [java] 09/06/25 13:44:14 INFO mapred.JobClient: map 100% reduce 0%
    [java] 09/06/25 13:49:23 INFO mapred.JobClient: Task Id : attempt_200906251314_0002_m_000005_0, Status : FAILED
    [java] Too many fetch-failures
    [java] 09/06/25 13:49:23 WARN mapred.JobClient: Error reading task outputConnection refused
    [java] 09/06/25 13:49:23 WARN mapred.JobClient: Error reading task outputConnection refused
    [java] 09/06/25 13:49:27 INFO mapred.JobClient: map 83% reduce 0%
    {code}

    # bad spacing in the " Error reading task outputConnection refused" message.
# not enough context as to why the connection was being refused: need to include the (hostname, port) details - which would change the message and break Chukwa
    # no stack trace in the connection refused message
# not enough context in the JobClient messages; if more than one job is running simultaneously, you can't determine which job the map and reduce percentages refer to
    # The shuffle error doesn't actually say what the MAX_FAILED_UNIQUE_FETCHES value is.
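Point 3 above, the missing stack trace, loses the cause chain entirely: "Connection refused" arrives with no hint of what was being connected to. A sketch of logging the full chain instead of just the innermost message (illustrative names, not Hadoop code):

```java
public class ExceptionChain {
    // Renders every cause in the chain, so "Connection refused" arrives
    // together with the IOException (and host:port) that wrapped it.
    static String chain(Throwable t) {
        StringBuilder sb = new StringBuilder();
        for (Throwable c = t; c != null; c = c.getCause()) {
            if (sb.length() > 0) sb.append(" <- ");
            sb.append(c.getClass().getSimpleName())
              .append(": ").append(c.getMessage());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Throwable root = new java.net.ConnectException("Connection refused");
        Throwable wrapped = new java.io.IOException(
            "Error reading task output from node3:50060", root);
        System.out.println(chain(wrapped));
        // IOException: Error reading task output from node3:50060 <- ConnectException: Connection refused
    }
}
```

For machine consumers, each link in the chain could additionally be emitted as its own quoted field rather than one concatenated string.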
  • Steve Loughran (JIRA) at Jun 25, 2009 at 1:07 pm
    [ https://issues.apache.org/jira/browse/HADOOP-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12724078#action_12724078 ]

    Steve Loughran commented on HADOOP-6107:
    ----------------------------------------

And on the server side, similar issues:

    {code}
    09/06/25 13:54:26 INFO datanode.DataNode: BlockReport of 8 blocks got processed in 1 msecs
    09/06/25 13:54:26 INFO mapred.TaskTracker: attempt_200906251314_0002_r_000001_1 0.0% reduce > copy >
    09/06/25 13:54:29 INFO mapred.TaskTracker: attempt_200906251314_0002_r_000001_1 0.0% reduce > copy >
    09/06/25 13:54:29 INFO mapred.TaskTracker: attempt_200906251314_0002_r_000000_1 0.0% reduce > copy >
    09/06/25 13:54:32 INFO mapred.TaskTracker: attempt_200906251314_0002_r_000000_1 0.0% reduce > copy >
    09/06/25 13:54:35 INFO mapred.TaskTracker: attempt_200906251314_0002_r_000001_1 0.0% reduce > copy >
    09/06/25 13:54:36 INFO datanode.DataNode: BlockReport of 8 blocks got processed in 2 msecs
    09/06/25 13:54:38 INFO mapred.JobInProgress: Failed fetch notification #3 for task attempt_200906251314_0002_m_000000_1
    09/06/25 13:54:38 INFO mapred.JobInProgress: Too many fetch-failures for output of task: attempt_200906251314_0002_m_000000_1 ... killing it
    09/06/25 13:54:38 INFO mapred.TaskInProgress: Error from attempt_200906251314_0002_m_000000_1: Too many fetch-failures
    {code}
The task tracker lists attempt IDs, but nothing includes any DataNode or TaskTracker identity information. If you are aggregating the logs you need that, which means you have to attach extra data to every log event that comes in, and you have to track all their values. Aggregation is simpler if the machine-level events include more sender info.
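One way to sketch that sender info: stamp every event with the emitting daemon's service name, host and port at the source, so aggregated streams stay attributable without the collector tracking per-host state. The names below are illustrative, not Hadoop classes:

```java
public class SenderContext {
    // Prefix each message with the emitting daemon's identity.
    static String stamp(String service, String host, int port, String msg) {
        return String.format("[%s@%s:%d] %s", service, host, port, msg);
    }

    public static void main(String[] args) {
        System.out.println(stamp("tasktracker", "node7.example.org", 50060,
            "attempt_200906251314_0002_r_000001_1 0.0% reduce > copy >"));
        // [tasktracker@node7.example.org:50060] attempt_200906251314_0002_r_000001_1 0.0% reduce > copy >
    }
}
```

In a Log4J deployment the same effect could presumably be had with the MDC (set the host/service keys once at daemon startup, reference them in the layout pattern) rather than editing every log call.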

  • Todd Lipcon (JIRA) at Jun 25, 2009 at 2:22 pm
    [ https://issues.apache.org/jira/browse/HADOOP-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12724104#action_12724104 ]

    Todd Lipcon commented on HADOOP-6107:
    -------------------------------------

    In other systems, I've done this by using Thrift to serialize the logs on disk. This allows for backward compatible evolution of the contents of the log messages, and is a much more compact format even without compression. We could consider Avro for this down the road.

    The obvious disadvantage is that text logs are way more useful than binary ones. Having binary logs absolutely requires that there is a very good tool for converting them to a readable form, as well as grepping through them.
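The backward-compatible evolution Thrift gives you comes from self-describing field framing: a reader built against an older schema can skip fields it does not know. A toy illustration of that property (not Thrift itself; record layout and names invented here):

```java
import java.io.*;

public class TaggedRecord {
    // Writer: a field count up front; new versions append fields at the end.
    static byte[] write(String[] fields) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        out.writeInt(fields.length);
        for (String f : fields) out.writeUTF(f);
        return bos.toByteArray();
    }

    // Reader expecting only `known` fields; extra fields are simply ignored.
    static String[] read(byte[] data, int known) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        int n = in.readInt();
        String[] out = new String[known];
        for (int i = 0; i < Math.min(n, known); i++) out[i] = in.readUTF();
        return out;
    }

    public static void main(String[] args) throws IOException {
        // A "v2" writer emits three fields; a "v1" reader knows only two.
        byte[] rec = write(new String[] {"task.failed",
            "attempt_200906251314_0002_m_000004_0", "fetch-failures=4"});
        String[] v1 = read(rec, 2);
        System.out.println(v1[0] + " " + v1[1]);
    }
}
```

Thrift and Avro do this properly (per-field tags or a reader/writer schema pair), which is what makes the on-disk log format safe to evolve.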
  • Steve Loughran (JIRA) at Jun 26, 2009 at 1:43 pm
    [ https://issues.apache.org/jira/browse/HADOOP-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12724536#action_12724536 ]

    Steve Loughran commented on HADOOP-6107:
    ----------------------------------------

Sounds like you need a way of having different plug-ins for monitoring, or a Thrift back-end to Log4J.
  • Todd Lipcon (JIRA) at Jun 29, 2009 at 9:13 pm
    [ https://issues.apache.org/jira/browse/HADOOP-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12725351#action_12725351 ]

    Todd Lipcon commented on HADOOP-6107:
    -------------------------------------

It's not so much a Thrift back-end for Log4J - the issue is that the log messages themselves are unstructured. If you're logging Strings, you're faced with a parsing challenge. If you're logging some kind of Event objects, you can choose to output strings or structured data.
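A minimal sketch of that Event-object idea: the event carries typed fields, and rendering to human text or to a stable machine record becomes a separate, swappable step. Class and field names here are hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LogEvent {
    final String name;
    final Map<String, Object> fields = new LinkedHashMap<>();

    LogEvent(String name) { this.name = name; }
    LogEvent with(String k, Object v) { fields.put(k, v); return this; }

    // Human-readable rendering: free to change between versions.
    String toText() { return name + " " + fields; }

    // Machine rendering: stable keys, quoted values.
    String toRecord() {
        StringBuilder sb = new StringBuilder(name);
        for (Map.Entry<String, Object> e : fields.entrySet())
            sb.append(' ').append(e.getKey()).append("=\"")
              .append(e.getValue()).append('"');
        return sb.toString();
    }

    public static void main(String[] args) {
        LogEvent ev = new LogEvent("fetch.failed")
            .with("task", "attempt_200906251314_0002_m_000000_1")
            .with("failures", 3);
        System.out.println(ev.toRecord());
        // fetch.failed task="attempt_200906251314_0002_m_000000_1" failures="3"
    }
}
```

With events as objects, the same stream could just as easily be serialized with Thrift or Avro for the binary case discussed above.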

Discussion Overview
group: common-dev
categories: hadoop
posted: Jun 25, '09 at 12:57p
active: Jun 29, '09 at 9:13p
posts: 7
website: hadoop.apache.org...
irc: #hadoop
