Task was killed due to running over 600 sec
Hi,

Some of my map tasks are killed by the tracker with the error "Task task_200801251420_0007_m_000006_0 failed to report status for 601 seconds. Killing!"

My map task basically copies a large file, so I don't have much to report during the process. How can I prevent the task tracker from killing my task?

Thanks,

Rui




  • Lohit Vijayarenu at Jan 29, 2008 at 12:23 am
    You could try setting mapred.task.timeout to a higher value.
    Thanks,
    Lohit
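
    A minimal sketch of that approach through the old JobConf API (the job
    class name here is hypothetical; the value is in milliseconds, and the
    default is 600,000, i.e. ten minutes):

        import org.apache.hadoop.mapred.JobConf;

        // Raise the per-task liveness timeout to 20 minutes (example value).
        JobConf conf = new JobConf(MyCopyJob.class);  // hypothetical job class
        conf.setLong("mapred.task.timeout", 1200000L);

    The same property can also be set cluster-wide in hadoop-site.xml.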

  • Jason Venner at Jan 29, 2008 at 12:24 am
    You could update a counter every N lines/records, so that you get an
    update more than once per ten-minute interval.
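
    A sketch of that pattern against the old mapred API (the class and
    counter names are illustrative, not from this thread):

        import java.io.IOException;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reporter;

        public class CopyMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, Text> {

          // Hypothetical counter; each increment also signals liveness.
          enum Progress { RECORDS_COPIED }

          private long records = 0;

          public void map(LongWritable key, Text value,
                          OutputCollector<Text, Text> output,
                          Reporter reporter) throws IOException {
            // ... per-record copy work goes here ...
            if (++records % 1000 == 0) {   // N = 1000 here
              reporter.incrCounter(Progress.RECORDS_COPIED, 1000);
            }
          }
        }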


  • Michael Bieniosek at Jan 29, 2008 at 1:03 am
    You can start a separate thread which updates the status every 10 seconds. You need to include some information that changes with each update, e.g. the number of seconds you have been waiting.

    -Michael
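
    A sketch of that heartbeat, assumed to run inside map() with the
    Reporter that map() receives (names and intervals are illustrative):

        // Ping the TaskTracker every 10 seconds while one long record is
        // processed; a changing status string makes each ping distinct.
        final Reporter rep = reporter;
        Thread heartbeat = new Thread(new Runnable() {
          public void run() {
            long waited = 0;
            try {
              while (true) {
                Thread.sleep(10000);
                waited += 10;
                rep.setStatus("still copying, " + waited + "s elapsed");
              }
            } catch (InterruptedException e) {
              // the copy finished and interrupted us; just exit
            }
          }
        });
        heartbeat.setDaemon(true);
        heartbeat.start();
        try {
          // ... the long-running copy ...
        } finally {
          heartbeat.interrupt();
        }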


  • ChaoChun Liang at Jan 29, 2008 at 7:13 am

    Could I set different timeout values for the mapper and the reducer
    separately?
    In my case, the execution time for the mapper is shorter than that of
    the reducer.

    Thanks.
    ChaoChun

  • Arun C Murthy at Jan 29, 2008 at 3:05 pm

    No, there isn't a way to do that.

    However, it really _is_ better to send progress/status updates to the
    TaskTracker than to work around the timeout... in fact, it is as simple
    as periodically calling *reporter.progress()*, or *reporter.setStatus()*,
    on the Reporter which is passed to the map/reduce method. It helps
    debugging too...

    Arun
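
    A minimal sketch of that advice (the chunked-copy helper is
    hypothetical; the Reporter calls are the point):

        public void map(LongWritable key, Text value,
                        OutputCollector<Text, Text> output,
                        Reporter reporter) throws IOException {
          while (copyNextChunk()) {   // hypothetical helper
            reporter.progress();      // resets the TaskTracker's timer
          }
          reporter.setStatus("copy complete");
        }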
  • Ted Dunning at Jan 30, 2008 at 9:31 pm
    I am looking for a way for scripts to write data to HDFS without having to
    install anything.

    The /data and /listPaths URLs on the namenode are ideal for reading
    files, but I can't find anything comparable for writing files.

    Am I missing something?

    If not, I think I will file a JIRA and make /data accept POST events. If
    anybody has an opinion about that, please let me know (or put a comment on
    the JIRA, if and when).
  • Jason Venner at Jan 30, 2008 at 9:58 pm
    I suppose we could add a feature to the HDFS web UI to allow uploading
    files.


    --
    Jason Venner
    Attributor - Publish with Confidence <http://www.attributor.com/>
    Attributor is hiring Hadoop Wranglers, contact if interested
  • Vadim Zaliva at Jan 30, 2008 at 10:01 pm
    I think somebody mentioned WebDAV support. That would work for me,
    so I can PUT files.

    Vadim
  • Ted Dunning at Jan 30, 2008 at 10:19 pm
    Might work.

  • Michael Bieniosek at Jan 31, 2008 at 12:01 am
    There is a WebDAV servlet in HADOOP-496 that works for read/write/delete. I've only tested it with the Mac OS X Finder client, though.

    -Michael

  • Ted Dunning at Jan 30, 2008 at 10:18 pm
    That's what I am about to do.


  • Arun C Murthy at Jan 30, 2008 at 11:06 pm

    Umm... how does it interact with HDFS permissions coming in 0.16.0?

    Arun

  • Ted Dunning at Jan 31, 2008 at 12:01 am

    Don't know.

    I just built a simple patch against the trunk and cloned the UGI stuff
    from the doGet method.

    Will that work?
  • Doug Cutting at Jan 31, 2008 at 8:14 pm

    It should. You'll have to specify the user & groups in the query
    string. Looking at the code, it looks like this should be something
    like "&ugi=user,group1,group2;".

    Doug
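
    For example (the host, port, path, and user here are hypothetical), a
    read with credentials attached might look like:

        http://namenode:50070/data/user/ted/part-00000?ugi=ted,engineering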
  • Raghu Angadi at Jan 31, 2008 at 8:06 pm
    You could take a look at the FUSE plugin for HDFS. Then the client does
    not even need a web browser.

    Raghu.

  • Jason Venner at Jan 31, 2008 at 8:14 pm
    As of the last version of this great tool that I have loaded, file
    write was not yet enabled.
    Read works well.

  • C G at Feb 6, 2008 at 5:53 pm
    Ted:

    I am curious about how you read files without installing anything. Can you share your wisdom?

    Thanks,
    C G

  • Ted Dunning at Feb 6, 2008 at 6:31 pm
    http://<namenode-and-port>/data/<file-path-in-hadoop>

    I also have code written to allow posting to the same URL for file
    creation, but I haven't had time to get it actually working (the POST
    to the URL doesn't invoke doPost, for some reason).

    If somebody else has time to track down the (probably obvious to anybody but
    me) issue, I would be happy to file a Jira and post a patch against 15.1 and
    trunk for their reference.
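
    A minimal sketch of such a script-friendly read using only the JDK (the
    host, port, and file path are assumptions):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;

        public class HdfsHttpRead {
          public static void main(String[] args) throws Exception {
            // Fetch a file through the namenode's /data servlet.
            URL url = new URL("http://namenode:50070/data/user/rui/part-00000");
            BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream()));
            for (String line; (line = in.readLine()) != null; ) {
              System.out.println(line);
            }
            in.close();
          }
        }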

  • Arun C Murthy at Jan 29, 2008 at 2:33 am

    http://hadoop.apache.org/core/docs/r0.15.3/api/org/apache/hadoop/mapred/Mapper.html#map(K1,%20V1,%20org.apache.hadoop.mapred.OutputCollector,%20org.apache.hadoop.mapred.Reporter)

    Arun
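
    For reference, the method that link documents; its Reporter argument is
    the hook for progress and status updates:

        void map(K1 key, V1 value, OutputCollector<K2, V2> output,
                 Reporter reporter) throws IOException;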

Discussion Overview
group: common-user
categories: hadoop
posted: Jan 28, '08 at 11:04p
active: Feb 6, '08 at 6:31p
posts: 20
users: 11
website: hadoop.apache.org...
irc: #hadoop
