FAQ
Hello all.

I need to process many gigabytes of new data every 10 minutes. Every 10
minutes cron launches a bash script, "do.sh", that puts the data into HDFS
and starts processing. But...

Hadoop isn't military-grade software, so there is some probability of
errors in HDFS, and I need to watch the log files to catch problems. For
example, HDFS may crash, and then the whole HDFS has to be reformatted,
/tmp/hadoop* deleted, etc.

So I decided to do a full restart of the whole cluster every 10 minutes,
before data processing begins. I erase all /tmp/hadoop* on each node over
ssh, start dfs, start mapred, put the binaries and data in place, and then
run the processing.
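The restart sequence described above might look roughly like the sketch below. This is only an illustration of the flow, not the poster's actual do.sh: the slaves file location, data paths, job jar, and class name are all assumptions, and it targets the Hadoop 0.x command layout current at the time of this thread.

```shell
#!/bin/bash
# do.sh (sketch) -- full cluster restart before each processing run.
# Assumes $HADOOP_HOME is set, conf/slaves lists the worker nodes,
# and passwordless ssh is configured. Paths and names are illustrative.

set -e

# 1. Wipe the temp dirs on every node (this destroys all HDFS state!).
for node in $(cat "$HADOOP_HOME/conf/slaves"); do
    ssh "$node" 'rm -rf /tmp/hadoop*'
done
rm -rf /tmp/hadoop*

# 2. Reformat the namenode and bring the daemons back up.
#    (namenode -format prompts for confirmation; pipe in "Y".)
echo Y | "$HADOOP_HOME/bin/hadoop" namenode -format
"$HADOOP_HOME/bin/start-dfs.sh"
"$HADOOP_HOME/bin/start-mapred.sh"

# 3. Wait for datanodes to report in (the original used: sleep 60).
sleep 60

# 4. Load data and kick off the job.
"$HADOOP_HOME/bin/hadoop" fs -put /data/incoming /input
"$HADOOP_HOME/bin/hadoop" jar job.jar MyJob /input /output
```

As the replies below point out, step 3 is the fragile part: a fixed sleep is a guess about how long the cluster takes to become writable.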

But after formatting and starting DFS I need to wait some time (sleep 60)
before putting data into HDFS; otherwise I receive a
"NotReplicatedYetException".

What do you think about all this? Thank you :)


  • Steve Loughran at Jun 3, 2009 at 11:50 am

    b wrote:

    But after formatting and starting DFS I need to wait some time (sleep
    60) before putting data into HDFS; otherwise I receive a
    "NotReplicatedYetException".
    That means the namenode is up, but there aren't enough workers yet.
  • Aaron Kimball at Jun 3, 2009 at 11:06 pm
    You can block until safemode exits by running 'hadoop dfsadmin -safemode wait'
    rather than sleeping for an arbitrary amount of time.
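    Aaron's suggestion replaces the arbitrary sleep with a blocking wait.
    A minimal sketch (the data paths are illustrative; the dfsadmin
    command is exactly as quoted above):

    ```shell
    # Block until the namenode leaves safemode, instead of "sleep 60".
    # The command returns as soon as the minimum fraction of blocks has
    # been reported by datanodes and safemode is lifted.
    hadoop dfsadmin -safemode wait

    # Only then start loading data:
    hadoop fs -put /data/incoming /input
    ```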

    More generally, I'm a bit confused about what you mean by all this. Hadoop daemons
    may individually crash, but you should never need to reformat HDFS and start
    from scratch. If you're doing this, that means that you're probably sticking
    some important hadoop files in a temp dir that's getting cleaned out or
    something of the like. Are dfs.data.dir and dfs.name.dir suitably
    well-protected from tmpwatch or other such "housekeeping" programs?
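    One way to keep dfs.name.dir and dfs.data.dir out of tmpwatch's reach
    is to point them at a persistent location in the site configuration.
    A sketch for the Hadoop 0.x era of this thread (the /srv/hadoop paths
    are only examples):

    ```xml
    <!-- hadoop-site.xml: keep HDFS metadata and block storage out of
         /tmp so housekeeping jobs can't delete them. -->
    <property>
      <name>dfs.name.dir</name>
      <value>/srv/hadoop/name</value>
    </property>
    <property>
      <name>dfs.data.dir</name>
      <value>/srv/hadoop/data</value>
    </property>
    ```

    With storage in a persistent directory, a daemon crash no longer
    implies reformatting: the data survives a restart.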

    - Aaron
    On Wed, Jun 3, 2009 at 4:50 AM, Steve Loughran wrote:

    b wrote:

    But after formatting and starting DFS I need to wait some time (sleep 60)
    before putting data into HDFS; otherwise I receive a
    "NotReplicatedYetException".
    That means the namenode is up, but there aren't enough workers yet.

Discussion Overview
group: common-user @ hadoop.apache.org
categories: hadoop
posted: Jun 3, '09 at 10:14a
active: Jun 3, '09 at 11:06p
posts: 3
users: 3
website: hadoop.apache.org...
irc: #hadoop
