Hi everyone,

I have a problem with Hadoop startup.

I failed to start up the namenode, and I got the following exception in the
namenode log file:
2008-10-23 21:54:51,223 INFO org.mortbay.http.SocketListener: Started
SocketListener on 0.0.0.0:50070
2008-10-23 21:54:51,224 INFO org.mortbay.util.Container: Started
org.mortbay.jetty.Server@ef2c60
2008-10-23 21:54:51,224 INFO org.apache.hadoop.fs.FSNamesystem: Web-server
up at: 0.0.0.0:50070
2008-10-23 21:54:51,224 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2008-10-23 21:54:51,226 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 58310: starting
2008-10-23 21:54:51,227 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 58310: starting
2008-10-23 21:54:51,227 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 58310: starting
2008-10-23 21:54:51,227 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 58310: starting
2008-10-23 21:54:51,227 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 3 on 58310: starting
2008-10-23 21:54:51,228 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 4 on 58310: starting
2008-10-23 21:54:51,232 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 5 on 58310: starting
2008-10-23 21:54:51,232 ERROR org.apache.hadoop.dfs.NameNode:
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:597)
at org.apache.hadoop.ipc.Server.start(Server.java:991)
at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:149)
at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:193)
at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:179)
at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:830)
at org.apache.hadoop.dfs.NameNode.main(NameNode.java:839)

How can I fix it? Does it mean that my machine doesn't have enough memory to
start Hadoop?
Any help will be appreciated!
Thanks in advance.

- Woody

  • Steve Loughran at Oct 24, 2008 at 9:15 am

    woody zhou wrote:
    Hi everyone,

    I have a problem with Hadoop startup.

    I failed to start up the namenode, and I got the following exception in the
    namenode log file:
    2008-10-23 21:54:51,232 ERROR org.apache.hadoop.dfs.NameNode:
    java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:597)
    at org.apache.hadoop.ipc.Server.start(Server.java:991)
    at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:149)
    at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:193)
    at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:179)
    at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:830)
    at org.apache.hadoop.dfs.NameNode.main(NameNode.java:839)

    How can I fix it? Does it mean that my machine doesn't have enough memory to
    start Hadoop?
    Any help will be appreciated!
    Thanks in advance.

    - Woody
    Not memory; it's an OS-level limit on the number of threads a process can
    have, which is a mixture of per-thread physical and virtual memory
    allocation and any limits coded into the kernel.
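
    On Linux, for example, you can inspect the relevant limits with a few
    standard commands (a sketch; exact values and locations vary by
    distribution and kernel):

    # Maximum processes/threads the current user may create
    ulimit -u

    # System-wide ceiling on the number of threads (Linux-specific)
    cat /proc/sys/kernel/threads-max

    # Default stack size per thread in KB; each Java thread reserves this
    # much virtual memory, so a smaller stack (-Xss) allows more threads
    ulimit -s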

    Search the web for the string "unable to create new native thread" and
    you will find more details and workarounds; include the OS you run on to
    find OS-specific ones. For Hadoop, I'd consider throttling back the
    number of helper threads:

    1. Have a look at the value of dfs.namenode.handler.count and set it to
    something lower (a config sketch follows below).

    2. You can use kill -QUIT to get a dump of all threads in your process;
    this lets you see how many you have and where they are (example commands
    below).
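
    For the first suggestion, the handler count is set in conf/hadoop-site.xml
    (a sketch; the default in this era is 10, so try something smaller and
    restart the namenode):

    <!-- conf/hadoop-site.xml: fewer RPC handler threads on the namenode -->
    <property>
      <name>dfs.namenode.handler.count</name>
      <value>5</value>
    </property>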
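
    For the second, something along these lines (a sketch; jps ships with the
    JDK, and for daemons started by hadoop-daemon.sh the thread dump lands in
    the namenode's .out file under logs/):

    # Find the namenode's pid, then ask the JVM for a thread dump.
    # SIGQUIT makes the JVM print all thread stacks without killing it.
    jps | grep NameNode
    kill -QUIT <pid>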
  • Chaitanya krishna at Oct 26, 2008 at 4:09 am
    Hi,

    If the problem is due to an OS-level limit on the number of active
    threads, then why does the error show an OutOfMemoryError? Is it an issue
    of the heap size available for Hadoop? Won't increasing the heap size fix
    this problem?

    Thanks
    V.V.Chaitanya Krishna
    On Fri, Oct 24, 2008 at 2:42 PM, Steve Loughran wrote:

    woody zhou wrote:
    Hi everyone,

    I have a problem with Hadoop startup.

    I failed to start up the namenode, and I got the following exception in the
    namenode log file:
    2008-10-23 21:54:51,232 ERROR org.apache.hadoop.dfs.NameNode:
    java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:597)
    at org.apache.hadoop.ipc.Server.start(Server.java:991)
    at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:149)
    at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:193)
    at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:179)
    at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:830)
    at org.apache.hadoop.dfs.NameNode.main(NameNode.java:839)

    How can I fix it? Does it mean that my machine doesn't have enough memory to
    start Hadoop?
    Any help will be appreciated!
    Thanks in advance.

    - Woody
    Not memory; it's an OS-level limit on the number of threads a process can
    have, which is a mixture of per-thread physical and virtual memory
    allocation and any limits coded into the kernel.

    Search the web for the string "unable to create new native thread" and you
    will find more details and workarounds; include the OS you run on to find
    OS-specific ones. For Hadoop, I'd consider throttling back the number of
    helper threads:

    1. Have a look at the value of dfs.namenode.handler.count and set it to
    something lower.

    2. You can use kill -QUIT to get a dump of all threads in your process;
    this lets you see how many you have and where they are.
  • Steve Loughran at Oct 27, 2008 at 10:42 am

    chaitanya krishna wrote:
    Hi,

    If the problem is due to an OS-level limit on the number of active
    threads, then why does the error show an OutOfMemoryError? Is it an issue
    of the heap size available for Hadoop? Won't increasing the heap size fix
    this problem?
    It's not saying out of memory; it is saying "java.lang.OutOfMemoryError:
    unable to create new native thread". That's a different message, with
    different causes. If it had said "Java heap space", then yes, more heap.
    If it had said "PermGen space", then you'd have to read up on Permanent
    Generation heap space and how to tune it. Here it is saying it cannot
    create new native threads.
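
    A tiny standalone program makes the distinction concrete (an illustration,
    not Hadoop code; it deliberately exhausts the thread limit, so don't run
    it on a machine you care about). Each thread just sleeps, so the heap
    stays almost empty, yet the JVM still throws this flavour of
    OutOfMemoryError once the OS refuses to create another native thread:

    // ThreadLimitDemo.java
    public class ThreadLimitDemo {
        public static void main(String[] args) {
            long count = 0;
            try {
                while (true) {
                    // Each thread parks forever; heap usage is negligible.
                    new Thread(new Runnable() {
                        public void run() {
                            try {
                                Thread.sleep(Long.MAX_VALUE);
                            } catch (InterruptedException e) {
                                // ignored; the thread only needs to stay alive
                            }
                        }
                    }).start();
                    count++;
                }
            } catch (OutOfMemoryError e) {
                // Typically "unable to create new native thread"
                System.err.println("Failed after " + count + " threads: "
                        + e.getMessage());
            }
        }
    }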
  • Chaitanya krishna at Oct 27, 2008 at 6:10 pm
    Hi,

    Thank you for the information, Steve. :) I had never come across this
    before; it's all very new to me. :)

    V.V.Chaitanya Krishna
    IIIT,Hyderabad
    India
    On Mon, Oct 27, 2008 at 4:10 PM, Steve Loughran wrote:

    chaitanya krishna wrote:
    Hi,

    If the problem is due to an OS-level limit on the number of active
    threads, then why does the error show an OutOfMemoryError? Is it an issue
    of the heap size available for Hadoop? Won't increasing the heap size fix
    this problem?
    It's not saying out of memory; it is saying "java.lang.OutOfMemoryError:
    unable to create new native thread". That's a different message, with
    different causes. If it had said "Java heap space", then yes, more heap.
    If it had said "PermGen space", then you'd have to read up on Permanent
    Generation heap space and how to tune it. Here it is saying it cannot
    create new native threads.

