namenode consumes quite a lot of memory with only several hundred files in it
My cluster consists of 8 nodes, with the namenode on an independent machine. The following is what I get from the namenode web UI:
291 files and directories, 1312 blocks = 1603 total. Heap Size is 2.92 GB / 4.34 GB (67%)
I'm wondering why the namenode takes so much memory while I only store hundreds of files. I've checked the fsimage and edits files; their combined size is only 232 KB. As far as I know, a namenode can hold the metadata for millions of files in 1 GB of RAM, so why does my cluster consume so much memory? If this goes on, I won't be able to store that many files before the memory is eaten up.

2010-09-06



shangan


  • Ranjib Dey at Sep 6, 2010 at 10:09 am
    Can you try running hadoop dfs -ls / and check the output?

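    For reference, a minimal version of that check looks like the following; the entries shown are illustrative, not from the poster's cluster:

        $ hadoop dfs -ls /
        Found 2 items
        drwxr-xr-x   - hadoop supergroup          0 2010-09-01 10:12 /tmp
        drwxr-xr-x   - hadoop supergroup          0 2010-09-03 08:45 /user

        # if the block total (1312) looks out of line with the file count (291),
        # a per-file block breakdown is available via:
        $ hadoop fsck / -files -blocks

    If the listing looks ordinary, the heap figure on the web UI is more likely a JVM artifact than real metadata growth.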
  • Shangan at Sep 6, 2010 at 10:18 am
    Of course; it works fine, without any exceptions or anything abnormal. What clue do you want me to provide?


    2010-09-06



    shangan



    From: Ranjib Dey
    Sent: 2010-09-06 18:09:10
    To: common-user
    Cc:
    Subject: Re: namenode consumes quite a lot of memory with only several hundred files in it

  • Steve Loughran at Sep 6, 2010 at 10:16 am

    It might just be that there isn't enough memory pressure on your
    pre-allocated heap to trigger a GC yet; have a play with the GC tooling
    and jvisualvm to see what's going on.
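    One way to get that visibility, assuming a stock conf/hadoop-env.sh (the log path is a placeholder):

        # conf/hadoop-env.sh -- turn on GC logging for the namenode JVM
        export HADOOP_NAMENODE_OPTS="-verbose:gc -XX:+PrintGCDetails \
          -XX:+PrintGCTimeStamps -Xloggc:/var/log/hadoop/namenode-gc.log \
          $HADOOP_NAMENODE_OPTS"

        # or sample the live heap without a restart, given the namenode's pid:
        jstat -gcutil <namenode-pid> 1000

    After a full collection (jvisualvm can force one with its "Perform GC" button), the used-heap figure should drop back to something close to the real metadata footprint.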
  • Shangan at Sep 8, 2010 at 1:25 am
    How do I change the configuration to trigger GC earlier, rather than only when usage is close to the heap maximum?


    2010-09-08



    shangan



    From: Steve Loughran
    Sent: 2010-09-06 18:16:51
    To: common-user
    Cc:
    Subject: Re: namenode consumes quite a lot of memory with only several hundred files in it
  • Edward Capriolo at Sep 8, 2010 at 1:34 am
    The fact that the memory usage is high is not necessarily a bad thing:
    more aggressive garbage collection implies more CPU usage.

    I had some success following the tuning advice here in making my memory
    usage less spiky:

    http://blog.mikiobraun.de/2010/08/cassandra-gc-tuning.html

    Again, fewer spikes != better performance; that is not a given.
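    The linked post is about Cassandra, but the same knobs apply to any JVM, including the namenode's. A sketch in that spirit, assuming the CMS collector; the occupancy value is illustrative, not a recommendation:

        # conf/hadoop-env.sh -- start concurrent collections at 60% old-gen
        # occupancy instead of waiting until the heap is nearly full
        export HADOOP_NAMENODE_OPTS="-XX:+UseConcMarkSweepGC \
          -XX:CMSInitiatingOccupancyFraction=60 \
          -XX:+UseCMSInitiatingOccupancyOnly \
          $HADOOP_NAMENODE_OPTS"

    Lowering the occupancy fraction trades CPU (more concurrent GC cycles) for a flatter used-heap curve, which is exactly the trade-off described above.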

Discussion Overview
group: common-user
categories: hadoop
posted: Sep 6, '10 at 7:28a
active: Sep 8, '10 at 1:34a
posts: 6
users: 4
website: hadoop.apache.org...
irc: #hadoop
