Modify datanode configs to specify minimum JVM heapsize
-------------------------------------------------------

Key: HADOOP-2499
URL: https://issues.apache.org/jira/browse/HADOOP-2499
Project: Hadoop
Issue Type: Bug
Components: dfs
Reporter: Robert Chansler

Y! 1524346
Currently the Hadoop DataNodes are running with the option -Xmx1000m. They
should and/or be running with the option -Xms1000m (if 1000m is correct; it
seems high?)
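
For illustration only (this launch line is hypothetical and not taken from the
report): -Xms sets the initial heap and -Xmx sets the maximum heap, and making
them equal reserves the full heap at startup instead of growing into it.

    # Hypothetical datanode launch line, shown only to illustrate the two flags.
    java -Xms1000m -Xmx1000m org.apache.hadoop.dfs.DataNode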

This turns out to be a sticky request. The place where Hadoop DFS gets that
1000m value is the hadoop-env file. Here is the relevant code from bin/hadoop,
which is used to start all Hadoop processes:

) JAVA_HEAP_MAX=-Xmx1000m
)
) # check envvars which might override default args
) if [ "$HADOOP_HEAPSIZE" != "" ]; then
)   #echo "run with heapsize $HADOOP_HEAPSIZE"
)   JAVA_HEAP_MAX="-Xmx""$HADOOP_HEAPSIZE""m"
)   #echo $JAVA_HEAP_MAX
) fi

And here's the entry from hadoop-env.sh:
) # The maximum amount of heap to use, in MB. Default is 1000.
) export HADOOP_HEAPSIZE=1000
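
One way to expose an -Xms setting through the same mechanism would be a second
variable handled alongside HADOOP_HEAPSIZE. This is only a sketch: the
HADOOP_MIN_HEAPSIZE name is invented here and does not exist in bin/hadoop or
hadoop-env.sh.

    # Sketch only: a hypothetical HADOOP_MIN_HEAPSIZE variable, mirroring the
    # existing HADOOP_HEAPSIZE handling, so hadoop-env.sh could also drive -Xms.
    JAVA_HEAP_MIN=""
    if [ "$HADOOP_MIN_HEAPSIZE" != "" ]; then
      JAVA_HEAP_MIN="-Xms""$HADOOP_MIN_HEAPSIZE""m"
    fi
    # ...and later both flags would be passed on the java command line, e.g.:
    # exec "$JAVA" $JAVA_HEAP_MAX $JAVA_HEAP_MIN $HADOOP_OPTS -classpath "$CLASSPATH" $CLASS "$@"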

The problem is that I believe we want to specify -Xms for datanodes ONLY, but
the same script is used to start datanodes, tasktrackers, etc. This isn't
trivially a matter of distributing different config files, because the heap
option is coded into the bin/hadoop script itself. So this is an enhancement
request.
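
A minimal sketch of how the scoping could work, assuming bin/hadoop is edited
to branch on the command it is starting; the HADOOP_DATANODE_OPTS variable
below is hypothetical here and would be set from hadoop-env.sh.

    # Sketch only: inside bin/hadoop, after the command name ($COMMAND) is known,
    # a per-daemon variable set in hadoop-env.sh could carry the -Xms flag so it
    # applies to datanodes and nothing else.
    #   (in hadoop-env.sh)  export HADOOP_DATANODE_OPTS="-Xms1000m"
    if [ "$COMMAND" = "datanode" ]; then
      HADOOP_OPTS="$HADOOP_OPTS $HADOOP_DATANODE_OPTS"
    fi

That would keep a single bin/hadoop script while letting each site decide
per-daemon heap behavior in its own configuration.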


--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


  • Nigel Daley (JIRA) at Dec 29, 2007 at 12:03 am
    [ https://issues.apache.org/jira/browse/HADOOP-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

    Nigel Daley updated HADOOP-2499:
    --------------------------------

    Issue Type: Improvement (was: Bug)

Discussion Overview
Group: common-dev @ hadoop.apache.org
Categories: hadoop
Posted: Dec 28, '07 at 11:08p
Active: Dec 29, '07 at 12:03a
Posts: 2
Users: 1 (Nigel Daley (JIRA): 2 posts)
Website: hadoop.apache.org...
IRC: #hadoop
