Hi,

I am trying to write a map-reduce job to convert CSV files to
SequenceFiles, but the job fails with the following error:
java.lang.RuntimeException: Error while running command to get file
permissions : java.io.IOException: Cannot run program "/bin/ls":
error=12, Not enough space
at java.lang.ProcessBuilder.start(ProcessBuilder.java:460)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:200)
at org.apache.hadoop.util.Shell.run(Shell.java:182)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:375)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:461)
at org.apache.hadoop.util.Shell.execCommand(Shell.java:444)
at org.apache.hadoop.fs.RawLocalFileSystem.execCommand(RawLocalFileSystem.java:540)
at org.apache.hadoop.fs.RawLocalFileSystem.access$100(RawLocalFileSystem.java:37)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:417)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:400)
at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:176)
at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
at org.apache.hadoop.mapred.Child$4.run(Child.java:264)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.Child.main(Child.java:253)
Caused by: java.io.IOException: error=12, Not enough space
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:53)
at java.lang.ProcessImpl.start(ProcessImpl.java:65)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:453)
... 16 more

at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:442)
at org.apache.hadoop.fs.RawLocalFileSystem$RawLocalFileStatus.getOwner(RawLocalFileSystem.java:400)
at org.apache.hadoop.mapred.TaskLog.obtainLogDirOwner(TaskLog.java:176)
at org.apache.hadoop.mapred.TaskLogsTruncater.truncateLogs(TaskLogsTruncater.java:124)
at org.apache.hadoop.mapred.Child$4.run(Child.java:264)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.Child.main(Child.java:253)
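
For context, the conversion itself is straightforward; a minimal sketch of a CSV-to-SequenceFile job (a map-only pass-through; the class name and argument handling are illustrative, not the poster's actual code):

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;

public class CsvToSequenceFile {

    // Identity mapper: each CSV line becomes one (offset, line) record
    // in the output SequenceFile.
    public static class PassThroughMapper
            extends Mapper<LongWritable, Text, LongWritable, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            context.write(offset, line);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "csv-to-sequencefile");
        job.setJarByClass(CsvToSequenceFile.class);
        job.setMapperClass(PassThroughMapper.class);
        job.setNumReduceTasks(0); // map-only: a format conversion needs no shuffle
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // CSV input dir
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // SequenceFile output dir
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}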

  • Adi at Aug 7, 2011 at 3:36 pm
    Caused by: java.io.IOException: error=12, Not enough space
You either do not have enough memory allocated to your Hadoop daemons (via
HADOOP_HEAPSIZE) or not enough swap space.
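
The "Not enough space" wording is the errno 12 (ENOMEM) message on Solaris
(Linux reports the same errno as "Cannot allocate memory"): the task JVM
could not fork /bin/ls, typically because the system could not reserve as
much memory/swap for the child as the parent JVM already holds. A quick
check, and the daemon heap knob (paths assume a stock tarball install;
HADOOP_HEAPSIZE is in MB, value illustrative):

free -m     # Linux: physical memory and swap
swap -s     # Solaris: swap allocation summary

# conf/hadoop-env.sh
export HADOOP_HEAPSIZE=2000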

    -Adi
    On Sun, Aug 7, 2011 at 5:48 AM, Xiaobo Gu wrote:

  • Xiaobo Gu at Aug 9, 2011 at 9:07 am
    Hi Adi,

Thanks for your response. On an SMP server with 32 GB of RAM and 8 cores,
what's your suggestion for setting HADOOP_HEAPSIZE? The server will be
dedicated to a single-node Hadoop setup with one DataNode instance, and
it will run 4 mapper and reducer tasks.

    Regards,

    Xiaobo Gu

    On Sun, Aug 7, 2011 at 11:35 PM, Adi wrote:
  • Lance Norskog at Aug 11, 2011 at 5:08 am
If the server is dedicated to this job, you might as well give it
10-15 GB. After that shakes out, try changing the number of mappers &
reducers.
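
As a rough, illustrative budget on that box: the five daemons (NameNode,
SecondaryNameNode, DataNode, JobTracker, TaskTracker) at about 2 GB each
come to roughly 10 GB, and 8 concurrent task JVMs at -Xmx1200m add roughly
10 GB more, so about 20 GB of the 32 GB is committed, leaving headroom for
the OS page cache and for child-process forks.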
    On Tue, Aug 9, 2011 at 2:06 AM, Xiaobo Gu wrote:


    --
    Lance Norskog
    goksron@gmail.com
  • Xiaobo Gu at Aug 11, 2011 at 6:08 am
Is HADOOP_HEAPSIZE set for all Hadoop-related Java processes, or just
one Java process?

    Regards,

    Xiaobo Gu
    On Thu, Aug 11, 2011 at 1:07 PM, Lance Norskog wrote:
  • Harsh J at Aug 11, 2011 at 6:12 am
    It applies to all Hadoop daemon processes (JT, TT, NN, SNN, DN) and
    all direct commands executed via the 'hadoop' executable.
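
For different sizes per daemon, conf/hadoop-env.sh also has per-daemon
*_OPTS hooks; they are appended after the global heap flag, so a later
-Xmx wins (values below are illustrative):

# conf/hadoop-env.sh
export HADOOP_HEAPSIZE=1000                                  # global default, in MB
export HADOOP_NAMENODE_OPTS="-Xmx2g $HADOOP_NAMENODE_OPTS"   # NameNode alone gets 2 GB
export HADOOP_DATANODE_OPTS="-Xmx1g $HADOOP_DATANODE_OPTS"

Note that child map/reduce task JVMs are sized separately, via
mapred.child.java.opts (covered in the next reply).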
    On Thu, Aug 11, 2011 at 11:37 AM, Xiaobo Gu wrote:


    --
    Harsh J
  • Adi at Aug 11, 2011 at 2:14 pm
Some other options that affect the number of mappers and reducers and the
amount of memory they use:

mapred.child.java.opts -Xmx1200M (the heap for each mapper/reducer JVM, plus
any other Java options) - this decides how much memory each task slot uses.

The split size affects the number of splits (and, in effect, the number of
mappers) depending on your input files and input format (in case you are
using FileInputFormat or deriving from it):
mapreduce.input.fileinputformat.split.maxsize <max number of bytes>
mapreduce.input.fileinputformat.split.minsize <min number of bytes>
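
For instance, in mapred-site.xml (values illustrative; on 0.20-era
releases the split-size properties are named mapred.max.split.size and
mapred.min.split.size instead):

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1200m</value>
</property>
<property>
  <!-- cap splits at 256 MB so a large CSV fans out across more mappers -->
  <name>mapreduce.input.fileinputformat.split.maxsize</name>
  <value>268435456</value>
</property>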

    -Adi


    On Thu, Aug 11, 2011 at 2:11 AM, Harsh J wrote:

  • Devilsp4 at Aug 11, 2011 at 9:32 am
    Hi,

I deployed a Hadoop cluster on two machines: one as the NameNode, and the other as a DataNode.

My NameNode machine's hostname is namenode1, and the DataNode machine's hostname is datanode1.

When I run ./start-all.sh on namenode1, the console displays the following:

    root@namenode1:/opt/hadoop/bin# ./start-all.sh
    starting namenode, logging to /opt/hadoop/bin/../logs/hadoop-root-namenode-namenode1.out
    datanode1: starting datanode, logging to /opt/hadoop/bin/../logs/hadoop-root-datanode-datanode1.out
    namenode1: starting secondarynamenode, logging to /opt/hadoop/bin/../logs/hadoop-root-secondarynamenode-namenode1.out
    starting jobtracker, logging to /opt/hadoop/bin/../logs/hadoop-root-jobtracker-namenode1.out
    datanode1: starting tasktracker, logging to /opt/hadoop/bin/../logs/hadoop-root-tasktracker-datanode1.out

Running jps on namenode1 then shows these Java processes:

    15438 JobTracker
    15159 NameNode
    15582 Jps
    15362 SecondaryNameNode

After sshing to datanode1, jps shows:

    21417 TaskTracker
    21497 Jps


So the DataNode isn't running. The logs directory contains:

    [root@datanode1 logs]# ls
    hadoop-root-datanode-datanode1.out hadoop-root-tasktracker-datanode1.log hadoop-root-tasktracker-datanode1.out.2
    hadoop-root-datanode-datanode1.out.1 hadoop-root-tasktracker-datanode1.out
    hadoop-root-datanode-datanode1.out.2 hadoop-root-tasktracker-datanode1.out.1

    [root@datanode1 logs]# cat hadoop-root-datanode-datanode1.out
    Unrecognized option: -jvm
    Could not create the Java virtual machine.


What should I do to solve this problem?


    Thanks. devilsp
  • Harsh J at Aug 11, 2011 at 9:40 am
    A quick workaround is to not run your services as root.

    (Actually, you shouldn't run Hadoop as root ever!)
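
The rejected -jvm flag is passed by the datanode launch script only when
it runs as root, so starting the daemons as a dedicated non-root user
sidesteps it entirely. A minimal sketch (user name and paths are
illustrative):

# on both machines
useradd hadoop
chown -R hadoop:hadoop /opt/hadoop              # code, config, and logs
# then, from namenode1 only, as that user:
su - hadoop -c '/opt/hadoop/bin/start-all.sh'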
    On Thu, Aug 11, 2011 at 3:02 PM, devilsp4 wrote:


    --
    Harsh J
