FAQ
The Hadoop version is 0.18.3. Recently we hit an "out of space" issue coming from "java.util.zip.ZipOutputStream".
We found that /tmp was full, and after cleaning /tmp the problem was solved.

However, why does Hadoop need to use /tmp? We had already configured Hadoop's temp directory to a large local disk in hadoop-site.xml:

<property>
  <name>hadoop.tmp.dir</name>
  <value> ... some large local disk ... </value>
</property>


Could it be because java.util.zip.ZipOutputStream uses /tmp even though we configured hadoop.tmp.dir to a large local disk?

The error log is here FYI:

java.io.IOException: No space left on device
at java.io.FileOutputStream.write(Native Method)
at java.util.zip.ZipOutputStream.writeInt(ZipOutputStream.java:445)
at java.util.zip.ZipOutputStream.writeEXT(ZipOutputStream.java:362)
at java.util.zip.ZipOutputStream.closeEntry(ZipOutputStream.java:220)
at java.util.zip.ZipOutputStream.finish(ZipOutputStream.java:301)
at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:146)
at java.util.zip.ZipOutputStream.close(ZipOutputStream.java:321)
at org.apache.hadoop.streaming.JarBuilder.merge(JarBuilder.java:79)
at org.apache.hadoop.streaming.StreamJob.packageJobJar(StreamJob.java:628)
at org.apache.hadoop.streaming.StreamJob.setJobConf(StreamJob.java:843)
at org.apache.hadoop.streaming.StreamJob.go(StreamJob.java:110)
at org.apache.hadoop.streaming.HadoopStreaming.main(HadoopStreaming.java:33)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
at org.apache.hadoop.mapred.JobShell.run(JobShell.java:194)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
at org.apache.hadoop.mapred.JobShell.main(JobShell.java:220)
Executing Hadoop job failure
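For context on the question above (an editorial sketch, not from the thread; the class name TmpDirDemo is mine): the JVM picks its temporary directory from the java.io.tmpdir system property, which typically defaults to /tmp on UNIX and is independent of hadoop.tmp.dir, so a full /tmp can break temp-file creation even when hadoop.tmp.dir points elsewhere.

```java
import java.io.File;
import java.io.IOException;

// Illustration: the JVM resolves temporary files against the
// java.io.tmpdir system property (usually /tmp on UNIX systems),
// which Hadoop's hadoop.tmp.dir setting does not change.
public class TmpDirDemo {
    public static void main(String[] args) throws IOException {
        // Print the JVM-wide default temp directory.
        System.out.println("java.io.tmpdir = " + System.getProperty("java.io.tmpdir"));

        // createTempFile without an explicit directory argument
        // places the file in java.io.tmpdir.
        File f = File.createTempFile("demo", ".tmp");
        System.out.println("created " + f.getAbsolutePath());
        f.delete();
    }
}
```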


  • Steve Gao at Aug 28, 2009 at 6:19 pm
    Would someone give us a hint? Thanks.
    Why does "java.util.zip.ZipOutputStream" need to use /tmp?

  • Gopal Gandhi at Aug 28, 2009 at 6:26 pm
    We are inviting gurus or major contributors of Hive and/or HBase (or anything related to Hadoop) to give us presentations about the products. Would you name a few? The gurus must be in the Bay Area.
    Thanks.
  • Oliver B. Fischer at Sep 3, 2009 at 2:30 pm
    Hello Steve,

    I assume that java.io.FileOutputStream uses /tmp as its temp directory. As you
    can see, the error occurs in a native method. As far as I know, /tmp is the
    standard temp directory on UNIX systems and is automatically used by many
    native library calls. Maybe you can set $TMPDIR
    (http://en.wikipedia.org/wiki/TMPDIR) to another directory?
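    One way to apply and check this suggestion (an editorial sketch, not from the thread; the /big/disk/tmp path and the class name TmpDirCheck are placeholders of mine): point java.io.tmpdir at a larger disk when launching the JVM, for instance through HADOOP_OPTS in hadoop-env.sh, and confirm that temp files land there.

```java
import java.io.File;
import java.io.IOException;

// Launch with the property overridden, for example:
//   java -Djava.io.tmpdir=/big/disk/tmp TmpDirCheck
// For the Hadoop client JVM the same flag can be added to HADOOP_OPTS
// in hadoop-env.sh (adjust to however your JVM is actually launched).
public class TmpDirCheck {
    public static void main(String[] args) throws IOException {
        // With the override in place, this file should appear under
        // the overridden path instead of /tmp.
        File f = File.createTempFile("tmpdir-check", ".tmp");
        System.out.println("temp file: " + f.getAbsolutePath());
        f.delete();
    }
}
```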

    Best regards,

    Oliver

    --
    Oliver B. Fischer, Schönhauser Allee 64, 10437 Berlin
    Tel. +49 30 44793251, Mobil: +49 178 7903538
    Mail: o.b.fischer@swe-blog.net Blog: http://www.swe-blog.net

Discussion Overview
group: common-dev @
categories: hadoop
posted: Aug 27, '09 at 7:42p
active: Sep 3, '09 at 2:30p
posts: 4
users: 3
website: hadoop.apache.org...
irc: #hadoop
