Hello!

I've hit just about every problem imaginable while trying to use Pig for scalable
data integration. I'd really appreciate it if you could tell me where to look for
the cause of this one:

org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException:
org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.

at org.python.core.PyException.fillInStackTrace(PyException.java:70)
at java.lang.Throwable.<init>(Throwable.java:181)
at java.lang.Exception.<init>(Exception.java:29)
at java.lang.RuntimeException.<init>(RuntimeException.java:32)
at org.python.core.PyException.<init>(PyException.java:46)
at org.python.core.PyException.<init>(PyException.java:43)
at org.python.core.Py.JavaError(Py.java:495)
at org.python.core.Py.JavaError(Py.java:488)
at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java:188)
at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java:204)
at org.python.core.PyObject.__call__(PyObject.java:387)
at org.python.core.PyObject.__call__(PyObject.java:391)
at org.python.core.PyMethod.__call__(PyMethod.java:109)
at org.python.pycode._pyx0.f$0(/home/yaboulna/vmshared/Code/thesis/pig_scripts/compgrams_extend.py:132)
at org.python.pycode._pyx0.call_function(/home/yaboulna/vmshared/Code/thesis/pig_scripts/compgrams_extend.py)
at org.python.core.PyTableCode.call(PyTableCode.java:165)
at org.python.core.PyCode.call(PyCode.java:18)
at org.python.core.Py.runCode(Py.java:1275)
at org.python.util.PythonInterpreter.execfile(PythonInterpreter.java:235)
at org.apache.pig.scripting.jython.JythonScriptEngine$Interpreter.execfile(JythonScriptEngine.java:199)
... 11 more
Caused by: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:727)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:259)
at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:180)
at org.apache.pig.PigServer.launchPlan(PigServer.java:1275)
at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1260)
at org.apache.pig.PigServer.execute(PigServer.java:1250)
at org.apache.pig.PigServer.executeBatch(PigServer.java:362)
at org.apache.pig.scripting.BoundScript.exec(BoundScript.java:282)
at org.apache.pig.scripting.BoundScript.runSingle(BoundScript.java:101)
at org.apache.pig.scripting.BoundScript.runSingle(BoundScript.java:77)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.python.core.PyReflectedFunction.__call__(PyReflectedFunction.java:186)
... 22 more
Caused by: java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:282)
at java.util.zip.ZipOutputStream.writeBytes(ZipOutputStream.java:456)
at java.util.zip.ZipOutputStream.writeCEN(ZipOutputStream.java:400)
at java.util.zip.ZipOutputStream.finish(ZipOutputStream.java:309)
at java.util.zip.DeflaterOutputStream.close(DeflaterOutputStream.java:140)
at java.util.zip.ZipOutputStream.close(ZipOutputStream.java:321)
at org.apache.pig.impl.util.JarManager.createJar(JarManager.java:155)

I am running on a machine with many terabytes of storage; no drive is full, and
inode usage is as low as 1%. I don't know which device it is talking about;
maybe it's something related to the Pig internal filesystem (or something
similar) that I saw in the Pig output. Any sort of help is deeply appreciated.

I'm using CDH 4.2 and running Pig 0.10 in batch mode through Python. The Jython
I use is the standalone version (2.5), to work around the import problems in the
version that ships with Pig. The machine runs Ubuntu 12.04 x64 and the user is
not limited, so it can use all of the machine's huge resources.

Sincerely,
Younos


  • Varun kumar at Feb 10, 2013 at 7:37 pm
    Hi Younos,

    Try to clear out, or increase the space available to, the /hadoop/tmp directory.


    Regards,
    Varun Kumar.P
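
    A minimal sketch of how to check that, assuming a typical CDH layout where
    the Hadoop temp location is set via hadoop.tmp.dir in core-site.xml (the
    config path below is an assumption; adjust to your install):

        # where does hadoop.tmp.dir point on this node?
        grep -A1 'hadoop.tmp.dir' /etc/hadoop/conf/core-site.xml
        # how much space is left on the filesystem that holds it?
        df -h /hadoop/tmp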
  • Younos Aboulnaga at Feb 10, 2013 at 9:39 pm
    Thanks for the response, Varun. There is no such directory on either HDFS or
    the local file system, and either way all my drives have at least a few
    gigabytes free. I don't think creating a jar should take that much space,
    but anyway, I'll do what usually works best with the Hadoop stack: free as
    much space as possible on all drives and let it consume what it wants...
    they are all fat animals after all :P.
  • Harsh J at Feb 10, 2013 at 9:57 pm
    Pig's jar creation process uses a /tmp subdirectory by default to create the
    job-submission jars. Does /tmp have adequate space allocated to it? You can
    alternatively ask Pig to use a directory under your home directory instead
    of the tmpfs for this purpose.
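
    A minimal sketch of the second suggestion, assuming the stock pig launcher
    script (which passes PIG_OPTS through to the client JVM); the directory and
    script name are only examples taken from this thread:

        # point the Pig client JVM's temp directory at a roomy local directory
        mkdir -p /home/yaboulna/tmp
        export PIG_OPTS="$PIG_OPTS -Djava.io.tmpdir=/home/yaboulna/tmp"
        pig compgrams_extend.py   # the submit jar should now be built under the new tmpdir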

  • Younos Aboulnaga at Feb 11, 2013 at 5:36 am
    Thanks for the response, Harsh. As I mentioned before, all drives have tens
    of gigabytes free (and some have terabytes). For now I got the script to
    work by submitting it 30 commands at a time, and I will have to merge the
    results.

  • Kyle McGovern at Feb 11, 2013 at 5:42 am
    Have you looked at the free inodes of your local tmp directory? We have
    noticed on some of our job-submission nodes that the Hadoop tmp directory on
    the local machine fills up because it runs out of inodes.

    You can find this information with "df -i".
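
    For example (illustrative; the mount point will differ per machine),
    comparing block and inode usage side by side shows which resource is
    actually exhausted:

        df -h /tmp   # byte usage of the filesystem holding /tmp
        df -i /tmp   # inode usage; IUse% near 100% yields "No space left on device" even with free bytes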

  • Younos Aboulnaga at Feb 11, 2013 at 6:03 am
    Thanks, Kyle... I've looked at that, and only 1% of the inodes were in use
    when I checked. I'm not trying to fix the problem any more; I changed my
    script by dividing it into smaller parts so that Pig doesn't choke on it...
    it doesn't chew well :D
  • Younos Aboulnaga at Feb 11, 2013 at 1:14 pm
    Correction: the job still doesn't run; it eventually throws the same cryptic
    error that there is no space left on the "device". Don't bother sending me
    any more suggestions, because I won't touch anything from the Hadoop stack
    again during my master's. Hadoop, HBase, Hive, and Pig have taken two months
    of my master's, during which I learned nothing except the limits a program
    can hit when running on Linux: 1) out of memory; 2) too much memory, so the
    garbage-collector overhead makes Java kill itself (which forced me to
    rewrite my code to look like C written in Java syntax to avoid "creating too
    many objects"); 3) Thrift as a bottleneck bringing HBase to its knees;
    4) the number of open file handles; 5) the number of xcievers; and that's
    just off the top of my head. Too much for a "scalable" stack.

    Just to let you know, there are 7 GB free on the drive where /tmp is
    located, which should be enough to create some jars, and only 23% of its
    inodes are in use. Anyway, I have also set all the tmp directories to be
    within my home directory, which has a few terabytes free, but Pig and Hadoop
    seem to be ignoring that. The most important settings are:

    set mapreduce.jobtracker.staging.root.dir '/home/yaboulna/tmp/mapred_staging'
    (in the pig script)
    and
    -Djava.io.tmpdir=/home/yaboulna/tmp/ in mapred.child.java.opts

    -- Younos
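
    Worth noting: the failing frame (JarManager.createJar) runs in the Pig
    client JVM before the job is ever submitted, so
    mapreduce.jobtracker.staging.root.dir (an HDFS staging path) and
    mapred.child.java.opts (task JVMs only) would not be expected to affect it;
    the client still writes its submit jar under its own java.io.tmpdir, /tmp by
    default. A minimal sketch of watching where that jar actually lands,
    assuming the Job*.jar name pattern Pig's JarManager typically uses (verify
    against your own logs):

        # in a second shell, while the script is compiling the job:
        watch -n1 'ls -lh /tmp/Job*.jar 2>/dev/null; df -h /tmp'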


Discussion Overview
group: cdh-user
category: hadoop
posted: Feb 10, '13 at 12:03a
active: Feb 11, '13 at 1:14p
posts: 8
users: 4
website: cloudera.com
irc: #hadoop
