FAQ
Here is my stderr:
hive> insert overwrite local directory '/tmp/mystuff' select transform(*)
using 'my.py' FROM myhivetable;
Total MapReduce jobs = 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201002160457_0033, Tracking URL =
http://ec2-204-236-205-98.compute-1.amazonaws.com:50030/jobdetails.jsp?jobid=job_201002160457_0033
Kill Command = /usr/lib/hadoop/bin/hadoop job -Dmapred.job.tracker=
ec2-204-236-205-98.compute-1.amazonaws.com:8021 -kill job_201002160457_0033
2010-02-17 05:40:28,380 map = 0%, reduce =0%
2010-02-17 05:41:12,469 map = 100%, reduce =100%
Ended Job = job_201002160457_0033 with errors
FAILED: Execution Error, return code 2 from
org.apache.hadoop.hive.ql.exec.ExecDriver


I am running the following HiveQL:

add file /root/my.py;
insert overwrite local directory '/tmp/mystuff' select transform(*) using
'my.py' FROM myhivetable;

and here is my my.py:

#!/usr/bin/python
import sys

for line in sys.stdin:
    line = line.strip()
    flds = line.split('\t')
    (cl_id, cook_id) = flds[:2]
    sub_id = cl_id
    if cl_id.startswith('foo'):
        sub_id = cook_id
    print ','.join([sub_id, flds[2], flds[3]])

This works fine when I test it on the command line:

echo -e 'aa\tbb\tcc\tdd' | /root/my.py

Any pointers?
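One common cause of an opaque "return code 2" with TRANSFORM is the script itself throwing on unexpected input (for example rows with fewer fields than expected), since any uncaught exception kills the mapper. Below is a defensive sketch of the same logic, written with that in mind; it is an illustration of the idea, not necessarily the fix for this thread, and it silently skips malformed rows:

```python
#!/usr/bin/env python
import sys

def process_line(line):
    """Return the transformed CSV row, or None if the row is malformed."""
    flds = line.rstrip('\n').split('\t')
    if len(flds) < 4:
        # Defensive: Hive may feed short or NULL-padded rows; skip them
        # instead of raising and killing the mapper.
        return None
    cl_id, cook_id = flds[0], flds[1]
    sub_id = cook_id if cl_id.startswith('foo') else cl_id
    return ','.join([sub_id, flds[2], flds[3]])

if __name__ == '__main__':
    for raw in sys.stdin:
        out = process_line(raw)
        if out is not None:
            sys.stdout.write(out + '\n')
```

Testing it the same way (`echo -e 'aa\tbb\tcc\tdd' | ./my.py`) should print `aa,cc,dd`, while a short row produces no output instead of a traceback.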


  • Sonal Goyal at Feb 17, 2010 at 1:07 pm
    Hi,

    What do your Hive logs say? You can also check the Hadoop mapper and reduce
    job logs.

    Thanks and Regards,
    Sonal


On Wed, Feb 17, 2010 at 4:18 PM, prasenjit mukherjee wrote:
  • Prasenjit mukherjee at Feb 18, 2010 at 5:35 am
Sorry for the delay. Here is what I see in my /tmp/root/hive.log file. Are
there any other files I should be looking into?

    2010-02-18 00:29:56,082 WARN mapred.JobClient
    (JobClient.java:configureCommandLineOptions(580)) - Use GenericOptionsParser
    for parsing the arguments. Applications should implement Tool for the same.
    2010-02-18 00:30:39,506 ERROR exec.ExecDriver
    (SessionState.java:printError(279)) - Ended Job = job_201002171050_0011 with
    errors
    2010-02-18 00:30:39,514 ERROR ql.Driver (SessionState.java:printError(279))
    - FAILED: Execution Error, return code 2 from
    org.apache.hadoop.hive.ql.exec.ExecDriver

On Wed, Feb 17, 2010 at 6:36 PM, Sonal Goyal wrote:
  • Sonal Goyal at Feb 18, 2010 at 5:36 am
Can you change your logging configuration to DEBUG, try again, and check the
logs? Also check the Hadoop mapper logs.

    Thanks and Regards,
    Sonal


On Thu, Feb 18, 2010 at 11:04 AM, prasenjit mukherjee wrote:
  • Prasenjit mukherjee at Feb 18, 2010 at 5:54 am
Thanks a lot; that helped me fix the problem. I ran with "hive -hiveconf
hive.root.logger=DEBUG,console" and it threw a Derby lock error. After
deleting '/var/lib/hive/metastore/${user.name}_db' and re-running, everything
worked.

    Thanks again,
    -Prasen
    On Thu, Feb 18, 2010 at 11:06 AM, Sonal Goyal wrote:




  • Edward Capriolo at Feb 18, 2010 at 4:05 pm

    On Thu, Feb 18, 2010 at 12:54 AM, prasenjit mukherjee wrote:

Regarding the metastore lock: is this because you are not running Derby in
server mode?
http://wiki.apache.org/hadoop/HiveDerbyServerMode

You need to run Derby in server mode for multiple concurrent connections.
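For reference, a minimal hive-site.xml fragment for pointing the metastore at a networked Derby instance, roughly following the HiveDerbyServerMode wiki page linked above. The host name and port here are placeholder assumptions; check them against your own Derby network server setup:

```xml
<!-- Point the metastore at a Derby network server instead of embedded Derby. -->
<!-- "metastore-host" and port 1527 are placeholders for your Derby server.   -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby://metastore-host:1527/metastore_db;create=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.ClientDriver</value>
</property>
```

With the default embedded Derby, only one client can hold the metastore at a time, which is what produces the lock error described above.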
  • Aryeh Berkowitz at Feb 25, 2010 at 1:59 pm
    Can anybody tell me why I'm getting this error?

    hive> show tables;
    OK
    email
    html_href
    html_src
    ipadrr
    phone
    urls
    Time taken: 0.129 seconds
    hive> SELECT DISTINCT a.url, a.signature, a.size from urls a;
    Total MapReduce jobs = 1
    Launching Job 1 out of 1
    java.io.IOException: No such file or directory
    at java.io.UnixFileSystem.createFileExclusively(Native Method)
    at java.io.File.checkAndCreate(File.java:1704)
    at java.io.File.createTempFile(File.java:1792)
    at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:87)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:107)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:55)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:630)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:504)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:382)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:138)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:197)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:303)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.MapRedTask
  • Zheng Shao at Feb 25, 2010 at 9:26 pm
Most probably $TMPDIR does not exist.
I think by default it's "/tmp/<user>". Can you mkdir it?
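A quick sketch of that workaround as shell commands; the path matches the default scratch directory discussed in this thread, so adjust it if you have changed hive.exec.scratchdir:

```shell
# Create Hive's local scratch directory if it does not exist.
# /tmp/hive-<user> is the default hive.exec.scratchdir location.
SCRATCH="/tmp/hive-$(id -un)"
mkdir -p "$SCRATCH"
ls -ld "$SCRATCH"
```

Alternatively, point Hive at a directory you can write to, e.g. `hive -hiveconf hive.exec.scratchdir=/some/writable/path`.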
    On Thu, Feb 25, 2010 at 5:58 AM, Aryeh Berkowitz wrote:
    --
    Yours,
    Zheng
  • Carl Steinbach at Feb 25, 2010 at 9:37 pm
    You can also change the value of hive.exec.scratchdir (default value:
    /tmp/hive-${user.name}) to a path that you have permission to write to.

    The exception makes it look like you don't have permission to write to /tmp.

    Carl
    On Thu, Feb 25, 2010 at 1:25 PM, Zheng Shao wrote:

  • Aryeh Berkowitz at Feb 26, 2010 at 4:00 pm
Thanks! I had to manually create the directory under /tmp. Shouldn't Hive do that by itself?

    From: Carl Steinbach
    Sent: Thursday, February 25, 2010 4:37 PM
    To: hive-user@hadoop.apache.org
    Subject: Re: Execution Error

You can also change the value of hive.exec.scratchdir (default value: /tmp/hive-${user.name}) to a path that you have permission to write to.

    The exception makes it look like you don't have permission to write to /tmp.

    Carl
    On Thu, Feb 25, 2010 at 1:25 PM, Zheng Shao wrote:
  • Mafish Liu at Mar 30, 2010 at 5:42 am
A patch has been uploaded; please refer to
https://issues.apache.org/jira/browse/HIVE-474 for details.

    2010/2/26 Aryeh Berkowitz <aryeh@iswcorp.com>:


    --
    Mafish@gmail.com
  • Arvind Prabhakar at Apr 14, 2010 at 1:32 am
Yes, this has been fixed in trunk. See:

https://issues.apache.org/jira/browse/HIVE-1277

Arvind
    On Fri, Feb 26, 2010 at 8:59 AM, Aryeh Berkowitz wrote:


  • Mr.hipo at Feb 26, 2010 at 11:27 am
Hi, thanks for any help!

There is an Execution Error shown below; please tell me how to fix it. Thanks again.



    Hive history file=/tmp/searchadmin/hive_job_log_searchadmin_201002261907_1797898701.txt
    Total MapReduce jobs = 2
    Number of reduce tasks not specified. Estimated from input data size: 7
    In order to change the average load for a reducer (in bytes):
    set hive.exec.reducers.bytes.per.reducer=<number>
    In order to limit the maximum number of reducers:
    set hive.exec.reducers.max=<number>
    In order to set a constant number of reducers:
    set mapred.reduce.tasks=<number>
    Starting Job = job_201002261500_0005, Tracking URL = http://app-fy-160:6001/jobdetails.jsp?jobid=job_201002261500_0005
    Kill Command = /data/hadoop/hadoop/bin/hadoop job -Dmapred.job.tracker=app-fy-160:9001 -kill job_201002261500_0005
    2010-02-26 07:07:30,829 map = 0%, reduce =0%
    2010-02-26 07:11:20,915 map = 100%, reduce =100%
    Ended Job = job_201002261500_0005 with errors
    FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.ExecDriver



tables:

CREATE TABLE commclick (comm STRING, navid INT, keyword STRING)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '9' LINES TERMINATED BY '10';

CREATE TABLE daily_hotkey (
  keyword STRING,
  navid STRING,
  countclick STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t' STORED AS TEXTFILE;



    sql:

INSERT OVERWRITE TABLE daily_hotkey
select * from (select commclick.keyword keyword, commclick.navid navid, count(commclick.navid) countnavid
from commclick GROUP BY commclick.keyword, commclick.navid) t
order by keyword, countnavid desc;





  • Luocanrao at Feb 27, 2010 at 12:26 am
There is an Execution Error in Hive; can you help me figure out what the
problem is?



    Ended Job = job_201002261500_0005 with errors
    FAILED: Execution Error, return code 2 from
    org.apache.hadoop.hive.ql.exec.ExecDriver



Here is hive.log from /tmp/; I can see the error info below:

.Failed map tasks:1,FileSystemCounters.FILE_BYTES_READ:42566756,FileSystemCounters.HDFS_BYTES_READ:6089415548,FileSystemCounters.FILE_BYTES_WRITTEN:678631707

TaskProgress TASK_HADOOP_PROGRESS="2010-02-26 07:02:59,182 map = 99%, reduce =33%" TASK_NAME="org.apache.hadoop.hive.ql.exec.ExecDriver" TASK_COUNTERS="Job Counters .Launched reduce tasks:8,Job Counters .Rack-local map tasks:118,Job Counters .Launched map tasks:143,Job Counters .Data-local map tasks:25,Job Counters .Failed map tasks:1,FileSystemCounters.FILE_BYTES_READ:42566756,FileSystemCounters.HDFS_BYTES_READ:6089415548,FileSystemCounters.FILE_BYTES_WRITTEN:678631707,org.apache.hadoop.hive.ql.exec.MapOperator$Counter.DESERIALIZE_ERRORS:0,Map-Reduce Framework.Combine output records:0,Map-Reduce Framework.Map input records:121944581,Map-Reduce Framework.Spilled Records:24489407,Map-Reduce Framework.Map output bytes:590412105,Map-Reduce Framework.Map input bytes:6089173827,Map-Reduce Framework.Combine input records:0,Map-Reduce Framework.Map output records:22891209" TASK_ID="Stage-1" QUERY_ID="searchadmin_20100226185858" TASK_HADOOP_ID="job_201002261500_0004" TIME="1267182179184"

TaskProgress TASK_HADOOP_PROGRESS="2010-02-26 07:03:02,210 map = 100%, reduce =100%" TASK_NAME="org.apache.hadoop.hive.ql.exec.ExecDriver" TASK_COUNTERS="Job Counters .Launched reduce tasks:11,Job Counters .Rack-local map tasks:118,Job Counters .Launched map tasks:143,Job Counters .Data-local map tasks:25,Job Counters .Failed map tasks:1,FileSystemCounters.FILE_BYTES_READ:42566756,FileSystemCounters.HDFS_BYTES_READ:6089415548,FileSystemCounters.FILE_BYTES_WRITTEN:678631707,org.apache.hadoop.hive.ql.exec.MapOperator$Counter.DESERIALIZE_ERRORS:0,Map-Reduce Framework.Combine output records:0,Map-Reduce Framework.Map input records:121944581,Map-Reduce Framework.Spilled Records:24489407,Map-Reduce Framework.Map output bytes:590412105,Map-Reduce Framework.Map input bytes:6089173827,Map-Reduce Framework.Combine input records:0,Map-Reduce Framework.Map output records:22891209" TASK_ID="Stage-1" QUERY_ID="searchadmin_20100226185858" TASK_HADOOP_ID="job_201002261500_0004" TIME="1267182182213"

TaskEnd TASK_RET_CODE="2" TASK_HADOOP_PROGRESS="2010-02-26 07:03:02,210 map = 100%, reduce =100%" TASK_NAME="org.apache.hadoop.hive.ql.exec.ExecDriver" TASK_COUNTERS="Job Counters .Launched reduce tasks:11,Job Counters .Rack-local map tasks:118,Job Counters .Launched map tasks:143,Job Counters .Data-local map tasks:25,Job Counters .Failed map tasks:1,FileSystemCounters.FILE_BYTES_READ:42566756,FileSystemCounters.HDFS_BYTES_READ:6089415548,FileSystemCounters.FILE_BYTES_WRITTEN:678631707,org.apache.hadoop.hive.ql.exec.MapOperator$Counter.DESERIALIZE_ERRORS:0,Map-Reduce Framework.Combine output records:0,Map-Reduce Framework.Map input records:121944581,Map-Reduce Framework.Spilled Records:24489407,Map-Reduce Framework.Map output bytes:590412105,Map-Reduce Framework.Map input bytes:6089173827,Map-Reduce Framework.Combine input records:0,Map-Reduce Framework.Map output records:22891209" TASK_ID="Stage-1" QUERY_ID="searchadmin_20100226185858" TASK_HADOOP_ID="job_201002261500_0004" TIME="1267182182219"

QueryEnd QUERY_STRING="INSERT OVERWRITE TABLE daily_hotkey select * from (select commclick.keyword keyword,commclick.navid navid,count(commclick.navid) countnavid from commclick GROUP BY commclick.keyword,commclick.navid) t order by keyword,countnavid desc " QUERY_ID="searchadmin_20100226185858" QUERY_NUM_TASKS="2" TIME="1267182182219"

Discussion Overview
group: user
categories: hive, hadoop
posted: Feb 17, 2010 at 10:49 AM
active: Apr 14, 2010 at 1:32 AM
posts: 14
users: 11
website: hive.apache.org
