On Jun 16, 2009 at 5:27 pm:
It is odd that PigServer supports some fs operations (mkdirs,
deleteFile) and not others (copyToLocal). Perhaps some of the
original designers of this class could chime in on the thinking here.
I do not know of any immediate plans to alter this interface. Kevin's
suggestion of using the hadoop classes directly is good.
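To illustrate "using the hadoop classes directly," a copy from HDFS to the local filesystem can go through `org.apache.hadoop.fs.FileSystem`. The sketch below is only an example; the paths (`output`, `/home/user/workspace/output`) are made-up placeholders, and the configuration is assumed to come from the Hadoop config files on the classpath:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CopyOutputToLocal {
    public static void main(String[] args) throws Exception {
        // Picks up fs.default.name etc. from the Hadoop config on the classpath
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        // Hypothetical paths: the HDFS directory Pig stored into,
        // and the local working directory to copy it to
        Path hdfsOutput = new Path("output");
        Path localDir = new Path("/home/user/workspace/output");

        // Copies the file (or directory) from HDFS to the local filesystem
        fs.copyToLocalFile(hdfsOutput, localDir);
    }
}
```

This does from Java what `copyToLocal` does in Grunt, without going through PigServer at all.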
On Jun 16, 2009, at 10:04 AM, George Pang wrote:
Thank you Kevin, this is one option. But my question to the Pig
gurus is: is there an API for file I/O between HDFS and the local
system, or will there be in the future?
2009/6/16 Kevin Weil <firstname.lastname@example.org>
If you're already writing Pig from within Java, your best bet is to
go through the standard HDFS interfaces. In particular, see
http://hadoop.apache.org/core/docs/current/api/org/apache/hadoop/fs/FileUtil.html
for a utility class that exposes copy methods from HDFS to the local
filesystem.
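For example, `FileUtil.copy` can pull a stored result down to a local file. This is just a sketch; the paths `output` and `/tmp/output` are invented for illustration:

```java
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FileUtil;
import org.apache.hadoop.fs.Path;

public class CopyWithFileUtil {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem srcFs = FileSystem.get(conf);
        // "output" and "/tmp/output" are placeholder paths;
        // false means the HDFS source is kept after the copy
        FileUtil.copy(srcFs, new Path("output"),
                      new File("/tmp/output"), false, conf);
    }
}
```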
On Thu, Jun 11, 2009 at 11:46 AM, George Pang <email@example.com> wrote:
Hi pig users,
I tried to copyToLocal my stored result from pig queries to my local
workspace. My lines of code in Java are:
pigServer.registerQuery("copyToLocal output WorkingDir output ");
And I know Pig Latin statements execute only on a "store", so I
think the last line of code won't get executed.
So I tried to add another line:
pigServer.registerQuery( "quit;" );
This will work in Grunt, but not here. So what's the best
practice to copy the file to a local working directory for
reading or further processing?
Should I use the interface in Hadoop?