I've tried to load a sample file after creating an external table like
hive> create external table extab (key int, val string)
row format delimited fields terminated by '\t'
lines terminated by '\n'
Here, /user/hive/warehouse/test contains an HDFS file which I am going to load
into table extab. Creating the table was OK. On the load, though,
hive> load data inpath '/user/hive/warehouse/test/kv1.txt'
overwrite into table extab;
I got an error like the one below:
FAILED: Error in semantic analysis: line 2:17 Path is not legal
Move from: hdfs://vm2:9000/user/hive/warehouse/test/kv1.txt to:
/user/hive/warehouse/test/ is not valid.
Please check that values for params "default.fs.name" and
"hive.metastore.warehouse.dir" do not conflict.
I've changed the directories to different ones, but to no avail. Can you suggest what might be wrong?
By the way, is "default.fs.name" right? I could find "fs.default.name" but not "default.fs.name".
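For what it's worth, the two parameters named in the error can be inspected straight from the Hive CLI with `set`; a minimal sketch (the Hadoop-era property is spelled "fs.default.name", so "default.fs.name" in the error text is most likely a typo, as you suspect):

```sql
-- Print the current values of the two properties the error mentions.
-- Note: the filesystem property is fs.default.name in Hadoop configs
-- of this era; the error message's "default.fs.name" appears reversed.
set fs.default.name;
set hive.metastore.warehouse.dir;
```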
----- Original Message -----
From: "Zheng Shao" <firstname.lastname@example.org>
Sent: Thursday, July 23, 2009 5:49 AM
Subject: Re: loading data from HDFS or local file to
If the huge file is already on HDFS (load data WITHOUT local), Hive
will just *move* the file into the table's directory (NOTE: that means the user
won't be able to see the file in its original directory afterwards).
If you don't want that to happen, you might want to use "CREATE
EXTERNAL TABLE .... LOCATION "/user/myname/myfiledir";"
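Concretely, that suggestion might look like the sketch below (the column schema and directory path are just placeholders, not taken from your setup):

```sql
-- Hedged sketch: an external table that reads files in place.
-- Hive does not move or copy anything into its warehouse directory,
-- and dropping the table later leaves the underlying files untouched.
CREATE EXTERNAL TABLE mytable (key INT, val STRING)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\t'
  LINES TERMINATED BY '\n'
LOCATION '/user/myname/myfiledir';
```

With this form there is no LOAD step at all, so the move-validation error cannot arise.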
If the huge file is on local file system, you will have to use (load
data WITH local), and Hive will copy the file.
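For the local-file case, the copy variant could be sketched as follows (the path and table name are illustrative):

```sql
-- Hedged sketch: LOCAL tells Hive to *copy* the file from the local
-- filesystem into the table's warehouse directory on HDFS; the
-- original local file is left in place.
LOAD DATA LOCAL INPATH '/home/myname/kv1.txt'
OVERWRITE INTO TABLE mytable;
```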
On Wed, Jul 22, 2009 at 12:25 AM, Manhee Jo wrote:
What really happens when a huge file (e.g. some tens of TB) is "LOADed
(LOCAL) INPATH ...
INTO TABLE"? Does Hive need to scan the entire file before processing
anything, even something very simple (e.g. a select)?
If so, are there any ways to decrease the number of disk accesses? Is
partitioning a way to do it?
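On the partitioning question: a partitioned table does let queries that filter on the partition column skip whole directories rather than scanning everything. A minimal sketch, with an assumed date column `dt` (names and paths are illustrative):

```sql
-- Hedged sketch: one subdirectory per dt value. A query filtering
-- on dt reads only the matching partition's directory instead of
-- scanning the whole table.
CREATE TABLE logs (key INT, val STRING)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- Move an HDFS file into one specific partition.
LOAD DATA INPATH '/user/myname/logs-2009-07-22'
INTO TABLE logs PARTITION (dt = '2009-07-22');

-- Touches only the dt='2009-07-22' directory.
SELECT count(*) FROM logs WHERE dt = '2009-07-22';
```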