FAQ
My S3 secret key has a slash in it. After replacing the / with %2F I
can use it as a filesystem URL in something like:

$ hadoop fs -fs s3n://$KEY:$SECRET@$BUCKET -ls /
Found 1 items
drwxrwxrwx - 0 1969-12-31 16:00 /remote
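The encoding step described above can be sketched as follows (the secret value here is a made-up placeholder, not a real key; this just shows percent-encoding the reserved characters before embedding the secret in the URL):

```python
# Hypothetical sketch: percent-encode an S3 secret key that contains '/'
# so it can be embedded in an s3n:// URL's userinfo section.
from urllib.parse import quote

secret = "abc/def+ghi"            # placeholder secret, not a real key
encoded = quote(secret, safe="")  # '/' -> %2F, '+' -> %2B
print(encoded)                    # abc%2Fdef%2Bghi
```

With `safe=""`, `quote` encodes every reserved character, including `/`, which `quote` would otherwise leave alone by default.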

But when I try a distcp, it crashes with:

$ hadoop distcp s3n://$KEY:$SECRET@$BUCKET/remote local
Copy failed: org.apache.hadoop.fs.s3.S3Exception:
org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error
Message: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><Message>...

If I leave the slash unencoded, it crashes with:

$ hadoop distcp s3n://$KEY:$SECRET@$BUCKET/remote local
java.lang.IllegalArgumentException: Invalid hostname in URI
s3n://$KEY:$SECRET@$BUCKET/remote
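The second failure is consistent with generic URI parsing rules: an unencoded `/` in the userinfo section terminates the authority component, so the bucket name never makes it into the hostname. A small sketch with placeholder credentials (not real values) illustrates this:

```python
# Sketch of why an unencoded '/' in the secret breaks URI parsing:
# the first '/' after the scheme ends the authority (user:pass@host)
# section, so everything after it is treated as the path.
from urllib.parse import urlsplit

parts = urlsplit("s3n://AKIA:ab/cd@bucket/remote")
print(parts.netloc)  # 'AKIA:ab' -- authority cut off at the '/'
print(parts.path)    # '/cd@bucket/remote' -- bucket swallowed by the path
```

This matches the "Invalid hostname in URI" error: Hadoop sees a malformed authority, not the bucket.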

Is this a bug in Hadoop, or does distcp require special handling of S3
URLs? I am using plain old Hadoop 0.19.2.

Cheers,
Anthony

Discussion Overview
group: common-user
categories: hadoop
posted: May 27, '10 at 4:41p
active: May 27, '10 at 4:41p
posts: 1
users: 1
website: hadoop.apache.org...
irc: #hadoop

1 user in discussion

Anthony Urso: 1 post
