[ https://issues.apache.org/jira/browse/HADOOP-4422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
David Phillips updated HADOOP-4422:
-----------------------------------
Description:
Both S3 file systems (s3 and s3n) try to create the bucket at every initialization. This is bad because:
* Every S3 operation costs money, so these calls are an unnecessary expense.
* These calls can fail when made concurrently, which makes the file system unusable in large jobs.
* Any operation, even a read-only one such as "fs -ls", creates the bucket. This is counter-intuitive and undesirable.
The initialization code should assume the bucket exists:
* Creating a bucket is a very rare operation. Accounts are limited to 100 buckets.
* Any check at initialization for bucket existence is a waste of money.
Per Amazon: "Because bucket operations work against a centralized, global resource space, it is not appropriate to make bucket create or delete calls on the high availability code path of your application. It is better to create or delete buckets in a separate initialization or setup routine that you run less often."
was:
S3 native file system tries to create the bucket at every initialization. This is bad because
* Every S3 operation costs money. These unnecessary calls are an unnecessary expense.
* These calls can fail when called concurrently. This makes the file system unusable in large jobs.
* Any operation, such as a "fs -ls", creates a bucket. This is counter-intuitive and undesirable.
The initialization code should assume the bucket exists:
* Creating a bucket is a very rare operation. Accounts are limited to 100 buckets.
* Any check at initialization for bucket existence is a waste of money.
Per Amazon: "Because bucket operations work against a centralized, global resource space, it is not appropriate to make bucket create or delete calls on the high availability code path of your application. It is better to create or delete buckets in a separate initialization or setup routine that you run less often."
Summary: S3 file systems should not create bucket (was: S3 native fs should not create bucket)
S3 file systems should not create bucket
----------------------------------------
Key: HADOOP-4422
URL: https://issues.apache.org/jira/browse/HADOOP-4422
Project: Hadoop Core
Issue Type: Bug
Components: fs/s3
Affects Versions: 0.18.1
Reporter: David Phillips
Assignee: David Phillips
Attachments: hadoop-s3n-nocreate.patch, hadoop-s3n-nocreate.patch
Both S3 file systems (s3 and s3n) try to create the bucket at every initialization. This is bad because:
* Every S3 operation costs money, so these calls are an unnecessary expense.
* These calls can fail when made concurrently, which makes the file system unusable in large jobs.
* Any operation, even a read-only one such as "fs -ls", creates the bucket. This is counter-intuitive and undesirable.
The initialization code should assume the bucket exists:
* Creating a bucket is a very rare operation. Accounts are limited to 100 buckets.
* Any check at initialization for bucket existence is a waste of money.
Per Amazon: "Because bucket operations work against a centralized, global resource space, it is not appropriate to make bucket create or delete calls on the high availability code path of your application. It is better to create or delete buckets in a separate initialization or setup routine that you run less often."
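The pattern Amazon recommends above can be sketched as follows. This is a minimal illustration with a hypothetical FakeS3 stand-in, not the actual Hadoop or jets3t API: bucket creation moves out of the per-job initialize() path into a separate setup routine that runs rarely, so initialization makes no billable S3 calls at all.

```java
// Sketch of the proposed change, using a hypothetical stand-in for the
// S3 service (real code would go through the jets3t/AWS client).
import java.util.HashSet;
import java.util.Set;

public class BucketInitSketch {
    // Fake S3 service that counts billable calls.
    static class FakeS3 {
        final Set<String> buckets = new HashSet<>();
        int calls = 0;
        boolean bucketExists(String name) { calls++; return buckets.contains(name); }
        void createBucket(String name)    { calls++; buckets.add(name); }
    }

    // Before: every initialize() issues a (billable) create call,
    // which costs money and can race under concurrent job startup.
    static void initializeOld(FakeS3 s3, String bucket) {
        s3.createBucket(bucket);
    }

    // After: initialize() assumes the bucket exists and makes
    // no S3 calls on the init path.
    static void initializeNew(FakeS3 s3, String bucket) {
        // intentionally empty: no bucket create or existence check here
    }

    // Bucket creation lives in a separate, rarely-run setup step.
    static void setupOnce(FakeS3 s3, String bucket) {
        if (!s3.bucketExists(bucket)) {
            s3.createBucket(bucket);
        }
    }

    public static void main(String[] args) {
        FakeS3 s3 = new FakeS3();
        setupOnce(s3, "my-bucket");      // run once, outside the job path
        int afterSetup = s3.calls;
        for (int i = 0; i < 100; i++) {  // 100 job initializations
            initializeNew(s3, "my-bucket");
        }
        // prints 0: initialization adds no S3 calls
        System.out.println(s3.calls - afterSetup);
    }
}
```

The key design point is that the high-frequency code path (file system initialization, run by every task of every job) is decoupled from the low-frequency bucket lifecycle operation, matching Amazon's guidance quoted in the description.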
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.