S3

S3 Prefix

  • You can use prefixes to organize the data that you store in Amazon S3 buckets.
  • A prefix is a string of characters at the beginning of the object key name.
  • A prefix can be any length, subject to the maximum length of the object key name (1,024 bytes).
  • You can think of prefixes as a way to organize your data in a similar way to directories. However, prefixes are not directories.

Searching by prefix limits the results to only those keys that begin with the specified prefix. The delimiter causes a list operation to roll up all the keys that share a common prefix into a single summary list result.

The purpose of the prefix and delimiter parameters is to help you organize and then browse your keys hierarchically.

To do this:

  • First, pick a delimiter for your bucket, such as slash (/), that doesn't occur in any of your anticipated key names. You can use another character as a delimiter; there is nothing unique about the slash (/) character, but it is a very common prefix delimiter.
  • Next, construct your key names by concatenating all containing levels of the hierarchy, separating each level with the delimiter.
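
A minimal sketch of this key construction, assuming the boto3 SDK and a hypothetical bucket named example-bucket (the hierarchy levels here are made up for illustration):

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "example-bucket"  # hypothetical bucket name
DELIMITER = "/"            # the chosen delimiter

def build_key(*levels):
    """Join the containing hierarchy levels into one object key,
    separated by the delimiter."""
    return DELIMITER.join(levels)

key = build_key("reports", "2024", "q1", "summary.csv")
# key == "reports/2024/q1/summary.csv"

# The "directories" exist only as a naming convention inside the key;
# S3 stores the object under this single flat key.
s3.put_object(Bucket=BUCKET, Key=key, Body=b"report data goes here")
```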

Example

For example, if you were storing information about cities, you might naturally organize them by continent, then by country, then by province or state. Because these names don't usually contain punctuation, you might use slash (/) as the delimiter. The following examples use a slash (/) delimiter.

  • Europe/France/Nouvelle-Aquitaine/Bordeaux
  • North America/Canada/Quebec/Montreal
  • North America/USA/Washington/Bellevue
  • North America/USA/Washington/Seattle
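
A hedged boto3 sketch of browsing this hierarchy (the bucket name is an assumption): listing with Prefix="North America/USA/" and Delimiter="/" limits the results to keys under that prefix and rolls the next level up into CommonPrefixes.

```python
import boto3

s3 = boto3.client("s3")

# List one level below "North America/USA/". Keys deeper in the
# hierarchy are rolled up into CommonPrefixes rather than being
# returned individually.
response = s3.list_objects_v2(
    Bucket="example-bucket",      # hypothetical bucket name
    Prefix="North America/USA/",
    Delimiter="/",
)

for cp in response.get("CommonPrefixes", []):
    print(cp["Prefix"])           # e.g. "North America/USA/Washington/"

for obj in response.get("Contents", []):
    print(obj["Key"])             # objects stored directly under the prefix
```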

Performance

  • Your applications can easily achieve thousands of transactions per second in request performance when uploading and retrieving storage from Amazon S3.
  • Amazon S3 automatically scales to high request rates.
  • For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned Amazon S3 prefix.
  • There are no limits to the number of prefixes in a bucket.
  • You can increase your read or write performance by using parallelization, as shown in the sketch after this list.
  • For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second.
  • Similarly, you can scale write operations by writing to multiple prefixes. The scaling, in the case of both read and write operations, happens gradually and is not instantaneous.
  • While Amazon S3 is scaling to your new higher request rate, you may see some 503 (Slow Down) errors.
  • These errors will dissipate when the scaling is complete.
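
A rough sketch of parallelizing reads across prefixes with boto3 (the bucket name, prefix layout, and thread count are assumptions, not a definitive implementation):

```python
import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client("s3")
BUCKET = "example-bucket"                      # hypothetical bucket name
PREFIXES = [f"shard-{i}/" for i in range(10)]  # 10 prefixes to spread reads across

def read_prefix(prefix):
    """List and download every object under one prefix.
    Each prefix can sustain its own GET request rate independently."""
    paginator = s3.get_paginator("list_objects_v2")
    data = []
    for page in paginator.paginate(Bucket=BUCKET, Prefix=prefix):
        for obj in page.get("Contents", []):
            data.append(s3.get_object(Bucket=BUCKET, Key=obj["Key"])["Body"].read())
    return data

# Read all 10 prefixes concurrently; aggregate read throughput scales
# with the number of prefixes being read in parallel.
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(read_prefix, PREFIXES))
```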

Some data lake applications on Amazon S3 scan millions or billions of objects for queries that run over petabytes of data. These data lake applications achieve single-instance transfer rates that maximize the network interface use for their Amazon EC2 instance, which can be up to 100 Gb/s on a single instance. These applications then aggregate throughput across multiple instances to get multiple terabits per second.

Other applications are sensitive to latency, such as social media messaging applications. These applications can achieve consistent small object latencies (and first-byte-out latencies for larger objects) of roughly 100–200 milliseconds.