Amazon S3: Tutorial and Course




Amazon S3: Overview


This tutorial and course is a guide to Amazon S3, created to help you learn and understand Amazon S3 and the related cloud computing technologies, along with facts and information about the service.



Cloud computing is an umbrella term for any data or software hosted outside of your local system. Cloud computing is categorized into three main service types: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS).





For all of the service types listed above, the service provider is responsible for managing the cloud system on behalf of the user. The user is spared the tedium of having to manage the infrastructure required to operate a particular service.



Amazon S3 (Amazon Simple Storage Service) is a scalable, high-availability online file storage web service offered by Amazon Web Services. Amazon S3 provides storage through a simple web services interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. It gives any developer access to the same highly scalable, reliable, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of websites. Amazon S3 aims to maximize the benefits of scale and to pass those benefits on to developers.








What Is Amazon S3?


Amazon S3 (Amazon Simple Storage Service) is a scalable, high-availability online file storage web service offered by Amazon Web Services, described in the overview above. It provides cheap, reliable storage through a simple web services interface, accessible at any time from anywhere on the web.





Amazon Web Services (AWS) is a bouquet of Web services offered by Amazon that together make up a cloud computing platform. The most essential and best known of these services are Amazon EC2 and Amazon S3. AWS also includes CloudFront, Simple Queue Service, SimpleDB, and Elastic Block Store. In this tutorial, we will focus exclusively on Amazon S3.



Amazon S3 is cloud-based data-storage infrastructure that is accessible to the user programmatically via a Web service API (either SOAP or REST). Using the API, the user can store various kinds of data in the S3 cloud, and can store and retrieve that data from anywhere on the Web at any time. But S3 is nothing like the file system you use on your computer. A lot of people think of S3 as a remote file system, containing a hierarchy of files and directories hosted by Amazon. Nothing could be further from the truth.



Amazon S3 is a flat-namespace storage system, devoid of any hierarchy whatsoever. Each storage container in S3 is called a "bucket", and each bucket serves the same function as a directory in a normal file system. However, there is no hierarchy within a bucket (that is, you cannot create a bucket within a bucket). Each bucket allows you to store various kinds of data, ranging in size from 1 byte to a whopping 5 GB.
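One way to internalize the flat namespace is to model a bucket as a plain key-value map, where a slash in an object's name is just another character. The PHP sketch below is purely illustrative (it is not how S3 works internally) and shows how a "directory" is really just a shared key prefix:

```php
<?php
// Illustrative sketch only: a bucket behaves like a flat key-value map.
// Slashes in object names are ordinary characters, not directory separators.
$bucket = array(
    'images/logo.png'   => '...binary data...',
    'images/header.jpg' => '...binary data...',
    'readme.txt'        => 'Hello World!',
);

// There is no real "images" directory; we can only emulate one by
// filtering object names on a shared prefix.
function keys_with_prefix(array $bucket, $prefix) {
    $matches = array();
    foreach (array_keys($bucket) as $key) {
        if (strpos($key, $prefix) === 0) {
            $matches[] = $key;
        }
    }
    return $matches;
}

print_r(keys_with_prefix($bucket, 'images/')); // lists the two images/ keys
```

This prefix-filtering trick is essentially what S3 tools do when they present the contents of a bucket as a folder tree.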



A file stored in a bucket is referred to as an object. An object is the basic unit of stored data on S3. Objects consist of data and metadata. The metadata is a set of name-value pairs that describe the object. Metadata is optional but often adds immense value, whether it's the default metadata added by S3 (such as the date last modified) or standard HTTP metadata such as Content-Type.



So, what kinds of objects can you store on S3? Any kind you like. It could be a simple text file, a style sheet, programming source code, or a binary file such as an image, video or ZIP file. Each S3 object has its own URL, which you can use to access the object in a browser (if appropriate permissions are set - more on this later).



A bucket's name can be deliberately simple, like codediesel, or more complex, reflecting the structure of your application, like seouniversity.wordpress.backup or seouniversity.assets.images.



Every S3 object has a unique URL, formed by concatenating the following components:

- the protocol (http:// or https://);
- the S3 endpoint (s3.amazonaws.com);
- the bucket name;
- the object name.

In order to be able to identify buckets, the S3 system requires that you assign a name to each bucket, which must be unique across the S3 bucket namespace. So, if a user has named one of their buckets company-docs, you cannot create a bucket with that name anywhere in the S3 namespace. Object names in a bucket, however, must be unique only to that bucket; so, two different buckets can have objects with the same name. Also, you can describe objects stored in buckets with additional information using metadata.
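Because bucket names are globally unique and object names are unique within a bucket, an object's URL can be assembled from the endpoint, the bucket name and the object name. A minimal PHP sketch, assuming the default path-style s3.amazonaws.com endpoint used in the URL examples later in this tutorial:

```php
<?php
// Build a path-style S3 object URL from its components.
// Sketch only: assumes the default s3.amazonaws.com endpoint.
function s3_object_url($bucket, $key) {
    return 'https://s3.amazonaws.com/' . $bucket . '/' . $key;
}

echo s3_object_url('com.seouniversity.resources', 'config-empty.inc');
// https://s3.amazonaws.com/com.seouniversity.resources/config-empty.inc
```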



Bucket names must comply with the following requirements:

- they can contain only lowercase letters, numbers, periods (.) and hyphens (-);
- they must start with a number or letter;
- they must be between 3 and 63 characters long;
- they must not be formatted as an IP address (e.g. 192.168.5.4).

In short, Amazon S3 provides a highly reliable cloud-based storage infrastructure, accessible via a SOAP or REST API. Some common usage scenarios for S3 are:

- backing up and archiving data, such as databases and log files;
- hosting static website assets, such as images, style sheets and JavaScript files;
- storing and distributing media files, such as video and audio;
- distributing software and other large downloads;
- storing data for applications running on Amazon EC2.

Amazon S3's Pricing Model


Amazon S3 is a paid service; you need to attach a credit card to your Amazon account when signing up. But it is surprisingly low priced, and you pay only for what you use; if you use no resources in your S3 account, you pay nothing. Also, as part of the AWS "Free Usage Tier", upon signing up, new AWS customers receive 5 GB of Amazon S3 storage, 20,000 GET requests, 2,000 PUT requests, and 15 GB of data transfer out each month free for one year.



So, how much do you pay after the free period? As a rough estimate, if you stored 5 GB of data per month, with data transfers of 15 GB and 40,000 GET and PUT requests a month, the cost would be around $2.60 per month. That's lower than the cost of a burger - inexpensive by any standard. The prices may change, so use the calculator on the S3 website.



Your S3 usage is charged according to three main parameters:

- the amount of data you store;
- the amount of data you transfer in and out of S3;
- the number of requests (GET, PUT and so on) you make to S3.

Your S3 storage charges are calculated on a unit known as a gigabyte/month. If you store 1 GB for one month, you'll be charged for one gigabyte/month, which is about $0.14 or less.



Your data transfer charges are based on the amount of data uploaded and downloaded from S3. Data transferred out of S3 is charged on a sliding scale, starting at $0.12 per gigabyte and decreasing based on volume, reaching $0.050 per gigabyte for all outgoing data transfer in excess of 350 terabytes per month. Note that there is no charge for data transferred within an Amazon S3 "region" via a COPY request, and no charge for data transferred between Amazon EC2 and Amazon S3 within the same region or for data transferred between the Amazon EC2 Northern Virginia region and the Amazon S3 US standard region. To avoid surprises, always check the latest pricing policies on Amazon.
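The rough monthly estimate quoted earlier can be approximated with simple arithmetic. The storage and transfer rates below are the example figures from this article, and the per-request rate is an assumed illustrative value (request pricing is tiered in practice), so treat this purely as a sketch of how the three billing parameters combine:

```php
<?php
// Sketch of a monthly S3 bill using the example rates quoted in this
// article: $0.14 per gigabyte/month stored, $0.12 per GB transferred out.
// The $0.01 per 1,000 requests figure is an assumption for illustration.
// Always check the AWS calculator for real prices.
function estimate_monthly_cost($storage_gb, $transfer_out_gb, $requests) {
    $storage_cost  = $storage_gb * 0.14;
    $transfer_cost = $transfer_out_gb * 0.12;
    $request_cost  = ($requests / 1000) * 0.01;
    return $storage_cost + $transfer_cost + $request_cost;
}

printf("$%.2f\n", estimate_monthly_cost(5, 15, 40000));
```

Under these assumed rates, the 5 GB / 15 GB / 40,000-request workload comes out in the same few-dollars range as the article's estimate.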



Amazon S3 API And CloudFusion


Now with the theory behind us, let's get to the fun part: writing code. But before that, you will need to register with S3 and create an AWS account. If you don't already have one, you'll be prompted to create one when you sign up for Amazon S3.



Before moving on to the coding part, let's get acquainted with some visual tools that we can use to work with Amazon S3. Various visual and command-line tools are available to help you manage your S3 account and the data in it. Because the visual tools are easy to work with and user-friendly, we will focus on them in this article. I prefer working with the AWS Management Console for security reasons.



The AWS Management Console for Amazon S3.



The Management Console is part of AWS. Because it is part of your AWS account, no configuration is necessary. Once you've logged in, you have full access to all of your S3 data and other AWS services. You can create new buckets, create objects, apply security policies, copy objects to different buckets, and perform a multitude of other functions.



Onto The Coding


As stated earlier, AWS is Amazon's Web service infrastructure that encompasses various cloud services, including S3, EC2, SimpleDB and CloudFront. Integrating these varied services can be a daunting task. Thankfully, we have at our disposal an SDK library in the form of CloudFusion, which enables us to work with AWS effortlessly. CloudFusion is now the official AWS SDK for PHP, and it encompasses most of Amazon's cloud products: S3, EC2, SimpleDB, CloudFront and many more.

For this post, I downloaded the ZIP version of the CloudFusion SDK, but the library is also available as a PEAR package. So, go ahead: download the latest version from the official website, and extract the ZIP to your working directory or to your PHP include path. In the extracted directory, you will find the config-sample.inc.php file, which you should rename to config.inc.php. You will need to make some changes to the file to reflect your AWS credentials.



In the config file, locate the following lines:



 define('AWS_KEY', '');
 define('AWS_SECRET_KEY', '');


Modify the lines to mirror your AWS security credentials. You can find the credentials in your Amazon AWS account section, as shown below.



Get the keys, and fill them in on the following lines:



 define('AWS_KEY', 'your_access_key_id');
 define('AWS_SECRET_KEY', 'your_secret_access_key');


You can retrieve your access key and secret key from your Amazon account page:



With all of the basic requirements in place, let's create our first bucket on Amazon S3, with a name of your choice. The following example shows a bucket by the name of com.seouniversity.images. (Of course, by the time you read this, this name may have already been taken.) Choose a structure for your bucket's name that is relevant to your work. For each bucket, you can control access to the bucket, view access logs for the bucket and its objects, and set the geographical region where Amazon S3 will store the bucket and its contents.



 /* Include the CloudFusion SDK class */
 require_once('sdk-1.4.4/sdk.class.php');

 /* Our bucket name */
 $bucket = 'com.seouniversity.images';

 /* Initialize the class */
 $s3 = new AmazonS3();

 /* Create a new bucket */
 $resource = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);

 /* Check if the bucket was successfully created */
 if ($resource->isOK()) {
     print("'${bucket}' bucket created\n");
 } else {
     print("Error creating bucket '${bucket}'\n");
 }


Let's go over each line in the example above. First, we included the CloudFusion SDK class in our file. You'll need to adjust the path depending on where you've stored the SDK files.



 require_once('sdk-1.4.4/sdk.class.php');


Next, we instantiated the Amazon S3 class:



 $s3 = new AmazonS3(); 


In the next step, we created the actual bucket; in this case, com.seouniversity.images. Again, your bucket's name must be unique across all existing bucket names in Amazon S3. One way to ensure this is to prefix a word with your company's name or domain, as we've done here. But this does not guarantee that the name will be available. Nothing prevents anyone from creating a bucket named com.microsoft.apps or com.google.images, so choose wisely.



 $bucket = 'com.seouniversity.images';
 $resource = $s3->create_bucket($bucket, AmazonS3::REGION_US_E1);


To reiterate, bucket names must comply with the following requirements:

- they can contain only lowercase letters, numbers, periods (.) and hyphens (-);
- they must start with a number or letter;
- they must be between 3 and 63 characters long;
- they must not be formatted as an IP address.

Also, you'll need to select a geographical location for your bucket. A bucket can be stored in one of several regions. Reasons for choosing one region over another might be to optimize for latency, to minimize costs, or to satisfy regulatory requirements. Many organizations have privacy policies and regulations on where to store data, so consider this when selecting a location. Objects never leave the region they are stored in unless you explicitly transfer them to another region. That is, if your data is stored on servers located in the US, it will never be copied or transferred by Amazon to servers outside of this region; you'll need to do that manually using the API or AWS tools.



Finally, we checked whether the bucket was successfully created:



 if ($resource->isOK()) {
     print("'${bucket}' bucket created\n");
 } else {
     print("Error creating bucket '${bucket}'\n");
 }


Now, let's see how to get a list of the buckets we've created on S3. So, before proceeding, create a few more buckets to your liking. Once you have a few buckets in your account, it is time to list them.



 /* Include the CloudFusion SDK class */
 require_once('sdk-1.4.4/sdk.class.php');

 /* Initialize the class */
 $s3 = new AmazonS3();

 /* Get a list of buckets */
 $buckets = $s3->get_bucket_list();

 if ($buckets) {
     foreach ($buckets as $b) {
         echo $b . "\n";
     }
 }


The only new part in the code above is the following line, which gets an array of bucket names:



 $buckets = $s3->get_bucket_list(); 


Finally, we printed out all of our buckets' names.



 if ($buckets) {
     foreach ($buckets as $b) {
         echo $b . "\n";
     }
 }


Uploading Data To Amazon S3


Now that we've learned how to create and list buckets in S3, let's figure out how to put objects into buckets. This is a little complex, and we have a variety of options to choose from. The main method for doing this is create_object. The method takes the following format:



 create_object ( $bucket, $filename, [ $opt = null ] ) 


The first parameter is the name of the bucket in which the object will be stored. The second parameter is the name by which the file will be stored on S3. Using only these two parameters is enough to create an empty object with the given file name. For example, the following code would create an empty object named config-empty.inc in the com.seouniversity.resources bucket:



 $s3 = new AmazonS3();
 $bucket = 'com.seouniversity.resources';
 $response = $s3->create_object($bucket, 'config-empty.inc');

 // Success?
 var_dump($response->isOK());


Once the object is created, we can access it using a URL. The URL for the object above would be:



 https://s3.amazonaws.com/com.seouniversity.resources/config-empty.inc 


Of course, if you tried to access the URL from a browser, you would be greeted with an "Access denied" message, because objects stored on S3 are set to private by default, viewable only by the owner. You have to explicitly make an object public.



To add some content to the object at the time of creation, we can use the following code. This would add the text "Hello World" to the config-empty.inc file.



 $response = $s3->create_object($bucket, 'config-empty.inc', array(
     'body' => 'Hello World!'
 ));


As a complete example, the following code would create an object with the name simple.txt, along with some content, and save it in the given bucket. An object may also optionally contain metadata that describes that object.



 /* Initialize the class */
 $s3 = new AmazonS3();

 /* Our bucket name */
 $bucket = 'com.seouniversity.resources';

 $response = $s3->create_object($bucket, 'simple.txt', array(
     'body' => 'Hello World!'
 ));

 if ($response->isOK()) {
     return true;
 }


You can also upload a file, rather than just a string, as shown below. Although many options are displayed here, most have a default value and may be omitted.



 require_once('sdk-1.4.4/sdk.class.php');

 $s3 = new AmazonS3();
 $bucket = 'com.seouniversity.images';

 $response = $s3->create_object($bucket, 'source.php', array(
     'fileUpload'  => 'test.php',
     'acl'         => AmazonS3::ACL_PRIVATE,
     'contentType' => 'text/plain',
     'storage'     => AmazonS3::STORAGE_REDUCED,
     'headers'     => array( // raw headers
         'Cache-Control'    => 'max-age',
         'Content-Encoding' => 'text/plain',
         'Content-Language' => 'en-US',
         'Expires'          => 'Thu, 01 Dec 1994 16:00:00 GMT',
     )
 ));

 // Success?
 var_dump($response->isOK());


With some background on Amazon S3 behind us, it is time to put our learning into practice. We are ready to build a WordPress plugin that will automatically back up our WordPress database to the S3 server and restore it when needed. For more information about how to host WordPress on Amazon S3, please visit our other Tutorials and Courses about Amazon S3 and WordPress.



Amazon S3: Further Reading