It allows us to upload all our data, photos, videos, anything, to Amazon's data centers without worrying about any kind of limit. One of the strengths of S3 is its transparency when it comes to hosting our data: we never have to worry about the storage capacity of our account, since we have a single container with virtually unlimited capacity, and the more we store, the more we pay. With S3, scalability becomes a non-issue: Amazon takes care of provisioning new machines and more storage units, and keeps everything working without us being aware of it. Files are saved in buckets, which in turn contain the folders and files we store. The most interesting part is integrating it programmatically through the REST API that S3 exposes: for example, an EC2 server can generate a large file, serve it automatically from S3, and free itself from having to serve the download. A very important addition is CloudFront, a CDN for S3 that replicates our files worldwide so that, through the same download URL, each user downloads from the server closest to them. Best of all, this process is completely transparent to the developer: simply activating it from the CloudFront control panel is enough to have it working.
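As a sketch of that EC2-to-S3 handoff: once an object is publicly readable, its download URL follows a predictable pattern, so an application can upload a file and then hand clients a direct S3 (or CloudFront) link instead of streaming the bytes itself. The bucket name, object key, and CloudFront domain below are made-up examples, not real resources.

```python
from urllib.parse import quote

S3_HOST = "s3.amazonaws.com"

def s3_public_url(bucket: str, key: str) -> str:
    """Virtual-hosted-style URL for a publicly readable object."""
    return f"https://{bucket}.{S3_HOST}/{quote(key)}"

def cloudfront_url(distribution_domain: str, key: str) -> str:
    """The same object served through CloudFront: only the host
    changes, the object path stays the same."""
    return f"https://{distribution_domain}/{quote(key)}"

# Hypothetical bucket and key, for illustration only.
print(s3_public_url("my-reports-bucket", "2010/big report.pdf"))
# https://my-reports-bucket.s3.amazonaws.com/2010/big%20report.pdf
print(cloudfront_url("d111111abcdef8.cloudfront.net", "2010/big report.pdf"))
```

Note how switching from direct S3 delivery to CloudFront is just a change of hostname, which is why the process feels transparent from the application's point of view.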
Storage on Amazon S3 is managed through a web interface that presents the information stored in the buckets. However, this interface offers no sorting or search functions within the buckets themselves, which can make it difficult to locate a file inside buckets holding a large number of documents. In addition, bucket names are global, so simple names such as "abc", "storage", or "documents" have most likely already been taken, and we will have to resort to more distinctive character combinations.
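Because bucket names share one global namespace, a name also has to follow S3's naming rules before uniqueness even matters. The check below is my own simplified sketch of the documented rules (3-63 characters, lowercase letters, digits, dots, and hyphens, starting and ending with a letter or digit, and not shaped like an IP address), plus the common trick of suffixing a generic word to dodge collisions; the names used are hypothetical.

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Simplified check of S3 bucket-naming rules; the real service
    enforces a few more edge cases than this sketch covers."""
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    if ".." in name:
        return False
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):  # IP-address shape
        return False
    return True

def uniquified(base: str, suffix: str) -> str:
    """Append a project-specific suffix to a common word such as
    'documents' so the global name is less likely to be taken."""
    return f"{base}-{suffix}"

print(is_valid_bucket_name("documents"))                  # True, but likely already taken
print(is_valid_bucket_name(uniquified("documents", "acme-2010")))
```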
Amazon offers several security services, including certificate management, encryption tools, hardware security modules for storing private keys, and web application firewalls. Native AWS tools such as Elastic Load Balancers (ELBs) can be used, to some extent, to mitigate DoS or DDoS attacks, since the application stays available while the ELB spreads traffic across multiple instances.
From my point of view, Amazon has given all developers a big push toward a revolution in the web world. From now on, maintenance and infrastructure costs will not be a problem for anyone thinking of developing a project with a great need for data storage. For an almost ridiculously low price we get a mass storage service with high availability and no need to worry about scaling.
In short, it is a fast, highly available, and economical system.
Regarding its use, by default we have three grantees: Owner (the user who uploaded the file), Authenticated Users (users authenticated with Amazon), and Everyone (all non-authenticated users, that is to say, any client on the whole Internet), although we can also add new S3 users with specific permissions for our data. Amazon guarantees 99.9% availability, which matches any high-availability system we could hire, and would refund up to 25% of our bill in the event availability drops below 99%. Additionally, S3 has an API for our applications to communicate with it, which accepts HTTP requests signed with an HMAC. Each access we make to this API must be validated by the two keys Amazon provides us which, together with a hash based on a time-dependent seed, the access information, and the destination key, generate a signature that the system will validate.
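That signing scheme can be sketched with nothing but the standard library. The sketch below follows the shape of S3's older Signature Version 2 style (string-to-sign, HMAC with the secret key, base64 of the digest, access key ID in the Authorization header); the credentials and request values are made-up examples, and a production client should rely on an official SDK rather than hand-rolled signing.

```python
import base64
import hashlib
import hmac

def sign_request(secret_key: str, verb: str, content_md5: str,
                 content_type: str, date: str, resource: str) -> str:
    """Join the request elements into a string-to-sign, HMAC it
    with the secret key, and base64-encode the digest."""
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

def authorization_header(access_key_id: str, signature: str) -> str:
    """Of the two keys Amazon hands out, only the access key ID
    travels in the header; the secret key never leaves our code."""
    return f"AWS {access_key_id}:{signature}"

# Hypothetical credentials and request, for illustration only.
sig = sign_request("SECRETKEYexample", "GET", "", "",
                   "Tue, 27 Mar 2007 19:36:42 +0000", "/mybucket/photo.jpg")
print(authorization_header("AKIAIOSFODNN7EXAMPLE", sig))
```

The date in the string-to-sign is the "temporary seed": the server rejects requests whose timestamp is too old, so a captured signature cannot be replayed indefinitely.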
These HTTP requests will allow us to upload files, modify permissions, delete objects, create buckets... in short, all the actions needed to manage our S3 storage.
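The actions listed above map quite directly onto HTTP verbs and paths in the REST API. The routing table below is a simplified sketch of that mapping (real requests also carry the signed Authorization header and the proper Host header); the bucket and key names are hypothetical.

```python
def rest_request(action, bucket, key=""):
    """Map a management action onto the REST API's verb + path
    conventions (simplified sketch)."""
    routes = {
        "create_bucket": ("PUT", f"/{bucket}"),
        "upload":        ("PUT", f"/{bucket}/{key}"),
        "download":      ("GET", f"/{bucket}/{key}"),
        "delete":        ("DELETE", f"/{bucket}/{key}"),
        # permissions travel as an ACL subresource on the object
        "set_acl":       ("PUT", f"/{bucket}/{key}?acl"),
    }
    return routes[action]

print(rest_request("upload", "mybucket", "photo.jpg"))   # ('PUT', '/mybucket/photo.jpg')
print(rest_request("create_bucket", "mybucket"))         # ('PUT', '/mybucket')
```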