Introduction
In Part 1 and Part 2, we covered the policy setup and examined access rights, walking through the permission policies for our specific user. In this part, we will discuss Amazon S3 access control and some more access policies.
Overview
In this part of the article, we will learn how Amazon S3 folders allow us to follow the principle of least privilege and ensure that the right people can access only what they need.
Mark’s policy consists of four blocks. Let’s look at each one individually.
Block 1: Allow required Amazon S3 console permissions
Before we start restricting Mark to particular folders, you must grant him the permissions the Amazon S3 console requires: ListAllMyBuckets and GetBucketLocation.
{
  "Sid": "AllowUserToSeeBucketListInTheConsole",
  "Action": ["s3:GetBucketLocation", "s3:ListAllMyBuckets"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::*"]
}
ListAllMyBuckets grants the user (Mark) permission to list all the buckets in his AWS account, which is required to access buckets in the Amazon S3 console (as a side note: it is currently not possible to selectively filter specific buckets, so users must have permission to list all buckets in the Amazon S3 console). The console also makes a GetBucketLocation call when users log in to the Amazon S3 console for the first time, so Mark needs permission for this action as well. Without these two actions, Mark would receive an “Access Denied” error message in the console.
Block 2: Allow listing objects in root and home folders
Next, Mark needs to reach his home folder in the Amazon S3 console. For that, he needs permission to list objects at the root level of the my-new-company-123456789 bucket and in the home folder. The following policy grants these permissions to Mark:
{
  "Sid": "AllowRootAndHomeListingOfCompanyBucket",
  "Action": ["s3:ListBucket"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::my-new-company-123456789"],
  "Condition": {
    "StringEquals": {
      "s3:prefix": ["", "home/", "home/Mark"],
      "s3:delimiter": ["/"]
    }
  }
}
Mark can list all objects in the root and home folders with this policy, but he can only view the contents of the files and folders that are specifically his own (you specify these permissions in the next block).
In this scenario, Mark lists objects in the my-new-company-123456789 bucket. We have used the conditions s3:prefix and s3:delimiter to set the permissions for the root and home folders. Mark’s ListBucket permissions are restricted to the folders in the s3:prefix condition. Mark can, for instance, list the following directories and files in the bucket named “my-new-company-123456789.”
/root-file.txt
/confidential/
/home/Kumar/
/home/Mark/
/home/Guru/
However, Mark won’t be able to list any files or subfolders in the home/Kumar, home/Guru, or confidential/ directories.
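To build intuition for how the s3:prefix StringEquals condition gates these ListBucket calls, here is a rough Python sketch. It is an illustration only, not the real IAM evaluation engine; the allowed prefixes are taken from the policy above:

```python
# Simplified model of the StringEquals s3:prefix condition from the
# AllowRootAndHomeListingOfCompanyBucket statement. Illustration only;
# real IAM evaluation involves far more than this.
ALLOWED_PREFIXES = {"", "home/", "home/Mark"}

def list_bucket_allowed(prefix: str) -> bool:
    """Return True if a ListBucket request with this s3:prefix value
    would satisfy the StringEquals condition."""
    return prefix in ALLOWED_PREFIXES

# Mark can list the bucket root and the home/ folder...
assert list_bucket_allowed("")        # root listing
assert list_bucket_allowed("home/")   # home folder listing
# ...but not other users' folders or confidential/.
assert not list_bucket_allowed("home/Kumar/")
assert not list_bucket_allowed("confidential/")
```

Because StringEquals matches exact strings only, a request with any other prefix falls through to the default deny.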
The s3:delimiter condition isn’t required for console access, but we recommend including it in case Mark makes requests via the API. As mentioned above, a delimiter is a character, such as a slash (/), that identifies the folder in which an object is located. Delimiters are useful for listing objects as if they were on a file system. For example, suppose you have thousands of objects stored in your my-new-company-123456789 bucket. If Mark includes a delimiter in his request, the response is limited to the file names and subfolder names within the specified folder. Without a delimiter, Mark would get a list of all the files in the specified folder, plus all the files in all its subfolders.
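The delimiter’s effect can be sketched locally. The following Python illustration (with made-up object keys) mimics how S3 collapses keys under a delimiter into common prefixes instead of returning every object:

```python
def list_with_delimiter(keys, prefix="", delimiter="/"):
    """Mimic S3's ListObjects grouping: keys directly under `prefix` are
    returned as contents; deeper keys collapse into common prefixes.
    A local illustration of the behavior, not an S3 API call."""
    contents, common = [], set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything past the first delimiter folds into one "folder" entry.
            common.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            contents.append(key)
    return contents, sorted(common)

# Hypothetical object keys in the bucket.
keys = [
    "root-file.txt",
    "confidential/salaries.csv",
    "home/Mark/report.txt",
    "home/Kumar/notes.txt",
]

contents, folders = list_with_delimiter(keys)
# With the delimiter, subfolder contents stay hidden behind their prefixes.
assert contents == ["root-file.txt"]
assert folders == ["confidential/", "home/"]
```

Dropping the delimiter argument would instead return every key that starts with the prefix, which is exactly the flood of results the condition helps avoid.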
Block 3: Allow listing objects in Mark’s folder
Aside from the root and home folders, Mark needs access to everything in the home/Mark/ folder, including any subfolders he makes. The following policy permits this:
{
  "Sid": "AllowListingOfUserFolder",
  "Action": ["s3:ListBucket"],
  "Effect": "Allow",
  "Resource": ["arn:aws:s3:::my-new-company-123456789"],
  "Condition": {
    "StringLike": {
      "s3:prefix": ["home/Mark/*"]
    }
  }
}
In the condition above, the StringLike operator treats the asterisk (*) as a wildcard, so Mark can list the files and folders anywhere under his home/Mark/ folder, including any subfolders. This condition could not be added to the previous block (AllowRootAndHomeListingOfCompanyBucket) because that block uses the StringEquals operator, which would interpret the asterisk as a literal character rather than a wildcard.
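The difference between the two condition operators can be illustrated with Python’s fnmatch module, whose * behaves much like the StringLike wildcard (a rough analogy, not IAM’s exact matcher):

```python
from fnmatch import fnmatchcase

# The s3:prefix pattern from the AllowListingOfUserFolder condition.
pattern = "home/Mark/*"

# StringLike analogy: * is a wildcard, so any prefix under Mark's
# folder (including nested subfolders) matches.
assert fnmatchcase("home/Mark/projects/plan.txt", pattern)
assert not fnmatchcase("home/Kumar/plan.txt", pattern)

# StringEquals analogy: the pattern is compared literally, so only the
# exact string "home/Mark/*" (asterisk included) would ever match.
assert "home/Mark/projects/plan.txt" != pattern
assert "home/Mark/*" == pattern
```

This is why the wildcard condition needs its own statement with StringLike rather than being folded into the StringEquals block.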
In the next section, you will notice that the Resource element in the AllowAllS3ActionsInUserFolder block specifies my-new-company-123456789/home/Mark/*, which looks similar to the condition used here. You might assume that Mark’s folder could likewise be specified in this block using the Resource element. However, the Resource element for the ListBucket action accepts only bucket ARNs, not object paths, because ListBucket is a bucket-level operation. Therefore, you must use conditions to restrict the action at the object level (files and folders).
Block 4: Allow all Amazon S3 actions in Mark’s folder
Lastly, as the policy below illustrates, Mark’s actions (like read, write, and delete permissions) are restricted to his home folder only.
{
  "Sid": "AllowAllS3ActionsInUserFolder",
  "Effect": "Allow",
  "Action": ["s3:*"],
  "Resource": ["arn:aws:s3:::my-new-company-123456789/home/Mark/*"]
}
Since the Action element specifies s3:*, Mark is authorized to perform any Amazon S3 action. The asterisk (*) wildcard at the end of the Resource element designates everything inside Mark’s folder, so he can act on the folder and its contents: for instance, he can change the storage class of objects in his folder, create subfolders, upload and remove objects, and perform other actions.
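Both wildcards in this block (s3:* in Action and /home/Mark/* in Resource) behave like glob patterns. A hedged sketch of how the statement matches a request, again using fnmatch as a stand-in for IAM’s matcher:

```python
from fnmatch import fnmatchcase

# Patterns from the AllowAllS3ActionsInUserFolder statement.
ACTION_PATTERN = "s3:*"
RESOURCE_PATTERN = "arn:aws:s3:::my-new-company-123456789/home/Mark/*"

def statement_matches(action: str, resource: str) -> bool:
    """Very rough model of whether this single Allow statement applies
    to a request; real IAM evaluation involves much more."""
    return (fnmatchcase(action, ACTION_PATTERN)
            and fnmatchcase(resource, RESOURCE_PATTERN))

# Any S3 action on an object inside Mark's folder is covered...
assert statement_matches(
    "s3:PutObject",
    "arn:aws:s3:::my-new-company-123456789/home/Mark/report.txt")
# ...but the same action outside his folder is not matched by this statement.
assert not statement_matches(
    "s3:PutObject",
    "arn:aws:s3:::my-new-company-123456789/home/Kumar/report.txt")
```

In real IAM, a request that matches no Allow statement is implicitly denied, which is what confines Mark to his own folder.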
Using policy variables to manage policies more easily
Mark’s home folder was hard-coded in his folder-level policy, so creating similar policies for Kumar and Guru would require a separate policy naming each user’s home folder. Using policy variables, you can instead create a group policy, a single policy that covers several IAM Identity Center users. Policy variables act as placeholders: when a request is made to an AWS service, the value from the request is substituted for the placeholder as the policy is evaluated.
As demonstrated in the following policy (copy this policy to use in the procedure that follows), you can, for instance, use the previous policy and replace Mark’s user name with a variable that uses the requester’s user name through attributes and PrincipalTag:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Effect": "Allow",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteObjectTagging",
        "s3:DeleteObjectVersion",
        "s3:DeleteObjectVersionTagging",
        "s3:GetObject",
        "s3:GetObjectTagging",
        "s3:GetObjectVersion",
        "s3:GetObjectVersionTagging",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:PutObjectTagging",
        "s3:PutObjectVersionTagging",
        "s3:RestoreObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-new-company-123456789",
        "arn:aws:s3:::my-new-company-123456789/home/${aws:PrincipalTag/userName}/*"
      ],
      "Condition": {
        "StringLike": {
          "s3:prefix": [
            "home/${aws:PrincipalTag/userName}/*"
          ]
        }
      }
    }
  ]
}
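To see what this group policy effectively becomes for a given user, here is a small Python sketch. The substitution here is a local string replacement for illustration; in reality, IAM resolves ${aws:PrincipalTag/userName} at request time from the tag on the requesting principal:

```python
import json

# A trimmed-down statement using the same policy variable as above.
TEMPLATE = {
    "Effect": "Allow",
    "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject"],
    "Resource": [
        "arn:aws:s3:::my-new-company-123456789",
        "arn:aws:s3:::my-new-company-123456789/home/${aws:PrincipalTag/userName}/*",
    ],
    "Condition": {
        "StringLike": {"s3:prefix": ["home/${aws:PrincipalTag/userName}/*"]}
    },
}

def resolve(statement: dict, user_name: str) -> dict:
    """Substitute the PrincipalTag variable the way IAM would at
    evaluation time (local illustration only)."""
    text = json.dumps(statement)
    return json.loads(text.replace("${aws:PrincipalTag/userName}", user_name))

resolved = resolve(TEMPLATE, "Kumar")
# The same policy document now scopes Kumar to his own folder.
assert resolved["Resource"][1].endswith("/home/Kumar/*")
assert resolved["Condition"]["StringLike"]["s3:prefix"] == ["home/Kumar/*"]
```

One policy document therefore serves Mark, Kumar, and Guru alike, as long as each user carries a userName principal tag.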
Consider the policies a user might require and limit access by only granting access when required.
Conclusion
Over the course of this article, we have seen how Amazon S3 folders work and how, by adhering to the principle of least privilege, we can ensure that the appropriate users have access to the required data, and only to that data. There are many more ways to tailor user policies to specific requirements; for a detailed walkthrough of Amazon S3 policies, see Controlling access to a bucket with user policies.
Drop a query if you have any questions regarding Amazon S3 and we will get back to you quickly.
About CloudThat
CloudThat is an award-winning company and the first in India to offer cloud training and consulting services worldwide. As a Microsoft Solutions Partner, AWS Advanced Tier Training Partner, and Google Cloud Platform Partner, CloudThat has empowered over 850,000 professionals through 600+ cloud certifications, winning global recognition for its training excellence, including 20 MCT Trainers in Microsoft’s Global Top 100 and an impressive 12 awards in the last 8 years. CloudThat specializes in Cloud Migration, Data Platforms, DevOps, IoT, and cutting-edge technologies like Gen AI & AI/ML. It has delivered over 500 consulting projects for 250+ organizations in 30+ countries as it continues to empower professionals and enterprises to thrive in the digital-first world.
FAQs
1. What is an Access Point in Amazon S3?
ANS: – An access point is a named network endpoint attached to a bucket that you can use to perform Amazon S3 object operations, such as GetObject and PutObject. Note that access points with Amazon VPC network origins are always considered nonpublic, regardless of what the access point policy says.
2. Who can set up the Amazon S3 bucket?
ANS: – After you sign up for AWS, you’re ready to create a bucket in Amazon S3 using the AWS Management Console.

WRITTEN BY Guru Bhajan Singh
Guru Bhajan Singh is currently working as a Software Engineer - PHP at CloudThat and has 7+ years of experience in PHP. He holds a Master's degree in Computer Applications and enjoys coding, problem-solving, learning new things, and writing technical blogs.