Amazon S3
Files.com's integration with Amazon S3 allows you to work with files in an Amazon S3 bucket in several different ways.
Files.com's Remote Server Mount feature gives you the ability to connect a specific folder on Files.com to the remote server in real time.
That folder then becomes a client, or window, into the files stored in your remote server or cloud.
Once you configure a Mount, any operation you perform on or inside that folder will act directly on the remote in real time. Whether you are dropping a file into that folder, deleting a file, creating a subfolder, or performing any other file/folder operations your Files.com user has permissions for, those operations will "pass through" to the remote in real time.
This powerful feature enables a wide variety of use cases, including accessing files in a counterparty's (client's or vendor's) cloud without provisioning access for individual users, reducing storage costs by leveraging on-premise or bulk storage solutions, and enabling applications to access third-party clouds via the Files.com API, FTP, SFTP, or Files.com Apps, among many others.
Alternatively, Files.com's Sync feature gives you the ability to push or pull files to or from S3 buckets. This means that the files will exist in both locations at the end of the sync process.
A sync can be used to send files from your Files.com site to the S3 bucket or to pull files from the S3 bucket into your Files.com site.
Add Amazon S3 as a Remote Server
Add a new Remote Server to your site, and select Amazon S3 as the server type.
You must provide an Internal name for this connection. If you're managing multiple remote servers, make the name clear enough to easily identify this particular connection.
The Region and Bucket are required, because they define which bucket Files.com will connect to, and the Authentication Information contains the credentials Files.com will use for connecting to AWS.
Region and Bucket
Files.com supports connecting to S3 buckets in many regions, even regions where Files.com itself doesn't have an AWS presence. This includes AWS GovCloud. We are happy to expand our list of supported regions; please contact us if you need to access a region you don't see listed.
Files.com requires access to the Bucket being used, so we recommend creating a bucket for the exclusive use of Files.com.
AWS Region
The AWS Region Code of your S3 bucket can be found by using the get-bucket-location command of the AWS CLI tool. You can cross-reference the region code and region name using Amazon's online documentation.
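For example, assuming the AWS CLI is installed and configured with credentials that can read the bucket, replace <bucketname> with your bucket name and run:

```shell
# Returns the bucket's region code; a null LocationConstraint means us-east-1
aws s3api get-bucket-location --bucket <bucketname>
```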
Bucket Name
Your Amazon S3 bucket name can be found in the Amazon AWS Console, within the Amazon S3 section, under the Buckets list.
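If you prefer the AWS CLI over the console, you can also list the bucket names visible to your credentials:

```shell
# List the names of all buckets the current credentials can see
aws s3api list-buckets --query 'Buckets[].Name' --output text
```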
Authentication Information
Files.com supports two authentication methods for connecting to Amazon S3: Access Key with Secret Access Key, and AWS STS with IAM Role Assumption.
Choose the method that aligns with your organization’s security and credential management policies.
The authentication information can be placed in the Remote Server Credential Manager and selected when configuring the Remote Server.
Access Key with Secret Access Key
Provide an AWS Access Key ID and Secret Access Key associated with an IAM user that has permission to access the target S3 bucket.
The IAM user must have appropriate permissions for the operations you intend to perform. Scope permissions to the specific bucket and paths required for least-privilege access.
This method stores long-lived credentials in Files.com.
AWS STS with IAM Role Assumption
Files.com supports authentication using AWS Security Token Service (STS) by assuming an IAM role in your AWS account. This method uses temporary credentials instead of long-lived access keys.
To configure role assumption:
- Create an IAM role in your AWS account.
- Configure a trust policy that allows the Files.com AWS account to assume that role. A sample policy is provided.
- Grant the role the necessary permissions to access the target S3 bucket.
- Provide the Role ARN when configuring the STS credential in Files.com.
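The steps above depend on a trust policy attached to the IAM role. The fragment below is an illustrative sketch only: <FilesComAccountID> is a placeholder for the Files.com AWS account ID, which is provided during Remote Server configuration; use the sample policy shown in the Files.com UI as your authoritative source.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<FilesComAccountID>:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```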
This method reduces long-term credential exposure and aligns with AWS best practices for cross-account access.
When using STS, Files.com requests temporary credentials from AWS by assuming the specified role. AWS issues time-limited credentials that Files.com uses for S3 operations.
The Assume Role Session Duration setting, which defaults to 3600 seconds (1 hour), specifies the duration for the STS credential. This default value matches the default value for an IAM role. If the IAM role has a custom value for its Maximum session duration setting in AWS, then configure the Files.com setting to a value equal to or lower than the IAM role’s maximum session duration.
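To check the maximum session duration configured on your role, you can query it with the AWS CLI (assuming credentials with IAM read access; replace <role-name> with your role's name):

```shell
# Show the role's Maximum session duration in seconds (the AWS default is 3600)
aws iam get-role --role-name <role-name> --query 'Role.MaxSessionDuration'
```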
Files.com automatically renews temporary credentials before they expire to maintain uninterrupted access to S3.
Choosing an Authentication Method
Use Access Key with Secret Access Key when your environment relies on static credentials and key rotation policies. Use AWS STS with IAM role assumption when you prefer temporary credentials and centralized role-based access control.
For most production environments, role assumption via STS provides stronger security controls and easier credential management.
Access Permissions
Regardless of authentication method, the IAM user or role must have permissions appropriate to the intended use of the Remote Server.
Restrict permissions to the specific bucket and object prefix required for your integration.
Minimal Permissions for Full Access
You will need to apply a bucket policy that grants your IAM user full permissions to the bucket being used.
These permissions represent the minimum required for Files.com to function correctly with your S3 bucket.
To use the example below, replace <Your IAM User ID> with your 12-digit AWS account ID, replace <Your IAM User Name> with the IAM user name of the user, and replace <bucketname> with your bucket name.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Your IAM User ID>:user/<Your IAM User Name>"
      },
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetObject",
        "s3:GetObjectAcl",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>",
        "arn:aws:s3:::<bucketname>/*"
      ]
    }
  ]
}
Read-Only Permission
Access to the S3 bucket is determined by the policies and permissions in Amazon S3.
If your S3 bucket is read-only, or you wish Files.com to be restricted to read-only permissions, then configure the S3 user policy (replace <bucketname> with your bucket name) as below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::<bucketname>",
        "arn:aws:s3:::<bucketname>/path/to/subfolder/*"
      ]
    }
  ]
}
Bear in mind that the bucket policy permissions are completely separate from Files.com permissions.
When you set the bucket policy to read-only, you should also ensure that any Files.com users with access to this bucket are limited to read-only permissions in Files.com.
Configuring Lifecycle Rules for Multipart Uploads
Files.com uses the Amazon S3 Multipart Upload API when transferring large files to your bucket. Multipart Upload improves performance and reliability by uploading file parts in parallel, but unfinished uploads can accumulate if they're never completed or aborted.
By default, S3 does not automatically delete incomplete multipart uploads. Any unfinished parts remain stored indefinitely and continue to incur storage costs.
To prevent unnecessary storage usage, we strongly recommend setting up an AbortIncompleteMultipartUpload lifecycle rule in your S3 bucket configuration. This rule ensures that uncompleted uploads are automatically cleaned up after a set period.
Here is an example rule:
{
  "Rules": [
    {
      "ID": "Abort incomplete uploads after 7 days",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}
This rule tells Amazon S3: If a multipart upload was initiated more than seven days ago, and hasn’t been completed or aborted, automatically abort it and delete all uploaded parts. Adjust the DaysAfterInitiation value as needed for your storage policies.
Configuring this rule helps you avoid hidden storage charges from incomplete uploads, keep your bucket organized and clean, and maintain predictable storage costs.
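You can configure the rule in the S3 console, or apply it with the AWS CLI. As a sketch, assuming credentials allowed to manage the bucket's lifecycle configuration and the example rule saved locally as lifecycle.json:

```shell
# Apply the lifecycle configuration from a local JSON file to the bucket
aws s3api put-bucket-lifecycle-configuration \
  --bucket <bucketname> \
  --lifecycle-configuration file://lifecycle.json
```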
Add Remote Server Mount
Remote Server Mounts are created by mounting them onto an empty folder in Files.com. This folder should typically not be the root of your site, although mounting at the root is supported if you need it.
Add Sync
After creating the Amazon S3 Remote Server, you can use it to perform Syncs between your bucket and Files.com.
Automations
Folders that have been configured with Remote Server Mount to Amazon S3 can also be used with Automations, allowing you to include S3 buckets as source locations or destinations for your Automations.
Folder Representation Using Slash Files
Amazon S3 Storage does not natively support hierarchical folders and instead stores data in a flat namespace. Files.com represents folder structures in S3 Storage using slash files, a convention that simulates directories while remaining compatible with Amazon's underlying storage model.
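This convention is commonly implemented as a zero-byte object whose key ends with a slash, which S3 consoles and clients render as a folder. For illustration (the exact representation Files.com writes may differ), such a marker can be created with the AWS CLI:

```shell
# Create an empty "folder marker" object; clients render reports/ as a folder
aws s3api put-object --bucket <bucketname> --key "reports/"
```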