
Google Cloud Storage (GCS)

Files.com's integration with Google Cloud Storage (GCS) allows you to integrate with files on a Google Cloud Storage bucket in several different ways.

Files.com's Remote Server Mount feature gives you the ability to connect a specific folder on Files.com to the remote server in real time.

That folder then becomes a client, or window, into the files stored on your remote server or cloud.

Once you configure a Mount, any operation you perform on or inside that folder will act directly on the remote in real time. Whether you are dropping a file into that folder, deleting a file, creating a subfolder, or performing any other file/folder operations your Files.com user has permissions for, those operations will "pass through" to the remote in real time.

This powerful feature enables a wide variety of use cases, such as accessing files in a counterparty's (client or vendor's) cloud without provisioning access for individual users, reducing storage costs by leveraging on-premises or bulk storage solutions, enabling applications to access third-party clouds via the Files.com API, FTP, SFTP, or Files.com Apps, and many more.

Alternatively, Files.com's Sync feature gives you the ability to push or pull files to or from remote servers. This means that the files will exist in both places at the end of the sync process.

A sync can be a "push", where files from your Files.com site are transferred to the remote server, or a "pull" where files are transferred from the remote server to your Files.com site.

Add Google Cloud Storage as a Remote Server

Add a new Remote Server to your site, and select Google Cloud Storage as the server type.

You must provide an Internal name for this connection. If you're managing multiple remote servers, make the name clear enough to easily identify this particular connection.

You must provide the Google Cloud Storage Bucket name that Files.com will connect to.

The Authentication Information is required because it contains the credentials Files.com will use for connecting to the remote system.

Once your Remote Server is added, you can integrate it with Files.com as either a Remote Server Mount or a Sync.

Authentication Information

Files.com supports two authentication types for connecting to Google Cloud Storage. By default, JSON is selected.

With the JSON method, you enter your Google Cloud project ID and provide the JSON credentials from your service account. This JSON object is generated when you create a service account key in the Google Cloud Console, and it contains the private key and other required authentication details. You can open the downloaded JSON key file in a text editor and copy its entire contents into the Credentials (JSON) field. To learn more, see Google's documentation.
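For reference, the downloaded key file typically has the structure shown below. The project ID, service account name, and key values here are placeholders; paste the entire contents of your own file, including the private key, into the Credentials (JSON) field.

{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "0123456789abcdef0123456789abcdef01234567",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "files-com-access@your-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/files-com-access%40your-project-id.iam.gserviceaccount.com"
}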

The second authentication type is HMAC (XML / S3-compatible API). It is intended for use cases where you're connecting through an S3-compatible API or where JSON credentials are not preferred or feasible. This method requires you to provide an access key and a secret key. These keys are part of the HMAC authentication mechanism supported by Google Cloud. You can generate them in the Google Cloud Console by following Google's HMAC setup guide.
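If you prefer the command line, HMAC keys for a service account can also be generated with the gcloud CLI. The following is a sketch; the service account email is a placeholder for your own.

# Create an HMAC key pair for the service account Files.com will use
gcloud storage hmac create files-com-access@your-project-id.iam.gserviceaccount.com

# The output includes an access ID and a secret; enter these as the
# access key and secret key in the Files.com Authentication Information.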

Permissions for Files.com Access

When you connect a Google Cloud Storage (GCS) bucket to Files.com, Files.com authenticates using a Service Account that you create in your Google Cloud project. The permissions granted to that Service Account determine what Files.com can do inside your bucket, such as reading, writing, or deleting files.

Assign only the permissions your workflow requires. Files.com does not need to manage bucket-level settings or IAM policies.

Full Access

Full access enables Files.com to perform complete create, read, update, and delete (CRUD) operations on all objects within your GCS bucket. This configuration is appropriate when Files.com needs to both upload and download files, synchronize folders, or manage data through Automations and Syncs.

The recommended approach is to grant the predefined Google Cloud IAM role called Storage Object Admin (roles/storage.objectAdmin), which provides all necessary object-level permissions.
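If you manage IAM from the command line rather than the Google Cloud Console, a binding along the following lines grants that role at the bucket level. This is a sketch assuming the gcloud CLI; the bucket name and service account email are placeholders.

# Grant Storage Object Admin on the bucket to the Files.com service account
gcloud storage buckets add-iam-policy-binding gs://your-bucket-name \
  --member="serviceAccount:files-com-access@your-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectAdmin"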

If you prefer to use a custom IAM role instead, it must include the following permissions:

storage.buckets.get
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.list

These permissions allow Files.com to add, modify, and remove files as required.

Full access is typically used when Files.com must synchronize content bi-directionally with your GCS bucket, move or delete files automatically through Automations, or manage an ongoing file exchange or archival workflow.

Read-Only Access

Read-only access allows Files.com to view and download files from your GCS bucket without making any changes to the data. This configuration is appropriate when Files.com needs to import data or read existing files but should not upload, overwrite, or delete anything in the bucket.

To enable this mode, assign the predefined IAM role Storage Object Viewer (roles/storage.objectViewer), which grants permission to view object metadata and read file contents.
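The equivalent command-line grant, sketched with placeholder bucket and service account names, looks like this:

# Grant Storage Object Viewer on the bucket to the Files.com service account
gcloud storage buckets add-iam-policy-binding gs://your-bucket-name \
  --member="serviceAccount:files-com-access@your-project-id.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"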

If you prefer to use a custom IAM role instead, it must include the following permissions:

storage.buckets.get
storage.objects.get
storage.objects.list

These permissions allow Files.com to retrieve objects and list directory contents but prevent any modifications.

Read-only access is ideal for workflows where Files.com must import files for downstream processing, synchronize data in a one-way direction, or provide visibility into files for partners, auditors, or compliance use cases without altering the source data.

Write-Only Access

Write-only access in Google Cloud Storage functions as an archive-only mode, allowing Files.com to upload new files into your GCS bucket without viewing, modifying, or deleting any existing data. In this configuration, Files.com can deliver files to the bucket but cannot retrieve or replace objects that are already stored there. Because GCS enforces strong object immutability rules at the permission level, overwriting or deleting files is not permitted when access is limited in this way.

To configure this level of access, create a custom IAM role that includes the following permissions:

storage.buckets.get
storage.objects.create
storage.objects.list

The create permission enables Files.com to add new objects to the bucket, while the list permission is still required for upload operations to complete successfully. Without list access, GCS cannot validate the target location and will reject write requests, even when the file name is new.
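Because no predefined role matches this permission set, the custom role must be created before it can be granted. The following is a sketch using the gcloud CLI; the role ID, project ID, bucket name, and service account email are all placeholders.

# Create a custom role containing only the write-only permissions
gcloud iam roles create filesComWriteOnly \
  --project=your-project-id \
  --title="Files.com Write Only" \
  --permissions=storage.buckets.get,storage.objects.create,storage.objects.list

# Grant the custom role to the Files.com service account on the bucket
gcloud storage buckets add-iam-policy-binding gs://your-bucket-name \
  --member="serviceAccount:files-com-access@your-project-id.iam.gserviceaccount.com" \
  --role="projects/your-project-id/roles/filesComWriteOnly"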

This configuration is best suited for workflows where Files.com must act as a secure drop-off point for incoming data. Typical use cases include archival or compliance environments that prohibit modification of existing records, as well as one-way delivery pipelines that continuously write new files to GCS for downstream consumption or long-term retention.

Under write-only access, Files.com does not receive visibility into the contents of the bucket beyond confirming successful uploads. Existing objects remain fully protected and immutable from the Files.com connection.

Dedicated IPs

If your site has dedicated IP addresses, you may choose whether the Files.com platform will use those dedicated IP addresses to interact with the remote server. You may wish to enable this to simplify networking rules in the remote system. If you do not have dedicated IP addresses, or you disable this option, then connections to the remote server may be made using any of Files.com's available IP Addresses.

Cleaning Up Incomplete Multipart Uploads in S3 Emulation Mode

When using Google Cloud Storage (GCS) in S3 compatibility mode with Files.com, uploads use the Multipart Upload process, just like Amazon S3.

If multipart uploads are not completed or aborted, the uploaded parts may remain stored indefinitely, consuming storage space and potentially incurring costs.

To prevent this, configure a GCS Object Lifecycle Management rule that automatically aborts incomplete multipart uploads after a set period.

Here is an example GCS Lifecycle Rule:

{
  "rule": [
    {
      "action": {
        "type": "AbortIncompleteMultipartUpload"
      },
      "condition": {
        "age": 7
      }
    }
  ]
}

This rule ensures that any multipart uploads older than seven days are automatically aborted, and their parts deleted.
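If you manage the bucket from the command line, the rule can be applied with the gcloud CLI as sketched below, assuming the JSON above has been saved locally as lifecycle.json (a placeholder filename).

# Apply the lifecycle configuration to the bucket
gcloud storage buckets update gs://your-bucket-name --lifecycle-file=lifecycle.json

# With older tooling, the equivalent is: gsutil lifecycle set lifecycle.json gs://your-bucket-name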

Add Remote Server Mount

Remote Server Mounts are created by mounting them onto an empty folder in Files.com. Typically, this folder should not be the root of your site, although mounting at the root is supported if you need it.

Add Sync

After creating the Remote Server, you can use it to perform Syncs between your remote server and Files.com.

Automations

Folders that have been configured as a Remote Server Mount to Google Cloud Storage can also be used with Automations, allowing you to include Google Cloud Storage buckets as source or destination locations for your Automations.

Case Sensitivity

Be aware of case sensitivity differences when copying, moving, or syncing files and folders between Google Cloud Storage and other storage locations. Google Cloud Storage is a case-sensitive system, whereas other systems may not be. This can cause files to be overwritten, and folders to have their contents merged, if their case-insensitive names match.
