
Storage Connect for Frame.io: Registering Assets

Written by Jared
Updated over 5 months ago

Storage Connect allows Frame.io’s Enterprise customers to use their own cloud storage endpoint as the backing storage of Frame.io. Today when a user uploads an asset to Frame.io, the asset flows through the application stack and is stored in Frame.io’s Amazon S3 bucket. Similarly, playback and delivery of an asset are serviced from a Frame.io managed Amazon S3 bucket.

Use your own AWS S3 as the source of truth while keeping Frame.io as the single surface to browse, search, share, and review. You can now connect:

  • One primary S3 bucket (read/write) — where new uploads from Frame.io land.

  • Any number of additional S3 buckets (read‑only) — make existing media in S3 visible and playable in Frame.io without copying or duplicating originals.

Frame.io generates lightweight proxies (thumbnails, previews, playback) while originals remain in your S3. To register files that already live in your connected buckets, use the V4 Public API: Import File endpoint.

Note: This offering is available to both net-new and existing Frame.io customers using Storage Connect. For existing customers, Frame.io offers a one-time migration of customer data historically stored in Frame.io’s managed Amazon S3 bucket to the customer-managed Amazon S3 bucket.

The information below provides net-new and existing Frame.io customers with a step-by-step guide to configuring their S3 buckets for compatibility with Storage Connect.

Prerequisites

  • Frame.io Enterprise account with Storage Connect enabled for your org.

  • Access to create/update AWS IAM roles/policies and S3 bucket permissions.

  • Access to the Frame.io V4 Public API (for the Import File endpoint used to register existing objects from read‑only buckets).

Note: For the exact IAM trust configuration and account mapping, your Frame.io contact (CSM / Implementation specialist) will provide the current OIDC/role setup and will validate permissions during onboarding.

Key Concepts

  • Primary (read/write) bucket: The S3 bucket that receives originals when users upload to Frame. Frame requires read + write permissions here.

  • Read‑only buckets: One or more S3 buckets that Frame can read, but not write. Assets are made visible in Frame by calling the Import File API (no copying of originals).

  • Proxies: Frame generates derivatives (thumbnails, previews, and streaming proxies) so assets are playable in Frame even while originals remain in your S3.

Part A — Configure the Primary Read/Write S3 Bucket

  1. Choose or create the S3 bucket you will use as your primary storage.

  2. Create an IAM Role for Frame.io (trusted by the Frame.io identity provider) and attach an IAM policy that grants read/write to the bucket/prefix.

  • Example policy skeleton (replace ARNs and restrict to required prefixes where possible):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListPrimaryBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::YOUR_PRIMARY_BUCKET"
    },
    {
      "Sid": "RWPrimaryObjects",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::YOUR_PRIMARY_BUCKET/*"
    }
  ]
}
  3. Provide details to Frame.io: the IAM Role ARN, bucket name, and any preferred object prefixes. Frame.io will complete the mapping so uploads from your users in Frame target this bucket.

  4. Validate by performing a test upload in Frame to confirm originals land in your S3 and proxies appear in Frame.
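If you manage several buckets or prefixes, the policy skeleton above can be generated programmatically instead of edited by hand. A minimal sketch (the bucket name and prefix below are placeholders; confirm the exact action list and any restrictions with your Frame.io contact during onboarding):

```python
import json

def primary_bucket_policy(bucket: str, prefix: str = "") -> dict:
    """Build the read/write IAM policy skeleton for a primary Storage Connect bucket.

    `prefix` optionally restricts object access to a key prefix, e.g. "frameio/".
    """
    # Object-level ARN: scoped to the prefix when one is given, else the whole bucket.
    object_arn = (f"arn:aws:s3:::{bucket}/{prefix}*" if prefix
                  else f"arn:aws:s3:::{bucket}/*")
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ListPrimaryBucket",
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Sid": "RWPrimaryObjects",
                "Effect": "Allow",
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject",
                    "s3:AbortMultipartUpload",
                    "s3:ListMultipartUploadParts",
                ],
                "Resource": object_arn,
            },
        ],
    }

# Emit a policy document ready to paste into the IAM console or CLI.
print(json.dumps(primary_bucket_policy("my-primary-bucket", "frameio/"), indent=2))
```

Restricting the object statement to a prefix keeps the role's write access to the narrowest slice of the bucket that Frame actually needs.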

Part B — Add One or More Read‑Only S3 Buckets

You can associate additional S3 buckets as read‑only sources. Frame will read objects for proxy generation and playback, but will not write new objects to these buckets.

  1. Choose your additional S3 buckets (and optional folder prefixes) to expose in Frame.

  2. Create (or update) an IAM Role used by Frame to grant read‑only access to each bucket/prefix.

  • Example policy skeleton per bucket (adjust ARNs/prefixes and add for each bucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListReadOnlyBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::YOUR_RO_BUCKET"
    },
    {
      "Sid": "ReadObjects",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:GetObjectVersion"],
      "Resource": "arn:aws:s3:::YOUR_RO_BUCKET/*"
    }
  ]
}
  3. Provide details to Frame.io: the Role ARN (or distinct ARNs per bucket), plus each bucket name and any optional prefix that should be exposed as read‑only.

  4. Proceed to register existing files from these read‑only buckets using the Import File API (below).
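Only objects whose keys fall under a mapped prefix will be importable, so it is worth checking your key layout against the prefixes you hand to Frame.io before onboarding. A small illustrative helper (the function name and prefixes are examples, not part of any Frame.io SDK):

```python
def is_visible(key: str, mapped_prefixes: list[str]) -> bool:
    """Return True if an S3 object key falls under any prefix exposed to Frame.io.

    An empty-string prefix means the entire bucket is exposed.
    """
    return any(key.startswith(p) for p in mapped_prefixes)

# Example: a read-only bucket mapped with two folder prefixes.
prefixes = ["Marketing/2025/", "Archive/"]
print(is_visible("Marketing/2025/hero.mov", prefixes))  # True  -> importable
print(is_visible("Drafts/cut01.mov", prefixes))         # False -> will not be discovered
```

Running this over an inventory of your keys quickly shows which media would be left out of Frame under the current mapping.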

Part C — Register Existing S3 Objects in Frame (Import API)

When you add a read‑only bucket, your media already lives in S3. Use the Import File endpoint to register those S3 objects into your Frame projects/folders without copying the originals.

What the Import does

  • Creates a File asset in Frame that points to your S3 object.

  • Triggers Frame to generate proxies for browsing/playback.

  • Leaves the original file in-place in your S3.

Before you call the API

  • Ensure Storage Connect mappings are complete (Frame.io has your Role ARN, buckets, and prefixes).

  • Have an OAuth token with the required file scopes.

  • Identify the destination container in Frame (project root or folder asset ID) where the imported file should appear.

  • Identify the S3 bucket and object key to import.

Example (pseudocode cURL)

Use the official API reference for the exact path and fields. The structure below illustrates the intent.

# PSEUDOCODE — see the "Import File" endpoint docs for the exact URL & schema

curl -X POST "https://api.frame.io/v4/files.import_file" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "container_id": "<FRAME_FOLDER_OR_PROJECT_ASSET_ID>",
    "external": {
      "provider": "s3",
      "bucket": "my-readonly-bucket",
      "key": "Marketing/2025/hero.mov"
    },
    "display_name": "hero.mov"
  }'
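The same call can be scripted for bulk registration. A Python sketch using only the standard library; the endpoint path and field names mirror the pseudocode above, so check them against the official Import File reference before use (the helper names here are illustrative):

```python
import json
import urllib.request

API_URL = "https://api.frame.io/v4/files.import_file"  # pseudocode path; see the API reference

def build_import_payload(container_id: str, bucket: str, key: str) -> dict:
    """Assemble the Import File request body (fields mirror the cURL example)."""
    return {
        "container_id": container_id,
        "external": {"provider": "s3", "bucket": bucket, "key": key},
        # Default the display name to the object's file name.
        "display_name": key.rsplit("/", 1)[-1],
    }

def import_file(token: str, payload: dict) -> bytes:
    """POST one import request; returns the raw response body."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

payload = build_import_payload("<FRAME_FOLDER_OR_PROJECT_ASSET_ID>",
                               "my-readonly-bucket", "Marketing/2025/hero.mov")
print(json.dumps(payload, indent=2))
```

Looping `build_import_payload` over an object listing and feeding each payload to `import_file` registers a whole prefix without moving any originals.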

Verify in Frame

  • Open your destination Project/Folder — imported files appear like any other assets.

  • Playback should be immediately available once proxies finish generating.

  • Check asset details to confirm the storage location is your S3.

Behavior & Notes

  • Uploads from any Frame client (web, iOS, the Transfer app, etc.) go to the primary read/write bucket.

  • Imported assets from read‑only buckets remain in place; deleting an asset in Frame does not delete the original in a read‑only bucket. (Respect your org’s deletion policies for the read/write bucket.)

  • Lifecycle/archival: If objects move to Glacier or are temporarily unavailable, previously generated proxies may continue to allow browsing; original retrieval will depend on S3 tier and availability.

Troubleshooting

  • 403/AccessDenied on import/playback → verify IAM policy includes s3:GetObject for the object ARN.

  • Objects not discovered → confirm bucket/prefix mapping matches the object keys you’re importing.

  • Proxies not generating → confirm the file type is supported and IAM allows reads; check for transient S3 errors.

  • Rate limiting on bulk import → add retry with exponential backoff; throttle concurrency.
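The last point, retry with exponential backoff, can be sketched as a small wrapper. This is an illustrative pattern, not Frame.io client code; in practice you would treat an HTTP 429 response as the retryable signal (here modeled as a `RuntimeError("rate_limited")` for simplicity):

```python
import random
import time

def with_backoff(fn, max_attempts: int = 5, base_delay: float = 1.0):
    """Call fn(), retrying rate-limited failures with exponential backoff + jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError as exc:
            # Only retry rate-limit errors, and give up after the final attempt.
            if str(exc) != "rate_limited" or attempt == max_attempts - 1:
                raise
            # Delay doubles each attempt; jitter spreads out concurrent clients.
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Demo: a call that is rate-limited twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_import():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("rate_limited")
    return "imported"

print(with_backoff(flaky_import, base_delay=0.01))  # prints: imported
```

Pairing this with a bounded worker pool (throttled concurrency) keeps a bulk import within the API's rate limits.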

FAQ

Q: How many read‑only buckets can I add?

A: Any number. Map each bucket (and optional prefixes) with read‑only permissions.

Q: Do I need to move or copy my existing library into Frame?

A: No. Use the Import File API to register in place.

Q: Can I remove a bucket later?

A: Yes. Removing a read‑only bucket unpublishes those assets from Frame (your originals remain in S3).

Q: Does Frame ever write to my read‑only buckets?

A: No. Frame reads objects to generate proxies and stream originals when needed; writes occur only to the primary read/write bucket.

Summary

This guide reflects the latest Storage Connect capability: one primary read/write bucket plus multiple read‑only buckets, with registration via the Import File API so you can work in Frame without duplicating originals.

Important limitation: SSE‑KMS–encrypted buckets/objects are not supported by Storage Connect at this time.
