
Enhanced search with media intelligence

Written by Jared
Updated over a month ago

Frame.io gives you the power to search however you want and find whatever you need, from Standard search to our latest visual and semantic search technologies with media intelligence. Read on to discover how to get the most out of your search options!

This feature is available on both web and mobile (iOS) platforms.

Standard search

Global

Our enhanced search functionality can find any media, folders, and projects across your full account and is designed for creatives and managers who need to efficiently locate, inspect, use and manage digital assets and Frame.io Projects. By leveraging metadata alongside relevance-based ranking, we’re transforming the way users interact with their content, making the search process faster, more intuitive, and more effective.

Begin by clicking on the magnifier icon on the left of any Frame page, clicking the search bar at the top of the Workspace page, or using the shortcut Ctrl/CMD + K.

Beyond just searching general phrases and titles, you can use this search feature to find metadata attached to your media. This includes Status, Keywords, Uploader, Assignee, File Type, Format, and Source Filename.

For example, if you want to find all of your media with the status Needs Review, typing "Needs Review" into the search bar will show all media assets across your entire account with that status. If you want to find all the PDF files in your account, just type "PDF" and all your results will be PDF files.

The search feature will display any assets in list form with a preview panel on the right of the results. The preview will show a thumbnail for the asset and file information, including Workspace and Project location, size, file type, and what metadata matches with your search.

Note: No need to hit Enter/Return when searching. The results will populate automatically after you stop typing. Hitting Enter/Return in Search will actually open the first result of your search.

You can also combine keywords or metadata into one search by adding a comma (,) to make your searches more accurate.

For example, searching "Audio, .wav, Needs Review" uses three different metadata fields to find the audio file that needs review more accurately than searching "Audio" alone. Similarly, if a media asset has multiple Keywords, searching for as many applicable keywords as possible, separated by commas, will help you pinpoint the exact file you're looking for.

You can open and view an asset from the search results by double-clicking on the name in the list or clicking the thumbnail preview. You can also click View in Source to open the source location of that media/folder and Copy URL to link to the source page.

To exit search, you can click outside of the search box.

Note: Search results are displayed per account.

Local search

Local Search in Frame.io is a quick and simplified search experience that’s linked to your current Project or folder. By using the magnifier icon in the top center of your current Project space, you can search for any keyword and the results of assets in that Project will appear automatically in the main page.

Standard results (local and global)

Standard results are particularly helpful when you already know the name of the item you wish to locate. Standard results derive matches based on name for assets, folders, and projects. For assets, matches also come from the values of a core set of metadata fields:

  • Status

  • Keywords

  • Uploader

  • Assignee

  • File Type

  • Format

  • Source Filename

Tips and tricks: Standard search

  • When searching for a match on an asset, folder, or project name, try to be as complete as possible. If you are looking for an asset entitled “project x-ray,” you will have greater success with that full term vs. a part thereof, e.g.
    Best results: project x-ray

    Less accurate results: x-ray

  • It's also possible to search for a match on multiple names; enter each name separated by a space or a comma:
    health, medical, life

    or

    10001.jpg, 10002.jpg, 10003.jpg

  • When searching for a match on a folder name, enter the name and once the results show up use the “Folder” filter: nature

  • When searching for a match on a project name, enter the name and once the results show up use the “Project” filter: stock assets

Media intelligence search

NLP (Natural Language Search)

Natural Language Search is Frame.io’s newest way to use the power of metadata fields to find more accurate results for the media in your account. It can handle more complex queries that contain several match requirements across metadata and semantic search.

Instead of searching only for project and folder titles, you can use a number of different metadata fields, such as rating, comment count, page count, duration, file size, audio sample rate, transcription, and many more, to get even more exact search results.

For example, a basic search such as “video uploaded by Jane Doe” can be improved by using more specific phrases. By adding file type, date uploaded, and keyword metadata, the search can become “4K ProRes video uploaded during August by Jane Doe tagged with ‘people’ and status ‘Needs Review’” to get even closer to the results you’re looking for.

Transcripts

If a video has a transcription generated or uploaded to it, your searches will also be able to find transcription results. For example, the phrase “Transcript ‘secure’” will show all results with that word in the video’s transcript. Clicking on the matches on the right will open the transcript panel and highlight the results.

Additional metadata fields can also be added for more accuracy (e.g., “transcript ‘secure’ in the documentary project rated at 3 stars or above”). Selecting a search result will bring you to the exact transcript location in the video.

Comments

The same can be done with Frame.io comments on your media. Search for a phrase or keyword found in your comments and NLP search will find the results (e.g., “Comment 'Asset Manager'”). Selecting a search result will bring you to the exact location of the comment.

Tips and tricks: NLP

  • When searching for assets that match based on metadata values, it helps to include the metadata field name followed by the desired value, e.g.
    status approved
    keyword nature
    assignee samuel
    project stock assets
    codec pro res
    uploaded over past 30 days
    uploader felicia

  • When searching for a match against a date range, use words rather than numeric shorthand:
    uploaded during December 2025
    uploaded between January 1st 2025 and today

  • When searching for a match within a transcript, call that out together with the word or phrase you are seeking:
    transcript “motivation”
    or
    transcript contains motivation

  • When searching for a match within a comment, call that out together with the word or phrase you are seeking:
    comment “hero”
    or
    comment contains hero

  • If you wish to build a multi-intent query, it can help to leave out words that aren’t strictly necessary.
    For example, while the query below will work:
    grab the PDF from the whisper fan project, it has two pages, Fabian uploaded it I think

    This version, however, demonstrates a cleaner, more scalable approach:
    PDF 2 pages project whisper fan uploader Fabian

  • You can also search for numeric ranges, e.g.
    PDF with 20 pages or more

    Images rated at 3 stars or above

    Video resolution is greater than 1920 x 1080 but less than 3500 x 2000
    Assets deleted within the past 30 days

Semantic search

Note: This feature is for Team and Enterprise accounts only.

Semantic search is a visual search using media intelligence that matches words and phrases to results inside the media itself. Unlike NLP search, semantic search is not based on metadata; instead, it recognizes keywords to find visual references within the content of any images and videos.

For example, a search that includes “clockface” will show results with any media that has clock or time imagery, regardless of whether “clockface” appears in the title or as a metadata keyword. Clicking into any result will show white lines (subclips) on the timebar highlighting the matches. You can navigate directly to a specific subclip using the detail panel to the right of the results list.

If you search “wedding footage featuring the groom,” semantic search will look through all videos for “wedding” and “groom” imagery as combined visual references, producing more accurate results than searching just “wedding” or “groom.”

The most accurate results should appear at the top, with close approximations of what is being searched for below. If you search for “black and white images of a person working out,” the first results might be very accurate image matches, while below will be other “black and white,” “images,” and “working out” examples. Selecting your result will open the asset at the exact timestamp of the match.
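For the technically curious: ranking in a semantic search of this kind is typically done by embedding both the query and the media frames as vectors of decimal numbers, then comparing them with a similarity measure such as cosine similarity. The sketch below is purely illustrative, with made-up toy vectors and frame names; it is not Frame.io’s actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy embeddings (hypothetical; real systems use hundreds of dimensions)
frames = {
    "frame_0012": [0.9, 0.1, 0.2],  # imagined "sunset over ocean" frame
    "frame_0345": [0.1, 0.8, 0.3],  # imagined "meadow" frame
}
query = [0.85, 0.15, 0.25]  # imagined embedding of the word "sunset"

# Rank frames by similarity to the query embedding, best match first
ranked = sorted(frames, key=lambda f: cosine_similarity(query, frames[f]),
                reverse=True)
print(ranked[0])  # frame_0012 ranks highest: its vector points closest to the query
```

This is why a search for “sunsets over the ocean” can surface a clip that never mentions “sunset” anywhere in its title or metadata: the match happens in vector space, not in text.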

Tips and tricks: Semantic search

  • You can search for simple visual matches, e.g.
    sunsets
    meadow
    flowers

  • You can search for basic visual matches with some situational awareness, e.g.
    sunsets over the ocean

    meadow with golden grasses

    red flowers in a vase

  • You can search for emotions, e.g.
    woman smiling

    child feels safe

    boy is excited

    man is fearful

  • You can search for concepts or mood, e.g.
    colorful, dramatically lit images of technology
    teens or young adults dancing at a party
    Black and white images of a woman working out in a gym

Hybrid search results

Hybrid searching combines NLP and semantic searches in Frame.io, meaning you can search for visual imagery while including metadata fields. For example, you can combine a phrase like “find videos of a family in the pool AND uploaded in the last 30 days.” The results will show all visual matches, using the Date Uploaded metadata field to filter the results to the last 30 days.

A search can also return hybrid results, displaying both Standard results and media intelligence results. Sometimes a search maps directly to the name in the media’s title, but it may also refer to an object found in the frames of the video or in the metadata of the media. If, for example, you search for “All my interiors,” the first results will show the Standard results with “Interiors” in the title of the media, folder, project, or keywords. Below that list will be the media intelligence results, which can combine NLP and semantic results, showing all metadata fields that match interiors or visual representations of interiors, such as bedrooms or lamps.

Tips and tricks: Hybrid search

  • Here are some examples of combining the power of lexical and media intelligence, to search for visual or conceptual matches while also specifying metadata attributes:
    4K ProRes clips of apex predators
    monochrome images with notes "character jake"
    videos of a mother with child swimming uploaded by Sean

    martial arts competition or challenge rated at 4 stars or above
    images and videos of people who have disabilities in sporting events
    video or images uploaded in 2025 showing urban density or big city life, shopping and leisure

Media intelligence results

Media intelligence results are ideal if you have a multi-intent query (e.g., matching across three different metadata fields). Media intelligence results derive matches via NLP.

METADATA FIELD: EXAMPLE SEARCH

  • Alpha Channel: “Show me a photo with the alpha channel active”
  • Assignee: “I need to find a PDF uploaded by Paul”
  • Audio Bit Depth: “Give me a list of all my 24-bit depth audio files”
  • Audio Bit Rate: “Show me my 96 kbps audio files”
  • Audio Channel: “My new uploads of 3-channel audio files”
  • Audio Codec: “List all my MPEG audio files from my latest project”
  • Audio Sample Rate: “Audio containing 48 kHz sample rate”
  • Bit Rate: “Video with a bit rate of over 5 Mbps”
  • Color Space: “List all my videos with RGB color space”
  • Comment Count: “Find an image that has over 25 comments”
  • Date Uploaded: “Show any video uploads from the last 30 days”
  • Duration: “List all the videos under 60 seconds”
  • Dynamic Range: “Where are my BT709 dynamic range files?”
  • End Time: “Audio with an end time of 3 minutes”
  • File Size: “Any videos that are over 1 GB”
  • File Type: “List any image file types uploaded this month”
  • Format: “Can you show me any MOV files uploaded this week?”
  • Frame Rate: “Any videos with a frame rate of 24 fps”
  • Keywords: “Find any PDFs with the Private keyword”
  • Notes: “List any media files with a note containing Approved”
  • Page Count: “Show me a PDF that is exactly 55 pages long”
  • Rating: “Find audio with a 5-star rating”
  • Resolution - Height: “Any videos with a resolution that is exactly 1080 px...”
  • Resolution - Width: “... by 1920 px”
  • Seen By: “See if any media files were seen by Scott”
  • Source Filename: “List any filenames containing ‘dailies’”
  • Start Time: “Find me a video whose start time is at 18:00”
  • Status: “Show me all assets that have a status of Approved”
  • Transcript: “Are there any files that have an Italian transcript?”
  • Uploader: “Anything uploaded by Debbie”
  • Video Bit Rate: “Find any 5 Mbps bit rate videos uploaded by Craig”
  • Video Codec: “List all videos with AVC codec uploaded this week”

Expectations

NLP search

Frame.io uses a large language model (LLM) to do a smart translation of your conversational query into a meaningful search query. That said, there are some limitations to our LLM:

  • It cannot keep a history of your submissions or continue conversation into additional searches, so try to keep all context in a single search.

  • It will not understand any use of negation (e.g., 'not', 'nor', 'except for', 'instead of', 'ignore xyz').

As long as the written text can be understood conversationally, the LLM will be able to provide relevant results.
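To illustrate what “translating conversational speech into a search query” means, the sketch below turns free-form text into structured filters using simple pattern matching. This is a deliberately simplified, hypothetical stand-in: Frame.io’s actual translation is performed by an LLM and is internal, and the filter names here are assumptions for illustration only.

```python
import re

def parse_query(text):
    """Toy translation of a conversational query into structured filters.
    Hypothetical illustration only; not Frame.io's LLM-based parser."""
    filters = {}
    # Recognize a few file formats by name
    if m := re.search(r"\b(pdf|mov|wav)\b", text, re.IGNORECASE):
        filters["file_type"] = m.group(1).upper()
    # Recognize a page-count requirement like "2 pages"
    if m := re.search(r"(\d+)\s+pages?", text, re.IGNORECASE):
        filters["page_count"] = int(m.group(1))
    # Recognize "uploader <name>"
    if m := re.search(r"uploader\s+(\w+)", text, re.IGNORECASE):
        filters["uploader"] = m.group(1)
    return filters

print(parse_query("PDF 2 pages project whisper fan uploader Fabian"))
# → {'file_type': 'PDF', 'page_count': 2, 'uploader': 'Fabian'}
```

The real LLM handles far looser phrasing than these patterns, but the target is the same: a set of field/value filters that can be run against your account’s metadata.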

Semantic search

Media intelligence found in semantic search is scene-based within the frame and will best track anything found prominently in that frame. If an item is too small or far in the background, it may be difficult to find an accurate match.

Beta optimizations

During the Beta period for NLP and semantic search, we will continue to iterate on tuning the ranking of results. For example, the most recent results won't necessarily appear near the top of the list.


The purpose of the Beta period is to collect feedback to help improve this experience for our customers. If you encounter search results that seem unexpected, please provide feedback via the in-app Search feedback button and/or contact support.

FAQ

Q: Are there any limitations with NLP or semantic search?
A: The following limitations currently apply:

  • No lexical (exact match) search for the contents of documents

  • No lexical (exact match) search for Collections, Shares, or Workspaces

  • No semantic search for audio

  • No custom metadata

  • No face detection

  • Limited to 50 Standard and 50 media intelligence results

  • No saved searches or convert-to-Collection

  • No auto-tagging

Q: Are there any metadata fields not supported by NLP or Semantic search?

A:

  • Workspace

  • Seen By (people)

  • Seen by (count)

  • Date Time Deleted

  • Version Stack (is/is contained within)

  • Comments (mentions, hashtags, reactions, read receipts, timestamps)

Q: Can I search using the Frame.io panel in Premiere Pro?
A:
Yes. Premiere will search locally within the current project, while Frame.io searches account-wide and utilizes NLP.

Q: Does semantic search have access to the account’s media assets?

A: Yes. This is how the embeddings are created. We offer you the ability to opt out. Just contact support to let us know and your assets won’t be processed for semantic search.


Q: Does NLP have access to the account’s media assets?

A: No. NLP is only used to parse the text query entered by the user. It has no awareness of the content against which the final query will be run.

Q: Will customer assets be used to train/reinforce our AI model?

A: No. In line with Adobe’s industry-leading ethical posture, we do not use customers’ data to train or augment our internal model.

Q: The new feature will generate references to visual objects. What is that and where will the associated metadata be stored?

A: The references are sets of decimal numbers that only Adobe can decode. The references are stored in our secure internal database. No decoding takes place other than when the customer searches in their account.

Q: Does semantic search support finding specific people in pictures and videos using facial recognition?

A: No. This is functionality under consideration.
