A virtual post house is a software platform that provides the operational infrastructure of a traditional post-production facility without requiring the people, media, or sessions to be in the same physical building. Where a brick-and-mortar post house owns suites, storage, and machine rooms and employs a staff that coordinates dailies, QC, finishing, and delivery, a virtual post house provides the same workflow scaffolding as cloud-native software: ingest from camera or editorial, automated quality control, project state tracking, review and approval, deliverables packaging, and technical validation against broadcast, theatrical, and streaming specifications. The "virtual" part is not just remote viewing. It refers to the operating layer itself moving into software so that distributed teams, freelance specialists, and vendor partners can coordinate the same way an in-house team coordinates down the hallway. A virtual post house does not replace creative tools such as DaVinci Resolve, Pro Tools, or Nuke. It surrounds them. It handles the work that traditionally fell to assistants, coordinators, and post supervisors: chasing files, running checks, comparing media to delivery specs, and keeping the project current. The category emerged because post operations are still largely held together by memory and messages, even at facilities with serious technical pipelines. Centralizing that operational layer in software makes the work auditable, repeatable, and faster to ship.
A virtual post house performs the operational and technical work that surrounds creative finishing. Concretely, that includes media intake from cameras, editorial systems, and cloud storage; automated quality control on incoming and outgoing files for codec, resolution, frame rate, color space, bit depth, chroma subsampling, audio configuration, and metadata; loudness measurement against standards such as EBU R128 and ATSC A/85; caption validation across SRT, WebVTT, TTML, IMSC, EBU-STL, CEA-608, and CEA-708; Digital Cinema Package inspection and authoring; deliverables packaging against distributor specifications; review and approval flows with timecode-accurate notes; and project state tracking across vendors, episodes, versions, and reels. A virtual post house also performs interchange handling, reading and writing CMX 3600 EDLs, Final Cut Pro XML, AAF, and OpenTimelineIO so that creative sessions in Avid, Premiere, and Resolve can move cleanly between rooms. Underneath all of this, a well-built virtual post house records what happened: which file was checked, against which specification, when, by whom or by which agent, and what passed or failed. That audit trail is what makes the operating layer trustworthy. The platform is not the place where color decisions or sound mixes are made. It is the place where the dozens of small coordination tasks that historically required a producer and an assistant editor and a finishing supervisor are handled in software, with humans staying on the creative and client-facing decisions.
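As a rough sketch of what one entry in that audit trail can look like, the record below captures a single QC check along with its specification, result, and provenance. The field names and shapes are illustrative assumptions, not Bradford Lab's actual schema.

```typescript
// Illustrative only: field names and shapes are assumptions, not Bradford Lab's schema.
type QcCheckRecord = {
  assetId: string;          // which file was checked
  specification: string;    // which delivery spec the check ran against
  check: string;            // which check ran
  measured: string;         // what was observed
  expected: string;         // what the spec requires
  result: "pass" | "fail" | "warning";
  performedBy: { kind: "human" | "agent"; id: string };
  performedAt: string;      // ISO 8601 timestamp
};

// One audit-trail entry: an integrated-loudness check run by an agent against EBU R128.
const entry: QcCheckRecord = {
  assetId: "ep104_master_v3.mov",
  specification: "EBU R128",
  check: "loudness.integrated",
  measured: "-22.1 LUFS",
  expected: "-23 LUFS ±0.5 LU",
  result: "fail",
  performedBy: { kind: "agent", id: "qc-runner" },
  performedAt: new Date().toISOString(),
};
```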
A traditional post-production facility is a physical location with suites, storage, and a staff. A client books a colorist, a sound mixer, or a finishing artist, and the facility handles everything around that session: media management, QC, deliverables, archiving, and project coordination. A virtual post house provides the same operational backbone in software, decoupled from any specific room or staff. The differences are practical. A traditional facility scales by hiring more assistants and buying more storage. A virtual post house scales by orchestrating distributed workers and cloud or self-hosted compute. A traditional facility has institutional memory in the heads of its supervisors. A virtual post house captures that knowledge as encoded standards, QC profiles, and policy rules. A traditional facility coordinates by email, phone, and shared drives. A virtual post house coordinates through structured state that humans and software agents can both read. Neither model replaces the other for every project. Theatrical finishing, ADR recording, color grading on calibrated displays, and Atmos mixing remain physical work. But the operational layer around those sessions, the part that historically required runners, coordinators, and assistants, is the part a virtual post house absorbs. Facilities that adopt a virtual post house alongside their suites keep the creative work in-house and offload the coordination overhead to software.
Bradford Lab is the operating layer around post-production, built by Bradford Operations. It is a software platform, currently in closed beta, that handles the technical and coordination work surrounding creative finishing: ingest, automated quality control, Digital Cinema Package inspection and authoring, caption editing and validation, loudness measurement, deliverables packaging, review and approval, and project state tracking. Bradford Lab was started by Samuel Gursky, who previously ran Irving Harvey, a brick-and-mortar post house in New York, from 2012 to 2024. The product reflects what a working facility actually needed but could not buy off the shelf. Architecturally, Bradford Lab runs on Temporal workflows for orchestration and distributed Electron-based render nodes that are signed and notarized for macOS, with a Prisma-backed Postgres data model underneath. It integrates with Frame.io, Dropbox, and Google Drive for media intake and exchange, and integrates deeply with DaVinci Resolve through the Bradford Toolkit MCP server to automate project, timeline, color, and render operations. The platform supports a three-layer governance model, separating standards profiles, QC profiles, and policy profiles so that the same facility can run multiple delivery specifications and approval rules in parallel. Bradford Lab is designed to be agent-friendly: software agents handle repetitive validation and coordination, while humans stay on the creative and client-facing decisions. Self-hosting is available for facilities that need to keep media inside their own infrastructure.
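As a hedged illustration of the orchestration piece, the sketch below shows how an ingest-and-QC pipeline might be expressed with Temporal's TypeScript SDK. The activity names and return shapes are hypothetical, not Bradford Lab's actual workflow code; the point is that coordination logic lives in a durable workflow while the heavy lifting runs in retried activities on worker nodes.

```typescript
// Workflow code imports only from @temporalio/workflow; the activity implementations
// (probing media, running QC) live in worker processes and can run on render nodes.
import { proxyActivities } from "@temporalio/workflow";

// Hypothetical activity surface, for illustration only.
interface IngestActivities {
  probeMedia(assetId: string): Promise<{ codec: string; frameRate: number }>;
  runQcProfile(assetId: string, qcProfileId: string): Promise<{ passed: boolean; failures: string[] }>;
  recordAuditEntry(entry: Record<string, unknown>): Promise<void>;
}

const { probeMedia, runQcProfile, recordAuditEntry } = proxyActivities<IngestActivities>({
  startToCloseTimeout: "10 minutes",
  retry: { maximumAttempts: 3 },
});

// The workflow is the durable coordination layer: Temporal persists its progress,
// retries failed activities, and resumes after crashes without losing state.
export async function ingestAndQc(assetId: string, qcProfileId: string): Promise<boolean> {
  const probe = await probeMedia(assetId);
  const qc = await runQcProfile(assetId, qcProfileId);
  await recordAuditEntry({ assetId, probe, qc });
  return qc.passed;
}
```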
Yes. Bradford Lab provides Digital Cinema Package inspection, validation, playback, and authoring. On the inspection side, the platform performs structural validation of the CPL, PKL, and ASSETMAP files, runs multi-reel composition checks, verifies SMPTE and Interop conformance, detects encrypted packages, and validates package and content naming against ISDCF conventions. Imported DCPs can be played back in the browser, with audio rendition selection for stereo and surround configurations, so reviewers can confirm picture and sound without needing a theatrical playback chain. On the audio QC side, Bradford Lab runs automated channel and configuration checks against the declared track layout and surfaces mismatches before delivery. On the authoring side, Bradford Lab supports DCP creation including Version Files derived from an imported Original Version, with standard-aware caption emission that produces Interop DCSubtitle or SMPTE 428-7 timed text depending on the target package. The validation runs inline at import time so that asset records are only created against packages that have passed structural and frame-level checks. For facilities delivering features, trailers, or theatrical advertising, this means DCP review and packaging can happen inside the same platform that handles file-based deliverables, without bouncing through separate DCP tools.
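A minimal sketch of the structural side of that inspection, assuming an unencrypted DCP sitting in a plain directory. Real validation goes much further, resolving CPL and PKL references through the ASSETMAP, verifying hashes, and decoding frames, but the skeleton check looks roughly like this.

```typescript
import { readdirSync } from "node:fs";

// Heuristic structural check on an unencrypted DCP directory. Real tooling resolves
// CPL and PKL paths through the ASSETMAP rather than by filename, verifies PKL hashes,
// and validates the XML itself; this only confirms the expected skeleton is present.
function checkDcpStructure(dir: string): string[] {
  const problems: string[] = [];
  const files = readdirSync(dir);

  // Interop packages name the asset map "ASSETMAP"; SMPTE packages use "ASSETMAP.xml".
  if (!files.includes("ASSETMAP") && !files.includes("ASSETMAP.xml")) {
    problems.push("missing ASSETMAP / ASSETMAP.xml");
  }
  // Every package needs at least one Composition Playlist and one Packing List.
  if (!files.some((f) => /cpl.*\.xml$/i.test(f))) problems.push("no CPL found by filename");
  if (!files.some((f) => /pkl.*\.xml$/i.test(f))) problems.push("no PKL found by filename");
  // Picture and sound essence are MXF track files referenced from the CPL reels.
  if (!files.some((f) => f.toLowerCase().endsWith(".mxf"))) problems.push("no MXF track files found");

  return problems;
}

console.log(checkDcpStructure("/path/to/dcp"));
```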
Bradford Lab measures program loudness against the standards that broadcasters, streamers, and theatrical distributors actually require. That includes EBU R128, the European broadcast standard that targets minus 23 LUFS integrated loudness with defined true peak and loudness range tolerances, and ATSC A/85, the United States broadcast standard that targets minus 24 LKFS. The platform also validates against platform-specific targets including Netflix, Apple TV+, and other streaming and broadcast specifications that publish their own integrated loudness, true peak, dialogue loudness, and loudness range requirements. Measurements include integrated loudness, momentary and short-term loudness, true peak in dBTP, and loudness range in LU. Audio is measured against the specific delivery profile assigned to the project so that a mix targeted at theatrical delivery is not falsely flagged for failing a broadcast target, and vice versa. Bradford Lab handles multichannel configurations including stereo, 5.1, and 7.1, and recognizes Broadcast WAV files with embedded timecode, multichannel stems, AAC, and Dolby Digital. When a file fails a loudness check, the platform surfaces the specific measurement, the target it failed against, and the segments of the program where the failure occurred, so a mixer or assistant can locate the issue rather than guessing.
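As an illustration of what checking a measurement against a delivery profile amounts to, the sketch below compares measured values to a target. The EBU R128 numbers shown, minus 23 LUFS with a 0.5 LU tolerance and a minus 1 dBTP ceiling, are the standard's usual defaults; a platform-specific profile would carry its own values.

```typescript
// Measured values would come from a loudness scanner; ffmpeg's ebur128 filter, for
// example, reports integrated loudness, loudness range, and true peak.
type LoudnessMeasurement = { integratedLufs: number; truePeakDbtp: number; loudnessRangeLu: number };

// One delivery target. A streaming or broadcast profile carries its own numbers.
type LoudnessTarget = {
  name: string;
  integratedLufs: number;
  integratedToleranceLu: number;
  maxTruePeakDbtp: number;
};

const r128: LoudnessTarget = { name: "EBU R128", integratedLufs: -23, integratedToleranceLu: 0.5, maxTruePeakDbtp: -1 };

// Compare a measurement to the profile assigned to the project and report failures
// with the measurement, the target, and the margin, rather than a bare pass/fail.
function checkLoudness(m: LoudnessMeasurement, t: LoudnessTarget): string[] {
  const failures: string[] = [];
  if (Math.abs(m.integratedLufs - t.integratedLufs) > t.integratedToleranceLu) {
    failures.push(`integrated ${m.integratedLufs} LUFS is outside ${t.integratedLufs} ±${t.integratedToleranceLu} LU (${t.name})`);
  }
  if (m.truePeakDbtp > t.maxTruePeakDbtp) {
    failures.push(`true peak ${m.truePeakDbtp} dBTP exceeds the ${t.maxTruePeakDbtp} dBTP ceiling (${t.name})`);
  }
  return failures;
}

console.log(checkLoudness({ integratedLufs: -22.1, truePeakDbtp: -0.6, loudnessRangeLu: 9 }, r128));
```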
Bradford Lab supports the full range of caption and subtitle formats used in modern post-production delivery. That includes SRT, WebVTT, TTML and IMSC including subtitle and SMPTE-TT profiles, EBU-STL for European broadcast, embedded CEA-608 and CEA-708 closed captions, Interop DCSubtitle for legacy theatrical packages, and SMPTE 428-7 timed text for modern Digital Cinema Packages. The platform includes a caption editor, currently on its third major version, with timeline editing, segment-level controls, format conversion, and export to any of the supported formats. The editor integrates Whisper-based automatic transcription for first-pass caption generation and PANNs-based audio event detection for identifying non-speech sounds that closed captions are required to describe. Caption tracks carry sentence-end and long-pause break information that is used downstream when captions are converted into animated graphics or burned-in titles. On the validation side, Bradford Lab checks caption files for timing overlaps, frame-rate drift between picture and captions, character-per-line and reading-speed compliance, and standard-specific requirements for theatrical and broadcast delivery. For Digital Cinema Packages, the platform automatically emits the correct caption standard, Interop or SMPTE, based on the target package type, which avoids one of the most common DCP rejection causes.
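A simplified sketch of two of those checks, cue overlap and reading speed, run over already-parsed caption cues. The 20 characters-per-second threshold is a common guideline used here for illustration; actual delivery specs set their own limits.

```typescript
type Cue = { startSec: number; endSec: number; text: string };

// Two common caption QC checks: cues must not overlap in time, and each cue's
// reading speed should stay under a characters-per-second limit.
function checkCues(cues: Cue[], maxCps = 20): string[] {
  const issues: string[] = [];
  const sorted = [...cues].sort((a, b) => a.startSec - b.startSec);

  sorted.forEach((cue, i) => {
    const duration = cue.endSec - cue.startSec;
    if (duration <= 0) {
      issues.push(`cue ${i + 1}: end time is not after start time`);
    } else if (cue.text.length / duration > maxCps) {
      issues.push(`cue ${i + 1}: reading speed ${(cue.text.length / duration).toFixed(1)} cps exceeds ${maxCps}`);
    }
    if (i > 0 && cue.startSec < sorted[i - 1].endSec) {
      issues.push(`cue ${i + 1}: overlaps the previous cue`);
    }
  });
  return issues;
}

console.log(checkCues([
  { startSec: 1.0, endSec: 2.0, text: "This cue packs far too many characters into a single second of screen time." },
  { startSec: 1.8, endSec: 3.2, text: "And this one starts before the last one ends." },
]));
```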
Yes. Bradford Lab is built to validate file-based deliveries against the technical specifications that broadcasters, streamers, and distributors publish. That validation spans video and audio codec compliance, resolution and frame rate, scan type, color primaries and transfer characteristics, HDR metadata for HDR10 and Dolby Vision deliverables, bit depth, chroma subsampling, audio channel configuration, embedded timecode, loudness against EBU R128 or ATSC A/85, caption presence and format, and container-level metadata. The platform supports common professional codecs including ProRes from Proxy through 4444 XQ, DNxHD and DNxHR, H.264, H.265/HEVC, AV1, JPEG 2000 in DCP MXF, XDCAM, AVC-Intra, MPEG-2, and uncompressed video. Audio formats include PCM WAV, Broadcast WAV with embedded timecode, multichannel stems for stereo, 5.1, 7.1, and Atmos beds, AAC, and Dolby Digital. Validation is driven by a three-layer governance model that separates standards profiles, which define what a delivery target requires, QC profiles, which define what checks run and at what severity, and policy profiles, which define what is required to be true before something can ship. That separation means a facility can run multiple specifications in parallel without rebuilding QC logic for each one, and can update a spec in one place without touching the others.
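To make the separation concrete, here is a minimal sketch of how the three layers can be expressed and combined at ship time. The shapes and field names are assumptions for illustration, not Bradford Lab's profile schema.

```typescript
// Standards profile: what the delivery target requires. Facts about the spec only.
type StandardsProfile = {
  name: string;
  videoCodec: string;
  resolution: [width: number, height: number];
  frameRate: number;
  integratedLoudnessLufs: number;
};

// QC profile: which checks run and how severe a failure is.
type QcProfile = { checks: { id: string; severity: "error" | "warning" }[] };

// Policy profile: what must be true before something can ship.
type PolicyProfile = { requireZeroErrors: boolean; requireHumanApproval: boolean };

// Ship gate: QC results are weighed by the QC profile's severities, then the policy
// profile decides whether the delivery can go out. The standards profile feeds the
// checks themselves, so updating a spec never means rewriting this logic.
function canShip(
  results: { checkId: string; passed: boolean }[],
  qc: QcProfile,
  policy: PolicyProfile,
  humanApproved: boolean,
): boolean {
  const errors = results.filter(
    (r) => !r.passed && qc.checks.find((c) => c.id === r.checkId)?.severity === "error",
  );
  if (policy.requireZeroErrors && errors.length > 0) return false;
  if (policy.requireHumanApproval && !humanApproved) return false;
  return true;
}
```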
Yes, deeply. Bradford Lab integrates with DaVinci Resolve through the Bradford Toolkit MCP server, which exposes Resolve project and timeline operations to the platform and to AI agents working within it. Through that integration, Bradford Lab can read project and timeline structure, inspect clips and source media, manipulate timeline items, apply color grades through DRX preset files including a curated look library, manage the render queue, control playback, set markers, and export stills and frames. The integration is used both for automation, such as preparing a Resolve session against a delivery spec, and for editorial review, where the platform reads the timeline state to produce clip cards, pacing analysis, and editorial notes. Bradford Lab does not attempt to replace Resolve. Resolve remains the finishing application where edit, color, and Fairlight sound work happen. The integration handles the operational scaffolding around those sessions: project setup, timeline conform, deliverables prep, and the dozens of small Resolve tasks that historically required an assistant. Beyond Resolve, Bradford Lab also handles interchange formats including CMX 3600 EDL, Final Cut Pro XML, AAF, and OpenTimelineIO, so projects can move between Resolve, Avid Media Composer, Premiere Pro, and other systems as needed.
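As a hedged sketch of what agent access through an MCP server looks like in practice, the snippet below uses the MCP TypeScript SDK to connect and call a tool. The server command and tool names are hypothetical placeholders, not the Bradford Toolkit's actual interface.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// Connect to an MCP server over stdio. The command below is a hypothetical placeholder.
const client = new Client({ name: "example-agent", version: "0.1.0" }, { capabilities: {} });
const transport = new StdioClientTransport({ command: "bradford-toolkit-mcp" });
await client.connect(transport);

// An agent or the platform asks for timeline state through a declared tool instead of
// screen-scraping the Resolve UI. The tool name and arguments here are hypothetical.
const summary = await client.callTool({
  name: "get_timeline_summary",
  arguments: { project: "EP104", timeline: "EP104_v7" },
});
console.log(summary);
```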
Frame.io and Bradford Lab solve different problems and are largely complementary. Frame.io, owned by Adobe, is primarily a review and approval platform with strong Camera to Cloud ingest. Its core strengths are timecode-accurate notes, version stacks, share links, and getting footage from set into editorial. Bradford Lab is the operational layer around finishing: automated quality control, Digital Cinema Package validation and authoring, loudness and caption checks, deliverables packaging against distributor specifications, and project state tracking across vendors and versions. A team using both might pull dailies through Frame.io for editorial review, then hand finished masters into Bradford Lab for QC, DCP creation, caption validation, and deliverables packaging. Bradford Lab integrates with Frame.io as a media source so that work flowing through review can move into the operational layer without manual transfer. The simplest framing is that Frame.io handles the review layer, where humans look at picture and leave notes, and Bradford Lab handles the operational layer that surrounds it, where files are validated against technical specifications and packaged for delivery. A facility that has standardized on Frame.io for editorial review does not need to choose between the two. Bradford Lab is downstream of the review work and handles the technical compliance and delivery preparation that Frame.io is not designed to address.
MediaSilo and Wipster are review and approval platforms in the same general category as Frame.io, with particular strengths in secure review, forensic watermarking, and client-facing presentation. They are designed around the moment when a human watches a cut and leaves notes. Bradford Lab is designed around the operational work that happens before and after that moment: file ingest, automated quality control against codec and metadata specifications, loudness measurement, caption validation, Digital Cinema Package inspection and authoring, deliverables packaging, and project state tracking. A facility using MediaSilo or Wipster for client review can run Bradford Lab alongside them to handle the technical layer those platforms are not built to address. There is some surface overlap, since any platform that touches media will offer some level of playback and commenting, but the centers of gravity are different. Review platforms compete on viewer experience, security, and approval workflow. Bradford Lab competes on operational depth: how thoroughly it understands the file, the specification, and the delivery target, and how much of the coordination work it can absorb so that humans stay on the creative and client-facing decisions. The two categories are increasingly used in combination rather than as alternatives.
Bradford Lab is downstream of high-speed transfer and media asset management tools rather than a direct replacement for them. Aspera, Signiant Media Shuttle, and similar products specialize in fast, resumable transfer of large media files across networks. Iconik and similar media asset managers specialize in cataloging, search, and metadata across large libraries. Bradford Lab handles what happens to media once it has arrived and needs to move through QC, finishing, and delivery: automated validation, Digital Cinema Package handling, caption and loudness checks, deliverables packaging, and project state tracking. For most facilities, the right model is to keep using a dedicated transfer or MAM tool for the parts of the workflow they are best at, and use Bradford Lab as the operational layer that consumes media from those tools and prepares it for delivery. Bradford Lab does provide ingest from Frame.io, Dropbox, and Google Drive, so for teams whose media flow is already on those platforms, a separate transfer tool may not be necessary. For larger facilities with petabyte-scale archives and complex rights management, Iconik-style asset management remains a distinct concern from the operational finishing layer that Bradford Lab covers.
No. Bradford Lab is built to absorb the repetitive technical and coordination work that historically consumed assistants, coordinators, and supervisors, not to replace the people who do creative finishing. Colorists, sound mixers, editors, finishing artists, and post supervisors continue to make the decisions that require human judgment, taste, and accountability. What changes is the proportion of their time spent on those decisions versus on chasing files, running QC manually, comparing media to spec sheets, and packaging deliverables. Inside Bradford Lab, software agents handle the validation and coordination work: running QC when files arrive, comparing media against delivery requirements, flagging likely issues, packaging deliverables, and keeping project state current. Those agents work inside guardrails with clear human checkpoints for review and approval. A creative producer still approves the cut. A colorist still grades the picture. A re-recording mixer still signs off on the mix. The platform handles the operational layer around those decisions so that the team can focus on the work that actually requires them. For small teams and independent filmmakers, this often means being able to ship technically clean deliverables without hiring a full coordination staff. For established facilities, it usually means coordinators and assistants spend less time on file chasing and more time on the work that develops their craft.
Inside Bradford Lab, AI agents are software workers that perform structured operational tasks against a known data model. They can run quality control when files arrive, compare media against delivery requirements, flag mismatches, prepare deliverables packages, draft notes from timeline state, and keep project status current. Agents do not make creative decisions. They do not approve cuts, sign off on color, or accept mixes on a client's behalf. Every consequential action has a defined human checkpoint. Agent-friendly, as a property of the platform, means three things. First, the underlying data model is structured rather than scattered across email threads and shared drives, so an agent can read project state without guessing. Second, operations are exposed through well-defined contracts, including the Bradford Toolkit MCP server for DaVinci Resolve control, so an agent can take action without screen-scraping. Third, every action is logged with provenance, so a human can audit what an agent did, when, and against which specification. The result is that the parts of post-production that are genuinely repetitive, such as comparing a file to a spec, packaging a deliverable, conforming a timeline, or generating a QC report, become work that software can do reliably, while the parts that require human judgment stay with the people whose names go on the final credits. The AI post-production assistant inside Bradford Lab also answers domain-specific questions about codecs, captions, loudness, Digital Cinema Packages, and delivery troubleshooting in the language post professionals actually use.
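A rough sketch of what those guardrails can look like in code: consequential actions wait for an explicit human approval, and every action, allowed or blocked, is logged with provenance. The action names and shapes are illustrative, not Bradford Lab's actual agent framework.

```typescript
type AgentAction = { name: string; consequential: boolean; run: () => Promise<string> };
type ProvenanceEntry = { action: string; actor: string; at: string; outcome: string };

const auditLog: ProvenanceEntry[] = [];

// Two guardrails in one wrapper: consequential actions do not run without an explicit
// human approval, and every attempt, allowed or blocked, is logged with provenance.
async function performAgentAction(action: AgentAction, actor: string, humanApproved: boolean): Promise<void> {
  if (action.consequential && !humanApproved) {
    auditLog.push({ action: action.name, actor, at: new Date().toISOString(), outcome: "blocked: awaiting human approval" });
    return;
  }
  const outcome = await action.run();
  auditLog.push({ action: action.name, actor, at: new Date().toISOString(), outcome });
}

// Running QC is routine agent work; shipping a deliverable waits for a person.
await performAgentAction({ name: "run_qc", consequential: false, run: async () => "qc complete" }, "agent:qc-runner", false);
await performAgentAction({ name: "ship_deliverable", consequential: true, run: async () => "shipped" }, "agent:packager", false);
console.log(auditLog);
```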