Everything Bradford Lab helps you do.

The platform is focused on the operational parts of post-production that create real downstream risk when handled loosely — especially across teams, vendors, and delivery partners.

QC

Automated Technical Checks

Run codec, loudness, resolution, metadata, and standards checks before a file gets shipped. Agents validate against delivery specs automatically.

DCP

DCP Review

Inspect structure, validate reels, review naming, and support playback workflows for cinema delivery. Full SMPTE and Interop compliance checking.

DEL

Deliverables Control

Track versions, package outputs, and keep every delivery tied to the right spec and status. Version history and delivery state in one place.

REV

Review & Approval

Keep notes, approvals, and delivery readiness in one workflow instead of scattered threads. Frame-accurate timecoded commenting.

AGT

Operational Automation

Use software agents for repetitive validation and coordination work while people stay in control. Agents run checks, surface issues, and keep state current.

AI

Post-Specific Assistant

Ask practical questions about specs, formats, captions, loudness, DCPs, and delivery troubleshooting. Built with deep post-production domain knowledge.

INT

Professional Integrations

Connect the workflow to storage, review tools, and finishing systems your team already uses. Frame.io, Dropbox, Google Drive, and DaVinci Resolve integrations.

SEC

Production Security

Keep access scoped and controlled for sensitive work, pre-release assets, and client review. Organization-based access control with encrypted storage.

Automated technical QC for post-production media

Pre-delivery QC has never scaled by hiring more humans. A single feature delivery package can contain dozens of video masters, audio stems, caption files, and proxy renditions — each with its own codec profile, bit depth, chroma subsampling, color space metadata, resolution, frame rate, container conformance, and loudness target. Manual QC catches the obvious failures and misses the structural ones: a stereo downmix that drifts 0.4 LUFS over target, a ProRes file flagged as Rec. 709 with the wrong transfer characteristics, a caption file whose final cue overruns the program by two frames.

Bradford Lab runs automated technical QC across every file that enters the platform. Codec, bit depth, and chroma subsampling are probed and compared against the delivery spec on file. Loudness is measured to EBU R128 (integrated, short-term, and momentary), ATSC A/85 (CALM Act), and platform targets including Netflix (-27 LUFS), Apple TV+, and standard broadcast. Resolution, frame rate, color space metadata, and container conformance are validated against ProRes (Proxy through 4444 XQ), DNxHD, DNxHR, H.264, H.265/HEVC, AV1, JPEG2000, XDCAM, AVC-Intra, MPEG-2, and uncompressed profiles. A three-layer governance model — standards profiles, QC profiles, and policy profiles — keeps the checks themselves auditable: what was checked, against what spec, with what result. QC runs pre-delivery (before a file leaves the facility) and at delivery (before assets are accepted into the platform), so the same conformance logic gates both directions.
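
To make the shape of these checks concrete, here is a minimal sketch of a loudness gate of the kind an agent runs, using ffmpeg's ebur128 filter. The paths, target, and tolerance are illustrative, not Bradford Lab's implementation.

```python
# Illustrative sketch of an automated loudness gate: measure integrated
# loudness with ffmpeg's ebur128 filter and compare it to the delivery
# spec on file. Paths, target, and tolerance are placeholders.
import re
import subprocess

def integrated_loudness(path: str) -> float:
    """Return integrated loudness in LUFS, parsed from ffmpeg's summary."""
    result = subprocess.run(
        ["ffmpeg", "-nostats", "-i", path, "-af", "ebur128", "-f", "null", "-"],
        capture_output=True, text=True,
    )
    # The authoritative number is in the summary block at the end of stderr,
    # not the per-frame lines that precede it.
    summary = result.stderr.split("Summary:")[-1]
    match = re.search(r"I:\s*(-?\d+(?:\.\d+)?)\s*LUFS", summary)
    if match is None:
        raise ValueError(f"no integrated loudness found for {path}")
    return float(match.group(1))

def passes_spec(path: str, target_lufs: float, tolerance_lu: float = 0.5) -> bool:
    return abs(integrated_loudness(path) - target_lufs) <= tolerance_lu

# e.g. passes_spec("master_en_51.mov", target_lufs=-27.0)
```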

DCP validation, inspection, and authoring

A Digital Cinema Package is the theatrical delivery format: a directory of JPEG2000 MXF reels, WAV MXF audio reels, optional subtitle and closed caption tracks, and a cluster of XML files (CPL, PKL, ASSETMAP, VOLINDEX) that describe how the parts fit together. DCPs break in characteristic ways. Multi-reel compositions desynchronize when reel dimensions disagree. Encrypted packages fail to play because a KDM doesn't match the certificate of the target server. SMPTE 428-7 packages get authored with Interop DCSubtitle XML, or the reverse. ISDCF naming gets the audio descriptor or aspect ratio in the wrong slot. Subtitles defined by entry-point plus duration overrun a reel boundary and the projector throws an error five minutes before showtime.

Bradford Lab validates DCPs structurally on ingest. CPL, PKL, and ASSETMAP integrity is checked. Reel boundaries, encryption state, and KDM expectations are surfaced. ISDCF naming is parsed and the encoded fields (content type, aspect ratio, audio configuration, resolution, frame rate) are validated against the package contents. Inline validation runs through both clairmeta and easyDCP, so the same package gets checked against two independent reference implementations rather than one. In-browser playback supports inspection without spinning up a DCP-aware desktop player. On the authoring side, Bradford Lab supports DCP creation including Version File (VF) authoring from an imported Original Version (OV) — caption emission is standard-aware, producing either Interop DCSubtitle or SMPTE 428-7 timed text to match the parent package.
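
As one concrete slice of that validation, here is a minimal sketch of ISDCF name parsing and a cross-check against probed package contents. The field list follows the ISDCF convention at a high level; real names carry optional fields the sketch ignores, and the example name and package values are invented.

```python
# Illustrative sketch: split an ISDCF content title into positional fields
# and cross-check encoded values against the probed package. Real ISDCF
# names carry optional fields that this deliberately ignores.
ISDCF_FIELDS = [
    "title", "content_type", "aspect_ratio", "language",
    "territory_rating", "audio", "resolution", "studio",
    "date", "facility", "standard", "package_type",
]
AUDIO_CHANNELS = {"20": 2, "51": 6, "71": 8}

def parse_isdcf(name: str) -> dict:
    parts = name.split("_")
    if len(parts) != len(ISDCF_FIELDS):
        raise ValueError(f"expected {len(ISDCF_FIELDS)} fields, got {len(parts)}")
    return dict(zip(ISDCF_FIELDS, parts))

fields = parse_isdcf("MovieTitle_FTR-1_F_EN-XX_US-GB_51_2K_ST_20250101_FAC_SMPTE_OV")

# Probed from the actual MXF reels and CPL (values invented here):
package = {"audio_channels": 6, "resolution": "2K", "standard": "SMPTE"}
assert AUDIO_CHANNELS[fields["audio"]] == package["audio_channels"]
assert fields["resolution"] == package["resolution"]
assert fields["standard"] == package["standard"]
```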

Deliverables management and packaging

Deliverables management is the operational discipline of tracking every version of every asset against every distributor's spec, and being able to answer — at any moment — what has been delivered, what is approved, what is pending, and what is at risk. The failure modes are familiar: a vendor delivers a 5.1 stem when the spec called for 7.1, a caption file ships in WebVTT to a partner who only accepts TTML/IMSC, a final master is replaced by a revision but the older version stays attached to the delivery record. None of these are technical problems individually; together they are the bulk of the operational overhead in finishing.

Bradford Lab tracks specs as first-class objects. Every deliverable is bound to a spec — video codec and profile, audio configuration, caption format, container, naming convention, and any platform-specific requirements. Package versions are recorded with provenance: which source files went into which package, who approved the package, when it was sent, and to which partner. Caption format coverage spans SRT, WebVTT, TTML/IMSC (subtitle and SMPTE-TT profiles), EBU-STL, embedded CEA-608 and CEA-708, Interop DCSubtitle, and SMPTE 428-7 timed text. Audio coverage includes PCM WAV, Broadcast WAV with embedded timecode, multichannel stems (stereo, 5.1, 7.1, Atmos bed configurations), AAC, and Dolby Digital. Interchange formats — CMX 3600 EDL, FCPXML, AAF, and OpenTimelineIO (OTIO) — are supported for editorial handoff. Every delivery ties back to a status state, so "what shipped" and "what's pending" are never separate spreadsheets.
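
The shape of "specs as first-class objects" is roughly this. Field names are illustrative, not Bradford Lab's actual schema.

```python
# Illustrative data model: a deliverable is bound to a spec, and every
# package version carries its provenance. Field names are placeholders.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(frozen=True)
class DeliverySpec:
    video_codec: str        # e.g. "ProRes 422 HQ"
    audio_config: str       # e.g. "7.1"
    caption_format: str     # e.g. "TTML/IMSC"
    container: str          # e.g. "QuickTime"
    naming_convention: str

@dataclass
class PackageVersion:
    number: int
    source_files: list[str]            # provenance: what went in
    approved_by: str | None = None     # who approved it
    sent_at: datetime | None = None    # when it shipped
    partner: str | None = None         # to which partner

@dataclass
class Deliverable:
    title: str
    spec: DeliverySpec                 # bound to a spec, always
    versions: list[PackageVersion] = field(default_factory=list)

    @property
    def status(self) -> str:
        if not self.versions:
            return "pending"
        latest = self.versions[-1]
        if latest.sent_at is not None:
            return "delivered"
        return "approved" if latest.approved_by else "in review"
```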

Review and approval workflows

Notes get lost in email. Approvals happen verbally on calls and never get written down. A producer signs off on v7 in Slack while the colorist is already working on v9 because someone sent v8 to the director directly. Review and approval is not a creative problem — it is a state problem. The state of a cut, a grade, a sound mix, or a delivery master is the compound result of every note left on every version, every approval given by every decision-maker, and every outstanding fix still on the list. When that state is scattered, finishing slows down and mistakes ship.

Bradford Lab keeps review state inside the same system that holds the assets, the specs, and the delivery records. Notes are timecoded and frame-accurate. Approvals are recorded against specific versions, not against "the cut" in the abstract. The difference between Bradford Lab and a dedicated review tool like Frame.io is one of scope, not competition: Frame.io is the review-and-approval layer where directors, producers, and creative leads leave notes on cuts; Bradford Lab is the operational layer that tracks what those notes mean for delivery state, version control, and spec adherence. The two integrate. Frame.io review activity flows into Bradford Lab's version history; Bradford Lab can route assets to Frame.io for review and pull approval status back into the delivery record. The result is one chain of custody from creative note to final delivery.
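
Treated as a state problem, review reduces in sketch form to something like this: approvals attach to version numbers, and readiness is computed, never asserted. Names here are illustrative.

```python
# Illustrative sketch: approvals attach to specific versions, and delivery
# readiness is computed from state rather than asserted in a thread.
from dataclasses import dataclass, field

@dataclass
class Note:
    timecode: str          # frame-accurate, e.g. "01:02:14:07"
    author: str
    text: str
    resolved: bool = False

@dataclass
class Version:
    number: int
    notes: list[Note] = field(default_factory=list)
    approvals: set[str] = field(default_factory=set)   # who approved *this* version

def is_delivery_ready(version: Version, required_approvers: set[str]) -> bool:
    """Ready only when every required approver signed off on this exact
    version and no note on it is still open."""
    open_notes = any(not n.resolved for n in version.notes)
    return required_approvers <= version.approvals and not open_notes
```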

Operational automation with software agents

"Agent" is an overloaded word. In Bradford Lab it means something specific and bounded: a software process that runs a defined check, compares a result to a known specification, flags an exception, or advances a state — and stops there. Agents don't approve deliveries. Agents don't sign off on creative. Agents don't decide whether a borderline loudness measurement is acceptable for a particular distributor. Humans do that. Agents do the work that doesn't need a human in the loop: probing a file, running an EBU R128 measurement, comparing a codec profile to a spec, checking a CPL against a PKL, generating a proxy rendition, packaging a deliverable.

The architecture underneath this is Temporal — durable workflows that survive node restarts and network partitions and carry long-running jobs through hours of processing. Render and processing work is distributed across Electron-based render nodes that run on macOS (signed and notarized) and can be self-hosted on facility hardware. Human checkpoints are explicit: every approval gate, every spec exception, every decision that is not trivially mechanical is presented to a human operator with the relevant context. Agents work asynchronously and surface their results when they're ready, which matters for distributed teams operating across time zones — the overnight QC pass is done by morning, the proxy renditions are ready when the editor opens the project, and the delivery package is built and waiting for human review by the time the post supervisor sits down.
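
A minimal sketch of that pattern with the Temporal Python SDK: mechanical work runs as activities, and the human checkpoint is a durable wait on an approval signal. The activity body and names are illustrative.

```python
# Illustrative sketch: a durable workflow runs mechanical checks as
# activities, then parks at an explicit human checkpoint until an
# operator signals approval. Survives restarts and long runtimes.
from datetime import timedelta
from temporalio import activity, workflow

@activity.defn
async def measure_loudness(asset_path: str) -> float:
    # Stub for illustration: a real activity would probe the file and
    # run an EBU R128 measurement (see the QC section above).
    return -27.3

@workflow.defn
class DeliveryQCWorkflow:
    def __init__(self) -> None:
        self._approved = False

    @workflow.run
    async def run(self, asset_path: str) -> str:
        # Mechanical work, durable across node restarts.
        lufs = await workflow.execute_activity(
            measure_loudness, asset_path,
            start_to_close_timeout=timedelta(hours=2),
        )
        # Explicit human checkpoint: wait, durably, for sign-off.
        await workflow.wait_condition(lambda: self._approved)
        return f"approved at {lufs:.1f} LUFS"

    @workflow.signal
    def approve(self) -> None:
        self._approved = True
```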

Post-specific AI assistant

General-purpose large language models are good at general-purpose questions. They are less good at post-production questions, where the right answer often depends on the difference between Rec. 709 and Rec. 2020, between EBU R128 integrated loudness and short-term loudness, between SMPTE 428-7 and Interop, between a ProRes 4444 XQ container that carries alpha correctly and one that doesn't. The cost of a generic answer in post is a re-delivery, a rejected QC report, or a missed slot on a broadcast schedule.

Bradford Lab includes a domain-specific assistant built around post-production knowledge: codec profiles and their characteristics, container limitations, loudness specifications across distributors, caption format requirements, DCP authoring rules, ISDCF naming, color space and transfer characteristic combinations, and the integration details of the tools Bradford Lab connects to. The assistant is aware of project context — it can answer "what's blocking delivery on this title" by reading the project's actual state, not by guessing. It can troubleshoot a specific QC failure by looking at the actual measurement, not a hypothetical one. It knows the difference between a question that should be answered with information ("what's the Netflix loudness target?") and a question that should be answered with an action ("run a loudness check on the latest master"), and routes accordingly to the tools that perform the work.
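
The information-versus-action split is, at its simplest, a routing decision. This sketch uses a crude keyword heuristic purely for illustration; the assistant's actual routing is model-driven.

```python
# Illustrative routing sketch: decide whether a question wants an answer
# or an action. The keyword heuristic is a stand-in for the real router.
ACTION_VERBS = {"run", "check", "validate", "generate", "package"}

def route(question: str) -> str:
    first_word = question.strip().lower().split()[0]
    return "act" if first_word in ACTION_VERBS else "inform"

assert route("Run a loudness check on the latest master") == "act"
assert route("What's the Netflix loudness target?") == "inform"
```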

Professional integrations

A virtual post house is only useful if it connects to the tools post-production already runs on. Bradford Lab integrates with Frame.io for review and approval and for asset ingest — review activity flows into Bradford Lab's version history, and Bradford Lab can push assets back out to Frame.io for stakeholder review without breaking the chain of custody. Dropbox and Google Drive are supported as storage and intake sources, which matters for the practical reality that footage, deliverables, and reference materials don't always start their life inside a finishing facility.

The deepest integration is with DaVinci Resolve, via the Bradford Toolkit MCP (Model Context Protocol) server. This goes well beyond import/export. The toolkit automates project operations (creation, save, archive), timeline manipulation (importing media, building sequences from EDL or AAF, conforming against OpenTimelineIO), color grading via DRX presets (Resolve's native grade-export format, applied programmatically against shots that match defined criteria), and render queue management (adding jobs, monitoring progress, retrieving outputs). Color grading in particular follows a deliberate workflow — grab a still as an undo point, export a "before" reference, apply a DRX from the look library, export an "after" reference, and label the still — so automated grade application stays auditable and reversible. The result is that Resolve becomes a programmable component of the Bradford Lab pipeline rather than a standalone destination that requires a colorist to manually perform every operation.
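
Compressed into a sketch against Resolve's scripting API, that grade workflow looks roughly like this. The paths, the look file, and the shot selection are illustrative, and the playhead is assumed parked on the target shot.

```python
# Illustrative sketch of the auditable grade workflow via Resolve's
# scripting API. Paths, look file, and shot selection are placeholders.
import DaVinciResolveScript as dvr

resolve = dvr.scriptapp("Resolve")
project = resolve.GetProjectManager().GetCurrentProject()
timeline = project.GetCurrentTimeline()
album = project.GetGallery().GetCurrentStillAlbum()

clip = timeline.GetItemListInVideoTrack(1)[0]        # shot matching the criteria

undo = timeline.GrabStill()                          # 1. still as the undo point
album.ExportStills([undo], "/refs", "sh010_before", "png")     # 2. "before" ref

timeline.ApplyGradeFromDRX("/looks/show_look.drx", 0, [clip])  # 3. apply the look

after = timeline.GrabStill()                         # 4. still of the result
album.ExportStills([after], "/refs", "sh010_after", "png")     # 5. "after" ref
album.SetLabel(after, "show_look applied")           # 6. label for the audit trail
```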

Production security and access control

Pre-release content has a value that drops to roughly zero the moment it leaks. Studios, distributors, and rights-holders enforce this with contractual security requirements that govern who can access an asset, from what location, on what device, and with what audit trail. A finishing platform that doesn't take these requirements seriously is not usable for studio-level work, full stop.

Bradford Lab is built with organization-based access control as a foundational concept rather than a bolt-on. Every asset, project, and delivery record belongs to an organization; access is scoped by organization membership and role. Storage is encrypted. Share links to pre-release assets can be scoped, expired, and revoked. Every access event — who viewed an asset, who downloaded it, who changed a delivery state — is recorded in an audit trail that can be reviewed after the fact. The platform's distributed architecture (render nodes that can be self-hosted on facility hardware) is relevant here too: organizations that need media to stay on-premises for the most sensitive titles can run processing locally while still using the platform for orchestration and state tracking, rather than choosing between cloud convenience and on-prem control.
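
For a feel of what "scoped and expired" means mechanically, here is a minimal sketch of an HMAC-signed share token binding asset, organization, and expiry. It is illustrative only; note that revoking an unexpired link still requires server-side state.

```python
# Illustrative sketch: an expiring share token that binds asset and
# organization, verifiable without a database hit. Revoking an unexpired
# link still needs a server-side denylist; this covers scope and expiry.
import hashlib, hmac, time

SECRET = b"rotate-me"   # per-deployment signing key (placeholder)

def make_share_token(asset_id: str, org_id: str, ttl_seconds: int) -> str:
    expires = int(time.time()) + ttl_seconds
    payload = f"{asset_id}:{org_id}:{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_share_token(token: str) -> bool:
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                                 # tampered or foreign token
    expires = int(payload.rsplit(":", 1)[-1])
    return time.time() < expires                     # expired links are refused
```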

Built around real finishing edge cases.

Bradford Lab understands the kinds of details that actually break deliveries:

  • That a DCP with mismatched reel dimensions needs per-reel scale and pad, not a global resize
  • That BWF timecode tells you where a file starts, not where content aligns — you need cross-correlation to verify (see the sketch after this list)
  • That EBU R128 loudness is measured after the "Summary:" line, not from the per-frame output
  • That ProRes 4444 XQ in a QuickTime container handles alpha differently than ProRes 4444
  • That subtitle entryPoint plus duration can overrun a reel boundary in multi-reel DCPs
  • That ISDCF naming conventions encode content type, aspect ratio, audio format, and resolution in a specific order
  • That "Rec. 709" in metadata does not guarantee correct gamma — you need to verify the transfer characteristics
  • That an audio stem delivery with mono WAV files needs proper channel grouping, not 48 individual tracks
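
The BWF alignment item deserves a sketch. Embedded timecode says where a file claims to start; correlating the actual samples says where the content really lands. Arrays stand in for decoded audio here, and the offsets are invented.

```python
# Illustrative sketch of the cross-correlation check: a positive offset
# means the stem's content lands late relative to the reference, whatever
# its embedded BWF timecode claims.
import numpy as np

def alignment_offset(reference: np.ndarray, stem: np.ndarray) -> int:
    """Samples by which `stem` content lags `reference` (positive = late)."""
    corr = np.correlate(stem, reference, mode="full")
    return int(np.argmax(corr)) - (len(reference) - 1)

# A stem delayed by 480 samples (10 ms at 48 kHz) shows up immediately:
ref = np.random.default_rng(0).standard_normal(8_000)
stem = np.concatenate([np.zeros(480), ref])[:8_000]
assert alignment_offset(ref, stem) == 480
```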

Teams with real delivery pressure.

Post Supervisors

Reduce follow-up work and keep delivery state visible across projects.

Producers

Get clearer status on what is approved, what is pending, and what is at risk.

Editors & Colorists

Validate masters before they leave your hands and avoid preventable delivery mistakes.

Distributors

Get cleaner handoffs, clearer delivery readiness, and fewer technical surprises downstream.

Independent Filmmakers

Handle festival, streaming, and accessibility deliverables with more structure and less guesswork.

Remote Teams

Keep distributed collaborators working against the same requirements and review state.

Forward-Thinking Facilities

Add a better operational layer around your finishing pipeline without rebuilding everything.

Ready for a better system around finishing?

Request early access to Bradford Lab.