
The New Production Timeline: What's Actually Changed in Moving From Traditional Production to AI

Six week timeline cut to three weeks. That's the headline. But the more interesting shift is structural - clients now see real visual direction within days, feedback comes early when it's still cheap to act on, and iteration happens before anyone's committed to a direction. Here's how the workflow has changed.

Author: Heath Waugh

Date Published: 28 February 2026

------------

TL;DR - Traditional production ran six weeks minimum: brief, pre-production, shoot, feedback, post, dispatch - each stage locked before the next began. Our AI-integrated workflow runs three weeks, and more importantly, clients see real visual direction within days. Feedback comes early when it's cheap to act on. By the time we're in final post, everyone's already agreed on where we're going.


Six weeks to three weeks. That's the headline.

But the more interesting story is about how the structure of creative production is shifting - not just getting faster, but working differently. The old timeline was built around a series of irreversible commitments. The new one is built around iteration and early visibility. We've had to adapt how we work, and it's taught us some things worth sharing.

The Old Model: Linear, Sequential, High-Stakes

Traditional photography production followed a well-worn path:

Brief → Pre-production → Casting / location scouting / wardrobe / crew booking → Shoot → Agency feedback → Post-production → Client feedback → More post-production → Dispatch

Each stage had to be largely complete before the next could begin. You couldn't shoot until you'd locked locations. You couldn't get meaningful client feedback until you had actual images. The whole thing moved in one direction, with limited ability to course-correct once momentum built.

The timeline? Typically six weeks minimum for anything substantial. Often longer.

Here's the thing about that model: it front-loaded all the big decisions and back-loaded all the feedback. By the time the client saw real images, most of the budget was spent. If the direction was off, you were either living with it or paying for reshoots.

It worked. The industry has made good work within that structure for decades. But it was also rigid - dependent on getting everything right before anyone outside the production team saw what was actually being made.

Where the Friction Lives

The old model wasn't slow because people were inefficient. It was slow because the medium demanded it.

Location scouting takes time because physical locations exist in the real world and you have to actually visit them. Casting takes time because you need to coordinate humans. Feedback on these elements takes time. Decisions have to make their way up and back down a chain. Crew booking takes time because skilled people have schedules. The shoot itself is a fixed commitment - everyone shows up on the day, and whatever happens, happens.

Post-production was where flexibility finally entered the process. But by then, you were working with fixed source material. You could colour grade, retouch, composite - but you couldn't fundamentally reimagine the shot. The raw ingredients were already determined.

Feedback loops were slow because there was nothing meaningful to react to until late in the process. Mood boards and references can only communicate so much. Clients and agencies were essentially being asked to approve a vision, then wait weeks to see if the reality matched.

What's Changed

At Matter, our workflow now looks more like this:

Brief → First art → Agency/client feedback → Second art → Post-production → Dispatch

Three weeks, sometimes less. But the speed is almost a byproduct - the real shift is in when decisions get made and when people see work.

The key difference: clients and agencies see real visual direction within days, not weeks. Not mood boards. Not references pulled from other people's work. Actual art that represents what we're proposing to create.

We've moved to shared Figma boards where stakeholders can see work evolving in close to real time. Feedback happens early, when changing direction is cheap. By the time we're in final post-production, everyone has already agreed on where we're going - because they've been watching us get there.

Why Early Visibility Matters

In the old model, the first time a client sees anything resembling finished work is after the shoot. All the conceptual conversations, all the pre-production planning, all the reference decks - those are abstractions. The real moment of truth comes late, when flexibility is lowest.

Now, "first art" is something tangible. It's not final - it's not meant to be - but it's concrete enough to react to. Does this colour palette feel right? Is this environment working? Is the mood landing? These conversations happen in week one, not week five.

This changes the nature of feedback. Instead of responding to finished work (where criticism feels high-stakes because changes are expensive), people are responding to direction (where input feels collaborative because we're still building).

When feedback comes early, it's additive. When feedback comes late, it's corrective. That's a meaningful difference in how projects feel for everyone involved.

More Iterations, Lower Stakes

Here's something we didn't fully anticipate: compressing the timeline has actually led to better creative outcomes.

In the old model, you got one shot at most things. One location. One casting choice. One shoot day. That bred a certain discipline, but it also meant creative exploration was front-loaded into the conceptual phase, then locked down.

Now, we explore visually before committing. First art isn't precious - it's propositional. If the initial direction isn't working, we pivot. If a concept that seemed strong in the brief doesn't translate visually, we find out immediately. If someone has a reaction we didn't anticipate, we can incorporate it while there's still room to manoeuvre.

The cumulative result is work that's been tested and refined through iteration, not work that's been guessed at and hoped for.

What gets lost

It would be dishonest not to say this plainly: some things that happen on a shoot can’t be replicated.

The unexpected moment. The way light falls differently than planned and turns out to be better. The talent who does something unscripted that becomes the hero shot. The texture of a real surface, the weight of a real object, the quality of presence that comes from something physically existing in space. These aren’t small things. They’re part of why the best commercial photography has always felt like it has a pulse.

AI-integrated production trades some of that spontaneity for control. You get precision and repeatability - the product looks exactly right, the light is exactly where it needs to be, the scene is exactly the scene you intended. What you don’t always get is the happy accident.

For some briefs, that trade-off is straightforward. Product imagery for e-commerce, campaign assets that need to be consistent across markets, content that needs to scale - these are cases where control is exactly what’s needed and spontaneity was never really the point.

For other briefs - brand campaigns that need to feel alive, editorial work that depends on a sense of presence, anything where the humanity of the image is the whole point - the answer is more nuanced. The tools are evolving quickly, and the gap is narrowing. But it’s worth being honest that the gap exists.

The studios doing interesting work in this space aren’t pretending otherwise. They’re figuring out where AI-integrated production genuinely serves the brief, and where traditional methods still have the edge - and building workflows that can move between both.

What We've Had to Learn

This model has required us to work differently. A few things we've figured out along the way:

Showing unfinished work takes adjustment. First art is propositional, not polished. Early in this transition, there was a temptation to over-refine before sharing - to make everything "presentable." That defeats the purpose. The whole point is getting reactions while there's still room to act on them.

Faster feedback means being ready to respond. Shorter timelines and continuous collaboration mean quicker turnarounds. When input comes in, we need to move on it. The rhythm is different.

The finishing still takes what it takes. AI compresses the early stages - ideation, exploration, direction-setting. But final post-production, the detail work that makes something genuinely broadcast-ready, still requires the time it always did. The timeline compression isn't evenly distributed.

Communication about process matters more. When people are seeing work at earlier stages, they need context for what they're looking at. First art isn't final art. A Figma board is a working space. Being clear about where we are in the process prevents misunderstandings.

The Bigger Picture

This isn't unique to us. Across the industry, studios working with AI are discovering similar shifts - timelines compressing, feedback loops tightening, the rhythm of collaboration changing.

The old production model was shaped by the constraints of its tools. Physical shoots, human crews, real locations - all of that created a certain structure. That structure wasn't arbitrary; it was necessary.

Those constraints have shifted. Not disappeared - we still shoot, still work with crews, still do physical production when it's the right choice. But the default has changed. Exploration that used to require expensive commitment now happens digitally. Feedback that used to come late now comes early.

We're still figuring out what this means. The tools keep evolving, and so do the workflows. But the direction is clear: faster iteration, earlier visibility, more collaborative process.

Six weeks to three weeks is the easy metric. The harder-to-measure part is how much better the work can be when everyone's seeing it sooner.

Heath Waugh

FAQs

Q: How long does a production take with Matter compared to traditional photography?
A: Traditional production ran six weeks minimum. Matter's AI-integrated workflow runs three weeks, sometimes less. But the more meaningful change is structural - clients see real visual direction within days, not weeks.

Q: What does "first art" mean in Matter's workflow?
A: First art is the initial visual direction - not finished work, but concrete enough to react to. It replaces the mood board and reference deck stage with something tangible. Clients can respond to actual proposed imagery in week one, when changing direction is cheap, rather than week five, when it isn't.

Q: How does Matter handle client feedback during production?
A: Through shared Figma boards where stakeholders can see work evolving in close to real time. Feedback happens early and continuously, so by the time the project reaches final post-production, everyone has already agreed on direction - because they've been watching it develop.

Q: Does AI compress every part of the production timeline?
A: No. AI compresses the early stages - ideation, exploration, direction-setting. Final post-production, the detail work that makes something genuinely broadcast-ready, still takes what it takes. The timeline compression isn't evenly distributed, and it's important to understand that going in.
