When Machines Enter the Control Room

By Tom Kehn, VP, Solutions Consulting | March 12, 2026

CHESA Fest 2026 brought together technology vendors, media organizations, and workflow architects to explore the architectural shifts reshaping modern content infrastructure. As part of the event, a series of vendor panels examined the deeper technical debates emerging across storage, asset management, and AI-driven workflows.

This discussion focused on one of those debates: as artificial intelligence becomes increasingly capable of observing, interpreting, and reacting to live signals, should AI remain an advisory tool within broadcast workflows, or begin operating inside the control loop, making real-time decisions that influence cameras, graphics, audio, and other elements of live production?

AI, Authority, and Real-Time Decision-Making in Live Production

For decades, live broadcast production has operated on a simple principle: humans make the decisions.

A director chooses the shot.

An operator triggers graphics.

An audio engineer adjusts the mix.

Infrastructure carries out human intent.

Every decision inside the signal chain has a person attached to it.

But that model is beginning to evolve.

Artificial intelligence can now detect speakers, identify key moments, trigger graphics automatically, translate audio in real time, and even adjust production elements dynamically based on what is happening inside the program feed.

The question is no longer whether AI can assist production workflows.

The question is whether it should be allowed to act inside the control loop.

At CHESA Fest 2026, Vendor Panel 4 examined what happens when AI moves from being a recommendation engine to becoming part of the live production decision stack.

Not whether AI belongs in broadcast workflows. But how much authority it should have when the program is already on air.

The Panel

The discussion was moderated by Jason Pepino, Director of Media Systems Design & Engineering at CHESA, and brought together representatives from several companies whose technologies operate at different layers of the live production chain.

Panelists included:

  • Chuck Davidson, Partner Account Manager at LiveU
  • Steve Cooperman, Regional Sales Manager at Vizrt
  • Kyle Phillips, VP of Global Sales Enablement at AI-Media
  • Dan Griffin, Pro AV Territory Manager at Netgear

Together, the group represented multiple points of the broadcast signal path, from edge contribution and transport, to networking infrastructure, to graphics automation and AI-driven captioning systems.

Rather than discussing AI as a general productivity tool, the panel focused on a much more specific architectural question: if AI systems can observe, interpret, and act in real time, should they be allowed to make production decisions while a broadcast is live?

Or should they remain advisory systems that support—but never replace—human control?

AI Is Already Inside the Production Stack

One of the first themes to emerge from the discussion was that AI is no longer theoretical in broadcast environments.

In many cases, it is already actively shaping production workflows.

Steve Cooperman of Vizrt pointed to sports production as an example where AI-driven technologies are already operating inside live broadcasts.

Sports analytics tools can track players on the field, generate real-time visual overlays, and automate graphical elements that previously required extensive manual effort.

For example, AI-assisted visual cutouts allow broadcasters to isolate athletes from the playing field and integrate those elements into augmented graphics environments in real time.

In situations like this, the speed and complexity of the task often exceed what a human operator could realistically execute manually.

In those cases, AI is not replacing the creative team—it is enabling effects that would otherwise be impossible.

But even in those scenarios, Cooperman emphasized that human oversight remains essential.

The system may automate the visual effect, but the production team still needs the ability to override or disable it if something behaves unexpectedly.

Where Automation Makes Sense

Across the panel, most participants agreed that the question is not whether AI should exist inside production systems, but where its authority should begin and end.

Dan Griffin of Netgear described this balance through the lens of live audio production.

In multi-speaker environments—panel discussions, talk shows, or live events—automatically adjusting microphone levels can actually be easier for machines than for humans.

An AI-driven system can react instantly to changes in speech patterns, detecting who is speaking and adjusting levels accordingly.

“It’s a hard job to sit and try to push mics up and down for a bunch of talking heads,” Griffin noted.

In scenarios like that, automation can improve both efficiency and consistency.

But the stakes of the broadcast matter.

If the event is high-profile or mission-critical, such as an emergency broadcast or a major live sporting event, human oversight becomes much more important.

Even if AI handles the majority of the operational workload, a human operator still needs to monitor the system and intervene if necessary.

The challenge is not eliminating humans from the process.

It is deciding where human intervention remains essential.

Designing Boundaries for AI

Kyle Phillips of AI-Media introduced a concept that captured much of the panel’s thinking: bounded autonomy.

Rather than giving AI unrestricted control, systems can be designed with defined operational limits.

AI might be allowed to adjust audio levels within a narrow range, automatically place captions on screen, or reposition elements dynamically to avoid overlapping with graphics.

But those actions occur inside parameters defined by human designers.

“You design what it’s able to do,” Phillips explained. “And what AI does really well is repetitive tasks.”

In other words, the role of AI is not necessarily to replace creative judgment.

It is to accelerate the mechanical tasks that surround it.

When those boundaries are designed carefully, AI can dramatically increase speed and efficiency without introducing unacceptable risk.
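As a rough illustration of the bounded-autonomy idea, and not a description of any panelist's product, a control layer might accept AI-proposed audio adjustments only within a human-designed envelope, limit how far any single move can go, and let the operator freeze automation at any time. The class and parameter names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AutonomyBounds:
    """Human-defined operational limits for the automation."""
    min_gain_db: float = -6.0   # lowest level the AI may set
    max_gain_db: float = 3.0    # highest level the AI may set
    max_step_db: float = 1.5    # largest single adjustment per update

class BoundedFader:
    """Applies AI-proposed gain changes only within human-set bounds."""

    def __init__(self, bounds: AutonomyBounds, initial_gain_db: float = 0.0):
        self.bounds = bounds
        self.gain_db = initial_gain_db
        self.operator_override = False  # operator can take control at any time

    def propose(self, ai_gain_db: float) -> float:
        """Accept an AI-proposed gain, clamped to the designed envelope."""
        if self.operator_override:
            return self.gain_db  # human has taken over; ignore the proposal
        # Limit the size of any single move...
        step = max(-self.bounds.max_step_db,
                   min(self.bounds.max_step_db, ai_gain_db - self.gain_db))
        # ...and keep the result inside the absolute range.
        self.gain_db = max(self.bounds.min_gain_db,
                           min(self.bounds.max_gain_db, self.gain_db + step))
        return self.gain_db
```

The AI can push in any direction it likes, but the output never leaves the range the designer chose, and flipping `operator_override` instantly returns full authority to the human, which is the principle the panel kept returning to.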

Responsibility When AI Fails

As the conversation turned toward governance, the panel addressed an uncomfortable but necessary question: if AI makes a mistake during a live broadcast, who is responsible?

Phillips framed the answer bluntly.

“You can’t blame the machine,” he said. “You have to blame the person who sets the parameters around the machine.”

In other words, responsibility ultimately rests with the humans who design and deploy the system.

Chuck Davidson of LiveU pointed to another layer of the solution: compliance and monitoring tools that track broadcast outputs in real time. LiveU’s acquisition of Actus, a compliance monitoring platform, provides one example of how oversight systems can serve as a safety net as AI becomes more embedded in the broadcast chain.

Originally designed to monitor broadcasts for regulatory compliance, those platforms may also evolve into governance layers that monitor AI-driven systems and detect anomalies in real time.

As AI capabilities expand, the need for visibility and auditing may become just as important as the automation itself.

Guardrails Inside the Control Loop

If AI systems are going to operate inside the signal chain, the panelists agreed that guardrails must be built directly into the architecture.

Steve Cooperman offered a practical example from Vizrt’s production tools: gaze correction technology.

The feature automatically adjusts a presenter’s eye direction so they appear to be looking directly at the camera, even if they are reading from a screen below.

In most situations, the effect works seamlessly.

But in certain edge cases, such as when a presenter moves their head rapidly, the automated correction can produce unnatural results.
In those situations, the production team needs the ability to disable the feature immediately.

That principle applies across most AI-driven broadcast tools. Automation can enhance production quality, but systems must always allow human operators to override the result.

Dan Griffin reinforced the same point from a network engineering perspective.

Even if AI assists with network design or configuration, engineers still need to verify the results before deployment.

Automation may accelerate the process, but it cannot replace the responsibility of validating the final system.

Davidson also noted that much of the hesitation surrounding AI mirrors earlier technology transitions in broadcast. When the industry moved from tape to digital workflows, many broadcasters resisted abandoning physical media. Over time, however, those changes became standard practice.

An Industry Still in the Early Stages

When the discussion opened to the audience, the conversation turned toward a broader question: how far along is the industry in adopting AI-driven production systems?

The panelists agreed that broadcast technology is still in the early phases of this transition.

Griffin noted that AI tools have improved dramatically even within the past year, evolving from novelty features into genuinely useful production tools.

Phillips added that financial incentives will accelerate adoption. As broadcasters look for new ways to monetize archival content and expand into new markets, AI-driven translation, localization, and restoration technologies may unlock entirely new revenue streams.

Cooperman pointed to sports broadcasting as a clear example of rapid innovation.

Over the past year alone, the volume of real-time analytics, augmented graphics, and AI-assisted visual effects has increased dramatically across live sports coverage.

And that trend is unlikely to slow down.

So, Should AI Be Allowed Inside the Control Room?

The panel’s answer was nuanced.

AI is already entering the signal chain, analyzing content, triggering workflows, and assisting operators in real time.

But authority remains a human responsibility.

Automation can improve speed, reduce repetitive workloads, and enable new types of production effects.

Yet live broadcast environments still require human oversight, editorial judgment, and operational accountability.

The most likely future is not one where machines replace production teams.

It is one where humans design the boundaries, machines operate within them, and the control room evolves into a collaboration between the two.

And as AI becomes more capable, the most important role may not be deciding what the machines can do.

It may be deciding what they should never be allowed to do at all.
