Automation, AI, and the Limits of Machine Decision-Making
CHESA Fest 2026 brought together technology vendors, media organizations, and workflow architects to explore the architectural shifts reshaping modern content infrastructure. As part of
the event, a series of vendor panels examined the deeper technical debates emerging across storage, asset management, and AI-driven workflows.
This discussion focused on one of those debates: how the rapid acceleration of automation and AI-driven tooling is reshaping operational control inside media workflows, and where human judgment must remain as pipelines become increasingly autonomous.
Where Human Judgment Still Matters in Media Operations
For decades, automation in media workflows meant something very specific.
Machines executed instructions.
Humans made the decisions.
Files were transcoded, assets were moved, QC checks were triggered, and workflows advanced step by step through carefully designed pipelines. Automation increased speed, but authority still belonged to the people designing and operating the systems.
That line is beginning to blur.
Today, automation doesn’t just execute tasks. Increasingly, it evaluates conditions, suggests edits, flags problems, and triggers decisions that once required human review. AI-assisted tools summarize content, generate metadata, recommend creative adjustments, and in some cases even assemble media outputs automatically.
The question is no longer whether automation can accelerate media workflows. That battle was won years ago.
The real question is what happens when automation begins to make operational decisions.
At CHESA Fest 2026, Vendor Panel 3 examined the tension emerging inside modern media pipelines: as AI-driven systems become more autonomous, where does operational authority actually reside? Are we simply building faster deterministic workflows, or are we gradually transferring judgment itself to software?
The discussion revealed that while automation continues to expand rapidly, the role of human judgment inside media operations remains far from obsolete.
In fact, it may be becoming more important than ever.
The Panel
The conversation was moderated by Felix Coats, Solutions Architect at CHESA, and brought together experts representing different layers of the modern media pipeline, from workflow orchestration and infrastructure automation to AI-assisted creative tooling.
Panelists included:
- Erik Zindulka, Senior Sales Engineer at Telestream
- Sarah Semlear, U.S. Sales Lead at Hiscale
- Greg Holick, VP of Business & Channel Development at Helmut US
- Dave Helmly, Director of Strategic Development – Professional Video at Adobe
- Scott Eik, Senior Engineer at Scale Logic
- Jason Whetstone, Senior Product Development Engineer at CHESA
Together, the panel explored a critical architectural question: as automation platforms become more intelligent and AI-driven tooling becomes embedded across production pipelines, what decisions must remain human, and which ones can safely be delegated to machines?
Automation Is Expanding — But Not Evenly
The panel opened with a deceptively simple question: by 2030, what percentage of media operations will be fully automated?
Even defining that percentage proved difficult. Several panelists noted that automation happens unevenly across organizations and workflows, with some environments already heavily automated while others still rely on largely manual processes.
The answers that followed reflected that uncertainty.
Dave Helmly of Adobe offered perhaps the most aggressive prediction. From his vantage point inside the creative tooling ecosystem, the direction of travel appears clear.
“I’m going to say 99 percent,” Helmly said. “Because there’s always that one person holding out the last one percent.”
Helmly’s reasoning wasn’t based on replacing creative professionals, but on eliminating the production tasks that consume enormous amounts of time while contributing little creative value.
In large-scale media operations, generating deliverables can quickly multiply from a single asset into hundreds of variations—different languages, aspect ratios, captions, and regional compliance edits. Increasingly, those variations are being processed automatically.
In that model, automation does not replace creativity. It removes the operational friction surrounding it.
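The fan-out Helmly describes can be illustrated with a minimal sketch. This is a hypothetical example, not any vendor's actual API: the language, aspect-ratio, and caption lists are invented, but they show how a single master asset multiplies combinatorially into deliverables that no one wants to create by hand.

```python
from itertools import product

# Hypothetical deliverable matrix for illustration only:
# each axis multiplies the number of outputs per master asset.
LANGUAGES = ["en", "es", "fr", "de", "ja"]
ASPECT_RATIOS = ["16:9", "9:16", "1:1", "4:5"]
CAPTION_VARIANTS = ["none", "burned-in", "sidecar"]

def rendition_jobs(master_id: str) -> list[dict]:
    """Enumerate every deliverable variation for one master asset."""
    return [
        {
            "master": master_id,
            "language": lang,
            "aspect_ratio": ar,
            "captions": cap,
        }
        for lang, ar, cap in product(LANGUAGES, ASPECT_RATIOS, CAPTION_VARIANTS)
    ]

jobs = rendition_jobs("promo_2026_001")
print(len(jobs))  # 5 languages x 4 ratios x 3 caption modes = 60 variations
```

Even this toy matrix yields 60 renditions from one asset; real compliance and regional edits push the count far higher, which is exactly the friction automation removes.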
Not everyone on the panel was ready to go that far.
Greg Holick of Helmut suggested that the industry is still early in the automation curve. Today’s AI systems are already effective at tasks like orchestrating pipelines, managing localization, or moving assets between systems. But that does not mean the entire production lifecycle is ready to run autonomously.
“I think right now we’re around the twenty percent phase,” Holick said. “AI is great at handling the mundane tasks that humans shouldn’t be doing in the first place.”
Holick estimated that by the end of the decade, automation could realistically reach 50 to 70 percent of media operations, but he emphasized that creative judgment will continue to require human involvement.
“There are things AI just isn’t aware of,” he explained. “Creative intent, cultural context, subtlety. Those are human capabilities.”
Sarah Semlear of Hiscale framed the issue in simpler terms. The real goal of automation is not to eliminate people from the process—it is to eliminate the work nobody wants to do.
Automation, Semlear argued, should function like a calculator: removing tedious effort and allowing people to focus on higher-value work. If that happens, the outcome is not a fully automated industry. It is a more enjoyable one.
The Accountability Problem
From there, the discussion shifted toward a more fundamental question: what operational decisions cannot safely be automated today?
Several panelists returned to the same underlying issue: accountability.
Erik Zindulka of Telestream referenced a line from early computer science training that still resonates decades later.
“A computer cannot be held accountable,” he said. “Therefore it cannot make a management decision.”
In media workflows, that principle still matters.
Automation can analyze files, detect patterns, and trigger processes, but responsibility for the final output remains human. Editorial standards, brand identity, and compliance obligations ultimately belong to the organizations producing the content.
“You come to a media outlet because you expect a certain type of output,” Zindulka explained. “That’s defined by the people behind it.”
Greg Holick added a practical layer to that idea: legal responsibility.
If an automated workflow publishes the wrong content, pulls the wrong advertisement, or distributes media incorrectly across regions, the consequences are not theoretical. Those decisions carry contractual, regulatory, and financial implications.
“And AI can’t be held responsible for that,” Holick said. “Only a human can.”
Automation may accelerate production, but ownership of the outcome remains human.
In that sense, the question is not whether AI will participate in decision-making. It already does. The real question is where organizations draw the boundary between automation and authority.
The Morality Question
As the conversation deepened, the panelists began exploring another dimension of AI-driven workflows: ethical judgment.
Felix Coats raised the question directly. If AI systems can be trained to follow rules, remove bias, and enforce guidelines, does human judgment still need to remain in the loop?
Sarah Semlear argued that morality is too contextual to encode into software.
According to Semlear, “Morality depends on culture. It depends on the country you’re in, the situation you’re in, and the people involved.”
Greg Holick agreed, noting that even sophisticated systems struggle with nuance.
“You can tell AI the rules,” he said, “but it doesn’t understand the cultural references or the creative intent behind something.”
Dave Helmly approached the issue from another angle: the way AI shapes content consumption. As recommendation systems become more sophisticated, they increasingly learn individual user behavior and tailor what content people see.
“It’s going to know me better than it knows me now,” Helmly said. “And it’s going to feed me the things it thinks I want.”
That dynamic introduces a new layer of responsibility for organizations deploying AI-driven media systems.
The issue is not simply whether automation can produce content. It is whether it can shape perception responsibly.
The Rise of Low-Code Workflows
The conversation then pivoted toward another emerging shift in media operations: the rise of low-code tools and AI-assisted scripting.
Modern workflow platforms increasingly allow operators to design complex orchestration visually, often without writing traditional code. At the same time, generative AI tools now allow users to produce scripts or automation logic through simple prompts.
In theory, that democratizes automation.
In practice, it also introduces risk.
Scott Eik of Scale Logic pointed out that operators who run AI-generated scripts without understanding what they do can create serious operational problems.
“If you don’t know what’s happening in the background,” he said, “you can end up with systems that break and nobody knows how to fix them.”
Dave Helmly raised another concern that organizations are only beginning to grapple with: intellectual property.
If AI generates code or workflows, the origins of that code may not always be clear.
“Where did that code come from?” Helmly asked. “You could end up using something that was effectively copied.”
Yet despite those risks, panelists broadly agreed that AI-assisted development is inevitable.
Jason Whetstone of CHESA described two ways engineers are currently using these tools. One approach treats AI as a substitute for research or manual work. The other treats AI more like a collaborator.
Whetstone compared the latter approach to pair programming, where developers work together to solve problems and learn from one another.
“When I use these tools,” he said, “I treat them as a partner. But I still have to define the problem and judge whether the result actually makes sense.”
In that model, AI becomes an amplifier for expertise rather than a replacement for it.
The “Wild West” Phase of AI
Several panelists suggested that the industry is currently experiencing an early and chaotic phase of AI adoption.
Semlear compared the moment to the early days of YouTube.
When online video platforms first appeared, traditional media organizations often dismissed them as amateurish or disruptive. But over time, the ecosystem matured. Production standards improved, and entirely new professional roles emerged around the technology.
AI may follow a similar trajectory.
“We’re in the wild west right now,” Semlear said. “But it will calm down.”
Over time, the technology will likely settle into the same role many other tools have played in the evolution of media production: an infrastructure layer that becomes invisible once it matures.
Where Human Oversight Evolves
As the panel moved toward its conclusion, the discussion turned to how human roles might evolve as AI becomes embedded across production systems.
Most panelists agreed that automation will not eliminate oversight. But it will change its nature.
Even highly automated systems still require humans monitoring outcomes, validating decisions, and stepping in when automation produces unintended results.
Scott Eik emphasized the importance of guardrails. AI systems produce far better results when they operate within clearly defined boundaries.
“If you give it guidelines and rules,” he said, “you get much better outcomes.”
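One concrete form a guardrail can take, sketched here as an assumption rather than any panelist's implementation, is statically checking an AI-generated automation script against an allow-list of approved operations before anything executes. The function names on the allow-list are invented for illustration.

```python
import ast

# Hypothetical allow-list of operations an automation script may call.
ALLOWED_CALLS = {"move_asset", "transcode", "log"}

def guardrail_violations(source: str) -> list[str]:
    """Return names of top-level function calls not on the allow-list,
    found by walking the script's abstract syntax tree."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id not in ALLOWED_CALLS:
                violations.append(node.func.id)
    return violations

generated_script = "transcode('a.mov')\ndelete_archive('2020')"
print(guardrail_violations(generated_script))  # ['delete_archive']
```

A check like this does not make the generated logic correct, but it enforces the kind of boundary Eik describes: the system can only act within rules a human has defined.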
Erik Zindulka also pointed to another emerging capability: AI-driven enrichment of media libraries during ingest and processing. Instead of relying solely on manually logged metadata, AI systems can analyze content as it enters the archive and continuously add contextual understanding over time.
Zindulka offered a simple example: searching an archive for “someone talking about topic x while wearing a red shirt and sitting on a beach” and having the system return the exact moment.
Because many media archives persist for decades, he noted that future AI systems could repeatedly analyze the same material, adding new layers of metadata each time. Over time, that process could produce some of the most richly described media archives ever created.
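The enrichment model Zindulka describes can be sketched in miniature. Assume, purely for illustration, that each ingest pass attaches descriptive tags to time-coded segments; a query then becomes a subset match over those tags. The asset IDs, timecodes, and tags below are invented.

```python
from dataclasses import dataclass, field

# Hypothetical enriched archive: repeated AI analysis passes add
# descriptive tags to time-coded segments of each asset.
@dataclass
class Segment:
    asset_id: str
    start_s: float
    end_s: float
    tags: set[str] = field(default_factory=set)

ARCHIVE = [
    Segment("ep101", 0.0, 12.5, {"interview", "studio"}),
    Segment("ep101", 12.5, 47.0, {"interview", "red shirt", "beach", "topic x"}),
    Segment("ep204", 3.0, 20.0, {"b-roll", "beach"}),
]

def find(required: set[str]) -> list[tuple[str, float]]:
    """Return (asset, start time) for segments carrying every required tag."""
    return [(s.asset_id, s.start_s) for s in ARCHIVE if required <= s.tags]

print(find({"red shirt", "beach", "topic x"}))  # [('ep101', 12.5)]
```

Each new analysis pass would simply add tags to existing segments, which is why decades-old archives could keep growing richer without re-ingesting the media itself.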
The Real Question
By the end of the discussion, one point had become clear.
Automation will continue to accelerate media workflows.
That much is certain.
But speed was never the real question.
As Felix Coats summarized in the closing moments of the panel:
“The question isn’t whether automation increases speed. The question is whether judgment remains human, or becomes encoded into software.”
For media organizations navigating the rise of AI-native workflows, that distinction may become one of the defining architectural questions of the next decade.