Themes, Shifts, and What It All Means
The 2026 NAB Show didn’t just wrap with a record 58,000+ registered attendees. It wrapped with a verdict: the broadcast and media technology industry has officially moved past the proof-of-concept phase. The questions that dominated the floor in 2024 and 2025 — “Can AI do this?” “Is cloud viable for live?” “Will IP replace SDI?” — have been answered. What replaced them was harder and more interesting: “How do we operationalize all of this without doubling headcount or tripling our infrastructure budget?”
Nearly half of those 58,000 attendees were first-timers. Content creators registered at more than double the rate of 2025. Corporate media professionals nearly doubled to over 13,000. Sports organizations sent representatives from 75 professional teams, 22 leagues, and 30 venues. The composition of the room changed, and with it, the nature of the conversation.
Here’s what we observed, what it means, and where we believe it’s heading.
AI Gets a Job Description
If NAB 2025 was the year everyone demonstrated AI, NAB 2026 was the year the industry collectively demanded receipts. The shift was unmistakable: two dedicated AI Pavilions, nearly double the AI exhibitors from the prior year, and AI woven into virtually every product story on the floor. But the tenor of the conversation changed.
The industry has quietly but decisively split AI into two camps. And understanding the distinction matters more than any product announcement.
Analytical AI: The Workhorse
The first camp is analytical AI: the kind that automates the tedious, repetitive, time-consuming work that has always been the hidden tax on production. Speech-to-text transcription. Facial detection. Logo identification. Shot classification. Automated metadata enrichment. Intelligent content routing. Scene description. Object recognition.
These capabilities aren’t headline-grabbing. They’re the work that a junior coordinator used to spend 40 hours a week doing manually, and they’re now running autonomously inside production platforms across the industry. The consensus among production professionals has tilted decisively toward this kind of AI, the kind that saves you a hire, not the kind that replaces one.
What made this year different is scale. These tools aren’t being piloted anymore. They’re running in production, and the reported efficiency gains are significant. One analysis cited an estimated 34% overall time savings across productions using AI-assisted workflows. Whether that number holds across every environment is debatable, but the direction is not.
Agentic AI: The New Frontier
The second camp is where things get genuinely interesting, and genuinely new. Agentic AI refers to systems that don’t just respond to commands but reason through multi-step goals, use specialized tools, and take autonomous action with human oversight.
The most visible example: the Avid–Google Cloud partnership, which integrates Gemini models and Vertex AI directly into Media Composer and Avid’s new Content Core platform. Editors can query media archives in natural language, automate logging, and leverage AI that understands visual, audio, and dialogue context simultaneously. AWS demonstrated the same pattern with the PGA TOUR’s automated broadcast production, where real-time shot data triggers intelligent production decisions across multiple courses.
In the newsroom space, agentic architectures are starting to appear in story automation, with systems that deploy hierarchies of AI agents to handle the retrieval, categorization, and scoring of incoming stories based on recency, relevance, and editorial priorities. The vision is that producers and editors spend more time on storytelling and less on the mechanical assembly of news packages.
This is just scratching the surface. Agentic AI will be a much larger story at NAB 2027 and beyond. The infrastructure is being laid now, but the production-grade deployments are still early. The organizations investing in understanding agentic patterns today will have a meaningful head start when the tooling matures.
AI Editing Assistants: The Third Rail
One category drew significant floor traffic and significant debate: AI editing assistants. These aren’t generative tools that create synthetic footage. They target the most persistent, unglamorous problem in post-production: the crushing volume of footage that no human team can efficiently review, organize, and cut. Editors consistently report spending 80% of their time finding and preparing footage and only 20% on the actual creative work. These tools are trying to flip that equation.
Eddie AI had its first NAB booth, and it was packed. The v3 release debuted Night Shift, an overnight batch mode where editors drop a folder of media and Eddie sorts interviews from B-roll, syncs multicams, logs everything, and assembles a narrative rough cut ready to open in Premiere Pro, DaVinci Resolve, or Final Cut Pro by morning. The system proposes a story framework, assembles beat by beat, logs available B-roll, and places it over the A-roll spine automatically, using only real footage. Editors can even text a footage link to Eddie via SMS and receive the edit by 8 AM. Eddie supports multilingual transcription, up to 6-camera multicam, and rough cuts up to 40 minutes.
Quickture, created by reality TV producer Irad Eyal (Bravo’s Southern Charm, Netflix’s Floor Is Lava), was demoed at the Avid booth. Built specifically for long-form unscripted content, Quickture works as a panel inside Premiere Pro and Avid Media Composer rather than as a standalone app. That’s a deliberate choice: as Eyal noted, “editors won’t let you change the color of a menu, let alone learn a whole new system.” Quickture Vision adds visual understanding, recognizing objects, locations, and actions to build assembled sequences matching B-roll to interview content. Its speaker ID system can distinguish 30+ voices in a single episode. It’s already used by teams at A+E, ITV, Paramount, Banijay, and EndemolShine.
Selects by Cutback (San Francisco/Seoul) debuted at NAB targeting the phase before editors even open their NLE: multi-cam sync, angle alignment, and clip organization. The tool automatically structures raw multi-cam footage into a draft edit ready for handoff to Premiere Pro, Final Cut Pro, or DaVinci Resolve.
CaraOne (by Obvious Future GmbH, deployed via Scale Logic) takes a fundamentally different architectural approach: fully on-premises, running on a 2U GPU server with no internet connection required. For studios and broadcasters with strict data sovereignty requirements, this matters. Beyond search, CaraOne assembles rough cuts by intelligently sequencing clips into narrative structures, and its conversational interface lets teams ask questions about their footage in natural language. It supports 170 languages and integrates directly with Avid Media Composer, Premiere Pro, DaVinci Resolve, and Flame.
creative.space Intelligence (CSI) by DigitalGlue also debuted at NAB, automatically generating rough cuts, stringouts, and alternate versions of content. Its conversational interface lets producers and non-editors work directly with footage without waiting on editorial time.
Imagen Video left beta at NAB with a different angle entirely: AI-powered color grading integrated into Premiere Pro and DaVinci Resolve, automating the clip-by-clip corrections that traditionally consume hours of finishing time.
The editorial community remains split on these tools. But the practical reality is hard to argue with: when the footage ratio is 500:1 and the deadline hasn’t moved, anything that helps an editor find the right moment faster deserves evaluation. The tools gaining traction are the ones firmly in the assistance camp, handling logging, syncing, organizing, and initial assembly so human editors can focus on narrative, pacing, and emotional impact. The storytelling remains irreplaceable. The mechanical assembly increasingly doesn’t need to be.
Where Generative AI Landed (and Didn’t)
Generative AI, the shiny object of the last two NABs, took a notably quieter position in 2026. It wasn’t absent. But it wasn’t leading conversations the way it did in 2024 and 2025. The industry has grown appropriately cautious about generative content in professional production workflows, particularly around provenance, rights, and the risk of AI-generated material being indistinguishable from captured footage.
That said, several vendors pushed generative capabilities further into production-adjacent territory. Avid and Google Cloud demonstrated generative video creation directly inside Media Composer using Gemini models. The demo showed text-prompt-directed scene generation where an operator could instruct an AI-generated character to perform specific actions, with natural movement and realistic environments. The pitch: B-roll generation and visual ideation without leaving the NLE. Pricing and release date remain TBD, but the integration signals how close generative video is getting to the editorial timeline. Adobe expanded its Firefly platform into a browser-based video editor that combines generated clips, imported footage, and AI audio tools in a multi-track timeline, positioning Firefly as a production tool, not just a playground. ENCO demonstrated SPECai, which generates client spec ads using multiple scripts, varied background music, and dozens of AI voices, producing a finished spot in seconds. Amagi showed automated artwork generation with subject-aware cropping that formats content for platform-specific requirements across FAST and OTT channels. NVIDIA demonstrated an agentic control plane that coordinates multiple AI agents to collaboratively generate scripts and animated characters.
The pattern across these demos: the generative applications that resonated were narrow and well-scoped. B-roll ideation within controlled editorial environments. Spec ad generation for sales teams. Automated artwork for multi-platform distribution. These are workflows where generative AI solves a specific, bounded problem rather than attempting to replace human creative judgment wholesale. The fully autonomous content pipelines that some startups are building remain further out and further from industry comfort.
The broader signal: analytical and operational AI are hitting mainstream adoption. Agentic AI is building momentum but still early. Generative AI in production workflows remains a case-by-case evaluation with real guardrails required.
The MAM Evolution: From Search to Understanding
Media Asset Management has been a mature category for decades. But NAB 2026 exposed a generational fault line running through the middle of it, and the fault line is architectural.
The incumbent MAM platforms were built on traditional relational databases. Structured metadata. Manual tagging. Keyword search. These systems work when content is meticulously cataloged by humans. They struggle when the volume of content overwhelms the capacity to tag it, and they break entirely when the question being asked doesn’t map to a field someone thought to create.
A newer generation of platforms leads with a fundamentally different foundation: vector databases and semantic search powered by AI models that understand the content itself. Instead of matching keywords in a metadata field, these systems let users describe what they’re looking for in natural language (“a person skiing while holding a laptop,” “the CEO speaking at the Q3 town hall,” “a close-up of our product packaging on a retail shelf”) and the system finds the specific moment inside a video that matches. Platforms like Twelve Labs and Moments Lab exhibited at NAB with multimodal AI models that generate embeddings across visual, audio, and dialogue content, enabling contextual understanding that no amount of manual tagging could replicate at scale. CaraOne, mentioned earlier, takes this approach entirely on-prem, using computer vision, NLP, and emotional analysis to make archives searchable by concept, scene, spoken word, and context, without any data leaving the facility. Shade, the startup that won NAB Product of the Year, leads with neural search and automated tagging as core capabilities baked into its all-in-one platform.
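For readers who want the mechanics, semantic search is an embedding-and-nearest-neighbor problem rather than a string-matching one: footage and queries are mapped into the same vector space, and retrieval is a similarity lookup. The sketch below is a deliberately minimal illustration of that retrieval step, assuming the embeddings already exist; the moment records, vector dimensions, and stand-in vectors are ours for illustration, not any vendor’s actual API.

```python
import numpy as np

# Illustrative only: assume each indexed "moment" (a time range within a clip) already
# has an embedding produced by a multimodal model spanning visual, audio, and dialogue.
# The records and 512-dim stand-in vectors below are placeholders, not a vendor API.
moments = [
    {"clip": "townhall_q3.mov", "tc_in": "00:14:22", "tc_out": "00:14:37"},
    {"clip": "retail_broll_04.mov", "tc_in": "00:02:10", "tc_out": "00:02:18"},
    {"clip": "ski_promo_raw.mov", "tc_in": "00:07:05", "tc_out": "00:07:21"},
]
embeddings = np.random.default_rng(0).normal(size=(len(moments), 512))

def cosine_search(query_vec, index, top_k=3):
    """Rank indexed moments by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    m = index / np.linalg.norm(index, axis=1, keepdims=True)
    scores = m @ q
    best = np.argsort(scores)[::-1][:top_k]
    return [(moments[i], float(scores[i])) for i in best]

# In a real system the query ("the CEO speaking at the Q3 town hall") is embedded by
# the same model; here a stand-in vector of matching dimension keeps the sketch runnable.
query_embedding = np.random.default_rng(1).normal(size=512)
for moment, score in cosine_search(query_embedding, embeddings):
    print(f"{score:.3f}  {moment['clip']}  {moment['tc_in']}-{moment['tc_out']}")
```

Production platforms swap the brute-force comparison for an approximate nearest-neighbor index and embed queries with the same model that embedded the footage, but the retrieval principle holds: the query never has to match a metadata field, only a vector.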
Then there are the platforms working to bridge both worlds. Mimir offers what may be the most instructive example of where the category is heading. The platform has offered AI-powered auto-tagging for years, but at NAB 2026, Mimir showcased agnostic semantic search as a layer on top of its existing structured metadata foundation. Users can toggle between traditional term-matching search and natural language semantic search within the same interface. Mimir integrates with multiple AI video discovery platforms (including AWS, Google, Twelve Labs, and CoActive AI), letting customers choose their preferred intelligence stack while standardizing the search-to-edit workflow in Mimir. The agnostic approach is deliberate: rather than building proprietary AI, Mimir connects to whichever semantic engine fits the customer’s needs and budget.
Platforms like Iconik (Backlight) are taking a different approach entirely: rather than chasing semantic search, they’re doubling down on structured metadata as the governance layer AI needs to function properly. At NAB 2026, Iconik introduced AI-powered metadata enrichment that leverages what the system already knows about an asset to intelligently suggest new values, making the relational database increasingly self-populating rather than manually maintained. Their thesis is straightforward: at 903 million assets under management and 11 terabytes of new media ingested every hour, the permissions, rights tracking, and workflow control that a structured MAM provides isn’t optional. It’s the foundation everything else runs on.
Here’s the honest assessment: neither approach alone is the complete answer. Relational databases excel at structured workflow management (who has access, what version is approved, where does it go next). Vector search excels at content discovery: finding the moment you need across petabytes of media you couldn’t manually tag in a lifetime. The Shangri-La is the platform that seamlessly integrates both: structured workflow control with AI-driven content understanding, where an editor can search semantically for the right shot and the system simultaneously enforces permissions, tracks versions, and routes the asset through the correct approval chain.
Nobody has fully arrived there yet. But the platforms moving fastest toward that integration — whether by building semantic capabilities into relational systems, or by adding workflow management around vector search engines — are the ones that will define the next generation of media asset management. The organizations that figure out how to marry these two worlds will have a significant operational advantage over those still relying exclusively on either approach alone.
The Connectivity Revolution: MCP, Orchestration, and the End of the Custom Connector
Here’s a theme that didn’t get a dedicated pavilion at NAB but may end up being the most consequential shift of all: the way production tools connect to each other is about to fundamentally change.
For years, connecting one production platform to another (getting your work management platform to talk to your MAM, your NLE to talk to your Slack channel, your review tool to talk to your delivery platform) required custom integration work. APIs, webhooks, middleware, bespoke connector development. Companies like Embrace, Qibb, Helmut Cloud, Zapier, Make, and others built businesses around orchestrating these connections, creating workflow automation layers that bridge the gaps between tools that weren’t designed to talk to each other.
That work is real, valuable, and not going away overnight. But something is shifting underneath it.
The Model Context Protocol (MCP), introduced by Anthropic in late 2024 and now supported by OpenAI, Google, and a growing ecosystem, has emerged as an open standard for how AI systems connect to external tools and data sources. Think of it as a universal adapter layer: one standardized protocol that lets any AI application discover and interact with any compliant tool without bespoke integration code.
The implications for media workflows are significant, even if they weren’t explicitly demonstrated on the NAB floor this year. The current integration model in media production is what engineers call an “N×M problem,” where every tool needs a custom connection to every other tool. MCP collapses that to “N+M”: build a server once, and any compliant client can use it.
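To make “build a server once” concrete, here is a minimal sketch of what an MCP server wrapping a MAM could look like, written against the FastMCP interface in the official Python MCP SDK. The MAM client and its two methods are hypothetical stand-ins for whatever your asset manager actually exposes; this is a pattern illustration, not a shipping integration.

```python
from mcp.server.fastmcp import FastMCP

# Stand-in for your asset manager's real API client; the methods below are hypothetical.
class _StubMAMClient:
    def search(self, query: str, max_results: int) -> list[dict]:
        return [{"asset_id": "demo-001", "title": f"placeholder result for: {query}"}]

    def proxy_url(self, asset_id: str) -> str:
        return f"https://example.invalid/proxies/{asset_id}.mp4"

mam_client = _StubMAMClient()
mcp = FastMCP("mam-search")

@mcp.tool()
def search_assets(query: str, limit: int = 10) -> list[dict]:
    """Search the media asset manager for clips matching a natural-language query."""
    return mam_client.search(query=query, max_results=limit)

@mcp.tool()
def get_proxy_url(asset_id: str) -> str:
    """Return a streamable proxy URL for a given asset ID."""
    return mam_client.proxy_url(asset_id)

if __name__ == "__main__":
    mcp.run()  # serves the tools over stdio; any MCP-compliant client can discover them
```

The specifics don’t matter; the pattern does. Once a server like this exists, any MCP-aware client (an editorial assistant, an agent framework, an orchestration layer) can discover and call those tools without a bespoke connector being written for each pairing.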
Gartner predicts that by the end of 2026, 75% of API gateway vendors and 50% of iPaaS (integration platform) vendors will have MCP features. Forrester predicts that 30% of enterprise application vendors will launch their own MCP servers. The community has already built thousands of MCP servers for common integrations: file systems, databases, Slack, Google Drive, and more.
What does this mean for media organizations? We’re approaching a world where AI becomes the orchestration layer between production tools. Instead of building custom connectors or purchasing middleware, an AI agent with access to MCP servers for your MAM, your storage, your NLE, and your delivery platform can coordinate actions across all of them, triggered by natural language instructions rather than coded workflows.
This doesn’t replace the workflow orchestration platforms. It changes what sits on top of them. The hard-won integration work that companies like Embrace, Qibb, and Helmut have built becomes the foundation that AI agents operate through, not a layer that gets bypassed. But the expectation from production teams is going to shift: they’ll increasingly expect tools to exchange information with each other natively, and AI to manage the coordination.
We’re early in this transition. But if the pace of MCP adoption in the broader technology landscape is any indicator, media production will feel the effects sooner than most expect.
The Cloud NAS Space Race
There is a legitimate space race happening in media production storage right now, and NAB 2026 made it impossible to ignore.
The category that LucidLink effectively created, cloud-native shared file systems that behave like local storage, has matured into a competitive market with multiple serious players and fundamentally different architectural approaches. LucidLink announced LucidLink Connect, which enables teams to access and stream content from external sources like S3, Frame.io, Google Drive, and others directly inside the LucidLink workspace without copying or re-ingesting. Suite Studios landed one of the show’s most significant partnerships, embedding their file streaming technology into Adobe’s new Frame.io Drive application, and separately announced S3 Native File Streaming in beta, a direct read/write capability to S3-compatible storage without proprietary file formats or intermediary layers. Storj’s Object Mount offers full read/write access to any S3-compatible storage at dramatically lower price points. Shade arrived with $14 million in funding and a Product of the Year award for an all-in-one platform that combines cloud NAS, AI search, MAM, and review into a single system. Amove continues to carve out the multi-cloud management layer, mounting storage buckets from 30+ providers directly to the desktop and giving teams a unified interface for accessing, migrating, and managing content across cloud environments.
Strada Connect arrived with the most contrarian thesis on the floor: what if you didn’t need the cloud at all? Rather than mounting cloud storage locally, Strada lets remote collaborators access your existing local storage directly (drives, NAS, SAN) with folder-level permissions, no third-party bucket required. The media never moves. No cloud costs, no egress fees. At the booth, they streamed a 28GB BRAW file live from California to Las Vegas inside a web browser. Starts at $8/month.
And then, two weeks before NAB opened, AWS launched S3 Files, a feature that lets you mount any S3 bucket as a native file system directly on AWS compute resources with full read/write. Not a desktop product today. But the signal is loud.
The architectural race underneath all of this is the drive toward direct read/write access to cloud object storage without proprietary intermediary layers, or in Strada’s case, the argument that bypassing cloud entirely is the better answer. Different vendors are approaching this from fundamentally different angles, and the right choice depends heavily on your workflow, team distribution, and existing infrastructure.
The high-level takeaway: cloud NAS has become the default production storage conversation for distributed teams. On-prem storage hasn’t died. It’s shifted to a secondary but essential role: high-performance active archive, business continuity, and disaster recovery. The organizations controlling costs are the ones implementing intelligent lifecycle tiering across production storage, active archive, and deep archive today.
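What does that tiering look like in practice? On S3-compatible object storage, a large share of it can be expressed as lifecycle rules. The sketch below uses boto3 to move a finished-projects prefix to colder storage classes on a schedule; the bucket name, prefix, and day counts are placeholders, and the right policy depends entirely on your retention and restore-time requirements.

```python
import boto3

# Simplified sketch: tier media out of the hot production tier automatically.
# Bucket, prefix, day counts, and storage classes are placeholders; align them
# with your own retention policy and restore-time expectations.
s3 = boto3.client("s3")

lifecycle_rules = {
    "Rules": [
        {
            "ID": "tier-finished-projects",
            "Filter": {"Prefix": "finished-projects/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},    # active archive
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},  # deep archive
            ],
        }
    ]
}

s3.put_bucket_lifecycle_configuration(
    Bucket="my-production-media",  # placeholder bucket name
    LifecycleConfiguration=lifecycle_rules,
)
```

The same idea applies whether the rules live in S3, in a MAM’s storage policies, or in a third-party tiering tool; the point is that the movement is automated, not remembered.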
Meanwhile, Quietly in the Corner: TAMS
While every vendor in the cloud NAS space was arguing over the best way to mount files from the cloud, a small but architecturally serious contingent at NAB 2026 was asking a different question entirely: what if the file is the wrong unit of work?
TAMS (Time Addressable Media Store) is an open-source API specification developed by BBC Research & Development that takes a fundamentally different approach to media storage. Instead of storing content as monolithic files that get mounted, streamed, synced, or transferred, TAMS breaks media into time-addressable chunks stored in cloud object storage, accessible via an open HTTP API indexed by timeline position. Any authorized system can request any portion of content by time range without downloading a file, mounting a drive, or installing a proprietary client. Live content becomes queryable the moment it’s captured. There’s no boundary between live and VOD. You don’t generate a new master file every time you add a track — the system simply references when each element plays alongside the others.
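To illustrate the access pattern, here is a sketch of what requesting a slice of a flow’s timeline might look like over HTTP. We’re approximating the spec here: treat the endpoint path and timerange syntax as assumptions and consult the BBC R&D TAMS repository for the authoritative API; the host and flow ID are placeholders.

```python
import requests

# Illustrative sketch of time-addressed access: ask for the segments covering a time
# range on a flow's timeline, with no file download, drive mount, or proprietary client.
# The endpoint path and timerange syntax approximate the open TAMS spec rather than
# quoting it verbatim; host and flow ID are placeholders.
TAMS_BASE = "https://tams.example.invalid"
FLOW_ID = "0b7b4f4c-0000-0000-0000-000000000000"

resp = requests.get(
    f"{TAMS_BASE}/flows/{FLOW_ID}/segments",
    params={"timerange": "[120:0_150:0)"},  # roughly: seconds 120-150 of the timeline
    timeout=30,
)
resp.raise_for_status()

for segment in resp.json():
    # Each segment references an object in cloud storage plus its position on the timeline.
    print(segment.get("timerange"), segment.get("object_id"))
```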
TAMS showed up in multiple places at NAB. AWS featured it in the West Hall booth. LucidLink hosted a session on their stage where AWS EMEA’s Chris Swan walked through how TAMS is transforming fast-turnaround workflows. Mavis Camera announced a beta TAMS integration for Camera-to-Cloud, enabling iPhone-captured content to upload progressively as discrete time-addressed chunks rather than waiting for a finished file. Konstrukt’s open-source Omakase Player added native TAMS playback. Norsk integrated TAMS so live content becomes immediately searchable as it’s being recorded. The specification has been adopted by AWS, Adobe, Sky, and others.
This isn’t replacing anyone’s production storage tomorrow. But it’s the kind of foundational open-standards work that, if adoption accelerates, could eventually reframe the entire conversation about how media is stored, accessed, and exchanged. Every vendor in the cloud NAS race is building a better way to work with files. TAMS is betting the file itself is the problem. Worth watching.
The IP Infrastructure Tipping Point
If there was a single product story that defined NAB 2026, it was the complete, end-to-end 100G Ethernet and SMPTE-2110 broadcast infrastructure stack announced by Blackmagic Design. Not a roadmap. Not a demo. Shipping products: cameras, switchers, recorders, converters, storage, network switches, and a free software-based audio mixer, all native 100G IP, all priced at levels that fundamentally change the cost equation for IP migration.
This wasn’t the only signal. Sony introduced MOXELA, a software-based media processing platform designed to run on commercial off-the-shelf servers. Ross Video’s Ultrix platform passed 5,000 frames deployed globally. FOR-A, a company that has exhibited hardware at NAB since 1980, pivoted its entire booth to software-defined, GPU-based solutions. When a 45-year hardware company says the future is software, the rest of the industry should pay attention.
For facilities planning infrastructure work in the next 12–24 months, the planning question has changed. It’s no longer “should we go IP.” It’s what speed, what timeline, and what migration path protects your current investment while getting you there. And critically: budget for network engineering skills. The transition from SDI to IP is as much a people challenge as a technology one. Most broadcast engineers didn’t grow up managing Ethernet fabrics, and closing that skills gap takes time.
What Caught Us Off Guard
Every NAB has moments that don’t fit neatly into trend narratives. These are the observations that made us stop, reconsider an assumption, or pull a colleague over to look.
Content authenticity is becoming infrastructure. Sony’s PXW-Z300 is the first ENG camera supporting the C2PA standard for video provenance, providing camera-originated proof that footage is real. In a world where AI-generated video approaches photorealism, the ability to establish chain of custody from capture to distribution is shifting from philosophical concern to operational requirement. News organizations will need this first. Everyone else will need it soon after.
Projection-based virtual production went multi-camera. A coalition including Christie, Vizrt, Disguise, HP, NVIDIA, Matrox, and Evertz demonstrated the first publicly shown multi-camera, projection-based VP workflow for broadcast. If projection achieves comparable results to LED volumes at lower cost, the addressable market for virtual production expands dramatically.
AI saved an estimated 34% of production time. That stat circulated widely at the show. Whether or not it holds up to scrutiny across all environments, it reflects the magnitude of the efficiency claims being made, and the urgency driving adoption.
Vertical video became a first-class production output. AWS Elemental Inference demonstrated automated 16:9 to 9:16 crop generation in 6–10 seconds, running AI in parallel with live video encoding. With Gen Z consuming 88% of streaming content on smartphones, vertical isn’t an afterthought anymore — it’s a primary deliverable.
The creator economy got serious infrastructure. The expanded Creator Lab in Central Hall, co-presented by Adobe and Blackmagic Design, focused not on follower counts but on monetization strategy, IP ownership, and the operational mechanics of running a content business. These creators are growing up, and their infrastructure needs are evolving from modest to serious. The vendors and integrators that figure out how to serve them will own a rapidly growing market segment.
Where It’s All Going
We walked the floor with a specific question: if a client called us the Monday after NAB, what should they actually be doing differently?
Invest in AI that has a job description. If you can name the specific manual workflow an AI tool replaces and measure the hours it saves in the first 90 days, proceed. If the value proposition requires a whiteboard and three hypothetical scenarios, wait.
Design for IP-native. If you’re specifying new infrastructure, SDI-only is legacy. Budget for 25G or 100G Ethernet and include network engineering in your staffing plan.
Get your storage lifecycle in order. If your architecture doesn’t have automated tiering between production, active archive, and deep archive, you’re overspending every month. The tools exist. The excuses don’t.
Start evaluating how your tools connect. The MCP-driven future where AI orchestrates cross-platform workflows is coming faster than the media industry expects. Understand which of your vendors are building toward open connectivity and which are still operating as closed ecosystems.
Put content authenticity on your roadmap. The C2PA standard and camera-to-distribution provenance chains are early, but the regulatory and market forces driving them are accelerating.
The Uncomfortable Summary
NAB 2026 was not a show of breakthroughs. It was a show of accountability. The technologies that the industry spent three years evaluating are mature enough that the evaluation phase is over. The gap between organizations that are deploying and organizations that are still studying is widening.
The vendors winning right now are the ones that understand they’re not selling technology. They’re selling time back to teams that don’t have enough of it.
The integrators winning right now are the ones helping clients navigate architectural decisions with honest trade-off analysis, not product pitches dressed up as consulting.
And the media organizations winning right now are the ones that stopped waiting for perfect clarity and started building with the tools that are ready today.
We’d rather be in that last group. We suspect you would too.
One More Thing: What’s Better Than Reading About a Show?
Having someone walk it with you.
Trade shows are a firehose. Hundreds of vendors. Overlapping product claims. Marketing gloss layered over genuine roadmap movement. A show floor explicitly designed to give you more information than you can process. Most teams walk out of NAB with a stack of business cards and a vague sense that they saw the right things — but no clear picture of what any of it means for their actual environment.
That’s the problem CHESA Consulting’s Show Intelligence exists to solve.
As part of the CHESA Strategic Services Partnership, Show Intelligence is built into the relationship for every event we attend. We come to the show with your team. We curate the vendor list against your real pain points. We facilitate the meetings. We provide an independent read on every claim, every demo, and every roadmap commitment, filtered against what we know about your workflows, your constraints, and your operating model.
After the show, we deliver a structured briefing: conversation summaries drawn from transcripts, announcements researched and verified, open decisions named, and a consolidated action register that tells you exactly what needs to happen next and who owns it. For a recent client, that meant 12 structured vendor engagements across 3 days, with full transcripts synthesized, and a post-show document that surfaced three critical decisions, including an infrastructure risk that wasn’t on the original agenda.
That’s not show coverage. That’s operational intelligence.
And it’s not limited to NAB. For any show we don’t already attend (IBC, HPA, InfoComm, or anything else on your calendar), we offer Show Intelligence as a standalone engagement. Pre-planning, vendor scheduling, on-floor facilitation, transcription, and a post-show deliverable. Same product, any event.
If your team is going to a show this year, go with a plan and come back with evidence. That’s what we build.

