Inside the Broadcast Revolution: Key Takeaways from DCMUG’s Night at Monumental Sports

By Tom Kehn, VP, Solutions Consulting | March 20, 2026

On March 9th, the DC Media Users Group (DCMUG) gathered for what turned out to be one of its best events yet. Hosted by CHESA and a roster of top-tier technology sponsors, the evening kicked off at Clyde’s of Gallery Place before moving into an exclusive behind-the-scenes tour of the Monumental Sports Network Production Studio and Control Rooms at Capital One Arena. The night wrapped up in a suite for the Washington Capitals vs. Calgary Flames matchup.

The real main event, though, happened in between the tour and the puck drop: a wide-open Q&A session featuring a panel of practitioners who pulled no punches about what it actually takes to build, run, and future-proof a modern broadcast facility. The conversation touched on IP infrastructure, workforce evolution, cybersecurity, and the age-old question of when cutting-edge technology is the right call, and when it isn’t.

Here’s a deep dive into everything that came out of the room.

MEET THE VOICES IN THE ROOM

Leading the discussion was Jon Bednar, Founder and Principal Consultant of Codeso, and the architect behind the Monumental Sports Network facility the group had just toured. Jon is SMPTE ST 2110 certified and a former Navy broadcast engineer and instructor. He has designed IP-based broadcast environments for clients ranging from the United Nations and the NFL to HHS and the US Department of State. His real-world candor set the tone for the entire conversation.

The CHESA team, rounding out the panel, included:

  • Patrick Johnson, Director of Federal Sales at CHESA, opened the event and kept the conversation moving.
  • Jason Paquin, CEO of CHESA, moderated the Q&A and brought context from years of client-facing discovery and integration work.
  • Jason “Pep” Pepino, Director of Media Systems Design & Engineering at CHESA, weighed in on the technical and design side throughout.
  • Roger Sherman, Senior Solutions Consultant at CHESA and former Chief Broadcast Technology Officer at Voice of America, offered a rare federal practitioner’s perspective on the 2110 decision.

The audience was a mix of federal agency broadcast professionals, including teams from HHS and HUD, and commercial media operators from around the DC metro area. The back-and-forth was as honest as it gets.

THE HARDEST PART OF A 2110 BUILD ISN’T THE TECHNOLOGY

When Jason Pepino asked Jon Bednar what the biggest challenge was in upgrading the Monumental Sports facility, a project that ran from conceptual design in 2021 through its launch in May 2024, the answer was immediate.


“Honestly, the people.”

The facility had been the NBC RSN operation, running on legacy SDI infrastructure for 10 to 12 years. The engineering team knew that world intimately. The upgrade took them from a traditional Grass Valley-based routing infrastructure to Panasonic’s Kairos production switcher and an EVS-based IP routing environment. It would be hard to find a more dramatic technology transition in the broadcast world.

“A lot of legacy engineers lived in the SDI world for so long, and then everything changed,” Jon explained. “You can install the best, most well-engineered platform in the world, but if you don’t have people that can operate and maintain it, it’s only as good as the people.”

The philosophical shift between SDI and 2110 is significant. In an SDI environment, troubleshooting is reactive and tactile: plug in a meter, see the video, hear the audio, wait for something to break, and fix it. In 2110, that approach doesn’t work.

“With 2110, you have to be proactive. You constantly have to monitor and massage it. If something breaks, you can’t just put a meter on it. You have to know where the packet goes, where it was lost, what the fail rate is, whether it’s the red or the blue side.”
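What does “proactive” look like in practice? Here’s a minimal sketch of the kind of red/blue flow health check Jon is describing, assuming a hypothetical get_flow_counters() data source; in a real plant those statistics would come from your receivers, probes, or switch fabric, and nothing below is taken from the Monumental build.

```python
# Minimal sketch of a proactive red/blue flow health check.
# get_flow_counters() is a hypothetical stand-in for whatever exposes
# per-path RTP statistics in your plant (receiver, probe, or switch API).
import time

FAIL_RATE_THRESHOLD = 0.0001  # alert above 0.01% packet loss

def get_flow_counters(flow_id: str, path: str) -> dict:
    """Hypothetical data source; replace with real receiver/probe stats."""
    return {"rtp_received": 1_000_000, "rtp_lost": 12}

def check_flow(flow_id: str) -> None:
    for path in ("red", "blue"):
        c = get_flow_counters(flow_id, path)
        total = c["rtp_received"] + c["rtp_lost"]
        fail_rate = c["rtp_lost"] / total if total else 0.0
        if fail_rate > FAIL_RATE_THRESHOLD:
            # Knowing which side is dropping, and how badly, before an
            # operator ever sees a glitch is the whole point.
            print(f"[ALERT] {flow_id} {path} path loss rate {fail_rate:.5%}")

if __name__ == "__main__":
    while True:
        check_flow("cam-01.video")
        time.sleep(5)
```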

What training methodology worked best? Shadowing. Walking engineers through the commissioning process in real time, letting them ask questions, observe, and build intuition alongside the system as it came to life, proved far more effective than formal classroom instruction alone.

The goal Jon described for a well-designed 2110 environment is elegantly simple: an operator who has spent their entire career on SDI should be able to sit down at the console and not know the difference. The route button makes the route. Switch takes the camera. Fader up means louder. The IP world underneath is invisible to the production operator.

But for the engineers maintaining it? They need to think like network professionals. And some people’s brains, he acknowledged honestly, simply aren’t wired for that — and that’s okay. Those individuals can still contribute in production engineering roles that don’t require deep packet-level troubleshooting.

MONITORING IN A 2110 WORLD: A LAYERED APPROACH

Jason Paquin pushed Jon to talk specifics about monitoring, because monitoring looks fundamentally different in a 2110 environment than it does in the SDI playbook. What Jon described across both the Capital One Arena and COA North (the off-site production facility) was a layered diagnostic stack, with each tool serving a distinct purpose.

EVS serves as the orchestration platform and sits at the foundation. Its APIs integrate with Cisco NDFC and Arista EOS, providing the first level of visibility: bandwidth utilization per port, multicast flow tracking, and signal routing analytics. When a route fails to take, Jon’s first check is bandwidth saturation — “if it’s at 96%, you know why the route dropped.”
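As a flavor of what that first check can look like, here’s a minimal sketch that polls an Arista switch over eAPI (the JSON-RPC interface in EOS) and flags ports nearing saturation. The host, credentials, link speed, and exact JSON field names are assumptions to verify against your own environment; this is not the Monumental integration itself.

```python
# Minimal sketch: poll Arista eAPI for interface rates and flag ports
# approaching saturation. Host, credentials, and JSON field names are
# assumptions -- verify against your EOS version before relying on this.
import requests

SWITCH = "https://spine-red.example.net/command-api"  # hypothetical host
AUTH = ("monitor", "secret")                          # read-only account
LINK_BPS = 100e9                                      # assuming 100 GbE ports

payload = {
    "jsonrpc": "2.0", "id": "1", "method": "runCmds",
    "params": {"version": 1, "format": "json",
               "cmds": ["show interfaces counters rates"]},
}
resp = requests.post(SWITCH, json=payload, auth=AUTH, timeout=5, verify=False)
ifaces = resp.json()["result"][0]["interfaces"]

for name, stats in ifaces.items():
    util = max(stats.get("inBpsRate", 0), stats.get("outBpsRate", 0)) / LINK_BPS
    if util > 0.90:
        # "If it's at 96%, you know why the route dropped."
        print(f"{name}: {util:.0%} utilized; new routes may fail to take")
```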

Telestream’s Prism Inspect is the next layer. When a signal looks off, routing it into Inspect immediately reveals the full ST 2110 flow: SDP file comparisons between the red and blue redundant paths, audio presence, and stream metadata. With the ability to monitor roughly 32 signals simultaneously, it provides a broad at-a-glance health check.
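Since every ST 2110 flow is described by an SDP file, one of the simplest sanity checks you can run yourself is a plain diff of the red and blue descriptions of the same flow. A minimal sketch with hypothetical file names; the two SDPs should differ only in network-specific fields like connection addresses and source filters:

```python
# Minimal sketch: diff the SDP files describing a flow's red and blue
# paths. Differences beyond connection addresses and source filters
# (e.g., mismatched clocks or payload formats) deserve a closer look.
import difflib

with open("cam01_red.sdp") as f:       # hypothetical file names
    red = f.read().splitlines()
with open("cam01_blue.sdp") as f:
    blue = f.read().splitlines()

for line in difflib.unified_diff(red, blue, "red", "blue", lineterm=""):
    print(line)
```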

TAG sits on the monitor wall, delivering alarm-based monitoring with penalty boxes and configurable thresholds, with nearly 1,000 alarms available out of the box. It gives operators a broad “something’s wrong” signal. Crucially, though, TAG tells you that you lost video, not why. That’s where the next layer comes in.

Providious handles deeper network-level packet analysis, called in when packet drops or RTP errors need investigation at the multicast level.

And underlying all of it: PTP timing. Precision Time Protocol is the heartbeat of any 2110 plant, and as Jon put it with a laugh, “It’s easy until it’s not.” A disproportionate number of mysterious signal issues can be traced back to PTP drift.
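On Linux media nodes running linuxptp, the quickest drift check is the bundled management client, pmc. A minimal sketch, noting that the exact output format should be verified on your build:

```python
# Minimal sketch: read the local PTP offset via linuxptp's pmc tool.
# Command and parsing reflect common linuxptp output; verify on your build.
import re
import subprocess

out = subprocess.run(
    ["pmc", "-u", "-b", "0", "GET CURRENT_DATA_SET"],
    capture_output=True, text=True, check=True,
).stdout

m = re.search(r"offsetFromMaster\s+(-?\d+(?:\.\d+)?)", out)
if m:
    offset_ns = float(m.group(1))
    if abs(offset_ns) > 1000:  # illustrative 1-microsecond threshold
        # Sustained drift here surfaces later as "mystery" signal faults,
        # so catch it at the clock rather than at the monitor wall.
        print(f"PTP offset {offset_ns} ns from master; investigate now")
else:
    print("Could not parse offsetFromMaster from pmc output")
```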

Roger Sherman made a sharp observation: SDI alarm systems would often just flag “illegal video,” which was technically accurate but diagnostically useless. The 2110 monitoring ecosystem, by contrast, doesn’t always hand you the answer, but it does point you in a direction to start digging.

CONSTRUCTION DELAYS, THE CAPS’ WIN STREAK, AND ADAM SANDLER

One of the lighter, but genuinely instructive, threads of the evening was Jon’s account of what it took to actually complete the build.

The conceptual design started in 2021. The facility launched in May 2024. The biggest delays? Construction, on the heels of COVID, with its material and parts shortages. Arista had lead times of up to 11 months at one point.

But the arena portion of the build had a uniquely Washington problem. The agreement with the city required maintaining operational continuity throughout construction. The start date was locked. It could not move.

“Every time the Caps won, our timeline got shorter and shorter,” Jon said. “My wife would ask what was wrong, and I’d say, ‘They won again.’ Every win pushed us further.”

When the Capitals finally lost, he was the only person in Washington celebrating.

The first live event in the newly completed facility? An Adam Sandler show. Not exactly a stress test for the broadcast infrastructure, but the team used it to run some routing and cameras, a warm-up before the real thing.

Monumental has since built a Verizon dark fiber loop connecting the arena, the Capitals’ practice facility, and the Mystics’ arena. JPEG XS is traversing that loop today, with plans to eventually move all 2110 traffic across all facilities from a centralized production hub.


2110 VS. SDI: THE HONEST ANSWER IS “IT DEPENDS”

One of the most valuable portions of the evening was a direct question from Jason Paquin: setting budget aside, what are the actual deciding factors when choosing between 2110 and SDI?

The three panelists each offered a different lens.

Jason Pepino’s answer was scalability. In a traditional SDI router environment, a 128×128 frame is essentially maxed at day one. Adding capacity means a second router, tie lines between them, and rapidly escalating costs. With 2110, scaling is adding a network switch. For organizations with growth ambitions, that flexibility is meaningful even if the upfront investment is higher.

Roger Sherman’s answer came from experience at Voice of America. The driving factors there weren’t prestige or future-proofing for its own sake; they were practical. Distributing gateways and endpoints across the facility meant they weren’t pulling all signal paths back to a single central location, saving on copper runs, core holes, and installation labor. A second driver was resolution flexibility: some VOA services operated in SD, while others, particularly Eastern European bureaus, were pushing 4K. A single 2110 environment handled both simultaneously.

But Sherman was equally clear about the limits. He recalled a conversation with TV Martí, a much smaller operation that wanted to pursue 2110. His advice was direct: don’t. “It was prohibitive for their scale and their needs.”

Jon Bednar’s framework was the simplest: always ask why. If a client says they want 2110, his first question is: what problem are they trying to solve? He described a client in New York City who wanted a full IP infrastructure and, when pressed, couldn’t articulate why. They ended up with NDI and SDI, and it worked perfectly for them. “They have no roadmap to go to 4K. They’re not scaling across multiple facilities. Save the money and put it somewhere else.”

For organizations making major capital investments, particularly federal customers who may not see a comparable budget for a decade or more, Jon and the panel were aligned on one thing: invest in the fiber backbone now, regardless of your current technology decision. The labor of the pull is the dominant cost; the incremental cost of pulling 512 strands instead of 96 is comparatively small, and fiber is future-proof in a way that no endpoint device is. The Monumental team pulled 512 strands to each redundant rack. They needed 96 on day one.
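The arithmetic behind that advice is easy to make concrete. A sketch with purely illustrative placeholder numbers (no figures here came from the panel):

```python
# Purely illustrative "pull the fiber now" cost model -- every number is
# a made-up placeholder, not a figure quoted at the event.
LABOR_PER_PULL = 50_000   # trenching, core holes, labor: paid once per pull
COST_PER_STRAND = 40      # incremental material cost per strand

for strands in (96, 512):
    total = LABOR_PER_PULL + strands * COST_PER_STRAND
    print(f"{strands} strands: ${total:,}")
# Labor dominates either way, so pulling 512 strands on day one is cheap
# insurance against paying for a second pull later.
```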

THE ENTERPRISE NETWORKING PROBLEM: BROADCAST AND IT STILL DON’T SPEAK THE SAME LANGUAGE

An attendee from a federal agency raised a challenge that clearly resonated with most of the room: their broadcast and networking teams are siloed, they’re operating on enterprise networks not designed for video, and getting approval for the specific switches needed for a media production environment, even for NDI, is an uphill battle.

Jason Pepino was direct: broadcast media networks and enterprise IT networks have to be physically separated. Not VLANs; separate switches. The bandwidth profiles differ, the multicast requirements differ, and the update cadence for broadcast systems (where an OS may be intentionally frozen to maintain certification and stability) conflicts directly with enterprise IT’s security patching cycles.

“Corporate IT guys are going to ask why you’re passing so much bandwidth. And you still have to keep up on security, but some of these systems can only get to a certain point because the provider only brought the OS up so far.”

Jon added a practical tool for navigating internal budget conversations: engage Cisco and Arista directly. Both companies have media-specific technical teams with documentation that explicitly explains why a general-purpose enterprise switch won’t work on a broadcast media network, and why the media-optimized variant is required. That documentation can be decisive when you’re trying to make the case to an IT procurement team or an agency budget officer.

Roger Sherman reframed the underlying problem: it’s a trust and language issue as much as a technical one. If a broadcast engineer can walk into a conversation with enterprise IT and demonstrate security fluency (how the media network is segmented, how threats are mitigated, what the exposure surface actually looks like), they have a much better chance of getting the hardware and support they need.

“Once you can speak the language, you can get them to trust you. Work with them together.” He also noted the challenge that many in the room nodded at: just when you build that trust with someone on the IT side, they get promoted or leave.

CYBERSECURITY: THE CONVERSATION THE BROADCAST INDUSTRY CANNOT IGNORE

Perhaps the most sobering thread of the evening was cybersecurity. As broadcast infrastructure migrates to IP, the attack surface expands, and bad actors are already active.

“Do you guys witness bad actors frequently?” someone asked.

“Frequently,” Jon replied. “Every facility I’ve ever worked at, there are metrics where it was 10,000 hits a day on the external firewall.”

This isn’t theoretical. A 2110 plant is not a closed SDI environment with copper everywhere. The orchestration platform that used to be a massive, dedicated piece of hardware is now a virtual machine. A single compromised VM could take down an entire broadcast infrastructure — audio, control, tally, routing, everything. If the facility generates revenue through live events or chargebacks, the business impact of a successful breach is severe.

Roger Sherman outlined a pragmatic approach to segmentation: certain assets, particularly ingest encoders taking feeds via SDI, can be placed in a DMZ outside the inner firewall. If someone compromises that encoder, the incoming signal is already SDI. The blast radius is limited. “I don’t care if you hack that encoder,” he said. “Put it outside the firewall. I’ve got fewer ports to worry about. You have fewer ports to worry about. And we can proceed.”

The architecture Jon used at Monumental started with the broadcast network as a complete island. Third-party signal delivery (like connectivity to Encompass in Atlanta) went over dedicated dark fiber with no shared firewall exposure: a direct line, touching nothing else. As operational needs grew and facility-to-facility connectivity became necessary, proper dual-firewall segmentation was added. Today, anything that crosses the public internet (Zixi feeds and similar) passes through two firewalls. Monumental also hired dedicated cybersecurity staff specifically for broadcast and 2110 security.

Jason Paquin connected the cybersecurity conversation to a historical pattern: when facilities moved from SD to HD, broadcast engineering and IT had to merge for the first time, and the friction was real. He recalled being a young engineer watching the broadcast and IT teams fight across a conference table at WABC in New York during a SAN installation, neither side willing to acknowledge the other’s expertise. The current transition is the same collision, but at a higher level of complexity, with cybersecurity now in the mix.

His framing for the discovery conversation resonated throughout the room: if a client’s plan is to have their general IT team manage the broadcast network switches, someone needs to stop and calculate the cost of being down, and the cost of chasing issues with people who don’t have the right expertise. When that number starts approaching the cost of the proper solution, the conversation changes.

THE WORKFORCE IS CHANGING — AND THAT’S JUST THE TRUTH

Woven through every topic of the evening was a theme that nobody introduced directly, but that kept surfacing: the broadcast engineering workforce is in the middle of a generational shift, and the industry is moving whether people are ready or not.

“Legacy engineers leave, and they’re going to be backfilled by a broadcast IT guy,” Jon said. “A lot of the hires I see now for day-two support, it’s not the broadcast engineer from ABC. It’s a 25-year-old with an IT degree who also streams. That’s the perfect candidate for broadcast IT engineering. They understand enough about video. They understand more about networking. That’s just the blunt truth.”

The job postings already reflect this. Almost universally, broadcast engineering roles now require Cisco, Arista, and Layer 3 networking experience. They don’t ask whether you can troubleshoot an SDI frame.

Jon’s advice to his own teams, going back to when he ran AV integration in Baltimore: go on three interviews a year. Not necessarily to leave, but to read the market. See what skills employers are asking for. The job requirements tell you where the industry is headed more clearly than any conference keynote.

The through line, as Jason Paquin framed it, is that IP migration isn’t just a technology change — it’s an operational, staffing, and cultural change, all at once. Organizations that treat it as a technology procurement project and ignore the people side will find themselves with a world-class system they can’t fully operate or maintain.


THANK YOU TO OUR SPONSORS

This DCMUG event was made possible by the generous support of our sponsors. Here’s a brief introduction to each:

Backlight

Josh Norman (President & CRO) and Alex Burke joined us, representing Backlight, makers of Iconik — one of the leading media asset management platforms in the broadcast industry. Iconik was referenced throughout the evening as a go-to MAM solution for media organizations managing large volumes of content.

EVS

Bevan Gibson (North American Operations) and Will Walz (Northeast) represented EVS. You likely know EVS for sports replay — if you’ve watched any live sport, you’ve seen their technology at work. EVS has a significant installation at Monumental Sports, including the EVS Neuron conversion platform that Jon discussed extensively during the Q&A. They also do control, orchestration infrastructure, and robotics.

LiveU

Mike Mahoney (VP of Growth Markets, US & Canada) and Jared Brody represented LiveU. Best known for broadcast-grade bonded cellular encoding and transmission, LiveU is now pushing into bonded IP over WAN and LEO satellite connectivity, with AI-enhanced workflows in development. If you’re watching a live news report from a field location, there’s a good chance LiveU is how it’s getting back to the studio.

Studio Network Solutions (SNS)

Chance Hayworth (Northeast Territory Manager & DoD Territory Manager) represented SNS, a company specializing in high-performance shared storage and complete workflow solutions. SNS also serves as the OEM manufacturer for Ross Video devices and works closely with the CHESA Federal team on a range of opportunities.

Telestream

Bob Barnshaw and engineer Dave Norman represented Telestream. As Jason Pepino noted to close out the sponsor introductions: “You’d be hard pressed to find a broadcast facility without something Telestream inside.” Their Prism Inspect platform was central to the monitoring discussion all evening. They also offer transcoding, test and measurement tools, and Stanza, their captioning application.

LucidLink

Rich Warren introduced LucidLink — a cloud-based storage collaboration platform that mounts as local, shared storage and is globally accessible. The short version: it puts everyone in the same studio, regardless of where they physically are.

ABOUT DCMUG

The DC Media Users Group holds quarterly events in the DC metro area, bringing together federal and commercial broadcast professionals to share what’s working and what isn’t. The format is deliberately practitioner-focused: not vendor pitches, but real conversations from people in the trenches.

Coming up: DCMUG will have a presence at NAB in Las Vegas, followed by an event alongside the Bits by the Bay Conference, held right on the Chesapeake. If you haven’t been to Bits by the Bay before, it’s worth looking into.

If you work in broadcast, media production, or AV integration in the DC metro area — whether in a federal agency, a commercial facility, or somewhere in between — this is a community worth being part of. The conversations are real, the people have done the work, and you’ll almost certainly walk out with something you can use.

Well, and there’s usually a sports game or concert involved. That doesn’t hurt either.
