Andrew Starks

IPMX Unpacked: The Key Documents Shaping the Future of AV-over-IP

Momentum is building in the journey toward finalizing IPMX, with several core documents now ratified and final testing scheduled for later this year. This development marks an exciting phase for those of us in the AV-over-IP space, signaling a significant leap towards a comprehensive, interoperable standard that meets and exceeds the demands of professional AV.

IPMX introduces unprecedented capabilities, building on its foundations in the SMPTE ST 2110 and AES67 standards. With its development rooted in the Video Services Forum (VSF) TR-10 recommendations and the Advanced Media Workflow Association (AMWA) NMOS specifications, IPMX is poised to redefine how we think about and implement AV-over-IP solutions.

This article takes a closer look at the TR-10 and NMOS specifications and how they extend these foundational standards into Pro AV. Our focus is not just on the specifications but on their practical application within the IPMX framework. By examining real-world scenarios and the challenges faced by system designers and technicians, we aim to show how IPMX’s components will simplify and enhance AV-over-IP systems. Our journey through the IPMX ecosystem will highlight the nuances and innovations that set this open approach apart, offering a glimpse into the future of professional AV environments.

Imagining Your First IPMX Install

Fast forward to the near future where you find yourself in the role of both system designer and field technician at a professional AV integration company. With IPMX now fully developed, you're ready to see firsthand how these standards can be applied in real-world settings, addressing both the opportunities and challenges they bring.

For your first project with IPMX, you are upgrading and expanding an existing AV system, integrating legacy baseband equipment with new IPMX endpoints. The project includes huddle rooms, digital signage, and an auditorium equipped for live production, which makes the fun even more intense. Along the way, you encounter a few familiar yet frustrating challenges.

At the outset, the digital signage system—partly upgraded—retains several older monitors. Encased in custom cabinets, these monitors are not slated for replacement, leading to an EDID-related headache. When multicasting to the new and older monitors in the system, everything connects wonderfully, thanks to NMOS IS-11’s connection negotiation capabilities, except that the displays are all reporting 4K30. Although this resolution might be workable, it falls short of the project's 4K60 specification. The older monitors' capabilities remain unclear, compounded by the absence of available datasheets. 

Recalling a segment from an IPMX and NMOS educational video, you consult the NMOS controller. It provides a comprehensive list of modes supported by the gateway and monitor and includes an EDID endpoint that confirms the older monitors only support 4K30.
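
Out of curiosity, you pull the EDID yourself to see what the monitor is really advertising. A few lines of Python against the gateway’s IS-11 API are enough to fetch it and read back the preferred timing; the host, output ID, and exact route below are illustrative stand-ins rather than something from a specific product, so treat this as a sketch of the idea rather than a copy-and-paste recipe.

    import requests

    # Hypothetical gateway and output ID; the route follows the IS-11 Stream
    # Compatibility Management API pattern but should be checked against the spec.
    BASE = "http://gateway-1.local/x-nmos/streamcompatibility/v1.0"
    OUTPUT_ID = "replace-with-output-uuid"

    resp = requests.get(f"{BASE}/outputs/{OUTPUT_ID}/edid", timeout=5)
    resp.raise_for_status()
    edid = resp.content  # raw EDID block(s) as binary

    # The first detailed timing descriptor starts at byte 54 of the base EDID block.
    dtd = edid[54:72]
    pixel_clock_mhz = int.from_bytes(dtd[0:2], "little") / 100  # stored in 10 kHz units
    h_active = dtd[2] | ((dtd[4] & 0xF0) << 4)
    v_active = dtd[5] | ((dtd[7] & 0xF0) << 4)
    print(f"Preferred mode: {h_active}x{v_active}, {pixel_clock_mhz:.2f} MHz pixel clock")

A 3840x2160 mode at roughly 297 MHz points to 4K30; the standard 4K60 timing needs a pixel clock around 594 MHz.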

Looking for a potential solution, you consider adding a gateway from an unfamiliar manufacturer that is capable of frame rate conversion. Traditionally, integrating a new, untested device could raise system compatibility concerns, as controlling specific device features beyond discovery and connection often requires a proprietary protocol not supported by your controller. However, NMOS IS-12 changes this, letting you adjust settings for scaling and frame rate conversion directly through your controller. Because you can pick a gateway from an unfamiliar manufacturer and still control it with a standard protocol, you address the compatibility issue seamlessly, sidestepping the need to replace the custom-encased monitors or devise makeshift workarounds for configuring the new gateways.
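
What does “adjust settings directly through your controller” look like in practice? IS-12 rides over a WebSocket advertised by the device and carries commands against the NMOS Control Framework object model (MS-05-02). The sketch below, written with the websocket-client package, shows the general shape of a property-set command; the URL, object ID, property ID, and value are hypothetical, and the exact message schema should be taken from IS-12 and MS-05-02 rather than from this simplified example.

    import json
    from websocket import create_connection

    # Hypothetical control endpoint; in practice it is discovered via the
    # device's IS-04 "controls" entry for the IS-12 control protocol.
    CONTROL_WS = "ws://gateway-2.local/x-nmos/ncp/v1.0"

    command = {
        "messageType": 0,          # command message (illustrative value)
        "commands": [{
            "handle": 1,           # lets you match the response to this request
            "oid": 42,             # object ID of the scaler/frame-rate block (device-specific)
            "methodId": {"level": 1, "index": 2},  # the generic "Set" method in MS-05-02
            "arguments": {
                "id": {"level": 3, "index": 1},    # hypothetical property: output frame rate
                "value": "60000/1001",
            },
        }],
    }

    ws = create_connection(CONTROL_WS, timeout=5)
    ws.send(json.dumps(command))
    print(json.loads(ws.recv()))   # the response echoes the handle and carries a status
    ws.close()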

Feeling confident, you now find yourself upgrading the break room with a new panel that is to display a mix of digital signage and over-the-air content. The customer wants company announcements to be shown in an “L-bar” around the side and bottom of the display, with the rest of the screen showing content that is often protected with HDCP. In this situation, the mere mention of HDCP sends you reaching for that trusty, somewhat “rebellious” HDMI device that lets you “take care of” the whole HDCP “issue”. But this time, in a burst of honesty fueled by IPMX’s potential, you opt for an IPMX solution capable of handling content mixing while adhering to HDCP through HKEP, IPMX’s DCP-approved protocol for HDCP-compliant key exchange, defined in TR-10-5.

But no good deed goes unpunished, right? Now the audio and video are way out of sync, so you’re back in detective mode. Maybe something is wrong with HDCP…again? Maybe the content mixer is messing up the timing? At this point, you’re really grateful that IPMX is an open standard, because a quick search of the interwebs reveals a couple of Wireshark dissectors, one for HKEP and another for IPMX RTCP sender reports. Inspecting those packets reveals nothing out of the ordinary, but at least you can now rule both of them out.
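
The sender reports themselves are ordinary RFC 3550 RTCP packets, so even without a dissector you can sanity-check one exported from a capture. A minimal parser for the fixed sender-info block looks like the sketch below; it ignores any IPMX-specific extensions and only pulls out the NTP-to-RTP timestamp pairing and the send counters.

    import struct

    def parse_rtcp_sr(packet: bytes) -> dict:
        """Parse the fixed sender-info part of an RTCP Sender Report (PT=200)."""
        version_flags, pt, length = struct.unpack("!BBH", packet[0:4])
        if pt != 200:
            raise ValueError(f"not a sender report (PT={pt})")
        ssrc, ntp_msw, ntp_lsw, rtp_ts, pkts, octets = struct.unpack("!6I", packet[4:28])
        return {
            "ssrc": ssrc,
            "ntp_seconds": ntp_msw + ntp_lsw / 2**32,  # seconds since 1900-01-01
            "rtp_timestamp": rtp_ts,                   # in the flow's media clock units
            "packets_sent": pkts,
            "octets_sent": octets,
        }

Comparing that NTP-to-RTP mapping across the video and audio flows is a quick way to confirm whether the sender believes the two are aligned.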

Then you notice that your IPMX sender device supports TR-10-10 (draft available soon), which carries HDMI info frames over IP. A quick inspection shows that the HDMI switch in the break room is sending out audio info frames with suspiciously large values in the LATENCY field, so something is clearly amiss with that switch. Armed with this information, your first move is to use the IPMX receiver’s Link Offset Delay to compensate for the inaccurate values the switch is feeding the IPMX sender. Since IPMX uses essence flows, where video and audio travel as their own separate RTP multicast flows, you can delay the video just enough to bring it back in line with the audio. While this is a bit of an abuse of the Link Offset Delay property, which is meant to compensate for processing and network path differences within a system, it works well for your purposes, at least until you can get them a new HDMI switch.
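
The compensation itself is just arithmetic. Suppose, purely as an illustration, you measure the audio trailing the video by about 180 milliseconds: delaying the video flow by the same amount at the receiver brings the two back together, and because IPMX and ST 2110 video use a 90 kHz RTP clock, the same figure can be expressed in clock ticks if a device asks for it in those units.

    # Hypothetical measured skew: audio trails video by 180 ms.
    measured_av_skew_ms = 180
    video_rtp_clock_hz = 90_000          # RTP clock rate for ST 2110 / IPMX video

    extra_video_delay_ms = measured_av_skew_ms
    extra_video_delay_ticks = extra_video_delay_ms * video_rtp_clock_hz // 1000

    print(f"Increase the video receiver's link offset by {extra_video_delay_ms} ms "
          f"({extra_video_delay_ticks} ticks of the 90 kHz clock)")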

As you progress through the project, an unexpected challenge arises. One of the huddle rooms is designated as a “secure” space, a detail omitted from the initial specifications. The facilities manager informs you that all content traffic within this room must be encrypted. This introduces you to another aspect of IPMX that you hadn’t yet explored: the Privacy Encryption Protocol (PEP), outlined in TR-10-13. PEP, which leverages the HDCP-compliant HKEP protocol, ensures multi-vendor, interoperable encryption for video, audio, and, crucially, USB traffic, the last of these detailed in TR-10-14 (draft available soon!). This broad encryption capability underscores IPMX’s versatility in securing diverse types of AV content.

Your final stop is the auditorium, where for the first time you need to work with PTP. That’s because you need to minimize latency for Image Magnification (IMAG), and because some of the equipment in the auditorium uses AES67, which doesn’t support asynchronous sources and requires PTP. Thanks to the specifications laid down in TR-10-1, the PTP requirements are far less daunting than what you’ve heard about in straight SMPTE ST 2110 networks.

Also important for the auditorium portion of your project is the fact that IPMX devices can receive both ST 2110 and AES67 content, so long as that content’s codec, or lack of codec, is supported by the receiving IPMX device. In this case, it means that connecting the huddle room to the live production system in the auditorium involves no drama, perfectly accommodating remote viewers.
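
Checking that compatibility ahead of time is straightforward, because every ST 2110 and AES67 sender describes its stream in an SDP file, and IS-05 exposes that file at the sender’s transportfile endpoint. The sketch below fetches the SDP and compares the advertised encoding against a hypothetical list of encodings the IPMX receiver supports; the sender address and capability list are placeholders.

    import re
    import requests

    # Hypothetical sender; IS-05 serves the SDP at .../single/senders/{id}/transportfile.
    SENDER_TRANSPORTFILE = ("http://audio-mixer.local/x-nmos/connection/v1.1/"
                            "single/senders/replace-with-sender-uuid/transportfile")
    RECEIVER_SUPPORTS = {"L24/48000", "L16/48000", "raw/90000"}  # hypothetical capabilities

    sdp = requests.get(SENDER_TRANSPORTFILE, timeout=5).text
    match = re.search(r"^a=rtpmap:\d+ (\S+)", sdp, re.MULTILINE)
    encoding = match.group(1) if match else None
    # AES67 audio typically appears as e.g. "L24/48000/2"; drop the channel count.
    base_encoding = "/".join(encoding.split("/")[:2]) if encoding else None

    verdict = "compatible" if base_encoding in RECEIVER_SUPPORTS else "not supported"
    print(f"Sender offers {encoding}: {verdict} by this receiver")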

On the flip side, because IPMX is built on top of AES67 and ST 2110, finding devices that support all three is easy. That makes incorporating asynchronous AV sources into the auditorium’s synchronous live production system much simpler than in the olden days, when a thousand flavors of AV over IP were anything but compatible with PTP-synchronized live production, and when managing multiple control planes kept system designers and their users up at night.

Soon everything is complete, and you take a moment to appreciate the difference IPMX made in this project. Instead of trying to get around EDID and HDCP, you simply used the technologies as they were meant to be used, this time in a multicast IP network. You were able to reason about timing as needed, and you even set up a system with a mix of synchronous and asynchronous sources. Even troubleshooting was straightforward, because the standards and specifications are open, which means the tools that support IPMX are often open as well. In fact, the whole experience was a lot less dramatic than you’re accustomed to with AV over IP. That gives you and your customer a good feeling. Maybe AV over IP is just the way we do video now, and not the endeavor filled with frustrating compromises and complicated workarounds that it so often was before IPMX.

While much of this IPMX narrative reflects current capabilities, the full spectrum of benefits lies just on the horizon. We invite you to join us on this journey as IPMX comes to fruition, offering a complete, fully featured, and open interoperability ecosystem for AV over IP. Together, let’s usher in a new era of simplicity and efficiency in professional AV environments.
