
How Intelligent Computing Powers Our Editorial Architecture

Mar 12 2013, 12:00 AM by Shailendra Mathur


Hi! Just to introduce myself, I am the Chief Architect for Video products at Avid. In this role, I provide architecture and technology oversight to the editing, video server and broadcast graphics products at Avid. In this blog, I'd like to provide a preview of a topic I will be presenting at the NVIDIA GPU Technology Conference (GTC) 2013 at the San Jose Convention Center in California on March 19th. The topic of my presentation is Avid's Stereoscopic Editorial Architecture - Light Fields, Intelligent Computing and Beyond.


The formats required for stereoscopic production and post-production are a good indication of how data rates are increasing in video and film productions, and why scalable and adaptable compute architectures are needed. Beyond the trend of increased resolution, frame rate, and color bit-depth, stereoscopic workflows represent another trend in data rates: the number of views that need to be processed when editing a scene. Multiple possibilities for more creative storytelling open up when we go beyond mono and stereo image capture. To build toward these possibilities, we, the Avid design and engineering team, drew inspiration from an area of research called Light Fields, as well as from our storied past in multi-cam editing, to develop the stereoscopic editing and data management architecture in Avid Media Composer 6.0. These topics will be the subject of another blog to follow.


In this blog, I will give you a brief introduction to another aspect of the GTC talk—a unique heterogeneous compute architecture that we built to scale up to the performance requirements that high-data-rate formats such as Stereoscopic 3D impose, while maintaining a seamless editing experience. We call this the Avid Intelligent Compute Architecture. Some of you may have also heard it referred to by the engineering name of the original project, ACPL (Avid Component Processing Library). Boy, we love our acronyms at Avid!


This compute architecture was initially developed for the 2008 releases of Avid Media Composer v3.0 and DS v10, and has since been leveraged to rapidly add new formats and video processing functionality. It replaced the older FPGA-only Nitris classic acceleration with a player that could load-balance processing across FPGA, multi-threaded CPU, and GPU based processing. Rather than targeting just the GPU, just the CPU, or just the FPGA-based cards, the philosophy changed to using them all in a holistic fashion: instead of a single PCIe card or a break-out box with FPGA compute acceleration, the whole system is turned into an accelerator. This required us to build a player as well as a scalable hardware abstraction framework that allows new compute hardware, and the corresponding processors running on it, to be plugged into existing applications without having to change the application code to accommodate them. The intelligent media player in the application acts as an orchestra conductor, keeping as many of the resources playing as needed to provide the required performance. Keeping a holistic view of the whole system in mind, particular attention is paid to the cost of transferring heavy video data across the system bus when deciding which compute hardware should handle a particular process.
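To make the two ideas in that paragraph concrete—a hardware abstraction that lets new devices plug in without application changes, and a player that weighs bus-transfer cost when picking a device—here is a minimal sketch in Python. All class names, device labels, and cost numbers are my own illustration, not Avid's actual API or heuristics.

```python
# Illustrative sketch: processors register per-device implementations, and
# the player picks a device by weighing compute time against the cost of
# moving frames across the system bus.

from dataclasses import dataclass

@dataclass
class Implementation:
    device: str          # "CPU", "GPU", or "FPGA"
    compute_ms: float    # estimated processing time per frame
    transfer_ms: float   # cost of moving the frame to/from this device

class ProcessorRegistry:
    """New compute hardware plugs in by registering implementations;
    the application code calling best_device() never changes."""
    def __init__(self):
        self._impls = {}

    def register(self, op, impl):
        self._impls.setdefault(op, []).append(impl)

    def best_device(self, op, frame_on_device="CPU"):
        # Total cost = compute time + bus-transfer time
        # (transfer is free if the data already lives on that device).
        def cost(impl):
            transfer = 0.0 if impl.device == frame_on_device else impl.transfer_ms
            return impl.compute_ms + transfer
        return min(self._impls[op], key=cost).device

registry = ProcessorRegistry()
registry.register("color_balance", Implementation("CPU", compute_ms=12.0, transfer_ms=0.0))
registry.register("color_balance", Implementation("GPU", compute_ms=2.0, transfer_ms=4.0))

# The GPU wins despite the transfer cost: 2 + 4 < 12.
print(registry.best_device("color_balance"))  # -> GPU
```

Note that with a slower bus or a lighter effect, the same logic would keep the work on the CPU—which is exactly the holistic trade-off described above.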


More details on this architecture, including the plug-and-play aspect and benchmarks, will be presented at the GTC conference. However, I will leave you with an example of how this architecture is applied to achieve playback performance in a high-data-rate and computationally complex stereoscopic editing scenario.


In this scenario, the goal is to accelerate a 16-bit, high-quality render to DNxHD 220x 10-bit full-frame stereo of a sequence that contains two edited tracks of full-frame 1080i 50 AVC-I 100 10-bit stereoscopic media. The render requires color balancing both eyes, applying positional and rotational alignment, and then combining the two stereoscopic tracks with simultaneous depth and dissolve transitions. Well, that's a mouthful!
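To get a feel for why this scenario is "high data rate", here is a back-of-the-envelope calculation for the uncompressed intermediate frames the player must shuttle around: four 1080i 50 streams (two stereo tracks, two eyes each) at 10-bit depth. The 4:2:2 chroma sampling assumption is mine for illustration; the actual internal pixel formats may differ.

```python
# Rough uncompressed data rate for the four HD streams in this scenario.
width, height = 1920, 1080
fps = 25                 # 1080i 50 = 50 fields/s = 25 full frames/s
bits_per_pixel = 20      # 10-bit 4:2:2: 10 bits luma + 10 bits chroma (avg)
streams = 4              # 2 stereo tracks x 2 eyes

bits_per_second = width * height * bits_per_pixel * fps * streams
print(f"{bits_per_second / 1e9:.2f} Gb/s")  # -> 4.15 Gb/s
```

Over 4 gigabits per second of pixel data in flight, before counting the 16-bit working precision of the effects, is why the transfer-cost bookkeeping described earlier matters.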


The Avid Intelligent Compute Architecture will evaluate the platform configuration, OS, hardware, and GPU capabilities in the system. Based on which processors are available for the various compute hardware, it will execute them in an optimally pipelined and parallel manner, dynamically distributing the processing for different segments of the timeline to the device best suited to each task.


Let us consider a heterogeneous compute platform composed of multi-core CPUs, an Avid-qualified NVIDIA GPU (graphics processing unit, or graphics card), and the codec-accelerating FPGA on the Avid Nitris DX card. In this case, the optimal plan picked by the player places the AVC-I decode on the multi-core CPUs. Since AVC-I 100 decodes are computationally expensive, the multi-threaded decoder will likely saturate most of the CPU cores. Some of the remaining CPU capacity is used up by placing a few of the stereo effects there as well. With the CPU cores now busy, the player can choose the GPU versions over the CPU versions of the other pipelined effects processors. It will invoke efficient transfers of the 16-bit data between CPU RAM and GPU memory, performing the processing on the GPU with GLSL shaders at full floating-point accuracy. With both the CPU and GPU now busy with the heavy decoding and effects across a total of four HD streams, the player will then choose between the CPU- and FPGA-based DNxHD 220x encoders (no GPU version of the DNxHD encoder is available). With the CPU already burdened, it picks the FPGA-based hardware version.
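The decision sequence just described can be reduced to a toy load-balancing loop: assign each pipeline stage to the least-loaded device that is capable of running it. Stage names, device capabilities, and load numbers below are illustrative placeholders; the real player's heuristics are far richer (transfer costs, pipelining, per-segment re-evaluation).

```python
# Toy version of the assignment walkthrough above.
capable = {
    "avc_i_decode": ["CPU"],          # multi-threaded software decoder only
    "stereo_fx":    ["CPU", "GPU"],   # GPU path uses GLSL at float accuracy
    "dnxhd_encode": ["CPU", "FPGA"],  # no GPU encoder exists
}

load = {"CPU": 0.0, "GPU": 0.0, "FPGA": 0.0}
cost = {"avc_i_decode": 0.8, "stereo_fx": 0.5, "dnxhd_encode": 0.6}

assignment = {}
for stage in ["avc_i_decode", "stereo_fx", "dnxhd_encode"]:
    device = min(capable[stage], key=lambda d: load[d])  # least-loaded capable device
    assignment[stage] = device
    load[device] += cost[stage]

print(assignment)
# The decode saturates the CPU, so the effects land on the GPU
# and the encode falls through to the FPGA.
```

The greedy loop reproduces the outcome in the text—decode on CPU, effects on GPU, encode on FPGA—purely from capability and load, which is the essence of the "conductor" role.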


Hopefully you see the analogy of the player to a conductor—it'll have the CPUs, GPUs and FPGA hardware playing harmoniously in no time!


That's it for now. Stay tuned for the next blog entry on the light-field inspired editing and data management data model for Stereo 3D. We'd love to see you at the GPU Technology Conference if you'd like to discuss this topic a bit more.


Thank you,


Shailendra Mathur


About Shailendra Mathur

Shailendra Mathur is the Chief Architect for video products at Avid, with technology oversight over the editing, video server and broadcast graphics products. His responsibilities involve working with customers and technology partners externally, and with product management and engineering internally, to translate customer and business requirements into architectural and design strategies. With over 18 years of experience in the media industry and a research background in computer vision and medical imaging, Shailendra has contributed to a wide gamut of technical products and solutions in the media space. Beyond his responsibilities in product development, his research and engineering interests have led to multiple publications and patents in the areas of computer vision, medical imaging, visual effects, graphics, animation, media players and high-performance compute architectures. Over the past few years, understanding the art and science of stereoscopy, color, high frame rates, and high resolutions, and applying them to storytelling tools, has been a passion. Other areas of interest are file-based workflows, asset management, and the trends around the merging of IT and broadcast technologies.

© Copyright 2011 Avid Technology, Inc.