Say you’re looking for an H.264 video decoder solution to integrate into your system, and you’re trying to compare what's available. You figure this should be a straightforward process; after all, H.264 is a standards-based codec, so it should be easy to find apples-to-apples performance data.
You confidently search the Internet and are rewarded with a bunch of performance data. But (as is true with any multimedia solution) the format and type of the data are all over the map. For example:
“81% CPU loading on Texas Instruments’ TMS320DM642 @ 600 MHz for full D1 NTSC at 4.0 Mbps.” - Ateme
“The Blackfin ADSP-BF561 processor provides full Digital 1 (D1) capability over limited bandwidth between 1.2 Mbps and 1.5 Mbps. D1 resolution refers to 720 x 576 DVD-quality resolution, which, at 30 fps, requires a 27 MB/s transfer rate based on the H.264 CODEC standard.” - Analog Devices
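(As an aside, that 27 MB/s figure is most plausibly the familiar ITU-R BT.601 serial rate for standard-definition video, which counts blanking intervals as well as the active 720 x 576 pixels. A quick back-of-the-envelope check, under that assumption:

    # Sanity check on the "27 MB/s" claim: it matches the ITU-R BT.601
    # sampling scheme for standard-definition 4:2:2 video, counting the
    # blanking intervals rather than just the active pixels.
    luma_rate_hz = 13.5e6        # BT.601 luma sampling rate
    chroma_rate_hz = 2 * 6.75e6  # two chroma channels, each at half the luma rate
    bits_per_sample = 8

    bytes_per_sec = (luma_rate_hz + chroma_rate_hz) * bits_per_sample / 8
    print(bytes_per_sec / 1e6)   # -> 27.0 MB/s

That's my reading, anyway; the vendor's prose doesn't say.)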
You find yourself looking at performance claims that are based on different frame rates and resolutions, so it’s tough to compare them. (And as I wrote in a previous column, video codec performance doesn’t scale well across different frame rates or resolutions.) Many vendors don’t specify which flavor of H.264 (i.e., profile and level) they’re using. And there’s also no way to tell what input video data was used to generate the performance figures. For all you know, one vendor used a hockey game and the other used three minutes of blue screen. Different input streams can result in very different computational loads.
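You might be tempted to normalize the claims yourself, reducing each one to something like cycles per macroblock. Here's a rough sketch of that arithmetic (the cycles_per_macroblock helper is mine, not any vendor's, and the caveat above still applies: codec workloads don't scale linearly, so treat the result as a ballpark at best):

    # A crude normalization of a vendor CPU-loading claim to cycles per
    # macroblock. Illustrative only: real H.264 workloads do not scale
    # linearly with resolution or frame rate, and depend on the content.
    def cycles_per_macroblock(cpu_load, clock_hz, width, height, fps):
        macroblocks_per_frame = (width // 16) * (height // 16)
        macroblocks_per_sec = macroblocks_per_frame * fps
        return cpu_load * clock_hz / macroblocks_per_sec

    # Ateme's claim: 81% of a 600 MHz DM642 for D1 NTSC (720 x 480, 30 fps)
    print(cycles_per_macroblock(0.81, 600e6, 720, 480, 30))  # -> 12000.0

But without knowing the profile, level, bit rate, and test content behind each number, that 12,000 cycles per macroblock can't honestly be compared against anyone else's figure.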
As you plow through more data you realize that vendors define performance metrics in different ways, and you can’t always tell what definition they’re using. If they use die size, are they including things like L1 memory, or just the processing engine itself? If they use MHz, are they reporting peak MHz, or average?
You start to wonder if you should just pin up a sheet of paper with vendor names and throw darts to choose one.
As the building blocks for DSP systems have become more complex, it’s gotten harder to confidently assess how they compare to each other. What’s needed is a way to get credible apples-to-apples performance data; without that information, selecting a hardware/software solution is a painful process. You might want to keep some darts handy.