Jeff Bier’s Impulse Response—Embedded Processor Wars

Submitted by Jeff Bier on Wed, 12/19/2007 - 19:00

For a while there, it seemed as though DSP processors and general-purpose processors (GPPs) were morphing into one another. In an effort to provide better DSP performance, GPPs were incorporating increasingly powerful DSP-oriented features. Meanwhile, as digital signal processing applications got more complex, DSP processors were becoming more CPU-like to enable efficient compilers and support more elaborate operating systems. It was getting hard to tell the DSPs and GPPs apart.

From a systems standpoint, this had some positive implications. For example, as the two classes of processors and their software development environments became more similar, DSP software developers and non-DSP software developers were able to move more fluidly between the two worlds and understand each other’s concerns more clearly. 

But with the increasing focus on multi-core chips, that trend may be reversed. When I discuss multi-core processors with embedded software developers who are not DSP specialists, they inevitably talk about homogeneous, symmetric multi-processor (SMP) chips that integrate a handful of CPU cores. In contrast, when I discuss multi-core processors with digital signal processing software developers, I find they’re often interested in “massively parallel” multi-core architectures (i.e., architectures with dozens—or even hundreds—of cores), many of which use heterogeneous assortments of cores.

While both classes of chip are called “multi-core,” they are radically different from one another. They make different architectural trade-offs and support different programming models. So, after moving towards convergence for several years, why are DSP-oriented and non-DSP-oriented processors now heading in such different directions?

Fundamentally, signal processing applications have always had different demands and constraints from most other kinds of embedded software. For one thing, DSP applications are often well-characterized ahead of time; you know in advance, for example, that you’re going to be doing H.264 encoding or WiMax baseband processing, and that no one is going to suddenly load some other random piece of software onto your machine. And DSP applications are typically very computationally demanding. Because of these characteristics, it’s often more practical and more attractive to use massively parallel architectures in DSP applications than in other applications. The more predictable and stable the workload, the easier it is to partition it among many processing elements. And the more computationally demanding the workload, the more incentive there is for doing so.
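To make that concrete, here is a rough C sketch of what design-time partitioning can look like. It is my illustration, not anything from a particular vendor: run_on_core() is a hypothetical stand-in for whatever core-dispatch mechanism a real massively parallel chip provides, and the block FIR filter is just a representative predictable kernel.

/* Hypothetical sketch: static partitioning of a predictable DSP
 * workload (a block FIR filter) across a fixed set of cores.
 * Because the workload is fully known at design time, the split
 * is computed once; no runtime load balancing is needed. */

#include <stddef.h>

#define NUM_CORES   64        /* known at design time              */
#define BLOCK_SIZE  4096      /* samples per processing frame      */
#define TAPS        32

static float coeffs[TAPS];    /* fixed filter coefficients         */

/* The kernel every core runs on its own slice of the input.
 * "in" must hold BLOCK_SIZE + TAPS - 1 samples of history.        */
static void fir_slice(const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        float acc = 0.0f;
        for (size_t t = 0; t < TAPS; t++)
            acc += coeffs[t] * in[i + t];
        out[i] = acc;
    }
}

void process_frame(const float *in, float *out)
{
    const size_t chunk = BLOCK_SIZE / NUM_CORES;
    for (int core = 0; core < NUM_CORES; core++) {
        size_t off = (size_t)core * chunk;
        /* On real hardware, a vendor-specific run_on_core(core, ...)
         * call would launch fir_slice on that core; here we simply
         * call it in a loop to show the partitioning itself.       */
        fir_slice(in + off, out + off, chunk);
    }
}

Because every core gets an identical, predetermined slice of work, the scheduling problem disappears at run time; that is precisely the property an unpredictable workload lacks.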

GPP applications, on the other hand, are more diverse and less predictable—and when the workload is less predictable, it's harder to farm it out to hundreds of processing elements. Furthermore, GPP software developers tend to place a premium on software compatibility, which enables them to reuse legacy software more easily. It is easier to accommodate this requirement in the multi-core SMP paradigm, where processor designers can use an existing instruction set architecture and simply replicate a few cores on a chip.
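The SMP programming model looks very different from the static partitioning above. A minimal POSIX-threads sketch (again, my illustration, assuming an ordinary shared work queue) shows how identical cores running one binary let the OS scheduler decide, at run time, where each task executes:

/* Hypothetical sketch of the SMP model: a handful of identical
 * cores, one instruction set, and the OS scheduler placing
 * threads at run time. Dynamic scheduling suits a workload
 * whose shape isn't known in advance.                          */

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4           /* "a handful" of identical cores */
#define NUM_TASKS   100

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;       /* shared work-queue cursor       */

static void do_task(int id) { (void)id; /* stand-in for real work */ }

/* Each thread pulls the next unclaimed task from the queue.      */
static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int task = (next_task < NUM_TASKS) ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (task < 0)
            break;
        do_task(task);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_THREADS];
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tid[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tid[i], NULL);
    printf("ran %d tasks on %d threads\n", NUM_TASKS, NUM_THREADS);
    return 0;
}

Note that nothing in this code cares which core a thread lands on, and the same binary keeps working if the chip gains or loses cores; that is exactly the compatibility property GPP developers prize.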

These same factors helped spur the original divergence between DSPs and GPPs in the early 1980s. It looks like history may be repeating itself, with the two classes of processors once again evolving in different directions.
