When people talk about processor benchmarks, the conversation usually ends up being all about speed. Other metrics, such as energy efficiency, are often given less attention or are completely overlooked. The unspoken assumption driving these conversations is that a faster processor is better.
For most embedded signal processing applications, this assumption doesn't hold. These applications usually have fixed processing requirements, and the goal is typically to meet those requirements while minimizing cost and energy consumption. A processor certainly needs to be fast enough to do the job, but being significantly faster than required doesn't necessarily add value.
In fact, it's often desirable to use the slowest processor that will get the job done. All other things being equal, slower processors tend to be less expensive and less energy-hungry than faster ones. And there can be other benefits to using slower chips. For example, slower, less expensive chips often come in smaller packages than high-performance chips, saving board space.
Of course, some signal processing applications demand enormous computational throughput. For such applications, it might seem obvious that a faster processor is better, but this is not always the case. Suppose you are building a communications system with hundreds of channels. Rather than trying to cram all of these channels into one super-speedy processor, it might be better to spread the workload across several slower processors.
For one thing, a multi-processor approach might be less expensive than a single-processor approach. Processor vendors usually charge a premium for their fastest parts, so slower parts often provide more bang for the buck than their speedy counterparts. In addition, sending all the channels through a single processor means pumping a lot of data through that one chip, which requires high-bandwidth I/O interfaces and high-speed memory, and those high-bandwidth components are likely to be expensive. By spreading the load across multiple processors, you can use slower, less expensive I/O and memory systems.
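To make this concrete, here's a rough back-of-envelope sketch, in Python, of the per-processor I/O bandwidth in a multi-channel system. The channel count and per-channel data rate are hypothetical numbers chosen purely for illustration; the point is that dividing the channels across N processors also divides the bandwidth each one must sustain.

```python
# Back-of-envelope I/O bandwidth estimate for a multi-channel system.
# All figures are hypothetical, for illustration only.

CHANNELS = 256                         # total channels in the system
BYTES_PER_SEC_PER_CHANNEL = 2_000_000  # e.g., 1 Msample/s of 16-bit samples

def per_processor_bandwidth(num_processors: int) -> float:
    """Input data rate each processor must sustain (bytes/s),
    assuming channels are divided evenly across processors."""
    return (CHANNELS / num_processors) * BYTES_PER_SEC_PER_CHANNEL

for n in (1, 4, 16):
    print(f"{n:2d} processor(s): {per_processor_bandwidth(n) / 1e6:6.1f} MB/s each")
```

With these made-up numbers, a single processor must move the full 512 MB/s, while sixteen processors need only 32 MB/s apiece, a rate that much cheaper I/O interfaces and memory can handle.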
This is not to say that there is no advantage to having a faster processor. It is usually important to minimize the time, effort, and risk associated with the development process, and extra speed helps on all three counts: programmers can spend less time and effort squeezing inefficiencies out of the software, so the product can get to market faster.
And it's often a good idea to have a little extra speed so you can deal with surprises in the design process. For example, performance headroom is handy if features are added late in the process, or if it turns out that the application requires more throughput than you expected. Having some processing power left over also makes it easier to reuse the design for derivative products. But this extra speed usually isn't free. You'll probably have to sacrifice cost, energy efficiency, or other important factors to get the extra horsepower.
In short, "Which processor is fastest?" is usually the wrong question to ask for a signal-processing application. Instead, the question should be "Of the processors that are fast enough to get the job done, which will best meet my design goals of low cost, low power, short development time, and low risk?" More often than not, the answer will be the slowest, not the fastest, option.
This column originally appeared in the August 2005 edition of InsideDSP.