When my old PDA croaked recently, I wasn't too upset. I'd had it for years and was looking forward to upgrading. I bought a new HP iPAQ and was immediately impressed with its speed and features—especially the bright color display. But while the performance and capabilities of this little machine are a huge step forward from the previous-generation technology, battery life has taken a big step backward. Whereas my old Palm V would run for two weeks between charges, the new unit needs to be plugged in every three days or so. This turns out to be a meaningful difference: now when I travel, I have to lug along the battery charger for my PDA.
I think I'm representative of many consumers. Yes, I want faster, more capable portable electronic products, but I also want convenient charging intervals. This leads to a difficult optimization problem for designers of low-power consumer devices: they need to increase processing speed, display quality, and storage capacity while maintaining—or even increasing—battery life. Similar challenges face the designers of many other digital-signal-processing-based systems, such as implanted medical devices.
Designing systems for energy efficiency is difficult because energy consumption is affected by many factors. The choice of processor and the overall system architecture are both crucial. The design and implementation of application software and operating systems also play key roles. A further complication is that many systems require not only good active-mode energy efficiency but also low standby-mode power consumption—and meeting these two goals requires two different kinds of design techniques.
The more information the system designer has about how the end product will be used, the easier it is to optimize for energy efficiency. It is helpful if the designer knows, for example, what software will be run on the product, as is the case in “closed” systems like hearing aids. Devices like PDAs are harder to optimize, because the designer doesn’t necessarily know what software will be loaded on the device. Thus, PDAs must provide enough processing speed to handle the most demanding tasks that may be run on them—and this headroom may come at the cost of energy efficiency.
Early in the design process, the designer must determine just how much processing speed will be needed—and at what level of energy efficiency—because these are key considerations in the choice of a processor. Unfortunately, choosing the best processor is not as easy as comparing numbers from vendors’ data sheets. It requires a careful analysis of the processors’ speed and energy efficiency (not just power consumption) in the context of the application workload. This task is made more challenging because processor vendors often don’t provide adequate information on power consumption.
Historically, most DSP processors (and many general-purpose processors) have been designed with speed as the primary goal and energy efficiency as a secondary goal. While this is still largely the case, some of today's processors incorporate sophisticated, specialized features to conserve energy, such as speed/voltage scaling. This is good news for low-power system designers, but makes it even more difficult to evaluate which processor is best for a given application.
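To see why speed/voltage scaling pays off, it helps to look at a first-order model of dynamic power: switching power is roughly proportional to effective capacitance times the square of the supply voltage times the clock frequency, so the energy needed to finish a fixed number of cycles depends on the voltage squared but not on the clock rate. The short C sketch below works through that arithmetic; the capacitance, voltage, and frequency values are purely illustrative and are not drawn from any particular processor's data sheet.

```c
/* First-order illustration of speed/voltage scaling.
 * Dynamic (switching) power ~ C_eff * V^2 * f; energy per task = power * (cycles / f).
 * All numbers below are illustrative, not from any real processor datasheet. */
#include <stdio.h>

static double energy_per_task(double c_eff, double volts, double freq_hz, double cycles)
{
    double power_w = c_eff * volts * volts * freq_hz;  /* switching power, watts */
    double time_s  = cycles / freq_hz;                 /* time to finish the task */
    return power_w * time_s;                           /* joules per task */
}

int main(void)
{
    const double c_eff  = 1e-9;  /* effective switched capacitance (F), illustrative */
    const double cycles = 1e8;   /* cycles required by the task, illustrative */

    /* Full speed: 400 MHz at 1.5 V. */
    double e_fast = energy_per_task(c_eff, 1.5, 400e6, cycles);

    /* Scaled back: 200 MHz at 1.0 V (slower, but each cycle costs less energy). */
    double e_slow = energy_per_task(c_eff, 1.0, 200e6, cycles);

    printf("Full speed : %.1f mJ per task\n", e_fast * 1e3);
    printf("Scaled     : %.1f mJ per task (about %.0f%% savings)\n",
           e_slow * 1e3, 100.0 * (1.0 - e_slow / e_fast));
    return 0;
}
```

Note that the frequency cancels out of the energy-per-task figure: slowing the clock by itself does not save energy on a fixed workload. The savings come from the lower supply voltage that the slower clock permits, which is the whole point of coupling speed and voltage scaling.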
In this edition of "Inside DSP" we investigate the challenges of energy-efficient design for signal-processing applications and highlight energy optimization techniques used at various levels of system design. We evaluate energy-efficient processor offerings and explain how to assess processor energy consumption. And we take a peek at some interesting established and emerging energy-constrained applications. So while your cell phone, PDA, and Game Boy are recharging, spend some time with us as we explore the key issues in energy-efficient signal processing system design.
Jennifer Eyre of BDTI contributed to this column.