Developing a signal processing-based system—like a cell phone or media player—can be a lot of work. Just how much work is a function of many factors, including the complexity of the application, the complexity of the hardware, and, increasingly, the quality of the development tools.
Signal processing applications are getting more complicated, as is the hardware that runs them. Engineers increasingly rely on powerful development tools to help them manage this complexity. These days, "development infrastructure" (including tools, software component libraries, on-chip debug capabilities, and development boards) is a key factor in hardware selection, and a critical consideration when choosing which type of processing engine to use: a general-purpose processor or a DSP, for example, or perhaps an alternative technology such as an FPGA. Good tools can make the difference between a system that gets to market on budget and on time, and one that never makes it out of the lab.
The good news is that chip and tool vendors are paying more attention to development infrastructure, and as a result, the overall quality of tools has improved enormously. For example, compilers are efficient enough that a programmer typically needs to write assembly code for only a small portion of the application. That wasn't true a few years ago. And FPGA vendors have focused substantial effort on developing signal-processing-oriented tools for their chips, making them more attractive for use in these applications.
The bad news is that it can be difficult to assess the quality of the tools (or other infrastructure components) until you’ve done some serious work with them—at which point it’s often too late to make a different choice. Features that look great in demos may not work reliably in practice, or may not solve the problems you thought they would. And without a clear picture of capabilities, it can be difficult to choose an implementation approach. Should you use a programmable processor, or an FPGA? Should you buy software components, or build them yourself? Should you rely on a compiler, or plan on doing lots of assembly coding?
Some tools that sound novel really aren’t. For example, high-level tools used for synthesizing hardware or software implementations of signal processing algorithms have garnered a lot of press lately. Everyone would love to go straight from a block diagram, MATLAB, or simple C code to optimized software, or to a blazingly fast FPGA implementation. But this is not a new idea; commercial products have been pursuing it for 15 years or more. Unfortunately, previous incarnations of such tools have not been particularly successful. Will the new crop of high-level synthesis tools fare better, or is this kind of tool still more of an engineer’s fantasy than a reality?
Even the best tools don’t eliminate the need for a smart developer—they just change the nature of the development challenges. For example, using software building blocks can reduce programming effort, but may increase the required integration effort and the complexity of testing and debugging. Similarly, compilers can reduce the need for some optimization grunt-work, but there still needs to be a smart human around to figure out if a specific algorithm is really the best choice for the target hardware.
In this report, we'll help you understand the tools that are available for today's processors and FPGAs. We'll give you a realistic picture of what these tools can do—and what they can't. And we'll explain how you can use these tools to decrease the time and effort required to develop a signal processing system.