When is the right time to adopt a new way of doing things? It’s a no-brainer that system designers have to select a new tool or component when the one they’ve been using becomes obsolete. But should a company adopt a new design methodology when the one it’s using still works? After all, “if it ain’t broke, don’t fix it”—right? Well, maybe.
Established signal processing system design techniques are bending under the pressure of increasing integration, greater application complexity,
Read more...
A few years back I flew to Boston for a conference. Since I have a well-founded fear of driving in Boston, I rented a car with GPS navigation. I drove out of the airport and checked the GPS system, which was functioning perfectly. A short time later, I headed into a tunnel. Suddenly, there were exits coming up fast (inside the tunnel!), and I wasn’t sure which one to take. I looked to my navigation system for guidance, but it was completely clueless. Having lost the GPS signal when I entered
Read more...
On October 14, 2008, Texas Instruments introduced a high-performance multi-core DSP, the TMS320C6474, which is intended for use in computationally demanding applications such as communications infrastructure, video surveillance, and medical imaging. The chip features three 1 GHz ‘C64x+ cores, each with its own L1 data and program cache, along with 3 MBytes of aggregate (not shared) L2 cache. As shown in Figure 1, the chip also contains a Viterbi accelerator and turbo-decoding accelerator along
Read more...
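As an aside for programmers: since the three cores have private L2 rather than a shared cache, the most natural way to use a device like this is often coarse data partitioning, with each core processing its own slice of a buffer. Below is a minimal C sketch of the idea; get_core_id() and NUM_CORES are hypothetical stand-ins of ours, not TI API calls.

```c
/* Minimal sketch of data-parallel partitioning across three DSP cores.
 * get_core_id() is a hypothetical stand-in for however the target SDK
 * exposes the core number; it is not an actual TI API. */
#include <stddef.h>

#define NUM_CORES 3

/* For single-core testing, pretend we are core 0. A real build would
 * read the device's core-number register instead. */
static unsigned get_core_id(void) { return 0; }

/* Each core processes its own contiguous slice of the input buffer;
 * the last core also picks up the remainder. */
void process_samples(const short *in, short *out, size_t n)
{
    unsigned id  = get_core_id();
    size_t chunk = n / NUM_CORES;
    size_t start = id * chunk;
    size_t end   = (id == NUM_CORES - 1) ? n : start + chunk;

    for (size_t i = start; i < end; i++)
        out[i] = in[i] >> 1;  /* placeholder for real per-sample work */
}
```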
In an ideal world, chip designers would evaluate their new designs on real applications. But who’s got the time to implement an entire cellular baseband or video codec just to see if their proposed design is efficient? That’s the reason chip designers use benchmarks. But benchmarking is not just about selecting the right algorithms. It’s also about careful implementation—careful crafting of software that is appropriately optimized for the target architecture. As a result, sound benchmarking
Read more...
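To make “careful implementation” concrete, here is the same dot product written two ways: a naive version with a single accumulator, and a version restructured so that a VLIW compiler can keep a dual-MAC datapath busy. This is our own generic illustration, not code from any actual benchmark suite, but it captures the kind of difference that separates sound benchmarking from careless benchmarking.

```c
/* The same dot product, naive and tuned. Function names are ours; the
 * optimizations shown are generic VLIW hints, not vendor benchmark code. */

/* Naive: the single accumulator creates a loop-carried dependence, so
 * each multiply-accumulate must wait for the previous one to finish. */
long dot_naive(const short *a, const short *b, int n)
{
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += (long)a[i] * b[i];
    return sum;
}

/* Tuned: two independent accumulators break the dependence chain so a
 * dual-MAC datapath can issue two multiply-accumulates per cycle, and
 * restrict is good hygiene for DSP compilers. Assumes n is even. */
long dot_tuned(const short * restrict a, const short * restrict b, int n)
{
    long sum0 = 0, sum1 = 0;
    for (int i = 0; i < n; i += 2) {
        sum0 += (long)a[i]     * b[i];
        sum1 += (long)a[i + 1] * b[i + 1];
    }
    return sum0 + sum1;
}
```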
Last month I wrote about how my colleagues and I believe that embedded processor vendors will need to become more involved in developing or acquiring proprietary algorithms to stay competitive in the coming decade. This month, I’ll discuss another long-term trend that we expect to see in processor-based chips: the dramatically expanded use of multi-die packaging (also called “system-in-package”).
We all know that integration of more functionality at the silicon die level has some powerful
Read more...
BDTI has released independent benchmark results for Tilera’s massively parallel TILE64 processor on the BDTI Communications Benchmark (OFDM)™. The TILE64 chip incorporates 64 processor cores connected to each other in a mesh configuration. The cores operate at 866 MHz and are fairly simple, three-issue VLIW machines that support limited SIMD operations, such as SIMD adds and subtracts (but not SIMD multiplies). Tilera expects engineers to program the chip using C/C++ along with intrinsics to
Read more...
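For readers who haven’t run into the terminology: a two-way 16-bit SIMD add treats a 32-bit register as two independent 16-bit lanes and adds both in a single instruction. The portable C below spells out what that one operation computes; it’s an illustration of the concept, not Tilera’s actual intrinsic.

```c
/* What a two-way 16-bit SIMD add computes, written out in portable C.
 * On hardware with the instruction, this whole function is a single
 * operation; this is a concept illustration, not a Tilera intrinsic. */
#include <stdint.h>

uint32_t simd_add16(uint32_t x, uint32_t y)
{
    /* Add low and high 16-bit lanes independently, masking so a carry
     * out of the low lane cannot spill into the high lane. */
    uint32_t lo = (x + y) & 0x0000FFFFu;
    uint32_t hi = ((x & 0xFFFF0000u) + (y & 0xFFFF0000u)) & 0xFFFF0000u;
    return hi | lo;
}
```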
Earlier this year my colleagues and I did some crystal ball analysis and identified a number of key trends that we expect to shape the embedded processor market over the next decade. One of these is that we expect embedded processor companies to be increasingly differentiated by their ownership of proprietary algorithms.
This may seem out of left field; what do processor companies have to do with proprietary algorithms? Here’s our reasoning. Processor prices are dropping, while processor
Read more...
Unless it’s announcing a laptop that runs off body heat or some similarly epochal breakthrough, a technology company has a hard time getting media attention. And when a product does get editorial coverage, it’s even harder for readers to separate genuine news from infomercial. With every announcement claiming “better,” “new,” and “breakthrough,” what will grab legitimate attention? One ingredient of a successful announcement, PR professionals agree, is compelling data.
In 2007, an early-stage chip
Read more...
As computational requirements go up and fab processes increasingly bump up against inconvenient physical limitations, multicore solutions are becoming more attractive. The problem is that no one wants to program them, because there are lots of challenges associated with implementing applications on multiple cores. One challenge lies in handling inter-core communications. How will cores with different data formats, different interconnects, and different OSes exchange data and talk to each
Read more...
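One of the lowest-level building blocks for moving data between cores is a single-producer/single-consumer ring buffer in shared memory; a minimal sketch follows. The names are ours, and real multicore code would also need memory barriers and cache management, both omitted here, so treat this as a starting point rather than a working driver.

```c
/* Minimal single-producer/single-consumer ring buffer in shared memory.
 * Omits the memory barriers and cache maintenance real hardware needs. */
#include <stdint.h>

#define RING_SIZE 256u  /* must be a power of two */

typedef struct {
    volatile uint32_t head;   /* written only by the producer core */
    volatile uint32_t tail;   /* written only by the consumer core */
    int32_t data[RING_SIZE];
} ring_t;

/* Producer side: returns 0 if the ring is full. */
int ring_push(ring_t *r, int32_t v)
{
    if (r->head - r->tail == RING_SIZE)
        return 0;
    r->data[r->head & (RING_SIZE - 1u)] = v;
    r->head++;  /* publish only after the slot is written */
    return 1;
}

/* Consumer side: returns 0 if the ring is empty. */
int ring_pop(ring_t *r, int32_t *v)
{
    if (r->head == r->tail)
        return 0;
    *v = r->data[r->tail & (RING_SIZE - 1u)];
    r->tail++;  /* free the slot only after the value is read */
    return 1;
}
```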
At NI Week in August, National Instruments introduced a new product line, a set of eight boards that are intended as complete, off-the-shelf computing-plus-I/O solutions for medical, mechatronic, and industrial applications, among others. The boards are called “Single-Board RIOs” (RIO is short for “reconfigurable I/O”), and each board contains a PowerPC CPU, a Xilinx Spartan FPGA, and analog and digital I/O. The I/O channels are connected to the FPGA, enabling the user to customize timing and
Read more...