At NVIDIA's GTC (the yearly GPU Technology Conference) in March, the company trumpeted its intentions to broadly supply the embedded market with Tegra SoCs and associated hardware and software development tools. As a specific example of this overarching strategy, NVIDIA unveiled a small form factor development kit called "Jetson TK1" (Figure 1), based on the ARM Cortex-A15-based "Logan" Tegra K1 application processor introduced in January at the Consumer Electronics Show (see sidebar "A Series
Read more...
If you're a regular reader of this column, you know that I'm enthusiastic about the potential of "embedded vision" – the widespread, practical use of computer vision in embedded systems, mobile devices, PCs, and the cloud. Processors and sensors with sufficient performance for sophisticated computer vision are now available at price, size, and power consumption levels appropriate for many markets, including cost-sensitive consumer products and energy-sipping portable devices. This is ushering
Read more...
Back in October 2011, InsideDSP covered both recently introduced and pending CPU-plus-GPU products from AMD, along with the cores they were based on. At the time, AMD referred to CPU-plus-GPU integration as "Fusion"; the company has since rebranded such products as APUs (Accelerated Processing Units). And back then, AMD was actively selling two APU lines: "Ontario" (along with the higher-power "Zacate" variant), based on the mainstream "Bobcat" CPU core, and the higher-end "Llano",
Read more...
By now, most people who work with processors—whether in data centers, PCs, mobile devices, or embedded systems—understand that parallel processing is the way to get both high compute performance and good energy efficiency for most applications. And most of these people also realize that programming parallel processors is challenging. There are many different types of parallel processors, including CPUs with single-instruction, multiple-data (SIMD) capabilities, multi-core CPUs, DSPs, GPUs, and FPGAs,
Read more...
Investment in a particular technology segment, not only by small startups but also by established suppliers, tends to be a dependable indication that the application has large business potential and lengthy staying power. Consider embedded vision, the use of computer vision techniques to extract meaning from visual inputs in embedded systems, mobile devices, PCs and the cloud. BDTI, accurately predicting that embedded vision would rapidly become an important market, founded the Embedded Vision
Read more...
Embedded vision, the use of computer vision techniques to extract meaning from visual inputs in embedded systems, mobile devices, PCs and the cloud, is rapidly becoming a significant adopter of digital signal processing technology and techniques. This fact is likely already well known to those of you familiar with the Embedded Vision Alliance, which BDTI founded more than two years ago. If you've visited the Alliance website, you're probably already aware from the content published there that
Read more...
I've been hearing a lot about "wearable tech" for the past year or so, and lately the buzz has intensified. At Qualcomm's recent developer conference, the chipmaker unveiled its Toq smartwatch, which was quickly followed by a smartwatch announcement from Samsung. And at the excellent Augmented World Expo conference in June, smart eyewear like Google Glass was a very hot topic, with wearable computer pioneer Steve Mann giving a riveting presentation and displaying his collection of
Read more...
In January 2013, InsideDSP covered the CEVA-MM3101, the company's first DSP core targeted not only at still and video image encoding and decoding tasks (akin to the prior-generation MM2000 and MM3000) but also at a variety of image and vision processing tasks. At that time, the company published the following table of MM3101 functions that it provides to its licensees (Table 1):
Table 1. The initial extensive software function library unveiled in conjunction with the CEVA-MM3101 introduction
Read more...
Vision science studies suggest that the eye is able to discern more than 11 bits of dynamic range for each of the three primary colors – red, green and blue – that typically comprise a given scene. The optic nerve connecting each eye to the brain, on the other hand, is only able to pass roughly five bits' (40 levels) worth of each primary color's data. Yet the brain still is capable of discerning more than 10 billion discrete levels of total color depth, equivalent to that of the 11-bit-per-
Read more...
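The color-depth arithmetic in the passage above can be sanity-checked with a few lines of Python. This is a back-of-the-envelope sketch only; the per-channel bit depths are the article's figures, and the helper name `total_levels` is my own:

```python
import math

def total_levels(bits_per_channel: float, channels: int = 3) -> float:
    """Distinct colors representable at a given per-channel bit depth."""
    return (2 ** bits_per_channel) ** channels

# Exactly 11 bits per RGB channel yields about 8.6 billion colors:
print(f"{total_levels(11):,.0f}")       # 8,589,934,592

# "More than 11 bits" per channel (e.g., 11.1) pushes past 10 billion:
print(f"{total_levels(11.1):,.0f}")

# ~40 discernible levels per channel corresponds to roughly five bits:
print(round(math.log2(40), 2))          # 5.32
```

Note that exactly 11 bits per channel gives just under 10 billion colors; the article's "more than 10 billion" figure implies slightly more than 11 bits per channel, consistent with its "more than 11 bits" wording.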
In a recent interview in EE Times, BDTI co-founder and president Jeff Bier commented:
Multi-core CPUs are very powerful and programmable, but not very energy-efficient. So if you have a battery-powered device that is going to be doing a lot of vision processing, you may be motivated to run your vision algorithms on a more specialized processor.
Bier could have been speaking about CEVA's MM3101 processing core, which InsideDSP covered in its January 2012 edition. Or he could have been referring
Read more...