Germany-based processor IP provider videantis, launched in 2004, was one of the earliest entrants in what has since become a crowded field of vision processor suppliers. The company's latest fifth-generation v-MP6000UDX product line, anchored by an enhanced v-MP (media processor) core, is tailored for the deep learning algorithms that are rapidly becoming the dominant approach to implementing visual perception (Figure 1). Yet it remains capable of handling the traditional computer vision processing functions
Read more...
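As illustrative context only, and not videantis code, the sketch below shows the naive 2D convolution at the heart of the deep learning algorithms such cores are built to accelerate; the function name and toy sizes are hypothetical.

```python
# Illustrative only: a naive 2D convolution, the core operation in
# convolutional neural networks for visual perception. Vision processors
# accelerate many such multiply-accumulate operations in parallel.
import numpy as np

def conv2d_single_channel(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most
    deep learning frameworks) of a single-channel image with a kernel."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for y in range(oh):
        for x in range(ow):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

if __name__ == "__main__":
    img = np.random.rand(8, 8).astype(np.float32)   # toy "feature map"
    k = np.random.rand(3, 3).astype(np.float32)     # toy learned filter
    print(conv2d_single_channel(img, k).shape)      # (6, 6)
```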
The effectiveness of a computer vision system depends heavily not only on its algorithms but also on the quality of the images fed into those algorithms. "Garbage in, garbage out," as the saying goes. More than 20 years of image sensor innovation, fueled by consumer digital cameras, has also delivered cost, resolution, sensitivity, dynamic range, power consumption, and other improvements to visible-light-based computer vision applications. OmniVision Technologies hopes to expand these benefits into
Read more...
As a kid, I was fascinated with electronics – especially digital electronics. The idea that one could build a computing machine out of simple logic gates was a revelation, and designing such things was thrilling. But as powerful and flexible as digital computers are, we live in an analog world. Hence, analog-to-digital converters play a critical role.
When I first encountered them, I found A/D converters exotic – even magical. With them, one could not only construct a computer, but also enable
Read more...
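As a purely illustrative aside, the sketch below models an ideal N-bit analog-to-digital conversion, the basic mapping of a continuous voltage onto discrete codes that lets digital computing machines work in an analog world; the function name, reference voltage and resolution are assumptions for the example.

```python
# Illustrative only: ideal N-bit A/D conversion of an analog voltage.
# A real converter adds noise, nonlinearity and timing effects, but the
# core idea is mapping a continuous range onto 2**N discrete codes.
def ideal_adc(voltage, v_ref=3.3, bits=10):
    """Return the digital code an ideal ADC would produce for `voltage`."""
    levels = 2 ** bits                    # number of output codes
    lsb = v_ref / levels                  # volts per least-significant bit
    clamped = min(max(voltage, 0.0), v_ref - lsb)
    return int(clamped / lsb)

if __name__ == "__main__":
    for v in (0.0, 1.0, 1.65, 3.3):
        print(f"{v:.2f} V -> code {ideal_adc(v)}")
```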
The buzz around artificial intelligence, computer vision and machine learning intensifies on a daily basis: There are a dizzying number of new processors, algorithms and tools for computer vision and machine learning. Investment and acquisition activity around AI companies is furious. Announcements of new AI-based applications and products are non-stop. Competition for engineering talent is fierce.
All this creates challenges for product designers and application developers who seek to
Read more...
With the proliferation of deep learning, NVIDIA has realized its longstanding aspirations to make general-purpose graphics processing units (GPGPUs) a mainstream technology. The company's GPUs are commonly used to accelerate neural network training, and are also being adopted for neural network inference acceleration in self-driving cars, robots and other high-end autonomous platforms. NVIDIA also sees plenty of opportunities for inference acceleration in IoT and other "edge" platforms,
Read more...
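For general context rather than anything NVIDIA-specific, here is a minimal sketch of the common pattern of running neural network inference on a GPU when one is present, using PyTorch as an assumed framework and a hypothetical toy model.

```python
# Illustrative only: running neural network inference on a GPU when one
# is available, falling back to the CPU otherwise. The model here is a
# hypothetical toy classifier, not any production network.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).to(device).eval()

batch = torch.rand(1, 3, 224, 224, device=device)  # stand-in camera frame

with torch.no_grad():                               # inference only, no gradients
    scores = model(batch)
print(scores.shape)                                 # torch.Size([1, 10])
```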
Hot on the heels of announced production availability for the Neural Compute Stick based on the Myriad 2 vision processor comes a next-generation SoC from Movidius (an Intel company), the Myriad X. With a 33% increase in the number of 128-bit VLIW processing cores, along with additional dedicated imaging and vision acceleration blocks and a brand new "neural compute engine," the new chip delivers a claimed 10x increase in floating-point performance versus its predecessor (Figure 1 and Table 1
Read more...
About seven years ago, my colleagues and I realized that it would soon become practical to incorporate computer vision into cost- and power-constrained embedded systems. We recognized that this would be a world-changing development, due to the vast range of valuable capabilities that vision enables. It’s been gratifying to see this potential come to fruition, with a growing number of innovative vision-enabled products finding market success.
What we didn’t anticipate in 2011 was the important
Read more...
Graphics IP supplier Imagination Technologies has long advocated the acceleration of edge-based deep learning inference operations via the combination of the company's GPU and ISP cores. Latest-generation graphics architectures from the company continue this trend, enhancing performance and reducing memory bandwidth and capacity requirements in entry-level and mainstream SoCs and systems based on them. And, for more demanding deep learning applications, the company has introduced its first
Read more...
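As one generic illustration of how inference memory traffic can shrink, and not a description of Imagination's approach, the sketch below quantizes float32 weights to int8, a widely used technique that cuts weight storage and bandwidth by roughly 4x; all names are hypothetical.

```python
# Illustrative only: symmetric int8 quantization of float32 weights, one
# common (generic) way to reduce the memory footprint and bandwidth of
# deep learning inference by roughly 4x.
import numpy as np

def quantize_int8(weights):
    """Return (int8 weights, scale) for a simple symmetric quantization."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    w = np.random.randn(1024, 1024).astype(np.float32)
    q, s = quantize_int8(w)
    print(w.nbytes // q.nbytes)                       # 4x smaller
    print(np.max(np.abs(w - dequantize(q, s))) <= s)  # error within one step
```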
Convolutional neural networks (CNNs) may be the hot artificial intelligence (AI) technology of the moment, in no small part because both their training and inference functions map well onto existing GPUs, FPGAs and DSPs as accelerators, but they're not the only game in town. Witness, for example, Australia-based startup BrainChip Holdings and its alternative proprietary spiking neural network (SNN) technology (Figure 1). Now armed with both foundation software and acceleration hardware
Read more...
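For orientation only, the sketch below implements a leaky integrate-and-fire neuron, a generic textbook spiking model rather than BrainChip's proprietary SNN technology; the parameters are illustrative.

```python
# Illustrative only: a leaky integrate-and-fire (LIF) neuron, a common
# textbook spiking-neuron model. Unlike a CNN activation, it carries
# state between time steps and emits discrete spikes.
def lif_neuron(input_current, leak=0.9, threshold=1.0):
    """Return a list of 0/1 spike outputs, one per input sample."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i          # leak previous potential, integrate input
        if v >= threshold:        # fire when the threshold is crossed...
            spikes.append(1)
            v = 0.0               # ...then reset the membrane potential
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    # A constant weak input eventually accumulates enough charge to spike.
    print(lif_neuron([0.3] * 12))
```

The statefulness and sparse, event-driven spikes are what set this family of models apart from the dense multiply-accumulate pattern of CNNs.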
Qualcomm is expanding its reference camera module program with three new configurations targeting biometrics and depth-sensing functions in Android-based smartphones, tablets, AR (augmented reality) and VR (virtual reality) headsets, and other devices. While the modules' targeted computer vision tasks tend to be computationally intensive, a next-generation ISP (image signal processor) core optimized for these functions is intended to offload the CPU, GPU and DSP resources inside an SoC, delivering
Read more...
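As a generic illustration of why depth sensing is computationally intensive, and not a description of Qualcomm's modules or ISP, the sketch below performs brute-force stereo block matching and converts the matched disparity to depth via the standard pinhole relation; all values are hypothetical.

```python
# Illustrative only: brute-force stereo block matching, one reason depth
# sensing is computationally expensive. For every pixel, a patch from the
# left image is compared against many candidate positions in the right
# image; depth then follows from the matched disparity.
import numpy as np

def disparity_at(left, right, y, x, patch=5, max_disp=32):
    """Return the disparity (in pixels) that best matches a left patch."""
    half = patch // 2
    ref = left[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(min(max_disp, x - half) + 1):
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
        cost = np.sum(np.abs(ref - cand))        # sum of absolute differences
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(disparity, focal_px=800.0, baseline_m=0.06):
    """Pinhole-camera relation: depth = focal_length * baseline / disparity."""
    return float("inf") if disparity == 0 else focal_px * baseline_m / disparity

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    left = rng.random((120, 160)).astype(np.float32)
    right = np.roll(left, -8, axis=1)            # synthetic right view: 8-pixel shift
    d = disparity_at(left, right, y=60, x=80)
    print(d, depth_from_disparity(d))            # 8 pixels -> 6.0 m
```

Even this toy version compares patches across dozens of candidate disparities per pixel, which is the sort of workload a dedicated ISP block can take off the CPU, GPU and DSP.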