Ten years ago, no one would have expected that neural networks would deliver such impressive results on computer vision problems. Since 2012, when AlexNet, a deep convolutional neural network, produced a significantly lower error rate on the ImageNet Large Scale Visual Recognition Challenge than traditional feature extraction approaches, investigation of neural network approaches for visual understanding has intensified. These efforts culminated in 2015 when a deep residual network (ResNet)
Read more...
Xilinx, like many companies, sees a significant opportunity in burgeoning deep neural network applications, as well as those that leverage computer vision... oftentimes, both at the same time. Last fall, targeting acceleration of cloud-based deep neural network inference (when a neural network analyzes new data it's presented with, based on its previous training), the company unveiled its Reconfigurable Acceleration Stack, an application-tailored expansion of its original SDAccel development
Read more...
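To make the training-versus-inference distinction concrete, here is a minimal, hypothetical sketch (not Xilinx's stack or any real accelerator code): a single linear neuron is first trained on labeled data, after which its fixed weight is applied to new inputs. Cloud inference accelerators target only that second, fixed-weight phase.

```python
# Hypothetical sketch: training vs. inference for a single linear neuron
# learning the target function y = 2x. All names here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=100)
y_train = 2.0 * x_train  # labeled training data

w = 0.0   # single trainable weight
lr = 0.1  # learning rate

# Training phase: adjust the weight by gradient descent on mean-squared error.
for _ in range(200):
    grad = np.mean(2 * (w * x_train - y_train) * x_train)
    w -= lr * grad

# Inference phase: the weight is now frozen; simply apply it to unseen data.
def infer(x):
    return w * x

print(round(infer(3.0), 2))  # converges close to 6.0
```

The training loop is the expensive, iterative part; inference is a fixed feed-forward computation, which is why it is the natural target for dedicated acceleration hardware.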
Microsoft's Cognitive Toolkit (formerly CNTK) is a relatively recent entrant to the open-source deep learning framework market. And, as the company's Principal Researcher Cha Zhang acknowledged in a recent briefing, it has a ways to go before it can catch up with the population of developers enjoyed by well-known alternatives such as Caffe and Google's TensorFlow. Last year's transition from Microsoft's own CodePlex open source hosting site to the more widely known GitHub repository, along with
Read more...
At the recent Embedded Vision Summit, I was struck by the number of companies talking up their new processors for deep neural network applications. Whether they’re sold as chips, modules, systems, or IP cores, by my count there are roughly 50 companies offering processors for deep learning applications. That’s a staggering figure, considering that there were none just a few years ago.
Even NVIDIA, which has enjoyed wide adoption of its GPUs for deep learning applications, introduced a
Read more...
As system designers race to make IoT and edge devices more capable, they are incorporating increasingly complex and demanding algorithms. Cameras and microphones are now the eyes and ears of systems that help us drive our cars, maintain the safety of our homes, diagnose health issues, and much more. Processor vendors, seeking to meet escalating requirements of processing sensor data at the edge, are designing new heterogeneous devices that integrate CPU cores, DSPs, GPUs, and other specialized
Read more...
ARM's latest image signal processor (ISP), the Mali-C71, marks the first fruit of the company's acquisition of Apical a year ago. Tailored to optimize images not only for human viewing but also for computer vision algorithms, the Mali-C71 provides expanded capabilities such as wide dynamic range and multi-output support (Figure 1). And, in a nod to the ADAS (advanced driver assistance systems) and autonomous vehicle applications that the company believes are among its near-term high-volume opportunities
Read more...
At last year's Embedded Vision Summit, Cadence unveiled the Tensilica Vision P6 DSP, which augmented the imaging and vision processing capabilities of its predecessors with the ability to efficiently execute deep neural network (DNN) inference functions. Cadence returned to the Summit this year with a new IP offering, the Vision C5 DSP core, focused exclusively on deep neural networks. Vision C5 is intended for use alongside another core, such as the Vision P6, which will handle image signal
Read more...
Remember when mobile phones were for making phone calls? Given today's reality, it can be difficult to recall the time – not so long ago – when that was their sole purpose. Today, the situation is very different: most people use their phones mainly for sending texts, reading email and news, social networking, navigating, shopping, and watching videos. And maybe – rarely – making a phone call.
Video cameras are on a similar path: soon, most video cameras will not actually
Read more...
As the number of IoT devices increases, so does the need for intelligence at the edge—intelligence that will enable a device to acquire insights from its surroundings and make decisions in real time. Particularly for devices such as drones, personal robots, and autonomous vehicles, real-time decision-making capability is a must. Machine learning approaches, such as deep and convolutional neural networks (DNNs and CNNs), are proving to be the most accurate means for object detection and
Read more...
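A brief, hypothetical sketch of the operation at the heart of the CNNs mentioned above (illustrative only, not any specific vendor's implementation): sliding a small kernel across an image to produce a feature map. This convolution is the workload that edge processors for vision must execute efficiently, billions of times per frame.

```python
# Hypothetical sketch of a single CNN-style convolution (cross-correlation).
# In a real network the kernel weights are learned; here one is hand-picked.
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, as used in CNN layers."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the kernel with the image patch under it.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector applied to an image with a sharp vertical boundary.
image = np.zeros((5, 6))
image[:, 3:] = 1.0                 # right half bright, left half dark
kernel = np.array([[-1.0, 1.0]])   # responds where intensity jumps left-to-right
fmap = conv2d(image, kernel)
print(fmap[0])  # strong response only at the boundary column
```

Stacking many such learned kernels, interleaved with nonlinearities and downsampling, is what lets DNNs and CNNs perform the object detection tasks these edge devices require.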
Imagination Technologies' new PowerVR Furian graphics microarchitecture is the company's most significant advancement since 2011's Rogue, which has formed the microarchitecture foundation of multiple subsequent product families (Figure 1). While still based on the tile-based deferred rendering (TBDR) approach that dates back to the mid-1990s, Furian is tailored for not only the increasingly demanding graphics performance requirements of modern SoCs and systems based on them but also their
Read more...