If you’re a regular reader of this column, you know that I’m enthusiastic about the potential of “embedded vision” – the widespread use of computer vision in embedded systems, mobile devices, PCs, and the cloud. Processors and sensors with sufficient performance for sophisticated computer vision are now available at price, size, and power consumption levels appropriate for many markets, including cost-sensitive consumer products and energy-sipping portable devices. This is ushering in an era of machines that “see and understand”. While there are many challenges associated with implementing reliable embedded vision systems, I think we’re rapidly approaching the point where what we can do with embedded vision is limited mainly by what we can imagine doing with it.
Inspiring the imaginations of engineers is one of the primary goals of the Embedded Vision Alliance, an organization that my colleagues at BDTI and I founded in 2011, and that has grown rapidly to 32 member companies. And as part of its mission to inspire the imaginations of engineers – and empower them with practical know-how about embedded vision technology – the Alliance will host its second conference for engineers, the Embedded Vision Summit, on April 25th in San Jose. The Alliance’s first conference, held in Boston last September, was a great success, with attendees giving it an average rating of 8.6 out of 10.
At this point you may (very reasonably) be wondering what any of this has to do with robots doing laundry. Here’s the connection: If you think about it, sorting and folding laundry is a task that’s extremely difficult to automate, because there’s so much unpredictability. Laundry comes out of the dryer in a jumbled mess, with items stuck together, inside out, and crumpled in odd shapes. It’s a testament to the sophistication of human perception and dexterity that humans find it merely tedious to process this mess. For a machine, it’s a really daunting problem. But not too daunting to tackle, as it turns out.
Professor Pieter Abbeel and his team at U.C. Berkeley have made impressive progress in teaching robots to fold laundry, as part of their ground-breaking work in machine learning, motion control, and computer vision. And I’m very excited that Professor Abbeel has agreed to present the keynote talk at the Embedded Vision Summit on April 25th. His talk is titled “Artificial Intelligence for Robotic Butlers and Surgeons,” and it will be one of the highlights of a full day of high-quality inspirational and educational presentations. The Summit will also feature over twenty demonstrations of leading-edge embedded vision technology, and opportunities to interact with experts in embedded vision applications, algorithms, tools, processors and sensors.
Lest I leave you with the impression that embedded vision is a technology that will proliferate only at some point in the distant future, consider a few of my favorite examples of products that use embedded vision and are available today. The Philips Vital Signs Camera app for iPhone and iPad measures heart rate and respiration rate using nothing but video of your face and shoulders. The 2013 Cadillac XTS sedan incorporates a vision-based collision-avoidance system. And Affectiva’s amazing emotion-sensing technology assesses your reactions to TV commercials. (Try their fun online demo.)
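If you’re curious how a camera can possibly measure heart rate, the general principle is remote photoplethysmography: blood flow causes tiny periodic changes in skin color, so averaging a face region’s green channel over time and finding the dominant frequency gives a pulse estimate. Philips hasn’t published the details of its algorithm, so the sketch below is only an illustrative approximation of the idea; the function name, frame rate, and synthetic test signal are all my own assumptions, not anything from the app.

```python
# Illustrative sketch of camera-based heart-rate estimation (remote
# photoplethysmography). NOT Philips' algorithm -- just the basic idea:
# extract a per-frame mean green-channel value from a face region, then
# find the dominant frequency in the plausible heart-rate band.
import numpy as np

FPS = 30.0          # assumed camera frame rate
DURATION_S = 10.0   # assumed length of the captured clip

def estimate_heart_rate(green_means, fps=FPS):
    """Estimate beats per minute from a per-frame mean green-channel signal."""
    signal = np.asarray(green_means, dtype=float)
    signal -= signal.mean()                          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to plausible human heart rates (~40-200 bpm).
    band = (freqs >= 40 / 60.0) & (freqs <= 200 / 60.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                          # convert Hz to bpm

if __name__ == "__main__":
    # Synthetic stand-in for real video: a 72-bpm pulse buried in noise.
    t = np.arange(0, DURATION_S, 1.0 / FPS)
    fake_green = 120 + 0.5 * np.sin(2 * np.pi * (72 / 60.0) * t) \
                 + np.random.randn(t.size) * 0.2
    print("Estimated heart rate: %.1f bpm" % estimate_heart_rate(fake_green))
```

A real implementation would, of course, have to track the face across frames, compensate for motion and lighting changes, and filter out spurious peaks; the hard engineering is in making this robust on a handheld device.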
If you're an engineer involved in, or interested in learning about, incorporating embedded vision into your designs, I invite you to join us at the Embedded Vision Summit on April 25th in San Jose. Space is limited, so please register now by filling out the registration application form here; we will respond with further details via email.
Jeff Bier is president of BDTI and founder of the Embedded Vision Alliance. Post a comment here or send him your feedback at http://www.BDTI.com/Contact.