I've been hearing a lot about "wearable tech" for the past year or so, and lately the buzz has intensified. At Qualcomm's recent developer conference, the chipmaker unveiled its Toq smartwatch, which was immediately followed by a smartwatch announcement from Samsung. And at the excellent Augmented World Expo conference in June, smart eyewear like Google Glass was a very hot topic, with wearable-computing pioneer Steve Mann giving a riveting presentation and displaying his collection of early wearable computing prototypes (some dating from the 1980s).
It's undeniable that wearable computers are a hot topic. But are they a passing fad, or a fundamental technology that will change our lives? For a long time, I suspected the former, because I couldn't think of a compelling application for wearable computing.
Well, that was then. Now, I have a different perspective, thanks to two recent experiences: one a very mundane experience at home, and the other involving a cutting-edge product that's about to ship.
I'll start with the cutting-edge product: Israeli start-up OrCam is preparing to ship smart eyeglasses that enable visually impaired people to access visual information by using computer vision to interpret images (such as faces and signposts) and read text (such as restaurant menus and street signs). The OrCam glasses translate visual input into synthesized speech, which is rendered through an earphone. It's amazing and exciting to see how powerful miniaturized electronics combined with sophisticated algorithms promise to meaningfully enhance the lives of visually impaired people, enabling them to independently perform many of the thousands of small tasks requiring eyesight that most of us perform effortlessly every day.
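To make that concrete, here is a minimal sketch of the kind of image-to-speech pipeline such a product embodies. To be clear, this is not OrCam's actual implementation; it assumes the open-source Tesseract OCR engine (via the pytesseract package) and the pyttsx3 text-to-speech package, and the image file name is hypothetical.

    from PIL import Image
    import pytesseract   # OCR wrapper; requires the Tesseract engine to be installed
    import pyttsx3       # offline text-to-speech engine

    def read_aloud(image_path):
        # Recognize whatever text the camera frame contains (a menu, a street sign).
        text = pytesseract.image_to_string(Image.open(image_path))
        if not text.strip():
            return  # nothing readable in this frame
        # Render the recognized text as synthesized speech through the audio output.
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()

    read_aloud("menu_photo.jpg")  # hypothetical file name

A shipping product adds much more, of course: face recognition, robustness to real-world lighting and motion, and real-time performance on low-power wearable hardware. But the basic shape of the pipeline is the same: camera in, speech out.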
Did I say "effortlessly"? Well, that brings me to the second experience that has converted me into a believer in wearable tech. Recently I've noticed that my middle-aged eyes are starting to have trouble reading small print on food and medicine packages. I figured "there must be an app for that," and indeed, I found a large selection of magnifier apps in the Google Play store. After a couple of tries, I found a "magnifying glass" app that works with my phone, and now I'm no longer at risk of accidentally taking my wife's vitamins.
I've learned two interesting things from using my smartphone as a reading aid. First, it's awkward to hold the smartphone in one hand and the object of interest (e.g., a medicine bottle) in the other hand, while trying to align the smartphone's image sensor with the area of interest and hold both items steady. And this problem becomes worse if the object of interest is not rigid, such as a piece of paper. In such cases, you really want to use both hands to stabilize the object, but you can't do that if you must also hold your smartphone.
The second thing I've learned from using the magnifier app is that there's a real use case here for augmented reality. That's because, these days, when I read the fine print on, say, a food label, there's a pretty good chance my next stop is Google, for example, to figure out whether Red Dye 40 is something I want to ingest. The smartphone magnifier app puts my brain and fingers between the food label and a Web search, and there's little doubt about where the bottleneck in that system is. The process would be much faster and easier if the image were automatically converted to text, and a simple voice command or touchscreen press then triggered a Web search. That sounds an awful lot like what the OrCam product is doing, which makes me think the market here is not "just" the 285 million people with visual impairments, but all of us who rely increasingly on the Internet to answer questions about the world around us.
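For illustration, here is a similarly hedged sketch of that label-to-search flow, again in Python with pytesseract: the label image is converted to text, and a recognized term is handed straight to a Web search. The query-selection heuristic and the file name are invented for the example; a real system would let a voice command or touchscreen press pick the term.

    import webbrowser
    from urllib.parse import quote_plus

    from PIL import Image
    import pytesseract

    def search_label(image_path):
        # Convert the fine print to text so the user never has to retype it.
        label_text = pytesseract.image_to_string(Image.open(image_path))
        # Toy heuristic: take the first non-empty line as the search term.
        query = next((line.strip() for line in label_text.splitlines() if line.strip()), "")
        if query:
            # Hand the recognized term to a Web search in the default browser.
            webbrowser.open("https://www.google.com/search?q=" + quote_plus(query))

    search_label("food_label.jpg")  # hypothetical file name

The point of the sketch is where the human sits in the loop: once the image becomes text, the brain-and-fingers bottleneck between the label and the search disappears.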
To help engineers envision the possibilities for incorporating visual intelligence in their designs, and to help them gain the practical skills required to make those ideas a reality, in 2011 my colleagues and I at BDTI founded the Embedded Vision Alliance. The Alliance's third technical conference for engineers, the Embedded Vision Summit, will be held on October 2 in the Boston area. The Summit will provide a technical educational forum for engineers, including how-to presentations, demonstrations, an inspirational keynote presentation, and opportunities to interact with technology experts from leading vision supplier companies.
If you're an engineer involved in, or interested in learning about, how to incorporate embedded vision into your designs, I invite you to join us at the Embedded Vision Summit on October 2. Space is limited, so please register now.
Jeff Bier is president of BDTI and founder of the Embedded Vision Alliance. Please post a comment here or send him your feedback at http://www.BDTI.com/Contact. To subscribe to BDTI's monthly InsideDSP newsletter, click here.