The display lights up when a hand approaches the watch, and a blue signal sweeps across the screen. Ivan Poupyrev rubs two fingers together. Song titles, news stories and the weather appear, then disappear again. Mr. Poupyrev, a senior executive at Google, hasn’t even touched the device yet.
“We can revolutionize the interaction between man and machine with gesture recognition,” says the designer, speaking on a stage in Mountain View, California, where Google is headquartered. He calls it “a comfortable alternative to the current methods of control: touch and voice recognition.”
At its annual I/O conference for developers, Google is unveiling a new interaction language for the Internet of Things, one that could solve one of the industry’s central problems. Until now, the vision of an interconnected world, in which humans communicate effortlessly with watches, thermostats, loudspeakers or cars, has suffered from usability issues. The smartwatch, for example, a minicomputer that fits on the wrist, still looks like a miniaturized telephone, with hardly any screen space for effective navigation.
Leading the way in tackling such issues is the Munich-based chipmaker Infineon, which supplies the key hardware that detects a user’s movements and converts them into signals for the watch’s operating system. DAX-listed Infineon has been working with Google on Project Soli since late 2015. The unusual partnership could redefine how people interact with technology.
“For the first time in history, tools are oriented toward their users, instead of the other way around,” says Andreas Urschitz, who runs Infineon’s Power Management & Multimarket (PMM) division, which includes the mobile communications business. The innovative chip has great potential, says Mr. Urschitz, predicting that “it could turn into a market worth billions.”