The Future is Spoken presents Roger Kibbe as this week’s guest.
Roger is currently a Senior Developer Evangelist for Viv Labs, the platform behind Samsung’s Bixby 2 voice engine. He works with executives, designers, and developers on voice and conversational AI strategy and execution.
Roger says he loves technology but has some gripes with how we interact with it. Technology can empower and enable us, but he acknowledges it can also be a time sink. You pull out your phone to look for something, get distracted by social media, and forget what you were originally looking for.
His fascination with voice-enabled technology started after using the Echo Dot. Considering the possibilities that voice-enabling afforded, he believed the technology was ground-breaking in that it allowed people to get specific things done and then “get out of the way.” The technology became a tool and less of a distraction. The “time sink” factor was eliminated.
In working with Samsung and the AI-powered Bixby voice engine, Roger says, they are developing new products and new ways of interacting. That raises interesting questions. How do you interact with a voice-enabled device that has a screen? How do you interact with one that doesn't? Multimodality is a big part of developing these new products.
Roger stresses that technology needs to enable inclusiveness. Voice-enabled devices, for example, were embraced early on by the deaf community. That wasn't planned or intended, but it unlocked inclusiveness for a whole group of people. Voice-enabled devices can also make technology accessible to people who cannot read, bridging the gap and allowing them to use new technology. Roger argues this inclusiveness should be built in from the start, not tacked on as an afterthought.
Roger also touches on the thorny issue of privacy as it relates to voice-enabled devices, and how that might become problematic as the technology continues to develop.
Find Roger on LinkedIn.