Editorial

Want to Know the Future of Customer Experience? Ask a 2-Year-Old

Alan J. Porter
As voice interfaces creep into every corner of our lives, are we looking at a future where keyboards become near-obsolete?

If there’s one person I’m learning a lot from these days, it’s my 2-year-old granddaughter. Watching young Hazel encounter and learn to navigate her way in the world is a delight and incredibly instructive. She has no preconceived ideas of how things should work, nor built-in assumptions of what an interaction should be. She learns by copying and, most of all, by trying: pushing the limits of what is socially acceptable and technologically feasible to achieve her goals.

Hazel is the best predictor of the customer experience.

Goodbye Keyboards, Hello Voice?

Very early on she figured out voice assistants. If she sees a phone lying on the table, rather than pick it up, she will shout at it — and make no mistake, she expects an answer. She loves listening to Siri, and will babble away at the phone for a while.

Watching her made me think there’s a strong possibility that as she grows up she may never need to touch a keyboard. Will all her digital experiences be voice-driven?

Certainly more and more of mine are. I no longer write (or type) a shopping list; I just tell my phone when I’ve run out of something and it gets added to the list. I open my most commonly used notebooks with a voice command, and I check the weather each morning with a sleepy request from deep beneath my comforter before rolling out of bed to tackle the day. I use voice commands in my car. At home I talk to a little black box to play music, control the heating system, and sometimes even turn on the TV and find the program I want to watch.

Given the rapid adoption and growing number of voice-activated interfaces for engaging with technology platforms, voice does indeed seem to be the future of customer experience.

Unless you have an accent.


I'm a Brit living in the US, and it appears that the current generation of voice assistants still has a way to go in dealing with inflections, phrasing, idioms and patterns of speech that fall outside fairly narrow parameters. I’ve heard similarly frustrating stories from friends with accents ranging from Polish and Scottish to Norwegian and Japanese, all of whom speak excellent, clear English yet struggle to be understood by US-developed voice assistants. Localization of the technology is still a missing ingredient.

Related Article: It's Time to Get Serious About Voice

Voice Technology: Like Second Nature

Given the rapid development of voice technology, I view the accent issue as a short-term obstacle. Voice recognition and voice-driven interaction are here to stay and will only keep growing. Why? Because it’s the most natural way of communicating we have. Talking and listening have always been how we communicated — before computers, before TV, before print, before written scripts, even before cave paintings, we used sound to communicate.

There’s a biological reason why voice assistants are so engaging. On average we type at around 40 to 80 words per minute and read at around 250 words per minute, but we both speak and listen at around 130 words per minute. So conversational voice-driven interfaces feel the most intuitive: the speed of delivery and the speed of processing are naturally synchronized.
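To see how those rates compare in practice, here is a quick back-of-the-envelope calculation using the figures cited above (the 500-word message length is a hypothetical example, and 60 wpm is taken as a mid-range typing speed):

```python
# Communication rates cited in the article, in words per minute.
# The typing figure uses the middle of the quoted 40-80 wpm range.
RATES_WPM = {"typing": 60, "reading": 250, "speaking": 130, "listening": 130}

MESSAGE_WORDS = 500  # hypothetical message length

# Time to deliver (or absorb) the same message in each mode.
for mode, wpm in RATES_WPM.items():
    minutes = MESSAGE_WORDS / wpm
    print(f"{mode:>9}: {minutes:.1f} min for {MESSAGE_WORDS} words")
```

The point the paragraph makes falls out of the numbers: speaking and listening take exactly the same time per word, so neither side of a voice conversation is waiting on the other, whereas typing is the bottleneck in a keyboard-driven exchange.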

Related Article: Enough With the Voice Interfaces For Now

A Final (Surmountable) Challenge 

If anything is holding back the adoption of voice assistants in the user experience realm, it’s engineering the content to feed them. At the moment most assistants perform specific functions based on programmed keywords (and you only have to get that keyword slightly wrong to see some baffling results), or they pass the command to a general search query and return best-guess answers, often with an “is this what you were looking for?”-style qualifier. We are some way from a true interactive question-and-answer paradigm, as very little content is currently produced with the question-context-intent-answer model in mind.
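A minimal sketch of the keyword-matching behavior described above may make the limitation concrete. This is not how any particular assistant is implemented; the command phrases and responses are hypothetical illustrations of the pattern:

```python
# Sketch of keyword-triggered command handling (hypothetical commands).
# A real assistant is far more sophisticated; this only illustrates the
# "programmed keyword or fall back to search" pattern described above.

COMMANDS = {
    "add to shopping list": lambda item: f"Added {item} to your shopping list.",
    "check the weather": lambda _: "Today: sunny, high of 72.",
}

def handle(utterance: str) -> str:
    """Match the utterance against programmed keywords; fall back to search."""
    text = utterance.lower()
    for keyword, action in COMMANDS.items():
        if keyword in text:
            # Keyword found: run the programmed function on the remainder.
            return action(text.replace(keyword, "").strip())
    # No keyword matched: return a best-guess, search-style response.
    return f'Is this what you were looking for: results for "{utterance}"?'

print(handle("please add to shopping list milk"))  # exact keyword -> programmed action
print(handle("ad to shoping list milk"))           # slightly wrong keyword -> baffling fallback
```

Note how a one-letter slip in the keyword drops the request straight into the generic fallback — the "baffling results" the paragraph mentions. A question-context-intent-answer model would instead try to recover the user's intent from the whole utterance rather than from an exact phrase.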

I’ve written before about how artificial intelligence systems (such as voice assistants) need modular content to be successful. When we get that right, I believe we will see an exponential rise in the use of voice interfaces that will transform the customer experience in a way that young Hazel will just take for granted.


About the Author

Alan J. Porter

Alan Porter is an industry thought leader and catalyst for change with a strong track record in developing new ideas, embracing emerging technologies, introducing operational improvements and driving business value. He is the founder and chief content officer of The Content Pool.

Main image: Caroline Hernandez