29.5.17

Optimizing for voice assistants: Why actions speak louder than words

“Hey Siri, remind me to invent you in 30 years”

In 1987, Apple came up with the idea of a “Knowledge Navigator” – a concept video that’s remarkably, and perhaps not coincidentally, similar to our modern smart device assistants, Siri among them.

Its features included an on-screen assistant that responded to voice commands, pulling up information and sorting out its user’s calendar.

In theory, we’re there, 30 years later – though the reality doesn’t always quite match up to the dream.

Even when it does work, voice hasn’t always been exactly what people were looking for. The thing most adults said they wished their voice search systems could do was find their keys (though teens said they most wished it could send them pizza).

Although we’re getting to the stage where that’s now possible, most development in voice has been in voice search – talking to your phone to find information.

Showing search results for “Why can’t you understand me, you stupid phone”

But while talking to a device can be a better experience than playing around with a virtual keyboard on a phone or a physical one on a computer, there are two major issues with voice search.

The first is that it’s still clunky. Half the time you have to repeat yourself to be understood, particularly if the word you’re trying to get across is slang or an abbreviation – which is to say, exactly the sort of language you’d expect “conversational” search to handle.

It doesn’t feel smooth, and it doesn’t feel effortless – which pretty much defeats the point of it.

The other is that it doesn’t add value. A voice search achieves nothing you couldn’t do by simply typing in the same query.

But recently, we’ve seen developments in the voice control industry, starting with Alexa. At this point, everyone’s familiar with the Echo and its younger sibling, the Echo Dot – it’s been in adverts, our friends have it, maybe we have it ourselves.

The Alexa devices were among Amazon’s best-selling products in 2016, especially around Christmas, and the trend doesn’t show significant signs of slowing. But if we’ve had Siri since 2011, why is Alexa picking up so much traction now?

The answer is that it’s not voice search. It’s voice commands. Alexa is more exciting and satisfying for users because it provides an action – you speak to it and something happens. You can now order a pizza – or an Uber, or a dollhouse.

That’s what people have been wanting from their devices – the ability to control the world around them just by speaking, not just an alternative to a keyboard.

Ultimately, the commands are more personal. You can go on a website and order a pizza, customise it, pay for it and it’ll show up – but talking to Alexa is akin to saying to a friend, “Order a pizza?” (except Alexa won’t stop mid-call to ask you what that other topping was).

Where the majority of mobile voice commands are used for search, Alexa’s use cases are dominated by home control – 34% of users have Alexa play music, just under 31% use it to control the lights, and 24.5% use it as a timer.

While Siri and Google Voice Search are, like the Echo, examples of narrow AI, they make much more limited use of those capabilities – compared to Alexa, Google is not OK, and Siri can say goodbye.

“OK Google – who would win in a fight, you or Alexa?”

Alexa’s success has put Google into catch-up mode, and they have been making some progress in the form of Google Home. Early reviews suggest it might actually be the better product – but it lacks the Echo’s market momentum, and it seems unlikely that sales will be on an even footing for a while yet.

However, Google does have the advantage of some high-end technology, namely Alphabet’s DeepMind.

DeepMind itself is the company name, but the more familiar connection is the technology it produces. DeepMind is responsible for AlphaGo, the program that beat one of the world’s foremost Go players 4-1, as well as a neural network that learns to play video games the same way humans do.

DeepMind can offer Google’s systems its machine learning expertise – which means that Google Home’s technology has more room to push beyond narrow AI in the future. Your device will be able to start adapting itself to your needs – just don’t ask it to open the pod bay doors.

“Watson – what wine would you recommend with this?”

The other major contender in the AI race has only just started dipping into the B2C commercial market, and not nearly to the same scale as Alexa or Google Home.

IBM Watson has, however, won Jeopardy!, and found roles in healthcare, teaching, and weather forecasting – essentially, absorbing a great deal of information and adapting it for different uses.

Watson is now used by The North Face, for example, to offer contextual shopping through conversational search. Users answer questions, and Watson suggests products based on the answers.

Likewise, Bear Naked uses Watson to “taste test” customers’ custom granola mixes – once you’ve designed your blend, it can tell you whether you might want to cut back on the chocolate chips.

AI is a competitive market – and it’s one converging with conversational and voice search to bring us ever closer to the computer from Star Trek, and even beyond it.

For now, however, narrow AI is the market – and that means optimizing sites for it.

SE-OK Google

Voice search means that people are searching much more conversationally than they used to. The best way to accommodate that in your SEO strategy is to give more attention to your long-tail keywords, especially the questions.

Questions are opportunities best met with in-depth, mobile-friendly guides that offer information to your customers and clients.
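One concrete way to surface that question-and-answer content to crawlers – and, by extension, to the assistants built on top of them – is structured data. Below is a minimal sketch, in TypeScript, of generating schema.org FAQPage markup for a long-tail question page; the question, answer, and values here are placeholders for illustration, not a format any particular assistant mandates:

```typescript
// Minimal sketch: schema.org FAQPage structured data for a page
// targeting a long-tail question. The Q&A text is purely illustrative.
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "How long does a wood-fired pizza oven take to heat up?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Most domestic wood-fired ovens reach cooking temperature in 45 to 90 minutes, depending on size and fuel.",
      },
    },
  ],
};

// The object is embedded in the page head as a JSON-LD script tag:
const scriptTag =
  `<script type="application/ld+json">${JSON.stringify(faqJsonLd)}</script>`;
console.log(scriptTag);
```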

But the same applies to apps, used the way Alexa and Google Home use them. People aren’t just making voice searches now – they’re also giving voice commands.

With that in mind, to rank for some of these long-tail keywords, you need to start optimizing for action phrases and Google-approved AI commands like “search for [KEYWORD] on [APP]”, as well as carefully managing your API, if you have one. And it is worth having one, so that you can integrate fully with these new devices.
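What “managing your API” means in practice will vary, but the core idea is an endpoint that an assistant platform can call to fulfil a spoken command. Here’s a hedged sketch using Node.js with Express; the route, payload shape, and intent names are assumptions for illustration, not any platform’s actual contract:

```typescript
import express from "express";

const app = express();
app.use(express.json());

// Hypothetical fulfilment endpoint a voice platform could be pointed at.
// The { intent, parameters } payload shape is an assumption for this sketch;
// real platforms (Alexa, Google) each define their own request formats.
app.post("/voice/fulfilment", (req, res) => {
  const { intent, parameters = {} } = req.body;

  if (intent === "order.pizza") {
    const size = parameters.size ?? "medium";
    // In a real integration, you'd call your ordering service here.
    res.json({ speech: `Okay, ordering a ${size} pizza.` });
  } else {
    res.json({ speech: "Sorry, I can't help with that yet." });
  }
});

app.listen(3000, () => console.log("Fulfilment endpoint listening on :3000"));
```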

You can break down the structure of common questions in your industry to optimize your long-tail keywords for devices.

You’ll also need to look into deep-linking to optimize your apps for search. Deep-linking allows searchers to see listings from an app directly in search results, and open the app from those results, making for a smoother user experience.

Search results show your app data and link directly into the app
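On Android, for instance, a web page is associated with its in-app equivalent by pointing an alternate link at the app page using the android-app:// URI scheme. A small sketch of building that markup; the package name and URL are placeholders:

```typescript
// Builds an App Indexing alternate link for an Android deep link.
// URI format: android-app://{package_name}/{scheme}/{host_path}
function androidAppLink(packageName: string, scheme: string, hostPath: string): string {
  return `android-app://${packageName}/${scheme}/${hostPath}`;
}

// Placeholder package and page, purely for illustration.
const href = androidAppLink("com.example.pizzeria", "https", "example.com/menu/margherita");

// Emitted into the web page's <head> so crawlers can pair
// the web page with the matching screen inside the app:
console.log(`<link rel="alternate" href="${href}" />`);
```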

This is only going to become more important over time – Google have just announced that they’re opening up their “Instant Apps” technology to all developers.

Instant Apps mean that even if the user doesn’t have an app installed, Google can “stream” the relevant page from it anyway. It’s not a stretch to imagine that before long Alexa won’t need Skills to complete commands – so long as you’ve properly set up your API to work with search.
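For now, though, commands still reach Alexa through Skills. As a rough illustration, here’s what a custom intent handler looks like with the Node.js Alexa Skills Kit SDK (ask-sdk-core); the OrderPizzaIntent name and its “topping” slot are hypothetical examples, not a published skill:

```typescript
import * as Alexa from "ask-sdk-core";

// Handles a hypothetical OrderPizzaIntent with a "topping" slot.
const OrderPizzaIntentHandler: Alexa.RequestHandler = {
  canHandle(handlerInput) {
    return (
      Alexa.getRequestType(handlerInput.requestEnvelope) === "IntentRequest" &&
      Alexa.getIntentName(handlerInput.requestEnvelope) === "OrderPizzaIntent"
    );
  },
  handle(handlerInput) {
    const topping =
      Alexa.getSlotValue(handlerInput.requestEnvelope, "topping") ?? "margherita";
    // A real skill would call an ordering API here before responding.
    return handlerInput.responseBuilder
      .speak(`Okay, ordering a ${topping} pizza.`)
      .getResponse();
  },
};

// Wires the handler into a Lambda-compatible skill entry point.
export const handler = Alexa.SkillBuilders.custom()
  .addRequestHandlers(OrderPizzaIntentHandler)
  .lambda();
```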

Siri, likewise, already has SiriKit, which lets developers hook their apps’ functionality into Siri so it can act on voice requests.

“Alexa – What’s the Best Way to Deal with AI?”

Voice search is a growing part of the search industry. But it’s not the biggest opportunity on offer.

Rather, companies should be focusing on integrating voice actions into their strategy – by deep-linking their apps, ranking for long-tail question keywords, and making sure that everything they want a customer to be able to do, the customer can do by voice.



