Is Google Ready to Turn Search into a Conversation?
Conversing with Google whilst we search: a possibility that could change the future of search engines.
The company has been weighing the ins and outs of this for longer than we might realise, and the idea resurfaced at this year's I/O developer conference, where two shiny new AI systems, LaMDA (Language Model for Dialogue Applications) and MUM (Multitask Unified Model), were unveiled. To show what this could mean for the future of AI, Google had LaMDA speak as the dwarf planet Pluto, offering factual and encouraging responses to questions. “Have you ever had any visitors?” a user asked LaMDA-as-Pluto. The AI responded: “Yes I have had some. The most notable was New Horizons, the spacecraft that visited me.”
Instead of issuing blunt, simplified commands so that Google can understand what we want it to do, which is currently pretty limited, users will be able to talk to it in the casual terms that come naturally to us. This new way of searching will also include information retrieval from personal messages, calendars, personal photos and more.
Computers have been tinkered with and AI has been developed to incredible levels, but is it too soon to assume we can make them adapt to all sorts of accents, terms, and every part of the human language, producing something so accessible and easy to chat to that you never have to repeat yourself ten times over? (Siri, I'm looking at you!)
It's a shift that will change the way we think about classic search engines. A recent research paper from a group of Google engineers, titled “Rethinking Search”, asks exactly this: is it time to replace “classical” search engines, which provide information via ranked webpages, with AI language models that deliver answers directly instead?
Although marketing is a key part of this revolutionary possibility, Google has been integrating speech-driven search into its products for some time now. It began with Google Voice Search in 2011, improved on it with Google Now in 2012, launched Assistant in 2016, and in numerous I/Os since has pushed to the forefront of speech technology, preparing the world for an exciting way of speaking to something inanimate and having it switch on lights in your home or play music, bringing seamless efficiency to home life.
Leaps and bounds have been made since the start of Google's vision for speech tech, but as with most things that appear so ethereal, the actual utility of this type of device is often less than it seems. When Google Home was introduced in 2016, for example, the company assured users they would be able to “control things that aren't just in their homes, like booking a taxi, finding a bakery, or sending a bouquet of flowers directly to someone's door, and much, much more.” Some of these things can now be done, but they fall short of the simple, faultless experience that was intended.
Current technology can handle simple, direct commands that require recognising only a small number of verbs and nouns (think “turn up the volume”, “what's the time” and so on), along with a few basic follow-ups. But language is vastly more complex than that, and today's systems drown under the weight of it all; this is why the shift cannot be rushed, though it certainly can be done.
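To see why keyword-driven command handling hits a wall, here is a minimal sketch of the kind of small-vocabulary matching described above. This is purely illustrative, not Google's implementation; the command names and keyword lists are assumptions made up for the example.

```python
from typing import Optional

# Illustrative mapping from a handful of keywords to actions.
# Real assistants use far richer models; this toy table is an assumption.
SIMPLE_COMMANDS = {
    ("turn", "volume"): "adjust_volume",
    ("what", "time"): "report_time",
    ("turn", "lights"): "toggle_lights",
}

def match_command(utterance: str) -> Optional[str]:
    """Return the matched action, or None when the phrasing falls
    outside the small fixed vocabulary."""
    cleaned = utterance.lower().replace("'", "").replace("?", "")
    words = cleaned.split()
    for keywords, action in SIMPLE_COMMANDS.items():
        # Prefix match so "what's" (-> "whats") still hits "what".
        if all(any(w.startswith(k) for w in words) for k in keywords):
            return action
    return None

print(match_command("turn up the volume"))             # adjust_volume
print(match_command("what's the time"))                # report_time
print(match_command("could you make it a bit louder"))  # None
```

The last query means the same as the first, but the matcher returns nothing: every paraphrase, accent transcription quirk, or indirect request needs its own entry, which is exactly the combinatorial weight that free-form language piles onto systems like this.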
As Google’s CEO Sundar Pichai said at I/O last week: “Language is endlessly complex. We use it to tell stories, crack jokes, and share ideas. [...] The richness and flexibility of language make it one of humanity’s greatest tools and one of computer science’s greatest challenges.”
Keep up to date with the latest tech industry insights, trends, information technologies, app development, and small business content on the Proteams Blog.
Follow us on LinkedIn for updates on the latest tech news here