Next phase of Google search: context is king


At its Search On event today, Google showcased several new features that, taken together, are its most powerful attempts yet to get people to do more than type a few words into a search box. By leveraging its new Multitask Unified Model (MUM) machine learning technology in small ways, the company hopes to kick off a virtuous cycle: it will provide more detailed and context-rich answers, and in return it hopes users will ask more detailed and context-rich questions. The end result, the company hopes, will be a richer and deeper search experience.

Google SVP Prabhakar Raghavan oversees search alongside Assistant, Ads, and other products. He likes to say – and repeated in an interview this past Sunday – that “search is not a solved problem.” That may be true, but the problems he and his team are trying to solve now have less to do with wrangling the web than with adding context to what they find there.

For its part, Google will begin to flex its ability to recognize constellations of related topics using machine learning and present them to you in an organized way. A coming redesign of Google search will start showing “Things to Know” boxes that send you off to different subtopics. When there is a section of a video that is relevant to the general topic – even if the video as a whole is not – it will send you there. Shopping results will begin to show inventory available in nearby stores, and even clothing in different styles associated with your search.

For your part, Google is offering – though perhaps “asking” is a better term – new ways to search that go beyond the text box. It is making an aggressive push to get its Google Lens image recognition software into more places. Lens will be built into the Google app on iOS and also into the Chrome web browser on desktops. And with MUM, Google hopes users will do more than just identify flowers or landmarks, instead using Lens directly to ask questions and make purchases.

“It’s a cycle that I think will keep escalating,” says Raghavan. “More technology leads to more affordance for the user, leads to better expressivity for the user, and will demand more of us, technically.”

Google Lens will allow users to search using images and refine their query with text.
Image: Google

These two sides of the search equation are meant to kick off the next stage of Google search, one where its machine learning algorithms become more prominent in the process by directly organizing and presenting information. In this, Google’s efforts will be helped mightily by recent advances in AI language processing. Thanks to systems known as large language models (MUM is one of them), machine learning has gotten much better at mapping the connections between words and topics. It’s these skills the company is leaning on to make search not just more accurate, but more exploratory and, it hopes, more helpful.

One of Google’s examples is instructive. You may not have the first idea what the parts on your bicycle are called, but if something is broken you’ll need to find out. Google Lens can visually identify the derailleur (the gear-changing part hanging near the rear wheel) and, rather than just giving you that discrete piece of information, it will let you ask questions about fixing the thing directly, taking you to relevant information (in this case, the excellent Berm Peak YouTube channel).

The push to get more users to open Google Lens more often is fascinating on its own, but the bigger picture (so to speak) is about Google’s attempt to gather more context about your queries. More complicated, multimodal searches combining text and images demand “an entirely different level of contextualization than we, the provider, need to have, and so it helps us tremendously to have as much context as possible,” says Raghavan.

We are a long way from the proverbial “ten blue links” of search results Google once provided. It has long displayed information boxes, image results, and direct answers. Today’s announcements are another step, one where the information Google provides is not just a ranking of relevant information but a distillation of what its machines understand by crawling the web.

In some cases – as with shopping – that distillation means you’ll likely be sending Google more pageviews. As with Lens, that is a trend worth keeping an eye on: Google search increasingly pushes you toward Google’s own products. But there is a bigger danger here, too. The fact that Google is telling you more things directly adds to a burden it has always carried: the need to speak with less bias.

By that I mean bias in two different senses. The first is technical: the machine learning models Google wants to use to improve search have well-documented problems with racial and gender bias. They are trained by reading large swaths of the web and, as a result, tend to pick up obnoxious ways of speaking. Google’s troubles with its AI ethics team are also well documented at this point – it fired two prominent researchers after they published a paper on this very issue. As Google Vice President of Search Pandu Nayak told The Verge’s James Vincent in his article on today’s MUM announcements, Google knows that all language models have biases, but the company believes it can avoid “offering it to people for direct consumption.”


A new feature called “Things to Know” will help users explore topics related to their searches.
Image: Google

Whether or not it can (and to be clear, it may not be able to), doing so elides another big question and another kind of bias. As Google begins to tell you more things directly via its own summaries of information, what point of view is it speaking from? As journalists, we often talk about how the so-called “view from nowhere” is an inadequate way to present our reporting. What is Google’s point of view? This is a problem the company has run into before, sometimes known as the “one true answer” problem. When Google tries to give people short, definitive answers using automated systems, it often ends up spreading bad information.

Presented with that question, Raghavan responds by pointing to the complexity of modern language models. “Almost all language models, if you look at them, are embeddings in a high-dimensional space. There are some parts of these spaces that tend to be more authoritative, some parts that are less authoritative. We can mechanically assess these things quite easily,” he explains. The challenge, Raghavan says, is then how to present some of that complexity to the user without overwhelming them.
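As a rough illustration of what “embeddings in a high-dimensional space” means in practice, here is a minimal Python sketch, not Google’s actual system: documents and a query are represented as vectors, their closeness is measured mechanically with cosine similarity, and the result is blended with an invented “authority” score. Every vector, document name, and weight below is hypothetical.

```python
# Minimal sketch of ranking with embeddings. All vectors and "authority"
# scores are invented for illustration; this is not Google's ranking logic.
import math

def cosine_similarity(a, b):
    """Angle-based closeness of two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dimensional embeddings (real models use hundreds or thousands of dimensions).
query_embedding = [0.9, 0.1, 0.3, 0.0]
documents = {
    "bike-repair guide": {"embedding": [0.8, 0.2, 0.4, 0.1], "authority": 0.9},
    "forum post":        {"embedding": [0.7, 0.1, 0.5, 0.2], "authority": 0.4},
    "unrelated recipe":  {"embedding": [0.0, 0.9, 0.1, 0.8], "authority": 0.8},
}

# Rank by a blend of semantic closeness and an authority signal.
ranked = sorted(
    documents.items(),
    key=lambda item: cosine_similarity(query_embedding, item[1]["embedding"]) * item[1]["authority"],
    reverse=True,
)
for name, info in ranked:
    sim = cosine_similarity(query_embedding, info["embedding"])
    print(f"{name}: similarity={sim:.2f}, authority={info['authority']:.2f}")
```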

But I feel like the real answer is that, for now at least, Google is doing what it can to avoid confronting the question of its search engine’s point of view by steering clear of domains where it could be accused of, as Raghavan puts it, “excessive editorializing.” Often, when talking to Google executives about these problems of bias and trust, they focus on the more easily defined parts of those high-dimensional spaces, like “authoritativeness.”

For example, Google’s new “Things to Know” boxes will not appear when someone searches for things Google has identified as “particularly dangerous/sensitive,” though a spokesperson says Google is not “allowing or disallowing specific categories, but our systems are able to scalably understand topics for which these types of features should or should not trigger.”

Google search – its inputs, outputs, algorithms, and language models – has become almost unimaginably complex. When Google tells us it can now understand the contents of videos, we take it for granted that it has the computing chops to pull that off, but the reality is that even indexing such a massive corpus is a monumental task that dwarfs the original mission of indexing the early web. (Google only indexes audio transcripts of a subset of YouTube, for the record, though with MUM it aims to do visual indexing and to index other video platforms in the future.)

Often when you talk to computer scientists, the traveling salesman problem comes up. It’s a famous puzzle where you try to calculate the shortest possible route between a given set of cities, but it’s also a rich metaphor for thinking through how computers do their machinations.
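To make the metaphor concrete, here is a toy brute-force traveling salesman solver in Python, purely illustrative and unrelated to anything Google runs: it tries every possible route, and the number of routes grows factorially with the number of cities, which is why simply throwing more machines at the problem only goes so far. The city coordinates below are invented.

```python
# Toy brute-force traveling salesman solver. Checking every route is fine
# for 5 cities but becomes hopeless quickly: (n - 1)! routes for n cities.
from itertools import permutations
import math

cities = {
    "A": (0, 0),
    "B": (1, 5),
    "C": (5, 2),
    "D": (6, 6),
    "E": (8, 3),
}

def route_length(route):
    """Total distance of a round trip visiting the cities in the given order."""
    total = 0.0
    for i in range(len(route)):
        total += math.dist(cities[route[i]], cities[route[(i + 1) % len(route)]])
    return total

# Fix the starting city and try every ordering of the rest.
start, *rest = cities
best = min((tuple([start, *perm]) for perm in permutations(rest)), key=route_length)
print("Shortest route:", " -> ".join(best), f"({route_length(best):.2f})")
print("Routes checked:", math.factorial(len(rest)))  # 24 here; 20 cities would need ~1.2e17
```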

“If you gave me all the machines in the world, I could solve some pretty big instances,” says Raghavan. But for search, he says, it’s unsolved and perhaps unsolvable by simply throwing more computers at it. Instead, Google needs to come up with new approaches, like MUM, that make better use of the resources Google can realistically create. “If you gave me all the machines that exist, I am still limited by human curiosity and cognition.”

Google’s new ways of understanding information are impressive, but the challenge is what it will do with that information and how it will present it. The funny thing about the traveling salesman problem is that nobody ever seems to stop and ask what exactly is in the case, what is he showing all of his customers as he goes door to door?
