Google introduces new way to search that combines images and text in one query


Earlier this year, at Google's annual I/O developer conference, the company introduced a new AI milestone called the Multitask Unified Model, or MUM. This technology can simultaneously understand information across a wide variety of formats, including text, images, and videos, and draw connections between topics, concepts, and ideas. Today, Google announced one of the ways it plans to put MUM to work in its own products, with an update to its Google Lens visual search tool.

Google Lens is the company's image recognition technology, which lets you use your phone's camera to perform a variety of tasks, such as real-time translation, plant and animal identification, copying and pasting text from photos, finding items similar to what's in the camera's viewfinder, getting help with math problems, and much more.

Soon, Google says, it will take advantage of MUM's capabilities to upgrade Google Lens with the ability to add text to visual searches, allowing users to ask questions about what they see.

In practice, here's how such a feature might work. You could pull up a photo of a shirt you like in Google Search, then tap the Lens icon and ask Google to find the same pattern, but on a pair of socks. By typing something like "socks with this pattern," you can direct Google to relevant results in a way that might have been more difficult if you had used text input alone.

Image credits: Google

This could be especially useful for the kind of queries Google struggles with today: those where there's a visual component to what you're looking for that's either hard to describe in words alone or could be described in several different ways. By combining the image and the words into a single query, Google may have a better chance of delivering relevant search results.
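MUM's internals are not public, but the general idea of fusing an image with a text refinement into one query can be sketched with an open multimodal model. Below is a minimal, illustrative sketch (not Google's actual system) using the open CLIP model via Hugging Face's transformers library: the image embedding and the text embedding are averaged into a single query vector and scored against candidates. The file name "shirt.jpg" and the candidate list are hypothetical placeholders, and a real index would embed product images rather than captions.

    # Conceptual sketch only; MUM's architecture is not public.
    # Shows one simple way to fuse an image and a text refinement
    # into a single multimodal query vector using open CLIP.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    # Embed the photo the user is looking at ("shirt.jpg" is a placeholder).
    image = Image.open("shirt.jpg")
    image_inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        image_emb = model.get_image_features(**image_inputs)

    # Embed the text refinement the user types.
    text_inputs = processor(text=["socks with this pattern"],
                            return_tensors="pt", padding=True)
    with torch.no_grad():
        text_emb = model.get_text_features(**text_inputs)

    # Normalize and average the two embeddings into one multimodal query.
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    query = (image_emb + text_emb) / 2

    # Score hypothetical catalog items (embedded from captions here for
    # simplicity) by cosine similarity and return the best match.
    candidates = ["paisley socks", "plain white socks", "striped scarf"]
    cand_inputs = processor(text=candidates, return_tensors="pt", padding=True)
    with torch.no_grad():
        cand_emb = model.get_text_features(**cand_inputs)
    cand_emb = cand_emb / cand_emb.norm(dim=-1, keepdim=True)
    scores = (query @ cand_emb.T).squeeze(0)
    print(candidates[scores.argmax().item()])

Averaging embeddings is only one of many possible fusion strategies; production systems typically train a dedicated model to combine the modalities, which is presumably closer to what MUM does.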

In another example, say a part on your bike has broken and you need to search Google for repair tips. But you don't know what the part is called. Instead of digging through repair manuals, you could point Google Lens at the broken part of your bike and then type "how to fix." Google could then connect you directly with the exact moment in a video that could help.

Image credits: Google

The company sees these AI-driven initiatives as ways to make its products "more useful" for end users by enabling new ways to search. By using the phone's camera as part of the search experience, Google aims to stay relevant in a market where many of its core use cases are starting to shift to other properties. For example, many shopping searches now begin directly on Amazon. And when iPhone users need to do something specific on their phone, they often just turn to Siri, Spotlight, the App Store, or a native app for help. Apple is also developing its own alternative to Google search. You can see the beginnings of that work in the iOS 15 update to Spotlight search, which now connects users directly to the information they need without requiring a Google query.

MUM is also being put to work in other ways across Google Search and video search, the company said during its Search On live event today.

Google says the Lens update will roll out in the coming months, noting that it still has to go through "rigorous testing and evaluation," part of the process it applies to every new AI model it deploys.

