Google began rolling out a new search feature on Thursday that lets users search using both text and images. The new multisearch feature is part of Google’s ongoing effort to use AI to “create truly conversational, multimodal, and personal information experiences,” as Google CEO Sundar Pichai recently put it.
Multisearch is integrated into Google Lens, the image recognition tool accessible via the Google app. For now, the feature is available in beta for US users searching in English, and it is geared primarily toward shopping queries.
For example, a user can take a screenshot of an orange dress and add “green” to the query to find the same dress in that color. It’s also useful beyond shopping: a user can take a photo of a rosemary plant and add the query “care instructions” to learn how to look after the new plant.
To use the feature, open the Google app, tap the Lens camera icon, then select a screenshot or take a new photo. Swipe up and tap the “+ Add to your search” button to add text.
In its blog post on Thursday, Google said it is also exploring ways to enhance the feature with MUM (Multitask Unified Model), Google’s latest AI model. The tech giant recently described how it uses MUM and other AI models to more effectively surface crisis assistance information for people seeking help.
In February, on Google’s fourth-quarter earnings call, Pichai discussed Google’s investments in AI models that enable multimodal search.
“In 2022, we will remain focused on evolving our knowledge and information products, including Search, Maps, and YouTube, to be even more helpful,” he said. “Investments in AI will be essential, and we will continue to improve conversational interfaces like the Assistant.
“From MUM to Pathways to BERT and more, these deep investments in AI are helping us stay at the forefront of search quality,” he continued. “They also fuel innovation beyond Search.”