Google adds AI-powered overviews for multisearch in Lens

In addition to a new gesture-powered search feature for Android devices, Google today also introduced an AI-powered addition to its visual search capabilities in Google Lens. Starting today, users will be able to point their camera at something, or upload a photo or screenshot to Lens, then ask a question about what they’re seeing to get answers via generative AI.

The feature is an update to the multisearch capabilities in Lens, which let users search using both text and images at the same time. Previously, these types of searches would simply take users to other visual matches, but with today’s launch, you’ll receive AI-powered results that offer insights as well.

As one example, Google suggests the feature could be used to learn more about a plant by snapping a photo of it, then asking, “When do I water this?” Instead of just showing the user other images of the plant, Lens identifies the plant and tells the user how often it should be watered, e.g. “every two weeks.” The feature relies on information pulled from the web, including websites, product pages, and videos.

However, Google clarified that while the Lens multisearch feature is offering generative AI insights, it’s not the same product as Google’s experimental GenAI search SGE (Search Generative Experience), which remains opt-in only.

The AI-powered overviews for multisearch in Lens are launching for everyone in the U.S. in English, starting today. Unlike some of Google’s other AI experiments, the feature is not limited to Google Labs. To use it, you’ll just tap on the Lens camera icon in the Google search app for iOS or Android, or in the search box on your Android phone.

Similar to Circle to Search, the addition aims to maintain Google Search’s relevance in the age of AI. While today’s web is cluttered with SEO-optimized junk, Circle to Search and this adjacent AI-powered capability in Lens aim to improve search results by tapping into a web of knowledge, including the many web pages in Google’s index, but delivering the results in a different format.

Google notes that its GenAI products, such as the Google Search Generative Experience, will cite their sources, allowing users to fact-check their answers. And though SGE will remain in Labs, Google said it will begin to introduce generative AI advances more broadly when relevant, as it’s doing now with multisearch results.

The AI overviews for multisearch in Lens arrive today, while the gesture-based Circle to Search arrives on January 31.
