
Google’s new Gemini AI model dominates benchmarks, beats GPT-4o and Claude-3


There’s a new top dog in the world of generative artificial intelligence benchmarks, and its name is Gemini 1.5 Pro.

The previous champ, OpenAI’s GPT-4o, was finally surpassed on Aug. 1 when Google quietly launched an experimental release of its latest model.

Gemini’s latest update arrived without fanfare and is currently labelled as experimental. But it quickly gained the attention of the AI community across social media as reports began to trickle in that it was surpassing its rivals on benchmark scores.

Artificial intelligence benchmarks
OpenAI’s ChatGPT has been the standard-bearer for generative AI since the launch of GPT-3. Its latest model, GPT-4o, and its closest competitor, Anthropic’s Claude-3, have reigned at the top of most common benchmarks for the past year or so with little in the way of competition.

One of the most popular benchmarks is the LMSYS Chatbot Arena, which pits models against one another in blind head-to-head matchups judged by human voters and aggregates those votes into an Elo-style competency score. GPT-4o received a score of 1,286, while Claude-3 earned a respectable 1,271.
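
Those scores move one crowd vote at a time. As a rough illustration of the mechanics, here is a minimal sketch of a standard Elo update rule in Python; the K-factor of 32 and the sample ratings are illustrative assumptions, not the Arena’s actual parameters.

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one head-to-head vote."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    return (rating_a + k * (score_a - exp_a),
            rating_b + k * (exp_a - score_a))

# A 1,300-rated model beating a 1,286-rated one is only a slight upset of
# expectations (win probability of about 0.52), so the winner gains just
# under k/2 points and the loser drops by the same amount.
print(elo_update(1300.0, 1286.0, a_won=True))  # roughly (1315.4, 1270.6)

Because a single vote between closely rated models barely moves either score, stable leaderboard gaps like GPT-4o’s 14 points over Claude-3 only emerge after large numbers of votes.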

A previous version of Gemini 1.5 Pro scored 1,261, but the experimental version (Gemini 1.5 Pro 0801) released on Aug. 1 scored a whopping 1,300.

This suggests it’s more capable overall than its competitors, but benchmarks aren’t necessarily an accurate representation of what an AI model can and can’t do.

Community excitement
With deeper comparisons yet to be published, the practical takeaway is that the AI chatbot market has matured enough to offer several closely matched options. It’s ultimately up to end users to determine which AI model works best for them.

Anecdotally, there’s been a wave of excitement over the latest version of Gemini with users on social media calling it “insanely good.” One Redditor went so far as to write that it “blows 4o out of the water.”

It’s unclear at this time whether the experimental version of Gemini 1.5 Pro will become the default going forward. It remains generally available as of this article’s publication, but because it’s still in an early release or testing phase, the model could be withdrawn or changed for safety or alignment reasons.
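
For readers who want to try the experimental build while it remains available, it can be reached through Google’s google-generativeai Python SDK. A minimal sketch, assuming the release is exposed under the model ID gemini-1.5-pro-exp-0801 (our inference from the “Gemini 1.5 Pro 0801” label above; experimental IDs can be renamed or withdrawn without notice) and that you have an API key from Google AI Studio:

import os
import google.generativeai as genai

# Assumes an API key from Google AI Studio in the GOOGLE_API_KEY env var.
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])

# Model ID inferred from the "Gemini 1.5 Pro 0801" label; experimental
# builds may be renamed or pulled at any time.
model = genai.GenerativeModel("gemini-1.5-pro-exp-0801")

response = model.generate_content("In two sentences, what is an Elo rating?")
print(response.text)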
