Is Google’s Gemini really smarter than OpenAI’s GPT-4? Community Detectives Find Out

Google launched Gemini, its latest artificial intelligence (AI) model, on December 6, announcing it as the most advanced AI model currently on the market, surpassing OpenAI’s GPT-4.

Gemini is multimodal. That is, it is built to understand and combine different types of information. It comes in three versions (Ultra, Pro, Nano) to cover a variety of use cases, and one area where it appears to outperform GPT-4 is its ability to perform advanced math and specialized coding.

Google released several benchmark tests comparing Gemini and GPT-4 upon its debut. The Gemini Ultra version achieved “state-of-the-art performance” on 30 of 32 academic benchmarks used in large language model (LLM) development.

Gemini and ChatGPT performance comparison. Source: Google

But critics online are taking aim at Gemini, questioning Google’s product marketing and the benchmark methodology behind the claims of Gemini’s superiority.

Misleading Gemini Promotions

A user on the social media platform X weighed in on Google’s benchmark claims.

He noted that Google may be exaggerating Gemini’s capabilities and “cherry-picking” results to make the case for its superiority. Nonetheless, he concluded, “I think Gemini is very competitive and will give GPT-4 a run for its money,” adding that competition in this space is good.

But soon after, he posted a follow-up saying Google should be “embarrassed” by the “misleading” promo video it made for Gemini’s launch.

In response to his post, other X users said they felt cheated by Google’s portrayal of Gemini. One user said claims that Gemini would end the GPT-4 era had been “cancelled.”

Another user, a computer scientist, agreed, calling Google’s portrayal of Gemini’s superiority “dishonest.”

Benchmark Botching

Users pointed out that the comparison was misleading because Google benchmarked Gemini against older versions of GPT-4 rather than the model’s current capabilities.

Another area of concern for social media sleuths was the parameters Google used to compare Gemini with GPT-4. The prompts given to the two models were not identical, which could have significantly affected the results.

The user also pointed out that the results came from tests performed on models that are currently “not publicly available.” Other users noted that testing Gemini’s advanced model against the advanced version of GPT-4, known as “Turbo,” might produce different scores.

Related: Elon Musk’s xAI files with SEC over alleged private sale of $1 billion in unregistered securities

Putting Gemini to the Test

Other social media users decided to ignore the benchmarks posted by Google and instead described their own experiences with Gemini compared to GPT-4.

Anne Moss, who works in web publishing and describes herself as a regular user of AI tools, particularly GPT-4, tried Gemini via Google’s Bard and said she was “underwhelmed by the experience.”

She concluded that she would stick with GPT-4, citing issues such as Gemini/Bard refusing to answer political questions and “lying” about knowing her personal information.

Another user, who works in app development, posted a screenshot of the same prompt asking both models to generate code based on a photo, highlighting Gemini/Bard’s underwhelming response compared to GPT-4’s.

According to Google, it plans to roll Gemini out more broadly to the public in early 2024 and integrate the model with its suite of apps and services.

Magazine: Real-life AI use cases in cryptocurrency: Cryptocurrency-based AI markets and AI financial analysis