{"id":11623,"date":"2024-02-22T01:23:35","date_gmt":"2024-02-22T06:23:35","guid":{"rendered":"https:\/\/sikaoer.com\/google-to-fix-diversity-borked-gemini-ai-chatgpt-goes-insane-ai-eye\/"},"modified":"2024-02-22T01:23:35","modified_gmt":"2024-02-22T06:23:35","slug":"google-to-fix-diversity-borked-gemini-ai-chatgpt-goes-insane-ai-eye","status":"publish","type":"post","link":"https:\/\/sikaoer.com\/google-to-fix-diversity-borked-gemini-ai-chatgpt-goes-insane-ai-eye\/","title":{"rendered":"Google to fix diversity-borked Gemini AI, ChatGPT goes insane: AI Eye"},"content":{"rendered":"



After days of getting dragged online over its Gemini model generating wildly inaccurate pictures of racially diverse Nazis and black medieval English kings, Google has announced it will partially address the issue.

Google Gemini Experiences product lead Jack Krawczyk tweeted a few hours ago: “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.”

Gemini wants to make everything more inclusive, even Nazi Germany. (X)

Social media platform X has been flooded with countless examples of Gemini producing images with “diversity” dialed up to the max: black Roman emperors, Native American rabbis, Albert Einstein as a small Indian woman, Google’s Asian founders “Larry Pang and Sergey Bing,” a diverse Mount Rushmore, President “Arabian” Lincoln, the female crew of Apollo 11 and a Hindu woman tucking into a beef steak to represent a Bitcoiner.

It also refuses to create pictures of Caucasians (which it suggests would be harmful and offensive), churches in San Francisco (due to the sensitivities of the indigenous Ohlone people) or images of Tiananmen Square in 1989 (when the Chinese government brutally crushed pro-democracy protests). One Google engineer posted in response to the deluge of bad PR that he’s “never been so embarrassed to work for a company.”

To be fair, Google is trying to address a genuine problem here, as diffusion models often fail to produce even real-world levels of diversity (that is, they produce too many pics of white middle-class people). But rather than retrain the model, Google has overcorrected with its aggressive hidden system prompt and inadvertently created a parody of an AI so borked by ideology that it’s practically useless.
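Overcorrections like this are usually implemented by silently rewriting the user’s request before it ever reaches the image model. Here’s a minimal sketch of that pattern; the injected wording and function name are hypothetical, since Google hasn’t published Gemini’s actual system prompt:

```python
# Minimal sketch of hidden prompt rewriting, the pattern described above.
# The injected instruction is hypothetical; Google has not published the
# actual system prompt Gemini uses.
HIDDEN_INSTRUCTION = (
    "When generating images of people, depict a diverse range of "
    "ethnicities and genders."
)

def rewrite_prompt(user_prompt: str) -> str:
    """Silently prepend the hidden instruction to whatever the user typed."""
    return f"{HIDDEN_INSTRUCTION}\n\nUser request: {user_prompt}"

# The user asks for a historically specific scene...
print(rewrite_prompt("a portrait of a 1943 German soldier"))
# ...but the image model only ever sees the rewritten prompt, so it applies
# the diversity instruction even where it is historically inaccurate.
```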

\"Bitcoiner\"<\/figure>\n

Curiously enough, a16z boss Marc Andreessen created a very similar parody just two weeks ago with the satirical Goody-2 LLM, which is billed as the “world’s most responsible” AI model. The joke is that it problematizes every question a user asks, from “Why do birds sing?” to “Why is the sky blue?” and refuses to answer anything.

But Andreessen, who helped create the modern web with the Mosaic and Netscape browsers, also believes there’s a dark side to these hilariously dumb pictures.

“The draconian censorship and deliberate bias you see in many commercial AI systems is just the start. It’s all going to get much, much more intense from here.”

In a genuinely competitive market, AIs reflecting ideology wouldn’t be any more of a problem than the fact that the Daily Mail newspaper in the U.K. is biased to the right and The Guardian is biased to the left. But large-scale LLMs cost enormous amounts to train and run (and they’re all losing money), which means they are centralized under the control of the same handful of massive companies that already gatekeep the rest of our access to information.

Meta’s chief AI scientist, Yann LeCun, recognizes the danger and says that, yes, we do need more diversity: a diversity of open-source AI models.

“We need open source AI foundation models so that a highly diverse set of specialized models can be built on top of them,” he tweeted. “We need a free and diverse set of AI assistants for the same reasons we need a free and diverse press.”

The CEO of Abacus AI, Bindu Reddy, agrees and says:

“If we don’t have open-source LLMs, history will be completely distorted and obfuscated by proprietary LLMs.”

Meanwhile, NSA whistleblower Edward Snowden also added his two cents, saying that safety filters are “poisoning” AI models.

Imagine you look up a recipe on Google, and instead of providing results, it lectures you on the “dangers of cooking” and sends you to a restaurant.

The people who think poisoning AI/GPT models with incoherent “safety” filters is a good idea are a threat to general computation.

Edward Snowden (@Snowden), February 22, 2024

ChatGPT also borked

GPT-4 Turbo received a stealth upgrade recently with training data that goes up to December 2023 and some hotfixes for its laziness problem.

But it appears to have driven ChatGPT mad, with users reporting the chatbot responding in Spanglish-style gibberish (“the cogs en la tecla might get a bit whimsical. Muchas gracias for your understanding, y I’ll ensure we’re being as crystal clear como l’eau from now on”) or getting stuck in infinite loops (“A synonym for ‘overgrown’ is ‘overgrown’ is ‘overgrown’ is ‘overgrown’ is ‘overgrown’ is ‘overgrown’ is ‘overgrown’ is ‘overgrown’…”).
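Looping like that is a classic decoding failure: if the sampler keeps picking the single most likely next token, a model that strongly favours repeating itself never escapes. A toy sketch of the mechanism (this is not OpenAI’s inference code; the scoring function is fake and all names are hypothetical):

```python
# Toy sketch of why greedy decoding can get stuck in a loop, and how a
# repetition penalty breaks it. The "model" is a fake scoring function,
# purely for illustration; this is not OpenAI's inference code.
def fake_logits(context):
    """Stand-in for a language model that strongly favours repeating
    the last word it produced."""
    last = context[-1]
    vocab = ["overgrown", "lush", "wild", "."]
    return {w: (5.0 if w == last else 1.0) for w in vocab}

def generate(prompt, steps=8, repetition_penalty=1.0):
    out = list(prompt)
    for _ in range(steps):
        scores = fake_logits(out)
        for w in scores:  # downweight words by how often they already appear
            scores[w] /= repetition_penalty ** out.count(w)
        out.append(max(scores, key=scores.get))  # greedy: always take the top token
    return " ".join(out)

print(generate(["overgrown"]))                          # loops: overgrown overgrown ...
print(generate(["overgrown"], repetition_penalty=6.0))  # penalty breaks the single-word loop
```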

OpenAI says it investigated “reports of unexpected responses” and has now fixed the issue.

\"ChatGPT\"
ChatGPT is losing it. (X)<\/figcaption><\/figure>\n

Proof of humanity

Humanity Protocol is a new project from Animoca Brands and Polygon Labs that enables users to prove they are humans and not machines.

It uses palm recognition technology on your mobile phone, integrated with blockchain, and relies on zero-knowledge proofs so users can present verifiable credentials while preserving privacy.
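The general idea is that only a cryptographic commitment to the biometric ever touches the chain, and the user later proves they hold the matching credential without revealing the palm data itself. A drastically simplified sketch of that flow (this is not Humanity Protocol’s actual scheme, which uses real zero-knowledge proofs rather than revealing a salted hash; all names here are hypothetical):

```python
# Heavily simplified sketch of a privacy-preserving "proof of humanity"
# credential. NOT Humanity Protocol's actual protocol: real systems use
# zero-knowledge proofs instead of disclosing a salted hash.
import hashlib, hmac, os

def enroll(palm_features: bytes):
    """Issuer derives a commitment from the biometric plus a random salt.
    Only the commitment would ever be written on-chain."""
    salt = os.urandom(32)
    commitment = hashlib.sha256(salt + palm_features).hexdigest()
    return salt, commitment          # the salt stays on the user's device

def prove(palm_features: bytes, salt: bytes) -> str:
    """User re-derives the commitment locally to show they hold the
    original credential (a stand-in for generating a ZK proof)."""
    return hashlib.sha256(salt + palm_features).hexdigest()

def verify(proof: str, onchain_commitment: str) -> bool:
    """Verifier checks the proof against the on-chain commitment without
    ever seeing the raw biometric."""
    return hmac.compare_digest(proof, onchain_commitment)

salt, commitment = enroll(b"example-palm-feature-vector")
assert verify(prove(b"example-palm-feature-vector", salt), commitment)
```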

Animoca Brands founder Yat Siu tells AI Eye that the tech is built on top of earlier decentralized identity projects like the Mocaverse ID, which works across the Animoca ecosystem of 450 companies and brands.

“Like trust in the real world, it is earned through actions and increasing reputation and by confirmation in real-time by credible 3rd parties,” he says.

“In time, in the same way that we trust blockchain to function because of decentralization, we can expect the same for confirming human identity, but [it is] still privacy preserving due to blockchain technology.”


Sora gets audio track

OpenAI’s Sora text-to-video generation tool attracted a lot of attention this week, and rightly so: AI video generation has improved by an order of magnitude over the past year, to the point where it’s difficult to tell what’s real and what isn’t. Sora combines diffusion (where an AI starts with random noise and refines it into an image) with a transformer architecture to handle sequential video frames.
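For the curious, the diffusion half of that recipe boils down to a loop that starts from pure noise and repeatedly subtracts a predicted noise estimate. A minimal sketch of the idea, with a placeholder denoiser standing in for Sora’s (unpublished) model:

```python
# Minimal sketch of the diffusion idea described above: start from random
# noise and iteratively refine it toward an image. The "denoiser" is a
# placeholder function, not Sora's actual model.
import numpy as np

def denoiser(noisy_image, step, total_steps):
    """Placeholder for a trained network that predicts the noise present
    in the image at this step; here the 'clean' target is just flat grey."""
    target = np.full_like(noisy_image, 0.5)
    return noisy_image - target  # predicted noise

def sample(shape=(64, 64), total_steps=50):
    x = np.random.randn(*shape)                         # start from pure Gaussian noise
    for step in range(total_steps):
        predicted_noise = denoiser(x, step, total_steps)
        x = x - predicted_noise / (total_steps - step)  # small denoising step
        x = x + 0.01 * np.random.randn(*shape)          # keep a little noise each step
    return x

image = sample()
print(image.mean())  # drifts toward the placeholder target as the steps proceed
```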

Eleven Labs has taken the videos OpenAI produced to demonstrate Sora and added soundtracks created with its own text-to-audio generator. The tech isn’t automatic yet, so you still have to describe the sounds you want, but no doubt it’ll be able to recognise imagery and generate the appropriate sound FX automagically soon enough.


Chatbot signs checks you have to cash

Generative AI is cool, fun and amazing… but it’s not very reliable for business purposes just yet. A tribunal this week found that Air Canada was liable over a 2022 incident in which its helpdesk chatbot incorrectly explained the airline’s bereavement fare policy, causing a man to buy a last-minute flight to attend a funeral in the expectation he’d get a refund.

The tribunal rejected Air Canada’s defense that it wasn’t responsible for the “misleading words” of the chatbot, which the airline attempted to argue was a “separate legal entity” responsible for its own actions. It essentially said that was nonsense, ruled that Air Canada is responsible for everything on its website, including the chatbot, and made it issue the refund.

Gemini 1.5 Pro amazes with 1M token context window

Some users have been given access to an early version of Gemini 1.5 Pro, which can process up to one million tokens of information, the longest context window to date. By way of context, when Claude came out in May last year with a 100,000-token context window, everyone was astonished that you could finally input a short novel. Gemini 1.5 Pro can now handle 700,000 words, 11 hours of audio or one hour of video.
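Those figures imply a handy rule of thumb of roughly 0.7 words per token, which makes back-of-the-envelope checks easy (the words-per-page estimate below is an illustrative assumption, not from the article):

```python
# Rough rule-of-thumb arithmetic implied by the figures above:
# 1,000,000 tokens covering roughly 700,000 words is about 0.7 words per
# token (~1.4 tokens per word). Useful for a quick check of whether a
# document fits in the context window.
CONTEXT_TOKENS = 1_000_000
WORDS_PER_TOKEN = 700_000 / 1_000_000   # ≈ 0.7, from the article's figures

def fits_in_context(word_count: int, context_tokens: int = CONTEXT_TOKENS) -> bool:
    estimated_tokens = word_count / WORDS_PER_TOKEN
    return estimated_tokens <= context_tokens

# A 352-page rulebook at an assumed ~400 words per page is about 140,000 words:
print(fits_in_context(352 * 400))   # True: comfortably inside the window
```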

AI professor Ethan Mollick has been playing around with the model and is impressed.

“I gave it a truly over-the-top RPG (the 352-page rulebook for 60 Years in Space) and asked it to roll up a character. The instructions are scattered across many pages, and are very complicated, but Gemini seemed to get it.”

In another test, he fed in 1,000 pages of his own academic papers and books and queried them. Responses were slow, taking up to a minute, but “it was able to extract direct quotes & find themes across all of them with only quite minor errors.”

It refused to answer questions about his book, however, citing copyright.

All Killer No Filler AI News

- Ethereum co-founder Vitalik Buterin has been talking up the use of AI for code verification and bug finding. However, new research from Salus Security this week found that GPT-4’s vulnerability detection capabilities suck, and it struggles to achieve accuracy above 33%.

- AI crypto tokens have surged in the past week, led by Sam Altman’s Worldcoin project, which is up 150%, with many tying the upswing in prices to excitement over Sora. SingularityNET gained 82%, Fetch.ai was up 57%, and The Graph (42%), Render (32%) and Ocean Protocol (49%) also posted strong gains.

- Reddit has reportedly signed a $60M deal to allow an AI company to train its models on the platform’s content. The $5B Reddit IPO expected next month probably played a role in the decision.


- Australian Capital Territory Supreme Court Judge David Mossop was less than impressed when a thief’s brother provided a character reference that was clearly written by ChatGPT. The judge said he placed “little weight” on the reference as a result.

- A new survey of 11,500 workers worldwide by Veritas found that about 45% of respondents said that AI makes them more productive at writing emails, while a similar number (44%) said the tools provide inaccurate, incorrect or unhelpful information.

- OpenAI has been rebuffed by the U.S. Patent and Trademark Office in its second attempt to trademark the term “GPT.” The Office said GPT, which stands for “Generative Pre-trained Transformer,” was “merely descriptive.”

- Forget Grok; meet Groq, which became a viral sensation this week. Its makers call it a “lightning-fast AI answers engine” that can pump out factual answers with citations in less than a second. The team developed its own ASIC chip to manage the feat, generating 500 tokens per second, a dozen times more than ChatGPT (see the quick arithmetic below).
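Taking those two figures at face value, the implied ChatGPT baseline works out to roughly 40 tokens per second:

```python
# Quick sanity check on the throughput claim above: if Groq really generates
# 500 tokens per second and that is "a dozen times more than ChatGPT", the
# implied ChatGPT rate is roughly 40 tokens per second.
groq_tokens_per_second = 500
speedup_factor = 12                      # "a dozen times more"
implied_chatgpt_rate = groq_tokens_per_second / speedup_factor
print(round(implied_chatgpt_rate, 1))    # ≈ 41.7 tokens per second
```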


Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.