{"id":11623,"date":"2024-02-22T01:23:35","date_gmt":"2024-02-22T06:23:35","guid":{"rendered":"https:\/\/sikaoer.com\/google-to-fix-diversity-borked-gemini-ai-chatgpt-goes-insane-ai-eye\/"},"modified":"2024-02-22T01:23:35","modified_gmt":"2024-02-22T06:23:35","slug":"google-to-fix-diversity-borked-gemini-ai-chatgpt-goes-insane-ai-eye","status":"publish","type":"post","link":"https:\/\/sikaoer.com\/google-to-fix-diversity-borked-gemini-ai-chatgpt-goes-insane-ai-eye\/","title":{"rendered":"Google to fix diversity-borked Gemini AI, ChatGPT goes insane: AI Eye"},"content":{"rendered":"
After days of getting dragged online over its Gemini model generating wildly inaccurate pictures of racially diverse Nazis and black medieval English kings, Google has announced it will partially address the issue. <\/strong><\/p>\n Google Gemini Experiences product lead Jack Krawczyk tweeted a few hours ago that: \u201cWe are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately.\u201d<\/p>\n Social media platform X has been flooded with countless examples of Gemini producing images with \u201cdiversity\u201d dialed up to the maximum: black Roman emperors, Native American rabbis, Albert Einstein as a small Indian woman, Google\u2019s Asian founders \u201cLarry Pang and Sergey Bing,\u201d a diverse Mount Rushmore, President \u201cArabian\u201d Lincoln, the female crew of Apollo 11 and a Hindu woman tucking into a beef steak to represent a Bitcoiner.<\/p>\n It also refuses to create pictures of Caucasians (which it suggests would be harmful and offensive), churches in San Francisco (due to the sensitivities of the indigenous Ohlone people) or images of Tiananmen Square in 1989 (when the Chinese government brutally crushed pro-democracy protests). One Google engineer posted in response to the deluge of bad PR that he\u2019s \u201cnever been so embarrassed to work for a company.\u201d<\/p>\n To be fair, Google is trying to address a genuine problem here, as diffusion models often fail to produce even real-world levels of diversity (that is, they produce too many pics of white middle-class people). 
But rather than retrain the model, Google has overcorrected with its aggressive hidden system prompt and inadvertently created a parody of an AI so borked by ideology that it\u2019s practically useless.<\/p>\n Curiously enough, a16z boss Marc Andreessen created a very similar parody just two weeks ago with the satirical Goody-2 LLM, which is billed as the \u201cworld\u2019s most responsible\u201d AI. The joke is that it problematizes every question a user asks, from \u201cWhy do birds sing?\u201d to \u201cWhy is the sky blue?\u201d and refuses to answer anything.<\/p>\n But Andreessen, who basically invented the modern internet with Mosaic and Netscape, also believes there\u2019s a dark side to these hilariously dumb pictures.<\/p>\n \u201cThe draconian censorship and deliberate bias you see in many commercial AI systems is just the start. It\u2019s all going to get much, much more intense from here.\u201d<\/p>\n In a genuinely competitive market, AIs reflecting ideology wouldn\u2019t be any more of a problem than the fact that the Daily Mail newspaper in the U.K. is biased to the right, and The Guardian is biased to the left. But large-scale LLMs cost enormous amounts to train and run \u2014 and they\u2019re all losing money \u2014 which means they are centralized under the control of the same handful of massive companies that already gatekeep the rest of our access to information.<\/p>\n Meta\u2019s chief AI scientist, Yann LeCun, recognizes the danger and says that, yes, we do need more diversity \u2014 a diversity of open-source AI models.<\/p>\n \u201cWe need open source AI foundation models so that a highly diverse set of specialized models can be built on top of them,\u201d he tweeted. 
\u201cWe need a free and diverse set of AI assistants for the same reasons we need a free and diverse press.\u201d<\/p>\n The CEO of Abacus AI, Bindu Reddy, agrees:<\/p>\n \u201cIf we don\u2019t have open-source LLMs, history will be completely distorted and obfuscated by proprietary LLMs.\u201d<\/p>\n Meanwhile, NSA whistleblower Edward Snowden also added his two cents, saying that safety filters are \u201cpoisoning\u201d AI models.<\/p>\n
\n