{"id":5562,"date":"2023-06-12T12:26:38","date_gmt":"2023-06-12T16:26:38","guid":{"rendered":"https:\/\/sikaoer.com\/is-ai-a-nuke-level-threat-why-ai-fields-all-advance-at-once-dumb-pic-puns-cointelegraph-magazine\/"},"modified":"2023-06-12T12:26:38","modified_gmt":"2023-06-12T16:26:38","slug":"is-ai-a-nuke-level-threat-why-ai-fields-all-advance-at-once-dumb-pic-puns-cointelegraph-magazine","status":"publish","type":"post","link":"https:\/\/sikaoer.com\/is-ai-a-nuke-level-threat-why-ai-fields-all-advance-at-once-dumb-pic-puns-cointelegraph-magazine\/","title":{"rendered":"Is AI a nuke-level threat? Why AI fields all advance at once, dumb pic puns \u2013 Cointelegraph Magazine"},"content":{"rendered":"



We don't let just anyone build a plane and fly passengers around, or design and release medicines, so why should AI models be released into the wild without proper testing and licensing?

That's been the argument from an increasing number of experts and politicians in recent weeks.

With the United Kingdom holding a global summit on AI safety in autumn, and surveys suggesting around 60% of the public is in favor of regulations, it seems new guardrails are becoming more likely than not.

One particular meme taking hold is the comparison of AI tech to an existential threat like nuclear weaponry, as in a recent 23-word warning released by the Center for AI Safety, which was signed by hundreds of scientists:


"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Extending the metaphor, OpenAI CEO Sam Altman is pushing for the creation of a global body like the International Atomic Energy Agency to oversee the tech.

"We talk about the IAEA as a model where the world has said, 'OK, very dangerous technology, let's all put (in) some guard rails,'" he said in India this week.

Libertarians argue that overstating the threat and calling for regulations is just a ploy by the leading AI companies to a) impose authoritarian control and b) strangle competition via regulation.


Princeton computer science professor Arvind Narayanan warned, "We should be wary of Prometheans who want to both profit from bringing the people fire and be trusted as the firefighters."

Netscape and a16z co-founder Marc Andreessen released a series of essays this week on his techno-utopian vision for AI. He likened AI doomers to "an apocalyptic cult" and claimed AI is no more likely to wipe out humanity than a toaster because: "AI doesn't want, it doesn't have goals — it doesn't want to kill you because it's not alive."

This may or may not be true; after all, we have only a vague understanding of what goes on inside the black box of an AI's "thought processes." But as Andreessen himself admits, the planet is full of unhinged humans who can now ask an AI to engineer a bioweapon, launch a cyberattack or manipulate an election. So, the tech can be dangerous in the wrong hands even if we avoid the Skynet/Terminator scenario.

The nuclear comparison is instructive in that people got very carried away in the 1940s about the real world-ending possibilities of nuclear technology. Some Manhattan Project team members were so worried the bomb might set off a chain reaction, ignite the atmosphere and incinerate all life on Earth that they pushed for the project to be abandoned.

After the bomb was dropped, Albert Einstein became so convinced of the scale of the threat that he pushed for the immediate formation of a world government with sole control of the arsenal.


The world government didn't happen, but the international community took the threat seriously enough that humans have managed not to blow themselves up in the nearly 80 years since. Countries signed agreements to test nukes only underground to limit radioactive fallout and set up inspection regimes, and now only nine countries have nuclear weapons.

In their podcast about the ramifications of AI on society, The AI Dilemma, Tristan Harris and Aza Raskin argue for the safe deployment of thoroughly tested AI models.

"I think of this public deployment of AI as above-ground testing of AI. We don't need to do that," argued Harris.

"We can presume that systems with capacities that even their engineers don't yet understand are not necessarily safe until proven otherwise. We don't just shove them into products like Snapchat, and we can put the onus on the makers of AI, rather than on the citizens, to prove why they think that it's (not) dangerous."

Also read: All rise for the robot judge – AI and blockchain could transform the courtroom

The genie is out of the bottle

Of course, regulating AI might be like banning Bitcoin: nice in theory, impossible in practice. Nuclear weapons are highly specialized technology understood by just a handful of scientists worldwide and require enriched uranium, which is incredibly difficult to acquire. Meanwhile, open-source AI is freely available, and you can even download a personal AI model and run it on your laptop.

AI expert Brian Roemmele says that he's aware of 450 public open-source AI models and that "more are made almost hourly. Private models are in the 100s of 1000s."

Roemmele is even building a system to let any old computer with a dial-up modem connect to a locally hosted AI.

"Working on making ChatGPT available via dialup modem.

It is very early days and I have some work to do.

Ultimately this will connect to a local version of GPT4All.

This means any old computer with dialup modems can connect to an LLM AI.

Up next a COBOL to LLM AI connection!" pic.twitter.com/ownX525qmJ

— Brian Roemmele (@BrianRoemmele) June 8, 2023
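To get a sense of how low the barrier really is, here is a minimal sketch of chatting with a locally hosted model using the open-source gpt4all Python bindings that Roemmele's project builds on. Treat it as illustrative: the model filename is an example and the exact names may have changed since publication.

```python
# Minimal sketch: running an open-source LLM entirely on a local machine
# via the gpt4all Python bindings (pip install gpt4all).
from gpt4all import GPT4All

# Example model name; the weights (a few gigabytes) are downloaded
# automatically on first run. Check the GPT4All model list for
# currently available files.
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")

reply = model.generate(
    "Summarize the debate over licensing AI models in two sentences.",
    max_tokens=120,  # cap the length of the completion
)
print(reply)
```

Everything runs on the local CPU, with no API key and no internet connection required after the initial download, which is exactly why a licensing regime would be so hard to enforce.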

The United Arab Emirates also just released its open-source large language model, Falcon 40B, free of royalties for commercial and research use. It claims the model "outperforms competitors like Meta's LLaMA and Stability AI's StableLM."

There's even a just-released open-source text-to-video generator called Potat 1, based on research from Runway.

"I am happy that people are using Potat 1️⃣ to create stunning videos 🌳🧱🌊

Artist: @iskarioto ❤ https://t.co/Gg8VbCJpOY #opensource #generativeAI #modelscope #texttovideo #text2video @80Level @ClaireSilver12 @LambdaAPI" https://t.co/obyKWwd8sR pic.twitter.com/2Kb2a5z0dH

— camenduru (@camenduru) June 6, 2023

The reason all AI fields advanced at once

We've seen an incredible explosion in AI capability across the board in the past year or so, from text-to-video and song generation to magical-seeming photo editing, voice cloning and one-click deepfakes. But why did all these advances occur in so many different areas at once?

Mathematician and Earth Species Project co-founder Aza Raskin gave a fascinating plain-English explanation for this in The AI Dilemma, highlighting the breakthrough that emerged with the Transformer machine learning model.


"The sort of insight was that you can start to treat absolutely everything as language," he explained. "So, you can take, for instance, images. You can just treat it as a kind of language; it's just a set of image patches that you can arrange in a linear fashion, and then you just predict what comes next."

ChatGPT is often likened to a machine that just predicts the most likely next word, so you can see the possibilities of being able to generate the next "word" if everything digital can be transformed into a language.
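As a toy illustration of what "predict the most likely next word" means (real models learn rich neural representations rather than the raw counts used here), a few lines of Python can build a crude next-word predictor from bigram frequencies:

```python
from collections import Counter, defaultdict

# Count, for each word, which words follow it in a tiny training corpus.
corpus = "mitigating the risk of extinction from ai should be a global priority".split()
nexts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    nexts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen during 'training'."""
    return nexts[word].most_common(1)[0][0]

print(predict_next("risk"))  # -> "of"
```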

"So, images can be treated as language; sound, you break it up into little micro-phonemes, predict which one of those comes next, and that becomes a language. fMRI data becomes a kind of language, DNA is just another kind of language. And so suddenly, any advance in any one part of the AI world became an advance in every part of the AI world. You could just copy-paste, and you can see how advances now are immediately multiplicative across the entire set of fields."
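Here is a minimal sketch of the first half of that idea, flattening an image into a linear sequence of patch "tokens" the way Vision Transformer-style models do; the image and patch sizes are arbitrary, and only NumPy is assumed:

```python
import numpy as np

# A 224x224 RGB image becomes a "sentence" of 196 patch tokens: split it
# into a 14x14 grid of 16x16 patches and read them off in raster order.
image = np.random.rand(224, 224, 3)
patch = 16
grid = 224 // patch  # 14 patches per side

tokens = (
    image.reshape(grid, patch, grid, patch, 3)
    .transpose(0, 2, 1, 3, 4)   # group by (row, col) position in the patch grid
    .reshape(grid * grid, -1)   # one flat vector per patch
)
print(tokens.shape)  # (196, 768): a linear sequence, ready for next-token prediction
```

The same recipe, chopping a signal into discrete chunks and predicting the next one, carries over to audio, fMRI or DNA, which is why a breakthrough in one modality transfers to all of them.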

It is and isn't like Black Mirror

A lot of people have observed that recent advances in artificial intelligence seem like something out of Black Mirror. But creator Charlie Brooker seems to think his imagination is considerably more impressive than the reality, telling Empire Magazine he'd asked ChatGPT to write an episode of Black Mirror and the result was "shit."

"I've toyed around with ChatGPT a bit," Brooker said. "The first thing I did was type 'generate Black Mirror episode,' and it comes up with something that, at first glance, reads plausibly, but on second glance, is shit." According to Brooker, the AI just regurgitated and mashed up different episode plots into a total mess.

"If you dig a bit more deeply, you go, 'Oh, there's not actually any real original thought here,'" he said.

\"Black
\u201cBlack Mirror\u201d was better at predicting AI advances than AI was at writing \u201cBlack Mirror\u201d scripts (Netflix)<\/figcaption><\/figure>\n

AI pictures of the week

One of the nice things about AI text-to-image generation programs is they can turn throwaway puns into expensive-looking images that no graphic designer could be bothered to make. Here, then, are the wonders of the world, misspelled by AI (courtesy of redditor mossymayn).

Machu Pikachu (Reddit)
The Grand Crayon (Reddit)
The Great Ball of China (Reddit)
The Hooter Dam (Reddit)
The Sydney Oprah House (Reddit)
China's Panacotta Army (Reddit)

Video of the week

Researchers from the University of Cambridge demonstrated eight simple salad recipes to an AI robot chef, which was then able to make the salads itself and come up with a ninth salad recipe on its own.


Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist and at The Melbourne Weekly.