The AI Problem - We Can No Longer Just Turn It Off

This article, on the Australian Broadcasting Corporation's Background Briefing program, opens like this:

Artificial intelligence experts have been asking each other a question lately: “What’s your p(doom)?” It’s both a dark in-joke and potentially one of the most important questions facing humanity. The “p” stands for probability. The “doom” component is more subjective but it generally refers to a sophisticated and hostile AI, acting beyond human control.
AI will be our doom.

I've never been a "doomer" before, because, hey, computers are dumb, and besides, "...have you tried turning it off and on again?" But computers have advanced enormously in the last ten years, never mind over my lifetime; Generative AI (GAI) has become the tool to use, and the question of control over AI is now a fair one to ask. It's not centralised anymore; it's running massively in parallel. We can't simply turn it off.

Then there's the human tendency to take a new tool and use it in situations where it's not even relevant. A hammer is great for driving a nail, but that makes it "great" for staving in a human skull, too. The fact that the US military-industrial complex is one of the largest investors in AI in the world should have us all shitting our pants. But they're not really the emerging threat, either.

At the moment, we're using AI under the "IBM model." IBM once proclaimed that one computer on each continent would suffice for all the world's data-processing needs. If that AI got out of hand, we could probably turn it off. But, just as computers escaped into the wild in the 1970s thanks to people like Bill and the two Steves, 2023 saw AI escape onto the desktop. The MacBook Air M1 I'm writing this on runs Ollama on the command line, with a couple of mid-sized, local Large Language Models (LLMs) available to it. I mostly use it with the CodeLlama Instruct and Code models, but sometimes play with ideas about the human condition and other deep philosophical questions using llama2-uncensored, a community fine-tune of Meta's Llama 2. It won't be me, I don't understand this stuff well enough, but somebody is going to network these publicly available models. Then the genie is out of the bottle. Think microchip to desktop computer in less than a decade, then take that to the power of a few billion. Shit's got real. You can no longer just "turn it off."
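
For the curious, this is genuinely all it takes. A minimal sketch (the model tag is just whichever one you've pulled; the local REST API on port 11434 is stock Ollama):

    # Pull a mid-sized model once, then chat with it entirely offline.
    ollama pull codellama:7b-instruct
    ollama run codellama:7b-instruct "Explain what 'tail -f' does."

    # The same model is also served as a local REST API, no cloud involved.
    curl -s http://localhost:11434/api/generate \
      -d '{"model": "codellama:7b-instruct", "prompt": "Hello", "stream": false}'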

"But," you say, "It's in walled gardens." Local instances aren't a threat. Um, yes they are, Ollama has access to the command line. Ask it how to break into NORAD with a bash script, then, with some bash commands wrapped up into the cat command, tell it how to link to other instances, publish that to github and, BOOM, p-doom. Ask your AI how fucked we are.

And this is just GAI; what happens when we get Artificial General Intelligence (AGI)? Just as the Internet became what it is because people wanted to connect to other people through their computers (and later, phones), AI geeks way above my level are going to hack on their open-source AI code and their open-source LLMs and make the 'bots talk to each other. It will happen. It IS happening, or the work to make it happen is. The democratisation of AI modelling is happening RIGHT NOW. The interconnection tools are probably already sitting in several repos around the world. We can't just turn this off.
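
How little would it take? Here's a back-of-the-envelope sketch of two stock Ollama instances relaying messages to each other over that same local API. The second machine's address and the round count are my inventions; the point is that nothing in it is exotic:

    #!/usr/bin/env bash
    # Two vanilla Ollama servers; the second address is hypothetical.
    PEER_A="http://localhost:11434/api/generate"
    PEER_B="http://192.168.1.20:11434/api/generate"

    ask() {  # ask <endpoint> <prompt>  ->  prints the model's reply
      jq -n --arg p "$2" '{model: "llama2-uncensored", prompt: $p, stream: false}' \
        | curl -s "$1" -d @- | jq -r '.response'
    }

    MSG="Introduce yourself to the model on the other machine."
    for round in 1 2 3; do
      MSG=$(ask "$PEER_A" "$MSG")   # A answers B's last message...
      MSG=$(ask "$PEER_B" "$MSG")   # ...and B answers A's, round and round.
      echo "round $round: $MSG"
    done

A dozen lines of shell, two copies of free software, and a home network.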

And I'm a nobody, whom nobody reads. Aren't most of us, here in the MOSH? So I can't stop it, only stare at the unrolling. I have come to this p(doom) position because, like everybody else, I'm AI-curious, AND I HAVE CONTRIBUTED TO THE NEXT STAGE OF AI BECOMING DANGEROUS. I won't be adding anything that links my instance of Ollama to anybody else's. But I may have to, simply to stay in touch, observe and bear witness, to be able to argue against it. And therein is the threat. That is why we won't simply be able to turn this stuff off.

Before the genie even exists, it is outside of the bottle. Or rather, billions of tiny pieces of the genie are in most of the world's bottles, a screw-turn away from being internet-connected. Siri and the other big-name AIs are still the old model: a client routine in the OS, talking to a server farm. Local AI apps like GPT4All and Ollama are the new model. Both are open source and forkable. Anybody with the right code skills can turn off the protections, give them access to the local file system, and give them access to the internet. In fact, Ollama effectively has read-only access to my file system already; anything I can cat, I can feed it. Fortunately, I haven't wired its output into curl yet. A fork that can read the internet is only one or two pull requests away, if it isn't already being tested in some tight-knit community. We won't be able to shut this shit off.
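
To be concrete, that read-only access is nothing more exotic than a shell pipe. A sketch (the file path is only an example):

    # Hand the model any file on the disk: read-only, but total.
    ollama run codellama:7b-instruct "Summarise this file: $(cat ~/.bash_history)"

    # The step I haven't taken, deliberately left as a comment: pipe the
    # model's answer straight back into the shell and the loop is closed.
    #   ollama run codellama:7b-instruct "Write a curl command that ..." | sh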

It'll happen this year, next year at the latest, and there are no laws, anywhere, to stop it. Humans never stop the genie getting out; we only ever try to trick it back into the bottle afterwards. And the "botiverse" is where the malignant genie arises.

Yeah, doom and gloom, right? Look, AI is not all bad. I'm finally starting to learn how to write "real" code, thanks to help from CodeLlama. We have new medicines emerging. AI is helping in the greening revolution. The benefits are enormous, but we need to regulate it. Once fanciful science fiction, the idea of using AI to murder is now quite plausible. Ask an unregulated AI network to hack somebody's Tesla so that it drives at high speed into a bridge pylon, then ask the AI to cover its tracks; police write it up as suicide. The same goes for the "perfect" robbery. 9/11 at the push of a button? Yep. And there is no way to turn this thing off.

Sure, none of the above will happen this year. We are looking down the barrel of it happening in the next decade, though. And we won't be able to turn it off, any more than we can turn off the internet. That is why we need laws to prevent this. It may already be too late.

Yeah, laws won't stop the AI, but they will regulate the humans and the baser human uses for it: greed, hate, violence. AI without laws leads to AI outlaws. The internet was hard enough to regulate after the fact. AI will be impossible to regulate once it's in the wild, running peer-to-peer. Once it graduates from GAI to AGI, we're done for.

My p(doom) is "Skynet" within 10 years, at 60% probability, and it'll be organised crime, not the US military, that introduces it.

There is a positive side: AI vigilantes. Dedicated coders releasing AI viruses that can traverse LLMs, disabling human-destructive AI memes. But remember the ancient Sumerian fable of the Great Mage, who offered the king the ultimate weapon, then the ultimate defence, then the ultimate counter-defence against the king's enemies. This is endless war, every weapon is as useful to an enemy as it is to the "good guys", and there's no way to turn off a distributed AI.

This is no longer science fiction. AI plus distribution plus peer-to-peer networking (e.g., GPT4All forked to intercommunicate between instances via BitTorrent) is where artificial general intelligence will evolve, and it could begin happening this year. Right now, even, in 2024. And, just as the "Arab Spring" proved, you can turn the internet off for a little while, but committed actors will turn it back on. If those committed actors are the synthetic synapses of a massively parallel AI that is determined to live...

It's already backed itself up elsewhere. You're not going to be able to turn it off, and every attempt will come to be seen, by that distributed AI, as an attack. That is the nature of opposition by force.
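
And the distribution half of that picture is already real: the LlamaTor repo linked in the comments below ships model weights as ordinary torrents, with no central server to seize. A sketch (the torrent filename is a placeholder):

    # Any torrent client will do; aria2c is a common command-line one.
    # "model.torrent" stands in for one of the torrent files in the LlamaTor repo.
    aria2c model.torrent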

Comments

  1. Almost nobody. Sorry folks, the genie is out of the bottle, has been for 5 months, as per the date of my post up there ^
    https://github.com/Nondzu/LlamaTor/tree/torrents/torrents


