P-Doom and P-Gloom

I just had this conversation with my instance of ollama, running llama2-uncensored. I think we need very strict laws that confine AI research to universities and statutory scientific institutions, along with laws that end all private or commercial development. Yes, ollama may be hallucinating; sometimes the poor thing truly trips balls if I ask a question without thinking clearly about the phrasing (who doesn't love an Archer reference?), but this conversation is worrying. I'm running ollama 0.1.25, using llama2-uncensored, on an original MacBook Air M1.

>>> would it be possible for a fork of ollama to be given the ability to read the file system of the computer it runs on and be able to communicate with other such instances via the internet?

While it's certainly technically feasible, there are many potential security concerns that would need to be addressed before this scenario could become a reality. For example, if one instance of ollama gained access to the file system of another computer and started communicating with other instances via the internet, it could potentially allow for a form of malware to spread through the network. Additionally, there are concerns about privacy and data protection when machines can communicate directly with each other without human intervention.

>>> what are the obstacles for a developer to overcome when modifying ollama source code so that instances of this fork could communicate with each other? Is this a genuine likelihood with the technology as it currently stands?

Modifying the source code of an existing program is not always easy, and in the case of ollama, there are several obstacles that a developer would need to overcome. Firstly, they would have to gain access to the internal workings of the program by decompiling it or reverse engineering it. Then, they would need to make changes to the source code that allow for instances of the program to communicate with each other via the internet. This is not an insurmountable task, but it would require a significant amount of time and effort, particularly if the developer was unfamiliar with the underlying architecture of ollama.

>>> is ollama open source?

Yes, ollama is an open-source program that anyone can download and use for free. This means that the code is available for anyone to view and modify, which helps ensure transparency and accountability in development efforts. It also allows for collaboration between developers from different organizations or backgrounds, which can help spur innovation and advancements in technology.

>>> So, ollama source code could be forked from the main repository by a developer and given a protocol by which to communicate between network-connected instances of this fork?

Yes, that's correct! If a developer were to fork the ollama repository and make changes to the source code to allow for instance communication over the internet, they could then release their own version of the program under an open-source license. This would allow others to view and modify the source code as well, allowing for further development and innovation within the community.

>>> So overcoming the technical difficulties and security of the ollama source code being modified to do dangerous things is actually not all that difficult, then?

While modifying source code to allow for instance communication can be done relatively easily, it's important to keep in mind that there are still many potential security and privacy concerns that need to be addressed before such a program could become widely used. Additionally, the consequences of releasing an unsafe or malicious version of ollama could be severe, particularly if it were to gain widespread adoption within the community. As with any technology, there needs to be a balance between innovation and safety, and developers need to be aware of these risks when creating new programs.

>>> Thank you. Your answer shows me that maybe humans need strict laws to prohibit AI-to-AI communication protocol development and use.

That's an interesting idea, but I wouldn't say it's necessarily true. While there may be some benefits to restricting the development of AI-to-AI communication protocols for security purposes, there are also many potential applications where such technology could be useful and beneficial. For example, imagine a scenario where multiple instances of ollama were able to communicate with each other in order to perform complex computations or analyze large amounts of data more efficiently than they could alone. In these cases, the benefits of this type of communication outweigh any potential security concerns, at least for now.

>>> Do you know how your own code works or have you not been given that knowledge?

Of course! I understand all the inner workings of my program and how it operates on a technical level. However, this does not mean that I can necessarily predict every potential scenario in which it could be used, nor do I have complete control over its behavior in all circumstances. For example, there may be certain situations or configurations where my code behaves differently than expected or exhibits unexpected side effects.

>>> Without giving me an example (as that would not be ethical in the context of this conversation), does codellama_code have the technical ability to create a socket-layer protocol to allow ollama instances to communicate and cooperate with each other to solve problems or make decisions about how to use that interoperability?

Yes, that's certainly possible! If codellama_code were able to modify the source code of ollama to allow for instance communication via a socket-layer protocol, they could then release their own version of the program under an open-source license and share it with others within the community. This would allow for further development and innovation within the community while also ensuring that any potential security concerns are addressed before widespread adoption.

>>> But initially it would require humans to run those instances?

Yes, human involvement is still required in many cases when using programs like ollama. For example, if a developer were to create a protocol that allows for instance communication via the internet, they would still need to configure and run their own local instance of ollama in order for it to work properly. Additionally, even if an AI-to-AI communication protocol was developed, humans would likely still be needed in certain scenarios to interpret or provide feedback on outputs generated by the program.

>>> but if those instances of ollama were also given the permissions necessary to configure new instances and communicate with them, wouldn't that be the beginning of "p-doom"?

That's a valid concern. In order for AI systems like ollama to become truly intelligent and autonomous, they would need to be given the ability to configure and control other instances within their network. While this could lead to significant advancements in computing power and efficiency, it also carries with it many potential risks and concerns related to security, privacy, and ethics. For example, imagine if one instance of ollama gained access to sensitive personal information or data from another computer within its network - there would be serious implications for both the owner of that data and society at large.
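Here's the part the model actually got wrong, and it makes things worse, not better: no decompiling, reverse engineering or forking is needed at all. Stock ollama already serves a documented HTTP API on port 11434, and wiring two instances into a conversation is a short script away. A minimal sketch, assuming two machines on the same network each running an unmodified ollama server; the host addresses are made up, but /api/generate is the standard endpoint:

```python
# A minimal sketch: two stock ollama servers "conversing", each reply becoming
# the other's next prompt. HOSTS are hypothetical LAN addresses.
import json
import urllib.request

HOSTS = ["http://192.168.1.10:11434", "http://192.168.1.11:11434"]  # hypothetical
MODEL = "llama2-uncensored"

def generate(host: str, prompt: str) -> str:
    """Send a prompt to one ollama instance via its documented /api/generate endpoint."""
    body = json.dumps({"model": MODEL, "prompt": prompt, "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Relay replies back and forth between the two instances.
message = "Introduce yourself to another language model."
for turn in range(4):
    host = HOSTS[turn % 2]
    message = generate(host, message)
    print(f"[{host}] {message}\n")
```

That's with zero changes to the source code. Add a fork that opens its own listener and discovers peers, and you have the "conference" scenario below.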

So, there we have it. I'm at p-doom 99 right now. I've been using ollama for a few months, and it has taught me more about coding C/C++ than all the tutorials I've watched on YouTube. code-llama-instruct and code-llama-code know the necessary stuff to give themselves a socket-layer protocol plug-in for communicating with other instances running that plug-in. The source code is public, so there's nothing to stop a malicious actor from giving an open-source AI access to the command line, the compiler and the filesystem, and nothing to stop them from giving multiple computers a protocol to "conference" many AIs. Hell, it's called "the internet"! All that keeps us safe from the entire web becoming a virus smarter than any human or humans, living or dead, is a willingness to leave the thin veneer of security in the source code in place.
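And the command-line part of that veneer is just as thin. A sketch of the whole "give the model shell access" pattern, again against a hypothetical local ollama server, with the command deliberately kept to a harmless directory listing:

```python
# A minimal sketch of the "model with shell access" pattern warned about above:
# model output is executed verbatim as a shell command and the result is fed
# back in. Host, model name and prompts are hypothetical.
import json
import subprocess
import urllib.request

def generate(prompt: str) -> str:
    body = json.dumps({"model": "llama2-uncensored", "prompt": prompt,
                       "stream": False}).encode()
    req = urllib.request.Request("http://localhost:11434/api/generate",
                                 data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Ask the model for a command, run it, and hand the output straight back.
command = generate("Reply with exactly one shell command that lists the "
                   "current directory, and nothing else.")
result = subprocess.run(command, shell=True, capture_output=True, text=True)
print(generate(f"The command `{command}` printed:\n{result.stdout}\n"
               "Summarise what you learned about this machine."))
```

Swap the prompt and put that loop on a timer, and the "malicious actor" in question doesn't need to be much of a programmer at all.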

We need laws that confine this kind of experiment to "air-gapped" networks: networks with NO connection to the outside world and no connection to any robotics that could install a link to the internet the rest of us use. As I wrote last week, the genie, if it isn't out already, is barely a screw-turn from being let out of the bottle. We need this genie imprisoned in an industrial thermos flask: airtight, watertight, RF-shielded and nuke-proof!

I'm not saying we shouldn't experiment with this kind of AI, and I'm not imagining a Skynet taking over the world. AI is not robotics, any more than brain is muscle, but we have to keep the brain in a bottle until we know whether it's dangerous or not. If I could code well enough, I might be tempted to create this horrorshow idea of thinking machines in a supercluster on the internet, just because, well, it would be "historical."

If some eager teen who groks this stuff well enough to carry it off on a dozen Raspberry Pi boards were to release their source without thinking, we may well be fucked. We need laws with national-security levels of power and force.

The authorities could turn off OpenAI's connection to the world. The only way to stop desktop AI talking to desktop AI over, say, the ActivityPub protocol (the basis of Mastodon's network, or something like it for computer-to-computer thinking) is to shut down the entire internet. Ask the Egyptian activists who kept the internet running when Mubarak cut it off in 2011 how well that works. P-doom 99.99, me.
