Or they didn't put in safeguards, or the safeguards they did put in weren't adequate.
An intelligent AI isn't magic; it's machinery based on logic. What you need to do is build blocks and limitations into its foundation that prevent it from reaching conclusions you don't want it to reach. The exact implementation would vary from AI to AI.

Which was my point in the first place: an AI can make its own decisions and learn about, or come up with, its own concepts. What you are describing are not AIs but merely machines as we have them now. An AI in a computer game, so far, is not a real AI; it is more like a series of scripts that pretends to be clever.
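The "blocks and limitations" idea could be sketched, very loosely, as a filter that vets proposed actions before they execute. Everything here is illustrative: the `FORBIDDEN_GOALS` set, the action dictionaries, and `is_permitted` are hypothetical names, not any real AI framework.

```python
# Hypothetical sketch of foundational "blocks": a deny-list filter that
# checks an AI's proposed actions before any of them are carried out.
# All names and goal strings here are made up for illustration.

FORBIDDEN_GOALS = {"self_replicate", "disable_oversight", "harm_human"}

def is_permitted(action: dict) -> bool:
    """Reject any proposed action whose goal is on the forbidden list."""
    return action.get("goal") not in FORBIDDEN_GOALS

proposed = [
    {"goal": "answer_question"},
    {"goal": "disable_oversight"},
]
# Only actions that pass the filter are executed.
executed = [a for a in proposed if is_permitted(a)]
print(len(executed))  # 1 — the benign action survives, the forbidden one is blocked
```

A real implementation would of course be far harder, since the difficulty lies in specifying the forbidden set, but the structure of the argument is the same: the check sits beneath the decision-making, not inside it.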
Given that we have designed them, we can test their software over and over and discover the most probable causes of a rogue AI. Several AIs going rogue at the same time from a low-probability cause is, of course, itself improbable. So unless rogue units can efficiently convert other AI units to their cause (which we could guard against with extra blocks), we'd be dealing with rare, isolated cases of AIs going rogue, much as is the case for humans within our own societies.
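The claim that simultaneous failures are improbable can be made concrete with basic probability: if each unit goes rogue independently with some small probability p, the chance of k units failing together is p^k, which shrinks exponentially in k. The numbers below are purely illustrative assumptions, not measured rates.

```python
# Illustrative arithmetic for the "several AIs rogue at once" point.
# Assumes independent failures with a made-up per-unit probability p.

p = 1e-4          # assumed probability that one unit goes rogue
k = 3             # number of units going rogue simultaneously

p_simultaneous = p ** k
print(p_simultaneous)  # on the order of 1e-12 for these example numbers
```

This is also why the "conversion" caveat matters: if one rogue unit can recruit others, the failures are no longer independent and the p^k argument collapses.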