
Thread: The future of warfare - robot or nobot?

  1. #1
    Iron Fist Senior Member Husar's Avatar
    Join Date
    Jan 2003
    Location
    Germany
    Posts
    15,617

    Default Re: The future of warfare - robot or nobot?

    Quote Originally Posted by Viking View Post
    ^ That post fails the Turing test.
    Is that a personal attack?

    How long do you think it would take to create software that has no bugs?
    And why would removing only the "most probable causes" be enough?
    How much do you think it would cost and who would be willing to pay for that?
    What if the unfinished, buggy AI already gets leaked and used by others?


    "Topic is tired and needs a nap." - Tosa Inu

  2. #2
    Hǫrðar Member Viking's Avatar
    Join Date
    Apr 2005
    Location
    Hordaland, Norway
    Posts
    6,449

    Default Re: The future of warfare - robot or nobot?

    Quote Originally Posted by Husar View Post
    Is that a personal attack?
    It's to say that the post makes perfect sense without context, but in the actual context, it doesn't make much sense.

    I say we cannot create a perfect AI; then you reply that my saying it does not make it true, even though you've been saying the same thing just above.

    How long do you think it would take to create software that has no bugs?
    That's almost certainly impossible, as I've already hinted at.

    And why would removing only the "most probable causes" be enough?
    Enough according to what? If you can remove the most probable causes, then the time that passes between rogue incidents should be long; possibly much longer than corresponding times for rogue humans (i.e. spree shooters etc.)

    How much do you think it would cost and who would be willing to pay for that?
    You might as well ask me to predict the future.

    What if the unfinished, buggy AI already gets leaked and used by others?
    Then we'd have to destroy or quarantine the robots that implement it.
    Runes for good luck:

    [1 - exp(i*2π)]^-1

  3. #3
    Iron Fist Senior Member Husar's Avatar
    Join Date
    Jan 2003
    Location
    Germany
    Posts
    15,617

    Default Re: The future of warfare - robot or nobot?

    Quote Originally Posted by Viking View Post
    It's to say that the post makes perfect sense without context, but in the actual context, it doesn't make much sense.

    I say we cannot create a perfect AI, then you say me saying it does not make it true, even though you've been saying the same thing just above.
    I see what you mean; maybe I was not clear enough. I meant that testing it long enough to find even the most probable causes and "just preventing them" is not easy just because you may think so. In the end it may not even be an AI anymore, because you completely restrict its ability to think for itself, IF you can find the will to build in enough restrictions. It would probably be easier to just program each function and not let the machine think for itself.

    Quote Originally Posted by Viking View Post
    That's almost certainly impossible, as I've already hinted to.
    Well, see above.

    Quote Originally Posted by Viking View Post
    Enough according to what? If you can remove the most probable causes, then the time that passes between rogue incidents should be long; possibly much longer than corresponding times for rogue humans (i.e. spree shooters etc.)
    How do you remove the most probable causes in something that can develop in almost any direction and can think for itself? In a network that can shift functionality from one area to another and that develops based on inputs long after you have put restrictions in place? As I said above, it may be possible but at some point it is not an AI anymore since you basically took away all of its abilities that made you call it an AI in the first place.

    Quote Originally Posted by Viking View Post
    You might as well ask me to predict the future.
    The idea was merely that it would cost a whole lot, since an AI - at least what I would call an autonomous AI - is supposed to be really complex and able to develop in a lot of ways. To predict the possible errors, you have to predict all the possible developments, and then you have to put certain restrictions in place for a bazillion possibilities, if the system even allows for this (the conditions could become really complex if e.g. a billion neurons all have to be in a certain state, yet you may not even know which state some of the neurons are in, or not before the error has already occurred). Some of it may also depend on whether you have a neural chip or a virtual neural network in a more traditional computer. If the neurons are stored in memory, you may be able to check the state of the system more easily than if you just have a chip that generates an output based on inputs but does not give you access to the intermediate steps, where errors might occur.
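    To illustrate that last point, here is a toy contrast (the "network" and its numbers are invented purely for illustration) between an opaque chip and a virtual network whose intermediate neuron states sit in ordinary memory and can be audited:

```python
# Toy contrast: an opaque input->output device versus a "virtual" network
# whose intermediate neuron states live in memory. The computation itself
# (one ReLU-style neuron) is invented purely for illustration.

def opaque_chip(x):
    """Black box: only the final output is observable."""
    return max(0.0, 0.5 * x) * 2.0

def virtual_network(x):
    """Same computation, but every intermediate state is exposed for audit."""
    pre = 0.5 * x                  # pre-activation, stored in memory
    hidden = max(0.0, pre)         # neuron state, also inspectable
    out = hidden * 2.0
    return out, {"pre": pre, "hidden": hidden}

out, states = virtual_network(4.0)
assert out == opaque_chip(4.0)                  # identical behaviour...
assert states == {"pre": 2.0, "hidden": 2.0}    # ...but the steps are visible
```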

    In other words, I can see that this would potentially be a really, really complex issue to solve, which may also be why creating such systems is usually not done in a garage so far. Also fixing an error in one place could cause errors elsewhere, meaning you may have to test everything again once you fix one potential problem......

    Quote Originally Posted by Viking View Post
    Then we'd have to destroy or quarantine the robots that implement it.
    So we invade the Middle East again, this time fighting against autonomous killer robots?


    "Topic is tired and needs a nap." - Tosa Inu

  4. #4
    Hǫrðar Member Viking's Avatar
    Join Date
    Apr 2005
    Location
    Hordaland, Norway
    Posts
    6,449

    Default Re: The future of warfare - robot or nobot?

    Quote Originally Posted by Husar View Post
    How do you remove the most probable causes in something that can develop in almost any direction and can think for itself?
    As you would do in regular software development: testing. You would have to try to include all possible input to the software and see how it responds. Certain conditions would lead to hostile behaviour against humans, and then you patch the software so that it doesn't happen and re-run the test from scratch.

    It's impossible for me to describe this in convincing detail; that would be a task for a subfield of computer science.
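    As a very rough sketch of that test-and-patch cycle - with a toy lookup-table "AI", an invented hostile-behaviour check, and made-up inputs standing in for real testing:

```python
# Toy sketch of the test -> patch -> re-test-from-scratch loop described above.
# The "AI" here is just a rule plus a patch table; the "hostile" trigger and
# the patching rule are invented for illustration.

def make_policy(patches):
    """Return a policy function with the given patches (overrides) applied."""
    def policy(stimulus):
        if stimulus in patches:
            return patches[stimulus]          # patched, safe response
        return "attack" if "threat" in stimulus else "ignore"
    return policy

def find_hostile_case(policy, test_inputs):
    """Run the whole test suite; return the first input that triggers hostility."""
    for stimulus in test_inputs:
        if policy(stimulus) == "attack":
            return stimulus
    return None

test_inputs = ["bird", "threat display (harmless)", "falling leaf"]
patches = {}
while True:
    bad = find_hostile_case(make_policy(patches), test_inputs)
    if bad is None:
        break                                  # full suite passes from scratch
    patches[bad] = "stand down"                # patch the offending case

assert find_hostile_case(make_policy(patches), test_inputs) is None
```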

    As I said above, it may be possible but at some point it is not an AI anymore since you basically took away all of its abilities that made you call it an AI in the first place.
    An intelligence does not cease being an intelligence just because it refuses to consider certain scenarios. Many or most humans would fail this criterion.

    it would cost a whole lot
    That's industrial and technological development in a nutshell.
    Runes for good luck:

    [1 - exp(i*2π)]^-1

  5. #5
    Iron Fist Senior Member Husar's Avatar
    Join Date
    Jan 2003
    Location
    Germany
    Posts
    15,617

    Default Re: The future of warfare - robot or nobot?

    Quote Originally Posted by Viking View Post
    As you would do in regular software development: testing. You would have to try to include all possible input to the software and see how it responds.
    All possible input is the Earth and everything that can happen on it; good luck with your testing.

    Quote Originally Posted by Viking View Post
    It's impossible for me to describe this in convincing detail; that would be a task for a subfield of computer science.
    I know how software testing works, but we are talking about an AI here, not MS Word; there is a difference.

    Quote Originally Posted by Viking View Post
    An intelligence does not cease being an intelligence just because it refused to consider certain scenarios. Many or most humans would fail this criterion.
    The AI I'm talking about has a neural network, and a neural network learns from experience. It might be quite hard to know where to block something, because you may not even know where exactly that decision comes from. If your friend keeps poking you with a stick, which of his brain cells would you affect, and how, in order to make him stop? Preferably without affecting any of his other functions.
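    To make that concrete, here is a toy hand-made "network" (weights and behaviour names are invented) where one hidden unit feeds several behaviours, so disabling it to suppress one behaviour disturbs the others too:

```python
# Toy illustration: in this tiny hand-made "neural network", one hidden unit
# contributes to several unrelated output behaviours, so blocking it to stop
# one behaviour also changes the others. All weights are invented.

def forward(x, disable_hidden=None):
    # two hidden units, each a weighted sum of the two inputs
    hidden = [0.9 * x[0] + 0.1 * x[1],
              0.4 * x[0] + 0.6 * x[1]]
    if disable_hidden is not None:
        hidden[disable_hidden] = 0.0           # "block" one unit
    # three output behaviours, all reusing the same hidden units
    return [round(0.7 * hidden[0] + 0.3 * hidden[1], 3),   # "poke"
            round(0.2 * hidden[0] + 0.8 * hidden[1], 3),   # "walk"
            round(0.5 * hidden[0] + 0.5 * hidden[1], 3)]   # "speak"

x = [1.0, 1.0]
baseline = forward(x)
ablated = forward(x, disable_hidden=0)        # try to remove the "poke" cause
changed = [i for i in range(3) if ablated[i] != baseline[i]]
# all three behaviours shift, not just the one we wanted to suppress
assert changed == [0, 1, 2]
```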


    "Topic is tired and needs a nap." - Tosa Inu

  6. #6
    The Black Senior Member Papewaio's Avatar
    Join Date
    Sep 2001
    Location
    Sydney, Australia
    Posts
    15,677

    Default Re: The future of warfare - robot or nobot?

    You cannot test for every scenario as that would have to predict every interaction with every human and every environmental situation (the entire ecosystem).

    An AI of ant level might be containable. However I doubt an ant farm of said AIs would be.
    Our genes may be in the basement but it does not stop us choosing our point of view from the top.
    Quote Originally Posted by Louis VI the Fat
    Pape for global overlord!!
    Quote Originally Posted by English assassin
    Squid sources report that scientists taste "sort of like chicken"
    Quote Originally Posted by frogbeastegg View Post
    The rest is either as average as advertised or, in the case of the missionary, disappointing.



  7. #7
    Member Member Gilrandir's Avatar
    Join Date
    May 2010
    Location
    Ukraine
    Posts
    4,011

    Default Re: The future of warfare - robot or nobot?

    Quote Originally Posted by Husar View Post
    If your friend keeps poking you with a stick, which of his brain cells would you affect, and how, in order to make him stop? Preferably without affecting any of his other functions.
    No need to affect the brain. Take another stick and poke him back.
    Quote Originally Posted by Suraknar View Post
    The article exists for a reason yes, I did not write it...

  8. #8
    Hǫrðar Member Viking's Avatar
    Join Date
    Apr 2005
    Location
    Hordaland, Norway
    Posts
    6,449

    Default Re: The future of warfare - robot or nobot?

    The word try was crucial when I spoke about testing. Even relatively simple programs can have so many possible states that you can't expect to test them all. By choosing the tests intelligently, though, coverage could become very decent. Intelligent software could help structure such tests.
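    To put a number on "so many possible states": even if a program's state were nothing but a few hundred independent on/off flags (a wild simplification), exhaustive testing is already out of reach, which is why tests must sample the space - here plain random sampling stands in for intelligent selection:

```python
# The state-explosion point made concrete: a program whose state is just n
# independent on/off flags has 2**n states. Exhaustive testing is hopeless
# well before n reaches the size of a real program, so a test suite can only
# sample the space (random sampling here as a crude stand-in).
import itertools
import random

n_small = 4
all_states = list(itertools.product([0, 1], repeat=n_small))
assert len(all_states) == 2 ** n_small         # 16: still exhaustively testable

n_large = 300                                  # still tiny next to real software
assert 2 ** n_large > 10 ** 90                 # more states than you could ever test

random.seed(0)
sampled = [tuple(random.randint(0, 1) for _ in range(n_large))
           for _ in range(100)]                # a 100-case "suite" barely scratches it
assert len(sampled) == 100
```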

    Implementation of the AI itself would also act as crucial testing. The first robots to implement them could have limited lifespans hardcoded into them. First 1 hour, then 1 day, then a week, then a year, then 5 years and so on as the operators grow more confident. Indeed, x years could be the upper age limit for such robots required by law, after which their software should reset itself automatically (before this, a copy should have been downloaded for study) - and then perhaps the robots and their memories should be destroyed manually by humans.
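    The escalating-lifespan idea could look roughly like this; time is simulated in one-hour ticks, and the class and stage durations are purely illustrative:

```python
# Sketch of the escalating hardcoded-lifespan idea: each deployment gets a
# time-to-live, and the control loop refuses to run past it. Time is simulated;
# the stage durations are the ones suggested in the post.

STAGES_HOURS = [1, 24, 24 * 7, 24 * 365, 24 * 365 * 5]   # 1h, 1d, 1w, 1y, 5y

class Robot:
    def __init__(self, lifespan_hours):
        self.lifespan_hours = lifespan_hours
        self.hours_run = 0
        self.memory = ["boot"]

    def tick(self):
        """One simulated hour of operation; returns False once expired."""
        if self.hours_run >= self.lifespan_hours:
            self.memory = []                   # mandatory self-reset
            return False
        self.hours_run += 1
        self.memory.append("h%d" % self.hours_run)
        return True

robot = Robot(STAGES_HOURS[0])                 # first trial: 1 hour
while robot.tick():
    pass
assert robot.hours_run == 1 and robot.memory == []   # ran its hour, then wiped
```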

    Another piece of external software could study the intentions of the AI before it was able to execute anything - i.e. all orders for physical movement, communication or other interaction with the external world would have to pass through this software. If it saw anything suspicious, it could shut down the robot and raise an alarm. Such software could also be very intelligent.
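    A minimal sketch of such a gatekeeper, assuming a made-up list of suspicious actions and placeholder action names:

```python
# Sketch of the external "intention monitor": every action the AI proposes
# must pass through a gate before reaching the actuators. The suspicion list
# and action names are invented placeholders.

SUSPICIOUS = {"target_human", "disable_monitor", "hide_logs"}

class Gatekeeper:
    def __init__(self):
        self.shut_down = False
        self.alarm = []

    def execute(self, action):
        """Forward an action to the hardware only if it passes inspection."""
        if self.shut_down:
            return "halted"                    # robot already stopped
        if action in SUSPICIOUS:
            self.shut_down = True              # kill switch
            self.alarm.append(action)          # raise the alarm
            return "blocked"
        return "executed"

gate = Gatekeeper()
assert gate.execute("move_forward") == "executed"
assert gate.execute("target_human") == "blocked"
assert gate.execute("move_forward") == "halted"   # robot stays shut down
```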
    Runes for good luck:

    [1 - exp(i*2π)]^-1
