That is to say: the post makes perfect sense without context, but in its actual context it doesn't make much sense.
I say we cannot create a perfect AI; you reply that my saying so doesn't make it true, even though you've been saying the same thing just above.
That's almost certainly impossible, as I've already hinted at.

How long do you think it would take to create software that has no bugs?
Enough according to what? If you can remove the most probable causes, then the time that passes between rogue incidents should be long; possibly much longer than the corresponding intervals for rogue humans (e.g. spree shooters).

And why would removing only the "most probable causes" be enough?
You might as well ask me to predict the future.

How much do you think it would cost, and who would be willing to pay for that?
Then we'd have to destroy or quarantine the robots that implement it.

What if the unfinished, buggy AI already gets leaked and used by others?