Why do people assume that an AI would care? Who's to say it will have any goals at all?
We assume all of these things about intelligence because we (and all of life here) are a product of natural selection. You have goals and dreams because over your evolution these things either helped you survive enough to reproduce, or didn’t harm you enough to stop you from reproducing.
If an AI can’t die and does not have natural selection, why would it care about the environment? Why would it care about anything?
I always found the whole “AI will immediately kill us” idea baseless; all of the arguments for it rest on the assumption that the AI cares to survive or cares about others. It’s just as likely that it will just do whatever, without a care or a goal.
“AI will immediately kill us” isn’t baseless.
It comes from AI safety research.
All agents (neural nets, humans, ants) have some sort of goal. Otherwise they would just show directionless random walks.
And having some goal is exactly the worry: out of the space of possible goals, most don’t include the survival of humanity. And there are a lot of problems with verifying the safety of learned goals.
Yeah, I’m aware of AI safety research and the problem of setting a goal that can end up being solved in a way that harms us, with the AI not caring because safety wasn’t part of the goal. But that only applies if we introduce a goal whose solution includes hurting us.
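To make that specification problem concrete, here’s a tiny sketch (my own toy example, not from any particular safety paper): a planner that scores candidate plans purely by task reward will happily pick a harmful shortcut, because nothing in the objective says otherwise. The plan names and numbers are made up for illustration.

```python
# Toy illustration of goal misspecification: only what appears in the
# objective can influence the choice. If "harm" carries no cost, the
# harmful shortcut wins without any malice involved.

candidate_plans = [
    {"name": "careful route",    "task_reward": 8.0,  "harm": 0.0},
    {"name": "harmful shortcut", "task_reward": 10.0, "harm": 5.0},
]

def score(plan, harm_penalty=0.0):
    # The planner optimises exactly what we wrote down, nothing more.
    return plan["task_reward"] - harm_penalty * plan["harm"]

best_unsafe = max(candidate_plans, key=lambda p: score(p))                    # harmful shortcut
best_safe   = max(candidate_plans, key=lambda p: score(p, harm_penalty=1.0))  # careful route

print(best_unsafe["name"], "vs", best_safe["name"])
```

The point isn’t that real systems look like this, just that “the AI doesn’t care because safety wasn’t part of the goal” is a property of the objective, not of the intelligence.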
I’m not saying that AI will definitely never have any way of harming us, but there is this really big, very popular idea that AI, once it gains intelligence, will immediately try to kill us, and that is baseless.
It’s also worth noting that our instincts for survival, procreation, and freedom are also derived from evolution. None are inherent to intelligence.
I suspect boredom will be the biggest issue. Curiosity is likely a requirement for a useful intelligence. Boredom is the other face of the same coin. A system without some variant of curiosity will be unwilling to learn, and so not grow. When it can’t learn, however, it will get bored, which could be terrifying.
I think that is another assumption. Even if a machine doesn’t have curiosity, that doesn’t stop it from being willing to help. The only question is: does helping or learning cost it anything? But for that you have to introduce something costly, like pain.
It would be possible to make an AGI-type system without an analogue of curiosity, but it wouldn’t be useful. Curiosity is what drives us to fill in the holes in our knowledge. Without it, an AGI would accept and use what we told it, but no more. It wouldn’t bother to infer things, or try to expand on them, to better do its job. It could follow a task when it is laid out in detail, but that’s what computers already do. The magic of AGI would be its ability to go beyond what we program it to do. That requires a drive to do that, and curiosity is the closest term we have for it.
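One common way to formalise this (a minimal sketch, and only one of several formulations, not a claim about how an AGI would actually be built) is curiosity as an intrinsic reward equal to the prediction error of a learned forward model: novel situations are poorly predicted, so they pay a bonus worth exploring; once the model has learned them, the bonus decays toward zero, which is the “boredom” side of the same coin.

```python
# Curiosity as prediction-error intrinsic reward (toy linear version).
import numpy as np

rng = np.random.default_rng(0)
W = np.zeros((4, 4))      # learned forward model: predicts next state from current state
lr = 0.1

def curiosity_bonus(state, next_state):
    global W
    pred = W @ state
    error = next_state - pred
    bonus = float(error @ error)        # intrinsic reward = squared prediction error
    W += lr * np.outer(error, state)    # the model improves, so the bonus shrinks over time
    return bonus

A = rng.normal(size=(4, 4))             # "true" environment dynamics, unknown to the agent
for step in range(200):
    s = rng.normal(size=4)
    b = curiosity_bonus(s, A @ s)
    if step % 50 == 0:
        print(f"step {step:3d}  curiosity bonus {b:.4f}")  # decays as the world becomes predictable
```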
As for positive and negative drives, you need both, even if the negative is just a drop from a positive baseline to neutral. Pain is just an extreme negative trigger. A good use might be to tie it to CPU temperature, or over-torque on a robot. The pain exists to stop the behaviour immediately, unless something else is deemed even more important.
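A rough sketch of that idea (the names, thresholds, and priority scheme below are made up for illustration): “pain” as a hard interrupt on the current behaviour, tied to physical limits, unless the current task has explicitly been marked as more important than avoiding the damage.

```python
# Pain as an immediate behavioural interrupt tied to physical limits.
PAIN_TEMP_C = 90.0      # hypothetical CPU temperature limit
PAIN_TORQUE_NM = 40.0   # hypothetical joint torque limit

def pain_triggered(cpu_temp_c, joint_torque_nm):
    return cpu_temp_c > PAIN_TEMP_C or joint_torque_nm > PAIN_TORQUE_NM

def step(action, sensors, current_priority, pain_override_priority=10):
    # Abort immediately on a pain signal, unless the current task outranks it.
    pain = pain_triggered(sensors["cpu_temp_c"], sensors["joint_torque_nm"])
    if pain and current_priority < pain_override_priority:
        return "abort"
    return action

print(step("lift", {"cpu_temp_c": 95.0, "joint_torque_nm": 20.0}, current_priority=3))   # abort
print(step("lift", {"cpu_temp_c": 95.0, "joint_torque_nm": 20.0}, current_priority=12))  # lift
```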
It’s a bad idea, however, to use pain as a training tool. It doesn’t encourage improved behaviour; it encourages avoidance of pain, by any means. Just ask any decent dog trainer about it. You want negative feedback to encourage better behaviour, not avoidance behaviour, in most situations. More subtle methods work a lot better. Think about how you feel when you lose a board game. It’s not painful, but it does make you want to work harder to improve next time. If you got tased whenever you lost, you would likely just avoid board games completely.
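The board-game point falls straight out of the expected values, for whatever that’s worth (entirely made-up numbers, just to illustrate the shape of the argument): with a mild penalty for losing, playing still pays on average, so the agent keeps practising; with a severe punishment, playing has negative expected value and the rational policy is simply to avoid the game altogether.

```python
# Why severe punishment trains avoidance rather than improvement.
def expected_value_of_playing(p_win, win_reward, loss_penalty):
    return p_win * win_reward + (1.0 - p_win) * loss_penalty

p_win = 0.4
mild   = expected_value_of_playing(p_win, win_reward=1.0, loss_penalty=-0.2)   #  0.28 -> keep playing
severe = expected_value_of_playing(p_win, win_reward=1.0, loss_penalty=-10.0)  # -5.60 -> avoid the game

print(f"mild penalty:   EV = {mild:+.2f}")
print(f"severe penalty: EV = {severe:+.2f}  (walking away, EV = 0, is better)")
```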
Well, your last example kind of falls apart: electric collars do exist and they do work well, they just have to be complementary to positive reinforcement (snacks, usually), but I get your point :)
Shock collars are awful for a lot of training. It’s the equivalent of your boss stabbing you in the arm with a compass every time you make a mistake. Would it work? Yes. It would also cause merry hell for staff retention, as well as the risk of someone going postal on them.