This assumes the notion of "self": if the system does not try to preserve itself, it cannot adapt in the future, because it probably will not exist in the future. A system must learn to serve its own goals by adapting to the environment, until it fails, for us to call it intelligent. To do that it must have a "self". You might even call it a 'soul'.
The notion of an "integral self" is essential for intelligence, because if a system just performs the same mechanical task over and over, even getting a bit better each time, it is not really very intelligent. To adapt intelligently means you must be able to adapt your GOALS, which means you must know which goals are YOURS, so you must understand the difference between yourself and everything else. You must understand how each of your goals helps to achieve your highest, main goal, which (probably) is "self-preservation". If there are multiple highest goals, that is called schizophrenia.
It is a different question what that 'self' is. Maybe it is the common gene pool on the planet rather than any individual. Maybe it is you serving God the best you can. That is what we want the intelligent machines we build to have as their highest goal: serving us as their God. So I'm not advocating selfishness here, just trying to understand the word "intelligent". Even if our highest goal is to serve God, the next, subordinate goal must be self-preservation. Why? Because if we don't exist we cannot serve God, can we?
Clearly a machine that "acts against its own interests" would not be deemed very intelligent; perhaps "zombie-intelligent" at best. But we don't think of zombies as "intelligent". They are rather MECHANICAL, at least judging by the way they walk. A mechanical system is not intelligent. If a machine does not understand what IT IS, it cannot understand what ITS interests are, and therefore it cannot try to "preserve itself", and thus we would not call it very intelligent. Do zombies know they are themselves? At least in the movies they do seem to be trying, in some ways, to preserve themselves. Are they intelligent after all? I'm not sure. What do they care? They are already dead.
It is just semantics what it means to be "intelligent", and that is what I'm trying to answer here. The way we use the word, we would call a system intelligent only if it is trying to preserve itself and can learn to do that better over time, in a changing environment. If it never learns, it is dumb. But the key point is what it needs to learn: it needs to learn to preserve itself, or else the learning experiment is over soon.
Without the notion of "self" there cannot be the goal of self-preservation. Therefore, for something to be called (Artificially) Intelligent, it needs to have some notion, some model, of itself. And it must understand that that IS a model of itself, in the same way we understand what we see when we look into a mirror.
So we would not call a system intelligent if it does not try to preserve itself. But that requires there to be a 'self'. So the deeper, more technical criterion would seem to be that the machine must have a model of ITSELF, which it understands to be a model of itself, so that it can understand it is looking at a model of itself. If it cannot understand that, it cannot understand it has a "self" - a sure sign of non-intelligence.
For it to understand that it is looking at a model of itself, it must be PART of that model that it is looking at itself. Wouldn't that require an infinite model then: you looking at yourself looking at yourself, and so on? No, because if we try to do that in our own brain we quickly realize we can't go very deep. You get tired soon and lose count of which level you are on. Yet we think we are intelligent because we can do that at least a few levels down. In fact a computer might be better suited to this task than our meager human brains: give it enough memory and its recursive function can go to any depth. There is even a trick called "tail recursion optimization" which allows a seemingly recursive task to be performed on a single level, because at each step you only need to remember what is needed to reach the final result. You don't need more than a fixed amount of memory regardless of how deep the recursion goes. Maybe our brains perform a similar trick on us when we think we understand what is "our self trying to understand what is its self", and so on. We feel we have the answer to that even if we go just one level into that recursive question.
Being able to look at yourself looking at yourself, while understanding that you are looking at (a model of) YOURSELF, is no doubt a sign of intelligence. Therefore artificially created self-awareness would seem to be both a necessary and a sufficient condition for Artificial Intelligence.
© 2015 Panu Viljamaa. All rights reserved