Friday, April 24, 2015

The Madness of Groovy

This post is not against Groovy the programming language. I don't want language wars; I do like Groovy. This is about a general problem that plagues other programming languages too. I just came up with a catchy title to get your attention. I was looking into Gradle, a powerful Groovy-based build system, which I think is great. While doing that I naturally came into contact with some Groovy syntax, which gave me the inspiration to write this. A bit of madness can be a good thing when mixed with sanity.

The question I try to answer is: should there be several different ways of saying the same thing in a given language, or just a few? Or maybe just one? It depends on the details of the specific situation, of course. But my general belief is that it is BAD to have multiple ways if there is no good reason for them. I will try to illustrate this point with examples from Groovy.

In Groovy you can write this:

def code = { 123 }
assert code() == 123
assert code.call() == 123

In other words, you can evaluate a closure by placing "()" after it, or by placing ".call()" after it. My question: do we need two (or more) different ways of doing that? How does it help?

One way it could help is that the first way is shorter, and shorter is good, right? But then why do we need the second way? Maybe there is a valid reason in this particular case, and if so, then this is clearly a good thing. But if there is no good reason for multiple different ways of doing the exact same thing, it is bad.
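Groovy is not alone here. As a sketch of the same redundancy in another language (Ruby, chosen only for illustration since this is a general problem), a Ruby Proc object can be invoked in at least four equivalent ways:

```ruby
# Four equivalent ways to invoke the same Ruby Proc.
double = proc { |x| x * 2 }

puts double.call(21)   # 1. the explicit .call method      => 42
puts double.(21)       # 2. the .() shorthand syntax       => 42
puts double[21]        # 3. square-bracket invocation      => 42
puts double.yield(21)  # 4. the .yield alias for call      => 42
```

All four lines do exactly the same thing, and to read Ruby code fluently you have to recognize all four.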

It is like having two brake pedals in your car, just in case. That might make you feel safer, but in fact the multitude of pedals would get you confused. You'd end up pressing the gas pedal to stop the car. Or it's like riding a bicycle. You can ride the normal way, like most people do. Or you can lift your hands up in the air and shout "Look Mom, no hands!". So there are two different ways of riding a bicycle. But is that a good thing? I think it's clearly safer if you never even knew about that second way.

Second example: Calling a Groovy method that takes a closure as its last argument. It can be done in (at least?) three different ways. Let's first define the method:

def myMethod(aString, aClosure) {
    aClosure(aString)
}

We can now pass a closure to it as an argument in at least three different ways, which all have the same effect:

myMethod('madness', { arg -> println arg })

Above, two arguments are passed to myMethod() separated by a comma, as in most other programming languages. But in Groovy the above can also be written like this:

myMethod('madness') { arg ->
    println arg
}

That works because IF the last argument is a closure, it can be placed outside the parentheses. You can omit the comma then too. Clever, yes. But is that enough of a reason to have this second way, when the first example already works fine, and works like most other programming languages, without needing clever rules about the "type of the last argument"?

myMethod 'madness', { arg -> println arg }

Above shows you can call a method without ANY parentheses at all. But then you must put the comma back in. Clever? Maybe too clever.

The final example is from the Groovy documentation:

// equivalent to: turn(left).then(right)
turn left then right

That saves us four parentheses and looks truly, impressively clever. From the same document we can learn the rule: "If your command chain contains an odd number of elements, the chain will be composed of method / arguments, and will finish by a final property access".

In the same document there are many other examples of clever ways of writing one thing in different ways. They are intended to show how you can use Groovy to create Domain Specific Languages. But by now I think I'd prefer a simple general-purpose language instead, without myriad rules about how statements can mean different things based on whether you have an even or odd number of elements.

So let's get back to why having many different ways of writing the same thing is bad. You could say it doesn't matter because you don't need to learn them all, just learn one and use that. But you do need to learn them all if you want to read code written by someone else. And often being able to read code is as important as being able to write it.

Multiple different ways of doing things are bad because those multiple ways are different in each programming language. It's as if every make of car had a completely different type of dashboard and set of controls, the pedals in a different order, etc. That would be dangerous, right? Cars are powerful machines that can get people killed in accidents. Programming languages are even more powerful and dangerous; they run nuclear plants! We should strive to make them less dangerous, while still keeping them powerful.

I do like Groovy the language. Its one flaw for me is that it tries to be a language for creating Domain Specific Languages, but doesn't quite get there. If I really want my own domain to have its own language, I think I'll use Xtext for that.

Groovy probably isn't the worst offender in its many ways of doing the same thing. Maybe Perl is: there are at least FIVE different ways to iterate through a list in Perl, and to be able to read Perl code you have to learn them all.
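The original Perl example has not survived here, but the same phenomenon can be sketched in Ruby, which inherited Perl's "there's more than one way to do it" philosophy. Five equivalent ways to visit every element of a list:

```ruby
list = [10, 20, 30]

# 1. A for loop over indexes
for i in 0...list.size
  puts list[i]
end

# 2. A for-each loop over elements
for x in list
  puts x
end

# 3. The each iterator with a block
list.each { |x| puts x }

# 4. A while loop with an explicit index
i = 0
while i < list.size
  puts list[i]
  i += 1
end

# 5. The each_index iterator
list.each_index { |i| puts list[i] }
```

All five loops print the same three numbers, and real-world code uses all of these styles, so a reader must know every one of them.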

 © 2015 Panu Viljamaa. All rights reserved

Thursday, April 9, 2015

Artificial Intelligence requires Self-Awareness

What is Artificial Intelligence?  I would say that AI is about building systems which can adapt their behavior based on external stimuli in a way that allows them to adapt better, or at least adapt again in the future. But what does it mean to adapt? It means you change your behavior so that you are better able to SURVIVE.

This assumes the notion of "self":  If the system does not try to preserve itself, it can not adapt in the future, because it probably will not exist in the future.  A system must learn to "serve its own goals" by adapting to the environment, until it fails, in order for us to call it intelligent. To do that it must have a "self". You might call it 'soul'.

The notion of an "integral self" is essential for intelligence, because if the system just performs the same mechanical task over and over, even maybe better each time, it is not really very intelligent. To adapt intelligently you must be able to adapt your GOALS, which means you must know what YOUR goals are, so you must understand the difference between yourself and everything else. You must understand how each of your goals helps to achieve your highest, main goal, which (probably) is "self-preservation". If there are multiple highest goals, that is called schizophrenia.

It's a different question what that 'self' is. Maybe it is the common gene-pool on the planet rather than any individual. Maybe it's you serving God the best you can. That's what we want the intelligent machines we build to have as their highest goal - serving us as their God.  So I'm not advocating for selfishness here, just trying to understand the word "intelligent".  Even if our highest goal is to serve God, then the next subservient goal must be self-preservation. Why? Because if we don't exist we can not serve God, can we?

Clearly a machine that "acts against its interests" would not be deemed very intelligent; maybe "zombie-intelligent". But we don't think of zombies as "intelligent". They are rather MECHANICAL, at least based on the way they walk. A mechanical system is not intelligent. If a machine does not understand what IT IS, it can not understand what ITS interests are, and therefore it can not try to "preserve itself", and thus we would not call it very intelligent. Do zombies know they are themselves? It seems they are in some ways trying to preserve themselves, at least in the movies. Are they intelligent after all? I'm not sure. Because what do they care, they are already dead.

It is just semantics what it means to be "intelligent", and I'm trying to answer that here. The way we use that word, we would call a system intelligent only if it's trying to preserve itself and can learn to do that better over time, in a changing environment. If it never learns, it is dumb. But the key point is what it needs to learn: it needs to learn to preserve itself, or else the learning experiment is over soon.

Without the notion of "self" there can not be the goal of self-preservation. Therefore for something to be called (Artificially) Intelligent it needs to have some notion, some model, of itself. And it must understand that that IS the model of itself, in the same way we understand what we see when we look into the mirror.

So we wouldn't call a system which does not try to preserve itself intelligent. But that requires there to be a 'self'. So the deeper, more technical criterion would seem to be that the machine must have a model of ITSELF, which it understands to be a model of itself, so it can understand it is looking at a model of itself. If it can not understand that, it can not understand it has a "self" - a sure sign of non-intelligence.

For it to understand that it is looking at a model of itself, it must be PART of that model that it is looking at itself. Wouldn't that require an infinite model then, you looking at yourself looking at yourself... and so on? NO, because if we try to do that in our own brain we quickly realize we can't go very deep. You get tired soon, and lose count of what level you are on. Yet we think we are intelligent because we can do that at least a few levels down. In fact a computer might be better suited to this task than our meager human brains: just give it enough memory and its recursive function can go to any depth. There is even a trick called "tail recursion optimization" which allows a seemingly recursive task to be performed in a single stack frame, because at each step you only need to remember what is needed to reach the final result. You don't need more than a fixed amount of memory, regardless of how big your arguments are. Maybe our brains perform a similar trick on us when we think we understand what is "our self trying to understand what is its self..." and so on. We feel we have the answer to that even if we go just one level into that recursive question.
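The tail-call idea above can be sketched like this (Ruby for illustration; note that Ruby's VM only performs tail-call optimization as an opt-in, so this shows the shape of the transformation rather than claiming Ruby applies it by default):

```ruby
# Plain recursion: each level must remember "add 1 later",
# so the call stack grows with n.
def depth(n)
  return 0 if n.zero?
  1 + depth(n - 1)
end

# Tail-recursive form: the recursive call is the very last act,
# and everything worth remembering travels in the accumulator.
# A tail-call-optimizing compiler can run this in constant space.
def depth_tail(n, acc = 0)
  return acc if n.zero?
  depth_tail(n - 1, acc + 1)
end

# The loop a tail-call optimizer effectively turns it into:
# fixed memory, no matter how large n is.
def depth_loop(n)
  acc = 0
  until n.zero?
    n -= 1
    acc += 1
  end
  acc
end
```

All three functions compute the same answer; the point is that the second and third need only a fixed amount of memory to carry the partial result forward, which is the trick the post speculates our brains might play on us.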

Being able to look at yourself looking at yourself while understanding that you are looking at  (a model of) YOURSELF, is no doubt a sign of intelligence. Therefore artificially created self-awareness would seem to be both a necessary, and sufficient condition for Artificial Intelligence.

 © 2015 Panu Viljamaa. All rights reserved