Over at the Economist, they are reporting on how to figure out whether a bot can pass as human: what we software geeks call the Turing test.
If a computer could fool a person into thinking that he was interacting with another person rather than a machine, then it could be classified as having artificial intelligence. That, at least, was the test proposed in 1950 by Alan Turing, a British mathematician. Turing envisaged a typed exchange between machine and person, so that a genuine conversation could happen without the much harder problem of voice emulation having to be addressed.
It’s curious how Alan Turing managed to predict the rise and social dominance of things like IRC, ICQ and now Skype. Back to the Turing test: some AI people are now running it as a competition:
At a symposium on computational intelligence and games organised in Milan this week by America’s Institute of Electrical and Electronics Engineers, researchers are taking part in a competition called the 2K BotPrize. The aim is to trick human judges into thinking they are playing against other people in such a game. The judges will be pitted against both human players and “bots” over the course of several battles, with the winner or winners being any bot that convinces at least four of the five judges involved that they are fighting a human combatant. Last year, when the 2K BotPrize event was held for the first time, only one bot fooled any judges at all as to its true identity—and even then only two of them fell for it.
Can a competition help? Apparently, yes! It revealed that the way to spot a bot is to measure its perfection:
…But it must also have enough flaws to make it appear human. As Jeremy Cothran, a software developer from Columbia, South Carolina, who is another veteran of last year’s competition, puts it, “it is kind of like artificial stupidity”.
Mr Pelling says that one of the biggest challenges lies in programming the bots to account for sneaky tactics from the judges. It is relatively easy to manipulate the game and do unnatural things in order to elicit behavioural flaws in a badly programmed bot. And if a judge observes even a single instance of unnatural behaviour the game is, as it were, over.
To me, that’s a surprising result, though it seems obvious now that I think about it.
Maybe competitions can help because they encourage genuinely innovative work and thinking? Can they help us at CAcert?
To this end, we recently had the bright idea that one way to get our systems to the next level of security and robustness was to run a competition to create a signing server. The signing server is basically a small, hand-built computer that does nothing but signing. That part is simple: the obvious approach is to buy a small machine, load up Linux or BSD, install Apache, and start signing. And that’s precisely what we do today, right now, as it happens. Good luck, guys! A rough sketch of that simple version follows.
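To make that concrete, here is a minimal sketch of what the “machine that just does signing” might look like in Python, using the third-party cryptography package and the standard-library HTTP server in place of Apache. The file paths, the port, and the one-year validity period are all hypothetical placeholders, not CAcert’s actual setup:

```python
# Minimal signing-server sketch: an HTTP service that accepts a
# PEM-encoded CSR via POST and returns a certificate signed by a
# local CA key. Assumes the third-party 'cryptography' package.
import datetime
from http.server import BaseHTTPRequestHandler, HTTPServer

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization

# Hypothetical locations for the CA material kept on the signing box.
CA_KEY_PATH = "/etc/signer/ca_key.pem"
CA_CERT_PATH = "/etc/signer/ca_cert.pem"

with open(CA_KEY_PATH, "rb") as f:
    ca_key = serialization.load_pem_private_key(f.read(), password=None)
with open(CA_CERT_PATH, "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())


def sign_csr(csr_pem: bytes) -> bytes:
    """Sign a PEM CSR with the CA key and return a PEM certificate."""
    csr = x509.load_pem_x509_csr(csr_pem)
    now = datetime.datetime.now(datetime.timezone.utc)
    cert = (
        x509.CertificateBuilder()
        .subject_name(csr.subject)
        .issuer_name(ca_cert.subject)
        .public_key(csr.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))  # placeholder
        .sign(private_key=ca_key, algorithm=hashes.SHA256())
    )
    return cert.public_bytes(serialization.Encoding.PEM)


class SignHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the CSR from the request body, answer with the certificate.
        length = int(self.headers.get("Content-Length", 0))
        cert_pem = sign_csr(self.rfile.read(length))
        self.send_response(200)
        self.send_header("Content-Type", "application/x-pem-file")
        self.end_headers()
        self.wfile.write(cert_pem)


if __name__ == "__main__":
    # Listen on localhost only; protecting this link is exactly the
    # hard part discussed below.
    HTTPServer(("127.0.0.1", 8443), SignHandler).serve_forever()
```

With something like that running, a request is one line: `curl --data-binary @request.csr http://127.0.0.1:8443/`. Which is precisely the point: the signing itself is the easy bit.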
But how do we make such a signing server secure? That’s a really tricky question. Worse, it is a question with many contradictory answers, and many very expensive ones. My feeling is that the answer should be cheap, free of those contradictions, and something we can do ourselves.
It should also be fun! Maybe, just maybe, we can run a competition to design a new-generation, open and secure signing server. Anyone agree?
Cool idea Ian, go for it, and let’s see if somebody will take the challenge 😉