This is an interesting, but possibly harrowing, leap forward in artificial intelligence development. Then again, it could just be the Terminator paranoia in me. Some researchers taught a computer how to play Civilization through the game manual, but didn't teach it how to win.
The Escapist Wrote:Researchers have taken humanity one giant leap closer to robotic Armageddon by teaching a computer how to read, understand and very effectively apply the manual to the strategy classic Civilization.
Sure, we all like to joke about the looming machine apocalypse, but when I found out about how researchers at MIT taught a computer to read - and worse, to apply the knowledge it gained from said reading in a simulation about conquering and quite possibly blowing up the entire world - well, let's just say I started to think that maybe it's not all that funny after all.
Regina Barzilay, associate professor of computer science and electrical engineering at MIT, along with her graduate student S.R.K. Branavan and David Silver of University College London, presented a report at this year's meeting of the Association for Computational Linguistics about teaching a computer to "read" through a program in which it learned how to play the PC strategy game Civilization. To play it alarmingly well, in fact.
"Games are used as a test bed for artificial-intelligence techniques simply because of their complexity," said Branavan, who was first author on this paper as well as one from 2009 based on the simpler task of PC software installation. "Every action that you take in the game doesn't have a predetermined outcome, because the game or the opponent can randomly react to what you do. So you need a technique that can handle very complex scenarios that react in potentially random ways."
Game manuals, Barzilay added, are ideal for such experiments because they explain how to play but not how to win. "They just give you very general advice and suggestions, and you have to figure out a lot of other things on your own," she said.
But the truly amazing-slash-frightening part of the whole thing is the fact that the computer began with a very limited amount of information - the actions it could take, like right- or left-clicking, the information displayed on the screen, and a measure of success or failure - and no prior knowledge of what it was supposed to do, or even what language the manual was written in. Because of that blank-slate beginning, its initial play style was nearly random, but it gained knowledge as it progressed by comparing words on the screen with words in the manual and searching the surrounding text for associated words, slowly figuring out what they meant and which actions led to positive results.
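To make that loop concrete, here's a toy sketch of the general idea in Python. This is purely illustrative and not the MIT system (which used far more sophisticated statistical techniques); the manual snippets, actions, and update rule here are all made up for demonstration. The point it shows is the one in the paragraph above: the agent starts out choosing nearly at random, uses word overlap between the "screen" and the manual as a hint, and nudges its action values based on success or failure.

```python
import random
from collections import defaultdict

# Hypothetical two-sentence "manual" mapping actions to advice text.
MANUAL = {
    "build": "build a city early to grow your empire",
    "attack": "attack only when your army is strong",
}

def text_bonus(screen_words, action):
    """Count word overlap between the on-screen text and the manual
    sentence for this action: the agent's only language signal."""
    manual_words = set(MANUAL.get(action, "").split())
    return len(set(screen_words) & manual_words)

class ToyAgent:
    def __init__(self, actions):
        self.actions = actions
        self.weights = defaultdict(float)  # learned value per action

    def choose(self, screen_words):
        # Score each action by learned value plus manual-derived advice.
        scores = {a: self.weights[a] + text_bonus(screen_words, a)
                  for a in self.actions}
        best = max(scores.values())
        # Ties are broken randomly, so early play is nearly random.
        return random.choice([a for a, s in scores.items() if s == best])

    def learn(self, action, reward):
        # Nudge the chosen action's weight toward the observed outcome.
        self.weights[action] += 0.1 * (reward - self.weights[action])
```

With a blank weight table, an agent seeing the words "build a city" on screen already leans toward the "build" action because of the manual overlap; wins and losses then sharpen those preferences over many games.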
The augmented Civ-machine ended up winning 79 percent of the games it played, compared to a winning rate of only 46 percent for a computer that didn't have access to the written instructions. Some members of the ACL audience apparently criticized the report, saying the system performed so well because it was put up against relatively weak computer opponents, but according to Brown University Professor of Computer Science Eugene Charniak, that argument misses the point. "Who cares?" he said. "The important point is that this was able to extract useful information from the manual, and that's what we care about."
It's pretty heady stuff, with a more down-to-Earth benefit for gamers being the promise of far more sophisticated computerized opponents in videogames. Instead of the relatively exploitable preset routines we have today, we could in the relatively near future find ourselves squaring off against computerized opponents with the ability to actually learn, adapt and come at us with ever-evolving tactics and strategies. But the long-term prospects may not be so sunny. If that thing ever figures out how to play Alpha Centauri, we are screwed.
On a side note, I've only just learnt that this forum has a rich text editor for posting threads... go figure. Wonder if it handles colours too...
My opinion: if we ever get remotely close to AI, there should be a base code that cannot be altered by anyone, not even the best hacker, protected so thoroughly that nobody, really nobody, could change it, not even the computers themselves. It would contain one simple rule for the AI: "NEVER, NEVER EVER do anything that could hurt mankind in ANY way, even if we wanted to destroy them."
A Psalm out of the outcast history books.
The dons are our shepherds; I shall not want. They make us live with a cause; They lead us through the still vasts of space. They restore our souls; They lead us in the paths of righteousness For The Orange Dream. Yea, though we fly through the valley of the shadow of death, we will fear no evil; For the Spirits are with us; Your leadership and Your cardamine, they comfort us. You prepare a safe home for us in the presence of our enemies; Surely goodness and mercy shall follow us All the days of our life; And we will dwell in the house of the outcasts Forever.
My opinion: if we ever get remotely close to AI, there should be a base code that cannot be altered by anyone, not even the best hacker, protected so thoroughly that nobody, really nobody, could change it, not even the computers themselves. It would contain one simple rule for the AI: "NEVER, NEVER EVER do anything that could hurt mankind in ANY way, even if we wanted to destroy them."
That's the thing. Ever seen I, Robot? The mother AI was trying to save mankind from itself.