

The Betty & Gordon Moore Library


Tuesday 30 May 2017: Learned Machines

Last week I wrote about WikiCite, a conference that promises further advances in open bibliography. Personal interest drew me also to follow the “Future of Go” conference in Wuzhen, China, which started on the same day as WikiCite. I have been a go player since I was a teenager, and the game has given me reasons to visit East Asia and Uganda. Now a program, AlphaGo, has taken over, reprising for go the revelation of machine strength that chess players experienced with Deep Blue back in 1997.

[AlphaGo image]

From a data angle, AlphaGo differs from Deep Blue in a significant way. There is a go literature, mostly in East Asian languages, that stands comparison with what you can read about chess; and there are hundreds of professional players. The databases differ, though: those of high-level go games are an order of magnitude smaller than those for chess, partly for cultural reasons. In Japan, certainly, game collections were traditionally assembled as anthologies, rather than paying tribute to the archival “completism” that we in the West now find natural.

Go begins with an empty board, and its tree of reasonable ways to play typically branches faster than chess’s, with games also being longer. In short, its possibilities are harder to exhaust. I accumulated books and magazines with go games in them, and later I took part in the digitisation of game records for GoGoD, a curated collection. AlphaGo, in contrast, could learn from records made by go servers, where games are generally played a little casually.

[Go Puzzle image]

Server go is really not the place to look for masterpieces. Remarkably, AlphaGo’s trainers needed only a sufficient quantity of games played by strong players. This was “noisy” data, comprising games with mistakes and even blunders, but enough of it was available. Some essentials of the game were captured without further human intervention, whereas Deep Blue relied on chess knowledge formalised and mediated by a team of human experts. Once that fundamental knowledge is gained, a program can play itself tirelessly and rapidly build on it. That would be a working definition of “the basics”.
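The self-play idea can be illustrated on a game far smaller than go. Below is a minimal sketch, in Python, of tabular self-play learning on the toy game of Nim (take one to three stones from a pile; whoever takes the last stone wins). This is an illustrative assumption of the general principle only, not AlphaGo’s actual method, which combines deep neural networks with Monte Carlo tree search; every name and parameter here is invented for the example.

```python
import random

def self_play_train(episodes=20000, pile=7, epsilon=0.3, seed=0):
    """Self-play on Nim: each (pile, move) pair accumulates a running
    win-rate from the perspective of the player making the move."""
    rng = random.Random(seed)
    wins = {}    # (pile, move) -> games won by the mover
    visits = {}  # (pile, move) -> times the move was tried

    def value(s, m):
        n = visits.get((s, m), 0)
        return wins.get((s, m), 0) / n if n else 0.5  # neutral prior

    for _ in range(episodes):
        state, history, player = pile, [], 0
        while state > 0:
            moves = list(range(1, min(3, state) + 1))
            if rng.random() < epsilon:           # explore occasionally
                move = rng.choice(moves)
            else:                                # otherwise play greedily
                move = max(moves, key=lambda m: value(state, m))
            history.append((state, move, player))
            state -= move
            player ^= 1
        winner = history[-1][2]  # whoever took the last stone wins
        for s, m, p in history:  # update win-rates from the outcome
            visits[(s, m)] = visits.get((s, m), 0) + 1
            wins[(s, m)] = wins.get((s, m), 0) + (p == winner)

    def policy(s):  # greedy policy read off the learned values
        return max(range(1, min(3, s) + 1), key=lambda m: value(s, m))
    return policy
```

With enough episodes the extracted policy tends to discover the textbook strategy of leaving the opponent a multiple of four stones, despite training only on its own noisy, exploratory games; no Nim knowledge was coded in beyond the rules.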

Children, it is noticeable, can unlearn and correct themselves, something we find familiar in their acquisition of language. Adult learners, whether of languages, go or anything else, can do well with flawless exemplars, but may make quite a meal of rules that prove to have exceptions: instructional material for them must be made quibble-proof. A key conclusion, and one I’m taking to heart in my ContentMine work: machine learning can now be expected to cope with noise in data. Further, the advance from Deep Blue to AlphaGo looks much like the difference between the current method of getting humans to vet ContentMine facts before placing them in Wikidata, and having a robot reader that does the bulk of the vetting for us. Back in 1997 I was teaching go to Demis Hassabis, creator of AlphaGo. Twenty years on, and, no bad thing, I’m the one with something to learn.
