
Google Brain, Microsoft plumb the mysteries of networks with AI

Some heavy hitters in AI, from Microsoft, Google's Google Brain unit, Stanford, Cambridge, and the Montréal Institute for Learning Algorithms, report breakthroughs in getting neural networks to decipher the hidden structure of social networks such as Reddit.
Written by Tiernan Ray, Senior Contributing Writer

We live in an age of networks. From the social graph of Facebook to the interactions of proteins in the body, more and more of the world is being conceived of and represented as the connections in a network.

And understanding those connections can sometimes have stunning business implications, as when Larry Page and Sergey Brin, then at Stanford University, first modeled the network of webpages with an algorithm called "PageRank," the foundation of Google.

Some heavy hitters in artificial intelligence have been working on ways to make machine learning techniques smarter about understanding networks. Late last week, a group of those researchers reported progress in having a neural network figure out the structure of various networks without complete knowledge of them.


The paper, entitled "Deep Graph Infomax," is written by lead author Petar Veličković of Cambridge University, along with Yoshua Bengio and William Hamilton of the Montréal Institute for Learning Algorithms, researchers at Microsoft, Google's Google Brain unit, and Stanford University. They propose a new way to decipher unseen parts of networks.

Their invention, Deep Graph Infomax, distributes global information about the whole of the social network Reddit, albeit incomplete information, to figure out the details of smaller, "local" neighborhoods within Reddit. It's a way of working backward from the big picture to small clues.

A network can be any set of things that are joined by connections. In the case of Reddit, individual posts by Reddit members have links to other posts, and the web of connections between posts gives context and meaning to each post. The task here was for the neural network to predict the "community structure" of the Reddit network.
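
To see what that means concretely, here is a minimal sketch of a network as a graph, using made-up post IDs purely for illustration:

```python
# A minimal sketch of a network (graph): nodes are hypothetical
# Reddit posts, edges are the links between them.
posts = ["post_a", "post_b", "post_c", "post_d"]
links = [("post_a", "post_b"), ("post_b", "post_c"), ("post_a", "post_d")]

# Build an adjacency list: each post maps to the posts it touches.
adjacency = {p: [] for p in posts}
for u, v in links:
    adjacency[u].append(v)
    adjacency[v].append(u)

# A post's "neighborhood" is the context that gives it meaning.
print(adjacency["post_a"])  # ['post_b', 'post_d']
```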

But there's a scaling problem. In a very large network such as Reddit, with millions of posts, it's impossible to gather all the posts and their connections from a standing start. This is a problem Page and Brin first faced when they were building Google in the late '90s: PageRank had to map the entire web without being able to "see" the parts of the network that were as yet unknown.
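
PageRank itself is, at heart, a simple iterative computation. The following is a textbook power-iteration sketch on a toy three-page web; it illustrates the idea only, not Google's production system:

```python
import numpy as np

# adjacency[i][j] = 1 means page i links to page j.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

out_degree = A.sum(axis=1, keepdims=True)
M = (A / out_degree).T          # column-stochastic transition matrix
d = 0.85                        # standard damping factor
n = A.shape[0]
rank = np.full(n, 1.0 / n)      # start with equal importance

for _ in range(50):             # iterate until scores stabilize
    rank = (1 - d) / n + d * M @ rank

print(rank)                     # an importance score per page
```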

The solution is a pièce de résistance combining multiple breakthroughs in neural networks.


The authors adapted an earlier work known as "Deep Infomax" by one of the paper's authors, Microsoft's R. Devon Hjelm. Hjelm's Deep Infomax was aimed at improving image recognition, not the understanding of networks. By maximizing the "mutual information" between local patches of an image and the high-level "representation" of the image as a whole, Deep Infomax was able to perform better than other means of image recognition.
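
To make the idea tangible, here is a heavily simplified sketch with made-up vectors. A discriminator learns to score a true (local patch, global summary) pair higher than a mismatched one; Deep Infomax's actual discriminator is learned, where this one is a fixed dot product:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins: 'local' feature vectors for 16 image patches, and a
# crude 'global' representation of the whole image (their mean).
patches = rng.normal(size=(16, 8))
global_repr = patches.mean(axis=0)

# A mismatched "negative" global summary from a different image.
other_patches = rng.normal(size=(16, 8))
fake_global = other_patches.mean(axis=0)

# The simplest possible discriminator: a dot-product score
# squashed through a sigmoid.
def score(local, glob):
    return 1.0 / (1.0 + np.exp(-local @ glob))

# Training would push true-pair scores toward 1 and mismatched-pair
# scores toward 0; that pressure ties local detail to the big picture.
print(score(patches[0], global_repr), score(patches[0], fake_global))
```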

The authors took the Deep Infomax approach and translated it from images to network representations. They trained a graph convolutional network to coordinate what's known about a small area of the network's topology with what is known about the network overall. In doing so, the mutual-information objective essentially re-creates the "supervision" that human-supplied labels usually provide when training an AI model.
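
Structurally, the paper describes an encoder that produces per-node representations, a "readout" that summarizes the whole graph, a "corrupted" copy of the graph to supply negative examples, and a discriminator that compares nodes to the summary. The forward pass below is a rough numpy sketch of that shape, with random data and untrained weights; it mirrors the described pipeline but is not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

N, F, H = 5, 4, 8                     # nodes, input features, hidden size
X = rng.normal(size=(N, F))           # node features
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T)                # symmetric adjacency
np.fill_diagonal(A, 1.0)              # self-loops

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One graph-convolution layer: average over neighbors, then project.
W = rng.normal(size=(F, H))
def encode(X, A):
    A_norm = A / A.sum(axis=1, keepdims=True)
    return np.maximum(A_norm @ X @ W, 0.0)     # ReLU

H_real = encode(X, A)                 # per-node representations
s = sigmoid(H_real.mean(axis=0))      # readout: global graph summary

# Corruption: shuffle node features over the same structure.
H_fake = encode(X[rng.permutation(N)], A)

# Bilinear discriminator scores each node against the global summary.
B = rng.normal(size=(H, H))
real_scores = sigmoid(H_real @ B @ s)
fake_scores = sigmoid(H_fake @ B @ s)

# Binary cross-entropy: the self-supervised signal replacing labels.
loss = -(np.log(real_scores + 1e-9).mean()
         + np.log(1.0 - fake_scores + 1e-9).mean())
print(loss)
```

Training would adjust the encoder and discriminator weights to drive this loss down, so that real nodes become distinguishable from corrupted ones given only the global summary.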

The authors make the point that Deep Graph Infomax is competitive with other programs for analyzing graphs it has never seen before, a task known as inductive learning. While other approaches only know about the details of a part of the network, every "node" in the model the authors created "has access to structural properties of the entire graph" of the network.

Interestingly, by jettisoning the typical approach to network analysis, known as a "random walk," the authors argue their approach is more sophisticated than other analyses.


"The random-walk objective is known to over-emphasize proximity information at the expense of structural information." In that sense, the random walk has a bias, something AI scientists would like to eliminate.

In contrast, Deep Graph Infomax makes it so every single node of the network is "mindful of the global structural properties of the graph."

There's a larger point to the report: neural networks that can match information about details with information about the bigger picture can achieve better "representations." A representation is a higher level of abstraction about a subject. As such, the work contributes to the ongoing quest to give AI higher levels of comprehension, beyond the mere correlation on which it has focused.
