Wolfram Summer School

Tatiana (Tanya) Grunina

Science and Technology

Class of 2016

Bio

After finishing her bachelor’s degree in data analysis and statistics at the Higher School of Economics (Moscow), Tanya moved to Dublin, where she works as a business analyst at a search engine company. Genuinely inspired by learning, she decided to continue her studies and is currently a first-year master’s student in big data systems. Tanya loves graphs: she won the student paper competition at the first global conference on social network analysis. Her current research interest is the comparison of missing link prediction methods in knowledge graphs.

In her free time, Tanya paints outdoors, attends an advanced cooking school, cycles, travels and enjoys rationality podcasts by Eliezer Yudkowsky.

Project: Neural Network Layer-by-Layer Visualization

Neural networks (NNs) are currently among the most powerful supervised learning methods in machine learning: they show high performance on threshold, ordering and probability metrics across very different datasets [1], and often beat methods from other families [2]. At the same time, they are usually considered a “black-box” method, meaning that it is hard to explain how an NN processes its input and how it arrives at its output. Thus, we want to reveal what makes an NN so successful. To do so, we investigate NNs and test the following hypotheses:

H1: There is a way to understand what a NN learns from layer to layer.
To check this, we train an image recognition NN on picture datasets. We then feed noise to the net and ask it to find some concrete objects, recording the activation paths of those objects—the sequences of neurons that were active on each layer. Next we cut some layers from the net, feed it the noise again and ask it to produce the image using the neurons of the cut activation path. This yields new images that differ from the initial ones: the difference may come from some tangible features, or from combinations of features to which we will give special definitions. Iterating the experiment tells us what the NN learns at each layer; a sketch of the activation-recording step follows below.
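As a rough illustration of recording per-layer activations, here is a minimal Wolfram Language sketch. It assumes the pretrained LeNet network from the Wolfram Neural Net Repository in place of our own trained net, and truncates the chain with Take to read off intermediate outputs:

(* a minimal sketch, assuming the pretrained LeNet model; our actual net and data may differ *)
lenet = NetModel["LeNet Trained on MNIST Data"];

(* feed random noise instead of a real digit image *)
noise = RandomImage[1, {28, 28}];

(* truncate the chain after layer k and record the activation it produces *)
activations = Table[Take[lenet, k][noise], {k, 1, 7}];

(* view the 2D feature maps after the second layer as images *)
ImageAdjust /@ Map[Image, Normal[activations[[2]]]]

Here Take[lenet, k] keeps the net’s input encoder, so each truncated chain can be applied to the noise image directly.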

H2: We can visualize the NN learning process.
To address this, we perform a statistical analysis of the numerical outputs of each layer and compare the changes in the outputs with the changes in picture patterns layer by layer; a sketch follows below.
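Continuing the hypothetical LeNet sketch above, one simple starting point is to track how the distribution of activations shifts from layer to layer, e.g. via the mean and standard deviation of each layer’s output:

(* per-layer summary statistics, continuing the sketch above *)
layerStats = Table[
  With[{a = Flatten[Normal[Take[lenet, k][noise]]]},
    <|"layer" -> k, "mean" -> Mean[a], "sd" -> StandardDeviation[a]|>],
  {k, 1, 7}];

(* compare the shift in the activation distribution layer by layer *)
ListLinePlot[{layerStats[[All, "mean"]], layerStats[[All, "sd"]]},
  PlotLegends -> {"mean", "standard deviation"},
  AxesLabel -> {"layer", None}]

Richer statistics (quantiles, sparsity of active neurons, correlations between feature maps) would slot into the same loop.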

References

[1] R. Caruana, “Which Supervised Learning Method Works Best for What? An Empirical Comparison of Learning Methods and Metrics,” VideoLectures.net (Sep 20, 2016) videolectures.net/solomon_caruana_wslmw.

[2] R. Benenson, “What Is the Class of This Image?” GitHub.io (Sep 20, 2016) rodrigob.github.io/are_we_there_yet/build/classification_datasets_results.html.

Favorite 3-Color 2D Totalistic Cellular Automaton

Rule 689778371