
Wolfram Summer School


Adam Dendek

Science and Technology

Class of 2017

Bio

Adam is a PhD student at AGH University of Science and Technology in Kraków, Poland, majoring in experimental physics. His research focuses on the application of machine learning in high energy physics. He works on LHCb, one of the major experiments currently operating at CERN.

Outside of his PhD-related work, he likes training deep neural networks for computer-vision tasks. In his free time, he loves traveling, discovering new places and people, cooking and watching good movies.

Computational Essay

Deep Convolution Neural Networks for Dummies! »

Project: DeepLaetitia: Deep Reinforcement Learning That Makes You Smile

Goal of the project:

The aim of this project is to train a deep reinforcement learning agent to bring a smile to your face.

Summary of work:

The first part of the project was to train a deep convolutional neural network to classify facial expressions into one of five emotions (happy, neutral, sad, angry, surprised). The classifier's input is taken directly from the user's camera. A reinforcement learning agent was then trained, using the classifier's predictions as input, to make the user smile; the agent's actions adjust a tunable emoticon displayed on screen.
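The two-stage pipeline described above can be sketched in a few lines of Python. This is only an illustrative skeleton: `classify_emotion` stands in for the trained convolutional network (here stubbed with random scores), and the single scalar `agent_action` stands in for whatever parameters the tunable emoticon actually exposes.

```python
import random

EMOTIONS = ["happy", "neutral", "sad", "angry", "surprised"]

def classify_emotion(frame):
    """Stand-in for the trained CNN: maps a camera frame to a
    probability distribution over the five emotions. Stubbed with
    random scores purely for illustration."""
    scores = [random.random() for _ in EMOTIONS]
    total = sum(scores)
    return {e: s / total for e, s in zip(EMOTIONS, scores)}

def step(agent_action, frame):
    """One interaction step: the agent displays an emoticon
    (parameterized by agent_action), the user reacts, and the
    classifier's happiness probability serves as the reward."""
    probs = classify_emotion(frame)
    return probs["happy"]

# One step of the loop with a dummy frame and a dummy action.
reward = step(agent_action=0.5, frame=None)
print(f"reward (P[happy]) = {reward:.3f}")
```

The key design point is that the classifier turns raw camera frames into a dense reward signal, so the reinforcement learning agent never has to interpret pixels itself.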

Results and future work:

The first phase of the project was to implement the facial emotion detector. I began by training classical machine learning models such as support vector machines, random forests and k-nearest neighbors; these served as performance baselines for subsequent studies based on deep convolutional neural networks. My analysis shows that a network with two hidden convolutional layers is the best classifier: it achieved a performance of 91%, measured as the area under the receiver operating characteristic curve, in happiness recognition. Then, using the selected model's predictions as a reward signal, a policy for finding the funniest emoticon was obtained via a Monte Carlo–based search. In future work, a Q-learning approach will be implemented, with the Monte Carlo–tuned policy from the previous step as its starting point.
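The Monte Carlo–based search can be sketched as random sampling over the emoticon's tunable parameter, keeping whichever setting the classifier scores as most smile-inducing. Everything here is a hypothetical stand-in: `smile_probability` is an artificial reward peaked at 0.7 in place of the real classifier-derived signal, and a single parameter in [0, 1] stands in for the emoticon's actual controls.

```python
import random

def smile_probability(emoticon_param):
    """Hypothetical reward: probability that the user smiles at an
    emoticon with the given parameter. In the real project this
    would come from the CNN classifier; here it is an artificial
    function peaked at 0.7."""
    return max(0.0, 1.0 - abs(emoticon_param - 0.7))

def monte_carlo_search(n_samples=1000, seed=0):
    """Sample emoticon parameters uniformly at random and keep the
    one with the highest estimated smile probability."""
    rng = random.Random(seed)
    best_param, best_reward = None, float("-inf")
    for _ in range(n_samples):
        param = rng.random()
        reward = smile_probability(param)
        if reward > best_reward:
            best_param, best_reward = param, reward
    return best_param, best_reward

best_param, best_reward = monte_carlo_search()
print(f"best emoticon parameter ~ {best_param:.2f}")
```

A policy found this way is a natural initialization for Q-learning, which can then refine the choice of emoticon per observed emotional state rather than searching blindly from scratch.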