Wolfram Summer School (founded in 2003)

16th Annual Wolfram Summer School, held at Bentley University June 24–July 13, 2018


Bruno Lima de Souza


Class of 2016


Bruno is a passionate student of science and technology who is currently finishing his PhD in high-energy physics at SISSA in Italy. His main area of expertise is quantum field theory, and he is particularly interested in conformal field theories.

During his research, he has devoted some time to developing automated tools in the Wolfram Language to perform tasks such as classifying tensor structures in correlation functions, tensor reduction of Feynman integrals, exact evaluation of scalar Feynman integrals, and more.

He is also interested in applications of machine learning techniques to all sorts of problems.

Project: Deep Compression

The aim of my project is to implement within the Wolfram Language a version of the "deep compression" algorithm proposed by Han et al. [1].

Neural networks are often memory-intensive, which makes them difficult to embed in systems with limited hardware resources. It would therefore be very desirable to find a procedure that compresses neural networks while minimizing the loss in accuracy. The "deep compression" algorithm by Han et al. is one possible solution to this problem.

The algorithm has three stages:

  1. Pruning: all weights at each layer of the neural net that are smaller than a certain threshold are set to zero, and the neural net is retrained under these constraints;
  2. Trained quantization: at each layer we perform a cluster analysis and replace the weights in each cluster with their centroid value;
  3. Huffman coding: we examine the frequency distribution of the weights and cluster indices across the whole neural net and Huffman-encode them accordingly.
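The project itself targets the Wolfram Language, but the three stages above can be illustrated with a minimal pure-Python sketch. The helper names (`prune`, `quantize`, `huffman_codes`), the linear centroid initialization, and the simple 1D k-means loop are my own illustrative choices, and the retraining step after pruning is omitted:

```python
import heapq
from collections import Counter

def prune(weights, threshold):
    """Stage 1: zero out weights below the magnitude threshold.
    (In the full algorithm the network is then retrained with the
    surviving weights; that step is omitted in this sketch.)"""
    return [0.0 if abs(w) < threshold else w for w in weights]

def quantize(weights, k, iters=20):
    """Stage 2: simple 1D k-means (k >= 2); each weight is replaced
    by the centroid of its cluster, so only k distinct values remain."""
    lo, hi = min(weights), max(weights)
    # Linear initialization of centroids over the weight range.
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w in weights:
            j = min(range(k), key=lambda i: abs(w - centroids[i]))
            clusters[j].append(w)
        # Update each centroid to the mean of its cluster;
        # keep the old value if a cluster is empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return [min(centroids, key=lambda c: abs(w - c)) for w in weights]

def huffman_codes(symbols):
    """Stage 3: build a Huffman code from symbol frequencies;
    frequent symbols receive shorter bit strings."""
    freq = Counter(symbols)
    # Heap entries: (count, tiebreaker, {symbol: partial code}).
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]
```

In a real pipeline the quantized weights are stored as small cluster indices plus a codebook of centroids, and the Huffman code then compresses the index stream; the sketch above only shows each transformation on a flat weight list.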

Han et al. verified the efficiency of their algorithm by applying it to two deep convolutional networks, AlexNet (240 MB) and VGG-16 (552 MB), obtaining compression factors of 35× (240 MB → 6.9 MB) for the first and 49× (552 MB → 11.3 MB) for the second.


[1] S. Han, H. Mao, and W. J. Dally, "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding," conference paper at ICLR 2016 (oral), arXiv:1510.00149v5 (Feb 15, 2016).

Favorite 3-Color 2D Totalistic Cellular Automaton

Rule 581907592

A very interesting pattern I found is: