Wolfram Summer School

Alumni

Matteo Salvarezza

Technology and Innovation

Class of 2016

Bio

I got my master’s degree in theoretical physics in 2012 at the University “La Sapienza” in Rome, Italy, which is also my hometown. Right afterwards, I started my PhD in theoretical high-energy physics at the same institution and completed it in early 2016. During my PhD, I carried out research on physics beyond the Standard Model, focusing on the model building and phenomenology of composite Higgs models.

My most important personal interest is, by far, music; for 13 years I have been playing guitar, listening closely to music and composing my own.

Project: Image Transformation with Neural Networks: Real-Time Style Transfer and Super-Resolution

The goal of the project is to implement image transformation algorithms for style transfer and super-resolution. The style transfer algorithm takes two images as input: a style image and a content image. The result is a third image featuring the style of the first (its colors, textures and character of line) mapped onto the content of the second (the actual objects being depicted). For example, it can produce a picture of a cat as if it were painted by an impressionist artist. The super-resolution algorithm takes a low-resolution image and produces an upsampled version with plausible fine detail.
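
As a purely illustrative sketch of the interface at inference time, the Python (PyTorch) snippet below applies a trained transformation network to a single input image in one forward pass; the project itself was presumably built with the Wolfram Language neural network framework, and StyleTransferNet, the checkpoint file name and the framework choice here are assumptions made for the sketch, not the project's actual implementation.

import torch
from PIL import Image
from torchvision import transforms

to_tensor = transforms.ToTensor()    # PIL image -> CHW tensor in [0, 1]
to_image = transforms.ToPILImage()   # CHW tensor in [0, 1] -> PIL image

def transform_image(net, path):
    # A single forward pass: content (or low-resolution) image in, result out.
    x = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0)  # add batch axis
    with torch.no_grad():
        y = net(x).clamp(0, 1)
    return to_image(y.squeeze(0))

# net = StyleTransferNet()                              # hypothetical architecture
# net.load_state_dict(torch.load("impressionist.pt"))   # one trained network per style
# result = transform_image(net.eval(), "cat.jpg")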

Both algorithms are implemented with the same tools. The image transformation is performed by a deep convolutional neural network that outputs the final result. The networks are trained on a given set of images, and after training the transformation only requires a single forward pass through the network. A separate network must be trained for each desired style and for each super-resolution factor. The loss functions used during training are defined with the help of a pre-trained deep network for image classification: the error is computed from the distance between high-level feature representations extracted from hidden layers of this pre-trained network. Since the pre-trained network has already learned to encode the perceptual and semantic content of images, loss functions defined through these high-level features are called perceptual loss functions.
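
As a rough illustration of a perceptual loss of this kind, the Python (PyTorch) sketch below extracts hidden-layer activations from a fixed VGG-16 classifier, comparing content features directly and style features through their Gram matrices. VGG-16, the particular layers, the Gram-matrix style term and the loss weighting are assumptions chosen for the sketch, not necessarily the choices made in the project.

import torch
import torch.nn.functional as F
from torchvision import models

# Fixed, pre-trained classifier used only as a feature extractor.
vgg = models.vgg16(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

FEATURE_LAYERS = (3, 8, 15, 22)   # relu1_2, relu2_2, relu3_3, relu4_3 in torchvision's VGG-16

def features(x):
    # Collect activations of the chosen hidden layers
    # (ImageNet normalization of x is omitted for brevity).
    out = []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in FEATURE_LAYERS:
            out.append(x)
        if i == FEATURE_LAYERS[-1]:
            break
    return out

def gram(f):
    # Gram matrix of a feature map: channel-to-channel correlations
    # capture texture and color statistics, i.e. "style".
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_loss(output, content, style, style_weight=1e5):
    fo, fc, fs = features(output), features(content), features(style)
    content_loss = F.mse_loss(fo[1], fc[1])   # compare content at one mid-level layer
    style_loss = sum(F.mse_loss(gram(a), gram(b)) for a, b in zip(fo, fs))
    return content_loss + style_weight * style_loss

For super-resolution, the style term would typically be dropped and the feature comparison made against the high-resolution target image; this detail is a common convention for perceptual-loss super-resolution, not something stated in the project description.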