Gabriel Hollander began his studies in pure mathematics at the Vrije Universiteit simultaneously with piano studies at the Royal Conservatory of Music in Brussels. After completing his mathematics degree in 2012 at the University of Ghent, he devoted a year to the piano in Leipzig, Germany. During this year abroad, he also completed a personal photography project about the people and places he came to know in this Saxon city.
It was also photography that introduced him to the world of digital image processing, and later to vector graphics and splines. Through his experience in these domains, he grew fond of applied mathematics, and in May 2014 he began his PhD research in electrical engineering at the Vrije Universiteit Brussel. His research is in the domain of nonlinear system identification and signal processing.
Project: Computer-Based Classification of Musical Instruments’ Sounds
Every musical instrument has its own “fingerprint” or “musical color”. This very personal sound is one of the reasons we are drawn to an instrument. Humans can be trained to recognize and classify musical instruments by their sounds. In this process, we subconsciously analyze three properties of the sound:
- The start (attack) of the sound. This attack can be as abrupt as a hammer striking a piano string, as soft as a low clarinet note, or anything in between.
- The evolution of the sound after the attack. Piano sounds fade out slowly, while trumpet sounds can be sustained, and made to vibrate, after the attack.
- The end of the sound. The end of a violin sound depends on the speed and pressure of the bow, and differs from the end of a tuba sound, which depends on the speed and pressure of the air.
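The three phases above can be made visible in a signal's amplitude envelope. As a minimal sketch (not the project's actual code), the following synthesizes a toy note with a fast attack and a slow decay, then estimates its envelope with a sliding RMS window; all parameter values are illustrative assumptions.

```python
import numpy as np

# Illustrative values, not taken from the project itself.
fs = 8000                       # sample rate in Hz (assumed for the sketch)
t = np.arange(0, 1.0, 1 / fs)   # one second of audio

# A toy 440 Hz note: 10 ms linear attack, then a slow exponential fade-out.
attack = np.minimum(t / 0.01, 1.0)
decay = np.exp(-3.0 * t)
signal = attack * decay * np.sin(2 * np.pi * 440 * t)

# Estimate the amplitude envelope with a non-overlapping RMS window.
win = 256                       # window length in samples
frames = signal[: len(signal) // win * win].reshape(-1, win)
envelope = np.sqrt((frames ** 2).mean(axis=1))

# The envelope peaks near the start (fast attack) and then fades out slowly,
# mirroring the attack / evolution / end description above.
print(envelope.argmax(), envelope[-1] < envelope.max())
```

An instrument with a different attack or decay would trace a visibly different envelope, which is exactly the kind of cue a human listener, or a classifier, can exploit.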
In this project, we want to use a computer program to analyze and classify musical sounds.
Given a recording of a Western musical instrument being played, our project is to design a program that can identify the instrument. For this, the new built-in function Classify can be used, but it will perhaps need some extra help: the sounds could be preprocessed before being passed to Classify. This preprocessing will use standard Fourier analysis techniques in the frequency domain, or combined time-frequency techniques such as the spectrogram. These techniques show the amplitudes and phases of the underlying sound waves in the given recording, as a function of the frequency of those waves.
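The Fourier preprocessing step can be sketched as follows. This is a hedged illustration, not the project's implementation: two toy "instruments" play the same pitch with different overtone mixes, and the normalized FFT magnitude spectrum exposes the difference, the kind of feature vector one could then feed to a classifier such as Classify. All names and parameter values here are assumptions made for the example.

```python
import numpy as np

fs = 8000                          # assumed sample rate in Hz
t = np.arange(0, 0.5, 1 / fs)      # half a second: 4000 samples, 2 Hz bins

def fake_note(f0, harmonic_weights):
    """Synthesize a toy note: fundamental f0 plus weighted overtones."""
    return sum(w * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, w in enumerate(harmonic_weights))

# Same pitch (220 Hz), different "fingerprints": one is mostly fundamental,
# the other has a dominant second harmonic.
note_a = fake_note(220, [1.0, 0.1, 0.05])
note_b = fake_note(220, [0.3, 1.0, 0.6])

def spectrum_features(x):
    """Magnitude spectrum, normalized so loudness does not dominate."""
    mags = np.abs(np.fft.rfft(x))
    return mags / mags.max()

freqs = np.fft.rfftfreq(len(t), 1 / fs)
feat_a = spectrum_features(note_a)
feat_b = spectrum_features(note_b)

# The strongest frequency bin differs even though the pitch is the same.
print(freqs[feat_a.argmax()], freqs[feat_b.argmax()])  # → 220.0 440.0
```

A spectrogram takes this one step further by computing such spectra over short sliding windows, so that the attack, evolution, and end of the note each get their own frequency snapshot.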
During this project, we’ll use a data sample of approximately 13000 short recordings of distinct notes played by Western musical instruments.