Alumni
Bio
Tushar Maharishi is an undergraduate at the University of Virginia in Charlottesville, Virginia, where he plans to major in computer science with a minor in applied mathematics. His programming interests include machine learning, artificial intelligence, neural networks, and genetic algorithms, and after graduation he hopes to work in these fields.
He has previously worked as a teaching aide for middle school students in computer science, robotics, and mathematics at Fairfax Collegiate, a summer program where he taught students how to use the Unity engine to create simple video games and environments. His favorite classes have been discrete mathematics and ordinary differential equations.
Project: Cloud Classification through Machine Learning
The goal of this project was to develop a technique, based on a machine learning algorithm, that improves on the accuracy of Mathematica’s built-in functions for classifying various types of clouds. Mathematica already has an image identification function, ImageIdentify. Although it can efficiently determine that an image shows a cloud, it has trouble identifying the type of cloud (such as cumulus or stratus) with reliable accuracy. Currently, ImageIdentify achieves 35.2% accuracy on our test set, which I hoped to improve.
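As a rough illustration of how such a baseline could be measured (not the project’s actual evaluation code), the sketch below runs ImageIdentify over a hypothetical labeled test set, given as a list of rules of the form image -> "cloud type", and counts a prediction as correct when the predicted concept’s name contains the label.

(* Minimal sketch, assuming "testSet" is a hypothetical list of rules image -> "cumulus", image -> "stratus", etc. *)
imageIdentifyAccuracy[testSet_List] := N@Mean[
   Boole[
      StringContainsQ[
        ToLowerCase[CommonName[ImageIdentify[First[#]]]],
        ToLowerCase[Last[#]]
      ]
    ] & /@ testSet
  ];

(* Example usage: imageIdentifyAccuracy[testSet] returns a fraction between 0 and 1. *)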
I decided to test various learning algorithms with Mathematica’s Classify function to see whether they could produce results equal to or better than those of ImageIdentify. I used stratified sampling to split the data into training and test sets, and applied several transformations (such as converting to black and white, taking crops, or applying a gradient filter) to improve the training. I also attempted to extract deep features using ImageIdentify’s neural network, but ran into complications with computation time and was unable to complete this strategy.
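A minimal sketch of this approach, assuming a hypothetical association cloudData that maps each cloud-type label to a list of images (the actual data, split ratio, preprocessing, and Classify settings used in the project may differ):

(* Stratified split: take the same fraction of each class for training and testing. *)
splitClass[images_List] := TakeDrop[RandomSample[images], Floor[0.8 Length[images]]];
splits = splitClass /@ cloudData; (* label -> {trainingImages, testImages} *)
trainingSet = Flatten[KeyValueMap[Thread[#2[[1]] -> #1] &, splits]];
testSet = Flatten[KeyValueMap[Thread[#2[[2]] -> #1] &, splits]];

(* One possible transformation: grayscale conversion followed by a gradient filter. *)
preprocess[img_Image] := GradientFilter[ColorConvert[img, "Grayscale"], 2];
applyTo[data_List] := (preprocess[First[#]] -> Last[#]) & /@ data;

(* Train a classifier on the transformed images; different Method settings
   (e.g. "LogisticRegression", "NearestNeighbors", "RandomForest") can be compared. *)
classifier = Classify[applyTo[trainingSet], Method -> "RandomForest"];
measurements = ClassifierMeasurements[classifier, applyTo[testSet]];
measurements["Accuracy"]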
Finally, I used NVIDIA’s DIGITS framework running on an Amazon EC2 GPU instance to train a GoogLeNet model. This network applies a series of convolution filters and takes random crops of the training images to improve learning, running through the data over many epochs. Below is the graph of its accuracy and loss functions over the training epochs.