Nguyen Ton is a graduate student at the University of Virginia, majoring in physics, and is working on her thesis project at Jefferson Lab in Newport News, Virginia. She became interested in programming through her thesis analysis and would like to gain experience on a real computational project, as she aims to become a software engineer. She hopes to work on an exciting project during the Summer School.
Project: Predict the Orientation of an Image
Estimate the orientation of an image using a neural network. When the horizon or other horizontal and vertical reference lines are missing from an image, determining its rotation can be very hard. Fortunately, a convolutional network can learn the relevant features and predict the orientation if it is given enough training data.
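Training data for this task can be generated by rotating each source image and using the rotation angle as the label. Below is a minimal sketch of that idea for 90-degree rotations, assuming NumPy; the function name `make_rotation_dataset` is illustrative, not from the project code.

```python
import numpy as np

def make_rotation_dataset(images, angles=(0, 90, 180, 270)):
    """Create (rotated image, label) pairs; the label is the index of the angle."""
    X, y = [], []
    for img in images:
        for label, angle in enumerate(angles):
            # np.rot90 rotates in steps of 90 degrees, so each class
            # is an exact pixel rearrangement with no interpolation
            X.append(np.rot90(img, k=angle // 90))
            y.append(label)
    return np.stack(X), np.array(y)
```

For finer angle steps (such as the 30-degree steps used with MNIST), an interpolating rotation such as `scipy.ndimage.rotate` would be needed instead of `np.rot90`.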
Main Results in Detail
Three approaches were used:
- 1. LeNet model on the MNIST dataset: 97.7% accuracy, with rotation angles in steps of 30 degrees from 0 to 330 degrees (12 classes). 60,000 images (28x28x1) in the training set, 10,000 in the testing set, and 10,000 in the validation set.
- 2. Ademxapp model on the ImageNet dataset: 90% accuracy, with 10,000 images (224x224x3) for training, 1,000 for testing, and 1,000 for validation. Images were rotated in steps of 90 degrees (0, 90, 180, and 270). The dataset was downloaded from ImageNet.
- 3. Ademxapp model on Google Street View data: the same settings as above, but with a dataset restricted to street views.
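As a sketch of the first approach, a LeNet-style classifier for 28x28 grayscale inputs with 12 output classes (one per 30-degree step) might look like the following, assuming Keras. This is an illustrative reconstruction of a standard LeNet, not the project's actual code.

```python
from tensorflow import keras
from tensorflow.keras import layers

NUM_ANGLES = 12  # 0 to 330 degrees in steps of 30 degrees

def build_lenet(input_shape=(28, 28, 1), num_classes=NUM_ANGLES):
    """Classic LeNet-style stack: two conv/pool blocks, then dense layers."""
    return keras.Sequential([
        keras.Input(shape=input_shape),
        layers.Conv2D(6, 5, padding="same", activation="tanh"),
        layers.AveragePooling2D(2),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(2),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        # one softmax output per discrete rotation angle
        layers.Dense(num_classes, activation="softmax"),
    ])
```

The model would be compiled with a categorical cross-entropy loss and trained on the rotated MNIST images, with the predicted class mapped back to an angle by multiplying by 30 degrees.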
Higher precision could be achieved with more images and more rotation angles. So far we have used only 10,000 images for the training set, 1,000 for the testing set, and 1,000 for the validation set, and due to limited time the images were rotated by only four angles (0, 90, 180, and 270 degrees). With more rotation angles, the same network could potentially be trained as a regression problem in addition to the classification problem.
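One wrinkle in the regression formulation is that the angle wraps around: 359 degrees and 1 degree are nearly the same orientation, but naive regression on the raw angle treats them as far apart. A common workaround, sketched below under the assumption of a NumPy-based pipeline, is to regress the (sin, cos) encoding of the angle and decode with `arctan2`; the helper names are illustrative.

```python
import numpy as np

def angle_to_target(deg):
    """Encode an angle as (sin, cos) so 0 and 360 degrees map to the same target."""
    rad = np.deg2rad(deg)
    return np.array([np.sin(rad), np.cos(rad)])

def target_to_angle(target):
    """Decode a (sin, cos) network output back to degrees in [0, 360)."""
    return float(np.rad2deg(np.arctan2(target[0], target[1])) % 360.0)
```

A network trained this way has a two-unit linear output head, and the mean-squared error on the (sin, cos) pair is smooth across the 0/360 boundary.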