Mehmet Sahin is an international student from Turkey. He is currently majoring in computer science at the Borough of Manhattan Community College (BMCC) in New York City. After completing his associate’s degree, Mehmet is planning to transfer to a four-year college to complete his bachelor’s degree. Some of Mehmet’s interests are robotics and machine learning. He is currently doing research with a professor at BMCC to enhance human-robot communication using a NAO robot.
Project: Predicting the Scale of Satellite Images Using Neural Networks
Many interesting projects have applied neural networks to satellite images, such as semantic image segmentation for finding roads and buildings. This project takes a related but broader look at satellite imagery by using a 50-layer convolutional neural network (CNN) to predict the scale of a satellite image.
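A 50-layer CNN for a regression task like this can be assembled in the Wolfram Language by taking a pretrained ResNet-50 backbone and replacing its classification head with a single linear output. The sketch below is illustrative only, assuming the Wolfram Neural Net Repository model name and the symbols `trainingData` and `validationData`; it is not the author's exact network.

```wolfram
(* Sketch: adapt a pretrained 50-layer ResNet for scale regression.
   Assumes trainingData / validationData are lists of image -> scale rules. *)
base = NetModel["ResNet-50 Trained on ImageNet Competition Data"];

(* Drop the final classification layers and attach one real-valued output. *)
regressor = NetChain[{
    NetDrop[base, -2],   (* keep the convolutional feature extractor *)
    LinearLayer[1]       (* single output: the predicted scale *)
  }];

(* Train against scalar scale labels using mean absolute loss. *)
trained = NetTrain[regressor, trainingData,
   LossFunction -> MeanAbsoluteLossLayer[],
   ValidationSet -> validationData,
   TargetDevice -> "GPU"];
```

Using mean absolute loss during training matches the MAE metric reported for the final network.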
Main Results in Detail
After I was unable to find a suitable dataset for the project online, I generated the satellite images myself in the Wolfram Language using GeoImage. The dataset covered three cities (Dallas, Chicago and Houston) at scales ranging from 0.06 miles to 10 miles. Training on an Amazon EC2 GPU instance took 13 hours, using 33,162 training, 4,075 validation and 1,830 test images. After training, the network achieved a mean absolute error (MAE) of 0.08, which roughly corresponds to 92% prediction accuracy.
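Generating such a labeled dataset with GeoImage might look like the following sketch: for each city, sample a random scale in the stated range and pair the rendered image with that scale as its label. The entity names, sample count and helper function `sampleImage` are illustrative assumptions, not the project's actual generation code.

```wolfram
(* Sketch: build image -> scale training pairs with GeoImage. *)
cities = {Entity["City", {"Dallas", "Texas", "UnitedStates"}],
          Entity["City", {"Chicago", "Illinois", "UnitedStates"}],
          Entity["City", {"Houston", "Texas", "UnitedStates"}]};

(* One labeled example: an image covering a random range of
   0.06 to 10 miles around the city, labeled with that scale. *)
sampleImage[city_] := Module[{scale = RandomReal[{0.06, 10}]},
  GeoImage[city, GeoRange -> Quantity[scale, "Miles"]] -> scale]

(* e.g. 50 samples per city; the real dataset used ~39,000 images. *)
trainingData = Flatten[Table[sampleImage[c], {c, cities}, {50}]];
```

In practice the sampled centers would also need to be varied within each city so the network sees diverse neighborhoods at every scale.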
Train the neural network with:

1. More satellite images.
2. Satellite images from different countries and broader scale ranges.