Wolfram Summer School

Alex Fafard

Class of 2015

Bio

Alex Fafard is pursuing his undergraduate studies at the Rochester Institute of Technology in Rochester, New York, where he studies imaging science with a minor in microelectronic engineering & nanofabrication. He aspires to pursue a graduate degree before beginning a career in the design and development of computer vision systems. Alex has worked with Riverside Research as an imaging engineering intern in its Intelligence Operations Directorate for the past two summers. Some of his work there involves the design and implementation of advanced algorithms used to find anomalies in hyperspectral data. In addition to this invaluable industry experience, he has undertaken research in remote sensing and computer vision at the Chester Carlson Center for Imaging Science. This research has included collaborative work on projects such as the dimensionality reduction and modeling of wildfire propagation using temporally ordered near-infrared flight lines, the development of a natural disaster rapid-response pipeline based on multimodal fusion of LiDAR and RGB airborne data, and the development of a “craniofacial phenotyper” in the form of a 3D scanner based on structured light. Next year, Alex will simultaneously conduct two distinct research projects that reflect his wide interests within imaging: the first studies arterial spin labeling (ASL) fMRI systems, and the second involves the fabrication and development of an economical ground-based scanning LiDAR system. Through his work and studies, he believes that we can best advance the depth of human understanding through the fusion of multiple information sources.

Project: Autonomous Aerial RGB Imagery Terrain Classification and Segmentation through Probabilistic Fusion of Computer Vision-Based Segmentation & Artificial Neural Networks

Advances in sensor, aerospace, and computing technologies over the past decade have made it increasingly feasible to capture large amounts of data quickly from airborne or orbital platforms. These data have proven invaluable for estimating a wide range of regional parameters in support of many fields of research. Even with these technological advances, the use of such data tends to be bounded by the availability of imagery analysts, who summarize and analyze the data on a scene-by-scene basis. This task is referred to colloquially as image classification. The process is traditionally quite slow and error prone because it requires sustained human analysis over long periods of time. Indeed, it has become clear that the amount of data being collected has far outstripped the pace at which human analysts can reliably observe and distill information from it.

One step toward rectifying this state of affairs may be the implementation of a novel automated segmentation and classification scheme. Image classification is the process by which distinct targets or materials (classes) are established and regions are then identified according to a specific set of characteristics for each class. Its counterpart is image segmentation, in which the identified classes are partitioned into isolated regions so that the results can be viewed as Boolean masks. The goal is an algorithm that performs this classification without human input, producing a graphical segmented and classified representation of the terrain present in an RGB aerial photograph.
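To make the distinction between classification and segmentation concrete, the following minimal sketch (in Python) classifies each pixel of an RGB image and then isolates each class as a Boolean mask. The class names, reference colors, and nearest-mean decision rule are illustrative assumptions, not part of this project's method.

import numpy as np

# Hypothetical mean RGB signatures for three terrain classes (0-255 scale).
CLASS_MEANS = {
    "water":      np.array([ 30,  60, 120], dtype=float),
    "vegetation": np.array([ 40, 110,  45], dtype=float),
    "urban":      np.array([140, 135, 130], dtype=float),
}

def classify_pixels(rgb):
    """Classification: assign every pixel of an (H, W, 3) image to its nearest class mean."""
    means = np.stack(list(CLASS_MEANS.values()))                 # (K, 3)
    dists = np.linalg.norm(rgb[..., None, :] - means, axis=-1)   # (H, W, K)
    return dists.argmin(axis=-1)                                 # (H, W) label map

def segment(labels):
    """Segmentation: one Boolean mask per class, isolating its pixels."""
    return {name: labels == k for k, name in enumerate(CLASS_MEANS)}

image = np.random.randint(0, 256, size=(64, 64, 3)).astype(float)  # stand-in scene
masks = segment(classify_pixels(image))
print({name: int(mask.sum()) for name, mask in masks.items()})     # pixels per class

Any real classifier would replace the nearest-mean rule; the point is that classification yields a per-pixel label map, and segmentation converts that map into one Boolean mask per class.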

Previous work on autonomous segmentation and clustering of RGB imagery has taken various approaches, including linear methods (principal components, independent components, and various matrix factorizations), optimal statistical models of spatial and frequency information, and artificial neural networks. Each method has its own strengths and weaknesses and is suited to particular tasks and data structures. These techniques are often used independently, and each is inherently bound by its own shortcomings. In light of this, it is of interest to explore how they can be fused in a statistically optimal sense so as to achieve a superior result compared to any singular approach.
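As an illustration of the first family of methods, the sketch below applies principal component analysis to the per-pixel color vectors of an image, reducing three correlated channels to a smaller set of decorrelated components. The random stand-in image, component count, and NumPy-only implementation are assumptions made for illustration.

import numpy as np

def pca_reduce(rgb, n_components=2):
    """Project (H, W, C) pixel vectors onto their top principal components."""
    h, w, c = rgb.shape
    flat = rgb.reshape(-1, c).astype(float)
    flat -= flat.mean(axis=0)                    # center each channel
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(flat, full_matrices=False)
    return (flat @ vt[:n_components].T).reshape(h, w, n_components)

image = np.random.rand(64, 64, 3)                # stand-in aerial tile
reduced = pca_reduce(image, n_components=2)
print(reduced.shape)                             # (64, 64, 2)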

This information fusion approach is proposed and applied here. Initial data separations will be conducted with conventional techniques, while a more refined statistical texture analysis using Gaussian mixture models will delineate colorimetrically coincident structures. An artificial neural network will then be trained to refine the classification based on the classes isolated by the previous methods. Finally, the results will be statistically fused in a weighted Bayesian model. This processing methodology should prove a highly efficient and valuable tool for reducing the dimensionality of large volumes of imagery and statistically summarizing the information embedded within it.
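A rough sketch of this pipeline, under strong simplifying assumptions, is shown below: per-pixel color features stand in for texture statistics, scikit-learn's GaussianMixture and MLPClassifier stand in for the mixture-model and neural-network stages, and log-linear pooling with hand-picked weights stands in for the weighted Bayesian fusion. The data, labels, class count, and weights are all illustrative.

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
K = 3                                          # assumed number of terrain classes

# Stand-in data: flattened per-pixel RGB features plus a small labeled subset.
pixels = rng.random((5000, 3))
labeled_idx = rng.choice(5000, size=300, replace=False)
labels = rng.integers(0, K, size=300)          # hypothetical analyst labels

# Stage 1: Gaussian mixture model gives unsupervised per-pixel class posteriors.
# (A real system must first align mixture components with the label classes.)
gmm = GaussianMixture(n_components=K, random_state=0).fit(pixels)
p_gmm = gmm.predict_proba(pixels)              # shape (N, K)

# Stage 2: a neural network trained on the labeled subset refines the result.
ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ann.fit(pixels[labeled_idx], labels)
p_ann = ann.predict_proba(pixels)              # shape (N, K)

# Stage 3: weighted Bayesian fusion via log-linear pooling of the posteriors.
w_gmm, w_ann = 0.4, 0.6                        # illustrative reliability weights
fused = p_gmm**w_gmm * p_ann**w_ann
fused /= fused.sum(axis=1, keepdims=True)      # renormalize to probabilities
final_labels = fused.argmax(axis=1)            # fused per-pixel class map
print(np.bincount(final_labels, minlength=K))  # pixel count per fused class

Log-linear pooling is only one reasonable choice for the weighted Bayesian combination, and in practice the reliability weights would be estimated from validation data rather than fixed by hand.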