Wolfram Summer School

Ray Dougherty

Class of 2007

Bio

My research over the past two years aims to discover the principles by which the structures of human language evolved from animal origins. Most of my work has focused on the evolution of the inner ear and cochlea in lizards, birds, other animals, and humans, from the earliest fish, through the dinosaurs, and down to today. I have developed a mathematical theory of how the cochlea works and why it evolved to function as it does. Most recently I have been advising the Department of Education in the Dominican Republic on how to set up a Biolinguistics Institute to investigate two topics: (a) the evolution of sea shells and snail shells in the Caribbean and (b) the evolution of “insect songs” in cricket-like creatures, some of which live in water and mud. My earlier work, from 1980 to 2000, was in “computational linguistics,” the attempt to build computing machines that can ‘understand’ human languages. I write essays frequently; in particular, I have posted (a) my reflections on the NKS summer institute, (b) my reflections on the 9/11 WTC attacks, and (c) a memorial for my close friend Maurice Gross.

Project: Using Finite State Automata (FSA) to Study Neurological Evolution: Gradualism or Saltationism?

An FSA constitutes the simplest machine for analyzing the complexity of signal sets. Among FSAs the simplest are the 2-state, 2-color machines; then the 3-state, 2-color; and then the 4-state, 2-color. Each FSA has a start state and a halt state. An FSA recognizes a string of length N over {0, 1} if the process can move from the start state through states connected by paths (colors) labeled 0 or 1 and exhaust the string without reaching the halt state.

We exhaustively enumerate the FSAs mentioned above, and for each we find the shortest sentence that passes through each of the states, say of length N for FSA(M). We then take the 2^N conceivable {0, 1} sentences of length N and ask: what percentage of these does the FSA recognize? Some machines recognize all of them, some recognize none (those with a direct start-to-halt link), and others recognize some fraction in between.

Shannon's theory says the signal set has maximum information capacity when the FSA recognizes 50 percent of the sentences and rejects 50 percent: each {0, 1} sequence then has probability 0.5 of being recognized, and its recognition (or not) conveys the maximum information possible for the signal set.

We show that in any of the spaces of FSA machines considered, if we make the smallest variation possible in one of the maximum-information FSAs, we can reduce it to one of the lowest-information machines, and vice versa. Our argument is this: if the smallest possible variation in the simplest communication systems can lead to radical changes in Shannon information capacity, it is natural to expect that a single small change in a more complex machine (a Turing machine, a human brain, …) would lead to large saltations in grammatical and language capacities.
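
To make the counting procedure concrete, here is a minimal Python sketch under stated assumptions: an FSA is encoded (hypothetically) as a transition table trans[state][symbol] giving the next state, state 0 is the start, the highest-numbered state is the halt, and "recognizes" means the string is consumed without ever entering the halt state, as defined above. "Passes through each of the states" is interpreted as visiting every non-halt state, since entering the halt state ends a run. All function names here are illustrative, not taken from the original project.

from itertools import product

def recognizes(trans, halt, string):
    # Accept iff the whole string is consumed without ever
    # entering the halt state (the convention used above).
    state = 0
    for symbol in string:
        state = trans[state][symbol]
        if state == halt:
            return False
    return True

def shortest_covering_length(trans, halt, n_states, max_len=12):
    # Length N of the shortest string whose run visits every
    # non-halt state (entering halt would end the run).
    for length in range(1, max_len + 1):
        for string in product((0, 1), repeat=length):
            state, visited, alive = 0, {0}, True
            for symbol in string:
                state = trans[state][symbol]
                if state == halt:
                    alive = False
                    break
                visited.add(state)
            if alive and len(visited) == n_states - 1:
                return length
    return None

def recognized_fraction(trans, halt, length):
    # Fraction of the 2^N strings of the given length that are recognized.
    hits = sum(recognizes(trans, halt, s)
               for s in product((0, 1), repeat=length))
    return hits / 2 ** length

def all_fsas(n_states):
    # Exhaustive enumeration: every assignment of next-states to the
    # (state, symbol) pairs of the non-halt states. The halt state
    # needs no outgoing arrows, since a run stops on entering it.
    halt = n_states - 1
    slots = [(s, c) for s in range(n_states - 1) for c in (0, 1)]
    for targets in product(range(n_states), repeat=len(slots)):
        trans = [[0, 0] for _ in range(n_states)]
        for (s, c), t in zip(slots, targets):
            trans[s][c] = t
        yield trans, halt

def single_edits(trans, halt):
    # Every machine differing from trans by one redirected transition:
    # the "smallest variation possible" in the argument above.
    for s in range(len(trans)):
        if s == halt:
            continue
        for c in (0, 1):
            for t in range(len(trans)):
                if t != trans[s][c]:
                    new = [row[:] for row in trans]
                    new[s][c] = t
                    yield new

if __name__ == "__main__":
    # For each maximum-information 3-state, 2-color machine (fraction 0.5),
    # list the fractions of all its one-transition neighbors. For simplicity,
    # neighbors are scored at the original machine's covering length N.
    for trans, halt in all_fsas(3):
        n = shortest_covering_length(trans, halt, 3)
        if n is None:
            continue
        if recognized_fraction(trans, halt, n) == 0.5:
            fracs = sorted({recognized_fraction(t, halt, n)
                            for t in single_edits(trans, halt)})
            print(trans, "N =", n, "neighbor fractions:", fracs)

Seeing fractions of 0.0 or 1.0 among a maximum-information machine's one-transition neighbors is exactly the single-small-change saltation the argument turns on.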