Cog B: Phonology, cognition, the real world and their places (advanced!)

Tobias Scheer (CNRS, Université Côte d'Azur) & Alex Chabot (University of Maryland)

The class discusses how different components of the cognitive system work and how they communicate with one another, as well as with the outside world (items that exist beyond the skin). The focus is on language in general and phonology in particular, since there are explicit theories about their workings and interaction (which is less the case elsewhere).

The human faculty of phonology is an excellent object of study for understanding human cognition, since it shares a number of characteristics with other cognitive systems while having its own distinct function and properties. Much of cognition is about information processing: the brain must interpret the rich and detailed outside world and translate it into terms that are useful to the body. In the case of human language, the continuous waveforms of the speech signal must be broken down into discrete units that we interpret as individual segments and other units, such as syllables, stored in our long-term memories. Phonology is the study of those units: what they are like, and how they interact with each other.

The class first provides an introduction to modularity, i.e. the very idea that the mind does not carry out all-purpose computation, but rather computation that is specific to a number of domains (audition, vision, language, numbers, etc.). Thus the items computed by audition are not the same as those processed by, say, syntax.

After a brief survey of the kinds of distinct computational systems (modules) that exist and the ways they may be identified, intermodular communication is discussed: how do different systems talk to each other? The communication between language-internal systems (syntax, phonology, phonetics) is reasonably well understood and is taken as a starting point for inquiring into how the skin barrier is crossed, in either direction: how do skin-external items become cognitive categories, e.g. how is time (skin-external) transformed into tense (a cognitive category in syntax)? What is the relationship between the two (what properties of time, if any, are carried over into tense)? How are cognitive categories transformed into skin-external items (e.g. an idea that a speaker wants to express into waves that come out of the mouth)?

It is argued that communication both within the cognitive system (among modules) and between the cognitive system and the skin-external world is list-based, i.e. involves look-up tables in which vocabulary items of the communicating systems are related, e.g. (skin-external) 450-480 nanometers ↔ (cognitive category) "blue".
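To make the idea concrete, here is a minimal sketch of such a look-up table as a small Python program; the wavelength spans, labels and the function name translate are illustrative placeholders, not claims about actual category boundaries or about how such a table is implemented cognitively.

```python
from typing import Optional

# Sketch of a list-based relation between two communicating systems:
# skin-external vocabulary items (wavelength spans, in nm) are paired with
# cognitive categories. The list of pairs is the only information consulted.
LOOKUP_TABLE = [
    ((450, 480), "blue"),    # (skin-external item, cognitive category)
    ((500, 565), "green"),   # illustrative placeholder entries
]

def translate(wavelength_nm: float) -> Optional[str]:
    """Return the cognitive category paired with a skin-external wavelength,
    or None if the table stores no pairing for it."""
    for (low, high), category in LOOKUP_TABLE:
        if low <= wavelength_nm <= high:
            return category
    return None

print(translate(460))  # -> blue
print(translate(700))  # -> None: no stored pairing, hence no translation
```

The point of the sketch is only that the relation is stated extensionally, as a list of pairs: nothing about the wavelength itself derives the label it is paired with.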

These workings are illustrated with a number of examples, and their consequences are discussed, namely the fact that the relationship between items paired in a look-up table is arbitrary. That is, there is no reason why [past 3rd person singular] (syntax) comes out as ‑ed in English (he liv‑ed) rather than as, say, ‑ke or ‑pa. Just as there is no reason why the phoneme /r/ (phonology) is related to the pronunciation [r] (phonetics) in Spanish (but to [χ, ʁ] in French). And just as there is no reason why the span of 450-480 nm is related to "blue": in languages and cultures other than Western ones, it may be related to "yellow" or "red".
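The arbitrariness can be stated in the same terms as the sketch above (again with toy entries standing in for real analyses): nothing in the look-up mechanism itself constrains which items end up paired, so different languages may pair the same item with different counterparts.

```python
# Toy illustration of arbitrariness: the same item on one side of a look-up
# table can be paired with different items on the other side, and the
# mechanism itself does not predict which pairing a given language uses.

SPELL_OUT = {
    # language -> {syntactic feature bundle: phonological exponent}
    "English":        {("past",): "-ed"},
    "hypothetical A": {("past",): "-ke"},   # equally conceivable pairings
    "hypothetical B": {("past",): "-pa"},
}

PHONETIC_INTERPRETATION = {
    # language -> {phoneme: pronunciation}
    "Spanish": {"/r/": "[r]"},
    "French":  {"/r/": "[ʁ]"},   # same phonological object, different realization
}

print(SPELL_OUT["English"][("past",)])           # -> -ed
print(PHONETIC_INTERPRETATION["French"]["/r/"])  # -> [ʁ]
```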