Say WHAT? Run that by me again…

Neural Commitment, Perceptual Goodness, and Updating our Schemata (classes 1 and 2)

An hour or so of class discussion time didn’t really do justice to the first two chapters of Michael Rost’s Teaching and Researching Listening (which will be referred to from here on as The Ear Book). At a recent JALT conference, I saw a teacher reading this text “for fun” (his own words), and I now understand why. It IS fun, not just because it’s extremely readable, but because it connects language acquisition theories and experiments with biology and neuroscience. Every TESOL student studies experiments related to critical or sensitive periods in language acquisition and discusses “plasticity”. It’s interesting, then, to read about the process of neural commitment on page 23 of the Ear Book:

As basic linguistic functions develop, they become confined to progressively smaller areas of neural tissue, a process called neural commitment. This leads to a beneficial increase in automaticity and speed of processing, but it also results inevitably in a decline in plasticity. … It appears that the process of neural commitment leads to a neural separation between different languages in bilinguals and second language learners. The plasticity or neural flexibility required for language reorganization declines progressively through childhood and adolescence and may be the primary cause of some of the difficulties that adults face in second language learning.

What this means to me is that we do lose plasticity as we age, but hey, it’s a trade-off: we gain automaticity and speed (which means greater fluency). That’s pretty important, and actually a good consolation prize; whether the fluency referred to is in one’s mother tongue or in multiple languages, our brains are wired toward fluency, and decline in plasticity is simply an unfortunate result of that process. I also like the image of bilinguals’ different languages as being neurally separated (like an obsessively neat person’s sock drawer, I imagine). Mothers of my students would be relieved to know this, since many of them imagine English as something that will interfere with their child’s Japanese learning process (imagine all the socks unballed and in a colorful mess in the drawer; that’s what they envision). My husband and I are both late and successful second language learners who raised two fully bilingual children; between the four of us, we’ve had no trouble using either Japanese or English or mixing the two when necessary. The different language systems are firmly established in our minds; they don’t interfere with each other and they often combine to make for a more colorful style of communication.

On to chapter two of the Ear Book, which reveals that a linguist’s favorite snack is full of Perceptual Goodness. No, not really, but that’s the first thing I thought when I saw the phrase used on page 27: it sounds like a TV commercial. Actually, it refers to sound. I learned that phonemes have “identities” that clearly define them, as follows: “…each individual phoneme of a language has a unique identity in terms of frequency ratios between the fundamental frequency of a sound…and the frequency of the sound in other harmonic ranges.” So, as I understand it, in pronouncing certain phonemes we are unconsciously manipulating sound frequencies. Although there’s a broad range of acceptable ratios between frequencies for each phoneme, if we move out of that range, the phoneme loses its identity (becomes unrecognizable or unintelligible). In other words? When a non-native speaker’s pronunciation is so far off that the listener can’t decode certain words, it’s a matter of mathematics. The ratios of the sound frequencies produced by the speaker are mismatched (not purposely, of course), and the listener must resort to guesswork or hypothesis (top-down processing) to comprehend what’s being said. It’s a very physical explanation for what defines phonemes and how we identify them.
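Just to make that “matter of mathematics” concrete for myself, here’s a toy sketch in Python. This is not Rost’s model, and every number in it (the target ratios, the tolerance, the frequencies) is invented purely for illustration; the point is only that a phoneme keeps its identity while the produced frequency ratios stay inside an acceptable band, and loses it once they drift outside.

```python
# Toy illustration only: a "phoneme" represented by the ratios of its
# formant frequencies to the fundamental frequency (F0). All numbers and
# the tolerance are hypothetical, chosen just to show the idea.

def frequency_ratios(f0_hz, formants_hz):
    """Return each formant frequency divided by the fundamental."""
    return [f / f0_hz for f in formants_hz]

def keeps_identity(produced_ratios, target_ratios, tolerance=0.25):
    """True if every produced ratio is within `tolerance` (proportionally)
    of the corresponding target ratio for the intended phoneme."""
    return all(
        abs(produced - target) / target <= tolerance
        for produced, target in zip(produced_ratios, target_ratios)
    )

# Hypothetical target ratios for one vowel (e.g. F1/F0 and F2/F0).
target = [5.0, 15.0]

close_enough = frequency_ratios(120, [620, 1750])  # slightly "off" but still acceptable
way_off = frequency_ratios(120, [900, 2600])       # ratios outside the acceptable band

print(keeps_identity(close_enough, target))  # True  -> listener still recognizes the phoneme
print(keeps_identity(way_off, target))       # False -> listener falls back on top-down guessing
```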

There are numerous blog-worthy topics detailed in chapters one to three of the Ear Book, but I particularly enjoyed thinking about schemata. That’s one of those words that we’re familiar with, but perhaps in a fuzzy way. Well, no more fuzzy thinking now that I have Rost’s definition to work with. Here it is: “A schema is a figurative description for any set of simultaneously activated connections (related nodes) in the vast frontal cortex of the brain.” Schemata, then, are sets of “memory nodes” that are activated constantly as we attempt to understand the world around us, make decisions, and communicate with others. Read the morning newspaper? You’ve probably created a new schema by processing new facts and linking them together. Just met your new neighbor and exchanged greetings? In that short time, you’ve put a simple schema in place. Even more interesting is the fact that our existing schemata are constantly being updated as we read more, see more, hear more, and experience more, revising our former schemata along the way. In linguistic terms, this is being “parsimonious” (following the principle of Occam’s razor), since it is obviously a pain in the neck for your brain to be constantly creating entirely new schemata, and updating is much more efficient. I’m fairly certain that my mother-in-law is constantly updating her schemata in regard to me, her son’s American wife. And that is for the good, since her first schema was rather alarming and not at all positive: the image of an American bride (she has since confessed to me) triggered fears of early divorce, wild and crazy spending, and greasy, unhealthy cooking.
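Since I think better with something in front of me, here’s another toy sketch (mine, not Rost’s) of why updating is the parsimonious option: a schema pictured as a little set of linked “memory nodes”, where a new experience just adds or rewires a few links instead of rebuilding the whole thing.

```python
# Toy sketch: a schema as a dictionary of "memory nodes" mapped to the
# things they're linked to. Updating means adding a few links; building a
# brand-new schema from scratch is the expensive option the brain avoids.

neighbor_schema = {
    "name": {"the new neighbor"},
    "greeting": {"polite", "friendly"},
}

def update_schema(schema, node, new_links):
    """Add links to an existing node (creating the node if needed),
    rather than rebuilding the whole schema."""
    schema.setdefault(node, set()).update(new_links)
    return schema

# New experiences revise the existing schema instead of replacing it.
update_schema(neighbor_schema, "hobbies", {"gardening"})
update_schema(neighbor_schema, "greeting", {"chats over the fence"})

print(neighbor_schema)
```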

And that is enough reflection for now. I vow to not let my schemata get rusty, but to continue updating and creating nice shiny new ones.

 


2 thoughts on “Neural Commitment, Perceptual Goodness, and Updating our Schemata (classes 1 and 2)”

  1. Hi Ruthie,

    Brilliant stuff. I love listening and I also spend quite a bit of my time worrying about things like schema. I wonder if you’ve read any of John Field’s work? I think when Rost highlights the decline in plasticity, he is, perhaps unintentionally, making a case for spending lots and lots of class time on lower-level processing skills. The fact of the matter is, our students cannot hear certain phonemes very well, and when they miss these small bits of language, their schema probably won’t be able to help them out; in fact, it could end up being a barrier to them actually understanding what is being said. As Field (1998) notes, there is a world of difference between ‘I’ve lived there for three years’ and ‘I lived there for three years.’ If a second language learner can’t distinguish between the two, they have to do quite a bit of processing to either use the contextual and background clues to figure out what was said (as the conversation goes on, no less) or they have to hold two interpretations in their working memory simultaneously as they try to pin it down (Field, 2008, p. 224). If they go with schema and guess wrong, huge chunks of the conversation will be misinterpreted. In fact, there is some evidence to suggest that learners will “place more confidence in their preformed schema than in incoming data from the speech-stream” (Field, 2004, p. 369). So I think, in the grand scheme of things, we have to do our best to help students learn to decode aurally through lots of bottom-up decoding practice. That’s not to say that we should ignore the top-down side of things, but in my experience, huge chunks of class time are spent on activating schema (teacher: what do you think this conversation is going to be about?) and not much time is spent on the down and dirty of what sounds are actually being produced and how they are being heard.

    So happy you have joined the blogging world.

    Kevin

    Field, J. (1998). Skills and strategies: Towards a new methodology for listening. ELT Journal, 52(2).

    Field, J. (2004). An insight into listeners’ problems: Too much bottom-up or too much top-down? System, 32.

    Field, J. (2008). Listening in the Language Classroom. Cambridge: Cambridge University Press.

    • Kevchan, Wow! How nice to hear from you, and yes, I’m familiar with and love John Field’s work. I was only introduced to it this past summer, through a course with Elvis Wagner from Temple, PA, whose specialty is listening. I also enjoyed Gary Buck’s insights, and used what I learned from both Field and Buck in a final project this August. Since I teach mainly young learners, I think that bottom-up processing is absolutely essential, and I only hope it hasn’t become a “lost art”. I love the down and dirty of sounds and how they’re produced; in fact, in my case I feel that I’ve actually focused too heavily on phonology up until now, neglecting to encourage my students to begin top-down processing strategies as they progress developmentally. While I’m personally working on finding a balance, I agree that especially in Japan, the scales are tipped in favor of top-down processing. Junior high students’ introduction to phonics is cursory, yet they’re expected to be reading and comprehending from early on. It’s no wonder that guesswork and hypothesis win in the end.
