Say WHAT? Run that by me again…


William Littlewood on Communicative and Task-Based Teaching in Asia: article review

Article Title: Communicative and task-based language teaching in East Asian classrooms

Journal: Language Teaching 40 (3), July 2007.

DOI: 10.1017/S0261444807004363


Author: This article was penned by William Littlewood, who wears many hats including language scholar, professor, curriculum developer, textbook writer, and teacher trainer. Littlewood began his teaching career in Germany working for the well-known Berlitz language school, then returned to teach in his native U.K. In 1991, he traveled to Hong Kong on a research grant and has been based there since, currently lecturing at Hong Kong Baptist University. In addition to training EFL/ESL practitioners, he is a prolific writer of both journal articles and books. His TESOL textbook entitled Communicative Language Teaching: An Introduction has been translated into Basque, Japanese, Malaysian, Spanish, Korean, Chinese, and Greek.  He has a lovely smile.

Type of article: A revised transcription of a plenary speech given by Littlewood at the 2006 international conference of the Korea Association of Teachers of English.

Purpose:  To discern why East Asian educators have trouble implementing communicative teaching techniques in the classroom, and to reflect on how teachers adapt to the challenges they face.  Littlewood attempts to re-frame the concepts of both CLT and TBT to make them more relevant and practical for East Asian classrooms.

What Littlewood has to say: After establishing the widespread use of Task Based Language Teaching in East Asia (under the umbrella of Communicative Language Teaching), Littlewood discusses concerns that have been voiced by teachers struggling to successfully implement tasks. He explores the problem areas of classroom management (“The students have too much freedom and I can’t restore order!”), avoidance of English (“They’re really into the task, but no-one’s using the target language!”), minimal demands on language competence (“Students spent forty minutes and used only a few easy phrases!”), incompatibility with public assessment demands (“We don’t have time for this–the national exams are coming up in a month!”), and conflict with educational values and traditions (“We’re used to accumulating bodies of knowledge in this country!”).

Drawing on studies by scholars based in East Asia, Littlewood paints a sympathetic picture of teachers caught between the ideal portrayed by their national policy and the reality of their classroom situation. Educators described in the article respond, in some cases, by simply ignoring policy and continuing to teach in a way that’s familiar and effective for them, while complying on paper with the national guidelines for Communicative Language Teaching. Other teachers in China and Japan have re-interpreted CLT and tasks in general, adjusting the framework to better fit their students’ needs. This watered-down type of Task-Based Learning is often more about practicing discrete language items than negotiating meaning, it seems, with the addition of “context” providing the communicative aspect. One enterprising teacher in Mainland China was managing to double up, focusing on traditional exam-based English grammar and drills while also encouraging student interaction in the L2 and creative language use. Bravo, Mr. Yang!

Littlewood, however, points out that many Asian teachers are unclear on the fundamental concepts of CLT and TBLT, assuming that such approaches mean focus on speaking and communication, with no place for grammar. “Not teaching grammar” and “teaching only speaking” were the two most common misconceptions uncovered in the study, along with the fact that many Asian teachers have only a “fuzzy notion” of what a task actually is. Most recognize that it is not a drill, but what about “exercises”? Can they be considered tasks if one adds a communicative element? Some teachers have created a middle ground called “exercise-tasks”; Littlewood suggests that this might not be a bad idea, and could in fact be taken further, to create a continuum of task types.

On the form-focused end of this continuum would be Non-Communicative Learning, including grammar exercises and drills; next would be Pre-Communicative Learning, such as controlled (rather than free) question-and-answer practice. Communicative Language Practice is third on Littlewood’s continuum, defined by information exchange based on recently-taught, predictable language. Fourth would be Structured Communication, where the focus finally moves to meaning and includes more complex information-exchange activities; still, this stage is structured and teacher-directed. Lastly, the most meaning-oriented activities would be deemed Authentic Communication, in which language forms are unpredictable and creative, and problem-solving, content-based tasks, and true discussion can be implemented.

Finally, Littlewood concludes that in the current post-methods era, “…no single method or set of procedures will fit all teachers and learners in all contexts”.  In other words, there are no ready-made recipes, so teachers had better start experimenting in the kitchen until they get it right. Good luck to us!

What Ruthie has to say: Hooray for the continuum–best idea I’ve seen yet! One size does most certainly NOT fit all, and the idea of a communicative continuum takes away the pressure many EFL teachers in Asia face on a daily basis. Specifically, it helps us see “failed tasks” in a different perspective: rather than a “task-gone-wrong”, the day’s lesson can be viewed as “closer to the form-focused end of the spectrum”. And hopefully, as students progress in their interlanguage and gain confidence, lessons will come to more closely resemble the “meaning end of the spectrum”. Some might disagree, but I believe Littlewood’s article should be required reading alongside Willis and Willis, who present the ideal model. It’s an important bridge that encourages educators to reflect more closely on their own situation and to better adapt their methods and teaching style to the needs of their students.

Insight on the Listening Process from John Field: article review

Article title: An insight into listeners’ problems: too much bottom-up or too much top-down?

Journal: System 32 (2004).

Author: John Field, professor at University of Leeds, UK. Teaches psycholinguistics, child language, and English grammar. His widely-used textbook, Listening in the Language Classroom, won the Ben Warren International House Trust Prize. A teacher trainer, materials writer, and syllabus designer as well as practitioner, Field travels widely lecturing on L2 listening.

DOI: 10.1016/j.system.2004.05.002

Type of Study: Empirical study based on observations gleaned from an analysis of three related experiments.

Purpose: To clarify the relationship between L2 listeners’ use of top-down and bottom-up listening strategies.

Research questions:  1) If top-down and bottom-up information are in apparent conflict, which one prevails? and 2) How do learners deal with new items of vocabulary when they crop up in a listening passage?

Procedure: 47 NNS students from a leading British EFL school were given listening tests in a classroom with good acoustics. The tests were designed to reveal learners’ listening strategies by presenting them with a series of problematic items, forcing them to choose between semantics (representing top-down listening strategy) or phonology (representing bottom-up). Each test was slightly different, and designed to explore different aspects of the research questions.

Results: The experiments produced both expected and unexpected results. Field found that L2 learners often do re-interpret or misinterpret words to fit their own schema (relying on top-down, rather than bottom-up, processing), but also that they accurately perceive other words by relying on the onset sound. Lastly, he found that learners rely on a third strategy as well, which he calls a “lexical strategy”; in this case, learners bypass the top-down strategy in favor of matching an unknown word with a similar-sounding familiar word, regardless of the word’s semantic appropriateness.

My thoughts: First, let me pat myself on the back for choosing such a brilliant and easily comprehensible article to review. Now, let me see if I can elaborate a bit on the design, and its simple elegance.

Field begins with a history of the opposing views of bottom-up-influence versus top-down-influence proponents; midway into the discussion, in a section labeled “The legacy of scripted materials”, he points out an interesting connection that I hadn’t realized before: many L2 learners have developed an expectation of understanding everything in the text, since listening materials have traditionally been heavily scripted (contrived for the learner’s benefit, rather than reflecting natural spoken language) and graded according to level. Think about that: understanding everything in the text. That means doing the kind of precise and accurate bottom-up processing that is almost impossible for L2 listeners to do in a real-world context. Is it any wonder, then, that some learners panic when they’re exposed to the unpredictability of real conversation occurring at normal speed?

Since the EFL world has become more “communicative”, learners must now rely more heavily on their top-down processing skills to hypothesize and compensate for the bits of language that they cannot fully process in the speech stream in real time. So while many scholars insist that L2 learners cannot focus on semantics when they’re unable to catch sounds and segment them into words, other scholars–such as Long (1989) and Field himself (1997)–counter that learners’ top-down processing ability is exactly what enables them to make sense of what they hear. This ability (they say) is what provides the support necessary to compensate for L2 learners’ imperfect decoding skills.

Moving on to the experiments themselves, I’ll briefly explain the design:

The first experiment: This test consisted of groups of four to six high-frequency words likely to be familiar to the learners (all high elementary or low intermediate level). Sometimes the words all belonged to the same lexical field (for example, desk, chair, lamp, computer), and sometimes only the last two words had a semantic connection (e.g. sunny, excited, bumpy, hot, cold). For the target items, Field changed the onset of the last word only, making a similar word which “didn’t fit” semantically (i.e. sunny, excited, bumpy, hot, bold). Foils, or examples where the last words were not changed, were mixed in, giving learners examples of target patterns. Field wanted to see whether what learners expected to hear would override what they actually heard. And what he found was… the opposite! Out of 18 example sets, listeners only “re-interpreted” one answer to fit their expectations. Well, now, that’s discouraging if you thought you had a good case in favor of top-down processing skills. Field, however, re-assessed his test design and found an important flaw: he had chosen to change the onset, rather than the middle or the offset, of the target words. Native speakers attach great importance to the initial sound of words, and Field (cheerfully?) proclaims that this in itself is an interesting result, since it clearly shows that L2 speakers also pay great attention to the onset of words, to their obvious advantage.


As an aside, I do hypothesize that John Field is a cheerful person. I listened to a podcast of him speaking about his reception of a prestigious award; he describes himself as “gobsmacked” upon hearing the news, which I take to mean pleased and surprised. The surprise implies a very appealing modesty and genuineness. I wouldn’t mind having tea (or a beer?) with him.

So, on to the second experiment, which produced more expected, but this time positive, results. This next test consisted of semantically constrained sentences marked by “acceptable but unpredictable” final words. For instance, “I couldn’t listen to the radio because of the XXX”. One might expect an appropriate answer to be noise, but the actual spoken answer is boys. Again, Field was curious to see whether learners would choose a more predictable answer over what they actually heard, which was less appropriate or expected. This time, results showed that in seven out of the twenty items, listeners substituted “expected” words for the words they actually heard. But once again, there were unexpected results as well: an analysis of the words chosen showed that test-takers tended to change to a word whose onset was similar to that of the target word (for instance, Field predicted that noise would be the preferred choice in the previous example about listening to the radio; learners, however, preferred to substitute voice, which shares a similar labial onset sound with boys). Aha–further evidence of the importance of onset sounds, concluded Field.

Finally, on to the last and most interesting experiment. This time, subjects were presented with a sentence designed to provide a meaningful context for the very last word… which was a potentially unknown low-frequency lexical item. For instance: “They’re lazy in that office; they like to shirk”. Field hypothesized that learners might be tempted to substitute a more familiar and phonetically similar word, even if the context didn’t fit. In the case of the office question, learners might choose work, a word that didn’t necessarily make sense, but that sounded similar and was associated with offices.

So what happened? Well, as Field reports, “results were striking”. He found that 33.31% of the listeners’ responses did not accept the acoustic evidence of unfamiliar words, instead settling on familiar and phonetically similar ones that often had very little connection with the sentence context. Eliminating those listeners who left questions blank, the figure went up to 42.39%. Again, the words learners chose were often not those that Field had predicted. And not only did learners often choose semantically inappropriate words, but they sometimes chose words from different word classes! The trend did not vary much by individual, either: only one of the forty-seven subjects failed to use this strategy at least once, and every one of the 20 test questions drew this kind of known-word substitution from at least one learner. A significant result, right?
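The jump from 33.31% to 42.39% is just a change of denominator: dropping the blank responses shrinks the pool of answers being divided by, so the same number of substitutions makes up a bigger share. A quick sketch of the arithmetic (the counts below are invented purely for illustration; they are not Field’s raw data):

```python
def substitution_rate(substitutions, total, blanks=0):
    """Share of (answered) responses that substituted a known word."""
    answered = total - blanks
    return substitutions / answered

# Hypothetical counts: 47 listeners x 20 items = 940 responses in all.
total_responses = 940
substitutions = 313   # invented count of lexical substitutions
blanks = 200          # invented count of items left blank

print(round(substitution_rate(substitutions, total_responses), 3))          # → 0.333
print(round(substitution_rate(substitutions, total_responses, blanks), 3))  # → 0.423
```

Same numerator, smaller denominator, bigger percentage–which is why Field reports both figures and lets the reader decide which is the fairer one.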

It all depends on how you look at it. Field admits that if the data for the third test (not adjusting for those who left questions blank) were to be calculated, it would show that learners’ choice of inappropriate words was not statistically significant (i.e. it was little better than chance) by quantitative standards. But, as he makes clear, such a pronouncement does not take into account “the most striking fact about the figure–that it was achieved despite the evidence of the listener’s ears and, in many cases, to the evidence of the contrary”. Learners, it seems, can sometimes ignore both top-down (the logical contextual choice) and bottom-up (the word as heard phonetically) evidence when faced with a difficult unknown lexical item. The tendency to substitute a known for an unknown word, then, is an entirely different phenomenon which he calls a lexical strategy.

In the end, Field provides insights rather than answers, and that, for me, is the beauty of this article. He hypothesizes, considers his results accurately and honestly, revises and expands his ideas, and finally sheds some real light on the process of listening for L2 learners. A must-read for all TESOL students and EFL/ESL practitioners, Field’s article is also an interesting and accessible read for language lovers in general. Treat yourself to an enjoyable afternoon with John Field, and you’ll come away a little wiser.


Music Training for the Development of Speech Segmentation

Article Title: Music Training for the Development of Speech Segmentation

Journal: Cerebral Cortex, September 2013.

Authors: Clément François, Julie Chobert, Mireille Besson, and Daniele Schön.

DOI: 10.1093/cercor/bhs180

Type of study: Longitudinal causal study using behavioral and electrophysiological measures

Purpose: To examine the influence of music training on speech segmentation in eight-year-old children.

Procedure: The tested children were pseudo-randomly assigned to either a music or a painting group, and given two years of training. They were tested before training, at one year, and after two years on their ability to extract words from a continuous flow of nonsense syllables.

Results: Researchers found improved speech segmentation skills for the musically trained group only.

My thoughts on the article: First of all, let me say that this is an article about a subject whose surface I am just beginning to scratch and probably will never fully comprehend; nevertheless, I am fascinated by the work of these French researchers, who have been exploring the interrelation of music and speech for many years. They have developed an artificial language, which they revise and use in different ways to determine how humans learn to distinguish words from continuous speech.

The language is syllabic, with a fixed number of nonsense syllables arranged into “words”, each word consisting of precisely three syllables. The words have been assigned tones, which are used consistently throughout the experiments, and are sung by a synthesizer in a continuous stream (in varying order) with no breaks between words to establish segmentation. With no audible segmentation and no semantic cues, how on earth would a listener recognize words in a garbled stream of “gysigipygygisisipysypymi”? According to the researchers’ conclusion, participants are able to perceive words through the integration of pitch with the statistical properties of the speech structure.

Let me explain further. The researchers’ artificial language possesses certain statistical properties: namely, in a continuous stream of speech with words in random order, syllables that appear next to each other within words will occur together more frequently than syllables that are separated by word boundaries. These frequencies are called “transitional probabilities”, and the same holds for tones. In the words Gy-si-gi and Py-gy-gi (for example), “si” and “gi” will occur together more often than “gi” and “Py”. So theoretically, listeners’ brains could come to recognize syllabic patterns that occur more frequently. Still, it is questionable how accurately a listener could guess, and indeed previous research showed that this was a difficult task when the continuous stream of speech was spoken rather than sung. However, François et al. found that accuracy increases when the stream is sung: when listeners associate a certain syllable with a pitch (or tone), they can follow the melodic contour and unconsciously remember as “words” those frequently occurring syllable sequences.
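The idea is simple enough to simulate. Here’s a minimal sketch (using the two example words above; the stream length and ordering are invented for illustration, not taken from the study) showing how within-word transitions end up with higher probabilities than cross-boundary ones:

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """P(next syllable | current syllable) for each adjacent pair in a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Two of the artificial "words", concatenated in random order with no pauses,
# mimicking the continuous sung stream.
words = [["gy", "si", "gi"], ["py", "gy", "gi"]]
random.seed(0)
stream = [syl for _ in range(200) for syl in random.choice(words)]

tp = transitional_probabilities(stream)
print(tp[("si", "gi")])   # within-word transition: 1.0
print(tp[("gi", "py")])   # across a word boundary: well under 1.0
```

Within gy-si-gi, “si” is always followed by “gi”, so that transition’s probability is 1.0; “gi” ends both words, so what follows it depends on which word happens to come next, and the probability drops. A statistical learner can exploit exactly that dip to posit a word boundary.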

The researchers had previously done multiple studies using adult participants, but had never worked with children, whose brains are more plastic. In this particular study, they deliberately focused on children, and chose a longitudinal study in an effort to prove causality. Here is how they went about it:

Thirty-seven 8-year-old native French-speaking schoolchildren were divided into two groups: a music group and a painting group, with care taken to ensure that children from each group represented similar socioeconomic backgrounds, and that no child had previous experience with either music or painting. Along the way, some children moved away or had problems with attentiveness, and the number of participants shrank to twenty-four. The children were tested at the beginning of the experiment by entering a private booth and listening to the stream of sung speech. They were then presented with pairs of spoken words, one of which was an actual “word” from the artificial language, and the other a “non-word” created from the last syllable of one word combined with the first two syllables of another word, or vice-versa. Children were to decide which of the two “words” sounded familiar, based on the sung speech they had just heard. After this initial test, the two groups began two years of lessons. The music group had 45-minute lessons twice a week during the first year, and once a week during the second. The painting group also had 45-minute lessons, and followed the same pattern.
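The “non-word” construction is purely mechanical: splice the tail of one word onto the head of another. A tiny sketch of the idea (the helper and the example words are my own, not the study’s actual stimuli):

```python
def make_nonwords(words):
    """Part-words: the last syllable of one word plus the first two of another,
    or the last two plus the first one (the "vice-versa" case)."""
    nonwords = []
    for w1 in words:
        for w2 in words:
            if w1 is w2:
                continue
            nonwords.append(w1[-1:] + w2[:2])  # e.g. gi + py-gy
            nonwords.append(w1[-2:] + w2[:1])  # e.g. si-gi + py
    return nonwords

words = [["gy", "si", "gi"], ["py", "gy", "gi"]]
print(make_nonwords(words))
```

The sly part is that these part-words do occur in the sung stream–every time one word happens to follow another–just less often than the real words do, so telling them apart is a genuinely statistical judgment rather than a matter of sheer novelty.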

After the end of the first year, the two groups were tested again, and the results showed that the music group had improved in their ability to recognize the “familiar” words, while the painting group’s ability had actually decreased. At the end of two years, the music group showed yet another jump in performance, while the painting group’s score hardly varied from the one they had achieved two years before, which was deemed to be at “chance level”. Take a look at the following graph: the solid line shows the number of correct responses given by the music group over the two-year period, while the dotted line shows the painting group’s lack of progress.

[Graph: correct responses over the two-year period: music group (solid line) vs. painting group (dotted line)]

You can also see the alignment of pitch and syllables, and imagine what a strange experience it must have been for eight year old children to listen to this, alone in a sound booth.

The researchers were particularly interested that the children from the music group were impressively accurate in distinguishing “words” even though the continuous sung sequence was actually designed to contain a higher percentage of “non-words”. The graph on the left shows the progress of the music group compared to the painting group at the one-year mark, but after two years, the music group’s percentage of correct responses actually increased to nearly 75%! Allow me to add that exclamation mark, since this is not a formal paper and I personally find that amazing. Having had years of piano lessons as well as music theory and music appreciation classes, I would love to volunteer in just such a study, especially if it involved more free music lessons! And it’s not the kind of comment that one includes in formal article reviews, but hey: those kids lucked out. They got two years of training in skills that fostered their creativity and (I am certain, though I cannot prove it empirically) made them happier, more interesting individuals. I only hope that the children in the two groups didn’t know the study results; I would hate to have the painting kids think of themselves as losers who got “the wrong answers”. As long as this was not the case, then the painting kids absolutely lucked out too, with free painting lessons. Aside from the creepiness of listening to strange syllabic sequences by oneself in a booth, this was a win-win situation for the children, and I hope their parents appreciated the opportunity.

Finally, the researchers conclude their study with confident pronouncements: musically trained children are able to successfully determine word boundaries, showing that music can indeed play an important role in language acquisition. And I believe so, too. Yes, music is fun and motivating for language students, but its influence is much deeper. I look forward to following the further adventures of these French cognitive scientists and to reading about other cutting-edge research related to the music/speech connection.
