Month: May 2018

Dismantling the Native-speakerarchy Post 2: “The role of vowel quality in ELF misunderstandings”

(This is the second post in the series “Dismantling the Native-speakerarchy.” Check out the first post here.)

It’s time to pull another Jenga block out of the Native-speakerarchy tower. That block is vowel quality in English as a Lingua Franca (ELF) interactions brought to you by the Asian Corpus of English.  

ELF v. EFL

English as a Lingua Franca (ELF) is often defined in juxtaposition to English as a Foreign Language (EFL). Yes, yes, the acronyms are irritatingly similar. Don’t shoot the messenger.

ELF refers to English used by speakers of other languages for intercultural communication. Think a French girl and a Thai boy falling in love with English as their medium of communication. Or a Korean businesswoman negotiating with a Chinese board of directors in English. ELF prioritizes intelligibility and acknowledges that users will have variations (dropping articles, using relative pronouns like who and which interchangeably, etc.) that deviate from ‘native-speaker’ norms. The variations are a feature, not a bug. A natural occurrence in language patterns, not a deficit.

English as a Foreign Language, on the other hand, is designed to prepare users for communicating with a ‘native-speaker,’ with an implied attempt to conform to inner-circle (U.S., U.K., etc.) standards. Think a Japanese student studying English to matriculate at a Canadian university. Deviations from the standard are errors. English language instruction in an EFL model seeks to raise students’ accuracy to be accepted in academic and professional settings dominated by ‘native-speakers.’ Individual EFL teachers might not share that philosophy, but mass-market coursebooks, curricula, assessments, and hiring practices demonstrate the pervasive nature of ‘native-speaker’ norms.

Back to my bae, ELF. English as a Lingua Franca is a threat to the status of ‘native-speaker’ teachers as the gatekeepers of English AND I AM HERE FOR IT. ELF speakers bring the richness of their accents to English, and they don’t have time for all of English’s quirks. Third person singular ‘s,’ I am lookin’ at you.

The Paper

David Deterding and Nur Raihan Mohamed (2016) used the Asian Corpus of English (ACE) to investigate the impact of vowel quality on intelligibility. ACE is a collection of “naturally occurring, spoken, interactive ELF in Asia.” A veritable playground for ELF fanatics.

The OG ELF fangirl Jennifer Jenkins wrote the literal book on it and identified the Lingua Franca Core: a list of pronunciation features that are necessary for intelligibility in English. Spoiler alert: it’s a short list. It includes “all the consonants of English apart from the dental fricatives, the distinction between long and short vowels, initial and medial consonant clusters, and the placement of intonational nucleus” (Deterding and Mohamed, 2016, p. 293).

Lemme ‘splain.

  • Most consonant sounds are necessary for intelligibility. However, the pesky sounds /θ/ as in thot and /ð/ as in that hoe over there are not necessary because substitutions like /t/ or /f/ for /θ/ and /d/ or /v/ for /ð/ typically suffice.
  • Short v. long vowels. You know, your sheets v. shits, and your beaches v. bitches, etc. Mastering vowel length is considered important for intelligibility according to Jenkins’ research.
  • Initial and medial consonant clusters. Sounds like /str/, /pl/, and /skr/ at the beginning of words, and, to a lesser extent, clusters like /mp/ and /kstr/ in the middle of words, need to be kept intact for the speaker to be comprehensible.
  • Placement of intonational nucleus: This is stress on a syllable in an intonational unit (group of words), and the wrong stress can throw off the listener, so Jenkins includes it in the Lingua Franca Core.

All other pronunciation features are deemed fair game in ELF by Jenkins, including vowel quality, which is what this paper focuses on. Vowel quality refers to what makes vowels sound different from each other: “I must leave the pep rally early to get a pap smear. Pip pip!”

Vowel quality is why JT’s delivery in “It’s Gonna Be Me” spawned the annual “it’s gonna be May” meme.

From ACE, Deterding created the Corpus of Misunderstandings (CMACE), incidentally the name of my emo band, with data exclusively from outer- and expanding-circle English speakers.

This paper is building on Deterding’s earlier 2013 work that determined 86% of misunderstandings in CMACE involved pronunciation. He and Mohamed dig into vowel quality specifically because it was left off the Lingua Franca Core by Jenkins.  

Of the 183 tokens of misunderstanding in the corpus, 98 involved vowel quality. In many of those tokens both vowel length and quality were an issue, but as vowel length is part of the Lingua Franca Core, those were not included in the analysis, leaving 22 tokens of short vowels misheard as other short vowels. Half of these tokens involved /æ/ and /ɛ/, referred to as the TRAP and DRESS vowels in the literature, but what we will call the SASS and FEMME vowels.

When they analyzed each of the 22 tokens in context, they found other pronunciation features that probably caused the misunderstanding, and that vowel quality was indeed a minor factor. For example, “In Token 5, wrapping was misunderstood as ‘weapon’, but the key factor here was the occurrence of /w/ instead of /r/ at the start of the word” (p. 299). Recall that consonant sounds are in the Lingua Franca Core and play a big role in intelligibility.

Conclusion

David Deterding and Nur Raihan Mohamed’s research supports Jenkins’ contention that conforming to ‘native-speaker’ standards in vowel quality is unnecessary for English users to successfully communicate. Let me put on my extrapolation cap because you know how I do. ‘Native-speaker’ English teachers don’t have a pronunciation edge over ‘non native-speaker’ teacher colleagues when it comes to vowel quality. It literally does not matter if someone pronounces it, “Thet’s eccentism, you esshet!”

Check out this article if you are a research bish who wants to see the kind of work that can be done with corpus linguistics. And if you’re an EFL bish or an ELF kween. And if you’re a NNEST.


ACE. 2014. The Asian Corpus of English. Director: Andy Kirkpatrick; Researchers: Wang Lixun, John Patkin, Sophiann Subhan. https://corpus.ied.edu.hk/ace/ (May 26, 2018)

Deterding, D., & Mohamed, N. R. (2016). The role of vowel quality in ELF misunderstandings. Journal of English as a Lingua Franca, 5(3), 291–307.

Jenkins, J. (2000). The phonology of English as an international language. Oxford: Oxford University Press.

Read More

Maybe it’s a grime [t]ing: TH-stopping among urban British youth

I’ve been thinking a lot lately about how identity is something that we perform. I was introduced to this idea through my exploration of Iggy Azalea’s persona and performance for my first Linguabishes post (here). It was my first glimpse at the tricky area of identity research. Not dissimilar from code-switching, your identity performance at work is probably super different from the one you perform to your bishes. Identity can change from context to context and it depends on your audience.

Identity is complex and luckily it evolves. Imagine if you were currently performing your identity from age 15.

In Rob Drummond’s recent paper, “Maybe it’s a grime [t]ing: TH-stopping among urban British youth,” he cites Bucholtz & Hall’s (2010:19–25) five principles of identity. The gist is that identities aren’t fully formed, aren’t explicitly conceived, and are dynamic.

Adolescence is a time of emerging identities. One way teens attempt to craft their identities is by emulating their role models. Maybe you were a Spice Girls fan in 1997 and tried out your first British accent, or an emo Avril Lavigne fan in 2002 who decided to go out and get a bunch of eyeliner. These would both be conscious attempts to appear to be in the same group or share an identity with your role models, but remember: identity performance isn’t always a conscious choice.

When Drummond was working on the UrBEn-ID (Urban British English and Identity) Project in Manchester (the one in the UK, ok bishes?), he noticed something interesting about 4 students who liked a specific kind of music: they performed TH-stopping some of the time.

TH-stopping is pronouncing a voiceless th as a t, like ‘thing’ as ‘ting’. While less common than its voiced sister, DH-stopping (pronouncing ‘them’ as ‘dem’), it occurs in many English varieties, including West Indian Englishes and Creoles, Jamaican Creole, British Creole, Irish English, and Liverpudlian. It is also associated with AAE, so it can be found in Hip-Hop and Grime.

Have you heard of Grime? It’s a type of music born out of early 2000s East London. Think Fix Up, Look Sharp. Grime, like Hip-Hop, is rooted in urban black culture, but having bloomed out of East London, it is also cross-racial, using a multiethnolect (an ethnically neutral dialect) called Multicultural London English (MLE). More on that in Drummond’s related work in search of a Multicultural Urban British English (MUBE).

A lot of previous work has looked at the language-ethnicity link. Does language reflect ethnicity? Or is it a social performance of ethnicity? I guess no one’s really all that sure, but in this specific case, Drummond found that ethnicity was most definitely not a factor.

While most research on adolescent identities takes place in mainstream schools, like Eckert’s, the adolescents in this study were four boys outside of the mainstream education system. They attended a specialized learning center designed for students who didn’t fit into the mainstream system for a variety of reasons. The study took place over 2 years and had 25 participants, but TH-stopping was in such limited use that only these 4 boys stood out. To find out why they were TH-stopping, Drummond looked at a whole bunch of different variables, including sex, ethnicity, speech context, musical tastes, age, and a bunch more. Which variable stood out may surprise you…

While context was a significant factor (meaning, for instance, that TH-stopping didn’t occur in a mock job interview), the biggest variable turned out to be music, but not reported taste in music. Specifically, it was whether the subject was observed rapping in class. For 3 of the 4 boys, rapping is almost a feature of speech, since they regularly slip in and out of it during conversation.

The 4 boys used TH-stopping in conversations where they were trying to show ingroup status with the street, urban, tough culture embodied by Grime. One example is a conversation they had about a mutual acquaintance who was about to get out of jail. They were each trying to show that this person was a friend of theirs, and they each in turn referred to him as a tief (for ‘thief’). Another example: a different boy, discussing his favorite Grime artist, initially doesn’t TH-stop and then self-corrects in order to use it.

Drummond concludes that among the subjects in this study TH-stopping is not a marker of ethnicity, but part of an identity performance. It is a “linguistic resource” that helps align them with a general sense of tough or street culture embodied by Grime.

And just to be clear, it’s not like listening to this type of music has caused their dialects to change. It’s that in order to show that they live in the Grime world, they occasionally stop a TH and perform in-groupedness. This is the major take-away. That, and the fact that ethnicity as a concept was not a meaningful way to group the speakers in this study.

This should be taken into account in future studies that attempt to link identity and language.

——————————————————————————————————————-

Drummond, Rob. “Maybe It’s a Grime [t]ing: TH-Stopping among Urban British Youth.” Language in Society, vol. 47, no. 2, 2018, pp. 171–196., doi:10.1017/s0047404517000999.

Eckert, Penelope. “Linguistic Variation as Social Practice: The Linguistic Construction of Identity in Belten High (Review).” Language, vol. 77, no. 3, 2001, pp. 575–577., doi:10.1353/lan.2001.0193


Read More

Are Emojis Predictable?

Emojis are cool, right? Well typing that sure didn’t feel cool, but whatever. The paper “Are Emojis Predictable?” by Francesco Barbieri, Miguel Ballesteros, and Horacio Saggion explores the relationships between words and emojis by creating robot-brains that can predict which emojis humans would use in emoji-less tweets.

But what exactly are emoji (also, is the plural emoji or emojis?) and how do they interact with our text messaging? Gretchen McCulloch says you can think about them like gestures. So if I threaten you by dragging a finger across my throat IRL, a single emoji of a knife might do the trick in a text. But if they act like gestures in some cases, what are we to make of the unicorn emoji? Or the zombie? It’s not representative of eating brains, right? Right?? Tell me the gesture isn’t eating brains!

So, obviously, trying to figure out what linguistic roles emoji can play is tough, and it doesn’t help that they haven’t been studied all that much from a Natural Language Processing (NLP) perspective. Not to mention the perspective of AI. Will emoji robots take over the world like that post-apocalyptic dystopian hellscape depicted in movies like… the Emoji Movie and… Lego Batman? Studying emojis will not only protect us from the emoji-ocalypse, but also help us analyze social media content and public opinion. That’s called sentiment analysis btw, but more on all the things I just tried to learn later.

The Study (or Machine Learning Models, oh my 😖)

For this study, the researchers (from my alma mater, Universitat Pompeu Fabra) used the Twitter APIs to determine the 20 most frequently used emojis in 40 million tweets out of the US between October 2015 and May 2016. Then they selected only those tweets that contained a single emoji from the top-20 list, which came to more than 584,600 tweets. Then they removed the emoji from each tweet and trained machine learning models to predict which one it was. Simple, right?
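That filtering step is easy to picture in code. Here’s a toy sketch (mine, not the authors’ code) that keeps only tweets containing exactly one emoji from a top list and splits it off as the label; the emoji set and tweets here are made up for illustration, and the real study used the full top 20:

```python
# Toy version of the paper's preprocessing: keep a tweet only if it contains
# exactly one occurrence of a top-list emoji, then split it into (text, label).
TOP_EMOJIS = {"😂", "❤", "😍", "💯", "🔥"}  # truncated; the study used 20

def extract_label(tweet):
    """Return (text_without_emoji, emoji), or None if the tweet doesn't
    contain exactly one top-list emoji."""
    found = [ch for ch in tweet if ch in TOP_EMOJIS]
    if len(found) != 1:
        return None
    emoji = found[0]
    return tweet.replace(emoji, "").strip(), emoji

tweets = ["I love this song ❤", "no emoji here", "double mood 😂 😂"]
dataset = [pair for pair in map(extract_label, tweets) if pair]
print(dataset)  # only the first tweet survives the filter
```

The stripped-out emoji becomes the “gold” answer the models (and later the humans) have to guess back.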

Now just to be clear, the methods in this study are way above my head. I don’t want anyone confusing me for someone who understands exactly what went on here because I was fully confused through the entire methods section. I tried to summarize what little understanding I think I walked away with, but found there was just way too much content. So here is a companion dictionary of terms for the most computationally thirsty bishes (link).

So actually two experiments were performed. The first was comparing the abilities of different machine learning models to predict which emoji should accompany a tweet. And the second was comparing the performance of the best model to human performance.

The Robot Face-Off (🤖 vs 🤖)

In the first experiment, the researchers removed the emoji from each tweet. Then they used 5 different models (see companion dictionary for more info) to predict what the emoji had been:

  1. A bag-of-words model
  2. A skip-gram average model
  3. A bidirectional LSTM model with word representations
  4. A bidirectional LSTM model with character-based representations
  5. A skip-gram model trained with and without pre-trained word vectors

They found that the last three (the neural models) performed better than the first two (the baselines). From this they drew the conclusion that emojis collocate with specific words. For example, the word love collocates with ❤. I’d also like to take a moment to point out this study, which found that emojis are mostly used alongside words and not to replace them. So we’re more likely to text “I love you ❤” than “I ❤ you.”
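To see why even a simple model can pick up those collocations, here’s a toy predictor (again mine, not the paper’s; their baselines feed count vectors into a real classifier): count how often each word co-occurs with each emoji in training tweets, then let the words in a new tweet vote.

```python
from collections import Counter, defaultdict

# Toy collocation-based predictor: tally word-emoji co-occurrences in
# training data, then predict the emoji whose words best match a new tweet.
train = [
    ("i love you so much", "❤"),
    ("love this album", "❤"),
    ("omg that is hilarious lol", "😂"),
    ("lol crying rn", "😂"),
]

word_emoji = defaultdict(Counter)
for text, emoji in train:
    for word in text.split():
        word_emoji[word][emoji] += 1

def predict(text):
    votes = Counter()
    for word in text.split():
        votes.update(word_emoji[word])
    return votes.most_common(1)[0][0] if votes else None

print(predict("love you"))  # → ❤
print(predict("lol what"))  # → 😂
```

The neural models go further by also using word order and spelling, which is roughly why they beat this kind of baseline.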


The Best “Robot”

The best performing model was the char-BLSTM with pretrained vectors on the 20-emoji task. Apparently frequency has a lot to do with it: it shouldn’t be surprising that the model predicts the most frequent emojis most often. So in a case where the word love is used with 💕, the model would prefer ❤. The model also confuses emojis that are used frequently and in varied contexts; 😂 and 😭 are an example of this. They’re both used in contexts with a lot of exclamation points, lols, hahas, and omgs, and often with irony.

The case of 🎄 was interesting. There were only 3 in the test set, and the model predicted it correctly on the two occasions where the word Christmas was in the tweet. The one tweet without it got a wrong prediction from the model.

Second experiment: 🙍🏽vs 🤖

The second experiment was to compare human performance to the character-based representation BLSTM. The humans were asked to read a tweet with the emoji removed and then guess which of five emojis (😂, ❤, 😍, 💯, 🔥) fit.

They crowdsourced it. And guess what? The char-BLSTM won! It had a hard time with 😍 and 💯, and humans mainly messed up 💯 and 🔥. For some reason, humans kept putting in 🔥 where it should have been 😂. Probably the char-BLSTM didn’t do that as much because of its preference for high-frequency emojis.

Conclusion

The BLSTMs outperformed the other models and the humans, which sounds a lot like a terminator-style emoji-ocalypse to me. This paper suggests not only that an automatic emoji prediction tool can be created, but also that it may predict emojis better than humans can, and that there is a link between word sequences and emojis. But because different communities use emojis differently, and because they’re not usually playing the role of words, it’s extremely difficult to pin down their semantic roles, let alone their “definitions.” And while there are some lofty attempts (notably Emojipedia and The Emoji Dictionary) to “define” them, the lack of consensus makes this basically impossible for the vast majority of them.

I recommend this article to emoji kweens, computational bishes 💻, curious bishes 🤔, and doomsday bishes 🧟‍♀️.

Thanks to Rachael Tatman for her post “How do we use Emoji?” for bringing some great research to our attention. If you don’t have the stomach for computational methods, but care about emojis, then definitely check out her post.


Barbieri, Francesco, et al. “Are Emojis Predictable?” Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, 2017, doi:10.18653/v1/e17-2017.

Dürscheid, C., & Siever, C. M. (2017). “Beyond the Alphabet – Communication of Emojis.” Short version of a manuscript (in German) submitted for publication.

Tatman, Rachael. “How Do We Use Emoji?” Making Noise & Hearing Things, 17 Mar. 2018, makingnoiseandhearingthings.com/2018/03/17/how-do-we-use-emoji/.

Read More

Companion to “Are Emojis Predictable?”

Welcome to the companion to

Are Emojis Predictable?

by Francesco Barbieri, Miguel Ballesteros, and Horacio Saggion.

This is where I’ve attempted to provide some semblance of explanation for the methods of the study. Look, I tried my best with this, so don’t judge. I ordered it in terms of the difficulty I had instead of alphabetically. References at the end for thirsty bishes who just can’t get enough.

Difficulty · NLP Model or Term

😀 Sentiment Analysis

A way of determining and categorizing opinions and attitudes in a text using computational methods. Also opinion mining.

☺️ Neural Network

A computational model loosely inspired by how neurons in the brain connect and pass signals to each other.

🙂 Recurrent Neural Network

A type of neural network with loops, so it can carry information from earlier in a sequence forward and make context-based predictions. Also RNN.

🙂 Bag of Words

A representation (not really a neural network) that basically counts up the number of instances of each word in a text. It’s good at classifying texts by word frequencies, but because it determines words by the white space surrounding them and disregards grammar and word order, phrases lose their meaning. Also BoW.
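A two-line illustration (my own) of that last point: under bag-of-words, two sentences with the same words in a different order become indistinguishable.

```python
from collections import Counter

# Bag-of-words in miniature: each text becomes a multiset of word counts.
# Word order is thrown away, so these two opposite sentences look identical.
bow = lambda text: Counter(text.split())

print(bow("the dog bit the man") == bow("the man bit the dog"))  # True
```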

😐 Skip Gram

A neural network model that does the opposite of the BoW. Instead of looking at the whole context, the skip gram considers word pairs separately. It’s trying to predict the context from a word, so it weighs closer words more than further ones, which means the order of words is actually relevant. It’s the model behind Word2Vec.
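Here’s roughly what skip-gram training data looks like, as a sketch under my own simplifications (real Word2Vec also subsamples frequent words and weights by distance): each word generates (target, context) pairs from its neighbors within a window.

```python
# Generate (target, context) training pairs for a skip-gram model: every
# word tries to predict the words near it, within a fixed window size.
def skipgram_pairs(sentence, window=2):
    words = sentence.split()
    pairs = []
    for i, target in enumerate(words):
        for j in range(max(0, i - window), min(len(words), i + window + 1)):
            if j != i:
                pairs.append((target, words[j]))
    return pairs

print(skipgram_pairs("i love you", window=1))
# [('i', 'love'), ('love', 'i'), ('love', 'you'), ('you', 'love')]
```

Training on millions of such pairs is what pushes words used in similar contexts toward similar vectors.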

😐 Long Short-term Memory Network

A recurrent neural network designed to remember information over long stretches of a sequence, so it can learn the order of items and predict what comes next. Also LSTM.

😑 Bidirectional Long Short-term Memory Network

The same as above, but it’s basically time travel: half the network reads the sequence forwards and half reads it backwards, so every prediction gets context from both directions. Also BLSTM.

😓 Char-BLSTM

A character-based approach that learns representations for words that look similar, so it can handle alternatives of the same word type. More accurate than the word-based variety.

😖 Word-BLSTM

The same BLSTM idea as above, but operating on word-level representations instead of characters, so it can’t generalize across spelling variants of the same word as gracefully.

🤮 Word Vector

Ya, this one is umm… a list of numbers representing a word, placed so that words used in similar contexts end up close together. It has magnitude and direction. And like, you have to pre-train it on a big corpus. So… “Fuel your lifestyle with .”

Congratulations if you’ve made it this far! You probably already know more than me. Scream it out. I know I did 🙂


REFERENCES

Bag of Words (BoW) – Natural Language Processing, ongspxm.github.io/blog/2014/12/bag-of-words-natural-language-processing/.

Britz, Denny. “Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs.” WildML, 8 July 2016, www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/.

Brownlee, Jason. “A Gentle Introduction to Long Short-Term Memory Networks by the Experts.” Machine Learning Mastery, 19 July 2017, machinelearningmastery.com/gentle-introduction-long-short-term-memory-networks-experts/.

Brownlee, Jason. “A Gentle Introduction to the Bag-of-Words Model.” Machine Learning Mastery, 21 Nov. 2017, machinelearningmastery.com/gentle-introduction-bag-words-model/.

Chablani, Manish. “Word2Vec (Skip-Gram Model): PART 1 – Intuition. – Towards Data Science.” Towards Data Science, Towards Data Science, 14 June 2017, towardsdatascience.com/word2vec-skip-gram-model-part-1-intuition-78614e4d6e0b.

Verwimp, Lyan, et al. “Character-Word LSTM Language Models.” arXiv:1704.02813, Cornell University Library, 10 Apr. 2017, arxiv.org/abs/1704.02813.

Olah, Christopher. “Understanding LSTM Networks.” Colah’s Blog, colah.github.io/posts/2015-08-Understanding-LSTMs/.

Nielsen, Michael. Neural Networks and Deep Learning. Determination Press, 2015, neuralnetworksanddeeplearning.com/chap1.html.

“Sentiment Analysis: Concept, Analysis and Applications.” Towards Data Science, Towards Data Science, 7 Jan. 2018, towardsdatascience.com/sentiment-analysis-concept-analysis-and-applications-6c94d6f58c17.

gk_. “Text Classification Using Neural Networks – Machine Learnings.” Machine Learnings, Machine Learnings, 26 Jan. 2017, machinelearnings.co/text-classification-using-neural-networks-f5cd7b8765c6.

Thireou, T., and M. Reczko. “Bidirectional Long Short-Term Memory Networks for Predicting the Subcellular Localization of Eukaryotic Proteins.” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 3, 2007, pp. 441–446., doi:10.1109/tcbb.2007.1015.

“Vector Representations of Words  | TensorFlow.” TensorFlow, www.tensorflow.org/tutorials/word2vec.

“Word2Vec Tutorial – The Skip-Gram Model.” Word2Vec Tutorial – The Skip-Gram Model · Chris McCormick, mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/.

Read More