Tag: NLP

Field Notes from 2018’s Adventures in Applied Linguistics

Happy Birthday to us! We’ve been doing the bish thing for a year, so I guess we have to do that tired old practice of recapping because, like Kylie, we had a big year.

TL;DR – what follows is a list of our plans for 2019 and a recap of what we learned in 2018.

This is a still from Kylie Jenner’s 2016 New Year Resolutions video. It shows her head and shoulders with the quote “like, realizing things…”

#goals

    1. We’re looking for guest writers. So if you know any other linguabishes, send them our way.
    2. We’re diversifying our content to include not just peer-reviewed articles in academic journals, but also conference papers, master’s theses, and whatever else strikes our fancies.
    3. We’re planning to provide more of our own ideas like in the Immigrant v. Migrant v. Expat series (posts 1, 2, and 3) and to synthesize multiple papers into little truth nuggets.
    4. Hopefully it won’t come up, but we’re not above dragging any other racist garbage parading as linguistics again.

Plans aside, here’s all the stuff we learned. We covered a lot of topics in 2018, so it’s broken down by theme.

Raciolinguistics and Language Ideology

We wrote 5 posts on language ideology and raciolinguistics and we gave you a new word: The Native-speakarchy. Like the Patriarchy, the Native-speakarchy must be dismantled. Hence Dismantling the Native-Speakarchy Posts 1, 2, and 3. Since we had a bish move to Ethiopia, we learned a little about linguistic landscape and language contact in two of its regional capitals. Finally, two posts about language ideology in the US touched on linguistic discrimination: one was about the way people feel about Spanish in Arizona and the other was about Spanish-English bilingualism in the American job market.

This is a gif of J-Lo from the Dinero music video. She’s wearing black lingerie and flipping meat on a barbecue in front of a mansion. She is singing “I just want the green, want the money, want the cash flow. Yo quiero, yo quiero dinero, ay.”

Pop Culture and Emoji

But we also had some fun. Four of our posts were about pop culture. We learned more about cultural appropriation and performance from a paper about Iggy Azalea, and one about grime music. We also learned that J.K. Rowling’s portrayal of Hermione wasn’t as feminist as fans had long hoped. Finally, a paper about reading among drag queens taught us that there’s more to drag queen sass than just sick burns.

Emojis aren’t a language, but they are predictable. The number one thing this bish learned about emojis though is that the methodology used to analyze their use is super confusing.

This is a gif of the confused or thinking face emoji fading in and out of frame.

Lexicography and Corpus

We love a dictionary and we’ve got receipts. Not only did we write a whole 3-post series comparing the usages of Expat v. Immigrant v. Migrant (posts 1, 2, and 3), but we also learned what’s up with short-term lexicography and made a little dictionary of words used by gay men in the 1800s.

Sundries

This is a grab bag of posts that couldn’t be jammed into one of our main categories. These are lone wolf posts that you only bring home to your parents to show them you don’t care what they think. These black sheep of the bish family wear their leather jackets in the summer and their sunglasses at night.

This is a black and white gif of Rihanna looking badass in shades and some kind of black fur stole.

Dank Memes

Finally, we learned that we make the dankest linguistics memes. I leave you with these.

 Thanks for reading and stay tuned for more in 2019!


Are Emojis Predictable?

Emojis are cool, right? Well, typing that sure didn’t feel cool, but whatever. The paper “Are Emojis Predictable?” by Francesco Barbieri, Miguel Ballesteros, and Horacio Saggion explores the relationships between words and emojis by creating robot-brains that can predict which emojis humans would use in emoji-less tweets.

But what exactly are emoji (also, is the plural emoji or emojis?) and how do they interact with our text messaging? Gretchen McCulloch says you can think about them like gestures. So if I threaten you by dragging a finger across my throat IRL, a single emoji of a knife might do the trick in a text. But if they act like gestures in some cases, what are we to make of the unicorn emoji? Or the zombie? It’s not representative of eating brains, right? Right?? Tell me the gesture isn’t eating brains!

So, obviously, trying to figure out what linguistic roles emoji can play is tough, and it doesn’t help that they haven’t been studied all that much from a Natural Language Processing (NLP) perspective. Not to mention the perspective of AI. Will emoji robots take over the world like that post-apocalyptic dystopian hellscape depicted in movies like… the Emoji Movie and… Lego Batman? Studying emojis will not only protect us from the emoji-ocalypse, but also help us analyze social media content and public opinion. That’s called sentiment analysis btw, but more on all the things I just tried to learn later.

The Study (or Machine Learning Models, oh my 😖)

For this study, the researchers (from my alma mater, Universitat Pompeu Fabra) used the Twitter APIs to determine the 20 most frequently used emojis from 40 million tweets out of the US between October 2015 and May 2016. Then they selected only those tweets that contained a single emoji from the top-20 list, which left more than 584,600 tweets. Then they removed the emoji from each tweet and trained machine learning models to predict which one it was. Simple, right?
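Just to make that setup concrete, here’s my own toy sketch of the filtering step (not the authors’ code, and the TOP_20 list below is an illustrative stand-in, not the actual list from the paper):

```python
from typing import Optional, Tuple

# Illustrative stand-in only; the paper derived the real top-20 list from 40M US tweets.
TOP_20 = ["😂", "❤", "😍", "💯", "🔥", "😭", "💕", "🎄"]

def make_example(tweet: str) -> Optional[Tuple[str, str]]:
    """Return (emoji-free text, emoji label) if the tweet contains exactly one
    top-20 emoji exactly once; otherwise return None."""
    found = [e for e in TOP_20 if e in tweet]
    if len(found) == 1 and tweet.count(found[0]) == 1:
        label = found[0]
        return tweet.replace(label, "").strip(), label
    return None

tweets = ["I love you ❤", "lol 😂 😂 😂", "no emoji here"]
dataset = [ex for t in tweets if (ex := make_example(t)) is not None]
print(dataset)  # [('I love you', '❤')]
```

From there, the models get the emoji-free text and have to guess the label.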

Now just to be clear, the methods in this study are way above my head. I don’t want anyone confusing me for someone who understands exactly what went on here because I was fully confused through the entire methods section. I tried to summarize what little understanding I think I walked away with, but found there was just way too much content. So here is a companion dictionary of terms for the most computationally thirsty bishes (link).

So actually two experiments were performed. The first was comparing the abilities of different machine learning models to predict which emoji should accompany a tweet. And the second was comparing the performance of the best model to human performance.

The Robot Face-Off (🤖 vs 🤖)

In the first experiment, the researchers removed the emoji from each tweet. Then they used 5 different models (see companion dictionary for more info) to predict what the emoji had been:

  1. A Bag of Words model
  2. Skip-Gram Average model
  3. A bidirectional LSTM model with word representations 
  4. A bidirectional LSTM model with character-based representations 
  5. A skip-gram model trained with and without pre-trained word vectors

They found that the last three (the neural models) performed better than the first two (the baselines). From this they drew the conclusion that emojis collocate with specific words. For example, the word love collocates with ❤. I’d also like to take a moment to point out this study, which found that emojis are mostly used alongside words, not in place of them. So we’re more likely to text “I love you ❤” than “I ❤ you.”
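If you’re curious what a bag-of-words baseline even looks like, here’s a tiny sketch using scikit-learn. It’s my own toy version with made-up tweets and labels, not the authors’ setup, but it shows the idea: count words, then learn which counts go with which emoji.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up training data: emoji-free tweet text paired with the emoji that was removed.
texts = ["I love you so much", "crying at this lol", "merry christmas everyone", "love this song"]
labels = ["❤", "😂", "🎄", "❤"]

# Bag of words: each tweet becomes a vector of word counts; word order is ignored.
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["i love my dog"]))  # likely ['❤'], because love collocates with ❤
```

The neural models in the paper swap the raw word counts for learned word (or character) representations and a BLSTM, which is what gives them the edge.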

 

The Best “Robot”

The best performing model was the char-BLSTM with pretrained vectors on the 20-emoji task. Apparently frequency has a lot to do with it. It shouldn’t be surprising that the model predicts the most frequent emojis more frequently. So in a case where the word love is used with 💕, the model would still prefer ❤. The model also confuses emojis that are used frequently and in varied contexts. 😂 and 😭 are an example of this. They’re both used in contexts with a lot of exclamation points, lols, hahas, and omgs, and often with irony.

The case of 🎄 was interesting. There were only 3 in the test set, and the model correctly predicted it on the two occasions where the word Christmas was in the tweet. In the one case without it, the model didn’t make the correct prediction.

Second experiment: 🙍🏽 vs 🤖

The second experiment compared human performance to the character-based BLSTM. The humans were asked to read a tweet with the emoji removed and then to guess which of five emojis (😂, ❤, 😍, 💯, and 🔥) fit.

They crowdsourced it. And guess what? The char-BLSTM won! It had a hard time with 😍 and 💯, and humans mainly messed up 💯 and 🔥. For some reason, humans kept putting in 🔥 where it should have been 😂. The char-BLSTM probably didn’t do that as much because of its preference for high-frequency emojis.
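In case you’re wondering how the face-off gets scored: you compare each guess to the emoji that was actually in the tweet and compute accuracy or F1. Here’s a toy illustration with made-up guesses, not the paper’s actual evaluation code:

```python
from sklearn.metrics import accuracy_score, f1_score

gold  = ["😂", "❤", "😂", "🔥", "💯"]   # emojis that were really in the tweets
model = ["😂", "❤", "❤", "🔥", "💯"]   # what the char-BLSTM guessed (made up)
human = ["😂", "❤", "🔥", "🔥", "🔥"]   # what the crowd guessed (made up)

for name, preds in [("char-BLSTM", model), ("humans", human)]:
    acc = accuracy_score(gold, preds)
    f1 = f1_score(gold, preds, average="macro", zero_division=0)
    print(name, acc, f1)
```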

Conclusion

  • The BLSTMs outperformed the other models and the humans, which sounds a lot like a terminator-style emoji-ocalypse to me. This paper not only suggests that an automatic emoji prediction tool can be created, but also that it may predict emojis better than humans can, and that there is a link between word sequences and emojis. But because different communities use emojis differently, and because they’re not usually playing the role of words, it’s excessively difficult to pin down their semantic roles, not to mention their “definitions.” And while there are some lofty attempts (notably Emojipedia and The Emoji Dictionary) to “define” them, the lack of consensus makes this basically impossible for the vast majority of them.

I recommend this article to emoji kweens, computational bishes 💻, curious bishes 🤔, and doomsday bishes 🧟‍♀️.

Thanks to Rachael Tatman, whose post “How do we use Emoji?” brought some great research to our attention. If you don’t have the stomach for computational methods, but care about emojis, then definitely check out her post.

 


 

Barbieri, Francesco, et al. “Are Emojis Predictable?” Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, 2017, doi:10.18653/v1/e17-2017.

Dürscheid, C., & Siever, C. M. (2017). “Beyond the Alphabet–Communication of Emojis” Kurzfassung eines (auf Deutsch) zur Publikation eingereichten Manuskripts.

Tatman, Rachael. “How Do We Use Emoji?” Making Noise & Hearing Things, 22 Mar. 2018, makingnoiseandhearingthings.com/2018/03/17/how-do-we-use-emoji/.


Companion to “Are Emojis Predictable?”

Welcome to the companion to “Are Emojis Predictable?” by Francesco Barbieri, Miguel Ballesteros, and Horacio Saggion.

This is where I’ve attempted to provide some semblance of an explanation for the methods of the study. Look, I tried my best with this, so don’t judge. I’ve ordered the terms by how much difficulty I had with them instead of alphabetically. References are at the end for thirsty bishes who just can’t get enough.

Difficulty · NLP Model or Term

😀 Sentiment Analysis

A way of determining and categorizing the opinions and attitudes expressed in a text using computational methods. Also called opinion mining.
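As a toy illustration (real sentiment analysis uses trained models or much bigger lexicons, not a hand-rolled word list like this):

```python
# Deliberately tiny, made-up word lists just to show the idea.
POSITIVE = {"love", "great", "happy", "best"}
NEGATIVE = {"hate", "awful", "sad", "worst"}

def sentiment_score(text: str) -> int:
    """Positive score = happy text, negative score = grumpy text."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("best day ever, I love it"))  # 2
print(sentiment_score("worst movie, I hate it"))    # -2
```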

☺️ Neural Network

A computational model loosely based on how neurons in the human brain work.

🙂 Recurrent Neural Network

A type of neural network that processes sequences and carries information forward from earlier steps, so it can make context-based predictions. Also RNN.

🙂 Bag of Words

A representation of a text that basically counts up the number of instances of each word. It’s good at classifying texts by word frequencies, but because it identifies words by the white space around them and disregards grammar and word order, phrases lose their meaning. Also BoW.
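Here’s the “phrases lose their meaning” problem in a few lines of Python (my own toy example):

```python
from collections import Counter

# Two sentences that mean opposite things...
a = Counter("the dog bites the man".split())
b = Counter("the man bites the dog".split())

# ...get identical bag-of-words counts, because word order is thrown away.
print(a == b)  # True
```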

😐 Skip Gram

A model that does the opposite of BoW. Instead of looking at the whole context at once, the skip gram considers word pairs: it tries to predict the surrounding context from a word, and it weights closer words more heavily than farther ones, so the order of words is actually relevant. Also Word2Vec.
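To make the “word pairs” idea concrete, here’s a toy sketch of how skip-gram training pairs get generated from a sentence (the actual neural training that turns these pairs into vectors is omitted):

```python
def skipgram_pairs(tokens, window=2):
    """Yield (center, context) pairs; skip-gram trains a model to predict
    each context word from its center word."""
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                yield center, tokens[j]

print(list(skipgram_pairs("i just want the cash flow".split(), window=1)))
# [('i', 'just'), ('just', 'i'), ('just', 'want'), ('want', 'just'), ...]
```

(In real Word2Vec the window size gets sampled, which is how closer words end up counting for more.)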

😐 Long Short-term Memory Network

A recurrent neural network with gates that decide what to remember and what to forget, so it can learn the order of items over long sequences and predict what comes next. Also LSTM.

😑 Bidirectional Long Short-term Memory Network

The same as above, but it’s basically time travel: half the network reads the sequence forwards and half reads it backwards, so each prediction can draw on both what came before and what comes after. Also BLSTM.
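For the computationally thirsty, here’s roughly what a word-level BLSTM emoji classifier looks like in Keras. This is my own sketch with placeholder sizes, not the authors’ actual architecture:

```python
import tensorflow as tf

VOCAB_SIZE, EMBED_DIM, NUM_EMOJIS = 20000, 100, 20  # placeholder sizes

model = tf.keras.Sequential([
    # Each word index becomes a dense vector (this is where pre-trained word vectors could go).
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),
    # One LSTM reads the tweet left-to-right, another right-to-left; their outputs are combined.
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    # One probability per emoji in the top-20 list.
    tf.keras.layers.Dense(NUM_EMOJIS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(padded_word_indices, emoji_labels, epochs=5)  # with real, tokenized data
```

Swap the word indices for character indices and you’ve got the gist of the char-BLSTM below.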

😓 Char-BLSTM

A character-based BLSTM that learns similar representations for words that look similar, so it can handle variants of the same word (think lol vs. looool). In this study it was more accurate than the word-based variety.

😖 Word-BLSTM

The word-based counterpart of the above: the same BLSTM, but it runs over word representations instead of building them up from characters.

🤮 Word Vector

Ya, this one is umm… well, you see, it has magnitude and direction. And like, you have to pre-train it. More usefully: it’s a list of numbers standing in for a word, learned so that words used in similar contexts end up with similar vectors. So… “Fuel your lifestyle with .”
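Here’s that idea in code. The vectors below are made up for illustration; real pre-trained ones have hundreds of dimensions:

```python
import numpy as np

# Made-up 3-dimensional "word vectors"; real ones come pre-trained on huge corpora.
vectors = {
    "love":  np.array([0.9, 0.1, 0.0]),
    "adore": np.array([0.8, 0.2, 0.1]),
    "taxes": np.array([0.0, 0.1, 0.9]),
}

def cosine(u, v):
    """Cosine similarity: 1 means pointing the same way, 0 means unrelated."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(vectors["love"], vectors["adore"]))  # ~0.98: used in similar contexts
print(cosine(vectors["love"], vectors["taxes"]))  # ~0.01: not so much
```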

Congratulations if you’ve made it this far! You probably already know more than me. Scream it out. I know I did 🙂

 


 

REFERENCES

“Bag of Words (BoW) – Natural Language Processing.” ongspxm.github.io/blog/2014/12/bag-of-words-natural-language-processing/.

Britz, Denny. “Recurrent Neural Networks Tutorial, Part 1 – Introduction to RNNs.” WildML, 8 July 2016, www.wildml.com/2015/09/recurrent-neural-networks-tutorial-part-1-introduction-to-rnns/.

Brownlee, Jason. “A Gentle Introduction to Long Short-Term Memory Networks by the Experts.” Machine Learning Mastery, 19 July 2017, machinelearningmastery.com/gentle-introduction-long-short-term-memory-networks-experts/.

Brownlee, Jason. “A Gentle Introduction to the Bag-of-Words Model.” Machine Learning Mastery, 21 Nov. 2017, machinelearningmastery.com/gentle-introduction-bag-words-model/.

Chablani, Manish. “Word2Vec (Skip-Gram Model): Part 1 – Intuition.” Towards Data Science, 14 June 2017, towardsdatascience.com/word2vec-skip-gram-model-part-1-intuition-78614e4d6e0b.

Verwimp, et al. “Character-Word LSTM Language Models.” arXiv, Cornell University Library, 10 Apr. 2017, arxiv.org/abs/1704.02813.

Olah, Christopher. “Understanding LSTM Networks.” Colah’s Blog, colah.github.io/posts/2015-08-Understanding-LSTMs/.

Nielsen, Michael A. “Neural Networks and Deep Learning.” Determination Press, 2015, neuralnetworksanddeeplearning.com/chap1.html.

“Sentiment Analysis: Concept, Analysis and Applications.” Towards Data Science, 7 Jan. 2018, towardsdatascience.com/sentiment-analysis-concept-analysis-and-applications-6c94d6f58c17.

gk_. “Text Classification Using Neural Networks.” Machine Learnings, 26 Jan. 2017, machinelearnings.co/text-classification-using-neural-networks-f5cd7b8765c6.

Thireou, T., and M. Reczko. “Bidirectional Long Short-Term Memory Networks for Predicting the Subcellular Localization of Eukaryotic Proteins.” IEEE/ACM Transactions on Computational Biology and Bioinformatics, vol. 4, no. 3, 2007, pp. 441–446., doi:10.1109/tcbb.2007.1015.

“Vector Representations of Words  | TensorFlow.” TensorFlow, www.tensorflow.org/tutorials/word2vec.

“Word2Vec Tutorial – The Skip-Gram Model.” Word2Vec Tutorial – The Skip-Gram Model · Chris McCormick, mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/.
