ABSTRACT
We propose a new method that embeds speakers into a spatial representation according to the linguistic similarity of their contributions to a debate. Such “speaker landscapes” can be constructed quantitatively using word embeddings by annotating text corpora of speech samples with tokens representing the speakers. Because embeddings are constructed from predictive machine learning models, speaker-tokens are placed closer together if they are easier to confuse given their speech samples. The result is a nuanced measure of similarity in speech which takes into account the wealth of linguistic signification and structure that the text prediction model can learn. We validate this tool using two South African case studies: the Twitter debate around the arrest and imprisonment of former president Jacob Zuma, and quotes from the news media about land reform. The results show that speaker landscapes make social qualities such as group membership and opinions evident, and we discuss how speaker landscapes open up new methods for studying opinion discourse with text data.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Supplemental data
Supplemental data for this article can be accessed online at https://doi.org/10.1080/19312458.2023.2277958.
Notes
1 Here we will always prepend the token to the text, but note that, depending on the predictive machine learning model used to construct the embedding, the position of the speaker-token may matter.
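As a minimal illustration of this annotation step, the sketch below prepends a speaker token to each speech sample before tokenization; the speaker names, texts, and the `spkr_` prefix are invented for the example, and the resulting corpus could be passed to any word-embedding trainer so that each speaker token receives its own vector.

```python
# Hypothetical annotation step: prepend a unique speaker token to each
# speech sample so the embedding model learns a vector for the speaker
# alongside the ordinary word vectors. All names and texts are invented.
samples = [
    ("alice", "the court ruling was fair"),
    ("bob", "free the former president now"),
]

def annotate(speaker, text):
    # The token position (here: prepended) may matter depending on the
    # predictive model used to construct the embedding.
    return f"spkr_{speaker} {text}"

# Tokenized corpus ready for a word-embedding trainer (e.g. word2vec);
# each spkr_* token becomes one point in the speaker landscape.
corpus = [annotate(s, t).split() for s, t in samples]
```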
2 See https://www.dailymaverick.co.za/article/2022-04-01-cyril-ramaphosa-attempted-july-insurrection-left-2-million-jobless-and-wiped-r50bn-from-the-economy/, accessed Feb 24, 2023.
3 We made no attempt to remove malicious accounts, preferring to represent these agents as part of the debate.
4 According to a private communication with William Bird, head of Media Monitoring Africa.
5 The least-squares error between the standardised predictions and targets for the WordSim353 benchmark is 1.56. Our model solves on average 17% of the analogy tasks in the 14 categories correctly, which means that the target word was among the 10 closest neighbors of the vector predicted by our model. On average over the categories, only 60% of the analogies contained words that were all found in our embedding (the others were discarded from the benchmark).
6 We define an analogy test as solved if the target word is amongst the 10 closest neighbors of the vector constructed from the three other words.
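The criterion in this note can be sketched as follows: an analogy a : b :: c : ? is solved if the target word lies among the 10 nearest neighbors of vec(b) − vec(a) + vec(c) under cosine similarity. The tiny random vocabulary below is purely illustrative and stands in for the actual embedding.

```python
import numpy as np

# Illustrative vocabulary of random vectors standing in for an embedding.
rng = np.random.default_rng(0)
vecs = {f"w{i}": rng.normal(size=16) for i in range(50)}

def solved(a, b, c, target, k=10):
    # Construct the analogy vector vec(b) - vec(a) + vec(c) and check
    # whether the target is among its k nearest neighbors by cosine
    # similarity (the three query words themselves are excluded).
    query = vecs[b] - vecs[a] + vecs[c]
    query /= np.linalg.norm(query)
    sims = {w: float(np.dot(query, v / np.linalg.norm(v)))
            for w, v in vecs.items() if w not in (a, b, c)}
    top_k = sorted(sims, key=sims.get, reverse=True)[:k]
    return target in top_k
```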
7 Note that we changed Dutch to Afrikaans after noting that Twitter’s algorithms do not seem to distinguish between the two. This might mislabel a few tweets that were actually written in Dutch.
8 This is a good example of why we need qualitative analysis: if we simply computed a number for how strongly language use is separated by cluster, we would miss that the mixed-language clusters are justified because they capture the same debate.
9 We identified party affiliation by internet search. Note that the attribution of a political party is sometimes ambiguous, for example when a speaker changed parties during the time captured by the data. The assignment was made by considering the position of a speaker when they were most immediately associated with the land debate.
10 We used the following set of words: fuck, shit, fucking, damn, asshole, assholes, fucker, bloody, stupid, gun, will, murder, kill, violence, wrong, shoot, bad, death, attack, feel, shot, action, arm, idiot, crazy, criminal, terrorist, mad, hell, crime, blame, fight, ridicule, insane, die, threat, terror, hate. The cosine distance was exaggerated by a nonlinear function. More details can be found in the code published alongside this paper.
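One way such a score could look is sketched below: a speaker vector is compared against the word list by cosine distance, and the distance is passed through a nonlinear exaggeration function. The squaring used here is an assumption for illustration only; the paper's actual function is in its published code.

```python
import numpy as np

def aggression_score(speaker_vec, word_vecs):
    # Mean exaggerated cosine distance between a speaker vector and a
    # set of word vectors. Squaring the distance is an assumed stand-in
    # for the paper's nonlinear exaggeration function.
    scores = []
    for w in word_vecs:
        cos = np.dot(speaker_vec, w) / (
            np.linalg.norm(speaker_vec) * np.linalg.norm(w))
        dist = 1.0 - cos          # cosine distance
        scores.append(dist ** 2)  # assumed nonlinear exaggeration
    return float(np.mean(scores))
```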