The US State Department has produced regular reports on human rights practices across the world for many years. These monitoring reports play an important role both in the international human rights regime and in the production of human rights data. In a paper published in 2018, Benjamin Bagozzi and Daniel Berliner analyse these reports in order to identify a set of topics and describe how these vary over time and space.
In today’s seminar, we will analyse the US State Department’s annual Country Reports on Human Rights Practices (1977–2012) by applying structural topic models (STMs) to identify the underlying topics of attention and scrutiny across the entire corpus and in each individual report. We will also assess the extent to which the prevalence of different topics in the corpus is related to covariates pertaining to each country’s relationship with the US.
You will need to load the following packages before beginning the assignment:
library(stm)
library(tidyverse)
library(quanteda)
library(wordcloud)
# If you cannot load these libraries, try installing them first. E.g.:
# install.packages("stm")
Today we will use data on 3,949 Human Rights Reports from the US State Department. The table below describes some of the variables included in the data:
Variable | Description |
---|---|
`cname` | The name of the country which is the subject of the report |
`year` | The year of the report |
`report` | The text of the report (note that these texts have already been stemmed and stop words have been removed) |
`alliance` | Whether the country has a formal military alliance with the United States (1) or not (0) |
`p_polity2` | The Polity score for the country |
`logus_aid_econ` | The (log) level of foreign aid provided to the country by the US |
`oecd` | OECD membership dummy |
`civilwar` | Civil war dummy |
This data is not stored on GitHub because the file is too large. Instead, you will need to download it from Dropbox.
You can get R to do this directly:
utils::download.file(url = 'https://dl.dropboxusercontent.com/s/dv4dp6mpzi9lbbo/human_rights_reports.csv',
destfile = 'human_rights_reports.csv')
Once you have downloaded the file and stored it somewhere sensible, you can load it into R:
human_rights <- read_csv("human_rights_reports.csv")
## Rows: 3949 Columns: 16
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (2): cname, report
## dbl (14): year, cowcode, logwdi_gdpc, p_polity2, alliance, logus_aid_econ, c...
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
You can take a quick look at the variables in the data by using the `glimpse()` function from the `tidyverse` package:
glimpse(human_rights)
## Rows: 3,949
## Columns: 16
## $ cname <chr> "Albania", "Albania", "Albania", "Albania", "Albania…
## $ year <dbl> 1980, 1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988…
## $ cowcode <dbl> 339, 339, 339, 339, 339, 339, 339, 339, 339, 339, 33…
## $ logwdi_gdpc <dbl> 7.524573, 7.560410, 7.568337, 7.558117, 7.524482, 7.…
## $ p_polity2 <dbl> -9, -9, -9, -9, -9, -9, -9, -9, -9, -9, 1, 1, 5, 5, …
## $ alliance <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ logus_aid_econ <dbl> 0.00000, 0.00000, 0.00000, 0.00000, 0.00000, 0.00000…
## $ civilwar <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ oecd <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ logtrade_with_US <dbl> 3.010621, 2.502255, 3.131137, 2.263844, 2.627563, 2.…
## $ latentmean_Fariss <dbl> -0.915279270, -1.060029900, -1.053791400, -1.0242505…
## $ gd_ptsa <dbl> 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 2, 2, 3, 3, 3, 4…
## $ years_to_election <dbl> 0, 3, 2, 1, 0, 3, 2, 1, 0, 3, 2, 1, 0, 3, 2, 1, 0, 3…
## $ rep_pres <dbl> 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0…
## $ pres_chambers <dbl> 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 2, 2, 0, 0, 0…
## $ report <chr> "albania isol balkan nation peopl govern communist r…
We will begin by implementing the null model of the Structural Topic Model. This model is equivalent to the Correlated Topic Model – a close cousin of the LDA model that we covered in the lecture, though one in which the topics in the corpus are allowed to be correlated with each other (LDA assumes that topics are uncorrelated).
The `stm()` function from the `stm` package can be used to fit the model. There are a few different arguments that you will need to specify for this function:
Argument | Description |
---|---|
`documents` | The DFM on which you intend to fit the STM. |
`K` | The number of topics you wish to estimate. |
`prevalence` | A formula (with no response variable) specifying the covariates you wish to use to model topic prevalence across documents. |
`content` | A formula (with no response variable) specifying the covariate you wish to use to model the content of each topic across documents. |
`seed` | A seed number to make the results replicable. |
Create a corpus object from the `human_rights` data. Then create a dfm, making some feature selection decisions. Note: Topic models can take a long time to estimate, so I would advise that you trim the DFM to keep it reasonably small for now.
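One way to approach this is sketched below; the object names (`hr_corpus`, `hr_dfm`) and the trimming threshold are just one plausible set of choices:

```r
# Build a corpus from the report texts, keeping the remaining
# columns as document-level metadata (docvars)
hr_corpus <- corpus(human_rights, text_field = "report")

# Tokenise and convert to a dfm (the reports are already stemmed
# and stop words have already been removed)
hr_dfm <- hr_corpus %>%
  tokens() %>%
  dfm()

# Trim rare features so the dfm stays small enough for the STM to
# estimate in reasonable time; keeping only features that appear
# in at least 5% of documents is one plausible choice
hr_dfm <- dfm_trim(hr_dfm, min_docfreq = 0.05, docfreq_type = "prop")
```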
Use the `stm()` function from the `stm` package to fit a topic model. Choose an appropriate number of topics. You should not use any covariates in answer to this question. As the STM will take a while to run (probably a minute or two), you should make sure you save the output of the model so that you don’t need to run this code repeatedly.
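A minimal sketch, assuming the `hr_dfm` object from above and an (illustrative) choice of 20 topics:

```r
# Fit a null STM: no covariates, only documents and K
stm_out <- stm(documents = hr_dfm,
               K = 20,
               seed = 12345)

# Save the fitted model so you don't have to re-estimate it
saveRDS(stm_out, "stm_out.rds")
# Later: stm_out <- readRDS("stm_out.rds")
```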
Use the `plot()` function to assess how common each topic is in this corpus. What is the most common topic? What is the least common?
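For example, the default summary plot orders topics by their expected proportion in the corpus:

```r
# Expected topic proportions, ordered from most to least common
plot(stm_out, type = "summary")
```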
Use the `labelTopics()` function to extract the most distinctive words for each topic. Do some interpretation of these topic “labels”. Is there a sexual violence topic? Is there a topic about electoral manipulation? Create two word clouds illustrating two of the most interesting topics using the `cloud()` function.
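For instance (the topic numbers below are placeholders; substitute whichever topics you find most interesting):

```r
# Most distinctive words for each topic, under several weightings
labelTopics(stm_out, n = 10)

# Word clouds for two illustrative topics
cloud(stm_out, topic = 3)
cloud(stm_out, topic = 7)
```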
Note: The `stm` package provides various different metrics for weighting words in estimated topic models. The two most relevant for our purposes are `Highest Prob` and `FREX`. `Highest Prob` simply reports the words that have the highest probability within each topic (i.e. inferred directly from the \(\beta\) parameters). `FREX` is a weighting that takes into account both frequency and exclusivity (words are upweighted when they are common in one topic but uncommon in other topics).
Access the document-level topic proportions from the estimated STM object (use `stm_out$theta`). How many rows does this matrix have? How many columns? What do the rows and columns represent?
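For example:

```r
# theta is a documents-by-topics matrix of topic proportions
dim(stm_out$theta)

# Each row is a document and should sum to (approximately) one
head(rowSums(stm_out$theta))
```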
Pick one of the topics and plot it against the `year` variable from the `human_rights` data. What does this plot suggest?
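A sketch of one approach, assuming that no documents were dropped when constructing the dfm (so that the rows of `theta` line up with the rows of `human_rights`), and using topic 5 purely as a placeholder:

```r
# Average proportion of one topic per year, then plot the trend
topic_by_year <- human_rights %>%
  mutate(topic_prop = stm_out$theta[, 5]) %>%
  group_by(year) %>%
  summarise(mean_prop = mean(topic_prop))

plot(topic_by_year$year, topic_by_year$mean_prop,
     type = "l", xlab = "Year", ylab = "Mean topic proportion")
```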
A key innovation of the STM is that it allows us to include arbitrary covariates in the text model, allowing us to assess the degree to which topics vary with document metadata. In this question, you should fit another STM, this time including a covariate in the `prevalence` argument. You can pick any covariate that you think is likely to show interesting relationships with the estimated topics. Again, remember to save your model output so that you don’t need to estimate the model more than once.
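For example, using `p_polity2` as an illustrative prevalence covariate (any covariate of interest would do, and you may need to deal with missing values first):

```r
# STM with a prevalence covariate: topic proportions are now
# modelled as a function of the polity score
stm_cov <- stm(documents = hr_dfm,
               K = 20,
               prevalence = ~ p_polity2,
               data = docvars(hr_dfm),
               seed = 12345)

saveRDS(stm_cov, "stm_cov.rds")
```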
We will want to be able to keep track of the estimated topics from this model for use in the plotting functions later. Create a vector of topic labels from the words with the highest `"frex"` scores for each topic.
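One way to construct such a vector:

```r
# Top FREX words per topic, collapsed into one label per topic
frex_words <- labelTopics(stm_cov, n = 3)$frex
topic_labels <- apply(frex_words, 1, paste, collapse = "_")
```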
Use the `estimateEffect()` function to estimate differences in topic usage by one of the covariates in the `human_rights` data. This function takes three main arguments:
Argument | Description |
---|---|
`formula` | A formula for the regression. Should be of the form `c(1,2,3) ~ covariate_name`, where the numbers on the left-hand side indicate the topics for which you would like to estimate effects. |
`stmobj` | The model output from the `stm()` function. |
`metadata` | A `data.frame` where the covariates are to be found. You can use `docvars(my_dfm)` for the dfm you used to estimate the original model. |
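Putting these together (again using `p_polity2` for illustration, and assuming the 20-topic model from above):

```r
# Estimate how the prevalence of each topic varies with the
# polity score
effect_est <- estimateEffect(1:20 ~ p_polity2,
                             stmobj = stm_cov,
                             metadata = docvars(hr_dfm))
```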
Use the `summary()` function to extract the estimated regression coefficients. For which topics do you find evidence of a significant relationship with the covariate you selected?
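For example:

```r
# One regression table per topic; look for significant covariate
# coefficients
summary(effect_est, topics = 1:5)
```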
Plot some of the more interesting differences that you just estimated using the `plot.estimateEffect()` function. There are various different arguments that you can provide to this function. See the help file for assistance here (`?plot.estimateEffect`).
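A sketch for a continuous covariate, using the FREX-based labels created earlier (topic 5 is again a placeholder):

```r
# Estimated relationship between the covariate and the prevalence
# of one topic
plot.estimateEffect(effect_est,
                    covariate = "p_polity2",
                    topics = 5,
                    method = "continuous",
                    labeltype = "custom",
                    custom.labels = topic_labels[5])
```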
Fit an STM which allows the content of the topics to vary by one of the covariates in the data. You can do so by making use of the `content` argument to the `stm()` function (see the lecture slides for an example). Once you have estimated the model, inspect the output and create at least one plot which demonstrates how word use for a given topic differs by the covariate you included in the model. (Note: The use of the `content` argument can cause the model to take a long time to converge, so you will need to be patient!)
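A sketch, using `oecd` as an illustrative content covariate (content covariates must be categorical, so the dummy is converted to a factor first):

```r
# Convert the dummy to a labelled factor for the content model
meta <- docvars(hr_dfm)
meta$oecd <- factor(meta$oecd, labels = c("non-OECD", "OECD"))

# STM in which word use within each topic can vary with OECD
# membership
stm_content <- stm(documents = hr_dfm,
                   K = 20,
                   content = ~ oecd,
                   data = meta,
                   seed = 12345)

saveRDS(stm_content, "stm_content.rds")

# A "perspectives" plot contrasts word use in one topic across
# the two levels of the content covariate
plot(stm_content, type = "perspectives", topics = 5)
```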