Beyond Humanitarian 'Emergencies'
Analysing 20 years of Anglophone news output to see whether the meanings commonly associated with the term 'humanitarian' are changing over time and in relation to specific issues (refugees, climate change, COVID-19), as well as differences between news outlets around the world.
Corpus construction, corpus analysis and topic modelling, and discourse analysis. To do this, I am working with two colleagues: Dani Madrid-Morales (University of Houston), who is leading the corpus construction, and Anouk Lang (LLC), who is working on the computational analysis. We are assisted by our RA, Andrew Jones (Leicester).
Methodological Challenges and Questions
We face several challenges at each stage of our study. In building a corpus of English-language news, we need to consider how to tackle issues of comprehensiveness, inclusiveness and representativeness. To overcome these, we combine multiple search approaches (using commercial databases, commercial and public news APIs, and combing through web search results) to identify all theoretically interesting sources, locate relevant news content, and remove duplicates. This task is particularly challenging given the 20-year range of the project.
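Deduplication is not a solved problem at this scale: the same wire story often circulates in lightly edited forms across outlets, so exact matching alone is insufficient. The sketch below (an illustration, not our production pipeline) shows one common two-step approach: hashing a normalised form of each text to catch exact duplicates, and comparing word n-gram 'shingles' to catch near-duplicates. All function names here are invented for the example.

```python
# Sketch of near-duplicate removal for a news corpus, assuming articles
# arrive as plain-text strings from several overlapping sources.
# Exact duplicates are caught by hashing a normalised form of the text;
# near-duplicates (e.g. lightly edited wire copy) by Jaccard similarity
# over word 5-gram "shingles".
import hashlib
import re


def normalise(text: str) -> str:
    """Lowercase and collapse punctuation/whitespace so trivial formatting
    differences do not defeat exact-match deduplication."""
    return re.sub(r"\W+", " ", text.lower()).strip()


def shingles(text: str, n: int = 5) -> set:
    """All overlapping word n-grams of the normalised text."""
    words = normalise(text).split()
    return {" ".join(words[i:i + n]) for i in range(max(1, len(words) - n + 1))}


def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0


def deduplicate(articles: list[str], threshold: float = 0.8) -> list[str]:
    """Keep the first occurrence of each article; drop exact and near duplicates."""
    seen_hashes = set()
    kept = []  # list of (article, shingle set) pairs
    for text in articles:
        h = hashlib.sha1(normalise(text).encode()).hexdigest()
        if h in seen_hashes:
            continue  # exact duplicate
        if any(jaccard(shingles(text), s) >= threshold for _, s in kept):
            continue  # near duplicate
        seen_hashes.add(h)
        kept.append((text, shingles(text)))
    return [t for t, _ in kept]
```

The pairwise comparison here is quadratic in corpus size; over a 20-year corpus one would replace it with a locality-sensitive hashing scheme such as MinHash, which approximates the same Jaccard comparison at scale.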
Whilst computational analysis methods can readily identify specific words and phrases, moving beyond these to the discourses, metaphors and semantic associations that lie under the surface of the language requires creative adaptation of methodologies initially developed for quantitative analysis.
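One standard route from frequency counts towards semantic associations is collocation analysis, of the kind tools such as AntConc support: scoring how strongly a node word like 'humanitarian' attracts its neighbours. The sketch below (illustrative only, not our actual pipeline) ranks collocates by pointwise mutual information (PMI), one common association measure; the normalisation by window span is one of several conventions in use.

```python
# Illustrative collocation sketch: rank the words co-occurring with a node
# word (e.g. "humanitarian") within a +/- 4-token window, scored by PMI,
# i.e. the log ratio of observed co-occurrence to chance co-occurrence.
import math
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    return re.findall(r"[a-z']+", text.lower())


def collocates(texts: list[str], node: str, window: int = 4) -> list[tuple[str, float]]:
    """Return (word, PMI) pairs for words near `node`, strongest first."""
    word_freq = Counter()
    pair_freq = Counter()
    total = 0
    for text in texts:
        tokens = tokenize(text)
        word_freq.update(tokens)
        total += len(tokens)
        for i, tok in enumerate(tokens):
            if tok != node:
                continue
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    pair_freq[tokens[j]] += 1
    node_count = word_freq[node]
    scores = []
    for w, c in pair_freq.items():
        # Expected co-occurrence under independence, given a 2*window span.
        pmi = math.log2((c * total) / (node_count * word_freq[w] * 2 * window))
        scores.append((w, pmi))
    return sorted(scores, key=lambda x: -x[1])
```

High-PMI collocates ('crisis', 'aid', 'appeal') surface the semantic company a word keeps, which then feeds the qualitative discourse analysis rather than replacing it.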
Manual discourse analysis then involves further methodological challenges: keeping the sharp, critical attention to power dynamics found in the kinds of discourse analysis espoused by social scientists, whilst also making the most of the affordances of new computational methods. Finally, we will need to work out how to ‘stand back’ sufficiently to be able to identify and articulate major trends, without effacing potentially significant differences between time periods, news outlets, cultures, and the coverage of specific issues.
LexisNexis, the Google News API, Factiva, and a custom-built web scraper for corpus construction; AntConc and MALLET for computational analysis; and NVivo to aid discourse analysis.