Image and Text Analysis using Multi-modal Embeddings - Masterclass

Masterclass Description
For a long time, the computational analysis of visual content was considered out of reach. Recent advances in computer science, however, offer promising solutions for conducting automated content analysis on images alongside texts. In this workshop, we will introduce state-of-the-art techniques for image and text analysis using multi-modal embeddings. We will discuss CLIP (Contrastive Language–Image Pre-training), a multimodal embedding model developed by OpenAI, and demonstrate existing tools that leverage it for content analysis. Finally, we will discuss the pitfalls of using AI-powered tools in social science research and outline proper validation procedures.
We will use Python in this workshop. Prior experience with Python is beneficial but not necessary; participants with no programming experience will be able to follow the materials. Instructions for installing the required software will be provided before the workshop.
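As a taste of the kind of analysis the workshop covers, the sketch below scores an image against candidate text labels in CLIP's shared embedding space. It is a minimal illustration, assuming the Hugging Face transformers and Pillow packages and the openai/clip-vit-base-patch32 checkpoint; the image file and labels are hypothetical placeholders.

    # A minimal sketch: scoring an image against text labels with CLIP.
    # Assumes the Hugging Face `transformers` and `Pillow` packages are installed;
    # the image file and candidate labels below are hypothetical placeholders.
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("example.jpg")
    labels = ["a protest march", "a political rally", "a sports event"]

    # Encode the image and the labels into CLIP's shared embedding space
    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    outputs = model(**inputs)

    # Higher probability indicates greater image-text similarity
    probs = outputs.logits_per_image.softmax(dim=1)
    for label, prob in zip(labels, probs[0].tolist()):
        print(f"{label}: {prob:.3f}")

Zero-shot labelling of this kind is one application of multi-modal embeddings; how to validate such outputs before using them in research is among the issues the workshop addresses.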
You may also be interested in the accompanying Project Deep Dive for this event.
Speaker Biography
Justin Chun-ting Ho is a postdoctoral researcher at the Amsterdam School of Communication Research. Before moving to Amsterdam, he worked at Academia Sinica in Taipei and at Sciences Po in Paris. He holds a PhD in Sociology from the University of Edinburgh. His work focuses on nationalism, social media analysis, and computational methods.
Booking Information
Upon booking, attendees will receive an automated confirmation email. This event will take place on Zoom; a second email with the Zoom link will be sent closer to the event date. Please use the same email address to register and to log into Zoom.
Please inform us of any access requirements by emailing cdcs@ed.ac.uk. Further details about how CDCS uses your information obtained from booking onto our events can be found in our Events Privacy Statement.