Silent Disco: Hackers' Guide to Language Models
Online
Our “Silent Discos” are training events built around existing online tutorials on computational methods. This event follows the hands-on tutorial by fast.ai co-founder Jeremy Howard, which introduces participants to the transformer architecture that powers large language models.
YouTube Video URL: https://www.youtube.com/watch?v=jkrNMKz9pWU
Notebook: https://github.com/fastai/lm-hackers/blob/main/lm-hackers.ipynb
In the tutorial, Jeremy discusses the capabilities and limitations of cutting-edge models like GPT-4 before exploring how they work under the hood. Participants can follow along using the accompanying Jupyter notebook to explore more advanced topics, including fine-tuning models, decoding tokens, and optimising model performance.
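For readers wondering what “decoding tokens” involves, the sketch below uses the tiktoken library to split a sentence into the integer tokens a GPT-style model actually sees and then turn them back into text. It is only an illustration and may differ from the exact code in the notebook.

```python
# A minimal illustration of tokenisation: text -> integer tokens -> text.
# Uses the tiktoken library; install with `pip install tiktoken`.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")          # tokeniser used by GPT-4
tokens = enc.encode("Language models read tokens, not words.")
print(tokens)                                        # a list of integers
print([enc.decode([t]) for t in tokens])             # each token as a text fragment
print(enc.decode(tokens))                            # decode back to the original text
```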
This tutorial is designed for a diverse audience, from beginners to experienced programmers. As all the code is already written in the notebook, beginners will be able to follow along, while more experienced coders can modify and extend the exercises.
The workshop will take place via Microsoft Teams in a ‘Silent Disco’ format. Participants will work on the tutorial at their own pace. The facilitator will be available via Teams Chat to reply to any questions that arise during the workshop, and to help with installation, troubleshooting or other issues.
To attend this course, you will need to join the associated Microsoft Teams group. The link to join will be sent to attendees prior to the course start date, so please make sure to join in advance.
This Silent Disco will be facilitated by Martin Disley.
After taking part in this event, you may decide that you need some further help in applying what you have learnt to your research. If so, you can book a Data Surgery meeting with one of our training fellows.
More details about Data Surgeries.
If you’re new to this training event format, or to CDCS training events in general, read more on what to expect from CDCS training. Here you will also find details of our cancellation and no-show policy, which applies to this event.
Learning Outcomes:
- Understand the Fundamentals: Gain insight into the basic concepts and architecture of LLMs.
- Evaluate Model Performance: Critically assess the capabilities and limitations of GPT-4 and other modern language models.
- Apply Practical Techniques: Use the OpenAI API for code writing and data analysis, and implement a code interpreter with function calling (a brief illustrative sketch follows this list).
- Fine-Tune Language Models: Apply techniques for fine-tuning models using specialized datasets and understand the process of decoding tokens.
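To give a flavour of the function-calling material, here is a minimal sketch using the openai Python package (v1 client). The `add` tool and its schema are illustrative assumptions for this listing, not necessarily the code used in the tutorial.

```python
# A minimal sketch of function calling with the OpenAI chat API (v1 client).
# Assumes OPENAI_API_KEY is set in the environment; the `add` tool below is
# a hypothetical example, not the exact function from the tutorial notebook.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Add two numbers and return the result.",
        "parameters": {
            "type": "object",
            "properties": {
                "a": {"type": "number"},
                "b": {"type": "number"},
            },
            "required": ["a", "b"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "What is 6 plus 3?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model chose to call the tool, its name and JSON arguments are here;
# your own code then runs the function and returns the result to the model.
call = response.choices[0].message.tool_calls[0]
print(call.function.name, call.function.arguments)
```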
If you're interested in other training on machine learning and AI, have a look at the following: