GenAI DevCon London
Talk 1: Lessons from years of prompt engineering
Speaker: Mike Taylor, O'Reilly Author & Top Instructor at Udemy
Talk 2: Crafting the Perfect Model: Fine-Tuning and Merging LLMs
Speaker: Maxime Labonne, Staff ML Scientist at Liquid AI
About the talk:
This presentation introduces the scenarios where fine-tuning is most beneficial, surveys popular frameworks for efficient implementation, and explores key methodologies. We'll cover supervised fine-tuning techniques, such as LoRA and QLoRA, as well as preference alignment methods, including DPO. Additionally, we'll discuss model merging techniques like SLERP, DARE, passthrough, and frankenMoEs. We'll share tips and tricks from the most popular models on the Hugging Face Hub to maximize the performance of your LLMs.
Talk 3: How we scaled from 0 to 100M tokens/day
Speakers: Bhargav & Priya, Software Engineer & DevOps Engineer at Tune AI