PyCon Israel 2025

Efficiently Fine-Tuning Small Language Models with Python in 2025
09-09, 11:00–11:20 (Asia/Jerusalem), Hall 7
Language: Hebrew

Small language models (SLMs) can outperform larger models on domain-specific tasks. This talk shows how Python tools enable state-of-the-art results with lightweight models, even on modest hardware.


This 20‑minute session balances practical code snippets with strategic insights. We’ll look at:
- Why teams are shifting from large, general-purpose LLMs to small, specialized ones
- The key ingredients for efficient fine-tuning: QLoRA and high-quality datasets (see the sketch after this list)
- How Python tools like Axolotl, PEFT, Accelerate, and Hugging Face Transformers make fine-tuning accessible and reproducible
- A real-world case study: fine-tuning LLaMA‑1B to outperform GPT‑4 on a classification task
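To make the QLoRA ingredient concrete, here is a minimal setup sketch using Transformers, bitsandbytes, and PEFT. The model name, LoRA rank, and target modules are illustrative assumptions for a small Llama-class model, not the exact recipe from the case study:

```python
# Minimal QLoRA setup sketch: 4-bit base model + small trainable LoRA adapters.
# Assumes a GPU and the transformers, peft, and bitsandbytes packages installed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

MODEL_NAME = "meta-llama/Llama-3.2-1B"  # illustrative choice; any small causal LM works

# Quantize the frozen base weights to 4-bit NF4 -- the "Q" in QLoRA --
# so a ~1B-parameter model fits comfortably on a single consumer GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Attach low-rank adapters; only these small matrices receive gradients,
# while the quantized base model stays frozen.
lora_config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # attention projections (assumption)
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

From here the model plugs into a standard Trainer loop, or into an Axolotl YAML config that generates an equivalent setup; because only the adapter weights are updated, memory use and training cost stay low.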

By the end, you’ll understand where fine-tuned SLMs fit in the landscape of AI development, how they can deliver cost‑effective accuracy, and what steps you can take to bring them into your own projects.

No prior ML expertise is required: just Python familiarity and curiosity about building smarter, faster, and more private AI systems.


Expected experience level of participants

Basic

Target audience

Developers, Data Scientists

With 20+ years in data, ML, and GenAI, I blend academic research with real-world innovation. After a PhD focused on early GenAI work, I led GenAI initiatives at Datomize and now build tailored Small Language Models at Datawizz. I'm a founder passionate about AI for good.