Mistral Fine-Tuning Lab

This guide documents a complete Mistral fine-tuning workflow built around four stages: dataset preparation, ChatML tokenization, QLoRA training, and inference testing.

What This Guide Covers

The workflow is organized as a reproducible fine-tuning pipeline:

  1. Prepare the dataset from OpenAssistant Guanaco.
  2. Tokenize conversations in ChatML format and apply label masking.
  3. Fine-tune a base Mistral model with QLoRA.
  4. Load the adapter and test the resulting agent interactively.

Each stage has its own page so you can read the workflow in order or jump directly to the part you need.

Before You Start

Read Environment Setup first for the repository prerequisites, the expected tooling, and the local snapshot differences that matter before you run the project yourself.

Companion Repository

This documentation is the narrative layer of the project. The executable source of truth lives in the public companion repository.

Use the docs to understand the pipeline, tradeoffs, and implementation details. Use the repository to run the code, inspect the full file tree, and work from the real project files instead of copying snippets out of the site.

Pipeline Map

| Step | Main file | Reads | Produces | Why it matters |
| --- | --- | --- | --- | --- |
| Dataset Preparation | 1_Dataset/prepare_dataset.py | timdettmers/openassistant-guanaco | prepared_dataset_chatml | Converts the raw corpus into a consistent ChatML contract. |
| Tokenization | 2_Tokenizer/tokenizer.py | prepared_dataset_chatml | tokenized_dataset_chatml | Adds ChatML tokens and masks user-side labels. |
| Fine-Tuning | 3_FineTuning/fineTuning.py | tokenized_dataset_chatml | mistral-7b-chatml-adapter | Trains a LoRA adapter on top of a quantized base model. |
| Testing Agent | 4_Testing_agent/chat_agent.py | Base model + adapter | Interactive chat loop | Reuses the same ChatML template during inference. |
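To make the first row's contract concrete, here is a minimal sketch of a Guanaco-to-ChatML conversion. The `### Human:` / `### Assistant:` turn markers reflect how the Guanaco corpus is conventionally formatted; the helper name and exact layout are illustrative assumptions, not the actual code in `prepare_dataset.py`.

```python
# Minimal sketch: convert one Guanaco-style record into ChatML.
# Assumptions: the raw text alternates "### Human:" / "### Assistant:"
# turn markers, and the target format uses the standard ChatML tokens
# <|im_start|> / <|im_end|>. The real prepare_dataset.py may differ.
import re

def guanaco_to_chatml(text: str) -> str:
    # Split on the turn markers, keeping the speaker label as a capture group.
    turns = re.split(r"### (Human|Assistant): ", text)
    # re.split yields ["", "Human", "...", "Assistant", "...", ...]
    role_map = {"Human": "user", "Assistant": "assistant"}
    parts = []
    for speaker, content in zip(turns[1::2], turns[2::2]):
        role = role_map[speaker]
        parts.append(f"<|im_start|>{role}\n{content.strip()}<|im_end|>")
    return "\n".join(parts)

raw = "### Human: What is QLoRA? ### Assistant: A memory-efficient fine-tuning method."
print(guanaco_to_chatml(raw))
```

Whatever the real script does internally, the key property is the one shown here: every turn ends up wrapped in explicit role delimiters, so downstream stages never have to guess where a speaker's text begins or ends.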

How To Read The Tutorial

  • Start with Environment Setup if you are preparing a machine to run the pipeline.
  • Start with Dataset Preparation if you want the pipeline in sequence.
  • Jump to Tokenization if you care most about ChatML special tokens and masking.
  • Use Fine-Tuning and Inference Testing together, because training and inference share the same prompt format.
  • Keep the glossary pages nearby when reading the implementation details.
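The label masking mentioned above can be sketched with plain token ids. This is an illustrative toy that assumes the common convention of setting non-assistant labels to -100 so the loss ignores them; it is not the repository's tokenizer code.

```python
IGNORE_INDEX = -100  # conventional "ignore" label for cross-entropy loss

def mask_user_labels(token_ids, assistant_spans):
    """Copy token_ids into labels, masking everything outside the
    assistant spans so only assistant tokens contribute to the loss.
    assistant_spans: list of (start, end) half-open index ranges."""
    labels = [IGNORE_INDEX] * len(token_ids)
    for start, end in assistant_spans:
        labels[start:end] = token_ids[start:end]
    return labels

# Toy example: tokens 0-3 are the user turn, 4-7 the assistant reply.
ids = [101, 7, 8, 9, 102, 21, 22, 103]
print(mask_user_labels(ids, [(4, 8)]))
# → [-100, -100, -100, -100, 102, 21, 22, 103]
```

The design point is that the model still *sees* the user tokens as input context; masking only removes them from the training target, so the adapter learns to produce assistant turns rather than to echo user turns.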

Snapshot Notes

Two project-level files referenced throughout the workflow are not present in this documentation workspace snapshot, but they do exist in the upstream public repository.

The pages below keep those dependencies explicit, but they do not invent local values for files that are outside this site repository.

Suggested Reading Order

  1. Environment Setup
  2. Dataset Preparation
  3. Tokenization & ChatML
  4. Fine-Tuning with QLoRA
  5. Testing & Inference
  6. Fine-Tuning Glossary
  7. Inference Glossary

Baseline Environment Flow

The expected setup flow is:

conda env update --file environment.yml --prune
conda activate Mistral-FineTuning-Lab
huggingface-cli login

Treat these commands as the expected setup contract of the workflow, then verify them against the public repository files before running the pipeline end to end.
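A quick stdlib-only sanity check can confirm that the commands above took effect. The specific checks below (the conda environment name, `huggingface-cli` on PATH, `torch` importable) are assumptions about what the environment should contain after setup, not a script from the repository.

```python
# Hypothetical sanity check: verify the setup commands above succeeded.
import importlib.util
import os
import shutil

def environment_report() -> dict:
    """Return coarse environment checks; True means 'looks ready'."""
    return {
        # Did `conda activate Mistral-FineTuning-Lab` take effect?
        "conda_env_active": os.environ.get("CONDA_DEFAULT_ENV") == "Mistral-FineTuning-Lab",
        # Is the Hugging Face CLI installed and on PATH for `login`?
        "huggingface_cli_on_path": shutil.which("huggingface-cli") is not None,
        # Is PyTorch importable (presumably installed via environment.yml)?
        "torch_importable": importlib.util.find_spec("torch") is not None,
    }

for check, ok in environment_report().items():
    print(f"{'OK     ' if ok else 'MISSING'} {check}")
```

Any `MISSING` line points back at the corresponding setup command rather than surfacing later as a mid-training failure.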

Continue with Environment Setup.