This recipe walks through the complete lifecycle of creating a fine-tuned model in Maitai, from raw data to a deployed model.

1) Create a Dataset

The first step is gathering the data you want the model to learn from.
  1. Go to Forge > Finetuning.
  2. Click New Dataset.
  3. Choose a source (e.g., Production Traffic to filter real requests, or File Upload; see the JSONL sketch below).
  4. Save the dataset.
See: Dataset Creation
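
If you go the File Upload route, fine-tuning data is typically supplied as chat-format JSONL, one request per line. Maitai's exact upload schema isn't shown here, so treat this as a hedged sketch of the common OpenAI-style layout:

```python
import json

# Hypothetical example rows in the common chat-format JSONL layout;
# check the Dataset Creation docs for the exact schema Maitai expects.
rows = [
    {
        "messages": [
            {"role": "system", "content": "You are a support assistant."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Go to Settings > Security and click Reset Password."},
        ]
    },
]

with open("dataset.jsonl", "w") as f:
    for row in rows:
        f.write(json.dumps(row) + "\n")  # one JSON object per line
```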

2) Prepare & Refine Data

Once the dataset is created, you can improve its quality.
  1. Open the Dataset.
  2. Use AI Review to spot inconsistencies.
  3. Edit individual requests to fix errors or improve the “ideal” response.
  4. Apply Modifications (regex) or Augmentations (synthetic variations) if needed; see the regex sketch below.
See: Dataset Preparation
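
As a rough illustration of what a regex Modification does (run here as plain Python rather than through the platform), the sketch below masks email addresses and collapses whitespace; the patterns are hypothetical stand-ins for whatever cleanup your data needs:

```python
import re

def apply_modification(text: str) -> str:
    # Hypothetical cleanup rules, analogous to a regex Modification:
    # mask email addresses, then collapse runs of whitespace.
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    return re.sub(r"\s+", " ", text).strip()

print(apply_modification("Contact  me at jane.doe@example.com   please."))
# Contact me at [EMAIL] please.
```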

3) Create a Composition

A Composition is a “recipe” that combines one or more datasets. This lets you mix and match data (e.g., “80% production data + 20% golden test set”) without duplicating the underlying requests.
  1. Go to Forge > Compositions.
  2. Click New Composition.
  3. Select the datasets to include.
  4. Adjust sampling weights if desired (see the sampling sketch below).
See: Compositions
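
To make sampling weights concrete, here is a minimal sketch (plain Python, not the Maitai API) of what an "80% production data + 20% golden test set" mix means when requests are drawn for training; the dataset contents are placeholders:

```python
import random

random.seed(0)

production = [f"prod-{i}" for i in range(1000)]  # placeholder request IDs
golden = [f"gold-{i}" for i in range(50)]

def sample_composition(n: int) -> list[str]:
    # Each draw first picks a dataset by weight, then a request from it,
    # so neither dataset has to be duplicated or re-uploaded.
    pools, weights = [production, golden], [0.8, 0.2]
    return [random.choice(random.choices(pools, weights)[0]) for _ in range(n)]

batch = sample_composition(10)
print(sum(r.startswith("prod") for r in batch), "of 10 drawn from production")
```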

4) Start a Fine-tune Run

Now you’re ready to train.
  1. Go to Forge > Finetuning (or start directly from a Composition).
  2. Click New Finetune Run.
  3. Select your Composition.
  4. Choose the Base Model (e.g., llama-3.1-8b-instruct).
  5. Configure hyperparameters (epochs, learning rate) or stick to defaults; example values below.
  6. Start the run.
See: Fine-tune Runs
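
The field names below mirror the knobs named above; the values are hypothetical starting points, not Maitai's actual defaults:

```python
# Hypothetical run configuration; the real form exposes at least
# epochs and learning rate, and Maitai's defaults may differ.
finetune_config = {
    "base_model": "llama-3.1-8b-instruct",
    "epochs": 3,            # more epochs risk overfitting a small dataset
    "learning_rate": 2e-5,  # a common conservative starting point
}
```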

5) Validate the Model

After training completes, Maitai runs a validation step automatically (if configured); otherwise you can trigger one manually.
  1. Open the Fine-tune Run.
  2. Go to the Validation tab.
  3. Review the Validation Run results to see how the new model performed against a holdout set or specific Test Set.
  4. Check metrics like accuracy and pass rate (computed as sketched below).
See: Validation
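
Both metrics reduce to simple ratios over the validation examples. A minimal sketch of computing them yourself from exported results, with hypothetical field names:

```python
# Hypothetical exported validation results: "passed" marks whether the
# response cleared your evaluation criteria, "correct" an exact-match check.
results = [
    {"passed": True, "correct": True},
    {"passed": True, "correct": False},
    {"passed": False, "correct": False},
]

pass_rate = sum(r["passed"] for r in results) / len(results)
accuracy = sum(r["correct"] for r in results) / len(results)
print(f"pass rate: {pass_rate:.0%}, accuracy: {accuracy:.0%}")
# pass rate: 67%, accuracy: 33%
```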

6) Benchmark against Golden Test Sets

Once you have a candidate model, the final hurdle before deployment is ensuring it actually outperforms your baseline on high-signal cases.
  1. Go to Test > Test Sets.
  2. Select your Golden Test Set for this intent.
  3. Start a New Test Run using your newly fine-tuned model.
  4. Use the Compare Runs feature to view the new model's results side by side with your current production model (an offline equivalent is sketched below).
  5. Verify that the new model maintains or improves performance on critical “golden” examples.
See: Test Set Creation, Test Run Execution
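
If you also want to check results offline, a per-example comparison is just a lookup across the two runs; the field names and IDs below are made up for illustration:

```python
# Hypothetical per-example pass/fail maps exported from two test runs.
baseline = {"ex-1": True, "ex-2": True, "ex-3": False}
candidate = {"ex-1": True, "ex-2": False, "ex-3": True}

regressions = [ex for ex in baseline if baseline[ex] and not candidate[ex]]
improvements = [ex for ex in baseline if candidate[ex] and not baseline[ex]]
print("regressions:", regressions)    # golden examples the new model broke
print("improvements:", improvements)  # examples it newly passes
```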

7) Deploy

If the model performs well and passes your golden benchmarks:
  1. Click Deploy on the Fine-tune Run page.
  2. The model becomes available as an ft:... ID that you can use in your Applications or Intents configuration, as in the example below.
See: Deployment
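
As a hedged sketch of what calling the deployed model might look like, assuming an OpenAI-compatible chat completions endpoint (check the Deployment docs for Maitai's actual integration path); the base URL is a placeholder, and the ft:... ID comes from the Fine-tune Run page:

```python
from openai import OpenAI

# Placeholder endpoint and model ID; copy the real ft:... ID from the
# Fine-tune Run page and point base_url at your actual gateway.
client = OpenAI(api_key="YOUR_KEY", base_url="https://example.invalid/v1")

response = client.chat.completions.create(
    model="ft:...",  # the deployed fine-tune's ID
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```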