
Fine-Tuning & Customization in Dot

1. Start with Lightweight Customization (Recommended Default)

👉 Tip: Dot recommends RAG, agent, orchestration, and workflow customization first; fine-tuning is suggested only if these approaches do not reach the desired outcome.
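To illustrate why RAG is the lighter-weight default, the sketch below shows the core idea: instead of retraining a model, relevant documents are retrieved at query time and prepended to the prompt. This is a minimal illustration; the function names, the keyword-overlap retriever, and the prompt layout are all assumptions for the example and are not Dot's API.

```python
def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble an augmented prompt: retrieved context first, then the question."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 14 days of a return request.",
    "Our headquarters are located in Berlin.",
]
prompt = build_prompt("How long do refunds take?", docs)
```

A production retriever would use embeddings rather than keyword overlap, but the shape of the technique is the same: no model weights change, so iterating on sources is fast and cheap compared to fine-tuning.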

2. When to Fine-Tune

Choose model fine-tuning only if the steps above cannot meet required accuracy, brand-tone, or compliance thresholds.

3. Fine-Tuning Options Supported in Dot

👉 Tip: All fine-tuning can be performed on open-source models and still deployed on-premise, so your data never leaves your environment.

4. Fine-Tuning Workflow in Dot (Under Development)

  1. Select a base model – Use any connected open-source LLM (e.g., Mistral) from the Model Picker.
  2. Prepare training data – Upload domain examples in the Source panel.
  3. Choose a fine-tuning method – In the fine-tuning dialog, pick LoRA, or DPO/PPO/RL, as required.
  4. Run & monitor – Dot shows progress in Logic; training artifacts are stored securely on-prem or in your chosen environment.
  5. Validate – Test the tuned model in Simplified Mode or inside an agent flow.
  6. Deploy – Because Dot supports cloud, on-premise, and hybrid, you can serve the tuned model where compliance policies allow.
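The LoRA option in step 3 can be sketched numerically: the frozen base weight W is adapted by a scaled low-rank update (alpha / r) · B·A, so only the small B and A matrices are trained. The pure-Python sketch below is illustrative only; the shapes, names, and zero-initialization convention are assumptions from the LoRA technique itself, not Dot's trainer.

```python
# Toy dimensions: a 2x2 base weight adapted with rank r = 1.
r, alpha = 1, 8
W = [[0.5, -1.0], [2.0, 0.25]]   # frozen base weight (not trained)
A = [[0.01, -0.02]]              # trainable down-projection (r x d_in)
B = [[0.0], [0.0]]               # trainable up-projection, zero-initialized

def matvec(M: list[list[float]], v: list[float]) -> list[float]:
    """Plain matrix-vector product."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def lora_forward(x: list[float]) -> list[float]:
    """Base projection plus the scaled low-rank adapter update."""
    base = matvec(W, x)
    update = matvec(B, matvec(A, x))
    return [b + (alpha / r) * u for b, u in zip(base, update)]

x = [1.0, 2.0]
base_only = matvec(W, x)
adapted = lora_forward(x)
```

Because B starts at zero, training begins from the base model exactly, and only r · (d_in + d_out) adapter parameters are updated, which is why the checklist below recommends LoRA when hardware is limited.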

Best Practice Checklist

✓ Start with RAG and agents — quicker and more cost-efficient.

✓ Use LoRA when hardware is limited or you need quick iterations.

✓ Apply DPO/PPO/RL only for advanced alignment needs.

✓ Keep experiments isolated by using New Chat for each test run.

✓ Monitor token usage in Dot’s built-in tracking to stay on budget.

✓ Maintain data control with on-prem or hybrid deployment.
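The checklist reserves DPO/PPO/RL for alignment work. For orientation, the core DPO (Direct Preference Optimization) objective can be sketched with scalar log-probabilities standing in for real model outputs: the loss falls as the policy prefers the chosen response over the rejected one by more than the reference model does. This is a simplified illustration of the published DPO loss, not Dot's implementation.

```python
import math

def dpo_loss(policy_chosen: float, policy_rejected: float,
             ref_chosen: float, ref_rejected: float,
             beta: float = 0.1) -> float:
    """-log sigmoid(beta * [(policy - ref) margin on chosen minus rejected])."""
    margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# A policy that widens the chosen-vs-rejected gap relative to the reference
# (positive margin) is penalized less than one that narrows it:
better = dpo_loss(-1.0, -3.0, -2.0, -2.0)   # margin = +2
worse = dpo_loss(-3.0, -1.0, -2.0, -2.0)    # margin = -2
```

The `beta` knob controls how strongly the policy is pulled away from the reference model; unlike PPO, no separate reward model or sampling loop is needed, which is why DPO is often the first of the preference-based methods to try.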
