Fine-Tuning & Customization in Dot
1. Start with Lightweight Customization (Recommended Default)
👉 Tip: Dot recommends lightweight customization (RAG, agents, orchestration, and workflows) first; fine-tuning is suggested only if these do not reach the desired outcome.
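As a minimal sketch of the RAG-first approach (an illustration, not Dot's internal retrieval), the snippet below assembles a prompt from retrieved snippets. The document store and the keyword-overlap scoring are hypothetical stand-ins for whatever retrieval backend you actually connect.

```python
# Minimal RAG sketch: ground the model in your own documents before
# considering fine-tuning. The document store and scoring below are
# hypothetical stand-ins, not Dot's internal retrieval.
from collections import Counter

DOCS = [
    "Refund requests are processed within 14 days of purchase.",
    "Enterprise plans include on-premise deployment and SSO.",
    "Support tickets are answered within one business day.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = Counter(query.lower().split())
    scored = [(sum(q_terms[t] for t in d.lower().split()), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def build_prompt(question: str) -> str:
    """Stuff the top-k snippets into the prompt so the base model answers
    from your data instead of its pre-training alone."""
    context = "\n".join(f"- {d}" for d in retrieve(question, DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How fast are refunds processed?"))
```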
2. When to Fine-Tune
Choose model fine-tuning only if the steps above cannot meet your accuracy, brand-tone, or compliance requirements.
3. Fine-Tuning Options Supported in Dot
👉 Tip: All fine-tuning can be performed on open-source models, and the tuned model can be deployed on-premise, so your data never leaves your environment.
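To make the LoRA option concrete, here is a generic Hugging Face PEFT setup on an open-source model. This is a sketch of the technique itself, not Dot's fine-tuning dialog; the model name and hyperparameters are illustrative placeholders.

```python
# Generic LoRA setup with Hugging Face PEFT -- an illustration of the
# technique, not Dot's fine-tuning dialog. Model name and hyperparameters
# are placeholders; adjust to your hardware and task.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "mistralai/Mistral-7B-v0.1"  # any connected open-source model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

lora_cfg = LoraConfig(
    r=16,                      # adapter rank: lower = fewer trainable params
    lora_alpha=32,             # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # typically well under 1% of the base model
```

Because only the small adapter matrices are trained, LoRA is the option to reach for when hardware is limited or you need quick iterations, as the checklist below also notes.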
4. Fine-Tuning Workflow in Dot (Under Development)
- Select a base model – Use any connected open-source LLM (e.g., Mistral or similar) from the Model Picker.
- Prepare training data – Upload domain examples in the Source panel (see the JSON Lines sketch after this list).
- Choose a fine-tuning method – In the fine-tuning dialog, pick LoRA, or DPO/PPO/RL as required.
- Run & monitor – Dot shows progress in Logic; training artifacts are stored securely on-prem or in your chosen environment.
- Validate – Test the tuned model in Simplified Mode or inside an agent flow.
- Deploy – Because Dot supports cloud, on-premise, and hybrid, you can serve the tuned model where compliance policies allow.
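For the data-preparation step, a common convention for supervised fine-tuning is JSON Lines with prompt/response pairs, as sketched below. The exact schema Dot's Source panel expects may differ; the examples here are purely illustrative.

```python
# Sketch of preparing supervised fine-tuning examples as JSON Lines.
# The exact schema Dot's Source panel expects may differ; prompt/response
# pairs are a common convention for instruction-style fine-tuning.
import json

examples = [
    {"prompt": "Summarize our refund policy in one sentence.",
     "response": "Refunds are issued within 14 days of purchase."},
    {"prompt": "Write a greeting in our brand tone.",
     "response": "Hi there! Great to have you with us."},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```

Keeping examples in this flat, reviewable format also makes it easy to validate the tuned model later: feed the same prompts back in Simplified Mode and compare the outputs against the expected responses.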
Best Practice Checklist
✓ Start with RAG and agents — quicker and more cost-efficient.
✓ Use LoRA when hardware is limited or you need quick iterations.
✓ Apply DPO/PPO/RL only for advanced alignment needs (see the preference-pair sketch after this checklist).
✓ Keep experiments isolated by using New Chat for each test run.
✓ Monitor token usage in Dot’s built-in tracking to stay on budget.
✓ Maintain data control with on-prem or hybrid deployment.
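For the advanced alignment case, DPO-style methods are typically trained on preference pairs: the same prompt with a preferred ("chosen") and a rejected answer. The schema below follows the convention used by common open-source DPO trainers; Dot's expected format may differ, and the records are illustrative only.

```python
# Sketch of a DPO preference dataset: each record pairs a preferred
# ("chosen") answer with a rejected one for the same prompt. This follows
# a common open-source convention; Dot's expected format may differ.
import json

pairs = [
    {
        "prompt": "Explain our pricing to a new customer.",
        "chosen": "Our starter plan is $29/month and includes all core features.",
        "rejected": "Pricing is complicated, check the website.",
    },
]

with open("preferences.jsonl", "w", encoding="utf-8") as f:
    for p in pairs:
        f.write(json.dumps(p, ensure_ascii=False) + "\n")
```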