API Reference

Multi-Model Support: Managing and Switching Between LLMs

On platforms powered by large language models (LLMs), each model exhibits distinct strengths depending on the task.

For example,

  • GPT-4 excels at creative generation,
  • Claude performs well with document comprehension,
  • Gemini tends to deliver more consistent answers in data-enriched scenarios.

Because of this, relying on a single model for every task can limit performance; selecting the model best suited to each task is key to achieving optimal outcomes.
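Task-specific selection can be pictured as a lookup from task type to model. The sketch below is purely illustrative: the task categories, model identifiers, and fallback are assumptions drawn from the examples above, not Dot's actual routing table.

```python
# Hypothetical task-to-model routing table; categories and assignments
# are illustrative, based on the strengths described above.
TASK_MODEL_MAP = {
    "creative_generation": "gpt-4",
    "document_comprehension": "claude-3.5",
    "data_analysis": "gemini",
}

DEFAULT_MODEL = "gpt-4"  # assumed fallback for unrecognized task types


def select_model(task_type: str) -> str:
    """Return the model assumed to suit a task, falling back to a default."""
    return TASK_MODEL_MAP.get(task_type, DEFAULT_MODEL)
```

For instance, `select_model("document_comprehension")` would resolve to the Claude entry, while an unknown task type falls back to the default.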

Multi-model support allows for both manual and automatic model switching, providing flexibility not only in performance optimization but also in deployment strategy and data governance.

Organizations may choose to route certain tasks through on-premise models rather than cloud-based providers like OpenAI to meet compliance or data residency requirements.
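Such compliance-driven routing might look like the following sketch. The `Task` shape, the flag name, and the model identifiers are hypothetical; the point is only that a residency constraint, not task fit, decides the destination.

```python
from dataclasses import dataclass


@dataclass
class Task:
    name: str
    contains_regulated_data: bool  # e.g. PII subject to data-residency rules


# Hypothetical deployment targets for illustration only.
ON_PREM_MODEL = "mistral-on-prem"
CLOUD_MODEL = "gpt-4"


def route(task: Task) -> str:
    """Send regulated workloads to an on-premise model, everything else to the cloud."""
    return ON_PREM_MODEL if task.contains_regulated_data else CLOUD_MODEL
```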

Dot currently supports a wide range of models including GPT-4, Claude 3.5, Gemini, DeepSeek, and Mistral.

Supported AI Models in Dot

Users have several options for model configuration:

  • In Simplified Mode, Dot automatically assigns the most suitable model based on the prompt
  • In Focused Mode, users can explicitly define which model should be used for each agent
  • In the Playground, users can test agent behavior and seamlessly switch between models during experimentation
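A Focused Mode-style setup, where each agent is explicitly paired with a model, could be sketched as a simple configuration map. The agent names and the schema below are assumptions for illustration, not Dot's actual configuration format.

```python
# Illustrative per-agent model assignment, loosely modeled on Focused Mode.
# Agent names and schema are assumptions, not Dot's real API.
AGENT_CONFIG = {
    "summarizer": {"model": "claude-3.5"},
    "copywriter": {"model": "gpt-4"},
    "analyst": {"model": "gemini"},
}


def model_for_agent(agent: str) -> str:
    """Look up the explicitly configured model for an agent."""
    try:
        return AGENT_CONFIG[agent]["model"]
    except KeyError:
        raise ValueError(f"No model configured for agent {agent!r}")
```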

What truly differentiates Dot is its integrated approach to agent and model pairing. The system not only allows switching between models but also aligns each task within a workflow to the model best suited for that step. This enables mixed-model orchestration across multi-step processes, where different models operate within the same flow to maximize precision, speed, and contextual understanding.
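Mixed-model orchestration of this kind can be sketched as a pipeline in which each step carries its own model. Everything here is an assumption for illustration: the step names, the model pairings, and the `call_model` stub standing in for a real LLM call.

```python
# Sketch of mixed-model orchestration: each workflow step is paired with
# the model assumed to suit it. The stub below just tags output with the
# model name in place of a real LLM call.
def call_model(model: str, prompt: str) -> str:
    return f"[{model}] {prompt}"


# Hypothetical three-step flow using the strengths described earlier.
PIPELINE = [
    ("extract", "claude-3.5"),  # document comprehension
    ("analyze", "gemini"),      # data-enriched reasoning
    ("draft", "gpt-4"),         # creative generation
]


def run_pipeline(document: str) -> str:
    """Pass the document through each step, switching models between steps."""
    output = document
    for step, model in PIPELINE:
        output = call_model(model, f"{step}: {output}")
    return output
```

Each step's output feeds the next, so a single flow ends up touching three different models without the caller managing any of the switching.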
