AI agents are digital entities designed to autonomously perform specific tasks, make decisions, and generate outputs. They are typically focused on a particular role or function such as a customer support representative, financial analyst, or content writer.
These agents take in inputs such as prompts, files, or user queries, interpret them, process them according to their defined purpose, and deliver results. While some agents operate based on predefined rules, others can learn from data and make context-aware decisions.
An AI agent is not just an information provider. It is a task executor that can manage processes and delegate responsibilities to other agents when necessary.
This is what differentiates them from classic chatbots.
For instance, a bot answering “What’s the weather today?” is not an AI agent.
But a system that analyzes six months of sales data, generates a report, and summarizes it in an executive format qualifies as a full-fledged AI agent.
The Dot platform offers customizable and reusable AI agents for hundreds of use cases. These agents are categorized as “Novus Agents” (pre-built) or “My Agents” (user-created).
Users can create their own agents from scratch using the “Create Agent” function in Focused Mode. Examples include:
Customer Support Agent: Automatically responds to customer inquiries by pulling data from connected CRM systems.
Financial Report Agent: Extracts data from SAP or Excel files and generates budget reports.
In Dot, each agent is configured based on its defined role, the data sources it connects to, and the model it runs on.
These agents are reusable, version-controlled, and shareable across an organization.
AI orchestration is the structured coordination of multiple AI agents working either sequentially or in parallel. This approach enables the creation of collaborative AI teams instead of isolated single agents.
Much like a real orchestra, each agent plays a specific role. One may perform data analysis, another may make decisions, another may generate output, and yet another may review the results.
Through AI orchestration, simple tasks grow into complex, multi-step workflows that are managed end to end.
Orchestration is not limited to agent-to-agent transitions.
It also involves the definition of data flow, logical conditions, control mechanisms, and intelligent routing. The architecture typically includes agents in distinct roles: analysis, decision-making, output generation, and review.
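As an illustration only (not Dot's actual API), the sequential hand-off between role-specific agents described above can be sketched in Python; all agent names and payload fields here are hypothetical:

```python
from typing import Callable, List

# Hypothetical sketch: each "agent" is a plain function that transforms a
# shared payload and passes it to the next agent in the chain.
def analysis_agent(payload: dict) -> dict:
    payload["insight"] = f"trend detected in {payload['dataset']}"
    return payload

def decision_agent(payload: dict) -> dict:
    payload["decision"] = "escalate" if "trend" in payload["insight"] else "ignore"
    return payload

def output_agent(payload: dict) -> dict:
    payload["report"] = f"Decision: {payload['decision']} ({payload['insight']})"
    return payload

def review_agent(payload: dict) -> dict:
    payload["approved"] = payload["report"].startswith("Decision:")
    return payload

def run_pipeline(agents: List[Callable[[dict], dict]], payload: dict) -> dict:
    for agent in agents:  # sequential orchestration: each agent sees the prior output
        payload = agent(payload)
    return payload

result = run_pipeline(
    [analysis_agent, decision_agent, output_agent, review_agent],
    {"dataset": "sales_h1.csv"},
)
print(result["report"])  # Decision: escalate (trend detected in sales_h1.csv)
```

A real orchestration layer would replace these functions with model-backed agents, but the control flow, one role handing its output to the next, is the same.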
Within the Dot platform, this orchestration is implemented through the Focused Mode.
Users can link multiple agents together in the Playground to form a task chain, building orchestrated workflows that solve complex problems.
In Dot, the Logic panel governs this entire flow with full transparency. It defines when each agent is activated, which model is assigned, and where the output is directed next.
This enables users to build dynamic, intelligent systems that automate high-value business processes reliably.
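To make the idea concrete, here is a minimal sketch, not Dot's Logic panel itself, of a routing table where each step names the agent, the model assigned to it, and a condition deciding where the output goes next; all step and model names are invented for illustration:

```python
# Hypothetical routing table: step name -> assigned model and a rule
# that inspects the output to pick the next step.
ROUTES = {
    "extract":   {"model": "model-a", "next": lambda out: "summarize"},
    "summarize": {"model": "model-b",
                  "next": lambda out: "review" if len(out) > 20 else "done"},
    "review":    {"model": "model-c", "next": lambda out: "done"},
}

def run_step(step: str, text: str) -> str:
    # Stand-in for a real model call; tags the output with the assigned model.
    model = ROUTES[step]["model"]
    return f"[{model}] {text}"

def orchestrate(text: str) -> list:
    trace, step = [], "extract"
    while step != "done":
        text = run_step(step, text)
        trace.append(step)
        step = ROUTES[step]["next"](text)  # output-based routing
    return trace

print(orchestrate("Q3 revenue figures"))  # ['extract', 'summarize', 'review']
```

The point is the shape of the control layer: activation order, model assignment, and routing are all declared in one transparent structure rather than buried inside the agents.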
AI Hub is the centralized interface for managing all AI assets within an organization.
It enables users to view, edit, share, and version control their agents, data sources, workflows, and integration structures in one place.
With AI Hub, reusability becomes a core capability, and organizational-scale deployment of AI solutions becomes structured and efficient.
AI Hub is not merely a storage repository.
It serves as a control layer, access management system, and version tracking utility. Agents can be shared across teams, restricted to specific roles, or limited to predefined data sources. From a governance perspective, the hub provides visibility into critical operational metadata such as which agent was used, by whom, and which version was active at a given time.
In the Dot platform, the “Hub” section provides centralized views of these assets: agents, data sources, workflows, and integrations.
For developers and product managers, the Hub serves as a command center for cloning, maintaining, and distributing intelligent workflows across departments.
For example, an “Onboarding Agent” built for the HR team can be cloned, renamed, and repurposed by the sales team by simply switching the connected data source.
This modular architecture empowers cross-functional AI adoption without redundant configuration.
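The clone-and-repurpose pattern above can be sketched as configuration reuse; the field names below are illustrative and not Dot's actual schema:

```python
from dataclasses import dataclass, replace

# Hypothetical agent configuration; fields are illustrative, not Dot's schema.
@dataclass(frozen=True)
class AgentConfig:
    name: str
    data_source: str
    model: str

hr_onboarding = AgentConfig(name="Onboarding Agent",
                            data_source="hr_handbook",
                            model="model-a")

# Clone for the sales team: same underlying logic and model,
# new name and connected data source.
sales_onboarding = replace(hr_onboarding,
                           name="Sales Onboarding Agent",
                           data_source="sales_playbook")

print(sales_onboarding.model)  # model-a (inherited unchanged from the original)
```

Everything not explicitly overridden carries over, which is what makes cross-team reuse cheap: only the data binding changes, not the agent's behavior.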
Materials is Dot’s integrated development environment that allows users to build, edit, and manage code-driven products or applications directly within the platform.
It transforms Dot from being just an AI automation hub into a full-stack development workspace.
With Materials, users can start from a blank canvas or import existing projects, and work end-to-end without switching to an external IDE or needing additional tooling.
In Dot, Materials provides AI-assisted code generation in multiple programming languages*, as well as refactoring support for existing codebases.
Whether you’re creating a web app, scripting internal tools, or prototyping an automation pipeline, the Materials environment offers real-time editing, file structuring, and instant preview capabilities.
It also integrates tightly with Dot’s agent and workflow systems, meaning the code you develop can be connected to AI agents or turned into callable services inside orchestrated workflows. This bridges the gap between coding and orchestration, allowing developers and non-developers alike to collaborate around production-ready outputs within a unified AI-native environment.
*Live preview is currently available for JavaScript, HTML, and CSS. Support for Python is under development and will be introduced in an upcoming update.
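One way to picture code becoming a callable service inside a workflow, purely as a sketch with invented names, not Dot's mechanism, is a registry that exposes ordinary functions under string identifiers an orchestrator can call:

```python
# Hypothetical sketch: a registry exposing plain functions as named
# services that an orchestrated workflow can invoke by identifier.
SERVICES = {}

def service(name):
    def register(fn):
        SERVICES[name] = fn
        return fn
    return register

@service("clean_numbers")
def clean_numbers(values):
    # Keep only numeric entries; drop strings, None, etc.
    return [v for v in values if isinstance(v, (int, float))]

def workflow_step(service_name, payload):
    # An orchestrator only needs the service name, not the code itself.
    return SERVICES[service_name](payload)

print(workflow_step("clean_numbers", [1, "n/a", 2.5, None]))  # [1, 2.5]
```

This is the bridge the text describes: code written in the development environment stays ordinary code, while workflows address it by name.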
The Logic framework determines which model should be triggered under specific conditions, when an agent should take over a task, and how conditional flows, validations, and output-based routing are executed. This ensures that decision-making within the workflow is both deterministic and dynamic, responding to real-time inputs and results.
While similar in function to Materials, the Sources panel encompasses a broader range of external inputs, including APIs, live datasets, and integrations with platforms such as HubSpot or Notion. These allow agents to operate on up-to-date, business-specific data during execution.
Each agent’s result is automatically saved in the Collection area, enabling users to access historical outputs, compare iterations, share results, or export deliverables such as PDF reports or structured documents.
Within the Dot platform, the Logic panel functions as the operational core of orchestration. It provides users with step-by-step control over model selection, agent activation, and the routing of outputs between steps.
The Sources panel allows users to attach external systems such as Google Sheets or SAP environments.
The Collection panel offers a transparent view of all results produced during the workflow, enabling end-to-end traceability and control.
On platforms powered by large language models (LLMs), each model exhibits distinct strengths depending on the task.
For example, one model may excel at long-context reasoning while another is faster and cheaper for routine classification.
Because of this, using a single model for all tasks can limit performance. Task-specific model selection is essential to achieving optimal outcomes.
Multi-model support allows for both manual and automatic model switching, providing flexibility not only in performance optimization but also in deployment strategy and data governance.
Organizations may choose to route certain tasks through on-premise models rather than cloud-based providers like OpenAI to meet compliance or data residency requirements.
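A compliance-driven router of this kind can be sketched in a few lines; the tags and endpoint names below are invented for illustration and are not part of Dot:

```python
# Hypothetical policy router: tasks touching sensitive data stay on an
# on-premise model; everything else may use a cloud-hosted provider.
SENSITIVE_TAGS = {"pii", "financial", "health"}

def select_endpoint(task_tags: set) -> str:
    if task_tags & SENSITIVE_TAGS:        # any overlap with sensitive categories
        return "on-prem-model"            # data never leaves the organization
    return "cloud-model"                  # e.g. a hosted provider

print(select_endpoint({"pii", "summary"}))  # on-prem-model
print(select_endpoint({"marketing"}))       # cloud-model
```

The routing decision itself is trivial; the value is that it is declared as policy, so data-residency rules are enforced before any model is called.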
Dot currently supports a wide range of models including GPT-4, Claude 3.5, Gemini, DeepSeek, and Mistral.
Users have several options for model configuration, such as assigning a model manually per agent or letting the platform switch models automatically based on the task.
What truly differentiates Dot is its integrated approach to agent and model pairing. The system not only allows switching between models but also aligns each task within a workflow to the model best suited for that step. This enables mixed-model orchestration across multi-step processes, where different models operate within the same flow to maximize precision, speed, and contextual understanding.
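Mixed-model orchestration reduces to pairing each workflow step with the model assumed to suit it; the sketch below uses invented step and model names and stands in for real model calls:

```python
# Hypothetical mixed-model workflow: each step is paired with the model
# assumed to fit it best. Step and model names are illustrative only.
STEP_MODELS = [
    ("extract_data", "fast-model"),       # cheap, high-throughput step
    ("analyze",      "reasoning-model"),  # needs deeper reasoning
    ("summarize",    "concise-model"),    # optimized for short output
]

def run(step: str, model: str, text: str) -> str:
    return f"{text} -> {step}@{model}"    # stand-in for a real model call

def run_workflow(text: str) -> str:
    for step, model in STEP_MODELS:       # different models, one flow
        text = run(step, model, text)
    return text

print(run_workflow("raw"))
```

Each step's output feeds the next model in the chain, which is exactly the multi-step, multi-model flow the text describes.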