No‑Code & Low‑Code AI in Dot: Setting up AI Solutions Without Coding

Access Dot & Pick the Right Workspace

  1. Sign in or sign up using your Email, Google, GitHub, or LinkedIn account (see: Create an Account & Log In).
  2. Choose a mode from the toggle beneath the chat box:
  • Simplified Mode – natural-language chat for quick answers, file Q&A, and summaries.
  • Focused Mode – visual agent builder and multi-step automations (required for no-code workflows).

Create an AI Agent (No Code)

  1. Switch to Focused Mode.
  2. Click Create Agent below the chat pane.
  3. In the dialog, enter:
    • Agent name
    • Description/purpose
    • Expected inputs
  4. Confirm. Your agent will appear under My Agents and can be triggered with @agent-name.

👉 Tip: Agents encapsulate LLM calls plus API connections and business logic, giving more power than a single chatbot.
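To make that concrete, here is a purely illustrative Python sketch of the idea, not Dot’s implementation; the call_llm and fetch_orders helpers are hypothetical placeholders standing in for an LLM call and an external API connection.

```python
# Illustrative sketch of what an "agent" bundles together: an LLM call,
# an external API connection, and business logic. All helpers here are
# hypothetical placeholders, not part of Dot's actual SDK.

def call_llm(prompt: str) -> str:
    """Stand-in for a call to whichever LLM the agent is configured to use."""
    return f"[LLM answer for: {prompt}]"

def fetch_orders(customer_id: str) -> list[dict]:
    """Stand-in for an API connection, e.g. a CRM or order database."""
    return [{"id": "A-1001", "status": "shipped"}]

def order_status_agent(customer_id: str, question: str) -> str:
    """Business logic: fetch data, then let the LLM phrase the answer."""
    orders = fetch_orders(customer_id)
    context = "\n".join(f"{o['id']}: {o['status']}" for o in orders)
    return call_llm(f"Orders:\n{context}\n\nQuestion: {question}")

print(order_status_agent("C-42", "Where is my last order?"))
```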

Add Private Knowledge (No Coding Required)

  1. Open the Source tab in the right-side panel.
  2. Upload PDFs, Word, TXT, or other documents.
  3. Ask questions in the chat or to your agent; Dot automatically performs Retrieval-Augmented Generation (RAG) over your uploaded sources (a conceptual sketch follows this list).
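For the curious, the sketch below illustrates the basic retrieve-then-generate pattern behind RAG: rank your text chunks against the question and prepend the best matches to the prompt. It uses TF-IDF retrieval purely for illustration; Dot’s actual chunking, embedding, and retrieval pipeline is handled for you and is not described here.

```python
# Minimal retrieve-then-generate (RAG) illustration using TF-IDF retrieval.
# This only mirrors the concept; Dot's real pipeline (chunking, embeddings,
# vector search) runs automatically behind the chat.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "Refunds are processed within 14 days of the return request.",
    "Enterprise plans include on-premise deployment options.",
    "Support is available 24/7 via chat and email.",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    vec = TfidfVectorizer().fit(chunks + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(chunks))[0]
    top = sims.argsort()[::-1][:k]
    return [chunks[i] for i in top]

def answer(question: str) -> str:
    context = "\n".join(retrieve(question))
    # A real system would send this prompt to an LLM; here we just return it.
    return f"Context:\n{context}\n\nQuestion: {question}"

print(answer("How long do refunds take?"))
```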

Build End-to-End Workflows Visually

  1. In Focused Mode, go to Hub → Workflows.
  3. Drag existing agents from Novus Agents or My Agents onto the canvas.
  4. Arrange them in sequence; Dot handles orchestration, multi-model routing, and data passing between steps (see the sketch after this list).
  4. Save – the workflow is now available to any teammate with appropriate permissions.
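Conceptually, a workflow is an ordered chain of agents in which each step’s output becomes the next step’s input. The sketch below is a simplified illustration of that data passing (the agent functions are hypothetical), not Dot’s orchestration engine.

```python
# Simplified illustration of workflow orchestration: run agents in sequence
# and pass each result to the next step. Dot handles this (plus multi-model
# routing) for you; the agents here are hypothetical stand-ins.
from typing import Callable

def summarize_agent(text: str) -> str:
    return f"Summary of: {text[:40]}..."

def translate_agent(text: str) -> str:
    return f"[DE] {text}"

def run_workflow(steps: list[Callable[[str], str]], payload: str) -> str:
    for step in steps:
        payload = step(payload)  # output of one agent feeds the next
    return payload

result = run_workflow(
    [summarize_agent, translate_agent],
    "Quarterly report: revenue grew 12% driven by new enterprise deals.",
)
print(result)
```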

Low-Code Extensions (Optional)

  1. Use Dot’s API hooks to trigger agents from your own app and send or receive JSON payloads (an illustrative example follows this list).
  2. Embed an agent in a company UI while keeping all data in-house thanks to on-premise or hybrid deployment.
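A hedged sketch of what such an integration could look like is shown below; the endpoint URL, header names, and payload fields are assumptions made for the example, not Dot’s documented API, so check your workspace or the Novus team for the actual contract.

```python
# Hypothetical example of triggering an agent over HTTP with a JSON payload.
# The URL, auth header, and payload schema are placeholders (assumptions),
# not Dot's documented API.
import requests

DOT_API_URL = "https://dot.example.com/api/agents/invoice-checker/run"  # placeholder
API_KEY = "YOUR_API_KEY"  # issued by your Dot workspace admin (assumption)

payload = {
    "input": "Validate invoice INV-2024-001 against the purchase order.",
    "metadata": {"requested_by": "erp-system"},
}

response = requests.post(
    DOT_API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # JSON result returned by the agent
```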

Best Practices Checklist

✓ Start small – prototype with one agent, then chain more.

✓ Reuse Novus Originals before building from scratch.

✓ Maintain data control – work in a cloud, on-premise, or hybrid environment.

Fine-Tuning & Customization in Dot

1. Start with Lightweight Customization (Recommended Default)

👉 Tip: Dot recommends RAG, agent, orchestration, and workflow customization first; fine-tuning is suggested only if these do not reach the desired outcome.

2. When to Fine-Tune

Choose model fine-tuning only if the steps above cannot meet the required accuracy, brand-tone, or compliance thresholds.

3. Fine-Tuning Options Supported in Dot

Dot’s fine-tuning workflow (described below) supports the parameter-efficient LoRA method as well as preference- and reinforcement-based approaches (DPO, PPO, RL).

👉 Tip: All fine-tuning can be performed on open-source models and still deployed on-premise, so your data never leaves your environment.

4. Fine-Tuning Workflow in Dot (Under Development)

  1. Select a base model – Use any connected open-source LLM (e.g., Mistral or similar) from the Model Picker.
  2. Prepare training data – Upload domain examples in the Source panel (an illustrative data-preparation example follows this list).
  3. Choose a fine-tuning method – In the fine-tuning dialog, pick LoRA or DPO/PPO/RL as required (a brief LoRA sketch also follows this list).
  4. Run & monitor – Dot shows progress in Logic; training artifacts are stored securely on-prem or in your chosen environment.
  5. Validate – Test the tuned model in Simplified Mode or inside an agent flow.
  6. Deploy – Because Dot supports cloud, on-premise, and hybrid, you can serve the tuned model where compliance policies allow.
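Since this guide does not specify the exact training-data format Dot expects, the snippet below only illustrates one common way to prepare domain examples: prompt/completion pairs written to a JSONL file. Treat the field names as assumptions.

```python
# Illustrative only: write domain examples as prompt/completion pairs in JSONL.
# The field names are a common convention, not Dot's documented format.
import json

examples = [
    {"prompt": "Summarize this support ticket: ...", "completion": "Customer reports ..."},
    {"prompt": "Classify the request category: ...", "completion": "billing"},
]

with open("training_examples.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```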
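As background on why the checklist below recommends LoRA for quick iterations: instead of updating all model weights, LoRA freezes the base weight matrix and trains two small low-rank matrices whose product is added to its output. The PyTorch sketch below shows the idea on a single linear layer; it is not Dot’s training code.

```python
# Conceptual LoRA sketch: the base weight W is frozen; only the low-rank
# matrices A and B are trained, and their scaled product is added to W's
# output. One linear layer only, not Dot's training pipeline.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze base weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path + trainable low-rank update (B @ A), scaled by alpha/r.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # far fewer trainable parameters than full fine-tuning
```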

Best Practice Checklist

✓ Start with RAG and agents — quicker and more cost-efficient.

✓ Use LoRA when hardware is limited or you need quick iterations.

✓ Apply DPO/PPO/RL only for advanced alignment needs.

✓ Keep experiments isolated by using New Chat for each test run.

✓ Monitor token usage in Dot’s built-in tracking to stay on budget.

✓ Maintain data control with on-prem or hybrid deployment.
