Overview

The Chat feature brings together AI language models and MCP tools for intelligent, interactive conversations. The LLM automatically discovers and calls MCP tools to answer questions, perform actions, and solve problems.


Key Capabilities

🤖 Streaming Responses

Real-time token-by-token output from AI models

🔧 Automatic Tool Calling

AI decides when and how to use MCP tools

📊 Token Usage Tracking

Monitor input/output/total tokens per message

🏷️ Per-Message Model Labels

Track which LLM model was used for each turn

📋 Copy Messages

One-click clipboard copy for any message

⌨️ Message History Navigation

Use Up/Down arrows to recall previous messages

💬 Prompt Picker

Type /prompt to select from MCP server prompts

🔒 Sensitive Data Protection

Automatic encryption and redaction of secrets

📄 MCP Tool Parameters Viewer

Inspect JSON parameters passed to tools

📤 Export Chat History

Save conversations as formatted markdown reports


Getting Started

Step 1: Configure an AI Model

Before chatting, configure at least one LLM model.

  1. Navigate to ⚙️ AI Models tab
  2. Click Add Model
  3. Fill in model details (see AI Models Guide)
  4. Click Save

📸 Screenshot needed: chat-prerequisite-ai-model.png Description: Show the AI Models configuration page with at least one model configured


Step 2: Select MCP Connections

Choose which MCP servers the AI can access.

  1. Navigate to 🤖 Chat tab
  2. Find the Connections section
  3. Check the boxes for connections you want to enable
  4. Connections open automatically when checked

📸 Screenshot needed: chat-connections-selection.png Description: Show the Chat page connections section with multiple connections listed and some checkboxes checked

info: Lazy Connections: Connections are only opened when you check their checkbox, keeping unused servers disconnected.


Step 3: Select AI Model

Choose which LLM to use for the conversation.

  1. Find the Model dropdown at the top
  2. Select your configured AI model
  3. The model is now ready for chat

📸 Screenshot needed: chat-model-selection.png Description: Show the model dropdown with multiple AI models available (OpenAI, Azure, etc.)


Basic Chat

Send a Message

  1. Type your message in the input box at the bottom
  2. Press Enter or click Send
  3. Watch the “Thinking…” timer while waiting
  4. See the streamed response appear token-by-token

Message flow:

User sends → AI thinks → AI responds (or calls tools) → User sees answer

📸 Screenshot needed: chat-basic-conversation.png Description: Show a simple chat conversation with 2-3 exchanges between user and assistant


Thinking Timer

While waiting for the AI’s first tokens, a timer displays:

Thinking... 00:03

Format: mm:ss (minutes:seconds)

This helps you see when complex requests or tool calls are taking longer than expected.

📸 Screenshot needed: chat-thinking-timer.png Description: Show the chat interface with the “Thinking…” timer visible


Token Usage Badges

Each assistant message displays token counts:

Example badge:

📊 Input: 150 | Output: 320 | Total: 470 tokens

Use cases:

  • Monitor API costs
  • Optimize prompt efficiency
  • Track context window usage
  • Debug large conversations

📸 Screenshot needed: chat-token-usage-badge.png Description: Show an assistant message with the token usage badge displayed next to it
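
To turn the badge numbers into an approximate cost, multiply each count by your provider's per-token price. A quick worked sketch using the badge above and hypothetical prices (substitute your provider's real rates):

# Hypothetical per-million-token prices; check your provider's actual pricing.
INPUT_PRICE_PER_M = 2.50     # USD per 1M input tokens (placeholder)
OUTPUT_PRICE_PER_M = 10.00   # USD per 1M output tokens (placeholder)

input_tokens, output_tokens = 150, 320   # numbers from the badge above
cost = (input_tokens * INPUT_PRICE_PER_M + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000
print(f"Estimated cost: ${cost:.6f}")    # about $0.0036 for this exchange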


Tool Calling

Automatic Tool Discovery

When you enable MCP connections, the AI automatically:

  1. Receives the list of available tools
  2. Understands tool descriptions and parameters
  3. Decides when tools are needed
  4. Calls tools with appropriate arguments
  5. Reads tool responses
  6. Synthesizes the final answer

You don’t need to do anything—the AI handles it all!
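
Conceptually, this is the standard function-calling loop: the model either answers or requests a tool, and each tool result is fed back in until a final answer is produced. A minimal sketch of that loop, with hypothetical llm and mcp_session client objects (the names are illustrative, not the app's internals):

# Illustrative function-calling loop; `llm` and `mcp_session` are hypothetical clients.
def chat_turn(llm, mcp_session, messages, tools):
    """Run one user turn: ask the LLM, dispatch tool calls to MCP, repeat until done."""
    while True:
        reply = llm.complete(messages=messages, tools=tools)   # hypothetical client call
        messages.append(reply.as_message())                    # keep the assistant turn in context
        if not reply.tool_calls:
            return reply.text                                   # final answer shown in the chat
        for call in reply.tool_calls:
            result = mcp_session.call_tool(call.name, call.arguments)  # MCP tools/call
            messages.append({"role": "tool", "tool_call_id": call.id,
                             "name": call.name, "content": result.text})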


Viewing Tool Calls

When the AI calls a tool, you’ll see:

  • Tool call message showing which tool was invoked
  • Model badge indicating which LLM made the call
  • JSON icon button to view parameters (when available)

📸 Screenshot needed: chat-tool-call-message.png Description: Show a tool call message in the chat, highlighting the tool name and the JSON icon button


MCP Tool Parameters Viewer

Click the JSON icon on a tool call to view the parameters passed to the MCP tool.

Features:

  • Color-coded JSON (keys, strings, numbers, booleans, null)
  • Sensitive data protection (masked by default)
  • Per-field show/hide toggles (👁️ icons)
  • Syntax highlighting
  • Collapsible nested structures

Sensitive fields (field names containing key, password, token, or secret) are:

  • Encrypted at rest
  • Masked in the UI by default
  • Revealable per-field with eye icon toggles

📸 Screenshot needed: chat-tool-parameters-viewer.png Description: Show the expanded JSON parameters viewer with color-coded JSON, including at least one sensitive field with the eye icon toggle

warning: Privacy Note: Sensitive fields remain encrypted. Click the eye icon (👁️) to temporarily reveal values. Refresh the page to reset all reveals.
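
As an illustration of how this kind of masking typically works, a field is treated as sensitive when its name contains a protected keyword, and its value is replaced before display. A minimal sketch (keyword list and function name are illustrative, not the app's internals):

SENSITIVE_KEYWORDS = ("key", "password", "token", "secret")

def mask_sensitive(obj):
    """Return a copy of a JSON-like value with sensitive fields masked."""
    if isinstance(obj, dict):
        return {k: ("●●●●●●●●" if any(w in k.lower() for w in SENSITIVE_KEYWORDS)
                    else mask_sensitive(v))
                for k, v in obj.items()}
    if isinstance(obj, list):
        return [mask_sensitive(item) for item in obj]
    return obj

params = {"location": "San Francisco", "api_key": "abc123xyz456"}
print(mask_sensitive(params))  # {'location': 'San Francisco', 'api_key': '●●●●●●●●'}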


Tool Call Example

User asks:

“What’s the weather in San Francisco?”

AI flow:

  1. AI receives message
  2. AI identifies it needs weather data
  3. AI calls get_weather tool with location: "San Francisco"
  4. Tool returns weather data
  5. AI reads the response
  6. AI answers: “The weather in San Francisco is 65°F and sunny.”

You see:

  • Your message
  • Tool call: get_weather
  • AI’s final answer with weather info
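
On the wire, the tool call is an MCP tools/call request. A representative payload for this example, expressed as a Python dict (the tool name and arguments come from the scenario above; the id is arbitrary):

import json

# Representative MCP tools/call request for the weather example.
request = {
    "jsonrpc": "2.0",
    "id": 42,
    "method": "tools/call",
    "params": {
        "name": "get_weather",
        "arguments": {"location": "San Francisco"},
    },
}
print(json.dumps(request, indent=2))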

Advanced Features

Message History Navigation

Quickly recall previous messages without retyping.

How to use:

  1. Click in the chat input box
  2. Press Up Arrow (↑) to cycle backward through sent messages
  3. Press Down Arrow (↓) to cycle forward
  4. Press Enter to send the recalled message (or edit first)

Benefits:

  • Retry failed requests
  • Test variations of prompts
  • Quickly reuse complex queries

Prompt Picker (Slash Command)

Access MCP server prompts directly from chat with /prompt.

How to use:

Basic usage:

  1. Type /prompt in the chat input
  2. A modal opens showing all prompts from selected connections
  3. Select a prompt
  4. Fill in any required arguments
  5. Prompt text replaces /prompt in the input
  6. Press Enter to send

Filtered usage:

  1. Type /prompt weather in the chat input
  2. Modal opens with search pre-filtered to “weather”
  3. Select matching prompt
  4. Continue as above

📸 Screenshot needed: chat-prompt-picker-modal.png Description: Show the prompt picker modal with multiple prompts listed, search box at top, and parameter fields for the selected prompt
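
Behind the picker, prompts come from the MCP prompts API: the client lists them with prompts/list and resolves the one you select (with its arguments) via prompts/get. A sketch of that resolve request as a Python dict (the prompt name and argument values are illustrative):

import json

# Illustrative MCP prompts/get request sent when a prompt is selected.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "prompts/get",
    "params": {
        "name": "weather_report",          # hypothetical prompt name
        "arguments": {"city": "Seattle"},  # values you filled in the modal
    },
}
print(json.dumps(request, indent=2))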


Per-Message Copy

Copy any message to clipboard for use elsewhere.

How to copy:

  1. Hover over a user or assistant message
  2. Click the 📋 icon in the message header
  3. Icon briefly changes to confirm the copy
  4. Message text is in your clipboard

Use cases:

  • Save important responses
  • Share AI answers with team members
  • Document conversations
  • Copy for further analysis

📸 Screenshot needed: chat-copy-message-icon.png Description: Show a message with the copy icon (📋) visible in the header


Per-Message Model Labels

Each message shows which AI model was used.

Display:

  • Model badge next to user/assistant role
  • Example: gpt-4o, claude-3-5-sonnet-20241022, GPT-5

Benefits:

  • Track multi-model conversations
  • Identify which model produced which response
  • Debug model-specific behavior
  • Maintain clarity when switching models mid-session

📸 Screenshot needed: chat-model-badges.png Description: Show a chat conversation with different model badges visible on various messages


Sensitive Data Protection

Chat messages are automatically scanned for sensitive information.

Detection methods:

  1. Regex pattern matching (default): Keyword-based detection
  2. Heuristic scanning: Token composition analysis
  3. AI detection (optional): Context-aware identification (must be enabled)

Protected keywords:

  • password, secret, token, key, apikey, api_key
  • Custom keywords from your configuration

Visual treatment:

  • Inline badge: [●●●●●●●● 👁️]
  • Per-value show/hide toggles
  • AES-256 encrypted storage

Example:

User types:

“Set the api_key to abc123xyz456”

Displayed as:

“Set the api_key to [●●●●●●●● 👁️]”

📸 Screenshot needed: chat-sensitive-data-redaction.png Description: Show a user message with sensitive data redacted as badges, including the eye icon for toggling visibility

info: Learn More: See the Sensitive Data Protection Guide for detailed configuration options.
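
For the default regex method, detection amounts to matching a protected keyword followed by a candidate value and masking the value. A minimal sketch of that idea (the app's exact patterns may differ):

import re

# Illustrative keyword-based detection, not the app's exact patterns.
SENSITIVE_PATTERN = re.compile(
    r"\b(password|secret|token|key|apikey|api_key)\b(\s*(?:to|is|[:=])\s*)(\S+)",
    re.IGNORECASE,
)

def redact(text):
    """Replace detected sensitive values with a masked badge."""
    return SENSITIVE_PATTERN.sub(r"\1\2[●●●●●●●●]", text)

print(redact("Set the api_key to abc123xyz456"))
# -> "Set the api_key to [●●●●●●●●]"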


Export Chat History

Save conversations as formatted markdown reports.

How to export:

  1. Complete your chat conversation
  2. Click the Export button (or similar control)
  3. Markdown file is generated and downloaded

Report includes:

  • All user messages
  • All assistant responses
  • Tool calls with timestamps
  • Token usage summaries
  • Model information

Use cases:

  • Document testing sessions
  • Share results with team
  • Archive important conversations
  • Generate reports for stakeholders
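
Because the export is plain markdown, it can also be rebuilt or post-processed programmatically. A minimal sketch of assembling such a report from message records (the field names and layout are assumptions, not the app's actual export format):

from datetime import datetime, timezone

# Illustrative markdown export builder; message fields are assumptions.
messages = [
    {"role": "user", "model": None, "content": "What is MCP?", "tokens": None},
    {"role": "assistant", "model": "gpt-4o",
     "content": "Model Context Protocol (MCP) is a standardized protocol for...",
     "tokens": {"input": 150, "output": 320, "total": 470}},
]

lines = [f"# Chat Export ({datetime.now(timezone.utc):%Y-%m-%d %H:%M} UTC)", ""]
for msg in messages:
    label = msg["role"].capitalize() + (f" ({msg['model']})" if msg["model"] else "")
    lines.append(f"## {label}")
    lines.append(msg["content"])
    if msg["tokens"]:
        t = msg["tokens"]
        lines.append(f"_Tokens: in {t['input']} / out {t['output']} / total {t['total']}_")
    lines.append("")

print("\n".join(lines))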

Chat Input Features

Always at Bottom

The chat input box stays fixed at the bottom of the screen for easy access, even as messages scroll.

Multi-line Input

Press Shift + Enter to add new lines without sending.

Auto-focus

Input box is automatically focused when you:

  • Open the Chat tab
  • Send a message
  • Dismiss a modal

Common Workflows

Quick Question Answering

  1. Type a simple question
  2. AI answers directly (no tool calls needed)
  3. Continue with follow-up questions

Example:

User: “What is MCP?”
AI: “Model Context Protocol (MCP) is a standardized protocol for…”


Tool-Assisted Research

  1. Ask a question requiring external data
  2. AI calls appropriate tool(s)
  3. AI synthesizes tool results
  4. You get a comprehensive answer

Example:

User: “Get the latest stock price for AAPL”
AI: [calls get_stock_price tool]
AI: “Apple (AAPL) is currently trading at $185.32…”


Multi-Step Problem Solving

  1. Ask a complex question
  2. AI breaks it into steps
  3. AI calls multiple tools sequentially
  4. AI combines results
  5. You get a complete solution

Example:

User: “Find recent GitHub issues for my repo and summarize the top 3”
AI: [calls list_issues tool]
AI: [reads issue details]
AI: “Here are the top 3 issues: 1. Bug in authentication…”


Using Prompts in Chat

  1. Type /prompt to browse prompts
  2. Select a pre-configured prompt
  3. Fill in any arguments
  4. Send to AI for immediate execution

Example:

User: /prompt code_review
Modal: Select “code_review” prompt, enter code
Input: [prompt text inserted]
AI: “Here’s my code review…”


Troubleshooting

AI Not Calling Tools

Problem: AI responds without using available tools

Solutions:

  1. Ensure connections are checked and connected
  2. Verify tools are actually available (check Tools tab)
  3. Be more explicit: “Use the get_weather tool to…”
  4. Check that the model supports function calling (older models may not)

Streaming Stops Mid-Response

Problem: Response appears to freeze or cut off

Solutions:

  1. Check network connection
  2. Verify API key is valid and has credits
  3. Check for rate limiting errors in console
  4. Try a different model
  5. Reduce conversation length (context window limits)

Sensitive Data Not Detected

Problem: Passwords/keys shown in plain text

Solutions:

  1. Use recognized keywords (password, key, token, secret)
  2. Enable AI detection in Sensitive Fields settings
  3. Add custom keywords to Additional Sensitive Fields
  4. Use quotes around sensitive values for better detection

Tool Parameters Not Showing

Problem: JSON icon click doesn’t show parameters

Solutions:

  1. Ensure tool call actually completed
  2. Check browser console for errors
  3. Verify the tool call returned parameter data
  4. Refresh page and retry

Tips & Best Practices

🎯 Be Specific

The more specific your request, the better the AI can select and use tools.

🔗 Enable Relevant Connections

Check only the connections whose tools you need for the conversation; this reduces noise.

💰 Monitor Token Usage

Watch the token counts to manage API costs, especially with long conversations.

🔄 Use Message History

Press Up Arrow to quickly retry or adjust previous prompts.

📋 Export Important Chats

Save conversations that contain valuable insights or test results.

🔒 Review Sensitive Data

Check what’s being redacted to ensure proper protection.

🛠️ Test Tools First

Execute tools manually in the Tools tab before relying on AI to use them.


Next Steps