Agent Testing

Overview

The Test Me tab within an AI Agent's configuration provides a chat simulation where you can interact with your agent before deploying it to real conversations. You can send messages, view the agent's responses, and inspect detailed logs to understand how the agent processes each request. This is essential for verifying that your agent's instructions, knowledge base, and tools are working as expected.

How to Access

  • Navigate to Tools > Aurora AI > AI Agents and click the agent you want to test
  • Select the Test Me tab (fourth tab)
  • Required role: Chatbot Admin

Interface Overview

The Test Me tab is divided into two panels:

  • Chat panel (left) - A simulated chat window where you type messages and see the agent's responses in real time
  • Logs panel (right) - A detailed log showing the agent's internal processing, including tool calls, knowledge base lookups, and decision steps

The chat panel includes a message input field at the bottom and a Restart button to clear the conversation and start fresh.

Features & Actions

Send a Test Message

What it does: Sends a message to the agent and displays the response in the chat window, just as a real customer would see it.

Steps:

  1. Type your message in the input field at the bottom of the chat panel
  2. Press Enter or click the send button
  3. Wait for the agent's response to appear in the chat
  4. View the corresponding logs in the right panel

Important notes:

  • The agent responds using all its configured instructions, knowledge base, and active tools
  • Responses may include text, images, files, audio, or video depending on the agent's capabilities
  • Each message exchange generates detailed logs you can review

Review Agent Logs

What it does: Shows the internal processing steps the agent took to generate each response, helping you understand and debug its behavior.

Steps:

  1. Send a message to the agent
  2. Look at the Logs panel on the right side
  3. Review the processing steps, which may include (see the illustrative example after this list):
    • Knowledge base searches performed
    • Tools invoked (scheduling, funnel routing, etc.)
    • Reasoning and decision steps
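
The exact wording and layout of log entries depends on your agent's configuration; the sketch below is purely illustrative and does not show the actual log format. A single exchange that triggers a knowledge base lookup and the scheduling tool might produce entries along these lines:

  • Knowledge base search for "business hours" (matching articles retrieved)
  • Tool invoked: scheduling (available time slots requested)
  • Decision step: reply with the available times and ask the customer to confirm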

Restart the Conversation

What it does: Clears the entire chat history and starts a fresh conversation with the agent.

Steps:

  1. Click the Restart button at the top of the chat panel
  2. The chat history is cleared
  3. Begin a new conversation by typing a message

Important notes:

  • Restarting does not change the agent's configuration — it only clears the test conversation
  • Use this to test different conversation scenarios from scratch

Important Notes

  • Test conversations do not affect real customers or live channels
  • The test console uses the same AI engine and configuration as live conversations
  • Logs are generated in real time as the agent processes each message
  • Media messages (images, files, video, audio) are rendered inline in the chat panel
  • The test conversation state is maintained until you restart or leave the page

FAQ

Q: Does testing count toward my usage or message limits?
A: Test conversations use the AI engine, so they may count toward AI processing usage. They do not count as customer messages.

Q: Why is the agent not using my latest changes?
A: Make sure you saved your changes in the Instructions, Knowledge Base, and Options tabs before testing. Unsaved changes are not reflected in the test console.

Q: Can I test specific tools like scheduling?
A: Yes. If a tool is active, you can trigger it during testing by asking relevant questions (e.g., "I'd like to schedule an appointment"). The logs panel will show when tools are invoked.

Q: Why do I see media content in responses?
A: The agent can send images, files, and other media if configured to do so. The test console renders all media types just as they would appear to a real customer.