Welcome to the LLM Chat Interface! This powerful tool allows you to have intelligent conversations with AI. Simply type your message in the input box at the bottom and press Enter or click the send button.
VerseAI
Drag nodes from the left panel onto the canvas, then connect them together.
Learn how to automate tasks by building visual workflows that integrate with Verse AI chat, your Versonas, and external services.
The Workflow Builder lets you create automated pipelines visually. Each workflow is a graph of nodes connected by wires. Data flows from left to right — starting at a Trigger node through Action, Logic, and Connector nodes, ending at an Output node.
Every workflow needs at least one Trigger node. If you start with an empty canvas, click the "+ Add Trigger Node" button to get started quickly.
Canvas shortcuts:
- Ctrl+A to select all nodes
- Delete or Backspace to delete selected nodes
- Ctrl+C / Ctrl+V to duplicate nodes
- Ctrl+Z / Ctrl+Y to undo / redo

When a node finishes executing, its output object is passed to the next connected node. You access that output using variable syntax: {{input.path.to.field}}.
Every node wraps its output inside a data key. So the most common pattern is:
{{input.data}} — the entire output object
{{input.data.fieldName}} — a specific field inside the output
Variables use double curly braces and always start with input:
{{input.data.text}} — a string field
{{input.data.rows}} — an array field
{{input.data.count}} — a number field
{{input.conditionMet}} — a top-level boolean (IF node only)
{{input.triggered}} — a top-level boolean (triggers)
You can use variables in any text field across nodes — email bodies, LLM prompts, HTTP URLs, SQL parameters, notification messages, and more.
Use this table to know exactly what variables are available after each node.
| Node | Output Shape | How to Access |
|---|---|---|
| Manual Trigger | { data: {}, triggered: true } | {{input.data}} (empty object) |
| Text Input | { data: { text: "..." } } | {{input.data.text}} |
| Webhook | { data: { ...request body } } | {{input.data.fieldName}} |
| Schedule | { data: {}, triggered: true } | {{input.data}} (empty object) |
| Prompt Match | { data: { prompt: "..." } } | {{input.data.prompt}} |
| Connector Hook | { data: { ...event payload } } | {{input.data.fieldName}} |
| App Event | { data: { ...event payload } } | {{input.data.fieldName}} |
| AI / LLM Call | { data: { response: "...", model: "..." } } | {{input.data.response}} |
| HTTP Request | { data: { ...API response } } | {{input.data.fieldName}} |
| Send Email | { data: { sent: true, to: "..." } } | {{input.data.sent}} |
| Database Query | { data: { rows: [...], rowCount: N } } | {{input.data.rows}}, {{input.data.rowCount}} |
| Transform Data | { data: { ...your return value } } | {{input.data.yourField}} |
| Delay / Wait | { data: { ...pass-through } } | Same as input — data passes through unchanged |
| Send Notification | { data: { sent: true, channel: "..." } } | {{input.data.sent}} |
| IF Condition | { data: { ...pass-through }, conditionMet: true/false } | {{input.data.text}} (same as input), {{input.conditionMet}} |
| Switch | { data: { ...pass-through } } | Same as input — routed to the matching case output |
| Loop / Iterate | { data: { ...current item }, index: N } | {{input.data.fieldName}}, {{input.index}} |
| Code / Script | { data: { ...your return value } } | {{input.data.yourField}} |
| Connectors (Slack, Gmail, Discord, etc.) | { data: { ...API response } } | {{input.data.fieldName}} |
| Agent Call | { data: { result: "...", steps: N } } | {{input.data.result}} |
| Log / Debug | { data: { ...pass-through } } | Same as input — data passes through unchanged |
| Debug Screen | { data: { ...pass-through } } | Same as input — data passes through unchanged |
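Reading the table in practice: a node's entire output object becomes the next node's input. Here is a small illustration using the Database Query shape from the table (the row contents are sample data):

```javascript
// Sample Database Query output, using the { data: { rows, rowCount } }
// shape from the table above.
const databaseQueryOutput = {
  data: { rows: [{ id: 1 }, { id: 2 }], rowCount: 2 },
};

// The next connected node receives the whole object as `input`.
const input = databaseQueryOutput;
const rows = input.data.rows;         // what {{input.data.rows}} resolves to
const rowCount = input.data.rowCount; // what {{input.data.rowCount}} resolves to
```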
Variables work in any text or textarea field in the properties panel. Common examples:
| Node Field | Example Value | What It Does |
|---|---|---|
| Email Body | Hi! The AI said: {{input.data.response}} | Inserts the LLM response into the email |
| LLM User Prompt | Summarize this: {{input.data.text}} | Passes user text to the LLM |
| HTTP URL | https://api.example.com/users/{{input.data.userId}} | Builds a dynamic API URL |
| SQL Parameters | {"id": "{{input.data.userId}}"} | Passes dynamic values to a SQL query |
| Slack Message | New order from {{input.data.name}} | Sends a dynamic Slack message |
| IF Condition Field | {{input.data.status}} | Evaluates the status field from the previous node |
| IF Condition Value | {{input.data.expected}} | Compares against another dynamic value |
| Notification Message | Alert: {{input.data.response}} | Sends a dynamic notification |
Here’s how to build a common debug workflow:
1. Add a Text Input node and type hello in the text box on the node.
2. Add an IF Condition node. Set the field to {{input.data.text}} and the operator to is not empty. This checks if the text input had any text. It will be True because you typed "hello".
3. On the True path, add a node that uses the value, for example a notification with the message: The user said: {{input.data.text}}
Common mistakes:
- Using {{input.text}} instead of {{input.data.text}}. The output is always wrapped in data, so you need the extra level.
- Setting the IF field to {{input.data.text}}, operator to equals, value to true. This does a string comparison — it checks if the text literally says "true". If you just want to check the text exists, use is not empty instead.
- Omitting the braces. In an email Body or LLM prompt you must write {{input.data.text}} with the double curly braces. Without them, the literal text "input.data.text" appears instead of the value.
Variables must always start with input. The input prefix refers to "the data coming in from the previous node."
The IF node has two fields that accept variables:
- Field: the value to test, usually a variable such as {{input.data.text}}
- Value: a literal (e.g. hello) or another variable: {{input.data.expected}}

The operator determines how the comparison works:
| Operator | What it does | Example |
|---|---|---|
| is not empty | True if the field has any value | Field: {{input.data.text}} — true if text was entered |
| is empty | True if the field is blank or missing | Field: {{input.data.email}} — true if no email |
| equals | Exact text match (case-sensitive) | Field: {{input.data.status}}, Value: active |
| not equals | True if the values differ | Field: {{input.data.role}}, Value: admin |
| contains | True if the text includes the value | Field: {{input.data.text}}, Value: urgent |
| greater than | Numeric comparison (field > value) | Field: {{input.data.rowCount}}, Value: 0 |
| less than | Numeric comparison (field < value) | Field: {{input.data.score}}, Value: 50 |
| regex match | Regular expression test | Field: {{input.data.email}}, Value: @gmail\.com$ |
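The operator semantics above can be sketched in JavaScript. This is a hedged approximation of the comparison rules, not the builder's real internals; the function name evaluate is an assumption:

```javascript
// Approximate IF-operator semantics, inferred from the table above.
function evaluate(operator, field, value) {
  switch (operator) {
    case 'is not empty': return field != null && String(field) !== '';
    case 'is empty':     return field == null || String(field) === '';
    case 'equals':       return String(field) === String(value); // case-sensitive
    case 'not equals':   return String(field) !== String(value);
    case 'contains':     return String(field).includes(String(value));
    case 'greater than': return Number(field) > Number(value);   // numeric
    case 'less than':    return Number(field) < Number(value);   // numeric
    case 'regex match':  return new RegExp(value).test(String(field));
    default: throw new Error('Unknown operator: ' + operator);
  }
}

evaluate('contains', 'this is urgent', 'urgent'); // → true
evaluate('equals', 'Active', 'active');           // → false (case-sensitive)
```

Note the numeric operators coerce both sides to numbers, which is why "10" greater than "2" is true even though a plain string comparison would say otherwise.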
If you’re unsure what data a node is actually sending, drop a Debug Screen node between two nodes. It displays the full JSON payload on the canvas and passes the data through unchanged.
Most node outputs are read with {{input.data.fieldName}} variables. Quick reference:
- {{input.data}} = entire output
- {{input.data.text}} = text field
- {{input.data.response}} = LLM response
- {{input.data.rows}} = database rows
- {{input.data.rowCount}} = row count
- {{input.conditionMet}} = IF result
- {{input.index}} = loop index
Workflows integrate directly into your Verse AI chat experience. Here's how they connect:
Use the Prompt Match trigger to fire a workflow when you type something specific in chat. For example, set the pattern to summarize my emails and Verse AI will automatically run that workflow when it detects a match.
This node sends a prompt to Verse AI or any of your Versonas (custom AI personas). You can chain it with other nodes to build intelligent pipelines — for instance, have Verse AI analyze data from an HTTP Request, then email you the summary.
Toggle a workflow Active using the switch in the toolbar. Active workflows run automatically based on their triggers. The green badge on the Chat tab shows how many workflows are currently active.
Each workflow can receive external HTTP calls via its Webhook URL. Share this URL with third-party services (e.g., Zapier, GitHub webhooks) to trigger your workflows from anywhere.
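As an illustration, here is how an external service might construct the HTTP call to a workflow's webhook. The helper name and example payload are hypothetical; copy the real Webhook URL from your workflow:

```javascript
// Build the request a third-party service would send to a workflow webhook.
function buildWebhookRequest(payload) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // The JSON body becomes the workflow's trigger output, readable
    // downstream as {{input.data.fieldName}}.
    body: JSON.stringify(payload),
  };
}

// Usage (URL is a placeholder):
// fetch('https://example.com/webhooks/your-workflow-id',
//       buildWebhookRequest({ orderId: 42 }));
const req = buildWebhookRequest({ orderId: 42 });
```

With this payload, the workflow would read the order as {{input.data.orderId}}.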
Combine a Prompt Match trigger with an AI / LLM Call and a Send Email node to create a chat command like "email me a summary of today's tasks" that actually works.
Trigger nodes start your workflow. Every workflow must have at least one trigger. They have no input port — only outputs.
Starts the workflow when you click Execute. Best for testing or one-off tasks.
Fires when an external HTTP request hits your workflow's webhook URL. Configure the HTTP method (GET, POST, etc.), authentication type, and response code.
Triggers on a recurring timer. Use Simple mode for interval-based scheduling (e.g., every 60 minutes) or Cron Expression for advanced patterns like 0 9 * * 1-5 (weekdays at 9 AM).
Fires when a user's chat message matches a pattern. Four match types are supported; choose one in the node's properties panel.
Triggers when an event fires from a connected service (Gmail, Slack, Discord, GitHub, or a Custom API). Select the connector and the event type (New Message, Updated, Deleted, etc.).
A manual trigger with a built-in text box and ▶ Run button directly on the node. Type test data and click Run to send it downstream. The text is available as {{input.data.text}} in the next node.
Listens for internal events from Verse AI apps:
user.login, chat.message, file.upload, etc.

Action nodes do work — they make requests, send messages, transform data, or call AI models.
Makes an HTTP request to any URL. Configure the method (GET, POST, PUT, PATCH, DELETE), headers as JSON, request body, and timeout. Perfect for calling external APIs.
Use {{input.data}} in the URL or body to inject dynamic values.

Sends a prompt to Verse AI or one of your Versonas. Configure a system prompt, user prompt (with {{input.data}} for dynamic content), temperature, and max tokens.
Sends an email via Gmail SMTP. Fill in the recipient, subject, and body. Toggle HTML Body for rich formatting. Use {{input.data}} in any field to insert dynamic content.
Executes a SQL query against the default database or a custom connection string. Supports Execute Query, Insert, Update, and Delete operations. Use parameterized queries with {{input.data.userId}}-style variables.
Reshapes data between nodes using JavaScript. Choose a mode and write an expression, for example: return { summary: input.data.response }; Access the incoming data via the input variable.

Pauses execution for a specified duration. Useful for rate limiting, waiting for external processes, or sequencing timed actions.
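The Transform Data contract described above can be sketched as follows. The runTransform helper is an illustrative assumption about how the node wraps your return value, matching the { data: { ...your return value } } shape in the reference table:

```javascript
// Sketch: the expression receives the previous output as `input`,
// and whatever it returns is wrapped in a fresh data key.
function runTransform(input, expression) {
  return { data: expression(input) };
}

const prev = { data: { response: 'hello world', model: 'demo-model' } };
const next = runTransform(prev, (input) => ({ summary: input.data.response }));
// next.data.summary is then available downstream as {{input.data.summary}}.
```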
Pushes a notification via In-App, Browser Push, Email, or SMS. Set a title, message body, and priority level (Low, Normal, High, Urgent).
Logic nodes let you branch, loop, merge, and code custom behavior.
Branches the workflow into True and False paths based on a condition. Specify a field, operator (equals, contains, greater than, regex match, etc.), and comparison value.
Example: {{input.data.status}} equals success

Routes data to one of four outputs (Case 1, Case 2, Case 3, Default) based on a field's value. Like a multi-way IF statement.
Combines data from two separate branches back into one. Supports Append, Merge By Key, Wait For All, and Multiplex modes.
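Two of the Merge modes can be sketched as follows. This is a hedged approximation of Append and Merge By Key over array payloads; the builder's actual semantics for each mode may differ:

```javascript
// Append: concatenate the data arrays from both branches.
function mergeAppend(a, b) {
  return { data: [].concat(a.data, b.data) };
}

// Merge By Key: join rows from both branches on a shared key, with the
// second branch's fields overriding on conflicts (an assumption).
function mergeByKey(a, b, key) {
  const byKey = new Map(a.data.map((row) => [row[key], { ...row }]));
  for (const row of b.data) {
    byKey.set(row[key], { ...(byKey.get(row[key]) || {}), ...row });
  }
  return { data: [...byKey.values()] };
}

mergeAppend({ data: [1] }, { data: [2] });
// → { data: [1, 2] }
```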
Iterates over an array of items. For each item, data flows out the Each Item port. When iteration is complete, data flows out Done. Configure batch size and max iterations for safety.
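The per-item payload a Loop node emits follows the { data: item, index: N } shape from the reference table. A minimal sketch (the generator is illustrative, not the node's implementation):

```javascript
// Each yielded object is what one firing of the Each Item port carries.
function* iterate(items) {
  for (let index = 0; index < items.length; index++) {
    yield { data: items[index], index };
  }
}

const out = [...iterate([{ id: 'a' }, { id: 'b' }])];
// Downstream nodes read {{input.data.id}} and {{input.index}} per item.
```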
Execute custom JavaScript (or Python server-side) code. Access the incoming data via the input variable and return a new object.
Example (assuming a Text Input node upstream): const result = input.data.text.toUpperCase(); return { data: result };

Connector nodes integrate with third-party services.
Send messages, update messages, get channel info, or list channels. Specify a channel name or ID and your message content.
Send messages, send embeds, or get messages from a Discord channel. Provide the channel ID and content.
Read, send, search, or label Gmail messages. For sending, provide the recipient, subject, and body. For searching, use Gmail search query syntax.
Create issues, list issues, create pull requests, get repo info, or list commits. Provide the owner and repository name.
Connect to any REST API. Configure the base URL, authentication type (Bearer Token, API Key, Basic Auth, OAuth2), endpoint, method, and request body.
Advanced AI capabilities beyond simple LLM calls.
Invoke a Verse AI agent with a task description. The agent can autonomously perform multi-step operations. Set max steps and a timeout to control execution bounds.
Execute a registered skill (like summarize, translate, or extract) with parameters. Enable Retry on Failure for resilience.
Call an external tool or function. Supports MCP Server, REST API, JavaScript Function, and Server-Side tool types. Provide configuration as JSON.
Generate vector embeddings from text for semantic search or similarity matching. Choose between embedding models or a local option.
Classify input text into categories using AI. Define comma-separated categories and the classifier will output the best match along with a category routing output.
Output nodes determine what your workflow produces.
Defines the final output of the workflow. Choose the response type (JSON, Text, HTML, File) and provide a template. Use {{input.data}} to inject the last node's output.
Logs data to the Execution Log panel at the bottom of the workflow builder. Set the log level (Info, Warning, Error, Debug) and a message template. Pass Data Through is enabled by default so it doesn't break your chain.
Displays the full JSON payload of incoming data directly on the node canvas. Unlike Log / Debug (which only writes to the log panel), the Debug Screen shows everything visually on the node itself.
Access upstream values in these examples with {{input.data.fieldName}} variables. Sample field values from example workflows:
- LLM User Prompt: Summarize these tasks: {{input.data}}
- Email Body: {{input.data}}
- Prompt Match pattern: show active users
- SQL Query: SELECT TOP 10 * FROM Users WHERE IsActive = 1
- IF Condition: {{input.data.action}} equals opened
- Slack Message: New issue: {{input.data}} to #dev-alerts

Toggle your workflow Active for triggers to fire automatically. Inactive workflows can only be run manually with the Execute button.
Chat keyboard shortcuts:
- Enter: Send message
- Shift + Enter: New line in message
- Esc: Close modals
If you're experiencing issues or have questions not covered here, please contact our support team or check the documentation for more detailed information.