How to Integrate with Flowise

Learn how to use the SnackPrompt AI Engine API as an external knowledge source in Flowise chatbots and AI workflows.

Overview

Flowise is a visual tool for building LLM applications. It offers several ways to integrate external APIs:

| Method | Use Case | Description |
| --- | --- | --- |
| Custom Tool | Agent with tools | Agent decides when to query the API |
| HTTP Request Node | Direct RAG | Call API at a specific point in the flow |
| Custom Retriever | Replace vector store | Use external API instead of built-in retriever |
| API Chain | Sequential calls | Chain multiple API calls together |

Integration Architecture

┌─────────────────────────────────────────────────────────┐
│                      Flowise                            │
│  ┌─────────────┐    ┌─────────────┐    ┌────────────┐   │
│  │    Chat     │───▶│  LLM Chain  │───▶│  Response  │   │
│  │   Input     │    │  or Agent   │    │            │   │
│  └─────────────┘    └──────┬──────┘    └────────────┘   │
│                            │                            │
│                     ┌──────▼──────┐                     │
│                     │ Custom Tool │                     │
│                     │  or Chain   │                     │
│                     └──────┬──────┘                     │
└────────────────────────────┼────────────────────────────┘
                             │
                             ▼
              ┌──────────────────────────────┐
              │  SnackPrompt AI Engine API   │
              │  /v1/kb/search or /v1/kb/chat│
              └──────────────────────────────┘

Method 1: Custom Tool

Use this method when you want the agent to autonomously decide when to search your knowledge base.

Step 1: Create a Custom Tool

  1. In Flowise, go to Tools in the sidebar

  2. Click Add New

  3. Configure the tool:

Tool Configuration:

| Field | Value |
| --- | --- |
| Name | search_knowledge_base |
| Description | Use this tool to search for information in the company knowledge base. Send a natural language query to find relevant documents about products, policies, procedures, etc. Returns relevant text snippets. |

Step 2: Configure the Tool Code
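The tool body below is a sketch, not the canonical implementation. It assumes Node 18+'s global `fetch`, a response shaped as a `results` array of `{ content }` objects, and that Flowise exposes the tool's schema input as `$query` and configured variables as `$vars` (verify both against your Flowise version and the API reference). `buildSearchRequest` is a hypothetical helper introduced here for readability.

```javascript
// Hypothetical helper that assembles the request.
// Note: tenant_id must sit inside the filters object (see Troubleshooting).
const buildSearchRequest = (query, apiKey, tenantId) => ({
  url: 'https://api-integrations.snackprompt.com/v1/kb/search',
  options: {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      query,
      limit: 5,
      filters: { tenant_id: tenantId },
    }),
  },
});

// The tool returns a single string: agents consume plain text snippets well.
async function searchKnowledgeBase(query, apiKey, tenantId) {
  const { url, options } = buildSearchRequest(query, apiKey, tenantId);
  const res = await fetch(url, options);
  if (!res.ok) return `Search failed with status ${res.status}`;
  const data = await res.json();
  const results = data.results || [];
  if (results.length === 0) return 'No relevant documents found.';
  return results.map((r, i) => `[${i + 1}] ${r.content}`).join('\n\n');
}

// Inside the Flowise tool editor, the body would end with something like:
// return searchKnowledgeBase($query, $vars.SNACKPROMPT_API_KEY, $vars.SNACKPROMPT_TENANT_ID);
```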

Step 3: Build the Agent Flow

  1. Add Chat Trigger node

  2. Add Tool Agent or OpenAI Function Agent node

  3. Connect your Custom Tool

  4. Add Chat Model (OpenAI, Anthropic, etc.)


Method 2: HTTP Request Chain

Use when you want a deterministic flow where the search always happens.

Step 1: Create the Flow

  1. Add Chat Trigger node

  2. Add HTTP Request node

  3. Add LLM Chain node for response generation

Step 2: Configure HTTP Request Node

Request Configuration:

| Field | Value |
| --- | --- |
| Method | POST |
| URL | https://api-integrations.snackprompt.com/v1/kb/search |
| Headers | See below |
| Body | See below |

Headers:
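A minimal sketch of the headers, assuming Flowise's `{{$vars.*}}` variable interpolation is available in the HTTP Request node (the exact syntax varies by Flowise version):

```json
{
  "Content-Type": "application/json",
  "Authorization": "Bearer {{$vars.SNACKPROMPT_API_KEY}}"
}
```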

Body:
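A sample request body. The `{{question}}` placeholder stands for the incoming chat message; wire it to whichever variable your flow actually provides:

```json
{
  "query": "{{question}}",
  "limit": 5,
  "filters": { "tenant_id": "{{$vars.SNACKPROMPT_TENANT_ID}}" }
}
```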

Step 3: Process and Format Results

Add a JavaScript Function node to format the results:
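A sketch of the formatter. It assumes the search response carries a `results` array of `{ content, source }` objects; rename those fields to match the actual /v1/kb/search response:

```javascript
// Turn raw search results into a context block the LLM can read.
function formatResults(response) {
  const results = (response && response.results) || [];
  if (results.length === 0) return 'No relevant documents found.';
  return results
    .map((r, i) => `Source ${i + 1}${r.source ? ` (${r.source})` : ''}:\n${r.content}`)
    .join('\n\n---\n\n');
}

// In the Flowise node you would pass in the previous HTTP Request output,
// e.g.: return formatResults($httpResponse);  // $httpResponse is hypothetical
```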

Step 4: Generate Response with LLM

Add LLM Chain with prompt:
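A sample prompt; `{context}` and `{question}` are placeholder names chosen here for illustration, so match them to the variables your flow actually passes in:

```
Answer the user's question using only the context below.
If the answer is not in the context, say you don't know.

Context:
{context}

Question: {question}
```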


Method 3: Chat Endpoint for Complete Responses

Use the /v1/kb/chat endpoint when you want the API to handle the entire RAG pipeline, both retrieval and answer generation, for you.

Simple Chatbot Flow

HTTP Request Configuration

| Field | Value |
| --- | --- |
| Method | POST |
| URL | https://api-integrations.snackprompt.com/v1/kb/chat |

Headers:
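Same header sketch as for the search endpoint, assuming `{{$vars.*}}` interpolation works in the HTTP Request node of your Flowise version:

```json
{
  "Content-Type": "application/json",
  "Authorization": "Bearer {{$vars.SNACKPROMPT_API_KEY}}"
}
```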

Body:
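An assumed request shape for the chat endpoint: the `message` field name and filter placement are guesses based on the search endpoint, so confirm them against the API reference:

```json
{
  "message": "{{question}}",
  "filters": { "tenant_id": "{{$vars.SNACKPROMPT_TENANT_ID}}" }
}
```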

Parse Response

Extract the answer field from the response:
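A minimal sketch for a JavaScript Function node, assuming the /v1/kb/chat response has a top-level `answer` string field:

```javascript
// Pull the generated answer out of the chat response, with a safe fallback.
function extractAnswer(response) {
  return response && typeof response.answer === 'string'
    ? response.answer
    : 'Sorry, I could not generate an answer.';
}
```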


Method 4: Custom Retriever Node

For advanced users who want to replace Flowise's built-in retrievers.

Step 1: Create Custom Retriever

Create a custom node that implements the retriever interface:
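In Flowise this would extend LangChain.js's `BaseRetriever` (from `@langchain/core/retrievers`); the plain class below mirrors only the `getRelevantDocuments` shape so the mapping logic is easy to follow. The response field names (`results`, `content`, `source`, `score`) are assumptions, and `fetchFn` is injectable purely to make the class testable:

```javascript
// Retriever-style wrapper around /v1/kb/search that maps API results onto
// LangChain's Document shape ({ pageContent, metadata }).
class SnackPromptRetriever {
  constructor({ apiKey, tenantId, limit = 4, fetchFn = fetch }) {
    this.apiKey = apiKey;
    this.tenantId = tenantId;
    this.limit = limit;
    this.fetchFn = fetchFn; // injectable for testing
  }

  async getRelevantDocuments(query) {
    const res = await this.fetchFn(
      'https://api-integrations.snackprompt.com/v1/kb/search',
      {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          Authorization: `Bearer ${this.apiKey}`,
        },
        body: JSON.stringify({
          query,
          limit: this.limit,
          filters: { tenant_id: this.tenantId },
        }),
      }
    );
    const data = await res.json();
    return (data.results || []).map((r) => ({
      pageContent: r.content,
      metadata: { source: r.source, score: r.score },
    }));
  }
}
```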

Step 2: Use in Conversational Retrieval Chain

Connect your custom retriever to a Conversational Retrieval QA Chain:


Practical Use Cases

1. Simple Support Chatbot

Direct integration for simple Q&A chatbots.

2. Agent with Multiple Tools

Agent that can search different knowledge bases and perform calculations.

3. RAG with Memory

Chatbot that remembers conversation history while retrieving from knowledge base.

4. Multi-Source RAG

Search multiple knowledge bases and combine results.

Combine external API results with a local vector store.


Environment Variables

Set these in your Flowise deployment:

| Variable | Description |
| --- | --- |
| SNACKPROMPT_API_KEY | Your SnackPrompt API key |
| SNACKPROMPT_TENANT_ID | Your tenant ID |

Setting Environment Variables

Docker:
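For example, passing the variables to the official Flowise image (`flowiseai/flowise`); adjust port and flags to your deployment:

```sh
docker run -d \
  -p 3000:3000 \
  -e SNACKPROMPT_API_KEY=your-api-key \
  -e SNACKPROMPT_TENANT_ID=your-tenant-id \
  flowiseai/flowise
```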

Local:
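For a local install, export the variables in the shell before starting Flowise:

```sh
export SNACKPROMPT_API_KEY=your-api-key
export SNACKPROMPT_TENANT_ID=your-tenant-id
npx flowise start
```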


Configuration Tips

1. Tool Description is Critical

For agents, the tool description determines when it's used:
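For example (illustrative wording):

```
Good: "Search the company knowledge base for product information, policies,
       and procedures. Input should be a natural-language question. Returns
       relevant text snippets."

Bad:  "Search tool"
```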

2. Limit Results

Too many results can confuse the LLM:
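For example, cap the number of snippets in the request body (3 to 5 works well for most chat models; the field names follow the /v1/kb/search request format used in this guide):

```json
{
  "query": "{{question}}",
  "limit": 3,
  "filters": { "tenant_id": "{{$vars.SNACKPROMPT_TENANT_ID}}" }
}
```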

3. Use Filters

Filter by tags when you know the context:
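Assuming the API accepts a `tags` array inside `filters` (check the API reference for the exact filter schema):

```json
{
  "query": "How do I return a product?",
  "filters": {
    "tenant_id": "{{$vars.SNACKPROMPT_TENANT_ID}}",
    "tags": ["returns", "policies"]
  }
}
```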

4. Handle Errors

Add error handling in your JavaScript nodes:
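A sketch of defensive wrapping so the flow degrades gracefully instead of crashing; `doSearch` stands in for whatever performs the HTTP request in your node:

```javascript
// Wrap the API call: validate the response shape and catch network failures.
async function safeSearch(query, doSearch) {
  try {
    const data = await doSearch(query);
    if (!data || !Array.isArray(data.results)) {
      return 'The knowledge base returned an unexpected response.';
    }
    return data.results.map((r) => r.content).join('\n\n');
  } catch (err) {
    // Log for debugging, but return a user-safe message to the flow.
    console.error('Knowledge base search failed:', err.message);
    return 'The knowledge base is temporarily unavailable. Please try again.';
  }
}
```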

5. Cache Responses

For frequently asked questions, consider caching:
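A minimal in-memory sketch with a TTL. This only helps on a single Flowise instance (the cache is not shared across replicas, and node-level module state may be reset between invocations depending on your deployment):

```javascript
// Cache search results per query string for a few minutes.
const cache = new Map();
const TTL_MS = 5 * 60 * 1000; // 5 minutes

async function cachedSearch(query, doSearch, now = Date.now) {
  const hit = cache.get(query);
  if (hit && now() - hit.at < TTL_MS) return hit.value; // fresh hit: skip the API
  const value = await doSearch(query);
  cache.set(query, { value, at: now() });
  return value;
}
```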


Complete Example: Customer Support Bot

Flow Structure

  1. Chat Trigger: Receives user message

  2. Tool Agent: Processes with GPT-4

  3. Custom Tools: Search knowledge base

  4. Response: Returns answer with sources

Agent System Prompt
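A sample system prompt; the wording is illustrative, and the tool names match the tool configuration below:

```
You are a customer support assistant for our company.

- Use the search_kb tool for questions about policies and FAQs.
- Use the search_products tool for product details and pricing.
- If a search returns nothing relevant, say so honestly instead of guessing.
- Cite the source document when one is available.
```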

Tool Configuration

| Tool | Name | Description |
| --- | --- | --- |
| Search | search_kb | Search for product info, policies, and FAQs |
| Search Products | search_products | Search specifically for product details and pricing |


Troubleshooting

Error: "tenant_id is required"

Ensure tenant_id is inside the filters object:
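A top-level `tenant_id` field triggers this error; nest it under `filters` instead:

```json
{
  "query": "refund policy",
  "filters": { "tenant_id": "your-tenant-id" }
}
```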

Agent doesn't use the tool

  1. Improve the tool description

  2. Add examples in the system prompt

  3. Test with explicit queries like "search for..."

Empty results

  1. Verify environment variables are set

  2. Check tenant_id is correct

  3. Remove tag filters to search all content

  4. Test the API directly with curl

Timeout errors

  1. Increase timeout in HTTP Request node

  2. Reduce the limit parameter

  3. Check network connectivity


