This example shows two approaches to chat completion in GenAI Launchpad and how to choose between them.

Implementation Modes

Direct response mode

The frontend sends a prompt via POST, and the backend processes it synchronously and returns the complete AI-generated response.

Best for: simple chatbots, quick responses, minimal infrastructure.
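
A rough sketch of the frontend side of this mode is shown below: post the prompt and wait for the full reply. The endpoint path and the request/response shapes are assumptions for illustration, not the Launchpad's actual API contract.

```typescript
// Hypothetical direct-response call: the endpoint path ("/api/chat") and the
// request/response shapes are placeholders, not the Launchpad's real contract.
async function sendPromptDirect(prompt: string): Promise<string> {
  const response = await fetch("/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });

  if (!response.ok) {
    throw new Error(`Chat request failed with status ${response.status}`);
  }

  // The backend works synchronously, so the complete AI-generated reply is
  // available as soon as the request resolves.
  const data: { reply: string } = await response.json();
  return data.reply;
}
```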

Real‑time background mode

The backend queues the request as a background job, and the frontend receives updates over Supabase WebSockets as the database is updated.

Best for: long-running tasks, complex workflows, scalable applications.
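
A minimal sketch of this flow from the frontend's perspective, assuming the backend exposes an enqueue endpoint that returns the ID of the database row the worker will update; the endpoint path, table name, and column names below are illustrative assumptions, not the example's actual schema.

```typescript
import { createClient } from "@supabase/supabase-js";

// Placeholder credentials; in the example frontend these come from
// environment variables (see the setup guide below).
const supabase = createClient("https://your-project.supabase.co", "public-anon-key");

async function sendPromptBackground(
  prompt: string,
  onReply: (text: string) => void
): Promise<void> {
  // 1. Enqueue the request. This hypothetical endpoint returns the ID of the
  //    row that the background worker will update once the response is ready.
  const res = await fetch("/api/chat/async", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  const { id } = await res.json();

  // 2. Subscribe to Supabase realtime instead of polling: the callback fires
  //    as soon as the worker writes the AI response to the database.
  supabase
    .channel(`chat-${id}`)
    .on(
      "postgres_changes",
      { event: "UPDATE", schema: "public", table: "chat_messages", filter: `id=eq.${id}` },
      (payload) => onReply(payload.new.response as string)
    )
    .subscribe();
}
```

The table and column names here are assumptions; check the example branch for the actual schema and endpoint.
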
Both modes simulate streaming on the frontend by progressively displaying the message for a natural typing effect.

Setup Guide

1. Switch to the example branch

Navigate to your backend repository and check out the chat example:
git checkout example/chat
2. Apply database migrations

Navigate to the app/ directory and run migrations:
cd app/
./migrate.sh
3. Clone the frontend repository

Get the example frontend application:
git clone git@github.com:datalumina/genai-launchpad-chat-example-frontend.git
4. Configure the frontend

Follow the README instructions in the frontend repository to:
  • Install dependencies
  • Configure environment variables
  • Set up the Supabase connection (a minimal client sketch follows this list)
  • Run the development server
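
As an illustration of the last two items, a Supabase client module typically looks roughly like this; the environment variable names assume a Vite-style frontend and may differ from what the README specifies.

```typescript
import { createClient } from "@supabase/supabase-js";

// Environment variable names are assumptions (Vite-style); use whatever
// names the frontend README defines.
export const supabase = createClient(
  import.meta.env.VITE_SUPABASE_URL,
  import.meta.env.VITE_SUPABASE_ANON_KEY
);
```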

Architecture Overview

Direct Response characteristics:
  • Synchronous processing
  • Lower latency for short responses
  • Simpler error handling
  • Limited by request timeout

Key Features

  • Even though the backend returns complete responses, the frontend simulates a streaming effect by progressively revealing characters (see the sketch after this list). This provides a more engaging user experience, similar to ChatGPT.
  • The background mode leverages Supabase’s real-time capabilities to push updates to the frontend immediately when the database is updated, eliminating the need for polling.
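
A minimal sketch of the simulated-streaming effect: the complete reply is already in hand, and the UI simply reveals it a few characters at a time. The typewriter helper and its render callback are illustrative stand-ins for whatever state update the example frontend actually performs.

```typescript
// Reveal an already-complete reply progressively for a typing effect.
function typewriter(
  fullText: string,
  render: (visible: string) => void,
  delayMs = 15
): void {
  let index = 0;
  const timer = setInterval(() => {
    index += 1;
    render(fullText.slice(0, index));
    if (index >= fullText.length) {
      clearInterval(timer);
    }
  }, delayMs);
}

// Usage: progressively render the reply returned by either mode.
typewriter("Hello! How can I help you today?", (text) => {
  // e.g. update component state here
  console.log(text);
});
```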