What’s New in v3.3.0
- SSE streaming with an OpenAI-compatible `/v1/chat/completions` endpoint
- Native Langfuse integration for observability and tracing
- New `AgentStreamingNode` for streaming LLM responses
- Modular Docker Compose files for flexible deployments
- Python 3.13.7+ requirement
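Because the new endpoint is OpenAI-compatible, any standard OpenAI client should be able to consume the stream. Here is a minimal sketch using the official `openai` Python SDK; the base URL, API key, and model name are placeholder assumptions for a local deployment, not values defined by the Launchpad:

```python
from openai import OpenAI

# Point the standard OpenAI client at the Launchpad's compatible endpoint.
# The base_url, api_key, and model below are assumptions for a local setup.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")

stream = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whatever model your deployment serves
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,  # request SSE streaming
)

# Tokens arrive as server-sent events; print them as they stream in.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```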
Core Concept: Workflows and Nodes
Everything in GenAI Launchpad is built around one pattern: Workflows execute Nodes that pass data through a TaskContext (see the sketch after this list).
- Workflow: Orchestrates execution of connected nodes
- Node: A processing unit (fetch data, call an LLM, route logic)
- TaskContext: Pydantic model passed between nodes containing event data and outputs
- WorkflowSchema: Defines the structure of a workflow: which node starts and how nodes connect
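To make the pattern concrete, here is a minimal sketch of the idea; the method names (`process`, `run`) and the example node classes are illustrative assumptions, not the Launchpad's actual API:

```python
from pydantic import BaseModel

# TaskContext carries the incoming event and each node's output downstream.
class TaskContext(BaseModel):
    event: dict
    outputs: dict = {}

# A Node is a single processing unit; `process` is an assumed method name.
class ExtractNode:
    def process(self, ctx: TaskContext) -> TaskContext:
        ctx.outputs["text"] = ctx.event.get("document", "")
        return ctx

class SummarizeNode:
    def process(self, ctx: TaskContext) -> TaskContext:
        # An LLM call would go here; a trivial stand-in keeps the sketch runnable.
        ctx.outputs["summary"] = ctx.outputs["text"][:100]
        return ctx

# A Workflow executes connected nodes in order, threading the context through.
class Workflow:
    def __init__(self, nodes):
        self.nodes = nodes

    def run(self, event: dict) -> TaskContext:
        ctx = TaskContext(event=event)
        for node in self.nodes:
            ctx = node.process(ctx)
        return ctx

result = Workflow([ExtractNode(), SummarizeNode()]).run({"document": "..."})
print(result.outputs["summary"])
```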
Multi-Provider LLM Support
GenAI Launchpad uses PydanticAI for LLM access. Switch providers by changing one line:
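A minimal sketch (the model names are illustrative, and `result.output` is `result.data` in older PydanticAI releases):

```python
from pydantic_ai import Agent

# Switching providers means changing this one model identifier string.
agent = Agent("openai:gpt-4o")
# agent = Agent("anthropic:claude-3-5-sonnet-latest")  # same code, new provider

result = agent.run_sync("Summarize GenAI Launchpad in one sentence.")
print(result.output)  # `result.data` in older PydanticAI releases
```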
Production Infrastructure
The stack is pre-configured and ready to deploy (the sketch after this list shows how events flow through it):
- FastAPI - API endpoints that receive events
- Celery + Redis - Background task processing
- PostgreSQL - Event persistence and results storage
- Supabase - Auth, realtime, and storage (optional)
- Langfuse - LLM observability and tracing
- Alembic - Database migrations
- Docker + Caddy - Containerized deployment with automatic HTTPS
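As a rough picture of how events move through this stack, consider the sketch below; the endpoint path, task name, and Redis URL are assumptions for illustration, not the Launchpad's actual wiring:

```python
from celery import Celery
from fastapi import FastAPI

app = FastAPI()
# Redis acts as the Celery broker in this sketch.
celery_app = Celery("launchpad", broker="redis://localhost:6379/0")

@celery_app.task
def process_event(event: dict) -> None:
    # Run the workflow here and persist results to PostgreSQL.
    ...

@app.post("/events")
async def receive_event(event: dict):
    # FastAPI accepts the event and hands it off for background processing.
    process_event.delay(event)
    return {"status": "queued"}
```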
What GenAI Launchpad Is Not
Not an Agent Framework: While you can build agent-like systems using our workflow architecture, GenAI Launchpad isn’t primarily an agent framework like AutoGPT, CrewAI, or LangGraph. Instead, it provides the infrastructure to build any type of AI application. These frameworks can be integrated into the Launchpad alongside or instead of PydanticAI.

Not Opinionated About AI Logic: We don’t dictate how you implement your AI logic:
- Use our built-in workflow system
- Integrate LangChain or LlamaIndex
- Build custom solutions
- Swap Redis for RabbitMQ
- Use different model providers
- Implement custom workflow processors
Use Cases
The workflow/node pattern works well for:
- Document Processing - Chain nodes: extract → analyze → summarize → store
- Customer Support - Route node determines intent, specialized nodes handle responses (sketched after this list)
- Content Generation - Sequential nodes for research → outline → draft → refine
- Data Pipelines - Concurrent nodes process multiple data sources in parallel
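For instance, the customer-support case could be sketched as a route node that classifies intent and dispatches to a specialized node; every name here, including the `classify_intent` stand-in for an LLM call, is hypothetical:

```python
# Hypothetical routing sketch; classify_intent stands in for an LLM call.
def classify_intent(message: str) -> str:
    return "billing" if "invoice" in message.lower() else "general"

class BillingNode:
    def process(self, ctx: dict) -> dict:
        ctx["reply"] = "Here is your billing information..."
        return ctx

class GeneralNode:
    def process(self, ctx: dict) -> dict:
        ctx["reply"] = "Happy to help with that..."
        return ctx

class RouteNode:
    # The route node maps each detected intent to a specialized handler node.
    routes = {"billing": BillingNode(), "general": GeneralNode()}

    def process(self, ctx: dict) -> dict:
        return self.routes[classify_intent(ctx["message"])].process(ctx)

print(RouteNode().process({"message": "Where is my invoice?"})["reply"])
```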