Langfuse is an open-source observability platform for LLM applications that provides tracing, monitoring, and debugging. The integration is built into the Launchpad’s core using the native Langfuse SDK.
You can self-host for full data control and privacy, which is useful when sensitive data must stay within your infrastructure.

Why Langfuse?

  • Complete Tracing: Track every workflow step, node execution, and LLM call
  • Performance Monitoring: Monitor response times, costs, and success rates
  • Debug Issues: Detailed logs and traces for troubleshooting failures
Datalumina uses this integration in production to monitor and trace workflows.

Quick Setup

1. Get Langfuse Account

Create a free account at langfuse.com and copy your API keys.
2. Update Environment

Add to your .env files:
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com  # Or your self-hosted URL
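These variables can be read at startup before the client is initialized. A minimal sketch using only the standard library — load_langfuse_config is a hypothetical helper for illustration, not part of the Launchpad:

```python
import os

def load_langfuse_config() -> dict:
    """Read Langfuse settings from the environment, failing fast on missing keys."""
    return {
        "public_key": os.environ["LANGFUSE_PUBLIC_KEY"],
        "secret_key": os.environ["LANGFUSE_SECRET_KEY"],
        # Default to Langfuse Cloud when no self-hosted URL is configured
        "base_url": os.environ.get("LANGFUSE_BASE_URL", "https://cloud.langfuse.com"),
    }

# Demonstration with dummy values (real keys come from your .env files)
os.environ.setdefault("LANGFUSE_PUBLIC_KEY", "pk-lf-example")
os.environ.setdefault("LANGFUSE_SECRET_KEY", "sk-lf-example")
print(load_langfuse_config()["base_url"])
```

Using os.environ[...] for the two keys (rather than .get) makes a missing credential fail immediately with a KeyError instead of surfacing later as an authentication error.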
3. Enable Tracing in Your Workflow

Pass enable_tracing=True when initializing your workflow:
workflow = MyWorkflow(enable_tracing=True)
result = workflow.run(event_data)
4. Test Integration

Run a workflow and check your Langfuse dashboard for traces:
python playground/workflow_playground.py

How It Works

The Langfuse integration uses the native Langfuse SDK to create spans around workflow and node execution:
from abc import ABC

from langfuse import get_client

class Workflow(ABC):
    def __init__(self, enable_tracing: bool = True):
        self.langfuse = None
        if enable_tracing:
            langfuse = get_client()
            if langfuse.auth_check():
                self.langfuse = langfuse
            else:
                # LangfuseAuthenticationError is defined elsewhere in the Launchpad
                raise LangfuseAuthenticationError(
                    "Failed to authenticate with Langfuse."
                )
When tracing is enabled:
  • A parent span is created for the entire workflow execution
  • Each node gets its own child span with inputs and outputs
  • LLM calls within AgentNodes are automatically instrumented
  • Errors are captured with full context
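The span hierarchy described above can be illustrated with a small stand-in tracer. This is a sketch of the structure only, not the Langfuse SDK — SketchTracer and the span names are hypothetical:

```python
from contextlib import contextmanager

class SketchTracer:
    """Toy tracer that records nested spans the way the integration structures them."""

    def __init__(self):
        self.spans = []   # (depth, name) tuples, in start order
        self._depth = 0

    @contextmanager
    def span(self, name: str):
        self.spans.append((self._depth, name))
        self._depth += 1
        try:
            yield
        finally:
            self._depth -= 1

tracer = SketchTracer()

# Parent span for the whole workflow, one child span per node,
# and a nested span for the LLM call inside an AgentNode.
with tracer.span("workflow: MyWorkflow"):
    with tracer.span("node: RouterNode"):
        pass
    with tracer.span("node: AgentNode"):
        with tracer.span("llm-call"):
            pass

print(tracer.spans)
```

The recorded depths (0 for the workflow, 1 for each node, 2 for the LLM call) mirror the parent/child relationships you see in the Langfuse trace view.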

Enabling and Disabling Tracing

Tracing is controlled per-workflow instance:
# Enable tracing (default for most workflows)
workflow = StreamingExampleWorkflow(enable_tracing=True)

# Disable tracing for local testing or performance
workflow = StreamingExampleWorkflow(enable_tracing=False)
If enable_tracing=True but Langfuse credentials are missing or invalid, the workflow will raise a LangfuseAuthenticationError.
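If you would rather have local runs degrade gracefully than fail on missing credentials, you can catch the error and retry without tracing. A sketch where the stub classes stand in for the real Launchpad classes (they are simplified for demonstration):

```python
class LangfuseAuthenticationError(Exception):
    """Stand-in for the Launchpad's authentication error."""

class StreamingExampleWorkflow:
    """Stub that simulates failing auth whenever tracing is requested."""

    def __init__(self, enable_tracing: bool = True):
        if enable_tracing:
            raise LangfuseAuthenticationError("Failed to authenticate with Langfuse.")
        self.tracing_enabled = False

try:
    workflow = StreamingExampleWorkflow(enable_tracing=True)
except LangfuseAuthenticationError:
    # Fall back to untraced execution for local development
    workflow = StreamingExampleWorkflow(enable_tracing=False)

print(workflow.tracing_enabled)  # -> False
```

In production you would typically let the error propagate instead, so that a misconfigured deployment is caught early rather than silently running untraced.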

Core Integration Features

  • Automatic Tracing: Every workflow execution is automatically traced when enabled
  • Node-Level Visibility: Individual node executions, inputs, and outputs are captured
  • LLM Call Tracking: All LLM interactions including prompts, responses, and metadata
  • Error Monitoring: Failed executions with full stack traces and context
  • Streaming Support: SSE streaming workflows are fully traced

Dashboard Features

Workflow Traces

View complete workflow execution paths with timing, inputs, and outputs for each node.

Performance Analytics

Monitor average response times, success rates, and cost analysis across workflows.

LLM Usage Tracking

Track token usage, model performance, and costs across different LLM providers.

Debug Information

Detailed error logs with full context when workflows fail or behave unexpectedly.