This example shows how the Launchpad integrates with Langfuse to trace every workflow step and every LLM call. It ships as LangfuseTracingWorkflow in app/launchpad/workflows/examples/langfuse_tracing/ and is registered as WorkflowRegistry.LANGFUSE_TRACING.
What the workflow does
A simple moderation pipeline for user comments:
ViolationDetectionNode — an AgentNode that classifies whether a comment violates policy.
ContextSummaryResult — an AgentNode that summarizes the comment for the audit log.
RemoveCommentNode — a plain Node that deletes the comment when the previous step flagged it.
Each node runs inside its own Langfuse span when enable_tracing=True, so you can see timings, inputs, outputs, and LLM calls for the whole run in the Langfuse dashboard.
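The span-per-node pattern can be sketched with a stand-in tracer. This is not the Launchpad implementation (the real code uses the Langfuse SDK); `SpanRecorder` and `run_workflow` here are hypothetical placeholders that only illustrate the shape of the idea:

```python
import time
from contextlib import contextmanager


class SpanRecorder:
    """Stand-in for a Langfuse client: records named spans with timings."""

    def __init__(self):
        self.spans = []

    @contextmanager
    def span(self, name):
        start = time.perf_counter()
        try:
            yield
        finally:
            self.spans.append((name, time.perf_counter() - start))


def run_workflow(nodes, tracer=None, enable_tracing=False):
    """Run each node, wrapping it in its own span when tracing is enabled."""
    for name, fn in nodes:
        if enable_tracing and tracer is not None:
            with tracer.span(name):
                fn()
        else:
            fn()


tracer = SpanRecorder()
run_workflow(
    [
        ("ViolationDetectionNode", lambda: None),
        ("ContextSummaryResult", lambda: None),
        ("RemoveCommentNode", lambda: None),
    ],
    tracer=tracer,
    enable_tracing=True,
)
print([name for name, _ in tracer.spans])
# -> ['ViolationDetectionNode', 'ContextSummaryResult', 'RemoveCommentNode']
```

With `enable_tracing=False` the nodes still run, but no spans are recorded, which mirrors the default behavior described below.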
Schema
from datetime import datetime

from pydantic import BaseModel


class LangfuseTracingEventSchema(BaseModel):
    event: str
    timestamp: datetime
    comment_id: str
    thread_id: str
    user_id: str
    content: str
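Assuming standard Pydantic v2 behavior, an incoming payload validates against this schema as shown below (the schema is redefined so the snippet is self-contained, and the field values are illustrative):

```python
from datetime import datetime

from pydantic import BaseModel


class LangfuseTracingEventSchema(BaseModel):
    event: str
    timestamp: datetime
    comment_id: str
    thread_id: str
    user_id: str
    content: str


event = LangfuseTracingEventSchema.model_validate({
    "event": "comment_posted",
    "timestamp": "2026-04-17T12:00:00Z",  # ISO 8601 strings are coerced to datetime
    "comment_id": "comment-123",
    "thread_id": "thread-abc",
    "user_id": "user-42",
    "content": "This is a test comment.",
})
print(event.timestamp.year)  # -> 2026
```

A payload missing any of these required fields raises a `ValidationError`, so malformed events are rejected before the workflow starts.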
Workflow definition
class LangfuseTracingWorkflow(Workflow):
    workflow_schema = WorkflowSchema(
        description="",
        event_schema=LangfuseTracingEventSchema,
        start=ViolationDetectionNode,
        nodes=[
            NodeConfig(
                node=ViolationDetectionNode,
                connections=[ContextSummaryResult],
            ),
            NodeConfig(
                node=ContextSummaryResult,
                connections=[RemoveCommentNode],
            ),
            NodeConfig(
                node=RemoveCommentNode,
                connections=[],
            ),
        ],
    )
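Conceptually, the engine walks this definition from `start` and follows each node's `connections` until it reaches a node with none. A minimal sketch of that traversal, with simplified stand-in nodes and dicts in place of `NodeConfig` (the real `Workflow` engine is richer):

```python
# Stand-in nodes: each one just records its own name in the context.
class Node:
    def process(self, ctx):
        ctx.append(type(self).__name__)
        return ctx


class ViolationDetectionNode(Node): pass
class ContextSummaryResult(Node): pass
class RemoveCommentNode(Node): pass


# Plain dicts standing in for NodeConfig entries.
node_configs = [
    {"node": ViolationDetectionNode, "connections": [ContextSummaryResult]},
    {"node": ContextSummaryResult, "connections": [RemoveCommentNode]},
    {"node": RemoveCommentNode, "connections": []},
]


def run(start, configs, ctx):
    """Follow connections from the start node until a node has none."""
    graph = {cfg["node"]: cfg["connections"] for cfg in configs}
    current = start
    while current is not None:
        ctx = current().process(ctx)
        nxt = graph[current]
        current = nxt[0] if nxt else None
    return ctx


print(run(ViolationDetectionNode, node_configs, []))
# -> ['ViolationDetectionNode', 'ContextSummaryResult', 'RemoveCommentNode']
```

Because each node lists exactly one connection (or none), this example is a straight pipeline; the same structure supports branching by listing multiple connections.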
Violation detection node
class ViolationDetectionNode(AgentNode):
    class OutputType(AgentNode.OutputType):
        comment_id: str
        violation: bool
        reason: Optional[str] = None

    def get_agent_config(self) -> AgentConfig:
        return AgentConfig(
            instructions=(
                "Determine whether the comment is a violation or not. If it is a "
                "violation, provide a reason for violation. If it is not a "
                "violation, provide a reason for non-violation."
            ),
            output_type=self.OutputType,
            deps_type=LangfuseTracingEventSchema,
            model_provider=ModelProvider.OPENAI,
            model_name="gpt-5.4-mini",
            instrument=True,
        )

    async def process(self, task_context: TaskContext) -> TaskContext:
        event: LangfuseTracingEventSchema = task_context.event

        @self.agent.instructions
        async def add_context() -> str:
            return event.model_dump_json()

        result = await self.agent.run(user_prompt=event.model_dump_json())
        self.save_output(result.output)
        return task_context
The other two nodes are thin: ContextSummaryResult follows the same pattern with a summarization prompt, and RemoveCommentNode reads the ViolationDetectionNode.OutputType result via get_output() and logs the deletion.
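The save_output()/get_output() handoff between nodes can be sketched as follows. This is a simplified stand-in, not the Launchpad API: TaskContext here is a hypothetical dict keyed by node class name, and the detection result is hard-coded where the real node would use the agent's structured output:

```python
from dataclasses import dataclass, field


@dataclass
class TaskContext:
    outputs: dict = field(default_factory=dict)


class NodeBase:
    def save_output(self, ctx, output):
        ctx.outputs[type(self).__name__] = output

    def get_output(self, ctx, node_cls):
        return ctx.outputs[node_cls.__name__]


class ViolationDetectionNode(NodeBase):
    def process(self, ctx):
        # In the real node this comes from the agent's structured output.
        self.save_output(ctx, {"comment_id": "comment-123",
                               "violation": True,
                               "reason": "spam"})
        return ctx


class RemoveCommentNode(NodeBase):
    def process(self, ctx):
        detection = self.get_output(ctx, ViolationDetectionNode)
        if detection["violation"]:
            print(f"deleting {detection['comment_id']}: {detection['reason']}")
        return ctx


ctx = RemoveCommentNode().process(ViolationDetectionNode().process(TaskContext()))
# prints "deleting comment-123: spam"
```

The point of the pattern is that downstream nodes depend only on the upstream node's declared output type, not on how that output was produced.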
Running the example
Set Langfuse credentials
Add to .env (or your shell environment):

LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_BASE_URL=https://cloud.langfuse.com
Run the playground script
uv run playground/langfuse_tracing.py
The script loads app/launchpad/workflows/examples/langfuse_tracing/request_examples/violation.json, instantiates the workflow via WorkflowRegistry.LANGFUSE_TRACING.value(), and runs it.
Inspect the trace
Open the Langfuse dashboard. You should see a trace named LangfuseTracingWorkflow with child spans for each node (ViolationDetectionNode, ContextSummaryResult, RemoveCommentNode) and the underlying LLM generations.
The playground instantiates the workflow without arguments, which defaults to enable_tracing=False. To capture traces, update the script to WorkflowRegistry.LANGFUSE_TRACING.value(enable_tracing=True), or instantiate LangfuseTracingWorkflow(enable_tracing=True) directly.
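The `.value()` call works because an enum member's value can be a class, so calling it constructs the workflow. A minimal sketch of that registry pattern, assuming (hypothetically) that WorkflowRegistry is a plain `Enum` whose values are workflow classes:

```python
from enum import Enum


class Workflow:
    def __init__(self, enable_tracing: bool = False):
        self.enable_tracing = enable_tracing


class LangfuseTracingWorkflow(Workflow):
    pass


class WorkflowRegistry(Enum):
    # The member's value is the workflow class itself.
    LANGFUSE_TRACING = LangfuseTracingWorkflow


# .value is the class, so calling it instantiates the workflow.
wf = WorkflowRegistry.LANGFUSE_TRACING.value(enable_tracing=True)
print(type(wf).__name__, wf.enable_tracing)  # -> LangfuseTracingWorkflow True
```

This keeps the playground script decoupled from concrete workflow classes: it can look workflows up by registry name and still pass constructor arguments like enable_tracing.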
Example event
app/launchpad/workflows/examples/langfuse_tracing/request_examples/violation.json:
{
  "event": "comment_posted",
  "timestamp": "2026-04-17T12:00:00Z",
  "comment_id": "comment-123",
  "thread_id": "thread-abc",
  "user_id": "user-42",
  "content": "This is a test comment that should be evaluated for policy violations."
}