Documentation Index

Fetch the complete documentation index at: https://launchpad.datalumina.com/llms.txt

Use this file to discover all available pages before exploring further.

AgentNode picks a model backend via the ModelProvider enum in app/launchpad/core/nodes/agent.py. Switching providers is a one-line change in the node’s get_agent_config(); credentials come from environment variables loaded by python-dotenv at import time.
from launchpad.core.nodes.agent import AgentConfig, AgentNode, ModelProvider

class MyNode(AgentNode):
    def get_agent_config(self) -> AgentConfig:
        return AgentConfig(
            model_provider=ModelProvider.OPENAI,
            model_name="gpt-5.4-mini",
            output_type=self.OutputType,
        )
All providers are reached through pydantic-ai. AgentConfig.instrument=True (the default) wires each call into Langfuse when enable_tracing=True on the workflow.

Supported providers

OpenAI — ModelProvider.OPENAI

Uses OpenAIResponsesModel. Good default for new workflows.
| Env var | Purpose |
| --- | --- |
| OPENAI_API_KEY | Standard OpenAI API key. |

AgentConfig(model_provider=ModelProvider.OPENAI, model_name="gpt-5.4-mini")

Azure OpenAI — ModelProvider.AZURE_OPENAI

Routes through an AsyncAzureOpenAI client and the pydantic-ai AzureProvider.
| Env var | Purpose |
| --- | --- |
| AZURE_OPENAI_ENDPOINT | Azure resource endpoint. |
| AZURE_OPENAI_API_KEY | Resource API key. |
| AZURE_OPENAI_API_VERSION | API version (defaults to 2025-03-01-preview). |

model_name falls back to gpt-5-mini when left blank.
AgentConfig(model_provider=ModelProvider.AZURE_OPENAI, model_name="gpt-5-mini")
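The two Azure fallbacks (API version and model name) can be expressed as a small helper. The function name and dict shape here are illustrative only, not launchpad's API:

```python
def resolve_azure_settings(env: dict[str, str], model_name: str = "") -> dict[str, str]:
    """Apply the documented Azure defaults: API version and model-name fallbacks."""
    return {
        "endpoint": env["AZURE_OPENAI_ENDPOINT"],  # required
        "api_key": env["AZURE_OPENAI_API_KEY"],    # required
        "api_version": env.get("AZURE_OPENAI_API_VERSION", "2025-03-01-preview"),
        "model_name": model_name or "gpt-5-mini",  # fallback when left blank
    }


settings = resolve_azure_settings(
    {
        "AZURE_OPENAI_ENDPOINT": "https://example.openai.azure.com",
        "AZURE_OPENAI_API_KEY": "secret",
    },
)
# With neither optional value set, both documented defaults apply.
```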

Anthropic — ModelProvider.ANTHROPIC

| Env var | Purpose |
| --- | --- |
| ANTHROPIC_API_KEY | Anthropic API key. |

AgentConfig(model_provider=ModelProvider.ANTHROPIC, model_name="claude-sonnet-4-6")

Google Gemini — ModelProvider.GOOGLE_GEMINI

Uses GoogleModel with the standard GoogleProvider (API-key auth).
| Env var | Purpose |
| --- | --- |
| GOOGLE_API_KEY | Gemini API key. |

AgentConfig(model_provider=ModelProvider.GOOGLE_GEMINI, model_name="gemini-2.5-pro")

Google Vertex AI — ModelProvider.GOOGLE_VERTEX_AI

Uses a service account to authenticate against Vertex AI.
| Env var | Purpose |
| --- | --- |
| GOOGLE_APPLICATION_CREDENTIALS | Absolute path to a service account JSON file. |
| GOOGLE_VERTEX_AI_LOCATION | Region (defaults to europe-west1). |

AgentConfig(model_provider=ModelProvider.GOOGLE_VERTEX_AI, model_name="gemini-2.5-pro")
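A pre-flight check can catch a relative GOOGLE_APPLICATION_CREDENTIALS path before the first model call. This helper is a sketch of the documented requirements, not part of launchpad:

```python
import os


def check_vertex_env(env: dict[str, str]) -> tuple[str, str]:
    """Validate the service-account path and apply the documented region default."""
    creds_path = env["GOOGLE_APPLICATION_CREDENTIALS"]
    if not os.path.isabs(creds_path):
        raise ValueError("GOOGLE_APPLICATION_CREDENTIALS must be an absolute path")
    # Region falls back to europe-west1 when the variable is unset.
    location = env.get("GOOGLE_VERTEX_AI_LOCATION", "europe-west1")
    return creds_path, location
```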

Mistral — ModelProvider.MISTRAL

Uses MistralModel. pydantic-ai reads the API key from the Mistral SDK’s default environment variable.
| Env var | Purpose |
| --- | --- |
| MISTRAL_API_KEY | Mistral API key. |

AgentConfig(model_provider=ModelProvider.MISTRAL, model_name="mistral-small-2506")

AWS Bedrock — ModelProvider.BEDROCK

Creates a boto3 bedrock-runtime client and passes it to BedrockConverseModel.
| Env var | Purpose |
| --- | --- |
| BEDROCK_AWS_ACCESS_KEY_ID | AWS access key. |
| BEDROCK_AWS_SECRET_ACCESS_KEY | AWS secret. |
| BEDROCK_AWS_REGION | AWS region hosting the model. |

AgentConfig(
    model_provider=ModelProvider.BEDROCK,
    model_name="anthropic.claude-sonnet-4-6-v1:0",
)
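The BEDROCK_-prefixed variables map onto standard `boto3.client()` keyword arguments. A sketch of that mapping, with the variable names taken from the table above (the helper itself is assumed, not launchpad's code):

```python
def bedrock_client_kwargs(env: dict[str, str]) -> dict[str, str]:
    """Translate the BEDROCK_AWS_* variables into boto3.client(...) keyword args."""
    return {
        "service_name": "bedrock-runtime",  # the Bedrock inference API
        "aws_access_key_id": env["BEDROCK_AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": env["BEDROCK_AWS_SECRET_ACCESS_KEY"],
        "region_name": env["BEDROCK_AWS_REGION"],
    }
```

The prefix keeps these credentials separate from any default AWS_* variables already in the environment.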

Ollama — ModelProvider.OLLAMA

Uses OpenAIChatModel with the pydantic-ai OllamaProvider. Ideal for local development against an ollama serve instance.
| Env var | Purpose |
| --- | --- |
| OLLAMA_BASE_URL | Full base URL, e.g. http://localhost:11434/v1. Required; the node raises KeyError otherwise. |

AgentConfig(model_provider=ModelProvider.OLLAMA, model_name="llama3.2")
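Because the node reads OLLAMA_BASE_URL with a plain dict-style lookup, a missing variable surfaces as a KeyError rather than a friendlier error. A minimal reproduction of that documented behavior:

```python
import os


def ollama_base_url() -> str:
    """Mirrors the documented behavior: required, no default, raises KeyError."""
    return os.environ["OLLAMA_BASE_URL"]


os.environ.pop("OLLAMA_BASE_URL", None)
missing = False
try:
    ollama_base_url()
except KeyError:
    missing = True  # reached when the variable is unset
```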

Other AgentConfig knobs

AgentConfig forwards common pydantic-ai fields so most tuning happens in one place:
  • instructions — static system prompt (can be augmented with @self.agent.instructions for per-run context).
  • output_type — return a plain str or a BaseModel subclass for structured output.
  • deps_type — a Pydantic model containing dependencies exposed via RunContext inside tools and instruction callbacks.
  • tools, builtin_tools — pydantic-ai tool definitions.
  • model_settings — a ModelSettings object to override temperature, max tokens, etc.
  • retries, output_retries — retry behavior on model errors and validation failures.
  • instrument — defaults to True; set to False to opt a node out of Langfuse instrumentation even when the workflow has tracing enabled.
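The knobs above come from pydantic-ai, so their real types live there; the sketch below uses plain dataclasses as stand-ins (TicketSummary and AgentConfigSketch are invented names) just to show how the fields combine on one config object:

```python
from dataclasses import dataclass, field


@dataclass
class TicketSummary:
    """Stand-in for a structured output model (output_type also accepts BaseModel subclasses)."""
    title: str
    priority: str


@dataclass
class AgentConfigSketch:
    """Illustrative subset of the AgentConfig knobs listed above."""
    instructions: str = "Summarize the support ticket."   # static system prompt
    output_type: type = TicketSummary                      # structured output instead of plain str
    model_settings: dict = field(default_factory=lambda: {"temperature": 0.2})
    retries: int = 1          # retry on model errors
    output_retries: int = 2   # retry on output validation failures
    instrument: bool = True   # set False to opt this node out of Langfuse tracing


cfg = AgentConfigSketch()
```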