Documentation Index
Fetch the complete documentation index at: https://launchpad.datalumina.com/llms.txt
Use this file to discover all available pages before exploring further.
AgentNode picks a model backend via the ModelProvider enum in app/launchpad/core/nodes/agent.py. Switching providers is a one-line change in the node’s get_agent_config(); credentials come from environment variables loaded by python-dotenv at import time.
```python
from launchpad.core.nodes.agent import AgentConfig, AgentNode, ModelProvider


class MyNode(AgentNode):
    def get_agent_config(self) -> AgentConfig:
        return AgentConfig(
            model_provider=ModelProvider.OPENAI,
            model_name="gpt-5.4-mini",
            output_type=self.OutputType,
        )
```
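Because python-dotenv loads the environment at import time, provider credentials can live in a project-level .env file. A minimal sketch with placeholder values (set only the keys your nodes actually use):

```
OPENAI_API_KEY=replace-me
ANTHROPIC_API_KEY=replace-me
```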
All providers are reached through pydantic-ai. AgentConfig.instrument=True (the default) wires each call into Langfuse when enable_tracing=True on the workflow.
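The combination of the two flags can be summarized in a hypothetical sketch (the helper name is illustrative, not Launchpad's actual code): a call is instrumented only when the workflow sets enable_tracing=True and the node's AgentConfig keeps instrument=True.

```python
# Hypothetical sketch of the rule above: tracing requires both the
# workflow-level switch and the node-level instrument flag (default True).
def is_traced(enable_tracing: bool, instrument: bool = True) -> bool:
    return enable_tracing and instrument

print(is_traced(True))         # → True  (default instrument)
print(is_traced(True, False))  # → False (node opted out)
print(is_traced(False))        # → False (workflow tracing off)
```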
Supported providers
OpenAI — ModelProvider.OPENAI
Uses OpenAIResponsesModel. Good default for new workflows.
| Env var | Purpose |
|---|---|
| OPENAI_API_KEY | Standard OpenAI API key. |
```python
AgentConfig(model_provider=ModelProvider.OPENAI, model_name="gpt-5.4-mini")
```
Azure OpenAI — ModelProvider.AZURE_OPENAI
Routes through an AsyncAzureOpenAI client and the pydantic-ai AzureProvider.
| Env var | Purpose |
|---|---|
| AZURE_OPENAI_ENDPOINT | Azure resource endpoint. |
| AZURE_OPENAI_API_KEY | Resource API key. |
| AZURE_OPENAI_API_VERSION | API version (defaults to 2025-03-01-preview). |
model_name falls back to gpt-5-mini when left blank.
```python
AgentConfig(model_provider=ModelProvider.AZURE_OPENAI, model_name="gpt-5-mini")
```
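The two Azure defaults can be sketched with stdlib-only helpers (the function names are illustrative, not Launchpad's actual code): a blank model_name resolves to gpt-5-mini, and the API version falls back to 2025-03-01-preview when the env var is unset.

```python
import os

# Hypothetical helpers mirroring the defaults described above.
def resolve_azure_model(model_name: str) -> str:
    # Blank or empty model name falls back to the documented default.
    return model_name or "gpt-5-mini"

def resolve_api_version() -> str:
    # Env var wins; otherwise the documented default API version.
    return os.environ.get("AZURE_OPENAI_API_VERSION", "2025-03-01-preview")

print(resolve_azure_model(""))       # → gpt-5-mini
print(resolve_azure_model("gpt-5"))  # → gpt-5
```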
Anthropic — ModelProvider.ANTHROPIC
| Env var | Purpose |
|---|---|
| ANTHROPIC_API_KEY | Anthropic API key. |
```python
AgentConfig(model_provider=ModelProvider.ANTHROPIC, model_name="claude-sonnet-4-6")
```
Google Gemini — ModelProvider.GOOGLE_GEMINI
Uses GoogleModel with the standard GoogleProvider (API-key auth).
| Env var | Purpose |
|---|---|
| GOOGLE_API_KEY | Gemini API key. |
```python
AgentConfig(model_provider=ModelProvider.GOOGLE_GEMINI, model_name="gemini-2.5-pro")
```
Google Vertex AI — ModelProvider.GOOGLE_VERTEX_AI
Uses a service account to authenticate against Vertex AI.
| Env var | Purpose |
|---|---|
| GOOGLE_APPLICATION_CREDENTIALS | Absolute path to a service account JSON file. |
| GOOGLE_VERTEX_AI_LOCATION | Region (defaults to europe-west1). |
```python
AgentConfig(model_provider=ModelProvider.GOOGLE_VERTEX_AI, model_name="gemini-2.5-pro")
```
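The region default can be sketched with a stdlib-only helper (the function name is illustrative, not Launchpad's actual code): when GOOGLE_VERTEX_AI_LOCATION is unset, europe-west1 is used.

```python
import os

# Hypothetical helper mirroring the default noted in the table above.
def resolve_vertex_location() -> str:
    return os.environ.get("GOOGLE_VERTEX_AI_LOCATION", "europe-west1")

os.environ.pop("GOOGLE_VERTEX_AI_LOCATION", None)  # ensure unset for the demo
print(resolve_vertex_location())  # → europe-west1
```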
Mistral — ModelProvider.MISTRAL
Uses MistralModel. pydantic-ai reads the API key from the Mistral SDK’s default environment variable.
| Env var | Purpose |
|---|---|
| MISTRAL_API_KEY | Mistral API key. |
```python
AgentConfig(model_provider=ModelProvider.MISTRAL, model_name="mistral-small-2506")
```
AWS Bedrock — ModelProvider.BEDROCK
Creates a boto3 bedrock-runtime client and passes it to BedrockConverseModel.
| Env var | Purpose |
|---|---|
| BEDROCK_AWS_ACCESS_KEY_ID | AWS access key. |
| BEDROCK_AWS_SECRET_ACCESS_KEY | AWS secret. |
| BEDROCK_AWS_REGION | AWS region hosting the model. |
```python
AgentConfig(
    model_provider=ModelProvider.BEDROCK,
    model_name="anthropic.claude-sonnet-4-6-v1:0",
)
```
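How the BEDROCK_* variables might map onto a boto3 client can be sketched as follows (the helper and the placeholder values are illustrative, not Launchpad's actual code; the real node passes the resulting bedrock-runtime client to BedrockConverseModel):

```python
import os

# Hypothetical sketch: assemble boto3.client("bedrock-runtime", **kwargs)
# keyword arguments from the env vars listed in the table above.
def bedrock_client_kwargs() -> dict:
    return {
        "aws_access_key_id": os.environ["BEDROCK_AWS_ACCESS_KEY_ID"],
        "aws_secret_access_key": os.environ["BEDROCK_AWS_SECRET_ACCESS_KEY"],
        "region_name": os.environ["BEDROCK_AWS_REGION"],
    }

os.environ.update({
    "BEDROCK_AWS_ACCESS_KEY_ID": "AKIA-example",
    "BEDROCK_AWS_SECRET_ACCESS_KEY": "secret-example",
    "BEDROCK_AWS_REGION": "eu-west-1",
})
print(bedrock_client_kwargs()["region_name"])  # → eu-west-1
```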
Ollama — ModelProvider.OLLAMA
Uses OpenAIChatModel with the pydantic-ai OllamaProvider. Ideal for local development against an ollama serve instance.
| Env var | Purpose |
|---|---|
| OLLAMA_BASE_URL | Full base URL, e.g. http://localhost:11434/v1. Required; the node raises KeyError otherwise. |
```python
AgentConfig(model_provider=ModelProvider.OLLAMA, model_name="llama3.2")
```
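The "required env var" behavior can be sketched with a stdlib-only helper (the function name is illustrative, not Launchpad's actual code): an indexed os.environ lookup surfaces a missing OLLAMA_BASE_URL as KeyError, as described above.

```python
import os

# Hypothetical helper mirroring the lookup described above.
def resolve_ollama_base_url() -> str:
    return os.environ["OLLAMA_BASE_URL"]  # raises KeyError if unset

os.environ["OLLAMA_BASE_URL"] = "http://localhost:11434/v1"
print(resolve_ollama_base_url())  # → http://localhost:11434/v1
```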
Other AgentConfig knobs
AgentConfig forwards common pydantic-ai fields so most tuning happens in one place:
- instructions — static system prompt (can be augmented with @self.agent.instructions for per-run context).
- output_type — return a plain str or a BaseModel subclass for structured output.
- deps_type — a Pydantic model containing dependencies exposed via RunContext inside tools and instruction callbacks.
- tools, builtin_tools — pydantic-ai tool definitions.
- model_settings — a ModelSettings object to override temperature, max tokens, etc.
- retries, output_retries — retry behavior on model errors and validation failures.
- instrument — defaults to True; set to False to opt a node out of Langfuse instrumentation even when the workflow has tracing enabled.