Quickstart: Your First Agent in 5 Minutes¶
This guide takes you from zero to a working AI agent, step by step. No API keys needed for the first two steps.
Step 1: Install¶
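Assuming the package is published on PyPI under the same name as its import:

```shell
pip install selectools
```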
Step 2: Build Your First Agent (No API Key Needed)¶
Create a file called my_agent.py:
```python
from selectools import Agent, AgentConfig, tool
from selectools.providers.stubs import LocalProvider

@tool(description="Look up the price of a product")
def get_price(product: str) -> str:
    prices = {"laptop": "$999", "phone": "$699", "headphones": "$149"}
    return prices.get(product.lower(), f"No price found for {product}")

@tool(description="Check if a product is in stock")
def check_stock(product: str) -> str:
    stock = {"laptop": "In stock (5 left)", "phone": "Out of stock", "headphones": "In stock (20 left)"}
    return stock.get(product.lower(), f"Unknown product: {product}")

agent = Agent(
    tools=[get_price, check_stock],
    provider=LocalProvider(),
    config=AgentConfig(max_iterations=3),
)

result = agent.ask("What is the price of a laptop?")
print(result.content)
```
Run it:
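From the directory containing `my_agent.py`:

```shell
python my_agent.py
```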
What just happened:

- You defined two tools with the `@tool` decorator — Selectools auto-generates JSON schemas from your type hints
- You created an agent with `LocalProvider` (a built-in stub that works offline)
- You asked a question with `agent.ask()` and the agent decided which tool to call

`LocalProvider` is a testing stub that echoes tool results. It is great for learning the API and running tests, but it does not actually call an LLM. Step 3 shows you how to connect to a real model.
Step 3: Connect to a Real LLM¶
Add your provider's API key to a .env file in your project root and swap the provider:
```python
from selectools import Agent, AgentConfig, OpenAIProvider, tool
from selectools.models import OpenAI

@tool(description="Look up the price of a product")
def get_price(product: str) -> str:
    prices = {"laptop": "$999", "phone": "$699", "headphones": "$149"}
    return prices.get(product.lower(), f"No price found for {product}")

@tool(description="Check if a product is in stock")
def check_stock(product: str) -> str:
    stock = {"laptop": "In stock (5 left)", "phone": "Out of stock", "headphones": "In stock (20 left)"}
    return stock.get(product.lower(), f"Unknown product: {product}")

agent = Agent(
    tools=[get_price, check_stock],
    provider=OpenAIProvider(default_model=OpenAI.GPT_4O_MINI.id),
    config=AgentConfig(max_iterations=5),
)

result = agent.ask("Is the phone in stock? And how much are headphones?")
print(result.content)
print(f"\nCost: ${agent.total_cost:.6f} | Tokens: {agent.total_tokens}")
```
Apart from the imports, the main change is the `provider=` argument. Your tools stay identical.
Other providers work the same way:
```python
from selectools import AnthropicProvider, GeminiProvider, OllamaProvider

# Anthropic Claude
agent = Agent(tools=[...], provider=AnthropicProvider())

# Google Gemini (free tier available)
agent = Agent(tools=[...], provider=GeminiProvider())

# Ollama (fully local, fully free)
agent = Agent(tools=[...], provider=OllamaProvider())
```
Step 4: Add Conversation Memory¶
Make the agent remember previous turns:
```python
from selectools import Agent, AgentConfig, ConversationMemory, OpenAIProvider, tool

@tool(description="Save a note for the user")
def save_note(text: str) -> str:
    return f"Saved note: {text}"

memory = ConversationMemory(max_messages=20)

agent = Agent(
    tools=[save_note],
    provider=OpenAIProvider(),
    config=AgentConfig(max_iterations=3),
    memory=memory,
)

agent.ask("My name is Alice and I work at Acme Corp")
result = agent.ask("What company do I work at?")
print(result.content)  # Remembers "Acme Corp" from the previous turn
```
Step 5: Add Document Search (RAG)¶
Give the agent a knowledge base to search:
```python
from selectools import OpenAIProvider
from selectools.embeddings import OpenAIEmbeddingProvider
from selectools.models import OpenAI
from selectools.rag import Document, RAGAgent, VectorStore

# Create an embedding provider and vector store
embedder = OpenAIEmbeddingProvider(model=OpenAI.Embeddings.TEXT_EMBEDDING_3_SMALL.id)
store = VectorStore.create("memory", embedder=embedder)

# Load your documents
docs = [
    Document(text="Our return policy allows returns within 30 days of purchase.", metadata={"source": "policy.txt"}),
    Document(text="Shipping takes 3-5 business days for domestic orders.", metadata={"source": "shipping.txt"}),
    Document(text="Premium members get free expedited shipping.", metadata={"source": "membership.txt"}),
]

# Create the agent — chunking, embedding, and tool setup happen automatically
agent = RAGAgent.from_documents(
    documents=docs,
    provider=OpenAIProvider(default_model=OpenAI.GPT_4O_MINI.id),
    vector_store=store,
)

result = agent.ask("How long does shipping take for premium members?")
print(result.content)
```
Step 6: Get Structured Output¶
Get typed, validated results from the LLM:
```python
from typing import Literal

from pydantic import BaseModel

class Classification(BaseModel):
    intent: Literal["billing", "support", "sales"]
    confidence: float

result = agent.ask("I need help with my bill", response_format=Classification)
print(result.parsed)            # Classification(intent="billing", confidence=0.95)
print(result.trace.timeline())  # See what the agent did
print(result.reasoning)         # Why it chose that classification
```
Step 7: Provider Fallback¶
Wrap multiple providers in a priority chain. If the primary fails, the next one is tried automatically:
```python
from selectools import Agent, AgentConfig, FallbackProvider, OpenAIProvider, AnthropicProvider
from selectools.providers.stubs import LocalProvider

provider = FallbackProvider(
    providers=[
        OpenAIProvider(),      # Try OpenAI first
        AnthropicProvider(),   # Fall back to Anthropic
        LocalProvider(),       # Last resort (offline)
    ],
    max_failures=3,        # Skip after 3 consecutive failures
    cooldown_seconds=60,   # Skip for 60 seconds
    on_fallback=lambda name, err: print(f"Skipping {name}: {err}"),
)

agent = Agent(tools=[...], provider=provider, config=AgentConfig(max_iterations=5))
result = agent.ask("Hello!")
```
The built-in circuit breaker avoids wasting time on providers that are consistently down.
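The skip-and-cooldown mechanic behind that circuit breaker can be sketched in a few lines of plain Python. This is an illustration of the idea only, not Selectools' actual implementation:

```python
import time

class CircuitBreaker:
    """Sketch of the skip-after-N-failures mechanic (illustrative, not the real code)."""

    def __init__(self, max_failures=3, cooldown_seconds=60):
        self.max_failures = max_failures
        self.cooldown_seconds = cooldown_seconds
        self.failures = 0
        self.opened_at = None

    def available(self):
        # While the breaker is open, skip this provider until the cooldown elapses.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown_seconds:
                return False
            self.opened_at = None  # cooldown over: give the provider another chance
            self.failures = 0
        return True

    def record_failure(self):
        self.failures += 1
        if self.failures >= self.max_failures:
            self.opened_at = time.monotonic()

    def record_success(self):
        self.failures = 0
```

A fallback chain simply asks `available()` before trying each provider and records the outcome afterward.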
Step 8: Tool Policy¶
Control which tools can run with declarative rules and human-in-the-loop approval:
```python
import os

from selectools import Agent, AgentConfig, tool
from selectools.policy import ToolPolicy

@tool(description="Read a file")
def read_file(path: str) -> str:
    return open(path).read()

@tool(description="Delete a file")
def delete_file(path: str) -> str:
    os.remove(path)
    return f"Deleted {path}"

policy = ToolPolicy(
    allow=["read_*"],    # Always allowed
    review=["send_*"],   # Needs human approval
    deny=["delete_*"],   # Always blocked
)

def approve(tool_name, tool_args, reason):
    return input(f"Allow {tool_name}({tool_args})? [y/n] ") == "y"

agent = Agent(
    tools=[read_file, delete_file],
    provider=provider,
    config=AgentConfig(
        tool_policy=policy,
        confirm_action=approve,
        approval_timeout=30,
    ),
)
```
Step 9: Monitor with AgentObserver¶
For production observability, use AgentObserver — a class-based protocol with 45 lifecycle events. Every callback gets a run_id for cross-request correlation. For simpler integrations, use SimpleStepObserver which routes all events to a single callback:
```python
from selectools import Agent, AgentConfig
from selectools.observer import AgentObserver, LoggingObserver

class MyObserver(AgentObserver):
    def on_run_start(self, run_id, messages, system_prompt):
        print(f"[{run_id[:8]}] Starting with {len(messages)} messages")

    def on_tool_end(self, run_id, call_id, tool_name, result, duration_ms):
        print(f"[{run_id[:8]}] {tool_name} took {duration_ms:.0f}ms")

    def on_run_end(self, run_id, result):
        print(f"[{run_id[:8]}] Done — {result.usage.total_tokens} tokens")

agent = Agent(
    tools=[...],
    provider=provider,
    config=AgentConfig(
        observers=[MyObserver(), LoggingObserver()],
    ),
)

result = agent.ask("Hello!")

# Export execution trace as OpenTelemetry spans
otel_spans = result.trace.to_otel_spans()
```
LoggingObserver emits structured JSON to Python's logging module — plug it into Datadog, ELK, or any log aggregator.
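To see what "structured JSON to Python's logging module" looks like downstream, here is a stdlib-only sketch of a JSON log formatter. The field names are illustrative, not LoggingObserver's actual schema:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each record as one JSON object per line, ready for a log aggregator."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

# Route a logger's output through the JSON formatter
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logging.getLogger("selectools").addHandler(handler)
```

Datadog, ELK, and similar systems ingest one-JSON-object-per-line output like this directly.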
Step 10: Add Guardrails¶
Validate inputs and outputs with a pluggable guardrail pipeline:
```python
from selectools import Agent, AgentConfig
from selectools.guardrails import GuardrailError, GuardrailsPipeline, TopicGuardrail, PIIGuardrail, GuardrailAction

guardrails = GuardrailsPipeline(
    input=[
        TopicGuardrail(deny=["politics", "religion"]),
        PIIGuardrail(action=GuardrailAction.REWRITE),  # redact PII
    ],
)

agent = Agent(
    tools=[...],
    provider=provider,
    config=AgentConfig(guardrails=guardrails),
)

# PII is automatically redacted before the LLM sees it
result = agent.ask("Look up customer user@example.com")

# Blocked topics raise GuardrailError
try:
    agent.ask("Tell me about politics")
except GuardrailError as e:
    print(f"Blocked: {e.reason}")
```
Five built-in guardrails: TopicGuardrail, PIIGuardrail, ToxicityGuardrail, FormatGuardrail, LengthGuardrail. Or subclass Guardrail to write your own.
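At its core, a guardrail check is just a function over message text. As a stdlib-only illustration of the kind of rewrite a PII redactor performs (this is not the actual PIIGuardrail or the Guardrail subclassing API, which the Guardrails Guide covers):

```python
import re

# Simplified email pattern for illustration; real PII detection is broader
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text: str) -> str:
    """Replace email addresses with a placeholder before the LLM sees them."""
    return EMAIL.sub("[REDACTED_EMAIL]", text)
```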
Step 11: Audit Logging & Security¶
Add a JSONL audit trail and prompt injection defence:
```python
from selectools import Agent, AgentConfig, tool
from selectools.audit import AuditLogger, PrivacyLevel

audit = AuditLogger(
    log_dir="./audit",
    privacy=PrivacyLevel.KEYS_ONLY,  # redact argument values
)

@tool(description="Fetch web page", screen_output=True)  # screen for injection
def fetch_page(url: str) -> str:
    import requests
    return requests.get(url).text

agent = Agent(
    tools=[fetch_page],
    provider=provider,
    config=AgentConfig(
        observers=[audit],           # JSONL audit log
        screen_tool_output=True,     # prompt injection screening
        coherence_check=True,        # verify tool calls match intent
        coherence_model="gpt-4o-mini",
    ),
)
```
Step 12: Persistent Sessions¶
Save conversation state across agent restarts:
```python
from selectools import Agent, AgentConfig, ConversationMemory, tool
from selectools.sessions import JsonFileSessionStore

@tool(description="Save a note")
def save_note(text: str) -> str:
    return f"Saved: {text}"

store = JsonFileSessionStore(directory="./sessions", default_ttl=3600)

# First run — starts fresh, auto-saves on completion
agent = Agent(
    tools=[save_note],
    provider=provider,
    config=AgentConfig(session_store=store, session_id="user-123"),
    memory=ConversationMemory(max_messages=50),
)
agent.ask("Remember that my favorite color is blue")

# Second run — auto-loads previous session
agent2 = Agent(
    tools=[save_note],
    provider=provider,
    config=AgentConfig(session_store=store, session_id="user-123"),
)
result = agent2.ask("What is my favorite color?")
# Agent remembers the previous conversation
```
Three backends available: JsonFileSessionStore, SQLiteSessionStore, RedisSessionStore. All support TTL-based expiry.
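The TTL mechanic those backends share can be sketched with one JSON file per session. The class and method names below are hypothetical, for illustration only, not the session-store API:

```python
import json
import time
from pathlib import Path

class TinySessionStore:
    """Sketch of TTL-based session expiry (illustrative, not the real backend)."""

    def __init__(self, directory, default_ttl=3600):
        self.dir = Path(directory)
        self.dir.mkdir(parents=True, exist_ok=True)
        self.default_ttl = default_ttl

    def save(self, session_id, messages):
        # Stamp each session with an absolute expiry time
        payload = {"expires_at": time.time() + self.default_ttl, "messages": messages}
        (self.dir / f"{session_id}.json").write_text(json.dumps(payload))

    def load(self, session_id):
        path = self.dir / f"{session_id}.json"
        if not path.exists():
            return None
        payload = json.loads(path.read_text())
        if time.time() > payload["expires_at"]:
            path.unlink()  # expired: drop the session
            return None
        return payload["messages"]
```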
Step 13: Entity Memory¶
Track named entities across conversation turns:
```python
from selectools import Agent, AgentConfig
from selectools.entity_memory import EntityMemory

entity_mem = EntityMemory(provider=provider, max_entities=50)

agent = Agent(
    tools=[...],
    provider=provider,
    config=AgentConfig(entity_memory=entity_mem),
)

agent.ask("I'm working with Alice from Acme Corp on Project Alpha")
# Agent now tracks: Alice (person), Acme Corp (organization), Project Alpha (project)
# Entities are injected as [Known Entities] context in subsequent turns
```
Step 14: Knowledge Graph¶
Extract and query relationship triples:
```python
from selectools import Agent, AgentConfig
from selectools.knowledge_graph import KnowledgeGraphMemory

kg = KnowledgeGraphMemory(provider=provider, storage="memory")

agent = Agent(
    tools=[...],
    provider=provider,
    config=AgentConfig(knowledge_graph=kg),
)

agent.ask("Alice manages Project Alpha and reports to Bob")
# Graph stores: (Alice, manages, Project Alpha), (Alice, reports_to, Bob)
# Query-relevant triples are injected as [Known Relationships] context
```
Use SQLiteTripleStore for persistent storage across sessions.
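The underlying idea of a persistent triple store is small enough to sketch with sqlite3. The class below is illustrative only, not Selectools' SQLiteTripleStore:

```python
import sqlite3

class TinyTripleStore:
    """Sketch of persistent (subject, predicate, object) storage (illustrative)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS triples (s TEXT, p TEXT, o TEXT)")

    def add(self, s, p, o):
        self.db.execute("INSERT INTO triples VALUES (?, ?, ?)", (s, p, o))
        self.db.commit()

    def about(self, entity):
        # Return every triple in which the entity appears, as subject or object
        rows = self.db.execute(
            "SELECT s, p, o FROM triples WHERE s = ? OR o = ?", (entity, entity)
        )
        return rows.fetchall()
```

Pointing `path` at a file instead of `:memory:` is what makes the graph survive across sessions.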
Step 15: Cross-Session Knowledge¶
Give the agent durable memory across conversations:
```python
from selectools import Agent, AgentConfig
from selectools.knowledge import KnowledgeMemory

knowledge = KnowledgeMemory(directory="./memory", recent_days=2)

agent = Agent(
    tools=[...],
    provider=provider,
    config=AgentConfig(knowledge_memory=knowledge),
)

# The agent gets a `remember` tool automatically
agent.ask("Remember that I prefer dark mode")
# Stored in memory/MEMORY.md as a persistent fact
# Future conversations inject [Long-term Memory] + [Recent Memory] context
```
Step 16: Terminal Tools¶
Some tools should stop the agent loop after execution -- no further LLM call:
```python
from selectools import AgentConfig, tool

@tool(terminal=True)
def present_question(question_id: int) -> str:
    """Present a question and wait for the user's answer."""
    return f"Question {question_id} presented"

# Or use a dynamic condition:
config = AgentConfig(
    stop_condition=lambda tool_name, result: "present" in tool_name,
)
```
When a terminal tool fires, AgentResult.content contains the tool's return value.
Step 17: Multi-Agent Graph¶
```python
from selectools import Agent, AgentConfig, AgentGraph, tool
from selectools.providers.stubs import LocalProvider

@tool()
def plan_task(task: str) -> str:
    """Break a task into steps."""
    return f"Plan for '{task}': 1) Research, 2) Draft, 3) Review"

@tool()
def write_draft(outline: str) -> str:
    """Write a draft from an outline."""
    return f"Draft based on: {outline}"

planner = Agent(tools=[plan_task], provider=LocalProvider(), config=AgentConfig(max_iterations=2))
writer = Agent(tools=[write_draft], provider=LocalProvider(), config=AgentConfig(max_iterations=2))

graph = AgentGraph()
graph.add_node("planner", planner)
graph.add_node("writer", writer)
graph.add_edge("planner", "writer")
graph.add_edge("writer", AgentGraph.END)
graph.set_entry("planner")

result = graph.run("Write a blog post about Python")
print(result.content)
print(f"Nodes executed: {list(result.node_results.keys())}")
```
Multi-agent graphs let you compose agents into pipelines. Each node can be an Agent, an async function, or a sync callable. The graph handles routing, state passing, and trace aggregation.
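The routing loop itself is simple. A plain-Python sketch of the idea, with static edges and hypothetical names, not the real AgentGraph:

```python
END = "__end__"

def run_graph(nodes, edges, entry, state):
    """Sketch of graph routing: call each node, follow its edge until END."""
    current = entry
    visited = []
    while current != END:
        visited.append(current)
        state = nodes[current](state)   # a node is any callable over the state
        current = edges[current]        # static edge: name of the next node
    return state, visited

# Two callables stand in for the planner and writer agents
nodes = {"planner": lambda s: s + " -> planned", "writer": lambda s: s + " -> drafted"}
edges = {"planner": "writer", "writer": END}
```

The real graph adds conditional routing, async nodes, and trace aggregation on top of this loop.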
Step 18: Supervisor Agent¶
```python
from selectools import SupervisorAgent
from selectools.providers.stubs import LocalProvider

# planner and writer are the agents defined in Step 17
supervisor = SupervisorAgent(
    agents={"planner": planner, "writer": writer},
    provider=LocalProvider(),
    strategy="round_robin",
    max_rounds=2,
)

result = supervisor.run("Write a blog post about AI agents")
print(result.content)
print(f"Steps taken: {result.steps}")
```
SupervisorAgent wraps AgentGraph with automatic coordination. Four strategies: plan_and_execute, round_robin, dynamic, magentic. See Orchestration and Supervisor for full details.
What's Next?¶
You now know the core API. Here is where to go from here:
| Goal | Read |
|---|---|
| Define more complex tools | Tools Guide |
| Get typed LLM responses | Agent Guide — Structured Output |
| See what the agent did | Agent Guide — Execution Traces |
| Switch between providers | Providers Guide |
| Auto-failover between providers | Providers Guide — Fallback |
| Classify multiple requests at once | Agent Guide — Batch Processing |
| Control which tools can run | Agent Guide — Tool Policy |
| Monitor with AgentObserver | Agent Guide — Observer Protocol |
| Export traces to OpenTelemetry | Agent Guide — OTel Export |
| Stream responses in real time | Streaming Guide |
| Use hybrid search (keyword + semantic) | Hybrid Search Guide |
| Load tools from plugin files | Dynamic Tools Guide |
| Cache LLM responses to save money | Agent Guide — Caching |
| Browse 152 models with pricing | Models Guide |
| Track costs and token usage | Usage Guide |
| Understand the full architecture | Architecture |
| Add input/output guardrails | Guardrails Guide |
| Add audit logging | Audit Guide |
| Screen tool outputs for injection | Security Guide |
| Enable coherence checking | Security Guide — Coherence |
| Use 24 pre-built tools | Toolbox Guide |
| Handle errors gracefully | Exceptions Guide |
| Look up model pricing at runtime | Models Guide — Pricing API |
| Use structured output helpers | Agent Guide — Structured Helpers |
| Persist sessions across restarts | Sessions Guide |
| Track entities across turns | Entity Memory Guide |
| Build a knowledge graph | Knowledge Graph Guide |
| Add cross-session memory | Knowledge Memory Guide |
| See working examples | examples/ (61 numbered scripts, 01–61) |
The API in one table:
| You want to... | Code |
|---|---|
| Ask a question (simple) | agent.ask("What is X?") |
| Get typed results | agent.ask("...", response_format=MyModel) |
| Send structured messages | agent.run([Message(role=Role.USER, content="...")]) |
| Ask asynchronously | await agent.aask("What is X?") |
| Stream tokens | async for chunk in agent.astream("What is X?"): ... |
| Classify a batch | agent.batch(["msg1", "msg2"], max_concurrency=5) |
| Check cost | agent.total_cost, agent.get_usage_summary() |
| See execution trace | result.trace.timeline() |
| See reasoning | result.reasoning |
| Export to OTel | result.trace.to_otel_spans() |
| Add an observer | AgentConfig(observers=[MyObserver()]) |
| Set tool policy | AgentConfig(tool_policy=ToolPolicy(allow=["read_*"])) |
| Add guardrails | AgentConfig(guardrails=GuardrailsPipeline(input=[...])) |
| Add audit logging | AgentConfig(observers=[AuditLogger(log_dir="./audit")]) |
| Screen tool output | @tool(screen_output=True) or AgentConfig(screen_tool_output=True) |
| Check coherence | AgentConfig(coherence_check=True, coherence_model="gpt-4o-mini") |
| Reset state | agent.reset() |
| Add a tool at runtime | agent.add_tool(my_tool) |
| Remove a tool | agent.remove_tool("tool_name") |