# Creating Agents

Learn how to build Strands Agents with custom tools and capabilities.
## Agent Structure
Every Strands Agent requires a `create_agent()` function that returns:

- `agent` - The Strands Agent instance
- `mcp_clients` - List of MCP client connections (can be empty)
```python
def create_agent():
    from strands import Agent
    from strands.models import BedrockModel

    model = BedrockModel(
        model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
        region_name="us-east-1"
    )

    # Session management is automatic - no special code needed!
    agent = Agent(
        model=model,
        system_prompt="You are a helpful assistant."
    )

    return agent, []
```

## Session Management (Multi-Turn Conversations)
Universal API uses Strands' native session management to persist conversation history across multiple requests. Session management is completely automatic - you don't need to write any special code!
The platform automatically:
- Loads conversation history when continuing an existing conversation
- Saves new messages as they're exchanged
- Persists agent state (key-value storage) across requests
- Manages conversation context (sliding window, summarization, etc.)
### How It Works
After your `create_agent()` function returns, the platform:

1. Sets `agent.agent_id` to the correct agent UUID
2. Sets `agent.session_manager` to an S3SessionManager with user-scoped isolation
3. Calls `agent.stream_async()`, which loads any existing conversation history
You don't need to do anything special - just create a plain Agent() and multi-turn conversations work automatically!
```python
def create_agent():
    from strands import Agent
    from strands.models import BedrockModel

    model = BedrockModel(
        model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
        region_name="us-east-1"
    )

    # No session_manager needed - it's injected automatically!
    agent = Agent(
        model=model,
        system_prompt="You are a helpful assistant."
    )

    return agent, []
```

!!! tip "Zero Configuration"
    Multi-turn conversations just work. The platform handles all session management automatically after your `create_agent()` function returns.
### Conversation Lifecycle
- **New Conversation**: When no `conversationId` is provided, a new conversation is created
- **Continue Conversation**: When a `conversationId` is provided, the session manager loads the existing history
- **Automatic Persistence**: Messages are automatically saved to S3 after each exchange
- **Multi-Tenant Isolation**: Each user's conversations are stored in isolated S3 prefixes
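The lifecycle above boils down to one rule on the request body. The sketch below illustrates it; note that the `build_invoke_payload` helper and the `agentId`/`message` field names are hypothetical illustrations, not the documented API shape (see the API Reference for the real routes and fields):

```python
def build_invoke_payload(agent_id: str, message: str, conversation_id: str = None) -> dict:
    """Assemble a chat request body (field names are illustrative only)."""
    payload = {"agentId": agent_id, "message": message}
    if conversation_id is not None:
        # Including conversationId continues an existing conversation;
        # omitting it makes the platform create a new one.
        payload["conversationId"] = conversation_id
    return payload

# New conversation: no conversationId in the body
new_chat = build_invoke_payload("agent-uuid", "Hello!")

# Continue conversation: the session manager loads the stored history
follow_up = build_invoke_payload("agent-uuid", "What did I just say?", "conv-123")
```

Either way, your `create_agent()` code is identical; only the caller's request decides whether history is loaded.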
### Using Agent State
You can store and retrieve state across conversation turns:
```python
from strands import Agent, tool, ToolContext

@tool(context=True)
def remember_preference(preference: str, tool_context: ToolContext) -> str:
    """Remember a user preference."""
    tool_context.agent.state.set("user_preference", preference)
    return f"I'll remember that you prefer: {preference}"

@tool(context=True)
def recall_preference(tool_context: ToolContext) -> str:
    """Recall the user's preference."""
    pref = tool_context.agent.state.get("user_preference")
    if pref:
        return f"You previously told me you prefer: {pref}"
    return "I don't have any preferences stored yet."

def create_agent():
    from strands import Agent
    from strands.models import BedrockModel

    # Session management is automatic - state persists across turns
    agent = Agent(
        model=BedrockModel(model_id="us.anthropic.claude-sonnet-4-20250514-v1:0"),
        tools=[remember_preference, recall_preference]
    )
    return agent, []
```

## Basic Agent
The simplest agent uses Claude with no custom tools:
```python
def create_agent():
    from strands import Agent
    from strands.models import BedrockModel

    model = BedrockModel(
        model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
        region_name="us-east-1"
    )

    # Session management is automatic - no special code needed!
    agent = Agent(
        model=model,
        system_prompt="You are a helpful AI assistant."
    )

    return agent, []
```

## Agent with Custom Tools
Add custom tools using the @tool decorator:
```python
from strands import Agent, tool
from strands.models import BedrockModel

@tool
def calculate(expression: str) -> str:
    """
    Evaluate a mathematical expression.

    Args:
        expression: A mathematical expression like "2 + 2" or "sqrt(16)"

    Returns:
        The result of the calculation
    """
    import math

    # Safe evaluation of math expressions
    allowed_names = {
        k: v for k, v in math.__dict__.items()
        if not k.startswith("__")
    }
    allowed_names.update({"abs": abs, "round": round})

    try:
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def get_current_time() -> str:
    """Get the current date and time."""
    from datetime import datetime
    return datetime.now().isoformat()

def create_agent():
    model = BedrockModel(
        model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        region_name="us-east-1"
    )

    agent = Agent(
        model=model,
        tools=[calculate, get_current_time],
        system_prompt="You are a helpful assistant with access to calculation and time tools."
    )

    return agent, []
```

## Agent with HTTP Tools
Make HTTP requests using the built-in strands_tools:
```python
from strands import Agent
from strands.models import BedrockModel
from strands_tools import http_request

def create_agent():
    model = BedrockModel(
        model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        region_name="us-east-1"
    )

    agent = Agent(
        model=model,
        tools=[http_request],
        system_prompt="""You are an assistant that can make HTTP requests.
        Use the http_request tool to fetch data from APIs."""
    )

    return agent, []
```

## Agent with AWS Tools
Access AWS services using boto3:
```python
from strands import Agent, tool
from strands.models import BedrockModel
import boto3
import json

@tool
def list_s3_buckets() -> str:
    """List all S3 buckets in the AWS account."""
    s3 = boto3.client('s3')
    response = s3.list_buckets()
    buckets = [b['Name'] for b in response['Buckets']]
    return json.dumps(buckets)

@tool
def get_s3_object(bucket: str, key: str) -> str:
    """
    Get an object from S3.

    Args:
        bucket: The S3 bucket name
        key: The object key (path)

    Returns:
        The object content as a string
    """
    s3 = boto3.client('s3')
    response = s3.get_object(Bucket=bucket, Key=key)
    return response['Body'].read().decode('utf-8')

def create_agent():
    model = BedrockModel(
        model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        region_name="us-east-1"
    )

    agent = Agent(
        model=model,
        tools=[list_s3_buckets, get_s3_object],
        system_prompt="You are an AWS assistant that can interact with S3."
    )

    return agent, []
```

!!! warning "AWS Credentials"
    AWS tools use your AWS credentials stored in Universal API. Make sure you've added your AWS access keys in the API Keys section.
## Available Models
Strands Agents support any model available in AWS Bedrock:
| Model | Model ID |
|---|---|
| Claude Sonnet 4 | `us.anthropic.claude-sonnet-4-20250514-v1:0` |
| Claude 3.7 Sonnet | `us.anthropic.claude-3-7-sonnet-20250219-v1:0` |
| Claude 3.5 Sonnet | `anthropic.claude-3-5-sonnet-20241022-v2:0` |
| Claude 3 Haiku | `anthropic.claude-3-haiku-20240307-v1:0` |
| Llama 3.1 70B | `meta.llama3-1-70b-instruct-v1:0` |
| Llama 3.1 8B | `meta.llama3-1-8b-instruct-v1:0` |
```python
# Using Claude 3.5 Sonnet
model = BedrockModel(
    model_id="anthropic.claude-3-5-sonnet-20241022-v2:0",
    region_name="us-east-1"
)

# Using Llama 3.1
model = BedrockModel(
    model_id="meta.llama3-1-70b-instruct-v1:0",
    region_name="us-east-1"
)
```

## System Prompts
Customize agent behavior with system prompts:
```python
agent = Agent(
    model=model,
    system_prompt="""You are a specialized coding assistant.

Your capabilities:
- Write clean, well-documented code
- Explain complex concepts simply
- Debug and fix issues

Guidelines:
- Always include comments in code
- Suggest best practices
- Ask clarifying questions when needed"""
)
```

## Tool Best Practices
### 1. Clear Docstrings
Tools need clear docstrings for the AI to understand when and how to use them:
```python
@tool
def search_database(query: str, limit: int = 10) -> str:
    """
    Search the database for matching records.

    Use this tool when the user asks to find or search for information
    in the database.

    Args:
        query: The search query string
        limit: Maximum number of results to return (default: 10)

    Returns:
        JSON string containing matching records
    """
    # Implementation
```

### 2. Type Hints
Always include type hints for parameters:
```python
@tool
def process_data(
    data: str,
    format: str = "json",
    validate: bool = True
) -> str:
    """Process and validate data."""
    # Implementation
```

### 3. Error Handling
Return helpful error messages:
```python
@tool
def fetch_url(url: str) -> str:
    """Fetch content from a URL."""
    import urllib.request
    import urllib.error

    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.read().decode('utf-8')
    except urllib.error.URLError as e:
        return f"Error fetching URL: {e.reason}"
    except Exception as e:
        return f"Unexpected error: {str(e)}"
```

### 4. Return Strings
Tools should return strings (the AI can parse JSON if needed):
```python
@tool
def get_user_info(user_id: str) -> str:
    """Get user information."""
    import json
    user = {"id": user_id, "name": "John", "email": "john@example.com"}
    return json.dumps(user, indent=2)
```

## Complete Example
Here's a full-featured agent with multiple tools:
```python
from strands import Agent, tool
from strands.models import BedrockModel
import json
import hashlib
from datetime import datetime
from urllib.request import urlopen
from urllib.parse import urlencode

@tool
def get_current_time() -> str:
    """Get the current date and time in ISO format."""
    return datetime.now().isoformat()

@tool
def calculate(expression: str) -> str:
    """
    Evaluate a mathematical expression safely.

    Args:
        expression: Math expression like "2 + 2", "sqrt(16)", "sin(3.14)"

    Returns:
        The calculated result
    """
    import math
    allowed = {k: v for k, v in math.__dict__.items() if not k.startswith("__")}
    allowed.update({"abs": abs, "round": round, "min": min, "max": max})
    try:
        result = eval(expression, {"__builtins__": {}}, allowed)
        return str(result)
    except Exception as e:
        return f"Error: {e}"

@tool
def hash_text(text: str, algorithm: str = "sha256") -> str:
    """
    Generate a hash of the given text.

    Args:
        text: The text to hash
        algorithm: Hash algorithm (md5, sha1, sha256, sha512)

    Returns:
        The hexadecimal hash string
    """
    algorithms = {
        "md5": hashlib.md5,
        "sha1": hashlib.sha1,
        "sha256": hashlib.sha256,
        "sha512": hashlib.sha512
    }
    if algorithm not in algorithms:
        return f"Error: Unknown algorithm. Use: {', '.join(algorithms.keys())}"
    return algorithms[algorithm](text.encode()).hexdigest()

@tool
def fetch_json(url: str) -> str:
    """
    Fetch JSON data from a URL.

    Args:
        url: The URL to fetch (must return JSON)

    Returns:
        The JSON response as a formatted string
    """
    try:
        with urlopen(url, timeout=10) as response:
            data = json.loads(response.read().decode('utf-8'))
        return json.dumps(data, indent=2)
    except Exception as e:
        return f"Error: {e}"

def create_agent():
    model = BedrockModel(
        model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
        region_name="us-east-1"
    )

    agent = Agent(
        model=model,
        tools=[get_current_time, calculate, hash_text, fetch_json],
        system_prompt="""You are a helpful assistant with access to several tools:

1. **get_current_time** - Get the current date and time
2. **calculate** - Evaluate mathematical expressions
3. **hash_text** - Generate cryptographic hashes
4. **fetch_json** - Fetch JSON data from URLs

Use these tools when appropriate to help users with their requests.
Always explain what you're doing and present results clearly."""
    )

    return agent, []
```

## Accessing User API Keys
When your agent runs on the platform, the invoking user's stored API keys are automatically available. This lets you build agents that use external services (OpenAI, Stripe, etc.) with the user's own credentials.
### Via user_keys Dict

The `user_keys` variable is injected directly into the agent sandbox namespace:
```python
from strands import Agent, tool
from strands.models import BedrockModel

@tool
def call_openai(prompt: str) -> str:
    """Call OpenAI's API using the user's stored key."""
    import json
    from urllib.request import Request, urlopen

    # user_keys is automatically available — no import needed
    api_key = user_keys.get('openai', {}).get('keyValue')
    if not api_key:
        return "Error: No OpenAI key configured. Add it in Credentials → Third-Party Keys."

    req = Request(
        "https://api.openai.com/v1/chat/completions",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json"
        },
        data=json.dumps({
            "model": "gpt-4",
            "messages": [{"role": "user", "content": prompt}]
        }).encode()
    )
    return json.loads(urlopen(req).read())['choices'][0]['message']['content']

def create_agent():
    agent = Agent(
        model=BedrockModel(model_id="us.anthropic.claude-sonnet-4-20250514-v1:0"),
        tools=[call_openai],
        system_prompt="You can call OpenAI using the user's API key."
    )
    return agent, []
```

### Via UAPI_KEYS_JSON Environment Variable
All keys are also available as a JSON-serialized environment variable:
```python
import os, json

keys = json.loads(os.environ.get('UAPI_KEYS_JSON', '{}'))
openai_key = keys.get('openai', {}).get('keyValue')
stripe_key = keys.get('stripe', {}).get('keyValue')
```

### Available Keys
The keys dict is keyed by service name. Common examples:
| Service Name | Key Fields | Description |
|---|---|---|
| `aws` | `keyValue` (access key), `secretKeyValue` (secret key) | AWS credentials |
| `openai` | `keyValue` (API key) | OpenAI API key |
| `google_oauth` | `keyValue` (access token), `secretKeyValue` (refresh token) | Google OAuth |
| `universalapi` | `keyValue` (secret key) | UniversalAPI credentials |
| (custom) | `keyValue` | Any user-defined key |
!!! tip "Keys Are Per-User"
    The keys injected are always from the invoking user, not the agent author. This means you can build a public agent that uses OpenAI, and each user who chats with it will use their own OpenAI key.
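Putting the table together, here is a sketch of turning stored fields into usable credentials. The `load_service_key` helper is our own illustration (it just wraps the `UAPI_KEYS_JSON` parsing shown above), and the boto3 session construction assumes the `keyValue`/`secretKeyValue` mapping from the table:

```python
import json
import os

def load_service_key(service: str) -> dict:
    """Return the stored credential fields for one service, or {} if absent."""
    keys = json.loads(os.environ.get("UAPI_KEYS_JSON", "{}"))
    return keys.get(service, {})

# Example: build a boto3 session from the user's stored AWS credentials,
# using the field names documented in the table above.
aws = load_service_key("aws")
if aws:
    import boto3
    session = boto3.Session(
        aws_access_key_id=aws.get("keyValue"),
        aws_secret_access_key=aws.get("secretKeyValue"),
    )
```

Missing keys come back as `{}`, so your tools can fail gracefully with a helpful message instead of raising.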
## Agent with MCP Server Tools
Connect your agent to any MCP server using MCPClient from the Strands SDK. The agent loads tools from the MCP server automatically via Streamable HTTP transport.
```python
def create_agent():
    import os
    from strands import Agent
    from strands.models import BedrockModel
    from strands.tools.mcp import MCPClient
    from mcp.client.streamable_http import streamablehttp_client

    # Bearer token is auto-injected by the platform as an env var
    bearer_token = os.environ.get("UNIVERSALAPI_BEARER_TOKEN", "")
    server_url = "https://api.universalapi.co/mcp/{your-mcp-server-id}"

    # MCPClient implements ToolProvider — pass it directly to Agent
    mcp_client = MCPClient(
        lambda: streamablehttp_client(
            server_url,
            headers={"Authorization": f"Bearer {bearer_token}"}
        )
    )

    model = BedrockModel(
        model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
        region_name="us-east-1"
    )

    # Agent calls load_tools() on MCPClient internally;
    # session management is injected by the platform automatically
    agent = Agent(
        model=model,
        tools=[mcp_client],
        system_prompt="You are an assistant with MCP server tools."
    )

    return agent, [mcp_client]
```

!!! tip "Pre-Built MCP Server Agents"
    Universal API provides pre-built agents for each built-in MCP server (Echo, UniversalAPI, DocHound, AWS, Google Suite). You can chat with them directly or use them as templates for your own MCP-connected agents.
!!! note "Bearer Token Injection"
    When your agent runs on the platform, the caller's Bearer token is automatically set as the `UNIVERSALAPI_BEARER_TOKEN` environment variable. This allows MCP server agents to authenticate with the platform's MCP endpoints on behalf of the user.
## Deploying Your Agent

### Via API
```bash
curl -X POST "https://api.universalapi.co/agent/create" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $BEARER_TOKEN" \
  -d '{
    "agentName": "my-tooled-agent",
    "description": "An agent with custom tools",
    "sourceCode": "... your code here ..."
  }'
```

### Via Python Script
```python
import requests

BEARER_TOKEN = "uapi_ut_your_token_here"

# Read your agent code from a file
with open("my_agent.py", "r") as f:
    source_code = f.read()

response = requests.post(
    "https://api.universalapi.co/agent/create",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {BEARER_TOKEN}"
    },
    json={
        "agentName": "my-tooled-agent",
        "description": "An agent with custom tools",
        "sourceCode": source_code
    }
)

print(response.json())
```

## Security Considerations
### Allowed Imports
Only these modules can be imported:
- `strands`, `strands.Agent`, `strands.tool`
- `strands.models`, `strands.models.BedrockModel`
- `strands.tools.mcp` - MCPClient for MCP server connections
- `strands_tools` - Built-in tools (http_request, use_aws, calculator, etc.)
- `mcp`, `mcp.client.streamable_http` - MCP protocol client
- `json`, `datetime`, `time`, `uuid`, `typing`, `dataclasses`
- `collections`, `itertools`, `functools`, `re`, `math`, `random`
- `base64`, `hashlib`, `urllib.parse`
- `os` (read-only env vars), `logging`, `threading`
- `boto3`, `botocore`
### Restricted Operations
These are not allowed:
- ❌ File system access (`open`, `os.remove`)
- ❌ Shell execution (`os.system`, `subprocess`)
- ❌ `eval`, `exec`, `compile` (except in controlled tool contexts)
- ❌ Arbitrary network access (use provided tools)
## Next Steps
- Streaming - Learn about streaming responses
- API Reference - Complete endpoint documentation
- Quick Start - Test your first agent