Core Concepts
This guide explains the fundamental concepts behind Universal API and how they work together to provide a powerful, flexible platform for API orchestration.
What is Universal API?
Universal API is an intelligent API orchestration platform that transforms how systems interact with external services. By combining AI-powered orchestration, a crowd-sourced catalog of reusable actions, and dynamic problem-solving capabilities, Universal API lets developers and systems express what they want to accomplish rather than how to accomplish it.
Action Catalog
The Action Catalog is Universal API's primary value driver - a comprehensive, crowd-sourced library of executable functions that perform specific tasks against external APIs and services:
- Pre-built Integration Code: Ready-to-use actions that wrap and simplify external API interactions
- Multi-Runtime Support: Actions in Python, Node.js and other languages
- Dynamic Execution: Action code stored in DynamoDB and executed on-demand
- Community Contribution: Growing ecosystem of user-created actions
- Simplified Interfaces: Complex API operations reduced to simple function calls
- Consistent Error Handling: Standardized error management across disparate services
- Tested & Verified: Actions are validated for reliability and performance
Instead of writing and hosting your own Lambda functions to interact with external APIs, you can leverage our extensive Action Catalog to immediately access hundreds of services through standardized interfaces.
API Catalog
The API Catalog serves as a central repository of knowledge about external APIs:
- External API Documentation: Structured information about third-party API capabilities, endpoints, and requirements
- Health Monitoring: Real-time and historical status of external API endpoints
- Performance Metrics: Response time, error rates, and other key metrics
- Mapping to Actions: Clear relationships between external APIs and the actions that utilize them
- API Discovery: Help users find appropriate APIs for their needs
The API Catalog enables Universal API to provide comprehensive information about external services while the Action Catalog provides the actual implementation code to interact with them.
Strands Agents
Strands Agents are serverless AI agents powered by the Strands Agents SDK. They provide:
- Streaming Responses: Real-time output via API Gateway response streaming
- 15-Minute Timeout: Long-running AI tasks with streaming updates
- Custom Tools: Add your own Python functions as agent tools
- Conversation History: Automatic context management across messages
- AWS Bedrock Integration: Use Claude, Llama, and other models
```python
from strands import Agent, tool
from strands.models import BedrockModel

@tool
def calculate(expression: str) -> str:
    """Evaluate a math expression."""
    return str(eval(expression))  # demo only: eval is unsafe on untrusted input

def create_agent():
    model = BedrockModel(model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0")
    agent = Agent(model=model, tools=[calculate])
    return agent, []
```

See Strands Agents for details.
MCP Servers
MCP (Model Context Protocol) servers allow you to create custom tools that AI agents like Claude can use directly. You can:
- Create serverless tool servers in Node.js
- Register multiple tools with typed input schemas
- Deploy instantly to Universal API's infrastructure
- Connect Claude and other MCP-compatible clients
See MCP Servers for details.
Intelligent Orchestration
The heart of Universal API is the Universal Handler - an agentic orchestrator that:
- Analyzes incoming requests to understand intent
- Creates step-by-step action plans to fulfill that intent
- Maps steps to existing actions from the catalog
- Dynamically executes actions in sequence
- Uses AI (Claude via AWS Bedrock) as a fallback for steps without specific mapped actions
- Returns comprehensive results with context and explanation
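The plan-then-execute loop above can be sketched as follows. The `steps` shape, `catalog` mapping, and `ai_fallback` callable are illustrative assumptions for this sketch, not the actual Universal Handler implementation:

```python
def orchestrate(steps, catalog, ai_fallback):
    """Execute each plan step via a catalog action, falling back to AI.

    `steps`, `catalog`, and `ai_fallback` are hypothetical shapes used
    only to illustrate the pattern described above.
    """
    results = []
    for step in steps:
        action = catalog.get(step["action"])
        if action is not None:
            # A mapped catalog action handles this step directly.
            results.append(action(step.get("params", {})))
        else:
            # No mapped action exists: hand the step to the AI fallback.
            results.append(ai_fallback(step))
    return results
```

In this sketch, any step whose name is missing from the catalog flows to the AI fallback, mirroring the Claude-via-Bedrock behavior described above.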
Authentication Models
Universal API implements a dual authentication system to serve different types of users:
Public API - Rate Limiting & Key Authentication
The public API (api.universalapi.co) uses a simple yet effective security model:
Authenticated Users: Identified by a `userId` and `secretUniversalKey` pair
- Can access private actions they own
- Can execute calls with their own API keys
- Have access to a credit system (100 credits, replenished monthly if below threshold)
- No rate limits applied
Anonymous Users: Identified by IP address
- Limited to public actions only
- Subject to strict rate limiting (10 requests per 24 hours)
- No credit system - rely solely on rate limiting
- Encouraged to authenticate for better access
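The anonymous limit of 10 requests per 24 hours can be illustrated with an in-memory sliding-window limiter keyed by IP. This is a sketch of the policy, not Universal API's actual implementation (which would need shared state across serverless invocations):

```python
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 60 * 60  # 24-hour window
MAX_REQUESTS = 10              # anonymous limit per window

class AnonymousRateLimiter:
    """Illustrative sliding-window rate limiter keyed by client IP."""

    def __init__(self):
        self._hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        hits = self._hits[ip]
        # Discard requests that fell outside the 24-hour window.
        while hits and now - hits[0] >= WINDOW_SECONDS:
            hits.popleft()
        if len(hits) >= MAX_REQUESTS:
            return False
        hits.append(now)
        return True
```

Authenticated users bypass this entirely; their usage is governed by the credit system instead.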
Private API - Cognito Authentication
The public API also supports Cognito JWT tokens for browser-based authentication:
- Required for:
- Creating and managing actions
- User profile management
- Admin functions
- Managing API keys
API Key Management
Universal API implements a sophisticated API key management system:
User API Keys: Each Universal API user receives a unique API key that:
- Identifies the user for billing and rate limiting
- Tracks usage metrics
- Controls access to specific actions and APIs
- Currently the `secretUniversalKey` serves this purpose
External Service Keys: Users can provide their own API keys for external services, which:
- Are securely stored and encrypted in our system
- Can be passed to the UniversalHandler at request time
- Are used in actions to authenticate with external services
- Allow direct billing to the user's account for those external services
Credit System
Universal API uses a credit-based billing system. 1 credit = $0.001.
How Credits Are Charged
Every invocation has up to three cost components:
- Infrastructure Cost — AWS costs (Lambda compute + API Gateway + Bedrock tokens) × 1.20 (20% infrastructure fee)
- Author Price — Optional pricing set by the resource author (4 dimensions: per-invocation, per-GB-second, per-input-token, per-output-token). Authors receive 100% of what they set.
- Marketplace Fee — 20% of the author's price, charged to the user (not deducted from the author)
Credits charged = max(1, ceil(total cost / $0.001))
Cost Examples
| Scenario | Credits |
|---|---|
| MCP tool call (no LLM) | 1 (minimum) |
| Light agent (500 tokens) | ~5 |
| Medium agent (3K tokens) | ~24 |
| Heavy agent (15K tokens) | ~117 |
Subscription Tiers
| | Free | Starter ($29/mo) | Professional ($575/mo) |
|---|---|---|---|
| Credits/month | 100 | 30,000 | 600,000 |
| Knowledge storage | 100 MB | 50 GB | 1 TB |
| Support | Community | Email (72h SLA) | Email (24h SLA) |
| Annual billing | — | $27.55/mo (5% off) | $546.25/mo (5% off) |
- Free tier: 100 credits on signup, replenished monthly if below 100
- Paid tiers: Credits replenish monthly on your billing date. Unused credits do not roll over.
- Extra credit packs: Available on any tier — 5K/$5, 25K/$24.25, 100K/$96
- Anonymous users are rate-limited (10 requests/day)
- All tiers include full access to every platform feature
Manage your subscription at universalapi.co/pricing.
Platform Bedrock (Managed AI)
No AWS account needed to use AI agents. If you don't have AWS credentials stored, Universal API automatically provides Bedrock access:
- Your agent calls use Universal API's own Bedrock credentials
- Bedrock token costs + 20% infrastructure fee are charged to your credits
- Requires ≥ 5 credits to start an agent call
- Zero configuration — it just works
If you store your own AWS credentials, those are used instead and Bedrock costs go to your AWS bill directly.
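The credential-selection rule amounts to a simple fallback. The function and field names below are assumptions for illustration, not Universal API's internal API:

```python
def bedrock_credentials(user, platform_creds, credit_balance):
    """Pick whose AWS credentials fund a Bedrock call, per the rule above."""
    if user.get("awsCredentials"):
        # User-stored credentials win; Bedrock costs go to their AWS bill.
        return user["awsCredentials"], "user-billed"
    if credit_balance < 5:
        raise PermissionError("At least 5 credits are required to start an agent call")
    # Otherwise the platform's own Bedrock credentials are used,
    # and token costs + 20% fee are charged to the user's credits.
    return platform_creds, "credit-billed"
```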
Author Monetization
If you create public resources (MCP servers, actions, or agents) on Universal API, you can monetize them by setting author pricing and optionally attaching author credentials.
Author Pricing Dimensions
Set pricing on any resource you create. There are 8 pricing dimensions — use any combination:
Compute & Token Pricing:
| Dimension | Field | Use Case | Example |
|---|---|---|---|
| Per Invocation | pricePerInvocation | Flat fee per call | 0.001 ($0.001/call) |
| Per GB-Second | pricePerGbSecond | Compute-intensive tasks | 0.00001667 |
| Per Input Token | pricePerInputToken | LLM-based resources | 0.000003 |
| Per Output Token | pricePerOutputToken | LLM-based resources | 0.000015 |
Data Transfer Pricing:
| Dimension | Field | Use Case | Example |
|---|---|---|---|
| Per MB Ingress | pricePerMbIngress | Charge for large request payloads | 0.01 ($0.01/MB) |
| Per MB Egress | pricePerMbEgress | Charge for large response payloads | 0.01 ($0.01/MB) |
| Per MB External Egress | pricePerMbExternalEgress | Bytes sent to external APIs (e.g., PDF → Textract) | 0.05 ($0.05/MB) |
| Per MB External Ingress | pricePerMbExternalIngress | Bytes received from external APIs (e.g., Textract results) | 0.02 ($0.02/MB) |
Authors receive 100% of the price they set. A 20% marketplace fee is charged separately to the invoking user (not deducted from the author's revenue).
Data Transfer Pricing
The data transfer dimensions are especially useful for MCP servers that call external paid APIs (AWS Textract, OpenAI, image generation services). The author pays the external service and recovers costs proportional to the data volume processed.
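An author's price for one invocation is the sum of whichever dimensions they set. The pricing field names come from the tables above; the usage-dict keys are assumptions for this sketch:

```python
def author_price(usage, pricing):
    """Sum the priced dimensions an author has set (all fields optional)."""
    total = pricing.get("pricePerInvocation", 0.0)
    total += pricing.get("pricePerGbSecond", 0.0) * usage.get("gbSeconds", 0.0)
    total += pricing.get("pricePerInputToken", 0.0) * usage.get("inputTokens", 0)
    total += pricing.get("pricePerOutputToken", 0.0) * usage.get("outputTokens", 0)
    total += pricing.get("pricePerMbIngress", 0.0) * usage.get("mbIngress", 0.0)
    total += pricing.get("pricePerMbEgress", 0.0) * usage.get("mbEgress", 0.0)
    total += pricing.get("pricePerMbExternalEgress", 0.0) * usage.get("mbExternalEgress", 0.0)
    total += pricing.get("pricePerMbExternalIngress", 0.0) * usage.get("mbExternalIngress", 0.0)
    return total
```

The author receives this full amount; the 20% marketplace fee is added on top for the invoking user.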
Author Credentials
Authors who build resources that call external paid services (AWS Textract, OpenAI, Stripe, etc.) can attach a role token (uapi_rt_*) to their resource. At runtime, the author's credentials are injected alongside the invoking user's keys — enabling the author to pay for the external service and recover costs via authorPricing.
How it works:
- Author creates a role token with their service API keys
- Author attaches the role token to their resource via `authorRoleToken`
- When a user invokes the resource, the author's keys are available at runtime
- The user is charged the author's price (converted to credits)
- The author earns revenue tracked in the Author Dashboard
See Role Tokens and Creating MCP Servers — Author Monetization for implementation details.
Author Payouts
Authors with a connected Stripe account receive monthly payouts for their earnings. Connect your Stripe account in the Author Dashboard.
Request Lineage and Hierarchy
Universal API uses a decentralized parent-child model for Lambda invocations, creating a linked-list style hierarchy for request tracking:
- Request IDs: Every request has a unique ID for tracking and debugging
- Lineage Tracking: Every request maintains references to:
  - `requestId`: The ID for the current request
  - `rootRequestId`: The ID of the original request that started the workflow
  - `parentRequestId`: The ID of the immediate parent request
This model enables comprehensive tracking and visualization of complex workflows.
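The linked-list model can be sketched as follows; the helper function and ID format are illustrative, not Universal API's internal implementation:

```python
import uuid

def child_request(parent=None):
    """Build lineage metadata for a new request in the parent-child model."""
    rid = f"req-{uuid.uuid4().hex[:10]}"
    if parent is None:
        # A root request is its own root and has no parent.
        return {"requestId": rid, "rootRequestId": rid, "parentRequestId": None}
    return {
        "requestId": rid,
        "rootRequestId": parent["rootRequestId"],  # inherited from the root
        "parentRequestId": parent["requestId"],    # immediate parent link
    }
```

Because every child inherits `rootRequestId`, any request in a deep workflow can be traced back to the original call in one lookup, while `parentRequestId` reconstructs the full chain.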
Conversational Context & Follow-Up Questions
Universal API maintains context across multiple requests, enabling truly conversational interactions:
Context Preservation: The system remembers previous requests and their outcomes, allowing users to make follow-up requests that reference previous actions without restating all details.
Follow-Up Questions: When the Universal API needs additional information to complete a task, it can:
- Return a specific question to the user
- Store the partial context and action plan
- Resume processing when the user provides the requested information
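The pause-and-resume flow can be sketched with a minimal store. The class and method names are assumptions; a real deployment would persist pending plans in durable storage rather than memory:

```python
class ConversationStore:
    """Illustrative sketch of pausing a plan on a follow-up question and resuming it."""

    def __init__(self):
        self._pending = {}  # conversation_id -> partial plan awaiting an answer

    def ask(self, conversation_id, question, plan):
        # Save the partial context and action plan before asking the user.
        self._pending[conversation_id] = plan
        return {"followUpQuestion": question}

    def resume(self, conversation_id, answer):
        # Reload the stored plan and fold in the user's answer.
        plan = self._pending.pop(conversation_id)
        plan["answers"].append(answer)
        return plan
```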
Request Formats
Universal API accepts requests in multiple formats:
Natural Language Queries
```
GET /api?query=What is the weather forecast for New York City?
```

Structured Requests
```json
POST /api
{
  "intent": "analyze_sentiment",
  "data": {
    "text": "I absolutely love this product! Would recommend to everyone."
  }
}
```

Function-Like Calls
```json
POST /api
{
  "function": "convert_currency",
  "params": {
    "amount": 100,
    "from": "USD",
    "to": "EUR"
  }
}
```

Response Format
Universal API returns responses in a consistent format:
```json
{
  "success": true,
  "result": {
    // Action-specific response data
  },
  "requestId": "req-1234567890",
  "executionTimeMs": 345
}
```

Error responses follow this format:
```json
{
  "success": false,
  "error": {
    "message": "Error message",
    "code": "ERROR_CODE"
  },
  "requestId": "req-1234567890"
}
```

Next Steps
Now that you understand the core concepts of Universal API, you can:
- Build Strands Agents for AI with streaming ⭐
- Learn how to make your first request
- Explore the API reference
- Set up authentication
- Build MCP Servers for AI agents
- Create Actions for serverless functions