Pegasi API Reference
Introduction
The Pegasi API delivers professional AI capabilities with quality assurance, evaluation frameworks, and content optimization. This documentation covers the Sandbox Edition of the Pegasi API, which provides developers with a full-featured environment to build and test integrations.
Note: The Pegasi Enterprise API offers additional controls, customization options, and dedicated infrastructure. For information about enterprise features, please contact our solutions team.
Core Capabilities
- Model Selection: Routes requests to appropriate AI models based on content requirements and quality parameters
- Content Safety Controls: Applies content safety checks and factual accuracy verification
- Quality Assurance: Identifies and addresses quality issues in AI-generated content
- Evaluation Framework: Provides performance analytics and content assessment against predefined criteria
API Endpoints
Base URL: https://sandbox-api.pegasi.ai/v1
The sandbox environment is designed for development and testing. For production deployments or enterprise requirements, please contact our solutions team to discuss appropriate options.
Authentication & Security
All API requests require authentication using API keys. Include your API key in the Authorization header:
Authorization: Bearer YOUR_PEGASI_API_KEY
For sandbox usage, standard API keys provide access to all available functionality. Enterprise deployments offer enhanced security options with configurable permissions and usage restrictions.
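As a minimal sketch of attaching the header using only Python's standard library (the chat completions path and body fields follow the SDK example later in this reference):

```python
import json
from urllib.request import Request

API_KEY = "YOUR_PEGASI_API_KEY"
BASE_URL = "https://sandbox-api.pegasi.ai/v1"

# Build the request; actually sending it requires a valid sandbox key.
req = Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps({"model": "pegasi-standard", "messages": []}).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
print(req.get_header("Authorization"))  # Bearer YOUR_PEGASI_API_KEY
```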
API Architecture
The Pegasi API follows RESTful design principles with consistent resource naming and HTTP verb usage. The table below outlines our endpoint architecture:
| Current Implementation | Recommended Implementation | Implementation Benefits |
| --- | --- | --- |
| POST /v1/chat/completion | POST /v1/chat/completions (plural) | Ensures compatibility with existing OpenAI and Anthropic integrations for streamlined migration. |
| POST /v1/evaluate | POST /v1/evaluate + optional GET /v1/evaluate/{id} | Maintains a consistent resource-oriented design and enables asynchronous evaluation for complex workflows. |
| POST /v1/autocorrect | POST /v1/autocorrect + optional GET /v1/autocorrect/{id} | Provides a consistent interface with detailed metadata access for quality metrics and performance analysis. |
Endpoint Specifications
Chat Completions
- Endpoint: POST /v1/chat/completions
- Purpose: Generate AI responses with quality controls and safety guardrails
- Key Features:
  - Model selection
  - Content safety enforcement
  - Quality assurance
- Documentation: Chat Completions API
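An illustrative request body for this endpoint is shown below. The field names mirror the Python SDK parameters in the Integration Resources section; the exact wire format is an assumption based on OpenAI-compatible conventions.

```python
import json

# Hypothetical JSON body for POST /v1/chat/completions.
payload = {
    "model": "pegasi-standard",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant providing accurate information."},
        {"role": "user", "content": "Explain how machine learning models are evaluated."},
    ],
    "quality_risk_threshold": 0.7,
}
body = json.dumps(payload)
```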
Evaluations
- Endpoints:
  - POST /v1/evaluate (create evaluation)
  - GET /v1/evaluate/{id} (retrieve evaluation results)
  - POST /v1/evaluate/security/slopsquatting (detect hallucinated dependencies)
- Purpose: Assess AI-generated content against criteria and benchmarks
- Key Features:
  - Evaluation rubrics
  - Scoring analytics
  - Assessment logic
  - Supply chain security for code generation
- Documentation: Create Evaluation | Get Evaluation
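Because complex evaluations can complete asynchronously (create with POST, then fetch by id), a client typically polls until the result is ready. The sketch below illustrates that loop; the `send` callable, the `status` field, and its values are illustrative assumptions, not documented names.

```python
import time

def wait_for_evaluation(eval_id, send, interval=1.0, max_polls=30):
    """Poll GET /v1/evaluate/{id} until the evaluation finishes.

    `send(method, path)` performs the HTTP call and returns parsed JSON;
    the "status" field and its values are assumptions for illustration.
    """
    for _ in range(max_polls):
        result = send("GET", f"/v1/evaluate/{eval_id}")
        if result.get("status") in ("completed", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError(f"evaluation {eval_id} did not finish")

# Demo with a stubbed transport that completes on the third poll.
responses = iter([
    {"status": "pending"},
    {"status": "pending"},
    {"status": "completed", "score": 0.92},
])
result = wait_for_evaluation("eval_123", lambda m, p: next(responses), interval=0.0)
print(result["status"])  # completed
```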
Autocorrections
- Endpoints:
  - POST /v1/autocorrect (create autocorrection)
  - GET /v1/autocorrect/{id} (retrieve correction results)
- Purpose: Detect and address quality issues in AI-generated content
- Key Features:
  - Factual verification
  - Context-aware corrections
  - Configurable correction strategies
- Documentation: Create Autocorrection | Get Autocorrection
Integration Resources
SDK Libraries
Pegasi provides SDK libraries for easy integration:
# Python SDK implementation
from pegasi import PegasiClient

# Initialize client with authentication
client = PegasiClient(api_key="YOUR_PEGASI_API_KEY")

# Generate content with quality controls
response = client.chat.completions.create(
    model="pegasi-standard",
    messages=[
        {"role": "system", "content": "You are a helpful assistant providing accurate information."},
        {"role": "user", "content": "Explain how machine learning models are evaluated."},
    ],
    quality_risk_threshold=0.7,
)

# Process response
content = response.choices[0].message.content
Integration Best Practices
For optimal integration, we recommend:
- Error handling with appropriate retry logic
- Request tracking for debugging and analytics
- Rate limit awareness with backoff strategies
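The retry and backoff advice above can be sketched as a small wrapper. This is a generic pattern; the delay parameters are illustrative choices, not Pegasi requirements.

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=0.5):
    """Run `call`, retrying failures with exponential backoff plus jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            delay = base_delay * (2 ** attempt) + random.uniform(0, base_delay)
            time.sleep(delay)

# Demo: a flaky call that fails twice before succeeding.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

result = with_retries(flaky, base_delay=0.0)
print(result)  # ok
```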
Service Levels
The sandbox environment provides:
- Availability: Best-effort availability for development and testing
- Performance: Standard response times without guarantees
- Support: Community support and documentation
For production SLAs, uptime guarantees, and dedicated support, please inquire about our production and enterprise offerings.
Support Resources
- Documentation: Comprehensive guides and API references
- Community Forum: Connect with other developers using Pegasi
- Support: For technical questions, please contact our team
For enterprise support options including dedicated technical account management and 24/7 emergency response, please inquire about our enterprise offerings.