Compare performance across multiple AI API providers and models, with comprehensive per-round testing and aggregate analytics.

Configuration

Accuracy & Stability
Delay between starting concurrent requests. 0 = simultaneous burst.
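The stagger behaviour can be sketched as follows. This is a minimal illustration, not the app's actual code: `staggerOffsets`, `launchStaggered`, and the task-callback shape are assumed names.

```typescript
// Illustrative sketch of staggered concurrent launches.

/** Start offset (ms) for the i-th of `n` concurrent requests. */
function staggerOffsets(n: number, delayMs: number): number[] {
  return Array.from({ length: n }, (_, i) => i * delayMs);
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

/** Launch all tasks; delayMs = 0 degenerates to a simultaneous burst. */
async function launchStaggered<T>(
  tasks: Array<() => Promise<T>>,
  delayMs: number,
): Promise<T[]> {
  const running = tasks.map(async (task, i) => {
    if (delayMs > 0) await sleep(i * delayMs); // stagger the i-th start
    return task();
  });
  return Promise.all(running); // results stay in original task order
}
```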
Send one untimed throwaway request per model+protocol pair before the timed rounds, so connection setup and cold starts don't skew the first measurement.
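The per-pair bookkeeping behind that warm-up can be sketched as below; `warmupKey` and `shouldWarmUp` are hypothetical helper names, assuming a simple set of already-warmed pairs.

```typescript
// Sketch of warm-up tracking: at most one throwaway request per
// model+protocol pair, decided by a shared Set of seen keys.

/** Key identifying a model+protocol pair for warm-up tracking. */
function warmupKey(model: string, protocol: string): string {
  return `${model}::${protocol}`;
}

/** True exactly once per pair; the caller then sends one untimed request. */
function shouldWarmUp(
  model: string,
  protocol: string,
  warmed: Set<string>,
): boolean {
  const key = warmupKey(model, protocol);
  if (warmed.has(key)) return false; // already primed, skip
  warmed.add(key);
  return true;
}
```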
Loads tiktoken from a CDN to estimate token counts client-side. Server-reported usage is always preferred when available.
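The preference order can be sketched like this, assuming an OpenAI-style `usage` object; the tiktoken estimator is stubbed as a plain callback, and `completionTokens` is an illustrative name.

```typescript
// Sketch: prefer server-reported usage, fall back to a client-side estimate.

interface Usage {
  completion_tokens?: number;
}

function completionTokens(
  usage: Usage | undefined,
  text: string,
  estimate: (text: string) => number, // e.g. tiktoken encode(text).length
): number {
  // Server-reported count wins whenever the provider sends one.
  if (usage?.completion_tokens != null) return usage.completion_tokens;
  return estimate(text); // otherwise estimate from the raw output text
}
```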
When deployed to Cloudflare Pages with the bundled Function, requests are routed through /proxy on the same origin, avoiding CORS issues and per-origin connection limits.
Custom Cloudflare Worker / reverse proxy URL. Leave empty if using the same-origin proxy above.
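The routing priority implied by these two settings can be sketched as follows. The URL shapes (`/proxy?url=...`, proxy-prefixed target) are assumptions for illustration, not the app's documented wire format.

```typescript
// Sketch of endpoint resolution, assuming the priority:
// custom proxy URL > bundled same-origin /proxy Function > direct call.

function resolveRequestUrl(
  apiUrl: string,           // provider endpoint
  customProxy: string,      // user-supplied Worker/reverse-proxy URL, "" if unset
  sameOriginProxy: boolean, // deployed on Cloudflare Pages with the Function
): string {
  if (customProxy) {
    // Hypothetical shape: proxy base + encoded target URL.
    return `${customProxy.replace(/\/$/, "")}/${encodeURIComponent(apiUrl)}`;
  }
  if (sameOriginProxy) {
    return `/proxy?url=${encodeURIComponent(apiUrl)}`;
  }
  return apiUrl; // direct call: subject to CORS and connection limits
}
```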
Auto-filled based on protocol. Edit if using a custom endpoint.
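The auto-fill can be sketched as a protocol-to-endpoint map. The default URLs below are the providers' public API endpoints, but the set of protocols and the `endpointFor` helper are assumptions about this app.

```typescript
// Sketch of protocol-based endpoint auto-fill (illustrative mapping).

const DEFAULT_ENDPOINTS: Record<string, string> = {
  openai: "https://api.openai.com/v1/chat/completions",
  anthropic: "https://api.anthropic.com/v1/messages",
  gemini: "https://generativelanguage.googleapis.com/v1beta",
};

/** Auto-fill the endpoint unless the user already typed a custom one. */
function endpointFor(protocol: string, custom?: string): string {
  if (custom && custom.trim() !== "") return custom; // user override wins
  return DEFAULT_ENDPOINTS[protocol] ?? "";
}
```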
Click "Fetch Models" or type models below

Results

Model | Prompt | Round | First Token (ms) | Output Speed (t/s) | Result | Error | Status
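The two timing columns can be derived from stream timestamps roughly as below; the `RoundTiming` fields and helper names are illustrative, assuming output speed is measured over the generation phase (first token to stream end).

```typescript
// Sketch of per-round metric derivation from streaming timestamps.

interface RoundTiming {
  startMs: number;      // request sent
  firstTokenMs: number; // first streamed token received
  endMs: number;        // stream closed
  outputTokens: number; // completion token count
}

/** Time to first token, in milliseconds. */
function firstTokenLatency(t: RoundTiming): number {
  return t.firstTokenMs - t.startMs;
}

/** Output speed in tokens/second over the generation phase only. */
function outputSpeed(t: RoundTiming): number {
  const genSeconds = (t.endMs - t.firstTokenMs) / 1000;
  return genSeconds > 0 ? t.outputTokens / genSeconds : 0;
}
```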

Statistics

Model | Avg First Token (ms) | Avg Output Speed (t/s) | Success Rate
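Per-model aggregation for this table can be sketched as follows; the `RoundResult` shape and the assumption that averages cover only successful rounds are illustrative.

```typescript
// Sketch of per-model aggregation; averages are taken over successful
// rounds only, success rate over all rounds.

interface RoundResult {
  ok: boolean;
  firstTokenMs?: number;
  tokensPerSec?: number;
}

interface ModelStats {
  avgFirstTokenMs: number;
  avgTokensPerSec: number;
  successRate: number; // 0..1
}

function aggregate(rounds: RoundResult[]): ModelStats {
  const ok = rounds.filter((r) => r.ok);
  const avg = (xs: number[]) =>
    xs.length ? xs.reduce((a, b) => a + b, 0) / xs.length : 0;
  return {
    avgFirstTokenMs: avg(ok.map((r) => r.firstTokenMs ?? 0)),
    avgTokensPerSec: avg(ok.map((r) => r.tokensPerSec ?? 0)),
    successRate: rounds.length ? ok.length / rounds.length : 0,
  };
}
```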

History Records