
Benchmark Methodology Overview

A methodology guide for running fair, repeatable proxy benchmarks.

Scorecard

Readiness Score: 80
Measures how consistent and repeatable your benchmark setup is.

Methodology

  • Fix destination categories and keep them consistent
  • Use identical concurrency levels across runs
  • Track both errors and timeouts separately (see the sketch after this list)
  • Record the data window and version your results
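
One way to keep errors and timeouts in separate buckets is to classify each request as it completes. A minimal sketch in Python using the `requests` library; the URL list and proxy address are placeholders:

```python
import requests

def probe(urls, proxy_url, timeout_s=10):
    """Count successes, timeouts, and other errors in separate buckets."""
    counts = {"success": 0, "timeout": 0, "error": 0}
    proxies = {"http": proxy_url, "https": proxy_url}
    for url in urls:
        try:
            resp = requests.get(url, proxies=proxies, timeout=timeout_s)
            if resp.ok:
                counts["success"] += 1
            else:
                counts["error"] += 1   # non-2xx responses count as errors, not timeouts
        except requests.exceptions.Timeout:
            counts["timeout"] += 1     # timeouts tracked separately, per the checklist
        except requests.exceptions.RequestException:
            counts["error"] += 1       # connection resets, proxy failures, etc.
    return counts
```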

Metrics

Sampling window: The total duration of the test run.
Concurrency: Number of concurrent requests per session.
Destination class: A consistent group of targets with similar block behavior.
Last updated 2026-03-11 • 30-day window
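
To make these definitions concrete, the three metrics can be written down as a small typed run descriptor. The field names below are illustrative, not taken from any specific tool:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RunDescriptor:
    sampling_window_s: int    # total duration of the test run, in seconds
    concurrency: int          # number of concurrent requests per session
    destination_class: str    # group of targets with similar block behavior

# Example: a 30-minute run at concurrency 10 against e-commerce targets.
baseline = RunDescriptor(sampling_window_s=1800, concurrency=10,
                         destination_class="ecommerce")
```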

Why methodology matters

Benchmark results are only useful if the testing process is consistent. This guide provides a practical framework for sampling, measuring, and comparing proxy performance so that results are repeatable and defensible.

What this guide includes

  • A consistent sampling approach
  • Standard metric definitions
  • Tips for avoiding biased results
  • A readiness checklist before you run any test

How to use it

Use this methodology before running any benchmark or comparing proxy pools across regions. Capture a baseline, then change one variable at a time to isolate impact.
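
One way to enforce the one-variable-at-a-time rule in practice is to derive every experiment from the baseline and allow exactly one override. A sketch with illustrative field names:

```python
import copy

# Hypothetical baseline configuration; field names are illustrative.
baseline = {
    "destination_class": "ecommerce",
    "concurrency": 10,
    "sampling_window_s": 1800,
    "session_mode": "sticky",
}

def variant(base, **overrides):
    """Derive a run config from the baseline, changing exactly one variable."""
    if len(overrides) != 1:
        raise ValueError("change one variable at a time")
    cfg = copy.deepcopy(base)
    cfg.update(overrides)
    return cfg

# Each run differs from the baseline by a single variable, so any change in
# results can be attributed to that variable.
runs = [
    variant(baseline, concurrency=25),
    variant(baseline, session_mode="rotating"),
]
```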

Recommended Baseline Setup

  • Destinations: Choose a fixed set of target categories (e.g., ecommerce, search, social)
  • Concurrency: Start with a low, stable baseline
  • Time window: Use the same time window for each run
  • Session mode: Keep sticky vs rotating consistent per run
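
Once the baseline is fixed, it helps to version it so every result can be traced back to the exact settings used. A minimal sketch that fingerprints an illustrative baseline config; the field names and values are placeholders:

```python
import hashlib
import json

baseline = {
    "destinations": ["ecommerce", "search", "social"],  # fixed target categories
    "concurrency": 10,                                   # low, stable starting point
    "window_minutes": 30,                                # same time window for each run
    "session_mode": "sticky",                            # keep sticky vs rotating fixed per run
}

# Short, order-independent fingerprint; store it alongside every result row.
config_id = hashlib.sha256(
    json.dumps(baseline, sort_keys=True).encode()
).hexdigest()[:12]
print("config id:", config_id)
```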

Interpreting Results

  • Compare success rates relative to your baseline, not a single run
  • Track latency distribution (p50, p90) instead of only averages (see the sketch below)
  • Document anomalies such as regional outages or transient blocks
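
p50 and p90 can be read directly off the sorted latency samples. The snippet below uses the nearest-rank convention, which is one of several reasonable percentile definitions; the sample values are illustrative:

```python
import math

def percentile(samples_ms, pct):
    """Nearest-rank percentile of latency samples (milliseconds)."""
    if not samples_ms:
        raise ValueError("no samples")
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 95, 340, 110, 101, 150, 98, 480, 130, 105]  # illustrative samples
print("p50:", percentile(latencies_ms, 50), "ms")   # 110 ms
print("p90:", percentile(latencies_ms, 90), "ms")   # 340 ms
```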

Reporting

Summarize results with your configuration, time window, and destination set so that teams can reproduce the test later. Include a short summary of what changed between runs and why, plus any external factors that may have affected outcomes. Document the test environment to make comparisons fair.
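
A lightweight way to make runs reproducible is to store the configuration, window, and outcomes together as one record per run. The structure below is only a suggestion; the field names and numbers are placeholders to adapt to whatever your team already reports:

```python
import json
from datetime import datetime, timezone

report = {
    "run_id": "baseline-2026-03-11-a",            # illustrative identifier
    "recorded_at": datetime.now(timezone.utc).isoformat(),
    "config": {
        "destinations": ["ecommerce", "search", "social"],
        "concurrency": 10,
        "session_mode": "sticky",
        "sampling_window_s": 1800,
    },
    "results": {"success_rate": 0.97, "p50_ms": 110, "p90_ms": 340},  # placeholder numbers
    "changes_since_last_run": "none (baseline)",
    "external_factors": [],                        # note outages or transient blocks here
}

with open("benchmark_report.json", "w") as fh:
    json.dump(report, fh, indent=2)
```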

Steps

1. Define scope: Pick a destination class and keep it stable.
2. Standardize sampling: Use the same cadence and time window (see the sketch below).
3. Document results: Record versions and settings for each run.
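
For step 2, a fixed cadence inside a fixed window can be enforced with a clock-driven loop, so every run samples on the same schedule regardless of how long individual probes take. A sketch where `probe_once` stands in for whatever request you actually send:

```python
import time

def run_sampling(probe_once, window_s=1800, cadence_s=5):
    """Call probe_once on a fixed cadence for the full sampling window."""
    results = []
    start = time.monotonic()
    next_tick = start
    while time.monotonic() - start < window_s:
        results.append(probe_once())
        next_tick += cadence_s
        # Sleep until the next scheduled tick so the cadence stays constant
        # even when individual probes take a variable amount of time.
        time.sleep(max(0.0, next_tick - time.monotonic()))
    return results
```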

Tips

  • Avoid changing more than one variable per benchmark run.
