
Proxy Provider Evaluation: Speed, Uptime, and Success Rate Benchmarks

Last updated: April 2026

By Hex Proxies Engineering Team

A comprehensive guide to evaluating proxy providers using speed, uptime, and success rate benchmarks. Includes a methodology framework, DIY benchmark scripts, and red flags to watch for when comparing providers.

Level: intermediate · Time: 15 minutes · Tag: proxy-evaluation

Prerequisites

  • Basic understanding of proxy types (residential, ISP, datacenter)
  • Familiarity with HTTP request/response cycle

Steps

1

Understand the three core metrics

Learn what speed (latency and throughput), uptime (availability percentage), and success rate (non-blocked responses) actually measure at a technical level.

2

Evaluate benchmark methodology

Apply the six-point methodology framework to determine whether a provider benchmark is trustworthy: test locations, sample sizes, target sites, concurrency, warm-up periods, and measurement windows.

3

Run your own benchmark

Follow the step-by-step code walkthrough to build a simple benchmark script that tests any proxy provider across speed, uptime, and success rate from your own infrastructure.

4

Identify benchmark manipulation

Learn the common tactics providers use to inflate their numbers and how to spot misleading claims.

5

Calculate true cost per successful request

Combine benchmark results with pricing data to compute the metric that actually matters: how much you pay for each request that returns usable data.

Proxy Provider Evaluation: Speed, Uptime, and Success Rate Benchmarks Explained

Choosing a proxy provider based on marketing claims is like buying a car based on the brochure photos. Every provider says they are the fastest, most reliable, and most successful. The difference between good and bad proxy infrastructure only shows up in the numbers -- but only if you know how to read those numbers and, more importantly, how to generate your own.

This guide breaks down the three metrics that define proxy performance -- speed, uptime, and success rate -- explains what each actually measures at a technical level, provides a framework for evaluating any benchmark you encounter, and walks you through running your own evaluation. By the end, you will be equipped to cut through marketing spin and make a data-driven provider decision.

---

Quick Answer

**The three core proxy metrics are: speed (latency to first byte and total download time), uptime (percentage of time the proxy gateway accepts connections), and success rate (percentage of requests that return a non-blocked, usable response).** Of these, success rate is the most important for most workloads because a fast proxy that gets blocked 30% of the time costs more per successful request than a slightly slower proxy with a 98% success rate. Always evaluate benchmarks by checking the testing methodology -- location, sample size, target sites, concurrency, and measurement window -- before trusting the numbers.

---

The Three Core Metrics Explained

Speed: What It Actually Measures

Speed in proxy benchmarking has two components that are often conflated:

**Latency (Time to First Byte / TTFB):** The time between sending your HTTP request through the proxy and receiving the first byte of the response. This measures the proxy infrastructure overhead -- how quickly the proxy gateway receives your request, selects an exit IP, forwards the request to the target, and begins streaming the response back to you.

Typical latency ranges by proxy type:

  • Datacenter proxies: 20--80 ms
  • ISP proxies: 50--150 ms
  • Residential proxies: 200--800 ms
  • Mobile proxies: 300--1,200 ms

**Throughput (Total Download Time):** The total time to download the complete response through the proxy. This depends on the proxy provider's bandwidth capacity, the exit node's connection speed, and any throttling applied by the target site.

A benchmark that only reports latency hides throughput problems. A proxy can connect in 50 ms but take 3 seconds to download a 2 MB page if the exit node's bandwidth is throttled. Always look for both metrics.

**Percentiles matter more than averages.** A provider reporting 100 ms average latency might have a p50 of 60 ms but a p99 of 2,000 ms. That means 1 in 100 requests takes 20 times longer than the median. For high-volume workloads, p95 and p99 latency determine your tail performance.
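Computing percentiles from your own latency samples is straightforward; a minimal sketch using the nearest-rank method (the sample values are illustrative):

```javascript
// Nearest-rank percentile: sort the samples, pick the value at rank ceil(p% * n).
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Illustrative latencies: mostly fast requests with one slow outlier.
const latencies = [60, 55, 70, 65, 58, 62, 61, 59, 64, 2000];

console.log(percentile(latencies, 50)); // 61 -- the median looks healthy
console.log(percentile(latencies, 99)); // 2000 -- the tail tells the real story
```

Note how a single slow request leaves the median untouched but dominates the p99, which is exactly why averages alone are misleading.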

Uptime: What 99.9% Really Means

Uptime measures what percentage of the time the proxy gateway accepts and processes connections. It is typically expressed as a percentage over a rolling 30-day window.

| Uptime Claim | Allowed Downtime/Month | Allowed Downtime/Year |
|---|---|---|
| 99% | 7.3 hours | 87.6 hours (3.65 days) |
| 99.5% | 3.65 hours | 43.8 hours (1.83 days) |
| 99.9% | 43.8 minutes | 8.77 hours |
| 99.95% | 21.9 minutes | 4.38 hours |
| 99.99% | 4.38 minutes | 52.6 minutes |

**What uptime does NOT measure:** A proxy that is "up" but returning CAPTCHA pages or 403 errors on 40% of requests is technically available but functionally broken. Uptime tells you the gateway is reachable, not that the proxies are working.

**Planned maintenance windows** are sometimes excluded from uptime calculations. Ask whether the provider's SLA counts planned downtime. If they schedule 2 hours of maintenance every month but exclude it, their real uptime could be 99.7% while they claim 99.99%.
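The arithmetic behind those figures is simple enough to check yourself; a quick sketch using an average month of 30.44 days, including the effect of excluded maintenance windows:

```javascript
// Average minutes per month: (365.25 days * 24 h * 60 min) / 12 months.
const MINUTES_PER_MONTH = (365.25 * 24 * 60) / 12; // 43,830

// Allowed downtime per month implied by an uptime claim.
function allowedDowntimeMinutes(uptimePct) {
  return ((100 - uptimePct) / 100) * MINUTES_PER_MONTH;
}

console.log(allowedDowntimeMinutes(99.9).toFixed(1)); // ~43.8 minutes

// Effective uptime when excluded maintenance is added back into the downtime.
function realUptime(claimedPct, excludedMaintenanceMinutes) {
  const down = allowedDowntimeMinutes(claimedPct) + excludedMaintenanceMinutes;
  return 100 * (1 - down / MINUTES_PER_MONTH);
}

// A 99.99% claim with 2 hours/month of excluded maintenance:
console.log(realUptime(99.99, 120).toFixed(2)); // ~99.72%
```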

**How to verify uptime independently:** Use a third-party monitoring service (UptimeRobot, Pingdom, Better Stack) to ping the proxy gateway endpoint every 60 seconds from multiple regions. After 30 days, you will have your own uptime data that is independent of the provider's claims.

Success Rate: The Metric That Matters Most

Success rate is the percentage of proxy requests that return a usable, non-blocked response. A "successful" response means:

  1. The proxy connected to the target (no connection timeout or refused connection)
  2. The target returned an HTTP 200 (or expected status code)
  3. The response body contains the expected content (not a CAPTCHA page, block page, or empty response)

This third point is critical. Many benchmarks count any HTTP 200 as a success, but anti-bot systems often return HTTP 200 with a CAPTCHA or JavaScript challenge in the body. A rigorous benchmark must parse the response content.

**Success rate varies dramatically by target site.** A proxy with a 99% success rate against a basic WordPress site might have a 60% success rate against a site protected by Cloudflare Bot Management. Benchmarks that do not disclose target sites are meaningless.

**Retry-adjusted success rate** is the practical metric: if your success rate is 80%, you need 1.25 requests on average to get one successful response. At 95%, you need 1.05 requests. The difference compounds at scale -- for 1 million target requests at 80% success rate, you send 1.25 million total requests (250,000 wasted).
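The retry overhead is just the reciprocal of the success rate; a sketch of the arithmetic, assuming each attempt succeeds independently:

```javascript
// Expected total requests needed to collect `wanted` successful responses,
// given an independent per-attempt success probability.
function expectedRequests(wanted, successRate) {
  return Math.ceil(wanted / successRate);
}

const wanted = 1_000_000;
console.log(expectedRequests(wanted, 0.80)); // 1,250,000 sent (250,000 wasted)
console.log(expectedRequests(wanted, 0.95)); // 1,052,632 sent (52,632 wasted)
```

Since most providers bill per request or per gigabyte transferred, the wasted requests translate directly into wasted spend.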

---

The Six-Point Methodology Framework

When you encounter a proxy benchmark -- whether from a provider, a review site, or an independent tester -- evaluate it against these six criteria:

1. Test Locations

Where did the benchmark run from? A test from a server in the same datacenter as the proxy gateway will show lower latency than a test from a different continent. Credible benchmarks disclose the test server locations and ideally test from multiple geographic regions.

**Red flag:** Benchmark shows uniformly low latency without disclosing test server location. The test server is likely co-located with the proxy infrastructure.

2. Sample Size

How many requests were made? A benchmark of 100 requests tells you almost nothing -- it could represent a lucky or unlucky 5-minute window. Meaningful benchmarks require thousands of requests over multiple days.

**Minimum credible sample:** 10,000 requests per provider, spread across at least 7 consecutive days, covering both peak and off-peak hours.

3. Target Sites

What sites were the requests sent to? The difficulty of scraping varies enormously by target:

| Difficulty Tier | Examples | Typical Success Rate |
|---|---|---|
| Easy (no anti-bot) | Static blogs, government sites, Wikipedia | 95--99% |
| Medium (basic protection) | Medium-traffic e-commerce, news sites | 85--95% |
| Hard (advanced anti-bot) | Amazon, LinkedIn, Nike, Shopify with bot protection | 60--85% |
| Extreme (aggressive blocking) | Ticketmaster, Supreme, financial data providers | 30--60% |

A benchmark that only tests against easy targets will show inflated success rates. Credible benchmarks include a mix of difficulty tiers.

4. Concurrency Level

How many simultaneous requests were active? Most proxy gateways handle 10 concurrent requests well. At 100, 500, or 1,000 concurrent requests, performance characteristics change. Queue depth increases, IP pool pressure grows, and latency percentiles spread.

**Red flag:** Benchmark does not disclose concurrency. It was likely run with minimal concurrency, which favors the provider.

5. Warm-Up Period

Did the benchmark discard the initial requests? The first 50--100 requests through a proxy gateway often have higher latency as connections are established, DNS is resolved, and the IP pool is warmed. Discarding the warm-up period is standard practice in credible benchmarks.

6. Measurement Window

Over what time period were the tests conducted? A 1-hour benchmark on a Tuesday afternoon does not capture weekend traffic patterns, peak-hour congestion, or provider maintenance windows. The minimum credible measurement window is 7 days continuous.

---

How to Run Your Own Benchmark

Rather than trusting provider benchmarks, run your own evaluation. Here is a lightweight methodology you can execute in an afternoon.

Step 1: Define Your Test Matrix

Decide what to test before writing any code:

  • **Providers:** Test at least 2--3 providers plus your current one as a control.
  • **Target sites:** Include 3--5 sites that represent your actual workload. Mix difficulty tiers.
  • **Test location:** Run from your production server or a server in the same region.
  • **Duration:** Minimum 4 hours per provider, ideally 24--48 hours.
  • **Concurrency:** Match your production concurrency level.

Step 2: Build the Test Script

A basic benchmark script needs to:

  1. Send an HTTP GET request through the proxy
  2. Record the TTFB and total response time
  3. Verify the response is not a CAPTCHA or block page
  4. Log all results to a structured format (CSV or JSON)

```javascript
// benchmark.mjs -- lightweight proxy benchmark
//
// Note: this script uses plain-HTTP forward proxying (the full target URL is
// sent as the request path). HTTPS targets require a CONNECT tunnel, which is
// omitted here for brevity -- use a proxy agent library for those.
import http from 'node:http';
import { writeFile, appendFile } from 'node:fs/promises';

const PROXY_HOST = 'gate.hexproxies.com';
const PROXY_PORT = 8080;
const PROXY_USER = 'your-username';
const PROXY_PASS = 'your-password';

const TARGETS = [
  { url: 'http://httpbin.org/get', name: 'httpbin', difficulty: 'easy' },
  { url: 'http://example.com', name: 'example', difficulty: 'easy' },
];

const RESULTS_FILE = 'benchmark-results.csv';
const TOTAL_REQUESTS = 500;
const CONCURRENCY = 10;

function sendRequest(target) {
  const start = performance.now();
  let ttfb = 0;
  const proxyAuth = Buffer.from(`${PROXY_USER}:${PROXY_PASS}`).toString('base64');

  return new Promise((resolve) => {
    const fail = () =>
      resolve({
        target: target.name,
        difficulty: target.difficulty,
        statusCode: 0,
        ttfb: 0,
        totalMs: Math.round(performance.now() - start),
        bodyLength: 0,
        blocked: true,
        timestamp: new Date().toISOString(),
      });

    const req = http.get(
      {
        host: PROXY_HOST,
        port: PROXY_PORT,
        path: target.url, // absolute URI: the proxy forwards it to the target
        headers: {
          'Proxy-Authorization': `Basic ${proxyAuth}`,
          Host: new URL(target.url).hostname,
        },
        timeout: 30000,
      },
      (res) => {
        ttfb = performance.now() - start;
        let body = '';
        res.on('data', (chunk) => { body += chunk; });
        res.on('end', () => {
          const total = performance.now() - start;
          // A 200 alone is not success: check the body for block/CAPTCHA markers.
          const blocked =
            res.statusCode !== 200 ||
            body.toLowerCase().includes('captcha') ||
            body.toLowerCase().includes('blocked');
          resolve({
            target: target.name,
            difficulty: target.difficulty,
            statusCode: res.statusCode,
            ttfb: Math.round(ttfb),
            totalMs: Math.round(total),
            bodyLength: body.length,
            blocked,
            timestamp: new Date().toISOString(),
          });
        });
      }
    );

    req.on('error', fail);
    req.on('timeout', () => { req.destroy(); }); // destroy() emits 'error'
  });
}

async function runBatch(target, count, concurrency) {
  const results = [];
  for (let i = 0; i < count; i += concurrency) {
    const batch = Array.from(
      { length: Math.min(concurrency, count - i) },
      () => sendRequest(target)
    );
    results.push(...(await Promise.all(batch)));
  }
  return results;
}

async function main() {
  const header = 'target,difficulty,statusCode,ttfb,totalMs,bodyLength,blocked,timestamp\n';
  await writeFile(RESULTS_FILE, header);

  for (const target of TARGETS) {
    console.log(`Testing: ${target.name} (${target.difficulty})`);
    const results = await runBatch(target, TOTAL_REQUESTS, CONCURRENCY);
    await appendFile(
      RESULTS_FILE,
      results.map((r) => Object.values(r).join(',')).join('\n') + '\n'
    );

    const successful = results.filter((r) => !r.blocked);
    const successRate = ((successful.length / results.length) * 100).toFixed(1);
    const avgTtfb = successful.length
      ? Math.round(successful.reduce((s, r) => s + r.ttfb, 0) / successful.length)
      : 0;
    console.log(
      `  Success rate: ${successRate}% | Avg TTFB: ${avgTtfb}ms | Total requests: ${results.length}`
    );
  }
}

main();
```

Step 3: Analyze the Results

After collecting data, compute these summary statistics per provider and per target:

  • **Success rate:** Successful requests / total requests x 100
  • **Latency p50, p95, p99:** Sort TTFB values and pick the 50th, 95th, and 99th percentile values
  • **Throughput p50, p95, p99:** Same calculation for total download time
  • **Error breakdown:** Categorize failures (timeouts, connection refused, CAPTCHA, HTTP 403/429)
  • **Cost per successful request:** (monthly cost / total successful requests in test period) extrapolated to monthly volume
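As a sketch of that analysis step (the row fields match the CSV columns written by the benchmark script; the sample rows, monthly cost, and volume here are illustrative):

```javascript
// Nearest-rank percentile over a pre-sorted array.
function percentile(sorted, p) {
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}

// Summarize parsed benchmark rows into the headline metrics.
function summarize(rows, monthlyCostUsd, monthlyRequestVolume) {
  const ok = rows.filter((r) => !r.blocked);
  const ttfbs = ok.map((r) => r.ttfb).sort((a, b) => a - b);
  const successRate = ok.length / rows.length;
  // Extrapolate: at this success rate, what does 1,000 *usable* responses cost?
  const costPer1kSuccess =
    (monthlyCostUsd / (monthlyRequestVolume * successRate)) * 1000;
  return {
    successRate: +(successRate * 100).toFixed(1),
    p50: percentile(ttfbs, 50),
    p95: percentile(ttfbs, 95),
    p99: percentile(ttfbs, 99),
    costPer1kSuccess: +costPer1kSuccess.toFixed(2),
  };
}

// Illustrative rows: three usable responses, one blocked request.
const rows = [
  { ttfb: 120, blocked: false },
  { ttfb: 95, blocked: false },
  { ttfb: 0, blocked: true },
  { ttfb: 180, blocked: false },
];
console.log(summarize(rows, 300, 1_000_000));
// { successRate: 75, p50: 120, p95: 180, p99: 180, costPer1kSuccess: 0.4 }
```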

Step 4: Compare and Decide

Create a comparison matrix:

| Metric | Provider A | Provider B | Provider C |
|---|---|---|---|
| Success rate (hard targets) | -- | -- | -- |
| Latency p50 | -- | -- | -- |
| Latency p99 | -- | -- | -- |
| Uptime (7-day) | -- | -- | -- |
| Cost/1K successful requests | -- | -- | -- |
| Geographic coverage | -- | -- | -- |
| Support response time | -- | -- | -- |

**Weight the metrics by your priorities.** A sneaker bot operation cares about p99 latency and success rate on Shopify. A price monitoring operation cares about cost per successful request and geographic coverage.

---

Common Benchmark Manipulation Tactics

Be aware of how providers inflate their numbers:

Cherry-Picked Target Sites

Testing only against easy, unprotected sites produces 98%+ success rates that do not reflect real-world performance. If a benchmark does not name the target sites and their difficulty tier, assume the worst.

Optimal Concurrency

Running benchmarks at 5 concurrent requests when most customers use 100+ masks congestion issues. Ask the provider what concurrency level their benchmarks use.

Geographic Proximity

Running tests from a server in the same facility as the proxy gateway produces sub-10ms latency numbers. Real users connect from around the world. Look for multi-region test data.

Short Measurement Windows

A 1-hour test during off-peak hours avoids capturing maintenance windows, traffic spikes, and IP pool depletion. Credible benchmarks run for at least a week.

Excluding Failures from Averages

Some benchmarks compute average latency only for successful requests, making the provider look faster by ignoring timed-out requests that would drag the average up. Check whether the methodology includes or excludes failures.
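The distortion is easy to see with numbers (the sample values are illustrative, with the failed request counted at its 30-second timeout):

```javascript
// Four fast successes and one timed-out request.
const samples = [
  { ms: 80, ok: true },
  { ms: 95, ok: true },
  { ms: 110, ok: true },
  { ms: 70, ok: true },
  { ms: 30000, ok: false }, // timeout
];

const avg = (xs) => xs.reduce((sum, x) => sum + x, 0) / xs.length;

const successOnly = avg(samples.filter((s) => s.ok).map((s) => s.ms));
const allRequests = avg(samples.map((s) => s.ms));

console.log(successOnly); // 88.75 -- the number the marketing page shows
console.log(allRequests); // 6071 -- the effective latency your pipeline sees
```

Excluding failures is defensible if clearly disclosed alongside the success rate; the manipulation is reporting the first number without the second.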

Aggregate vs Per-Target Numbers

A provider reporting 95% overall success rate might achieve 99% on easy sites and 70% on hard sites. If your workload is primarily hard sites, the aggregate number is misleading.

---

What to Ask Before Choosing a Provider

Based on the framework above, here are the specific questions to ask any proxy provider during evaluation:

  1. **What is your success rate on [specific target sites I care about]?** Generic success rate claims are useless. Get site-specific data.
  2. **What does your SLA guarantee?** Look for financial credits for SLA violations, not just promises.
  3. **Can I run a free trial with my actual workload?** A trial is the only way to get benchmark data that reflects your reality.
  4. **Where are your proxy servers located?** Geographic proximity to your targets affects latency.
  5. **What happens at high concurrency?** Ask about rate limits, queue behavior, and degradation under load.
  6. **How do you handle IP bans?** Does the provider rotate banned IPs automatically? How large is the available pool?
  7. **What is the IP refresh rate?** For residential pools, how often are new IPs added and stale IPs retired?

---

Putting It Together: The Evaluation Scorecard

Score each provider on a 1--5 scale across these dimensions:

| Dimension | Weight | Description |
|---|---|---|
| Success rate on your targets | 30% | Tested against your actual target sites at your concurrency |
| Latency (p95) | 15% | 95th percentile TTFB from your production region |
| Uptime (verified independently) | 15% | Measured via third-party monitoring |
| Cost per successful request | 20% | Total monthly cost / successful requests |
| Geographic coverage | 10% | Number of countries and cities available |
| Support quality | 10% | Response time and technical competence |

Multiply each score by the weight, sum the results, and you have a data-driven provider ranking that reflects your specific needs -- not marketing claims.
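The weighted sum can be sketched in a few lines (the weights match the table above; the 1--5 ratings for the example provider are illustrative):

```javascript
// Scorecard weights from the evaluation table (must sum to 1.0).
const WEIGHTS = {
  successRate: 0.30,
  latencyP95: 0.15,
  uptime: 0.15,
  costPerSuccess: 0.20,
  geoCoverage: 0.10,
  support: 0.10,
};

// Weighted sum of a provider's 1-5 ratings across all dimensions.
function weightedScore(scores) {
  return Object.entries(WEIGHTS).reduce(
    (sum, [dimension, weight]) => sum + weight * scores[dimension],
    0
  );
}

// Illustrative ratings for one provider.
const providerA = {
  successRate: 4,
  latencyP95: 3,
  uptime: 5,
  costPerSuccess: 4,
  geoCoverage: 2,
  support: 3,
};

console.log(weightedScore(providerA).toFixed(2)); // 3.70 out of a maximum 5.00
```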

---

Frequently Asked Questions

**What is a good proxy success rate?** For ISP proxies on moderately protected sites, expect 95--99% success rate. For residential proxies on hard targets, 80--90% is strong. Below 75% on your target sites means the proxy pool is likely contaminated or too small.

**How often should I re-benchmark my proxy provider?** Run a lightweight benchmark (1,000 requests against your top 3 target sites) every 30 days. IP pools degrade, anti-bot systems evolve, and provider infrastructure changes. Monthly checks catch performance drift before it impacts your operations.

**Does higher price always mean better performance?** No. Price correlates with IP quality and pool size but not linearly. Some premium providers charge 3x for 10% better success rates. Calculate cost per successful request to find the actual value leaders.

**Can I benchmark without a paid subscription?** Most reputable providers offer free trials (typically 100 MB to 1 GB for residential, or 3--5 IPs for ISP). Use the trial to run a focused benchmark against your hardest target sites. If a provider refuses a trial, that itself is a data point.

Tips

  • Never trust a single benchmark number. Always ask: tested from where, against what sites, at what concurrency, over what time window?
  • A provider claiming 99.9% uptime with no methodology disclosure is a red flag. 99.9% means less than 8.8 hours of downtime per year.
  • Success rate matters more than speed for most use cases. A proxy that is 50 ms faster but has a 20% lower success rate costs you more in retries.
  • Run benchmarks from the same geographic region as your production workloads. A US-based test tells you nothing about performance from Singapore.
  • Test at your actual concurrency level. Many proxies perform well at 10 concurrent requests but degrade at 500.

Ready to Get Started?

Put this guide into practice with Hex Proxies.
