Proxy Performance Optimization
Proxy performance depends on connection management, timeout tuning, concurrency control, and geographic routing. This guide covers techniques to maximize throughput and minimize latency with Hex Proxies.
Connection Pooling
Reuse connections to avoid the overhead of TCP handshakes and TLS negotiation:
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()

# Configure connection pooling
adapter = HTTPAdapter(
    pool_connections=20,
    pool_maxsize=50,
    max_retries=Retry(total=3, backoff_factor=1, status_forcelist=[429, 503]),
)
session.mount("http://", adapter)
session.mount("https://", adapter)

proxy = "http://YOUR_USERNAME:YOUR_PASSWORD@gate.hexproxies.com:8080"
session.proxies = {"http": proxy, "https": proxy}
```
```javascript
// Node.js: keep-alive agent with proxy
import { HttpsProxyAgent } from 'https-proxy-agent';

const agent = new HttpsProxyAgent(
  'http://YOUR_USERNAME:YOUR_PASSWORD@gate.hexproxies.com:8080',
  {
    keepAlive: true,
    maxSockets: 50,
    maxFreeSockets: 10,
    timeout: 30000,
  }
);
```
Timeout Configuration
Set timeouts strategically to balance reliability and speed:
```python
# Connect timeout: how long to wait for the proxy to accept the connection
# Read timeout: how long to wait for the target to respond through the proxy
session.request("GET", url, timeout=(5, 20))
#                                    ^connect ^read
```

| Timeout Type | Recommended | Purpose |
|--------------|-------------|---------|
| Connect | 5-10 seconds | Proxy handshake |
| Read | 15-30 seconds | Target response |
| Total | 30-45 seconds | Full request lifecycle |
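requests enforces the connect and read timeouts per attempt, but has no built-in total timeout. One way to bound the full request lifecycle is a wrapper that retries transient failures only until a deadline expires — a minimal sketch, where `fetch_with_deadline` and its parameters are illustrative, and `do_request` is any callable with requests' signature (e.g. `session.get`):

```python
import time

def fetch_with_deadline(do_request, url, deadline_s=45,
                        connect_t=5, read_t=20, backoff_s=1.0):
    """Retry transient failures until a total deadline expires.

    `do_request` is injected (e.g. session.get) so the policy is easy
    to test without a live proxy.
    """
    start = time.monotonic()
    last_error = None
    while (elapsed := time.monotonic() - start) < deadline_s:
        remaining = deadline_s - elapsed
        try:
            # Cap the read timeout so one attempt cannot outlive the deadline.
            return do_request(url, timeout=(connect_t, min(read_t, remaining)))
        except Exception as exc:
            last_error = exc
            time.sleep(min(backoff_s,
                           max(0.0, deadline_s - (time.monotonic() - start))))
    raise TimeoutError(f"total deadline of {deadline_s}s exceeded") from last_error
```

Injecting the request callable keeps the deadline policy separate from the session configuration, so the same wrapper works for any proxy setup.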
Concurrency Tuning
Too few workers underutilize the connection pool; too many add contention without extra throughput. Benchmark several levels to find the plateau:

```python
import time
import concurrent.futures

def benchmark_concurrency(session, urls, workers):
    """Test different concurrency levels to find the sweet spot."""
    start = time.time()
    success = 0
    failed = 0

    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as executor:
        futures = {executor.submit(session.get, url, timeout=20): url for url in urls}
        for future in concurrent.futures.as_completed(futures):
            try:
                resp = future.result()
                if resp.status_code == 200:
                    success += 1
                else:
                    failed += 1
            except Exception:
                failed += 1

    elapsed = time.time() - start
    rps = len(urls) / elapsed
    print(f"Workers: {workers} | RPS: {rps:.1f} | Success: {success} | "
          f"Failed: {failed} | Time: {elapsed:.1f}s")
    return rps

# Find optimal concurrency
urls = ["https://httpbin.org/ip"] * 100
for workers in [5, 10, 20, 50]:
    benchmark_concurrency(session, urls, workers)
```
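Once you have RPS numbers per worker count, picking the smallest count that reaches the throughput plateau avoids unnecessary contention and proxy-pool pressure. A sketch of that selection rule (the `pick_optimal_workers` helper and its 5% tolerance are illustrative, not part of the benchmark above):

```python
def pick_optimal_workers(rps_by_workers, tolerance=0.05):
    """Return the smallest worker count whose RPS is within `tolerance`
    of the best observed throughput.

    Past the plateau, extra workers only add contention, so prefer
    the cheapest configuration that performs near-optimally.
    """
    best = max(rps_by_workers.values())
    for workers in sorted(rps_by_workers):
        if rps_by_workers[workers] >= best * (1 - tolerance):
            return workers
```

For example, feeding it the results collected by `benchmark_concurrency` might select 10 workers even though 20 scored marginally higher.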
Geographic Routing for Latency
Choose proxy exit points close to your target servers to minimize round-trip time:
```python
# Target is a US-based site -- use US exit nodes
us_proxy = "http://YOUR_USERNAME-country-us:YOUR_PASSWORD@gate.hexproxies.com:8080"

# Target is EU-based -- use European exit nodes
eu_proxy = "http://YOUR_USERNAME-country-de:YOUR_PASSWORD@gate.hexproxies.com:8080"
```
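When you are unsure which exit country is closest to a target, measuring is more reliable than guessing. A sketch of a small ranking helper — `rank_exit_countries` is illustrative, and the latency prober is injected (it would time one request through a proxy pinned via the `-country-{cc}` username suffix shown above), so the ranking logic itself runs offline:

```python
def rank_exit_countries(measure_latency, countries, samples=3):
    """Rank candidate exit countries by median measured latency.

    `measure_latency(country)` should return the wall-clock seconds for
    one request through a proxy pinned to that country; the median over
    `samples` probes smooths out outliers.
    """
    medians = {}
    for cc in countries:
        timings = sorted(measure_latency(cc) for _ in range(samples))
        medians[cc] = timings[len(timings) // 2]
    return sorted(medians, key=medians.get)
```

Run it once at startup and route traffic through the first country in the returned list.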
ISP vs Residential Performance
| Metric | ISP Proxies | Residential Proxies |
|--------|-------------|---------------------|
| Avg latency | 50-150ms | 100-500ms |
| Throughput | Higher | Moderate |
| Stability | Very stable | Varies by IP |
| Best for | Speed-critical tasks | Detection-sensitive tasks |
Use Hex Proxies ISP proxies (250K+ IPs) for latency-sensitive workloads and residential (10M+ IPs) for tasks that prioritize IP diversity.
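The trade-off above can be encoded as a simple routing rule, so each job declares its requirements instead of hard-coding an endpoint. A minimal sketch — the pool names and the `isp.hexproxies.com` host are hypothetical placeholders, not documented endpoints; check your dashboard for the real gateway addresses:

```python
# Hypothetical gateway hosts for illustration only.
POOLS = {
    "isp": "http://YOUR_USERNAME:YOUR_PASSWORD@isp.hexproxies.com:8080",
    "residential": "http://YOUR_USERNAME:YOUR_PASSWORD@gate.hexproxies.com:8080",
}

def choose_pool(latency_sensitive: bool, detection_sensitive: bool) -> str:
    """Prefer ISP proxies for speed; use residential when IP diversity
    matters more than raw latency. Detection sensitivity wins ties."""
    if detection_sensitive:
        return "residential"
    return "isp" if latency_sensitive else "residential"
```

A task that is both latency- and detection-sensitive gets residential here, on the assumption that avoiding blocks matters more than shaving milliseconds.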
Monitoring and Metrics
Track success rate and latency percentiles continuously so performance regressions surface early:

```python
from collections import defaultdict

class ProxyMetrics:
    def __init__(self):
        self.metrics = defaultdict(list)

    def record(self, url, status, latency):
        self.metrics["status_codes"].append(status)
        self.metrics["latencies"].append(latency)

    def report(self):
        latencies = self.metrics["latencies"]
        statuses = self.metrics["status_codes"]
        success = sum(1 for s in statuses if s == 200)
        total = len(statuses)

        print(f"Total requests: {total}")
        print(f"Success rate: {success/total:.1%}")
        print(f"Avg latency: {sum(latencies)/len(latencies)*1000:.0f}ms")
        print(f"P95 latency: {sorted(latencies)[int(len(latencies)*0.95)]*1000:.0f}ms")
        print(f"P99 latency: {sorted(latencies)[int(len(latencies)*0.99)]*1000:.0f}ms")
```
Optimization Checklist
- Reuse HTTP sessions and connection pools instead of creating new connections per request.
- Set connect and read timeouts separately for fine-grained control.
- Benchmark different concurrency levels to find the optimal worker count.
- Route traffic through geographically close exit nodes to reduce latency.
- Use ISP proxies for speed-critical tasks and residential for detection-sensitive ones.
- Monitor success rate, latency percentiles, and error distribution continuously.