
Python Requests Proxy

Complete Requests proxy integration example for Python with Hex Proxies. Includes authentication, timeouts, and error handling.

Install: `pip install requests`

Python / Requests
import requests

# Route both plain-HTTP and HTTPS traffic through the same gateway
proxies = {
    "http": "http://user:pass@gate.hexproxies.com:8080",
    "https": "http://user:pass@gate.hexproxies.com:8080",
}

# One Session reuses TCP connections, cookies, and headers across requests
session = requests.Session()
session.proxies.update(proxies)

try:
    # (connect, read) timeouts: 10 s to reach the proxy, 30 s for the response
    response = session.get("https://httpbin.org/ip", timeout=(10, 30))
    response.raise_for_status()  # raise on 4xx/5xx status codes
    print("Origin IP:", response.json()["origin"])
except requests.exceptions.ProxyError as e:
    print("Proxy connection failed:", e)
except requests.exceptions.Timeout:
    print("Request timed out")
except requests.exceptions.RequestException as e:
    print("Request error:", e)

Why Python Requests for Proxy Work

Python dominates the proxy and web scraping landscape for good reason. The Requests library, downloaded over 300 million times per month on PyPI, offers one of the most intuitive HTTP interfaces in any language. Its "HTTP for Humans" design philosophy extends naturally to proxy integration, where complexity should live in the infrastructure rather than in application code. Python's vast ecosystem of parsing libraries, such as BeautifulSoup, lxml, and parsel, means that once your proxied request returns data, you have first-class tools for extracting value from it.

The combination of Python and Requests has become the de facto standard for proxy-driven data collection. Data scientists, market researchers, and automation engineers all gravitate toward this pairing because it eliminates boilerplate and lets them focus on business logic. The Session object in Requests is particularly valuable for proxy work because it persists TCP connections, cookies, and headers across multiple requests, reducing overhead when routing hundreds of calls through gate.hexproxies.com:8080.

Configuration Patterns

The Requests library accepts proxy configuration through three mechanisms, each suited to different deployment scenarios. The dictionary-based approach shown above gives you explicit per-protocol control. Environment variables (HTTP_PROXY and HTTPS_PROXY) work well in containerized deployments where you want proxy config decoupled from application code. The Session-level proxy assignment strikes the best balance for most production workloads because it centralizes configuration while allowing per-request overrides when needed.
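The three mechanisms can be sketched side by side. This is a minimal sketch using the gateway address from the snippet above; the alternate port in the per-request override is a placeholder, not a real endpoint:

```python
import os
import requests

# Mechanism 1: environment variables, picked up automatically by Requests
os.environ["HTTPS_PROXY"] = "http://user:pass@gate.hexproxies.com:8080"
env_proxies = requests.utils.get_environ_proxies("https://httpbin.org/ip")
print(env_proxies["https"])

# Mechanism 2: session-level assignment, shared by every request on the session
session = requests.Session()
session.proxies["https"] = "http://user:pass@gate.hexproxies.com:8080"

# Mechanism 3: a per-request `proxies` dict overrides the session default
# (the :9090 port below is a hypothetical second gateway; requires network)
# session.get("https://httpbin.org/ip",
#             proxies={"https": "http://user:pass@gate.hexproxies.com:9090"},
#             timeout=(10, 30))
```

Environment variables win for twelve-factor deployments; the session-level form keeps configuration in one place while still allowing the per-request escape hatch.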

For authenticated proxies, embed credentials directly in the proxy URL using the format `http://user:pass@gate.hexproxies.com:8080`. If your password contains special characters, URL-encode them using `urllib.parse.quote()` to prevent parsing failures. When working with sticky sessions, append the session identifier to your username with a separator as documented in your Hex Proxies dashboard.
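The encoding step matters whenever the password contains `@`, `:`, or other URL delimiters. A short sketch with an invented password:

```python
from urllib.parse import quote

username = "user"
password = "p@ss:w0rd!"  # invented password with characters that break URL parsing

# Percent-encode credentials so '@' and ':' are not misread as URL delimiters;
# safe="" ensures even '/' inside the password gets encoded
proxy_url = (
    f"http://{quote(username, safe='')}:{quote(password, safe='')}"
    "@gate.hexproxies.com:8080"
)
print(proxy_url)
# http://user:p%40ss%3Aw0rd%21@gate.hexproxies.com:8080
```

Without the encoding, Requests would split the URL at the first `@` and treat part of the password as the hostname.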

Common Pitfalls

The most frequent mistake Python developers make is forgetting to set explicit timeouts. Without a timeout parameter, Requests will wait indefinitely for a response, which can stall your entire pipeline. Always pass both connect and read timeouts as a tuple: `timeout=(10, 30)` gives you 10 seconds for the proxy connection and 30 seconds for the response.

Another common issue is creating a new Session object for every request. This defeats connection pooling and forces a fresh TCP and TLS handshake through the proxy for each call, dramatically increasing latency. Instantiate one Session, configure its proxy settings, and reuse it across your entire workload. Watch out for the `REQUESTS_CA_BUNDLE` environment variable in Docker containers, as misconfigured certificate paths cause silent TLS failures through proxies.

Performance Optimization

For high-throughput scenarios, tune the connection pool size with a custom HTTPAdapter. Mount an adapter with `pool_connections=50` and `pool_maxsize=50` to allow up to 50 concurrent keep-alive connections to the proxy gateway. Combine this with `requests-futures` or `concurrent.futures.ThreadPoolExecutor` for parallel request execution. While Requests itself is synchronous, thread-based parallelism scales well to several hundred concurrent proxy connections before you need to consider async alternatives.
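The adapter and thread-pool combination described above can be sketched as follows; the fan-out at the bottom is commented out because it needs network access:

```python
import concurrent.futures
import requests
from requests.adapters import HTTPAdapter

session = requests.Session()
session.proxies.update({
    "http": "http://user:pass@gate.hexproxies.com:8080",
    "https": "http://user:pass@gate.hexproxies.com:8080",
})

# Allow up to 50 concurrent keep-alive connections through the gateway
adapter = HTTPAdapter(pool_connections=50, pool_maxsize=50)
session.mount("http://", adapter)
session.mount("https://", adapter)

def fetch(url):
    """Fetch one URL through the shared session and return its status code."""
    response = session.get(url, timeout=(10, 30))
    response.raise_for_status()
    return response.status_code

# Fan 100 requests out across 50 worker threads (requires network):
# with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
#     results = list(pool.map(fetch, ["https://httpbin.org/ip"] * 100))
```

Keeping `max_workers` at or below `pool_maxsize` avoids threads blocking while they wait for a free connection in the pool.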

Measure your p95 response time, not just averages. Proxy latency has a long tail, and a 30-second timeout that seems generous can still trigger on 2-3 percent of requests during peak hours. Implement structured logging that captures the proxy round-trip time separately from target server processing time so you can identify whether bottlenecks are in the proxy layer or the destination.
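A minimal sketch of the percentile math, with simulated latencies standing in for real `response.elapsed.total_seconds()` samples (the field names in the log line are assumptions, not a Hex Proxies schema):

```python
import json
import math

def p95(samples):
    """Nearest-rank 95th percentile: the value 95% of samples fall at or below."""
    ordered = sorted(samples)
    return ordered[max(0, math.ceil(0.95 * len(ordered)) - 1)]

# Simulated round-trip times in seconds; in production, collect
# response.elapsed.total_seconds() per request instead
latencies = [0.21, 0.25, 0.23, 0.22, 0.24, 0.26, 0.22, 0.23, 4.80, 0.25]

print(f"mean: {sum(latencies) / len(latencies):.2f}s")  # skewed by one slow call
print(f"p95:  {p95(latencies):.2f}s")                   # exposes the long tail

# One structured log entry per request (hypothetical field names)
print(json.dumps({"event": "proxy_request", "rtt_s": latencies[-1], "status": 200}))
```

Note that `response.elapsed` measures time until the response headers arrive, so it approximates proxy-plus-server latency separately from body download and parsing time.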

Tips

  1. Use session objects to reuse TCP connections and reduce latency.
  2. Set explicit timeouts to avoid hanging requests.
  3. Catch ProxyError separately from other request exceptions for clearer debugging.

Ready to Integrate?

Get proxy credentials and start coding in minutes.