How to Optimize Proxy Bandwidth Usage
Residential proxy plans are often billed per gigabyte. Even with ISP proxies offering unlimited bandwidth, optimizing data transfer improves speed and reduces unnecessary load. These techniques can reduce bandwidth consumption by 40-70%.
Technique 1: Request Compression Headers
Always send an Accept-Encoding header so servers return compressed responses:

```python
import httpx

headers = {
    "Accept-Encoding": "gzip, deflate, br",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
}

proxy = "http://YOUR_USER:YOUR_PASS@gate.hexproxies.com:8080"

with httpx.Client(proxy=proxy, headers=headers) as client:
    resp = client.get("https://example.com")
    # httpx auto-decompresses; compare the decompressed size here with the
    # transferred size the server reports in Content-Length
    print(f"Decompressed: {len(resp.content)} bytes")
    print(f"Transferred:  {resp.headers.get('content-length', 'unknown')} bytes")
```
Technique 2: Response Filtering
Only download what you need. Use CSS selectors or XPath to extract specific content:
```python
import httpx
from bs4 import BeautifulSoup

def fetch_prices_only(url: str, proxy: str) -> list[str]:
    """Fetch the page but only extract price elements, ignoring images, scripts, etc."""
    with httpx.Client(proxy=proxy, timeout=30) as client:
        resp = client.get(url, headers={
            "Accept": "text/html",
            "Accept-Encoding": "gzip, deflate, br",
        })
    soup = BeautifulSoup(resp.text, "html.parser")
    prices = [el.text.strip() for el in soup.select("[data-price], .price, .cost")]
    return prices
```
Technique 3: Conditional Requests
Use ETags and If-Modified-Since to avoid re-downloading unchanged content:
```python
from dataclasses import dataclass

import httpx

@dataclass(frozen=True)
class CachedResponse:
    url: str
    etag: str
    last_modified: str
    content: str

class ConditionalFetcher:
    def __init__(self, proxy: str):
        self._proxy = proxy
        self._cache: dict[str, CachedResponse] = {}

    def fetch(self, url: str) -> str:
        headers = {"Accept-Encoding": "gzip, deflate, br"}
        cached = self._cache.get(url)
        if cached:
            if cached.etag:
                headers["If-None-Match"] = cached.etag
            if cached.last_modified:
                headers["If-Modified-Since"] = cached.last_modified

        with httpx.Client(proxy=self._proxy, timeout=30) as client:
            resp = client.get(url, headers=headers)
            if resp.status_code == 304:
                return cached.content  # Not modified; zero body bandwidth

            content = resp.text
            self._cache[url] = CachedResponse(
                url=url,
                etag=resp.headers.get("etag", ""),
                last_modified=resp.headers.get("last-modified", ""),
                content=content,
            )
            return content
```
Technique 4: Block Unnecessary Resources in Browser Automation
When using Playwright or Puppeteer, block resources you do not need:
```javascript
// Playwright example
await page.route('**/*', (route) => {
  const blocked = ['image', 'stylesheet', 'font', 'media'];
  if (blocked.includes(route.request().resourceType())) {
    return route.abort();
  }
  return route.continue();
});
```

This can reduce per-page bandwidth by 60-80%.
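For Python users, roughly the same blocking can be sketched with Playwright's sync API. The function names below (`should_block`, `open_page_lean`) are illustrative, not Playwright APIs, and note that blocking stylesheets can occasionally break pages that gate content on layout:

```python
BLOCKED_TYPES = {"image", "stylesheet", "font", "media"}

def should_block(resource_type: str) -> bool:
    """True for heavy resource types that rarely matter when scraping text."""
    return resource_type in BLOCKED_TYPES

def open_page_lean(url: str, proxy_server: str) -> str:
    # Imported here so the predicate above works without Playwright installed
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(proxy={"server": proxy_server})
        page = browser.new_page()
        # Abort heavy resource types; let documents, scripts, and XHR through
        page.route("**/*", lambda route: route.abort()
                   if should_block(route.request.resource_type)
                   else route.continue_())
        page.goto(url)
        html = page.content()
        browser.close()
        return html
```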
Technique 5: HEAD Requests for Existence Checks
If you only need to check whether a page exists, use HEAD instead of GET. Note that some servers reject HEAD with a 405, so fall back to GET when that happens:
```python
import httpx

def check_url_exists(url: str, proxy: str) -> bool:
    with httpx.Client(proxy=proxy, timeout=10) as client:
        resp = client.head(url, follow_redirects=True)
        return resp.status_code == 200
```

Bandwidth Impact Summary
| Technique | Bandwidth Reduction | Effort |
|-----------|---------------------|--------|
| Compression headers | 30-50% | Minimal |
| Resource blocking | 60-80% per page | Low |
| Conditional requests | 90%+ for unchanged | Medium |
| Content extraction | 80-95% | Medium |
| HEAD requests | 95%+ | Minimal |
Hex Proxies Bandwidth Facts
ISP proxies include unlimited bandwidth — no per-GB billing. Residential proxies are billed per GB, making these optimization techniques directly cost-saving. Either way, reducing unnecessary data transfer improves speed and reduces network load.
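To put the per-GB billing in concrete terms, here is a back-of-the-envelope estimate. The rate used in the comment is a made-up illustration, not a Hex Proxies price:

```python
def estimate_savings(gb_per_month: float, price_per_gb: float, reduction: float) -> float:
    """Dollars saved per month, given a fractional bandwidth reduction (0.0-1.0)."""
    return gb_per_month * price_per_gb * reduction

# e.g. a scraper moving 100 GB/month at a hypothetical $5/GB, after a 50%
# reduction from compression plus resource blocking:
monthly_savings = estimate_savings(100.0, 5.0, 0.5)
print(f"${monthly_savings:.2f}/month saved")
```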