# Proxies for SEO Tools
Accurate SEO data requires requests from real residential IPs in specific geographic locations. Search engines personalize results by location, device, and browsing history. Hex Proxies provides 10M+ residential IPs across every major country, enabling precise SERP data collection.
## Why Proxies for SEO
- **Localized results**: Google shows different rankings for users in New York vs London vs Tokyo.
- **Avoid blocks**: Search engines rate-limit automated queries and block datacenter IPs.
- **Competitive analysis**: Monitor competitor rankings from multiple locations simultaneously.
- **Accurate data**: Residential IPs get the same results as real users, not bot-filtered results.
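Geo-targeting is driven entirely by the proxy username and the search URL's `gl` parameter. A minimal sketch of both pieces, using the `-country-` username suffix and `gate.hexproxies.com:8080` gateway format shown in the examples on this page (substitute your own credentials):

```python
from urllib.parse import quote_plus


def build_geo_proxies(user, password, country, gateway="gate.hexproxies.com:8080"):
    """Build a requests-style proxies dict targeting a specific country."""
    proxy = f"http://{user}-country-{country}:{password}@{gateway}"
    return {"http": proxy, "https": proxy}


def build_search_url(keyword, country, num_results=100):
    """Compose a Google search URL with an explicit geolocation (gl) parameter."""
    return (
        "https://www.google.com/search"
        f"?q={quote_plus(keyword)}&num={num_results}&hl=en&gl={country}"
    )
```

Changing only the `country` argument switches both the exit IP's location and the localization Google applies to the results.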
## SERP Rank Tracking
```python
import requests
from urllib.parse import quote_plus


class RankTracker:
    def __init__(self, proxy_user, proxy_pass):
        self.proxy_user = proxy_user
        self.proxy_pass = proxy_pass
        self.gateway = "gate.hexproxies.com:8080"

    def check_ranking(self, keyword, target_domain, country="us", num_results=100):
        proxy = f"http://{self.proxy_user}-country-{country}:{self.proxy_pass}@{self.gateway}"
        proxies = {"http": proxy, "https": proxy}

        headers = {
            "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
            "Accept-Language": "en-US,en;q=0.9",
        }

        encoded_kw = quote_plus(keyword)
        url = f"https://www.google.com/search?q={encoded_kw}&num={num_results}&hl=en&gl={country}"

        resp = requests.get(url, proxies=proxies, headers=headers, timeout=20)
        resp.raise_for_status()

        # Parse results and find the target domain's position
        position = self.find_domain_position(resp.text, target_domain)
        return {
            "keyword": keyword,
            "country": country,
            "position": position,
            "target_domain": target_domain,
        }

    def find_domain_position(self, html, domain):
        # Simplified -- use a proper HTML parser in production
        from bs4 import BeautifulSoup

        soup = BeautifulSoup(html, "html.parser")
        results = soup.select("div.g a[href]")
        for i, link in enumerate(results, 1):
            href = link.get("href", "")
            if domain in href:
                return i
        return None


# Track rankings from multiple locations
tracker = RankTracker("YOUR_USERNAME", "YOUR_PASSWORD")
locations = ["us", "gb", "de", "fr", "au"]
keyword = "best residential proxies"

for loc in locations:
    result = tracker.check_ranking(keyword, "hexproxies.com", country=loc)
    print(f"{loc}: Position {result['position']}")
```
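Google serves an anti-bot interstitial when it suspects automation, and a blocked response will silently parse to `position: None`. A heuristic detector is worth running before parsing; the `/sorry/` path and "unusual traffic" phrase used below are commonly observed markers, not a guaranteed contract, so adjust to what you actually see:

```python
def looks_blocked(html, status_code=200):
    """Heuristic check for Google's anti-bot interstitial.

    HTTP 429 and the '/sorry/' redirect page are the usual signals;
    treat a positive result as 'rotate the session and retry later'.
    """
    if status_code == 429:
        return True
    lowered = html.lower()
    return "/sorry/" in lowered or "unusual traffic" in lowered
```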
## Multi-Location SERP Analysis
```python
import concurrent.futures


def track_keyword_globally(tracker, keyword, domain, locations):
    results = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        futures = {
            executor.submit(tracker.check_ranking, keyword, domain, loc): loc
            for loc in locations
        }
        for future in concurrent.futures.as_completed(futures):
            results.append(future.result())
    return results
```
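Because `as_completed` yields results in completion order, not location order, a small summarizer helps turn the raw list into a report. This helper is a sketch that assumes each item has the dict shape returned by `check_ranking` (with `None` meaning the domain was not found):

```python
def summarize_rankings(results):
    """Collapse per-location results into a position-by-country report."""
    by_country = {r["country"]: r["position"] for r in results}
    ranked = [p for p in by_country.values() if p is not None]
    return {
        "positions": by_country,
        "best": min(ranked) if ranked else None,
        "worst": max(ranked) if ranked else None,
    }
```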
## SEO Use Case Summary
| Task | Proxy Type | Session Mode | Volume |
|------|------------|--------------|--------|
| Rank tracking | Residential | Rotating + geo | Medium |
| SERP scraping | Residential | Rotating | High |
| Backlink checking | Residential | Rotating | Medium |
| Competitor monitoring | Residential | Rotating + geo | Medium |
| Local SEO audits | Residential | Geo-targeted | Low |
## Best Practices for Search Engine Scraping
- Add 5-15 second delays between search queries.
- Rotate User-Agent strings to match real browser distributions.
- Use geographic targeting to get accurate localized results.
- Respect rate limits -- search engines will escalate from captchas to IP bans.
- Cache results to avoid redundant queries.
- Use Hex Proxies residential IPs (10M+ pool) for the highest success rates against search engines.
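The delay and caching recommendations above can be combined in one small wrapper. A sketch under stated assumptions: `fetch` stands in for any query callable (such as a `check_ranking` call), and the delay range and TTL values are illustrative defaults, not requirements:

```python
import random
import time


class CachedSearcher:
    """Wrap a query function with a TTL cache and randomized politeness delays."""

    def __init__(self, fetch, delay_range=(5, 15), ttl=3600):
        self.fetch = fetch              # callable: (keyword, country) -> result
        self.delay_range = delay_range  # seconds to sleep before each live query
        self.ttl = ttl                  # cache entry lifetime in seconds
        self.cache = {}                 # (keyword, country) -> (timestamp, result)

    def query(self, keyword, country):
        key = (keyword, country)
        cached = self.cache.get(key)
        if cached and time.time() - cached[0] < self.ttl:
            return cached[1]            # fresh cache hit: no request, no delay
        time.sleep(random.uniform(*self.delay_range))
        result = self.fetch(keyword, country)
        self.cache[key] = (time.time(), result)
        return result
```

Repeated queries for the same keyword/country pair within the TTL window are served from the cache, so they cost neither a delay nor a proxy request.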