Your Selenium scraper just got blocked after 10 requests. Sound familiar?
Without proxies, browser automation hits a wall fast. Anti-bot systems track your IP and shut you down before you've collected meaningful data.
This guide shows you how to configure proxies in Selenium for Chrome, Firefox, and Edge. You'll learn unauthenticated and authenticated setups, rotation strategies, and advanced techniques to stay undetected in 2026.
What Are Proxies in Selenium?
Proxies in Selenium route your browser automation traffic through intermediate servers, masking your real IP address. They make your scraper appear as multiple users from different locations, preventing rate limiting and bypassing geographic restrictions.
Selenium supports HTTP, HTTPS, and SOCKS5 proxies through browser-specific options like ChromeOptions, FirefoxOptions, and EdgeOptions. The configuration differs slightly between browsers, but the core principle stays the same.
Why Use Proxies With Selenium
Proxies solve three critical problems in browser automation.
Avoid IP Bans
Websites track request frequency per IP address. Make too many requests from one IP and you'll get blocked within minutes.
Proxies distribute your requests across multiple IPs. This makes your automation look like organic traffic from different users rather than a single bot hammering the server.
Bypass Geo-Restrictions
Many websites serve different content based on location. E-commerce sites show different prices, streaming platforms restrict content by region, and some services block entire countries.
Proxies let you test how applications behave for users in different countries. This is essential for localization testing and competitive analysis.
Increase Scraping Throughput
Using multiple proxies simultaneously means you can run parallel Selenium instances. Each instance uses a different proxy IP.
This multiplies your scraping throughput without triggering anti-bot measures. A scraper that took 10 hours with one IP can finish in 30 minutes with 20 proxies.
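A minimal sketch of that pattern, assuming Chrome and the --proxy-server flag covered later in this guide (the proxy addresses are placeholders):
from concurrent.futures import ThreadPoolExecutor
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

def fetch(url, proxy):
    """Fetch one URL through its own proxied browser."""
    opts = Options()
    opts.add_argument(f'--proxy-server={proxy}')
    opts.add_argument('--headless=new')
    driver = webdriver.Chrome(options=opts)
    try:
        driver.get(url)
        return driver.page_source[:200]
    finally:
        driver.quit()

# Placeholder proxy addresses; one worker per proxy
proxies = ["proxy1.example:8080", "proxy2.example:8080"]
urls = ["https://httpbin.org/ip"] * len(proxies)
with ThreadPoolExecutor(max_workers=len(proxies)) as pool:
    print(list(pool.map(fetch, urls, proxies)))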
Set Up an Unauthenticated Proxy in Chrome
Unauthenticated proxies don't require a username or password. They're the simplest to configure but offer less security.
Here's the basic Chrome proxy setup:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
# Replace with your actual proxy
PROXY = "185.199.229.156:7492"
chrome_options = Options()
chrome_options.add_argument(f'--proxy-server={PROXY}')
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
The --proxy-server argument tells Chrome to route all traffic through the specified proxy.
Visit httpbin.org/ip to verify the proxy works. The response shows your current IP address in JSON format.
Pro tip: Always test proxies before running large jobs. Free proxies often fail or get blocked within hours.
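One lightweight way to pre-check a proxy before launching a browser is a plain HTTP request. A sketch, assuming the requests library is installed:
import requests

def proxy_alive(proxy, timeout=10):
    """Return True if the proxy answers httpbin's IP endpoint."""
    proxies = {"http": f"http://{proxy}", "https": f"http://{proxy}"}
    try:
        return requests.get("https://httpbin.org/ip",
                            proxies=proxies, timeout=timeout).ok
    except requests.RequestException:
        return False

print(proxy_alive("185.199.229.156:7492"))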
Add Headless Mode for Server Environments
Running on a server without a display? Add headless mode:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
PROXY = "185.199.229.156:7492"
chrome_options = Options()
chrome_options.add_argument(f'--proxy-server={PROXY}')
chrome_options.add_argument('--headless=new')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
The --headless=new flag uses Chrome's newer headless implementation with better compatibility. The --no-sandbox and --disable-dev-shm-usage flags prevent common crashes in containerized environments.
Set Up an Unauthenticated Proxy in Firefox
Firefox requires a different configuration approach using the Proxy class.
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType
from selenium.webdriver.firefox.options import Options
PROXY_HOST = "185.199.229.156"
PROXY_PORT = 7492
firefox_options = Options()
proxy = Proxy({
'proxyType': ProxyType.MANUAL,
'httpProxy': f'{PROXY_HOST}:{PROXY_PORT}',
'sslProxy': f'{PROXY_HOST}:{PROXY_PORT}',
'noProxy': 'localhost,127.0.0.1'
})
firefox_options.proxy = proxy
driver = webdriver.Firefox(options=firefox_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
The Proxy class requires separate settings for HTTP and SSL connections.
Setting both to the same proxy ensures consistent routing regardless of whether the target site uses HTTP or HTTPS.
ProxyType.MANUAL tells Selenium you're manually configuring the proxy rather than using system settings.
Alternative Firefox Method Using Preferences
Firefox also supports proxy configuration through preferences:
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
PROXY_HOST = "185.199.229.156"
PROXY_PORT = 7492
firefox_options = Options()
# Configure proxy through Firefox preferences
firefox_options.set_preference("network.proxy.type", 1)
firefox_options.set_preference("network.proxy.http", PROXY_HOST)
firefox_options.set_preference("network.proxy.http_port", PROXY_PORT)
firefox_options.set_preference("network.proxy.ssl", PROXY_HOST)
firefox_options.set_preference("network.proxy.ssl_port", PROXY_PORT)
firefox_options.set_preference("network.proxy.no_proxies_on", "localhost,127.0.0.1")
driver = webdriver.Firefox(options=firefox_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
This method gives you more granular control over Firefox's proxy behavior.
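For instance, the same preference mechanism can route traffic through a SOCKS5 proxy and push DNS resolution to the proxy. A sketch reusing PROXY_HOST and PROXY_PORT from above (these are standard Firefox preference names):
firefox_options = Options()
firefox_options.set_preference("network.proxy.type", 1)
firefox_options.set_preference("network.proxy.socks", PROXY_HOST)
firefox_options.set_preference("network.proxy.socks_port", PROXY_PORT)
firefox_options.set_preference("network.proxy.socks_version", 5)
# Resolve DNS through the proxy so lookups don't leak locally
firefox_options.set_preference("network.proxy.socks_remote_dns", True)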
Set Up an Unauthenticated Proxy in Edge
Edge is Chromium-based, so the configuration mirrors Chrome exactly.
from selenium import webdriver
from selenium.webdriver.edge.options import Options
PROXY = "185.199.229.156:7492"
edge_options = Options()
edge_options.add_argument(f'--proxy-server={PROXY}')
driver = webdriver.Edge(options=edge_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
Any Chromium argument that works in Chrome also works in Edge.
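For example, the headless and container flags from the Chrome section can be added to the Edge options unchanged:
edge_options.add_argument('--headless=new')
edge_options.add_argument('--no-sandbox')
edge_options.add_argument('--disable-dev-shm-usage')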
For authenticated proxies in Edge, use the Selenium Wire method covered in the next section.
Set Up an Authenticated Proxy With Selenium Wire
Most commercial proxy providers require authentication with username and password. Standard Selenium doesn't support this directly.
Selenium Wire extends Selenium to intercept browser requests and inject authentication headers.
Install Selenium Wire
pip install selenium-wire blinker==1.7.0
Important: Selenium Wire requires blinker==1.7.0 specifically. Newer versions of blinker break the library.
Configure Authenticated Proxy
from seleniumwire import webdriver
PROXY_USER = "your_username"
PROXY_PASS = "your_password"
PROXY_HOST = "proxy.provider.com"
PROXY_PORT = "8080"
proxy_options = {
'proxy': {
'http': f'http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
'https': f'http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
'no_proxy': 'localhost,127.0.0.1'
}
}
driver = webdriver.Chrome(seleniumwire_options=proxy_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
Selenium Wire handles authentication automatically by intercepting requests and adding credentials.
The no_proxy setting ensures local requests don't go through the proxy. This speeds up localhost debugging.
Performance Note
Selenium Wire adds 10-15% performance overhead because it intercepts every request. This trade-off is worth it for authenticated proxy support.
For high-volume scraping where every millisecond counts, consider using the Chrome extension method instead.
Set Up an Authenticated Proxy With a Chrome Extension
Creating a custom Chrome extension handles authentication without Selenium Wire's overhead. This method is more complex but faster.
Step 1: Create manifest.json
{
"version": "1.0.0",
"manifest_version": 3,
"name": "Proxy Auth Helper",
"permissions": [
"proxy",
"tabs",
"webRequest",
"webRequestAuthProvider"
],
"host_permissions": [
"<all_urls>"
],
"background": {
"service_worker": "background.js"
}
}
Step 2: Create background.js
const config = {
mode: "fixed_servers",
rules: {
singleProxy: {
scheme: "http",
host: "proxy.provider.com",
port: 8080
}
}
};
chrome.proxy.settings.set({value: config, scope: "regular"});
chrome.webRequest.onAuthRequired.addListener(
(details) => {
return {
authCredentials: {
username: "your_username",
password: "your_password"
}
};
},
{urls: ["<all_urls>"]},
["blocking"]
);
Step 3: Create Extension Programmatically in Python
For automation, generate the extension on the fly:
import zipfile
import os
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
def create_proxy_extension(proxy_host, proxy_port, proxy_user, proxy_pass):
"""Create a Chrome extension for proxy authentication."""
manifest_json = """
{
"version": "1.0.0",
"manifest_version": 3,
"name": "Proxy Auth",
"permissions": ["proxy", "tabs", "webRequest", "webRequestAuthProvider"],
"host_permissions": ["<all_urls>"],
"background": {"service_worker": "background.js"}
}
"""
background_js = f"""
const config = {{
mode: "fixed_servers",
rules: {{
singleProxy: {{
scheme: "http",
host: "{proxy_host}",
port: {proxy_port}
}}
}}
}};
chrome.proxy.settings.set({{value: config, scope: "regular"}});
chrome.webRequest.onAuthRequired.addListener(
(details) => {{
return {{
authCredentials: {{
username: "{proxy_user}",
password: "{proxy_pass}"
}}
}};
}},
{{urls: ["<all_urls>"]}},
["blocking"]
);
"""
extension_path = "proxy_auth_extension.zip"
with zipfile.ZipFile(extension_path, 'w') as zp:
zp.writestr("manifest.json", manifest_json)
zp.writestr("background.js", background_js)
return extension_path
# Usage
extension = create_proxy_extension(
"proxy.provider.com",
8080,
"username",
"password"
)
chrome_options = Options()
chrome_options.add_extension(extension)
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
# Clean up
os.remove(extension)
This method generates the extension dynamically with your credentials. The zip file contains both the manifest and background script.
Security note: Don't commit extensions with hardcoded credentials to version control.
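One way to keep credentials out of the repository is to read them from environment variables at runtime. A sketch (the variable names are illustrative, not a library convention):
import os

# PROXY_HOST, PROXY_PORT, PROXY_USER, and PROXY_PASS are
# illustrative environment variable names
extension = create_proxy_extension(
    os.environ["PROXY_HOST"],
    int(os.environ["PROXY_PORT"]),
    os.environ["PROXY_USER"],
    os.environ["PROXY_PASS"],
)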
Configure SOCKS5 Proxy
SOCKS5 proxies handle more protocols than HTTP proxies and offer better anonymity.
SOCKS5 in Chrome
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
SOCKS_PROXY = "185.199.229.156:1080"
chrome_options = Options()
chrome_options.add_argument(f'--proxy-server=socks5://{SOCKS_PROXY}')
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
The only difference from HTTP is adding the socks5:// prefix.
SOCKS5 With DNS Resolution Through Proxy
For complete anonymity, resolve DNS queries through the proxy too:
chrome_options.add_argument(f'--proxy-server=socks5://{SOCKS_PROXY}')
chrome_options.add_argument('--host-resolver-rules=MAP * ~NOTFOUND , EXCLUDE localhost')
This prevents DNS leaks that could reveal your real location.
SOCKS5 With Authentication Using Selenium Wire
from seleniumwire import webdriver
PROXY_USER = "username"
PROXY_PASS = "password"
PROXY_HOST = "socks.provider.com"
PROXY_PORT = "1080"
proxy_options = {
'proxy': {
'http': f'socks5://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
'https': f'socks5://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
'no_proxy': 'localhost,127.0.0.1'
}
}
driver = webdriver.Chrome(seleniumwire_options=proxy_options)
driver.get("https://httpbin.org/ip")
print(driver.page_source)
driver.quit()
SOCKS5 supports UDP traffic and offers better performance for certain use cases. HTTP/HTTPS proxies work fine for most web scraping scenarios.
Implement Proxy Rotation
Using a single proxy for multiple requests increases detection risk. Proxy rotation switches IPs between requests.
Basic Rotation With New Browser Instances
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import random
PROXY_LIST = [
"185.199.229.156:7492",
"194.126.37.94:8080",
"178.79.172.11:3128",
"165.232.73.180:8080",
"103.152.112.162:80"
]
def create_driver_with_proxy(proxy):
"""Create a Chrome driver with specified proxy."""
chrome_options = Options()
chrome_options.add_argument(f'--proxy-server={proxy}')
chrome_options.add_argument('--headless=new')
return webdriver.Chrome(options=chrome_options)
def scrape_with_rotation(urls):
"""Scrape multiple URLs using rotating proxies."""
results = []
for url in urls:
proxy = random.choice(PROXY_LIST)
print(f"Using proxy: {proxy}")
driver = create_driver_with_proxy(proxy)
try:
driver.get(url)
results.append({
'url': url,
'proxy': proxy,
'content': driver.page_source[:500]
})
except Exception as e:
print(f"Error with {proxy}: {e}")
finally:
driver.quit()
return results
# Usage
urls = [
"https://httpbin.org/ip",
"https://httpbin.org/headers",
"https://httpbin.org/user-agent"
]
data = scrape_with_rotation(urls)
This creates a new browser instance with a different proxy for each request.
The downside is performance overhead from starting new browsers repeatedly.
Round-Robin Rotation for Predictable Distribution
from itertools import cycle
PROXY_LIST = [
"185.199.229.156:7492",
"194.126.37.94:8080",
"178.79.172.11:3128"
]
proxy_cycle = cycle(PROXY_LIST)
def get_next_proxy():
"""Get next proxy in rotation."""
return next(proxy_cycle)
# Each call returns the next proxy in sequence
print(get_next_proxy()) # First proxy
print(get_next_proxy()) # Second proxy
print(get_next_proxy()) # Third proxy
print(get_next_proxy()) # Back to first proxy
Round-robin ensures even distribution across all proxies, preventing any single IP from getting burned.
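To put the cycle to work, pair it with the create_driver_with_proxy helper from the earlier rotation example. A sketch:
urls = ["https://httpbin.org/ip", "https://httpbin.org/headers"]
for url in urls:
    # Each request gets the next proxy in the cycle
    driver = create_driver_with_proxy(get_next_proxy())
    try:
        driver.get(url)
        print(driver.page_source[:200])
    finally:
        driver.quit()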
Dynamic Mid-Session Proxy Switching
Standard Selenium locks proxy settings at browser startup. Selenium Wire allows changing proxies mid-session without restarting the browser.
from seleniumwire import webdriver
from selenium.webdriver.common.by import By
PROXIES = [
{
'http': 'http://user1:pass1@proxy1.com:8080',
'https': 'http://user1:pass1@proxy1.com:8080'
},
{
'http': 'http://user2:pass2@proxy2.com:8080',
'https': 'http://user2:pass2@proxy2.com:8080'
}
]
# Start with first proxy
driver = webdriver.Chrome(
seleniumwire_options={'proxy': PROXIES[0]}
)
# First request with proxy 1
driver.get("https://httpbin.org/ip")
ip1 = driver.find_element(By.TAG_NAME, 'body').text
print(f"First IP: {ip1}")
# Switch to second proxy mid-session
driver.proxy = PROXIES[1]
# Second request with proxy 2
driver.get("https://httpbin.org/ip")
ip2 = driver.find_element(By.TAG_NAME, 'body').text
print(f"Second IP: {ip2}")
driver.quit()
This technique is powerful for scraping sessions that need different IPs without the overhead of restarting browsers.
Caveat: Some websites track browser fingerprints beyond IP. Switching proxies mid-session might not fool sophisticated anti-bot systems.
Combine Proxies With Undetected ChromeDriver
Standard ChromeDriver gets detected by anti-bot systems. Undetected ChromeDriver patches detection vectors.
Install Undetected ChromeDriver
pip install undetected-chromedriver
Basic Setup With Proxy
import undetected_chromedriver as uc
PROXY = "185.199.229.156:7492"
options = uc.ChromeOptions()
options.add_argument(f'--proxy-server={PROXY}')
driver = uc.Chrome(options=options)
driver.get("https://nowsecure.nl")
# Take screenshot to verify bypass
driver.save_screenshot('nowsecure_result.png')
driver.quit()
Authenticated Proxy With Undetected ChromeDriver and Selenium Wire
Combine both libraries for maximum stealth with authenticated proxies:
from seleniumwire import undetected_chromedriver as uc
PROXY_USER = "username"
PROXY_PASS = "password"
PROXY_HOST = "proxy.provider.com"
PROXY_PORT = "8080"
proxy_options = {
'proxy': {
'http': f'http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
'https': f'http://{PROXY_USER}:{PROXY_PASS}@{PROXY_HOST}:{PROXY_PORT}',
'no_proxy': 'localhost,127.0.0.1'
}
}
driver = uc.Chrome(seleniumwire_options=proxy_options)
# Now you have both stealth and authenticated proxy support
driver.get("https://www.cloudflare.com")
driver.save_screenshot('cloudflare_result.png')
driver.quit()
Important: Import undetected_chromedriver from seleniumwire, not separately. This ensures proper integration.
Add Random Delays for Human-Like Behavior
import undetected_chromedriver as uc
import time
import random
def human_delay():
"""Add random delay to mimic human behavior."""
time.sleep(random.uniform(1.5, 4.0))
options = uc.ChromeOptions()
options.add_argument('--proxy-server=185.199.229.156:7492')
driver = uc.Chrome(options=options)
urls = [
"https://example1.com",
"https://example2.com",
"https://example3.com"
]
for url in urls:
driver.get(url)
human_delay() # Random pause between requests
# Extract data here
driver.quit()
Random delays between 1.5-4 seconds mimic natural browsing patterns.
Combine Proxies With Nodriver
Nodriver is the successor to Undetected ChromeDriver. It uses a custom CDP implementation for better stealth.
Install Nodriver
pip install nodriver
Basic Proxy Setup With Nodriver
import nodriver as nd
async def main():
browser = await nd.start(
browser_args=[
'--proxy-server=185.199.229.156:7492'
]
)
page = await browser.get("https://httpbin.org/ip")
content = await page.get_content()
print(content)
await browser.stop()
if __name__ == '__main__':
nd.loop().run_until_complete(main())
Nodriver is fully asynchronous, making it faster for concurrent scraping.
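A sketch of that concurrency, launching one proxied browser per task with asyncio.gather (the proxy addresses are placeholders):
import asyncio
import nodriver as nd

async def fetch_ip(proxy):
    # Each task runs its own browser with its own proxy
    browser = await nd.start(browser_args=[f'--proxy-server={proxy}'])
    page = await browser.get("https://httpbin.org/ip")
    content = await page.get_content()
    await browser.stop()
    return content

async def main():
    results = await asyncio.gather(
        fetch_ip("proxy1.example:8080"),  # placeholder
        fetch_ip("proxy2.example:8080"),  # placeholder
    )
    for result in results:
        print(result[:200])

if __name__ == '__main__':
    nd.loop().run_until_complete(main())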
Nodriver With Multiple Pages and Rotation
import nodriver as nd
import random
PROXIES = [
"proxy1.com:8080",
"proxy2.com:8080",
"proxy3.com:8080"
]
async def scrape_with_proxy(url, proxy):
browser = await nd.start(
browser_args=[f'--proxy-server={proxy}']
)
page = await browser.get(url)
content = await page.get_content()
await browser.stop()
return content
async def main():
urls = [
"https://httpbin.org/ip",
"https://httpbin.org/headers"
]
for url in urls:
proxy = random.choice(PROXIES)
print(f"Scraping {url} with {proxy}")
content = await scrape_with_proxy(url, proxy)
print(content[:200])
if __name__ == '__main__':
nd.loop().run_until_complete(main())
Note: Nodriver's authenticated proxy support is still maturing. For production use with authenticated proxies, stick with Selenium Wire or the Chrome extension method.
Build a Production-Ready Proxy Manager
Here's a complete proxy management class for production scraping:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.common.exceptions import WebDriverException
import random
import time
from typing import List, Optional
from dataclasses import dataclass
@dataclass
class ProxyConfig:
host: str
port: int
username: Optional[str] = None
password: Optional[str] = None
protocol: str = "http"
@property
def url(self) -> str:
if self.username and self.password:
return f"{self.protocol}://{self.username}:{self.password}@{self.host}:{self.port}"
return f"{self.host}:{self.port}"
@property
def simple_url(self) -> str:
return f"{self.host}:{self.port}"
class ProxyManager:
def __init__(self, proxies: List[ProxyConfig]):
self.proxies = proxies
self.working_proxies = proxies.copy()
self.failed_proxies = []
def get_random_proxy(self) -> Optional[ProxyConfig]:
"""Get a random working proxy."""
if not self.working_proxies:
return None
return random.choice(self.working_proxies)
def mark_failed(self, proxy: ProxyConfig):
"""Mark a proxy as failed."""
if proxy in self.working_proxies:
self.working_proxies.remove(proxy)
self.failed_proxies.append(proxy)
print(f"Proxy {proxy.host}:{proxy.port} marked as failed")
    def test_proxy(self, proxy: ProxyConfig, timeout: int = 10) -> bool:
        """Test if a proxy is working."""
        chrome_options = Options()
        chrome_options.add_argument(f'--proxy-server={proxy.simple_url}')
        chrome_options.add_argument('--headless=new')
        try:
            driver = webdriver.Chrome(options=chrome_options)
            # The page-load timeout is set on the driver, not on Options
            driver.set_page_load_timeout(timeout)
            driver.get("https://httpbin.org/ip")
            body = driver.find_element(By.TAG_NAME, 'body').text
            driver.quit()
            # Any response body from httpbin means the proxy routed the request
            return bool(body.strip())
        except Exception as e:
            print(f"Proxy test failed: {e}")
            return False
def validate_all(self):
"""Test all proxies and update working list."""
print(f"Testing {len(self.proxies)} proxies...")
self.working_proxies = []
self.failed_proxies = []
for proxy in self.proxies:
if self.test_proxy(proxy):
self.working_proxies.append(proxy)
print(f"✓ {proxy.host}:{proxy.port} working")
else:
self.failed_proxies.append(proxy)
print(f"✗ {proxy.host}:{proxy.port} failed")
print(f"\nWorking: {len(self.working_proxies)}/{len(self.proxies)}")
class SeleniumScraper:
def __init__(self, proxy_manager: ProxyManager):
self.proxy_manager = proxy_manager
self.driver = None
self.current_proxy = None
def create_driver(self, proxy: Optional[ProxyConfig] = None) -> webdriver.Chrome:
"""Create a new Chrome driver with optional proxy."""
chrome_options = Options()
chrome_options.add_argument('--headless=new')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
if proxy:
chrome_options.add_argument(f'--proxy-server={proxy.simple_url}')
self.current_proxy = proxy
return webdriver.Chrome(options=chrome_options)
def scrape(self, url: str, max_retries: int = 3) -> Optional[str]:
"""Scrape a URL with automatic proxy rotation on failure."""
for attempt in range(max_retries):
proxy = self.proxy_manager.get_random_proxy()
if not proxy:
print("No working proxies available")
return None
try:
print(f"Attempt {attempt + 1}: Using {proxy.host}:{proxy.port}")
if self.driver:
self.driver.quit()
self.driver = self.create_driver(proxy)
self.driver.get(url)
return self.driver.page_source
except WebDriverException as e:
print(f"Failed with proxy {proxy.host}:{proxy.port}: {e}")
self.proxy_manager.mark_failed(proxy)
if self.driver:
try:
self.driver.quit()
                    except Exception:
pass
return None
def close(self):
"""Clean up driver."""
if self.driver:
self.driver.quit()
# Usage example
if __name__ == "__main__":
# Define your proxy list
proxies = [
ProxyConfig("185.199.229.156", 7492),
ProxyConfig("194.126.37.94", 8080),
ProxyConfig("178.79.172.11", 3128),
]
# Initialize manager and validate proxies
manager = ProxyManager(proxies)
manager.validate_all()
# Create scraper and fetch data
scraper = SeleniumScraper(manager)
urls = [
"https://httpbin.org/ip",
"https://httpbin.org/headers"
]
for url in urls:
content = scraper.scrape(url)
if content:
print(f"Successfully scraped {url}")
print(content[:300])
print("-" * 50)
scraper.close()
This production-ready class handles proxy validation, automatic failover, and retry logic.
Debug Common Proxy Issues
407 Proxy Authentication Required
This error means your credentials are incorrect or the proxy doesn't recognize them.
Solutions:
- Verify username and password are correct
- Check if your IP needs whitelisting with the proxy provider
- Use Selenium Wire for authenticated proxies instead of Chrome's --proxy-server flag
ERR_PROXY_CONNECTION_FAILED
The proxy server is unreachable.
Solutions:
- Test the proxy with curl first:
curl -x http://185.199.229.156:7492 https://httpbin.org/ip
- Check if the proxy IP and port are correct
- Verify the proxy service is online and accepting connections
ERR_TUNNEL_CONNECTION_FAILED
This happens with HTTPS sites when the proxy doesn't support CONNECT tunneling.
Solutions:
- Switch to a different proxy that supports HTTPS tunneling
- Use HTTP-only scraping if possible
- Try a SOCKS5 proxy instead
Proxy Not Changing Between Requests
Selenium locks proxy settings at browser startup.
Wrong approach:
driver = webdriver.Chrome(options=chrome_options)
for url in urls:
# Changing proxy here doesn't work
driver.get(url)
Correct approach:
for url in urls:
    # Build new options with a different proxy each iteration
    chrome_options = Options()
    chrome_options.add_argument(f'--proxy-server={get_next_proxy()}')
    driver = webdriver.Chrome(options=chrome_options)
    driver.get(url)
    driver.quit()  # Must restart the browser for the new proxy to apply
Or use Selenium Wire's dynamic proxy switching feature.
SSL Certificate Errors
Some proxies use self-signed certificates that Chrome rejects.
Solution:
chrome_options.add_argument('--ignore-certificate-errors')
(The --ignore-ssl-errors flag seen in older guides is a PhantomJS option, not a Chromium switch.)
Warning: Only use this flag in development. It makes your scraper vulnerable to MITM attacks.
Compare Proxy Methods
| Method | Authentication | Setup Complexity | Performance | Best For |
|---|---|---|---|---|
| ChromeOptions | No | Simple | Fast | Free proxies, testing |
| Selenium Wire | Yes | Medium | Moderate | Most use cases |
| Chrome Extension | Yes | Complex | Fast | Production systems |
| Server-side Rotation | Yes | Simple | Fastest | High-volume scraping |
| Nodriver | Limited | Medium | Fast | Anti-bot bypass |
Recommendations:
- Testing/Development: Use ChromeOptions with free proxies
- Production scraping: Use Selenium Wire for simplicity or Chrome extension for performance
- Anti-bot sites: Combine Undetected ChromeDriver or Nodriver with residential proxies
- High volume: Use server-side rotating proxies from providers like Roundproxies
Verify Your Proxy Setup
Always test your proxy configuration before running large scraping jobs:
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
import json
import time
def test_proxy_complete(proxy_address: str) -> dict:
"""Comprehensive proxy test."""
chrome_options = Options()
chrome_options.add_argument(f'--proxy-server={proxy_address}')
chrome_options.add_argument('--headless=new')
results = {
'proxy': proxy_address,
'ip_check': False,
'https_support': False,
'response_time': None,
'detected_ip': None
}
try:
        start = time.time()
driver = webdriver.Chrome(options=chrome_options)
# Test IP
driver.get("https://httpbin.org/ip")
response = driver.find_element(By.TAG_NAME, 'body').text
results['response_time'] = round(time.time() - start, 2)
results['ip_check'] = True
try:
data = json.loads(response)
results['detected_ip'] = data.get('origin', 'Unknown')
        except json.JSONDecodeError:
results['detected_ip'] = 'Parse error'
# Test HTTPS
driver.get("https://www.google.com")
results['https_support'] = True
driver.quit()
except Exception as e:
results['error'] = str(e)
return results
# Test multiple proxies
proxies = [
"185.199.229.156:7492",
"194.126.37.94:8080"
]
print("Testing proxies...\n")
for proxy in proxies:
result = test_proxy_complete(proxy)
print(f"Proxy: {result['proxy']}")
print(f" IP Check: {'✓' if result['ip_check'] else '✗'}")
print(f" HTTPS: {'✓' if result['https_support'] else '✗'}")
print(f" Detected IP: {result.get('detected_ip', 'N/A')}")
print(f" Response Time: {result.get('response_time', 'N/A')}s")
if 'error' in result:
print(f" Error: {result['error']}")
print()
This comprehensive test checks IP masking, HTTPS support, and response time for each proxy.
Conclusion
Setting up proxies in Selenium requires different approaches depending on your browser and authentication needs.
Key takeaways:
- Chrome and Edge use the --proxy-server argument for unauthenticated proxies
- Firefox uses the Proxy class or preferences
- Authenticated proxies need Selenium Wire or Chrome extensions
- Proxy rotation prevents IP bans by distributing requests across multiple IPs
- Selenium Wire enables mid-session proxy changes without browser restarts
- Combine proxies with Undetected ChromeDriver or Nodriver for anti-bot bypass
For high-volume scraping in 2026, use commercial residential proxies with server-side rotation. They handle the complexity while you focus on extracting data.
Test your proxies before running production scrapes to avoid wasted time on dead connections.
FAQ
Can I use free proxies with Selenium?
Yes, but free proxies are unreliable and often blocked. They work for testing but fail quickly under load. Commercial residential proxies offer 99%+ uptime and are worth the investment for production scraping.
How many proxies do I need for web scraping?
Start with 10-20 proxies for small projects. Scale to 100+ for large-scale scraping. More proxies mean lower request frequency per IP, reducing detection risk.
Does Selenium support proxy authentication natively?
No. You need the Selenium Wire library or a custom Chrome extension to handle authenticated proxies in Selenium.
What's the difference between HTTP and SOCKS5 proxies?
HTTP proxies only handle web traffic (HTTP/HTTPS). SOCKS5 supports all protocols including FTP, email, and UDP. Use HTTP/HTTPS for web scraping unless you need protocol flexibility or better anonymity.
How do I rotate proxies without restarting the browser?
Use Selenium Wire. Set driver.proxy = new_proxy_dict to change proxies mid-session. Standard Selenium locks proxy settings at browser startup and requires a restart.
Which is better: Selenium Wire or Chrome extension for authenticated proxies?
Selenium Wire is easier to set up and maintain. Chrome extensions offer better performance (no interception overhead) but require more initial configuration. Use Selenium Wire for development and Chrome extensions for production where performance matters.
Can I use Selenium proxies with headless mode?
Yes. Add --headless=new to your Chrome options along with the proxy argument. Both work together without issues.
Why is my proxy getting blocked quickly?
Several reasons: too many requests per minute, datacenter IP ranges that are pre-blocked, or fingerprinting detection. Use residential proxies, add random delays between requests, and combine with Undetected ChromeDriver for better success rates.