Proxies route your browser automation traffic through intermediary servers, masking your IP address and helping you avoid blocks, access geo-restricted content, and distribute requests across multiple IPs.

In this guide, we'll show you how to configure proxies in Playwright across different scenarios—from basic setup to advanced rotation strategies.

Why use proxies with Playwright?

When you're scraping at scale or testing from different locations, websites can detect and block your real IP address. Proxies solve this by:

  • Avoiding IP bans: Distribute requests across multiple IPs to stay under rate limits
  • Accessing geo-restricted content: Test how your site appears to users in different countries
  • Bypassing anti-bot measures: Residential proxies make your traffic look like real users
  • Testing from multiple locations simultaneously: Run parallel sessions with different IP addresses

The trick is knowing when to use proxies at the browser level versus the context level, and how to handle authentication without exposing credentials in your codebase.

Basic proxy setup

Playwright supports HTTP, HTTPS, and SOCKS5 proxies. The simplest approach is configuring the proxy when launching the browser—this applies to all pages and contexts.

Here's the basic setup in Node.js:

const { chromium } = require('playwright');

(async () => {
  const browser = await chromium.launch({
    proxy: {
      server: 'http://proxy-server.com:8080'
    }
  });
  
  const page = await browser.newPage();
  await page.goto('https://httpbin.org/ip');
  console.log(await page.content());
  await browser.close();
})();

And in Python:

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(
        proxy={
            'server': 'http://proxy-server.com:8080'
        }
    )
    page = browser.new_page()
    page.goto('https://httpbin.org/ip')
    print(page.content())
    browser.close()

This routes all traffic through the specified proxy server. But what if your proxy requires authentication?

Proxy authentication

Most premium proxies require a username and password. Playwright makes this straightforward:

const browser = await chromium.launch({
  proxy: {
    server: 'http://proxy-server.com:8080',
    username: 'your_username',
    password: 'your_password'
  }
});

Don't hardcode credentials. Use environment variables instead:

require('dotenv').config();

const browser = await chromium.launch({
  proxy: {
    server: process.env.PROXY_SERVER,
    username: process.env.PROXY_USERNAME,
    password: process.env.PROXY_PASSWORD
  }
});

Your .env file would look like:

PROXY_SERVER=http://proxy-server.com:8080
PROXY_USERNAME=your_username
PROXY_PASSWORD=your_password

Converting standard proxy URLs

Many proxy providers give you credentials in the format http://username:password@host:port. Here's a utility function to convert this to Playwright's format:

function convertProxyFormat(proxyUrl) {
  const url = new URL(proxyUrl);
  return {
    server: `${url.protocol}//${url.host}`,
    // The URL parser keeps credentials percent-encoded, so decode them
    // in case the username or password contains special characters
    username: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password)
  };
}

// Usage
const proxyOptions = convertProxyFormat(process.env.PROXY_URL);
const browser = await chromium.launch({ proxy: proxyOptions });

This keeps your code clean and prevents accidentally committing credentials to version control.

SOCKS5 proxies

SOCKS5 proxies work at a lower level than HTTP proxies and can handle any type of traffic. Playwright supports them, but with a caveat: it doesn't support SOCKS5 proxy authentication.

const browser = await chromium.launch({
  proxy: {
    server: 'socks5://proxy-server.com:1080'
  }
});

If you need SOCKS5 with authentication, you'll hit a wall with Chromium. The browser itself doesn't support SOCKS5 proxy authentication natively. Your options:

  1. Use HTTP/HTTPS proxies instead (most reliable)
  2. Set up a local proxy relay that handles authentication (see the sketch below)
  3. Use Chrome extensions (hacky, but works)

For most scraping use cases, stick with HTTP/HTTPS proxies—they have better support and compatibility.
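
If you really do need authenticated SOCKS5, option 2 above is the most practical workaround. Here's a minimal sketch of a local relay using the proxy-chain npm package; treat it as an illustration rather than a drop-in solution, and check the package's documentation for SOCKS upstream support in your version:

// Sketch of option 2 using the proxy-chain package (npm install proxy-chain).
// Whether a socks5:// upstream is accepted depends on your proxy-chain version;
// the same pattern works for authenticated http:// upstreams.
const { chromium } = require('playwright');
const proxyChain = require('proxy-chain');

(async () => {
  // Start a local, unauthenticated proxy that forwards to the authenticated upstream
  const localProxyUrl = await proxyChain.anonymizeProxy(
    'socks5://your_username:your_password@proxy-server.com:1080'
  );

  const browser = await chromium.launch({
    proxy: { server: localProxyUrl }
  });

  const page = await browser.newPage();
  await page.goto('https://httpbin.org/ip');
  console.log(await page.content());

  await browser.close();
  await proxyChain.closeAnonymizedProxy(localProxyUrl, true);
})();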

Context-level proxies

Here's where Playwright gets powerful. Instead of setting one proxy for the entire browser, you can assign different proxies to different contexts. This is huge for efficiency.

A browser context is like an incognito window—isolated cookies, storage, and cache. By using contexts, you can run multiple sessions with different proxies in a single browser instance, saving RAM and CPU.

const browser = await chromium.launch(); // No proxy here

// Create two contexts with different proxies
const context1 = await browser.newContext({
  proxy: {
    server: 'http://proxy1.com:8080',
    username: 'user1',
    password: 'pass1'
  }
});

const context2 = await browser.newContext({
  proxy: {
    server: 'http://proxy2.com:8080',
    username: 'user2',
    password: 'pass2'
  }
});

const page1 = await context1.newPage();
const page2 = await context2.newPage();

// These pages use different proxies
await page1.goto('https://httpbin.org/ip');
await page2.goto('https://httpbin.org/ip');

This approach is more resource-efficient than launching multiple browser instances. Playwright is memory-intensive, so minimizing browser instances matters when you're running at scale.

Important: Firefox has a known bug where context-level proxies can get overwritten. If you're using Firefox, stick to browser-level proxies or use Chromium instead.

Proxy rotation strategies

The real power of proxies comes from rotation—switching IPs between requests to avoid detection. Let's look at a few approaches.

Rotating on browser launch

The simplest method: create a proxy pool and randomly select one for each browser session.

const proxyPool = [
  { server: 'http://proxy1.com:8080', username: 'user1', password: 'pass1' },
  { server: 'http://proxy2.com:8080', username: 'user2', password: 'pass2' },
  { server: 'http://proxy3.com:8080', username: 'user3', password: 'pass3' }
];

function getRandomProxy() {
  return proxyPool[Math.floor(Math.random() * proxyPool.length)];
}

async function scrapeWithRotation() {
  const proxy = getRandomProxy();
  const browser = await chromium.launch({ proxy });
  
  const page = await browser.newPage();
  await page.goto('https://example.com');
  
  // Do your scraping
  
  await browser.close();
}

// Run multiple times with different proxies
for (let i = 0; i < 10; i++) {
  await scrapeWithRotation();
}

This works, but launching browsers is slow. You're paying a performance penalty for each rotation.

Rotating with contexts (better)

Use a single browser with multiple contexts, each using a different proxy:

const browser = await chromium.launch();

async function scrapeWithContext(url, proxyIndex) {
  const proxy = proxyPool[proxyIndex % proxyPool.length];
  const context = await browser.newContext({ proxy });
  const page = await context.newPage();
  
  await page.goto(url);
  const data = await page.content();
  
  await context.close();
  return data;
}

// Scrape multiple URLs with rotating proxies
const urls = ['https://example1.com', 'https://example2.com', 'https://example3.com'];
const results = await Promise.all(
  urls.map((url, i) => scrapeWithContext(url, i))
);

await browser.close();

This is faster because you're reusing the browser instance while still getting IP rotation benefits.
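
One caveat: Promise.all opens a context for every URL at once. For longer URL lists you'll usually want to cap concurrency. Here's a minimal sketch that reuses scrapeWithContext and works through the list in fixed-size batches (the batch size of 5 is arbitrary):

async function scrapeInBatches(urls, batchSize = 5) {
  const results = [];
  for (let i = 0; i < urls.length; i += batchSize) {
    const batch = urls.slice(i, i + batchSize);
    // Each batch runs concurrently; batches run one after another
    const batchResults = await Promise.all(
      batch.map((url, j) => scrapeWithContext(url, i + j))
    );
    results.push(...batchResults);
  }
  return results;
}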

Provider-managed rotation

Premium proxy services such as Bright Data and Oxylabs offer rotating proxies, where the provider handles rotation automatically. You connect to a single endpoint, and the service rotates IPs behind the scenes.

const browser = await chromium.launch({
  proxy: {
    server: 'http://rotating-endpoint.provider.com:8080',
    username: 'customer-CUSTOMER_ID-session-random',
    password: 'your_password'
  }
});

The session-random portion of the username tells the provider to assign a new IP for each request. Check your provider's documentation for the exact syntax; it varies between services.
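
The opposite case also comes up: sometimes you want to keep the same IP across a multi-page flow. Many providers support this through sticky sessions, typically by pinning a session ID in the username. The format below is purely illustrative; your provider's syntax will differ:

const browser = await chromium.launch({
  proxy: {
    server: 'http://rotating-endpoint.provider.com:8080',
    // Illustrative only: a fixed session ID keeps the same exit IP,
    // while 'session-random' rotates it (exact syntax varies by provider)
    username: 'customer-CUSTOMER_ID-session-abc123',
    password: 'your_password'
  }
});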

Bypassing proxies for specific domains

Sometimes you want to use a proxy for most traffic but bypass it for certain domains (like localhost or internal services):

const browser = await chromium.launch({
  proxy: {
    server: 'http://proxy-server.com:8080',
    bypass: 'localhost,127.0.0.1,*.internal.com'
  }
});

The bypass parameter accepts a comma-separated list of domains. Use * for wildcard matching.

Handling proxy failures with retries

Proxies fail. Even premium ones. Your code needs to handle this gracefully.

async function scrapeWithRetry(url, maxRetries = 3) {
  let attempt = 0;
  
  while (attempt < maxRetries) {
    const proxy = getRandomProxy();
    const browser = await chromium.launch({ proxy });
    
    try {
      const page = await browser.newPage();
      await page.goto(url, { timeout: 30000 });
      const content = await page.content();
      await browser.close();
      return content;
    } catch (error) {
      console.log(`Attempt ${attempt + 1} failed: ${error.message}`);
      await browser.close();
      attempt++;
      
      if (attempt >= maxRetries) {
        throw new Error(`Failed after ${maxRetries} attempts`);
      }
      
      // Wait before retrying
      await new Promise(resolve => setTimeout(resolve, 1000 * attempt));
    }
  }
}

This pattern handles timeouts, connection failures, and other proxy issues by rotating to a different proxy and retrying.
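
You can go a step further and drop proxies that fail repeatedly, so the pool converges on known-good servers. A minimal sketch, with a made-up reportFailure helper and an arbitrary threshold of three failures:

// Hypothetical helper: remove a proxy from the pool after repeated failures
const failureCounts = new Map();

function reportFailure(proxy, maxFailures = 3) {
  const count = (failureCounts.get(proxy.server) || 0) + 1;
  failureCounts.set(proxy.server, count);
  if (count >= maxFailures) {
    const index = proxyPool.findIndex(p => p.server === proxy.server);
    if (index !== -1) proxyPool.splice(index, 1);
  }
}

Calling reportFailure(proxy) inside the catch block above, right before retrying, keeps the rotation pointed at healthy proxies.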

Reducing resource usage

Playwright browsers are resource-heavy. When using proxies, you can optimize by:

Blocking unnecessary resources

const context = await browser.newContext({ proxy });

await context.route('**/*.{png,jpg,jpeg,gif,svg,css,woff,woff2}', route => route.abort());

const page = await context.newPage();
await page.goto('https://example.com');

This blocks images, CSS, and fonts, reducing bandwidth and speeding up page loads. Useful when you only need text content.
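
Extension-based globs miss resources served from extensionless URLs. An alternative is to filter on the request's resource type, which Playwright reports as categories like 'image', 'stylesheet', and 'font'. A sketch:

const blockedTypes = new Set(['image', 'stylesheet', 'font', 'media']);

await context.route('**/*', route => {
  // resourceType() categorizes requests regardless of the URL's extension
  if (blockedTypes.has(route.request().resourceType())) {
    route.abort();
  } else {
    route.continue();
  }
});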

Using headless mode

Headless browsers use less memory:

const browser = await chromium.launch({
  headless: true,
  proxy: { server: 'http://proxy-server.com:8080' }
});

For debugging, set headless: false to see what's happening.
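
If you want to slow things down while watching a proxied run, you can combine headful mode with the slowMo launch option, which delays each Playwright action; the 250 ms value below is arbitrary:

const browser = await chromium.launch({
  headless: false,
  slowMo: 250, // delay each action by 250 ms so you can follow along
  proxy: { server: 'http://proxy-server.com:8080' }
});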

Advanced: Request interception with proxies

Playwright lets you intercept and modify requests. Combined with proxies, this is powerful for debugging and customization.

const page = await browser.newPage();

// Log all requests to see what's being proxied
page.on('request', request => {
  console.log('>>', request.method(), request.url());
});

page.on('response', response => {
  console.log('<<', response.status(), response.url());
});

await page.goto('https://example.com');

You can also modify requests:

await page.route('**/*', route => {
  // request.headers() returns header names in lowercase
  const headers = route.request().headers();
  headers['user-agent'] = 'CustomBot/1.0';
  route.continue({ headers });
});

This lets you customize request headers, cookies, or even mock responses while routing through a proxy.
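
Mocking works the same way: route.fulfill() short-circuits the request and returns a canned response without ever touching the network or the proxy. The /api/config path below is just a placeholder:

// Return a canned JSON response for a (hypothetical) API endpoint
await page.route('**/api/config', route => {
  route.fulfill({
    status: 200,
    contentType: 'application/json',
    body: JSON.stringify({ featureFlags: { newLayout: false } })
  });
});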

Testing proxies

Before scraping with a proxy, verify it's working:

async function testProxy(proxy) {
  const browser = await chromium.launch({ proxy });
  const page = await browser.newPage();
  
  try {
    await page.goto('https://httpbin.org/ip', { timeout: 10000 });
    const content = await page.content();
    console.log('Proxy IP:', content);
    await browser.close();
    return true;
  } catch (error) {
    console.log('Proxy failed:', error.message);
    await browser.close();
    return false;
  }
}

// Test all proxies in your pool
const workingProxies = [];
for (const proxy of proxyPool) {
  if (await testProxy(proxy)) {
    workingProxies.push(proxy);
  }
}

This filters out dead proxies before your scraping job starts.
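
The loop above is sequential and easy to follow. If the pool is large, you can test proxies concurrently instead; just keep in mind that each test launches its own browser, so this is memory-hungry:

// Test all proxies at once and keep only the ones that respond
const results = await Promise.all(proxyPool.map(proxy => testProxy(proxy)));
const workingProxies = proxyPool.filter((_, i) => results[i]);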

Common proxy issues and fixes

Authentication fails

Double-check your username and password. Some providers use special formats:

// Some providers need this format
username: 'customer-CUSTOMER_ID-country-US'

Connection timeouts

Increase the timeout and add retry logic:

await page.goto('https://example.com', { 
  timeout: 60000,  // 60 seconds
  waitUntil: 'domcontentloaded'  // Don't wait for all resources
});

Proxy detected/blocked

Use residential proxies instead of datacenter proxies. Residential IPs come from real devices and are harder to detect:

const browser = await chromium.launch({
  proxy: {
    server: 'http://residential-proxy.com:8080',
    username: 'your_username',
    password: 'your_password'
  }
});

Different results than expected

Some websites serve different content based on geolocation. Make sure your proxy is in the right region:

// Example: US-based proxy
username: 'customer-CUSTOMER_ID-country-US-state-CA-city-LosAngeles'

Choosing the right proxy type

Not all proxies are created equal:

Datacenter proxies: Fast and cheap, but easily detected. Good for non-critical scraping or when you need speed over stealth.

Residential proxies: IPs from real homes/devices. Much harder to detect and block. More expensive but essential for scraping protected sites.

Mobile proxies: IPs from mobile carriers. Most expensive, but virtually undetectable. Use for high-value targets.

For most projects, residential proxies hit the sweet spot of reliability and cost.

Putting it all together

Here's a complete example combining everything we've covered:

require('dotenv').config();
const { chromium } = require('playwright');

// Proxy pool from environment
const proxyPool = [
  {
    server: process.env.PROXY1_SERVER,
    username: process.env.PROXY1_USERNAME,
    password: process.env.PROXY1_PASSWORD
  },
  {
    server: process.env.PROXY2_SERVER,
    username: process.env.PROXY2_USERNAME,
    password: process.env.PROXY2_PASSWORD
  }
];

function getRandomProxy() {
  return proxyPool[Math.floor(Math.random() * proxyPool.length)];
}

async function scrapeWithRetry(url, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const proxy = getRandomProxy();
    const browser = await chromium.launch({
      headless: true,
      proxy
    });
    
    try {
      const page = await browser.newPage();
      
      // Block unnecessary resources
      await page.route('**/*.{png,jpg,jpeg,gif}', route => route.abort());
      
      // Navigate with timeout
      await page.goto(url, {
        timeout: 30000,
        waitUntil: 'domcontentloaded'
      });
      
      // Extract data
      const data = await page.evaluate(() => {
        return document.querySelector('h1')?.textContent;
      });
      
      await browser.close();
      return data;
      
    } catch (error) {
      console.log(`Attempt ${attempt + 1} failed: ${error.message}`);
      await browser.close();
      
      if (attempt === maxRetries - 1) {
        throw error;
      }
      
      // Exponential backoff
      await new Promise(resolve => 
        setTimeout(resolve, 1000 * Math.pow(2, attempt))
      );
    }
  }
}

// Run scraper
(async () => {
  try {
    const result = await scrapeWithRetry('https://example.com');
    console.log('Scraped data:', result);
  } catch (error) {
    console.error('Scraping failed:', error);
  }
})();

This handles proxy rotation, retries, resource blocking, and proper error handling—everything you need for production-ready proxy usage.

Final thoughts

Proxies are essential for serious browser automation and web scraping. The key takeaways:

  • Use environment variables for credentials
  • Context-level proxies are more efficient than multiple browsers
  • Always implement retry logic
  • Test proxies before running production jobs
  • Choose the right proxy type for your use case
  • Block unnecessary resources to save bandwidth

Start simple with a single proxy, then scale up to rotation as your needs grow. Playwright makes proxy management straightforward once you understand the different configuration levels and patterns.

For large-scale scraping, consider managed proxy services that handle rotation automatically—they're worth the cost when your time matters more than the proxy fees.