How to Use Patchright: Make Your Web Scraper Undetectable

Patchright is a patched version of Playwright that bypasses modern bot detection systems by fixing critical CDP (Chrome DevTools Protocol) leaks and browser fingerprinting issues.

Instead of leaving telltale signs like the Runtime.enable command that screams "I'm a bot!", Patchright executes JavaScript in isolated contexts, making your scraper virtually indistinguishable from a real browser.

Getting your web scraper blocked by Cloudflare, DataDome, or other anti-bot systems? You're not alone—these systems can detect vanilla Playwright instantly through CDP leaks. But here's the thing: switching from Playwright to Patchright takes literally one line of code and suddenly your bot passes detection tests that would normally get you blocked in seconds.

In this post, I'll show you exactly how to set up and use Patchright to bypass even the most aggressive anti-bot systems, including a sneaky trick to handle those pesky closed Shadow DOM elements that normal scrapers can't touch.

Why You Can Trust This Method

Problem: Regular Playwright gets detected immediately because it sends the Runtime.enable CDP command, which anti-bot systems like Cloudflare and DataDome specifically look for. This single command is like waving a giant flag saying "Hey, I'm automated!"

Solution: Patchright patches Playwright at the source level to avoid using Runtime.enable entirely, executing JavaScript in isolated ExecutionContexts instead.

Proof: With proper configuration, Patchright passes all major detection tests:

  • ✅ Brotector (with CDP-Patches)
  • ✅ Cloudflare
  • ✅ DataDome
  • ✅ CreepJS (0% headless score)
  • ✅ Fingerprint.com
  • ✅ BrowserScan

Step 1: Install Patchright (The Right Way)

First things first—Patchright only works with Chromium-based browsers. Forget about Firefox or WebKit support; they're not happening.

For Python Users:

# Install Patchright from PyPI
pip install patchright

# Install Chrome (not Chromium!) for better stealth
patchright install chrome

For Node.js Users:

# Install via npm
npm install patchright

# Install Chrome browser
npx patchright install chrome

Pro Tip: Always use Google Chrome instead of Chromium. Real users don't browse with Chromium, and anti-bot systems know this. Using channel="chrome" makes you blend in with the crowd.

Step 2: Configure for Maximum Stealth

Here's where most people mess up—they think just installing Patchright is enough. Wrong. You need the right configuration to be truly undetectable.

Python Configuration:

from patchright.sync_api import sync_playwright

with sync_playwright() as p:
    # Launch with stealth configuration
    browser = p.chromium.launch_persistent_context(
        user_data_dir="/tmp/patchright_profile",  # Use a real profile
        channel="chrome",  # Use real Chrome, not Chromium
        headless=False,    # Never use headless for critical scraping
        no_viewport=True,  # Disable viewport to use native resolution
        # CRITICAL: Do NOT add custom headers or user_agent here!
    )
    
    page = browser.new_page()
    page.goto("https://example.com")
    # Your scraping logic here
    browser.close()

Node.js Configuration:

const { chromium } = require('patchright');

(async () => {
    const browser = await chromium.launchPersistentContext('/tmp/patchright_profile', {
        channel: 'chrome',
        headless: false,
        viewport: null,
        // Don't add custom browser headers or userAgent
    });
    
    const page = await browser.newPage();
    await page.goto('https://example.com');
    // Your scraping logic here
    await browser.close();
})();

The Hidden Tweaks Patchright Makes:

Behind the scenes, Patchright automatically applies these command-line flags:

  • --disable-blink-features=AutomationControlled (removes navigator.webdriver)
  • Removes --enable-automation flag
  • Removes --disable-popup-blocking to prevent crashes
  • Removes --disable-component-update to avoid detection
  • Enables extensions and default apps (like a real browser)

Warning: The console API is completely disabled in Patchright. If you need logging, use a JavaScript logger library instead of console.log().
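Since the in-page console is gone, one simple workaround is to return values from `page.evaluate()` and log them on the Python side. Here's a sketch — `log_page_value` is a hypothetical helper of mine, not part of Patchright's API:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("scraper")

def log_page_value(page, expression, label):
    # Evaluate an expression inside the page and log the result from
    # Python, since console.log() is unavailable in Patchright
    value = page.evaluate(expression)
    log.info("%s = %r", label, value)
    return value

# Usage inside a Patchright session:
#   log_page_value(page, "document.title", "title")
```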

Step 3: Execute JavaScript Without Getting Caught

This is where Patchright's magic happens. Instead of using the detectable Runtime.enable command, it executes JavaScript in isolated contexts.

# Python example with isolated context
result = page.evaluate("""
    () => {
        // This runs in an isolated context, undetectable by anti-bot systems
        return document.querySelector('.price').innerText;
    }
""", isolated_context=True)  # Default is True

Switching to Main Context (When Necessary):

Sometimes you need access to the main window context (rare but happens):

# Access main context when absolutely necessary
result = page.evaluate("""
    () => {
        // This runs in main context - use sparingly!
        return window.customGlobalVariable;
    }
""", isolated_context=False)

The Shadow DOM Superpower:

Here's something vanilla Playwright can't do—Patchright can interact with closed Shadow DOM elements:

# This just works in Patchright, even with closed shadow roots!
# (Playwright-style CSS selectors pierce shadow DOM automatically)
element = page.locator('custom-element .shadow-content')
element.click()

# XPath works too
shadow_element = page.locator('//custom-element//div[@class="shadow-content"]')

Step 4: Test Your Setup Against Detection Systems

Don't just hope your setup works—verify it. Here's how to test against real detection systems:

Quick Detection Test:

import time
from patchright.sync_api import sync_playwright

def test_detection():
    with sync_playwright() as p:
        browser = p.chromium.launch_persistent_context(
            user_data_dir="/tmp/test_profile",
            channel="chrome",
            headless=False,
            no_viewport=True
        )
        
        page = browser.new_page()
        
        # Test 1: CreepJS (should show 0% headless)
        page.goto("https://abrahamjuliot.github.io/creepjs/")
        time.sleep(5)
        
        # Test 2: BrowserScan CDP detection
        page.goto("https://www.browserscan.net/")
        time.sleep(5)
        
        # Test 3: Rebrowser bot detector
        page.goto("https://bot.rebrowser.net/")
        time.sleep(5)
        
        input("Check the results and press Enter to close...")
        browser.close()

test_detection()

What to Look For:

  • CreepJS: Headless score should be 0%
  • BrowserScan: No CDP detection warning
  • Rebrowser: Should pass all tests except potentially timing-based ones
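Beyond eyeballing those test pages, you can run a quick programmatic sanity check. This is a sketch — `quick_stealth_check` is my own helper, not a Patchright API — and it reads only a few basic fingerprint signals, so it's no substitute for the full detection suites above:

```python
def quick_stealth_check(page):
    # Read a few basic fingerprint signals from the page; a stealthy
    # setup should report webdriver as False or None
    return page.evaluate("""
        () => ({
            webdriver: navigator.webdriver,
            plugins: navigator.plugins.length,
            languages: navigator.languages.length,
        })
    """)

# Usage inside a Patchright session:
#   signals = quick_stealth_check(page)
#   assert not signals["webdriver"], "navigator.webdriver is exposed!"
```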

Advanced Bypass Technique:

For sites with extreme detection (looking at you, Cloudflare), combine Patchright with request interception:

async def stealth_request(page, url):
    # Intercept and modify requests for extra stealth
    async def add_client_hints(route):
        await route.continue_(headers={
            **route.request.headers,
            'sec-ch-ua': '"Chromium";v="122", "Not(A:Brand";v="24", "Google Chrome";v="122"',
            'sec-ch-ua-mobile': '?0',
            'sec-ch-ua-platform': '"Windows"'
        })

    await page.route('**/*', add_client_hints)
    await page.goto(url)

Common Pitfalls to Avoid

  1. Using Headless Mode: Just don't. Modern detection can spot headless browsers from a mile away.
  2. Custom User Agents: Adding custom user agents in the browser launch options is a dead giveaway. Let Patchright handle it.
  3. Ignoring Mouse Movement: Real users move their mouse. Simulate it:

page.mouse.move(100, 100)
page.mouse.move(200, 300, steps=10)

  4. Fast Navigation: Don't immediately start clicking and scraping. Add realistic delays:

import random

page.wait_for_timeout(random.randint(2000, 5000))
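Those last two habits are worth bundling into small helpers you can sprinkle through a script. This is a sync-API sketch — `human_pause` and `human_move` are hypothetical names of mine, not Patchright functions:

```python
import random
import time

def human_pause(min_ms=2000, max_ms=5000):
    # Sleep a random, human-looking interval and return the delay used
    delay_ms = random.randint(min_ms, max_ms)
    time.sleep(delay_ms / 1000)
    return delay_ms

def human_move(page, x, y):
    # Approach the target in several small steps instead of teleporting
    page.mouse.move(x, y, steps=random.randint(8, 15))

# Usage inside a Patchright session:
#   human_move(page, 200, 300)
#   human_pause()
#   element.click()
```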

Final Thoughts

Patchright solves the biggest problem with web scraping today—the CDP detection that gets most bots instantly blocked. By executing JavaScript in isolated contexts and patching critical browser leaks, it makes your scraper virtually undetectable to current anti-bot systems.

Remember: Patchright gives you the tools to bypass detection, but you still need to act human. Combine it with proper delays, mouse movements, and realistic browsing patterns for maximum effectiveness.

Marius Bernard

Marius Bernard is a Product Advisor, Technical SEO, & Brand Ambassador at Roundproxies. He was the lead author for the SEO chapter of the 2024 Web and a reviewer for the 2023 SEO chapter.