How to Use Nodriver for Web Scraping in 7 Steps

Modern websites have become battlegrounds between scrapers and anti-bot systems. If you've tried scraping a Cloudflare-protected site with vanilla Selenium, you know the pain of endless captchas and blocked requests.

Nodriver changes the game entirely. It's the official successor to Undetected ChromeDriver, built from the ground up to bypass anti-bot detection systems.

In this guide, you'll learn how to use nodriver for web scraping, from basic setup to advanced techniques that actually work against protected websites.

What Is Nodriver?

Nodriver is an asynchronous Python library that automates Chrome browsers without depending on Selenium or ChromeDriver binaries. It communicates directly with browsers using a custom implementation of the Chrome DevTools Protocol, which makes it significantly harder for anti-bot systems to detect.

The library was created by the same developer behind Undetected ChromeDriver. It strips away the detectable components that make traditional browser automation tools easy to spot.

Unlike Selenium-based solutions, nodriver doesn't leave telltale fingerprints in the browser's JavaScript environment. This means sites using Cloudflare, Imperva, or hCaptcha have a much harder time identifying your scraper as a bot.

Why Nodriver Beats Traditional Scraping Tools

Standard web scraping tools have a detection problem. Selenium and regular ChromeDriver modify browser properties that anti-bot systems check constantly.

These modifications include navigator.webdriver being set to true, missing browser plugins, and automated testing flags. Sites detect these instantly.

Nodriver eliminates this entire category of detection vectors. It launches a real browser instance without the automation markers.
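
You can verify this yourself with a quick check. The sketch below is a minimal example (example.com is just a placeholder page): it reads navigator.webdriver from the live browser, a flag that a stock Selenium session typically reports as true.

import nodriver as uc

async def check_fingerprint():
    browser = await uc.start()
    page = await browser.get('https://example.com')
    
    # Selenium sessions typically report True here; a nodriver session should not
    webdriver_flag = await page.evaluate('navigator.webdriver')
    print('navigator.webdriver:', webdriver_flag)
    
    await browser.close()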

The asynchronous architecture also makes nodriver faster. You can run multiple scraping tasks concurrently without blocking your main thread. This matters when you're extracting data from hundreds of pages.

Here's what makes nodriver stand out:

  • No Selenium dependency
  • No ChromeDriver binary required
  • Automatic cleanup of browser profiles
  • Built-in stealth features
  • Full Chrome DevTools Protocol access
  • Async-first design for better performance

Step 1: Install Nodriver and Set Up Your Environment

Getting nodriver running takes just a few minutes. You need Python 3.8 or higher and a Chrome-based browser installed on your system.

Start by creating a project directory and virtual environment:

mkdir nodriver-scraper
cd nodriver-scraper
python -m venv venv

Activate the virtual environment. On Windows, run venv\Scripts\activate. On Mac or Linux, use source venv/bin/activate.

Now install nodriver with pip:

pip install nodriver

That's it for dependencies. Nodriver automatically detects your Chrome installation and handles browser management.
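
Nodriver normally finds the browser on its own, but if Chrome lives in a non-standard location you can point at the binary explicitly with the browser_executable_path parameter. The path below is just an example:

import nodriver as uc

async def main():
    browser = await uc.start(
        browser_executable_path='/usr/bin/google-chrome'  # example path
    )
    page = await browser.get('https://example.com')
    await browser.close()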

Important: Don't name your Python file nodriver.py. This creates an import conflict that will crash your script with cryptic errors.

Create a file called scraper.py to test your installation:

import nodriver as uc

async def main():
    browser = await uc.start()
    page = await browser.get('https://www.google.com')
    print("Nodriver is working!")
    await browser.close()

if __name__ == '__main__':
    uc.loop().run_until_complete(main())

Run it with python scraper.py. If a browser window opens and you see the success message, your setup is complete.

Step 2: Configure Browser Options for Stealth

Default nodriver settings work well for basic scraping. But protected sites require more careful configuration.

The browser runs in non-headless mode by default. This is intentional. Headless browsers have different fingerprints that anti-bot systems detect easily.

Here's how to configure nodriver with stealth-optimized settings:

import nodriver as uc

async def main():
    browser = await uc.start(
        headless=False,
        browser_args=[
            '--disable-blink-features=AutomationControlled',
            '--disable-dev-shm-usage',
            '--no-sandbox'
        ],
        lang="en-US"
    )
    
    page = await browser.get('https://nowsecure.nl')
    await page.sleep(5)
    await page.save_screenshot('stealth_test.png')
    await browser.close()

if __name__ == '__main__':
    uc.loop().run_until_complete(main())

Let me explain each configuration option.

The headless=False parameter keeps the browser visible. Some sites specifically check for headless mode indicators. For production scraping on servers, you'll need to use a virtual display with Xvfb.

The --disable-blink-features=AutomationControlled argument removes a key automation marker. Chrome normally sets this when controlled by automation tools.

The --disable-dev-shm-usage and --no-sandbox flags help with stability on Linux systems, especially in Docker containers.

Setting lang="en-US" ensures consistent locale settings. Mismatched language settings between your IP location and browser can trigger suspicion.
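
About the virtual display mentioned under headless=False: the usual pattern on a server is to either launch the script with the xvfb-run utility or start an X display from Python. The sketch below assumes Xvfb and the pyvirtualdisplay package (pip install pyvirtualdisplay) are installed:

import nodriver as uc
from pyvirtualdisplay import Display

async def scrape_on_server():
    # Start an invisible X display so Chrome can run "headed" on a server
    display = Display(visible=0, size=(1920, 1080))
    display.start()
    
    try:
        browser = await uc.start(headless=False)
        page = await browser.get('https://example.com')
        await page.sleep(3)
        print(await page.evaluate('document.title'))
        await browser.close()
    finally:
        display.stop()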

For even more control, use the Config object:

import nodriver as uc
from nodriver import Config

async def main():
    config = Config(
        headless=False,
        browser_args=[
            '--disable-blink-features=AutomationControlled'
        ],
        lang="en-US"
    )
    
    browser = await uc.start(config=config)
    # Your scraping code here
    await browser.close()

This approach keeps your configuration organized when settings grow complex.

Step 3: Navigate Pages and Extract Basic Data

With your browser configured, you can start scraping. Nodriver provides intuitive methods for page navigation and element selection.

Here's a complete example that extracts book data from a practice site:

import nodriver as uc
import json

async def scrape_books():
    browser = await uc.start()
    page = await browser.get('https://books.toscrape.com')
    
    # Wait for content to load
    await page.sleep(2)
    
    books = []
    
    # Find all book articles
    book_elements = await page.select_all('article.product_pod')
    
    for book in book_elements:
        # Extract title from the h3 > a element
        title_element = await book.query_selector('h3 > a')
        title = title_element.attrs.get('title', 'Unknown')
        
        # Extract price
        price_element = await book.query_selector('p.price_color')
        price = price_element.text if price_element else 'N/A'
        
        # Extract rating class
        rating_element = await book.query_selector('p.star-rating')
        rating_class = rating_element.attrs.get('class', '')
        
        books.append({
            'title': title,
            'price': price,
            'rating': rating_class
        })
    
    # Save results
    with open('books.json', 'w') as f:
        json.dump(books, f, indent=2)
    
    print(f"Scraped {len(books)} books")
    await browser.close()

if __name__ == '__main__':
    uc.loop().run_until_complete(scrape_books())

The code follows a clear pattern. First, launch the browser and navigate to your target URL using browser.get().

Then use page.select_all() to find multiple elements matching a CSS selector. For single elements, use page.select() or page.query_selector().

Extracting text content requires accessing the .text property directly. This is an attribute, not a method, so don't use await.

For HTML attributes like href or title, access them through the .attrs dictionary. This returns all attributes as key-value pairs.

Common mistake: Using await with .text will raise an error. The text property is synchronous even though nodriver is async-based.

Step 4: Handle Dynamic Content and JavaScript-Rendered Pages

Many sites load content dynamically with JavaScript. The HTML that arrives from the server is just a skeleton that gets populated after JavaScript executes.

Nodriver handles this naturally since it runs a full browser. But you need to wait for the dynamic content to actually load.

Here are three approaches for handling dynamic content:

Approach 1: Fixed Sleep

The simplest method is waiting a set number of seconds:

async def scrape_dynamic_page():
    browser = await uc.start()
    page = await browser.get('https://example.com/dynamic')
    
    # Wait 3 seconds for JS to load
    await page.sleep(3)
    
    content = await page.get_content()
    print(content)
    await browser.close()

This works but wastes time when content loads faster. It also fails if content takes longer than expected.

Approach 2: Wait for Specific Element

Better practice is waiting for a specific element to appear:

async def scrape_with_wait():
    browser = await uc.start()
    page = await browser.get('https://example.com/dynamic')
    
    # select() waits for the data container to appear, up to the timeout
    data_element = await page.select('#data-container', timeout=10)
    content = data_element.text
    print(content)
    await browser.close()

In nodriver, select() itself polls until the specified element exists in the DOM and then returns it, so no separate wait call is needed. The timeout parameter prevents infinite waiting by raising a timeout error if the element never appears. If you only need to block until something shows up, page.wait_for() also accepts a selector or a text snippet.

Approach 3: Execute JavaScript

For complex scenarios, run custom JavaScript to check page state:

async def scrape_with_js_check():
    browser = await uc.start()
    page = await browser.get('https://example.com/dynamic')
    
    # Wait until a JS variable indicates loading is complete
    while True:
        is_loaded = await page.evaluate('window.dataLoaded === true')
        if is_loaded:
            break
        await page.sleep(0.5)
    
    # Now extract the data
    data = await page.evaluate('JSON.stringify(window.pageData)')
    print(data)
    await browser.close()

The evaluate() method runs JavaScript in the browser context. This gives you access to any JavaScript variables or functions on the page.

Step 5: Implement Pagination and Multi-Page Scraping

Real scraping jobs rarely involve single pages. You need to handle pagination to collect complete datasets.

Here's a robust pagination pattern for nodriver:

import nodriver as uc
import json

async def scrape_all_pages():
    browser = await uc.start()
    all_data = []
    current_url = 'https://quotes.toscrape.com'
    
    while current_url:
        page = await browser.get(current_url)
        await page.sleep(2)
        
        # Extract quotes from current page
        quotes = await page.select_all('div.quote')
        
        for quote in quotes:
            text_element = await quote.query_selector('span.text')
            author_element = await quote.query_selector('small.author')
            
            quote_text = text_element.text if text_element else ''
            author = author_element.text if author_element else ''
            
            # Get tags
            tag_elements = await quote.query_selector_all('a.tag')
            tags = [tag.text for tag in tag_elements]
            
            all_data.append({
                'quote': quote_text,
                'author': author,
                'tags': tags
            })
        
        # Find next page link
        next_button = await page.select('li.next > a')
        
        if next_button:
            # attributes is a flat [name, value, name, value, ...] list
            attrs = next_button.attributes
            for i in range(0, len(attrs) - 1, 2):
                if attrs[i] == 'href':
                    next_path = attrs[i + 1]
                    current_url = f'https://quotes.toscrape.com{next_path}'
                    break
        else:
            current_url = None
            print("Reached last page")
    
    # Save all collected data
    with open('all_quotes.json', 'w', encoding='utf-8') as f:
        json.dump(all_data, f, ensure_ascii=False, indent=2)
    
    print(f"Scraped {len(all_data)} quotes total")
    await browser.close()

if __name__ == '__main__':
    uc.loop().run_until_complete(scrape_all_pages())

The pagination logic checks for a "next" button after scraping each page. When found, it extracts the href attribute and continues. When the button disappears, scraping ends.

Working with attributes in nodriver: The attributes property returns a flat array, not a dictionary. You need to iterate through it to find specific attribute values. This is admittedly awkward compared to other libraries.
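
If you extract attributes often, a small helper keeps that iteration in one place. This assumes the interleaved name/value layout described above:

def get_attribute(element, name, default=None):
    # element.attributes is a flat [name, value, name, value, ...] list
    attrs = element.attributes or []
    for i in range(0, len(attrs) - 1, 2):
        if attrs[i] == name:
            return attrs[i + 1]
    return default

# Usage inside the pagination loop:
# next_path = get_attribute(next_button, 'href')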

For infinite scroll pages, use a different approach:

async def scrape_infinite_scroll():
    browser = await uc.start()
    page = await browser.get('https://example.com/infinite-scroll')
    
    previous_height = 0
    items = []
    
    while True:
        # Scroll to bottom
        await page.scroll_down(1000)
        await page.sleep(2)
        
        # Check if we've reached the end
        current_height = await page.evaluate('document.body.scrollHeight')
        
        if current_height == previous_height:
            print("Reached end of content")
            break
        
        previous_height = current_height
        
        # Extract newly loaded items
        new_items = await page.select_all('.item:not(.scraped)')
        
        for item in new_items:
            # Mark as scraped to avoid duplicates
            await item.apply('(el) => el.classList.add("scraped")')
            text = item.text
            items.append(text)
    
    print(f"Collected {len(items)} items")
    await browser.close()

This scrolls the page repeatedly, checking if new content loads. When the page height stops changing, all content has loaded.

Step 6: Add Proxy Support to Avoid IP Blocks

Scraping at scale requires rotating IP addresses. Even with nodriver's stealth features, hitting a site repeatedly from one IP will get you blocked.

Nodriver supports proxy configuration through browser arguments. For basic proxy usage:

import nodriver as uc

async def scrape_with_proxy():
    proxy_url = 'http://proxy.example.com:8080'
    
    browser = await uc.start(
        browser_args=[f'--proxy-server={proxy_url}']
    )
    
    page = await browser.get('https://httpbin.org/ip')
    await page.sleep(2)
    
    # Check your IP
    content = await page.get_content()
    print(content)
    
    await browser.close()

if __name__ == '__main__':
    uc.loop().run_until_complete(scrape_with_proxy())

Authenticated proxies are trickier. Chrome ignores credentials embedded in the --proxy-server URL, so the familiar http://user:pass@host:port format triggers an authentication prompt instead of logging in silently. Pass only the scheme, host, and port through the flag:

async def scrape_with_auth_proxy():
    # Credentials cannot go in this URL; Chrome only reads the scheme, host, and port
    proxy_url = 'http://proxy.example.com:8080'
    
    browser = await uc.start(
        browser_args=[f'--proxy-server={proxy_url}']
    )
    
    # Rest of your scraping code
    await browser.close()

The credentials themselves have to come from somewhere else: either a proxy that authenticates by IP allowlist (no username or password needed) or an answer to the proxy's auth challenge sent over the Chrome DevTools Protocol, as sketched below.
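
Here is a minimal sketch of the DevTools approach using the protocol's Fetch domain. Treat it as a starting point rather than a drop-in solution: the handler plumbing can vary between nodriver releases, and the host, username, and password are placeholders.

import nodriver as uc

async def start_with_auth_proxy(proxy_host, username, password):
    browser = await uc.start(
        browser_args=[f'--proxy-server={proxy_host}']
    )
    tab = browser.main_tab
    
    async def answer_auth(event):
        # Reply to the proxy's authentication challenge with our credentials
        await tab.send(uc.cdp.fetch.continue_with_auth(
            request_id=event.request_id,
            auth_challenge_response=uc.cdp.fetch.AuthChallengeResponse(
                response='ProvideCredentials',
                username=username,
                password=password
            )
        ))
    
    async def resume_request(event):
        # Fetch.enable pauses every request; let them continue unmodified
        await tab.send(uc.cdp.fetch.continue_request(request_id=event.request_id))
    
    tab.add_handler(uc.cdp.fetch.RequestPaused, resume_request)
    tab.add_handler(uc.cdp.fetch.AuthRequired, answer_auth)
    await tab.send(uc.cdp.fetch.enable(handle_auth_requests=True))
    
    return browser, tab

Call it once before navigating; every request the tab makes afterward authenticates against the proxy automatically.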

For serious scraping projects, you'll want rotating residential proxies. Services like Roundproxies.com offer residential and datacenter proxy pools that rotate automatically, making your scraper appear as thousands of different users.

Here's a pattern for rotating proxies between requests:

import nodriver as uc
import random

PROXY_LIST = [
    'http://user:pass@proxy1.example.com:8080',
    'http://user:pass@proxy2.example.com:8080',
    'http://user:pass@proxy3.example.com:8080',
]

async def scrape_with_rotation(urls):
    all_data = []
    
    for url in urls:
        # Pick random proxy
        proxy = random.choice(PROXY_LIST)
        
        browser = await uc.start(
            browser_args=[f'--proxy-server={proxy}']
        )
        
        try:
            page = await browser.get(url)
            await page.sleep(2)
            
            content = await page.get_content()
            all_data.append({'url': url, 'content': content})
            
        except Exception as e:
            print(f"Error scraping {url}: {e}")
        
        finally:
            await browser.close()
    
    return all_data

This creates a new browser instance with a different proxy for each URL. While slower than reusing browsers, it provides maximum IP diversity.

Pro tip: SOCKS5 proxies often work better than HTTP proxies for browser automation. They handle all traffic types cleanly without protocol issues.
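
The flag format is the same; only the scheme changes (host and port below are placeholders):

browser = await uc.start(
    browser_args=['--proxy-server=socks5://proxy.example.com:1080']
)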

Step 7: Handle Errors and Build Robust Scrapers

Production scrapers encounter errors constantly. Network timeouts, changed page structures, anti-bot blocks, and JavaScript errors all happen regularly.

Here's a robust scraping pattern with comprehensive error handling:

import asyncio
import nodriver as uc
import json
import logging
from datetime import datetime

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

async def robust_scrape(url, max_retries=3):
    """Scrape a URL with retries and error handling."""
    
    for attempt in range(max_retries):
        browser = None
        
        try:
            logger.info(f"Attempt {attempt + 1} for {url}")
            
            browser = await uc.start(
                browser_args=['--disable-blink-features=AutomationControlled']
            )
            
            page = await browser.get(url)
            await page.sleep(3)
            
            # Check for anti-bot detection
            page_content = await page.get_content()
            
            if 'captcha' in page_content.lower():
                logger.warning("CAPTCHA detected, retrying...")
                raise Exception("CAPTCHA detected")
            
            if 'access denied' in page_content.lower():
                logger.warning("Access denied, changing strategy...")
                raise Exception("Access denied")
            
            # Extract data
            title = await page.evaluate('document.title')
            
            # Take debug screenshot
            timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
            await page.save_screenshot(f'debug_{timestamp}.png')
            
            logger.info(f"Successfully scraped: {title}")
            
            return {
                'url': url,
                'title': title,
                'success': True,
                'timestamp': timestamp
            }
            
        except Exception as e:
            logger.error(f"Error on attempt {attempt + 1}: {str(e)}")
            
            if attempt < max_retries - 1:
                # Wait before retry with exponential backoff
                wait_time = 2 ** attempt
                logger.info(f"Waiting {wait_time} seconds before retry...")
                await asyncio.sleep(wait_time)
            
        finally:
            if browser:
                try:
                    await browser.close()
                except:
                    pass
    
    logger.error(f"All retries failed for {url}")
    return {
        'url': url,
        'success': False,
        'error': 'Max retries exceeded'
    }

async def main():
    urls = [
        'https://example.com/page1',
        'https://example.com/page2',
        'https://example.com/page3'
    ]
    
    results = []
    
    for url in urls:
        result = await robust_scrape(url)
        results.append(result)
    
    # Save results
    with open('scrape_results.json', 'w') as f:
        json.dump(results, f, indent=2)
    
    # Summary
    successful = sum(1 for r in results if r['success'])
    logger.info(f"Completed: {successful}/{len(urls)} successful")

if __name__ == '__main__':
    uc.loop().run_until_complete(main())

Key error handling patterns in this code:

The retry loop attempts each URL multiple times before giving up. Exponential backoff increases wait time between attempts, which helps when you're hitting rate limits.

Detection checks scan page content for common block indicators. When detected, the code retries instead of returning bad data.

Debug screenshots capture the page state on each attempt so you can see exactly what the scraper saw. This helps diagnose problems that only happen in production; move the save_screenshot() call into the except block if you only want captures on failure.

The finally block ensures browser cleanup happens regardless of success or failure. Leaked browser processes consume memory and can crash your server.

Logging provides visibility into what's happening during long scraping runs. Include timestamps and enough context to debug issues later.

Advanced Techniques for Complex Scraping Scenarios

Once you've mastered the basics, these advanced patterns will help you tackle more challenging sites.

Running Concurrent Scraping Tasks

The async architecture allows parallel scraping that dramatically speeds up large jobs:

import nodriver as uc
import asyncio

async def scrape_single_page(url, proxy=None):
    """Scrape one page independently."""
    browser_args = []
    if proxy:
        browser_args.append(f'--proxy-server={proxy}')
    
    browser = await uc.start(browser_args=browser_args)
    
    try:
        page = await browser.get(url)
        await page.sleep(2)
        title = await page.evaluate('document.title')
        return {'url': url, 'title': title, 'success': True}
    except Exception as e:
        return {'url': url, 'error': str(e), 'success': False}
    finally:
        await browser.close()

async def scrape_multiple_concurrent(urls, max_concurrent=5):
    """Scrape multiple URLs with controlled concurrency."""
    semaphore = asyncio.Semaphore(max_concurrent)
    
    async def limited_scrape(url):
        async with semaphore:
            return await scrape_single_page(url)
    
    tasks = [limited_scrape(url) for url in urls]
    results = await asyncio.gather(*tasks)
    return results

if __name__ == '__main__':
    urls = [f'https://example.com/page/{i}' for i in range(1, 21)]
    results = uc.loop().run_until_complete(scrape_multiple_concurrent(urls))
    print(f"Scraped {len([r for r in results if r['success']])} pages successfully")

The semaphore limits how many browsers run simultaneously. Running too many at once consumes excessive memory and triggers rate limits. Five concurrent browsers work well on most machines.

Intercepting Network Requests

Access the Chrome DevTools Protocol to monitor and modify network activity:

import nodriver as uc

async def scrape_with_network_monitoring():
    browser = await uc.start()
    page = await browser.get('about:blank')
    
    # Enable network monitoring via CDP
    await page.send(uc.cdp.network.enable())
    
    # Track all requests
    requests_made = []
    
    async def on_request(event):
        requests_made.append({
            'url': event.request.url,
            'method': event.request.method
        })
    
    page.add_handler(uc.cdp.network.RequestWillBeSent, on_request)
    
    # Navigate and let requests happen
    await page.get('https://example.com')
    await page.sleep(5)
    
    print(f"Captured {len(requests_made)} network requests")
    
    for req in requests_made[:10]:
        print(f"  {req['method']} {req['url'][:80]}")
    
    await browser.close()

if __name__ == '__main__':
    uc.loop().run_until_complete(scrape_with_network_monitoring())

Network interception helps debug what resources a page loads. You can identify API endpoints that return data in structured formats, often easier to parse than HTML.
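
If you want to go a step further and pull those API responses directly, the Network domain can hand you response bodies. This is a sketch: it assumes get_response_body returns the body together with a base64 flag, attribute names in the generated CDP wrappers can differ slightly between nodriver versions, and bodies are only retrievable while the page still holds them.

import json
import nodriver as uc

async def capture_json_responses(url):
    browser = await uc.start()
    page = await browser.get('about:blank')
    await page.send(uc.cdp.network.enable())
    
    json_request_ids = []
    
    async def on_response(event):
        # Remember responses that advertise themselves as JSON
        if 'application/json' in (event.response.mime_type or ''):
            json_request_ids.append(event.request_id)
    
    page.add_handler(uc.cdp.network.ResponseReceived, on_response)
    
    await page.get(url)
    await page.sleep(5)
    
    for request_id in json_request_ids:
        try:
            body, is_base64 = await page.send(
                uc.cdp.network.get_response_body(request_id=request_id)
            )
            if not is_base64:
                print(json.loads(body))
        except Exception as e:
            print(f"Could not read body for {request_id}: {e}")
    
    await browser.close()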

Handling Login-Protected Content

Many valuable datasets sit behind login walls. Here's a pattern for authenticated scraping:

import nodriver as uc
import os

COOKIES_FILE = 'session_cookies.dat'

async def login_and_save_session(username, password):
    """Perform login and save cookies for later use."""
    browser = await uc.start()
    page = await browser.get('https://example.com/login')
    
    await page.sleep(2)
    
    # Fill login form
    username_field = await page.select('input[name="username"]')
    await username_field.send_keys(username)
    
    password_field = await page.select('input[name="password"]')
    await password_field.send_keys(password)
    
    # Submit form
    submit_button = await page.select('button[type="submit"]')
    await submit_button.click()
    
    # Wait for redirect after successful login
    await page.sleep(5)
    
    # Save the session cookies to disk
    await browser.cookies.save(COOKIES_FILE)
    
    print("Login successful, cookies saved")
    await browser.close()

async def scrape_with_saved_session():
    """Use saved cookies to access protected content."""
    if not os.path.exists(COOKIES_FILE):
        print("No saved session found. Please login first.")
        return
    
    browser = await uc.start()
    page = await browser.get('https://example.com')
    
    # Load the saved cookies into the browser
    await browser.cookies.load(COOKIES_FILE)
    
    # Navigate to protected page
    await page.get('https://example.com/dashboard')
    await page.sleep(2)
    
    # Now scrape the protected content
    content = await page.get_content()
    print(f"Scraped {len(content)} characters of protected content")
    
    await browser.close()

Cookie persistence eliminates repeated logins. Save cookies after authenticating, then reload them in future sessions. This reduces requests and avoids triggering security alerts from frequent logins.

Extracting Data from Shadow DOM Elements

Modern web components use Shadow DOM, which hides elements from normal selectors. Access shadow content with JavaScript:

async def scrape_shadow_dom():
    browser = await uc.start()
    page = await browser.get('https://example.com/web-components')
    
    await page.sleep(3)
    
    # Pierce shadow DOM with JavaScript
    shadow_content = await page.evaluate('''
        (() => {
            const host = document.querySelector('custom-element');
            if (!host || !host.shadowRoot) return null;
            const inner = host.shadowRoot.querySelector('.inner-content');
            return inner ? inner.textContent : null;
        })()
    ''')
    
    print(f"Shadow DOM content: {shadow_content}")
    await browser.close()

Shadow DOM can't be accessed with regular CSS selectors. JavaScript executed via evaluate() can reach inside shadow roots to extract the hidden content.

Nodriver Limitations You Should Know

Nodriver is powerful but not perfect. Understanding its limitations helps you choose the right tool for each job.

Headless mode issues: As of late 2024, nodriver has problems running in headless mode. You'll likely encounter recursion errors when setting headless=True. For server deployments, use Xvfb to create a virtual display instead.

Limited page interactions: Some interaction methods documented in nodriver don't work reliably yet. The mouse_click() and click_mouse() methods have known issues. Stick to basic click() on elements.

Attribute extraction is clunky: Getting element attributes requires iterating through an array rather than accessing a dictionary. This is inconvenient compared to Selenium or Playwright.

Still detectable by some systems: While nodriver bypasses most anti-bot systems, advanced protections like PerimeterX and some Cloudflare configurations can still detect it. No tool provides 100% bypass rates.

Smaller community: Being newer than Selenium or Playwright, nodriver has fewer Stack Overflow answers, tutorials, and community support. You'll solve more problems by reading source code.

Alternatives to Nodriver

Different scraping scenarios call for different tools. Here's when to consider alternatives:

Playwright offers more mature features and better documentation. It supports Firefox and WebKit in addition to Chromium. Use Playwright when you need cross-browser testing or don't face aggressive anti-bot protection. The playwright-stealth plugin adds some anti-detection capabilities.

Selenium with undetected-chromedriver remains viable for simpler protection systems. It has more community support and integrates with existing Selenium codebases. However, it's slower than nodriver and more easily detected by modern anti-bot systems.

Raw HTTP requests with libraries like httpx work best for APIs and sites without JavaScript requirements. This approach is fastest and uses minimal resources. Add cloudscraper for basic Cloudflare bypass.

Commercial scraping APIs handle anti-bot bypass on their end. Services manage browser infrastructure, proxy rotation, and CAPTCHA solving. They cost more but save development time on protected sites.

For most scraping projects targeting protected sites, nodriver offers the best balance of stealth capability and development experience. Start with nodriver and move to alternatives only when hitting specific limitations.

Conclusion

Nodriver solves the fundamental problem of browser automation detection. By communicating directly with Chrome without traditional webdriver dependencies, it slips past anti-bot systems that block standard tools.

You've learned the complete workflow: installing nodriver, configuring stealth settings, extracting data, handling pagination, using proxies, and building robust error handling. These patterns form the foundation for any serious scraping project.

The key to successful scraping with nodriver is combining its technical capabilities with smart practices. Rotate proxies, respect rate limits, handle errors gracefully, and always test your selectors before running full scrapes.

Start with the basic examples in this guide, then adapt them to your specific targets. Every site presents unique challenges, but the patterns here will handle most situations you encounter.

FAQ

How is nodriver different from Selenium?

Nodriver communicates directly with Chrome without using ChromeDriver or Selenium. This removes automation markers that anti-bot systems detect. Selenium-based solutions modify browser properties that reveal automation, while nodriver maintains a clean browser fingerprint.

Can nodriver bypass all anti-bot systems?

No tool bypasses everything. Nodriver handles most protection from Cloudflare, Imperva, and hCaptcha effectively. However, advanced systems like PerimeterX and some custom implementations may still detect it. Combining nodriver with quality residential proxies improves success rates significantly.

Does nodriver work in headless mode?

Currently, headless mode has stability issues in nodriver. Running with headless=True often causes recursion errors. For server deployments without displays, use Xvfb to create a virtual display and run nodriver in non-headless mode.

How do I handle CAPTCHAs with nodriver?

Nodriver avoids triggering CAPTCHAs in most cases through its stealth features. When CAPTCHAs do appear, you'll need external solving services. Integrate a CAPTCHA solving API, detect the CAPTCHA type, send it for solving, and enter the solution using nodriver's form interaction methods.

Can I run multiple nodriver instances simultaneously?

Yes, nodriver's async design supports concurrent operations. You can run multiple browser instances in parallel using asyncio.gather(). Each instance should use different proxy configurations to avoid IP-based rate limits.