Cloudflare sits in front of a huge share of the websites you want to scrape. Your Python requests hit a wall there, returning 403 errors or challenge pages instead of data.

Cloudscraper solves this problem without launching heavy browser instances. It works like Python Requests but automatically handles Cloudflare's JavaScript challenges behind the scenes.

In this guide, you'll learn how to install Cloudscraper, configure advanced browser profiles, integrate proxies, handle v3 challenges, combine it with TLS fingerprinting tools, and troubleshoot the edge cases that trip up most scrapers.

What Is Cloudscraper and How Does It Bypass Cloudflare?

Cloudscraper is a Python library that bypasses Cloudflare's anti-bot protection by impersonating a real web browser. It uses optimized HTTP headers and a JavaScript interpreter to solve Cloudflare's "I'm Under Attack Mode" challenges automatically. The library waits approximately 5 seconds on your first request to complete the challenge, then reuses session cookies for subsequent requests without delay.

This approach works because Cloudflare's basic protection relies on JavaScript execution to verify real browsers. Cloudscraper runs these scripts using interpreters like Node.js or js2py, making your requests appear legitimate.

The enhanced 2025/2026 versions now include support for Cloudflare v2, v3, and even Turnstile challenges through third-party CAPTCHA solvers.

How Cloudflare Detects and Blocks Scrapers in 2026

Before diving into Cloudscraper configuration, understanding Cloudflare's detection layers helps you avoid common mistakes.

Cloudflare assigns each request a "bot score" based on multiple signals. Lower scores mean higher suspicion.

Detection Layer 1: IP Reputation

Requests from data center IPs, VPNs, or previously flagged addresses get scrutinized heavily. Residential IPs pass more easily.

Cloudflare maintains massive databases of known bot networks. They cross-reference your IP against historical abuse patterns.

Detection Layer 2: TLS Fingerprinting (JA3/JA4)

Cloudflare examines how your client establishes secure connections. Python's default TLS handshake differs from Chrome's, which raises red flags.

The JA3 fingerprint creates a hash of your TLS handshake parameters. If your headers claim you're Chrome but your JA3 says Python Requests, instant block.
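
You can observe this mismatch directly by asking a fingerprint echo service what it sees. A minimal sketch, assuming the browserleaks TLS endpoint (https://tls.browserleaks.com/json) is reachable and returns a ja3_hash field; curl_cffi is covered later in this guide:

import requests
from curl_cffi import requests as curl_requests

# Plain Requests exposes Python's default TLS handshake
plain = requests.get("https://tls.browserleaks.com/json").json()
print("requests JA3:", plain.get("ja3_hash"))

# curl_cffi sends a Chrome-like handshake instead
browser_like = curl_requests.get(
    "https://tls.browserleaks.com/json", impersonate="chrome"
).json()
print("curl_cffi JA3:", browser_like.get("ja3_hash"))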

Detection Layer 3: HTTP Headers

Missing or mismatched browser headers signal automation. A Chrome User-Agent with Firefox-specific headers gets blocked immediately.

Cloudflare checks header order, presence of specific headers, and consistency with the claimed browser.
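
To see exactly which headers Cloudscraper sends for a given profile, point it at an echo endpoint. A quick sketch using httpbin.org (the same service used in the verification step below):

import cloudscraper

scraper = cloudscraper.create_scraper(
    browser={'browser': 'chrome', 'platform': 'windows', 'desktop': True}
)

# httpbin echoes back the headers it received
response = scraper.get("https://httpbin.org/headers")
for name, value in response.json()["headers"].items():
    print(f"{name}: {value}")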

Detection Layer 4: JavaScript Challenges

Cloudflare injects scripts that require execution. Standard HTTP libraries can't run JavaScript, triggering the infamous "Checking your browser" page.

The v3 challenges now run in a JavaScript Virtual Machine with dynamic code generation.

Detection Layer 5: Request Patterns

Scraping at machine speeds or hitting endpoints in unusual sequences flags your session. Human-like timing matters.
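
A simple way to approximate human pacing is randomized jitter between requests, a pattern the production example at the end of this guide also uses. A minimal sketch (the bounds are arbitrary; tune them per site):

import random
import time

def human_pause(min_s=2.0, max_s=6.0):
    """Sleep for a random interval so request timing isn't machine-regular."""
    time.sleep(random.uniform(min_s, max_s))

for page_url in ["https://example.com/page1", "https://example.com/page2"]:
    # ... fetch page_url here ...
    human_pause()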

Installing Cloudscraper in 2026

Getting Cloudscraper running takes one command:

pip install cloudscraper

The library depends on Python Requests and js2py for JavaScript execution. Both install automatically.

Installing the Enhanced Version

For v3 challenge support and additional features, install the enhanced fork:

pip install cloudscraper25

This version includes Cloudflare v3 JavaScript VM challenge support, Turnstile challenge handling, and enhanced stealth mode.

Setting Up Node.js for Better Performance

For better JavaScript performance, install Node.js separately. Cloudscraper detects and uses it automatically:

# Ubuntu/Debian
sudo apt-get install nodejs

# macOS
brew install node

# Windows - download from nodejs.org

Node.js handles complex challenges faster than the pure Python js2py interpreter. This becomes critical for v2/v3 challenges.

Verify Your Installation

import cloudscraper

scraper = cloudscraper.create_scraper()
response = scraper.get("https://httpbin.org/headers")
print(response.status_code)

A 200 status confirms everything installed correctly.

Basic Usage: Your First Cloudscraper Request

Cloudscraper mirrors Python Requests syntax exactly. If you've used Requests before, you already know how to use Cloudscraper.

Simple GET Request

import cloudscraper

# Create scraper instance
scraper = cloudscraper.create_scraper()

# Make request exactly like requests.get()
response = scraper.get("https://example-cloudflare-site.com")

print(response.status_code)
print(response.text[:500])

The create_scraper() function returns a session object. Use it for all requests to maintain cookies and avoid repeating Cloudflare challenges.

POST Requests with Form Data

scraper = cloudscraper.create_scraper()

data = {"username": "user", "query": "search term"}
response = scraper.post("https://example.com/api", data=data)

print(response.json())

JSON Payloads

import cloudscraper

scraper = cloudscraper.create_scraper()

payload = {"key": "value", "nested": {"data": True}}
response = scraper.post(
    "https://example.com/api",
    json=payload  # json= sets the Content-Type header automatically
)

print(response.status_code)

The first request to any Cloudflare-protected site triggers a ~5 second delay while Cloudscraper solves the challenge. Subsequent requests use cached session cookies and complete instantly.
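
You can verify the caching behavior by timing two consecutive requests through the same session. A small sketch:

import time
import cloudscraper

scraper = cloudscraper.create_scraper()

start = time.perf_counter()
scraper.get("https://example-cloudflare-site.com")
print(f"First request (solves challenge): {time.perf_counter() - start:.1f}s")

start = time.perf_counter()
scraper.get("https://example-cloudflare-site.com")
print(f"Second request (cached cookies): {time.perf_counter() - start:.1f}s")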

Advanced Browser Profiles and User-Agents

Default settings work for many sites, but some require specific browser configurations to bypass Cloudflare successfully.

Configuring Browser Emulation

import cloudscraper

scraper = cloudscraper.create_scraper(
    browser={
        'browser': 'chrome',
        'platform': 'windows',
        'desktop': True,
        'mobile': False
    }
)

Available Browser Configuration Options

Parameter | Values | Default
--- | --- | ---
browser | chrome, firefox | Random
platform | linux, windows, darwin, android, ios | Random
desktop | True, False | True
mobile | True, False | False

Mobile Emulation Trick

Mobile emulation sometimes bypasses stricter desktop protections:

scraper = cloudscraper.create_scraper(
    browser={
        'browser': 'chrome',
        'platform': 'android',
        'desktop': False,
        'mobile': True
    }
)

This works because many sites have less aggressive protection on mobile views.

Custom User-Agent with Automatic Header Matching

scraper = cloudscraper.create_scraper(
    browser={
        'custom': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36'
    }
)

Cloudscraper attempts to match your custom string against known browser signatures. Matching signatures get appropriate headers and cipherSuite applied automatically.

Hidden Trick: Combining Platform and Browser for Edge Cases

Some sites block the most common browser configurations. Try unusual but valid pairings:

# Firefox on macOS (uncommon but valid)
scraper = cloudscraper.create_scraper(
    browser={
        'browser': 'firefox',
        'platform': 'darwin',  # macOS
        'desktop': True
    }
)

JavaScript Interpreter Selection

Cloudscraper supports multiple JavaScript engines for solving Cloudflare challenges. Your choice affects speed and compatibility.

Available Interpreters

js2py (Default): Pure Python implementation. No external dependencies but slower on complex challenges. Comes installed with Cloudscraper.

Node.js (Recommended): Fastest option. Handles modern JavaScript challenges better. Requires separate installation.

ChakraCore: Microsoft's JavaScript engine. Alternative when Node.js isn't available.

V8: Google's engine via v8eval Python module. Powerful but complex setup.

Specifying Your Interpreter

scraper = cloudscraper.create_scraper(interpreter='nodejs')

Automatic Fallback Strategy

Create a smart fallback system that tests different interpreters:

import cloudscraper

def create_robust_scraper(target_url):
    interpreters = ['nodejs', 'js2py', 'native']
    
    for interp in interpreters:
        try:
            scraper = cloudscraper.create_scraper(interpreter=interp)
            response = scraper.get(target_url, timeout=30)
            if response.status_code == 200:
                print(f"Success with {interp}")
                return scraper
        except Exception as e:
            print(f"Failed with {interp}: {e}")
            continue
    
    # Final fallback with defaults
    return cloudscraper.create_scraper()

Node.js handles most challenges best, but js2py occasionally works when Node.js doesn't.

Using Proxies with Cloudscraper

Proxy integration follows standard Python Requests patterns but with critical considerations for Cloudflare.

Basic Proxy Configuration

import cloudscraper

scraper = cloudscraper.create_scraper()

proxies = {
    'http': 'http://user:pass@proxy.example.com:8080',
    'https': 'http://user:pass@proxy.example.com:8080'
}

response = scraper.get("https://target-site.com", proxies=proxies)

Critical Rule: Session-Proxy Consistency

Keep the same proxy throughout your session. Cloudflare ties challenge solutions to specific IP addresses. Switching proxies mid-session triggers new challenges or blocks.
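
Because the scraper object behaves like a Requests session, you can pin the proxy once instead of passing it on every call. A sketch of a sticky-proxy session:

import cloudscraper

scraper = cloudscraper.create_scraper()

# Set once; every request through this session exits via the same IP
scraper.proxies = {
    'http': 'http://user:pass@proxy.example.com:8080',
    'https': 'http://user:pass@proxy.example.com:8080'
}

response = scraper.get("https://target-site.com")        # challenge solved via this proxy
response = scraper.get("https://target-site.com/page2")  # cookies reused, same proxy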

Rotating Proxies Correctly

For rotating proxies, create new scraper instances:

import cloudscraper

proxy_list = [
    'http://user:pass@proxy1.example.com:8080',
    'http://user:pass@proxy2.example.com:8080',
    'http://user:pass@proxy3.example.com:8080'
]

def scrape_with_rotation(target_url, proxy_list):
    for proxy in proxy_list:
        # Create NEW scraper instance for each proxy
        scraper = cloudscraper.create_scraper()
        proxies = {'http': proxy, 'https': proxy}
        
        try:
            response = scraper.get(target_url, proxies=proxies, timeout=30)
            if response.status_code == 200:
                return response
        except Exception as e:
            print(f"Proxy {proxy} failed: {e}")
            continue
    
    return None

SOCKS Proxy Support

# Requires: pip install pysocks
proxies = {
    'http': 'socks5://user:pass@proxy.example.com:1080',
    'https': 'socks5://user:pass@proxy.example.com:1080'
}

Why Residential Proxies Matter

Residential proxies from providers like Roundproxies.com work best for Cloudflare-protected sites. Data center IPs often get blocked regardless of your Cloudscraper configuration.

Cloudflare maintains lists of known data center IP ranges and applies extra scrutiny to requests from them.

Handling CAPTCHAs with Third-Party Solvers

When Cloudflare escalates to CAPTCHA challenges, Cloudscraper integrates with solving services.

2captcha Integration

scraper = cloudscraper.create_scraper(
    captcha={
        'provider': '2captcha',
        'api_key': 'your_2captcha_api_key'
    }
)

Supported CAPTCHA Services

  • 2captcha
  • anticaptcha
  • CapSolver
  • CapMonster Cloud
  • deathbycaptcha
  • 9kw

AntiCaptcha Example

scraper = cloudscraper.create_scraper(
    captcha={
        'provider': 'anticaptcha',
        'api_key': 'your_anticaptcha_key'
    }
)

Turnstile Challenge Support (2025/2026 Feature)

The enhanced Cloudscraper versions now support Cloudflare Turnstile:

# Using enhanced cloudscraper with Turnstile support
scraper = cloudscraper.create_scraper(
    captcha={
        'provider': '2captcha',
        'api_key': 'your_api_key'
    }
)

CAPTCHA challenges indicate heightened protection. Consider whether the site's terms allow automated access before proceeding.

Challenge Delay Configuration

Cloudflare's standard challenge requires approximately 5 seconds before submission. Cloudscraper handles this automatically, but you can override the timing.

Increasing Delay for Complex Challenges

scraper = cloudscraper.create_scraper(delay=10)

Increase delays when sites use extended challenge timers. Some implementations wait 10-15 seconds.

Risky: Shorter Delays

# Risky - might fail on some sites
scraper = cloudscraper.create_scraper(delay=3)

Shorter delays risk premature submission and failed challenges.

Debug Mode for Timing Issues

scraper = cloudscraper.create_scraper(debug=True)

This logs challenge detection and solving attempts to help troubleshoot failures.

Extracting and Reusing Session Cookies

For integration with other tools or to reuse sessions, extract Cloudscraper's session cookies.

Extracting Cookies

scraper = cloudscraper.create_scraper()
response = scraper.get("https://cloudflare-protected-site.com")

# Get cookies as dictionary
cookies = scraper.cookies.get_dict()
print(cookies)

# Get specific Cloudflare cookies
cf_clearance = cookies.get('cf_clearance')
print(f"Clearance cookie: {cf_clearance}")

Reusing Cookies with Standard Requests

The cf_clearance cookie proves you passed the challenge. Use it with the matching User-Agent in other HTTP clients:

import requests
import cloudscraper

# First, get cookies from cloudscraper
scraper = cloudscraper.create_scraper()
scraper.get("https://cloudflare-protected-site.com")

cookies = scraper.cookies.get_dict()
user_agent = scraper.headers['User-Agent']

# Now use with regular requests
session = requests.Session()
session.cookies.update(cookies)
session.headers.update({'User-Agent': user_agent})

# These requests work without Cloudscraper
response = session.get("https://cloudflare-protected-site.com/api/data")

Critical: Cloudflare validates that the cookie's User-Agent matches subsequent requests. Always pair extracted cookies with the exact User-Agent string.

Programmatic Token Extraction

import cloudscraper

tokens, user_agent = cloudscraper.get_tokens("https://target-site.com")
print(f"Cookies: {tokens}")
print(f"User-Agent: {user_agent}")

Or as a formatted cookie header:

cookie_string, user_agent = cloudscraper.get_cookie_string("https://target-site.com")
print(f"Cookie header: {cookie_string}")

Enhanced Stealth Mode (2025/2026 Feature)

The enhanced Cloudscraper versions include stealth features for better detection avoidance.

Enabling Stealth Mode

scraper = cloudscraper.create_scraper(
    browser='chrome',
    debug=True,
    # Enhanced stealth features (if using enhanced version)
)

What Stealth Mode Does

  • Human-like request timing with adaptive delays
  • Browser fingerprint resistance
  • Behavioral analysis resistance
  • Mouse movement and typing pattern simulation (where applicable)

Handling Cloudflare V3 Challenges

Cloudflare v3 challenges represent the latest evolution in bot protection. Unlike v1 and v2, v3 challenges run in a JavaScript Virtual Machine with dynamically generated code.

V3 Challenge Characteristics

  • Challenges execute in a sandboxed JavaScript environment
  • More sophisticated algorithms to detect automated behavior
  • Challenge code is dynamically created and harder to reverse-engineer
  • Requires modern JavaScript interpreter support

V3 Detection

Look for this pattern in challenge pages:

window._cf_chl_opt = {
    cvId: '3',
    cZone: 'example.com',
    cType: 'managed',
    // ... other parameters
}

Solving V3 with Enhanced Cloudscraper

# The enhanced version handles v3 automatically
import cloudscraper

scraper = cloudscraper.create_scraper(
    interpreter='nodejs'  # Node.js recommended for v3
)

response = scraper.get("https://v3-protected-site.com")

When V3 Fails

If Cloudscraper fails against v3 challenges, you may need alternative approaches like FlareSolverr or browser automation.

Combining Cloudscraper with curl_cffi for TLS Fingerprinting

For sites that detect Python's TLS fingerprint, combine Cloudscraper's cookie extraction with curl_cffi's browser impersonation.

Why This Works

Cloudscraper solves the JavaScript challenge, but Python's TLS handshake might still get detected. curl_cffi impersonates browser TLS signatures.

Installation

pip install curl_cffi

Hybrid Approach

import cloudscraper
from curl_cffi import requests as curl_requests

# Step 1: Get Cloudflare cookies with cloudscraper
cf_scraper = cloudscraper.create_scraper()
cf_scraper.get("https://protected-site.com")
cookies = cf_scraper.cookies.get_dict()
user_agent = cf_scraper.headers.get('User-Agent')

# Step 2: Use curl_cffi with browser TLS fingerprint + cookies
response = curl_requests.get(
    "https://protected-site.com/api/data",
    impersonate="chrome",  # Impersonate Chrome's TLS fingerprint
    cookies=cookies,
    headers={"User-Agent": user_agent}
)

print(response.status_code)
print(response.text)

Direct curl_cffi Alternative

For sites where Cloudscraper fails due to TLS fingerprinting:

from curl_cffi import requests

# Impersonate Chrome browser including TLS/JA3 fingerprint
response = requests.get(
    "https://cloudflare-protected-site.com",
    impersonate="chrome124"  # Specific Chrome version
)

print(response.status_code)

curl_cffi supports multiple browser versions:

  • chrome, chrome124, chrome133
  • safari, safari_ios
  • edge
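
Available targets vary by curl_cffi release, so looping over several profiles is safer than hard-coding one. A sketch (the profile list here is an assumption; check your installed version for the exact names):

from curl_cffi import requests

def fetch_with_impersonation(url, profiles=("chrome124", "chrome", "safari")):
    """Try impersonation profiles in order until one returns 200."""
    for profile in profiles:
        try:
            response = requests.get(url, impersonate=profile, timeout=30)
            if response.status_code == 200:
                print(f"Success with profile: {profile}")
                return response
        except Exception as e:
            print(f"Profile {profile} failed: {e}")
    return None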

FlareSolverr Integration for Difficult Sites

When Cloudscraper alone fails, FlareSolverr provides a Docker-based solution using real browsers.

Quick Docker Setup

docker run -d \
  --name=flaresolverr \
  -p 8191:8191 \
  -e LOG_LEVEL=info \
  --restart unless-stopped \
  ghcr.io/flaresolverr/flaresolverr:latest

Using FlareSolverr with Python

import requests

def solve_with_flaresolverr(url):
    payload = {
        "cmd": "request.get",
        "url": url,
        "maxTimeout": 60000
    }
    
    response = requests.post(
        'http://localhost:8191/v1',
        headers={'Content-Type': 'application/json'},
        json=payload
    )
    
    result = response.json()
    
    if result['status'] == 'ok':
        return {
            'html': result['solution']['response'],
            'cookies': result['solution']['cookies'],
            'user_agent': result['solution']['userAgent']
        }
    
    return None

Hybrid: FlareSolverr Cookies + Cloudscraper

Get cookies from FlareSolverr, then use them with Cloudscraper or standard requests:

import requests
import cloudscraper

# Get cookies from FlareSolverr
fs_result = solve_with_flaresolverr("https://difficult-site.com")

if fs_result:
    # Create session with FlareSolverr cookies
    session = requests.Session()
    
    for cookie in fs_result['cookies']:
        session.cookies.set(
            cookie['name'],
            cookie['value'],
            domain=cookie['domain']
        )
    
    session.headers['User-Agent'] = fs_result['user_agent']
    
    # Now use session for subsequent requests
    response = session.get("https://difficult-site.com/data")

Troubleshooting Common Errors

Error: CloudflareChallengeError - Detected a Cloudflare v2/v3 challenge

This means the site uses advanced protection that the base Cloudscraper package can't handle.

Solutions:

# Try different browser profiles
scraper = cloudscraper.create_scraper(
    browser={
        'browser': 'chrome',
        'platform': 'android',
        'mobile': True
    }
)

Or install the enhanced fork (cloudscraper25), switch to browser automation tools like Playwright, or use FlareSolverr.

Error: 403 Forbidden Despite Apparent Success

The challenge solved but subsequent requests fail. Usually a cookie or User-Agent mismatch.

Solution:

scraper = cloudscraper.create_scraper()
scraper.headers.update({
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Accept-Encoding': 'gzip, deflate, br',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1'
})

Error: Connection Timeouts During Challenges

Solution: Increase timeout values:

response = scraper.get(url, timeout=30)

Error: JavaScript Interpreter Errors

Switch interpreters or ensure Node.js is properly installed:

node --version  # Should return version number

If Node.js isn't found, Cloudscraper falls back to js2py automatically.

Error: SSL Certificate Verification Failed

scraper = cloudscraper.create_scraper()
response = scraper.get(url, verify=False)  # Disable SSL verification

Only use this for testing. In production, fix the certificate issue.
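
A safer production fix is to verify against a current CA bundle rather than disabling verification. One approach using certifi, which installs alongside Requests:

import certifi
import cloudscraper

scraper = cloudscraper.create_scraper()
# Validate certificates against certifi's maintained CA bundle
response = scraper.get("https://cloudflare-protected-site.com", verify=certifi.where())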


When Cloudscraper Won't Work

Cloudscraper fails against:

Cloudflare Bot Management v2/v3: Advanced fingerprinting beyond JavaScript challenges. Sites like G2.com use this level of protection.

Turnstile CAPTCHAs (without solver): Cloudflare's newer invisible challenge system requires real browser execution or CAPTCHA solver integration.

Heavy TLS fingerprinting: Some configurations detect Python's TLS stack regardless of other protections.

Rate limiting: Too many requests from one IP triggers blocks that Cloudscraper can't bypass; slow down and back off, as sketched below.
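
When you hit rate limits (typically a 429 response), backing off exponentially beats retrying immediately. A minimal sketch:

import time
import cloudscraper

def get_with_backoff(scraper, url, max_retries=4):
    """Retry with exponentially growing waits on rate-limit responses."""
    for attempt in range(max_retries):
        response = scraper.get(url, timeout=30)
        if response.status_code != 429:
            return response
        wait = 5 * (2 ** attempt)  # 5s, 10s, 20s, 40s
        print(f"Rate limited; waiting {wait}s")
        time.sleep(wait)
    return None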

Alternative Tools for 2026

Nodriver: The official successor to undetected-chromedriver. CDP-minimal automation that passes most detection tests.

Camoufox: Firefox-based stealth browser with C++ level fingerprint spoofing. Achieves 0% headless detection scores.

Playwright with Stealth: Use playwright-extra with stealth plugins for sites requiring full browser automation.

curl_cffi: For TLS fingerprint issues without needing a full browser.

Production-Ready Scraper Example

Here's a complete production-ready scraper combining all best practices:

import cloudscraper
from bs4 import BeautifulSoup
import time
import random

class CloudflareScraper:
    def __init__(self, proxy_list=None):
        self.proxy_list = proxy_list or []
        self.current_proxy_index = 0
        self.scraper = None
        
    def _get_next_proxy(self):
        if not self.proxy_list:
            return None
        proxy = self.proxy_list[self.current_proxy_index]
        self.current_proxy_index = (self.current_proxy_index + 1) % len(self.proxy_list)
        return {'http': proxy, 'https': proxy}
    
    def _create_scraper_with_fallback(self):
        """Try multiple configurations until one works."""
        configs = [
            {'browser': 'chrome', 'platform': 'windows', 'desktop': True},
            {'browser': 'chrome', 'platform': 'android', 'mobile': True},
            {'browser': 'firefox', 'platform': 'linux', 'desktop': True},
            {'browser': 'firefox', 'platform': 'darwin', 'desktop': True}
        ]
        
        for config in configs:
            try:
                scraper = cloudscraper.create_scraper(
                    browser=config,
                    interpreter='nodejs'
                )
                return scraper
            except Exception:
                continue
        
        return cloudscraper.create_scraper()
    
    def _add_human_headers(self, scraper):
        """Add realistic browser headers."""
        scraper.headers.update({
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.9',
            'Accept-Encoding': 'gzip, deflate, br',
            'Connection': 'keep-alive',
            'Upgrade-Insecure-Requests': '1',
            'Sec-Fetch-Dest': 'document',
            'Sec-Fetch-Mode': 'navigate',
            'Sec-Fetch-Site': 'none',
            'Sec-Fetch-User': '?1',
            'Cache-Control': 'max-age=0'
        })
        return scraper
    
    def scrape(self, url, max_retries=3):
        """Scrape URL with automatic retry and proxy rotation."""
        
        for attempt in range(max_retries):
            try:
                # Create new scraper for each attempt (for proxy rotation)
                self.scraper = self._create_scraper_with_fallback()
                self.scraper = self._add_human_headers(self.scraper)
                
                proxies = self._get_next_proxy()
                
                # Add human-like delay
                time.sleep(random.uniform(2, 5))
                
                response = self.scraper.get(
                    url,
                    proxies=proxies,
                    timeout=30
                )
                
                if response.status_code == 200:
                    return response
                
                print(f"Attempt {attempt + 1}: Status {response.status_code}")
                time.sleep(random.uniform(3, 7))
                
            except Exception as e:
                print(f"Attempt {attempt + 1} failed: {e}")
                time.sleep(random.uniform(5, 10))
        
        return None
    
    def get_cookies(self, url):
        """Get Cloudflare cookies for use with other clients."""
        response = self.scrape(url)
        if response:
            return {
                'cookies': self.scraper.cookies.get_dict(),
                'user_agent': self.scraper.headers.get('User-Agent')
            }
        return None


# Usage example
if __name__ == "__main__":
    # Optional: Add your residential proxies
    proxies = [
        # 'http://user:pass@proxy1.example.com:8080',
        # 'http://user:pass@proxy2.example.com:8080',
    ]
    
    scraper = CloudflareScraper(proxy_list=proxies)
    
    url = "https://example-cloudflare-site.com"
    response = scraper.scrape(url)
    
    if response:
        soup = BeautifulSoup(response.text, 'html.parser')
        title = soup.find('title')
        print(f"Page title: {title.text if title else 'Not found'}")
        
        # Get cookies for reuse
        cookie_data = scraper.get_cookies(url)
        if cookie_data:
            print(f"Cookies: {cookie_data['cookies']}")
    else:
        print("All attempts failed")

This implementation:

  • Tries multiple browser profiles automatically
  • Adds realistic browser headers
  • Supports proxy rotation with new scraper instances
  • Uses Node.js for better challenge solving
  • Includes human-like timing delays
  • Provides retry logic for transient failures
  • Extracts cookies for reuse with other clients

Performance Optimization Tips

Reuse Sessions

import time
import cloudscraper

# Create once, use many times
scraper = cloudscraper.create_scraper()

urls = ['url1', 'url2', 'url3']
for url in urls:
    response = scraper.get(url)
    # Process response
    time.sleep(1)  # Small delay between requests

Async Scraping (with async_cloudscraper)

pip install async-cloudscraper

import asyncio
import async_cloudscraper

async def scrape_async(urls):
    scraper = async_cloudscraper.create_scraper()
    results = []
    
    for url in urls:
        response = await scraper.get(url)
        results.append(response)
        await asyncio.sleep(1)
    
    return results

Memory Management

For long-running scrapers, close sessions properly:

scraper = cloudscraper.create_scraper()
try:
    # Your scraping logic
    pass
finally:
    scraper.close()
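
Since the scraper subclasses requests.Session, it should also work as a context manager, closing itself when the block exits:

import cloudscraper

with cloudscraper.create_scraper() as scraper:
    response = scraper.get("https://example.com")
    # session closes automatically when the block exits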

Conclusion

Cloudscraper handles Cloudflare's basic JavaScript challenges effectively without the overhead of browser automation. Install it with pip, create a scraper instance, and make requests like you would with Python Requests.

For best results in 2026:

  • Use residential proxies for better IP reputation
  • Match browser profiles to your target site
  • Maintain consistent sessions with the same proxy
  • Consider curl_cffi for TLS fingerprinting issues
  • Use FlareSolverr as a fallback for difficult sites

When sites deploy advanced Bot Management (v2/v3), consider browser automation tools like Nodriver or Camoufox instead.

The library saves significant development time compared to building challenge solvers from scratch. Just remember its limitations against newer Cloudflare protections and have fallback strategies ready.

FAQ

Does Cloudscraper work with all Cloudflare-protected sites?

Cloudscraper works against standard Cloudflare "I'm Under Attack Mode" and basic JavaScript challenges. It fails against Cloudflare Bot Management v2/v3, Turnstile CAPTCHAs (without solver integration), and sites with heavy TLS fingerprinting. Test your target site before building extensive scrapers around it.

Why does my first request take 5 seconds?

Cloudflare requires browsers to wait approximately 5 seconds before submitting challenge answers. Cloudscraper mimics this behavior on your first request, then reuses the session cookie for instant subsequent requests.

Can I rotate proxies with Cloudscraper?

You must keep the same proxy throughout a session because Cloudflare ties challenge solutions to IP addresses. Create new scraper instances when switching proxies. Each new instance solves the challenge fresh with its assigned IP.

What's the difference between cloudscraper and cloudscraper25?

cloudscraper25 is an enhanced fork that includes Cloudflare v3 challenge support, Turnstile handling, and additional stealth features. Use it for more heavily protected sites.

How do I know if a site uses Cloudflare?

Check for "cf_clearance" and "__cf_bm" cookies after visiting the site, or look for the "Checking your browser" interstitial page. You can also check DNS records pointing to Cloudflare IP ranges.

Is Cloudscraper legal to use?

The library itself is legal software. However, scraping websites may violate their terms of service. Review target site policies before automated access. Copyright laws and computer access regulations vary by jurisdiction.