Cloudflare blocks your scraper. You've rotated user agents, added delays, even swapped IPs. Still a 403.
FlareSolverr fixes this in about five minutes. It's an open-source proxy server that spins up a real Chrome browser, solves Cloudflare's JavaScript challenges, and hands you back the HTML and cookies. Those cookies then let you scrape normally with any HTTP client.
In this guide, I'll walk you through installing FlareSolverr, sending your first request, reusing cookies for fast scraping, managing sessions, configuring proxies, and handling every common failure mode. Working Python code included for every step.
What Is FlareSolverr?
FlareSolverr is an open-source proxy server that bypasses Cloudflare and DDoS-GUARD protection by running a real browser under the hood. It uses Selenium with undetected-chromedriver to solve JavaScript challenges automatically, then returns the HTML content and session cookies to your application.
You send a POST request to the FlareSolverr API. It launches a headless Chrome instance, navigates to your target URL, waits for the Cloudflare challenge to clear, and sends back everything you need. Those cookies can then be reused with standard HTTP clients like Python's requests library — no browser required for subsequent requests.
FlareSolverr sits idle when there's nothing to do, consuming minimal resources. It only launches a browser when a request comes in.
How FlareSolverr Actually Works
Here's the request lifecycle:
- Your script sends a POST request to http://localhost:8191/v1
- FlareSolverr launches Chrome via undetected-chromedriver
- Chrome loads the target URL and hits the Cloudflare challenge page
- The browser executes Cloudflare's JavaScript verification automatically
- Once the challenge clears, FlareSolverr captures the HTML, cookies, and headers
- Everything gets returned as JSON to your script
- The browser instance shuts down (unless you're using sessions)
One thing to understand upfront: FlareSolverr cannot solve CAPTCHAs. If Cloudflare escalates from a JavaScript challenge to an interactive CAPTCHA, FlareSolverr will time out. This matters because Cloudflare has been escalating challenge difficulty steadily throughout 2025 and into 2026.
More on that limitation — and how to work around it — later.
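Because a CAPTCHA escalation often surfaces as a timeout or an unexpected page rather than an explicit error, it's worth checking any HTML you get back for challenge markers before parsing it. The marker strings below are heuristics based on common Cloudflare interstitial markup, not an official list; adjust them for your targets:

```python
# Heuristic check for Cloudflare challenge/CAPTCHA pages.
# The marker strings are assumptions drawn from typical interstitial
# markup, not an exhaustive or official list.
CHALLENGE_MARKERS = (
    "just a moment",       # classic interstitial <title>
    "cf-turnstile",        # Turnstile widget container
    "challenge-platform",  # Cloudflare challenge script path
)

def looks_like_challenge(html: str) -> bool:
    """Return True if the HTML resembles a Cloudflare challenge page."""
    lowered = html.lower()
    return any(marker in lowered for marker in CHALLENGE_MARKERS)
```

Run this on the `response` field before you feed it to your parser; a positive hit usually means it's time to re-solve or take a screenshot to see what the browser hit.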
Prerequisites
Before installing FlareSolverr, make sure you have:
- Docker (recommended) or Python 3.11+ with Chrome/Chromium
- Python 3.8+ for the client scripts in this tutorial
- The requests library (pip install requests)
- At least 2GB of RAM — each browser instance uses 100-200MB
Step 1: Install FlareSolverr with Docker
Docker is the cleanest way to run FlareSolverr. The image bundles Chromium, all Python dependencies, and the correct configuration. No version mismatches, no missing libraries.
Pull and start the container with a single command:
docker run -d \
  --name=flaresolverr \
  -p 8191:8191 \
  -e LOG_LEVEL=info \
  --restart unless-stopped \
  ghcr.io/flaresolverr/flaresolverr:latest
The -d flag runs it in the background. Port 8191 is mapped to your localhost. The --restart unless-stopped flag ensures it survives reboots.
Verify it's running:
curl http://localhost:8191
You should see a JSON response like this:
{
  "msg": "FlareSolverr is ready!",
  "version": "3.4.6",
  "userAgent": "Mozilla/5.0 (X11; Linux x86_64) ..."
}
If you see that, FlareSolverr is live and accepting requests.
Docker Compose Setup
For projects where FlareSolverr runs alongside other services, use a docker-compose.yml:
version: "3.8"
services:
  flaresolverr:
    image: ghcr.io/flaresolverr/flaresolverr:latest
    container_name: flaresolverr
    environment:
      - LOG_LEVEL=info
      - TZ=UTC
      - HEADLESS=true
      - BROWSER_TIMEOUT=60000
    ports:
      - "8191:8191"
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 2G
The memory limit prevents runaway browser instances from consuming all your server's RAM. Start it with:
docker compose up -d
Step 2: Install FlareSolverr Without Docker
If Docker isn't an option, you can run FlareSolverr natively.
Linux
Install the dependencies first:
sudo apt update
sudo apt install python3.11 chromium-browser xvfb
The xvfb package provides a virtual display for headless Chrome on servers without a monitor.
Clone and install:
git clone https://github.com/FlareSolverr/FlareSolverr.git
cd FlareSolverr
pip install -r requirements.txt
python src/flaresolverr.py
Windows
Download the precompiled binary from the FlareSolverr GitHub releases. Extract the ZIP and run FlareSolverr.exe. Allow it through Windows Firewall when prompted.
To set environment variables on Windows before launching:
set LOG_LEVEL=debug
set BROWSER_TIMEOUT=90000
FlareSolverr.exe
Step 3: Send Your First Request with Python
Now that FlareSolverr is running, let's actually use it. Install the requests library if needed:
pip install requests
Here's a basic GET request through FlareSolverr:
import requests

url = "http://localhost:8191/v1"
headers = {"Content-Type": "application/json"}
payload = {
    "cmd": "request.get",
    "url": "https://example-cloudflare-site.com",
    "maxTimeout": 60000
}

response = requests.post(url, headers=headers, json=payload)
result = response.json()

if result["status"] == "ok":
    html = result["solution"]["response"]
    cookies = result["solution"]["cookies"]
    user_agent = result["solution"]["userAgent"]
    print(f"Success! Got {len(html)} bytes of HTML")
else:
    print(f"Failed: {result['message']}")
The cmd field specifies the request type. The maxTimeout value is in milliseconds — 60 seconds is usually enough, but some heavily protected sites need more.
The solution object in the response contains:
- response — the full HTML of the page
- cookies — a list of cookie dictionaries
- userAgent — the user agent string the browser used
- status — HTTP status code
- headers — response headers
Note the user agent. You'll need it for the next step.
Step 4: Reuse Cookies for Fast Scraping
This is the pattern most tutorials skip, and it's the most efficient way to use FlareSolverr.
Instead of routing every single request through FlareSolverr (which launches a browser each time), solve the challenge once, grab the cookies, then use those cookies directly with the requests library for all subsequent pages.
import requests

# Step 1: Solve the challenge once via FlareSolverr
fs_url = "http://localhost:8191/v1"
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000
}

result = requests.post(fs_url, json=payload).json()
if result["status"] != "ok":
    raise Exception(f"FlareSolverr failed: {result['message']}")

# Step 2: Extract cookies and user agent
fs_cookies = result["solution"]["cookies"]
fs_ua = result["solution"]["userAgent"]

# Step 3: Build a requests session with those cookies
session = requests.Session()
session.headers.update({"User-Agent": fs_ua})
for cookie in fs_cookies:
    session.cookies.set(
        cookie["name"],
        cookie["value"],
        domain=cookie["domain"],
        path=cookie["path"]
    )

# Step 4: Scrape directly — no browser needed
pages = ["/page/1", "/page/2", "/page/3", "/page/4", "/page/5"]
for page in pages:
    resp = session.get(f"https://target-site.com{page}")
    print(f"{page}: {resp.status_code} ({len(resp.text)} bytes)")
This approach is 10-20x faster than routing every request through FlareSolverr. The cookies typically stay valid for 15-30 minutes, sometimes longer depending on the site's Cloudflare configuration.
The critical detail most people miss: the user agent in your requests session must match the one FlareSolverr used. Cloudflare ties the clearance cookie to the user agent string. If they don't match, you'll hit the challenge page again.
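When the clearance cookie does expire mid-scrape, the cleanest recovery is to re-solve and retry automatically rather than failing the job. Here's a minimal sketch of that pattern; `fetch` and `resolve` are hypothetical callables you'd wire up yourself, with `resolve` calling FlareSolverr again and returning a new fetch function bound to the fresh cookies and matching user agent:

```python
# Sketch of a "re-solve on block" loop. Assumed interfaces:
#   fetch(url) -> (status_code, body)   # plain HTTP request with cookies
#   resolve()  -> fetch                 # re-solves via FlareSolverr and
#                                       # returns a fetch with new cookies
def fetch_with_resolve(fetch, resolve, url, max_resolves=2):
    """Fetch url; on a 403 (clearance likely expired), re-solve and retry."""
    status, body = fetch(url)
    for _ in range(max_resolves):
        if status != 403:
            break
        fetch = resolve()          # fresh session: new cookies + matching UA
        status, body = fetch(url)
    return status, body
```

Capping `max_resolves` matters: if re-solving doesn't clear the 403, the site has probably escalated beyond a JavaScript challenge and retrying only burns browser launches.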
Get Only Cookies (Skip the HTML)
If you only need the cookies and don't care about the initial page's HTML, use returnOnlyCookies:
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "returnOnlyCookies": True
}
This reduces response size and parsing overhead when you're just bootstrapping a session.
Step 5: Manage Sessions for Multi-Page Scraping
FlareSolverr sessions keep a browser instance alive between requests. This is useful when you need to maintain state across multiple pages or when the site requires sequential navigation.
Create a Session
create_payload = {
    "cmd": "sessions.create",
    "session": "scrape-job-1"
}

response = requests.post(fs_url, json=create_payload)
print(response.json())
Use the Session
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com/dashboard",
    "session": "scrape-job-1",
    "maxTimeout": 60000
}

response = requests.post(fs_url, json=payload)
The browser instance persists, keeping cookies, local storage, and session state intact between requests.
Destroy the Session
Always clean up when you're done:
destroy_payload = {
    "cmd": "sessions.destroy",
    "session": "scrape-job-1"
}

requests.post(fs_url, json=destroy_payload)
Each open session holds a Chrome instance in memory (100-200MB). On a 4GB server, leaving three or four sessions open will cause problems fast.
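To make that cleanup automatic, you can wrap the create/destroy pair in a context manager so the session is torn down even when your scraping code raises. A minimal sketch; the function name is mine, and the `post` parameter is injectable so the example isn't hard-wired to any one HTTP client:

```python
from contextlib import contextmanager

@contextmanager
def flaresolverr_session(name, fs_url="http://localhost:8191/v1", post=None):
    """Create a named FlareSolverr session and always destroy it,
    even if the code inside the with-block raises."""
    if post is None:           # default to requests.post when available
        import requests
        post = requests.post
    post(fs_url, json={"cmd": "sessions.create", "session": name})
    try:
        yield name
    finally:
        post(fs_url, json={"cmd": "sessions.destroy", "session": name})
```

Usage: `with flaresolverr_session("scrape-job-1") as s:` and include `"session": s` in each request payload; the destroy call fires on any exit path.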
Step 6: Add Proxies for IP Rotation
If you're making many requests to the same domain, Cloudflare will eventually flag your IP regardless of having valid cookies. Adding proxies to FlareSolverr solves this.
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "proxy": {
        "url": "http://proxy-host:port"
    }
}
FlareSolverr supports http://, socks4://, and socks5:// proxy protocols.
For proxies requiring authentication:
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "proxy": {
        "url": "http://proxy-host:port",
        "username": "your_user",
        "password": "your_pass"
    }
}
You can also attach a proxy to a session at creation time. All requests using that session will route through it:
create_payload = {
    "cmd": "sessions.create",
    "session": "proxy-session",
    "proxy": {
        "url": "socks5://proxy-host:1080"
    }
}
Residential proxies work best here since they're less likely to be flagged by Cloudflare's IP reputation system. If you need residential or datacenter proxies for this, Roundproxies has options built for scraping use cases.
Step 7: Speed Up Requests by Skipping Resources
FlareSolverr loads the full page by default — images, CSS, fonts, everything. If you only need the HTML, disable media loading:
payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "skipResource": True
}
Setting skipResource to True prevents the browser from loading images, stylesheets, and fonts. This reduces page load time and bandwidth usage noticeably, especially on media-heavy sites.
This parameter doesn't affect the Cloudflare challenge itself — the JavaScript verification still runs. It just skips the visual resources that you don't need for data extraction.
Step 8: Take Screenshots for Debugging
When FlareSolverr returns unexpected results or empty HTML, a screenshot tells you exactly what the browser saw:
import base64

payload = {
    "cmd": "request.get",
    "url": "https://target-site.com",
    "maxTimeout": 60000,
    "screenshot": True
}

result = requests.post(fs_url, json=payload).json()
if result["status"] == "ok":
    screenshot_b64 = result["solution"]["screenshot"]
    with open("debug_screenshot.png", "wb") as f:
        f.write(base64.b64decode(screenshot_b64))
    print("Screenshot saved to debug_screenshot.png")
The screenshot is returned as a base64-encoded PNG string. This is invaluable when debugging — you'll immediately see whether the browser hit a CAPTCHA, a different challenge type, or a completely different page than expected.
FlareSolverr Environment Variables Reference
Fine-tune FlareSolverr's behavior with these environment variables:
| Variable | Default | Description |
|---|---|---|
| LOG_LEVEL | info | Logging verbosity: debug, info, warn, error |
| HEADLESS | true | Set to false to see the browser window (debugging) |
| BROWSER_TIMEOUT | 60000 | Default timeout in milliseconds |
| TZ | UTC | Timezone for the browser instance |
| LANG | none | Browser language (e.g., en_US) |
| PROXY_URL | none | Default proxy for all requests |
| PORT | 8191 | API listening port |
| CAPTCHA_SOLVER | none | CAPTCHA solver service (no working options currently) |
| SESSION_TTL | none | Auto-rotate sessions after X minutes |
| LOG_HTML | false | Log full HTML responses (verbose) |
Set them in Docker with -e flags or in your environment before launching the binary.
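As an illustration, here's a Docker launch combining several of them (the values are examples, not recommendations):

```shell
docker run -d \
  --name=flaresolverr \
  -p 8191:8191 \
  -e LOG_LEVEL=debug \
  -e HEADLESS=true \
  -e BROWSER_TIMEOUT=90000 \
  -e TZ=UTC \
  ghcr.io/flaresolverr/flaresolverr:latest
```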
Building a Production-Ready FlareSolverr Client
Here's a reusable Python class with retry logic, timeout handling, and cookie extraction built in:
import requests
from time import sleep


class FlareSolverrClient:
    def __init__(self, host="localhost", port=8191):
        self.base_url = f"http://{host}:{port}/v1"

    def solve(self, url, retries=3, timeout=60000, proxy=None):
        """Solve a Cloudflare challenge. Returns dict with
        html, cookies, and user_agent on success."""
        payload = {
            "cmd": "request.get",
            "url": url,
            "maxTimeout": timeout,
            "skipResource": True
        }
        if proxy:
            payload["proxy"] = {"url": proxy}

        for attempt in range(retries):
            try:
                resp = requests.post(
                    self.base_url,
                    json=payload,
                    timeout=(timeout / 1000) + 15
                )
                data = resp.json()
                if data["status"] == "ok":
                    return {
                        "html": data["solution"]["response"],
                        "cookies": data["solution"]["cookies"],
                        "user_agent": data["solution"]["userAgent"],
                        "success": True
                    }
                print(f"Attempt {attempt + 1}/{retries}: {data['message']}")
            except requests.exceptions.Timeout:
                print(f"Attempt {attempt + 1}/{retries}: Timed out")
            except Exception as e:
                print(f"Attempt {attempt + 1}/{retries}: {e}")
            sleep(3)

        return {"success": False, "error": "All retries failed"}

    def build_session(self, url, **kwargs):
        """Solve challenge and return a requests.Session
        pre-loaded with valid cookies."""
        result = self.solve(url, **kwargs)
        if not result["success"]:
            return None

        session = requests.Session()
        session.headers["User-Agent"] = result["user_agent"]
        for c in result["cookies"]:
            session.cookies.set(
                c["name"], c["value"],
                domain=c["domain"], path=c["path"]
            )
        return session
Usage:
client = FlareSolverrClient()

# Get a pre-authenticated session in one call
session = client.build_session("https://target-site.com")

if session:
    # Scrape normally — no browser overhead
    resp = session.get("https://target-site.com/data")
    print(resp.status_code)
The build_session method is the workflow I use in production. One browser launch, then fast HTTP requests for everything else.
Troubleshooting Common FlareSolverr Issues
"Challenge not solved" / Timeout
The most common failure. Causes:
- Timeout too short. Try increasing maxTimeout to 90000 or 120000.
- CAPTCHA challenge. FlareSolverr cannot solve interactive CAPTCHAs. Check with a screenshot: true request to confirm.
- Cloudflare updated their challenge. Pull the latest FlareSolverr image: docker pull ghcr.io/flaresolverr/flaresolverr:latest.
Connection Refused
Your script can't reach FlareSolverr at all.
- Is the container running? Check with docker ps.
- Is port 8191 mapped correctly? Run docker port flaresolverr.
- Firewall blocking the port? Test with curl http://localhost:8191.
High Memory Usage / Container Crashes
Each browser instance consumes 100-200MB. If you're making concurrent requests, memory adds up fast.
- Limit concurrent requests to 3-5 on a 4GB server.
- Use sessions and destroy them when done.
- Set memory limits in Docker Compose (deploy.resources.limits.memory).
- Use the cookie-reuse pattern from Step 4 to minimize browser launches.
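One way to enforce that concurrency cap in code is a semaphore around whatever solve function you use. A sketch, with `solve_fn` standing in for your own FlareSolverr wrapper:

```python
import threading

class LimitedSolver:
    """Wrap a solve function so at most max_concurrent calls run at
    once, capping how many Chrome instances FlareSolverr spawns."""

    def __init__(self, solve_fn, max_concurrent=4):
        self._solve = solve_fn
        self._sem = threading.Semaphore(max_concurrent)

    def solve(self, url, **kwargs):
        with self._sem:              # blocks callers once the cap is hit
            return self._solve(url, **kwargs)
```

Excess callers simply wait in line instead of triggering extra browser launches, so memory stays bounded no matter how many worker threads you run.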
FlareSolverr Returns HTML but It's a Challenge Page
This means the challenge wasn't actually solved before the response was sent. Two fixes:
- Increase maxTimeout to give the browser more time.
- Check if the site requires a specific browser language. Set the LANG environment variable.
User Agent Mismatch After Cookie Reuse
If you're reusing cookies but still hitting the challenge page, your user agent doesn't match. Always use the exact userAgent string from the FlareSolverr response.
FlareSolverr Limitations: What You Should Know
FlareSolverr is a solid tool for small-to-medium scraping projects, but it has real constraints:
CAPTCHAs are a dead end. Cloudflare has been rolling out Turnstile challenges more aggressively. When a site escalates to a CAPTCHA, FlareSolverr can't solve it. The CAPTCHA_SOLVER environment variable exists but there are no working solver integrations at this time.
Cloudflare keeps evolving. The TRaSH Guides community (popular in the *arr media automation ecosystem) has noted that FlareSolverr frequently breaks after Cloudflare updates. The maintainers patch it, but there can be gaps. Keep it updated and have a fallback plan.
Not built for high volume. Each request launches a browser. At 100-200MB per instance and 5-15 seconds per challenge solve, you're looking at real resource costs at scale. The cookie-reuse pattern helps enormously, but if you're scraping millions of pages, you'll want to build a custom Playwright or Puppeteer solution.
Single-threaded by default. FlareSolverr handles requests sequentially unless you're using sessions. For parallel scraping, run multiple FlareSolverr containers behind a load balancer or use sessions.
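The load balancer can be as simple as round-robin over instance URLs. A sketch, assuming you've started one container per port you pass in:

```python
import itertools

class FlareSolverrPool:
    """Rotate requests across several FlareSolverr instances so
    challenges solve in parallel; each instance still handles its
    own requests sequentially."""

    def __init__(self, ports, host="localhost"):
        self._urls = itertools.cycle(
            [f"http://{host}:{p}/v1" for p in ports]
        )

    def next_url(self):
        return next(self._urls)
```

Usage: `pool = FlareSolverrPool([8191, 8192, 8193])`, then POST each payload to `pool.next_url()` instead of a fixed endpoint.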
When to Use FlareSolverr
FlareSolverr is the right tool when:
- You're scraping a handful of Cloudflare-protected sites at moderate volume
- You need a quick setup without writing browser automation code
- You want to integrate Cloudflare bypass into Prowlarr, Jackett, or Sonarr/Radarr
- You're prototyping a scraper and need to validate that a site is scrapable
Consider other approaches when:
- The target site uses aggressive CAPTCHAs
- You need to scrape at very high volume (10,000+ pages/day)
- You need sub-second latency
- You're running on a server with less than 2GB RAM
For high-volume production use, building your own solution with Playwright or Puppeteer gives you more control over resource usage and evasion techniques.
Wrapping Up
FlareSolverr takes the friction out of Cloudflare bypass for most scraping projects. Install it with Docker in five minutes, solve your first challenge, then use the cookie-reuse pattern for everything else.
The key workflow: launch FlareSolverr once, extract cookies, then scrape with plain HTTP requests. That one optimization makes FlareSolverr practical for projects that would otherwise choke on browser overhead.
Keep FlareSolverr updated. Cloudflare doesn't sit still, and neither should your tooling. When a challenge type goes beyond what FlareSolverr can handle, that's your signal to either wait for a patch or build something custom.
FAQ
Does FlareSolverr work with all Cloudflare-protected sites?
FlareSolverr handles JavaScript challenges and browser verification checks. Sites using Cloudflare's managed challenge with interactive CAPTCHAs or enterprise-tier protection will likely block it. Test with screenshot: true to see exactly what FlareSolverr encounters.
How much RAM does FlareSolverr need?
Plan for 100-200MB per concurrent browser instance, plus ~200MB for FlareSolverr itself. A 4GB server handles 5-10 concurrent requests comfortably. Use the cookie-reuse pattern to minimize how many browser instances you actually need.
Can I run multiple FlareSolverr instances?
Yes. Run separate Docker containers on different ports and distribute requests across them. This gives you both parallelism and redundancy.
How long do Cloudflare cookies stay valid?
Typically 15-30 minutes, but it varies by site. Some sites grant cookies that last hours. Monitor your success rate and re-solve the challenge when requests start getting 403s again.
Is FlareSolverr legal?
FlareSolverr is legal open-source software. However, bypassing a site's security measures may violate their terms of service. Always check a site's ToS and respect robots.txt before scraping.