How to use Byparr in 2026: Bypass anti-bots step-by-step

Cloudflare and other anti-bot systems block your scrapers before they even reach the data. Byparr solves this by running a real browser behind the scenes, solving challenges automatically, and returning cookies your scripts can reuse.

In this guide, you'll learn how to set up Byparr with Docker, call its API from Python, configure proxies for better success rates, and integrate it with popular tools like Prowlarr.

What is Byparr?

Byparr is a self-hosted anti-bot bypass server that acts as a drop-in replacement for FlareSolverr. It uses Camoufox (a Firefox-based anti-detection browser) and FastAPI to provide an HTTP API that returns valid session cookies and headers for websites protected by Cloudflare and similar systems.

When your scraper sends a request to Byparr, it opens a real browser instance, loads the target URL, waits for any challenges to complete, and returns the page HTML along with cookies and headers. You then reuse those cookies in your regular HTTP requests.

The project is actively maintained and works on Windows, macOS, Linux, and ARM devices. It's particularly popular with media server users running Prowlarr, Jackett, Sonarr, and Radarr.

Prerequisites

Before installing Byparr, make sure you have the following:

  • Docker installed on your system
  • Basic command-line knowledge
  • Python 3.x (for the scripting examples)
  • A target website to test against

If you don't have Docker, download it from the official Docker website. The installation takes about five minutes on most systems.

Step 1: Install Byparr with Docker

Docker is the fastest way to get Byparr running. The image is hosted on GitHub Container Registry and includes everything you need.

Open your terminal and run this command:

docker run -d \
  --name byparr \
  -p 8191:8191 \
  --restart unless-stopped \
  ghcr.io/thephaseless/byparr:latest

This command does several things:

  • -d runs the container in detached mode so it stays running in the background.
  • --name byparr assigns a friendly name to the container.
  • -p 8191:8191 maps port 8191 from the container to your host machine.
  • --restart unless-stopped ensures Byparr starts automatically when your system reboots.
  • The final argument is the image, pulled from the official GitHub Container Registry.

Using Docker Compose

For more control, create a compose.yaml file:

services:
  byparr:
    image: ghcr.io/thephaseless/byparr:latest
    container_name: byparr
    environment:
      - LOG_LEVEL=INFO
    ports:
      - "8191:8191"
    restart: unless-stopped

Save this file and run:

docker compose up -d

Docker Compose makes it easier to manage environment variables and integrate Byparr with other services in your stack.

Step 2: Verify the Installation

Once the container is running, verify everything works by accessing the API documentation.

Open your browser and navigate to:

http://localhost:8191/docs

You should see the FastAPI Swagger documentation. This interactive page lets you test API endpoints directly from your browser.
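If you're working on a headless server, you can run the same check with curl; a 200 status code means the API is up:

curl -s -o /dev/null -w "%{http_code}" http://localhost:8191/docs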

Alternatively, check the container status with:

docker ps

Look for a container named "byparr" with status "Up." If the container keeps restarting, check the logs:

docker logs byparr

The logs show startup messages and any errors that occur during initialization.

Step 3: Make Your First API Request

Byparr exposes a FlareSolverr-compatible API. The main endpoint is /v1, which accepts POST requests.

Here's a basic curl request to test the API:

curl -X POST "http://localhost:8191/v1" \
  -H "Content-Type: application/json" \
  -d '{
    "cmd": "request.get",
    "url": "https://nowsecure.nl",
    "maxTimeout": 60000
  }'

This request tells Byparr to open nowsecure.nl (a Cloudflare test page) and return the results.

The response JSON includes several important fields:

{
  "status": "ok",
  "message": "Challenge solved!",
  "solution": {
    "url": "https://nowsecure.nl",
    "status": 200,
    "cookies": [...],
    "userAgent": "Mozilla/5.0...",
    "headers": {...},
    "response": "<html>..."
  }
}

The cookies array contains the Cloudflare clearance cookies. The userAgent string must be used with those cookies in subsequent requests. The response field contains the full HTML of the page.

Step 4: Use Byparr Cookies in Python

Now let's build a Python script that uses Byparr to scrape a protected website.

First, install the requests library:

pip install requests

Create a file called byparr_scraper.py:

import requests

BYPARR_URL = "http://localhost:8191/v1"

def get_cloudflare_cookies(target_url):
    """
    Send a request to Byparr and get clearance cookies.
    """
    payload = {
        "cmd": "request.get",
        "url": target_url,
        "maxTimeout": 60000
    }
    
    # HTTP timeout should exceed maxTimeout so Byparr has time to finish solving
    response = requests.post(BYPARR_URL, json=payload, timeout=90)
    data = response.json()
    
    if data.get("status") != "ok":
        raise Exception(f"Byparr failed: {data.get('message')}")
    
    return data["solution"]

This function sends a request to your local Byparr instance and returns the solution containing cookies and headers.

Next, add a function to use those cookies:

def scrape_with_cookies(solution, target_url):
    """
    Use the Byparr cookies to make a regular HTTP request.
    """
    # Build cookies dictionary
    cookies = {}
    for cookie in solution["cookies"]:
        cookies[cookie["name"]] = cookie["value"]
    
    # Use the same user agent
    headers = {
        "User-Agent": solution["userAgent"]
    }
    
    response = requests.get(target_url, cookies=cookies, headers=headers)
    return response.text

The key here is using the exact same user agent that Byparr returned. Cloudflare ties cookies to specific user agents, so mismatching them triggers a new challenge.
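If you plan to fetch several pages, a requests.Session can carry that cookie and user-agent pairing for you. Here's a small sketch; make_session is a helper name introduced for illustration, not part of the script above:

def make_session(solution: dict) -> requests.Session:
    """Build a Session pre-loaded with Byparr's cookies and user agent."""
    session = requests.Session()
    session.headers["User-Agent"] = solution["userAgent"]
    for cookie in solution["cookies"]:
        session.cookies.set(cookie["name"], cookie["value"])
    return session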

Finally, put it all together:

def main():
    target = "https://nowsecure.nl"
    
    print("Getting cookies from Byparr...")
    solution = get_cloudflare_cookies(target)
    
    print(f"Got {len(solution['cookies'])} cookies")
    print(f"User agent: {solution['userAgent'][:50]}...")
    
    print("Making request with cookies...")
    html = scrape_with_cookies(solution, target)
    
    if "You are now accessing" in html:
        print("Success! Cloudflare bypassed.")
    else:
        print("Challenge may not have been solved.")
    
    print(f"Response length: {len(html)} characters")

if __name__ == "__main__":
    main()

Run the script:

python byparr_scraper.py

If everything works, you'll see a success message and the HTML content of the protected page.

Step 5: Configure Proxies for Better Success

Byparr works without proxies, but adding them significantly improves success rates. Proxies mask your IP address and make requests appear to come from different locations.

Setting Up Proxy Environment Variables

When running Docker, pass proxy settings as environment variables:

docker run -d \
  --name byparr \
  -p 8191:8191 \
  -e PROXY_SERVER="http://proxy.example.com:8080" \
  -e PROXY_USERNAME="your_username" \
  -e PROXY_PASSWORD="your_password" \
  --restart unless-stopped \
  ghcr.io/thephaseless/byparr:latest

The proxy format follows the pattern protocol://host:port. Byparr supports HTTP and SOCKS5 proxies.
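For example, a SOCKS5 proxy would be passed like this (host and port are placeholders):

-e PROXY_SERVER="socks5://proxy.example.com:1080"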

Docker Compose with Proxy

Update your compose.yaml:

services:
  byparr:
    image: ghcr.io/thephaseless/byparr:latest
    container_name: byparr
    environment:
      - LOG_LEVEL=INFO
      - PROXY_SERVER=http://proxy.example.com:8080
      - PROXY_USERNAME=your_username
      - PROXY_PASSWORD=your_password
    ports:
      - "8191:8191"
    restart: unless-stopped

Choosing the Right Proxy Type

For bypassing Cloudflare, residential proxies work best. They use real IP addresses assigned by ISPs, making them harder to detect as proxy traffic.

If you need reliable residential proxies, Roundproxies.com offers residential, datacenter, ISP, and mobile proxies optimized for web scraping.

Datacenter proxies are faster and cheaper but more likely to be blocked by aggressive anti-bot systems.

Step 6: Integrate with Prowlarr and Media Tools

Byparr works as a drop-in replacement for FlareSolverr in media server setups. If you're running Prowlarr, Jackett, Sonarr, or Radarr, the integration is straightforward.

Prowlarr Configuration

Open Prowlarr and navigate to Settings → Indexers.

Click the plus button and select "FlareSolverr" under the Generic category.

Enter your Byparr URL:

http://localhost:8191

If Byparr runs in the same Docker network as Prowlarr, use the container name:

http://byparr:8191

Set the request timeout to 60000 milliseconds (60 seconds) to give Byparr enough time to solve challenges.

Save the settings and test an indexer that requires Cloudflare bypass. Prowlarr will automatically route requests through Byparr when needed.

Jackett Configuration

For Jackett, the process is similar. Go to the FlareSolverr configuration section and enter your Byparr URL.

The API is compatible, so no code changes are required on the Jackett side.

Troubleshooting Common Issues

Container Won't Start

Check the logs for error messages:

docker logs byparr

Common causes include port conflicts (another service using 8191) and insufficient memory. Byparr needs at least 512MB RAM to run the browser.
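If you suspect a port conflict, check what's already listening on 8191 (lsof ships with most Linux and macOS systems):

lsof -i :8191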

Challenge Not Solved

Some websites have particularly aggressive protection. Try these steps:

  1. Increase the timeout to 120000ms
  2. Add a proxy server
  3. Check if the target site is actually using Cloudflare

Not all "checking your browser" pages are Cloudflare. Some sites use custom protection that Byparr can't bypass.
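One quick way to check is to inspect the response headers: Cloudflare-fronted sites typically return a Server: cloudflare header and a CF-RAY ID.

curl -sI https://example.com | grep -iE "server|cf-ray"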

Running the Test Suite

If you're having persistent issues, run the built-in tests (this builds from the local Dockerfile, so you'll need a clone of the Byparr repository):

docker build --target test .

This builds a test image and runs the test suite. If tests pass, the issue is likely site-specific rather than a Byparr problem.

Local Installation (Without Docker)

For troubleshooting or development, you can run Byparr locally:

# Install uv package manager
pip install uv

# Clone the repository
git clone https://github.com/ThePhaseless/Byparr.git
cd Byparr

# Install dependencies
uv sync

# Run Byparr
uv run main.py

Local installation gives you more control and makes debugging easier.

Advanced Configuration Options

Byparr supports several environment variables for fine-tuning:

Variable         Default   Description
HOST             0.0.0.0   IP address to bind the server
PORT             8191      Port number for the API
PROXY_SERVER     None      Proxy URL (protocol://host:port)
PROXY_USERNAME   None      Username for proxy authentication
PROXY_PASSWORD   None      Password for proxy authentication
LOG_LEVEL        INFO      Logging verbosity (DEBUG, INFO, WARNING, ERROR)

Binding to Specific Interfaces

To restrict access to localhost only:

docker run -d \
  --name byparr \
  -p 127.0.0.1:8191:8191 \
  ghcr.io/thephaseless/byparr:latest

Binding the host side of the port mapping to 127.0.0.1 prevents external access to your Byparr instance. Inside the container, the server should keep listening on 0.0.0.0 (the default) or the port mapping won't work; the HOST variable is mainly useful for local, non-Docker installs.

Custom Port Configuration

If port 8191 conflicts with another service:

docker run -d \
  --name byparr \
  -p 9999:9999 \
  -e PORT=9999 \
  ghcr.io/thephaseless/byparr:latest

Remember to update any scripts or configurations that reference the port.
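Alternatively, leave the container's internal port at the default and remap only the host side; no PORT variable is needed:

docker run -d \
  --name byparr \
  -p 9999:8191 \
  ghcr.io/thephaseless/byparr:latest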

Byparr vs FlareSolverr

Both tools solve the same problem, but Byparr offers several advantages:

Active Development: Byparr receives regular updates. FlareSolverr has had periods of stagnation.

Modern Browser Engine: Byparr uses Camoufox, a hardened Firefox build designed to avoid detection. FlareSolverr uses Selenium with undetected-chromedriver.

Better API Documentation: The FastAPI backend provides interactive Swagger docs at /docs.

Smaller Image Size: The Byparr Docker image is approximately 1.1GB, smaller than FlareSolverr.

Drop-in Replacement: If you're already using FlareSolverr, switching requires only changing the server URL. The API is compatible.

For new projects, Byparr is the recommended choice. For existing setups, migration is simple since both tools use the same API format.

Building a Complete Scraping Pipeline

Let's create a more robust scraping system that handles errors and caches cookies.

import requests
import time
from typing import Optional

class ByparrClient:
    def __init__(self, base_url: str = "http://localhost:8191"):
        self.base_url = base_url
        self.cached_solution: Optional[dict] = None
        self.cache_time: float = 0
        self.cache_ttl: float = 300  # 5 minutes
    
    def _solution_valid(self) -> bool:
        """Check if cached solution is still valid."""
        if not self.cached_solution:
            return False
        return (time.time() - self.cache_time) < self.cache_ttl

This class caches the Byparr solution to avoid unnecessary browser launches.

    def get_solution(self, url: str, force_refresh: bool = False) -> dict:
        """Get a solution, using cache when possible.

        Note: the cache holds a single solution, so use one
        client instance per target site.
        """
        if self._solution_valid() and not force_refresh:
            return self.cached_solution
        
        payload = {
            "cmd": "request.get",
            "url": url,
            "maxTimeout": 60000
        }
        
        response = requests.post(
            f"{self.base_url}/v1",
            json=payload,
            timeout=90
        )
        
        data = response.json()
        
        if data.get("status") != "ok":
            raise Exception(f"Byparr error: {data.get('message')}")
        
        self.cached_solution = data["solution"]
        self.cache_time = time.time()
        
        return self.cached_solution

The caching mechanism reduces load on Byparr and speeds up your scraping.

    def make_request(self, url: str, method: str = "GET", **kwargs) -> requests.Response:
        """Make an authenticated request using Byparr cookies."""
        solution = self.get_solution(url)
        
        cookies = {c["name"]: c["value"] for c in solution["cookies"]}
        headers = kwargs.pop("headers", {})
        headers["User-Agent"] = solution["userAgent"]
        
        return requests.request(
            method,
            url,
            cookies=cookies,
            headers=headers,
            **kwargs
        )

Now you can use the client like this:

client = ByparrClient()

# First request triggers Byparr
response = client.make_request("https://example-protected-site.com/page1")
print(response.status_code)

# Second request uses cached cookies
response = client.make_request("https://example-protected-site.com/page2")
print(response.status_code)

FAQ

Does Byparr work with all anti-bot systems?

Byparr primarily targets Cloudflare protection. It may work with other systems, but results vary. Report successful bypasses to the GitHub repository so the maintainers know what works.

Is Byparr legal?

Byparr itself is legal software. However, how you use it matters. Always respect website terms of service and robots.txt files. Scraping copyrighted content or bypassing paywalls may violate laws in your jurisdiction.

Can I run multiple Byparr instances?

Yes. Run multiple containers on different ports if you need parallel processing:

docker run -d --name byparr1 -p 8191:8191 ghcr.io/thephaseless/byparr:latest
docker run -d --name byparr2 -p 8192:8191 ghcr.io/thephaseless/byparr:latest
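A minimal way to spread load across instances is client-side round-robin. Here's a sketch in Python; the endpoints match the two containers above:

import itertools
import requests

BYPARR_ENDPOINTS = itertools.cycle([
    "http://localhost:8191/v1",
    "http://localhost:8192/v1",
])

def solve_round_robin(url: str) -> dict:
    """Send each request to the next Byparr instance in rotation."""
    endpoint = next(BYPARR_ENDPOINTS)
    payload = {"cmd": "request.get", "url": url, "maxTimeout": 60000}
    response = requests.post(endpoint, json=payload, timeout=90)
    data = response.json()
    if data.get("status") != "ok":
        raise Exception(f"Byparr error: {data.get('message')}")
    return data["solution"]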

How much memory does Byparr need?

Plan for at least 512MB per instance. The browser consumes memory when rendering pages. If you're processing many concurrent requests, increase the allocation.
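If you want to enforce a cap in Docker, you can limit the container's memory; the 1g figure here is an arbitrary starting point, not a Byparr requirement:

docker run -d --name byparr -p 8191:8191 --memory=1g ghcr.io/thephaseless/byparr:latest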

Why does Byparr sometimes fail?

Cloudflare continuously updates its detection methods. Byparr's success depends on:

  • Current detection techniques used by the target site
  • Your IP reputation
  • Proxy quality (if using proxies)
  • Request patterns and frequency

No tool guarantees 100% success against modern anti-bot systems.

Can I use Byparr on a NAS device?

Support for NAS devices like Synology is limited. The ARM64 images work on some devices, but resource constraints and Docker implementation differences can cause issues. Test thoroughly before relying on it for production use.

Error Handling and Retry Logic

Production scrapers need robust error handling. Here's a pattern that implements retries with exponential backoff:

import time
import random

def solve_with_retry(target_url: str, max_retries: int = 3) -> dict:
    """
    Attempt to solve a challenge with automatic retries.
    """
    last_error = None
    
    for attempt in range(max_retries):
        try:
            solution = get_cloudflare_cookies(target_url)
            return solution
        except Exception as e:
            last_error = e
            wait_time = (2 ** attempt) + random.uniform(0, 1)
            print(f"Attempt {attempt + 1} failed, waiting {wait_time:.2f}s")
            time.sleep(wait_time)
    
    raise Exception(f"All {max_retries} attempts failed: {last_error}")

The exponential backoff prevents hammering the API during temporary failures.

Each retry waits longer than the previous one. Adding random jitter prevents thundering herd problems when multiple scripts retry simultaneously.

Handling Specific Error Types

Different errors require different responses:

def handle_byparr_response(data: dict) -> dict:
    """
    Parse response and handle different error conditions.
    """
    status = data.get("status")
    message = data.get("message", "")
    
    if status == "ok":
        return data["solution"]
    
    if "timeout" in message.lower():
        # Challenge took too long - might need more time
        raise TimeoutError("Challenge solving timed out")
    
    if "no browser" in message.lower():
        # Browser failed to start - infrastructure issue
        raise RuntimeError("Browser initialization failed")
    
    # Generic failure
    raise Exception(f"Unknown error: {message}")

Categorizing errors lets you respond appropriately. Timeouts might warrant a retry with longer timeout. Browser failures might indicate a Docker restart is needed.
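A caller can then branch on the exception type. A sketch, assuming data and target_url come from the surrounding scraper and solve_with_retry is the helper defined earlier:

try:
    solution = handle_byparr_response(data)
except TimeoutError:
    # Worth one more attempt, possibly with a larger maxTimeout
    solution = solve_with_retry(target_url)
except RuntimeError:
    # Browser failure: restart the Byparr container before retrying
    raise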

Session Management for Multiple Requests

When scraping multiple pages on the same domain, reuse the same cookies. Creating new cookies for every request is slow and may trigger additional challenges.

import time
from typing import Optional

class SessionManager:
    def __init__(self):
        self.sessions = {}  # domain -> (solution, timestamp)
        self.ttl = 600  # 10 minute TTL
    
    def get_session(self, domain: str) -> Optional[dict]:
        """Get cached session or None if expired."""
        if domain not in self.sessions:
            return None
        
        solution, timestamp = self.sessions[domain]
        if time.time() - timestamp > self.ttl:
            del self.sessions[domain]
            return None
        
        return solution
    
    def set_session(self, domain: str, solution: dict):
        """Cache a session for a domain."""
        self.sessions[domain] = (solution, time.time())

Extract the domain from URLs automatically:

from urllib.parse import urlparse

def get_domain(url: str) -> str:
    """Extract domain from URL."""
    parsed = urlparse(url)
    return parsed.netloc

Now your scraper reuses sessions efficiently:

manager = SessionManager()

def smart_scrape(url: str) -> str:
    domain = get_domain(url)
    solution = manager.get_session(domain)
    
    if not solution:
        solution = get_cloudflare_cookies(url)
        manager.set_session(domain, solution)
    
    return scrape_with_cookies(solution, url)

This pattern significantly reduces the number of browser launches needed.

Monitoring and Logging

Add logging to track success rates and identify problems:

import logging

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

class MetricsTracker:
    def __init__(self):
        self.total_requests = 0
        self.successful_requests = 0
        self.failed_requests = 0
        self.total_time = 0
    
    def record_success(self, duration: float):
        self.total_requests += 1
        self.successful_requests += 1
        self.total_time += duration
        logger.info(f"Success in {duration:.2f}s")
    
    def record_failure(self, error: str):
        self.total_requests += 1
        self.failed_requests += 1
        logger.error(f"Failed: {error}")
    
    def get_stats(self) -> dict:
        success_rate = 0
        if self.total_requests > 0:
            success_rate = self.successful_requests / self.total_requests
        
        avg_time = 0
        if self.successful_requests > 0:
            avg_time = self.total_time / self.successful_requests
        
        return {
            "total": self.total_requests,
            "success_rate": f"{success_rate:.1%}",
            "avg_time": f"{avg_time:.2f}s"
        }

Wrap your API calls to collect metrics:

metrics = MetricsTracker()

def tracked_solve(url: str) -> dict:
    start = time.time()
    try:
        solution = get_cloudflare_cookies(url)
        metrics.record_success(time.time() - start)
        return solution
    except Exception as e:
        metrics.record_failure(str(e))
        raise

Review metrics periodically to spot degradation before it becomes critical.
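For example, log the aggregate stats at a fixed interval; the every-100-requests cadence here is arbitrary:

if metrics.total_requests % 100 == 0:
    logger.info(metrics.get_stats())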

Best Practices for Production Use

Rate Limiting

Don't overwhelm the target site. Implement delays between requests:

import time
from collections import deque

class RateLimiter:
    def __init__(self, requests_per_minute: int = 10):
        self.window = 60  # seconds
        self.max_requests = requests_per_minute
        self.timestamps = deque()
    
    def wait_if_needed(self):
        now = time.time()
        
        # Remove old timestamps
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        
        # Wait if at capacity
        if len(self.timestamps) >= self.max_requests:
            sleep_time = self.window - (now - self.timestamps[0])
            if sleep_time > 0:
                time.sleep(sleep_time)
        
        self.timestamps.append(time.time())

Ten requests per minute is a reasonable starting point. Adjust based on the target site's tolerance.
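Combining the limiter with the retry helper from earlier gives a polite solver (a sketch):

limiter = RateLimiter(requests_per_minute=10)

def polite_solve(url: str) -> dict:
    """Respect the rate limit, then solve with retries."""
    limiter.wait_if_needed()
    return solve_with_retry(url)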

Refreshing Expired Cookies

Cloudflare cookies expire after a period of inactivity. Monitor response status codes:

def make_request_with_solution(url: str, solution: dict) -> requests.Response:
    """Like scrape_with_cookies, but returns the full Response object."""
    cookies = {c["name"]: c["value"] for c in solution["cookies"]}
    headers = {"User-Agent": solution["userAgent"]}
    return requests.get(url, cookies=cookies, headers=headers)

def request_with_refresh(url: str, session_manager, max_retries: int = 2):
    """Make request, refreshing cookies if they expired."""
    domain = get_domain(url)
    
    for attempt in range(max_retries):
        solution = session_manager.get_session(domain)
        
        if not solution:
            solution = get_cloudflare_cookies(url)
            session_manager.set_session(domain, solution)
        
        response = make_request_with_solution(url, solution)
        
        if response.status_code == 403:
            # Cookies expired, force refresh
            session_manager.sessions.pop(domain, None)
            continue
        
        return response
    
    raise Exception("Failed after cookie refresh attempts")

A 403 response often indicates expired cookies rather than a permanent block.

Resource Cleanup

Long-running scrapers should clean up Docker resources periodically:

# Remove stopped containers
docker container prune -f

# Clear browser cache
docker exec byparr rm -rf /tmp/browser-cache/*

Schedule these commands with cron for unattended operation.
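For example, a crontab entry that prunes stopped containers nightly at 3 a.m.:

0 3 * * * docker container prune -f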

Conclusion

This guide covered everything you need to run a production-ready scraping system powered by Byparr. The Docker-based setup takes minutes, and the FlareSolverr-compatible API integrates with existing tools and scripts.

Key takeaways:

  • Use Docker for the quickest setup
  • Always match the returned user agent with your cookies
  • Add proxies for improved success rates
  • Cache solutions to reduce unnecessary browser launches
  • Implement retry logic with exponential backoff
  • Monitor success rates to catch problems early
  • The API works as a drop-in replacement for FlareSolverr

For production scraping at scale, combine the patterns shown here with rotating residential proxies. The session management and caching code prevents overloading both your infrastructure and target sites.

Check the GitHub repository for the latest updates and to report issues.