Fastly's antibot system just blocked your scraper again. You've tried switching user agents, adding delays, even rotating through a handful of proxies. Nothing works.
Here's the thing: Fastly's detection has evolved significantly. The platform now combines JA3/JA4 TLS fingerprinting, behavioral analysis, and edge-based challenges to identify automated traffic in milliseconds.
I've spent the past year testing bypass methods against Fastly-protected sites, tracking which techniques work and which get you blocked instantly. This guide covers the 7 methods that consistently bypass Fastly's protections in 2026.
What Is Fastly Antibot and Why Is It Blocking You?
Fastly Antibot is a multi-layered bot management system that operates at the network edge. Unlike traditional WAFs that only check HTTP headers, Fastly analyzes the entire request lifecycle from TLS handshake to post-render behavior.
The system earned a 2025 DEVIES Award for its detection capabilities. It blocks scrapers through several mechanisms working in parallel.
Fastly's Detection Layers
TLS Fingerprinting (JA3/JA4): Fastly captures your TLS ClientHello packet and generates a fingerprint hash. Python's requests library has a completely different fingerprint than Chrome. This detection happens before any HTTP data is exchanged.
Behavioral Analysis: Request timing, mouse movements (via client-side JavaScript), and navigation patterns are monitored. Bots that hit endpoints in predictable sequences get flagged.
IP Reputation: Fastly maintains databases of known bot IPs, datacenter ranges, and proxy networks. A fresh residential IP passes where an AWS IP fails.
JavaScript Challenges: Dynamic challenges require browsers to execute JavaScript and prove human interaction. Simple HTTP clients can't handle these.
Header Order Analysis: Fastly checks if HTTP headers arrive in the exact order a real browser would send them. Python libraries often send headers in different sequences than Chrome.
Understanding these layers is critical. You need to address all of them simultaneously for consistent access.
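To make the first layer concrete: a JA3 fingerprint is just an MD5 hash over fields extracted from the ClientHello. A minimal sketch of the construction (the field values below are illustrative, not a real Chrome fingerprint):

```python
import hashlib

# JA3 input string: SSLVersion,Ciphers,Extensions,EllipticCurves,ECPointFormats
# (decimal values, dash-joined within each field; values here are illustrative)
ja3_string = "771,4865-4866-4867-49195,0-23-65281-10-11,29-23-24,0"
ja3_hash = hashlib.md5(ja3_string.encode()).hexdigest()
print(ja3_hash)
```

Because the hash covers cipher and extension *order*, two clients offering the same ciphers in a different order produce different JA3 hashes. That is exactly how `requests` and Chrome are told apart before a single HTTP byte is sent.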
Method 1: TLS Fingerprint Spoofing with curl_cffi
Standard Python HTTP clients like requests or httpx immediately expose themselves through TLS fingerprinting. Their JA3 hashes are well-documented and blocklisted.
curl_cffi solves this by wrapping curl-impersonate, a modified cURL that replicates real browser TLS fingerprints exactly.
Installation
```bash
pip install curl_cffi
```
Basic Implementation
```python
from curl_cffi import requests

# Make a request impersonating Chrome 131
response = requests.get(
    "https://fastly-protected-site.com",
    impersonate="chrome131"
)

print(response.status_code)
print(response.text[:500])
```
The impersonate parameter tells curl_cffi which browser's TLS fingerprint to replicate. This single parameter handles the TLS version, cipher suites, extensions, and ALPN values automatically.
Advanced Configuration with Sessions
```python
from curl_cffi import requests as curl_requests

# Create a session for persistent connections
session = curl_requests.Session()

# Configure custom headers alongside impersonation
headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    'Accept-Encoding': 'gzip, deflate, br',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'Sec-Fetch-Dest': 'document',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-Site': 'none',
    'Sec-Fetch-User': '?1',
}

response = session.get(
    "https://fastly-protected-site.com/api/data",
    impersonate="chrome131",
    headers=headers,
    timeout=30
)

# Cookies are maintained across requests on the session
print(session.cookies.get_dict())
```
The session object maintains cookies and connection state between requests. This mimics how real browsers handle persistent connections.
Supported Browser Fingerprints
curl_cffi supports multiple browser versions:
- chrome99 through chrome131
- safari15_3 through safari18_4
- edge101 through edge131
- firefox109 through firefox120
Choose a fingerprint matching the User-Agent you're sending. Mismatches between TLS fingerprint and User-Agent header trigger detection.
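One way to avoid that mismatch is to pin User-Agent strings to impersonation targets. The mapping below is a sketch; the exact version strings are assumptions you should verify against the curl_cffi release you install:

```python
# Illustrative mapping -- keep these version strings in sync with your
# curl_cffi release and the impersonate targets it actually ships.
UA_FOR_FINGERPRINT = {
    "chrome131": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
                  "(KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"),
    "firefox120": ("Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:120.0) "
                   "Gecko/20100101 Firefox/120.0"),
}

def headers_for(fingerprint):
    """Build headers whose User-Agent agrees with the TLS fingerprint."""
    return {"User-Agent": UA_FOR_FINGERPRINT[fingerprint]}

print(headers_for("chrome131")["User-Agent"])
```

Centralizing the pairing in one place means you can rotate fingerprints without ever sending a Firefox User-Agent over a Chrome TLS handshake.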
Adding Proxy Support
```python
from curl_cffi import requests

proxy = "http://username:password@proxy-server.com:8080"
proxies = {
    "http": proxy,
    "https": proxy
}

response = requests.get(
    "https://fastly-protected-site.com",
    impersonate="chrome131",
    proxies=proxies
)
```
For Fastly bypass, residential proxies from providers like Roundproxies work significantly better than datacenter IPs. The IP reputation check happens before TLS fingerprinting.
Method 2: Puppeteer with Stealth Plugin
When you need full JavaScript execution, Puppeteer with the stealth plugin remains effective. The plugin patches over 15 automation indicators that Fastly's client-side detection looks for.
Setup
```bash
npm install puppeteer puppeteer-extra puppeteer-extra-plugin-stealth
```
Basic Stealth Implementation
```javascript
const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

// Enable stealth plugin with all evasion modules
puppeteer.use(StealthPlugin());

(async () => {
  const browser = await puppeteer.launch({
    headless: 'new', // Use new headless mode
    args: [
      '--no-sandbox',
      '--disable-setuid-sandbox',
      '--disable-blink-features=AutomationControlled',
      '--window-size=1920,1080'
    ]
  });

  const page = await browser.newPage();

  // Set a realistic viewport
  await page.setViewport({ width: 1920, height: 1080 });

  // Navigate to the Fastly-protected site
  await page.goto('https://fastly-protected-site.com', {
    waitUntil: 'networkidle2',
    timeout: 60000
  });

  // Extract page content
  const content = await page.content();
  console.log(content);

  await browser.close();
})();
```
The AutomationControlled flag is particularly important. Fastly's client-side JavaScript specifically checks for this blink feature.
Custom Stealth Configuration
```javascript
const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

// Configure specific evasion modules
const stealth = StealthPlugin({
  webglVendor: "Google Inc. (Intel)",
  webglRenderer: "Intel Iris OpenGL Engine",
  navigator: {
    platform: "MacIntel",
    languages: ["en-US", "en"]
  }
});

puppeteer.use(stealth);

(async () => {
  const browser = await puppeteer.launch({
    headless: 'new',
    args: [
      '--disable-blink-features=AutomationControlled',
      '--disable-dev-shm-usage',
      '--disable-accelerated-2d-canvas',
      '--no-first-run',
      '--no-zygote'
    ]
  });

  const page = await browser.newPage();

  // Additional fingerprint modifications
  await page.evaluateOnNewDocument(() => {
    // Report a non-empty plugins list
    Object.defineProperty(navigator, 'plugins', {
      get: () => [1, 2, 3, 4, 5]
    });
    // Hide the webdriver flag
    Object.defineProperty(navigator, 'webdriver', {
      get: () => false
    });
  });

  await page.goto('https://fastly-protected-site.com');

  // Add a human-like delay
  await page.waitForTimeout(2000 + Math.random() * 3000);

  const data = await page.evaluate(() => {
    return document.body.innerHTML;
  });

  console.log(data);
  await browser.close();
})();
```
The evaluateOnNewDocument function runs before any page JavaScript loads. This ensures your patches are in place before Fastly's detection scripts execute.
Mouse Movement Simulation
```javascript
const puppeteer = require('puppeteer-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

puppeteer.use(StealthPlugin());

// Puppeteer does not expose the cursor position, so track it ourselves
let mousePos = { x: 0, y: 0 };

// Human-like mouse movement function
async function humanMove(page, x, y) {
  const steps = 25 + Math.floor(Math.random() * 10);
  const { x: startX, y: startY } = mousePos;
  for (let i = 0; i <= steps; i++) {
    // Ease-in-out interpolation for natural movement
    const progress = i / steps;
    const eased = progress < 0.5
      ? 2 * progress * progress
      : -1 + (4 - 2 * progress) * progress;
    const currentX = startX + (x - startX) * eased;
    const currentY = startY + (y - startY) * eased;
    await page.mouse.move(currentX, currentY);
    await page.waitForTimeout(10 + Math.random() * 20);
  }
  mousePos = { x, y };
}

(async () => {
  const browser = await puppeteer.launch({ headless: 'new' });
  const page = await browser.newPage();

  await page.goto('https://fastly-protected-site.com');

  // Simulate mouse movement to a button
  await humanMove(page, 500, 300);
  await page.waitForTimeout(500);

  // Click after the natural movement
  await page.click('#target-button');

  await browser.close();
})();
```
Fastly's behavioral analysis tracks mouse entropy. The eased interpolation creates natural acceleration and deceleration patterns, which pass detection far more often than the instant, straight-line jumps typical of automation.
Method 3: Playwright Stealth with Patchright
Playwright offers better performance than Puppeteer for parallel scraping. The Patchright library patches Playwright to avoid CDP (Chrome DevTools Protocol) detection.
Standard Playwright sends Runtime.enable CDP commands that Fastly specifically detects. Patchright eliminates this leak.
Installation
```bash
pip install patchright
patchright install chrome
```
Basic Implementation
```python
from patchright.sync_api import sync_playwright
import time
import random

def scrape_fastly_site(url):
    with sync_playwright() as p:
        # Launch real Chrome, not Chromium, for better stealth
        browser = p.chromium.launch_persistent_context(
            user_data_dir="/tmp/patchright_profile",
            channel="chrome",
            headless=False,  # Headed mode for maximum stealth
            no_viewport=True
        )
        page = browser.new_page()

        # Navigate with realistic timing
        page.goto(url)

        # Random delay to simulate reading
        time.sleep(random.uniform(2, 5))

        content = page.content()
        browser.close()
        return content

# Usage
html = scrape_fastly_site("https://fastly-protected-site.com")
print(html[:1000])
```
Using launch_persistent_context with a real user data directory creates a browser profile that persists cookies and local storage. This builds trust over multiple sessions.
Async Implementation for Scale
```python
import asyncio
import random

from patchright.async_api import async_playwright

async def scrape_page(browser, url):
    page = await browser.new_page()
    try:
        await page.goto(url, wait_until='networkidle')
        await asyncio.sleep(random.uniform(1, 3))
        content = await page.content()
        return content
    finally:
        await page.close()

async def main():
    async with async_playwright() as p:
        browser = await p.chromium.launch_persistent_context(
            user_data_dir="/tmp/patchright_async",
            channel="chrome",
            headless=False,
            no_viewport=True
        )

        urls = [
            "https://fastly-protected-site.com/page1",
            "https://fastly-protected-site.com/page2",
            "https://fastly-protected-site.com/page3"
        ]

        # Scrape the pages concurrently
        tasks = [scrape_page(browser, url) for url in urls]
        results = await asyncio.gather(*tasks)

        for url, content in zip(urls, results):
            print(f"{url}: {len(content)} bytes")

        await browser.close()

asyncio.run(main())
```
Patchright passes all major detection tests including CreepJS, BrowserScan, and Rebrowser's bot detector. The key is using real Chrome (not Chromium) with no_viewport=True.
Method 4: SeleniumBase Undetected Chrome Mode
SeleniumBase's UC Mode builds on undetected-chromedriver with additional evasions. It disconnects chromedriver during sensitive actions, preventing detection during clicks and form submissions.
Installation
```bash
pip install seleniumbase
```
Basic UC Mode
```python
from seleniumbase import Driver

# Initialize with undetected Chrome mode
driver = Driver(uc=True, headless=False)

# Use the special UC open method for stealthy navigation
driver.uc_open_with_reconnect("https://fastly-protected-site.com",
                              reconnect_time=6)

# Wait for the page to fully load
driver.sleep(3)

# Get page content
content = driver.get_page_source()
print(content[:1000])

driver.quit()
```
The uc_open_with_reconnect method disconnects chromedriver before loading the URL, then reconnects after a delay. This prevents detection during the critical page load phase.
Handling Clicks Without Detection
```python
from seleniumbase import Driver

driver = Driver(uc=True, headless=False)
driver.uc_open_with_reconnect("https://fastly-protected-site.com")

# Wait for the button to appear
driver.wait_for_element("#submit-button", timeout=10)

# Use uc_click for stealthy clicking;
# this disconnects chromedriver before the click
driver.uc_click("#submit-button")

# Give the page time to settle after the action
driver.sleep(3)

content = driver.get_page_source()
driver.quit()
```
The uc_click method schedules your click, disconnects chromedriver from Chrome, waits, then reconnects. Fastly's detection typically only runs during page loads and specific events.
Full Workflow Example
```python
from seleniumbase import Driver
import random

def scrape_with_seleniumbase(url):
    driver = Driver(
        uc=True,
        headless=False,
        agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36"
    )
    try:
        # Open with stealth
        driver.uc_open_with_reconnect(url, reconnect_time=5)

        # Random scrolling to simulate reading
        for _ in range(random.randint(2, 5)):
            scroll_amount = random.randint(200, 500)
            driver.execute_script(f"window.scrollBy(0, {scroll_amount});")
            driver.sleep(random.uniform(0.5, 1.5))

        # Extract data
        content = driver.get_page_source()
        return content
    finally:
        driver.quit()

html = scrape_with_seleniumbase("https://fastly-protected-site.com")
```
The random scrolling adds behavioral authenticity. Fastly's client-side scripts track scroll patterns and flag pages where no scrolling occurs.
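The scroll simulation can be factored out of the driver code entirely. Below is a sketch that generates a jittered scroll schedule (distances plus pauses) which any automation driver could replay; the page and viewport heights are placeholder values:

```python
import random

def scroll_schedule(page_height=3000, viewport=900):
    """Generate (scroll_by_px, pause_seconds) steps down a page with jitter."""
    position, steps = 0, []
    while position < page_height - viewport:
        delta = random.randint(150, 450)            # uneven scroll distances
        pause = round(random.uniform(0.4, 1.8), 2)  # uneven reading pauses
        steps.append((delta, pause))
        position += delta
    return steps

for delta, pause in scroll_schedule():
    print(f"scrollBy(0, {delta}); sleep {pause}s")
```

Keeping the schedule separate from the driver means the same behavioral profile works with SeleniumBase, Puppeteer, or Playwright.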
Method 5: Residential Proxy Rotation
Even with perfect TLS fingerprinting, Fastly blocks datacenter IPs aggressively. IP reputation is the first detection layer.
Residential proxies route through real ISP addresses, appearing as legitimate home users. Combined with proper fingerprinting, they bypass IP reputation checks.
Python Implementation with curl_cffi
```python
from curl_cffi import requests
import random

# Residential proxy pool (example format)
proxy_pool = [
    "http://user:pass@residential1.roundproxies.com:8080",
    "http://user:pass@residential2.roundproxies.com:8080",
    "http://user:pass@residential3.roundproxies.com:8080",
]

def get_with_proxy_rotation(url, max_retries=3):
    for attempt in range(max_retries):
        proxy = random.choice(proxy_pool)
        proxies = {"http": proxy, "https": proxy}
        try:
            response = requests.get(
                url,
                impersonate="chrome131",
                proxies=proxies,
                timeout=30
            )
            if response.status_code == 200:
                return response.text
            # Retry with a different proxy on failure
            print(f"Attempt {attempt + 1} failed with status {response.status_code}")
        except Exception as e:
            print(f"Attempt {attempt + 1} error: {e}")
    return None

content = get_with_proxy_rotation("https://fastly-protected-site.com")
```
Rotate proxies between requests, not during a session. Changing IPs mid-session triggers Fastly's session anomaly detection.
Session-Sticky Proxy Management
```python
from curl_cffi import requests
import hashlib
import random

class ProxyManager:
    def __init__(self, proxy_list):
        self.proxies = proxy_list
        self.session_map = {}

    def get_proxy_for_session(self, session_id):
        """Return a consistent proxy for a given session."""
        if session_id not in self.session_map:
            # Hash the session ID so it always maps to the same proxy
            index = int(hashlib.md5(session_id.encode()).hexdigest(), 16) % len(self.proxies)
            self.session_map[session_id] = self.proxies[index]
        return self.session_map[session_id]

    def rotate_session(self, session_id):
        """Force the session onto a different proxy."""
        current = self.session_map.pop(session_id, None)
        # The hash mapping is deterministic, so explicitly pick a different proxy
        choices = [p for p in self.proxies if p != current] or self.proxies
        self.session_map[session_id] = random.choice(choices)

# Usage
proxy_manager = ProxyManager([
    "http://user:pass@resi1.roundproxies.com:8080",
    "http://user:pass@resi2.roundproxies.com:8080",
    "http://user:pass@resi3.roundproxies.com:8080",
])

session = requests.Session()

# Same proxy for the entire workflow
proxy = proxy_manager.get_proxy_for_session("user_workflow_123")
proxies = {"http": proxy, "https": proxy}

response = session.get(
    "https://fastly-protected-site.com/login",
    impersonate="chrome131",
    proxies=proxies
)
```
Session-sticky proxies maintain the same IP throughout a logical workflow (like login → navigate → scrape). This prevents the IP switching that Fastly flags as suspicious.
Method 6: Go-Based Fastly Solver
For high-performance scraping, Go offers memory efficiency and concurrency that Python can't match. The fastify-solver package handles Fastly's JavaScript challenges server-side.
Installation
```bash
go get github.com/pagpeter/fastify/pkg/solver
```
Basic Usage
```go
package main

import (
	"fmt"

	"github.com/pagpeter/fastify/pkg/solver"
)

func main() {
	// Create a solver for the Fastly-protected URL
	s, err := solver.NewFastifySolver("https://fastly-protected-site.com/data")
	if err != nil {
		fmt.Printf("Error creating solver: %v\n", err)
		return
	}

	// Solve the challenge and get a valid cookie
	cookie, err := s.Solve()
	if err != nil {
		fmt.Printf("Error solving challenge: %v\n", err)
		return
	}

	fmt.Printf("Valid cookie obtained: %s\n", cookie)
}
```
The solver executes Fastly's JavaScript challenge code and returns valid session cookies. You can then use these cookies with any HTTP client.
Integration with HTTP Requests
```go
package main

import (
	"fmt"
	"io"
	"net/http"

	"github.com/pagpeter/fastify/pkg/solver"
)

func scrapeWithFastlyBypass(targetURL string) (string, error) {
	// First, solve the challenge
	s, err := solver.NewFastifySolver(targetURL)
	if err != nil {
		return "", err
	}

	cookieValue, err := s.Solve()
	if err != nil {
		return "", err
	}

	// Create an HTTP client that sends the solved cookie
	client := &http.Client{}
	req, err := http.NewRequest("GET", targetURL, nil)
	if err != nil {
		return "", err
	}

	// Set the bypass cookie
	req.Header.Set("Cookie", cookieValue)
	req.Header.Set("User-Agent", "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36")
	req.Header.Set("Accept", "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")

	resp, err := client.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}

	return string(body), nil
}

func main() {
	content, err := scrapeWithFastlyBypass("https://fastly-protected-site.com")
	if err != nil {
		fmt.Printf("Error: %v\n", err)
		return
	}
	fmt.Printf("Content length: %d bytes\n", len(content))
}
```
Go's goroutines allow concurrent solving of multiple Fastly challenges. For high-volume scraping, this approach significantly outperforms Python browser automation.
Method 7: Mobile API Endpoint Discovery
Many Fastly-protected websites maintain separate mobile API infrastructure. These endpoints often have lighter bot protection since they expect traffic from controlled mobile apps.
Finding Mobile APIs
Traffic Analysis with mitmproxy:
```bash
# Install mitmproxy
pip install mitmproxy

# Start the proxy
mitmproxy --mode regular@8080
```
Configure your phone to use this proxy, then browse the target app. Watch for API calls to subdomains like:
- api.example.com
- m.example.com
- mobile.example.com
- app-api.example.com
Common Mobile API Patterns
```python
from curl_cffi import requests

# Common mobile API URL patterns to test
api_patterns = [
    "https://api.{domain}/mobile/v2/",
    "https://m.{domain}/api/",
    "https://mobile-api.{domain}/",
    "https://{domain}/api/v3/mobile/",
]

def discover_mobile_endpoints(base_domain):
    found_endpoints = []
    for pattern in api_patterns:
        url = pattern.format(domain=base_domain)
        try:
            response = requests.get(
                url,
                impersonate="chrome131",
                headers={
                    "User-Agent": "Mozilla/5.0 (Linux; Android 14; Pixel 8) AppleWebKit/537.36",
                    "Accept": "application/json",
                },
                timeout=10
            )
            if response.status_code != 404:
                found_endpoints.append({
                    "url": url,
                    "status": response.status_code,
                    "content_type": response.headers.get("content-type", "")
                })
        except Exception:
            continue
    return found_endpoints

# Usage
endpoints = discover_mobile_endpoints("example.com")
for ep in endpoints:
    print(f"{ep['url']} - {ep['status']} - {ep['content_type']}")
```
Accessing Discovered APIs
```python
from curl_cffi import requests

def fetch_mobile_api(api_url, endpoint):
    headers = {
        "User-Agent": "ExampleApp/5.2.1 (Android 14; Pixel 8 Pro)",
        "Accept": "application/json",
        "Accept-Language": "en-US",
        "X-App-Version": "5.2.1",
        "X-Device-ID": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
        "X-Platform": "android"
    }
    response = requests.get(
        f"{api_url}{endpoint}",
        impersonate="chrome131",
        headers=headers,
        timeout=30
    )
    return response.json()

# Example: fetch product data from the mobile API
data = fetch_mobile_api(
    "https://api.example.com/mobile/v2/",
    "products?category=electronics&page=1"
)
print(data)
```
Mobile APIs return structured JSON data, making parsing far simpler than scraping HTML. The protection is often minimal because apps bundle API keys and authentication tokens.
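Once you have JSON, pagination is usually a simple loop. The sketch below assumes a hypothetical response shape (`items` plus `next_page`); check your target's actual schema before relying on it:

```python
def paginate(fetch_page, first_page=1, max_pages=50):
    """Collect items across pages until the API stops returning a next page."""
    items, page = [], first_page
    while page is not None and page <= max_pages:
        data = fetch_page(page)
        items.extend(data.get("items", []))
        page = data.get("next_page")  # hypothetical field; adjust to the real schema
    return items

# Stand-in for a real API call such as fetch_mobile_api above:
fake_api = {
    1: {"items": ["a", "b"], "next_page": 2},
    2: {"items": ["c"], "next_page": None},
}
print(paginate(lambda p: fake_api[p]))  # ['a', 'b', 'c']
```

The `max_pages` cap also doubles as rate-limit protection, so a misread pagination field can't send the loop crawling forever.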
Troubleshooting Common Fastly Blocks
403 Forbidden
Your TLS fingerprint or IP is blocked.
Solutions:
- Switch browser fingerprint in curl_cffi
- Move to residential proxies
- Clear cookies and start fresh session
```python
from curl_cffi import requests

url = "https://fastly-protected-site.com"

# Test different fingerprints
fingerprints = ["chrome131", "chrome124", "safari18_4", "edge131"]

for fp in fingerprints:
    response = requests.get(url, impersonate=fp)
    print(f"{fp}: {response.status_code}")
```
429 Too Many Requests
You're hitting rate limits.
Solutions:
- Implement exponential backoff
- Expand proxy pool
- Add random delays between requests
```python
import time
import random

from curl_cffi import requests

def request_with_backoff(url, max_retries=5):
    for attempt in range(max_retries):
        response = requests.get(url, impersonate="chrome131")
        if response.status_code == 429:
            # Exponential backoff with jitter
            wait_time = (2 ** attempt) + random.uniform(0, 1)
            print(f"Rate limited. Waiting {wait_time:.2f}s")
            time.sleep(wait_time)
            continue
        return response
    return None
```
JavaScript Challenge Loop
The page keeps challenging without resolving.
Solutions:
- Use a real browser profile with history
- Enable JavaScript rendering
- Check for missing browser features
```python
from seleniumbase import Driver

# Use a persistent profile to build trust
driver = Driver(
    uc=True,
    user_data_dir="/tmp/fastly_profile",
    headless=False
)
```
CAPTCHA Challenges
Fastly detected suspicious behavior and triggered interactive verification.
Solutions:
- Switch to higher-reputation residential proxies
- Reduce request rate significantly
- Implement mouse movement simulation
- Consider CAPTCHA solving services (2Captcha, Anti-Captcha)
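Before routing anything to a solving service, it helps to detect when a response is a challenge page rather than real content, so the pipeline can back off instead of hammering. The marker strings below are guesses; capture one of your target's actual blocked responses and adjust them:

```python
# Marker strings are assumptions -- inspect a real challenge response and tune them
CHALLENGE_MARKERS = ("captcha", "challenge-form", "verify you are human")

def looks_like_challenge(html: str) -> bool:
    """Heuristic: challenge pages tend to be short and contain telltale strings."""
    lowered = html.lower()
    return len(lowered) < 5000 and any(m in lowered for m in CHALLENGE_MARKERS)

print(looks_like_challenge("<title>Please verify you are human</title>"))       # True
print(looks_like_challenge("<html><body>Normal product listing</body></html>")) # False
```

Gate every scrape result through a check like this and you get a clean signal for when to slow down, rotate proxies, or escalate to a solver.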
2026 Best Practices Summary
Layer your defenses. No single technique bypasses all of Fastly's detection. Combine TLS fingerprinting + residential proxies + behavioral mimicry for consistent results.
Respect rate limits. Even with perfect fingerprinting, aggressive scraping triggers detection. Aim for request rates that match human browsing patterns.
Update regularly. Fastly updates detection logic continuously. Fingerprint libraries need regular updates to match current browser versions.
Monitor success rates. Track your bypass success rate over time. A sudden drop indicates Fastly has deployed new detection methods.
Use the right tool for the job. curl_cffi for simple requests, Puppeteer/Playwright for JavaScript-heavy sites, Go for high-volume concurrent scraping.
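For the success-rate monitoring point above, a rolling window over recent requests is enough to catch a detection change early. A minimal sketch:

```python
from collections import deque

class SuccessMonitor:
    """Track the bypass success rate over the last `window` requests."""
    def __init__(self, window=100):
        self.results = deque(maxlen=window)

    def record(self, ok: bool):
        self.results.append(ok)

    def rate(self) -> float:
        return sum(self.results) / len(self.results) if self.results else 0.0

monitor = SuccessMonitor(window=100)
for ok in (True, True, False, True):
    monitor.record(ok)
print(monitor.rate())  # 0.75
```

Alert when the rolling rate drops sharply below its baseline; that is usually the first visible sign that Fastly has shipped a new detection rule.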
Quick Reference: Method Selection Guide
| Use Case | Recommended Method | Why |
|---|---|---|
| Simple API scraping | curl_cffi | Fastest, lowest resource usage |
| JavaScript-rendered content | Puppeteer Stealth | Full browser capabilities |
| High-volume concurrent scraping | Go Fastly Solver | Memory efficient, fast |
| Login-required scraping | SeleniumBase UC | Session persistence |
| Mobile app data | Mobile API Discovery | Often unprotected |
| Maximum stealth | Patchright + Residential Proxies | Passes all detection tests |
Final Thoughts
Fastly's bot management continues to evolve, but the fundamental bypass strategies remain consistent. TLS fingerprinting, behavioral mimicry, and IP reputation management form the core of any successful approach.
The methods in this guide work against current Fastly deployments. Test against your specific target, since protection configurations vary between sites.
Start with curl_cffi for simple requests. Escalate to browser automation only when JavaScript execution is required. Always combine fingerprint spoofing with quality residential proxies.
The cat-and-mouse game between scrapers and bot detection never ends. Stay current with library updates and be prepared to adapt when detection methods change.