Chrome's X-Browser-Validation header is a sneaky integrity check that can block your scraping bots faster than you can say "403 Forbidden".
This undocumented header carries a SHA-1-based hash that lets Google services verify whether the user agent you send actually matches the browser you're running.
In this guide, we'll show you how to reverse-engineer and generate valid X-Browser-Validation headers using the chrome-x-browser-validation-header
toolkit, plus some alternative approaches that might save your bacon when Google tightens the screws.
Understanding Chrome's X-Browser Headers
Before diving into the implementation, let's understand what we're dealing with. Chrome recently started sending these headers:
{
    "x-browser-channel": "stable",
    "x-browser-copyright": "Copyright 2025 Google LLC. All rights reserved.",
    "x-browser-validation": "6h3XF8YcD8syi2FF2BhuE2KllQo=",
    "x-browser-year": "2025"
}
The validation header is a base64-encoded SHA-1 hash that combines:
- A platform-specific API key (hardcoded in Chrome binaries)
- Your full user agent string
This creates a fingerprint that Google services can use to detect when someone's spoofing Chrome's user agent without actually using Chrome.
Step 1: Install the X-Browser-Validation Toolkit
First, grab the toolkit from GitHub and install it in your Python environment:
# Clone the repository
git clone https://github.com/dsekz/chrome-x-browser-validation-header.git
cd chrome-x-browser-validation-header
# Install the package
pip install -e .
Pro tip: If you're running this in production, consider freezing the specific commit hash to avoid breaking changes:
pip install git+https://github.com/dsekz/chrome-x-browser-validation-header.git@<commit-hash>
Step 2: Generate Valid Headers for Your User Agent
Now for the fun part. The toolkit makes generating valid headers dead simple:
from xbv import generate_validation_header
# Your Chrome user agent
ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
# Generate the validation header
header_value = generate_validation_header(ua)
print(f"X-Browser-Validation: {header_value}")
The toolkit automatically selects the correct API key based on your platform. Here's what's happening under the hood:
import base64
import hashlib

# Platform-specific API keys (extracted from Chrome binaries)
API_KEYS = {
    'Windows': 'AIzaSyA2KlwBX3mkFo30om9LUFYQhpqLoa_BNhE',
    'Linux': 'AIzaSyBqJZh-7pA44blAaAkH6490hUFOwX0KCYM',
    'macOS': 'AIzaSyDr2UxVnv_U85AbhhY8XSHSIavUW0DC-sY'
}

# The magic formula (API_KEY is the key for your platform, USER_AGENT is your UA string)
data = API_KEY + USER_AGENT
validation_header = base64.b64encode(hashlib.sha1(data.encode()).digest()).decode()
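If you'd rather not take on the toolkit as a dependency, the formula above is simple enough to reproduce with the standard library alone. A minimal sketch, assuming the Windows key and the key-plus-UA formula shown above are accurate (both could change in future Chrome releases):

```python
import base64
import hashlib

# Windows API key and formula as described above - treat both as assumptions
API_KEY = "AIzaSyA2KlwBX3mkFo30om9LUFYQhpqLoa_BNhE"
UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36")

def validation_header(api_key: str, user_agent: str) -> str:
    """base64(SHA-1(api_key + user_agent)), per the formula above."""
    digest = hashlib.sha1((api_key + user_agent).encode()).digest()
    return base64.b64encode(digest).decode()

print(validation_header(API_KEY, UA))
```

A SHA-1 digest is always 20 bytes, so the base64 output is always 28 characters ending in a single `=`, which is a quick sanity check that your encoding step is right.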
Step 3: Integrate Headers into Your Scraping Setup
With Python Requests
The simplest integration is with the requests library:
import requests
from xbv import generate_validation_header
ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
headers = {
    'User-Agent': ua,
    'X-Browser-Validation': generate_validation_header(ua),
    'X-Browser-Channel': 'stable',
    'X-Browser-Year': '2025',
    'X-Browser-Copyright': 'Copyright 2025 Google LLC. All rights reserved.',
    # Don't forget other Chrome headers
    'sec-ch-ua': '"Chromium";v="138", "Not(A:Brand";v="24", "Google Chrome";v="138"',
    'sec-ch-ua-mobile': '?0',
    'sec-ch-ua-platform': '"Windows"',
}
response = requests.get('https://example.com', headers=headers)
With Selenium/Undetected ChromeDriver
For browser automation, you'll need to intercept and modify requests:
from seleniumwire import webdriver
from xbv import generate_validation_header
def interceptor(request):
    ua = request.headers.get('user-agent')
    if ua:
        request.headers['X-Browser-Validation'] = generate_validation_header(ua)
        request.headers['X-Browser-Channel'] = 'stable'
        request.headers['X-Browser-Year'] = '2025'

driver = webdriver.Chrome()
driver.request_interceptor = interceptor
driver.get('https://example.com')
With Playwright
For Playwright users, use the route interception:
from playwright.sync_api import sync_playwright
from xbv import generate_validation_header
def handle_route(route):
    headers = route.request.headers
    ua = headers.get('user-agent')
    if ua:
        headers['x-browser-validation'] = generate_validation_header(ua)
        headers['x-browser-channel'] = 'stable'
        headers['x-browser-year'] = '2025'
    route.continue_(headers=headers)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.route('**/*', handle_route)
    page.goto('https://example.com')
Step 4: Handle Edge Cases and Platform-Specific Keys
Sometimes the automatic key selection isn't enough. Here's how to handle specific scenarios:
from xbv import generate_validation_header
# Explicitly specify the API key for cross-platform consistency
windows_key = "AIzaSyA2KlwBX3mkFo30om9LUFYQhpqLoa_BNhE"
ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
header_value = generate_validation_header(ua, api_key=windows_key)
# Handle different Chrome versions
def get_chrome_headers(version="138"):
    ua = f"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/{version}.0.0.0 Safari/537.36"
    return {
        'User-Agent': ua,
        'X-Browser-Validation': generate_validation_header(ua),
        'sec-ch-ua': f'"Chromium";v="{version}", "Not(A:Brand";v="24", "Google Chrome";v="{version}"'
    }
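Because the hash covers the entire user-agent string, even a one-character drift between the UA you hash and the UA you actually send produces a different validation value. A quick stdlib-only demonstration of that sensitivity, reusing the formula and Windows key described earlier (both assumptions, not verified against a live Chrome build):

```python
import base64
import hashlib

def validation_header(api_key: str, user_agent: str) -> str:
    # base64(SHA-1(api_key + user_agent)), per the formula shown earlier
    return base64.b64encode(hashlib.sha1((api_key + user_agent).encode()).digest()).decode()

KEY = "AIzaSyA2KlwBX3mkFo30om9LUFYQhpqLoa_BNhE"  # Windows key from above
ua_138 = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
ua_137 = ua_138.replace("138", "137")

# Bumping only the version number yields a completely different hash,
# so regenerate the header every time the UA changes
print(validation_header(KEY, ua_138) != validation_header(KEY, ua_137))  # True
```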
Step 5: Implement Alternative Bypass Methods
When the X-Browser-Validation header isn't enough (or when Chrome updates break things), here are battle-tested alternatives:
Method 1: CDP Connection to Real Chrome
Skip the header game entirely by connecting to a real Chrome instance:
import subprocess
from playwright.sync_api import sync_playwright
# Launch Chrome with remote debugging
chrome_process = subprocess.Popen([
    '/path/to/chrome',
    '--remote-debugging-port=9222',
    '--user-data-dir=/tmp/chrome-profile'
])

# Connect via CDP
with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp('http://localhost:9222')
    page = browser.contexts[0].pages[0]
    page.goto('https://example.com')
Method 2: Request-Based Approach (No Browser)
Sometimes the smartest move is avoiding browser detection altogether:
import httpx
from xbv import generate_validation_header
# Use HTTP/2 with proper header ordering
client = httpx.Client(http2=True)
ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
# Chrome's exact header order matters
headers = [
    ('user-agent', ua),
    ('x-browser-validation', generate_validation_header(ua)),
    ('accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8'),
    ('accept-language', 'en-US,en;q=0.9'),
    ('accept-encoding', 'gzip, deflate, br'),
    ('upgrade-insecure-requests', '1'),
    ('sec-fetch-dest', 'document'),
    ('sec-fetch-mode', 'navigate'),
    ('sec-fetch-site', 'none'),
    ('sec-fetch-user', '?1'),
]
response = client.get('https://example.com', headers=headers)
Method 3: Residential Proxy with Header Injection
Use a proxy service that can inject headers at the network level:
import requests
from xbv import generate_validation_header
proxy = {
    'http': 'http://username:password@residential-proxy.com:8080',
    'https': 'http://username:password@residential-proxy.com:8080'
}

# Some proxies support header injection via special, provider-specific headers;
# the 'X-Proxy-Header-' prefix below is illustrative - check your provider's docs
ua = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36"
headers = {
    'User-Agent': ua,
    'X-Proxy-Header-X-Browser-Validation': generate_validation_header(ua),
}
response = requests.get('https://example.com', headers=headers, proxies=proxy)
Common Pitfalls to Avoid
- Header Consistency: Always ensure your X-Browser headers match your User-Agent's Chrome version
- TLS Fingerprinting: Chrome has a specific TLS fingerprint - consider using curl-impersonate or similar tools
- Header Order: Chrome sends headers in a specific order - randomizing can blow your cover
- JavaScript Execution: Some sites check navigator.webdriver and other JS properties
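The header-consistency pitfall is easy to automate away. Here's a small sketch (stdlib only; the helper name is mine, not part of the toolkit) that extracts the Chrome major version from a header set and flags mismatches before you send a request:

```python
import re

def check_version_consistency(headers: dict) -> bool:
    """Return True if the Chrome major version in User-Agent matches sec-ch-ua."""
    ua_match = re.search(r"Chrome/(\d+)", headers.get("User-Agent", ""))
    ch_match = re.search(r'"Google Chrome";v="(\d+)"', headers.get("sec-ch-ua", ""))
    if not ua_match or not ch_match:
        return False
    return ua_match.group(1) == ch_match.group(1)

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/138.0.0.0 Safari/537.36",
    "sec-ch-ua": '"Chromium";v="138", "Not(A:Brand";v="24", "Google Chrome";v="138"',
}
print(check_version_consistency(headers))  # True
```

Running a check like this against every header set you build catches the most common self-inflicted detection trigger: bumping the UA version without regenerating the client-hint headers.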
Final Thoughts
The X-Browser-Validation header is just one piece of Chrome's fingerprinting puzzle. While this toolkit helps you generate valid headers, remember that sophisticated anti-bot systems look at dozens of signals including TLS fingerprints, JavaScript behavior, and network patterns.
The key to successful scraping isn't just mimicking Chrome perfectly - it's about understanding the detection mechanisms and choosing the right tool for the job. Sometimes a simple requests-based approach with proper headers beats a full browser automation setup.
Next Steps
- Explore Browser Fingerprinting: Check out fingerprint.js to understand what else sites can detect
- Monitor Chrome Updates: Keep an eye on Chromium commits for changes to these headers
- Consider Alternative Browsers: Firefox and Safari don't use these validation headers (yet)
- Join the Discussion: Share your findings and bypasses with the community
Remember: Always respect robots.txt and rate limits. Happy scraping! 🕷️