You're scraping data, automating a task, or just refreshing a page too quickly. Then boom—Cloudflare Error 1015 blocks you cold.
This error means your IP sent too many requests in a short window. Cloudflare flagged your traffic as suspicious and temporarily banned you from the site.
In this guide, you'll learn exactly what triggers Cloudflare Error 1015 and five proven methods to fix it. Each solution includes working code you can implement right now.
What Is Cloudflare Error 1015?
Cloudflare Error 1015 occurs when you exceed a website's request rate limit. The error message typically reads "You are being rate limited" or "The owner of this website has banned you temporarily."
This is Cloudflare's Web Application Firewall (WAF) protecting the site from excessive traffic. When your IP makes too many requests within the configured time window, Cloudflare blocks further access.
The block is usually temporary. It can last anywhere from 10 seconds to 24 hours depending on how the site owner configured their rate limiting rules.
Here's what the error looks like in your terminal when scraping:
HTTP/1.1 429 Too Many Requests
CF-RAY: 8a1b2c3d4e5f0a1b
Error 1015: You are being rate limited
Why Does Cloudflare Rate Limit Requests?
Rate limiting serves several protective purposes. Understanding these helps you work around them ethically.
DDoS Protection: Cloudflare blocks IPs that generate traffic patterns resembling denial-of-service attacks. Rapid consecutive requests trigger this defense.
Brute Force Prevention: Login pages and APIs use stricter rate limits to prevent password guessing attacks.
Resource Protection: Sites limit requests to prevent server overload. A single scraper hitting thousands of pages per minute could crash a poorly configured server.
Bot Detection: Cloudflare's anti-bot system analyzes request patterns, headers, and browser fingerprints. Traffic that looks automated gets rate limited faster.
Common Triggers for Error 1015
Several behaviors commonly trigger the Cloudflare Error 1015 rate limiting response.
High Request Frequency: Sending dozens of requests per second from one IP is the most obvious trigger. Even 5-10 requests per second can hit some sites' limits.
Missing or Inconsistent Headers: Requests without proper User-Agent strings, Accept-Language headers, or cookies look suspicious. Cloudflare flags them as bot traffic.
Headless Browser Fingerprints: Vanilla Selenium or Puppeteer setups leak automation indicators. Properties like navigator.webdriver=true instantly mark you as a bot.
Shared IP Addresses: If you're on a VPN, corporate network, or shared proxy, other users' traffic counts against the same rate limit.
Sequential Access Patterns: Real users don't visit pages in alphabetical or numerical order. Scrapers that crawl URLs sequentially stand out.
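One easy countermeasure is to randomize the crawl order so requests stop arriving in a predictable sequence. Here's a minimal sketch, assuming you already have a list of target URLs (the example.com pages below are placeholders):
import random
import time

import requests

# Placeholder target pages -- swap in your own URL list
urls = [f"https://example.com/page{i}" for i in range(1, 21)]

# Visit pages in random order instead of page1, page2, page3, ...
random.shuffle(urls)

for url in urls:
    # Jittered pause so the gap between requests is never constant
    time.sleep(random.uniform(2, 6))
    response = requests.get(url)
    print(response.status_code, url)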
Fix 1: Add Random Delays Between Requests
The simplest fix adds random wait times between each request. This makes your traffic pattern look more human.
Fixed delays are easy to detect. Cloudflare's bot detection flags consistent timing as machine-like behavior.
Random delays within a range solve this problem.
Here's a Python implementation:
import requests
import time
import random
def fetch_with_delay(url, min_delay=2, max_delay=5):
    """Fetch a URL with a random delay before the request."""
    # Random delay between min and max seconds
    delay = random.uniform(min_delay, max_delay)
    time.sleep(delay)

    response = requests.get(url)
    return response

# Usage example
urls = [
    "https://example.com/page1",
    "https://example.com/page2",
    "https://example.com/page3"
]

for url in urls:
    response = fetch_with_delay(url, min_delay=3, max_delay=8)
    print(f"Status: {response.status_code} - {url}")
This approach works for light scraping. However, it's slow for large-scale data collection.
Fix 2: Implement Exponential Backoff
When you hit a rate limit, don't retry immediately. Use exponential backoff to wait progressively longer between retries.
This technique increases the delay after each failed attempt. It shows the server you're backing off rather than hammering it repeatedly.
import requests
import time
import random
def fetch_with_backoff(url, max_retries=5):
    """Fetch URL with exponential backoff on rate limit errors."""
    for attempt in range(max_retries):
        response = requests.get(url)

        # Check for rate limiting
        if response.status_code == 429 or "1015" in response.text:
            # Calculate exponential delay: 2^attempt + random jitter
            base_delay = 2 ** attempt
            jitter = random.uniform(0, 1)
            delay = base_delay + jitter

            print(f"Rate limited. Waiting {delay:.2f} seconds...")
            time.sleep(delay)
            continue

        return response

    raise Exception(f"Max retries exceeded for {url}")

# Usage
try:
    response = fetch_with_backoff("https://example.com/api/data")
    print(response.text)
except Exception as e:
    print(f"Failed: {e}")
The exponential increase (1, 2, 4, 8, then 16 seconds) gives the rate limit window time to reset. The random jitter prevents synchronized retry storms.
Fix 3: Rotate IP Addresses with Proxies
Each IP address has its own rate limit counter. Distributing requests across multiple IPs bypasses per-IP rate limits.
Proxy rotation is the most effective solution for high-volume scraping. Each request can come from a different IP address.
Here's how to implement proxy rotation in Python:
import requests
import random
from itertools import cycle
class ProxyRotator:
    def __init__(self, proxy_list):
        """Initialize with a list of proxy URLs."""
        self.proxies = cycle(proxy_list)
        self.current_proxy = None

    def get_next_proxy(self):
        """Get the next proxy in rotation."""
        self.current_proxy = next(self.proxies)
        return {
            "http": self.current_proxy,
            "https": self.current_proxy
        }

    def fetch(self, url, timeout=10):
        """Fetch URL using the next proxy in rotation."""
        proxy = self.get_next_proxy()
        try:
            response = requests.get(
                url,
                proxies=proxy,
                timeout=timeout
            )
            return response
        except requests.exceptions.RequestException as e:
            print(f"Proxy failed: {self.current_proxy}")
            return None

# Example usage with proxy list
proxy_list = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080"
]

rotator = ProxyRotator(proxy_list)
urls = ["https://example.com/page" + str(i) for i in range(10)]

for url in urls:
    response = rotator.fetch(url)
    if response:
        print(f"Success: {url}")
For best results, use residential or mobile proxies. Datacenter IPs often have low trust scores with Cloudflare and may get blocked immediately.
If you need reliable residential, datacenter, ISP, or mobile proxies, providers like Roundproxies.com offer rotating proxy pools specifically designed for high-volume data collection.
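Most providers authenticate you with a username and password embedded in the proxy URL. Here's a minimal sketch of that pattern; the gateway hostname, port, and credentials below are placeholders, so substitute whatever your provider gives you:
import requests

# Placeholder credentials and gateway endpoint -- use your provider's values
PROXY_USER = "your_username"
PROXY_PASS = "your_password"
PROXY_GATEWAY = "gateway.your-provider.example:8000"

proxy_url = f"http://{PROXY_USER}:{PROXY_PASS}@{PROXY_GATEWAY}"
proxies = {"http": proxy_url, "https": proxy_url}

response = requests.get("https://example.com", proxies=proxies, timeout=15)
print(response.status_code)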
Fix 4: Rotate User-Agents and Headers
Cloudflare fingerprints requests based on headers. Using the same User-Agent string for every request is a dead giveaway.
Rotate through realistic browser User-Agents and include proper headers that browsers normally send.
import requests
import random
# Collection of real browser User-Agents
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.2 Safari/605.1.15",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
]

def get_realistic_headers():
    """Generate realistic browser headers."""
    user_agent = random.choice(USER_AGENTS)

    headers = {
        "User-Agent": user_agent,
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.9",
        "Accept-Encoding": "gzip, deflate, br",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
        "Sec-Fetch-Dest": "document",
        "Sec-Fetch-Mode": "navigate",
        "Sec-Fetch-Site": "none",
        "Sec-Fetch-User": "?1",
        "Cache-Control": "max-age=0"
    }
    return headers

def fetch_with_headers(url):
    """Fetch URL with realistic rotating headers."""
    headers = get_realistic_headers()
    response = requests.get(url, headers=headers)
    return response
# Usage
response = fetch_with_headers("https://example.com")
print(f"Status: {response.status_code}")
Important: Make sure your headers match your User-Agent. A Chrome User-Agent with Firefox-specific headers is a red flag.
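One way to keep them consistent is to store each User-Agent together with the headers that browser actually sends, then pick one whole profile per request instead of mixing pieces. A minimal sketch; the Sec-CH-UA values are illustrative, so capture real ones from your own browser's DevTools if you need exact strings:
import random

import requests

# Chromium browsers send Sec-CH-UA client hints; Firefox does not
HEADER_PROFILES = [
    {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
        "Sec-CH-UA": '"Not_A Brand";v="8", "Chromium";v="120", "Google Chrome";v="120"',
        "Sec-CH-UA-Mobile": "?0",
        "Sec-CH-UA-Platform": '"Windows"',
        "Accept-Language": "en-US,en;q=0.9"
    },
    {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0",
        "Accept-Language": "en-US,en;q=0.5"
    }
]

def fetch_with_profile(url):
    """Use one coherent header profile per request."""
    headers = random.choice(HEADER_PROFILES)
    return requests.get(url, headers=headers)

response = fetch_with_profile("https://example.com")
print(response.status_code)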
Fix 5: Use Stealth Browser Automation
For sites with aggressive bot detection, basic HTTP requests won't work. You need a browser that executes JavaScript and passes fingerprint checks.
Standard Selenium or Puppeteer setups get detected instantly. Use stealth plugins to patch the obvious automation indicators.
Here's a Python solution using undetected-chromedriver:
import undetected_chromedriver as uc
import time
import random
def create_stealth_browser():
    """Create a browser that evades detection."""
    options = uc.ChromeOptions()

    # Realistic window size
    options.add_argument("--window-size=1920,1080")

    # Disable automation indicators
    options.add_argument("--disable-blink-features=AutomationControlled")

    # Create driver
    driver = uc.Chrome(options=options)
    return driver

def scrape_with_stealth(url):
    """Scrape a page using stealth browser."""
    driver = create_stealth_browser()

    try:
        driver.get(url)

        # Wait for page load like a human would
        time.sleep(random.uniform(2, 4))

        # Get page content
        page_source = driver.page_source
        return page_source
    finally:
        driver.quit()
# Usage
html = scrape_with_stealth("https://example.com")
print(f"Got {len(html)} characters")
Install undetected-chromedriver with:
pip install undetected-chromedriver
This approach is slower than direct HTTP requests. But it's the most reliable method for heavily protected sites.
Combining Multiple Techniques
For best results, combine several methods. Here's a complete solution that uses delays, proxy rotation, header rotation, and exponential backoff:
import requests
import time
import random
from itertools import cycle
class RobustScraper:
    def __init__(self, proxy_list=None):
        self.proxies = cycle(proxy_list) if proxy_list else None
        self.user_agents = [
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 Chrome/120.0.0.0",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 Chrome/120.0.0.0",
            "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0"
        ]

    def get_headers(self):
        return {
            "User-Agent": random.choice(self.user_agents),
            "Accept": "text/html,application/xhtml+xml",
            "Accept-Language": "en-US,en;q=0.9"
        }

    def get_proxy(self):
        if self.proxies:
            proxy = next(self.proxies)
            return {"http": proxy, "https": proxy}
        return None

    def fetch(self, url, max_retries=5):
        for attempt in range(max_retries):
            # Random pre-request delay
            time.sleep(random.uniform(1, 3))

            try:
                response = requests.get(
                    url,
                    headers=self.get_headers(),
                    proxies=self.get_proxy(),
                    timeout=15
                )

                if response.status_code == 429:
                    delay = (2 ** attempt) + random.uniform(0, 1)
                    print(f"Rate limited. Backing off {delay:.1f}s")
                    time.sleep(delay)
                    continue

                return response
            except Exception as e:
                print(f"Request failed: {e}")
                continue

        return None
# Usage
scraper = RobustScraper(proxy_list=["http://proxy1:8080", "http://proxy2:8080"])
response = scraper.fetch("https://example.com")
This combined approach handles most rate limiting scenarios. It adapts to failures and distributes traffic across multiple identities.
Debugging Cloudflare Error 1015
When you hit Cloudflare Error 1015, gather diagnostic information before changing your approach.
Check Response Headers: Look for the CF-RAY header and a Retry-After value. Retry-After tells you when it's safe to try again; the CF-RAY ID identifies the request if you need to report a false positive to the site owner.
response = requests.get(url)
print(f"CF-RAY: {response.headers.get('CF-RAY')}")
print(f"Retry-After: {response.headers.get('Retry-After')}")
Inspect the Error Page: The HTML response often contains details about why you were blocked.
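A quick way to do that is to search the body for Cloudflare's own markers and keep a copy of the page. A small sketch; the exact strings on the block page vary, so treat these as common patterns rather than guaranteed text:
import requests

url = "https://example.com"  # the page that returned Error 1015
response = requests.get(url)
body = response.text.lower()

# Strings that often appear on Cloudflare block pages
for marker in ("error code: 1015", "you are being rate limited", "cloudflare ray id"):
    if marker in body:
        print(f"Found marker: {marker}")

# Save the page so you can open it in a browser and read the full message
with open("blocked_page.html", "w", encoding="utf-8") as f:
    f.write(response.text)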
Test Different IPs: If one IP is blocked, try another. If all IPs get blocked immediately, the issue is your request fingerprint, not rate limiting.
Start Slow: Begin with one request per 10 seconds. Gradually increase until you find the threshold.
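The sketch below automates that probe: it starts at one request every 10 seconds and shortens the delay in steps until the site starts returning 429s, which tells you roughly where the threshold sits (the URL is a placeholder):
import time

import requests

url = "https://example.com"  # placeholder target
delay = 10.0  # start with one request every 10 seconds

while delay >= 1.0:
    blocked = False

    # Send a small batch at the current pace
    for _ in range(5):
        response = requests.get(url)
        if response.status_code == 429:
            blocked = True
            break
        time.sleep(delay)

    if blocked:
        print(f"Rate limited at ~{delay:.1f}s between requests; stay above this.")
        break

    print(f"{delay:.1f}s between requests looks safe, trying a faster pace...")
    delay -= 2.0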
For Website Owners: Fixing False Positives
If you're a site owner seeing legitimate users blocked by Cloudflare Error 1015, review your rate limiting configuration.
Increase Request Thresholds: Single page loads can generate 50+ requests for images, scripts, and fonts. Set limits high enough to accommodate this.
Extend Time Windows: Use 60-second windows instead of 10-second windows. This smooths out traffic spikes.
Whitelist Trusted IPs: Add your office IP, monitoring services, and partners to bypass rate limiting.
Review Block Duration: Short blocks (10-60 seconds) work better than hour-long bans that frustrate real users.
FAQ
How long does Cloudflare Error 1015 last?
The block duration depends on the site's configuration. It ranges from 10 seconds to 24 hours. Most sites use temporary blocks lasting 1-10 minutes. Check the Retry-After response header for the exact duration.
Can I get permanently banned from Error 1015?
Repeated violations can escalate to longer bans. Some sites permanently block IPs that consistently trigger rate limits. However, most Cloudflare Error 1015 blocks are temporary by design.
Why do I get Error 1015 on ChatGPT?
OpenAI uses Cloudflare protection. During high traffic periods, they tighten rate limits. If you're refreshing frequently or using unofficial API wrappers, you'll trigger the block. Wait a few minutes and try again.
Does using a VPN help avoid Error 1015?
Sometimes. VPN IPs often have mixed results because many users share them. If the VPN exit IP already has a poor reputation with Cloudflare, you might get blocked faster. Residential proxies typically perform better.
Is bypassing rate limits illegal?
Rate limiting is a technical measure, not a legal boundary. However, violating a site's Terms of Service or causing harm through excessive traffic could have legal implications. Always scrape ethically and respect robots.txt guidelines.
Conclusion
Cloudflare Error 1015 stops your requests when you exceed the site's rate limits. The fix depends on your situation.
For light usage, add random delays between requests. For heavy scraping, rotate proxies and headers to distribute traffic across multiple identities. For heavily protected sites, use stealth browser automation.
Combine multiple techniques for the best results. Use delays, proxy rotation, realistic headers, and exponential backoff together.
The key is making your traffic look like legitimate human browsing. Vary your timing, rotate your identity, and back off when you hit limits.
Start implementing these solutions now. Pick the method that matches your scale and test it against your target site.