How to Use Rnet: The Blazing-Fast Python HTTP Client

Rnet is a Rust-powered Python HTTP client that provides advanced TLS fingerprinting to bypass anti-bot detection systems. Unlike traditional HTTP libraries, Rnet can perfectly mimic browser behavior at the TLS level, making your requests virtually indistinguishable from real browser traffic while delivering exceptional performance.

Ever hit a wall trying to scrape a website, only to get blocked despite setting the right headers? That's TLS fingerprinting at work - and it's why your typical requests or httpx calls fail.

Here's the deal: when you send an HTTPS request, a TLS handshake occurs, and the parameters your client offers produce a unique TLS fingerprint. Most Python HTTP libraries have fingerprints that scream "bot!" to modern anti-bot systems. Rnet changes the game by combining Rust's performance with browser-accurate TLS signatures.

In this guide, I'll show you exactly how to leverage Rnet to make hard-to-detect HTTP requests that are also fast - outperforming requests, httpx, and curl_cffi in the project's own benchmarks.

Contents

  • Why Rnet Beats Traditional HTTP Libraries
  • Step 1: Install Rnet and Set Up Your Environment
  • Step 2: Master Basic Requests with Browser Impersonation
  • Step 3: Handle Advanced Features (Proxies, Sessions, Cookies)
  • Step 4: Leverage Async for Lightning-Fast Concurrent Requests
  • Step 5: Implement WebSocket Connections
  • Step 6: Deploy Production-Ready Error Handling
  • Next Steps and Pro Tips

Why Rnet Beats Traditional HTTP Libraries

Before diving into the code, let's understand what makes Rnet special. TLS fingerprinting is based on parameters in the unencrypted Client Hello message, and every HTTP client has a unique signature. Regular Python libraries like requests rely on OpenSSL, which produces fingerprints that are easily detected.

Rnet solves this by:

  • Using BoringSSL: The same TLS library Chrome uses
  • Rust Performance: Native-level speed with Python convenience
  • Accurate Browser Emulation: Emulates Chrome, Firefox, Safari, and OkHttp, replicating their TLS and HTTP/2 signatures
  • Zero-Copy Transfers: Efficient memory usage for large payloads

The Technical Edge

When a server performs TLS fingerprinting, it typically analyzes the JA3 hash - an MD5 digest of five fields from the TLS Client Hello. Rnet provides exact browser matches for these fingerprints, including recent releases such as Chrome 136 and Firefox 139.

Step 1: Install Rnet and Set Up Your Environment

Prerequisites

First, ensure you have Python 3.7+ installed; Rnet's async features assume a reasonably modern interpreter.

Installation

pip install rnet

That's it! No complex dependencies or system libraries needed - Rnet ships prebuilt wheels that bundle the compiled Rust core.

Verify Installation

import rnet

# Some builds may not expose __version__; if this raises AttributeError,
# try importlib.metadata.version("rnet") instead
print(f"Rnet version: {rnet.__version__}")

Platform Support

Rnet supports:

  • Linux: glibc >= 2.34 (x86_64, aarch64, armv7, i686) and musl
  • macOS: x86_64 and Apple Silicon (aarch64)
  • Windows: x86_64, i686, and ARM64

Step 2: Master Basic Requests with Browser Impersonation

Let's start with the killer feature - browser impersonation. This is where Rnet shines compared to other libraries.

Synchronous Requests

from rnet import BlockingClient, Impersonate

# Create a client that impersonates Chrome 136
client = BlockingClient(impersonate=Impersonate.Chrome136)

# Make a GET request
response = client.get("https://tls.peet.ws/api/all")
print(response.status_code)
print(response.json())  # This will show your TLS fingerprint matches Chrome!
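If you want to see the JA3 value specifically, you can pull it out of the echo service's payload and compare two impersonation targets. A minimal sketch, assuming tls.peet.ws exposes the hash under the "tls" -> "ja3_hash" keys (inspect the actual response, as the schema may change):

from rnet import BlockingClient, Impersonate

# Compare JA3 hashes across impersonation targets; the "tls" and
# "ja3_hash" keys are assumptions about tls.peet.ws's JSON schema
for target in (Impersonate.Chrome136, Impersonate.Firefox139):
    client = BlockingClient(impersonate=target)
    payload = client.get("https://tls.peet.ws/api/all").json()
    print(target, payload.get("tls", {}).get("ja3_hash"))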

The Power of Impersonation

Rnet supports numerous browser versions:

# Recent desktop versions (the enum grows as new releases ship)
Impersonate.Chrome136
Impersonate.Firefox139
Impersonate.Safari18_3_1

# Mobile browsers
Impersonate.SafariIos18_1_1
Impersonate.FirefoxAndroid135

# Privacy-focused variants
Impersonate.FirefoxPrivate136

POST Requests with JSON

client = BlockingClient(impersonate=Impersonate.Chrome135)

# Send JSON data
data = {
    "username": "techpro",
    "action": "authenticate"
}

response = client.post(
    "https://api.example.com/login",
    json=data,
    headers={"X-Custom-Header": "MyValue"}
)

Form Data Submission

# URL-encoded form data
form_data = {
    "field1": "value1",
    "field2": "value2"
}

response = client.post(
    "https://httpbin.org/post",
    data=form_data  # Automatically sets Content-Type to application/x-www-form-urlencoded
)
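Rnet also ships Multipart and Part helpers for file uploads. Here's a sketch based on the project's examples - double-check the exact Part arguments against your installed version:

from rnet import BlockingClient, Impersonate, Multipart, Part

client = BlockingClient(impersonate=Impersonate.Chrome136)

# Build a multipart body; Part takes a field name plus bytes or text
upload = Multipart(
    Part(name="report", value=b"col1,col2\n1,2\n",
         filename="report.csv", mime="text/csv")
)

response = client.post("https://httpbin.org/post", multipart=upload)
print(response.status_code)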

Step 3: Handle Advanced Features (Proxies, Sessions, Cookies)

Rotating Proxies (The Smart Way)

Here's a trick most tutorials miss - intelligent proxy rotation:

from rnet import BlockingClient, Impersonate, Proxy
import random

class SmartProxyClient:
    def __init__(self, proxy_list):
        self.proxies = [Proxy.all(p) for p in proxy_list]
        self.clients = {}
        
    def get_client(self, sticky_key=None):
        """Get a client with smart proxy selection"""
        if sticky_key and sticky_key in self.clients:
            return self.clients[sticky_key]
        
        proxy = random.choice(self.proxies)
        client = BlockingClient(
            impersonate=Impersonate.Chrome135,
            proxies=[proxy]
        )
        
        if sticky_key:
            self.clients[sticky_key] = client
        
        return client

# Usage
proxy_list = [
    "http://proxy1.com:8080",
    "socks5://proxy2.com:1080",
    "http://user:pass@proxy3.com:3128"
]

smart_client = SmartProxyClient(proxy_list)

# Use same proxy for same domain (sticky sessions)
client = smart_client.get_client("example.com")
response = client.get("https://example.com/api/data")

Cookie Management

from rnet import BlockingClient, Cookie, Impersonate

client = BlockingClient(
    impersonate=Impersonate.Firefox139,
    cookie_store=True  # Enable cookie jar
)

# Set custom cookie
client.set_cookie(
    "https://example.com",
    Cookie(name="session_id", value="abc123", path="/", secure=True)
)

# Cookies persist across requests
response1 = client.get("https://example.com/login")
response2 = client.get("https://example.com/dashboard")  # Cookies automatically sent

# Get cookies for a URL
cookies = client.get_cookies("https://example.com")

Headers Order Manipulation

This is crucial for avoiding detection - browsers send headers in specific orders:

client = BlockingClient(
    impersonate=Impersonate.Chrome136,
    headers_order=[
        "accept",
        "user-agent", 
        "accept-encoding",
        "accept-language",
        "cookie"
    ]
)

Step 4: Leverage Async for Lightning-Fast Concurrent Requests

Here's where Rnet really flexes - async performance that rivals Go and Rust implementations.

Basic Async Usage

import asyncio
from rnet import Client, Impersonate

async def fetch_multiple():
    client = Client(impersonate=Impersonate.Chrome135)
    
    urls = [
        "https://api.github.com/users/github",
        "https://api.github.com/users/torvalds",
        "https://api.github.com/users/gvanrossum"
    ]
    
    # Fire all requests concurrently
    tasks = [client.get(url) for url in urls]
    responses = await asyncio.gather(*tasks)
    
    for resp in responses:
        data = await resp.json()
        print(f"{data['login']}: {data['public_repos']} repos")
    
    await client.close()

# Run it
asyncio.run(fetch_multiple())

Advanced: Rate-Limited Async Scraping

Here's a production-ready pattern with rate limiting:

import asyncio
from rnet import Client, Impersonate
import time

class RateLimitedScraper:
    def __init__(self, max_per_second=10):
        self.client = Client(impersonate=Impersonate.Chrome136)
        self.max_per_second = max_per_second
        self.semaphore = asyncio.Semaphore(max_per_second)
        self.lock = asyncio.Lock()  # serializes timing bookkeeping across tasks
        self.last_request_time = 0.0

    async def fetch(self, url):
        async with self.semaphore:
            # Space requests out; without the lock, concurrent tasks would
            # read a stale last_request_time and burst past the limit
            async with self.lock:
                min_interval = 1.0 / self.max_per_second
                elapsed = time.monotonic() - self.last_request_time
                if elapsed < min_interval:
                    await asyncio.sleep(min_interval - elapsed)
                self.last_request_time = time.monotonic()
            return await self.client.get(url)
    
    async def scrape_batch(self, urls):
        tasks = [self.fetch(url) for url in urls]
        return await asyncio.gather(*tasks, return_exceptions=True)
    
    async def close(self):
        await self.client.close()

# Usage
async def main():
    scraper = RateLimitedScraper(max_per_second=5)
    
    urls = [f"https://httpbin.org/delay/{i%3}" for i in range(20)]
    responses = await scraper.scrape_batch(urls)
    
    for i, resp in enumerate(responses):
        if isinstance(resp, Exception):
            print(f"URL {i} failed: {resp}")
        else:
            print(f"URL {i}: {resp.status_code}")
    
    await scraper.close()

asyncio.run(main())

Step 5: Implement WebSocket Connections

Rnet's WebSocket support maintains the same TLS fingerprint, making it perfect for real-time data that's behind anti-bot protection.

Basic WebSocket

from rnet import BlockingClient, Impersonate, Message

client = BlockingClient(impersonate=Impersonate.Chrome136)

# Connect to WebSocket
ws = client.websocket("wss://echo.websocket.org")

# Send message
ws.send(Message.from_text("Hello, WebSocket!"))

# Receive response
message = ws.recv()
print(f"Received: {message.data}")

ws.close()

Async WebSocket with Auto-Reconnect

import asyncio
from rnet import Client, Impersonate, Message
import logging

class ResilientWebSocket:
    def __init__(self, url, impersonate=Impersonate.Chrome136):
        self.url = url
        self.impersonate = impersonate
        self.client = None
        self.ws = None
        self.running = False
        
    async def connect(self):
        """Connect with automatic retry"""
        max_retries = 5
        retry_delay = 1
        
        for attempt in range(max_retries):
            try:
                self.client = Client(impersonate=self.impersonate)
                self.ws = await self.client.websocket(self.url)
                logging.info(f"WebSocket connected to {self.url}")
                return True
            except Exception as e:
                logging.error(f"Connection attempt {attempt + 1} failed: {e}")
                if attempt < max_retries - 1:
                    await asyncio.sleep(retry_delay)
                    retry_delay *= 2  # Exponential backoff
        
        return False
    
    async def send_message(self, text):
        """Send with automatic reconnection"""
        if not self.ws:
            if not await self.connect():
                raise ConnectionError("Failed to establish WebSocket connection")
        
        try:
            await self.ws.send(Message.from_text(text))
        except Exception as e:
            logging.error(f"Send failed, attempting reconnect: {e}")
            if await self.connect():
                await self.ws.send(Message.from_text(text))
    
    async def receive_loop(self, message_handler):
        """Continuous receive with reconnection"""
        self.running = True
        
        while self.running:
            try:
                if not self.ws:
                    if not await self.connect():
                        await asyncio.sleep(5)
                        continue
                
                message = await self.ws.recv()
                await message_handler(message)
                
            except Exception as e:
                logging.error(f"Receive error: {e}")
                self.ws = None
                await asyncio.sleep(1)
    
    async def close(self):
        self.running = False
        if self.ws:
            await self.ws.close()
        if self.client:
            await self.client.close()

# Usage example
async def handle_message(message):
    print(f"Received: {message.data}")

async def main():
    ws = ResilientWebSocket("wss://stream.example.com/live")
    
    # Start receiving messages
    receive_task = asyncio.create_task(ws.receive_loop(handle_message))
    
    # Send periodic pings
    for i in range(10):
        await ws.send_message(f"ping_{i}")
        await asyncio.sleep(5)
    
    await ws.close()
    await receive_task

asyncio.run(main())

Step 6: Deploy Production-Ready Error Handling

Comprehensive Error Handling Pattern

from rnet import BlockingClient, Impersonate
import time
import logging
from typing import Optional, Any

class RobustRnetClient:
    def __init__(
        self,
        impersonate=Impersonate.Chrome136,
        max_retries=3,
        timeout=30,
        backoff_factor=2
    ):
        self.impersonate = impersonate
        self.max_retries = max_retries
        self.timeout = timeout
        self.backoff_factor = backoff_factor
        self.session = None
        self._init_session()
    
    def _init_session(self):
        """Initialize or reinitialize session"""
        if self.session:
            try:
                self.session.close()
            except Exception:
                pass
        
        self.session = BlockingClient(
            impersonate=self.impersonate,
            timeout=self.timeout,
            # Connection pooling options for performance (confirm these
            # parameter names against the rnet docs for your version)
            pool_max_size=100,
            pool_max_idle_per_host=10
        )
    
    def request_with_retry(
        self,
        method: str,
        url: str,
        **kwargs
    ) -> Optional[Any]:
        """
        Make request with exponential backoff retry
        """
        last_exception = None
        delay = 1
        
        for attempt in range(self.max_retries):
            try:
                # Get the method (get, post, put, etc.)
                request_method = getattr(self.session, method.lower())
                response = request_method(url, **kwargs)
                
                # Check for rate limiting
                if response.status_code == 429:
                    retry_after = response.headers.get('Retry-After', delay)
                    logging.warning(f"Rate limited. Waiting {retry_after}s")
                    time.sleep(float(retry_after))
                    continue
                
                # Check for server errors (5xx)
                if 500 <= response.status_code < 600:
                    if attempt < self.max_retries - 1:
                        logging.warning(f"Server error {response.status_code}. Retrying...")
                        time.sleep(delay)
                        delay *= self.backoff_factor
                        continue
                
                # Success or client error (4xx) - return response
                return response
                
            except ConnectionError as e:
                last_exception = e
                logging.error(f"Connection error on attempt {attempt + 1}: {e}")
                
                # Try reinitializing session for connection errors
                if attempt < self.max_retries - 1:
                    self._init_session()
                    time.sleep(delay)
                    delay *= self.backoff_factor
                    
            except TimeoutError as e:
                last_exception = e
                logging.error(f"Timeout on attempt {attempt + 1}: {e}")
                
                if attempt < self.max_retries - 1:
                    time.sleep(delay)
                    delay *= self.backoff_factor
                    
            except Exception as e:
                last_exception = e
                logging.error(f"Unexpected error: {e}")
                break
        
        # All retries exhausted
        logging.error(f"All {self.max_retries} attempts failed for {url}")
        raise last_exception or Exception("Request failed")
    
    def get(self, url: str, **kwargs):
        return self.request_with_retry('GET', url, **kwargs)
    
    def post(self, url: str, **kwargs):
        return self.request_with_retry('POST', url, **kwargs)

# Usage
client = RobustRnetClient(
    impersonate=Impersonate.Firefox139,
    max_retries=5,
    timeout=60
)

try:
    response = client.get("https://api.example.com/data")
    print(response.json())
except Exception as e:
    logging.error(f"Failed after all retries: {e}")

Bypass Cloudflare Detection (Advanced Trick)

from rnet import BlockingClient, Impersonate
import time

def bypass_cloudflare(url):
    """
    Advanced Cloudflare bypass using browser behavior mimicry
    """
    client = BlockingClient(
        impersonate=Impersonate.Chrome136,
        # Enable all browser-like features
        cookie_store=True,
        allow_redirects=True,
        # Use HTTP/2 like real Chrome
        http2_only=True
    )
    
    # First request - might get challenged
    response = client.get(url)
    
    if "challenge" in response.text.lower() or response.status_code == 403:
        # Wait like a real user would
        time.sleep(2)
        
        # Retry with referrer (like clicking a link)
        response = client.get(
            url,
            headers={
                "Referer": url,
                "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
                "Accept-Language": "en-US,en;q=0.5",
                "Accept-Encoding": "gzip, deflate, br",
                "DNT": "1",
                "Connection": "keep-alive",
                "Upgrade-Insecure-Requests": "1"
            }
        )
    
    return response

# Use it
response = bypass_cloudflare("https://protected-site.com")
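One caveat: this pattern only helps against passive TLS and header checks. If the site serves an interactive JavaScript challenge, a plain HTTP client won't solve it - you'll need a real browser or a dedicated challenge solver.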

Next Steps

You've now mastered Rnet's core capabilities. Here are advanced techniques to explore:

1. Custom TLS Fingerprints

For targets that aren't browsers, you can supply custom fingerprints. Raw JA3/Akamai string support varies between rnet releases, so confirm these option names against the docs for your version:

client = BlockingClient(
    ja3="771,4865-4866-4867-49195-49199...",  # Custom JA3 string
    akamai="2:0:0:0:0:0:0:0:0:0..."  # Custom Akamai fingerprint
)
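Whatever you configure, verify it the same way as in Step 2: request https://tls.peet.ws/api/all and confirm the reported hashes match what you intended.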

2. Performance Optimization

  • Use connection pooling for multiple requests to same host
  • Implement DNS caching with lookup_ip_strategy
  • Leverage zero-copy transfers for large file downloads (a combined sketch follows this list)
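A sketch tying these together. pool_max_idle_per_host appeared earlier, but LookupIpStrategy and the stream() chunk iterator are assumptions about the API surface, so verify both against the rnet docs for your installed version:

from rnet import BlockingClient, Impersonate, LookupIpStrategy

# Option names below are assumptions based on rnet's documented settings
client = BlockingClient(
    impersonate=Impersonate.Chrome136,
    pool_max_idle_per_host=10,                        # reuse warm connections
    lookup_ip_strategy=LookupIpStrategy.Ipv4AndIpv6,  # DNS resolution policy
)

# Stream a large download in chunks instead of buffering it in memory
response = client.get("https://example.com/large-file.bin")
with open("large-file.bin", "wb") as f:
    for chunk in response.stream():  # assumed chunk iterator; verify
        f.write(chunk)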

3. Integration Ideas

  • Build a Scrapy middleware using Rnet
  • Create a requests-compatible wrapper for drop-in replacement (sketched below)
  • Integrate with playwright for JavaScript-heavy sites
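As a taste of the second idea, a minimal requests-style facade might look like this (a hypothetical wrapper module, not part of rnet itself):

from rnet import BlockingClient, Impersonate

# Hypothetical drop-in module: exposes requests-style get/post helpers
# backed by a shared impersonating client
_client = BlockingClient(impersonate=Impersonate.Chrome136)

def get(url, **kwargs):
    return _client.get(url, **kwargs)

def post(url, **kwargs):
    return _client.post(url, **kwargs)

# Save as rnet_compat.py, then: import rnet_compat as requests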

Conclusion

Rnet represents a paradigm shift in Python HTTP clients. By combining Rust's performance with accurate browser fingerprinting, it solves problems that have plagued web scrapers and API consumers for years.

The key takeaways:

  • TLS fingerprinting is real - and Rnet beats it
  • Performance matters - Rust backing gives you speed without sacrificing Python's simplicity
  • Browser accuracy is crucial - Perfect impersonation means fewer blocks

Whether you're building scrapers, API clients, or automation tools, Rnet gives you the edge you need in today's anti-bot landscape.


Happy scraping - and remember, with great power comes great responsibility. Always respect robots.txt and rate limits!

Marius Bernard

Marius Bernard is a Product Advisor, Technical SEO, & Brand Ambassador at Roundproxies. He was the lead author for the SEO chapter of the 2024 Web Almanac and a reviewer for the 2023 SEO chapter.