Managing multiple browser profiles for web scraping used to mean juggling virtual machines or getting constantly blocked. XLogin.us changed that for me.
Whether I'm scraping e-commerce prices across regions, collecting data from sites with aggressive anti-bot protection, or running parallel data collection tasks, XLogin.us keeps each session isolated with unique fingerprints.
If you're new to XLogin.us and want to use it for web scraping, you're in the right place. This guide covers everything from installation to building automated scrapers with Selenium.
What is XLogin.us?
XLogin.us is an antidetect browser designed for managing multiple browser profiles, each with a unique digital fingerprint. It creates isolated browsing environments where cookies, local storage, and cache files stay completely separate between profiles.
Unlike regular browsers that expose your real device information, XLogin.us replaces your browser fingerprint with custom values. This includes:
- Canvas and WebGL fingerprints
- Audio context fingerprints
- Screen resolution and color depth
- Timezone and language settings
- Hardware concurrency and device memory
- User agent strings
For web scraping, this means you can run multiple concurrent sessions without websites linking them together or detecting automation patterns.
XLogin.us supports automation through Selenium WebDriver and provides a REST API running on http://127.0.0.1:35000 for programmatic profile management.
Why use XLogin.us for web scraping?
Standard Selenium scrapers get detected fast. Websites check browser fingerprints, and a headless Chrome instance screams "bot" to any anti-bot system.
XLogin.us solves this by making each browser profile appear as a legitimate, unique user.
XLogin.us vs. regular Selenium
| Feature | Regular Selenium | Selenium + XLogin.us |
|---|---|---|
| Fingerprint consistency | Obvious automation markers | Realistic human fingerprints |
| Multi-session support | All sessions linked | Each profile is isolated |
| Proxy integration | Manual per-session config | Built-in per-profile proxies |
| Cookie persistence | Lost on restart | Saved per profile |
| Detection resistance | Low | High |
When to use XLogin.us
XLogin makes sense when you need to:
- Scrape sites with anti-bot protection like Cloudflare or DataDome
- Run multiple accounts or sessions in parallel
- Maintain persistent sessions across scraping runs
- Collect data from different geographic regions
- Avoid IP bans and fingerprint-based blocking
When to skip it
For simple, low-volume scraping of static pages without protection, XLogin adds unnecessary complexity. Use basic Requests + BeautifulSoup instead.
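For that lighter use case, a few lines of Requests and BeautifulSoup usually suffice. A minimal sketch (the URL and selector are placeholders):

import requests
from bs4 import BeautifulSoup

# Fetch a static, unprotected page (placeholder URL)
response = requests.get("https://example.com/products", timeout=10)
response.raise_for_status()

# Parse the HTML and print text matched by a placeholder CSS selector
soup = BeautifulSoup(response.text, "html.parser")
for title in soup.select("h2.product-title"):
    print(title.get_text(strip=True))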
How to install and set up XLogin.us
XLogin currently runs only on Windows. Here's how to get started.
Step 1: Download XLogin.us
Visit xlogin.us and download the installer.
The free trial gives you 3 days with 5 browser profiles, unlimited fingerprints, and full API access.
Step 2: Create an account
Launch XLogin.us and register a new account. You'll need a valid email for verification.
Step 3: Enable browser automation
This step is critical for Selenium integration.
- Open XLogin settings (gear icon)
- Navigate to "Browser Automation"
- Enable "Launch the browser automation port"
- Set the port to 35000 (default)
- Save settings
Without this, your Python scripts can't connect to XLogin profiles.
Step 4: Install Python dependencies
Open your terminal and install the required packages:
pip install selenium requests
Recent Selenium releases include Selenium Manager, which downloads ChromeDriver automatically. You won't need it here anyway: we connect to XLogin's own browser instances through their remote WebDriver endpoints rather than launching Chrome ourselves.
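Before creating profiles, it's worth confirming that the automation port from Step 3 is actually reachable. Here's a quick check that calls the profile list endpoint (covered later in this guide); if the port isn't open, the request raises a connection error:

import requests

API_BASE = "http://127.0.0.1:35000/api/v1"

try:
    # The profile list endpoint doubles as an "is the API up?" check
    response = requests.get(f"{API_BASE}/profile/list", timeout=5)
    print("Automation API reachable:", response.status_code)
except requests.exceptions.ConnectionError:
    print("Port 35000 not reachable. Is XLogin.us running with automation enabled?")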
Creating your first browser profile
Before scraping, you need at least one browser profile.
Manual profile creation
- Click "New Browser Profile" in XLogin
- Enter a profile name (e.g., "scraper-profile-1")
- Choose the browser kernel version (latest Chrome recommended)
- Configure basic settings:
- Operating System: Windows 10/11
- Screen Resolution: Common values like 1920x1080
- Language: Match your target site's region
- Leave fingerprint settings on "Auto" for realistic values
- Click "Create"
Creating profiles via API
For automated setup, use XLogin's REST API:
import requests

# XLogin API endpoint
API_BASE = "http://127.0.0.1:35000/api/v1"

def create_profile(name, proxy_config=None):
    """
    Create a new browser profile via the XLogin API.

    Args:
        name: Profile display name
        proxy_config: Optional proxy string (type/host/port/user/pass)

    Returns:
        Profile ID if successful, None otherwise
    """
    endpoint = f"{API_BASE}/profile/create"
    params = {
        "name": name,
        "kernel": "chrome",  # Browser engine
        "os": "win",         # Operating system
    }
    if proxy_config:
        params["proxy"] = proxy_config

    response = requests.get(endpoint, params=params)
    if response.status_code == 200:
        data = response.json()
        if data.get("status") == "OK":
            return data.get("value")
    return None

# Create a new profile
profile_id = create_profile("scraper-profile-api")
print(f"Created profile: {profile_id}")
The API returns a unique profile ID that you'll use for all subsequent operations.
Configuring proxies for scraping
Every serious scraping setup needs proxy rotation. XLogin.us lets you assign proxies at the profile level.
Setting proxies in the UI
- Select your profile in XLogin.us
- Click "Edit Profile"
- Scroll to "Proxy Server"
- Enable "Use Proxy"
- Choose proxy type: HTTP, HTTPS, or SOCKS5
- Enter proxy details:
  - Host: proxy.example.com
  - Port: 8080
  - Username and Password (if authenticated)
- Click "Check Proxy" to verify connectivity
- Save the profile
Setting proxies via API
def create_profile_with_proxy(name, proxy_type, host, port, username=None, password=None):
    """
    Create a profile with proxy configuration.

    Args:
        name: Profile name
        proxy_type: http, https, or socks5
        host: Proxy server hostname
        port: Proxy server port
        username: Auth username (optional)
        password: Auth password (optional)

    Returns:
        Profile ID if successful, None otherwise
    """
    # Build proxy string: type/host/port/user/pass
    if username and password:
        proxy_string = f"{proxy_type}/{host}/{port}/{username}/{password}"
    else:
        proxy_string = f"{proxy_type}/{host}/{port}"

    endpoint = f"{API_BASE}/profile/create_start"
    params = {
        "name": name,
        "proxy": proxy_string
    }
    response = requests.get(endpoint, params=params)

    # Parse the standard {"status": "OK", "value": <profile ID>} response
    if response.status_code == 200:
        data = response.json()
        if data.get("status") == "OK":
            return data.get("value")
    return None

# Example: Create profile with residential proxy
result = create_profile_with_proxy(
    name="geo-profile-us",
    proxy_type="http",
    host="us.residential-proxy.com",
    port="8080",
    username="user123",
    password="pass456"
)
Proxy rotation strategy
For large-scale scraping, create multiple profiles with different proxies:
proxies = [
    {"host": "us1.proxy.com", "port": "8080"},
    {"host": "us2.proxy.com", "port": "8080"},
    {"host": "uk1.proxy.com", "port": "8080"},
    {"host": "de1.proxy.com", "port": "8080"},
]

profiles = []
for i, proxy in enumerate(proxies):
    profile_id = create_profile_with_proxy(
        name=f"scraper-{i}",
        proxy_type="http",
        host=proxy["host"],
        port=proxy["port"]
    )
    profiles.append(profile_id)

print(f"Created {len(profiles)} profiles with different proxies")
Automating XLogin.us with Selenium
Here's where XLogin.us shines. You can connect Selenium to any XLogin profile and control it programmatically.
Understanding the connection flow
- Start an XLogin.us profile via API
- The API returns a WebDriver URL (e.g., http://127.0.0.1:XXXXX)
- Connect Selenium's Remote WebDriver to that URL
- Control the browser as normal
- Stop the profile when done
Basic Selenium connection
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
import requests
import time

API_BASE = "http://127.0.0.1:35000/api/v1"

def start_profile(profile_id):
    """
    Start an XLogin profile and return the WebDriver URL.

    Args:
        profile_id: UUID of the profile to launch

    Returns:
        WebDriver URL string or None
    """
    endpoint = f"{API_BASE}/profile/start"
    params = {
        "automation": "true",
        "profileId": profile_id
    }
    response = requests.get(endpoint, params=params)
    if response.status_code == 200:
        data = response.json()
        if data.get("status") == "OK":
            return data.get("value")
    return None

def stop_profile(profile_id):
    """Stop a running XLogin profile."""
    endpoint = f"{API_BASE}/profile/stop"
    params = {"profileId": profile_id}
    requests.get(endpoint, params=params)

def connect_selenium(webdriver_url):
    """
    Connect Selenium to an XLogin browser instance.

    Args:
        webdriver_url: URL returned by start_profile()

    Returns:
        Selenium WebDriver instance
    """
    options = Options()
    driver = webdriver.Remote(
        command_executor=webdriver_url,
        options=options
    )
    return driver

# Example usage
profile_id = "YOUR-PROFILE-ID-HERE"

# Start the profile
webdriver_url = start_profile(profile_id)
print(f"WebDriver URL: {webdriver_url}")

# Give the browser time to fully launch
time.sleep(3)

# Connect Selenium
driver = connect_selenium(webdriver_url)

# Navigate to a page
driver.get("https://httpbin.org/headers")
print(driver.page_source)

# Clean up
driver.quit()
stop_profile(profile_id)
The key insight: XLogin's browser instance exposes a WebDriver-compatible endpoint. You connect to it exactly like you'd connect to Selenium Grid.
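For comparison, pointing the same Remote WebDriver call at a Selenium Grid hub looks like this; only the endpoint URL changes (the hub address below is the conventional Grid default, not an XLogin endpoint):

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# Same API, different endpoint: a Selenium Grid hub instead of an XLogin profile
grid_driver = webdriver.Remote(
    command_executor="http://localhost:4444/wd/hub",
    options=Options()
)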
Building a complete web scraper
Let's build a practical scraper that extracts product data from an e-commerce site.
Project structure
xlogin-scraper/
├── config.py # Profile IDs and settings
├── xlogin_client.py # XLogin API wrapper
├── scraper.py # Main scraping logic
└── requirements.txt # Dependencies
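The guide leaves the contents of config.py up to you; a minimal version might just hold the API base URL and your profile IDs (the values below are placeholders), while requirements.txt simply lists selenium and requests:

# config.py (example contents; values are placeholders)
API_BASE = "http://127.0.0.1:35000/api/v1"

# Profile IDs created via the XLogin UI or API
PROFILE_IDS = [
    "11111111-2222-3333-4444-555555555555",
    "66666666-7777-8888-9999-000000000000",
]

# Delay range (seconds) between page loads
MIN_DELAY = 2
MAX_DELAY = 5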
XLogin.us client wrapper
Create a reusable client for XLogin.us operations:
# xlogin_client.py
import requests
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

class XLoginClient:
    """Wrapper for XLogin API and Selenium integration."""

    def __init__(self, api_base="http://127.0.0.1:35000/api/v1"):
        self.api_base = api_base
        self.active_profiles = {}

    def start_profile(self, profile_id, wait_time=3):
        """
        Start a profile and return a connected WebDriver.

        Args:
            profile_id: XLogin profile UUID
            wait_time: Seconds to wait for browser launch

        Returns:
            Selenium WebDriver instance
        """
        endpoint = f"{self.api_base}/profile/start"
        params = {
            "automation": "true",
            "profileId": profile_id
        }
        response = requests.get(endpoint, params=params)
        data = response.json()

        if data.get("status") != "OK":
            raise Exception(f"Failed to start profile: {data}")

        webdriver_url = data.get("value")
        time.sleep(wait_time)

        options = Options()
        driver = webdriver.Remote(
            command_executor=webdriver_url,
            options=options
        )
        self.active_profiles[profile_id] = driver
        return driver

    def stop_profile(self, profile_id):
        """Stop a profile and close its WebDriver."""
        if profile_id in self.active_profiles:
            try:
                self.active_profiles[profile_id].quit()
            except Exception:
                pass
            del self.active_profiles[profile_id]

        endpoint = f"{self.api_base}/profile/stop"
        params = {"profileId": profile_id}
        requests.get(endpoint, params=params)

    def stop_all(self):
        """Stop all active profiles."""
        for profile_id in list(self.active_profiles.keys()):
            self.stop_profile(profile_id)

    def get_profile_list(self):
        """Get all available profiles."""
        endpoint = f"{self.api_base}/profile/list"
        response = requests.get(endpoint)
        return response.json()
Main scraper implementation
# scraper.py
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException, NoSuchElementException
import json
import time
import random
from xlogin_client import XLoginClient

class ProductScraper:
    """Scrape product data using XLogin profiles."""

    def __init__(self, profile_id):
        self.client = XLoginClient()
        self.profile_id = profile_id
        self.driver = None

    def __enter__(self):
        self.driver = self.client.start_profile(self.profile_id)
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.client.stop_profile(self.profile_id)

    def random_delay(self, min_sec=1, max_sec=3):
        """Add human-like delays between actions."""
        time.sleep(random.uniform(min_sec, max_sec))

    def wait_for_element(self, by, value, timeout=10):
        """Wait for an element to be present."""
        try:
            element = WebDriverWait(self.driver, timeout).until(
                EC.presence_of_element_located((by, value))
            )
            return element
        except TimeoutException:
            return None

    def scrape_product_page(self, url):
        """
        Extract product data from a single page.

        Args:
            url: Product page URL

        Returns:
            Dictionary with product data
        """
        self.driver.get(url)
        self.random_delay(2, 4)

        product = {"url": url}

        # Wait for page to load
        self.wait_for_element(By.TAG_NAME, "body")

        # Extract title
        try:
            title_elem = self.driver.find_element(By.CSS_SELECTOR, "h1.product-title")
            product["title"] = title_elem.text.strip()
        except NoSuchElementException:
            product["title"] = None

        # Extract price
        try:
            price_elem = self.driver.find_element(By.CSS_SELECTOR, ".price-current")
            product["price"] = price_elem.text.strip()
        except NoSuchElementException:
            product["price"] = None

        # Extract description
        try:
            desc_elem = self.driver.find_element(By.CSS_SELECTOR, ".product-description")
            product["description"] = desc_elem.text.strip()
        except NoSuchElementException:
            product["description"] = None

        # Extract availability
        try:
            avail_elem = self.driver.find_element(By.CSS_SELECTOR, ".stock-status")
            product["in_stock"] = "in stock" in avail_elem.text.lower()
        except NoSuchElementException:
            product["in_stock"] = None

        return product

    def scrape_multiple(self, urls):
        """
        Scrape multiple product pages.

        Args:
            urls: List of product URLs

        Returns:
            List of product dictionaries
        """
        products = []
        for i, url in enumerate(urls):
            print(f"Scraping {i+1}/{len(urls)}: {url}")
            try:
                product = self.scrape_product_page(url)
                products.append(product)
            except Exception as e:
                print(f"Error scraping {url}: {e}")
                products.append({"url": url, "error": str(e)})

            # Random delay between pages
            if i < len(urls) - 1:
                self.random_delay(3, 7)

        return products

def main():
    """Run the scraper."""
    profile_id = "YOUR-PROFILE-ID"
    urls = [
        "https://example-shop.com/product/1",
        "https://example-shop.com/product/2",
        "https://example-shop.com/product/3",
    ]

    with ProductScraper(profile_id) as scraper:
        products = scraper.scrape_multiple(urls)

    # Save results
    with open("products.json", "w") as f:
        json.dump(products, f, indent=2)

    print(f"Scraped {len(products)} products")

if __name__ == "__main__":
    main()
This scraper uses context managers for clean resource handling and includes human-like delays to avoid detection.
Advanced techniques
Running multiple profiles in parallel
For faster scraping, run several profiles simultaneously:
from concurrent.futures import ThreadPoolExecutor, as_completed
from xlogin_client import XLoginClient

def scrape_with_profile(profile_id, urls):
    """Scrape URLs using a specific profile."""
    client = XLoginClient()
    results = []
    try:
        driver = client.start_profile(profile_id)
        for url in urls:
            driver.get(url)
            # ... extraction logic ...
            results.append({"url": url, "data": "..."})
    finally:
        client.stop_profile(profile_id)
    return results

def parallel_scrape(profile_ids, all_urls, max_workers=4):
    """
    Distribute URLs across multiple profiles.

    Args:
        profile_ids: List of XLogin profile UUIDs
        all_urls: Complete list of URLs to scrape
        max_workers: Number of concurrent profiles

    Returns:
        Combined results from all profiles
    """
    # Split URLs among profiles
    chunks = [[] for _ in profile_ids]
    for i, url in enumerate(all_urls):
        chunks[i % len(profile_ids)].append(url)

    all_results = []
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {
            executor.submit(scrape_with_profile, pid, chunk): pid
            for pid, chunk in zip(profile_ids, chunks)
            if chunk  # Skip empty chunks
        }

        for future in as_completed(futures):
            profile_id = futures[future]
            try:
                results = future.result()
                all_results.extend(results)
                print(f"Profile {profile_id[:8]}... completed {len(results)} URLs")
            except Exception as e:
                print(f"Profile {profile_id[:8]}... failed: {e}")

    return all_results

# Usage
profiles = ["profile-1-uuid", "profile-2-uuid", "profile-3-uuid"]
urls = ["https://site.com/page/" + str(i) for i in range(100)]
results = parallel_scrape(profiles, urls, max_workers=3)
Importing and managing cookies
Maintain login sessions by importing cookies:
import base64
import json
import requests

API_BASE = "http://127.0.0.1:35000/api/v1"

def import_cookies(profile_id, cookies):
    """
    Import cookies into an XLogin profile.

    Args:
        profile_id: Target profile UUID
        cookies: List of cookie dictionaries
    """
    # XLogin expects base64-encoded JSON
    cookies_json = json.dumps(cookies)
    cookies_b64 = base64.b64encode(cookies_json.encode()).decode()

    endpoint = f"{API_BASE}/profile/cookies/import"
    params = {
        "profileId": profile_id,
        "cookies": cookies_b64
    }
    response = requests.post(endpoint, data=params)
    return response.json()

def export_cookies(profile_id):
    """Export cookies from a profile."""
    endpoint = f"{API_BASE}/profile/cookies/export"
    params = {"profileId": profile_id}
    response = requests.get(endpoint, params=params)
    data = response.json()

    if data.get("status") == "OK":
        cookies_b64 = data.get("value")
        cookies_json = base64.b64decode(cookies_b64).decode()
        return json.loads(cookies_json)
    return None

# Export cookies after manual login
cookies = export_cookies("logged-in-profile-id")

# Import to a new profile
import_cookies("new-profile-id", cookies)
Batch profile creation
Create many profiles at once for large-scale operations:
def batch_create_profiles(count, name_prefix, proxy_list=None):
    """
    Create multiple profiles with optional proxy rotation.

    Args:
        count: Number of profiles to create
        name_prefix: Prefix for profile names
        proxy_list: Optional list of proxy configs

    Returns:
        List of created profile IDs
    """
    created = []
    for i in range(count):
        name = f"{name_prefix}-{i:03d}"

        proxy = None
        if proxy_list:
            proxy = proxy_list[i % len(proxy_list)]

        endpoint = f"{API_BASE}/profile/create"
        params = {"name": name}
        if proxy:
            params["proxy"] = f"http/{proxy['host']}/{proxy['port']}"

        response = requests.get(endpoint, params=params)
        data = response.json()

        if data.get("status") == "OK":
            created.append(data.get("value"))
            print(f"Created: {name}")
        else:
            print(f"Failed: {name}")

    return created

# Create 10 profiles with rotating proxies
proxies = [
    {"host": "proxy1.com", "port": "8080"},
    {"host": "proxy2.com", "port": "8080"},
]
profile_ids = batch_create_profiles(10, "scraper", proxies)
Common errors and troubleshooting
"Connection refused" on port 35000
Cause: XLogin.us isn't running or automation isn't enabled.
Fix:
- Make sure XLogin.us is open
- Go to Settings → Browser Automation
- Enable "Launch the browser automation port"
- Restart XLogin.us
"Profile not found" error
Cause: Invalid profile ID or profile was deleted.
Fix:
# List all profiles to find the correct ID
response = requests.get(f"{API_BASE}/profile/list")
profiles = response.json()
print(json.dumps(profiles, indent=2))
Selenium times out connecting
Cause: Profile didn't fully launch before Selenium tried to connect.
Fix: Increase the wait time after starting the profile:
webdriver_url = start_profile(profile_id)
time.sleep(5) # Wait longer for slow systems
driver = connect_selenium(webdriver_url)
"WebDriver not reachable" after stopping profile
Cause: Profile was stopped but WebDriver reference wasn't cleaned up.
Fix: Always call driver.quit() before stopping the profile:
try:
    driver.quit()
except Exception:
    pass
finally:
    stop_profile(profile_id)
Profile fingerprint detected
Cause: Some sites use advanced fingerprinting that detects inconsistencies.
Fix:
- Use "Auto" fingerprint settings instead of manual
- Ensure the timezone matches your proxy's location (see the consistency check below)
- Set language and locale to match the target region
- Keep the browser kernel updated
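One way to sanity-check a profile before a real run is to read the browser's effective timezone and language through Selenium and compare them with your proxy's region. A rough sketch using standard JavaScript APIs (the expected values are examples):

def check_profile_consistency(driver, expected_timezone, expected_language):
    """Compare the browser's reported timezone/language with expected values."""
    timezone = driver.execute_script(
        "return Intl.DateTimeFormat().resolvedOptions().timeZone;"
    )
    language = driver.execute_script("return navigator.language;")

    print(f"Timezone: {timezone} (expected {expected_timezone})")
    print(f"Language: {language} (expected {expected_language})")
    return timezone == expected_timezone and language.startswith(expected_language)

# Example: a profile routed through a German proxy
# check_profile_consistency(driver, "Europe/Berlin", "de")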
Best practices
1. One proxy per profile
Never share proxies between profiles. If two profiles use the same IP, sites can link them despite different fingerprints.
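If you generate profiles from a proxy list, a quick assertion catches accidental reuse. A small sketch, where profile_proxies is a hypothetical mapping of profile name to proxy host:

# Hypothetical mapping of profile name to the proxy host assigned at creation time
profile_proxies = {
    "scraper-000": "us1.proxy.com",
    "scraper-001": "us2.proxy.com",
}

hosts = list(profile_proxies.values())
if len(hosts) != len(set(hosts)):
    raise ValueError("Two or more profiles share the same proxy host")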
2. Match fingerprint to proxy location
If your proxy is in Germany, set the profile's timezone, language, and locale to German settings. Mismatches trigger detection.
3. Add realistic delays
Scraping at machine speed gets you blocked. Add random delays:
import random
import time

time.sleep(random.uniform(2, 5))
4. Rotate user agents occasionally
Even with XLogin's fingerprint protection, rotating user agents across sessions adds another layer:
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/120...",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/119...",
]
5. Handle failures gracefully
Sites go down. Connections fail. Build retry logic:
def scrape_with_retry(url, max_retries=3):
    for attempt in range(max_retries):
        try:
            return scrape_page(url)
        except Exception:
            if attempt == max_retries - 1:
                raise
            time.sleep(5 * (attempt + 1))
6. Save cookies regularly
Export cookies after important actions (login, session refresh) to maintain state:
# After successful login
cookies = driver.get_cookies()
save_cookies_to_profile(profile_id, cookies)
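The save_cookies_to_profile helper isn't defined above; one possible implementation, reusing the base64 import endpoint from the cookies section and assuming the profile accepts Selenium's cookie dictionaries:

import base64
import json
import requests

API_BASE = "http://127.0.0.1:35000/api/v1"

def save_cookies_to_profile(profile_id, cookies):
    """Persist Selenium cookies to an XLogin profile via the import endpoint."""
    cookies_b64 = base64.b64encode(json.dumps(cookies).encode()).decode()
    response = requests.post(
        f"{API_BASE}/profile/cookies/import",
        data={"profileId": profile_id, "cookies": cookies_b64},
    )
    return response.json()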
7. Keep XLogin.us updated
XLogin.us regularly updates browser kernels and fingerprint databases. Outdated versions get detected more easily.
FAQs
Is XLogin free?
XLogin offers a 3-day free trial with 5 browser profiles and full API access. Paid plans start at $99/month for 200 profiles.
Can XLogin.us bypass Cloudflare?
XLogin.us helps by providing realistic fingerprints, but Cloudflare's advanced challenges may still require additional techniques like residential proxies and human-like behavior patterns.
How many profiles can I run simultaneously?
This depends on your hardware. Each profile consumes RAM and CPU. A typical machine handles 5-10 concurrent profiles comfortably. For more, you'll need beefier specs or distributed setups.
Is web scraping with XLogin.us legal?
XLogin.us itself is legal software. The legality of scraping depends on what you scrape, how you use the data, and your jurisdiction. Always check a site's Terms of Service and relevant laws like GDPR and CFAA.
Can I use XLogin.us with Puppeteer instead of Selenium?
Yes, XLogin.us supports Puppeteer automation. The connection process is similar—you start the profile via API, get the WebSocket URL, and connect Puppeteer to it.
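The Puppeteer flow isn't detailed in this guide, but assuming the start call returns a ws:// endpoint as described, a rough sketch with pyppeteer (a Python port of Puppeteer) might look like this:

import asyncio
import requests
import pyppeteer

API_BASE = "http://127.0.0.1:35000/api/v1"

async def run(profile_id):
    # Start the profile; assume "value" carries the WebSocket endpoint in Puppeteer mode
    data = requests.get(
        f"{API_BASE}/profile/start",
        params={"automation": "true", "profileId": profile_id},
    ).json()
    ws_endpoint = data.get("value")

    browser = await pyppeteer.connect(browserWSEndpoint=ws_endpoint)
    page = await browser.newPage()
    await page.goto("https://httpbin.org/headers")
    print(await page.content())
    await browser.disconnect()

asyncio.run(run("YOUR-PROFILE-ID"))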
Conclusion
XLogin.us transforms web scraping from a constant battle with detection systems into a manageable operation.
The key workflow is straightforward:
- Create profiles with unique fingerprints and proxies
- Start profiles via the REST API
- Connect Selenium to the WebDriver endpoint
- Scrape with human-like behavior
- Stop profiles and export cookies for persistence
Start with the free trial to test your use case. Once you've validated that XLogin.us works for your target sites, scale up with more profiles and parallel execution.