Ever spent hours building the perfect scraper only to watch it get blocked within minutes? That's the frustrating reality of scraping protected websites in 2026.
Modern anti-bot systems don't just check your IP address. They analyze browser fingerprints, detect automation patterns, and flag anything that doesn't look like a genuine user.
GoLogin solves this problem by letting you create multiple browser profiles, each with unique fingerprints that pass detection. If you're scraping sites protected by Cloudflare, DataDome, or PerimeterX, this tool can be a game-changer.
In this guide, I'll show you how to set up GoLogin for web scraping. You'll learn to integrate it with Selenium, Playwright, and Puppeteer—complete with working code examples.
What is GoLogin?
GoLogin is an antidetect browser designed for managing multiple browser profiles with unique digital fingerprints. Each profile appears as a completely different user to websites, making it nearly impossible for anti-bot systems to connect your scraping activities.
In practice, GoLogin lets you:
- Create hundreds of isolated browser profiles
- Spoof browser fingerprints including canvas, WebGL, and audio
- Integrate residential or datacenter proxies per profile
- Connect to automation tools via Selenium, Playwright, or Puppeteer
- Run profiles locally or through GoLogin's cloud infrastructure
- Save and restore session data including cookies and localStorage
GoLogin is popular among web scrapers, affiliate marketers, and anyone managing multiple online accounts. Founded in 2019, it's become one of the most accessible antidetect browsers thanks to its Python SDK and competitive pricing.
The key differentiator? GoLogin uses its custom Orbita browser engine built on Chromium. This engine is specifically designed to resist fingerprinting detection—something standard Chrome with automation flags simply can't match.
Why use GoLogin for web scraping?
Standard scraping approaches fail against modern protection for one simple reason: they look automated.
Even with rotating proxies and random user agents, sites detect patterns in how your browser behaves. Canvas fingerprinting, WebGL hashes, and navigator properties all reveal automation.
GoLogin addresses this at the browser level. Instead of trying to mask an automated browser, you're running what appears to be a legitimate user's browser.
Here's when GoLogin makes sense:
Sites with aggressive bot protection. Cloudflare, Akamai, and PerimeterX all analyze browser fingerprints. GoLogin profiles pass these checks consistently.
Multi-account operations. Need to scrape from multiple logged-in accounts? Each GoLogin profile maintains separate cookies and sessions.
Long-running scraping sessions. Standard automation gets flagged over time. GoLogin profiles maintain realistic fingerprints across sessions.
Sites requiring human-like behavior. Some targets need mouse movements, scrolling, and interaction patterns. GoLogin combined with automation tools handles this well.
The tradeoff is complexity and cost. For simple targets without bot protection, requests or basic Selenium works fine. GoLogin shines when those approaches fail.
GoLogin vs other approaches
Before committing to GoLogin, consider how it compares to alternatives:
| Approach | Best For | Fingerprint Protection | Cost | Complexity |
|---|---|---|---|---|
| GoLogin | Protected sites, multi-account | Excellent | $24-199/mo | Medium |
| Selenium + undetected-chromedriver | Moderate protection | Good | Free | Low |
| Playwright stealth | JS-heavy sites | Moderate | Free | Low |
| Residential proxies only | IP-based blocking | None | $10-50/GB | Low |
| Multilogin | Enterprise, maximum stealth | Excellent | $99-399/mo | Medium |
| Kameleo | Mobile fingerprints | Excellent | $59-199/mo | Medium |
Choose GoLogin when:
- Free tools like undetected-chromedriver still get blocked
- You need multiple persistent browser profiles
- You want a balance of capability and cost
- Python is your primary language
Consider alternatives when:
- You're scraping unprotected sites (use requests or Scrapy)
- Budget is extremely limited (try undetected-chromedriver first)
- You need enterprise features (Multilogin offers more)
Getting started: Installation and setup
Prerequisites
Before starting, ensure you have:
- Python 3.8 or higher
- A GoLogin account (free tier available)
- Your GoLogin API token
Step 1: Create a GoLogin account
Head to gologin.com and sign up. The free plan includes 3 browser profiles—enough to test your scraping workflow.
After registration, you'll get a 7-day trial of premium features with access to 1,000 profiles.
Step 2: Get your API token
Your API token authenticates Python scripts with GoLogin's service.
- Log into GoLogin
- Navigate to Settings → API
- Click "New Token"
- Copy and save the token securely
Never commit this token to version control. Use environment variables instead.
Step 3: Install the Python SDK
Install GoLogin's official package:
pip install gologin
For Selenium integration, also install:
pip install selenium webdriver-manager
For Playwright:
pip install playwright
playwright install chromium
Step 4: Verify installation
Create a test script to confirm everything works:
import os
from gologin import GoLogin

# Load token from environment variable
token = os.environ.get('GL_API_TOKEN')

gl = GoLogin({
    'token': token
})

# Create a test profile
profile = gl.create({
    'name': 'Test Profile',
    'os': 'win',
    'navigator': {
        'language': 'en-US',
        'platform': 'Win32'
    }
})

print(f"Created profile: {profile['id']}")
Run with your token:
GL_API_TOKEN=your_token_here python test_gologin.py
If you see a profile ID, you're ready to scrape.
GoLogin core concepts
Before writing scrapers, understand these key concepts:
Browser profiles
A profile is an isolated browser environment with its own fingerprint, cookies, and settings. Think of each profile as a separate person's computer.
Profiles persist across sessions. Close the browser, start it later, and you're still "logged in" with the same cookies and history.
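A quick way to see what already exists in your account is to list profiles through the SDK - the same getProfiles call used later in the troubleshooting section:

import os
from gologin import GoLogin

gl = GoLogin({'token': os.environ.get('GL_API_TOKEN')})

# Each entry is an isolated environment with its own fingerprint, cookies, and history
for p in gl.getProfiles():
    print(f"{p['id']}: {p['name']}")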
Browser fingerprints
A fingerprint is the unique combination of browser properties websites use to identify you. This includes:
- User agent string
- Screen resolution
- Installed fonts
- Canvas rendering patterns
- WebGL capabilities
- Audio context properties
- Hardware concurrency
- Device memory
GoLogin generates realistic fingerprints for each profile. The values come from real device databases, not random generation.
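To see what a profile actually exposes, you can read a few of these properties from a started profile. This is a quick sanity check, not part of the GoLogin API - it assumes driver is a Selenium driver already attached to a running profile (setup is covered in the integration sections below):

# Read a handful of fingerprint-related properties from the live browser
fingerprint_sample = driver.execute_script("""
    return {
        userAgent: navigator.userAgent,
        platform: navigator.platform,
        language: navigator.language,
        hardwareConcurrency: navigator.hardwareConcurrency,
        deviceMemory: navigator.deviceMemory,
        screen: screen.width + 'x' + screen.height
    };
""")
print(fingerprint_sample)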
Orbita browser
GoLogin's custom browser engine based on Chromium. It's modified to prevent fingerprint leakage that occurs in standard Chrome automation.
When you connect via Selenium or Playwright, you're controlling an Orbita instance—not regular Chrome.
Local vs cloud profiles
Local profiles run on your machine. The browser opens visibly (or headless) and you control it directly.
Cloud profiles run on GoLogin's servers. You connect via WebSocket and control a remote browser. This is useful for scaling or when you can't run browsers locally.
Debugger address
When you start a profile, GoLogin returns a debugger address like 127.0.0.1:35421. This is the Chrome DevTools Protocol endpoint your automation tools connect to.
Your first GoLogin scraper
Let's build a complete scraper that extracts product data from a protected eCommerce site.
Step 1: Create a browser profile
First, create a profile configured for scraping:
import os
from gologin import GoLogin

token = os.environ.get('GL_API_TOKEN')

gl = GoLogin({
    'token': token
})

# Create profile with scraping-optimized settings
profile = gl.create({
    'name': 'Scraper Profile 1',
    'os': 'win',
    'navigator': {
        'language': 'en-US',
        'platform': 'Win32',
        'userAgent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36'
    },
    'proxyEnabled': False,  # We'll add a proxy later
    'webRTC': {
        'mode': 'alerted',
        'enabled': True
    }
})

profile_id = profile['id']
print(f"Profile created: {profile_id}")
Save the profile ID—you'll use it to launch the browser.
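If other scripts will reuse the profile, one lightweight option is to stash the ID in a small JSON file (profiles.json is just an example path):

import json

# Persist the profile ID so later runs can launch the same profile
with open('profiles.json', 'w') as f:
    json.dump({'scraper_profile_1': profile_id}, f)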
Step 2: Start the profile and connect Selenium
Now connect Selenium to the running GoLogin profile:
import os
import time
from gologin import GoLogin
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

token = os.environ.get('GL_API_TOKEN')
profile_id = 'your_profile_id_here'

# Initialize GoLogin
gl = GoLogin({
    'token': token,
    'profile_id': profile_id
})

# Start the browser and get the debugger address
debugger_address = gl.start()
print(f"Browser started at: {debugger_address}")

# Connect Selenium to the running browser
chrome_options = Options()
chrome_options.add_experimental_option('debuggerAddress', debugger_address)

# Get a matching ChromeDriver version
chromium_version = gl.get_chromium_version()
service = Service(
    ChromeDriverManager(driver_version=chromium_version).install()
)

driver = webdriver.Chrome(service=service, options=chrome_options)
Step 3: Scrape the target site
With Selenium connected, scrape like normal:
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Navigate to target
driver.get('https://example-store.com/products')

# Wait for products to load
wait = WebDriverWait(driver, 10)
products = wait.until(
    EC.presence_of_all_elements_located((By.CSS_SELECTOR, '.product-card'))
)

# Extract data
results = []
for product in products:
    name = product.find_element(By.CSS_SELECTOR, '.product-name').text
    price = product.find_element(By.CSS_SELECTOR, '.product-price').text
    results.append({
        'name': name,
        'price': price
    })

print(f"Scraped {len(results)} products")
Step 4: Clean up properly
Always stop the profile when finished:
# Close Selenium
driver.quit()
# Wait briefly for clean shutdown
time.sleep(2)
# Stop the GoLogin profile
gl.stop()
print("Profile stopped successfully")
Failing to stop profiles leaves orphan processes and can consume your account limits.
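To make cleanup harder to forget, wrap the work in try/finally so the profile is stopped even when scraping raises an exception (this sketch reuses the gl, service, and chrome_options objects from Step 2):

driver = None
try:
    driver = webdriver.Chrome(service=service, options=chrome_options)
    driver.get('https://example-store.com/products')
    # ... scraping and extraction logic ...
finally:
    # Runs even on errors, so no orphan browsers or running profiles are left behind
    if driver:
        driver.quit()
    time.sleep(2)
    gl.stop()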
Selenium integration
For production scrapers, wrap GoLogin in a reusable class:
import os
import time
from gologin import GoLogin
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

class GoLoginScraper:
    """Wrapper for GoLogin + Selenium scraping."""

    def __init__(self, profile_id, token=None):
        self.token = token or os.environ.get('GL_API_TOKEN')
        self.profile_id = profile_id
        self.driver = None
        self.gl = None

    def start(self):
        """Start browser and return Selenium driver."""
        self.gl = GoLogin({
            'token': self.token,
            'profile_id': self.profile_id
        })
        debugger_address = self.gl.start()

        chrome_options = Options()
        chrome_options.add_experimental_option(
            'debuggerAddress',
            debugger_address
        )

        chromium_version = self.gl.get_chromium_version()
        service = Service(
            ChromeDriverManager(driver_version=chromium_version).install()
        )

        self.driver = webdriver.Chrome(
            service=service,
            options=chrome_options
        )
        return self.driver

    def stop(self):
        """Clean shutdown of browser and profile."""
        if self.driver:
            self.driver.quit()
            time.sleep(1)
        if self.gl:
            self.gl.stop()

    def __enter__(self):
        return self.start()

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.stop()
        return False

# Usage with context manager
with GoLoginScraper('your_profile_id') as driver:
    driver.get('https://example.com')
    print(driver.title)
# Profile automatically stopped
Handling dynamic content
Many protected sites load content via JavaScript. Wait for elements properly:
import time

from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

def scrape_with_waits(driver, url):
    """Scrape a page with proper wait handling."""
    driver.get(url)
    wait = WebDriverWait(driver, 15)

    try:
        # Wait for a specific element indicating the page has loaded
        wait.until(
            EC.presence_of_element_located(
                (By.CSS_SELECTOR, '[data-loaded="true"]')
            )
        )
    except TimeoutException:
        print("Page load timeout - continuing anyway")

    # Additional wait for any lazy-loaded content
    time.sleep(2)
    return driver.page_source
Running multiple profiles concurrently
Scale your scraping with multiple profiles:
from concurrent.futures import ThreadPoolExecutor, as_completed

def scrape_url(profile_id, url):
    """Scrape a single URL with a dedicated profile."""
    with GoLoginScraper(profile_id) as driver:
        driver.get(url)
        title = driver.title
        return {'url': url, 'title': title}

# Define your profiles and URLs
profiles = ['profile_1', 'profile_2', 'profile_3']
urls = [
    'https://site1.com',
    'https://site2.com',
    'https://site3.com'
]

# Map URLs to profiles
tasks = list(zip(profiles, urls))

# Run concurrently
results = []
with ThreadPoolExecutor(max_workers=3) as executor:
    futures = {
        executor.submit(scrape_url, pid, url): (pid, url)
        for pid, url in tasks
    }
    for future in as_completed(futures):
        try:
            result = future.result()
            results.append(result)
        except Exception as e:
            print(f"Error: {e}")

print(f"Completed {len(results)} scrapes")
Playwright integration
Playwright offers better performance and auto-waiting compared to Selenium. Here's how to integrate it with GoLogin:
import os
from gologin import GoLogin
from playwright.sync_api import sync_playwright

def scrape_with_playwright(profile_id, url):
    """Use Playwright with a GoLogin profile."""
    token = os.environ.get('GL_API_TOKEN')

    gl = GoLogin({
        'token': token,
        'profile_id': profile_id
    })

    # Start the profile and get the CDP debugger address
    debugger_address = gl.start()

    with sync_playwright() as p:
        # Connect to the running GoLogin browser over CDP
        browser = p.chromium.connect_over_cdp(
            f"http://{debugger_address}"
        )

        # Use the existing context (preserves cookies)
        context = browser.contexts[0]
        page = context.pages[0] if context.pages else context.new_page()

        # Scrape
        page.goto(url)
        page.wait_for_load_state('networkidle')

        title = page.title()
        content = page.content()

        browser.close()

    gl.stop()
    return {'title': title, 'html': content}
Async Playwright for better performance
For high-volume scraping, use async Playwright:
import asyncio
import os
from gologin import GoLogin
from playwright.async_api import async_playwright

async def async_scrape(profile_id, url):
    """Async Playwright scraping with GoLogin."""
    token = os.environ.get('GL_API_TOKEN')

    gl = GoLogin({
        'token': token,
        'profile_id': profile_id
    })
    debugger_address = gl.start()

    async with async_playwright() as p:
        browser = await p.chromium.connect_over_cdp(
            f"http://{debugger_address}"
        )
        context = browser.contexts[0]
        page = await context.new_page()

        await page.goto(url)
        await page.wait_for_load_state('networkidle')

        data = await page.evaluate('''() => {
            return {
                title: document.title,
                h1: document.querySelector('h1')?.innerText
            }
        }''')

        await browser.close()

    gl.stop()
    return data

# Run async scraping
async def main():
    result = await async_scrape('your_profile_id', 'https://example.com')
    print(result)

asyncio.run(main())
Puppeteer integration
For Node.js projects, connect Puppeteer to GoLogin:
const puppeteer = require('puppeteer-core');

async function scrapeWithGoLogin(profileId, token, url) {
  // GoLogin cloud browser endpoint
  const wsEndpoint = `wss://cloudbrowser.gologin.com/connect?token=${token}&profile=${profileId}`;

  const browser = await puppeteer.connect({
    browserWSEndpoint: wsEndpoint,
    defaultViewport: null
  });

  const page = await browser.newPage();
  await page.goto(url, {
    waitUntil: 'networkidle2',
    timeout: 30000
  });

  const data = await page.evaluate(() => ({
    title: document.title,
    url: window.location.href
  }));

  await browser.close();
  return data;
}

// Usage
const token = process.env.GL_API_TOKEN;
const profileId = 'your_profile_id';

scrapeWithGoLogin(profileId, token, 'https://example.com')
  .then(console.log)
  .catch(console.error);
The cloud browser approach is useful when you can't install the GoLogin desktop app.
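If you'd rather stay in Python, Playwright can attach to a remote CDP WebSocket as well, so a similar pattern should work against the same cloud endpoint. Treat this as a sketch: it assumes the cloud WebSocket URL format from the Puppeteer example above is CDP-compatible for Playwright too.

import os
from playwright.sync_api import sync_playwright

token = os.environ.get('GL_API_TOKEN')
profile_id = 'your_profile_id'

# Assumes the cloud endpoint shown in the Puppeteer example accepts CDP connections
ws_endpoint = f"wss://cloudbrowser.gologin.com/connect?token={token}&profile={profile_id}"

with sync_playwright() as p:
    browser = p.chromium.connect_over_cdp(ws_endpoint)
    context = browser.contexts[0] if browser.contexts else browser.new_context()
    page = context.new_page()
    page.goto('https://example.com')
    print(page.title())
    browser.close()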
Configuring proxies in GoLogin
Proxies are essential for avoiding IP-based blocks. GoLogin supports several proxy types.
Adding a proxy to a profile
from gologin import GoLogin

gl = GoLogin({'token': token})

# Create profile with proxy
profile = gl.create({
    'name': 'Proxied Profile',
    'os': 'win',
    'proxy': {
        'mode': 'http',
        'host': '192.168.1.1',
        'port': 8080,
        'username': 'user',
        'password': 'pass'
    },
    'proxyEnabled': True
})
Using GoLogin's built-in proxies
GoLogin offers free proxies for testing. Add them programmatically:
# Add GoLogin proxy to existing profile
gl.addGologinProxyToProfile(profile_id, 'us') # US proxy
gl.addGologinProxyToProfile(profile_id, 'uk') # UK proxy
Available country codes include: us, uk, de, fr, ca, au, and more.
Rotating proxies per session
For large-scale scraping, rotate proxies:
import random

def get_random_proxy(proxy_list):
    """Select a random proxy from the list."""
    proxy = random.choice(proxy_list)
    return {
        'mode': 'http',
        'host': proxy['host'],
        'port': proxy['port'],
        'username': proxy.get('username', ''),
        'password': proxy.get('password', '')
    }

def create_profile_with_rotation(gl, proxy_list):
    """Create a profile with a random proxy."""
    proxy = get_random_proxy(proxy_list)
    profile = gl.create({
        'name': 'Rotated Profile',
        'os': 'win',
        'proxy': proxy,
        'proxyEnabled': True
    })
    return profile
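Usage is straightforward - pass in your own proxy list (the hosts and credentials below are placeholders):

# Placeholder proxy list - substitute your own residential or datacenter proxies
proxy_list = [
    {'host': 'proxy1.example.com', 'port': 8000, 'username': 'user', 'password': 'pass'},
    {'host': 'proxy2.example.com', 'port': 8000, 'username': 'user', 'password': 'pass'},
]

profile = create_profile_with_rotation(gl, proxy_list)
print(f"Created proxied profile: {profile['id']}")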
Advanced techniques
Headless mode
Run profiles without visible browser windows:
gl = GoLogin({
    'token': token,
    'profile_id': profile_id,
    'extra_params': ['--headless=new']
})

debugger_address = gl.start()
Note: Some sites detect headless mode. Test thoroughly before production use.
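A quick spot-check before committing to headless runs is to load your target (or any fingerprint test page) and inspect a couple of common headless giveaways. This is a rough sketch using standard navigator properties, not an exhaustive audit:

# Minimal headless spot-check via Selenium
signals = driver.execute_script("""
    return {
        webdriver: navigator.webdriver,
        plugins: navigator.plugins.length,
        languages: navigator.languages
    };
""")
# navigator.webdriver === true or an empty plugin list are common (if imperfect) automation tells
print(signals)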
Persisting sessions across runs
GoLogin automatically saves cookies and localStorage. To ensure persistence:
# Start the profile - previous session data loads automatically
gl = GoLogin({
    'token': token,
    'profile_id': profile_id
})
debugger_address = gl.start()

# ... do work ...

# Stop the profile - session data persists for the next run
gl.stop()
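A pattern this enables is logging in once and reusing the session in later runs without re-authenticating. The sketch below assumes the GoLoginScraper wrapper from the Selenium section; the login URL and form selectors are placeholders for your target site:

from selenium.webdriver.common.by import By

# Run 1: log in once - cookies are saved with the profile when it stops
with GoLoginScraper(profile_id) as driver:
    driver.get('https://example.com/login')
    driver.find_element(By.NAME, 'email').send_keys('user@example.com')
    driver.find_element(By.NAME, 'password').send_keys('secret')
    driver.find_element(By.CSS_SELECTOR, 'button[type="submit"]').click()

# Run 2 (later, even after a restart): the profile restores the cookies,
# so logged-in pages load without authenticating again
with GoLoginScraper(profile_id) as driver:
    driver.get('https://example.com/account/orders')
    print(driver.title)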
Custom fingerprint parameters
Override specific fingerprint values:
profile = gl.create({
    'name': 'Custom Fingerprint',
    'os': 'mac',
    'navigator': {
        'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)...',
        'resolution': '1920x1080',
        'language': 'en-US',
        'platform': 'MacIntel',
        'hardwareConcurrency': 8,
        'deviceMemory': 8,
        'maxTouchPoints': 0
    },
    'webGL': {
        'mode': 'noise'  # Adds noise to the WebGL fingerprint
    },
    'canvas': {
        'mode': 'noise'  # Adds noise to the canvas fingerprint
    }
})
Handling CAPTCHAs
GoLogin reduces CAPTCHA frequency but doesn't eliminate them. For sites that still trigger CAPTCHAs:
- Slow down requests. Add delays between page loads.
- Simulate human behavior. Random scrolling and mouse movements help.
- Use residential proxies. Datacenter IPs trigger more CAPTCHAs.
- Integrate CAPTCHA solvers. Services like 2Captcha work with any browser automation.
import random
import time

def human_like_delay():
    """Random delay mimicking human behavior."""
    time.sleep(random.uniform(1.5, 4.0))

def random_scroll(driver):
    """Scroll randomly like a human would."""
    scroll_amount = random.randint(300, 700)
    driver.execute_script(f"window.scrollBy(0, {scroll_amount})")
    time.sleep(random.uniform(0.5, 1.5))
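None of this guarantees a CAPTCHA-free run, so it also helps to detect challenges explicitly and back off instead of retrying blindly. The selectors below are examples of common markers - adjust them for your target:

from selenium.webdriver.common.by import By

def captcha_present(driver):
    """Rough check for common CAPTCHA markers (example selectors only)."""
    markers = [
        'iframe[src*="recaptcha"]',
        'iframe[src*="hcaptcha"]',
        '#challenge-form',  # Cloudflare challenge page
    ]
    return any(driver.find_elements(By.CSS_SELECTOR, m) for m in markers)

if captcha_present(driver):
    # Back off, rotate to another profile/proxy, or hand the page to a solving service
    print("CAPTCHA detected - backing off")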
Common errors and troubleshooting
"Profile not found"
Cause: Invalid profile ID or profile was deleted.
Fix: List your profiles and verify the ID:
profiles = gl.getProfiles()
for p in profiles:
    print(f"{p['id']}: {p['name']}")
"Token invalid or expired"
Cause: API token was revoked or incorrectly copied.
Fix: Generate a new token from Settings → API in GoLogin dashboard.
"Connection refused" when connecting Selenium
Cause: Browser didn't start or wrong debugger address.
Fix: Ensure gl.start() completed successfully and use the returned address:
address = gl.start()
print(f"Connect to: {address}") # Should show host:port
ChromeDriver version mismatch
Cause: GoLogin's Orbita version doesn't match your ChromeDriver.
Fix: Use the version from GoLogin:
chromium_version = gl.get_chromium_version()
service = Service(
    ChromeDriverManager(driver_version=chromium_version).install()
)
Profile takes too long to start
Cause: Large profile with many cookies or slow network.
Fix: Create fresh profiles periodically and clear unnecessary data:
# Delete old profile
gl.delete(old_profile_id)
# Create clean replacement
new_profile = gl.create({...})
Best practices
Rotate profiles for large jobs
Don't hammer one profile. Spread requests across multiple:
def get_profile_for_request(request_num, profiles):
    """Round-robin profile selection."""
    index = request_num % len(profiles)
    return profiles[index]
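In a scraping loop, that looks like this (reusing the GoLoginScraper wrapper and a urls list from earlier examples):

profiles = ['profile_1', 'profile_2', 'profile_3']

for i, url in enumerate(urls):
    profile_id = get_profile_for_request(i, profiles)
    with GoLoginScraper(profile_id) as driver:
        driver.get(url)
        print(driver.title)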
Implement polite scraping delays
Even with fingerprint protection, aggressive scraping gets noticed:
import random
import time

def polite_request(driver, url, min_delay=2, max_delay=5):
    """Request with a random delay."""
    driver.get(url)
    delay = random.uniform(min_delay, max_delay)
    time.sleep(delay)
    return driver.page_source
Monitor profile health
Track success rates per profile:
from collections import defaultdict

profile_stats = defaultdict(lambda: {'success': 0, 'failed': 0})

def record_result(profile_id, success):
    if success:
        profile_stats[profile_id]['success'] += 1
    else:
        profile_stats[profile_id]['failed'] += 1

def get_healthy_profiles(min_success_rate=0.8):
    """Return profiles with good success rates."""
    healthy = []
    for pid, stats in profile_stats.items():
        total = stats['success'] + stats['failed']
        if total > 10:  # Minimum sample size
            rate = stats['success'] / total
            if rate >= min_success_rate:
                healthy.append(pid)
    return healthy
Handle failures gracefully
Implement retry logic with exponential backoff:
import time
from functools import wraps

def retry_on_failure(max_retries=3, base_delay=1):
    """Decorator for retry logic with exponential backoff."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    if attempt == max_retries - 1:
                        raise
                    delay = base_delay * (2 ** attempt)
                    print(f"Attempt {attempt + 1} failed: {e}")
                    print(f"Retrying in {delay}s...")
                    time.sleep(delay)
        return wrapper
    return decorator

@retry_on_failure(max_retries=3)
def scrape_page(driver, url):
    driver.get(url)
    return driver.page_source
Save data incrementally
Don't wait until the end to save results:
import json

def save_result(data, filename='results.jsonl'):
    """Append a single result to file."""
    with open(filename, 'a') as f:
        f.write(json.dumps(data) + '\n')

# In your scraping loop
for url in urls:
    result = scrape_page(driver, url)
    save_result(result)  # Saved immediately
GoLogin pricing
GoLogin offers several pricing tiers (as of 2026):
| Plan | Profiles | Team Members | Price (Monthly) | Price (Annual) |
|---|---|---|---|---|
| Forever Free | 3 | 1 | $0 | $0 |
| Professional | 100 | 1 | $49 | $24/mo |
| Business | 300 | 10 | $99 | $49/mo |
| Enterprise | 1,000 | 20 | $199 | $99/mo |
| Custom | 2,000+ | 100 | $299+ | Custom |
All paid plans include:
- API access for automation
- Cloud browser profiles
- Proxy management
- Profile sharing
- Priority support
The free plan works for testing but limits you to 3 profiles with no API access. For serious scraping, the Professional plan at $24/month (annual) offers good value.
FAQs
Is GoLogin legal for web scraping?
GoLogin itself is legal. The legality depends on what you scrape and how.
Scraping public data is generally legal, but violating a site's terms of service can have consequences. Always check the target site's ToS and robots.txt.
Can GoLogin bypass all anti-bot protection?
No tool bypasses everything. GoLogin handles most protection including Cloudflare's JavaScript challenges, but extremely aggressive systems may still detect automated patterns.
Combine GoLogin with realistic behavior patterns and quality residential proxies for best results.
How many profiles can I run simultaneously?
This depends on your hardware. Each profile is a separate browser process.
On a typical 8GB RAM machine, expect to run 3-5 profiles comfortably. For more, use cloud profiles or scale horizontally across machines.
Does GoLogin work with headless mode?
Yes, but with caveats. Some sites specifically detect headless browsers. GoLogin's Orbita browser resists some headless detection, but test thoroughly.
For maximum stealth, run headed (visible) browsers when possible.
Can I use my existing proxies with GoLogin?
Yes. GoLogin supports HTTP, HTTPS, and SOCKS5 proxies. You can configure proxies per profile or use GoLogin's built-in free proxies.
For best results, residential proxies outperform datacenter proxies on protected sites.
Conclusion
GoLogin fills an important gap in the web scraping toolkit. When standard approaches fail against fingerprint-based detection, it provides a reliable way to appear as legitimate users.
The key takeaways:
- Use GoLogin when simpler tools get blocked
- Create separate profiles for different scraping jobs
- Combine with quality proxies for IP rotation
- Implement polite delays and human-like behavior
- Monitor profile health and rotate as needed
Start with the free tier to test against your target sites. If GoLogin consistently bypasses their protection, upgrade to Professional for API access and more profiles.
For sites without aggressive protection, stick with simpler tools. GoLogin adds complexity—only use it when you need to.