How to Scrape Viagogo in 2026

Viagogo is one of the world's largest secondary ticket marketplaces, connecting buyers and sellers for concerts, sports events, and live entertainment across 70+ countries.

If you're building a price comparison tool, monitoring ticket availability, or conducting market research on the live events industry, you'll inevitably need to extract data from Viagogo.

The challenge? Viagogo sits behind Cloudflare's bot protection, uses heavy JavaScript rendering, and implements multiple layers of anti-scraping defenses. In this guide, I'll walk you through the technical approaches that actually work in 2026—from basic HTTP requests to advanced browser automation with stealth techniques.

What You'll Learn

  • Understanding Viagogo's technical architecture and anti-bot measures
  • When to use HTTP requests vs. browser automation
  • Implementing Playwright with stealth plugins for maximum effectiveness
  • Handling dynamic pricing and JavaScript-rendered content
  • Proxy rotation and fingerprint randomization strategies
  • Legal and ethical considerations for ticket scraping

Understanding Viagogo's Defense Mechanisms

Before diving into code, you need to understand what you're up against. Viagogo employs a multi-layered defense strategy:

Cloudflare Bot Management: Viagogo uses Cloudflare's enterprise-level bot protection, which includes JavaScript challenges, browser fingerprinting, and behavioral analysis. This isn't the basic "I'm Under Attack" mode—it's the sophisticated version that uses machine learning to identify automation patterns.

JavaScript-Heavy Rendering: Nearly all content on Viagogo loads dynamically through React. If you try a simple HTTP GET request, you'll receive an HTML shell with almost no actual ticket data. The pricing, availability, and event details all load after the initial page render through AJAX calls.

Rate Limiting: Viagogo monitors request patterns aggressively. Make too many requests from a single IP in a short window, and you'll get temporarily banned—sometimes for hours.

Geographic Restrictions: Certain events and pricing information vary by region. Viagogo checks your IP geolocation and serves different data accordingly, which complicates scraping if you need comprehensive international data.

Approach 1: The HTTP Request Method (When It Works)

Let's start with the simplest approach. While it won't work for most Viagogo pages, understanding why helps you grasp the challenges ahead.

Here's what a naive HTTP request looks like:

import requests

url = "https://www.viagogo.com/Concert-Tickets/Rock-and-Pop/Taylor-Swift-Tickets"
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36"
}

response = requests.get(url, headers=headers)
print(response.text[:500])

This will give you HTML, but it'll be mostly empty—just the page skeleton. Why? Because Viagogo uses client-side rendering. The actual ticket data loads through subsequent API calls after JavaScript executes.

When this approach works: If you can identify Viagogo's internal API endpoints (by inspecting network traffic in DevTools), you might hit those directly. However, these APIs typically require:

  • Valid cookies from an authenticated session
  • Specific request headers that mimic the browser
  • Anti-CSRF tokens that rotate

Here's a more sophisticated HTTP approach targeting an API endpoint:

import requests
import json

# First, establish a session to maintain cookies
session = requests.Session()

# Get the main page to establish cookies
session.get("https://www.viagogo.com")

# Now target the API endpoint (found via DevTools Network tab)
api_url = "https://www.viagogo.com/api/events/search"
params = {
    "query": "taylor swift",
    "limit": 50
}

headers = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Accept": "application/json",
    "Referer": "https://www.viagogo.com/Concert-Tickets",
    "X-Requested-With": "XMLHttpRequest"
}

response = session.get(api_url, params=params, headers=headers)

# Cloudflare may serve an HTML challenge page instead of JSON, so check
# before parsing to avoid a confusing JSONDecodeError
content_type = response.headers.get("Content-Type", "")
if response.status_code != 200 or "application/json" not in content_type:
    raise RuntimeError(f"Blocked or challenged: HTTP {response.status_code}")

data = response.json()
print(json.dumps(data, indent=2))

The catch: This approach falls apart when Cloudflare's challenge kicks in. You'll get a 403 Forbidden or be served a challenge page instead of JSON data. For production scraping, you need something more robust.

Approach 2: Browser Automation with Playwright Stealth

When HTTP requests fail, browser automation becomes necessary. Playwright combined with stealth plugins is currently the most effective method for scraping Viagogo in 2026.

Setting Up Playwright with Stealth

First, install the necessary packages:

npm install playwright playwright-extra puppeteer-extra-plugin-stealth

Here's a basic scraper that actually bypasses Viagogo's defenses:

const { chromium } = require('playwright-extra');
const StealthPlugin = require('puppeteer-extra-plugin-stealth');

// Apply the stealth plugin
chromium.use(StealthPlugin());

async function scrapeViagogoEvent(eventUrl) {
  const browser = await chromium.launch({
    headless: true,
    args: [
      '--disable-blink-features=AutomationControlled',
      '--disable-features=IsolateOrigins,site-per-process',
    ]
  });
  
  const context = await browser.newContext({
    userAgent: 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
    viewport: { width: 1920, height: 1080 },
    locale: 'en-US',
    timezoneId: 'America/New_York'
  });
  
  const page = await context.newPage();
  
  // Navigate and wait for the dynamic content
  await page.goto(eventUrl, { waitUntil: 'networkidle' });
  
  // Wait for ticket listings to load (adjust selector based on actual page structure)
  await page.waitForSelector('[data-testid="ticket-card"]', { timeout: 10000 });
  
  // Extract ticket data
  const tickets = await page.$$eval('[data-testid="ticket-card"]', cards => {
    return cards.map(card => {
      const price = card.querySelector('.ticket-price')?.textContent.trim();
      const section = card.querySelector('.ticket-section')?.textContent.trim();
      const quantity = card.querySelector('.ticket-quantity')?.textContent.trim();
      
      return { price, section, quantity };
    });
  });
  
  console.log(`Found ${tickets.length} tickets:`);
  console.log(tickets);
  
  await browser.close();
  return tickets;
}

// Usage
scrapeViagogoEvent('https://www.viagogo.com/Concert-Tickets/Rock-and-Pop/Taylor-Swift-Tickets/E-12345678');

Why This Works Better

The stealth plugin handles several critical anti-detection measures:

  1. Removes navigator.webdriver: This property is set to true in automation browsers, making them easy to detect. The stealth plugin sets it to undefined.
  2. Masks Chrome Headless: The default user agent in headless mode includes "HeadlessChrome", which is a dead giveaway. The plugin replaces this with a normal Chrome signature.
  3. Fixes WebGL and Plugin Arrays: Headless browsers have empty plugin lists and different WebGL rendering. The stealth plugin injects realistic values.
  4. Randomizes Browser Fingerprints: Each session gets slightly different canvas fingerprints, making it harder to track individual scrapers.
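
You can sanity-check these patches yourself before pointing the scraper at Viagogo. Here's a quick probe (a minimal sketch; the exact plugin counts and user-agent strings vary by Chrome version):

// Quick sanity check that the stealth patches took effect. Run this with
// the stealth-enabled `page` from the scraper above.
const fingerprint = await page.evaluate(() => ({
  webdriver: navigator.webdriver,        // should be undefined or false
  pluginCount: navigator.plugins.length, // should be > 0, not an empty list
  userAgent: navigator.userAgent         // should not contain "HeadlessChrome"
}));
console.log(fingerprint);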

Handling Dynamic Content and Pagination

Viagogo loads ticket listings progressively. Here's how to handle infinite scroll pagination:

async function scrapeAllTickets(page) {
  let previousHeight = 0;
  
  // Keep scrolling until the page height stops growing, i.e. no more
  // listings are being lazy-loaded
  while (true) {
    // Scroll to bottom
    await page.evaluate(() => {
      window.scrollTo(0, document.body.scrollHeight);
    });
    
    // Give the newly loaded listings time to render
    await page.waitForTimeout(2000);
    
    const currentHeight = await page.evaluate(() => document.body.scrollHeight);
    if (currentHeight === previousHeight) {
      break; // No new content loaded
    }
    previousHeight = currentHeight;
  }
  
  // Extract once, after everything has loaded. Extracting on every scroll
  // step re-reads the same cards and forces a lossy dedup afterwards (two
  // distinct listings can legitimately share a section and price).
  return page.$$eval('[data-testid="ticket-card"]', cards =>
    cards.map(card => ({
      price: card.querySelector('.ticket-price')?.textContent.trim(),
      section: card.querySelector('.ticket-section')?.textContent.trim()
    }))
  );
}

Advanced Technique: Intercepting API Calls

Here's a trick that significantly speeds up scraping: instead of waiting for the page to render completely, intercept the API responses that contain the ticket data.

async function interceptViagogoAPI(eventUrl) {
  const browser = await chromium.launch({ headless: true });
  const page = await browser.newPage();
  
  const ticketData = [];
  
  // Intercept all network responses
  page.on('response', async (response) => {
    const url = response.url();
    
    // Look for the ticket listings API endpoint
    if (url.includes('/api/events/') && url.includes('/listings')) {
      try {
        const json = await response.json();
        if (json.listings) {
          ticketData.push(...json.listings);
        }
      } catch (e) {
        // Not JSON or parsing failed
      }
    }
  });
  
  await page.goto(eventUrl, { waitUntil: 'networkidle' });
  
  // Wait a bit for all API calls to complete
  await page.waitForTimeout(3000);
  
  await browser.close();
  return ticketData;
}

This approach is faster because you're not parsing HTML—you're grabbing the raw JSON that Viagogo itself uses to populate the page. The downside? The API endpoint structure might change, breaking your scraper until you update it.
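
To soften that failure mode, validate the intercepted payload so a schema change fails loudly instead of silently yielding empty data. A minimal sketch (the field names listings, price, and section are assumptions about the payload, not a documented schema):

// Hedged sketch: normalize the intercepted payload and throw on shape
// changes, so the scraper breaks visibly when the API changes.
function validateListings(json) {
  if (!Array.isArray(json.listings)) {
    throw new Error('Unexpected payload shape: "listings" array missing');
  }
  return json.listings.map(listing => ({
    price: listing.price ?? null,     // assumed field name
    section: listing.section ?? null  // assumed field name
  }));
}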

Dealing with Cloudflare Challenges

Even with stealth plugins, Cloudflare might still present challenges. Here's a pattern for handling them:

// Call this right after page.goto(...)
async function handleCloudflare(page) {
  try {
    // Check if we're on a Cloudflare challenge page
    const title = await page.title();
    
    if (title.includes('Just a moment') || title.includes('Cloudflare')) {
      console.log('Cloudflare challenge detected, waiting...');
      
      // Wait until the title no longer looks like a challenge page.
      // (Waiting on the <body> selector would be a no-op: it is always
      // attached, even on the challenge page itself.)
      await page.waitForFunction(
        () => !document.title.includes('Just a moment') &&
              !document.title.includes('Cloudflare'),
        null,
        { timeout: 30000 }
      );
      
      // The stealth plugin should handle this, but give it extra time
      await page.waitForTimeout(5000);
      
      // Confirm we're past the challenge
      const newTitle = await page.title();
      if (newTitle.includes('Just a moment') || newTitle.includes('Cloudflare')) {
        throw new Error('Failed to bypass Cloudflare');
      }
    }
  } catch (error) {
    console.error('Cloudflare handling failed:', error);
    throw error;
  }
}

Scaling with Proxy Rotation

For large-scale scraping, you'll need to rotate IP addresses. Here's how to integrate residential proxies with Playwright:

async function createBrowserWithProxy(proxyUrl) {
  // Assumes the "protocol://user:pass@host:port" format used below;
  // adjust the parsing if your provider formats URLs differently
  const [protocol, rest] = proxyUrl.split('://');
  const [auth, server] = rest.split('@');
  const [username, password] = auth.split(':');
  
  const browser = await chromium.launch({
    headless: true,
    proxy: {
      server: `${protocol}://${server}`,
      username: username,
      password: password
    }
  });
  
  return browser;
}

// Usage with a proxy list
const proxies = [
  'http://user1:pass1@proxy1.example.com:8080',
  'http://user2:pass2@proxy2.example.com:8080',
  // Add more proxies
];

async function scrapeWithRotation(urls) {
  const results = [];
  
  for (let i = 0; i < urls.length; i++) {
    const proxy = proxies[i % proxies.length];
    const browser = await createBrowserWithProxy(proxy);
    const page = await browser.newPage();
    
    try {
      // scrapePage is a placeholder for your page-level logic,
      // e.g. goto + handleCloudflare + extractEventDetails
      const data = await scrapePage(page, urls[i]);
      results.push(data);
    } catch (error) {
      console.error(`Failed to scrape ${urls[i]}:`, error);
    } finally {
      await browser.close();
    }
    
    // Rate limiting: wait between requests
    await new Promise(resolve => setTimeout(resolve, 3000));
  }
  
  return results;
}

Proxy tips for Viagogo:

  • Use residential or mobile proxies, not datacenter IPs (Viagogo blocks most datacenter ranges)
  • Rotate after every 5-10 requests to avoid pattern detection (a helper sketch follows this list)
  • Match your proxy location to the event location for consistent pricing data
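
The rotate-after-N tip is easy to encode as a small helper. A minimal sketch, assuming a fixed proxy list; requestsPerProxy = 8 is an arbitrary value inside the suggested 5-10 range:

// Hands out the same proxy for a fixed number of requests, then advances
// to the next one, wrapping around at the end of the list.
function makeProxyPool(proxies, requestsPerProxy = 8) {
  let index = 0;
  let used = 0;
  return {
    next() {
      if (used >= requestsPerProxy) {
        index = (index + 1) % proxies.length;
        used = 0;
      }
      used += 1;
      return proxies[index];
    }
  };
}

// Usage: const pool = makeProxyPool(proxies);
// then inside the scraping loop: const proxy = pool.next();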

Fingerprint Randomization

Beyond the stealth plugin, you can randomize additional browser properties to appear even more human-like:

async function createRandomizedBrowser() {
  const userAgents = [
    // NOTE: truncated for readability; use full, current UA strings in production
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36...',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36...',
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36...'
  ];
  
  const viewports = [
    { width: 1920, height: 1080 },
    { width: 1366, height: 768 },
    { width: 1536, height: 864 }
  ];
  
  const timezones = ['America/New_York', 'America/Los_Angeles', 'America/Chicago'];
  
  const randomUA = userAgents[Math.floor(Math.random() * userAgents.length)];
  const randomViewport = viewports[Math.floor(Math.random() * viewports.length)];
  const randomTZ = timezones[Math.floor(Math.random() * timezones.length)];
  
  const browser = await chromium.launch({ headless: true });
  const context = await browser.newContext({
    userAgent: randomUA,
    viewport: randomViewport,
    timezoneId: randomTZ,
    locale: 'en-US',
    permissions: []
  });
  
  return { browser, context };
}

Parsing the Data

Once you've successfully loaded the page, extracting the data requires understanding Viagogo's DOM structure:

async function extractEventDetails(page) {
  return await page.evaluate(() => {
    const data = {
      eventName: document.querySelector('[data-testid="event-title"]')?.textContent.trim(),
      venue: document.querySelector('[data-testid="venue-name"]')?.textContent.trim(),
      date: document.querySelector('[data-testid="event-date"]')?.textContent.trim(),
      tickets: []
    };
    
    // Extract all ticket listings
    const ticketCards = document.querySelectorAll('[data-testid="ticket-card"]');
    ticketCards.forEach(card => {
      const ticket = {
        section: card.querySelector('.section-name')?.textContent.trim(),
        row: card.querySelector('.row-info')?.textContent.trim(),
        quantity: parseInt(card.querySelector('.quantity')?.textContent.trim(), 10),
        price: card.querySelector('.price-value')?.textContent.trim(),
        pricePerTicket: card.querySelector('.per-ticket-price')?.textContent.trim(),
        restrictions: card.querySelector('.restrictions')?.textContent.trim()
      };
      
      data.tickets.push(ticket);
    });
    
    return data;
  });
}
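
Note that the price fields come back as display strings, not numbers. If you're feeding a price comparison tool, normalize them first. A minimal sketch (the example formats are assumptions about how prices render, and it assumes a period as the decimal separator):

// Strips currency symbols and thousands separators from a display price
// and returns a float, or null when no digits are present.
function parsePrice(text) {
  if (!text) return null;
  const match = text.replace(/,/g, '').match(/(\d+(?:\.\d+)?)/);
  return match ? parseFloat(match[1]) : null;
}

// parsePrice('From $1,234.56') -> 1234.56
// parsePrice('€89')            -> 89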

Handling Errors Gracefully

Real-world scraping requires robust error handling. Here's a production-ready wrapper:

async function scrapeWithRetry(url, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    let browser;
    try {
      browser = await chromium.launch({ headless: true });
      const page = await browser.newPage();
      
      await page.goto(url, { waitUntil: 'networkidle', timeout: 30000 });
      await handleCloudflare(page);
      
      const data = await extractEventDetails(page);
      return data;
      
    } catch (error) {
      console.error(`Attempt ${attempt} failed:`, error.message);
      
      if (attempt === maxRetries) {
        throw new Error(`Failed after ${maxRetries} attempts: ${error.message}`);
      }
      
      // Exponential backoff
      const waitTime = Math.pow(2, attempt) * 1000;
      await new Promise(resolve => setTimeout(resolve, waitTime));
      
    } finally {
      if (browser) {
        await browser.close();
      }
    }
  }
}

Legal and Ethical Considerations

Before deploying a Viagogo scraper, understand the legal landscape:

Terms of Service: Viagogo's ToS explicitly prohibits automated access. While web scraping publicly available data is generally legal in many jurisdictions, violating ToS could expose you to civil liability.

Rate Limiting: Always implement rate limiting. Aggressive scraping can impact Viagogo's servers and potentially constitute a denial-of-service attack, which is illegal.

Data Usage: How you use the scraped data matters. Price monitoring for personal use sits in a different legal category than reselling ticket data commercially.

Best Practices:

  • Scrape during off-peak hours
  • Implement exponential backoff on failures
  • Respect robots.txt (even though it likely disallows scraping)
  • Never scrape personal user data or account information
  • Cache results to minimize repeated requests (see the sketch just below)
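
The caching tip is worth making concrete. A minimal in-memory sketch, reusing scrapeWithRetry from the error-handling section; the 10-minute TTL is an arbitrary choice to tune against how fast prices move:

// Time-based cache so repeated lookups of the same event page hit memory
// instead of Viagogo's servers.
const cache = new Map();
const TTL_MS = 10 * 60 * 1000;

async function cachedScrape(url) {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.fetchedAt < TTL_MS) {
    return hit.data; // fresh enough: no network request at all
  }
  const data = await scrapeWithRetry(url); // defined earlier
  cache.set(url, { data, fetchedAt: Date.now() });
  return data;
}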

Alternative Approach: Browser Extension Method

If you need data occasionally rather than at scale, a browser extension can be simpler:

// content_script.js
function extractTickets() {
  const tickets = [];
  const cards = document.querySelectorAll('[data-testid="ticket-card"]');
  
  cards.forEach(card => {
    tickets.push({
      price: card.querySelector('.price')?.textContent,
      section: card.querySelector('.section')?.textContent
    });
  });
  
  // Send to background script
  chrome.runtime.sendMessage({ action: 'saveTickets', data: tickets });
}

// Run when page loads
if (window.location.hostname === 'www.viagogo.com') {
  extractTickets();
}

This approach runs in a real browser session with your cookies and avoids most bot detection, though it doesn't scale well.
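
For completeness, here's what the receiving end might look like (a minimal Manifest V3 sketch; the tickets_ storage key scheme is an arbitrary choice, not part of any extension convention):

// background.js (Manifest V3 service worker): receives tickets from the
// content script and persists them via chrome.storage.
chrome.runtime.onMessage.addListener((message) => {
  if (message.action === 'saveTickets') {
    chrome.storage.local.set({
      [`tickets_${Date.now()}`]: message.data
    });
  }
});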

Performance Optimization

For scraping hundreds of events, optimize with concurrency:

const pLimit = require('p-limit'); // note: p-limit v4+ is ESM-only; use v3 or earlier with require()

async function scrapeMultipleEvents(urls) {
  const limit = pLimit(3); // Maximum 3 concurrent browsers
  
  const promises = urls.map(url => 
    limit(async () => {
      const data = await scrapeWithRetry(url);
      return { url, data };
    })
  );
  
  return Promise.all(promises);
}
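
One caveat: Promise.all rejects as soon as any single event fails, and scrapeWithRetry throws once its retries are exhausted. If you'd rather keep partial results, a Promise.allSettled variant works:

// Variant that collects successes even when some URLs fail permanently.
async function scrapeMultipleEventsSettled(urls) {
  const limit = pLimit(3);
  const settled = await Promise.allSettled(
    urls.map(url =>
      limit(() => scrapeWithRetry(url).then(data => ({ url, data })))
    )
  );
  return settled
    .filter(result => result.status === 'fulfilled')
    .map(result => result.value);
}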

Wrapping Up

Scraping Viagogo in 2026 requires a combination of technical sophistication and respect for the platform's infrastructure. The most effective approach uses Playwright with stealth plugins, combined with proxy rotation and fingerprint randomization.

Key takeaways:

  • Simple HTTP requests rarely work due to JavaScript rendering and Cloudflare
  • Playwright + stealth plugins are currently the most reliable method
  • Intercepting API responses is faster than parsing HTML
  • Always implement rate limiting and error handling
  • Consider the legal implications before deploying at scale

For most use cases, I'd recommend starting with the Playwright stealth approach and only adding proxy rotation if you're scraping at high volume. The investment in understanding browser automation pays off not just for Viagogo, but for scraping any modern, JavaScript-heavy website.

Remember: with great scraping power comes great responsibility. Use these techniques ethically, respect server resources, and always consider whether there's an official API or data partnership option before resorting to scraping.