Taking screenshots is one of those tasks that sounds simple until you actually need to do it at scale. Whether you're building automated tests, monitoring competitor websites, or creating visual documentation, you'll quickly run into questions: How do I capture the full page? What about that hidden element? Why is my screenshot blank?

I've spent the last few years working with Puppeteer for various screenshot projects—from simple test snapshots to complex monitoring systems that capture thousands of images daily. The good news? Once you understand the patterns, Puppeteer makes screenshot automation surprisingly straightforward. The better news? There are some lesser-known tricks that can save you hours of frustration.

In this guide, I'll walk you through everything from basic viewport captures to advanced techniques like stitching together screenshots of extremely tall pages and optimizing for performance. By the end, you'll know exactly which approach to use for your specific needs.

Setting Up Puppeteer

Before we dive into screenshots, let's get Puppeteer installed. If you've already got it set up, feel free to skip ahead.

First, make sure you have Node.js installed (version 18 or higher, which current Puppeteer releases require). Then create a new project directory and install Puppeteer:

mkdir puppeteer-screenshots
cd puppeteer-screenshots
npm init -y
npm install puppeteer

This downloads Puppeteer along with a bundled version of Chromium. The whole package is around 300MB, so it'll take a minute. If you already have Chrome installed and want to use that instead, you can install puppeteer-core and point it to your Chrome binary—but for most use cases, the bundled version is easier.
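
If you do go the puppeteer-core route, launching looks like this. This is a minimal sketch; the executablePath below assumes a typical Linux install, so adjust it for your machine:

const puppeteer = require('puppeteer-core');

(async () => {
  const browser = await puppeteer.launch({
    // Point this at your own Chrome binary; this path is just an example
    executablePath: '/usr/bin/google-chrome'
  });
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await browser.close();
})();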

Here's a basic script to verify everything works:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com');
  console.log('Puppeteer is working!');
  await browser.close();
})();

Save that as test.js and run node test.js. If you see "Puppeteer is working!" you're good to go.

Taking a Basic Viewport Screenshot

Let's start simple. The most straightforward screenshot captures whatever is visible in the browser viewport—essentially what you'd see if you opened the page in Chrome:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  
  await page.goto('https://news.ycombinator.com', {
    waitUntil: 'networkidle2'
  });
  
  await page.screenshot({ path: 'hackernews.png' });
  
  await browser.close();
})();

Run this and you'll get a PNG file of Hacker News. Simple enough. But notice that waitUntil: 'networkidle2' option? That's important—it tells Puppeteer to wait until there are no more than 2 network connections active for 500ms. Without it, you might capture the page mid-load with missing images or half-rendered content.

By default, Puppeteer's viewport is 800x600 pixels. That's... not ideal for most modern websites. Let's set a proper viewport size:

await page.setViewport({ 
  width: 1920, 
  height: 1080 
});

Now your screenshots will look like they're taken on a standard desktop monitor. If you're testing mobile views, you might use { width: 375, height: 667 } for an iPhone-sized viewport.
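
If you need more faithful mobile emulation than raw dimensions (user agent, device pixel ratio, touch events), recent Puppeteer versions ship built-in device presets via KnownDevices. A quick sketch:

const puppeteer = require('puppeteer');
const { KnownDevices } = puppeteer;

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Applies the viewport, user agent, and touch settings in one call
  await page.emulate(KnownDevices['iPhone X']);

  await page.goto('https://example.com', { waitUntil: 'networkidle2' });
  await page.screenshot({ path: 'mobile.png' });

  await browser.close();
})();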

Capturing Full-Page Screenshots

Here's where things get interesting. Most web pages are longer than the viewport, and you'll often want to capture the entire page—including everything below the fold. Puppeteer makes this easy with the fullPage option:

await page.screenshot({ 
  path: 'fullpage.png',
  fullPage: true 
});

That's it. Puppeteer handles scrolling down the page and stitching everything together automatically. This works great for pages up to around 6,000-8,000 pixels tall.

But—and here's something most tutorials don't mention—if you're dealing with really long pages (think 10,000+ pixels), you might run into blank sections or the process failing altogether. I learned this the hard way when building a documentation screenshot tool. The standard fullPage option just couldn't handle extremely tall pages reliably.

The solution? Take multiple viewport-sized screenshots and stitch them together using the Sharp library. I'll show you that technique later in the advanced section.

Screenshotting Specific Elements

Sometimes you don't need the whole page—you just want to capture a specific button, chart, or section. Puppeteer lets you screenshot individual elements:

const element = await page.$('#my-chart');
await element.screenshot({ path: 'chart.png' });

The page.$() method uses CSS selectors to find elements, just like document.querySelector() in browser JavaScript. Here's a practical example capturing a specific product card on an e-commerce site:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  
  await page.goto('https://scrapingcourse.com/ecommerce');
  
  // Wait for the element to be present
  await page.waitForSelector('.product-card');
  
  // Get the element and screenshot it
  const productCard = await page.$('.product-card');
  await productCard.screenshot({ path: 'product.png' });
  
  await browser.close();
})();

That waitForSelector() call is crucial—it ensures the element is fully loaded before trying to screenshot it. Without it, you might try to capture an element that doesn't exist yet, which will throw an error.

Element screenshots automatically scroll the element into view if it's off-screen, which is a nice touch. They also crop tightly to the element's boundaries, giving you exactly what you need without extra whitespace.

Understanding Screenshot Formats and Quality

Puppeteer supports three image formats: PNG, JPEG, and WebP. Each has different trade-offs:

PNG (default):

await page.screenshot({
  path: 'screenshot.png',
  type: 'png'
});

PNG is lossless, meaning no quality degradation. File sizes are large though—a full-page screenshot can easily be 5-10MB. Use PNG when quality matters most, like for detailed technical documentation or when you need to compare screenshots pixel-by-pixel.

JPEG:

await page.screenshot({
  path: 'screenshot.jpg',
  type: 'jpeg',
  quality: 80  // 0-100; not applicable to PNG
});

JPEG compresses aggressively and produces much smaller files. A quality setting of 80-85 usually strikes a good balance between file size and visual quality. One company I worked with reduced their screenshot storage costs by 80% just by switching from PNG to JPEG for non-critical screenshots.

The downside? JPEG doesn't support transparency. If you set omitBackground: true to capture a screenshot without the page background, JPEG will render it as black.
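
If you need transparency, stick with PNG or WebP. Here's what that looks like (a small sketch):

// Hides the default white page background; only formats with an
// alpha channel (PNG, WebP) can actually preserve the transparency
await page.screenshot({
  path: 'transparent.png',
  omitBackground: true
});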

WebP:

await page.screenshot({
  path: 'screenshot.webp',
  type: 'webp',
  quality: 75
});

WebP is the new kid on the block—it offers better compression than JPEG (typically 25-35% smaller files) while maintaining similar quality. It also supports transparency like PNG. The catch? While every modern browser displays WebP fine, older browsers and some image-processing tools still don't handle it.

Support for WebP quality settings was added to Puppeteer relatively recently, so make sure you're running a current version if you want to use it.
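
You can check which Puppeteer version a project has installed with:

npm ls puppeteer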

Using the Clip Region for Precise Screenshots

What if you want to capture a specific rectangular area without targeting a particular element? That's where the clip option comes in:

await page.screenshot({
  path: 'clipped.png',
  clip: {
    x: 100,    // x-coordinate from top-left
    y: 200,    // y-coordinate from top-left
    width: 800,
    height: 600
  }
});

This captures an 800x600 rectangle starting 100 pixels from the left edge and 200 pixels from the top. It's particularly useful when you need to capture a specific area across multiple pages with consistent positioning—like always grabbing the header section or a fixed sidebar.

One trick I've used: combine clip with page.evaluate() to dynamically get an element's position, then screenshot that exact area. This gives you more control than element screenshots in some situations:

const boundingBox = await page.evaluate(() => {
  const element = document.querySelector('.my-element');
  const { x, y, width, height } = element.getBoundingClientRect();
  // getBoundingClientRect() is viewport-relative, while clip
  // coordinates are page-relative, so add the scroll offsets
  return { x: x + window.scrollX, y: y + window.scrollY, width, height };
});

await page.screenshot({
  path: 'dynamic-clip.png',
  clip: boundingBox
});

Handling Dynamic Content and Waiting Strategies

The trickiest part of screenshot automation isn't taking the screenshot—it's knowing when to take it. Modern websites load content asynchronously, and if you screenshot too early, you'll capture half-loaded garbage.

Here are the main waiting strategies:

Network Idle:

await page.goto(url, {
  waitUntil: 'networkidle0'  // or 'networkidle2'
});

networkidle0 waits until there are no network connections for 500ms. networkidle2 allows up to 2 connections. Use networkidle2 for most sites—networkidle0 can hang indefinitely on sites with persistent connections like websockets.

Wait for Selector:

await page.waitForSelector('#content-loaded');

This waits for a specific element to appear in the DOM. It's more reliable than network idle for single-page applications that load content via AJAX.
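
waitForSelector() also takes options. For example, you can require the element to be visible (not just present in the DOM) and set a custom timeout; a small sketch:

// Wait until the element is actually rendered and visible,
// giving up after 10 seconds instead of the default 30
await page.waitForSelector('#content-loaded', {
  visible: true,
  timeout: 10000
});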

Wait for Function:

await page.waitForFunction(() => {
  return document.querySelectorAll('.product-card').length >= 10;
});

This waits for a custom condition to be true. Really powerful for complex scenarios like "wait until at least 10 products have loaded."

Fixed Timeout:

await new Promise((resolve) => setTimeout(resolve, 2000));  // Wait 2 seconds

Sometimes you just need to wait a fixed amount of time. (Puppeteer used to ship page.waitForTimeout() for this, but it was deprecated and removed in v22, so a Promise-wrapped setTimeout is the modern equivalent.) It's crude but effective for animations or delayed content. Just don't rely on it as your primary strategy—it's slow and fragile.

For best results, combine strategies:

await page.goto(url, { waitUntil: 'networkidle2' });
await page.waitForSelector('.main-content');
await new Promise((resolve) => setTimeout(resolve, 500));  // Let animations finish
await page.screenshot({ path: 'screenshot.png' });

Taking Screenshots in Dark Mode

Here's a technique that's surprisingly hard to find documented: capturing screenshots in both light and dark mode. Many sites now support dark mode via the prefers-color-scheme media query, and you can control this in Puppeteer:

const puppeteer = require('puppeteer');

async function captureInBothModes(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  
  // Light mode
  await page.emulateMediaFeatures([{
    name: 'prefers-color-scheme',
    value: 'light'
  }]);
  await page.goto(url, { waitUntil: 'networkidle2' });
  await page.screenshot({ path: 'light-mode.png' });
  
  // Dark mode
  await page.emulateMediaFeatures([{
    name: 'prefers-color-scheme',
    value: 'dark'
  }]);
  await page.reload({ waitUntil: 'networkidle2' });
  await page.screenshot({ path: 'dark-mode.png' });
  
  await browser.close();
}

captureInBothModes('https://example.com');

The key is emulateMediaFeatures(), which tricks the page into thinking the user prefers a specific color scheme. Note that we reload the page after changing the preference—some sites only apply dark mode on initial load.

Batch Screenshots: Processing Multiple URLs

Need to screenshot dozens or hundreds of pages? Here's an efficient pattern that reuses the browser instance and processes URLs in batches:

const puppeteer = require('puppeteer');

async function batchScreenshot(urls, options = {}) {
  const {
    concurrency = 5,
    outputDir = './screenshots',
    fullPage = true
  } = options;
  
  const browser = await puppeteer.launch({
    headless: true,
    args: ['--no-sandbox', '--disable-setuid-sandbox']
  });
  
  // Process in batches
  for (let i = 0; i < urls.length; i += concurrency) {
    const batch = urls.slice(i, i + concurrency);
    
    await Promise.all(
      batch.map(async (url, index) => {
        const page = await browser.newPage();
        
        try {
          await page.goto(url, {
            waitUntil: 'networkidle2',
            timeout: 30000
          });
          
          const filename = `screenshot-${i + index}.png`;
          await page.screenshot({
            path: `${outputDir}/${filename}`,
            fullPage
          });
          
          console.log(`✓ Captured ${url}`);
        } catch (error) {
          console.error(`✗ Failed ${url}:`, error.message);
        } finally {
          await page.close();
        }
      })
    );
  }
  
  await browser.close();
}

// Usage
const urls = [
  'https://example.com',
  'https://github.com',
  'https://stackoverflow.com'
  // ... more URLs
];

batchScreenshot(urls, { concurrency: 3 });

This script processes 3 URLs at a time (adjustable via concurrency). Processing URLs in parallel is much faster than doing them sequentially, but be careful not to overwhelm your system or the target servers. I've found 3-5 concurrent pages to be a good sweet spot.

The try/catch/finally block is important—it ensures pages are always closed even if a screenshot fails, preventing memory leaks.

Advanced: Stitching Screenshots for Very Tall Pages

Remember when I mentioned that fullPage: true can fail on extremely tall pages? Here's the workaround using the Sharp library to stitch together multiple viewport screenshots.

First, install Sharp:

npm install sharp

Now here's the technique:

const puppeteer = require('puppeteer');
const sharp = require('sharp');

async function captureVeryTallPage(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  
  await page.setViewport({ width: 1920, height: 1080 });
  await page.goto(url, { waitUntil: 'networkidle2' });
  
  // Get the full page height
  const bodyHeight = await page.evaluate(() => {
    return document.documentElement.scrollHeight;
  });
  
  console.log(`Page height: ${bodyHeight}px`);
  
  const viewportHeight = 1080;
  const screenshots = [];
  
  // Take screenshots in chunks
  for (let y = 0; y < bodyHeight; y += viewportHeight) {
    // Scroll to position
    await page.evaluate((scrollY) => {
      window.scrollTo(0, scrollY);
    }, y);
    
    // Give lazy-loaded content a moment to render
    // (page.waitForTimeout() was removed in Puppeteer v22)
    await new Promise((resolve) => setTimeout(resolve, 500));
    
    // Take a screenshot of this chunk as an in-memory buffer.
    // In current Puppeteer the clip coordinates are page-relative,
    // so we clip at the scroll offset rather than at 0
    const buffer = await page.screenshot({
      encoding: 'binary',
      clip: {
        x: 0,
        y: y,
        width: 1920,
        height: Math.min(viewportHeight, bodyHeight - y)
      }
    });
    
    screenshots.push(buffer);
    console.log(`Captured segment at ${y}px`);
  }
  
  await browser.close();
  
  // Stitch screenshots together
  console.log('Stitching images...');
  
  const compositeImage = sharp({
    create: {
      width: 1920,
      height: bodyHeight,
      channels: 4,
      background: { r: 255, g: 255, b: 255, alpha: 1 }
    }
  });
  
  const composite = screenshots.map((buffer, index) => ({
    input: buffer,
    top: index * viewportHeight,
    left: 0
  }));
  
  await compositeImage
    .composite(composite)
    .toFile('stitched-fullpage.png');
  
  console.log('Done!');
}

captureVeryTallPage('https://very-long-page.com');

This script:

  1. Gets the total page height
  2. Scrolls down in viewport-sized chunks
  3. Takes a screenshot of each chunk
  4. Uses Sharp to composite them into a single image

The key trick is keeping each chunk as an in-memory buffer (which page.screenshot() returns when you omit path) rather than writing every segment to disk—it's much faster. We only write the final stitched image to disk.

This technique handles pages of virtually any length. I've used it successfully on pages over 20,000 pixels tall.

Performance Optimization

Taking screenshots can be slow, especially at scale. Here are some optimization techniques I've learned:

1. Use Buffers Instead of Files

If you're going to process the screenshot further (resize, crop, upload), work with buffers instead of writing to disk:

const screenshotBuffer = await page.screenshot({ encoding: 'binary' });
// Now process the buffer...
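
For example, here's a sketch that turns that buffer into a thumbnail with Sharp (the same library used for stitching earlier) without ever writing the full-size image to disk:

const sharp = require('sharp');

const screenshotBuffer = await page.screenshot({ encoding: 'binary' });

// Resize to a 400px-wide JPEG thumbnail entirely in memory
const thumbnail = await sharp(screenshotBuffer)
  .resize({ width: 400 })
  .jpeg({ quality: 80 })
  .toBuffer();
// ...upload or cache the thumbnail from here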

2. Block Unnecessary Resources

If the third-party content on a page doesn't matter for your screenshot, you can dramatically speed things up by blocking ads, trackers, and other external resources:

await page.setRequestInterception(true);

page.on('request', (request) => {
  const blockedDomains = [
    'google-analytics.com',
    'doubleclick.net',
    'facebook.net',
    'googlesyndication.com'
  ];
  
  const url = request.url();
  const shouldBlock = blockedDomains.some(domain => url.includes(domain));
  
  if (shouldBlock) {
    request.abort();
  } else {
    request.continue();
  }
});

This can cut page load times by 50% or more on ad-heavy sites.
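
To see what blocking actually buys you on a given site, time the navigation yourself with and without interception enabled. A rough sketch:

// Rough measurement: run once with interception on, once with it off
const start = Date.now();
await page.goto(url, { waitUntil: 'networkidle2' });
console.log(`Page loaded in ${Date.now() - start}ms`);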

3. Optimize For Speed

Puppeteer has a relatively new optimizeForSpeed option that uses faster image encoding:

await page.screenshot({
  path: 'fast.png',
  optimizeForSpeed: true
});

This restricts compression to a faster algorithm. The files might be slightly larger, but screenshot generation is noticeably faster—great when you need to capture hundreds of screenshots quickly.

4. Reuse Browser Instances

Launching a new browser for every screenshot is expensive. Reuse instances when possible:

// URLs contain characters that aren't valid in file paths,
// so derive a safe filename first
const toFilename = (url) => url.replace(/[^a-z0-9]/gi, '_') + '.png';

// Bad - launches a browser for each screenshot
async function slowWay(urls) {
  for (const url of urls) {
    const browser = await puppeteer.launch();
    const page = await browser.newPage();
    await page.goto(url);
    await page.screenshot({ path: toFilename(url) });
    await browser.close();
  }
}

// Good - reuses one browser for all screenshots
async function fastWay(urls) {
  const browser = await puppeteer.launch();
  
  for (const url of urls) {
    const page = await browser.newPage();
    await page.goto(url);
    await page.screenshot({ path: toFilename(url) });
    await page.close();
  }
  
  await browser.close();
}

The second approach is easily 5-10x faster.

5. Adjust Viewport For Your Needs

Smaller viewports = smaller screenshots = faster processing. If you don't need full 1920x1080, use a smaller size:

await page.setViewport({ width: 1280, height: 720 });

Troubleshooting Common Issues

Blank or Partial Screenshots

This usually means the page isn't fully loaded. Try:

  • Using waitUntil: 'networkidle2' instead of load
  • Adding await page.waitForSelector('.main-content')
  • Inserting a small timeout: await new Promise((r) => setTimeout(r, 1000))

TimeoutError: Navigation timeout

The page is taking too long to load. Increase the timeout:

await page.goto(url, { 
  waitUntil: 'networkidle2',
  timeout: 60000  // 60 seconds
});

Screenshots Missing Images

Images might be lazy-loaded. Scroll down the page before screenshotting:

await page.evaluate(async () => {
  await new Promise((resolve) => {
    let totalHeight = 0;
    const distance = 100;
    const timer = setInterval(() => {
      window.scrollBy(0, distance);
      totalHeight += distance;
      
      if (totalHeight >= document.body.scrollHeight) {
        clearInterval(timer);
        resolve();
      }
    }, 100);
  });
});

// Scroll back to top
await page.evaluate(() => window.scrollTo(0, 0));
await page.screenshot({ path: 'screenshot.png' });

Out of Memory Errors

You're probably taking too many screenshots concurrently or not closing pages. Make sure to:

  • Limit concurrency to 3-5 pages
  • Always close pages: await page.close()
  • Close the browser when done: await browser.close() (a minimal cleanup sketch follows below)
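
Putting the whole session inside try/finally guarantees that cleanup even when something in the middle throws. A minimal sketch:

const browser = await puppeteer.launch();

try {
  const page = await browser.newPage();
  await page.goto('https://example.com');
  await page.screenshot({ path: 'screenshot.png' });
  await page.close();
} finally {
  // Runs even if navigation or the screenshot fails,
  // so the Chromium process never leaks
  await browser.close();
}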

Wrapping Up

Taking screenshots with Puppeteer starts simple but has depth when you need it. You've now got the tools to handle everything from basic captures to complex scenarios like batch processing, dark mode, and extremely tall pages.

The techniques that have saved me the most time:

  • Element screenshots for focused captures without manual cropping
  • Batch processing with controlled concurrency for speed without overwhelming resources
  • The Sharp stitching technique for pages that break the standard fullPage option
  • Network interception to block unnecessary resources and speed up captures

Puppeteer's screenshot capabilities are powerful, but they're just one part of what it can do. If you're finding yourself writing lots of custom screenshot logic, you might also want to explore dedicated screenshot APIs or services that handle the infrastructure side of things—though there's something satisfying about rolling your own solution that works exactly how you need it.

What are you building with Puppeteer screenshots? Whether it's automated testing, competitor monitoring, or something else entirely, you now have the foundation to make it happen.