Kurva-Krome is a JavaScript framework that combines Playwright and Selenium capabilities with custom CDP connections to avoid anti-bot systems like Cloudflare, Akamai, and DataDome. Unlike traditional automation tools that get flagged immediately, this tool masquerades as a real browser by patching common detection vectors.
Ever tried automating a website only to get hit with that annoying "Checking your browser" screen that never ends? Or watched your Selenium script get stuck in an infinite CAPTCHA loop? You're not alone. The web automation landscape in 2025 feels like trying to sneak past a bouncer who somehow knows you're wearing a fake mustache.
Here's the thing: most automation tools leak their identity faster than a broken faucet. They practically scream "I'M A BOT!" through various fingerprinting techniques. That's where Kurva-Krome comes in – it's the sneaky alternative that actually works against modern anti-bot systems.
I've spent countless hours debugging why my scrapers kept getting blocked, and after diving deep into the CDP (Chrome DevTools Protocol) rabbit hole, I finally understand why tools like this exist. This guide will show you exactly how to set up and use Kurva-Krome, plus some tricks the documentation doesn't mention.
Why Traditional Automation Tools Get Detected (And Why Kurva-Krome Doesn't)
Before we dive into the code, let's talk about why your vanilla Selenium script is about as stealthy as an elephant in a china shop.
Anti-bot systems like Cloudflare maintain catalogs of devices, IP addresses, and behavioral patterns associated with malicious bot networks. They check for telltale signs like:
- The navigator.webdriver property: set to true in automated browsers
- Missing browser plugins: real browsers have plugins; headless ones usually don't
- CDP detection: CDP is the underlying protocol used by Puppeteer, Playwright, and Selenium to instrument Chromium-based browsers
- Canvas fingerprinting: automated browsers render graphics slightly differently
- Mouse movement patterns: bots move in straight lines; humans don't
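To make the first two checks concrete, here is a toy sketch of the kind of scoring a detection script runs against the page's navigator object. The signals are real, but the weights and threshold are illustrative inventions, not any vendor's actual logic:

```javascript
// Toy bot-detection heuristic: scores a navigator-like object.
// Real systems combine hundreds of signals; weights here are illustrative only.
function looksAutomated(nav) {
  let score = 0;
  if (nav.webdriver === true) score += 3;                    // classic automation tell
  if (!nav.plugins || nav.plugins.length === 0) score += 1;  // headless browsers ship no plugins
  if (nav.languages && nav.languages.length === 0) score += 1; // empty languages list is suspicious
  return score >= 3; // arbitrary threshold for the demo
}

// A stock automated-browser profile trips the check...
console.log(looksAutomated({ webdriver: true, plugins: [], languages: [] }));        // true
// ...while a typical real-browser profile passes.
console.log(looksAutomated({ webdriver: false, plugins: [{}, {}], languages: ['en-US'] })); // false
```

Each individual signal is weak on its own; it's the combination that makes modern anti-bot scoring hard to dodge.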
Kurva-Krome addresses these issues by:
- Using custom CDP connections that avoid the Runtime.enable command (a dead giveaway)
- Spoofing realistic browser fingerprints
- Supporting human-like interaction patterns
- Working in headful mode (not headless) to appear more legitimate
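The general idea behind fingerprint spoofing is to redefine the leaky properties before any detection script reads them. Here's a generic sketch of that technique as a page-level patch (not Kurva-Krome's actual internals, which also avoid the CDP side effects that would expose the patch):

```javascript
// Generic stealth patch: hide the webdriver flag on a navigator-like object.
// Illustrative of the technique, not Kurva-Krome's exact implementation.
function hideWebdriver(nav) {
  Object.defineProperty(nav, 'webdriver', {
    get: () => undefined,  // detection scripts now read undefined, not true
    configurable: true,
  });
  return nav;
}

const fakeNavigator = { webdriver: true };
hideWebdriver(fakeNavigator);
console.log(fakeNavigator.webdriver); // undefined
```

The catch is that naive patches like this are themselves detectable (e.g. via Function.prototype.toString on the getter), which is why tools that do it at the protocol level, before page scripts run, fare better.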
Step 1: Set Up Your Environment Without Shooting Yourself in the Foot
First things first – you need Node.js installed. If you don't have it, grab it from nodejs.org. Then clone the repository:
git clone https://github.com/bosniankicks/Kurva-Krome.git
cd Kurva-Krome
npm install
Critical configuration step that everyone misses: You MUST update the Chrome binary location around line 433 in the main file. The default is set for macOS, so if you're on Windows or Linux, you're going to have a bad time.
For Windows users:
const chromePath = 'C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe';
For Linux users:
const chromePath = '/usr/bin/google-chrome-stable';
Pro tip: Use Chrome for Testing instead of your regular Chrome installation. It's specifically designed for automation and won't interfere with your daily browsing. Download it from the Chrome for Testing page.
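Rather than hand-editing line 433 on every machine, you can pick the binary from the platform at startup. The paths below are common install defaults, not guaranteed; verify them against your own system:

```javascript
// Map Node's process.platform values to common default Chrome install paths.
// These are typical defaults, not guaranteed — check your own install.
function defaultChromePath(platform) {
  switch (platform) {
    case 'win32':
      return 'C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe';
    case 'darwin':
      return '/Applications/Google Chrome.app/Contents/MacOS/Google Chrome';
    default: // linux and friends
      return '/usr/bin/google-chrome-stable';
  }
}

const chromePath = defaultChromePath(process.platform);
console.log(chromePath);
```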
Step 2: Initialize the Browser Like You Know What You're Doing
Here's where things get interesting. Unlike Puppeteer or Selenium, Kurva-Krome uses a custom initialization process:
const { Kurvaaa } = require('./path-to-kurva');
const { By } = require('selenium-webdriver');
async function startStealthBrowser() {
  const kurva = new Kurvaaa();

  // This isn't your grandmother's browser launch
  const browser = await kurva.browser();

  // IMPORTANT: Always add a delay after initialization
  // Anti-bots check for immediate navigation after launch
  await browser.stop(2000 + Math.random() * 3000);

  return { kurva, browser };
}
The browser.stop() method isn't just a sleep timer – it's mimicking human behavior. Real users don't instantly navigate to pages the millisecond their browser opens.
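If you ever need the same behavior with a different driver, a jittered-delay helper is trivial to write. Note that humanPause and randomDelayMs are hypothetical helper names for illustration, not part of Kurva-Krome's API:

```javascript
// Hypothetical helpers mirroring what browser.stop() does conceptually:
// a random duration in [minMs, maxMs) wrapped in a promise-based sleep.
function randomDelayMs(minMs, maxMs) {
  return minMs + Math.random() * (maxMs - minMs);
}

function humanPause(minMs, maxMs) {
  return new Promise(resolve => setTimeout(resolve, randomDelayMs(minMs, maxMs)));
}

// e.g. await humanPause(2000, 5000) after launch, before the first navigation
```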
Step 3: Navigate and Interact Without Getting Flagged
This is where Kurva-Krome shines. Instead of Selenium's robotic interactions, you get human-like control:
async function stealthNavigation(browser) {
  // Navigate to a protected site
  await browser.get("https://www.nike.com/login");

  // Random delay between 1-3 seconds (humans aren't machines)
  await browser.stop(1000 + Math.random() * 2000);

  // Find elements using multiple strategies
  const emailInput = await browser.findElement(By.XPATH, '//input[@type="email"]');

  // Type like a human (not all at once)
  const email = "test@example.com";
  for (let char of email) {
    await emailInput.sendKeys(char);
    await browser.stop(50 + Math.random() * 150); // Variable typing speed
  }

  // Use coordinate clicks for stubborn elements
  await browser.coordClick(350, 425); // Login button coordinates

  // Pro move: Hold clicks for realistic interaction
  await browser.holdclick(350, 425, 150); // Hold for 150ms
}
The secret sauce: Notice how we're not just sending all the text at once? That's because anti-bots analyze user behavioral patterns like typing speed, mouse movement, and clicking patterns. Bots type instantly; humans don't.
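You can push the realism further by varying the per-character delay and throwing in occasional longer "thinking" pauses mid-word. A sketch, with arbitrary probabilities and timings:

```javascript
// Generate a per-character delay schedule for a string.
// Most keystrokes land in 50-200ms; roughly 10% get a longer hesitation,
// the way a human pauses mid-word. All numbers are illustrative.
function typingSchedule(text) {
  return Array.from(text).map(char => ({
    char,
    delayMs: Math.random() < 0.1
      ? 300 + Math.random() * 400   // occasional hesitation
      : 50 + Math.random() * 150,   // normal typing cadence
  }));
}

const schedule = typingSchedule('test@example.com');
console.log(schedule.length); // one entry per character
```

You'd then loop over the schedule, calling sendKeys for each entry and sleeping for its delayMs, instead of using a single fixed delay range.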
Step 4: Extract Data and Handle Cookies Like a Pro
Once you're past the anti-bot defenses, you need to extract your data efficiently:
async function extractDataSafely(browser) {
  // Grab text content by class name
  const username = await browser.grabtxt('username-display', 'class');
  console.log('Username:', username);

  // Execute custom JavaScript (the nuclear option)
  await browser.insert_js(`
    // Collect all product prices
    const prices = Array.from(document.querySelectorAll('.price'))
      .map(el => el.innerText);
    window.__prices = prices;
  `);

  // Screenshot for debugging (saves locally)
  await browser.picture('debug-screenshot.png');

  // Save cookies for session persistence
  await browser.cookies('session-cookies.json');

  // Pro tip: Reuse cookies to skip login next time
  // This is huge for avoiding detection on subsequent runs
}
The cookie saving feature is underrated. By persisting cookies, you can maintain sessions across runs, which looks way more natural than logging in every single time.
Step 5: Scale Your Operation (Without Getting Rate Limited)
Here's where most people mess up – they try to run 100 instances simultaneously and wonder why they get banned. Smart scaling looks like this:
class SmartScraper {
  constructor(maxConcurrency = 3) {
    this.maxConcurrency = maxConcurrency;
    this.queue = [];
    this.running = 0;
  }

  async addTask(url) {
    return new Promise((resolve, reject) => {
      this.queue.push({ url, resolve, reject });
      this.processQueue();
    });
  }

  async processQueue() {
    if (this.running >= this.maxConcurrency || this.queue.length === 0) {
      return;
    }
    this.running++;
    const task = this.queue.shift();
    try {
      const result = await this.scrapeWithKurva(task.url);
      task.resolve(result);
    } catch (error) {
      task.reject(error);
    } finally {
      // Random delay between requests (1-5 seconds) BEFORE freeing the slot,
      // so a waiting task can't start instantly and defeat the throttle
      await new Promise(r => setTimeout(r, 1000 + Math.random() * 4000));
      this.running--;
      this.processQueue();
    }
  }

  async scrapeWithKurva(url) {
    const kurva = new Kurvaaa();
    const browser = await kurva.browser();
    try {
      await browser.get(url);
      // Your scraping logic here
      const data = await browser.grabtxt('product-info', 'class');
      return data;
    } finally {
      await kurva.end();
    }
  }
}
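Stripped of the scraping specifics, the pattern above is just a promise pool. Here's a generic version you can test in isolation (runAll is an illustrative name, not part of any library):

```javascript
// Generic promise pool: run async task thunks with at most `limit` in flight.
// Same idea as SmartScraper's queue, minus the browser plumbing and delays.
async function runAll(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;

  // Each worker pulls the next unstarted task until none remain.
  // Safe without locks: JS is single-threaded, and there's no await
  // between reading and incrementing `next`.
  async function worker() {
    while (next < tasks.length) {
      const i = next++;
      results[i] = await tasks[i]();
    }
  }

  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}

// Usage: fake "scrapes", at most two running concurrently
// runAll([() => Promise.resolve('a'), () => Promise.resolve('b')], 2).then(console.log);
```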
The Smartest Option: When to Ditch Browser Automation Entirely
Sometimes, the smartest move is to not use a browser at all. If you can reverse-engineer the API calls, you can skip the whole anti-bot dance:
// Instead of browser automation, intercept and replay API calls
const https = require('https');

async function bypassWithoutBrowser(targetUrl) {
  // Step 1: Use Kurva-Krome once to get valid cookies
  const kurva = new Kurvaaa();
  const browser = await kurva.browser();
  await browser.get(targetUrl);
  await browser.cookies('valid-cookies.json');
  await kurva.end();

  // Step 2: Use those cookies with regular HTTP requests
  const cookies = require('./valid-cookies.json');
  const cookieString = cookies.map(c => `${c.name}=${c.value}`).join('; ');

  // Step 3: Mimic a browser-like cipher order (defeats only basic TLS fingerprinting)
  const ciphers = [
    'TLS_AES_128_GCM_SHA256',
    'TLS_AES_256_GCM_SHA384',
    'TLS_CHACHA20_POLY1305_SHA256'
  ].join(':');

  const options = {
    headers: {
      'Cookie': cookieString,
      'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
      'Accept': 'application/json',
      'Accept-Language': 'en-US,en;q=0.9',
      'Cache-Control': 'no-cache'
    },
    ciphers: ciphers
  };

  // Now you can make requests without a browser —
  // often dramatically faster, with far fewer resources
}
This hybrid approach is killer: use Kurva-Krome's real browser once to build up a trust score and collect valid cookies, then hand the session off to lightning-fast HTTP requests for the actual scraping.
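A small helper keeps the options-building tidy. buildRequestOptions is a hypothetical name, and be aware the cipher list only nudges the handshake; full JA3-style TLS fingerprints also hash extensions and curves, so this won't fool sophisticated TLS fingerprinting:

```javascript
// Build Node https request options from a saved cookie array.
// Hypothetical helper — the cipher list covers only the TLS 1.3 suites;
// JA3-style fingerprints also look at extensions and curve order.
function buildRequestOptions(cookies) {
  return {
    headers: {
      'Cookie': cookies.map(c => `${c.name}=${c.value}`).join('; '),
      'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
      'Accept': 'application/json',
      'Accept-Language': 'en-US,en;q=0.9',
    },
    ciphers: [
      'TLS_AES_128_GCM_SHA256',
      'TLS_AES_256_GCM_SHA384',
      'TLS_CHACHA20_POLY1305_SHA256',
    ].join(':'),
  };
}

const opts = buildRequestOptions([{ name: 'sid', value: 'abc123' }]);
console.log(opts.headers.Cookie); // sid=abc123
```

You'd pass the result as the options argument to https.request, alongside the target hostname and path.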
Common Pitfalls to Avoid
- Don't run in headless mode - It's tempting, but headless browsers are easier to detect
- Randomize everything - Delays, mouse movements, typing speeds. Patterns = detection
- Respect rate limits - Just because you CAN scrape fast doesn't mean you SHOULD
- Rotate user agents carefully - Use real, recent user agent strings, not ancient ones
- Monitor your success rate - If you suddenly start getting blocked, stop and reassess
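That last point is easy to automate: track outcomes in a rolling window and halt when the block rate spikes. A minimal sketch, with arbitrary window size and threshold:

```javascript
// Rolling success-rate monitor: record outcomes, flag when the recent
// failure rate crosses a threshold. Window size and threshold are arbitrary.
class SuccessMonitor {
  constructor(windowSize = 20, maxFailureRate = 0.3) {
    this.windowSize = windowSize;
    this.maxFailureRate = maxFailureRate;
    this.outcomes = [];
  }

  record(success) {
    this.outcomes.push(success);
    if (this.outcomes.length > this.windowSize) this.outcomes.shift();
  }

  shouldPause() {
    if (this.outcomes.length < 5) return false; // not enough data yet
    const failures = this.outcomes.filter(ok => !ok).length;
    return failures / this.outcomes.length > this.maxFailureRate;
  }
}
```

Call record(true/false) after each scrape and check shouldPause() before launching the next one; backing off early looks far more natural than plowing through a wall of blocks.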
Final Thoughts
Kurva-Krome fills a crucial gap in the Node.js ecosystem for undetected automation. While Python has tools like undetected-chromedriver and nodriver, JavaScript developers have been left in the cold – until now.
The cat-and-mouse game between scrapers and anti-bot systems will never end. Bots today are more realistic out of the box, even the cheap ones, and detection methods keep evolving. The key is staying adaptable and having multiple tools in your arsenal.
Remember: with great scraping power comes great responsibility. Don't be that person who crashes someone's server with aggressive scraping. Be respectful, add delays, and consider reaching out to website owners for official API access when possible.