Anti-bot systems don't just check your headers anymore. They watch how your cursor moves. Straight-line jumps from element to element scream "bot" to every major detection system running today.
HumanCursor fixes that. It's a Python library that generates realistic mouse movement for Selenium sessions — curved paths, variable speed, natural acceleration.
By the end of this guide, you'll have a working scraper that moves its cursor like a real person.
What Is HumanCursor?
HumanCursor is a Python package that simulates human-like mouse movements inside automated browser sessions. It uses a natural motion algorithm with variable speed, acceleration, and curvature instead of the instant teleportation that Selenium's default ActionChains produces.
It supports clicking, dragging, scrolling, and hovering — all through paths that look like a real hand moved the mouse. Use it when your scraper gets blocked despite clean fingerprints, because the detection is behavioral.
Why Mouse Movement Matters for Bot Detection
Before you write any code, it helps to understand what you're up against.
Modern anti-bot systems like Cloudflare, DataDome, and PerimeterX don't just fingerprint your browser. They run JavaScript that records every mouse event on the page.
That data includes position, timestamp, velocity, and acceleration. It gets fed into classifiers trained on millions of real user sessions.
Here's what gives bots away:
Perfectly straight paths. Real humans can't move a mouse in a straight line. Our hands produce micro-corrections that create slight curves.
A path from A to B that follows a perfect linear trajectory is an instant red flag.
Constant velocity. Humans accelerate at the start of a movement, cruise in the middle, and decelerate near the target.
Bots using ActionChains.move_to_element() teleport instantly — zero movement time, zero intermediate positions.
Identical timing between actions. Click, wait exactly 500ms, click, wait exactly 500ms. Humans don't work like that. Our inter-action intervals follow a roughly normal distribution with meaningful variance.
No idle movement. Real users move their mouse even when they're reading. Small drifts, repositions, scrolls. A cursor that only moves when interacting with elements is suspicious.
HumanCursor addresses the first two of these signals. It won't help with action timing or idle movement — those are things you'd need to add yourself.
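The timing signal is easy to handle on your own: instead of fixed sleeps, sample inter-action delays from a roughly normal distribution with a floor so no pause is impossibly short. A minimal sketch — the function names and the specific mean/spread values are my own, not from any library:

```python
import random
import time

def human_delay(mean=0.8, sd=0.3, floor=0.15):
    """Sample a human-like pause: roughly normal, clipped to a minimum."""
    return max(floor, random.gauss(mean, sd))

def pause(mean=0.8, sd=0.3):
    """Sleep for a randomized, human-looking interval between actions."""
    time.sleep(human_delay(mean, sd))

# Delays cluster around the mean but vary from call to call
samples = [human_delay() for _ in range(5)]
```

Call `pause()` between clicks instead of `time.sleep(0.5)` and the "exactly 500ms every time" pattern disappears.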
Prerequisites
You need three things before starting:
- Python 3.7+ (3.10 or later recommended)
- Google Chrome (latest stable version)
- ChromeDriver matching your Chrome version
Install the packages:
# Terminal
pip install humancursor selenium
HumanCursor pulls in selenium, pyautogui, and numpy as dependencies. If you're on a headless Linux server, pyautogui may complain about missing display — we'll handle that in the troubleshooting section.
Step 1: Set Up Selenium with Chrome
Start with a basic Selenium session. Nothing fancy yet — just get the browser running.
# scraper.py
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
chrome_options = Options()
chrome_options.add_argument("--start-maximized")
chrome_options.add_argument("--disable-blink-features=AutomationControlled")
driver = webdriver.Chrome(options=chrome_options)
driver.get("https://www.scrapingcourse.com/ecommerce/")
The --disable-blink-features=AutomationControlled flag stops Chrome from exposing the navigator.webdriver flag that advertises the session as automated. Without it, many sites block you before HumanCursor even gets a chance to help.
Step 2: Initialize WebCursor
Now attach HumanCursor to your Selenium driver. This is a one-liner.
# scraper.py (continued)
from humancursor import WebCursor
cursor = WebCursor(driver)
That's it. The WebCursor instance now controls all mouse movement in this browser session. Every method you call on cursor will generate a human-like path before performing the action.
One thing to note: WebCursor is fully supported on Chrome and Edge. Firefox and Safari support is listed as "not optimal" by the library's author. Stick with Chrome for production scraping.
Step 3: Move to Elements
The move_to() method is your workhorse. It accepts either a Selenium WebElement or a pair of viewport coordinates.
# Move to an element found by CSS selector
from selenium.webdriver.common.by import By
search_box = driver.find_element(By.CSS_SELECTOR, "input[name='s']")
cursor.move_to(search_box)
The cursor traces a curved path from its current position to the target element. Speed varies along the path — faster in the middle, slower near the start and end.
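That speed profile — slow at the ends, fast in the middle — is what an ease-in-out curve produces. As an illustration of the shape (not HumanCursor's actual implementation), the classic smoothstep function behaves exactly this way:

```python
def smoothstep(t):
    """Ease-in-out progress along a path: s(t) = 3t^2 - 2t^3 for t in [0, 1]."""
    return 3 * t**2 - 2 * t**3

def velocity(t, dt=1e-4):
    """Numerical derivative of progress — instantaneous speed along the path."""
    return (smoothstep(t + dt) - smoothstep(t - dt)) / (2 * dt)

# Speed is near zero at the endpoints and peaks at the midpoint of the movement
```

Mapping path progress through a curve like this is what turns a constant-velocity robot sweep into something that reads as a hand movement.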
You can also target a specific point within an element using relative_position:
# Click the right side of a slider
slider = driver.find_element(By.ID, "price-slider")
cursor.move_to(slider, relative_position=[0.8, 0.5])
The relative_position parameter takes [x_ratio, y_ratio] where [0, 0] is the top-left corner and [1, 1] is the bottom-right. This is useful for sliders, maps, or any element where the click position matters.
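Under the hood, a relative position maps to an absolute viewport point via the element's bounding box. The arithmetic is simple — this helper is my own illustration, not library code:

```python
def relative_to_absolute(elem_x, elem_y, width, height, x_ratio, y_ratio):
    """Map [x_ratio, y_ratio] in [0, 1] to a viewport point inside the element."""
    return (elem_x + width * x_ratio, elem_y + height * y_ratio)

# For an element at (100, 200) sized 300x50:
# [0.5, 0.5] -> the element's center
# [0.8, 0.5] -> 80% of the way across, vertically centered
center = relative_to_absolute(100, 200, 300, 50, 0.5, 0.5)
```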
If you need to move to raw coordinates instead of an element:
# Move to viewport coordinates x=450, y=600
cursor.move_to([450, 600])
# Move by pixel offset from current position
cursor.move_by_offset(200, 170)
Step 4: Click on Elements
click_on() combines movement and clicking into a single call. The cursor moves to the target first, then clicks.
# Click a button
add_to_cart = driver.find_element(By.CSS_SELECTOR, ".add_to_cart_button")
cursor.click_on(add_to_cart)
You can hold the click for a specified duration. This is useful for elements that require a long-press:
# Long-press for 1.7 seconds
cursor.click_on(add_to_cart, click_duration=1.7)
And you can click at a specific position within the element:
# Click the left-third of a navigation bar
nav = driver.find_element(By.CSS_SELECTOR, ".site-navigation")
cursor.click_on(nav, relative_position=[0.2, 0.5])
Step 5: Scroll Elements Into View
Before clicking anything, you need to make sure it's visible. HumanCursor handles this automatically in most cases — click_on() and move_to() call scroll_into_view_of_element() internally.
But sometimes you want to scroll manually, especially to simulate reading behavior:
# Scroll until the element is visible
footer = driver.find_element(By.CSS_SELECTOR, "footer")
cursor.scroll_into_view_of_element(footer)
For sliders and scrollable containers, use control_scroll_bar():
# Set a scroll bar to 75% position
scrollbar = driver.find_element(By.ID, "volume-slider")
cursor.control_scroll_bar(scrollbar, 0.75)
The float argument ranges from 0 (empty) to 1 (full). This works for volume sliders, progress bars, custom scrollbars — anything draggable along a track.
Step 6: Drag and Drop
Drag-and-drop is where HumanCursor really earns its keep. Selenium's built-in drag-and-drop is notoriously unreliable and looks completely robotic.
# Drag from one element to another
source = driver.find_element(By.ID, "draggable-item")
target = driver.find_element(By.ID, "drop-zone")
cursor.drag_and_drop(source, target)
You can also drag from a specific part of the source element:
# Drag from the bottom-right corner of the source
cursor.drag_and_drop(
source,
target,
drag_from_relative_position=[0.9, 0.9]
)
Or drag an element to raw coordinates:
# Drag element to a specific position
cursor.drag_and_drop(source, [640, 320])
Full Working Example: Scraping Product Data
Let's put it all together. This script navigates an e-commerce page, interacts with it like a human would, and extracts product information.
# ecommerce_scraper.py
import time
import random
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from humancursor import WebCursor
# Browser setup
chrome_options = Options()
chrome_options.add_argument("--start-maximized")
chrome_options.add_argument("--disable-blink-features=AutomationControlled")
driver = webdriver.Chrome(options=chrome_options)
cursor = WebCursor(driver)
try:
driver.get("https://www.scrapingcourse.com/ecommerce/")
# Wait for page to load
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.CSS_SELECTOR, ".product"))
)
# Simulate reading — pause before interacting
time.sleep(random.uniform(1.5, 3.0))
# Move to the first product (human would look at it first)
products = driver.find_elements(By.CSS_SELECTOR, ".product")
cursor.move_to(products[0])
time.sleep(random.uniform(0.5, 1.2))
# Click into the product page
link = products[0].find_element(By.CSS_SELECTOR, "a")
cursor.click_on(link)
# Wait for product page
WebDriverWait(driver, 10).until(
EC.presence_of_element_located(
(By.CSS_SELECTOR, ".product_title")
)
)
# Extract data
title = driver.find_element(
By.CSS_SELECTOR, ".product_title"
).text
price = driver.find_element(
By.CSS_SELECTOR, ".price"
).text
print(f"Product: {title}")
print(f"Price: {price}")
finally:
driver.quit()
Notice the random.uniform() calls between actions. HumanCursor handles the movement realism, but you still need to add realistic timing between actions yourself. A human doesn't click a product the instant the page loads — they look at it first.
Using HumanCursor with Proxy Rotation
If you're scraping at any real volume, you need proxies. HumanCursor doesn't handle networking — it only manages cursor movement. You configure proxies through Selenium's Chrome options.
# proxy_scraper.py
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from humancursor import WebCursor
def create_driver_with_proxy(proxy_address):
"""Create a Chrome driver routed through a proxy."""
chrome_options = Options()
chrome_options.add_argument("--start-maximized")
chrome_options.add_argument(
"--disable-blink-features=AutomationControlled"
)
chrome_options.add_argument(f"--proxy-server={proxy_address}")
driver = webdriver.Chrome(options=chrome_options)
return driver
# Rotate through your proxy list
proxies = [
"http://proxy1.example.com:8080",
"http://proxy2.example.com:8080",
"http://proxy3.example.com:8080",
]
for proxy in proxies:
driver = create_driver_with_proxy(proxy)
cursor = WebCursor(driver)
try:
driver.get("https://httpbin.org/ip")
print(driver.find_element("tag name", "pre").text)
finally:
driver.quit()
For authenticated proxies (username:password), you'll need to use a Chrome extension or Selenium Wire since Chrome's --proxy-server flag doesn't support authentication directly.
If you're running residential proxies from a provider like Roundproxies, the proxy address format is typically http://user:pass@gateway.roundproxies.com:port. You'd route this through Selenium Wire or a local proxy forwarder.
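If you go the Selenium Wire route (`pip install selenium-wire`), the authenticated proxy is passed as an options dictionary rather than a Chrome flag. A hedged sketch — the helper function and hostname are my own, though the `seleniumwire_options` dictionary shape follows Selenium Wire's documented proxy format:

```python
def build_seleniumwire_options(user, password, host, port):
    """Build the options dict Selenium Wire expects for an authenticated proxy."""
    url = f"http://{user}:{password}@{host}:{port}"
    return {
        "proxy": {
            "http": url,
            "https": url,
            "no_proxy": "localhost,127.0.0.1",
        }
    }

# Then, assuming selenium-wire is installed:
#   from seleniumwire import webdriver
#   driver = webdriver.Chrome(seleniumwire_options=build_seleniumwire_options(...))
opts = build_seleniumwire_options("user", "pass", "gateway.example.com", 8000)
```

From there, `WebCursor(driver)` attaches exactly as it does to a plain Chrome driver.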
Using SystemCursor for Desktop Automation
HumanCursor isn't just for browsers. The SystemCursor class controls your actual OS-level mouse cursor using pyautogui under the hood.
# desktop_automation.py
from humancursor import SystemCursor
import time
cursor = SystemCursor()
# Move to screen coordinates
cursor.move_to([500, 300])
time.sleep(0.5)
# Click at coordinates
cursor.click_on([500, 300])
SystemCursor only accepts coordinate pairs — no DOM elements, since there's no browser context. It supports move_to(), click_on(), and drag_and_drop().
This is useful for automating desktop applications, game bots, or any scenario where you need human-like mouse movement outside a browser.
Using HCScripter to Record Mouse Actions
If you don't want to write coordinate-based scripts by hand, HCScripter lets you record your mouse movements and replay them.
Launch it from the terminal:
# Terminal
python -m humancursor.HCScripter.launch
A GUI window opens where you can:
- Set the output filename and save location
- Press the ON/OFF button to start recording
- Perform your mouse actions normally
- Press Finish to generate a .py script
The generated script contains SystemCursor calls that replay your exact movements. It's a quick way to prototype desktop automation without manually calculating coordinates.
Adding Idle Movement Between Actions
HumanCursor generates realistic paths when you call its methods. But between those calls, the cursor sits completely still. That's a detection signal — real users constantly fidget.
You can simulate this with a background thread that makes small random movements while your main script waits:
# idle_movement.py
import threading
import time
import random
from humancursor import WebCursor
def idle_jitter(cursor, stop_event):
"""Produce small random movements while waiting."""
while not stop_event.is_set():
x_offset = random.randint(-15, 15)
y_offset = random.randint(-10, 10)
try:
cursor.move_by_offset(x_offset, y_offset)
except Exception:
pass # ignore if element context changed
time.sleep(random.uniform(0.3, 1.2))
# Usage in your scraper
stop_event = threading.Event()
jitter_thread = threading.Thread(
target=idle_jitter,
args=(cursor, stop_event),
daemon=True
)
jitter_thread.start()
# ... do your scraping work here ...
# Stop idle movement before precise interactions
stop_event.set()
jitter_thread.join()
Start the jitter thread during "reading" phases — when you've loaded a page and want to simulate a user scanning content. Stop it before precise clicks so the idle movement doesn't interfere with targeting.
The key is keeping the offsets small. Real idle movement is micro-corrections, not sweeping gestures. Stay within 15-20 pixels in any direction.
HumanCursor vs. Other Mouse Libraries
HumanCursor isn't the only option for human-like cursor simulation. Here's how it stacks up:
| Library | Language | Browser Support | OS-Level | Movement Algorithm |
|---|---|---|---|---|
| HumanCursor | Python | Selenium (Chrome/Edge) | Yes (pyautogui) | Natural motion with variable speed |
| ghost-cursor | JavaScript | Puppeteer | No | Bezier curves with overshoot |
| pyclick | Python | None (raw coords only) | Yes | Bezier curves |
| human_mouse | Python | None (raw coords only) | Yes | Bezier + spline interpolation |
ghost-cursor is your best option if you're in the Node.js/Puppeteer ecosystem. It generates Bezier-curved paths with intentional overshoot — the cursor slightly passes the target and corrects back, which is very human. But it has no Python bindings and no Selenium support.
pyclick is simpler than HumanCursor. It generates Bezier paths and can click, but it doesn't integrate with Selenium. You get raw coordinate movement and that's it. If you need basic cursor realism for desktop automation and nothing more, pyclick is lighter weight.
human_mouse adds spline interpolation on top of Bezier curves for smoother trajectories. Like pyclick, it's OS-level only — no browser integration. The movement quality is arguably the best of the bunch, but you'd have to build the Selenium bridge yourself.
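All three alternatives above lean on Bezier curves, and the core idea is easy to sketch. A cubic Bezier needs a start point, an end point, and two control points that bend the path; randomizing the control points produces a different natural-looking curve on every run. This is an illustration of the technique, not any of these libraries' actual code:

```python
import random

def cubic_bezier_path(p0, p3, steps=30, bend=80):
    """Sample a curved path from p0 to p3 via two randomized control points."""
    p1 = (p0[0] + (p3[0] - p0[0]) * 0.3 + random.uniform(-bend, bend),
          p0[1] + (p3[1] - p0[1]) * 0.3 + random.uniform(-bend, bend))
    p2 = (p0[0] + (p3[0] - p0[0]) * 0.7 + random.uniform(-bend, bend),
          p0[1] + (p3[1] - p0[1]) * 0.7 + random.uniform(-bend, bend))
    path = []
    for i in range(steps + 1):
        t = i / steps
        u = 1 - t
        # Cubic Bezier: B(t) = u^3*p0 + 3u^2*t*p1 + 3u*t^2*p2 + t^3*p3
        x = u**3 * p0[0] + 3 * u**2 * t * p1[0] + 3 * u * t**2 * p2[0] + t**3 * p3[0]
        y = u**3 * p0[1] + 3 * u**2 * t * p1[1] + 3 * u * t**2 * p2[1] + t**3 * p3[1]
        path.append((x, y))
    return path

# The path starts exactly at p0, ends exactly at p3, and bows through the middle
path = cubic_bezier_path((0, 0), (500, 300))
```

Feed each sampled point to a mouse-move call with an ease-in-out delay schedule and you have the skeleton of what these libraries do.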
HumanCursor wins on convenience for web scraping because it wraps Selenium directly. You pass a WebElement and it handles scrolling into view, path generation, and execution. The tradeoff is that it's Selenium-only. If you're using Playwright or Puppeteer from Python, you'll need to adapt.
There's a community fork called HumanCursor-MacOS that fixes performance issues where AppKit throttles programmatic mouse movement on macOS. If your system-level automation runs painfully slow on a Mac, check that fork.
When HumanCursor Is Not Enough
Be honest about what this library can and can't do. HumanCursor solves one specific problem: making mouse movement look human. It does not fix these:
Browser fingerprinting. If your browser leaks WebDriver flags, canvas fingerprints, or WebGL hashes that identify it as automated, no amount of mouse realism will save you. Use tools like undetected-chromedriver or patch your fingerprints separately.
TLS fingerprinting. Some anti-bot systems (Cloudflare, Akamai) check TLS client hello patterns. Selenium's Chrome profile might not match a real browser's TLS fingerprint perfectly.
Request pattern analysis. If you scrape 10,000 pages in an hour with perfectly consistent timing, the server-side logs will flag you regardless of how your cursor moves.
JavaScript challenges. Cloudflare Turnstile and similar challenges evaluate far more than mouse movement. They check browser APIs, execution timing, and dozens of other signals.
No idle movement. HumanCursor only moves the cursor when you tell it to. Real users constantly produce small mouse movements while reading. You'd need to add background jitter yourself using a separate thread.
The best approach combines HumanCursor with undetected-chromedriver for fingerprint management, random delays between actions, and rotating residential proxies for IP diversity.
Combining HumanCursor with undetected-chromedriver
For production scraping, pair HumanCursor with undetected-chromedriver to handle both cursor movement and browser fingerprinting:
# stealth_scraper.py
import undetected_chromedriver as uc
from humancursor import WebCursor
from selenium.webdriver.common.by import By
import time
import random
# undetected-chromedriver handles fingerprint evasion
driver = uc.Chrome(use_subprocess=True)
# HumanCursor handles mouse movement realism
cursor = WebCursor(driver)
try:
driver.get("https://nowsecure.nl")
time.sleep(random.uniform(3, 5))
# Move around naturally before interacting
cursor.move_to([300, 400])
time.sleep(random.uniform(0.8, 1.5))
cursor.move_to([600, 250])
time.sleep(random.uniform(0.5, 1.0))
# Now interact with the page
page_text = driver.find_element(By.TAG_NAME, "body").text
print(page_text[:500])
finally:
driver.quit()
Install undetected-chromedriver with:
# Terminal
pip install undetected-chromedriver
This combo covers two of the biggest detection vectors. undetected-chromedriver patches the WebDriver flags and fingerprint leaks. HumanCursor makes the mouse behavior look organic. Together they pass most mid-tier anti-bot systems.
Troubleshooting
"No display found" or "PyAutoGUI requires display"
This happens on headless Linux servers. The pyautogui dependency requires an X display.
Fix: If you only need WebCursor (browser automation), this error shouldn't block you — it's triggered by SystemCursor imports. But if it does, set a virtual display:
# Terminal
pip install PyVirtualDisplay
# Python — run this before any HumanCursor imports
from pyvirtualdisplay import Display
display = Display(visible=0, size=(1920, 1080))
display.start()
"WebDriver element not interactable"
The element exists in the DOM but isn't visible on screen. HumanCursor tries to scroll it into view, but sometimes CSS layouts prevent it.
Fix: Explicitly scroll first, then add a short wait:
driver.execute_script(
"arguments[0].scrollIntoView({block: 'center'});",
element
)
time.sleep(0.5)
cursor.click_on(element)
Cursor moves but click doesn't register
Some sites use JavaScript event listeners that expect specific event sequences (mousedown → mouseup → click). If HumanCursor's click doesn't trigger the expected behavior, fall back to a JavaScript click after moving:
cursor.move_to(element)
time.sleep(random.uniform(0.1, 0.3))
driver.execute_script("arguments[0].click();", element)
You lose some behavioral realism on the click event itself, but the movement path still looks human.
Movement is very slow
HumanCursor's default movement speed works for normal interactions. But if you're navigating many elements quickly, the cumulative delay adds up.
Fix: There's no built-in speed parameter. If speed is a concern, consider whether you actually need human-like movement for every action. Use HumanCursor for the interactions that matter (clicking buttons, filling forms) and fall back to standard Selenium for the rest.
HumanCursor API Reference
Here's a quick reference for every WebCursor method:
| Method | Purpose | Key Parameters |
|---|---|---|
| move_to(target) | Move cursor to element or coordinates | relative_position, absolute_offset, steady |
| click_on(target) | Move to target and click | relative_position, click_duration |
| drag_and_drop(source, target) | Click-hold source, move to target, release | drag_from_relative_position |
| move_by_offset(x, y) | Move cursor by pixel offset | Positive = right/down, negative = left/up |
| control_scroll_bar(element, level) | Set a scrollable element's position | level is 0.0 to 1.0 |
| scroll_into_view_of_element(element) | Scroll until element is visible | Called automatically by other methods |
The steady parameter on move_to() forces a straighter path when set to True. Use this sparingly — the whole point of HumanCursor is curved, natural paths.
Frequently Asked Questions
Does HumanCursor work with Playwright?
Not directly. HumanCursor is built on Selenium's WebDriver API. However, there are community adaptations for Playwright and Patchright (a Playwright fork). These aren't maintained by the original author, so expect some rough edges. If you're committed to Playwright, ghost-cursor's JavaScript implementation or building your own Bezier-based movement function is a more reliable path.
Is HumanCursor still maintained?
As of early 2026, the last PyPI release (v1.1.5) shipped in March 2025. The GitHub repository hasn't seen major updates in over a year. The library still works fine with current Selenium versions, but don't expect new features. The core movement algorithm is solid and stable — it doesn't need constant updates the way a browser fingerprinting tool would.
Can I use HumanCursor in headless mode?
Yes, with caveats. WebCursor works in headless Chrome because it operates on the virtual viewport coordinates that Selenium maintains. The cursor doesn't physically appear on screen, but the JavaScript events fire correctly. SystemCursor does not work in headless mode since it relies on a real OS-level display.
Will HumanCursor bypass Cloudflare Turnstile?
On its own, no. Cloudflare Turnstile evaluates dozens of signals beyond mouse movement — canvas fingerprints, TLS characteristics, JavaScript execution patterns, and more. HumanCursor makes your mouse behavior look more organic, which helps at the margins. But you need it combined with fingerprint patching, proper headers, and residential proxies to stand a real chance against Turnstile.
How many pages per hour can I scrape with HumanCursor?
Significantly fewer than raw Selenium. Each cursor movement takes real time — usually 0.5 to 2 seconds per action depending on distance. A scraper that needs 4-5 mouse actions per page might add 5-8 seconds of cursor movement overhead per page. If throughput matters more than stealth, consider whether you actually need mouse simulation for your target. Many sites only check mouse behavior on specific interactions like login forms or checkout flows, not on every page load.
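To make the tradeoff concrete, here's the back-of-envelope arithmetic. The per-action figures are the rough estimates from above, not measurements:

```python
def pages_per_hour(base_page_seconds, actions_per_page, seconds_per_action):
    """Estimate scraping throughput once cursor-movement overhead is added."""
    total = base_page_seconds + actions_per_page * seconds_per_action
    return 3600 / total

raw = pages_per_hour(2.0, 0, 0)          # plain Selenium, ~2 s per page
humanized = pages_per_hour(2.0, 5, 1.2)  # plus 5 cursor actions at ~1.2 s each
```

Under these assumptions, cursor simulation cuts throughput to a quarter of the raw rate — which is why it's worth reserving for the interactions that are actually watched.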
Wrapping Up
HumanCursor fills a real gap in the Python scraping toolkit. Selenium gives you browser control but makes every cursor action look robotic. HumanCursor generates the curved paths, variable speeds, and natural deceleration that anti-bot systems expect from real users.
The library works best as one layer in a multi-layer evasion strategy. Pair it with undetected-chromedriver for fingerprint management, add random delays between actions, and route traffic through rotating residential proxies. No single tool beats modern anti-bot systems alone.
If you're scraping sites that run behavioral analysis — and in 2026, most protected sites do — HumanCursor is worth adding to your stack.