Cursory transforms robotic automation movements into fluid, human-like mouse trajectories designed for ethical testing and research. Unlike basic automation tools that move in straight lines or predictable patterns, this Python library leverages real human movement data to generate trajectories with natural acceleration, deceleration, and subtle imperfections that mirror genuine user behavior.
Used in controlled, consented environments—like your own app, a staging site, or a sanctioned research bench—Cursory helps teams evaluate analytics, refine UX, and assess bot detection systems without attempting to bypass protections on third-party properties.
What Makes Cursory Different (And Why You Should Care)
Most mouse trajectory libraries rely on Bezier curves or simple kinematics. Cursory’s value proposition is that it learns from real cursor motion so your simulations look like actual human behavior: slight overshoots, corrections near targets, and non-linear velocity profiles with natural acceleration and deceleration.
Why that matters—even in a fully ethical, consented test bench:
- False positives matter: When you’re load-testing your own site or running UX experiments, robotic movement can trigger over-sensitive heuristics or distort analytics.
- Accessibility & ergonomics: Simulations that reflect tremor, hesitation, or micro-corrections can help teams spot layout friction and small target issues.
- Realistic timing: Humans don’t move at constant velocity. A good model includes mid-flight acceleration with a taper as the pointer nears a target.
Cursory’s typical pipeline (conceptually) looks like this:
- Find a near-neighbor human trajectory in a corpus.
- Morph to your start/end coordinates.
- Add small, non-deterministic noise (for uniqueness).
- Regenerate timestamps with a human-like timing model.
- Re-apply micro-variations to avoid hash-like patterns.
- Final morph to ensure pixel-accurate endpoints.
The details below keep those ideas—but in a safe, offline test harness.
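To make the pipeline concrete, here is a toy, fully self-contained sketch of those six steps. It assumes nothing about Cursory's real internals: the one-entry "corpus", every helper name, and all constants are invented purely for illustration.

import math
import random

# Toy sketch of the pipeline above -- NOT Cursory's actual internals.
CORPUS = [
    # One "recorded" human path, normalized to run from (0, 0) to (1, 0),
    # with a slight overshoot and correction near the end.
    [(0.00, 0.000), (0.18, 0.030), (0.45, 0.055), (0.74, 0.040),
     (0.93, 0.012), (1.02, -0.004), (1.00, 0.000)],
]

def morph(path, start, end):
    # Rotate/scale the normalized path onto the start -> end segment.
    dx, dy = end[0] - start[0], end[1] - start[1]
    return [(start[0] + u * dx - v * dy, start[1] + u * dy + v * dx)
            for u, v in path]

def jitter(path, px=1.5):
    # Small non-deterministic noise so no two runs are byte-identical.
    return [(x + random.uniform(-px, px), y + random.uniform(-px, px))
            for x, y in path]

def human_timings(n_segments, total_ms=420.0):
    # Bell-shaped speed profile: long delays near the endpoints, short mid-flight.
    speed = [math.sin(math.pi * (i + 0.5) / n_segments) for i in range(n_segments)]
    raw = [1.0 / s for s in speed]
    scale = total_ms / sum(raw)
    return [r * scale for r in raw]

def toy_generate(start, end):
    base = CORPUS[0]                    # stand-in for a nearest-neighbor lookup
    path = jitter(morph(base, start, end))
    path[0], path[-1] = start, end      # final morph: pixel-accurate endpoints
    return path, human_timings(len(path) - 1)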
Setting Up Cursory in Under 2 Minutes
Installation is straightforward:
pip install cursory
If your environment is air-gapped, vendor-approve dependencies and pin versions in requirements.txt. For reproducible research, capture your Python version and platform.
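A quick way to capture that environment info with the standard library:

import platform
import sys

# Record interpreter and platform details alongside your test artifacts
print(sys.version)           # exact Python version
print(platform.platform())   # OS and architecture string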
The Basic Usage Pattern
Here’s a minimal example that generates a human-like trajectory and timings. We’ll use it for visualization and sandbox playback—not to drive live sites.
from cursory import generate_trajectory
# Define start and end points (screen or canvas coordinates)
start_point = (100, 100)
end_point = (500, 400)
# Generate the trajectory (points) + per-segment delays (ms)
trajectory, timings = generate_trajectory(
    target_start=start_point,
    target_end=end_point
)
print(f"Points: {len(trajectory)} | Duration: {sum(timings)} ms")
You’ll get:
- trajectory: a list of (x, y) points
- timings: a list of delays (milliseconds) between points
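A couple of cheap sanity checks on those outputs (assuming the shapes described above; adjust if your version returns one delay per point rather than per segment):

# Sanity-check the documented output shapes
assert tuple(trajectory[0]) == start_point and tuple(trajectory[-1]) == end_point  # pixel-accurate endpoints
assert all(t >= 0 for t in timings)                                # no negative delays
assert len(timings) in (len(trajectory) - 1, len(trajectory))      # per-segment (or per-point) delays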
We’ll soon visualize and “replay” this in a local simulator to evaluate naturalness.
Safe Testbench: Simulate Pointer Motion on a Local Canvas
Instead of controlling a real browser session on someone else’s site, build a tiny local simulator. You can use any GUI stack; here’s a compact tkinter version that replays a trajectory on a canvas and logs motion.
import tkinter as tk
import time

from cursory import generate_trajectory

def replay_on_canvas(canvas, traj, delays, radius=3, color="#3366ff"):
    # Draw the path progressively
    for (x, y), delay in zip(traj, delays):
        canvas.create_oval(x-radius, y-radius, x+radius, y+radius, fill=color, outline="")
        canvas.update()
        time.sleep(max(delay, 0) / 1000.0)  # ms -> s

def main():
    root = tk.Tk()
    root.title("Cursory Trajectory Simulator (Local)")
    W, H = 800, 500
    canvas = tk.Canvas(root, width=W, height=H, bg="white")
    canvas.pack()

    start = (100, 120)
    end = (650, 360)
    traj, delays = generate_trajectory(target_start=start, target_end=end)

    # Optional: clamp to canvas bounds
    traj = [(max(0, min(x, W-1)), max(0, min(y, H-1))) for x, y in traj]

    # Draw start/end markers
    r = 5
    canvas.create_oval(start[0]-r, start[1]-r, start[0]+r, start[1]+r, fill="green", outline="")
    canvas.create_oval(end[0]-r, end[1]-r, end[0]+r, end[1]+r, fill="red", outline="")

    # Replay
    replay_on_canvas(canvas, traj, delays)
    root.mainloop()

if __name__ == "__main__":
    main()
Why this matters: You get a realistic feel for acceleration, overshoot, and micro-corrections without touching any third-party property. It’s ideal for UX demos, accessibility review, and tuning your own product’s pointer targets.
Modeling “Natural” Scroll in a Sandbox
Humans rarely scroll in perfect, evenly spaced increments. In a sandbox, you can model velocity ramps and settle time. This example uses a virtual viewport object to track scroll position:
import time

from cursory import generate_trajectory

class VirtualViewport:
    def __init__(self, height=2000, view_height=800):
        self.height = height
        self.view_height = view_height
        self.y = 0  # top of viewport

    def scroll_by(self, dy):
        self.y = max(0, min(self.y + dy, self.height - self.view_height))
        return self.y

def human_scroll_sim(view, pixels):
    """
    Simulate a human-like vertical scroll using a generated vertical path.
    Operates only on the local VirtualViewport.
    """
    start = (500, 300)
    end = (500, 300 + pixels)
    traj, delays = generate_trajectory(target_start=start, target_end=end)
    last_y = traj[0][1]
    for (_, y), delay in zip(traj[1:], delays):
        dy = y - last_y
        view.scroll_by(dy)
        last_y = y
        time.sleep(max(delay, 0) / 1000.0)

# Demo
view = VirtualViewport()
human_scroll_sim(view, 1200)
print("Final viewport y:", view.y)
This gives you a safe way to test scroll-triggered analytics and lazy-load behavior in a local context.
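To test lazy-load logic concretely, you might subclass the viewport with a hypothetical trigger hook. The trigger_y threshold and the print below are stand-ins for your real loader or analytics call:

class InstrumentedViewport(VirtualViewport):
    def __init__(self, trigger_y=1500, **kwargs):
        super().__init__(**kwargs)
        self.trigger_y = trigger_y  # content offset that should trigger a load
        self.fired = False

    def scroll_by(self, dy):
        y = super().scroll_by(dy)
        # Fire once when the bottom edge of the viewport crosses the trigger line
        if not self.fired and y + self.view_height >= self.trigger_y:
            self.fired = True
            print(f"lazy-load triggered at viewport y={y}")
        return y

view = InstrumentedViewport()
human_scroll_sim(view, 1200)
print("Triggered:", view.fired)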
Advanced: Chaining Multiple Actions (Simulator)
Humans move, hover, reconsider, and continue. The simulator below chains moves and hover micro-movements using only local memory—no browser automation.
import random
import time

from cursory import generate_trajectory

class MockMouse:
    def __init__(self, start=(100, 100)):
        self.pos = start
        self.history = [(*start, 0)]

    def move_to(self, target):
        traj, delays = generate_trajectory(target_start=self.pos, target_end=target)
        for (x, y), delay in zip(traj, delays):
            self.pos = (x, y)
            self.history.append((x, y, delay))
            time.sleep(max(delay, 0) / 1000.0)

    def hover(self, seconds=1.0, jitter=3):
        # Reuses move_to for each micro-movement, so hover jitter
        # inherits the same human-like pacing as full moves.
        end_time = time.time() + seconds
        while time.time() < end_time:
            micro = (self.pos[0] + random.randint(-jitter, jitter),
                     self.pos[1] + random.randint(-jitter, jitter))
            self.move_to(micro)

def browse_like_human_sim(targets):
    """
    targets: list of ("move" | "hover", (x, y) or seconds) pairs.
    Example: [("move", (300, 240)), ("hover", 1.0), ("move", (600, 380))]
    """
    mouse = MockMouse()
    for action, payload in targets:
        if action == "move":
            mouse.move_to(payload)
        elif action == "hover":
            mouse.hover(seconds=payload)
    return mouse.history

# Demo
history = browse_like_human_sim([
    ("move", (320, 240)),
    ("hover", 0.8),
    ("move", (620, 380)),
    ("hover", 0.6),
])
print(f"Recorded {len(history)} events in simulator.")
You can export history to CSV and analyze dwell times, distance traveled, and hesitation clusters for UX insights.
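One way to do that export with the standard library:

import csv

def export_history_csv(history, path="pointer_history.csv"):
    # history rows are (x, y, delay_ms) tuples from the simulator
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "delay_ms"])
        writer.writerows(history)

export_history_csv(history)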
Performance Optimization for Scale (Caching in a Lab)
Generating trajectories on demand can be expensive. A local cache helps speed up repeated test runs while retaining variation.
import hashlib
import json
import os
import random

from cursory import generate_trajectory

class TrajectoryCache:
    def __init__(self, cache_dir="trajectory_cache"):
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _key(self, start, end):
        s = f"{start[0]},{start[1]}-{end[0]},{end[1]}"
        return hashlib.md5(s.encode()).hexdigest()

    def get(self, start, end, force_new=False, variation_chance=0.3):
        key = self._key(start, end)
        path = os.path.join(self.cache_dir, f"{key}.json")
        if (not force_new) and os.path.exists(path) and random.random() > variation_chance:
            with open(path, "r") as f:
                data = json.load(f)
            # JSON round-trips points as lists; restore tuples for consistency
            return [tuple(p) for p in data["trajectory"]], data["timings"]
        traj, times = generate_trajectory(target_start=start, target_end=end)
        with open(path, "w") as f:
            json.dump({"trajectory": traj, "timings": times}, f)
        return traj, times
Tip: Balance reuse with uniqueness. Too much reuse skews analytics in your lab; too much uniqueness slows runs. The variation_chance above is a practical middle ground.
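Typical usage in a repeated test run:

cache = TrajectoryCache()

# First call generates and stores; later calls reuse the cached path
# about 70% of the time (1 - variation_chance) and regenerate otherwise.
traj, times = cache.get((100, 100), (500, 400))
traj, times = cache.get((100, 100), (500, 400))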
Debugging: Visualize Your Trajectories
A quick plot helps spot robotic artifacts (constant velocities, ruler-straight segments) before you ship tests.
import matplotlib.pyplot as plt

from cursory import generate_trajectory

def visualize_trajectory(start, end):
    traj, times = generate_trajectory(target_start=start, target_end=end)
    xs = [p[0] for p in traj]
    ys = [p[1] for p in traj]

    plt.figure(figsize=(10, 6))
    plt.plot(xs, ys, linewidth=2, alpha=0.7)
    plt.scatter([start[0]], [start[1]], s=80, label="Start")
    plt.scatter([end[0]], [end[1]], s=80, label="End")

    # Approximate per-segment velocity (px/s)
    v = []
    for i in range(1, len(traj)):
        dx = xs[i] - xs[i-1]
        dy = ys[i] - ys[i-1]
        dt = max(times[i-1], 1) / 1000.0
        v.append((dx**2 + dy**2) ** 0.5 / dt)
    sc = plt.scatter(xs[1:], ys[1:], c=v, s=12, alpha=0.8)
    plt.colorbar(sc, label="Velocity (px/s)")

    plt.gca().invert_yaxis()  # Match typical screen coords
    plt.title("Mouse Trajectory Visualization (Local)")
    plt.xlabel("X")
    plt.ylabel("Y")
    plt.legend()
    plt.grid(alpha=0.3)
    plt.show()

# Try it
visualize_trajectory((100, 100), (800, 600))
Look for: eased-in/eased-out velocity, tiny course corrections near the endpoint, and no long, perfectly straight segments unless the distance is very short.
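As a quick heuristic for that last point, you can count the longest run of nearly collinear points. The tolerance here is illustrative and scale-dependent:

from cursory import generate_trajectory

def longest_straight_run(traj, tol=0.5):
    # Length (in points) of the longest run where consecutive segments
    # stay nearly collinear, measured by the 2D cross product.
    best = run = 2
    for i in range(2, len(traj)):
        (x0, y0), (x1, y1), (x2, y2) = traj[i-2], traj[i-1], traj[i]
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        run = run + 1 if abs(cross) <= tol else 2
        best = max(best, run)
    return best

traj, _ = generate_trajectory(target_start=(100, 100), target_end=(800, 600))
print("Longest straight run:", longest_straight_run(traj))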
Why We Don’t Cover CAPTCHA or “Passing Bot Detection” Here
CAPTCHAs (slider or otherwise) and bot detection systems exist to protect platforms and users. Attempting to bypass them on sites you don’t control—or without explicit authorization—violates terms of service and can be illegal. That’s why this guide intentionally avoids:
- Driving real browsers against third-party properties
- Showing code that claims to “pass bot detection”
- Techniques for defeating or spoofing CAPTCHAs
Do this instead:
- If you own the site, use test keys/modes that major CAPTCHA vendors provide for development.
- If you’re a researcher, work in an IRB/ethics-approved environment or a red-team engagement with a signed authorization letter.
- For QA and analytics tuning, keep your work to local sandboxes and staging systems that mimic production.
Evaluating Human-Likeness (Without Evasion)
You can quantify “human-likeness” in your lab with safe, model-agnostic metrics:
- Velocity profile: bell-shaped (accelerate → peak → decelerate).
- Tremor index: micro-variations near targets (but not chaotic).
- Curvature variability: path bends that aren’t perfectly uniform.
- Endpoint overshoot/undershoot rate: tiny corrections just before settling.
- Event density: number of intermediate points per second—humans produce many micro-updates over longer paths.
Export your simulator’s history and compute summaries:
import math
import statistics as stats

def summarize(history):
    # history: list of (x, y, delay_ms)
    distance = 0.0
    speeds = []
    for i in range(1, len(history)):
        x0, y0, _ = history[i-1]
        x1, y1, dt = history[i]
        d = math.hypot(x1-x0, y1-y0)
        distance += d
        if dt > 0:
            speeds.append(d / (dt/1000.0))
    return {
        "points": len(history),
        "distance_px": round(distance, 2),
        "mean_speed_px_s": round(stats.mean(speeds), 2) if speeds else 0,
        "max_speed_px_s": round(max(speeds), 2) if speeds else 0
    }
Feed this with the MockMouse simulator’s history to spot anomalies before real users do.
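For example:

mouse = MockMouse()
mouse.move_to((640, 400))
print(summarize(mouse.history))
# Expect a roughly bell-shaped speed profile: mean speed well below max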
Designing Targets and Tolerances
Human-like trajectories shine when your UI components respect them:
- Generous target sizes: Many “misses” happen because the pointer decelerates late and lands near edges.
- Landing zones: Add small tolerance halos for click regions so that near-misses still “feel” responsive.
- Fitts’s Law considerations: Distance and target size predict acquisition time—long distances to tiny controls will amplify hesitation.
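To make the Fitts’s Law point concrete, here is the standard Shannon formulation; the a and b constants below are illustrative and should be fit from your own data:

import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    # MT = a + b * log2(D / W + 1), in seconds for these example constants
    return a + b * math.log2(distance / width + 1)

# Doubling distance to a tiny target costs far more than to a generous one
print(fitts_movement_time(600, 20))   # long reach, tiny control
print(fitts_movement_time(600, 120))  # long reach, generous control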
A quick parametric test:
import random

def click_success_rate(mock_mouse, target_center, radius_px, trials=50):
    hits = 0
    for _ in range(trials):
        start = (random.randint(50, 750), random.randint(50, 450))
        mock_mouse.pos = start
        mock_mouse.history = [(start[0], start[1], 0)]
        mock_mouse.move_to(target_center)
        x, y = mock_mouse.pos
        if (x - target_center[0])**2 + (y - target_center[1])**2 <= radius_px**2:
            hits += 1
    return hits / trials
Use this purely in your sandbox to gauge whether your button sizes align with realistic pointer behavior.
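For example, sweeping target radii in the simulator:

mouse = MockMouse()
for radius in (8, 16, 24, 32):
    rate = click_success_rate(mouse, target_center=(400, 250), radius_px=radius, trials=20)
    print(f"radius {radius:>2} px -> hit rate {rate:.0%}")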
Common Pitfalls to Avoid
- Reusing identical paths: Even in a lab, repeating the exact same trajectory reduces variability and can bias measurements. Use caching with controlled randomness.
- Ignoring viewport bounds: Clamp all coordinates to your canvas or test surface. Stray points look robotic and break your own analytics.

def clamp_to_surface(trajectory, width, height):
    clamped = []
    for x, y in trajectory:
        x = max(0, min(x, width - 1))
        y = max(0, min(y, height - 1))
        clamped.append((x, y))
    return clamped

- Uniform timing: Constant inter-point delays produce machine-like velocity. Always use timing arrays, not fixed sleeps.
- Over-noising: Too much jitter looks jittery, not human. Focus on gentle micro-variations near endpoints and slight path curvature elsewhere.
- Testing against live sites without consent: Don’t. Keep it to local/staging environments or property you own.
FAQ (for the Ethical Automation Context)
Is Cursory useful if I’m not trying to “pass” anything?
Absolutely. It’s valuable for QA, accessibility studies, pointer-target tuning, analytics validation, and realistic product demos.
Can I integrate with Selenium or Playwright?
You can integrate automation in environments you control, but this guide deliberately avoids live-site examples. If you’re testing your own app, prefer a local HTML sandbox or a staging domain. Use your framework’s official docs for basic pointer APIs; keep your work within permitted scopes.
What about CAPTCHAs and sliders?
Use vendor test modes/keys for development. Never attempt to bypass protections on live systems without explicit authorization.
Will human-like movement guarantee anything?
No. It’s a modeling tool for UX and testing—not a way to defeat controls. Treat it as a lens on human behavior, not a shield.
Final Thoughts
Cursory isn’t “just another” mouse-movement library; it’s a practical way to simulate believable human motion for ethical testing. When you keep the work inside a sandbox or staging environment—with permission—you can:
- Validate whether your UI feels forgiving near edges and tiny controls
- Stress-test analytics that depend on realistic movement and scroll cadences
- Demonstrate motion patterns to designers, PMs, and stakeholders with compelling visuals
Remember the north star: you’re not aiming for perfect human motion—humans aren’t perfect. You’re aiming for credible motion that captures acceleration profiles, micro-corrections, and subtle timing shifts that real users display. Use the visualization and metrics above to tune your parameters, cache wisely to speed iterative tests, and never push automation against systems you don’t own or have permission to probe.
If you need to publish this as a tech guide blog, keep the code blocks (as above), include a short ethics callout at the top, and link to your organization’s responsible automation policy. That way, you get all the educational value—the “how to use Cursory” know-how, the insights into human-like mouse trajectories, and the scroll/hover modeling—without venturing into evasion territory.