Getting Google search results programmatically used to mean paying expensive API fees or fighting endless bot detection systems. Browser SERP changes that equation entirely.
This open-source tool gives you real-time Google SERP data through remote browsers. You get clean JSON output without the usual headaches of proxies, CAPTCHAs, or fingerprint detection.
In this guide, you'll learn how to set up Browser SERP from scratch. We'll cover installation, configuration, API usage, and production deployment with Docker.
What Is Browser SERP and Why Does It Matter?
Browser SERP is a lightweight API that delivers real-time Google Search results using managed remote browsers from Browser.cash. It handles anti-bot detection automatically while returning structured JSON data.
Unlike traditional SERP scrapers that get blocked constantly, Browser SERP uses residential-grade browser sessions. These sessions look identical to real human users browsing Google.
The tool maintains a pool of warm browser instances. This connection pooling approach keeps response times fast and consistent.
Here's what makes Browser SERP different from paid alternatives:
- Self-hosted: You control the infrastructure and data
- Open-source: MIT license with full code access
- Cost-effective: Pay only for Browser.cash credits, not per-query fees
- Fast: Connection pooling delivers sub-second response times
- Clean output: Structured JSON ready for any application
Prerequisites
Before starting, make sure you have these tools installed on your system.
Required software:
- Node.js 18 or higher
- npm (comes with Node.js)
- Git for cloning the repository
Required accounts:
- Browser.cash API key (sign up at browser.cash/developers)
You can verify your Node.js installation by running this command:
node --version
The output should show v18.0.0 or higher. If not, download the latest LTS version from nodejs.org.
Step 1: Clone and Install Browser SERP
Start by cloning the Browser SERP repository from GitHub. Open your terminal and run these commands.
git clone https://github.com/BrowserCash/browser-serp.git
cd browser-serp
This downloads the complete source code to your local machine. The project structure is straightforward.
Next, install the required npm dependencies:
npm install
This command reads the package.json file and installs all necessary packages. The installation typically takes 30-60 seconds depending on your internet connection.
The main dependencies include TypeScript for type safety and Express for the HTTP server. Everything else handles browser communication and JSON parsing.
After installation completes, you'll see a node_modules folder in your project directory. This contains all the installed packages.
Step 2: Configure Your Environment Variables
Browser SERP uses environment variables for configuration. This keeps sensitive data like API keys out of your code.
Copy the example environment file to create your own:
cp .env.example .env
Now open the .env file in your preferred text editor. You'll see several configuration options available.
The most important setting is your Browser.cash API key:
BROWSER_API_KEY=your_browser_cash_api_key_here
Replace the placeholder with your actual API key from the Browser.cash dashboard. Without this key, the service won't be able to spawn remote browser sessions.
Here are all the available configuration options:
| Variable | Default | Description |
|---|---|---|
| BROWSER_API_KEY | Required | Your Browser.cash API key |
| PORT | 8080 | Port for the API server |
| LOG_LEVEL | info | Logging verbosity (debug, info, warn, error) |
| ALLOWED_ORIGINS | * | CORS allowed origins |
| SERP_POOL_SIZE | 3 | Number of concurrent browser sessions |
| RATE_LIMIT_MAX | 100 | Maximum requests per minute per IP |
For development, the defaults work fine. For production, you'll want to adjust the pool size and rate limits.
Pool size considerations:
A larger pool means more concurrent searches but higher memory usage. Start with 3 sessions and scale up based on your traffic patterns.
Each browser session consumes Browser.cash credits while active. Balance your pool size against your budget constraints.
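For reference, a production-leaning .env might look like this (all variables come from the table above; the values are illustrative, not recommendations):

BROWSER_API_KEY=your_browser_cash_api_key_here
PORT=8080
LOG_LEVEL=info
ALLOWED_ORIGINS=https://yourdomain.com
SERP_POOL_SIZE=5
RATE_LIMIT_MAX=200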
Step 3: Start the Browser SERP Server
With configuration complete, you're ready to launch the server. Browser SERP supports both development and production modes.
For development with hot reloading:
npm run dev
This watches for file changes and restarts automatically. Perfect for testing and experimentation.
For production deployment:
npm run build
npm start
The build command compiles TypeScript to JavaScript. The start command runs the compiled production code.
You should see output similar to this:
[INFO] Browser SERP server starting...
[INFO] Initializing browser pool with 3 sessions
[INFO] Server listening on http://0.0.0.0:8080
Pool initialization takes a few seconds while the browser sessions warm up. Once the pool is ready, the server starts accepting search requests.
Step 4: Make Your First Search Request
Browser SERP exposes a simple REST API for search requests. The main endpoint accepts POST requests with your search parameters.
Basic search request:
curl -X POST http://localhost:8080/api/v1/search \
-H "Content-Type: application/json" \
-d '{
"q": "best programming languages 2025",
"count": 5,
"country": "us"
}'
Let's break down each parameter in this request.
The q parameter contains your search query. This works exactly like typing into Google's search box.
The count parameter specifies how many results you want. The maximum is 20 results per request.
The country parameter sets geographic targeting. Use standard two-letter country codes like "us", "uk", "de", or "fr".
The API response structure:
{
  "web": {
    "total": 135000000,
    "results": [
      {
        "title": "Best Programming Languages to Learn in 2025",
        "url": "https://example.com/guide",
        "description": "A comprehensive guide to the top programming languages..."
      },
      {
        "title": "Top 10 Programming Languages for 2025",
        "url": "https://example.com/top-10",
        "description": "Based on job market data and developer surveys..."
      }
    ]
  }
}
Each result contains the title, URL, and description snippet from Google's search results. The total field shows Google's estimated result count.
Handling the response in code:
Here's a Node.js example for integrating Browser SERP into your application:
const searchGoogle = async (query, count = 5) => {
  const response = await fetch('http://localhost:8080/api/v1/search', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      q: query,
      count: count,
      country: 'us'
    })
  });
  const data = await response.json();
  return data.web.results;
};
// Usage
const results = await searchGoogle('browser automation tools');
console.log(results);
This function wraps the API call in a reusable format. You can easily adapt it for Python, PHP, or any language with HTTP support.
Python example:
import requests
def search_google(query, count=5, country='us'):
response = requests.post(
'http://localhost:8080/api/v1/search',
json={
'q': query,
'count': count,
'country': country
}
)
return response.json()['web']['results']
# Usage
results = search_google('machine learning tutorials')
for result in results:
print(f"{result['title']}: {result['url']}")
Both examples show how straightforward the API integration is. No complex authentication or SDK required.
Step 5: Monitor and Optimize Performance
Browser SERP includes built-in monitoring endpoints. These help you track performance and troubleshoot issues.
Health check endpoint:
curl http://localhost:8080/health
Response:
{
"ok": true
}
Use this endpoint for load balancer health checks or container orchestration probes.
Pool statistics endpoint:
curl http://localhost:8080/stats
Response:
{
"pool": {
"size": 3,
"available": 2,
"active": 1
}
}
This shows your current browser pool status. The available count indicates ready-to-use sessions. The active count shows sessions currently processing requests.
Interpreting pool statistics:
If available consistently shows 0, your pool is undersized. Consider increasing SERP_POOL_SIZE in your environment variables.
If active is always at maximum, you're hitting capacity limits. Scale up your pool or add rate limiting to protect the service.
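To catch exhaustion before users notice slow responses, a small watcher can poll the stats endpoint on a schedule. A minimal sketch, assuming a local instance and the /stats response shape shown above:

// poll-stats.js - warn when the browser pool runs dry (Node 18+)
const STATS_URL = process.env.STATS_URL || 'http://localhost:8080/stats';

setInterval(async () => {
  try {
    const res = await fetch(STATS_URL);
    const { pool } = await res.json();
    if (pool.available === 0) {
      console.warn(`Pool exhausted: ${pool.active}/${pool.size} sessions busy - consider raising SERP_POOL_SIZE`);
    }
  } catch (err) {
    console.error('Stats check failed:', err.message);
  }
}, 30000); // check every 30 seconds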
Deploying Browser SERP with Docker
Docker simplifies deployment and ensures consistent behavior across environments. Browser SERP includes a ready-to-use Dockerfile.
Building the Docker image:
docker build -t browser-serp .
This creates a production-ready container image. The build process compiles TypeScript and sets up the Node.js runtime.
Running the container:
docker run -p 8080:8080 --env-file .env browser-serp
The -p flag maps port 8080 from the container to your host machine. The --env-file flag loads your configuration.
Docker Compose for easier management:
Create a docker-compose.yml file for more complex deployments:
version: "3.8"
services:
  serp:
    build: .
    ports:
      - "8080:8080"
    environment:
      - BROWSER_API_KEY=${BROWSER_API_KEY}
      - SERP_POOL_SIZE=5
      - RATE_LIMIT_MAX=200
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 30s
      timeout: 10s
      retries: 3
This configuration adds automatic restarts and health monitoring. Perfect for production environments.
Start the service with:
docker-compose up -d
The -d flag runs containers in detached mode (background).
Integrating with Teracrawl for Full Content Extraction
Browser SERP works seamlessly with Teracrawl, another tool from BrowserCash. Together, they create a complete search-to-content pipeline.
While Browser SERP returns search result metadata, Teracrawl fetches and converts full page content to Markdown. This combination is ideal for RAG pipelines and AI applications.
How the integration works:
- Browser SERP searches Google and returns URLs
- Teracrawl visits those URLs and extracts content
- You receive clean Markdown ready for LLM consumption
This pipeline eliminates the need for multiple tools or complex scraping setups.
Setting up the pipeline:
First, ensure Browser SERP is running on port 8080. Then clone and configure Teracrawl:
git clone https://github.com/BrowserCash/teracrawl.git
cd teracrawl
npm install
cp .env.example .env
Configure Teracrawl's environment to point to your Browser SERP instance:
BROWSER_API_KEY=your_browser_cash_api_key
UPSTREAM_SERP_URL=http://localhost:8080
Start Teracrawl:
npm run dev
Teracrawl runs on port 8085 by default. Now you can use the combined /crawl endpoint:
curl -X POST http://localhost:8085/crawl \
-H "Content-Type: application/json" \
-d '{
"q": "how to learn python",
"count": 3
}'
This single request searches Google, fetches the top 3 results, and returns their content as Markdown, making it a powerful building block for research automation.
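If you prefer calling the pipeline from code rather than curl, the same request translates directly to fetch. The exact response shape is defined by Teracrawl, so this sketch simply returns the parsed JSON rather than assuming field names:

// searchAndCrawl.js - query the combined Teracrawl /crawl endpoint (sketch)
const crawl = async (query, count = 3) => {
  const response = await fetch('http://localhost:8085/crawl', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ q: query, count })
  });
  if (!response.ok) {
    throw new Error(`Crawl failed: HTTP ${response.status}`);
  }
  return response.json(); // see the Teracrawl repository for the response schema
};

// Usage
const research = await crawl('how to learn python');
console.log(research);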
Common Use Cases for Browser SERP
Understanding where Browser SERP excels helps you maximize its value. Here are the most popular applications.
SEO rank tracking:
Monitor your website's position for target keywords over time. Schedule regular searches and track ranking changes.
const trackRanking = async (keyword, targetDomain) => {
  const results = await searchGoogle(keyword, 20);
  const position = results.findIndex(r =>
    r.url.includes(targetDomain)
  );
  return position === -1 ? 'Not in top 20' : position + 1;
};
Competitor analysis:
Discover what content ranks for your target keywords. Analyze competitor strategies and identify content gaps.
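As a minimal sketch (reusing the searchGoogle helper from Step 4), you can tally which domains appear most often across a set of target keywords:

// Count which domains dominate the results for your target keywords
const findCompetitors = async (keywords) => {
  const domainCounts = {};
  for (const keyword of keywords) {
    const results = await searchGoogle(keyword, 10);
    for (const { url } of results) {
      const domain = new URL(url).hostname;
      domainCounts[domain] = (domainCounts[domain] || 0) + 1;
    }
  }
  // Sort by how often each domain ranks
  return Object.entries(domainCounts).sort((a, b) => b[1] - a[1]);
};

// Usage
console.log(await findCompetitors(['serp api', 'rank tracker', 'google scraper']));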
AI agent web access:
Give your AI assistants real-time web search capabilities. Browser SERP provides the data layer for intelligent research agents.
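Most function-calling LLM APIs accept a JSON-schema style tool definition plus a handler your code runs when the model requests a search. A framework-agnostic sketch built on the searchGoogle helper (the tool name and schema are illustrative):

// Generic "web_search" tool definition and handler (names are illustrative)
const webSearchTool = {
  name: 'web_search',
  description: 'Search Google and return the top organic results',
  parameters: {
    type: 'object',
    properties: {
      query: { type: 'string', description: 'The search query' },
      count: { type: 'integer', description: 'Number of results (max 20)' }
    },
    required: ['query']
  }
};

// Run this when the model invokes the tool, then pass the string back as tool output
async function handleWebSearch({ query, count = 5 }) {
  const results = await searchGoogle(query, count);
  return results.map(r => `${r.title}\n${r.url}\n${r.description}`).join('\n\n');
}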
Market research:
Track trending topics and search patterns. Monitor brand mentions and sentiment across search results.
Lead generation:
Find businesses and contact information through targeted searches. Extract company data from search results for outreach campaigns.
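Google's search operators pass through the q parameter unchanged, so a targeted lead query is just a normal request. A sketch reusing searchGoogle (the operator query is only an example):

// Collect candidate lead URLs from an operator-based search
const findLeads = async () => {
  const results = await searchGoogle('site:linkedin.com/company "fintech" "Berlin"', 20);
  return results.map(r => ({ company: r.title, url: r.url }));
};

console.log(await findLeads());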
Troubleshooting Common Issues
Even with a well-designed tool, issues can arise. Here's how to diagnose and fix the most common problems.
Issue: Server won't start
Check your Node.js version first:
node --version
Browser SERP requires Node.js 18 or higher. Upgrade if needed.
Also verify your .env file exists and contains a valid BROWSER_API_KEY.
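A quick shell check from the project root covers both conditions:

# Should list the file and print your BROWSER_API_KEY line
ls .env && grep BROWSER_API_KEY .env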
Issue: Requests timing out
Browser pool exhaustion often causes timeouts. Check your pool stats:
curl http://localhost:8080/stats
If available is consistently 0, increase your SERP_POOL_SIZE setting.
Issue: Empty or partial results
Some queries return fewer results than requested. This usually reflects Google's actual result count for that query.
Also check the country parameter. Some searches are heavily geo-targeted.
Issue: 429 Too Many Requests
You've hit the rate limit. Either reduce request frequency or increase RATE_LIMIT_MAX.
For high-volume applications, consider running multiple instances behind a load balancer.
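If your traffic is bursty rather than genuinely high-volume, a client-side throttle keeps you under the limit without touching the server. A minimal sketch using the searchGoogle helper (the interval is tuned for the default limit of 100 requests per minute):

// Space out searches so bursts stay under the rate limit
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function throttledSearchAll(queries, minIntervalMs = 700) {
  const results = [];
  for (const query of queries) {
    results.push(await searchGoogle(query));
    await sleep(minIntervalMs); // roughly 85 requests per minute
  }
  return results;
}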
Issue: Browser.cash credits depleting quickly
Each active browser session consumes credits over time. Optimize by:
- Reducing pool size during low-traffic periods
- Implementing request caching for repeated queries
- Using shorter session timeouts
Performance Optimization Tips
Getting the most from Browser SERP requires thoughtful configuration. These tips help maximize throughput while minimizing costs.
Right-size your pool:
Start small and scale based on actual traffic. A pool of 3 sessions handles approximately 50-100 requests per minute comfortably.
Monitor your stats endpoint to find the optimal size.
Implement request caching:
Many search queries repeat frequently. Cache results for 15-30 minutes to reduce redundant requests.
const cache = new Map();
const CACHE_TTL = 15 * 60 * 1000; // 15 minutes

const cachedSearch = async (query) => {
  const cacheKey = query.toLowerCase();
  const cached = cache.get(cacheKey);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.results;
  }
  const results = await searchGoogle(query);
  cache.set(cacheKey, { results, timestamp: Date.now() });
  return results;
};
This simple caching layer dramatically reduces API calls for popular queries.
Batch related queries:
If you need results for multiple related keywords, batch them close together. The warm browser pool serves consecutive requests faster.
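A small concurrency cap matched to your pool size keeps every warm session busy without queueing more requests than the pool can serve. A sketch reusing searchGoogle, assuming the default pool of 3:

// Run related queries in batches sized to the browser pool
async function batchSearch(queries, concurrency = 3) {
  const results = [];
  for (let i = 0; i < queries.length; i += concurrency) {
    const batch = queries.slice(i, i + concurrency);
    // Each batch runs in parallel against the warm pool
    results.push(...await Promise.all(batch.map(q => searchGoogle(q))));
  }
  return results;
}

// Usage
const related = await batchSearch(['node http client', 'node fetch api', 'node axios vs fetch']);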
Use appropriate result counts:
Only request the results you need. Asking for 20 results when you only use 5 wastes resources and slows response times.
Security Best Practices
Running any API service requires attention to security. Follow these practices to protect your Browser SERP deployment.
Never expose API keys:
Keep your BROWSER_API_KEY in environment variables, never in code. Use secrets management for production deployments.
Restrict CORS origins:
In production, set ALLOWED_ORIGINS to only your application domains:
ALLOWED_ORIGINS=https://yourdomain.com,https://app.yourdomain.com
Use HTTPS in production:
Place Browser SERP behind a reverse proxy (nginx, Caddy) with TLS termination. Never expose the raw HTTP endpoint publicly.
Implement authentication:
For multi-tenant deployments, add an authentication layer. API keys or JWT tokens work well.
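A thin proxy in front of Browser SERP is often the simplest place to add this. A minimal Express sketch that checks a static API key header before forwarding requests (the header name and key handling are illustrative, not part of Browser SERP itself):

// auth-proxy.js - require a client API key before forwarding to Browser SERP
import express from 'express';

const app = express();
app.use(express.json());

// Comma-separated list of accepted client keys, e.g. CLIENT_API_KEYS=abc123,def456
const API_KEYS = new Set((process.env.CLIENT_API_KEYS || '').split(','));

app.post('/search', async (req, res) => {
  if (!API_KEYS.has(req.get('x-api-key'))) {
    return res.status(401).json({ error: 'Invalid API key' });
  }
  // Forward the validated request to the internal Browser SERP instance
  const upstream = await fetch('http://localhost:8080/api/v1/search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req.body)
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(3001, () => console.log('Auth proxy listening on port 3001'));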
Monitor for abuse:
Watch your logs for unusual patterns. Sudden spikes might indicate abuse or attacks.
Comparing Browser SERP to Paid Alternatives
Understanding how Browser SERP stacks up against commercial options helps inform your decision.
| Feature | Browser SERP | SerpApi | Bright Data |
|---|---|---|---|
| Pricing model | Browser.cash credits | Per query | Per query |
| Self-hosted | Yes | No | No |
| Open source | Yes | No | No |
| Setup complexity | Medium | Low | Low |
| Data ownership | Full | Limited | Limited |
| Customization | Unlimited | API limits | API limits |
When to choose Browser SERP:
- You need full control over infrastructure
- Data privacy is critical
- You want to avoid per-query pricing
- You need custom modifications
When to consider paid alternatives:
- You need zero setup time
- You lack infrastructure expertise
- Query volumes are low and predictable
- Support SLAs are required
For most developers building search-powered applications, Browser SERP offers the best balance of cost, control, and capability.
Conclusion
Browser SERP transforms Google search data access from expensive and complicated to affordable and straightforward. You now have everything needed to deploy your own SERP API.
The five steps we covered handle the complete setup process:
- Clone and install the repository
- Configure environment variables
- Start the server
- Make search requests
- Monitor and optimize performance
Remember to scale your browser pool based on actual usage patterns. Start conservative and adjust based on the stats endpoint data.
For advanced use cases, combine Browser SERP with Teracrawl to build complete search-to-content pipelines. This combination powers sophisticated AI applications and research automation.
The open-source nature means you can modify and extend Browser SERP for your specific needs. Check the GitHub repository for updates and community contributions.
Advanced Configuration Options
Beyond the basic setup, Browser SERP supports several advanced configurations for power users.
Custom logging levels:
Set LOG_LEVEL to "debug" during development to see detailed request traces:
LOG_LEVEL=debug
Debug output includes browser pool activity, request timing, and response parsing details. Switch back to "info" for production to reduce log volume.
Rate limiting customization:
The default rate limit protects your service from abuse. Adjust RATE_LIMIT_MAX based on your client needs:
RATE_LIMIT_MAX=50 # Strict limit for public APIs
RATE_LIMIT_MAX=500 # Generous limit for internal services
Rate limits apply per IP address. Legitimate users behind shared IPs might need higher limits.
CORS configuration for web applications:
When building browser-based applications, configure CORS properly:
ALLOWED_ORIGINS=https://myapp.com,https://staging.myapp.com
Comma-separate multiple origins. The wildcard (*) allows all origins but reduces security.
Pool size optimization:
Your optimal pool size depends on traffic patterns and budget. Use this formula as a starting point:
Optimal pool size = (Average concurrent requests) × 1.5
Monitor the stats endpoint during peak hours. If available frequently hits 0, increase pool size. If it's always above 2, you might reduce it.
Building a Complete Search Application
Let's build a practical example that demonstrates Browser SERP in a real application context.
This example creates a simple keyword research tool that finds related search terms.
Project structure:
keyword-tool/
├── server.js
├── package.json
└── .env
package.json:
{
  "name": "keyword-tool",
  "version": "1.0.0",
  "type": "module",
  "dependencies": {
    "express": "^4.18.2"
  }
}
server.js:
import express from 'express';

const app = express();
app.use(express.json());

const SERP_URL = process.env.SERP_URL || 'http://localhost:8080';

async function searchKeywords(baseKeyword) {
  // Generate related keyword variations
  const modifiers = [
    'how to', 'best', 'vs', 'alternative',
    'tutorial', 'guide', 'examples'
  ];
  const results = [];
  for (const modifier of modifiers) {
    const query = `${modifier} ${baseKeyword}`;
    const response = await fetch(`${SERP_URL}/api/v1/search`, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ q: query, count: 3, country: 'us' })
    });
    const data = await response.json();
    results.push({
      keyword: query,
      topResults: data.web.results.map(r => r.title)
    });
  }
  return results;
}

app.post('/research', async (req, res) => {
  const { keyword } = req.body;
  const results = await searchKeywords(keyword);
  res.json({ keyword, research: results });
});

app.listen(3000, () => {
  console.log('Keyword tool running on port 3000');
});
This simple application expands a base keyword into multiple search variations. Each variation returns the top 3 result titles for content inspiration.
Run the tool and test it:
curl -X POST http://localhost:3000/research \
-H "Content-Type: application/json" \
-d '{"keyword": "python programming"}'
The response reveals what content already ranks for each keyword variation. Use this data to identify content opportunities.
Error Handling Best Practices
Production applications need robust error handling. Browser SERP can fail for various reasons, and your code should handle each gracefully.
Implementing retry logic:
async function searchWithRetry(query, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch('http://localhost:8080/api/v1/search', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ q: query, count: 5, country: 'us' }),
        signal: AbortSignal.timeout(10000) // 10 second timeout
      });
      if (!response.ok) {
        throw new Error(`HTTP ${response.status}`);
      }
      return await response.json();
    } catch (error) {
      console.error(`Attempt ${attempt} failed:`, error.message);
      if (attempt === maxRetries) {
        throw error;
      }
      // Exponential backoff: 1s, 2s, 4s...
      await new Promise(r => setTimeout(r, 1000 * 2 ** (attempt - 1)));
    }
  }
}
This pattern handles temporary failures, network issues, and rate limits. The exponential backoff prevents overwhelming a recovering service.
Graceful degradation:
When Browser SERP is unavailable, provide fallback behavior:
async function searchWithFallback(query) {
  try {
    return await searchWithRetry(query);
  } catch (error) {
    console.error('Browser SERP unavailable:', error);
    // Return cached results if available (getCachedResults is your own lookup,
    // for example backed by the Map cache from the optimization section)
    const cached = await getCachedResults(query);
    if (cached) {
      return { ...cached, fromCache: true };
    }
    // Return empty results as last resort
    return { web: { results: [], error: 'Service unavailable' } };
  }
}
Your application continues functioning even during outages. Users see degraded results rather than complete failures.
Frequently Asked Questions
How much does Browser SERP cost to run?
Browser SERP itself is free and open-source. The only cost is Browser.cash credits for the remote browser sessions. Browser.cash offers $25 in free credits for new accounts.
Credit consumption depends on your pool size and active time. A pool of 3 sessions running 8 hours daily typically costs $15-30 per month.
Can Browser SERP search engines besides Google?
Currently, Browser SERP focuses on Google search results. The BrowserCash team may add support for other search engines in future releases.
For Bing or DuckDuckGo results, you'd need to modify the source code or use alternative tools.
Is scraping Google search results legal?
Scraping publicly available search results is generally legal for personal and research use. However, commercial use may have restrictions.
Always review Google's Terms of Service for your specific use case. Consider consulting legal counsel for commercial applications.
How do I scale Browser SERP for high traffic?
For high-traffic deployments, run multiple Browser SERP instances behind a load balancer. Each instance maintains its own browser pool.
Use sticky sessions or implement a shared cache layer (Redis) to prevent redundant searches across instances.
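The Map-based cache from the optimization section ports directly to Redis for this. A sketch assuming the redis npm client (v4) and a Redis instance reachable via REDIS_URL:

// Shared Redis cache so every instance reuses previously fetched results
import { createClient } from 'redis';

const redis = createClient({ url: process.env.REDIS_URL });
await redis.connect();

const CACHE_TTL_SECONDS = 15 * 60; // 15 minutes

async function cachedSearchShared(query) {
  const key = `serp:${query.toLowerCase()}`;
  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit);

  const results = await searchGoogle(query); // helper from Step 4
  await redis.set(key, JSON.stringify(results), { EX: CACHE_TTL_SECONDS });
  return results;
}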
What happens if Browser.cash goes down?
Browser SERP requires active Browser.cash connectivity for browser sessions. If Browser.cash experiences downtime, searches will fail.
For mission-critical applications, implement fallback mechanisms or consider running a secondary SERP solution.