15 Ways to Make HTTP Requests in Node.js

HTTP requests are the backbone of modern web development—they let your Node.js apps talk to APIs, fetch data, and communicate with external services.

Whether you're hitting a REST API, scraping websites, or building microservices, you'll need a reliable way to send requests and handle responses.

In this guide, we'll walk through 15 different methods to make HTTP requests in Node.js, from built-in modules to third-party libraries, and even some unconventional approaches that most developers don't know about.

1. Native Fetch API (Built-in Since Node.js 18)

The Fetch API finally landed in Node.js (enabled by default since version 18 and marked stable in Node.js 21), meaning you don't need external packages anymore for basic requests.

// Simple GET request
const response = await fetch('https://api.github.com/users/github');
const data = await response.json();
console.log(data.login);

Here's what makes fetch powerful: it returns a promise that resolves to the Response object. You then call methods like .json(), .text(), or .blob() depending on what you're fetching.

For POST requests, you configure it like this:

const response = await fetch('https://jsonplaceholder.typicode.com/posts', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    title: 'New Post',
    body: 'Content here',
    userId: 1
  })
});

const result = await response.json();

The body needs to be stringified because fetch won't serialize a plain object for you; it expects a string, Buffer, FormData, or stream. The Response object includes properties like status and headers, plus methods to parse the body in different formats.
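
Also note that fetch doesn't reject on HTTP error codes like 404 or 500; the promise only rejects on network failures, so check response.ok or response.status yourself. A minimal sketch:

const response = await fetch('https://api.github.com/users/github');

if (!response.ok) {
  throw new Error(`Request failed with status ${response.status}`);
}

console.log(response.headers.get('content-type'));
const data = await response.json();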

When to use it: Modern projects that need a simple, standard way to make requests without dependencies.

2. HTTP Module (Built-in)

The http module is Node's OG way of making requests. It's low-level, which means more control but also more code.

const http = require('http');

const options = {
  hostname: 'api.github.com',
  path: '/users/github',
  method: 'GET',
  headers: {
    'User-Agent': 'Node.js'
  }
};

Notice we're setting up the request configuration first. GitHub's API requires a User-Agent header; without one, it rejects the request.

const req = http.request(options, (res) => {
  let data = '';
  
  res.on('data', (chunk) => {
    data += chunk;
  });
  
  res.on('end', () => {
    console.log(JSON.parse(data));
  });
});

req.on('error', (error) => {
  console.error(error);
});

req.end();

The http module works with streams. Data comes in chunks, so you accumulate them in the data event listener. Once all chunks arrive, the end event fires and you can parse the complete response.

When to use it: When you need fine-grained control over the request/response cycle or you're building something that needs to avoid dependencies.

3. HTTPS Module (Built-in)

The https module is nearly identical to http, but handles SSL/TLS encryption. Most APIs today use HTTPS, so you'll reach for this more often than plain http.

const https = require('https');

const postData = JSON.stringify({
  title: 'Test Post',
  body: 'Content',
  userId: 1
});

const options = {
  hostname: 'jsonplaceholder.typicode.com',
  path: '/posts',
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(postData)
  }
};

For POST requests, you calculate the Content-Length header using Buffer.byteLength() instead of string.length because multibyte characters would mess up the count.
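
A quick way to see why:

const text = 'café';
console.log(text.length);             // 4 characters
console.log(Buffer.byteLength(text)); // 5 bytes, because é takes two bytes in UTF-8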

const req = https.request(options, (res) => {
  let data = '';
  
  res.on('data', (chunk) => {
    data += chunk;
  });
  
  res.on('end', () => {
    console.log(JSON.parse(data));
  });
});

req.write(postData);
req.end();

The write() method sends the POST body before calling end() to signal you're done.

When to use it: When you need the built-in security of HTTPS without external dependencies.

4. HTTP/2 Module (Built-in)

HTTP/2 brings multiplexing—multiple requests over a single TCP connection. The http2 module lets you leverage this performance boost.

const http2 = require('http2');

const client = http2.connect('https://nghttp2.org');

First, establish a session with the server. This persistent connection can handle multiple streams.

const req = client.request({
  ':path': '/'
});

req.on('response', (headers) => {
  console.log(headers[':status']);
});

let data = '';
req.on('data', (chunk) => {
  data += chunk;
});

req.on('end', () => {
  console.log(data);
  client.close();
});

req.end();

The :path and :status syntax comes from HTTP/2's use of pseudo-headers. These always start with a colon.
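
The other request pseudo-headers are :method, :scheme, and :authority, and regular headers go in the same object. Here's a minimal POST sketch against a session created the same way (the /httpbin/post path is an assumption for illustration):

const postBody = JSON.stringify({ hello: 'http2' });

const post = client.request({
  ':method': 'POST',
  ':path': '/httpbin/post', // assumed endpoint, purely illustrative
  'content-type': 'application/json',
  'content-length': Buffer.byteLength(postBody)
});

post.setEncoding('utf8');
post.on('data', (chunk) => process.stdout.write(chunk));
post.write(postBody);
post.end();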

When to use it: High-performance apps that need to make multiple requests to the same server, or when the API explicitly supports HTTP/2.

5. Axios

Axios dominated the Node.js ecosystem for years. It wraps the http/https modules with a cleaner API and automatic JSON transformation.

const axios = require('axios');

// GET request
const response = await axios.get('https://api.github.com/users/github');
console.log(response.data.login);

The big win with Axios: response.data is already parsed JSON. No need to call .json() or JSON.parse().

// POST request with interceptors
const instance = axios.create({
  baseURL: 'https://api.example.com',
  timeout: 5000,
  headers: {'X-Custom-Header': 'value'}
});

instance.interceptors.request.use((config) => {
  config.headers.Authorization = `Bearer ${getToken()}`;
  return config;
});

const response = await instance.post('/users', {
  name: 'John',
  email: 'john@example.com'
});

Interceptors let you modify requests before they're sent. Perfect for adding auth tokens or logging every request.
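
Response interceptors work the same way in the other direction, which is handy for centralized logging or error handling. A small sketch:

instance.interceptors.response.use(
  (response) => response,
  (error) => {
    // Runs for any non-2xx response or network failure
    console.error(`Request to ${error.config?.url} failed: ${error.message}`);
    return Promise.reject(error);
  }
);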

When to use it: Projects that need interceptors, automatic retries, or request/response transformation.

6. Got

Got is a modern alternative to Axios that was designed for Node.js from the ground up (it doesn't target browsers at all).

const got = require('got'); // works with Got v11 and earlier; v12+ is ESM-only, so use `import got from 'got'` there

const response = await got('https://api.github.com/users/github', {
  responseType: 'json'
});

console.log(response.body.login);

Setting responseType: 'json' automatically parses the response. The data lives in response.body, not response.data.

// Streaming large files
const fs = require('fs');
const stream = got.stream('https://example.com/large-file.zip');

stream.pipe(fs.createWriteStream('file.zip'));

Got's streaming support is built-in and works seamlessly with Node.js streams. You don't buffer the entire response in memory.
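
Got also ships with configurable retry behavior, so transient failures don't need an external retry library. A minimal sketch (the URL and limit are just example values):

const response = await got('https://api.example.com/flaky-endpoint', {
  retry: { limit: 3 }, // retry failed requests up to 3 times
  responseType: 'json'
});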

When to use it: When you need better performance than Axios and want built-in retry logic.

7. Undici Request Method

Undici is Node's official HTTP client built from scratch. It's what powers the native fetch under the hood.

const { request } = require('undici');

const { statusCode, headers, body } = await request('https://api.github.com/users/github', {
  headers: { 'user-agent': 'undici' } // GitHub's API rejects requests without a User-Agent
});

Undici's request() returns a promise that resolves to an object with statusCode, headers, and body. The body is a stream.

let data = '';
for await (const chunk of body) {
  data += chunk;
}

const parsed = JSON.parse(data);
console.log(parsed.login);

Since body is an async iterable, you use for await...of to consume it. This approach is memory-efficient for large responses.
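
If you don't need to stream the response, undici's body also exposes mixin helpers like .json() and .text(), so you can skip the manual loop:

const { body } = await request('https://api.github.com/users/github', {
  headers: { 'user-agent': 'undici' }
});

console.log((await body.json()).login);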

When to use it: Maximum performance and you don't mind working with streams.

8. Undici Stream Method

Undici's stream method gives you the raw response stream with zero buffering overhead.

const { stream } = require('undici');
const fs = require('fs');

const writeStream = fs.createWriteStream('output.json');

await stream(
  'https://api.github.com/users/github',
  { opaque: writeStream, headers: { 'user-agent': 'undici' } },
  ({ opaque }) => opaque
);

The opaque parameter is a trick to pass your own data through the pipeline. Here, we're piping the response directly to a file without loading it into memory.

// Custom stream processing
const { Writable } = require('stream');

await stream(
  'https://api.github.com/events',
  { opaque: null, headers: { 'user-agent': 'undici' } },
  ({ statusCode, headers, opaque }) => {
    // The factory must return a writable stream;
    // undici pipes the response body into it
    return new Writable({
      write(chunk, encoding, callback) {
        process.stdout.write(chunk);
        callback();
      }
    });
  }
);

The factory receives the status code and headers up front, and the writable stream you return gets each chunk as it arrives. Perfect for processing large datasets on the fly.

When to use it: Streaming large files or processing data incrementally.

9. node-fetch

node-fetch brought the browser's Fetch API to Node.js before it was native. It's still useful for projects that need to support older Node versions.

const fetch = require('node-fetch'); // node-fetch v2; v3 is ESM-only, so use `import fetch from 'node-fetch'` there

const response = await fetch('https://api.github.com/users/github');
const data = await response.json();

The API is identical to the browser's fetch, which makes code portable between frontend and backend.

// Upload with FormData
const fs = require('fs');
const FormData = require('form-data');
const form = new FormData();
form.append('file', fs.createReadStream('document.pdf'));

const response = await fetch('https://api.example.com/upload', {
  method: 'POST',
  body: form
});

node-fetch works with the form-data package for multipart uploads. The form automatically sets the correct Content-Type header with boundary.

When to use it: Projects that need to support Node.js < 18 or want a lightweight fetch polyfill.

10. SuperAgent

SuperAgent uses a chainable API that some developers find more readable than plain fetch.

const superagent = require('superagent');

const response = await superagent
  .get('https://api.github.com/users/github')
  .set('User-Agent', 'SuperAgent');

console.log(response.body.login);

The chaining syntax reads left-to-right. You chain .set() for headers, .query() for URL params, and .send() for the body.
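
For example, query parameters chain on instead of being hand-built into the URL:

const search = await superagent
  .get('https://api.github.com/search/repositories')
  .query({ q: 'http client', per_page: 5 })
  .set('User-Agent', 'SuperAgent');

console.log(search.body.items.length);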

// Automatic retry on failure
const response = await superagent
  .post('https://api.example.com/data')
  .retry(3)
  .send({ key: 'value' });

SuperAgent's .retry() method automatically retries failed requests. No need for external retry libraries.

When to use it: When you prefer a fluent API and need built-in retry logic.

11. Needle

Needle is a lean HTTP client with only two dependencies. It's designed for situations where bundle size matters.

const needle = require('needle');

const response = await needle('get', 'https://api.github.com/users/github', {
  json: true
});

console.log(response.body.login);

The first argument is the HTTP method as a string. Setting json: true automatically parses JSON responses.

// Automatic decompression
const response = await needle('get', 'https://api.example.com/data', {
  compressed: true,
  follow_max: 5
});

Needle handles gzip/deflate compression automatically with compressed: true. It'll add the Accept-Encoding header and decompress the response.

When to use it: Small projects or Lambda functions where you need to minimize bundle size.

12. Phin

Phin is the ultra-lightweight option—it's less than 100 lines of code. Perfect for serverless where cold start times matter.

const p = require('phin');

const response = await p('https://api.github.com/users/github');
console.log(response.body.toString());

Phin returns the body as a Buffer by default. You call .toString() to convert it to a string.

// Parse JSON automatically
const response = await p({
  url: 'https://api.github.com/users/github',
  parse: 'json'
});

console.log(response.body.login);

Setting parse: 'json' tells Phin to run JSON.parse() on the response. Simple and effective.

When to use it: AWS Lambda, Cloudflare Workers, or any environment where package size directly impacts performance.

13. Bent

Bent is a functional approach to HTTP requests. Created by Mikeal Rogers (the original author of the request library).

const bent = require('bent');

const getJSON = bent('json');
const data = await getJSON('https://api.github.com/users/github');

console.log(data.login);

You create a function that's pre-configured with response types. Here, bent('json') returns a function that always parses JSON.

// Pre-configured POST function
const post = bent('https://api.example.com', 'POST', 'json', 200);

const result = await post('/users', {
  name: 'John',
  email: 'john@example.com'
});

The function only succeeds if the status code is 200. Any other status throws an error. This fail-fast approach prevents silent failures.

When to use it: Functional programming fans or when you want to create reusable request functions.

14. Raw TCP with Net Module (The Nuclear Option)

Here's something most developers never do: make HTTP requests by manually writing to a TCP socket. This is how HTTP actually works under the hood.

const net = require('net');

const client = new net.Socket();
client.connect(80, 'example.com');

First, create a TCP socket and connect to port 80 (for HTTPS on port 443, you'd wrap the connection with the tls module instead).

client.on('connect', () => {
  const request = 
    'GET / HTTP/1.1\r\n' +
    'Host: example.com\r\n' +
    'Connection: close\r\n' +
    '\r\n';
  
  client.write(request);
});

This is the raw HTTP protocol. Each line ends with \r\n, and a blank line signals the end of headers. Miss a \r\n and the server won't understand your request.

let response = '';
client.on('data', (chunk) => {
  response += chunk.toString();
});

client.on('end', () => {
  console.log(response);
});

The response includes headers and body mashed together. You'd need to parse the HTTP response manually to extract status codes and headers.
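
A rough sketch of that manual parsing, once the end event has fired (it ignores details like chunked transfer encoding):

const [head, ...rest] = response.split('\r\n\r\n');
const [statusLine, ...headerLines] = head.split('\r\n');
const body = rest.join('\r\n\r\n');

console.log(statusLine);  // e.g. "HTTP/1.1 200 OK"
console.log(headerLines); // raw header lines like "Content-Type: text/html"
console.log(body.length); // body size in characters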

When to use it: Never in production. But it's a great learning exercise to understand how HTTP works at the protocol level. Also useful for debugging network issues or implementing custom protocols.

15. Child Process with cURL (The Wildcard)

Sometimes you just want to shell out to cURL. Maybe you're dealing with weird TLS requirements or need cURL's specific features.

const { exec } = require('child_process');

exec('curl -s https://api.github.com/users/github', (error, stdout, stderr) => {
  if (error) {
    console.error(error);
    return;
  }
  
  const data = JSON.parse(stdout);
  console.log(data.login);
});

The -s flag makes cURL silent, suppressing progress bars. You get just the response body in stdout.

// POST with data
const payload = JSON.stringify({ title: 'Test', body: 'Content' });

exec(`curl -s -X POST https://jsonplaceholder.typicode.com/posts \
  -H "Content-Type: application/json" \
  -d '${payload}'`, (error, stdout) => {
  console.log(JSON.parse(stdout));
});

Watch out for shell injection here. If payload comes from user input, sanitize it first. Better yet, use execFile instead of exec to avoid shell interpretation.
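
A safer sketch using execFile, which hands arguments straight to cURL without a shell in between:

const { execFile } = require('child_process');

const payload = JSON.stringify({ title: 'Test', body: 'Content' });

execFile('curl', [
  '-s',
  '-X', 'POST',
  'https://jsonplaceholder.typicode.com/posts',
  '-H', 'Content-Type: application/json',
  '-d', payload
], (error, stdout) => {
  if (error) {
    console.error(error);
    return;
  }
  console.log(JSON.parse(stdout));
});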

When to use it: Legacy systems where cURL is already installed and configured with specific certificates or proxies that would be painful to replicate in Node.js.

Performance Comparison

Different libraries have wildly different performance characteristics. Based on benchmarks, Undici's stream and pipeline methods are the fastest, hitting 18,000+ requests per second. The native http module with keep-alive clocks in around 9,000 req/s, while Axios sits at the bottom around 5,700 req/s.

But raw speed isn't everything. Axios brings interceptors and automatic retries. Got has better error handling. Fetch is standard across environments. Pick based on your actual needs, not just benchmarks.

Which Method Should You Use?

Here's the decision tree:

Need maximum performance? Use Undici's stream or pipeline methods.

Want something standard and simple? Use native fetch.

Building for older Node versions? Reach for node-fetch or Axios.

Need interceptors or request transformation? Axios is still king here.

Optimizing for bundle size? Phin or Needle are your friends.

Working with streams? Undici or Got handle this best.

Just learning or debugging? Try the http/https modules or even raw TCP to understand the fundamentals.

The ecosystem gives you options. Don't cargo-cult one library because everyone else uses it—match the tool to the job.

Marius Bernard

Marius Bernard is a Product Advisor, Technical SEO, & Brand Ambassador at Roundproxies. He was the lead author for the SEO chapter of the 2024 Web and a reviewer for the 2023 SEO chapter.