Making HTTP requests in Go is straightforward thanks to the net/http
package in the standard library. Whether you're building a web scraper, consuming APIs, or connecting microservices, mastering HTTP requests in Go will save you from countless headaches. This guide covers everything from basic GET requests to advanced patterns like connection pooling and retry logic that most tutorials skip.
The bare minimum: Your first GET request
The simplest way to fetch data from a URL is with `http.Get()`. Here's the absolute minimum:
```go
package main

import (
    "fmt"
    "io"
    "log"
    "net/http"
)

func main() {
    resp, err := http.Get("https://api.github.com/users/golang")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(body))
}
```
This works, but there are three critical things happening here that beginners often mess up:
First, always check the error. Network requests fail all the time—bad connections, DNS issues, server timeouts. Your code needs to handle this gracefully.
Second, close the response body with `defer resp.Body.Close()`. If you don't, you'll leak connections. The default HTTP client maintains a connection pool, and unclosed bodies prevent connection reuse. You'll watch your application slowly consume file descriptors until it crashes.
Third, read the entire body before closing it. The HTTP client only returns connections to the pool after the body is fully read. If you close the body without reading it completely, the connection gets discarded instead of reused.
POST requests with JSON data
Most APIs expect JSON these days. Here's how to POST JSON data properly:
```go
package main

import (
    "bytes"
    "encoding/json"
    "fmt"
    "io"
    "log"
    "net/http"
)

type User struct {
    Name  string `json:"name"`
    Email string `json:"email"`
}

func main() {
    user := User{
        Name:  "Jane Doe",
        Email: "jane@example.com",
    }

    // Marshal the struct to JSON
    jsonData, err := json.Marshal(user)
    if err != nil {
        log.Fatal(err)
    }

    // Create the POST request
    resp, err := http.Post(
        "https://jsonplaceholder.typicode.com/users",
        "application/json",
        bytes.NewBuffer(jsonData),
    )
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Read and display the response
    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Status: %s\n", resp.Status)
    fmt.Println(string(body))
}
```
The `http.Post()` function takes three arguments: the URL, the content type, and an `io.Reader` containing the request body. We wrap our JSON bytes in a `bytes.NewBuffer` to satisfy the `io.Reader` interface.
Pro tip: if you're making multiple requests, don't marshal the same JSON repeatedly. Cache the marshaled bytes and create a new `bytes.NewBuffer` for each request.
Custom requests with headers and methods
Sometimes you need more control—custom headers, different HTTP methods, or query parameters. That's when you reach for `http.NewRequest()`:
```go
package main

import (
    "context"
    "fmt"
    "io"
    "log"
    "net/http"
    "time"
)

func main() {
    // Create a context with timeout
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // Create a custom request
    req, err := http.NewRequestWithContext(
        ctx,
        http.MethodGet,
        "https://api.github.com/repos/golang/go",
        nil,
    )
    if err != nil {
        log.Fatal(err)
    }

    // Add custom headers
    req.Header.Set("User-Agent", "my-app/1.0")
    req.Header.Set("Accept", "application/vnd.github.v3+json")
    req.Header.Set("Authorization", "token YOUR_GITHUB_TOKEN")

    // Execute the request
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("Status: %d\n", resp.StatusCode)
    fmt.Println(string(body))
}
```
Notice we're using `http.NewRequestWithContext()` instead of the plain `http.NewRequest()`. The context version is almost always what you want: it lets you set timeouts and cancel requests mid-flight if needed.
Parsing JSON responses
Reading the raw response body works, but you'll almost always want to unmarshal JSON into Go structs:
```go
package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

type GitHubUser struct {
    Login     string `json:"login"`
    Name      string `json:"name"`
    Followers int    `json:"followers"`
    Bio       string `json:"bio"`
}

func main() {
    resp, err := http.Get("https://api.github.com/users/golang")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Check status code
    if resp.StatusCode != http.StatusOK {
        log.Fatalf("unexpected status code: %d", resp.StatusCode)
    }

    var user GitHubUser
    if err := json.NewDecoder(resp.Body).Decode(&user); err != nil {
        log.Fatal(err)
    }
    fmt.Printf("User: %s\n", user.Name)
    fmt.Printf("Followers: %d\n", user.Followers)
}
```
Here's something most tutorials won't tell you: `json.NewDecoder(resp.Body).Decode()` is better than `io.ReadAll()` followed by `json.Unmarshal()` for response bodies. The decoder streams the JSON, using less memory for large responses. It also reads the body completely, ensuring proper connection reuse.

However, if you need the raw bytes (for logging or retries), read the body first with `io.ReadAll()`, then unmarshal from the byte slice.
Timeout handling: Three ways to do it
Timeouts are critical for production code. Without them, your application will hang indefinitely when services go down. Go offers three levels of timeout control:
1. Client-level timeout (simple but crude)
```go
client := &http.Client{
    Timeout: 10 * time.Second,
}
resp, err := client.Do(req)
```
This timeout applies to the entire request-response cycle: connection establishment, request writing, response headers, and body reading. It's simple but inflexible—every request uses the same timeout.
2. Context timeout (per-request control)
```go
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)

client := &http.Client{}
resp, err := client.Do(req)
```
Context timeouts give you per-request control. You can set different timeouts for different endpoints. The context also propagates through your call stack, letting you cancel downstream operations when a request times out.
Important: If you set both a client timeout and a context timeout, whichever is shorter takes effect. Don't set both unless you have a good reason.
3. Transport-level timeouts (granular control)
```go
transport := &http.Transport{
    DialContext: (&net.Dialer{
        Timeout:   5 * time.Second, // Connection timeout
        KeepAlive: 30 * time.Second,
    }).DialContext,
    TLSHandshakeTimeout:   10 * time.Second, // TLS handshake
    ResponseHeaderTimeout: 10 * time.Second, // Wait for headers
    ExpectContinueTimeout: 1 * time.Second,
    IdleConnTimeout:       90 * time.Second,
}

client := &http.Client{
    Transport: transport,
}
```
Transport timeouts let you control individual phases of the request. This is overkill for most applications, but when you need fine-grained control, nothing else works.
Connection pooling: Making your requests faster
Here's something that catches people off guard: creating a new `http.Client` for every request is terrible for performance. Each client creates its own connection pool, which means you lose all the benefits of connection reuse.
Wrong way (don't do this):
```go
// BAD: Creates a new connection pool every time
func makeRequest(url string) (*http.Response, error) {
    client := &http.Client{} // New client every time!
    return client.Get(url)
}
```
Right way (reuse the client):
```go
// Create one client for your entire application
var httpClient = &http.Client{
    Timeout: 10 * time.Second,
    Transport: &http.Transport{
        MaxIdleConns:        100,
        MaxIdleConnsPerHost: 100,
        IdleConnTimeout:     90 * time.Second,
    },
}

func makeRequest(url string) (*http.Response, error) {
    return httpClient.Get(url)
}
```
The default client and transport use `http.DefaultMaxIdleConnsPerHost`, which is 2: only two idle connections per host are kept in the pool. If you're making concurrent requests to the same API, you'll constantly open new connections. Bump `MaxIdleConnsPerHost` to match your concurrency level:
```go
transport := http.DefaultTransport.(*http.Transport).Clone()
transport.MaxIdleConns = 100
transport.MaxIdleConnsPerHost = 100 // Increase from default of 2
transport.MaxConnsPerHost = 100     // Limit total connections per host

client := &http.Client{
    Transport: transport,
    Timeout:   10 * time.Second,
}
```
One gotcha: if you're making requests to hundreds of different hosts, a large `MaxIdleConns` can consume significant memory. Tune it based on your usage pattern.
Retry logic with exponential backoff
Here's where most Go HTTP tutorials end, but production systems need retry logic. Networks are unreliable. APIs have rate limits. Servers restart. Your code needs to handle temporary failures gracefully.
Here's a simple but effective retry implementation with exponential backoff:
```go
package main

import (
    "context"
    "fmt"
    "io"
    "log"
    "math"
    "math/rand"
    "net/http"
    "time"
)

type RetryConfig struct {
    MaxRetries  int
    InitialWait time.Duration
    MaxWait     time.Duration
}

func makeRequestWithRetry(ctx context.Context, url string, config RetryConfig) ([]byte, error) {
    client := &http.Client{
        Timeout: 10 * time.Second,
    }

    var lastErr error
    for attempt := 0; attempt <= config.MaxRetries; attempt++ {
        req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
        if err != nil {
            return nil, err
        }

        resp, err := client.Do(req)
        if err != nil {
            lastErr = err
            if attempt < config.MaxRetries {
                waitTime := calculateBackoff(attempt, config)
                log.Printf("Request failed (attempt %d/%d): %v. Retrying in %v",
                    attempt+1, config.MaxRetries+1, err, waitTime)
                if err := sleepCtx(ctx, waitTime); err != nil {
                    return nil, err
                }
                continue
            }
            break
        }

        // Retry on 5xx errors and 429 (rate limit)
        if resp.StatusCode >= 500 || resp.StatusCode == http.StatusTooManyRequests {
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close() // close explicitly; a defer would pile up across attempts
            lastErr = fmt.Errorf("server error: %d - %s", resp.StatusCode, string(body))
            if attempt < config.MaxRetries {
                waitTime := calculateBackoff(attempt, config)
                log.Printf("Server error %d (attempt %d/%d). Retrying in %v",
                    resp.StatusCode, attempt+1, config.MaxRetries+1, waitTime)
                if err := sleepCtx(ctx, waitTime); err != nil {
                    return nil, err
                }
                continue
            }
            break
        }

        // Success - read and return the body
        body, err := io.ReadAll(resp.Body)
        resp.Body.Close()
        if err != nil {
            return nil, err
        }
        return body, nil
    }
    return nil, fmt.Errorf("max retries exceeded: %w", lastErr)
}

// sleepCtx waits for d or until ctx is cancelled, whichever comes first.
// A plain time.Sleep would ignore cancellation.
func sleepCtx(ctx context.Context, d time.Duration) error {
    timer := time.NewTimer(d)
    defer timer.Stop()
    select {
    case <-ctx.Done():
        return ctx.Err()
    case <-timer.C:
        return nil
    }
}

func calculateBackoff(attempt int, config RetryConfig) time.Duration {
    // Exponential backoff: initialWait * 2^attempt
    wait := float64(config.InitialWait) * math.Pow(2, float64(attempt))
    // Cap at max wait time
    if wait > float64(config.MaxWait) {
        wait = float64(config.MaxWait)
    }
    // Add jitter (±25% randomness) to avoid thundering herd
    jitter := wait * 0.25 * (rand.Float64()*2 - 1)
    wait += jitter
    return time.Duration(wait)
}

func main() {
    config := RetryConfig{
        MaxRetries:  3,
        InitialWait: 1 * time.Second,
        MaxWait:     30 * time.Second,
    }

    ctx := context.Background()
    body, err := makeRequestWithRetry(ctx, "https://api.github.com/users/golang", config)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(body))
}
```
A few things worth noting about this implementation:
Jitter matters: Without jitter, all your failed requests retry at exactly the same time, potentially causing a thundering herd problem. Adding 25% randomness spreads retries across time.
Don't retry everything: We only retry on network errors, 5xx server errors, and 429 rate limit errors. Retrying 4xx client errors (except 429) is pointless—the request won't suddenly start working.
Context cancellation: The retry loop respects context cancellation. If the parent context is cancelled, the request fails immediately instead of continuing to retry.
For production use, consider a library like `github.com/cenkalti/backoff/v4`, which handles edge cases and provides more sophisticated strategies. But understanding the fundamentals helps you debug issues when things go wrong.
Handling redirects
By default, Go's HTTP client follows up to 10 redirects automatically. Sometimes you want to handle redirects yourself or disable them entirely:
```go
client := &http.Client{
    CheckRedirect: func(req *http.Request, via []*http.Request) error {
        // Return an error to stop following redirects
        if len(via) >= 3 {
            return fmt.Errorf("too many redirects")
        }
        // Log redirect chain
        log.Printf("Redirecting to: %s", req.URL)
        // Return nil to follow the redirect
        return nil
    },
}
```
To disable redirects completely, return `http.ErrUseLastResponse`:
```go
client := &http.Client{
    CheckRedirect: func(req *http.Request, via []*http.Request) error {
        return http.ErrUseLastResponse
    },
}
```
Working with cookies
The HTTP client handles cookies automatically if you want it to:
```go
// Create a cookie jar to store cookies
// (requires: import "net/http/cookiejar")
jar, err := cookiejar.New(nil)
if err != nil {
    log.Fatal(err)
}

client := &http.Client{
    Jar:     jar,
    Timeout: 10 * time.Second,
}

// First request sets cookies
resp, err := client.Get("https://example.com/login")

// Subsequent requests automatically send those cookies
resp, err = client.Get("https://example.com/dashboard")
```
To set cookies manually:
```go
req, _ := http.NewRequest(http.MethodGet, "https://example.com", nil)
req.AddCookie(&http.Cookie{
    Name:  "session_id",
    Value: "abc123",
})
```
Sending multipart form data
File uploads typically use multipart form data. Here's how to send files:
```go
package main

import (
    "bytes"
    "fmt"
    "io"
    "log"
    "mime/multipart"
    "net/http"
    "os"
    "path/filepath"
)

func uploadFile(url, filePath string) error {
    // Open the file
    file, err := os.Open(filePath)
    if err != nil {
        return err
    }
    defer file.Close()

    // Create a buffer to write our multipart form
    var requestBody bytes.Buffer
    writer := multipart.NewWriter(&requestBody)

    // Create a form file field (send the base name, not the full path)
    part, err := writer.CreateFormFile("file", filepath.Base(filePath))
    if err != nil {
        return err
    }

    // Copy file contents to the form field
    if _, err := io.Copy(part, file); err != nil {
        return err
    }

    // Add other form fields if needed
    writer.WriteField("description", "My uploaded file")

    // Close the writer before making the request; this writes
    // the trailing multipart boundary
    if err := writer.Close(); err != nil {
        return err
    }

    // Create the request
    req, err := http.NewRequest(http.MethodPost, url, &requestBody)
    if err != nil {
        return err
    }

    // Set the content type with the boundary
    req.Header.Set("Content-Type", writer.FormDataContentType())

    // Send the request
    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    body, err := io.ReadAll(resp.Body)
    if err != nil {
        return err
    }
    fmt.Printf("Response: %s\n", body)
    return nil
}

func main() {
    if err := uploadFile("https://httpbin.org/post", "example.txt"); err != nil {
        log.Fatal(err)
    }
}
```
The key here is `writer.FormDataContentType()`, which sets the correct `Content-Type` header including the multipart boundary string.
Debugging HTTP requests
When things go wrong, you need visibility into what's actually being sent and received. The `httputil` package helps:
```go
package main

import (
    "fmt"
    "log"
    "net/http"
    "net/http/httputil"
)

func main() {
    req, err := http.NewRequest(http.MethodGet, "https://api.github.com/users/golang", nil)
    if err != nil {
        log.Fatal(err)
    }

    // Dump the request (including headers and body)
    dump, err := httputil.DumpRequestOut(req, true)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("REQUEST:\n%s\n\n", dump)

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Dump the response
    dump, err = httputil.DumpResponse(resp, true)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("RESPONSE:\n%s\n", dump)
}
```
For more detailed tracing of connection establishment, DNS lookups, and TLS handshakes, use the `httptrace` package:
```go
package main

import (
    "context"
    "crypto/tls"
    "fmt"
    "io"
    "log"
    "net/http"
    "net/http/httptrace"
)

func main() {
    req, _ := http.NewRequest(http.MethodGet, "https://api.github.com", nil)

    trace := &httptrace.ClientTrace{
        DNSStart: func(info httptrace.DNSStartInfo) {
            fmt.Printf("DNS lookup started for %s\n", info.Host)
        },
        DNSDone: func(info httptrace.DNSDoneInfo) {
            fmt.Printf("DNS lookup done: %v\n", info.Addrs)
        },
        ConnectStart: func(network, addr string) {
            fmt.Printf("Connecting to %s %s\n", network, addr)
        },
        ConnectDone: func(network, addr string, err error) {
            fmt.Printf("Connected to %s %s\n", network, addr)
        },
        TLSHandshakeStart: func() {
            fmt.Println("TLS handshake started")
        },
        TLSHandshakeDone: func(state tls.ConnectionState, err error) {
            // state.Version is a uint16; tls.VersionName renders it readably
            fmt.Printf("TLS handshake done: %s\n", tls.VersionName(state.Version))
        },
        GotFirstResponseByte: func() {
            fmt.Println("Received first response byte")
        },
    }

    req = req.WithContext(httptrace.WithClientTrace(context.Background(), trace))

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()
    io.Copy(io.Discard, resp.Body) // drain so the connection can be reused
}
```
This level of visibility is invaluable when debugging connection issues, slow requests, or TLS problems.
Common mistakes and how to avoid them
Not reusing HTTP clients: As mentioned earlier, create one client per application, not per request. The connection pool is tied to the transport, which is tied to the client.
Ignoring status codes: Always check `resp.StatusCode` before processing the response. A 200 OK is not the only success code: 201, 202, or 204 can also signal success depending on the API.
Not setting timeouts: The default `http.Client` has no timeout. Your requests will hang forever if the server doesn't respond. Always set a timeout.
Reading the body twice: You can only read `resp.Body` once. If you need the content multiple times, read it into a byte slice first:
```go
body, err := io.ReadAll(resp.Body)
// Now you can use body multiple times
```
Forgetting to check Content-Type: Before unmarshaling JSON, verify the response is actually JSON:
```go
contentType := resp.Header.Get("Content-Type")
if !strings.Contains(contentType, "application/json") {
    return fmt.Errorf("unexpected content type: %s", contentType)
}
```
Wrapping up
Making HTTP requests in Go starts simple with `http.Get()`, but production systems need more: timeouts, retry logic, connection pooling, and proper error handling. The patterns in this guide will handle most real-world scenarios.
The key takeaways: reuse your HTTP clients, always set timeouts, close response bodies, and implement exponential backoff for retries. These aren't optional nice-to-haves—they're the difference between code that works in development and code that survives production.
For more advanced use cases like circuit breakers, rate limiting, or custom connection management, you'll want to look at libraries like `github.com/hashicorp/go-retryablehttp` or `github.com/sony/gobreaker`. But start with the fundamentals here, and you'll understand what those libraries are doing under the hood.