Go + Nginx: Deploy a Go API Behind a Reverse Proxy

Summary
Every Go tutorial ends the same way: http.ListenAndServe(":8080", nil) and “it works!” But in production, your users aren’t hitting port 8080. They’re hitting port 80 or 443 through Nginx, and there’s a gap between “my Go server runs” and “my Go server runs behind Nginx in production.”
That gap is full of traps — permission errors, lost client IPs, 502s when your app restarts, and rate limiting that blocks the wrong address. We’ll hit every one of these on purpose so you understand the fixes.
What We’re Building
A JSON API in Go that serves deployment status info — the kind of internal tool every DevOps team builds. We’ll put it behind Nginx as a reverse proxy.
The journey:
- Build a bare Go HTTP server
- Try to bind to port 80 (the permission trap)
- Add Nginx as a reverse proxy — and get a 502
- Fix it, then discover client IPs are all 127.0.0.1
- Fix header forwarding with X-Forwarded-For
- Add rate limiting — and hit the 127.0.0.1 problem again
- Add graceful shutdown so deploys don’t drop connections
Prerequisites
- Go 1.21+ installed (go version)
- Nginx installed (nginx -v)
- A Linux server (local VM, EC2 instance, or WSL)
- Root or sudo access (for Nginx and port 80)
Step 1: Build a Bare Go API
What: A minimal JSON API that returns deployment status.
Why: This is the baseline — a Go server that works on its own before we add any proxy complexity.
Create your project:
mkdir go-nginx-api && cd go-nginx-api
go mod init go-nginx-api
main.go
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

type DeployStatus struct {
	Service   string `json:"service"`
	Version   string `json:"version"`
	Status    string `json:"status"`
	Timestamp string `json:"timestamp"`
	ClientIP  string `json:"client_ip"`
}

func main() {
	http.HandleFunc("/api/status", func(w http.ResponseWriter, r *http.Request) {
		status := DeployStatus{
			Service:   "auth-api",
			Version:   "v1.4.2",
			Status:    "healthy",
			Timestamp: time.Now().Format(time.RFC3339),
			ClientIP:  r.RemoteAddr,
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status)
	})

	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})

	log.Println("starting server on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Two endpoints: /api/status returns JSON with deployment info and the client’s IP address, /health returns a simple “ok” for health checks. We include ClientIP in the response so we can see what Go thinks the client’s address is — this becomes important later.
Run it:
go run main.go
Test it in another terminal:
curl http://localhost:8080/api/status
Expected output:
{
  "service": "auth-api",
  "version": "v1.4.2",
  "status": "healthy",
  "timestamp": "2026-02-15T14:30:00Z",
  "client_ip": "127.0.0.1:54321"
}
The client_ip shows 127.0.0.1 with a random port because you’re curling from localhost. On a remote server, this would show the actual client IP. Remember this — it changes when we add Nginx.
Step 2: Try Port 80 (The Permission Trap)
What: Bind the Go server directly to port 80 so users don’t need :8080 in the URL.
Why: This is what every beginner tries first. Ports below 1024 are privileged on Linux — only root can bind to them. Your Go binary running as a normal user will crash.
Change the listen address:
main.go — updated:
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

type DeployStatus struct {
	Service   string `json:"service"`
	Version   string `json:"version"`
	Status    string `json:"status"`
	Timestamp string `json:"timestamp"`
	ClientIP  string `json:"client_ip"`
}

func main() {
	http.HandleFunc("/api/status", func(w http.ResponseWriter, r *http.Request) {
		status := DeployStatus{
			Service:   "auth-api",
			Version:   "v1.4.2",
			Status:    "healthy",
			Timestamp: time.Now().Format(time.RFC3339),
			ClientIP:  r.RemoteAddr,
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status)
	})

	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})

	log.Println("starting server on :80")
	log.Fatal(http.ListenAndServe(":80", nil))
}
Run it:
go run main.go
Expected output:
2026/02/15 14:30:00 starting server on :80
2026/02/15 14:30:00 listen tcp :80: bind: permission denied
exit status 1
Permission denied. You could fix this with sudo go run main.go or setcap 'cap_net_bind_service=+ep' on the binary, but both are bad ideas. Running your application as root means a vulnerability in your code gives an attacker root access. Using setcap is fragile — you have to reapply it after every build.
The real fix: keep Go on a high port (8080) and put Nginx in front on port 80. Nginx is designed to run as root for port binding and then drop privileges to a www-data worker. Your Go app stays unprivileged.
Change it back to :8080 before continuing:
main.go — updated:
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"time"
)

type DeployStatus struct {
	Service   string `json:"service"`
	Version   string `json:"version"`
	Status    string `json:"status"`
	Timestamp string `json:"timestamp"`
	ClientIP  string `json:"client_ip"`
}

func main() {
	http.HandleFunc("/api/status", func(w http.ResponseWriter, r *http.Request) {
		status := DeployStatus{
			Service:   "auth-api",
			Version:   "v1.4.2",
			Status:    "healthy",
			Timestamp: time.Now().Format(time.RFC3339),
			ClientIP:  r.RemoteAddr,
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status)
	})

	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})

	log.Println("starting server on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
Step 3: Add Nginx as a Reverse Proxy (And Get a 502)
What: Configure Nginx to listen on port 80 and forward requests to Go on port 8080.
Why: This is the standard production pattern. But if you configure Nginx before starting Go, you’ll get a 502 Bad Gateway — Nginx can’t connect to a backend that isn’t running.
Create the Nginx config:
/etc/nginx/sites-available/go-api
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}
Enable it and reload Nginx:
sudo ln -sf /etc/nginx/sites-available/go-api /etc/nginx/sites-enabled/go-api
sudo rm -f /etc/nginx/sites-enabled/default
sudo nginx -t
sudo systemctl reload nginx
Now test it without starting the Go server:
curl http://localhost/api/status
Expected output:
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<center>nginx/1.24.0</center>
</body>
</html>
502 Bad Gateway. Nginx tried to connect to 127.0.0.1:8080 and nobody was listening. This is the most common Nginx error in production — it means your backend is down, crashed, or hasn’t started yet.
Now start the Go server in a second terminal:
go run main.go
And test again:
curl http://localhost/api/status
Expected output:
{
  "service": "auth-api",
  "version": "v1.4.2",
  "status": "healthy",
  "timestamp": "2026-02-15T14:35:00Z",
  "client_ip": "127.0.0.1:54322"
}
It works! But look at client_ip — it’s 127.0.0.1. Even if you curl from a different machine, the IP will always be 127.0.0.1 because Nginx is making the connection to Go, not the actual client. Every request looks like it comes from localhost.
This breaks logging, rate limiting, geolocation, and any feature that depends on knowing who the real client is.
Step 4: Fix Client IP Forwarding
What: Pass the real client IP through Nginx to Go using the X-Forwarded-For header.
Why: Without this, every request to your Go app looks like it came from 127.0.0.1. You can’t do rate limiting per user, you can’t log real IPs, and your abuse detection is blind.
Update the Nginx config:
/etc/nginx/sites-available/go-api — updated:
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;

        # Forward real client info
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Reload Nginx:
sudo nginx -t && sudo systemctl reload nginx
Now update Go to read the forwarded IP:
main.go — updated:
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net"
	"net/http"
	"strings"
	"time"
)

type DeployStatus struct {
	Service   string `json:"service"`
	Version   string `json:"version"`
	Status    string `json:"status"`
	Timestamp string `json:"timestamp"`
	ClientIP  string `json:"client_ip"`
}

// realIP extracts the client's real IP from proxy headers.
// It trusts X-Real-IP and X-Forwarded-For because Nginx sets them.
func realIP(r *http.Request) string {
	// X-Real-IP is set by Nginx to the direct client
	if ip := r.Header.Get("X-Real-IP"); ip != "" {
		return ip
	}
	// X-Forwarded-For can contain multiple IPs: client, proxy1, proxy2
	// The first one is the original client
	if forwarded := r.Header.Get("X-Forwarded-For"); forwarded != "" {
		parts := strings.Split(forwarded, ",")
		return strings.TrimSpace(parts[0])
	}
	// Fallback to direct connection IP (strips the port)
	ip, _, _ := net.SplitHostPort(r.RemoteAddr)
	return ip
}

func main() {
	http.HandleFunc("/api/status", func(w http.ResponseWriter, r *http.Request) {
		status := DeployStatus{
			Service:   "auth-api",
			Version:   "v1.4.2",
			Status:    "healthy",
			Timestamp: time.Now().Format(time.RFC3339),
			ClientIP:  realIP(r),
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status)
	})

	http.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})

	log.Println("starting server on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
The realIP function checks X-Real-IP first (set by Nginx to the direct client), falls back to X-Forwarded-For (which can chain through multiple proxies — take the first one), and finally falls back to RemoteAddr if neither header exists.
Restart Go and test:
go run main.go
curl http://localhost/api/status
Expected output:
{
  "service": "auth-api",
  "version": "v1.4.2",
  "status": "healthy",
  "timestamp": "2026-02-15T14:40:00Z",
  "client_ip": "127.0.0.1"
}
From localhost the IP is still 127.0.0.1 — that’s correct, it’s your real IP. But now if you curl from a different machine, you’ll see the actual remote IP instead of 127.0.0.1. The difference is that Go is now reading the real IP from the header instead of the TCP connection.
But there’s a security problem here. What if a malicious client sends a fake X-Forwarded-For header directly to Nginx? They could spoof their IP. Nginx’s $proxy_add_x_forwarded_for appends to the existing header instead of replacing it — so if a client sends X-Forwarded-For: 1.2.3.4, the header becomes 1.2.3.4, REAL_IP. Our realIP function takes the first entry, which is the spoofed one.
The fix is to have Nginx overwrite the header instead of appending:
/etc/nginx/sites-available/go-api — updated:
server {
    listen 80;
    server_name _;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr; # overwrite, don't append
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Changed $proxy_add_x_forwarded_for to $remote_addr. Now Nginx sets the header to the direct client’s IP regardless of what the incoming request says. This is the right choice when Nginx is the only proxy in your stack. If you have multiple trusted proxies (like a CDN in front of Nginx), you’d keep $proxy_add_x_forwarded_for and adjust the trust logic.
sudo nginx -t && sudo systemctl reload nginx
Step 5: Add Rate Limiting (And Hit the 127.0.0.1 Problem Again)
What: Add rate limiting in Nginx to prevent abuse.
Why: Without rate limiting, a single client can hammer your API with thousands of requests per second. Nginx can throttle this before requests even reach Go, saving your app from overload. But the default rate limiting key is $remote_addr — and if you’re not careful, it limits the wrong thing.
First, let’s do it the naive way:
/etc/nginx/sites-available/go-api — updated (broken version):
limit_req_zone $remote_addr zone=api_limit:10m rate=5r/s;

server {
    listen 80;
    server_name _;

    location / {
        limit_req zone=api_limit burst=10 nodelay;
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
This config creates a rate limit zone called api_limit: 5 requests per second per IP, with a burst of 10. The zone uses 10MB of shared memory, enough for about 160,000 unique IPs.
Reload and hammer it:
sudo nginx -t && sudo systemctl reload nginx
for i in $(seq 1 20); do
curl -s -o /dev/null -w "%{http_code} " http://localhost/api/status
done
echo
Expected output:
200 200 200 200 200 200 200 200 200 200 200 200 200 200 200 503 503 503 503 503
The first 15 succeed (5 per second rate + 10 burst), then you get 503s. Rate limiting works. But here’s the problem — this limits by $remote_addr, which is the IP of whoever connects to Nginx directly. On a single server, that’s fine. But if you put a load balancer or CDN in front of Nginx, $remote_addr becomes the load balancer’s IP, not the client’s. All clients share one rate limit and hit the cap almost instantly.
The fix is to rate limit by the real client IP:
/etc/nginx/sites-available/go-api — updated (fixed version):
# Use a map to get the real client IP for rate limiting
map $http_x_forwarded_for $rate_limit_key {
    default $remote_addr;
    "~^(?P<ip>[^,]+)" $ip;
}

limit_req_zone $rate_limit_key zone=api_limit:10m rate=5r/s;

server {
    listen 80;
    server_name _;

    location / {
        limit_req zone=api_limit burst=10 nodelay;
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Custom error page for rate limiting
    error_page 503 @rate_limited;
    location @rate_limited {
        default_type application/json;
        return 429 '{"error": "rate limit exceeded", "retry_after": "1s"}';
    }
}
The map block extracts the first IP from X-Forwarded-For (if it exists) and uses it as the rate limit key. If there’s no X-Forwarded-For, it falls back to $remote_addr. We also changed the 503 error to return a JSON 429 with a retry hint — much friendlier for API consumers.
sudo nginx -t && sudo systemctl reload nginx
Test the rate limit error:
for i in $(seq 1 20); do
code=$(curl -s -o /tmp/resp -w "%{http_code}" http://localhost/api/status)
if [ "$code" != "200" ]; then
echo "Request $i: $code $(cat /tmp/resp)"
break
fi
done
Expected output:
Request 16: 429 {"error": "rate limit exceeded", "retry_after": "1s"}
Now clients get a clear JSON error when they’re rate limited, and the limiting is per real client IP even behind a proxy.
Step 6: Handle Upstream Failures
What: Configure Nginx to handle Go app crashes gracefully instead of showing ugly 502 errors.
Why: During deploys, your Go app stops and restarts. Without proper Nginx config, users see 502 Bad Gateway for a few seconds. We can do better — serve a proper error response and add health checking.
Update the Nginx config:
/etc/nginx/sites-available/go-api — updated:
map $http_x_forwarded_for $rate_limit_key {
    default $remote_addr;
    "~^(?P<ip>[^,]+)" $ip;
}

limit_req_zone $rate_limit_key zone=api_limit:10m rate=5r/s;

upstream go_backend {
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name _;

    location / {
        limit_req zone=api_limit burst=10 nodelay;
        proxy_pass http://go_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts — don't wait forever for a stuck backend
        proxy_connect_timeout 5s;
        proxy_send_timeout 10s;
        proxy_read_timeout 30s;

        # Buffer settings
        proxy_buffering on;
        proxy_buffer_size 4k;
        proxy_buffers 8 4k;
    }

    # Health check endpoint — limit_req is scoped to location /,
    # so requests here are never rate limited
    location /health {
        proxy_pass http://go_backend;
        proxy_set_header Host $host;
    }

    error_page 502 @backend_down;
    location @backend_down {
        default_type application/json;
        return 502 '{"error": "service temporarily unavailable", "retry_after": "5s"}';
    }

    error_page 503 @rate_limited;
    location @rate_limited {
        default_type application/json;
        return 429 '{"error": "rate limit exceeded", "retry_after": "1s"}';
    }
}
The upstream block with max_fails=3 fail_timeout=30s tells Nginx: if the backend fails 3 times, stop sending requests to it for 30 seconds before trying again. This prevents Nginx from hammering a crashed backend.
proxy_connect_timeout 5s means Nginx waits only 5 seconds to establish a connection. Without this, Nginx uses its default of 60 seconds — during which the client hangs with no response.
The /health endpoint bypasses rate limiting so your monitoring and load balancers can always check the service status.
sudo nginx -t && sudo systemctl reload nginx
Stop Go and test:
curl http://localhost/api/status
Expected output:
{"error": "service temporarily unavailable", "retry_after": "5s"}
A clean JSON error instead of an HTML 502 page. Much better for API consumers.
Step 7: Graceful Shutdown in Go
What: Make the Go server finish in-flight requests before stopping, so deploys don’t drop connections.
Why: When you kill a Go server with kill or stop the systemd service, http.ListenAndServe terminates immediately. Any requests being processed get dropped mid-response. The client gets a broken connection. Combined with Nginx, this means users see 502s during every deploy.
Replace your entire main.go:
main.go — updated:
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"log"
	"net"
	"net/http"
	"os"
	"os/signal"
	"strings"
	"syscall"
	"time"
)

type DeployStatus struct {
	Service   string `json:"service"`
	Version   string `json:"version"`
	Status    string `json:"status"`
	Timestamp string `json:"timestamp"`
	ClientIP  string `json:"client_ip"`
}

func realIP(r *http.Request) string {
	if ip := r.Header.Get("X-Real-IP"); ip != "" {
		return ip
	}
	if forwarded := r.Header.Get("X-Forwarded-For"); forwarded != "" {
		parts := strings.Split(forwarded, ",")
		return strings.TrimSpace(parts[0])
	}
	ip, _, _ := net.SplitHostPort(r.RemoteAddr)
	return ip
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/api/status", func(w http.ResponseWriter, r *http.Request) {
		status := DeployStatus{
			Service:   "auth-api",
			Version:   "v1.4.2",
			Status:    "healthy",
			Timestamp: time.Now().Format(time.RFC3339),
			ClientIP:  realIP(r),
		}
		w.Header().Set("Content-Type", "application/json")
		json.NewEncoder(w).Encode(status)
	})
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		fmt.Fprintln(w, "ok")
	})

	server := &http.Server{
		Addr:         ":8080",
		Handler:      mux,
		ReadTimeout:  10 * time.Second,
		WriteTimeout: 30 * time.Second,
		IdleTimeout:  60 * time.Second,
	}

	// Start server in a goroutine
	go func() {
		log.Println("starting server on :8080")
		if err := server.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatal("server error:", err)
		}
	}()

	// Wait for SIGINT or SIGTERM
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	sig := <-quit
	log.Printf("received %s, shutting down...", sig)

	// Give in-flight requests 15 seconds to finish
	ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
	defer cancel()
	if err := server.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
	log.Println("server stopped")
}
The key change is server.Shutdown(ctx). Instead of killing the server instantly, it:
- Stops accepting new connections
- Waits for in-flight requests to finish (up to 15 seconds)
- Then exits cleanly
We also switched from http.HandleFunc (default mux) to an explicit http.NewServeMux and http.Server. This gives us control over timeouts — ReadTimeout prevents slowloris attacks where a client sends headers very slowly to tie up your server, WriteTimeout prevents a stuck handler from holding a connection forever, and IdleTimeout closes keep-alive connections that have been idle too long.
Run it:
go run main.go
In another terminal, start a slow request and then kill the server:
# Terminal 1: Start the server
go run main.go
# Terminal 2: Hit the server while sending SIGTERM
curl http://localhost:8080/api/status &
kill -SIGTERM $(pgrep -f "go-nginx-api")
Expected output (Terminal 1):
2026/02/15 14:50:00 starting server on :8080
2026/02/15 14:50:05 received terminated, shutting down...
2026/02/15 14:50:05 server stopped
The server waits for the curl request to complete before exiting. No dropped connections.
Step 8: Wire It Up With systemd
What: Run the Go API as a systemd service so it starts on boot, restarts on crash, and integrates with Nginx.
Why: Running go run main.go in a terminal is fine for development. In production, you need the process managed properly — auto-restart on crash, start on boot, proper logging to journald.
Build the binary first:
go build -o go-nginx-api main.go
sudo cp go-nginx-api /usr/local/bin/
Create the systemd service:
/etc/systemd/system/go-nginx-api.service
[Unit]
Description=Go Nginx API
After=network.target
Wants=nginx.service
[Service]
Type=simple
User=www-data
Group=www-data
ExecStart=/usr/local/bin/go-nginx-api
Restart=on-failure
RestartSec=5
KillSignal=SIGTERM
TimeoutStopSec=20
# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadOnlyPaths=/
[Install]
WantedBy=multi-user.target
Key settings:
- User=www-data — runs as the same unprivileged user as Nginx workers, not root
- KillSignal=SIGTERM — sends SIGTERM on stop, which triggers our graceful shutdown
- TimeoutStopSec=20 — waits 20 seconds for graceful shutdown before force-killing (longer than our 15-second context timeout in Go)
- Restart=on-failure with RestartSec=5 — if Go crashes, wait 5 seconds and restart
- Security directives (NoNewPrivileges, ProtectSystem, ProtectHome) lock down what the process can do
Start it:
sudo systemctl daemon-reload
sudo systemctl enable go-nginx-api
sudo systemctl start go-nginx-api
Verify:
sudo systemctl status go-nginx-api
curl http://localhost/api/status
Expected output:
● go-nginx-api.service - Go Nginx API
Loaded: loaded (/etc/systemd/system/go-nginx-api.service; enabled)
Active: active (running) since Sat 2026-02-15 14:55:00 UTC
Main PID: 12345 (go-nginx-api)
{
  "service": "auth-api",
  "version": "v1.4.2",
  "status": "healthy",
  "timestamp": "2026-02-15T14:55:30Z",
  "client_ip": "127.0.0.1"
}
Now try killing it:
sudo systemctl stop go-nginx-api
sudo systemctl status go-nginx-api
The service stops cleanly via SIGTERM, Go finishes in-flight requests, then exits. And if you start it again, everything comes back — Nginx detects the backend is healthy and starts proxying.
Deploy Workflow
With everything wired up, deploying a new version looks like this:
# Build new binary
go build -o go-nginx-api main.go
# Copy and restart (brief downtime — a few seconds)
sudo cp go-nginx-api /usr/local/bin/
sudo systemctl restart go-nginx-api
Nginx handles the brief gap — it shows the “service temporarily unavailable” JSON response for the 1-2 seconds between stop and start. For zero-downtime deploys, you’d run two instances on different ports and use Nginx upstream with multiple backends.
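A sketch of that zero-downtime shape — the second port (8081) and the one-at-a-time restart flow are assumptions, not something configured earlier in this article:

```nginx
upstream go_backend {
    # Two instances of the same binary on different ports.
    # Restart them one at a time; nginx routes around whichever
    # is down (max_fails/fail_timeout) so clients never see a gap.
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
}
```

With this in place, a deploy becomes: restart the 8080 instance, wait for its /health to return ok, then restart the 8081 instance.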
What We Built
Starting from a bare Go HTTP server, we incrementally built:
- Basic JSON API — the Go “hello world” for HTTP
- The port 80 trap — why running as root is wrong and reverse proxies are right
- Nginx reverse proxy — forward traffic from port 80 to Go on 8080
- Client IP forwarding — X-Real-IP and X-Forwarded-For headers with spoofing protection
- Rate limiting — per-client throttling with JSON error responses
- Upstream failure handling — clean errors when the backend is down, with health check bypass
- Graceful shutdown — finish in-flight requests before stopping
- systemd integration — auto-restart, boot start, security hardening
Every Go API in production uses this stack. The details change (TLS termination, multiple upstreams, WebSocket proxying), but the core pattern is always the same: Go on a high port, Nginx on port 80/443, proxy headers forwarded, graceful shutdown on SIGTERM.
Cheat Sheet
Copy-paste reference for Go behind Nginx.
Minimal Nginx reverse proxy:
server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Read real client IP in Go:
func realIP(r *http.Request) string {
	if ip := r.Header.Get("X-Real-IP"); ip != "" {
		return ip
	}
	ip, _, _ := net.SplitHostPort(r.RemoteAddr)
	return ip
}
Nginx rate limiting:
limit_req_zone $remote_addr zone=api:10m rate=5r/s;

location / {
    limit_req zone=api burst=10 nodelay;
    proxy_pass http://127.0.0.1:8080;
}
Go graceful shutdown:
server := &http.Server{Addr: ":8080", Handler: mux}
go func() { server.ListenAndServe() }()
quit := make(chan os.Signal, 1)
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
<-quit
ctx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
server.Shutdown(ctx)
systemd service essentials:
[Service]
ExecStart=/usr/local/bin/your-binary
KillSignal=SIGTERM
TimeoutStopSec=20
Restart=on-failure
User=www-data
Key rules to remember:
- Never bind Go to port 80 directly — use Nginx as the reverse proxy
- Always set proxy_set_header X-Real-IP $remote_addr — without it, every request looks like 127.0.0.1
- Use $remote_addr instead of $proxy_add_x_forwarded_for when Nginx is the only proxy — prevents IP spoofing
- Set proxy_connect_timeout lower than the default 60s — 5s is usually enough for local backends
- Rate limit by $remote_addr for simple setups, by X-Forwarded-For when behind a load balancer
- server.Shutdown() finishes in-flight requests — set TimeoutStopSec in systemd longer than Go's shutdown timeout
- Bypass rate limiting on /health so monitoring always works
- ReadTimeout, WriteTimeout, and IdleTimeout on the Go server prevent resource exhaustion
What's your Go + Nginx setup look like in production? Running multiple upstream backends, using TLS termination, or something else entirely?