
Remote Server Configuration: From SSH Loops to a Go Config Tool

Karandeep Singh • 32 minutes

Summary

Master remote server configuration with SSH and Go. From bash loops to a complete config tool with parallel execution, idempotent operations, and service verification.

You have three servers. You need to push an nginx config to all of them. You need to reload the service. You need to verify it is running.

You could do it by hand. SSH into each one. Copy the file. Run the reload command. Check the status. Do it again for the next server. And again for the third.

That gets old fast.

This guide starts with basic SSH commands and bash loops. Then it builds a Go tool that does the same thing, but faster, with better error handling, and across many servers at once.

Every step shows the Linux command first. Then the Go equivalent. Then a bug you will hit. Then the fix.

By the end you will have a working config tool that connects to multiple servers, pushes templated configs, verifies services, and runs in parallel.

Step 1: Running Commands on Remote Servers

The most basic operation is running a command on a remote server. SSH does this in one line.

Linux: Single Server Commands

Run a command on a remote server without opening an interactive session.

ssh user@web1 'hostname'

This connects to web1, runs hostname, prints the output, and disconnects. You never see a shell prompt.

Check the kernel version.

ssh user@web1 'uname -r'

Check uptime.

ssh user@web1 'uptime'

Output looks like this.

 14:23:07 up 42 days,  3:15,  0 users,  load average: 0.12, 0.08, 0.05

Linux: Multiple Servers

Use a for loop to run the same command on several servers.

for s in web1 web2 web3; do
    echo "--- $s ---"
    ssh user@$s 'uptime'
done

Output.

--- web1 ---
 14:23:07 up 42 days,  3:15,  0 users,  load average: 0.12, 0.08, 0.05
--- web2 ---
 14:23:08 up 18 days,  7:44,  0 users,  load average: 0.03, 0.04, 0.01
--- web3 ---
 14:23:09 up 91 days,  1:02,  0 users,  load average: 0.22, 0.15, 0.10

Each server runs one after the other. This is fine for three servers. It is slow for thirty.

Go: Running SSH Commands

Now do the same thing in Go. Use os/exec to call the ssh binary.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh", fmt.Sprintf("user@%s", server), command)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

func main() {
	servers := []string{"web1", "web2", "web3"}

	for _, server := range servers {
		fmt.Printf("--- %s ---\n", server)
		output, err := runSSH(server, "uptime")
		if err != nil {
			fmt.Fprintf(os.Stderr, "error on %s: %v\n", server, err)
			continue
		}
		fmt.Println(output)
	}
}

This loops through three servers. It runs uptime on each one. It prints the output. If a server fails, it prints the error and moves on.

Bug: SSH Hangs on Unknown Host

You run the tool against a new server. It hangs. No output. No error. Just silence.

The problem is the SSH host key verification prompt.

The authenticity of host 'web4 (10.0.1.4)' can't be established.
ED25519 key fingerprint is SHA256:abc123...
Are you sure you want to continue connecting (yes/no/[fingerprint])?

SSH is waiting for you to type yes. But your program does not send any input. It sits there forever.

Fix: Disable Interactive Prompts

Add SSH flags that skip the host key prompt and disable password authentication.

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

Three flags fix this.

StrictHostKeyChecking=no accepts the host key automatically. In a production environment you would use accept-new instead, so that a changed key still triggers a warning. For this guide, no keeps things simple.

BatchMode=yes disables all interactive prompts. If SSH cannot authenticate without user input, it fails immediately instead of hanging.

ConnectTimeout=5 gives up after 5 seconds if the server is unreachable. Without this, a down server blocks your entire loop for minutes.

Now the tool either connects quickly or fails quickly. No more hanging.

Here is the full working version of Step 1.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

func main() {
	servers := []string{"web1", "web2", "web3"}

	for _, server := range servers {
		fmt.Printf("--- %s ---\n", server)
		output, err := runSSH(server, "uptime")
		if err != nil {
			fmt.Fprintf(os.Stderr, "error on %s: %v\n", server, err)
			continue
		}
		fmt.Println(output)
	}
}

Step 2: Pushing Config Files to Remote Servers

Running commands is useful. But most server configuration involves copying files. You need to get a config file from your local machine to the right path on the remote server.

Linux: Copy a Single File

Use scp to copy a file to a remote server.

scp nginx.conf user@web1:/tmp/nginx.conf

This puts the file in /tmp/ on the remote server. You do not copy directly to /etc/nginx/ because that requires root access. Instead, copy to /tmp/ first, then move it with sudo.

ssh user@web1 'sudo mv /tmp/nginx.conf /etc/nginx/nginx.conf'

Two commands. Copy, then move.

Linux: Sync an Entire Directory

If you have a directory of config files, rsync is better than scp.

rsync -avz configs/ user@web1:/etc/myapp/

The -a flag preserves permissions and timestamps. The -v flag shows what is being transferred. The -z flag compresses data during transfer.

rsync only copies files that have changed. If you run it again and nothing changed, it transfers nothing. This is already idempotent at the file level.

Linux: Multiple Servers

Push the same config to three servers.

for s in web1 web2 web3; do
    echo "--- $s ---"
    scp nginx.conf user@$s:/tmp/nginx.conf
    ssh user@$s 'sudo mv /tmp/nginx.conf /etc/nginx/nginx.conf'
done

Go: Building a File Pusher

Build a Go function that copies a file to a remote server using scp.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

func pushFile(server string, localPath string, remotePath string) error {
	cmd := exec.Command("scp",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		localPath,
		fmt.Sprintf("user@%s:%s", server, "/tmp/uploaded-config"),
	)
	output, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("scp failed: %s: %w", string(output), err)
	}

	moveCmd := fmt.Sprintf("sudo mv /tmp/uploaded-config %s", remotePath)
	_, err = runSSH(server, moveCmd)
	if err != nil {
		return fmt.Errorf("move failed: %w", err)
	}

	return nil
}

func main() {
	servers := []string{"web1", "web2", "web3"}

	for _, server := range servers {
		fmt.Printf("--- %s ---\n", server)
		err := pushFile(server, "nginx.conf", "/etc/nginx/nginx.conf")
		if err != nil {
			fmt.Fprintf(os.Stderr, "error on %s: %v\n", server, err)
			continue
		}
		fmt.Printf("%s: config pushed\n", server)
	}
}

The pushFile function does two things. First it copies the local file to /tmp/ on the server. Then it runs sudo mv to put it in the right place.

Bug: Unnecessary Overwrites

You run the tool. It pushes the config to all three servers. You run it again. It pushes the config to all three servers again. Even though nothing changed.

Once a reload step follows each push (Step 4 adds one), every run will reload the service even though nothing changed. That is pointless churn, and with a full restart instead of a reload it means brief connection drops for no reason.

$ go run main.go
--- web1 ---
web1: config pushed
--- web2 ---
web2: config pushed
--- web3 ---
web3: config pushed
$ go run main.go
--- web1 ---
web1: config pushed    # nothing changed, but pushed anyway
--- web2 ---
web2: config pushed    # nothing changed, but pushed anyway
--- web3 ---
web3: config pushed    # nothing changed, but pushed anyway

Fix: Checksum Before Pushing

Compare the local file checksum with the remote file checksum. Skip the push if they match.

package main

import (
	"crypto/sha256"
	"fmt"
	"io"
	"os"
	"os/exec"
	"strings"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

func localChecksum(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return fmt.Sprintf("%x", h.Sum(nil)), nil
}

func remoteChecksum(server string, path string) (string, error) {
	output, err := runSSH(server, fmt.Sprintf("sha256sum %s 2>/dev/null", path))
	if err != nil {
		return "", nil // file does not exist yet
	}
	parts := strings.Fields(output)
	if len(parts) == 0 {
		return "", nil
	}
	return parts[0], nil
}

func pushFile(server string, localPath string, remotePath string) (bool, error) {
	localSum, err := localChecksum(localPath)
	if err != nil {
		return false, fmt.Errorf("local checksum: %w", err)
	}

	remoteSum, err := remoteChecksum(server, remotePath)
	if err != nil {
		return false, fmt.Errorf("remote checksum: %w", err)
	}

	if localSum == remoteSum {
		return false, nil // no change needed
	}

	cmd := exec.Command("scp",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		localPath,
		fmt.Sprintf("user@%s:%s", server, "/tmp/uploaded-config"),
	)
	output, err := cmd.CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("scp failed: %s: %w", string(output), err)
	}

	moveCmd := fmt.Sprintf("sudo mv /tmp/uploaded-config %s", remotePath)
	_, err = runSSH(server, moveCmd)
	if err != nil {
		return false, fmt.Errorf("move failed: %w", err)
	}

	return true, nil
}

func main() {
	servers := []string{"web1", "web2", "web3"}

	for _, server := range servers {
		fmt.Printf("--- %s ---\n", server)
		changed, err := pushFile(server, "nginx.conf", "/etc/nginx/nginx.conf")
		if err != nil {
			fmt.Fprintf(os.Stderr, "error on %s: %v\n", server, err)
			continue
		}
		if changed {
			fmt.Printf("%s: config pushed\n", server)
		} else {
			fmt.Printf("%s: already up to date\n", server)
		}
	}
}

Now pushFile returns a boolean. true means the file was pushed. false means it was already the same. The caller can decide whether to restart the service based on this.

The localChecksum function reads the local file and computes a SHA-256 hash. The remoteChecksum function runs sha256sum on the remote server. If the remote file does not exist, it returns an empty string, which will never match the local hash. So a missing file always triggers a push.
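
For example, the caller can reload nginx only when the file actually changed. A minimal sketch of the main loop, reusing the pushFile and runSSH functions above (Step 4 turns the reload into a proper verification step):

for _, server := range servers {
	changed, err := pushFile(server, "nginx.conf", "/etc/nginx/nginx.conf")
	if err != nil {
		fmt.Fprintf(os.Stderr, "error on %s: %v\n", server, err)
		continue
	}
	if changed {
		// reload only when the config actually changed
		if _, err := runSSH(server, "sudo systemctl reload nginx"); err != nil {
			fmt.Fprintf(os.Stderr, "reload failed on %s: %v\n", server, err)
		}
	}
}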

Run it twice now.

$ go run main.go
--- web1 ---
web1: config pushed
--- web2 ---
web2: config pushed
--- web3 ---
web3: config pushed
$ go run main.go
--- web1 ---
web1: already up to date
--- web2 ---
web2: already up to date
--- web3 ---
web3: already up to date

This is idempotent. Run it as many times as you want. It only changes things when it needs to.

Step 3: Template Config Files with Variables

You have three web servers. They all run nginx. But each one has a different server name, a different port, or different upstream backends. You cannot push the same static config to all of them.

You need templates.

Linux: envsubst

The simplest way to template a file in bash is envsubst. It replaces environment variables in a file.

Create a template file called nginx.conf.template.

server {
    listen ${LISTEN_PORT};
    server_name ${SERVER_NAME};

    location / {
        proxy_pass http://127.0.0.1:${APP_PORT};
    }
}

Render it with variables.

export LISTEN_PORT=80
export SERVER_NAME=web1.example.com
export APP_PORT=8080
envsubst < nginx.conf.template > nginx.conf

The output file nginx.conf now has the variables replaced.

server {
    listen 80;
    server_name web1.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

Linux: sed for Simple Replacements

If you do not want to use environment variables, sed works for simple string replacements.

sed 's/LISTEN_PORT/80/g; s/SERVER_NAME/web1.example.com/g; s/APP_PORT/8080/g' \
    nginx.conf.template > nginx.conf

This is fine for one or two values. It gets messy with many.

Linux: Templating for Multiple Servers

Render a different config for each server.

declare -A server_names
server_names[web1]="web1.example.com"
server_names[web2]="web2.example.com"
server_names[web3]="web3.example.com"

for s in web1 web2 web3; do
    export LISTEN_PORT=80
    export SERVER_NAME="${server_names[$s]}"
    export APP_PORT=8080
    envsubst < nginx.conf.template > "/tmp/nginx-${s}.conf"
    scp "/tmp/nginx-${s}.conf" "user@${s}:/tmp/nginx.conf"
    ssh "user@${s}" 'sudo mv /tmp/nginx.conf /etc/nginx/nginx.conf'
done

This works. But the bash is getting long. The variable management is awkward. Error handling is almost nonexistent.

Go: Using text/template

Go has a built-in template engine in the standard library. It is more powerful than envsubst and much cleaner than chaining sed commands.

package main

import (
	"bytes"
	"fmt"
	"os"
	"text/template"
)

type ServerConfig struct {
	ServerName string
	ListenPort int
	AppPort    int
}

const nginxTemplate = `server {
    listen {{.ListenPort}};
    server_name {{.ServerName}};

    location / {
        proxy_pass http://127.0.0.1:{{.AppPort}};
    }
}
`

func renderConfig(tmplText string, data ServerConfig) (string, error) {
	tmpl, err := template.New("config").Parse(tmplText)
	if err != nil {
		return "", fmt.Errorf("parse template: %w", err)
	}

	var buf bytes.Buffer
	err = tmpl.Execute(&buf, data)
	if err != nil {
		return "", fmt.Errorf("execute template: %w", err)
	}

	return buf.String(), nil
}

func main() {
	servers := map[string]ServerConfig{
		"web1": {ServerName: "web1.example.com", ListenPort: 80, AppPort: 8080},
		"web2": {ServerName: "web2.example.com", ListenPort: 80, AppPort: 8081},
		"web3": {ServerName: "web3.example.com", ListenPort: 443, AppPort: 8082},
	}

	for name, config := range servers {
		result, err := renderConfig(nginxTemplate, config)
		if err != nil {
			fmt.Fprintf(os.Stderr, "error rendering %s: %v\n", name, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s\n", name, result)
	}
}

Output.

--- web1 ---
server {
    listen 80;
    server_name web1.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

--- web2 ---
server {
    listen 80;
    server_name web2.example.com;

    location / {
        proxy_pass http://127.0.0.1:8081;
    }
}

--- web3 ---
server {
    listen 443;
    server_name web3.example.com;

    location / {
        proxy_pass http://127.0.0.1:8082;
    }
}

Each server gets its own rendered config. The template is written once. The data is different per server.
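
To mirror the bash loop that wrote one file per server, the rendered text can also be written to disk before pushing. A small sketch using os.WriteFile; the complete tool in Step 6 skips the intermediate file and pushes the rendered string directly:

for name, config := range servers {
	result, err := renderConfig(nginxTemplate, config)
	if err != nil {
		fmt.Fprintf(os.Stderr, "error rendering %s: %v\n", name, err)
		continue
	}
	// one rendered config per server, e.g. /tmp/nginx-web1.conf
	path := fmt.Sprintf("/tmp/nginx-%s.conf", name)
	if err := os.WriteFile(path, []byte(result), 0644); err != nil {
		fmt.Fprintf(os.Stderr, "error writing %s: %v\n", path, err)
	}
}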

Bug: Template Field Name Mismatch

The complete tool in Step 6 passes per-server variables as a map[string]string instead of a struct. Suppose you make that switch and the names drift out of sync: the map uses the key Name, but the template still references .ServerName.

vars := map[string]string{
	"Name":       "web1.example.com",
	"ListenPort": "80",
	"AppPort":    "8080",
}

const nginxTemplate = `server {
    listen {{.ListenPort}};
    server_name {{.ServerName}};

    location / {
        proxy_pass http://127.0.0.1:{{.AppPort}};
    }
}
`

You run it. No error. But the output is wrong.

--- web1 ---
server {
    listen 80;
    server_name <no value>;

    location / {
        proxy_pass http://127.0.0.1:8080;
    }
}

The template tried to access .ServerName. The map only has Name. By default, Go templates render a missing map key as <no value> and keep going. (A struct with a missing field would at least fail at execute time; map lookups do not.) Your nginx config now has server_name <no value>; which will break nginx.

This is a silent failure. The program exited with status 0. No error message. But the config is wrong.

Fix: Use missingkey=error

Tell the template engine to fail on missing keys.

func renderConfig(tmplText string, data map[string]string) (string, error) {
	tmpl, err := template.New("config").
		Option("missingkey=error").
		Parse(tmplText)
	if err != nil {
		return "", fmt.Errorf("parse template: %w", err)
	}

	var buf bytes.Buffer
	err = tmpl.Execute(&buf, data)
	if err != nil {
		return "", fmt.Errorf("execute template: %w", err)
	}

	return buf.String(), nil
}

Now when you run it with the mismatched key, you get a clear error.

error rendering web1: execute template: template: config:2:21: executing "config"
at <.ServerName>: map has no entry for key "ServerName"

The error tells you exactly which key is missing and in which template. Fix the template to use {{.Name}} or fix the map to use ServerName. Either way, the bug is caught immediately instead of producing a broken config.

Here is the full Step 3 program again with missingkey=error enabled. It keeps the struct-based data from earlier, which now matches the template; the complete tool in Step 6 applies the same option to its map of per-server variables.

package main

import (
	"bytes"
	"fmt"
	"os"
	"text/template"
)

type ServerConfig struct {
	ServerName string
	ListenPort int
	AppPort    int
}

const nginxTemplate = `server {
    listen {{.ListenPort}};
    server_name {{.ServerName}};

    location / {
        proxy_pass http://127.0.0.1:{{.AppPort}};
    }
}
`

func renderConfig(tmplText string, data ServerConfig) (string, error) {
	tmpl, err := template.New("config").
		Option("missingkey=error").
		Parse(tmplText)
	if err != nil {
		return "", fmt.Errorf("parse template: %w", err)
	}

	var buf bytes.Buffer
	err = tmpl.Execute(&buf, data)
	if err != nil {
		return "", fmt.Errorf("execute template: %w", err)
	}

	return buf.String(), nil
}

func main() {
	servers := map[string]ServerConfig{
		"web1": {ServerName: "web1.example.com", ListenPort: 80, AppPort: 8080},
		"web2": {ServerName: "web2.example.com", ListenPort: 80, AppPort: 8081},
		"web3": {ServerName: "web3.example.com", ListenPort: 443, AppPort: 8082},
	}

	for name, config := range servers {
		result, err := renderConfig(nginxTemplate, config)
		if err != nil {
			fmt.Fprintf(os.Stderr, "error rendering %s: %v\n", name, err)
			continue
		}
		fmt.Printf("--- %s ---\n%s\n", name, result)
	}
}

Step 4: Verifying Services After Config Push

Pushing a config file is only half the job. You need to verify that the service is still working after the change. A bad config can take down a server.

Linux: Check Service Status

Check if nginx is running.

ssh user@web1 'systemctl is-active nginx'

Output is one word.

active

If the service is stopped or crashed.

inactive

Or.

failed

Linux: Test Config Syntax Before Reload

Never reload a service without testing the config first.

ssh user@web1 'sudo nginx -t'

Good output.

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

Bad output.

nginx: [emerg] unknown directive "servr_name" in /etc/nginx/nginx.conf:3
nginx: configuration file /etc/nginx/nginx.conf test failed

The test catches syntax errors without affecting the running service.

Linux: Reload Without Downtime

If the test passes, reload the service. Reload applies the new config without dropping existing connections.

ssh user@web1 'sudo systemctl reload nginx'

Reload is not restart. Restart stops and starts the process. Reload sends a signal to re-read the config. Existing connections are not interrupted.

Linux: Full Verification Sequence

Put it all together.

for s in web1 web2 web3; do
    echo "--- $s ---"

    # test config
    ssh user@$s 'sudo nginx -t' 2>&1
    if [ $? -ne 0 ]; then
        echo "$s: config test FAILED, skipping reload"
        continue
    fi

    # reload
    ssh user@$s 'sudo systemctl reload nginx'

    # verify
    status=$(ssh user@$s 'systemctl is-active nginx')
    echo "$s: nginx is $status"
done

Go: Build a Verification Step

Add a verify function to the Go tool.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

func testConfig(server string) error {
	output, err := runSSH(server, "sudo nginx -t 2>&1")
	if err != nil {
		return fmt.Errorf("config test failed: %s", output)
	}
	return nil
}

func reloadService(server string) error {
	_, err := runSSH(server, "sudo systemctl reload nginx")
	if err != nil {
		return fmt.Errorf("reload failed: %w", err)
	}
	return nil
}

func checkStatus(server string) (string, error) {
	output, err := runSSH(server, "systemctl is-active nginx")
	if err != nil {
		return output, fmt.Errorf("status check failed: %w", err)
	}
	return output, nil
}

func verifyService(server string) error {
	// step 1: test config
	if err := testConfig(server); err != nil {
		return err
	}
	fmt.Printf("  %s: config test passed\n", server)

	// step 2: reload
	if err := reloadService(server); err != nil {
		return err
	}
	fmt.Printf("  %s: service reloaded\n", server)

	// step 3: check status
	status, err := checkStatus(server)
	if err != nil {
		return err
	}
	if status != "active" {
		return fmt.Errorf("service is %s after reload", status)
	}
	fmt.Printf("  %s: service is active\n", server)

	return nil
}

func main() {
	servers := []string{"web1", "web2", "web3"}

	for _, server := range servers {
		fmt.Printf("--- %s ---\n", server)
		err := verifyService(server)
		if err != nil {
			fmt.Fprintf(os.Stderr, "  %s: FAILED: %v\n", server, err)
			continue
		}
		fmt.Printf("  %s: OK\n", server)
	}
}

The verification has three steps. Test the config syntax. Reload the service. Check that it is active. If any step fails, stop and report the error.

Bug: Reload Reports Success But Service Is Broken

You push a config and run the verification. The config test passes. The reload succeeds. The status check says active. But the server is not actually serving requests.

This happens because systemctl reload can return exit code 0 even when the process encounters an error after re-reading the config. The process is running (so is-active says active) but it is not handling requests properly.

Example: nginx reloads, but a new upstream server in the config is unreachable. nginx is technically running. But requests to that upstream return 502.

$ ssh user@web1 'sudo systemctl reload nginx'
$ echo $?
0
$ ssh user@web1 'systemctl is-active nginx'
active
$ curl -s -o /dev/null -w "%{http_code}" http://web1/
502

Reload succeeded. Status is active. But the service is broken.

Fix: Check the Health Endpoint After Reload

Do not trust the exit code of reload or the output of is-active. Actually test that the service is responding correctly.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"time"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

func testConfig(server string) error {
	output, err := runSSH(server, "sudo nginx -t 2>&1")
	if err != nil {
		return fmt.Errorf("config test failed: %s", output)
	}
	return nil
}

func reloadService(server string) error {
	_, err := runSSH(server, "sudo systemctl reload nginx")
	if err != nil {
		return fmt.Errorf("reload failed: %w", err)
	}
	return nil
}

func checkHealth(server string) error {
	// wait briefly for reload to take effect
	time.Sleep(2 * time.Second)

	// curl the health endpoint from the server itself
	output, err := runSSH(server,
		"curl -s -o /dev/null -w '%{http_code}' --max-time 5 http://127.0.0.1/health")
	if err != nil {
		return fmt.Errorf("health check failed: %w", err)
	}
	if output != "200" {
		return fmt.Errorf("health check returned HTTP %s", output)
	}
	return nil
}

func verifyService(server string) error {
	if err := testConfig(server); err != nil {
		return err
	}
	fmt.Printf("  %s: config test passed\n", server)

	if err := reloadService(server); err != nil {
		return err
	}
	fmt.Printf("  %s: service reloaded\n", server)

	if err := checkHealth(server); err != nil {
		return err
	}
	fmt.Printf("  %s: health check passed\n", server)

	return nil
}

func main() {
	servers := []string{"web1", "web2", "web3"}

	for _, server := range servers {
		fmt.Printf("--- %s ---\n", server)
		err := verifyService(server)
		if err != nil {
			fmt.Fprintf(os.Stderr, "  %s: FAILED: %v\n", server, err)
			continue
		}
		fmt.Printf("  %s: OK\n", server)
	}
}

The checkHealth function waits 2 seconds for the reload to take effect, then curls the health endpoint on the server itself. It checks the HTTP status code. If it is not 200, the verification fails.

This catches the cases that systemctl is-active misses. The service might be running, but if it is not serving healthy responses, the verification reports a failure.

The health check runs from the server itself using 127.0.0.1. This avoids network issues between your machine and the server confusing the result.

Step 5: Parallel Execution Across Servers

So far every operation runs on servers one at a time. Server 1 finishes, then server 2 starts. If you have 30 servers and each takes 5 seconds, that is 150 seconds total.

Run them in parallel and it takes 5 seconds total.

Linux: Background Jobs

Bash can run commands in the background with &.

for s in web1 web2 web3; do
    ssh user@$s 'sudo systemctl reload nginx' &
done
wait

The & sends each SSH command to the background. The wait command pauses until all background jobs finish.

Linux: Problem with Background Jobs

This runs in parallel. But there is a problem. You do not know which server failed.

for s in web1 web2 web3; do
    ssh user@$s 'sudo systemctl reload nginx' &
done
wait
echo "Exit code: $?"

The $? after wait does not help. When wait is called with no arguments, it returns 0 once all the background jobs have finished, regardless of whether any of them failed. If web1 and web2 succeeded but web3 failed, you still see exit code 0.

You can capture individual exit codes, but it gets complicated.

pids=()
for s in web1 web2 web3; do
    ssh user@$s 'sudo systemctl reload nginx' &
    pids+=($!)
done

failed=0
for pid in "${pids[@]}"; do
    wait $pid
    if [ $? -ne 0 ]; then
        failed=1
    fi
done

if [ $failed -eq 1 ]; then
    echo "At least one server failed"
fi

This works, but you still do not know which server failed unless you also keep a parallel array mapping each PID back to its server name. The bookkeeping keeps growing. This is where bash starts to struggle and a proper programming language helps.

Go: Parallel with Goroutines

Go has goroutines and channels. They make parallel execution straightforward.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"sync"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

type Result struct {
	Server string
	Output string
	Err    error
}

func main() {
	servers := []string{"web1", "web2", "web3"}
	results := make(map[string]Result)

	var wg sync.WaitGroup
	for _, server := range servers {
		wg.Add(1)
		go func(s string) {
			defer wg.Done()
			output, err := runSSH(s, "uptime")
			results[s] = Result{Server: s, Output: output, Err: err}
		}(server)
	}
	wg.Wait()

	for _, server := range servers {
		r := results[server]
		if r.Err != nil {
			fmt.Fprintf(os.Stderr, "%s: FAILED: %v\n", r.Server, r.Err)
		} else {
			fmt.Printf("%s: %s\n", r.Server, r.Output)
		}
	}
}

All three servers run at the same time. Each goroutine stores its result in the map. After all goroutines finish, we print the results.

Bug: Concurrent Map Write Panic

You run the program. It crashes.

fatal error: concurrent map writes

goroutine 7 [running]:
runtime.throw({0x4a7c12, 0x17})
    /usr/local/go/src/runtime/panic.go:1077 +0x48
runtime.mapassign_faststr(0xc000060060, {0x4a5e21, 0x4})
...

Two goroutines tried to write to the results map at the same time. Go maps are not safe for concurrent writes. When two goroutines write simultaneously, the program panics.

This is a race condition. It might not crash every time. Sometimes it works fine. Sometimes it panics. That makes it worse, because you might not catch it during testing.
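
Go ships a race detector that flags this class of bug even on runs where nothing crashes. While testing, run the tool with the -race flag and it reports the data race along with the stack traces of the two conflicting goroutines.

$ go run -race main.go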

Fix: Use a Channel to Collect Results

Instead of writing to a shared map, send results through a channel. Channel sends and receives are synchronized, so any number of goroutines can safely send on the same channel while the main goroutine reads from it.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"sync"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

type Result struct {
	Server string
	Output string
	Err    error
}

func main() {
	servers := []string{"web1", "web2", "web3"}
	resultCh := make(chan Result, len(servers))

	var wg sync.WaitGroup
	for _, server := range servers {
		wg.Add(1)
		go func(s string) {
			defer wg.Done()
			output, err := runSSH(s, "uptime")
			resultCh <- Result{Server: s, Output: output, Err: err}
		}(server)
	}

	// close channel after all goroutines finish
	go func() {
		wg.Wait()
		close(resultCh)
	}()

	// collect results
	for r := range resultCh {
		if r.Err != nil {
			fmt.Fprintf(os.Stderr, "%s: FAILED: %v\n", r.Server, r.Err)
		} else {
			fmt.Printf("%s: %s\n", r.Server, r.Output)
		}
	}
}

The channel is buffered with len(servers) capacity. Each goroutine sends its result into the channel. A separate goroutine waits for all workers to finish, then closes the channel. The main loop reads from the channel until it is closed.

No shared map. No race condition. No panic.

Another option is to use sync.Mutex to protect the map.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
	"sync"
)

func runSSH(server string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", server),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

type Result struct {
	Server string
	Output string
	Err    error
}

func main() {
	servers := []string{"web1", "web2", "web3"}
	results := make(map[string]Result)
	var mu sync.Mutex

	var wg sync.WaitGroup
	for _, server := range servers {
		wg.Add(1)
		go func(s string) {
			defer wg.Done()
			output, err := runSSH(s, "uptime")
			mu.Lock()
			results[s] = Result{Server: s, Output: output, Err: err}
			mu.Unlock()
		}(server)
	}
	wg.Wait()

	for _, server := range servers {
		r := results[server]
		if r.Err != nil {
			fmt.Fprintf(os.Stderr, "%s: FAILED: %v\n", r.Server, r.Err)
		} else {
			fmt.Printf("%s: %s\n", r.Server, r.Output)
		}
	}
}

Both approaches work. Channels are more idiomatic in Go. The mutex approach is simpler when you want to keep the map structure for later lookups. Use whichever is clearer for your use case.

Step 6: Complete Config Tool

Now combine everything from the previous steps into one tool. This tool will read a server list, render templates per server, push configs, verify services, and run in parallel.

Server Configuration Format

Define a simple format for listing servers and their variables. Use Go structs instead of a config file to keep the example self-contained.

type Server struct {
	Name       string
	Host       string
	Role       string
	Vars       map[string]string
}

Each server has a name, hostname, role, and a map of variables for templating.
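
The same struct also decodes cleanly from JSON if you later want the inventory outside the binary. A sketch, assuming a hypothetical servers.json file and an extra "encoding/json" import; the tool below keeps the slice hardcoded instead:

func loadServers(path string) ([]Server, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("read inventory: %w", err)
	}
	var servers []Server
	if err := json.Unmarshal(data, &servers); err != nil {
		return nil, fmt.Errorf("parse inventory: %w", err)
	}
	return servers, nil
}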

The Complete Tool

Here is the full program. It is a few hundred lines. Read through each section. Every function was built in a previous step.

package main

import (
	"bytes"
	"crypto/sha256"
	"flag"
	"fmt"
	"io"
	"os"
	"os/exec"
	"strings"
	"sync"
	"text/template"
	"time"
)

// --- Types ---

type Server struct {
	Name string
	Host string
	Role string
	Vars map[string]string
}

type Result struct {
	Server  string
	Steps   []string
	Changed bool
	Err     error
}

// --- SSH ---

func runSSH(host string, command string) (string, error) {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		fmt.Sprintf("user@%s", host),
		command,
	)
	output, err := cmd.CombinedOutput()
	return strings.TrimSpace(string(output)), err
}

// --- Templating ---

func renderTemplate(tmplText string, vars map[string]string) (string, error) {
	tmpl, err := template.New("config").
		Option("missingkey=error").
		Parse(tmplText)
	if err != nil {
		return "", fmt.Errorf("parse template: %w", err)
	}

	var buf bytes.Buffer
	err = tmpl.Execute(&buf, vars)
	if err != nil {
		return "", fmt.Errorf("execute template: %w", err)
	}

	return buf.String(), nil
}

// --- Checksums ---

func checksumBytes(data []byte) string {
	h := sha256.Sum256(data)
	return fmt.Sprintf("%x", h[:])
}

func remoteChecksum(host string, path string) string {
	output, err := runSSH(host, fmt.Sprintf("sha256sum %s 2>/dev/null", path))
	if err != nil {
		return ""
	}
	parts := strings.Fields(output)
	if len(parts) == 0 {
		return ""
	}
	return parts[0]
}

// --- File Push ---

func pushConfig(host string, content string, remotePath string, dryRun bool) (bool, error) {
	localSum := checksumBytes([]byte(content))
	remoteSum := remoteChecksum(host, remotePath)

	if localSum == remoteSum {
		return false, nil
	}

	if dryRun {
		return true, nil
	}

	// write content to a temp file locally
	tmpFile, err := os.CreateTemp("", "config-push-*")
	if err != nil {
		return false, fmt.Errorf("create temp file: %w", err)
	}
	defer os.Remove(tmpFile.Name())

	_, err = tmpFile.WriteString(content)
	if err != nil {
		tmpFile.Close()
		return false, fmt.Errorf("write temp file: %w", err)
	}
	tmpFile.Close()

	// scp to remote
	cmd := exec.Command("scp",
		"-o", "StrictHostKeyChecking=no",
		"-o", "BatchMode=yes",
		"-o", "ConnectTimeout=5",
		tmpFile.Name(),
		fmt.Sprintf("user@%s:%s", host, "/tmp/config-push-tmp"),
	)
	output, err := cmd.CombinedOutput()
	if err != nil {
		return false, fmt.Errorf("scp failed: %s: %w", string(output), err)
	}

	// move to final location
	moveCmd := fmt.Sprintf("sudo mv /tmp/config-push-tmp %s", remotePath)
	_, err = runSSH(host, moveCmd)
	if err != nil {
		return false, fmt.Errorf("move failed: %w", err)
	}

	return true, nil
}

// --- Service Verification ---

func verifyService(host string, service string, healthURL string) error {
	// test config
	testCmd := fmt.Sprintf("sudo %s -t 2>&1", service)
	output, err := runSSH(host, testCmd)
	if err != nil {
		return fmt.Errorf("config test failed: %s", output)
	}

	// reload
	reloadCmd := fmt.Sprintf("sudo systemctl reload %s", service)
	_, err = runSSH(host, reloadCmd)
	if err != nil {
		return fmt.Errorf("reload failed: %w", err)
	}

	// health check
	time.Sleep(2 * time.Second)
	healthCmd := fmt.Sprintf(
		"curl -s -o /dev/null -w '%%{http_code}' --max-time 5 %s",
		healthURL,
	)
	output, err = runSSH(host, healthCmd)
	if err != nil {
		return fmt.Errorf("health check failed: %w", err)
	}
	if output != "200" {
		return fmt.Errorf("health check returned HTTP %s", output)
	}

	return nil
}

// --- Deploy One Server ---

func deployServer(srv Server, tmplText string, remotePath string, service string, healthURL string, dryRun bool) Result {
	result := Result{Server: srv.Name}

	// step 1: render template
	content, err := renderTemplate(tmplText, srv.Vars)
	if err != nil {
		result.Err = fmt.Errorf("render: %w", err)
		return result
	}
	result.Steps = append(result.Steps, "template rendered")

	// step 2: push config
	changed, err := pushConfig(srv.Host, content, remotePath, dryRun)
	if err != nil {
		result.Err = fmt.Errorf("push: %w", err)
		return result
	}
	result.Changed = changed

	if dryRun {
		if changed {
			result.Steps = append(result.Steps, "would push config (changed)")
		} else {
			result.Steps = append(result.Steps, "config unchanged, would skip")
		}
		return result
	}

	if !changed {
		result.Steps = append(result.Steps, "config unchanged, skipped")
		return result
	}
	result.Steps = append(result.Steps, "config pushed")

	// step 3: verify service
	err = verifyService(srv.Host, service, healthURL)
	if err != nil {
		result.Err = fmt.Errorf("verify: %w", err)
		return result
	}
	result.Steps = append(result.Steps, "service verified")

	return result
}

// --- Output ---

const (
	colorReset  = "\033[0m"
	colorRed    = "\033[31m"
	colorGreen  = "\033[32m"
	colorYellow = "\033[33m"
	colorCyan   = "\033[36m"
)

func printResult(r Result) {
	if r.Err != nil {
		fmt.Printf("%s[FAILED]%s %s\n", colorRed, colorReset, r.Server)
		for _, step := range r.Steps {
			fmt.Printf("    %s%s%s\n", colorGreen, step, colorReset)
		}
		fmt.Printf("    %s%s%s\n", colorRed, r.Err, colorReset)
		return
	}

	if r.Changed {
		fmt.Printf("%s[CHANGED]%s %s\n", colorYellow, colorReset, r.Server)
	} else {
		fmt.Printf("%s[OK]%s %s\n", colorGreen, colorReset, r.Server)
	}
	for _, step := range r.Steps {
		fmt.Printf("    %s%s%s\n", colorCyan, step, colorReset)
	}
}

// --- Main ---

func main() {
	dryRun := flag.Bool("dry-run", false, "show what would change without applying")
	flag.Parse()

	// server inventory
	servers := []Server{
		{
			Name: "web1",
			Host: "10.0.1.1",
			Role: "web",
			Vars: map[string]string{
				"ServerName": "web1.example.com",
				"ListenPort": "80",
				"AppPort":    "8080",
			},
		},
		{
			Name: "web2",
			Host: "10.0.1.2",
			Role: "web",
			Vars: map[string]string{
				"ServerName": "web2.example.com",
				"ListenPort": "80",
				"AppPort":    "8081",
			},
		},
		{
			Name: "web3",
			Host: "10.0.1.3",
			Role: "web",
			Vars: map[string]string{
				"ServerName": "web3.example.com",
				"ListenPort": "443",
				"AppPort":    "8082",
			},
		},
	}

	// nginx config template
	nginxTemplate := `server {
    listen {{.ListenPort}};
    server_name {{.ServerName}};

    location / {
        proxy_pass http://127.0.0.1:{{.AppPort}};
    }

    location /health {
        access_log off;
        return 200 "ok";
    }
}
`

	remotePath := "/etc/nginx/sites-enabled/app.conf"
	service := "nginx"
	healthURL := "http://127.0.0.1/health"

	if *dryRun {
		fmt.Printf("%s[DRY RUN]%s showing what would change\n\n", colorYellow, colorReset)
	}

	// run in parallel
	resultCh := make(chan Result, len(servers))
	var wg sync.WaitGroup

	for _, srv := range servers {
		wg.Add(1)
		go func(s Server) {
			defer wg.Done()
			r := deployServer(s, nginxTemplate, remotePath, service, healthURL, *dryRun)
			resultCh <- r
		}(srv)
	}

	go func() {
		wg.Wait()
		close(resultCh)
	}()

	// collect and print results
	var results []Result
	for r := range resultCh {
		results = append(results, r)
	}

	fmt.Println()
	failCount := 0
	changeCount := 0
	okCount := 0

	for _, r := range results {
		printResult(r)
		if r.Err != nil {
			failCount++
		} else if r.Changed {
			changeCount++
		} else {
			okCount++
		}
	}

	// summary
	fmt.Printf("\n%s--- Summary ---%s\n", colorCyan, colorReset)
	fmt.Printf("  Servers: %d\n", len(servers))
	fmt.Printf("  %sChanged: %d%s\n", colorYellow, changeCount, colorReset)
	fmt.Printf("  %sOK: %d%s\n", colorGreen, okCount, colorReset)
	if failCount > 0 {
		fmt.Printf("  %sFailed: %d%s\n", colorRed, failCount, colorReset)
		os.Exit(1)
	}
	fmt.Printf("  %sFailed: %d%s\n", colorGreen, failCount, colorReset)
}

How It Works

The tool does the following for each server.

  1. Renders the nginx template with the server’s variables.
  2. Computes the SHA-256 checksum of the rendered config.
  3. Compares it with the checksum of the file currently on the server.
  4. If they differ, pushes the new config via SCP.
  5. Tests the nginx config syntax on the server.
  6. Reloads the nginx service.
  7. Waits 2 seconds, then curls the health endpoint.
  8. Reports the result.

All servers run in parallel. Results are collected through a channel. No race conditions.

Running the Tool

First deploy.

$ go run main.go

[CHANGED] web1
    template rendered
    config pushed
    service verified
[CHANGED] web2
    template rendered
    config pushed
    service verified
[CHANGED] web3
    template rendered
    config pushed
    service verified

--- Summary ---
  Servers: 3
  Changed: 3
  OK: 0
  Failed: 0

Run it again. Nothing changes.

$ go run main.go

[OK] web1
    template rendered
    config unchanged, skipped
[OK] web2
    template rendered
    config unchanged, skipped
[OK] web3
    template rendered
    config unchanged, skipped

--- Summary ---
  Servers: 3
  Changed: 0
  OK: 3
  Failed: 0

Dry Run Mode

Use --dry-run to see what would change without applying anything.

$ go run main.go --dry-run

[DRY RUN] showing what would change

[CHANGED] web1
    template rendered
    would push config (changed)
[OK] web2
    template rendered
    config unchanged, would skip
[OK] web3
    template rendered
    config unchanged, would skip

--- Summary ---
  Servers: 3
  Changed: 1
  OK: 2
  Failed: 0

Dry run is useful when you want to check which servers would be affected before making changes. It renders templates and compares checksums, but it does not push any files or reload any services.

When One Server Fails

If verification fails on one server, the tool reports it and continues with the others.

$ go run main.go

[CHANGED] web1
    template rendered
    config pushed
    service verified
[FAILED] web2
    template rendered
    config pushed
    verify: health check returned HTTP 502
[CHANGED] web3
    template rendered
    config pushed
    service verified

--- Summary ---
  Servers: 3
  Changed: 2
  OK: 0
  Failed: 1

Server web2 got the new config. The reload succeeded. But the health check returned 502. The tool reports this clearly. You know exactly which server failed and why.

The exit code is 1 when any server fails. This makes the tool usable in scripts and CI pipelines. A non-zero exit code stops the pipeline.

Adding More Servers

To add a server, add an entry to the servers slice.

{
    Name: "web4",
    Host: "10.0.1.4",
    Role: "web",
    Vars: map[string]string{
        "ServerName": "web4.example.com",
        "ListenPort": "80",
        "AppPort":    "8083",
    },
},

The tool will pick it up automatically. No other changes needed.

Adding Different Services

The tool is not limited to nginx. Change the service variable and the healthURL to manage other services.

For example, to manage HAProxy.

service := "haproxy"
healthURL := "http://127.0.0.1:8404/stats"

You would also need a different config template and a different config test command. The verifyService function currently hardcodes nginx -t as the config test. A better version would accept the test command as a parameter.

func verifyService(host string, testCmd string, reloadCmd string, healthURL string) error {
	output, err := runSSH(host, testCmd)
	if err != nil {
		return fmt.Errorf("config test failed: %s", output)
	}

	_, err = runSSH(host, reloadCmd)
	if err != nil {
		return fmt.Errorf("reload failed: %w", err)
	}

	time.Sleep(2 * time.Second)

	healthCmd := fmt.Sprintf(
		"curl -s -o /dev/null -w '%%{http_code}' --max-time 5 %s",
		healthURL,
	)
	output, err = runSSH(host, healthCmd)
	if err != nil {
		return fmt.Errorf("health check failed: %w", err)
	}
	if output != "200" {
		return fmt.Errorf("health check returned HTTP %s", output)
	}

	return nil
}

Now you pass the test and reload commands per service type.

verifyService(
    srv.Host,
    "sudo nginx -t 2>&1",
    "sudo systemctl reload nginx",
    "http://127.0.0.1/health",
)

Or for HAProxy.

verifyService(
    srv.Host,
    "sudo haproxy -c -f /etc/haproxy/haproxy.cfg 2>&1",
    "sudo systemctl reload haproxy",
    "http://127.0.0.1:8404/stats",
)

What You Built

This guide started with a single SSH command. Over six steps, it built up to a complete config tool.

Here is what each step added.

Step 1 covered running commands on remote servers. Single server, multiple servers, and handling SSH host key prompts that hang automated scripts.

Step 2 covered pushing files. SCP to a temp path, then move with sudo. The key improvement was checksumming local and remote files to skip unnecessary pushes.

Step 3 covered templating. Using text/template from Go’s standard library to render per-server configs from a single template. The important lesson was using missingkey=error to catch field mismatches at render time instead of producing broken configs silently.

Step 4 covered service verification. Testing config syntax before reload. Checking the health endpoint after reload instead of trusting systemctl is-active. The reload exit code is not enough.

Step 5 covered parallel execution. Goroutines, channels, and the concurrent map write panic. Two solutions: channels or mutex.

Step 6 combined everything into a single tool with colored output, dry run mode, and a summary showing changed, OK, and failed counts.

The tool uses only the Go standard library. No external dependencies. It shells out to ssh and scp for the actual remote operations. This is simple and works with any server that accepts SSH connections.

For production use, you would want to add a few more things. Reading the server list from a file instead of hardcoding it. SSH key paths as configuration. A maximum concurrency limit so you do not open 500 SSH connections at once. Rollback on failure. Logging to a file. But the core logic is here.
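
A concurrency cap, for example, is a small change: a buffered channel used as a semaphore around the existing goroutine body. A sketch against the complete tool above, with an arbitrary limit of 10:

sem := make(chan struct{}, 10) // at most 10 concurrent SSH sessions

for _, srv := range servers {
	wg.Add(1)
	go func(s Server) {
		defer wg.Done()
		sem <- struct{}{}        // take a slot before connecting
		defer func() { <-sem }() // free the slot when this deploy finishes
		resultCh <- deployServer(s, nginxTemplate, remotePath, service, healthURL, *dryRun)
	}(srv)
}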

The progression from bash loops to a Go tool is the same progression you will follow in real projects. Bash is fast for one-off tasks. When you need error handling, parallelism, and structured output, a compiled tool is worth the investment.
