Containers From Scratch in Go

Summary
Most people think containers are lightweight VMs. They’re not. A container is just a regular Linux process with three tricks applied to it:
- Namespaces — control what a process can see (its own PIDs, its own hostname, its own filesystem)
- Cgroups — control what a process can use (CPU, memory)
- Chroot — give the process its own root filesystem
That’s it. No hypervisor, no kernel emulation. We’re going to prove this by building a container runtime called Gocker in about 150 lines of Go, using only the standard library.
By the end, you’ll have a working gocker run /bin/sh that gives you an isolated shell with its own PID 1, its own hostname, its own filesystem, and memory/CPU limits — just like Docker.
The full code is on GitHub: karandaid/containers-from-scratch
What We’re Building
A mini container runtime called Gocker that can:
- Run any command in an isolated environment
- Give the process its own PID namespace (PID 1)
- Give it a custom hostname
- Chroot into an Alpine Linux filesystem
- Mount /proc and create device nodes
- Limit memory and CPU with cgroups
The journey:
- Run a command in a child process
- Isolate process IDs with PID namespace
- Give it a custom hostname with UTS namespace
- Chroot into an Alpine root filesystem (the simple way — then break it)
- The /proc/self/exe trick — how real container runtimes work
- Mount /proc and /dev inside the container
- Limit memory and CPU with cgroups
- Replace the process with syscall.Exec — PID 1 for real
Prerequisites
- Linux (Ubuntu 20.04+ recommended) — namespaces and cgroups are Linux kernel features
- Go 1.21+ installed
- Root access (sudo) — creating namespaces requires root
macOS won’t work. Docker on Mac secretly runs a Linux VM. The syscalls we use don’t exist in the Darwin kernel.
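If you'd like gocker to fail fast with a clear message instead of a cryptic syscall error, a small guard at the top of main() helps — a minimal sketch, not part of the original code, which assumes you also import "runtime":

	if runtime.GOOS != "linux" {
		fmt.Println("gocker needs Linux — namespaces and cgroups are kernel features the Darwin kernel doesn't have")
		os.Exit(1)
	}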
Step 1: Run a Command in a Child Process
What: Use Go’s exec.Command to spawn a shell.
Why: Before we isolate anything, we need a process to isolate. This is the “hello world” — just run /bin/sh as a child process.
mkdir containers-from-scratch && cd containers-from-scratch
go mod init gocker
main.go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	fmt.Println("Gocker - a minimal container runtime")

	cmd := exec.Command("/bin/sh")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
}
We connect the child’s stdin, stdout, and stderr to our own so we can type commands and see output. exec.Command creates a child process that runs the given binary.
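To convince yourself that exec.Command really is just spawning an ordinary child process, here's a tiny standalone sketch (separate from gocker, purely illustrative) that runs a one-shot command and captures its output instead of wiring up the standard streams:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run a one-shot command as a child process and capture what it prints.
		out, err := exec.Command("uname", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("Error:", err)
			return
		}
		fmt.Print(string(out))
	}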
Build and run:
go build -o gocker .
sudo ./gocker
You get a shell. Type some commands:
echo $$ # Shows PID — it's some large number like 4523
hostname # Shows your host machine's name
ls / # Shows your host machine's filesystem
exit # Leave the shell
This isn’t a container yet — it’s just a shell running as a child process. It can see everything on the host: all processes, all files, the real hostname. Let’s start isolating it.
Step 2: Isolate Process IDs (PID Namespace)
What: Create a new PID namespace so the child process thinks it’s PID 1.
Why: In a real container, ps aux should only show the container’s own processes, not the host’s. PID namespace is the first step — it gives the process its own set of process IDs, starting from 1.
main.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	fmt.Println("Gocker - a minimal container runtime")

	cmd := exec.Command("/bin/sh")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID,
	}

	if err := cmd.Run(); err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
}
SysProcAttr lets us set Linux-specific process attributes. CLONE_NEWPID tells the kernel to create a new PID namespace for this child process. Inside that namespace, the child will be PID 1.
Build and run:
go build -o gocker . && sudo ./gocker
echo $$ # Shows 1 — we're PID 1!
The child process thinks it’s PID 1 — that’s the init process of our mini “container.” But try running ps aux:
ps aux # Still shows ALL host processes!
Why? Because ps reads from /proc, and /proc still points to the host’s proc filesystem. We’ll fix this after we isolate the filesystem. For now, the PID is isolated — it’s just that the tools for inspecting it (ps) are still looking at the host.
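You can verify the isolation is real even though ps is misleading. Every namespace a process belongs to shows up as a symlink under /proc/<pid>/ns/. A small throwaway program like the sketch below (an assumption: you build it separately and run it from inside the gocker shell, which is running as root) prints our PID namespace next to that of the host's PID 1 — the inode numbers differ, proving we're in a new namespace even while /proc still belongs to the host:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Namespace membership is exposed as symlinks under /proc/<pid>/ns/.
		ours, _ := os.Readlink("/proc/self/ns/pid") // our namespace, e.g. pid:[4026532201]
		host, _ := os.Readlink("/proc/1/ns/pid")    // host init's namespace, e.g. pid:[4026531836]
		fmt.Println("our PID namespace:     ", ours)
		fmt.Println("host PID 1's namespace:", host)
	}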
Step 3: Isolate the Hostname (UTS Namespace)
What: Create a UTS namespace so the container has its own hostname.
Why: When you open a shell inside a Docker container, the prompt shows the container ID as the hostname, not the host machine’s name. The UTS namespace makes this possible.
main.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	fmt.Println("Gocker - a minimal container runtime")

	cmd := exec.Command("/bin/sh")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWUTS,
	}

	if err := cmd.Run(); err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
}
We added CLONE_NEWUTS with the | (bitwise OR) operator. Multiple namespace flags combine this way. UTS stands for “Unix Time-Sharing” — it isolates the hostname and domain name.
Build and run:
go build -o gocker . && sudo ./gocker
hostname gocker # Set hostname inside the "container"
hostname # Shows "gocker"
exit
hostname # Back on the host — shows original hostname, unchanged!
The hostname change only affected the child process. The host is untouched. That’s namespace isolation in action.
But setting the hostname manually isn’t great. In Docker, the container gets its hostname automatically. We’ll automate that once we restructure the code in Step 5.
Step 4: Chroot Into Alpine (The Simple Way)
What: Give the container its own root filesystem using chroot.
Why: Right now, the container sees the host’s entire filesystem. A real container has its own, isolated filesystem — usually a minimal Linux distribution like Alpine.
First, download the Alpine root filesystem:
wget https://dl-cdn.alpinelinux.org/alpine/v3.19/releases/x86_64/alpine-minirootfs-3.19.0-x86_64.tar.gz
mkdir -p gocker-root
tar -xzf alpine-minirootfs-3.19.0-x86_64.tar.gz -C gocker-root
Now you have a complete Linux filesystem in gocker-root/ — about 3MB. This is the same rootfs that Docker’s Alpine image uses.
main.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	fmt.Println("Gocker - a minimal container runtime")

	rootfs := "./gocker-root"

	cmd := exec.Command("/bin/sh")
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.Dir = "/"
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWUTS,
		Chroot:     rootfs,
	}

	if err := cmd.Run(); err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
}
SysProcAttr.Chroot tells the kernel to change the root directory for the child process before it starts. cmd.Dir = "/" sets the working directory to / (relative to the new root).
Build and run:
go build -o gocker . && sudo ./gocker
cat /etc/os-release # Shows Alpine Linux!
ls / # Shows Alpine's filesystem, not the host's
ps aux # ERROR: /proc not mounted
We’re inside Alpine! The container can’t see the host filesystem anymore. But ps fails because /proc doesn’t exist in our chroot. You could mount it by hand from inside the shell:
mount -t proc proc /proc
Hand-mounting is the wrong fix, though. The container should set itself up, and since we haven’t created a mount namespace, that mount lands in the host’s mount table (at gocker-root/proc) and lingers after the shell exits.
The deeper problem is control. The clone flags and SysProcAttr.Chroot are applied by Go’s fork/exec machinery, so all of the setup happens before /bin/sh ever starts — we never get a chance to run our own code inside the new namespaces. That means no programmatic hostname, no /proc mounted at the right moment, no /dev nodes.
This is a real problem. We need to restructure the code so that our setup runs inside the new namespaces, after the chroot, before the user’s command. That requires a trick.
Step 5: The /proc/self/exe Trick
What: Restructure the program so the container process does its own setup after entering the namespace.
Why: The previous step showed that SysProcAttr leaves us no place to run our own setup code. We need the child to set up its own filesystem and hostname after it’s inside the new namespaces. The solution: the program re-executes itself.
This is how real container runtimes like runc (the runtime behind Docker) work. The parent spawns a child in new namespaces, and the child sets up its own environment.
Replace your entire main.go:
main.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func run() {
	// Re-execute ourselves with "child" as the first argument
	args := append([]string{"child"}, os.Args[2:]...)
	cmd := exec.Command("/proc/self/exe", args...)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWUTS,
	}

	if err := cmd.Start(); err != nil {
		fmt.Println("Error Starting:", err)
		os.Exit(1)
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
}

func child() {
	// We're now INSIDE the new namespaces — do setup here
	if err := syscall.Chroot("./gocker-root"); err != nil {
		fmt.Println("Error chroot:", err)
		os.Exit(1)
	}
	if err := os.Chdir("/"); err != nil {
		fmt.Println("Error chdir:", err)
		os.Exit(1)
	}
	if err := syscall.Sethostname([]byte("gocker")); err != nil {
		fmt.Println("Error setting hostname:", err)
		os.Exit(1)
	}

	// Run the user's command
	cmd := exec.Command(os.Args[2], os.Args[3:]...)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
}

func main() {
	fmt.Println("Gocker - a minimal container runtime")

	if len(os.Args) < 3 {
		fmt.Println("Usage: gocker run <command>")
		os.Exit(1)
	}

	switch os.Args[1] {
	case "run":
		run()
	case "child":
		child()
	default:
		fmt.Println("Unknown command. Usage: gocker run <command>")
		os.Exit(1)
	}
}
Here’s the magic: /proc/self/exe is a special Linux path that always points to the currently running binary. So exec.Command("/proc/self/exe", "child", ...) re-executes our own gocker binary with "child" as the first argument.
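You can see this for yourself: /proc/self/exe is just a symlink the kernel maintains for every process. A tiny standalone sketch (illustrative only, not part of gocker):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /proc/self/exe is a kernel-maintained symlink to the binary of the current process.
		target, err := os.Readlink("/proc/self/exe")
		if err != nil {
			fmt.Println("Error:", err)
			return
		}
		fmt.Println("this process is running:", target)
	}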
The flow is:
- User runs sudo ./gocker run /bin/sh
- main() sees "run" → calls run()
- run() spawns a new copy of itself in new PID + UTS namespaces, with "child" as the command
- The new copy's main() sees "child" → calls child()
- child() is now inside the new namespaces, so it does the chroot, sets the hostname, and runs the user's command
Build and run:
go build -o gocker . && sudo ./gocker run /bin/sh
hostname # Shows "gocker" — set automatically!
echo $$ # Shows 1 — we're PID 1 inside the namespace
cat /etc/os-release # Alpine Linux
exit
Now the hostname is set automatically, and the chroot and the rest of the setup run from our own code inside the new namespaces. But ps still doesn’t work because /proc isn’t mounted yet.
Step 6: Mount /proc and /dev
What: Mount the proc filesystem and create essential device nodes inside the container.
Why: Without /proc, tools like ps, top, and free won’t work. Without /dev/null, /dev/zero, and /dev/random, many programs crash or behave unexpectedly.
Update the child() function — add the mounts after chroot and chdir:
child() — updated:
func makedev(major, minor uint32) uint64 {
	return uint64(major)*256 + uint64(minor)
}

func child() {
	if err := syscall.Chroot("./gocker-root"); err != nil {
		fmt.Println("Error chroot:", err)
		os.Exit(1)
	}
	if err := os.Chdir("/"); err != nil {
		fmt.Println("Error chdir:", err)
		os.Exit(1)
	}

	// Mount /proc — makes ps, top, free work
	if err := syscall.Mount("proc", "/proc", "proc", 0, ""); err != nil {
		fmt.Println("Error mounting proc:", err)
		os.Exit(1)
	}

	// Mount /dev as tmpfs
	if err := syscall.Mount("tmpfs", "/dev", "tmpfs", 0, ""); err != nil {
		fmt.Println("Error mounting dev:", err)
		os.Exit(1)
	}

	// Create essential device nodes
	if err := syscall.Mknod("/dev/null", 0666|syscall.S_IFCHR, int(makedev(1, 3))); err != nil {
		fmt.Println("Error creating /dev/null:", err)
	}
	if err := syscall.Mknod("/dev/zero", 0666|syscall.S_IFCHR, int(makedev(1, 5))); err != nil {
		fmt.Println("Error creating /dev/zero:", err)
	}
	if err := syscall.Mknod("/dev/random", 0666|syscall.S_IFCHR, int(makedev(1, 8))); err != nil {
		fmt.Println("Error creating /dev/random:", err)
	}

	if err := syscall.Sethostname([]byte("gocker")); err != nil {
		fmt.Println("Error setting hostname:", err)
		os.Exit(1)
	}

	cmd := exec.Command(os.Args[2], os.Args[3:]...)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr

	if err := cmd.Run(); err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
}
You’ll also need "path/filepath" and "strconv" in your imports for the cgroup code in the next step — Go refuses to compile unused imports, so add them together with that code.
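For reference, the import block as it will look once the Step 7 cgroup code is in place:

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strconv"
		"syscall"
	)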
syscall.Mount("proc", "/proc", "proc", 0, "") mounts the proc filesystem. Because we’re in a new PID namespace and we mounted after entering it, /proc now shows only our container’s processes.
syscall.Mknod creates device nodes. The numbers (1,3 for null, 1,5 for zero, 1,8 for random) are the standard Linux major/minor device numbers. The makedev helper combines them into the format Mknod expects.
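The major*256+minor encoding only works for small, traditional device numbers like these. If you ever need arbitrary majors and minors, the golang.org/x/sys/unix package provides unix.Mkdev, which builds the full 64-bit encoding the kernel expects — a sketch (the mkDevNull name is made up here, and this adds a dependency, so gocker itself sticks with the hand-rolled helper):

	import "golang.org/x/sys/unix"

	// Sketch: the same /dev/null node created via golang.org/x/sys/unix.
	func mkDevNull() error {
		dev := unix.Mkdev(1, 3) // /dev/null is major 1, minor 3
		return unix.Mknod("/dev/null", 0666|unix.S_IFCHR, int(dev))
	}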
Build and run:
go build -o gocker . && sudo ./gocker run /bin/sh
ps aux
Expected output (exact PIDs may vary):
PID   USER     TIME  COMMAND
    1 root      0:00 /proc/self/exe child /bin/sh
    2 root      0:00 /bin/sh
    3 root      0:00 ps aux
No host processes are visible — this is real container isolation. Note that PID 1 is still our Go child process and the shell is only PID 2; making the user’s command itself PID 1 is exactly what Step 8 fixes.
echo "test" > /dev/null # Works!
cat /dev/random | head -1 # Works!
exit
Step 7: Limit Resources with Cgroups
What: Use cgroups to limit how much memory and CPU the container can use.
Why: Without resource limits, a container can eat all the host’s memory and crash everything. Cgroups (control groups) let you set hard limits per process.
Add the setupCgroup function and call it from run() after the child starts:
Add this function:
func setupCgroup(pid int) error {
	cgroupMemoryPath := "/sys/fs/cgroup/memory/gocker/"
	cgroupCPUPath := "/sys/fs/cgroup/cpu/gocker/"

	// Memory limit: 50MB
	if err := os.MkdirAll(cgroupMemoryPath, 0755); err != nil {
		return fmt.Errorf("create cgroup: %w", err)
	}
	memLimit := filepath.Join(cgroupMemoryPath, "memory.limit_in_bytes")
	if err := os.WriteFile(memLimit, []byte("52428800"), 0644); err != nil {
		return fmt.Errorf("set memory limit: %w", err)
	}
	procs := filepath.Join(cgroupMemoryPath, "cgroup.procs")
	if err := os.WriteFile(procs, []byte(strconv.Itoa(pid)), 0644); err != nil {
		return fmt.Errorf("assign process: %w", err)
	}

	// CPU limit: 50%
	if err := os.MkdirAll(cgroupCPUPath, 0755); err != nil {
		return fmt.Errorf("create cpu cgroup: %w", err)
	}
	cpuLimit := filepath.Join(cgroupCPUPath, "cpu.cfs_period_us")
	if err := os.WriteFile(cpuLimit, []byte("100000"), 0644); err != nil {
		return fmt.Errorf("set cpu period: %w", err)
	}
	cpuQLimit := filepath.Join(cgroupCPUPath, "cpu.cfs_quota_us")
	if err := os.WriteFile(cpuQLimit, []byte("50000"), 0644); err != nil {
		return fmt.Errorf("set cpu quota: %w", err)
	}
	cpuProcs := filepath.Join(cgroupCPUPath, "cgroup.procs")
	if err := os.WriteFile(cpuProcs, []byte(strconv.Itoa(pid)), 0644); err != nil {
		return fmt.Errorf("assign cpu process: %w", err)
	}

	fmt.Printf("Cgroup: PID %d limited to 50MB memory, 50%% CPU\n", pid)
	return nil
}
Update run() — call setupCgroup after cmd.Start():
func run() {
	args := append([]string{"child"}, os.Args[2:]...)
	cmd := exec.Command("/proc/self/exe", args...)
	cmd.Stdin = os.Stdin
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWPID | syscall.CLONE_NEWUTS,
	}

	if err := cmd.Start(); err != nil {
		fmt.Println("Error Starting:", err)
		os.Exit(1)
	}
	if err := setupCgroup(cmd.Process.Pid); err != nil {
		fmt.Println("Error setting cgroup:", err)
		os.Exit(1)
	}
	if err := cmd.Wait(); err != nil {
		fmt.Println("Error:", err)
		os.Exit(1)
	}
}
Cgroups are just files in /sys/fs/cgroup/. To limit memory, you write the byte limit to memory.limit_in_bytes. To limit CPU, you set the CFS (Completely Fair Scheduler) period and quota. A period of 100000 microseconds (100ms) with a quota of 50000 means the process gets 50ms out of every 100ms — that’s 50% of one CPU core.
Writing the child’s PID to cgroup.procs tells the kernel to apply these limits to that process and all its children.
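One caveat: these paths are the legacy cgroup v1 layout. On distributions that have switched to the unified cgroup v2 hierarchy (Ubuntu 22.04+, Fedora, recent Debian), /sys/fs/cgroup/memory/ and /sys/fs/cgroup/cpu/ don't exist; there is a single tree and the files are named memory.max and cpu.max. A rough v2 equivalent of setupCgroup, as a sketch under that assumption (the setupCgroupV2 name is made up; same idea — write the limits, then write the PID to cgroup.procs):

	// Sketch: the same 50MB / 50% limits on a cgroup v2 (unified hierarchy) system.
	func setupCgroupV2(pid int) error {
		path := "/sys/fs/cgroup/gocker/"
		if err := os.MkdirAll(path, 0755); err != nil {
			return fmt.Errorf("create cgroup: %w", err)
		}
		// memory.max takes the byte limit directly
		if err := os.WriteFile(filepath.Join(path, "memory.max"), []byte("52428800"), 0644); err != nil {
			return fmt.Errorf("set memory limit: %w", err)
		}
		// cpu.max takes "<quota> <period>" in microseconds: 50ms out of every 100ms
		if err := os.WriteFile(filepath.Join(path, "cpu.max"), []byte("50000 100000"), 0644); err != nil {
			return fmt.Errorf("set cpu limit: %w", err)
		}
		return os.WriteFile(filepath.Join(path, "cgroup.procs"), []byte(strconv.Itoa(pid)), 0644)
	}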
Build and run:
go build -o gocker . && sudo ./gocker run /bin/sh
Expected output:
Gocker - a minimal container runtime
Cgroup: PID 12345 limited to 50MB memory, 50% CPU
Test the memory limit from inside the container:
# Try to consume more than 50MB — writing into the tmpfs-mounted /dev is charged to the cgroup
dd if=/dev/zero of=/dev/fill bs=1M count=100
(A plain dd to /dev/null wouldn’t test the limit — it only ever holds one 1MB buffer in memory.) Once the tmpfs file pushes the cgroup past 50MB, the kernel’s OOM (Out of Memory) killer should step in and kill the process.
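If you’d rather test with a deliberate memory hog, a tiny Go program works too — a sketch under the assumption that you build it statically and drop it into the Alpine rootfs (the memhog name and path are made up):

	// memhog.go — allocate memory until the cgroup limit kills us.
	// Build: CGO_ENABLED=0 go build -o gocker-root/bin/memhog memhog.go
	// Then inside the container: /bin/memhog
	package main

	import "fmt"

	func main() {
		var chunks [][]byte
		for i := 0; ; i++ {
			chunks = append(chunks, make([]byte, 1<<20)) // grab 1MB at a time
			for j := range chunks[i] {
				chunks[i][j] = 1 // touch every page so it's really allocated
			}
			fmt.Println("allocated MB:", i+1)
		}
	}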
Step 8: Make the Command PID 1 (syscall.Exec)
What: Replace exec.Command + cmd.Run() in the child with syscall.Exec so the user’s command becomes PID 1.
Why: With cmd.Run(), our Go child process is PID 1 and the user’s command is PID 2. With syscall.Exec, the Go process replaces itself entirely with the user’s command — now it’s truly PID 1 in the container, just like Docker.
Update the end of child() — replace the exec.Command block:
child() — final version, replace the last part:
	// Replace this process with the user's command
	// This makes it PID 1 — exactly what Docker does
	binary, err := exec.LookPath(os.Args[2])
	if err != nil {
		fmt.Println("Error finding command:", err)
		os.Exit(1)
	}
	if err := syscall.Exec(binary, os.Args[2:], os.Environ()); err != nil {
		fmt.Println("Error exec:", err)
		os.Exit(1)
	}
exec.LookPath finds the full path to the binary (e.g., turns sh into /bin/sh). syscall.Exec replaces the current process entirely — it doesn’t return. After this call, our Go code is gone and the user’s command is running as PID 1.
The difference:
# Before (cmd.Run):
PID 1: our Go child process
PID 2: /bin/sh
# After (syscall.Exec):
PID 1: /bin/sh (our Go process is gone)
Build and run:
go build -o gocker . && sudo ./gocker run /bin/sh
ps aux
Expected output:
PID USER TIME COMMAND
1 root 0:00 /bin/sh
2 root 0:00 ps aux
/bin/sh is PID 1. Not our Go program — the actual shell. This is what Docker does: when you run ps inside a Docker container and see PID 1, that’s your application, not Docker’s runtime.
What We Built
Starting from exec.Command("/bin/sh"), we incrementally built a real container runtime:
- Child process — run a command as a subprocess
- PID namespace — the container has its own process IDs, starting from 1
- UTS namespace — the container has its own hostname
- Chroot — the container has its own root filesystem (Alpine Linux)
- /proc/self/exe trick — parent/child split so setup happens in the right order
- Proc and dev mounts — ps, top, /dev/null all work inside the container
- Cgroups — memory limited to 50MB, CPU limited to 50%
- syscall.Exec — the user's command is PID 1, just like Docker
All of this in about 150 lines of Go with zero external dependencies. That’s all a container is — an isolated process with resource limits.
The full project is on GitHub: karandaid/containers-from-scratch
Cleanup
# Remove cgroup directories (if they persist)
sudo rmdir /sys/fs/cgroup/memory/gocker/ 2>/dev/null
sudo rmdir /sys/fs/cgroup/cpu/gocker/ 2>/dev/null
Next Steps
Gocker covers the basics. Real container runtimes add:
- Network namespace — isolated networking with virtual bridges and port mapping
- OverlayFS — layered filesystems for container images (copy-on-write)
- Seccomp filters — restrict which syscalls the container can make
- User namespaces — run containers without root on the host
- Image pulling — pull images from Docker Hub using the Registry API
Check out Build and Deploy a Go Lambda Function for more Go projects, or Building a Go S3 CLI Tool for Go + AWS SDK patterns.
Cheat Sheet
Quick reference for Linux container concepts in Go.
Create namespaces:
cmd.SysProcAttr = &syscall.SysProcAttr{
	Cloneflags: syscall.CLONE_NEWPID | // New PID namespace
		syscall.CLONE_NEWUTS | // New hostname
		syscall.CLONE_NEWNS | // New mount namespace
		syscall.CLONE_NEWNET, // New network namespace
}
Chroot:
syscall.Chroot("./rootfs")
os.Chdir("/")
Mount proc:
syscall.Mount("proc", "/proc", "proc", 0, "")
Create device nodes:
syscall.Mknod("/dev/null", 0666|syscall.S_IFCHR, int(1*256+3)) // major 1, minor 3
syscall.Mknod("/dev/zero", 0666|syscall.S_IFCHR, int(1*256+5)) // major 1, minor 5
syscall.Mknod("/dev/random", 0666|syscall.S_IFCHR, int(1*256+8)) // major 1, minor 8
Cgroup memory limit (50MB):
os.MkdirAll("/sys/fs/cgroup/memory/myapp/", 0755)
os.WriteFile("/sys/fs/cgroup/memory/myapp/memory.limit_in_bytes", []byte("52428800"), 0644)
os.WriteFile("/sys/fs/cgroup/memory/myapp/cgroup.procs", []byte(strconv.Itoa(pid)), 0644)
Cgroup CPU limit (50%):
os.MkdirAll("/sys/fs/cgroup/cpu/myapp/", 0755)
os.WriteFile("/sys/fs/cgroup/cpu/myapp/cpu.cfs_period_us", []byte("100000"), 0644)
os.WriteFile("/sys/fs/cgroup/cpu/myapp/cpu.cfs_quota_us", []byte("50000"), 0644)
os.WriteFile("/sys/fs/cgroup/cpu/myapp/cgroup.procs", []byte(strconv.Itoa(pid)), 0644)
Replace process (become PID 1):
binary, _ := exec.LookPath("sh")
syscall.Exec(binary, []string{"sh"}, os.Environ())
// This never returns — the Go process is gone
Key concepts to remember:
- Namespaces control visibility (what a process can see) — CLONE_NEWPID, CLONE_NEWUTS, CLONE_NEWNS, CLONE_NEWNET
- Cgroups control resources (what a process can use) — they're just files in /sys/fs/cgroup/
- Chroot changes the root directory — the process thinks / is wherever you pointed it
- /proc/self/exe always points to the current binary — container runtimes use this to re-execute themselves
- syscall.Exec replaces the current process — it doesn't return, and the new binary takes over the PID
- Containers are not VMs — they're just processes with isolation and resource limits applied
- Everything requires root (or user namespaces) because creating namespaces is a privileged operation
- CFS quota/period ratio determines CPU percentage: 50000/100000 = 50% of one core